Article 1: Payne & Cameron (2013)
Implicit and explicit representations
A person might have two representations of the same thing – one that is implicit and one
that is explicit – and the two might be quite different. A person might have an implicit
representation that a specific racial group is unfriendly, but also an explicit representation
that the same group is perfectly sociable. The representation that is active would determine
how a person perceives, interprets, and responds in situations involving that group.
Others emphasize that a representation can be used either explicitly or implicitly. A person
might access a stereotype about a specific racial group, and whether that stereotype
translates into judgement or behavior will depend on how much awareness and intentional
control the person has over its application.
But… implicitness is both momentary and permanent (representations are always implicit, and only sometimes become explicit)
How is knowledge stored? Different models for mental representations
Theories of mental representation always rely on metaphors, e.g. bins or nodes.
Associative network models
Schema models (Predictive coding)
Connectionist models
Multiple format models
Embodied cognition
Situated cognition
Associative network models
Serial search models: rather than activation spreading automatically, there is an intentional search
following linked nodes until a concept is retrieved. The nodes and pathways are unconscious, but the
search process (e.g. remembering someone by recalling what they look like) is conscious.
Representations are only momentarily implicit or explicit. The direction of attention determines what
is implicit or explicit.
Dual process model: uses both parallel search (implicit) and serial search (explicit). Implicit
representations can become explicit either through spreading activation that crosses a threshold
(a quantitative distinction) or through intentionally performing a serial search (a qualitative
distinction: different ways of searching); see the sketch below.
Associative network models are amodal. Embodied cognition is modal (grounded in sensory information).
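For intuition only, here is a minimal Python sketch (not from the article) of the two routes described by the dual process model: automatic spreading activation that becomes explicit once it crosses a threshold, and an intentional serial search along the same links. All node names, link strengths, and the threshold value are hypothetical.

```python
# Hypothetical illustration of implicit spreading activation vs. explicit serial search.
network = {                       # node -> {linked node: link strength}
    "group X": {"unfriendly": 0.6, "neighbour": 0.3},
    "unfriendly": {"avoid": 0.5},
    "neighbour": {},
    "avoid": {},
}

def spread_activation(start, depth=2, decay=0.5):
    """Implicit route: activation spreads automatically and in parallel along links."""
    activation = {start: 1.0}
    frontier = {start: 1.0}
    for _ in range(depth):
        nxt = {}
        for node, act in frontier.items():
            for linked, weight in network[node].items():
                nxt[linked] = nxt.get(linked, 0.0) + act * weight * decay
        for node, act in nxt.items():
            activation[node] = activation.get(node, 0.0) + act
        frontier = nxt
    return activation

def serial_search(start, target):
    """Explicit route: intentionally follow one link at a time until the target is reached."""
    path, node = [start], start
    while network[node]:
        node = max(network[node], key=network[node].get)  # follow the strongest link
        path.append(node)
        if node == target:
            return path
    return None

THRESHOLD = 0.25  # activation above this becomes (temporarily) explicit
activations = spread_activation("group X")
explicit = [n for n, a in activations.items() if a >= THRESHOLD and n != "group X"]
print(activations)                        # implicit activation levels
print(explicit)                           # what crosses into awareness (quantitative route)
print(serial_search("group X", "avoid"))  # intentional search path (qualitative route)
```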
Schema models
Perceivers “go beyond the information given” (Bruner, 1957). Information is stored in an
abstract form.
Once a schema is activated, it operates as a lens through which you perceive the world
around you.
It directs attention, memory, and judgement.
Top-down approach: schemas are broad representations that structure and make sense of
psychological experiences. E.g. activation of the black-hostility schema made people act more hostile,
in line with the schema, and then interpret the resulting retaliatory hostility as due to the inherent
hostility of the interaction partner.
Schema theories suggest that knowledge structures are used implicitly in the processing of new
information, but not that the content of the schemas themselves is unconscious. Researchers often
measure the contents of schemas using self-reports. The content is implicit or explicit only
temporarily, as attention is directed toward or away from any given aspect of the knowledge base.
Predictive coding
Predictive coding (also known as predictive processing) is a theory of brain function in which the
brain is constantly generating and updating a mental model of the environment. The model is used to
generate predictions of sensory input that are compared to actual sensory input. This comparison
results in prediction errors that are then used to update and revise the mental model.
Bayesian processes:
Priors affect perception: we use prior knowledge in perceiving the world around us.
Priors are, e.g., “coffee is tasty” or “be careful with hot coffee”.
Posterior (the combination of perception and prior): the prior about the coffee at a specific
new bar is updated when, e.g., the coffee there turns out not to be so tasty.
Bruner’s schemas can be interpreted as priors within predictive coding.
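As an illustration only, here is a minimal Python sketch (not from the article) of the prediction-error idea behind predictive coding: a prior belief about the coffee at a new bar is nudged toward what is actually tasted, and the updated belief plays the role of the posterior. The numbers and the learning rate are arbitrary assumptions.

```python
# Hypothetical illustration: a prior belief is revised by prediction errors.
def update_belief(prior: float, observation: float, learning_rate: float = 0.3) -> float:
    """Shift the belief toward the observation in proportion to the prediction error."""
    prediction_error = observation - prior
    return prior + learning_rate * prediction_error

belief = 0.9                    # prior: "coffee is tasty" (0 = awful, 1 = great)
tastings = [0.4, 0.5, 0.3]      # hypothetical experiences at the new bar

for tasted in tastings:
    belief = update_belief(belief, tasted)
    print(f"observed {tasted:.1f} -> updated belief (posterior) {belief:.2f}")
```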
Connectionist models
Connectionism: parallel distributed processing
Nodes that may vary in their level of activation.
Facilitative and inhibitory links: one node activates another node, just as in the
associative network models, but there are also inhibitory links.
Concepts exist by means of the dynamic interplay of distributed elements: a single node has
no semantic meaning. A whole ‘system’ of nodes and their connection weights forms a concept
once activated. The representations are not static, but are dynamically activated by the
environment.
Input, hidden, and output elements
Connectionist models don’t assume that nodes have semantic meaning. Rather, representations are
distributed as emergent patterns across the entire set of connected nodes. When given a set of
inputs, the network eventually settles into a pattern that satisfies the parallel constraints of the
activated nodes and the weighted connections between them. Importantly, distributed
representations are not discrete, because there are no distinct representations “stored” anywhere in
the connectionist network.
Different theories:
Local connectionist models assume “grandmother cells”: single neurons (nodes) that represent
specific things, places, or persons, such as your grandmother. These models are more similar to
associative network models than to distributed connectionist models, because they abandon the
assumption of distributed pattern completion; what still distinguishes them from associative
network models is the operating principle of parallel constraint satisfaction.
Distributed connectionist models: the distributed representations of such networks might be
described as efficient and unintentional.
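A minimal, hypothetical Python sketch of parallel constraint satisfaction (not the article’s model): a few nodes with facilitative and inhibitory links settle into a stable pattern given an input, and the concept is that emergent pattern rather than any single node. All node names and weights are invented for illustration.

```python
import numpy as np

nodes = ["friendly", "hostile", "smiling", "shouting"]
# Symmetric weights: positive = facilitative link, negative = inhibitory link.
W = np.array([
    [ 0.0, -1.0,  0.8, -0.5],   # friendly
    [-1.0,  0.0, -0.5,  0.8],   # hostile
    [ 0.8, -0.5,  0.0, -0.3],   # smiling
    [-0.5,  0.8, -0.3,  0.0],   # shouting
])

activation = np.array([0.0, 0.0, 1.0, 0.0])  # input: we perceive a smile

for _ in range(20):                           # gradual settling
    activation = np.clip(activation + 0.2 * (W @ activation), 0.0, 1.0)

print(dict(zip(nodes, activation.round(2))))
# Settles with "friendly" and "smiling" active and "hostile"/"shouting" suppressed:
# the representation is the emergent pattern, not any single node.
```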
See the table in the article.
Multiple format models
Multiple types of mental representations, e.g. the memory systems model (each system is
implemented in a different part of the brain; this model is mainly used in work on implicit bias):
Semantic memory system learns through more conceptual means that are implemented
by neocortical regions.
Affective memory system learns through mechanisms, including fear conditioning, that
are implemented by subcortical pathways in which the amygdala is central.
Procedural memory system learns through a network connecting the striatum and basal
ganglia to prefrontal cortex and motor regions.
So: learning and behaviour are linked via distinct neurological pathways.
For all three systems, processing is presumed to be unconscious, though the outcome (e.g., a
semantic idea, an affective reaction, or a habitual response) may become temporarily conscious.
Embodied cognition
Fundamental distinction between “amodal” and “modal” forms of mental representation. Amodal
representations are abstract and disembodied symbols of objects or events that do not retain the
sensory components from the original experiences of those objects or events. For instance, I might
think about coffee as warm, tasty, and arousing, but in an abstract and conceptual way that does not
actually call upon sensory information or details. By contrast, modal representations draw upon and
are constituted by those sensory experiences: the heat of the coffee, the flavor of the beans, the
caffeinated alertness.
Embodied models of mental representation criticize associative network, schema, and
connectionist models for overlooking modal mental representations.
Situated cognition
Situated cognition: mental representations result from dynamic interactions between the brain,
body, and environment
Whereas the original models mainly conceived of mental representations as residing in a
computer-like mind, these models now also include the body and the situation.
By relying less on internal information, the brain can delegate to features of the environment
and simplify decision making.
The multimodal representations used in simulations seem to be only temporarily implicit, inasmuch
as conscious attention can shift focus to information resulting from specific modalities. And
implicitness resides as much in embodied representations themselves as in how they are used.
We outsource our memory to Internet search engines such as Google.
Instead of representations being in the head and representation use being in the world,
representations now just are the process of navigating and interacting with the world. E.g.
rearranging Scrabble tiles to think up new words.
Semin and Cacioppo’s (2009) social cognition model of adaptive interpersonal behavior:
When a person sees someone else’s action, two concurrent synchronization processes are
activated.
o One is an efficient, unintentional, uncontrollable, and nonconscious monitoring
process that mirrors and continually simulates the action of the other person.
o The other is a conscious, controllable, and intentionally goal-directed process that
allows for adaptive, complementary action simulations.
The implicitness here is only temporary and, because of the connection with situated action,
is fundamentally tied up with both representation structure and representation use.
Another principle is the notion of situational “affordances” (Gibson, 1977): implied possibilities for
action that are present in the environment. For example, a guitar sitting in the corner might afford
me the possibility of playing a song.
The situated inference model (Loersch & Payne, 2011) suggests that when primed representations
are misattributed to one’s own response to a situation, they alter judgments, goals, or behaviors
based on what kind of tacit question the situation affords. Not only do situations cue certain
representations, they also cue different kinds of use of those representations.
For some models, there is more evidence than for others. None of them will be entirely true, but
they can help us think about the way mental representations are stored.