Summary: Introduction to Psycholinguistics: Understanding Language Science
Final exam: Chapters 1 through 14 + article Kita et al. (pp. 245-247)
Chapter 1 // Lecture 1
What is language?
Language = a system of symbols and rules that enables us to communicate: a combination of stored
knowledge (units) and knowledge of the rules for combining those units.
We can describe language at many different levels of abstraction:
- Semantics = the (study of) meaning
- Syntax = the study of word order, at the scope of a single sentence (from the capital letter to the full
stop). What are the valid word orders in the language?
- Morphology = the study of words and word formation: how you make words in a language. Not only
what words mean, but rather, for instance, how you make a plural out of a singular.
- Pragmatics = how language is used.
- Phonetics = the study of raw sounds, the physical aspect of language: the sound waves and changes in
air pressure that make up the speech signal
- Phonology = the study of how sounds are used within a language, the way sounds are used and
represented in the brain/head. How the sounds function in the language, rather than how they are
physically realized (phonology vs. phonetics)
Stored (Lexicon)
Components of language:
• Phonemes: representations of sounds (a o i e)
• Syllables: every word consists of at least one syllable and every syllable consists of at
least one phoneme
• Morphemes: the smallest meaning carrying units in language
• Words: to make phrases
• Phrases/clauses: to make sentences
• Sentences
Generated (Grammar)
Two chief components of language: lexicon and grammar.
Grammar = a system of rules for combining symbols into messages.
Lexicon = part of the long-term memory that stores information about words.
Languages need both components so that speakers can formulate messages that express propositions (statements
of who did what to whom). To create messages, a speaker searches for symbols in the lexicon that match the
concepts that he/she wishes to convey. The grammar tells him/her how to combine the symbols to create the
appropriate signals (speech sounds) that will transmit the message to a listener.
Misconceptions about grammar
Prescriptive grammar = completely arbitrary rules that you can follow if you wish to sound like a proper English
gentleman. Collections of artificial rules, what you learn at school.
Example: never end a sentence with a preposition:
‘You’re the only one that I want to tell my secrets to’ – incorrect according to the prescriptive rule
‘Ending a sentence with a preposition is something up with which I will not put’ – ‘correct’, but unnatural
Descriptive grammar: systematic rules that determine how people actually speak, the set of rules or principles that
governs the way people use language ‘in the wild’ (how people naturally and normally think and behave).
Scientists study this sort of grammar, because they are interested in the human mind.
Example:
‘Each clause can have only one main verb’ – rules out *‘My grammar teachers liked gave rules’
‘Verbs go in the middle’ – rules out *‘Likes grammar teachers rules’
Three functions of grammar
Descriptive grammar explains why language takes the form that it does. Steven Pinker and Ray Jackendoff suggest
that grammars regulate the combination of symbols into messages in three crucial ways:
1. Order: the grammar determines the order in which symbols appear in expressions.
2. Agreement: the grammar dictates different kinds of agreement; certain words in a sentence must appear
in a specific form, because of the presence of another word in the sentence (e.g. plural).
3. Case marking: the grammar determines where words must appear in different forms depending on what
grammatical functions they fulfil.
Design features of language (Hockett):
• Semanticity: refers to the idea that language can communicate meaning and that specific signals can be
assigned specific meanings.
• Arbitrariness: refers to the fact that there is no necessary relationship between actual objects/events in
the world and the symbols that a language uses to represent those objects/events (we could decide that,
as of tomorrow, we call cats ‘lerps’). There is no inherent connection between the sound of a word and
the shape of the thing in the real world: the word ‘tree’ does not say anything about the shape of a tree.
A partial exception is onomatopoeia/sound symbolism: words for large objects tend to have deep-sounding
vowels and words for small objects high-sounding vowels.
• Discreteness: refers to the idea that components of the language are organized into a set of distinct
categories, with clear boundaries between different categories. Language can be broken down into units.
• Displacement: refers to a language’s ability to convey information about events happening out of sight of
the speaker (spatial displacement), about events that happened before the moment when the person
speaks, and events that have not yet taken place as the person is speaking (temporal displacement).
Talk about things and refer to things that are not in the here and now ‘yesterday I was reading a book’.
• Duality of patterning: refers to the fact that we simultaneously perceive language stimuli in different
ways, e.g. both as a word and as a collection of phonemes: wasp – w/o/s/p. The same sounds can also be
combined into other words (e.g. ‘park’ = a green area where you can play).
• Generality: refers to the fact that languages have a fixed number of symbols, but a very large and
potentially infinite number of messages can be created by combining those symbols in different
patterns. We can combine words into new combinations to say whatever we want, and place one
component inside another: the nails that Dan bought → the nails / which Dan bought.
Recursion = the ability to place one component inside another component of the same type.
For example: Tom likes beans – Susan thinks Tom likes beans.
- A core property of the grammars of all languages, the only property that is specific to human language
- Gives language the property of discrete infinity; the ability to generate infinite messages from finite
means.
- The language Pirahã reportedly lacks recursion, possibly because recursion introduces statements into a
language that do not make direct assertions about the world, which Pirahã requires.
English: Hand me the nails that Dan bought.
Pirahã equivalent: Give me the nails. Dan bought those very nails. They are the same.
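Recursion can be made concrete with a few lines of code: a function that embeds a sentence inside a new ‘X thinks that …’ clause of the same type, as many times as you like. The speaker names are invented for illustration; the point is that a finite, self-referring rule yields unboundedly many sentences.

```python
# Toy illustration of recursion: a clause is embedded inside another
# clause of the same type, as in "Susan thinks that Tom likes beans".
# The speaker names are invented for illustration.

SPEAKERS = ["Susan", "Bill", "Mary"]

def embed(sentence: str, depth: int) -> str:
    """Wrap a sentence in `depth` levels of 'X thinks that ...' clauses."""
    if depth == 0:
        return sentence
    outer = SPEAKERS[(depth - 1) % len(SPEAKERS)]
    return f"{outer} thinks that {embed(sentence, depth - 1)}"

print(embed("Tom likes beans", 0))  # Tom likes beans
print(embed("Tom likes beans", 1))  # Susan thinks that Tom likes beans
print(embed("Tom likes beans", 2))  # Bill thinks that Susan thinks that Tom likes beans
```

Because the rule refers to itself, the same finite grammar fragment generates infinitely many messages: the ‘discrete infinity’ property.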
History of Language
There are two ideas about how language originated:
- Continuity = human language abilities are closely related to pre-existing communicative abilities and
represent a relatively modest upgrade from those abilities. We can apply general ideas about adaptation
and natural selection to the development of human language, the same way we apply those ideas to
other characteristics of humans. (Language was inherited from our ancestors.)
- Discontinuity = some aspects of modern human language abilities represent a clean break from the past:
our language abilities are qualitatively different from more basic communication systems. (A completely
new set of mental processes emerged.)
Studies of Primates
- Diana monkeys: have separate alarm calls for eagles and leopards, to warn each other which of the two
is coming
- Bonobo (Kanzi): learned to make different vocalizations in the context of different objects
(bananas vs. grapes)
- Gorilla (Koko): learned to make signs with her hands (sign language)
- Chimpanzee (Nim): never learned to talk (could not vocalize), but did learn to make signs
Differences between apes and humans
Children/Humans vs. Apes:
- Universal acquisition in children vs. variable acquisition in apes
- Children experiment and innovate vs. apes copy
- Children babble vs. apes don’t
- Children’s grammar becomes more complex vs. apes’ signs are repeated
- Humans apply grammatical rules consistently vs. apes apply them inconsistently
- Humans use words to comment and express intentions vs. apes use signs as tools to get things
- Humans do not interrupt much vs. apes interrupt far more than humans
Can apes learn human language?
Yes, if you limit the language and exclude the ‘rules’
No, if you mean language includes a timeline, syntax, unsupervised learning and acquisition, etc.
What do apes learn?
They learn something during all that instruction that can best be characterized as a protolanguage.
In spite of all the efforts, apes did not rise above the level of a five-year-old.
Language Origins
Many linguists agree that the human capacity for speech is an adaptation (evolution); our vocal tracts differ from
those of other apes. They make us vulnerable to death by choking, but also allow us to produce speech.
Two theories of where speech comes from:
1. Speech predates Homo sapiens: producing speech requires the right kind of vocal tract, with an equal distance
from the larynx to the top of the throat and from the top of the throat to the mouth opening.
2. Speech does not predate Homo sapiens: producing speech requires the ability to control the speech apparatus
and rapid changes in air flow. Human ancestors and apes lack the neural systems that are necessary for fine
breathing control.
Language and thought
- Watson and Skinner claimed that thought is sub-vocal speech. If so, eliminating speech should eliminate
thought.
- You do not need language to think (“Brother John” remained capable of thought during periodic failures
to speak), and you can have sophisticated language skills despite poor functioning in non-language thought
domains (Williams syndrome and “Christopher”) = this pattern is a double dissociation.
Double dissociation: you can have good language without good
thought and vice versa, so they are at least partially separate and are
not the same thing.
Linguistic determinism (Whorf and Sapir) = the idea that language drives thought: the way we think is
determined by the language we speak.
• Language dictates thought
• No convincing evidence: we don’t exclusively think in language, nor does the language we speak prevent
us from thinking any thoughts or making perceptual and conceptual distinctions.
Linguistic relativity: speakers of different languages think differently; the language you speak influences how you
perceive. The structure and lexicon of one’s language influence how one perceives and conceptualizes the world,
and they do so in a systematic way.
• Language influences thought
• Some compelling evidence: language can make it easier to make certain perceptual and conceptual
distinctions
Two related problems with linguistic relativity:
• Circularity: evidence that people who talk differently also think differently is that they talk differently
• Thinking for speaking: when required to give a verbal response (e.g., describe a visual scene), speakers
cannot escape attending to the conceptual distinctions speaking their language requires them to make
Either way, if you want to PROPERLY test people’s thinking outside of language, you need to use non-linguistic
tasks
Language-processing system
Language as a mental module (Fodor):
• Domain specific: means that a mental processing unit deals with some kinds of information, but not
others. For example, the visual system responds to light but not to sound.
• Distinct neural structure: means that particular brain regions are associated with specific computations.
For example, basic visual processing takes place in the visual cortex; more complicated visual processing
takes place in other brain areas.
• Computationally autonomous: means that a mental processing mechanism does its job independent of
what is happening simultaneously in other processing mechanisms.
The comprehension system:
• Speech perception: identify the words that appear in the input.
• Parsing process: once you have identified a set of words, you need to figure out how they are organized
and how they relate to one another. Once you have more than one sentence to work with, you need to
figure out how those sentences relate to one another.
Chapter 2 // Lecture 2 + 3
Normal speech production is fast:
- About 3 words per second (3 Hz / 180 BPM)
- About 5 syllables per second (5 Hz / 300 BPM)
- About 15 phonemes per second (15 Hz / 900 BPM)
Basic processes in speech production:
- Conceptualization = Thinking about what we want to say
- Formulation = Choosing the right words to say it; you must figure out a good way to express that idea
given the tools that your language provides
- Articulation = You need to actually move your muscles to make a sound wave that a listener can perceive;
moving lips, tongue, vocal cords, larynx and lungs to say it.
Levelt’s Model of Speech Production (WEAVER++)
• Influential theoretical model of different stages of speech
production
• A computational implementation of this model is called
WEAVER++
WEAVER++ describes the intermediate steps between activating an idea
and activating the sounds that you need to express the idea.
Conceptual preparation in terms of lexical concepts = choose the idea(s) that you want to express, but make
sure that your idea lines up with words that you have in your language.
Lexical concept = an idea for which your language has a label. The lexicalization process serves as the interface
between non-language thought processes and the linguistic systems that produce verbal expressions that convey
those thoughts. It takes our ideas and finds the lexical form. For example: English has a word for female horse
(mare) but not for female elephant.
Lexical selection = the process of choosing the best word out of a number of different words that are close in
meaning to the idea that you wish to express.
Lemma = mental representation that reflects an intermediate stage between activating an idea and activating the
speech sounds that you need to express the idea. Incorporates information about what the word means and
syntactic information that you need to combine that word with other words to express more complex ideas.
Morphological encoding:
- Morphemes are units and building blocks of meaning in a language
- Morphological specification: tells us how the word behaves when it is placed in a sentence (eat – eats
– ate)
Having selected a set of morphemes to produce, morphological encoding activates the speech sounds
(phonemes) we need to plan the articulatory movements that will create the speech signal
→ So far: concepts point you to lemmas. Lemmas point you to the morphological information you need to
combine lemmas into larger phrases. Morphological encoding points you to the speech sounds (phonemes)
you need to express specific sets of lemmas in specific forms.
Phonological encoding/syllabification:
- Phonemes = individual speech sounds
- When we speak, we do not simply emit a string of phonemes. Those phonemes need to be organized
into larger units, because when we speak, we speak in syllables.
- Syllabification = figuring out how to map the set of activated phonemes onto a set of syllables.
o Activating metrical structure = indicates the relative loudness (accent) that each syllable
should receive (banana o-o’-o, Panama o’-o-o)
o Inserting individual speech sounds (phonemes) into positions in the metrical structure
→ The speech planning system activates a set of morphemes/words and then it figures out the best way to
organize those morphemes and words into a set of syllables.
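The syllabification step can be sketched as a toy algorithm. This is not the actual WEAVER++ procedure: it works on ordinary spelling rather than phonemes, and the legal-onset set is a tiny invented subset (single consonants only), not real English phonotactics. It only illustrates the ‘maximal onset’ idea that consonants between two vowels attach to the following syllable when the language allows it.

```python
import re

# Toy syllabification using the "maximal onset" idea: consonants between
# two vowels attach to the following syllable if they form a legal onset.
# Works on ordinary spelling; the legal-onset set is a tiny invented
# subset (single consonants only), not real English phonotactics.

VOWELS = set("aeiou")
LEGAL_ONSETS = {"b", "c", "d", "f", "g", "k", "l", "m", "n", "p", "r", "s", "t"}

def split_max_onset(cluster: str) -> tuple[str, str]:
    """Split a consonant cluster into (coda, onset), maximizing the onset."""
    for k in range(len(cluster), 0, -1):
        if cluster[-k:] in LEGAL_ONSETS:
            return cluster[:-k], cluster[-k:]
    return cluster, ""

def syllabify(word: str) -> list[str]:
    chunks = re.findall(r"[aeiou]+|[^aeiou]+", word)
    syllables, carry = [], ""
    for j, chunk in enumerate(chunks):
        if chunk[0] not in VOWELS:
            if j == 0:
                carry = chunk          # word-initial consonants open syllable 1
            continue
        syllable = carry + chunk
        nxt = chunks[j + 1] if j + 1 < len(chunks) else ""
        if j + 2 < len(chunks):        # another vowel follows the next cluster
            coda, carry = split_max_onset(nxt)
            syllable += coda
        else:                          # word-final consonants close the syllable
            syllable += nxt
            carry = ""
        syllables.append(syllable)
    return syllables

print(syllabify("banana"))    # ['ba', 'na', 'na']
print(syllabify("escortus"))  # ['es', 'cor', 'tus']
```

Run together as one phonological word, ‘escort us’ (here spelled ‘escortus’) comes out as es-cor-tus: the final consonant of ‘escort’ is recruited as the onset of the next syllable.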
Phonological word = a set of syllables that is produced as a single unit (‘escort us’ → ess-core-tuss). According
to the WEAVER++ model, you can begin to speak as soon as you have activated all of the syllables in a given
phonological word.
Phonological encoding = involves the activation of a metrical structure and syllabification: organizing a set of
phonemes into syllable-sized groups, whether the specific phonemes come from the same morpheme and
word or not.
Phonetic gestural score = the representation used by the motor system to plan the actual muscle movements
(articulation) that will create sounds that the listener will perceive as speech.
Articulation = actual speech movements that will create sounds that the listener will perceive as speech/sound
waves.
Evidence for the WEAVER++ can be found in:
- Speech-errors
- Tip-of-the-tongue experiences
- Reaction time studies involving picture naming
Speech errors
- Semantic substitution: people substitute one word for another when they are speaking (cat/rat/dog)
I left my keys on the chair – I left my keys on the table
Reflects the conceptual preparation or lexical selection component of the speech production process.
- Sound exchanges: the correct set of phonemes is produced, but some phonemes appear in the wrong
positions in the utterance (big feet / fig beet). These errors are not random; most of the time they occur
within the same phrase.
- Word exchange error: when a word that should have appeared in one position is produced in a different
position. My girlfriend plays the piano – My piano plays the girlfriend
Happens in the conceptual preparation, lexical selection and phonetic encoding.
Access interrupts: Tip-of-the-tongue experiences (TOT)
When you are trying to retrieve a word, you have a strong subjective impression that you know the word, but you
are temporarily unable to consciously recall and pronounce the word.
According to contemporary production theories, TOT states occur when you have accessed the correct lemma, but
you have been unable to fully activate the phonological information that goes along with that lemma.
This happens in the morphological encoding and phonological encoding/syllabification stages.
During a TOT experience, people:
- Accurately predict whether they will come up with the correct word soon.
- Report the correct number of syllables.
- Accurately report the first phoneme.
- Are more accurate about the beginning and end phonemes than the middle.
- Report words that sound like the target.
- Have more TOTs for less frequent words.
- Resolve about 40% of TOTs within a few seconds to a few minutes.
Prospecting = testing whether someone knows a word: either they can access the appropriate sounds straight
away, or they experience a TOT.
Picture naming:
How do you find the word you need to express a concept? How do you activate the sounds that make up the
word? Picture naming depends on word frequency; if the word is less frequent, naming the pictures takes longer.
Picture-word interference paradigm: naming a picture with a word printed on top of it
Low-frequency words have longer naming latencies than high-frequency words
- Identity condition = the word is the same as the picture. People respond fastest, because the word and
the picture point towards the same concept
- Semantic condition = the word is related to the picture. People are slower, because the word printed on
top of the picture interferes at the concept selection stage
- Phonological condition = the word sounds similar, but is usually unrelated in meaning (e.g. only the
same ending). People are quite fast, because the word and the picture do not interfere at the concept
selection stage; the phonological information leads to facilitation.
Levelt’s Model of Speech Production (or Weaver++):
- Feedforward
- Discrete: selection has to take place before activation at the next level starts
Gary Dell’s Spreading Activation Model:
- Interactive = feedforward + feedback
- Cascading: activation spreads throughout the system immediately
Evidence for Feedback
1. Lexical bias effect: errors are more likely than chance to produce real words. Phonological exchanges
happen more often when the result is two real words: you are more likely to say big feet > fig beet than
big horse > hig borse.
2. Errors respect phonotactic constraints (rules about how phonemes can be combined): they result in
phoneme sequences that are allowed by language. Slip > glip, rather than slip > tlip.
3. Mixed errors: the word that a person produces by mistake is related in both meaning and sound to the
intended word. So, a person is more likely to say “lobster” by mistake when they mean to say “oyster”
than to say “octopus”, because “lobster” both sounds like and has a similar meaning to the target.
Potential limitations of lemma theory
Alfonso Caramazza argues that lemma theory does not do a very good job dealing with evidence from patients
with brain damage. There are patients who have a pattern of problems in speech, while the opposite pattern can
occur in written language production, within the same patient. A given patient could have trouble with function
words (but not content words) in writing, and trouble with content words (but not function words) while speaking.
If both processes tap into the same set of lemmas, it should not be possible for this pattern of problems to appear.
Self-monitoring and self-repair
Self-repair = happens after an error
Self-monitoring = used to catch errors before they are produced
- When speakers make an error, they often replace the error with the correct word with no delay or nearly
no delay between producing the error and producing the correct word
- Because speech planning takes time, the plan for the correction must be undertaken as the error is being
produced
- Therefore, the error must have been detected before it was spoken
Articulation
= making the speech muscles move to produce sound: the movement of your lips, tongue, lungs, etc. Articulation
perturbs airflow to create different patterns of sound waves.
Articulatory phonology theory:
Speech planning creates a gestural score that tells the articulators how to move
1. Move a particular set of articulators
2. Toward a location in the vocal tract where a constriction occurs
3. With a specific degree of constriction
4. Occurring in a characteristic dynamic manner
Articulator movements produce phonemes (basic speech sounds), which can be classified according to:
- Place of articulation
- Manner of articulation
- Voicing
Coarticulation = the gestures for one phoneme overlap in time with the gestures for the preceding and following
phonemes; this overlap influences the production and perception of speech.
Foreign Accent Syndrome (FAS) = after brain damage, a person appears to speak with a different accent/dialect.
This can occur when components of speech production are affected, such as the process of syllabification or
the articulation of phonemes.
Three important problems in speech perception:
1. The segmentation problem: speech is sticky
- There are no spaces between words in running speech
- And the pauses that are there, are not in the right place
2. The invariance problem: speech sounds are not stable
- Different conditions (speech rate, noise)
- Different speakers
- Coarticulation (produces redundancy and variability)
3. The flow of information
- Top-down, bottom-up, interactive, serial or parallel
Speech segmentation: we don’t speak with spaces
Embedded words: words within and across words (sack in saxophone / I scream – ice cream)
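The segmentation problem can be illustrated as a search: given a continuous stream with no spaces, find every way to carve it into known words. The pseudo-phonetic spellings and the mini lexicon below are invented for illustration; ‘aiskrim’ stands for the ambiguous stream behind ‘I scream’ / ‘ice cream’.

```python
# The segmentation problem as a toy search: carve a continuous stream
# (no spaces) into every possible sequence of known words. The
# pseudo-phonetic spellings in the mini lexicon are invented:
# "ai" = I, "ais" = ice, "krim" = cream, "skrim" = scream.

LEXICON = {"ai", "ais", "krim", "skrim"}

def segmentations(stream: str) -> list[list[str]]:
    """Return every way to split `stream` into lexicon words."""
    if not stream:
        return [[]]
    results = []
    for end in range(1, len(stream) + 1):
        word = stream[:end]
        if word in LEXICON:
            for rest in segmentations(stream[end:]):
                results.append([word] + rest)
    return results

print(segmentations("aiskrim"))
# [['ai', 'skrim'], ['ais', 'krim']]  ("I scream" vs. "ice cream")
```

Both parses survive, which is exactly why listeners need cues beyond the raw signal to settle on one segmentation.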
Motor Theory of Speech Perception
- When people perceive speech sounds, they try to figure out the “articulatory gestures” (either intended
or actual) that produced these speech sounds
- Relationship between gesture and phoneme closer than relationship between acoustic signal and
phoneme.
The motor theory of speech perception proposes that gestures (tongue against back of teeth, lip position etc.),
rather than sounds, represent the fundamental unit of mental representation in speech. When you speak, you
attempt to move your articulators to particular places in specific ways. Each of these movements constitutes a
gesture. The motor part of the speech production system takes the sequence of words you want to say and comes
up with a gestural score (movement plan) that tells your articulators how to move. According to the theory, if you
can figure out what gestures created a speech signal, you can figure out what the gestural plan was, which takes
you back to the sequence of syllables or words that went into the gestural plan in the first place.
Categorical perception: many signals, few phonemes (categories). There is a gradual physical change but an abrupt
perceptual change. Sounds that are the same or fairly similar are not distinguished from one another; instead they
are grouped into categories, just as a Volkswagen and an Opel both fall into the category ‘car’.
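A sketch of this, assuming a single sharp /b/–/p/ boundary at roughly 25 ms voice onset time (an illustrative round number, not an exact empirical constant): the physical stimulus changes gradually, but the percept flips abruptly.

```python
# Sketch of categorical perception: voice onset time (VOT) varies
# continuously, but the percept snaps to a category. The 25 ms /b/-/p/
# boundary is an illustrative round number, not an exact empirical value.

def perceive(vot_ms: float) -> str:
    return "b" if vot_ms < 25 else "p"

continuum = list(range(0, 60, 10))        # gradual physical change
percepts = [perceive(v) for v in continuum]
print(list(zip(continuum, percepts)))
# all 'b' below the boundary, all 'p' above: an abrupt perceptual change
```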
McGurk effect: non-acoustic information affects speech perception, because visual and haptic (touch)
information can help identify gestures. It occurs when visual and auditory information do not match: when the
video shows a person saying ‘ga’ but the audio is of someone saying ‘ba’, people perceive ‘da’. This effect can
occur because your brain combines visual and auditory information.
General Auditory Theory
The general auditory (GA) approach to speech perception starts with the assumption that speech perception is not
special. Some studies have looked at the way people and animals respond to voicing contrasts (the difference
between unvoiced consonants like /p/ and voiced consonants like /b/). These studies suggest that our ability to
perceive voicing is related to fundamental properties of the auditory system: we can only tell that two sounds
began at different times if their onsets are more than about 20 ms apart.
- Ganong effect: listeners favor the real word over the non-word
- Phonemic restoration effect: listeners restore missing phonemes
Chapter 3 // Lecture 4
Phonemic restoration = happens on the fly, within the sentence, and you can’t do anything about it: an automatic
process. “What you heard is not what Cookie Monster said”.
The Mental Lexicon
Language consists of two components: the lexicon (information about words, their components and meanings) and
the grammar, used to combine the things in the lexicon into phrases.
The mental lexicon represents information about words in two ways:
1. It represents the form that words take in lexical networks (how they sound and look, in a phonetic and
orthographic code)
2. It represents meaning by referring to a semantic coding system in semantic memory, the conceptual store
Word form representations contain hierarchical components:
1. Phonetic features (= place and manner of articulation)
2. Phonemes (= individual speech sounds, combinations of phonetic features)
3. Syllables (= combinations of phonemes, divided into onsets and rimes)
4. Morphemes (= the smallest units of a language that carry meaning)
5. Words (= combinations of morphemes)
Words: morphology
→ A word is not necessarily a single morpheme; morphemes are its component parts.
Morpheme = the smallest unit that carries meaning
- Free morphemes = can appear as a word by themselves (e.g.: dog, cat, bread)
- Bound morphemes = cannot appear as a word by themselves (e.g.: -s in dogs, -ed in walked)
o Inflectional bound morphemes: do not change the grammatical category (e.g.: dog-dogs = noun-
noun, walk-walked = verb-verb)
o Derivational bound morphemes: change the grammatical category (e.g.: bake-baker = verb-noun,
interest-interesting = noun-adjective)
Monomorphemic words = one morpheme (example: dog)
Polymorphemic words = more than one morpheme (example: dog + s = dogs). Poly means multiple (combination
of free and bound morphemes)
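A toy decomposition along these lines, with an invented mini lexicon of free morphemes and a short list of bound suffixes (real morphological parsing also has to handle spelling changes such as bake + -er → baker, which this sketch ignores):

```python
# Toy morphological decomposition: strip a known bound suffix from a
# word to recover a free stem. The stem lexicon and suffix list are
# invented fragments; real parsing must also handle spelling changes
# (bake + -er -> baker), which this sketch ignores.

FREE_MORPHEMES = {"dog", "cat", "walk", "bake"}
BOUND_SUFFIXES = ["ing", "ed", "er", "s"]

def decompose(word: str) -> list[str]:
    if word in FREE_MORPHEMES:
        return [word]                                  # monomorphemic
    for suffix in BOUND_SUFFIXES:
        if word.endswith(suffix) and word[:-len(suffix)] in FREE_MORPHEMES:
            return [word[:-len(suffix)], "-" + suffix]  # polymorphemic
    return [word]                                      # unanalyzed

print(decompose("dog"))      # ['dog']
print(decompose("dogs"))     # ['dog', '-s']
print(decompose("walked"))   # ['walk', '-ed']
print(decompose("walking"))  # ['walk', '-ing']
```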
How many words do we know?
Word types and word tokens: “Mama, mama, mama” = 1 type and 3 tokens.
Word types = the number of distinct words; every word has to be different (“How many words did you know by age five?”)
Word tokens = the total number of words, counting repetitions (“How many words should my paper be?”)
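The type/token distinction is easy to make concrete: count distinct words versus total words. The punctuation stripping below is a rough simplification.

```python
# Counting word types vs. tokens: tokens count every occurrence,
# types count distinct words. The punctuation stripping is a rough
# simplification.

def count_types_and_tokens(utterance: str) -> tuple[int, int]:
    words = [w.strip(",.!?").lower() for w in utterance.split()]
    return len(set(words)), len(words)

print(count_types_and_tokens("Mama, mama, mama"))     # (1, 3)
print(count_types_and_tokens("the dog bit the man"))  # (4, 5)
```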
What does not count?
- Don’t count inflected and derived forms (e.g.: dog, dogs, dogless, etc.) or changes in grammatical category
- Don’t count transparent compound forms (e.g. dogfood, dog park, etc.). Transparent means that if you
put the words together, you can easily see what the meaning is.
- Don’t count names of people, places, … Misspelled words don’t count either.
Then we arrive at a list of approximately 62,000 lemmas in English, of which approximately 18,000 are base words
from which the remaining words are composed.
Lemma = the dictionary form of a word: ‘dog’ is one, but ‘dogs’, the plural, isn’t. Compound forms of words would
also be included in the dictionary.
Base words = the difference with lemmas is that these don’t have their own dictionary forms.
The more word types you already know, the less likely you are to encounter a new word type.
Meanings: fixed or fuzzy
Fixed meanings – there is a basic meaning for each word. Words are associated with episodic experiences in
someone’s life.
- Words are filed as a series of snapshots
- What is the meaning of ‘cat’ or ‘animal’?
- Checklist theory: sufficient and necessary features: if we want to describe a cat, we at least need enough
fixed features to describe it.
Fuzzy meanings – word meanings are inevitably fluid. What is missing from a checklist is that a word is like a
prototype: some meanings capture the essence of the word better than others.
- The ‘fuzzy edge phenomenon’: word meanings have prototypes; when we want to describe a
bird we will mention a typical kind of bird, like a sparrow.
- The ‘family resemblance syndrome’: a word like ‘game’ covers a lot of specific games, with many
different specific features.
Word association: what people think of when they hear a word.
Word association experiments:
1. Words selected from the same semantic ‘field’ (‘needle’ -> ‘thread’)
2. Word ‘partners’ (‘big’ -> ‘small’, ‘husband’ -> ‘wife’)
3. Words from the same class (noun -> noun, adjective -> adjective)
However:
Associations arise from words regularly occurring together: associated words are frequently mentioned
together but may have nothing in common semantically, e.g. a cause-and-effect relation.
Semantic (meaning) relations arise from shared contexts and higher-level relations: the contexts that the words
are mentioned in are similar.
“Definition” hypothesis: the meaning of a word corresponds to its list of necessary/core features
But some words have different meanings (e.g. bank). Problems:
→ Some words do not have consistent features (e.g. “game”)
→ Meanings are not equally good across different contexts (e.g. “red hair” is a worse example of
“red” than “fire-engine red”).
How are they represented in lexical selection?
- Dictionary entry theory (introspection): drawing conclusions from subjective experience. Words refer to
types; we store the core of essential properties. But some words would be represented by a huge number
of features (e.g. ‘cat’ could involve almost anything), and many categories/concepts are vague. Semantic
network theory sidesteps these problems:
- Semantic network theory: word meanings reflect collections of associated concepts. Meaning is
represented by a set of nodes (the concepts) and links (the relationships) between them. Those become
activated → spreading activation, but the total amount of spreading is limited. Two properties: 1. it is
automatic (fast and outside of control), 2. the more distant the node, the weaker the activation. Spreading
activation also leads to semantic priming (= presenting one stimulus speeds people’s responses to a
related stimulus). Connectivity reflects how many words are associated with a target word.
Spreading activation is thought to diminish substantially beyond one or two links in the network.
Evidence for this comes from mediated priming studies involving pairs of words like lion-stripes. The word
lion is related to the word stripes through the mediating word tiger (lion is associated with tiger, tiger is
associated with stripes). But: the total amount of activation that can be spread is limited.
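Spreading activation with distance-based weakening can be sketched over a toy network. The network, decay rate, and two-link limit below are illustrative assumptions; the point is that a mediated neighbour (lion → tiger → stripes) receives some activation, but less than a direct neighbour.

```python
# Minimal spreading-activation sketch over a toy semantic network.
# Activation weakens by a decay factor at each link, so the mediated
# neighbour (lion -> tiger -> stripes) gets some activation, but less
# than a direct neighbour. Network, decay, and link limit are invented.

NETWORK = {
    "lion":    ["tiger", "mane"],
    "tiger":   ["lion", "stripes"],
    "stripes": ["tiger"],
    "mane":    ["lion"],
}

def spread(source: str, decay: float = 0.5, max_links: int = 2) -> dict[str, float]:
    activation = {source: 1.0}
    frontier = {source}
    for _ in range(max_links):                 # total spread is limited
        reached = set()
        for node in frontier:
            for neighbour in NETWORK.get(node, []):
                gained = activation[node] * decay
                if gained > activation.get(neighbour, 0.0):
                    activation[neighbour] = gained
                    reached.add(neighbour)
        frontier = reached
    return activation

act = spread("lion")
print(act["tiger"])    # 0.5  (direct neighbour: strong priming)
print(act["stripes"])  # 0.25 (mediated neighbour: weaker priming)
```

This mirrors the mediated-priming pattern: ‘lion’ primes ‘stripes’ measurably, but less than it primes ‘tiger’.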
Models for semantic memory: both depend on corpora (= large collections of utterances), and both share the idea
that semantic representations incorporate a large number of dimensions and that word meanings can be described
as vectors across a large number of dimensions.
- Hyperspace Analog to Language (HAL): based on word-to-word co-occurrence: how close together words
occur.
- Latent Semantic Analysis (LSA): based on the number of times a word appears in an episode (document).
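The shared idea behind HAL-style models can be sketched as follows: build a co-occurrence vector for each word from a sliding window over a corpus, then compare words by cosine similarity. The toy corpus, window size, and symmetric distance weighting are simplifying assumptions (real HAL keeps left and right contexts separate).

```python
from collections import defaultdict
from math import sqrt

# HAL-style sketch: word vectors from word-to-word co-occurrence counts
# in a sliding window, compared by cosine similarity. The toy corpus,
# window size, and symmetric weighting are simplifying assumptions.

corpus = "the cat chased the mouse the dog chased the cat".split()
WINDOW = 2

vectors: dict = defaultdict(lambda: defaultdict(int))
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), i):
        weight = WINDOW - (i - j) + 1          # closer words weigh more
        vectors[word][corpus[j]] += weight
        vectors[corpus[j]][word] += weight

def cosine(a: str, b: str) -> float:
    va, vb = vectors[a], vectors[b]
    dot = sum(va[k] * vb[k] for k in va if k in vb)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

print(cosine("cat", "dog"))  # > 0: 'cat' and 'dog' occur in similar contexts
```

Words that occur in similar contexts (here, ‘cat’ and ‘dog’ both follow ‘the’ and occur near ‘chased’) end up with similar vectors, which is the core intuition behind both HAL and LSA.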