
Summary MLE Course: Integration Module Exam (Grade: 8.5)


My summary got me a grade of 8.7 on the integration exam, as part of the MLE course of the Brain and Cognition Specialization. By studying with my notes, you won't need to consult the lectures or any other materials. I include screenshots of the slides, plenty of images alongside concepts for bette...


  • 11 April 2023
  • 33 pages
  • 2021/2022
  • Summary
Author: elenafresch
INTEGRATION LECTURES

I1 – Semantic Memory

How is knowledge (meaning) represented?
The idea of representing meaning like an encyclopedia is not a good analogy for modelling
semantic memory: it is not the case that all knowledge is stored in the same place in the brain.
On the other hand, the analogy accurately captures the classification system in semantic
memory: as in a library, information is organized in a hierarchical and systematic way (e.g., all
information about topic x is kept in the same place). Overall, the analogy is both a good and a
bad model; as it turns out, semantic memory is very complex.




Until 1960, semantic memory received hardly any attention in memory psychology. Interest
emerged from the realization that (semantic) memory plays a role in everyday life. At the
same time, computer technology progressed, leading computer engineers to wonder how to
implement human-type knowledge in computers: the field of AI was born out of this need.
Around the same period, psychologists came up with the idea that semantic memory is
organized as a network of connections between different concepts.

AI (Artificial Intelligence)
AI has had its ups and downs; currently, it is a hot field. There are two basic (and largely
refuted) assumptions regarding AI:

1) If we know how a computer generates knowledge and uses it, then we will know how
memory works (the underlying structure might be the same).

2) If we know how human memory works, then we can make a super-powerful computer.





If we look at how AI represents things, the relationship between entities can be visualized like
this:




This is a network representation of semantic memory by means of rules, where “isa” means “is
a type of (category)”. For instance, ‘my dog’ is a type of dog, and ‘dog’ is a type of animal. Each
circle represents a concept: the dog eats meat; my dog chases the Frisbee.

By feeding the AI with many of these rules, you can encode knowledge about the world. This is
one way to represent knowledge: visualizing it by means of a network of rules and
relationships. However, this would normally be in computer language (code).
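The rule network described above can be sketched as triples in code. This is a minimal, hypothetical illustration: the triple format and the `isa_chain` helper are my own, not from the lecture.

```python
# Knowledge encoded as (subject, relation, object) rules, as in the
# network description above. Contents mirror the lecture's example.
facts = {
    ("my dog", "isa", "dog"),
    ("dog", "isa", "animal"),
    ("dog", "eats", "meat"),
    ("my dog", "chases", "frisbee"),
}

def isa_chain(entity):
    """Follow 'isa' links upward to list every category an entity belongs to."""
    categories = []
    current = entity
    while True:
        parents = [o for (s, r, o) in facts if s == current and r == "isa"]
        if not parents:
            break
        current = parents[0]
        categories.append(current)
    return categories

print(isa_chain("my dog"))  # ['dog', 'animal']
```

Feeding many such triples into the system is exactly the "encode knowledge about the world as rules" idea the text describes.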

The roots of AI
AI was based on the idea that human thinking (knowledge) is mostly based on rules rather than
fast thinking: having many rules for many instances, just like chess players. In fact, de Groot
(a psychology professor at the UvA) explored this hypothesis by studying how chess masters
remember chess positions. When faced with existing chess board configurations, chess
masters were much better than students. However, when faced with impossible positions
(e.g., ones violating the rules of chess), the masters lost their advantage. This led him to
conclude that grandmasters are so good because they have a huge store of chess knowledge
(e.g., games previously played, watched, or studied); in other words, they built up huge chess
databases over time. De Groot’s work inspired Herbert Simon to co-found AI: the idea was to
feed an “expert” system with different sets of rules and situations, so that whenever a
situation occurred, the rule-based expert system could answer based on similar known
situations in its database.
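As a rough sketch of this rule-based idea: store known situations with their responses, and answer a new situation by retrieving the most similar stored one. The chess features and responses below are invented purely for illustration.

```python
# Hypothetical "expert system" store: each entry pairs a set of situation
# features with the response used in a known game. All names are made up.
known_games = [
    ({"white_queen_active", "open_center"}, "develop knights"),
    ({"king_exposed", "open_center"}, "castle"),
    ({"white_queen_active", "king_exposed"}, "trade queens"),
]

def respond(situation):
    """Answer with the response of the stored situation sharing the most features."""
    best = max(known_games, key=lambda entry: len(entry[0] & situation))
    return best[1]

print(respond({"king_exposed", "closed_center"}))  # 'castle'
```

The larger the database of stored situations, the more often a new position resembles a known one, which is the sense in which de Groot's grandmasters "answered from their database".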





Using semantic knowledge

Sentence verification: Hierarchical Models & Network Models

• Hierarchical: Collins & Quillian Model
At the top of the hierarchy, there are more general categories. Each level adds more
specific properties and characteristics.




This way, you can verify sentences such as “a penguin has wings” or answer questions
such as “can a canary sing?”. Reaction times reflect how long it takes to reach the level
in the model where the relevant property is stored: for the first sentence, you have to
move up from the specific (penguin) to the generic (bird) level, where “has wings” is
stored, whereas for the second question the property is stored at the specific “canary”
level itself. However, this model is wrong: it makes incorrect predictions in many cases:

  • It doesn’t explain the typicality effect: you verify “a robin is a bird” faster than
    the same sentence for other birds, because the robin is a more typical bird.
  • Frequency of association matters more than distance in the hierarchy: while
    “a cat is a mammal” is true, the sentence “a cat is an animal” is responded to
    faster, because cat and animal are more frequently associated than cat and
    mammal (even though “animal” is higher in the hierarchy).
  • It doesn’t explain how NO-answers are created, i.e., when the answer is not
    found within the tree; the model makes no reaction-time predictions for these.
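The hierarchy-traversal idea can be sketched in a few lines. This is a toy illustration: the miniature hierarchy and where each property is stored are assumptions for the sketch, not taken from the lecture slides.

```python
# Collins & Quillian-style hierarchy: each property is stored only at the
# most general level that has it, and verification time is predicted by
# the number of 'isa' links traversed.
hierarchy = {
    "canary":  {"parent": "bird",   "props": {"can sing", "is yellow"}},
    "penguin": {"parent": "bird",   "props": {"can swim", "cannot fly"}},
    "bird":    {"parent": "animal", "props": {"has wings", "has feathers"}},
    "animal":  {"parent": None,     "props": {"eats", "breathes"}},
}

def verify(concept, prop):
    """Return (answer, levels traversed); levels is the model's RT predictor."""
    levels = 0
    node = concept
    while node is not None:
        if prop in hierarchy[node]["props"]:
            return True, levels
        node = hierarchy[node]["parent"]
        levels += 1
    return False, levels

print(verify("canary", "can sing"))    # (True, 0): stored at the canary node
print(verify("penguin", "has wings"))  # (True, 1): found one level up, at 'bird'
```

Note how the sketch also exposes the NO-answer problem from the list above: `verify("canary", "can swim")` simply falls off the top of the tree, and the traversal count gives no principled reaction-time prediction.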




 • Network: Spreading Activation Model
Items (e.g., properties, things) are associated as nodes in an interconnected network.
This model accounts for semantic priming: in a lexical decision task, you are faster to
identify a target word when it is preceded by a semantically related word.
However, this model still doesn’t explain reaction times in many important cases.
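One way to picture spreading activation and its priming effect is the following toy sketch. The network contents, the decay factor, and the reaction-time numbers are illustrative assumptions, not empirical values.

```python
# Spreading activation: presenting a prime activates its node, activation
# spreads (with decay) to neighbours, and pre-activated targets are
# recognized faster in a lexical decision task.
network = {
    "bird": ["robin", "canary", "wings"],
    "robin": ["bird", "red breast"],
    "canary": ["bird", "yellow"],
}

def spread(prime, decay=0.5):
    """Return activation levels after one step of spreading from the prime."""
    activation = {prime: 1.0}
    for neighbour in network.get(prime, []):
        activation[neighbour] = decay
    return activation

def reaction_time(target, activation, base_rt=600, speedup=200):
    """Higher activation -> faster (smaller) simulated reaction time, in ms."""
    return base_rt - speedup * activation.get(target, 0.0)

act = spread("bird")
print(reaction_time("robin", act))  # 500.0: primed by 'bird', faster
print(reaction_time("arm", act))    # 600.0: unrelated, no speed-up
```

This reproduces the bird-robin vs. bird-arm contrast described in the priming section below, though, as the text notes, the real model still fails to explain reaction times in many important cases.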




Priming
• Associative
“Cats and dogs” may prime “weather” because of their association in the
common expression “it’s raining cats and dogs”: the two occur together in the
real world, although they are not necessarily related in meaning.

• Semantic
This type of priming occurs when there is an inherent bond in meaning between
the words (e.g., “dog” primes “Labrador”, which is a type of dog).

The lexical decision task is often used to test semantic priming: participants are
sequentially presented with the prime (briefly) and then the target word. They are
faster to respond to the target word (e.g., robin) if the preceding prime is semantically
related (e.g., bird). In contrast, response times are slower if the prime is unrelated to
the target, as in bird-arm.

When people expect to be primed with a semantically related word but are given an
unrelated one instead, reaction times slow considerably (depending on the SOA, the
stimulus onset asynchrony between prime and target).



