Lecture notes Week 7
Quizlet link: https://quizlet.com/Annabel2703/folders/leren-en-geheugen/sets
Contents
College 17: Computational models of learning and memory (part II)
College 18: Alzheimer's disease and dementia
College 19: Memory consolidation & reorganization
College 17: Computational models of learning and memory (part II)
➢ Synapses are important for learning, so we need to be able to simulate them well.
❖ Recap:
➢ Basic ideas for modelling: how can we model individual neurons at different levels?
➢ We covered 3 models at 3 levels: Hodgkin-Huxley (most biophysical, based on membrane currents), Izhikevich (a little more simplified), and the leaky integrate-and-fire model (the simplest, but useful for basic features; used during the workshop).
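❖ As a reminder of what the leaky integrate-and-fire model from the workshop boils down to, here is a minimal Python sketch. It is only an illustration: the parameter values (time constant, threshold, reset, input current) are assumptions, not necessarily the ones used in the workshop.

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau_m=10.0, v_rest=-70.0,
                 v_reset=-75.0, v_thresh=-55.0, R=10.0):
    """Leaky integrate-and-fire: integrate an input current trace I (one value per time step)."""
    v = v_rest
    trace, spike_times = [], []
    for step, i_t in enumerate(I):
        # Leaky integration: decay back towards rest plus the input drive
        v += dt / tau_m * (-(v - v_rest) + R * i_t)
        if v >= v_thresh:              # threshold crossed -> emit a spike and reset
            spike_times.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spike_times

# A constant suprathreshold current gives regular spiking
trace, spikes = simulate_lif(np.full(1000, 2.0))
print(f"{len(spikes)} spikes in {1000 * 0.1:.0f} ms")
```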
❖ How are models of learning implemented in these neural network models?
Models of learning and memory
❖ Now that we can simulate a neuron, we can start looking at learning rules
❖ We can classify the learning strategies of the brain in three main categories, which can also be used in artificial approaches to learning (a toy sketch of the three kinds of weight update follows after this list).
➢ Unsupervised learning: the neural network only receives input from the outside world; the synaptic weights change as a function of that input (and the network's internal activity).
▪ Input from the outside world -> the activity between the input and layer x is the main thing that changes the output?
▪ There is nothing that oversees how the network learns and performs; it is free, unsupervised.
➢ Supervised learning: the neural network receives input from the outside world and also the
desired output, so that the network can change its synaptic weights to reach such output.
▪ Compare the actual output with the desired output -> adjust the weights.
➢ Reinforcement learning: the neural network receives input from the outside world, and a reward/punishment teaching signal which biases the learning towards a desired output.
▪ Extension of supervised learning, with a teaching signal. The signal is not the desired output itself; it is not a very specific signal but a reward/punishment.
▪ (I wonder: does the network only get told "you did it wrong", or is it also given a direction such as "your output was too low", or is that more like supervised learning?)
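❖ To make the difference between the three categories concrete, the toy Python sketch below shows one possible weight-update rule per category for a single linear unit y = w·x. These are schematic textbook-style rules under my own assumptions (learning rate, function names), not the exact formulations from the lecture.

```python
import numpy as np

eta = 0.01  # learning rate (illustrative value)

def unsupervised_update(w, x):
    """Hebbian: the change depends only on the input and the unit's own output."""
    y = w @ x
    return w + eta * y * x

def supervised_update(w, x, y_target):
    """Delta rule: the desired output is given, so the error can be reduced directly."""
    y = w @ x
    return w + eta * (y_target - y) * x

def reinforcement_update(w, x, reward):
    """Reward-modulated Hebb: only a scalar reward/punishment signal is available."""
    y = w @ x
    return w + eta * reward * y * x
```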
❖ Biological examples:
➢ Unsupervised learning: for example, receptive fields.
➢ Supervised learning: links with biological mechanisms still unclear. A good candidate is
learning in the cerebellum (teaching signals).
➢ Reinforcement learning: classical conditioning.
Unsupervised learning
❖ Unsupervised learning is a learning process in which synaptic weights change as a function of the
inputs (and internal activity) only.
❖ Simplicity and plasticity -> convenient both for experiments and from a computational point of view.
❖ It is therefore easy to map this process to the learning of biological neural systems and changes in
biological synapses.
❖ The first biological principle of synaptic changes associated with learning is Hebb's principle: “Neurons that fire together, wire together” (Donald Hebb).
➢ [Figure: neurons firing together -> the synaptic weight increases]
➢ Example: “WHEEL” reminds us of the car.
➢ Activation of one neuron -> activation of the neuron connected to it via the strengthened synapse.
❖ This principle allows us to recover neural activity patterns, or neural assemblies, from incomplete or noisy data, leading to the concept of associative memory.
➢ This happens without any kind of supervision from external agents.
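❖ A minimal Python sketch of this idea (a Hopfield-style associative memory): patterns are stored with Hebbian outer products, and a corrupted cue is cleaned up by iterating the network. The pattern size, number of patterns and noise level are arbitrary choices for illustration, not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

patterns = np.sign(rng.standard_normal((3, 100)))     # three random +/-1 patterns to store
W = sum(np.outer(p, p) for p in patterns) / patterns.shape[1]  # Hebbian outer-product weights
np.fill_diagonal(W, 0)                                 # no self-connections

cue = patterns[0].copy()
flip = rng.choice(100, size=20, replace=False)         # corrupt 20% of the bits of pattern 0
cue[flip] *= -1

state = cue
for _ in range(5):                                     # iterate: the network falls into the stored pattern
    state = np.sign(W @ state)
    state[state == 0] = 1

print("overlap with stored pattern:", (state == patterns[0]).mean())
```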
❖ We can consider a variety of learning rules to train neural networks in an unsupervised way.
➢ Some of these rules come from biology (e.g. refined versions of the Hebb rule, or other rules also found in synapses).
➢ Other rules can be considered on the basis of their theoretical and computational properties
(such as stability, simplicity or fast training times).
➢ We will cover several classical learning rules used in unsupervised learning
❖ These principles serve as guidelines.
❖ The BCM rule:
➢ formulated by Elie Bienenstock, Leon Cooper and Paul Munro in 1982. It attempts to explain
learning in the visual system.
➢ This rule is an extension of the Hebb rule (but for continuous values) which solves two
important aspects of the stability problem of the Hebb rule. (Hebb would make the synapses
either stronger and stronger, or weaker and weaker, but we want something more stable)
➢ More precisely, the BCM rule adds
▪ (i) a leaky term to incorporate depression and make unused synapses weaker,
▪ and (ii) a sliding threshold to balance potentiation with depression and prevent
runaway increase of synaptic weights
➢ Equation (written here in a generic form; η is the learning rate, ε the decay constant):
▪ dw_ij/dt = −ε·w_ij + η·φ(v_i, θ_M)·v_j
▪ This describes the temporal evolution of w_ij, the synaptic weight between neurons i and j (v_j is the presynaptic and v_i the postsynaptic activity).
▪ φ is the sigmoidal function, which imposes a cap on the increase of the synaptic weight.
▪ This function introduces a sliding threshold (θ_M), which provides the stability factor missing in the standard Hebb rule.
▪ The leaky term (−ε·w_ij) provides a long-term depression mechanism.
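❖ A small Python sketch of a BCM-style update consistent with the description above: a Hebbian term gated by a capped (sigmoid-like) function of the postsynaptic activity relative to a sliding threshold, plus a leaky decay term. The specific choice of tanh, the time constants and the running-average threshold are illustrative assumptions, not the lecture's exact values.

```python
import numpy as np

eta, eps, tau_theta, dt = 0.01, 0.001, 100.0, 1.0  # illustrative constants

def bcm_step(w, x, theta):
    """One BCM-style update for a single postsynaptic neuron with input vector x."""
    y = w @ x                                         # postsynaptic activity
    phi = np.tanh(y * (y - theta))                    # potentiation above theta, depression below, capped
    w = w + dt * (eta * phi * x - eps * w)            # gated Hebbian term plus leaky decay (LTD)
    theta = theta + dt * (y**2 - theta) / tau_theta   # sliding threshold tracks recent activity
    return w, theta

# Example: drive the neuron with random inputs and watch the threshold adapt
rng = np.random.default_rng(0)
w, theta = rng.random(10) * 0.1, 0.5
for _ in range(500):
    w, theta = bcm_step(w, rng.random(10), theta)
print("final sliding threshold:", round(theta, 3))
```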