Auditory and Higher Order Language Processing
Week 1: Introduction – Sound in the brain & identifying auditory cortical areas
Learning goals
1. What are the physical and perceptual properties of sound? What is the link between
them?
Physical definition: Sound is a pattern of pressure changes in the air or another medium.
When movements or vibrations of an object cause pressure changes in the air, the pressure
increases (compression) and decreases (rarefaction). Because neighbouring air molecules
affect each other, a pattern of alternating high- and low-pressure regions forms in the air;
this is called a sound wave. The pattern travels through air at about 340 metres per second.
The air molecules themselves stay in place and just move back and forth; only the pattern is
transmitted.
Pure tone: occurs when the changes in air pressure follow a sine-wave pattern.
Complex tone: consists of layered frequencies. The fundamental frequency is the lowest of
these (the largest common factor of the other frequencies, itself a pure tone). A harmonic is
a wave whose frequency is a positive integer multiple of the fundamental frequency.
Frequency: number of cycles per second at which the pressure changes repeat (Hertz, Hz)
-> perceptual link = pitch (high vs. low)
Amplitude: size of the pressure change (decibels, dB) -> perceptual link = loudness of the sound
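The decibel scale for amplitude is logarithmic. A minimal sketch of the standard sound-pressure-level (SPL) convention; the reference pressure of 20 µPa is the usual textbook value, not something stated in these notes:

```python
import math

# Sound pressure level: dB SPL = 20 * log10(p / p0), where p0 = 20 µPa
# is the conventional reference near the threshold of human hearing.
P0 = 20e-6  # reference pressure in pascals (standard convention)

def spl_db(pressure_pa):
    """Convert a sound pressure in pascals to decibels SPL."""
    return 20 * math.log10(pressure_pa / P0)

# A tenfold increase in pressure adds 20 dB:
print(round(spl_db(2e-5), 1))   # -> 0.0  (at the reference threshold)
print(round(spl_db(2e-4), 1))   # -> 20.0
```

Because the scale is logarithmic, equal dB steps correspond to equal pressure ratios, not equal pressure differences.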
Perceptual definition: Sound is the experience we have when we hear.
Amplitude relates to loudness, but the relationship is not linear; frequency
also plays a role in the thresholds (see picture). Each frequency has its own
threshold, the level at which it can barely be heard, and loudness increases
as the level rises above that baseline.
Fundamental frequency relates to pitch.
Timbre = the quality that distinguishes two tones: even when they have the
same frequency and amplitude, tones can sound different to us (e.g. sharp,
round, brassy). Timbre is hard to study.
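The pure vs. complex tone distinction above can be sketched numerically; the sample rate, duration, fundamental frequency and harmonic amplitudes below are arbitrary illustrative choices:

```python
import numpy as np

sample_rate = 8000                        # samples per second
t = np.arange(0, 0.5, 1 / sample_rate)    # 0.5 s of time points

f0 = 220.0                                # fundamental frequency in Hz
pure_tone = np.sin(2 * np.pi * f0 * t)    # a single sine wave

# A complex tone: the fundamental plus its 2nd and 3rd harmonics
# (positive integer multiples of f0), here with decreasing amplitudes.
complex_tone = (1.0 * np.sin(2 * np.pi * 1 * f0 * t)
                + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
                + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))
```

Both signals repeat with period 1/f0, so they share the same pitch; the extra harmonics change the waveform shape, which is what we perceive as a difference in timbre.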
2. How does transduction of the soundwave exactly work? (Focus on general mechanism)
Three steps: 1) the sound stimulus reaches the receptors, 2) the sound stimulus is
transduced from pressure changes into electrical signals, 3) these electrical
signals are processed.
(Outer ear) The pinna catches the waves, which then travel along the ear canal,
which amplifies frequencies relevant for speech. (Middle ear) The waves reach the
eardrum (a membrane), which vibrates and in turn vibrates the small ossicles (the
bones malleus, incus and stapes). The ossicles are needed because air and the
cochlear fluid differ in density, which creates a mismatch in how easily the waves
are transmitted; the ossicles amplify the vibration and pass it on to the cochlea.
(Inner ear) The cochlea is filled with liquid. The vibration sets the liquid in
motion, which creates waves. Hair-like structures called stereocilia sit on top of
hair cells inside the cochlea and are grouped together as hair bundles. The bundles
ride these waves and move, and this movement is turned into electrical signals: as
the hair bundles move, ions rush into the top of the hair cells, causing the release
of chemicals at the bottom of the hair cell. The chemicals bind to auditory nerve
cells and trigger an electrical signal. Different hair cells react to different
frequencies (tuning curves): base of the cochlea = higher pitch, apex = lower pitch.
Hair cells also show phase locking (firing in time with the phase of the wave), and
via the volley principle groups of neurons together can encode frequencies that are
too high for a single neuron to follow.
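The base-to-apex frequency mapping can be illustrated with the Greenwood function, a standard fit of human cochlear tonotopy; the constants are textbook values (Greenwood, 1990), not something given in these notes:

```python
def greenwood_hz(x):
    """Characteristic frequency (Hz) at relative position x along the
    cochlea, with x = 0 at the apex and x = 1 at the base.
    Human constants: A = 165.4, a = 2.1, k = 0.88 (Greenwood, 1990)."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Apex codes low frequencies, base codes high frequencies:
print(round(greenwood_hz(0.0), 1))   # apex -> ~19.8 Hz
print(round(greenwood_hz(1.0)))      # base -> ~20677 Hz
```

The two endpoints roughly span the human hearing range (about 20 Hz to 20 kHz), consistent with "base = higher pitch, apex = lower pitch" above.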
3. Propagation of signal in auditory pathway. What is the underlying anatomical structure,
in which auditory processing stages are they implicated?
Signals travel along the auditory nerve to the cochlear nuclei in the brainstem on
the ipsilateral side. Most of the auditory information crosses over (primary
pathway); however, each cerebral hemisphere processes stimuli from both the
ipsilateral (secondary pathway) and the contralateral side. A1 is tonotopically
organised: areas are tuned to specific frequencies.
Mnemonic SONIC MG: SON (superior olivary nucleus) -> IC (inferior colliculus) ->
MG (medial geniculate nucleus) -> A1
4. What happens to hair cells when we go to a Metallica concert when not wearing ear
protection? Are there different forms of hearing loss?
The hair cells bend more with louder noise. Usually, they bend back after some time.
However, if loud noise damages the hair cells too severely, some of them will die.
Repeated exposure to loud noise will destroy many hair cells over time. This can
gradually reduce your ability to understand speech in noisy places. Eventually, if
hearing loss continues, it can become hard to understand speech even in quieter
places. Noise can also damage the auditory nerve that carries information about
sounds to your brain.
Sensorineural hearing loss: the stereocilia (or other inner-ear structures) are damaged
Conductive hearing loss: obstructions in the outer or middle ear, perhaps due to fluid,
tumours, earwax or ear malformation, prevent sound from getting to the inner ear
Mixed hearing loss: combination
Auditory neuropathy spectrum disorder: sound is able to enter an ear normally and reach
the acoustic nerve, but there is a problem when the sound is transmitted to the brain.
5. What is the concept behind Fourier transformation?
If multiple sound waves are present at once, the pressure difference is the sum of
the individual waves; the result is then no longer a pure sine wave.
How can you decompose such a signal into the pure frequencies that make it up?
The Fourier transform (FT) turns a function in the time domain into a function in
the frequency domain. From this, a spectrogram can be created, with frequency on
the y-axis, time on the x-axis and colour coding the amplitude.
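The decomposition idea can be sketched with a discrete Fourier transform; the two component frequencies below (50 Hz and 120 Hz) are arbitrary example values:

```python
import numpy as np

sample_rate = 1000                         # samples per second
t = np.arange(0, 1.0, 1 / sample_rate)     # 1 second of samples

# Sum of two pure tones: the pressure at each instant is the sum of the
# individual waves, so the summed signal is no longer a sine wave.
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Fourier transform: time domain -> frequency domain.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# The two largest peaks sit exactly at the component frequencies.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))   # -> [50.0, 120.0]
```

A spectrogram applies this transform to short successive time windows, which is how the time-frequency-amplitude picture described above is built.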
Week 2: Identifying auditory cortical areas
Formisano et al: Mirror symmetric tonotopic maps in human primary auditory cortex
Tonotopic maps: A1 neurons that selectively respond to the spectral content of sounds
create one or more maps in which nearby regions of the cortical surface react to similar
frequencies
→ In the monkey brain, three subdivisions (AI, R, RT) form the auditory core. This is the
first stage of parallel processing of auditory information, and at least two of the
subdivisions (AI & R) show selective activity for specific frequencies.
→ Tonotopy is not only a property of the primary areas: the lateral belt has been shown
to exhibit tonotopy as well.
→ What does this tonotopy look like in the human auditory cortex?
Method: fMRI study while playing tones of different frequencies:
→ Stimuli that were presented had a frequency of 0.3, 0.5, 0.8, 1, 2, 3 kHz. All stimuli were
presented at a sound presentation level of 70 dB for all subjects.
However, the scanner makes a really loud noise. How do they deal with scanner noise?
→ by using a sparse-sampling / stroboscopic event-related scheme
• The auditory stimuli are presented in the silent periods between scans
• By varying the time between the stimulus and the fMRI scan, you collect
information at different points of the BOLD response curve. This is only possible
thanks to the metabolic delay of the BOLD response: the image is taken after the
sound was played, and because the BOLD response is delayed, the scanner noise of
that acquisition no longer disturbs the measured activity.
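The timing logic of such a sparse-sampling scheme can be sketched as follows; the scan duration, silent gap and stimulus-to-scan delays are made-up illustrative numbers, not the parameters used by Formisano et al.:

```python
# Sparse-sampling timing sketch (illustrative numbers only).
scan_duration = 2.0   # seconds of loud gradient noise per volume
silent_gap = 8.0      # seconds of silence before each scan
trial_length = silent_gap + scan_duration

# Vary the stimulus-to-scan delay across trials so that the single image
# per trial samples a different point of the delayed BOLD response curve.
delays = [4.0, 5.0, 6.0, 7.0]

schedule = []
for trial, delay in enumerate(delays):
    trial_start = trial * trial_length
    scan_onset = trial_start + silent_gap   # scan follows the silence
    stim_onset = scan_onset - delay         # stimulus falls in the silence
    schedule.append((stim_onset, scan_onset))

for stim, scan in schedule:
    print(f"stimulus at {stim:5.1f} s, scan at {scan:5.1f} s")
```

Each stimulus lands inside the silent gap, yet by scan time the sluggish BOLD response to that stimulus is near its peak, while the response to the scanner's own noise has not yet developed.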