Auditory System Flashcards (53 cards)

1
Q

What is the measure of sound intensity?

A

*The intensity of a sound increases when the air is compressed more forcefully during the peak compression in each cycle, resulting in increased density of air.
*The perceptual correlate of intensity is called loudness; the larger the intensity, the greater the perceived loudness of the sound.
*Sound intensity is usually expressed on a logarithmic scale as decibels of sound pressure level (dB SPL).
*The amplitude of a sound in dB SPL (decibels Sound Pressure Level) is given by dB SPL = 20 log10[P1/P2], where P1 is the measured sound pressure and P2 is a standardized reference pressure of 20 x 10^-6 Newtons/m^2 (set by convention to the threshold pressure for human hearing at 1000 Hz).
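
A minimal numerical sketch of this formula (Python; the example pressure values are made up purely for illustration):

```python
import math

P_REF = 20e-6  # reference pressure P2: 20 x 10^-6 N/m^2 (0 dB SPL, threshold at 1000 Hz)

def db_spl(p1):
    """Express a sound pressure p1 (N/m^2) in decibels SPL: 20*log10(P1/P2)."""
    return 20 * math.log10(p1 / P_REF)

print(db_spl(20e-6))  # 0.0 dB SPL -- the reference/threshold pressure itself
print(db_spl(2e-3))   # 40.0 dB SPL -- a hypothetical pressure 100x the reference
```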

2
Q

What is the lower threshold for human hearing?

A

*By convention, the reference pressure of 20 x 10^-6 Newtons/m^2 (0 dB SPL) is set to the threshold of human hearing at 1000 Hz.
*The amplitude of a sound in dB SPL (decibels Sound Pressure Level) is given by dB SPL = 20 log10[P1/P2], where P2 is this standardized reference pressure.

3
Q

What are the common comparisons for dB levels of 20, 40, 80, 140 and 180?

A
20 - country at night
40 - quiet conversation
80 - busy street
140 - front row of a rock concert
180 - space shuttle launch at 150 ft
4
Q

What is the threshold for “danger zone” in dB?

A

120 dB and above can cause permanent hearing loss.

5
Q

What is the absolute range of human hearing?

A
  • 20 Hz - 20,000 Hz

* Peak sensitivity is around 3 kHz

6
Q

How do you determine the wavelength of sound?

A

wavelength (lambda) = velocity/frequency

*velocity is assumed to be 340 m/s (the speed of sound in air)
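
A quick worked example of this relationship (a Python sketch; the test frequencies are chosen just for illustration):

```python
# wavelength (lambda) = velocity / frequency, with velocity ~340 m/s in air
SPEED_OF_SOUND = 340.0  # m/s

def wavelength_m(frequency_hz):
    return SPEED_OF_SOUND / frequency_hz

print(wavelength_m(1000))    # 0.34 m at 1000 Hz
print(wavelength_m(20))      # 17.0 m at the low end of human hearing
print(wavelength_m(20000))   # 0.017 m (1.7 cm) at the high end
```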

7
Q

How do audiologists quantify hearing loss?

A

For each ear, and at a range of test frequencies, the audiologist measures:

  • the smallest dB SPL that a subject can just detect (called the threshold).
  • The most common cause of hearing loss is age. As we age it is very common to suffer a loss of high frequency hearing, or “presbycusis”.
  • Loss of high frequency hearing becomes especially problematic for the perception of speech since fricative consonants (such as t, p, s, f) are distinguished by high frequency components that fall in the upper end of the human audiogram.
8
Q

What does presbycusis mean?

A

age-related loss of hearing, particularly in the higher frequencies.
*makes it tough to hear consonants

9
Q

What are the three divisions of the ear?

A

· external ear, composed of the pinna and external auditory meatus (ear canal), bounded by the tympanic membrane

· middle ear cavity, containing the ossicular chain of 3 middle ear bones: malleus, incus and stapes

· inner ear, containing the cochlea and the semicircular canals (part of the vestibular system, involved in detection of movement and maintaining balance, to be discussed in later lectures)

10
Q

Why does the middle ear function as an impedance matcher?

A

*the normal air:water impedance mismatch would cause roughly a 30 dB loss of sound if the tympanic membrane interfaced directly with the inner ear fluid

  • The middle ear alleviates the impedance mismatch between fluid and air in two ways.
  • Pressure (P) is equal to force (F) divided by area (A), P = F/A. So you can increase the pressure reaching the inner ear by either increasing the force and/or by decreasing the area being pushed by the sound waves.
  • First, the area of the tympanic membrane is ~20 times that of the stapes footplate. Thus, low amplitude vibrations falling onto the large tympanic area are concentrated into large amplitude motions of the much smaller stapes footplate.
  • Second, the orientation of the middle ear bones confers a levering action resulting in a larger force (a gain of about 1.3:1).
  • These factors together (1.3x20 = 26-fold gain in pressure, or 20*log10[26/1] = +28 dB) are sufficient to nearly overcome the otherwise severe acoustic impedance mismatch.
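
The arithmetic from the last bullet, worked through as a small Python sketch:

```python
import math

AREA_RATIO = 20.0   # tympanic membrane area / stapes footplate area (~20:1)
LEVER_GAIN = 1.3    # force gain from the lever action of the ossicles (~1.3:1)

pressure_gain = AREA_RATIO * LEVER_GAIN   # 26-fold gain in pressure
gain_db = 20 * math.log10(pressure_gain)  # express the gain in decibels

print(pressure_gain)       # 26.0
print(round(gain_db, 1))   # 28.3 dB -- nearly offsets the ~30 dB air:fluid mismatch
```
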
11
Q

What is meant by conductive hearing loss?

A

Conductive hearing loss results when the mechanical transmission of sound energy through the middle ear is degraded. Common causes are 1) filling of the middle ear with fluid during otitis media (i.e., ear infection); 2) otosclerosis, in which arthritic bone growth impedes the movement of the ossicles; 3) malformations of the ear canal (atresia), including “swimmer’s” and “cauliflower” ear; 4) perforation/rupture of the tympanic membrane; 5) interruption of the ossicular chain; 6) static pressure in the middle ear. Losses of 10-60 dB can occur in these cases.

12
Q

What are some common causes of conductive hearing loss?

A

Common causes are:

1) filling of the middle ear with fluid during otitis media (i.e., ear infection);
2) otosclerosis, in which arthritic bone growth impedes the movement of the ossicles;
3) malformations of the ear canal (atresia), including “swimmer’s” and “cauliflower” ear;
4) perforation/rupture of the tympanic membrane;
5) interruption of the ossicular chain;
6) static pressure in the middle ear.

Losses of 10-60 dB can occur in these cases.

13
Q

What is meant by sensorineural hearing loss?

A

Sensorineural hearing loss results from damage to or loss of hair cells and/or auditory nerve fibers.

14
Q

How can you clinically separate sensorineural from conductive hearing loss?

A

Compare whether the patient can hear a tuning fork pressed against the skull (bone conduction) versus held in the air (air conduction).

  • The otolaryngologist can distinguish conductive hearing loss from sensorineural hearing loss by comparing the audibility of a 512 Hz tuning fork held in the air or pressed against the skull.
  • Because, like fluid, bone has a high input impedance, the transfer of energy from the tuning fork through the bone to the fluid filling the inner ear is efficient. Thus, in conductive hearing loss the latter position (fork against bone) is effective at presenting sound by bone conduction, thereby overcoming the conductive loss that pertains to air-borne sound.
15
Q

What are some causes of sensorineural hearing loss?

A

Common causes are

1) excessively loud sounds (iPod!!);
2) exposure to ototoxic drugs (diuretics, aminoglycoside antibiotics, aspirin, cancer therapy drugs); and
3) age (presbycusis).

16
Q

What are the three compartments of the cochlea?

A

The scala vestibuli, the scala media and the scala tympani; all three compartments are visible in each spiral (cross-section) of the cochlea.

17
Q

What do the IHCs do in the ear?

A
  • the sensory cells of the hearing modality
  • The business end of the cochlear compartments is the scala media and scala tympani, which are separated by the basilar membrane (BM).
  • Sitting within the scala media and on top of the BM is the organ of Corti, which contains the inner hair cells that transduce sound into electrical signals.
  • The IHCs are attached to the BM due to their placement in the organ of Corti. Because of this, movements of this membrane (elicited by sound) are ultimately translated by IHCs into electrical signals. The mechanical properties of the BM play a key role in the discrimination of sound frequency.
18
Q

Describe the movement of the basilar membrane during an oscillatory sound wave

A

During an oscillatory sound wave the BM moves up towards the scala vestibuli during rarefaction and down towards the scala tympani during compression.

19
Q

Describe the “tuning” of the basilar membrane using the piano analogy

A

Just as the thinner, less flexible strings of a piano vibrate at high frequencies, the BM vibrates best to high frequencies towards the base of the cochlea. In contrast, just as the thicker, more flexible piano strings vibrate best at low frequencies, the BM vibrates best to low frequencies at the apex of the cochlea.

20
Q

What is the primary stimulus attribute that is mapped along the cochlea?

A

*sound frequency and intensity b/c of “tuning” of the BM and the IHCs
*IHC = inner hair cell
Each IHC will respond best to a certain frequency determined by the mechanical properties of the BM at that particular location. Thus the primary stimulus attribute that is mapped along the cochlea is sound frequency (and intensity).

21
Q

Describe the transduction of the hair cell

A
  • It is in IHCs where the conversion of mechanical vibration into membrane potential changes (“transduction”) takes place.
  • From the apical surface of a hair cell projects an array of stereocilia
  • The stereocilia within any single hair bundle vary in length. Movement of the bundle of stereocilia results in a change in the membrane potential of the hair cell.
  • The hair cell normally has a membrane potential of about -50 mV.
  • When the stereocilia bundle is pushed in the direction toward the longest stereocilia, the membrane potential becomes more positive (depolarizes);
  • when the bundle is pushed in the direction toward the shortest stereocilia, the potential becomes more negative (hyperpolarizes)
  • Note that the cell is never “at rest” (i.e. it does not rest at the K+ equilibrium potential, EK+, of -70 mV).
22
Q

what is a common cause of congenital deafness?

A

A collapse of the endocochlear potential due to a mutation in the gap junction subunit connexin 26, which is important in the active transport of potassium by the stria vascularis, is the major cause of congenital deafness.

23
Q

concerning potassium, what is the driving force for potassium influx/efflux at the apical membrane of the IHC?

A

  • Bending of the stereocilia results in altered gating of transduction channels located near the tips of the individual stereocilia. The transduction channel is a non-specific cation channel that is voltage-insensitive.
  • A K+-rich fluid called endolymph fills the scala media and bathes the stereocilia on the apical end of hair cells. In contrast, the basal end of the hair cell is bathed by perilymph, a fluid with ionic composition similar to blood (high Na+, low K+) that fills the scala vestibuli and scala tympani.
  • Endolymph has a high K+ and low Na+ concentration and is the only place in the body that contains such an unusual fluid.
  • The stria vascularis, an epithelium on the side of the scala media, actively pumps K+ into the endolymph, maintaining a high K+ concentration. Active pumping of K+ by the stria vascularis results in a positive potential inside the scala media.
  • This potential, the endocochlear potential, has a magnitude of +80 mV (endolymph positive with respect to perilymph).

*Since K+ is the major extracellular cation in endolymph, K+ influx underlies hair cell depolarization. Note that K+ moves into hair cells through open mechanotransduction channels. Because external [K+] is roughly equal to internal [K+], EK is near 0 mV. The negative membrane potential of IHCs “pulls” K+ ions into the cell. The endocochlear potential provides further driving force for influx of K+ ions into the cell. In essence the endocochlear potential is added to the cell’s membrane potential so that the total driving potential across the stereocilia membrane is a whopping –130 mV (the membrane potential, -50 mV, minus the endocochlear potential, +80 mV). A collapse of the endocochlear potential results in sensorineural deafness because of the loss in driving force for transduction.
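
The arithmetic behind the -130 mV figure, as a minimal sketch (the variable names are just for illustration):

```python
# Driving force for K+ entry across the apical (stereocilia) membrane of an IHC
V_MEMBRANE = -50.0         # mV, hair cell membrane potential
ENDOCOCHLEAR_POT = +80.0   # mV, endolymph positive with respect to perilymph
E_K_APICAL = 0.0           # mV, since endolymph [K+] roughly equals intracellular [K+]

# Potential across the apical membrane = membrane potential minus endocochlear potential;
# subtracting E_K (~0 mV) leaves the net driving force on K+.
driving_force = (V_MEMBRANE - ENDOCOCHLEAR_POT) - E_K_APICAL
print(driving_force)       # -130.0 mV, pulling K+ into the hair cell
```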

24
Q

What happens when the mechanotransduction channels on the IHC stereocilia are opened?

A

Since K+ is the major extracellular cation in endolymph, K+ influx underlies hair cell depolarization.

  • K+ moves into hair cells through open mechanotransduction channels
  • Because external [K+] is roughly equal to internal [K+], EK is near 0 mV. The negative membrane potential of IHCs “pulls” K+ ions into the cell.
  • The endocochlear potential provides further driving force for influx of K+ ions into the cell. In essence the endocochlear potential is added to the cell’s membrane potential so that the total driving potential across the stereocilia membrane is a whopping –130 mV (the membrane potential, -50 mV, minus the endocochlear potential, +80 mV).
  • A collapse of the endocochlear potential results in sensorineural deafness because of the loss in driving force for transduction.
25
Q

what is the endocochlear potential?

A
  • A K+-rich fluid called endolymph fills the scala media and bathes the stereocilia on the apical end of hair cells.
  • In contrast, the basal end of the hair cell is bathed by perilymph, a fluid with ionic composition similar to blood (high Na, low K+) that fills the scala vestibuli and scala tympani.
  • Endolymph has high K+ and low Na+ concentration and is the only place in the body that contains such an unusual fluid.
  • The stria vascularis, an epithelium on the side of the scala media, actively pumps K+ into the endolymph, maintaining a high K+ concentration. Active pumping of K+ by the stria vascularis results in a positive potential inside the scala media.
  • This potential, the endocochlear potential, has a magnitude of +80 mV (endolymph positive with respect to perilymph)
26
Q

Outline the steps of sound transduction into electrochemical signals

A

· Airborne pressure waves in the external ear canal set up vibrations of the eardrum
· Eardrum vibrations move the 3 ossicles
· Vibrations of the stapes on oval window set up traveling waves in the cochlear fluids
· These fluid waves cause vertical displacements of the basilar and tectorial membranes
· The relative shearing force between membranes bends the ciliary bundles of the hair cells
· Ciliary bending leads to depolarization and hyperpolarization of the membrane potential,
· Which causes increased and decreased rates of transmitter release, respectively
· Transmitter (aspartate or glutamate) causes depolarization of the afferent auditory nerve fiber
This results in action potentials that are sent to second order neurons in the brainstem.

27
Q

Per cell, are the IHCs or the OHCs more innervated?

A
  • there are more afferent fibers per IHC than OHC. 10-30 type I auditory nerve fibers innervate one IHC
  • There are ~16,000 hair cells, with the ~3,500 IHCs having the responsibility of transducing sound into electrical activity.
  • Our world of sound at one ear is conveyed through just 3,500 neurons (in contrast, there are 1,000,000 photoreceptors in each retina)!
  • 95% of the 30,000 ANFs are Type I and contact the IHCs, while 5% are Type II and contact the OHCs.
  • Thus, ~28,500 Type I ANFs innervate ~3,500 IHCs. This occurs because 10-30 Type I ANFs innervate a single IHC.
  • Only 1,500 Type II ANFs innervate ~12,500 OHCs. This occurs because each Type II ANF innervates ~10 different OHCs.
28
Q

How is the VIIIth cranial nerve divided in its inner ear innervation?

A

The IHCs and OHCs are innervated by the VIIIth nerve, or spiral ganglion (the nerves follow the spiral of the cochlea).

  • These are often called the “auditory” nerve fibers (ANFs).
  • There are two types: Type I ANFs innervate the IHCs and are myelinated and Type II innervate the OHCs and are not myelinated.
29
Q

What do OHCs do?

A

Like IHCs, OHCs bear stereocilia that possess mechanotransducing channels.

  • However, unlike IHCs, OHCs are poorly innervated by afferent ANFs, and are not thought to act as transducers. Rather, the efferent innervation from the central auditory system acts upon OHCs to amplify the movements of the BM.
  • OHCs respond to changes in voltage with a change in length – they are “electromotile”. The change in length is due to the presence of a motor protein, Prestin, which is voltage sensitive.
  • These motor proteins change the length of the hair cell during the response to sound.
  • Because the OHCs are attached to the BM, the change in length of the OHC pulls the BM toward or away from the tectorial membrane, and thus changes the mechanical frequency selectivity of the BM.
  • The OHCs are thought to contribute up to 50 dB of the cochlea’s sensitivity to sound!
30
Q

What property of the basilar membrane is changed by the action of the OHCs?

A
  • Because the OHCs are attached to the BM, the change in length of the OHC pulls the BM toward or away from the tectorial membrane, and thus changes the mechanical frequency selectivity of the BM.
  • The OHCs are thought to contribute up to 50 dB of the cochlea’s sensitivity to sound!
31
Q

What is meant by cochlear amplification?

A
  • Both native OHC function and efferent nerve stimulation will result in the OHC modulating the BM in a frequency-dependent fashion
  • the end result is the amplification (or increasing the intensity) of a pure tone (one frequency) stimulus
  • OHCs are also innervated by efferent neurons called medial olivocochlear neurons (MOC). MOC neurons sense the context of the sound environment (i.e., frequency and intensity) and act as feed-back control devices to change the cochlear sensitivity via the OHCs.
  • The action of the motor proteins of the OHC is frequency-tuned as are the MOC neurons. For example, an OHC located in the region of the BM that responds best to 1000 Hz will undergo the largest changes in length when stimulated with sound of 1000 Hz.
  • Because of this, the OHC enhances the movement of the BM in a frequency-dependent manner, which results in a larger and sharper response of the BM to pure tone sounds
  • The mechanical amplification of the displacement of the BM by OHCs is called the cochlear amplifier.
  • The cochlear amplifier is important clinically because a substantial fraction of cases of sensorineural deafness are due to damage of OHCs. Ototoxic antibiotics, such as streptomycin and gentamycin, can block the transduction channel of the OHCs and, with prolonged action, can kill them resulting in deafness.
  • OHCs are more sensitive to these ototoxic antibiotics than IHCs. In addition, OHCs are more sensitive than IHCs to damage due to prolonged exposure to loud sounds.
  • The active OHCs can also create sounds, the so-called otoacoustic emissions (OAEs). Spontaneous or sound-evoked movements of the OHCs set the BM into motion, which essentially causes the tympanic membrane to act as a loudspeaker. Miniature microphones placed in the ear canal can pick up these faint sounds created in the cochlea.
32
Q

What is clinically significant about the cochlear amplifier?

A
  • The cochlear amplifier is important clinically because a substantial fraction of cases of sensorineural deafness are due to damage of OHCs. Ototoxic antibiotics, such as streptomycin and gentamycin, can block the transduction channel of the OHCs and, with prolonged action, can kill them resulting in deafness.
  • OHCs are more sensitive to these ototoxic antibiotics than IHCs. In addition, OHCs are more sensitive than IHCs to damage due to prolonged exposure to loud sounds.
33
Q

How are both pitch (frequency) and loudness (intensity) encoded by ANFs?

A

ANFs = auditory nerve fibers

  • both are encoded by frequency of action potentials.
  • pitch, however, is more encoded by the tuning of the BM by the OHCs and by the innate properties of the cochlea and BM
  • remember that each IHC is well innervated. Thus, a pure tone activates the IHCs at its characteristic place on the BM (base is high frequency, farther from the base is low frequency), and the identity of the ANFs driven by those IHCs also signals pitch
34
Q

What makes auditory neuropathy strange?

A

*can hear, but can’t really interpret speech
An unusual type of sensorineural hearing disorder (not necessarily accompanied by hearing loss) called auditory neuropathy results from some problem with neural transmission from IHC to ANFs, or in the ANF function itself. Patients present with normal OAEs and even normal tone thresholds. One problem appears to be that ANFs have lost the ability to phase lock, leading to deficits in discriminating or even understanding speech.

35
Q

Trace the pathway of auditory information from the cochlea to the cortex

A
  • ANFs with cell bodies in the spiral ganglion relay signals from the cochlear hair cells to the brain.
  • The axons of these cells, which form the auditory portion of the VIIIth nerve, bifurcate upon entering the brainstem:
  • one branch innervates the ventral portion of the cochlear nucleus, the ventral cochlear nucleus (VCN), and the other the dorsal cochlear nucleus (DCN).
  • Both these nuclei are found on the dorsal and lateral aspects of the inferior cerebellar peduncle.
  • Some axons from cells in the cochlear nucleus cross the midline to the opposite side of the brain in the dorsal acoustic stria (from DCN) and trapezoid body (from VCN).
  • These tracts regroup as the lateral lemniscus and ascend to the inferior colliculus of the midbrain.
  • Along the way many axons terminate in various nuclear complexes in the pons: the several nuclei comprising the superior olivary complex and the nuclei of the lateral lemniscus. The medial olivocochlear neurons (MOC) which feed back to the OHCs are located in the superior olivary complex. Projections from these stations join the lateral lemniscus enroute to the inferior colliculus.
  • Other axons from cells in the cochlear nucleus join the lateral lemniscus ipsilaterally and terminate in the inferior colliculus.
  • from the inferior colliculus it goes to the (bilateral) medial geniculate (thalamus)
  • from thalamus to the primary auditory cortex (or A1), part of the superior temporal gyrus, via the auditory radiations.
36
Q

What is a clinical consequence of the fact that the cochlear nucleus will send up fibers to the inferior colliculus through the lateral lemniscus from all over the place and from both sides?

A
  • above the cochlear nucleus, a one-sided lesion would not produce unilateral deafness
  • below or through the CN, however, WOULD cause unilateral deafness
  • a medial/central lesion has a greater effect on sound localization
  • Because some axons from cells in the cochlear nucleus join the ipsilateral lateral lemniscus while others join the contralateral lateral lemniscus, unilateral lesions rostral to the cochlear nuclei do not produce unilateral deafness, whereas lesions caudal to and including the CN produce unilateral deafness. Lesions central to the CN are often accompanied by deficits in sound source localization.
37
Q

What information does the inferior colliculus receive from where?

A
  • The inferior colliculus, which receives both direct projections from the cochlear nuclei and multisynaptic input from the pontine nuclei (the nuclei of the superior olivary complex), is an obligatory relay and integration center for ascending auditory information.
  • From the inferior colliculus fibers project mainly to the ipsilateral medial geniculate in the thalamus, but also to the contralateral inferior colliculus and medial geniculate.
  • The medial geniculate sends projections to primary auditory cortex (or A1), part of the superior temporal gyrus, via the auditory radiations.
  • Specific regions of auditory cortex are linked by association fibers (on the same side) and via the anterior commissure (for regions on the opposite side of the brain).
  • Finally, there is good preservation of the tonotopic organization of the cochlea throughout all levels of this pathway, right up to the level of the primary auditory cortex, where a cochleo-topic (tonotopic) map is found on the cortical surface.
38
Q

How is sound differentiated in the brain?

A
  • specific spectral and temporal patterns
  • It is the spectral (the pattern of sound frequencies and their intensities) and temporal (the pattern of the relative times of the sound frequencies) properties of the sound that allows us to discriminate between and attach meaning to speech sounds.
39
Q

Because sound location, unlike touch location, is not mapped directly onto the receptor surface, what must happen to determine a sound’s location?

A

*sound location must be computed centrally in the auditory system based on the neural representations of the spectral and temporal characteristics of the acoustic stimuli arriving at the two ears

40
Q

What three acoustic cues go into encoding sound localization?

A
  • Interaural Time Delays
  • Interaural Level Differences
  • Monaural spectral shape
41
Q

What’s up with interaural time delays?

A

the fact that the two ears are separated by the head means sound will hit each ear at different times depending on where the sound originated
*the brain processes the difference in sensation timing to help localize a sound

42
Q

What causes interaural level differences?

A

The fact that the two ears are separated by an obstacle, the head, results in the ILD cue to sound location.

  • For sounds of high frequency (or short wavelength) the head essentially creates an “acoustic shadow” for the far ear as sounds with wavelengths on the order of the diameter of the head and smaller are reflected off the near side of the head.
  • Consequently, the resulting sound arriving at the ear farthest from the source is effectively attenuated relative to that arriving at the near ear thereby creating direction-dependent differences in the amplitudes, or levels, of the sounds that reach the two ears.
  • As expected, ILDs are small in magnitude for low frequency sounds and increase in magnitude for high-frequency sounds.
  • Therefore, ILDs are primarily useful for localization of high frequencies.
43
Q

What is the monaural spectral shape the most useful for?

A
  • distinguishing where a sound source is in space in terms of elevation (below, above)
  • The ITD and ILD cues are not useful for sound localization in elevation because their values change little with variations in source elevation.
  • The monaural spectral shape cues, however, do change systematically with source elevation. Spectral shape cues arise from direction- and frequency-dependent reflection and diffraction of the pressure waveforms of sounds by the pinna, which result in broadband spectral patterns, or shapes, that change with location. These spectral cues are used both for localization in elevation and for distinguishing sound sources in front of an observer from those behind.
  • Spectral cues are created primarily for high-frequency sounds whose wavelengths are on the order of the dimensions of the pinna and its convolutions.
44
Q

How are the ITDs, ILDs and the monaural spectral shape cues each affected in presbycusis?

A

People with hearing impairments, including presbycusis (age-dependent high-frequency hearing loss), experience difficulty in localizing sounds, particularly in elevation. *Localization impairments are also common for those using hearing aids and cochlear implants.
*Patients with auditory neuropathy cannot use ITDs for sound localization because ITD coding relies on phase-locking, but can use ILDs, which use rate coding.

45
Q

How can patients with auditory neuropathy still somewhat localize sound?

A

*Patients with auditory neuropathy cannot use ITDs for sound localization because ITD coding relies on phase-locking, but can use ILDs, which use rate coding.

46
Q

The medial superior olive and the lateral superior olive process what components of sound?

A

Medial superior olive (MSO) - responsible for the ITD (interaural time difference) response
*thus MSO is based on phase locking and works better with lower frequencies
Lateral superior olive (LSO) - responsible for the ILD (interaural level difference) response
*thus LSO neurons are geared towards higher frequency sounds

47
Q

Trace the MSO response to ITDs and the signal relay mechanism.

A

1) Afferent inputs to MSO neurons from ANFs and AVCN (anteroventral cochlear nucleus) cells carry timing information in the form of phase-locked neural responses. Phase locking is the mechanism by which the peripheral auditory system keeps track of the times of occurrence of the ongoing amplitude fluctuations in sounds.
2) MSO neurons behave like coincidence detectors, responding maximally only when action potentials from the AVCN cells from the left ear and the right ear arrive nearly simultaneously at the MSO neuron.
3) The axons of the AVCN cell inputs to MSO form delay lines due to differences in the neural path length from the AVCN to MSO. Differences in neural path length result in differences in neural conduction times to the MSO, which are then offset by a physical ITD cue.
* Since the projections to the contralateral MSO have a naturally longer path length than those to the ipsilateral MSO, MSO neurons encode ITDs produced by sounds on the contralateral side of the body, where the ITD can compensate for the longer conduction time from the contralateral ear. In addition, different MSO neurons have different-length delay-line inputs, so different MSO neurons will be maximally sensitive to different ITD values. MSO neurons are sensitive to predominantly low sound frequencies.
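
A toy sketch of the delay-line / coincidence-detection idea described above (all delay values are hypothetical, purely for illustration):

```python
# An MSO neuron fires best when spikes from the two ears arrive simultaneously.
# A longer axonal path from the contralateral AVCN adds an internal conduction delay
# that can cancel the external ITD produced by a sound on the contralateral side.

def mso_arrival_mismatch_us(itd_us, extra_contra_delay_us):
    """Difference in arrival times at the MSO (0 = coincidence = maximal firing)."""
    ipsi_arrival = 0.0                                 # reference arrival time
    contra_arrival = -itd_us + extra_contra_delay_us   # sound leads at the contra ear by itd_us
    return contra_arrival - ipsi_arrival

# A neuron whose contralateral input carries an extra 300 us of conduction delay
# is "tuned" to a 300 us ITD -- the mismatch is zero only there:
for itd in (0, 150, 300, 450):
    print(itd, mso_arrival_mismatch_us(itd, extra_contra_delay_us=300))
```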

48
Q

The cells that are responsible for recognizing and relaying ITDs are on what side of the brainstem relative to the sound?

A

*low-frequency sensitive neurons in the (dorsal nucleus of lateral lemniscus) DNLL and IC encode the ITDs of sounds primarily on the contralateral side of the body.

49
Q

Describe the encoding of ILDs and the pathway of signal transmission

A

Lateral superior olive (LSO) neurons also receive afferent inputs from both ears.

  • The input from the ipsilateral ear is conveyed via ANF synapses with AVCN cells, which then send an excitatory projection to the ipsilateral LSO.
  • The afferent input from the contralateral ear also comes from AVCN but in this case the cells project across the midline to the neurons of the medial nucleus of the trapezoid body (MNTB).
  • The AVCN cell synapse onto MNTB neurons forms the massive calyx of Held, the largest synapse in the entire CNS; the large size of this unique synapse results in very fast and secure synaptic transmission.
  • MNTB neurons are glycinergic, so their projection to the LSO has an inhibitory effect. In contrast to the MSO, both the MNTB and the LSO are biased predominantly to neurons that are sensitive to high sound frequencies where the ILD cues themselves are prominent.
  • Because of the interplay of the ipsilateral excitation and contralateral inhibition, LSO neurons essentially compute the difference between the neural representations of the intensity of the sounds present at the two ears. Hence, LSO neurons encode ILDs. The sound intensity at each ear is encoded by a rate code, in that the number of action potentials is proportional to the SPL of the sound at the ear.
  • Because LSO neurons are excited by sounds at the ipsilateral ear but inhibited by sounds at the contralateral ear, they will respond best to ipsilateral sound sources that produce ILDs favoring the excitatory ear. In order to achieve the contralateral representation of space (a basic feature of sensory and motor areas of the brain) LSO neurons of the ventromedial LSO send excitatory projections to the contralateral DNLL and IC. As a result DNLL and IC neurons respond best to contralateral sound sources that produce ILDs favoring the contralateral ear.
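
A minimal sketch of the ipsilateral-excitation / contralateral-inhibition computation described above (the firing-rate numbers are made up for illustration):

```python
# LSO output reflects the difference between the rate-coded sound intensities at the
# two ears: excitation from the ipsilateral AVCN minus glycinergic inhibition relayed
# from the contralateral AVCN via the MNTB.

def lso_response_hz(rate_ipsi_hz, rate_contra_hz):
    """Ipsilateral excitation minus contralateral (MNTB-mediated) inhibition."""
    return max(0.0, rate_ipsi_hz - rate_contra_hz)

print(lso_response_hz(200, 50))   # sound louder at the ipsilateral ear -> strong response
print(lso_response_hz(50, 200))   # sound louder at the contralateral ear -> response suppressed
```
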
50
Q

For ILDs and ITDs, the DNLL and IC neurons respond best to sound sources on what side of the body?

A

CONTRALATERAL for both modalities.

  • ILDs are conveyed in a switching pattern early on and end up being conveyed largely to the contralateral inferior colliculus
  • ITDs are processed early on and relayed to the contralateral IC as well
51
Q

What are the symptoms expected from a unilateral lesion to the inferior colliculus?

A
  • the cues to location, encoded in the MSO, LSO, and DCN, reconverge at the contralateral IC
  • unilateral lesions in the IC or above result in deficits in sound source localization for sources contralateral to the lesion. And because the IC is heavily innervated by neurons from both ears, unilateral lesions at the IC or more central do not result in unilateral deafness.
52
Q

After the inferior colliculus, where does auditory sensory information go?

A

The IC then sends primarily excitatory projections ipsilaterally to the auditory portions of the thalamus, the medial geniculate body (MGB)
*Like the rest of the ascending auditory system, the MGB is tonotopically organized.
*Interestingly, there are large projections from the MGB to the amygdala, and studies have shown that the MGB is critical for auditory fear conditioning.
*The MGB then projects to the auditory cortical areas, located in the superior temporal gyrus
The primary auditory cortex (A1) (Brodmann’s area 41) is arranged in a tonotopic map, with neurons responding to lower frequencies located anteriorly and neurons responding to higher frequencies located more posteriorly.
*A1 is located deep in the lateral sulcus and is surrounded by the secondary auditory cortex (AII) (Brodmann’s area 42).

53
Q

Damage to Wernicke’s area produces what?

A

These secondary areas surrounding A1 include Wernicke’s Area, an important cortical area for understanding and processing spoken language.
*Damage to Wernicke’s Area leads to Wernicke’s aphasia, which presents as a general impairment in language comprehension, but not necessarily language production.
