Dr. Toby Dye's lecture on auditory information processing provided insightful evidence about the discrimination of interaural differences of time and level. It is known that auditory nerve fibers behave differently in listeners with impaired hearing; however, much of this vital research is conducted in quiet laboratories. Are there differences in how individuals with impaired hearing perceive auditory cues in noisier environments? If so, what are these differences?
Dr. Dye's talk began by laying out the foundational knowledge needed to understand the implications of his research. We learned that, because there is distance between the two ears, a sound wave must travel farther to reach the ear that is farther from the source. From this, interaural differences of time (IDTs) arise, and an IDT can be estimated by taking the difference in path length between the two ears and dividing it by the speed of sound in air (the Woodworth formula).
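As a concrete illustration, here is a minimal sketch of that calculation in Python. The spherical-head path-length term a(theta + sin theta), the 8.75 cm head radius, and the example azimuth are standard textbook assumptions used for illustration, not values taken from the lecture:

```python
import math

SPEED_OF_SOUND = 343.0  # speed of sound in air (m/s), at roughly 20 degrees C

def woodworth_idt(azimuth_deg, head_radius=0.0875):
    """Estimate the interaural difference of time (IDT) in seconds.

    Uses the Woodworth approximation: the extra path length to the far
    ear around a spherical head of radius `head_radius` (meters) is
    a * (theta + sin(theta)) for a source at azimuth theta (radians).
    """
    theta = math.radians(azimuth_deg)
    extra_path = head_radius * (theta + math.sin(theta))  # meters
    return extra_path / SPEED_OF_SOUND  # seconds

# A source 90 degrees to one side yields an IDT of roughly 650 microseconds.
print(f"{woodworth_idt(90) * 1e6:.0f} microseconds")
```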
There are also interaural differences of level (IDLs), which arise because the head is a solid object between the two ears, creating a "sound shadow" at the distal ear. In his previous research, Dr. Dye found that threshold IDLs of 1-2 dB for 753 Hz tones in isolation are elevated to 6-9 dB by the presence of diotic 553 and 953 Hz components. A possible cause of this threshold elevation is that the auditory system averages binaural information from non-informative distractors, so that a single fused intracranial "image" is heard. To further assess the extent to which
listeners could segregate informative cues (targets) from non-informative cues
(distractors), Dye and his colleagues created a new task, termed SALT
(synthetic/analytic listening task). Analytic listening means judging the target's interaural differences independently of the distractor's interaural differences; it requires listeners to form separate intracranial images of the target and the distractor and to maintain information about the spectral composition of each. Synthetic listening, by contrast, is indicated when judgments about the target are influenced by the distractor's interaural values; such listeners fail to form separate images and fail to maintain the compositional information of each. From the collected data, the relative weights that listeners gave to information from the target and from the distractor could be estimated. Dye and his colleagues found that most participants appeared to weight the higher-frequency component more heavily than the lower one, regardless of which was designated as the target (a toy version of this weighting analysis is sketched below).
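As a rough illustration of how such weights can be estimated, here is a small simulation in Python. Regressing trial-by-trial responses on the per-component cue values is the general idea behind observer-weighting analyses, but the trial count, cue distributions, and weight values below are invented, not taken from Dye's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 5000

# Per-trial interaural level values (dB) for target and distractor components.
target_idl = rng.normal(0.0, 2.0, n_trials)
distractor_idl = rng.normal(0.0, 2.0, n_trials)

# Hypothetical listener: decision based on a weighted sum plus internal noise.
true_weights = np.array([0.3, 0.7])  # e.g., heavier weight on the distractor
decision_var = true_weights[0] * target_idl + true_weights[1] * distractor_idl
responses = np.sign(decision_var + rng.normal(0.0, 1.0, n_trials))  # +1 = "right"

# Recover relative weights by regressing responses on the two cues; the ratio
# of the fitted coefficients approximates the ratio of the decision weights.
X = np.column_stack([target_idl, distractor_idl])
coef, *_ = np.linalg.lstsq(X, responses, rcond=None)
relative = coef / coef.sum()
print(f"estimated weights: target={relative[0]:.2f}, distractor={relative[1]:.2f}")
```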
However, there were considerable individual differences. As a consequence, they conducted another study asking whether being "analytic" was restricted to binaural tasks. To determine IDLs, the listener was presented with three intervals: the first presented the target frequency, the second presented the standard level, and the third presented the tone to be judged. Listeners reported whether the third tone was higher or lower in level than the standard. A synthetic condition was also run, in which the listener indicated whether the composite level increased or decreased between intervals two and three. It was found that listeners who could adjust their weights in one task were also able to adjust them in the other. It is widely accepted that IDTs are extracted by a process of neural cross-correlation carried out between the outputs of matched frequency channels. The fact that many participants gave heavy weight to the high-frequency component even in the monaural level-discrimination task challenges this widely accepted cross-correlation model, since weighting that appears in a monaural task cannot arise solely within a binaural comparison stage.
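The cross-correlation idea itself is easy to demonstrate in code. Below is a minimal sketch assuming a single matched frequency channel and ignoring cochlear filtering and neural noise; the 753 Hz tone and the 500-microsecond delay are illustrative values, not stimuli from the studies discussed:

```python
import numpy as np

fs = 48000  # sample rate (Hz)
true_delay_s = 500e-6  # a 500-microsecond interaural delay (illustrative)

# A 753 Hz tone at each ear, with the right ear delayed relative to the left.
t = np.arange(0, 0.05, 1 / fs)
left = np.sin(2 * np.pi * 753 * t)
right = np.sin(2 * np.pi * 753 * (t - true_delay_s))

# Cross-correlate and take the best-matching lag, restricted to the
# physiologically plausible range (about +/-700 microseconds for a human head).
max_lag = int(700e-6 * fs)
lags = np.arange(-max_lag, max_lag + 1)
corr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
estimated = lags[int(np.argmax(corr))] / fs
print(f"estimated IDT: {estimated * 1e6:.0f} microseconds")
```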
Dye and his colleagues made these discoveries by questioning generally accepted concepts. Henry and Heinz likewise made an interesting discovery by taking a commonly accepted view one step further. In the article "Hearing Impaired Ears Hear Differently in Noisy Environments," Henry and Heinz set out to test how individuals with impaired hearing differ in auditory perception once noise is factored in. Most auditory information processing experiments take place in a quiet laboratory, but what about real life, where there is noise in the background? How do individuals with impaired hearing hear under those conditions? Henry and Heinz pursued these questions and found a physical difference in the way auditory nerve fibers process information.
In this study, hearing impairment was defined as damage to the sensory cells of the cochlea as well as to the cochlear neurons. Chinchillas were used because, according to Henry and Heinz, they have a range of hearing similar to that of humans. Chinchillas with impaired hearing were compared to normal-hearing chinchillas in a noisy setting meant to represent what one would hear in a crowded room. What the study found was surprising. In a quiet environment, there was essentially no difference between the two groups in how the cochlear neurons processed information. The differences emerged in the noisy setting, where the researchers found that hearing impairment reduced the coding of temporal fine structure. In other words, the impaired fibers were not as well synchronized to the stimulus in noise as they were in quiet; the auditory nerve appeared to be distracted by the noise.
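One common way to quantify "not as synchronized" is vector strength, which measures how tightly spike times lock to the phase of the stimulus. The following toy sketch uses invented spike-timing jitter values, not data from Henry and Heinz, to show how the metric separates precise from degraded timing:

```python
import numpy as np

rng = np.random.default_rng(1)
freq = 753.0  # stimulus frequency (Hz); illustrative
period = 1.0 / freq

def vector_strength(spike_times, freq):
    """Vector strength: 1 = perfect phase locking, 0 = no synchrony."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

def simulate_spikes(n_spikes, timing_jitter_s):
    """Spikes locked to a preferred phase of the tone, with Gaussian jitter."""
    cycle_starts = rng.integers(0, 1000, n_spikes) * period
    return cycle_starts + rng.normal(0.0, timing_jitter_s, n_spikes)

# Hypothetical fibers: precise timing in quiet, degraded timing in noise.
quiet = simulate_spikes(2000, timing_jitter_s=50e-6)
noisy = simulate_spikes(2000, timing_jitter_s=250e-6)
print(f"vector strength in quiet: {vector_strength(quiet, freq):.2f}")
print(f"vector strength in noise: {vector_strength(noisy, freq):.2f}")
```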
This experiment demonstrates the importance of testing not only in quiet environments but also in more realistic, noisy ones, in order to gain a fuller picture of how the auditory system actually processes sound. A great deal of research is conducted to learn more about the auditory system; by taking these discoveries and retesting them in realistic, noisy environments, we have the potential to learn much more about exactly how the auditory system transmits sounds to our brain. In both experiments, the researchers started from the Jeffress model of cross-correlation: Dye and his colleagues found individual differences in how listeners weight IDT information, and Henry and Heinz found that hearing impairment degrades the coding of temporal fine structure in noise. Both studies suggest that how well the cross-correlation model of auditory processing holds can vary, depending on factors like these.
Dye, R., Stellmack, M., and Jurcin, N. (2005). "Observer weighting strategies in interaural time-difference discrimination and monaural level discrimination for a multi-tone complex." Journal of the Acoustical Society of America, 117(5), 3079-3090.
"Hearing Impaired Ears Hear Differently in Noisy Environments" (2012). ScienceDaily. http://www.sciencedaily.com/releases/2012/09/120911151934.htm