Auditory Signals Translated to Speech
In his article “Morphological and physiological development of auditory synapses,” Dr.
Wei-Ming Yu described the functions of auditory synapses throughout their development. The
ribbon synapse supports fast, precise, and sustained signaling, while the endbulb of Held
supports the precise timing needed for interaural sound localization and, by extension, speech
perception. In his presentation “Genetic Dissection of the Auditory Neural Circuits,” Dr. Yu took these
observations further by investigating their genetic origins. He found that the transcription
factor c-Maf shapes the firing properties of primary auditory neurons, especially during their
differentiation. He demonstrated this in mice by using a cell-type-specific gene knockout to
remove the c-Maf gene and then recording auditory brainstem responses, which capture
sound-evoked electrical activity in the brain. The c-Maf mutants, the mice lacking the gene,
showed severe hearing impairment, including delayed auditory responses. This provides
evidence that c-Maf is essential for the transmission of sound information to the central nervous
system. Understanding how auditory synapses develop is important for recognizing their
roles in speech perception and production.
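As a rough sketch of how such an auditory brainstem response (ABR) comparison might be summarized, the Python snippet below compares response latencies between control and knockout mice. The latency values are invented purely for illustration; a real ABR analysis averages thousands of sound-evoked sweeps and identifies individual waveform peaks.

```python
import numpy as np

# Hypothetical ABR wave latencies in milliseconds, invented for
# illustration; real values come from averaged sound-evoked recordings.
wild_type = np.array([1.5, 1.6, 1.4, 1.5, 1.6])  # control mice
c_maf_ko = np.array([2.1, 2.3, 2.2, 2.0, 2.4])   # c-Maf knockout mice

# A longer latency in the knockouts would indicate delayed transmission
# of sound information toward the central nervous system.
delay = c_maf_ko.mean() - wild_type.mean()
print(f"Mean response delay in c-Maf knockouts: {delay:.2f} ms")
```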
Audition and speech are closely linked: translating what we hear into coherent speech
depends on the brain's signals. Because speech comes fluidly to most of us, it is easy to
take the process for granted. Yet many people suffer from speech impairments due to injuries
such as stroke, or to neurodegenerative disorders like amyotrophic lateral sclerosis (ALS).
According to an article in The New York Times, researchers are developing ways to decode the
brain's signals directly into speech using a virtual prosthetic voice. The developing system
uses motor commands to create speech that matches the individual's natural speaking rhythm.
So far the system has been tested only on people with normal speech; the scientists evaluated
its functionality in patients undergoing surgery for epilepsy. These patients, whose seizures
often respond poorly to medication, frequently opt for brain surgery. Before operating,
doctors must locate the spot in the patient's brain where seizures originate by tracking
electrical storms with electrodes implanted in the brain. These electrodes sit over regions
of the brain responsible for auditory processing as well as movement. Since this
monitoring can take a long time, research studies are often conducted on the patients at the same time.
The researchers used these electrodes to record firing patterns from the motor
cortex, correlated those patterns with the patients' facial movements, and translated the
movements into speech. They found that by decoding the brain signals associated with speech,
they could produce more natural-sounding and accurate speech simulations. With further research,
scientists hope to give individuals with speech impairments a more natural and efficient way
to communicate and interact with the world.
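To make the decoding idea concrete, here is a minimal sketch of the general approach, assuming simulated data and a simple linear decoder; the actual system described in the article is far more sophisticated. It fits a map from recorded neural features to acoustic features, which a speech synthesizer would then render as audible sound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins, invented for illustration: 64-channel motor-cortex
# features and 32-dimensional acoustic features over 200 time steps.
neural = rng.standard_normal((200, 64))
true_map = rng.standard_normal((64, 32))
acoustic = neural @ true_map + 0.1 * rng.standard_normal((200, 32))

# Fit a linear decoder from neural activity to acoustic features on the
# first 150 time steps; a real system would use a far richer model.
weights, *_ = np.linalg.lstsq(neural[:150], acoustic[:150], rcond=None)

# Decode the held-out time steps into predicted acoustic features, which
# a speech synthesizer would then render as audible speech.
predicted = neural[150:] @ weights
error = np.mean((predicted - acoustic[150:]) ** 2)
print(f"Held-out mean squared error: {error:.4f}")
```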
“Scientists Create Speech from Brain Signals”
https://www.nytimes.com/2019/04/24/health/artificial-speech-brain-injury.html
“Morphological and physiological development of auditory synapses” & “Genetic Dissection of
the Auditory Neural Circuits”, Dr. Wei-Ming Yu