Wednesday, December 15, 2021

The Importance of Phonemes in Language

    Language defines many aspects of our lives: where we live, what we do, and what we want others to know, believe, or think. Ultimately, for us, language is a way of communicating, while for other species certain sounds might signal warning or danger. Language is very complex, and it is shaped by our mental lexicon, with word forms playing the most important role. The two studies, Phonetic Acquisition in Cortical Dynamics, a Computational Approach and A Hierarchical Sparse Coding Model Predicts Acoustic Feature Encoding in Both Auditory Midbrain and Cortex, both emphasize the importance of phonemes, the smallest units of sound that alter a word's meaning. Both studies investigate different aspects of language to illustrate how complex it is and how little we still know about its different pathways.

    The first study I will be discussing is Phonetic Acquisition in Cortical Dynamics, a Computational Approach by Dario Dematties, Silvio Rizzi, George K. Thiruvathukal, Alejandro Wainselboim, and B. Silvano Zanutto. The study monitored word classification, using an existing support vector machine model, in environments with "white noise, reverberation and pitch and voice variation" (pg. 1). The researchers ultimately wanted to find a way to mimic the early "phonetic acquisition in humans" (pg. 22) that is accomplished by infants. They emphasize that language is a top-down process in which phonemes play a special role. Phonemes, the smallest units of sound that make a difference in meaning, are part of the mental lexicon, where they play an important role in word formation and in understanding how words are categorized. They can also be influenced by the "grammatical and semantic dimensions present in the human language" (pg. 2). The study also mentions that the mammalian cortex processes information robustly, with low error in discriminating the stimuli presented. The researchers state that their focus is the temporal dynamics of speech and linguistic constructs, which allows them to use their model for phonetic classification based on the structural and functional properties observed in the mammalian cortex. The computational model they used was the CSTM, which they state "simulates a patch of cortical tissue and incorporates a columnar organization, spontaneous micro-columnar formation and partial depolarization and adaptation of the contextual activities" (pg. 6). Ultimately, the results supported the hypothesis that the computational model can mimic phonetic invariances and generalizations.
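    I cannot reproduce the authors' full pipeline here, but a minimal sketch in Python can convey the basic idea of testing a support vector machine classifier on features corrupted by white noise. This is only my assumption of how such a robustness test might look, using scikit-learn (not a tool the paper names), and the "phoneme-like" feature vectors below are synthetic stand-ins rather than real acoustic data.

# Illustrative sketch only (not the authors' pipeline): train an SVM on synthetic
# "phoneme-like" feature vectors, then test it on copies corrupted with additive
# white noise, mimicking the idea of evaluating classification under noisy conditions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for acoustic feature vectors from 3 phoneme classes.
n_per_class, n_features = 200, 40
centers = rng.normal(0, 3, size=(3, n_features))
X = np.vstack([c + rng.normal(0, 1, size=(n_per_class, n_features)) for c in centers])
y = np.repeat(np.arange(3), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)

# Evaluate robustness: add white noise of increasing strength to the test set.
for noise_std in (0.0, 0.5, 1.0, 2.0):
    X_noisy = X_test + rng.normal(0, noise_std, size=X_test.shape)
    print(noise_std, accuracy_score(y_test, clf.predict(X_noisy)))

    In the actual study the SVM classifies representations produced by the CSTM from real speech, so this toy example only illustrates the noise-robustness test, not the cortical model itself.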

    The second study I will be focusing on is A Hierarchical Sparse Coding Model Predicts Acoustic Feature Encoding in Both Auditory Midbrain and Cortex by Qingtian Zhang, Xiaolin Hu, Bo Hong, and Bo Zhang. The researchers wanted to address how speech ultimately enters the ears, specifically the encoding mechanisms underlying the neural responses. To do that, they created a hierarchical sparse coding model. Similarly to the first study, they mention that hearing is understood through a series of auditory processing stages in which "phonetic features, phonemes, syllables, words, and grammatical features" (pg. 2) are all encoded hierarchically. They want to address the lack of explanation in the experiments that have already been done, specifically the experimental outcomes that have not yet been accounted for, with a focus on spatial coding and temporal coding. They mention that two earlier models were created to explain certain neural responses at different stages of the auditory pathway; however, their "computational principles are different" (pg. 2), and as a result we need to find out what exactly along the auditory pathway predicts the encoding. In the end, they were able to demonstrate that their computational model can predict the neural properties found along the auditory pathway, which in turn demonstrated the sparse and hierarchical coding found within the auditory system.
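    To make "hierarchical sparse coding" more concrete, here is a minimal sketch in Python, again assuming scikit-learn rather than the authors' own code, that stacks two sparse dictionary-learning stages on synthetic spectrogram-like patches. The data, stage sizes, and parameters are all illustrative assumptions, not values from the paper.

# Illustrative sketch only (not the authors' model): two stacked sparse coding
# stages learned on synthetic spectrogram-like patches, to convey the idea of
# hierarchical sparse coding along an auditory pathway.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Synthetic stand-in for small time-frequency patches (e.g. 8x8), flattened.
patches = rng.normal(size=(300, 64))

# Stage 1: learn a dictionary and sparse codes for the raw patches
# (roughly analogous to an earlier station such as the auditory midbrain).
stage1 = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                            transform_algorithm="lasso_lars", random_state=0)
codes1 = stage1.fit_transform(patches)

# Stage 2: learn a second dictionary on the stage-1 codes
# (roughly analogous to a later, cortical stage building on earlier features).
stage2 = DictionaryLearning(n_components=16, alpha=1.0, max_iter=20,
                            transform_algorithm="lasso_lars", random_state=0)
codes2 = stage2.fit_transform(codes1)

print(codes1.shape, codes2.shape)   # (300, 32) (300, 16)
print((codes2 == 0).mean())         # fraction of zero (sparse) coefficients

    In the study itself the stages are fit to real sound input and then compared against recorded responses in the auditory midbrain and cortex; this sketch only shows the stacking-and-sparsity idea.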

    Both studies emphasize the difficulties that arise when trying to find a model to understand how language is perceived within ourselves. While the first study focuses on phonetic acquisition in humans and the second focuses on the hierarchical processing stages of the auditory pathway, both put into question what systems the brain has in place for us to understand what is being said and the way it is being said.


Works Cited

Dematties, Dario, et al. “Phonetic Acquisition in Cortical Dynamics, a Computational Approach.” PLOS ONE, vol. 14, no. 6, 2019, doi:10.1371/journal.pone.0217966.

Zhang, Qingtian, et al. “A Hierarchical Sparse Coding Model Predicts Acoustic Feature Encoding in Both Auditory Midbrain and Cortex.” PLOS Computational Biology, vol. 15, no. 2, 2019, doi:10.1371/journal.pcbi.1006766.
