Friday, October 13, 2023

The Effect of Co-Speech Gesture on Overall Language Comprehension

    Understanding language is a complex task with many facets. Whether it be wagging a finger in disapproval or playing charades, gestures are key to our overall understanding of language, serving both as aids to speech and as forms of communication in their own right. A pro-speech gesture is performed in place of a word, a co-speech gesture is performed in tandem with speech, and a post-speech gesture modifies a word spoken before it; all of these forms of gesture have been shown to assist in learning and understanding language.


    In a study by Tieu et al. titled "Linguistic inferences without words", the research team aimed to demonstrate the power of co-speech gestures, along with other forms of visual aid, to trigger the linguistic inferences that underlie language comprehension. This was done through a series of experiments, each involving about 100 Amazon Mechanical Turk workers: one involving gesture and one involving visual animation, both measuring how strongly the videos presented guided inferences that appeared in text below them. The two experiments produced similar and interesting results. For the gesture experiment, the researchers concluded that the gestured information was integrated and stored alongside speech information, possibly showing that iconic information behaves like ordinary speech and thus assists in comprehension and retention. The animation experiment gave similar results but supported a very different hypothesis: content that had never before been encountered in a linguistic context was still divided into the same inference types as the gestural information, suggesting that presuppositions are likely not generated via lexical learning but can instead be generated spontaneously "on the fly". However, such inference generation is usually thought to be language-specific in many cases, so more research on co-speech gestures and multilingual understanding remains to be done.


    In a similar study by Zielinski and Wakefield titled "Language Proficiency Impacts the Benefits of Co-Speech Gesture for Narrative Understanding Through a Visual Attention Mechanism", the researchers focused on the ability of co-speech gestures to aid language comprehension in bilingual children. They ran their experiment with a group of 17 English-Polish bilingual children, aged six to eight, who were presented with a series of story videos in both English and Polish and asked to recall as much information as possible from the stories. The storyteller in the videos accompanied some of the information with co-speech gestures. Overall, information recall was much weaker when the story was presented in Polish rather than English, but recall of events in the Polish videos was strengthened when co-speech gestures were present. In other words, co-speech gesture was more helpful in Polish than in English, suggesting that it may be especially beneficial in a speaker's weaker language. This experiment once again illustrates the aid that co-speech gesture can provide in the comprehension of spoken language.


    Both of these experiments provide evidence of the effectiveness and value of co-speech gesture in overall language comprehension. The first shows that not only co-speech gestures but also visual aids not paired with speech can help in understanding language, highlighting the multimodality of language comprehension as a whole. The second deepens this idea by examining gesture's effectiveness while actively learning a second language. Together, these experiments offer substantial insight into a fascinating topic, since language is so fundamental to our world's functioning, and they should encourage others to research it in more depth.


 Sources:

Neuroscience News. (2019, April 26). Meaning without words: Gestures and visual animations reveal cognitive origins of linguistic meaning. https://neurosciencenews.com/gestures-visual-linguistics-12063/

Schlenker, P. (2019). Gestural semantics. Natural Language & Linguistic Theory, 37(2), 735–784. https://doi.org/10.1007/s11049-018-9414-3

Tieu, L., Schlenker, P., & Chemla, E. (2019). Linguistic inferences without words. Proceedings of the National Academy of Sciences, 116(20), 9796–9801. https://doi.org/10.1073/pnas.1821018116

Zielinski, N., & Wakefield, E. (n.d.). Language Proficiency Impacts the Benefits of Co-Speech Gesture for Narrative Understanding Through a Visual Attention Mechanism.
