Tuesday, May 3, 2022

Pointing to an Answer: The Neuroscientific Basis of Gesture-Based Learning

    Dr. Wakefield and her co-authors of the study "Learning math by hand: The neural effects of gesture-based instruction in 8-year-old children" hypothesized that individuals who learn through both speech and gesture would show, via fMRI, a neurobiological mechanism similar to the one observed in past data for individuals who learn through action. This mechanism, they predicted, would differ substantially from the one observed in individuals who learn through speech alone. The basis of this proposed difference is that action- and gesture-based learning recruits both sensory and motor regions of the brain, reinforcing the learned information by forming neural networks that are subsequently reactivated when the information is recalled. Speech-based learning, which does not engage the learner in the same way and thus permits more passive learning, does not produce this reinforcement to the same degree. The participants of the study were seven- to nine-year-old children at the same initial level of understanding of mathematical equivalence. These children were divided into an experimental group (taught to solve a mathematical equivalence problem via speech and gesture) and a control group (given the same instruction, only without gesture). All participants who demonstrated an adequate understanding of equivalence on a follow-up test then underwent an fMRI session. As hypothesized, action- and gesture-based learning were found to share a common mechanism, the activation of frontal-parietal motor processing regions of the brain, distinct from the mechanism underlying speech-based learning.

    In another recent study, "Instructed Hand Movements Affect Students’ Learning of an Abstract Concept From Video," Yunyi Zhang and her research group at UCLA explored the extent to which directed gestures, performed while watching an instructional video, affect individuals' understanding of statistical modeling. The study consisted of a control group shown the video with no accompanying gesture, a "content-match" group shown the video with a gesture mirroring the orientation of the presented data, and a "content-mismatch" group shown the video with a gesture contrary to the orientation of the presented data. Participants were told that the focus of the study was multitasking, not gesture-based learning, so as to give the impression that their gestures were not directly associated with the information they were learning. Participants in the content-match group demonstrated a better understanding of the concept than participants in both the control and content-mismatch groups, suggesting that a gesture aids understanding of a concept only insofar as the specific orientation of the gesture reinforces the concept in question. In addition, participants in both the content-match and content-mismatch groups reported upward trends in their understanding with each successive viewing of the video, whereas participants in the control group reported a decrease in understanding after their third viewing; this suggests that the secondary hand-placement task kept participants engaged in their primary video-watching task, while participants in the control group, with no secondary task, grew disengaged.

    Taken together, Zhang's and Dr. Wakefield's studies point to future work on the effectiveness of teaching styles with respect to the learner's procedural and semantic memory. Because the objective of Dr. Wakefield's research group was to compare the brain regions activated while recalling information taught by different techniques, their study did not explicitly conclude that gesture-based learning is more effective than speech-based learning for retaining learned information; however, they cite two other studies (Cook et al., 2008; Goldin-Meadow et al., 2009) whose findings support this idea. Dr. Wakefield's and Zhang's studies differ in several respects that have noteworthy implications for this topic. Children comprised the participants of the former study, while undergraduate students comprised the participants of the latter. In addition, whereas Dr. Wakefield's research group focused on the procedural memory of their participants (assessing their ability to solve equivalence problems), Zhang's research group tested the semantic memory of their participants (assessing their conceptual understanding of statistical modeling). As such, it could reasonably be suggested that, in general, verbal instruction accompanied by gesture is a more effective technique than verbal instruction alone, regardless of the learner's age or the type of information (procedural or semantic). Future studies will likely attempt to determine whether the effectiveness of gesture-based learning differs when controlling for all variables except age and/or type of information, and if so, to what degree.


References:

Hutson, M. (2021, April 13). Students who gesture during learning 'grasp' concepts better. Scientific American. https://www.scientificamerican.com/article/students-who-gesture-during-learning-grasp-concepts-better/

Wakefield, E. M., et al. (2019). Learning math by hand: The neural effects of gesture-based instruction in 8-year-old children. Attention, Perception & Psychophysics, 81(7), 2343-2353. https://doi.org/10.3758/s13414-019-01755-y

Zhang, I., Givvin, K. B., Sipple, J. M., Son, J. Y., & Stigler, J. W. (2021). Instructed hand movements affect students' learning of an abstract concept from video. Cognitive Science, 45, e12940. https://doi.org/10.1111/cogs.12940
