Thursday, February 26, 2026

TBIs: Shaking up Executive Function and Auditory Processing

We learn more every year about traumatic brain injuries and their long-lasting aftereffects. Dr. Krizman, in her presentation a couple of weeks ago, highlighted that the frequency-following response (FFR) has a distinct neural signature in young children with concussions. Her team also found that these children's brains process the pitch of people's voices more slowly than the brains of children without traumatic brain injuries. Her research was a powerful demonstration of the lasting impact of traumatic brain injuries on auditory processing. I was interested in this research because we often study how TBI can impact cognitive function in the long term, but seeing it from an auditory perspective was fascinating.

Building on this, Theodoroff et al. (2022) show that even though concussions and other TBIs have lasting auditory effects, clinical practice has not caught up: standard concussion screens include no section testing auditory symptoms such as tinnitus, noise sensitivity, or hearing difficulties. A new paper from the TRACK-TBI study, published in the Journal of Head Trauma Rehabilitation, extends Krizman's work by examining the long-term effects of untreated auditory symptoms following TBIs. Armstrong and colleagues ask: what are the long-term consequences of ignoring auditory symptoms after a TBI, and how can they affect a patient's cognitive function?

Taken together, the three papers form a chain: Krizman showed that the brainstem's response to sound is disrupted after a concussion and differs from that of uninjured children. Theodoroff and colleagues highlight the disparity and the lack of clinical knowledge applied to treating the auditory symptoms of TBIs. Finally, Armstrong and colleagues show how those symptoms actively affect cognition in the long term.

The TRACK-TBI study enrolled nearly 2,700 participants with various TBIs from 18 Level I trauma centers across the country; of these, the researchers analyzed data from 1,267 participants. Two weeks post-injury, participants were asked, "Since your injury, has your hearing been worse in either ear?" and approximately 17% responded yes. Six months post-injury, they were given cognitive tests measuring executive function with the Trail Making Test (TMT) and processing speed with the WAIS-IV Processing Speed Index.
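As a rough sanity check on those figures, the reported rate can be turned into an approximate head count. This is illustrative arithmetic only; the paper's exact number of "yes" responses is not quoted in this post, so the count below is an estimate derived from the percentages above.

```python
# Back-of-the-envelope arithmetic on the TRACK-TBI figures quoted above.
# The 1267 analyzed participants and ~17% "yes" rate come from the post;
# the resulting head count is an approximation, not a figure from the paper.
analyzed = 1267
hearing_worse_rate = 0.17  # fraction answering "yes" two weeks post-injury

approx_reporting_worse_hearing = round(analyzed * hearing_worse_rate)
print(f"~{approx_reporting_worse_hearing} of {analyzed} participants "
      f"reported worse hearing")  # roughly 215 of 1267
```

In other words, on the order of two hundred people in a single cohort reported hearing worsening that a standard concussion screen would never have asked about.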

After analyzing the data, the researchers found that participants with TBI-related hearing impairment at the two-week mark had significantly worse executive function at six months than those without hearing impairment. The association between hearing impairment and executive function was strongest in those with the most severe presentations: hearing worsening in both ears combined with intracranial pathology.

Although a link was found between TBI-related hearing impairment and executive function deficits, no link was found with processing speed. This distinction is important because it suggests the vulnerability is specific to higher-level cognitive functions, such as the ability to switch between tasks or to inhibit distractions.

         It was interesting to read this work alongside Dr. Krizman's, because her work highlights that the brainstem's ability to encode sound, as measured by the FFR, is damaged after a brain injury. Damage to this part of the brain makes it more difficult to pick out a single sound in a busy environment, meaning that the cortex has to work much harder to differentiate frequencies. The Armstrong paper complements this by showing that many people who experienced auditory symptoms are still experiencing executive function issues six months later. In both papers, degraded sensory input makes it harder for the brain to carry out higher-order cognitive tasks such as executive function. Overall, more research is needed on the long-term effects of concussions on executive function, and more extensive auditory screening after TBI is needed to alleviate symptoms and develop better treatment protocols.

 

The Impact of Gestures on Learning and Comprehension of a Language

     I had the pleasure of listening to Dr. Elizabeth Wakefield talk about her research in a seminar recently, specifically the work in her and Dr. Natalia Zielinski’s paper, “Language Proficiency Impacts the Benefits of Co-Speech Gesture for Narrative Understanding Through a Visual Attention Mechanism.” In the talk, she spoke about how language and gesture are linked. When people talk, they naturally use hand gestures; gestures produced alongside speech are known as co-speech gestures, and they can help listeners understand. One example from the talk involved a person describing someone putting on jewelry. In the first video, the speaker said that they were putting on jewelry without gesturing; in the second, the speaker made the motion of sliding a ring onto a finger while saying the same sentence. The motion gives more context to what is being said, telling the listener exactly which piece of jewelry was being put on without the word "ring" ever being spoken.

    In that paper, the researchers studied Polish-English bilingual children and tested language proficiency to see whether there is a link between gestures and understanding a narrative. Stories were told in both languages, some with gestures and some without. The study found that in the children's weaker language (Polish), gestures helped: when the children knew the language less well, they relied more on the gestures, which improved their understanding of the story. In English, adding gestures to the narratives had a slight negative impact, likely because the children focused too much on the gestures and not enough on the story itself. Stories told in Polish without gestures were remembered least well of all the tested conditions. The study focused on children because of their lower language proficiency. Overall, it suggests that language proficiency determines how much co-speech gestures help with comprehending a narrative.

    In a study by Xiaoyi Huang, Nayoung Kim, and Kiel Christianson, the researchers looked at pairing gestures with learning a foreign language. Participants were taught words in Mandarin, each paired with a gesture. Some gestures were iconic, such as miming talking on the phone or drinking, while others were arbitrary, with no prior connection to the word. The thirty participants had no previous experience with Mandarin or any other Chinese language. Eighteen words were presented: six accompanied by iconic gestures, six by arbitrary gestures, and six without gestures. After two sessions, the students took a multiple-choice test in which the instructor spoke each word along with its associated gesture. Students scored eight to ten percent better on words learned with gestures than on words learned without. According to researcher Kiel Christianson, “visualizing a gesture with each word creates multiple pathways into the semantics of new words and helps students remember them better.” There is a fall-off after about twelve words, beyond which students cannot learn and retain them all. Overall, it is helpful for students learning a new language when the teacher uses hand gestures.


References

Huang, X., Kim, N., & Christianson, K. (2019). Gesture and vocabulary learning in a second language. Language Learning, 69(1), 177–197. https://doi.org/10.1111/lang.12326

Zielinski, N., & Wakefield, E. M. (2021). Language proficiency impacts the benefits of co-speech gesture for narrative understanding through a visual attention mechanism. Proceedings of the Annual Meeting of the Cognitive Science Society, 43(43). https://escholarship.org/uc/item/63r5d3qq

Tuesday, February 24, 2026

The role of auditory processing in concussions, ADHD and APD

         One of the most common brain injuries is the concussion, or mild traumatic brain injury (mTBI). Concussions can result from direct impact, such as a fall or a sports collision, or from indirect force, such as whiplash, where the head snaps forward or back. Despite widespread awareness of concussions, they tend to be difficult to diagnose: they do not show up on CT or MRI scans; they lack specific symptoms and can be mistaken for other conditions; and their assessment is subjective and self-reported, which may be undependable and biased. It is imperative that concussions be diagnosed, and that an objective test be designed, because of their potential to impact cognition, neurologic function, socio-emotional wellbeing, and academic achievement [1]. Without a diagnosis, concussions may go unnoticed and thus untreated.
     I recently had the pleasure of listening to Dr. Krizman speak during a neuroscience seminar on her phenomenal paper, “Auditory biological marker of concussion in children.” The paper discusses how auditory processing, which can be disrupted by concussions, can be used to identify concussion occurrence and severity [1]. The study followed concussed and non-concussed child athletes and measured auditory processing using the frequency-following response (FFR). FFRs can capture the fundamental frequency, harmonics, amplitude (intensity), timing, and noise. The fundamental frequency (which helps us distinguish who is talking) and harmonics (which help us distinguish words) were found to be diminished in concussed patients. The study followed some of the concussed children from Lurie Children’s Hospital to observe recovery and found evidence of partial recovery. The FFR provides clues about the biological factors disrupted after a concussion; applying these findings allowed the researchers to identify a signature neural profile and opens clinical potential for diagnosing and tracking concussions in patients.
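To make the fundamental-frequency idea concrete, here is a minimal sketch of estimating a signal's fundamental frequency via autocorrelation, the kind of pitch cue an FFR analysis tracks. This is not the paper's actual analysis pipeline; the 100 Hz test tone and 8 kHz sampling rate are arbitrary choices for the demo.

```python
import numpy as np

# Illustrative only: estimate the fundamental frequency (F0) of a periodic
# signal by autocorrelation. An FFR analysis examines how faithfully the
# brainstem encodes this kind of pitch cue; here we use a synthetic tone.
fs = 8000                                  # sampling rate in Hz (arbitrary)
t = np.arange(0, 0.5, 1 / fs)
signal = np.sin(2 * np.pi * 100 * t)       # synthetic 100 Hz "voice pitch"

# Autocorrelation peaks at lags equal to multiples of the signal's period.
ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
min_lag = fs // 400                        # ignore implausibly high pitches (>400 Hz)
best_lag = np.argmax(ac[min_lag:]) + min_lag
print(f"estimated F0: {fs / best_lag:.1f} Hz")  # estimated F0: 100.0 Hz
```

A concussed brainstem, per the paper, would encode this kind of periodicity less robustly, which is what makes the FFR usable as a marker.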
      Dr. Krizman’s talk made me aware of just how expansive auditory processing is. It plays roles in language, reading, filtering, perception, and more. For those reasons, I wanted to know more about its involvement in other conditions. While searching, I came across an article about attention deficit hyperactivity disorder (ADHD), auditory processing disorder (APD), and the link between them [2]. ADHD and APD can both affect attention, which contributes to inaccurate diagnosis of both conditions. But why are they mistaken for one another? The article discusses a 2017 study [3], “Auditory Processing Assessment in Children with Attention Deficit Hyperactivity Disorder,” in which children with ADHD were found to perform worse on auditory processing tests. However, when treated with methylphenidate (better known by brand names such as Ritalin and Concerta), children with ADHD performed as well as children without ADHD [2, 3]. Symptoms of auditory processing difficulty can therefore be associated with ADHD, and conversely, APD symptoms can be misinterpreted as attention deficits. Both conditions require careful evaluation.
     From both the seminar talk and the Medical News Today article, I have begun to realize the broader implications of auditory processing (listening behavior) being a direct reflection of neurological conditions in the brain rather than being involved only in hearing. Auditory processing involves auditory memory, attention, distinguishing sounds, blocking background noise, and localizing sound. It spans many neural systems and thus can be implicated in a variety of conditions. If multiple conditions exhibit similar auditory symptoms, clinicians face the challenge of diagnosing accurately. Dr. Krizman’s study [1] introduces a biological marker for concussions, which are notoriously difficult to diagnose. It makes me wonder how an auditory biological marker might be applied to other conditions with auditory symptoms, like those mentioned in the Medical News Today article.
 
References:
1. Kraus, N., Thompson, E. C., Krizman, J., Cook, K., White-Schwoch, T., & LaBella, C. R. (2016). Auditory biological marker of concussion in children. Scientific Reports, 6, 39009.
2. Villines, Z. (2021, November 9). ADHD and auditory processing disorder: Difference, diagnosis, and more. Medical News Today. https://www.medicalnewstoday.com/articles/adhd-and-auditory-processing-disorder 

3. Lanzetta-Valdo, B. P., Oliveira, G. A., Ferreira, J. T., & Palacios, E. M. (2017). Auditory Processing Assessment in Children with Attention Deficit Hyperactivity Disorder: An Open Study Examining Methylphenidate Effects. International Archives of Otorhinolaryngology, 21(1), 72–78.


Thursday, December 11, 2025

What we don't know about dreams

I had the good luck to attend a class where Dr. Gabriela Torres spoke about her research on dreaming and what happens in the human brain during dreams. Due to my interest in this subject, I decided to learn more about it and present my findings today. Because dreams could only be researched after awakening, dream science was unreliable and incomplete for most of history. Dr. Torres's research and other studies, however, are changing this by showing that dreams are not closed-off experiences but rather measurable and, more significantly, interactive brain states.

Speaking in my Neuro 300 seminar class, Dr. Torres presented a study showing that REM sleepers can receive questions and answer them in real time using eye movements or facial muscle signals. This implies that while completely asleep, the dreaming brain is capable of understanding speech, carrying out mental tasks, and purposefully responding. The research demonstrates that dream cognition is more complex, more organized, and more connected to reality than previously thought.

I became interested in this because, when doing my own research, I found a complementary study by Horikawa, Tamaki, Miyawaki, and Kamitani (2013) that further supports it. They used fMRI and machine-learning models to decode the visual content of dreams directly from brain activity. Because the same visual regions that are active during waking perception also become active during dream imagery, the researchers were able to predict what participants were dreaming about before they woke up. This shows that the brain's neural architecture for dream imagery overlaps with that of real perception.

Taken as a whole, these investigations offer a convincing conclusion: dreams are not private hallucinations but rather readable, interactive cognitive states. While Konkoly et al. (the study Dr. Torres presented) showed that dreamers can communicate outward, Horikawa, Tamaki, Miyawaki, and Kamitani (2013) (the study I found) showed that researchers can decode dream content from the outside. These findings support the idea that consciousness is not an "on and off" switch but rather a fluid spectrum, with elements of waking perception, memory, and reasoning staying partially active even during REM sleep. This new science may be used to treat PTSD and nightmares, as well as to enhance creativity, memory, and emotional processing during sleep. Together, these investigations reveal a link between neural interpretation and dialogue, transforming dreams from a mysterious internal experience into a measurable and reachable part of human thought.

 

APA Citations  

Horikawa, T., Tamaki, M., Miyawaki, Y., & Kamitani, Y. (2013). Neural decoding of visual imagery during sleep. Science, 340(6132), 639–642. https://pubmed.ncbi.nlm.nih.gov/23558170/

Konkoly, K. R., Appel, K., Chabani, E., Mangiaruga, A., Gott, J., Mallett, R., Caughran, B., Witkowski, S., Whitmore, N. W., Mazurek, C. Y., Berent, J. B., Weber, F. D., Türker, B., Leu-Semenescu, S., Maranci, J.-B., Pipa, G., Arnulf, I., Oudiette, D., Dresler, M., & Paller, K. A. (2021). Real-time dialogue between experimenters and dreamers during REM sleep. Current Biology, 31(7), 1417–1427.e6. 


Temperature as a Universal Regulator of Sleep

    Humans spend around one-third of their lives sleeping. Sleep plays a huge role in memory consolidation, waste removal, cellular repair, and much more, and poor-quality sleep is associated with problems such as dementia and cardiovascular disease. Even so, most of us neglect sleep, getting about three and a half ultradian sleep cycles per night or even less. Sleep is regulated by many factors, and temperature is among the most important: for humans to fall asleep, core body temperature must drop by 1–3 °F. The timing of sleep is controlled by a circadian clock, the suprachiasmatic nucleus (SCN) in humans and the lateral posterior clock neurons (LPNs) in Drosophila. These structures are strongly influenced by temperature, and as they receive temperature information, they release signals that regulate sleep.
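To put the "three and a half cycles" figure in perspective, a quick conversion into hours. The ~90-minute cycle length is a common textbook estimate and my own assumption here, not a number from the studies discussed below.

```python
# Illustrative arithmetic: converting ultradian sleep cycles into hours,
# assuming a typical ~90-minute cycle (an assumption, not a study figure).
CYCLE_MINUTES = 90

def cycles_to_hours(n_cycles: float) -> float:
    """Hours of sleep corresponding to n_cycles ultradian cycles."""
    return n_cycles * CYCLE_MINUTES / 60

print(cycles_to_hours(3.5))  # 5.25 -- well short of a full night
print(cycles_to_hours(5.0))  # 7.5  -- roughly a full night's sleep
```

Under that assumption, three and a half cycles works out to only about five and a quarter hours of sleep.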

    The paper “The cell-intrinsic circadian clock is dispensable for lateral posterior clock neuron regulation of Drosophila rest-activity rhythms,” written by Charlene Y. P. Guerrero and colleagues in Daniel J. Cavanaugh’s lab, examines the role of LPNs in Drosophila circadian and sleep regulation. LPNs receive input from thermosensory pathways, which is essential for circadian rhythm regulation. Guerrero reports that reduced LPN excitability causes a slight reduction in sleep, while silencing LPNs entirely reduces total sleep, fragments it, and shortens sleep bouts. The researchers then used CRISPR to delete clock genes (per and tim) inside the LPNs; these genes normally cycle over 24 hours, acting as an internal clock. Flies with their LPN clock genes deleted still received timing information from other clock neurons with functional per and tim cycles, maintaining a normal 24-hour rhythm. This shows that LPNs do not set their own time but instead receive timing from other structures, functioning as driven oscillators. Guerrero’s article demonstrates that LPNs are important for regulating sleep but do not require their own internal clocks. This knowledge may lead to therapeutic strategies for circadian rhythm disorders.

    In a study titled “Skin deep: enhanced sleep depth by cutaneous temperature manipulation,” Roy J. E. M. Raymann and colleagues discuss how skin temperature controls sleep quality in humans. Small increases in skin temperature have significant effects on sleep. Raymann et al. argue that adjusting the thermostat in a room has limited impact on sleep, while manipulating skin temperature has powerful effects; previous experiments show that even a 0.4–1.0 °C increase in skin temperature affects sleep. The researchers used slight skin warming and adjustments to the bedding microclimate to observe how these influenced sleep, recording sleep onset time, deep sleep duration, interruptions, and comfort. Participants also slept in cool and hot rooms to test whether air temperature made a difference. In hot rooms, deep sleep decreased, wakefulness increased, and overall sleep quality worsened. But when skin temperature was raised independently of room temperature, the results changed dramatically: participants fell asleep faster, experienced more deep sleep, and had overall more restorative sleep. The study concludes that skin temperature plays a larger role in sleep quality than room temperature.

    Together, these findings show that temperature plays a major role in regulating sleep. In Drosophila, LPNs integrate thermosensory information to regulate sleep even without their own internal clocks. In humans, manipulation of skin temperature affects sleep more strongly than room temperature. Both articles highlight that sleep systems depend heavily on how the brain interprets thermal information. As research in this field continues, new strategies and treatments for sleep and circadian rhythm disorders may emerge.


References

1.) Guerrero, C. Y. P., Lam, V. H., Cusumano, P., Van Doren, M., & Cavanaugh, D. J. (2025). The cell-intrinsic circadian clock is dispensable for lateral posterior clock neuron regulation of Drosophila rest–activity rhythms. Neurobiology of Sleep and Circadian Rhythms. https://doi.org/10.1016/j.nbscr.2025.100198

2.) Raymann, R. J. E. M., Swaab, D. F., & Van Someren, E. J. W. (2008). Skin deep: Enhanced sleep depth by cutaneous temperature manipulation. Brain, 131(2), 500–513. https://doi.org/10.1093/brain/awm315

Network-based Timekeeping in Humans and Flies

    The circadian rhythm organizes countless physiological and behavioral processes across many species. During his talk, Dan Cavanaugh emphasized that circadian rhythms are not controlled by one single cell; instead, they emerge from the coordinated activity of many neurons working together. His paper “The cell-intrinsic circadian clock is dispensable for lateral posterior clock neuron regulation of Drosophila rest-activity rhythms” (Cavanaugh et al., 2025) demonstrated this network-based idea in what I found a really surprising way: some neurons that participate in circadian timing don’t actually need their individual molecular clocks to regulate daily behavior. This challenges the common assumption that every clock neuron must keep track of time on its own.

In Drosophila (fruit flies), about 240 clock neurons form a network that coordinates daily patterns of activity. Cavanaugh focused on a subset called the lateral posterior neurons (LPNs). These neurons were thought to rely on their own intrinsic molecular clocks to support circadian behavior. However, by genetically disrupting the molecular clock only within LPNs, Cavanaugh et al. found that rest-activity transitions and structure remained intact despite the absence of an intrinsic clock in these cells. The reason is that LPNs receive timing cues from other clock neurons; they don’t just function alone. But when Cavanaugh silenced the LPNs completely using neuronal inhibition, the flies’ circadian rhythms weakened significantly. This shows that communication within the network is more essential than each neuron having its own cycle (Cavanaugh et al., 2025).

This network-based view of circadian timing has a parallel in human neuroscience. In his 2025 article, “Brain circadian clocks timing the 24 hour rhythm of behavior,” Mendoza reviews recent research on the suprachiasmatic nucleus (SCN), traditionally described as the “master pacemaker” of human circadian rhythms. Mendoza explains that many other brain regions, including the hypothalamus, brainstem, and cortex, contain their own molecular clocks that interact with one another. These regions communicate through hormonal, metabolic, and neural pathways to generate a stable 24-hour rhythm (Mendoza, 2025). Just as in the flies, coordination among nodes matters more than perfect timing within every individual node.

What makes the two papers connect so well is their combined message that circadian rhythms are, at their core, network phenomena. Cavanaugh’s LPN study illustrates the idea at the cellular level, showing that some neurons can lose their intrinsic clock without disrupting behavior because the rest of the network carries them. Mendoza’s review expands the concept to a larger scale: the human brain. Both highlight that circadian stability depends on synchrony and communication, not isolated oscillators. When that communication fails, rhythms break down, whether in a small fly or across the human brain.

The broader message of these findings may be the most important part. Understanding the circadian system as a distributed network helps explain why sleep and activity rhythms usually hold up to cellular-level disruptions yet remain sensitive to broader problems such as metabolic disorders. Both papers emphasize that future research should go beyond single cells or single genes and instead examine how circadian information is transmitted, integrated, and aligned across the system.

Together, Cavanaugh’s seminar and the research papers discussed in this post reinforce the idea that biological timing is not about having perfect clocks in every individual neuron but about how those neurons communicate with one another. Whether in flies or humans, the brain keeps time through the collaborative activity of its neurons.

References

Cavanaugh, D. J., Guerrero, C. Y. P., Cusick, M. R., Samaras, A. J., & Shamon, N. S. (2025). The cell-intrinsic circadian clock is dispensable for lateral posterior clock neuron regulation of Drosophila rest-activity rhythms. Neurobiology of Sleep and Circadian Rhythms, 18, 100124. https://doi.org/10.1016/j.nbscr.2025.100124 

Mendoza, J. (2025). Brain circadian clocks timing the 24 h rhythms of behavior. NPJ Biological Timing & Sleep, 2(1). https://www.nature.com/articles/s44323-025-00030-8 


How Oxyphor 2P Is Transforming Deep-Tissue Oxygen Imaging

Understanding how oxygen is delivered and used in the brain is essential for uncovering the mechanisms behind stroke, dementia, traumatic brain injury, and many other neurological conditions. Yet measuring oxygen levels deep within living tissue has historically been extremely challenging. Traditional imaging tools either can’t reach deep enough, damage tissue during delivery, or are too slow to capture rapid physiological changes.

A new research breakthrough, Oxyphor 2P, a high-performance oxygen-sensing probe, may finally change that. Developed by Esipova, Barrett, Erlebach, Masunov, Weber, and Vinogradov, this innovative phosphorescent probe represents a major step forward in neuroscience and biomedical imaging. Their findings demonstrate that Oxyphor 2P makes oxygen imaging deeper, faster, and more stable than previously possible, opening the door to new insights into brain function and disease.

The authors introduce Oxyphor 2P as a high-performance phosphorescent probe designed specifically for measuring oxygen in biological systems. Unlike earlier probes, Oxyphor 2P is optimized for two-photon phosphorescence lifetime microscopy, a technique that allows scientists to visualize oxygen levels deep inside tissue using minimally invasive infrared light.

The researchers report several major advancements, including two-photon imaging up to 600 micrometers deep, imaging speeds nearly 60 times faster than previous methods, a delivery method that avoids local tissue damage, and reliable multi-day longitudinal oxygen measurements. These innovations make Oxyphor 2P one of the most promising tools for studying oxygen dynamics in living brains.

Oxygen is the primary fuel of the brain. Even slight disruptions can influence cognition and behavior and may contribute to conditions such as stroke, Alzheimer’s disease, Parkinson’s disease, and age-related cognitive decline. Historically, limited imaging tools have prevented scientists from monitoring oxygen levels at depth or over extended periods.

With Oxyphor 2P, researchers can observe oxygen levels across deeper cortical layers, track how oxygen changes during neural activity, study chronic vascular and metabolic changes after injury, monitor oxygen dynamics over days instead of minutes, and investigate early biomarkers of neurological disorders. The ability to perform multi-day, deep-tissue imaging without damaging the brain allows for more accurate and biologically realistic studies.

At a recent Loyola Neuroscience seminar, Dr. Tatiana V. Esipova, one of the lead authors of the Oxyphor 2P study, shared her experiences and scientific goals behind developing this probe. Dr. Esipova is a faculty member in the Department of Chemistry and Biochemistry at Loyola University Chicago, where she is now an Associate Professor. Her career has included research positions at the University of Pennsylvania and EPFL in Switzerland, building deep expertise in chemical probe design and biophysics.

During the seminar, Dr. Esipova emphasized the importance of designing oxygen probes that can reach deep brain layers without disrupting normal tissue architecture. She also highlighted how long-term tracking of oxygen levels can help researchers understand how the brain adjusts during learning, recovery, injury, and disease progression. Her work demonstrates how chemistry and neuroscience can intersect to create transformative tools.

The development of Oxyphor 2P shows how advancements in chemical probe design can reshape what neuroscientists are capable of studying. Just as multi-ancestry genetics has expanded our understanding of Parkinson’s disease, advanced oxygen imaging tools have the potential to unlock new insights into brain health.

Oxyphor 2P represents a major leap forward in deep-tissue oxygen imaging. With deeper penetration, faster imaging speeds, and multi-day stability, this probe allows researchers to view the brain’s oxygen landscape with an unprecedented level of detail. The work of Dr. Esipova and her collaborators highlights how innovation at the intersection of chemistry and neuroscience can drive meaningful progress in understanding human health and disease.

References

Esipova, T. V., Barrett, M. J. P., Erlebach, E., Masunov, A. E., Weber, B., & Vinogradov, S. A. (2024). Oxyphor 2P: A high-performance probe for deep-tissue longitudinal oxygen imaging. Cell Reports Methods.

Vinogradov, S. A., Wilson, D. F., & Lebedev, A. Y. (2020). Phosphorescent probes for oxygen imaging in vivo: Principles and applications. Progress in Molecular Biology and Translational Science.

Weber, B., & Helmchen, F. (2019). Imaging oxygen in the brain: Methods and applications. Annual Review of Neuroscience.