Neural Orchestra of Language: Insights from Speech, Cognition, and Development

Philipp Kuhnke, Chair
Leipzig University
Wilhelm Wundt Institute for Psychology
Leipzig, Saxony 
Germany
 
Stephanie Forkel, Chair
Radboud University
Donders Institute for Brain, Cognition and Behaviour
Nijmegen, Gelderland 
Netherlands
 
Tuesday, Jun 25: 12:00 PM - 1:15 PM
Oral Sessions 
COEX 
Room: Grand Ballroom 101-102 
This session presents a fascinating exploration into the intersection of neuroscience, language, and communication. It brings together an array of studies that push the boundaries of how we understand spoken language and its neurological underpinnings. From deep speech-to-text models that unravel the neural basis of spontaneous speech in everyday conversations to non-invasive mapping techniques that predict language outcomes after resection of tumors in eloquent regions, the session showcases cutting-edge research in speech and language neuroscience. Topics include the joint contribution of the left inferior parietal lobe and auditory cortex to sound knowledge retrieval, degeneracy in auditory speech repetition networks, and individual and shared cortical language representations during real-time dialogues. The session also explores the developmental aspects of language and communication, examining how the prenatal environment influences early network development and how rhythmic cues affect neural oscillations differently in adults and children. It culminates with a novel data-driven approach to narrative comprehension that detects event boundaries during reading. Attendees will be immersed in a multidisciplinary discussion that bridges neurology, linguistics, and cognitive science, offering new insights into the neural mechanisms of language and communication.

Presentations

Prenatal environment is associated with the pace of network development over the first 3 years

Environmental influences on brain structure and function during development have been well characterized, and the pace of early brain development has been associated with important risk factors and behavioral outcomes (Shaw et al. 2010; Farah 2017). As children mature, intrinsic cortical networks become more segregated: sets of brain regions display more densely interconnected patterns of connectivity, and large-scale systems become increasingly distinct (Grayson & Fair 2017). Some theoretical models posit that environmental influences on brain development might arise via effects on its pace, such that development proceeds faster in neonates and toddlers from lower-SES backgrounds (Tooley et al. 2021).
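
For readers who want the segregation measure in concrete terms, here is a minimal Python sketch of system segregation as it is commonly computed (mean within-network minus mean between-network connectivity, normalized by the within-network mean; cf. Chan et al. 2014). The connectivity matrix and network labels below are toy stand-ins, not the study's data:

```python
import numpy as np

def system_segregation(conn, labels):
    """System segregation of a weighted connectivity matrix:
    (mean within-network - mean between-network) / mean within-network.
    Higher values indicate more distinct large-scale systems."""
    conn = conn.astype(float).copy()
    np.fill_diagonal(conn, np.nan)               # ignore self-connections
    same = labels[:, None] == labels[None, :]    # within-network mask
    within = np.nanmean(conn[same])
    between = np.nanmean(conn[~same])
    return (within - between) / within

# Toy example: 6 regions in two networks, with stronger within-network edges.
rng = np.random.default_rng(0)
labels = np.array([0, 0, 0, 1, 1, 1])
conn = rng.uniform(0.1, 0.3, (6, 6))
conn[np.ix_([0, 1, 2], [0, 1, 2])] += 0.5
conn[np.ix_([3, 4, 5], [3, 4, 5])] += 0.5
conn = (conn + conn.T) / 2                       # symmetrize
print(f"segregation = {system_segregation(conn, labels):.2f}")
```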

Abstract 1285

Presenter

Ursula Tooley, Ph.D., Washington University in St. Louis, St. Louis, MO
United States

Mapping individual and shared cortical language representations during real-time natural dialogues

How is language encoded in the brain during everyday conversations, and how is that linguistic encoding shared across interlocutors? Typical studies of the neural basis of language present subjects with predetermined, isolated words or sentences (Price, 2010), and consider neither spontaneous language production nor linguistic neural coupling between interlocutors (Garrod & Pickering, 2004). Here, we aim to address both gaps and map brain areas involved in both speech production and comprehension during natural dialogue.
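
As a rough sketch of the coupling component (not the authors' encoding-model pipeline), speaker-listener neural coupling is often quantified as a lagged correlation between interlocutors' response time courses; a peak at a positive lag indicates that listener activity follows the speaker's. The time series below are simulated:

```python
import numpy as np

def lagged_coupling(speaker, listener, max_lag):
    """Correlate speaker and listener time courses at offsets of
    -max_lag..+max_lag samples; positive lags shift the listener later."""
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for lag in lags:
        if lag > 0:
            s, l = speaker[:-lag], listener[lag:]
        elif lag < 0:
            s, l = speaker[-lag:], listener[:lag]
        else:
            s, l = speaker, listener
        r.append(np.corrcoef(s, l)[0, 1])
    return lags, np.array(r)

# Simulated data: the listener "echoes" the speaker 3 samples later.
rng = np.random.default_rng(1)
speaker = rng.standard_normal(500)
listener = np.roll(speaker, 3) + 0.5 * rng.standard_normal(500)
lags, r = lagged_coupling(speaker, listener, max_lag=10)
print("peak coupling at lag", lags[np.argmax(r)])   # expect +3
```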

Abstract 1052

Presenter

Zaid Zada, Princeton University, Princeton, NJ
United States

Deep Speech-to-Text Models Capture the Neural Basis of Spontaneous Speech in Everyday Conversations

One of the most distinctively human behaviors is our ability to use language for communication during spontaneous conversations. Here, we collected continuous speech recordings and concurrent neural signals from epilepsy patients during their week-long stay in the hospital, resulting in a uniquely large ECoG dataset of 100 hours of spontaneous, open-ended conversation. Deep learning provides a novel computational framework that embraces the multidimensional and context-dependent nature of language (Goldstein et al., 2022; Schrimpf et al., 2021). We use Whisper, a deep multimodal speech-to-text model (Radford et al., 2022), to investigate the neural basis of speech processing.
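
To make the modelling approach concrete, here is a minimal Python sketch of a Whisper-based encoding model, assuming the Hugging Face transformers and scikit-learn packages. The audio and electrode signal are simulated stand-ins, and the real pipeline (temporal alignment, lags, cross-validation across electrodes) is considerably more involved:

```python
import numpy as np
import torch
from sklearn.linear_model import RidgeCV
from transformers import WhisperFeatureExtractor, WhisperModel

# 1) Embed a 30 s audio chunk with Whisper's speech encoder.
fe = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny")
model = WhisperModel.from_pretrained("openai/whisper-tiny").eval()
audio = np.random.randn(16_000 * 30)        # stand-in for real speech
feats = fe(audio, sampling_rate=16_000, return_tensors="pt").input_features
with torch.no_grad():
    emb = model.encoder(feats).last_hidden_state[0].numpy()   # (1500, dim)

# 2) Fit a linear encoding model from embeddings to one electrode's
#    activity (simulated here as a noisy linear readout of the embeddings).
elec = emb @ np.random.randn(emb.shape[1]) + np.random.randn(len(emb))
enc = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(emb[:1200], elec[:1200])
pred = enc.predict(emb[1200:])
print("held-out r =", np.corrcoef(pred, elec[1200:])[0, 1])
```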

Abstract 1053

Presenter

Haocheng Wang, Princeton University, Princeton, NJ
United States

Non-Invasive Mapping Predicts Language Outcomes after Eloquent Tumor Resection

Glioma patients undergoing surgery in eloquent regions consistently sustain permanent postoperative language deficits that decrease both quality of life and survival. The origins of these poor outcomes remain unknown. Despite the advent of intraoperative mapping techniques, subjective judgements frequently determine important surgical decisions. Transcranial magnetic stimulation (TMS) has recently emerged as a promising non-invasive, preoperative language mapping technique. We aim to elucidate the determinants of aphasic surgical deficits by building an individualized predictive model based on TMS, routinely acquired preoperative imaging data, and the resection volume. The results shed light on the structure and function of large-scale language networks in glioma patients and lead to a clinical imaging approach for predicting and avoiding postoperative aphasic decline. 
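
To make the modelling goal concrete, here is a hypothetical Python sketch of such an individualized outcome predictor. The features (TMS-map/resection overlap, resection volume, tract disconnection) and the simulated outcomes are placeholders, not the authors' variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 120  # hypothetical patient cohort

# Hypothetical per-patient features (placeholders, not the study's):
X = np.column_stack([
    rng.gamma(2.0, 1.5, n),      # TMS language-map / resection overlap (cm^3)
    rng.gamma(3.0, 8.0, n),      # resection volume (cm^3)
    rng.uniform(0, 1, n),        # arcuate disconnection proportion
])
# Simulate outcomes in which map/resection overlap drives aphasic decline.
y = (X[:, 0] + rng.normal(0, 1.5, n) > 3.0).astype(int)

clf = make_pipeline(StandardScaler(), LogisticRegression())
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC = {auc.mean():.2f}")
```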

Abstract 108

Presenter

Matthew Muir, MD Anderson Cancer Center, Houston, TX
United States

Left inferior parietal lobe and auditory cortex jointly contribute to sound knowledge retrieval

Conceptual knowledge is central to human cognition. Previous neuroimaging studies suggest that conceptual processing relies on the joint contribution of modality-specific perceptual-motor and multimodal brain regions (Kuhnke et al. 2023). In particular, the multimodal left inferior parietal lobe (IPL) coupled with auditory cortex during sound knowledge retrieval and with somatomotor cortex during action knowledge retrieval (Kuhnke et al. 2021). However, as neuroimaging is correlational, it remains unknown whether the interaction between modality-specific and multimodal cortices is causally relevant for conceptually guided behavior. To tackle this issue, we applied inhibitory transcranial magnetic stimulation (TMS) over modality-specific cortex (somatomotor, auditory, or sham) before 24 healthy participants received TMS over multimodal cortex (IPL or sham) during action and sound judgment tasks on written words (Figure 1A).
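
The logic of the design can be made explicit with a small simulation: if the IPL-auditory interaction is causally relevant, sound judgments should be selectively slowed when both sites are perturbed, surfacing as a three-way interaction in a repeated-measures ANOVA. A hypothetical Python sketch with simulated reaction times (statsmodels' AnovaRM; not the study's analysis or data):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(7)
rows = []
for s in [f"s{i:02d}" for i in range(24)]:
    base = rng.normal(700, 40)          # subject's baseline RT (ms)
    for site1 in ["somatomotor", "auditory", "sham"]:
        for site2 in ["IPL", "sham"]:
            for task in ["action", "sound"]:
                rt = base + rng.normal(0, 20)
                # Hypothesized effect: sound judgments slow only when
                # auditory cortex AND IPL are both perturbed.
                if site1 == "auditory" and site2 == "IPL" and task == "sound":
                    rt += 40
                rows.append((s, site1, site2, task, rt))
df = pd.DataFrame(rows, columns=["subject", "tms1", "tms2", "task", "rt"])

res = AnovaRM(df, depvar="rt", subject="subject",
              within=["tms1", "tms2", "task"]).fit()
print(res)   # expect a significant tms1 x tms2 x task interaction
```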

Abstract 1002

Presenter

Philipp Kuhnke, Leipzig University
Wilhelm Wundt Institute for Psychology
Leipzig, Saxony 
Germany

Degeneracy in the neurological model of auditory speech repetition

The neurological language model (1) posits that auditory speech repetition engages four left-hemisphere brain regions sequentially: primary auditory cortex (A1), Wernicke's area (WA), Broca's area (BA), and primary motor cortex (M1), with the arcuate fasciculus mediating information relay. Recent studies challenge this view, emphasising the importance of areas near WA and BA (2). Here, we investigate the bilateral interactions amongst these areas and their involvement with A1 and M1 during auditory speech repetition. Using previously identified activations (2), we estimate effective connectivity (i.e., directed interactions) across these areas using Dynamic Causal Modelling (DCM) (3). Our findings reveal variable effective connectivity across word and pseudoword repetition, indicative of functional degeneracy.
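
For orientation, the neuronal model underlying DCM is a bilinear ODE, dx/dt = (A + Σ_j u_j B(j)) x + C u, where A encodes fixed connectivity, B(j) the modulation of connections by experimental condition j, and C the driving inputs. A toy two-region forward simulation in Python (illustrative only; actual DCM inverts this model against the data, typically in SPM):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy bilinear neuronal model, dx/dt = (A + u*B) x + C u, with region 0
# standing in for A1 (driven by auditory input) and region 1 for M1.
A = np.array([[-0.5, 0.0],
              [ 0.4, -0.5]])   # fixed (endogenous) connectivity
B = np.array([[0.0, 0.0],
              [0.3, 0.0]])     # modulation of the A1 -> M1 connection
C = np.array([1.0, 0.0])       # auditory input drives region 0 only

def u(t):                      # boxcar input: a word heard at 1-2 s
    return 1.0 if 1.0 <= t <= 2.0 else 0.0

def dxdt(t, x):
    return (A + u(t) * B) @ x + C * u(t)

sol = solve_ivp(dxdt, (0, 8), [0.0, 0.0], max_step=0.01)
print("peak responses:", sol.y.max(axis=1))   # region 0, then region 1
```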

Abstract 1051

Presenter

Noor Sajid, Wellcome Centre for Human Neuroimaging, University College London
Brain Sciences
London, London 
United Kingdom