Voxel-wise encoding of naturalistic audio stimuli using very-high-density diffuse optical tomography

Poster No:

1982 

Submission Type:

Abstract Submission 

Authors:

Morgan Fogarty1, Wiete Fehner1, Aahana Bajracharya1, Jerry Tang2, Zachary Markow1, Jason Trobaugh1, Alexander Huth2, Joseph Culver1

Institutions:

1Washington University in St. Louis, St. Louis, MO, 2The University of Texas at Austin, Austin, TX

First Author:

Morgan Fogarty  
Washington University in St. Louis
St. Louis, MO

Co-Author(s):

Wiete Fehner, MS  
Washington University in St. Louis
St. Louis, MO
Aahana Bajracharya  
Washington University in St. Louis
St. Louis, MO
Jerry Tang, PhD  
The University of Texas at Austin
Austin, TX
Zachary Markow, PhD  
Washington University in St. Louis
St. Louis, MO
Jason Trobaugh, DSc  
Washington University in St. Louis
St. Louis, MO
Alexander Huth, PhD  
The University of Texas at Austin
Austin, TX
Joseph Culver, PhD  
Washington University in St. Louis
St. Louis, MO

Introduction:

Functional neuroimaging with naturalistic stimuli has enabled mapping of visual (Huth et al., 2012) and linguistic (Huth et al., 2016) semantic representations across the cortex using fMRI. However, the physical constraints of fMRI make some naturalistic studies and widespread clinical applications impractical. Such semantic maps could be clinically important for understanding the initial language deficit and recovery trajectory of patients with post-stroke communication disorders, such as aphasia. High-density diffuse optical tomography (HD-DOT) has been shown to be a surrogate for fMRI (Eggebrecht et al., 2014) with the advantage of wearability, making it well suited for translating these encoding advances into assessment and therapeutic tools for patients with aphasia. Here, we assess the feasibility of using very-high-density DOT (VHD-DOT) for semantic auditory brain mapping in healthy adults, using autobiographical podcasts as engaging, naturalistic stimuli.

Methods:

Our VHD-DOT imaging system comprises 255 sources and 252 detectors distributed across the scalp, providing nearly whole-head coverage at a spatial resolution of approximately 10 mm. Data were collected from two healthy adult participants who listened to ten 10–15-minute stories from The Moth Radio Hour podcast and completed functional localizers over two imaging sessions (Fig 1A-B). A 10-minute validation story was also repeated once per session. Semantic features of the stimulus stories were extracted using a word co-occurrence semantic model. Features were concatenated across all stories and time-delayed to form a linear finite impulse response model that accounts for the hemodynamic response (3,940 delayed features in total). Ridge regression was used to estimate the feature weights for each voxel, with bootstrapping used to select a single regularization coefficient for all voxels. We evaluated the model by predicting the response of each voxel to a held-out story (Fig 2A); the correlation between predicted and measured responses to the validation story quantified prediction accuracy. All analyses were conducted for each participant individually.
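To make the pipeline concrete, the sketch below illustrates the main steps on synthetic arrays: time-delaying the semantic features into a finite impulse response design, fitting per-voxel ridge weights, and scoring held-out predictions by Pearson correlation. Array names, sizes, delay counts, and the ridge penalty are illustrative assumptions, not the exact parameters or implementation used in this study.

```python
# Minimal sketch of a voxel-wise encoding pipeline like the one described
# above, run on synthetic data. Sizes, delays, and the ridge penalty are
# illustrative (the study used 3,940 delayed features in total).
import numpy as np

def make_delayed(features, delays=(1, 2, 3, 4)):
    """Stack time-delayed copies of the feature matrix (time x features) to
    form a linear finite impulse response (FIR) design that absorbs the
    hemodynamic lag between word features and the measured response."""
    n_t, n_f = features.shape
    delayed = np.zeros((n_t, n_f * len(delays)))
    for i, d in enumerate(delays):
        delayed[d:, i * n_f:(i + 1) * n_f] = features[: n_t - d]
    return delayed

def fit_ridge(X, Y, alpha):
    """Closed-form ridge regression; one weight vector per voxel
    (each column of Y is a voxel time course)."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

def voxelwise_corr(Y_pred, Y_true):
    """Pearson correlation between predicted and measured responses, per voxel."""
    zp = (Y_pred - Y_pred.mean(0)) / Y_pred.std(0)
    zt = (Y_true - Y_true.mean(0)) / Y_true.std(0)
    return (zp * zt).mean(0)

# Synthetic stand-ins for the concatenated training stories and the held-out
# validation story (time points x base features, time points x voxels).
rng = np.random.default_rng(0)
n_train, n_val, n_feat, n_vox = 2000, 400, 100, 500
train_feat = rng.standard_normal((n_train, n_feat))
val_feat = rng.standard_normal((n_val, n_feat))
true_w = rng.standard_normal((n_feat * 4, n_vox))
Y_train = make_delayed(train_feat) @ true_w + rng.standard_normal((n_train, n_vox))
Y_val = make_delayed(val_feat) @ true_w + rng.standard_normal((n_val, n_vox))

# Fit on the training stories, then score predictions on the held-out story.
# In the actual analysis a single alpha was chosen for all voxels by bootstrapping.
weights = fit_ridge(make_delayed(train_feat), Y_train, alpha=100.0)
r = voxelwise_corr(make_delayed(val_feat) @ weights, Y_val)
print(f"median held-out correlation across voxels: {np.median(r):.2f}")
```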

Results:

To verify co-registration between imaging sessions, block-wise correlation (Fig 1C, E) and overlap (Fig 1D, F) maps were computed for the visual and auditory localizer tasks; their high similarity for both subjects indicates that cap placement was consistent across sessions. Pearson correlations between repeated presentations of the validation story verify that the neural responses to this story are consistent and repeatable (Fig 2B, E). When validating the encoding model, regions of high correlation between predicted and measured VHD-DOT responses (Fig 2C, F) align with known semantic areas, including the lateral temporal cortex, superior prefrontal cortex, and temporoparietal junction. For Subject 1, the words most strongly associated with the highest-correlation voxel included "victims," "protect," "traitor," "threatened," and "justice," indicating that this voxel responds to words related to conflict or harm. As a further control, the stimulus features were scrambled in time before regression so that they no longer aligned with the VHD-DOT story data; as expected, model performance was low (Fig 2D, G), confirming that the encoding model is selective for the semantic features.
Supporting Image: OHBM_Figure1_Fogarty.png
Supporting Image: OHBM_Figure2_Fogarty.png
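The snippet below sketches the two validation checks used here on synthetic data: the voxel-wise correlation between the two presentations of the validation story, and the scrambled-feature control in which feature rows are permuted in time before regression so that prediction accuracy should fall to chance. All names and sizes are illustrative assumptions; the correlation and ridge steps mirror those in the Methods sketch.

```python
# Sketch of the repeat-reliability check and scrambled-feature control
# described above, on synthetic data (illustrative, not the study's code).
import numpy as np

rng = np.random.default_rng(1)
n_t, n_feat, n_vox = 600, 400, 500

def voxelwise_corr(A, B):
    """Pearson correlation between two sets of voxel time courses, per voxel."""
    za = (A - A.mean(0)) / A.std(0)
    zb = (B - B.mean(0)) / B.std(0)
    return (za * zb).mean(0)

# 1) Repeat reliability: correlate each voxel's time course across the two
#    presentations of the validation story.
shared = rng.standard_normal((n_t, n_vox))
rep1 = shared + rng.standard_normal((n_t, n_vox))
rep2 = shared + rng.standard_normal((n_t, n_vox))
repeat_r = voxelwise_corr(rep1, rep2)

# 2) Scrambled-feature control: permute the training feature rows in time so
#    they no longer align with the responses, refit the ridge model, and score
#    the held-out story; accuracy should fall to chance.
true_w = rng.standard_normal((n_feat, n_vox))
X_train = rng.standard_normal((n_t, n_feat))
Y_train = X_train @ true_w + rng.standard_normal((n_t, n_vox))
X_val = rng.standard_normal((n_t, n_feat))
Y_val = X_val @ true_w + rng.standard_normal((n_t, n_vox))

X_scr = X_train[rng.permutation(n_t)]
w_scr = np.linalg.solve(X_scr.T @ X_scr + 100.0 * np.eye(n_feat), X_scr.T @ Y_train)
null_r = voxelwise_corr(X_val @ w_scr, Y_val)

print(f"repeat reliability (median r): {np.median(repeat_r):.2f}")
print(f"scrambled-feature control (median r): {np.median(null_r):.2f}")
```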
 

Conclusions:

This work lays the groundwork for semantic brain mapping using VHD-DOT as a surrogate for fMRI. The results suggest that VHD-DOT achieves the spatial resolution and image quality required for detailed semantic mapping, validated here in healthy adults. Future work includes extending these methods to post-stroke aphasia patients as a clinical population and applying dimensionality reduction techniques to map word categories across the cortex. These findings establish VHD-DOT for semantic brain mapping and enable innovative clinical neuroimaging studies of language recovery in patients with aphasia.

Language:

Language Comprehension and Semantics 2

Novel Imaging Acquisition Methods:

NIRS 1

Keywords:

Aphasia
Computational Neuroscience
Language
Near Infra-Red Spectroscopy (NIRS)
OPTICAL
Other - High density diffuse optical tomography; Naturalistic stimuli; Semantics; Novel imaging methods

1|2 Indicates the priority used for review

Abstract Information


Please indicate below if your study was a "resting state" or "task-activation" study.

Task-activation

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was this research conducted in the United States?

Yes

Are you Internal Review Board (IRB) certified? Please note: Failure to have IRB, if applicable will lead to automatic rejection of abstract.

Yes, I have IRB or AUCC approval

Were any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Were any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Optical Imaging
Structural MRI

For human MRI, what field strength scanner do you use?

3.0T

Which processing packages did you use for your study?

FreeSurfer
Other, please list - NeuroDOT, NIRFASTer, fMRIPrep

Provide references using APA citation style.

Eggebrecht, A. T., Ferradal, S. L., Robichaux-Viehoever, A., Hassanpour, M. S., Dehghani, H., Snyder, A. Z., Hershey, T., & Culver, J. P. (2014). Mapping distributed brain function and networks with diffuse optical tomography. Nature Photonics, 8(6), 448-454. https://doi.org/10.1038/nphoton.2014.107

Huth, A. G., De Heer, W. A., Griffiths, T. L., Theunissen, F. E., & Gallant, J. L. (2016). Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600), 453-458. https://doi.org/10.1038/nature17637

Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012). A Continuous Semantic Space Describes the Representation of Thousands of Object and Action Categories across the Human Brain. Neuron, 76(6), 1210-1224. https://doi.org/10.1016/j.neuron.2012.10.014
