Modeling the visualization of personal experiences during imagination in DMN

Poster No:

849 

Submission Type:

Abstract Submission 

Authors:

Andrew Anderson1

Institutions:

1Medical College of Wisconsin, Milwaukee, WI

First Author:

Andrew Anderson  
Medical College of Wisconsin
Milwaukee, WI

Introduction:

Imagination enables the human brain to relive personal experiences by simulating scenes, dialogue, feelings, and more. Component processes of this ability have been linked to subsystems of the brain's default mode network (DMN): Medial Temporal (MT) DMN is associated with visual scene construction, Core DMN with self-referential cognition, and Frontotemporal (FT) DMN with abstract semantic and social cognition [1-4]. However, the representational codes associated with visualizing personal experiences, presumably in MT DMN, are understudied, in part because quantitatively modelling what different people imagine is challenging. To address this, we scanned fifty people's brain activity with fMRI as they reimagined twenty diverse natural scenarios (e.g. wedding/funeral/driving) [5]. To model the visualization of personal experiences, we constructed computational visual models of what participants imagined for each scenario, based on verbal self-reports taken outside the scanner. We deployed a language model to control for the more abstract semantics of the verbal descriptions. We hypothesized that MT DMN would show selectively high sensitivity to the visual model. To evaluate this hypothesis, we deployed Representational Similarity Analysis (RSA) [6] to compare the representational geometries of fMRI activation patterns within DMN subsystems to those of the visual and language models.

Methods:

Methods: Figure 1
Panel 1. Mental imagery data from 50 participants were reanalyzed [5]. 20 generic scenario cues (e.g. dancing, exercising, wedding) were read to each participant, who vividly imagined themselves personally experiencing each scenario. Participants provided brief verbal descriptions of what they imagined for each scenario. Participants then underwent fMRI as they re-imagined the same scenarios in random order in response to written prompts. fMRI preprocessing produced a single fMRI volume for each mental image per participant. To computationally model visualization of scenes, we deployed Stable Diffusion [7] to simulate each verbal description as images, and then modeled the simulated image content with embeddings from VGG-16 [8] (an image classification model). To model the abstract semantics of the verbal descriptions, we extracted word embeddings from GPT-2 [9], a language model that has been effective in modeling the semantics of natural language in the brain.
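The model-geometry step described above can be sketched in a few lines: each scenario's embedding (VGG-16 features for the visual model, GPT-2 features for the language model) becomes one row of a feature matrix, and pairwise dissimilarities between rows form a representational dissimilarity matrix (RDM). This is an illustrative sketch, not the study's code; the feature values below are random stand-ins for the real embeddings.

```python
import numpy as np

def build_rdm(embeddings):
    """RDM: 1 - Pearson correlation between every pair of
    condition embeddings (one row per scenario)."""
    z = embeddings - embeddings.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    corr = z @ z.T / embeddings.shape[1]  # row-wise Pearson r
    return 1.0 - corr

# Hypothetical stand-in for 20 scenario embeddings (512-d); real
# features would come from VGG-16 or GPT-2.
rng = np.random.default_rng(0)
emb = rng.standard_normal((20, 512))
rdm = build_rdm(emb)
print(rdm.shape)  # (20, 20), symmetric, ~0 on the diagonal
```

The same function serves both models, so visual and language RDMs are directly comparable to the fMRI RDMs in the analysis that follows.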

Panel 2. To evaluate whether MT DMN showed selective sensitivity to visual representations, we deployed partial correlation Representational Similarity Analysis [6] to compare fMRI activation patterns within the MT, Core and FT DMN subsystems to visual model representations, controlling for the language model, and vice versa. For completeness, the analysis was also repeated on 14 other networks in the Yeo 17-network parcellation [10].
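As an illustration of the partial correlation RSA step (a minimal sketch, not the authors' code): the lower triangles of the brain and model RDMs are vectorized, rank-transformed, and correlated while partialling out a control model RDM via the standard first-order partial correlation formula. All RDMs below are synthetic stand-ins.

```python
import numpy as np

def lower_tri(rdm):
    """Vectorize the below-diagonal lower triangle of a square RDM."""
    i, j = np.tril_indices(rdm.shape[0], k=-1)
    return rdm[i, j]

def rank(v):
    # Simple rank transform (no tie handling; adequate for continuous RDMs).
    r = np.empty(v.size)
    r[np.argsort(v)] = np.arange(v.size, dtype=float)
    return r

def partial_spearman(brain, model, control):
    """Spearman correlation between brain and model RDMs,
    partialling out a control RDM."""
    x, y, z = (rank(lower_tri(m)) for m in (brain, model, control))
    rxy, rxz, ryz = (np.corrcoef(a, b)[0, 1]
                     for a, b in ((x, y), (x, z), (y, z)))
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

# Synthetic 20-scenario RDMs: a "brain" geometry built to track the
# visual model, plus an independent "language" control model.
rng = np.random.default_rng(1)
def random_rdm(n=20):
    a = rng.random((n, n))
    a = (a + a.T) / 2
    np.fill_diagonal(a, 0)
    return a

visual, language, noise = random_rdm(), random_rdm(), random_rdm()
brain = 0.7 * visual + 0.3 * noise
r = partial_spearman(brain, visual, language)
print(round(r, 2))  # strongly positive: brain geometry tracks the visual model
```

Swapping the roles of `visual` and `language` gives the "vice versa" analysis: the language model's partial correlation after controlling for the visual model.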
Supporting Image: ohbm2025_fig1.png
Figure 1. Methods (see main text)
 

Results:

Results: Figure 2
Top Row: MT DMN showed selectively strong sensitivity to visual representations. The visual model also captured Core DMN, to a lesser degree. The language model made independent contributions to partial RSA in MT DMN, but was more effective at capturing the representational geometry of Core DMN.

Bottom Row: To gain confidence that MT DMN's visual sensitivity was associated with the active imagination of personal experiences, we ran a control analysis on a separate fMRI dataset, scanned as 14 people read 240 sentences without an overt imagination task [11]. An example sentence was "The girl broke the glass at the shop". Visual and language models were built from the sentence stimuli, and RSA was deployed as above. In line with the hypothesis, the visual model no longer captured MT DMN. In contrast, all DMN subsystems and the 14 other networks were sensitive to the language model.
Supporting Image: ohbm2025_fig2.png
Figure 2. Results (see main text)
 

Conclusions:

1. fMRI activation patterns in MT DMN selectively reflected visual representations from computational models.
2. MT DMN visual representations were present when participants imagined personal experiences, but absent when people read sentences without an overt imagination task.

Higher Cognitive Functions:

Imagery 2

Language:

Language Comprehension and Semantics

Learning and Memory:

Long-Term Memory (Episodic and Semantic) 1

Modeling and Analysis Methods:

Activation (eg. BOLD task-fMRI)

Keywords:

Computational Neuroscience
FUNCTIONAL MRI
Language
Machine Learning
Memory

1|2 Indicates the priority used for review

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.

Please indicate below if your study was a "resting state" or "task-activation” study.

Task-activation

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was this research conducted in the United States?

Yes

Are you Internal Review Board (IRB) certified? Please note: Failure to have IRB, if applicable will lead to automatic rejection of abstract.

Yes, I have IRB or AUCC approval

Were any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Were any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Functional MRI

For human MRI, what field strength scanner do you use?

3.0T

Provide references using APA citation style.

1. Andrews-Hanna, J. R. (2012). The brain's default network and its adaptive role in internal mentation. The Neuroscientist, 18, 251–270.
2. Andrews-Hanna, J. R., & Grilli, M. D. (2021). Mapping the imaginative mind: Charting new paths forward. Current Directions in Psychological Science, 30, 82–89.
3. Andrews-Hanna, J. R., Smallwood, J., & Spreng, R. N. (2014). The default network and self-generated thought: Component processes, dynamic control, and clinical relevance. Annals of the New York Academy of Sciences, 1316, 29–52.
4. Shao, X., Krieger-Redwood, K., Zhang, M., Hoffman, P., Lanzoni, L., Leech, R., Smallwood, J., & Jefferies, E. (2024). Distinctive and complementary roles of default mode network subsystems in semantic cognition. Journal of Neuroscience, 44(20).
5. Anderson, A. J., McDermott, K., Rooks, B., Heffner, K. L., Dodell-Feder, D., & Lin, F. V. (2020). Decoding individual identity from brain activity elicited in imagining common experiences. Nature Communications, 11(1).
6. Kriegeskorte, N., Mur, M., & Bandettini, P. A. (2008). Representational similarity analysis: Connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2, 4.
7. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., & Rombach, R. (2023). SDXL: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952.
8. Simonyan, K., & Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations (ICLR 2015), Conference Track Proceedings.
9. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.
10. Yeo, B. T. T., Krienen, F. M., Sepulcre, J., Sabuncu, M. R., Lashkari, D., Hollinshead, M., Roffman, J. L., Smoller, J. W., Zöllei, L., Polimeni, J. R., et al. (2011). The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology, 106, 1125–1165.
11. Anderson, A. J., Binder, J. R., Fernandino, L., Humphries, C. J., Conant, L. L., Aguilar, M., Wang, X., Doko, D., & Raizada, R. D. S. (2016). Predicting neural activity patterns associated with sentences using a neurobiologically motivated model of semantic representation. Cerebral Cortex. doi:10.1093/cercor/bhw240

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries list provided.

No