Neurotranslate: Mapping across MRI brain representations with generative models

Poster No:

1364 

Submission Type:

Abstract Submission 

Authors:

Samuel Naranjo Rincon1, Fyzeen Ahmad2, Ty Easley1, Tristan Glatard3, Gregory Kiar4, Shirin Shoushtari1, Ulugbek Kamilov1, Janine Bijsterbosch1

Institutions:

1Washington University, St. Louis, MO, 2University of Minnesota, Minneapolis, MN, 3Krembil Centre for Neuroinformatics, Toronto, Ontario, 4Center for Data Analytics, Innovation, and Rigor, Child Mind Institute, NYC, NY

First Author:

Samuel Naranjo Rincon  
Washington University
St. Louis, MO

Co-Author(s):

Fyzeen Ahmad  
University of Minnesota
Minneapolis, MN
Ty Easley  
Washington University
St. Louis, MO
Tristan Glatard  
Krembil Centre for Neuroinformatics
Toronto, Ontario
Gregory Kiar  
Center for Data Analytics, Innovation, and Rigor, Child Mind Institute
NYC, NY
Shirin Shoushtari  
Washington University
St. Louis, MO
Ulugbek Kamilov  
Washington University
St. Louis, MO
Janine Bijsterbosch  
Washington University
St. Louis, MO

Introduction:

For over a decade, resting state functional MRI (rfMRI) has been a primary source of data for identifying biomarkers across a range of disorders, yet little of this work has translated into clinical practice. Many different brain representations have been developed to reduce high-dimensional rfMRI data to summary features such as connectomes or maps of spatial network topography.3 Although brain representations share variance2, a key factor limiting clinical translation is the lack of a standardized brain representation. Such a consensus is difficult to reach, however, because no ground truth is available for validation, resulting in an ever-expanding range of brain representations. To support cross-pollination of findings in the presence of many valid brain representations, we introduce 'NeuroTranslate': a deep-learning architecture aimed at mapping across brain representations without time-consuming re-analysis.

Methods:

Brain representations: We focus on translations between two brain representations: the connectome (full correlation between Schaefer-100 parcellated timeseries) and maps of network topography (independent component analysis into 15 networks, followed by dual regression to obtain subject-specific spatial maps).1,8
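To make the two target representations concrete, the following is a minimal, hedged sketch (not the authors' pipeline; array shapes, demeaning, and normalization details are assumptions) of how each could be computed from resting-state timeseries using NumPy:

import numpy as np

def connectome(parcel_ts):
    # Full-correlation connectome: parcel_ts is a (timepoints, 100) Schaefer-100
    # parcellated timeseries; returns a (100, 100) Pearson correlation matrix.
    return np.corrcoef(parcel_ts, rowvar=False)

def dual_regression(voxel_ts, group_maps):
    # Two-stage dual regression against 15 group ICA maps.
    # voxel_ts: (timepoints, vertices); group_maps: (15, vertices).
    # Stage 1: regress group spatial maps onto the data -> subject timecourses (15, timepoints).
    tcs, *_ = np.linalg.lstsq(group_maps.T, voxel_ts.T, rcond=None)
    # Stage 2: regress subject timecourses onto the data -> subject-specific maps (15, vertices).
    maps, *_ = np.linalg.lstsq(tcs.T, voxel_ts, rcond=None)
    return maps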
Models: We developed dedicated deep learning models to translate brain representations in both directions (connectome→topography and topography→connectome). All model architectures have an encoder module (Surface Vision Transformers [SiT]5 or Brain Graph Transformers [BGT]7) and either a simple linear-layer decoder or a more complex decoder module that matches the output representation (Fig. 1). Generative versions of the models were created by adding Variational Autoencoder (VAE) modules to sample the latent space. Loss functions that explicitly account for within- and between-subject variability were combined with a reconstruction loss to improve performance6.
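As a schematic of this generative design, here is a minimal PyTorch sketch of an encoder-decoder translator with a VAE latent head; the placeholder MLP encoder stands in for the SiT/BGT modules, and all dimensions, layer sizes, and loss weights are illustrative assumptions rather than the authors' settings:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TranslatorVAE(nn.Module):
    def __init__(self, in_dim=4950, latent_dim=256, out_dim=8192):
        # in_dim: e.g., vectorized upper triangle of a 100x100 connectome (4950 edges).
        # out_dim: flattened target representation (placeholder size).
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 1024), nn.GELU())  # stand-in for SiT/BGT
        self.to_mu = nn.Linear(1024, latent_dim)
        self.to_logvar = nn.Linear(1024, latent_dim)
        self.decoder = nn.Linear(latent_dim, out_dim)                      # simple linear decoder

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)            # sample the latent space
        return self.decoder(z), mu, logvar

def loss_fn(pred, target, mu, logvar, beta=1e-3):
    recon = F.mse_loss(pred, target)                                       # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())          # VAE regularizer
    # A term capturing within- and between-subject variability (as in the abstract) would be added here.
    return recon + beta * kl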
Data: All models were trained on the Human Connectome Project Young Adult dataset (N = 1002). Data were split approximately 80/10/10 into training (Ntr = 762), validation (Nvl = 98), and testing (Nte = 142) sets, with twins assigned to the same split.
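A family-aware split of this kind can be implemented, for example, with scikit-learn's GroupShuffleSplit, grouping subjects by a family/twin identifier so related subjects never cross splits; the function below is an illustrative sketch, not the authors' exact procedure:

import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def family_aware_split(subject_ids, family_ids, seed=0):
    subject_ids = np.asarray(subject_ids)
    family_ids = np.asarray(family_ids)
    # First carve off roughly 20% of families as a holdout set.
    gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    train_idx, holdout_idx = next(gss.split(subject_ids, groups=family_ids))
    # Then split the holdout in half (validation / test), again keeping families together.
    gss2 = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=seed)
    val_rel, test_rel = next(gss2.split(holdout_idx, groups=family_ids[holdout_idx]))
    return train_idx, holdout_idx[val_rel], holdout_idx[test_rel]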
Model Comparisons: The training sample was used to learn model parameters; the validation set was used only for evaluation every five epochs. The model with the best validation performance was carried forward to testing. Model performance was assessed as the correlation between demeaned true and demeaned predicted brain representations.
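The evaluation metric, as described, can be sketched as follows (a hedged illustration; the exact demeaning axis and averaging are assumptions):

import numpy as np

def demeaned_correlation(true, pred):
    # true, pred: (subjects, features) arrays of flattened brain representations.
    # Remove the across-subject mean from each feature, then correlate per subject.
    true_d = true - true.mean(axis=0, keepdims=True)
    pred_d = pred - pred.mean(axis=0, keepdims=True)
    r = [np.corrcoef(t, p)[0, 1] for t, p in zip(true_d, pred_d)]
    return float(np.mean(r))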
Supporting Image: OHBM_2025_fig1.png
   ·Schematics for generative models
 

Results:

Across translations, performance on the training data exceeded performance on the test data (Fig. 2). Translations from topography to connectome outperformed translations from connectome to topography, consistent with the expectation that translating from a higher-dimensional to a lower-dimensional feature space generalizes better than the reverse. The best-performing model for topography→connectome translations was the VAE version with a SiT encoder and BGT decoder, which achieved good generalization to unseen data (demeaned train correlation 0.45 and demeaned test correlation 0.43).
Supporting Image: OHBM_2025_fig2.png
   ·Performance of generative models
 

Conclusions:

We find that although these models map brain representations very well within the training dataset, some lack generalizability to the testing sample. This points to overfitting, especially in the connectome→topography direction (Fig. 2). This project confirms the presence of shared information across seemingly disparate MRI brain representations2 and establishes that transformations between them can be learned with deep learning models, but further work is needed to achieve full generalizability to unseen data. In future extensions of this project, we will focus on addressing the overfitting challenge by training on the larger Adolescent Brain Cognitive Development (ABCD) dataset and by leveraging diffusion models to learn the data distribution and reconstruct from that approximated statistical space.

Modeling and Analysis Methods:

fMRI Connectivity and Network Modeling 1
Methods Development
Other Methods 2

Neuroinformatics and Data Sharing:

Brain Atlases

Keywords:

Computational Neuroscience
FUNCTIONAL MRI
Other - generative modeling

1|2 Indicates the priority used for review

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.

Please indicate below if your study was a "resting state" or "task-activation" study.

Resting state

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was this research conducted in the United States?

Yes

Are you Internal Review Board (IRB) certified? Please note: Failure to have IRB, if applicable will lead to automatic rejection of abstract.

Not applicable

Were any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Not applicable

Were any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Functional MRI
Computational modeling

For human MRI, what field strength scanner do you use?

3.0T

Which processing packages did you use for your study?

Other, Please list  -   HCP minimal preprocessing pipeline

Provide references using APA citation style.

Beckmann, Christian F., and Stephen M. Smith. 2004. "Probabilistic independent component analysis for functional magnetic resonance imaging." IEEE Transactions on Medical Imaging, 23(2), 137–152. https://doi.org/10.1109/TMI.2003.822821.

Bijsterbosch, Janine D., Mark W. Woolrich, Matthew F. Glasser, Emma C. Robinson, Christian F. Beckmann, David C. Van Essen, Samuel J. Harrison, and Stephen M. Smith. 2018. "The relationship between spatial configuration and functional connectivity of brain regions." eLife 7:e32992. https://doi.org/10.7554/eLife.32992.

Bijsterbosch, Janine D., Samuel J. Harrison, Saad Jbabdi, Mark Woolrich, Christian Beckmann, Stephen Smith, and Eugene P. Duff. 2020. "Challenges and future directions for representations of functional brain organization." Nature Neuroscience, 1–12. https://doi.org/10.1038/s41593-020-00726-z.

Dadi, Kamalaker, Mehdi Rahim, Alexandre Abraham, Darya Chyzhyk, Michael Milham, Bertrand Thirion, and Gaël Varoquaux. 2019. "Benchmarking functional connectome-based predictive models for resting-state fMRI." NeuroImage, 192, 115–134. https://doi.org/10.1016/j.neuroimage.2019.02.062.

Dahan, Simon, Abdulah Fawaz, Logan Z. J. Williams, Chunhui Yang, Timothy S. Coalson, Matthew F. Glasser, A. David Edwards, Daniel Rueckert, and Emma C. Robinson. 2022. "Surface Vision Transformers: Attention-Based Modelling Applied to Cortical Analysis." https://doi.org/10.48550/arXiv.2203.16414.

Jamison, Keith W., Zijin Gu, Qinxin Wang, Ceren Tozlu, Mert R. Sabuncu, and Amy Kuceyeski. 2024. "Release the Krakencoder: A unified brain connectome translation and fusion tool." https://doi.org/10.1101/2024.04.12.589274.

Kan, Xuan, Wei Dai, Hejie Cui, Zilong Zhang, Ying Guo, and Carl Yang. 2022. "Brain Network Transformer." https://doi.org/10.48550/arXiv.2210.06681.

Kong, Ru, Yan Rui Tan, Naren Wulan, Leon Qi Rong Ooi, Seyedeh-Rezvan Farahibozorg, Samuel Harrison, Janine D. Bijsterbosch, Boris C. Bernhardt, Simon Eickhoff, and Thomas Yeo. 2023. "Comparison Between Gradients and Parcellations for Functional Connectivity Prediction of Behavior." NeuroImage, 120044. https://doi.org/10.1016/j.neuroimage.2023.120044.

Nickerson, Lisa D., Stephen M. Smith, Dost Öngür, and Christian F. Beckmann. 2017. "Using Dual Regression to Investigate Network Shape and Amplitude in Functional Connectivity Analyses." Frontiers in Neuroscience, 11, 115. https://doi.org/10.3389/fnins.2017.00115.

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries list provided.

No