Machine Learning for Brain Imaging: Predicting Traits, Disease Progression, and Treatment Response

Juan Helen Zhou, Ph.D. Organizer
National University of Singapore
Singapore
Singapore
 
Daniel Alexander, Ph.D. Co-Organizer
University College London
London
United Kingdom
 
Vince Calhoun Co-Organizer
GSU/GATech/Emory
TReNDS
Atlanta, GA 
United States
 
Neil Oxtoby Co-Organizer
University College London
London
United Kingdom
 
1897 
Symposium 
This symposium is timely and important because of recent advances in multimodal neuroimaging and machine learning, including deep learning and generative models, which now allow more effective learning and representation of brain structure and function. It is crucial to develop machine learning approaches for brain imaging data that are accurate, reliable, generalizable, interpretable, and sustainable. Furthermore, the symposium’s focus on neuroscience and clinical applications, ranging from disease prognosis and progression monitoring to treatment response prediction, makes it highly relevant: it offers deeper insight into how brain mapping combined with machine learning can facilitate more precise and personalized approaches to disease diagnosis and treatment.

Objective

The desired learning outcomes for this symposium are to enhance understanding of integrating machine learning with brain imaging, introduce advanced computational methods for modelling large-scale brain imaging data, explore how these representations link to cognition and mental health, and demonstrate their potential in real-life clinical applications. Additionally, the symposium aims to foster cross-disciplinary collaboration to advance research in AI for neuroscience and medicine. 

Target Audience

The target audience includes neuroscientists, psychologists, clinicians, and computational neuroscientists, as well as anyone interested in the potential of machine learning for neuroscience and medicine. 

Presentations

Data Driven Modelling for Clinical Trials on Alzheimer’s Disease

Most, if not all, papers on data-driven computational modelling of neurodegenerative disease progression can only claim to improve clinical trials, e.g., through trial enrichment and/or reduced sample sizes, because individual patient-level data from clinical trials are rarely shared. The A4 Study has bucked this trend: all data from this secondary prevention clinical trial of solanezumab in preclinical Alzheimer’s disease became available in July 2024. We performed a post hoc subgroup analysis of these data by stratifying trial participants into brain atrophy map subtypes discovered by Subtype and Stage Inference (SuStaIn). Treatment response differed between subgroups, with one subtype showing a trend towards treatment efficacy. I will discuss the implications of this finding and explore how data-driven brain mapping can be incorporated into clinical trial design.
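
As a concrete illustration of the kind of post hoc subgroup analysis described above, the sketch below compares treatment and placebo outcomes within each data-driven subtype. The column names, the outcome variable, and the choice of a Welch t-test are assumptions for illustration only, not the A4 analysis itself.

```python
# Illustrative sketch (not the authors' analysis): per-subtype comparison of
# treatment vs. placebo outcome change after stratifying participants by a
# data-driven subtype label (e.g., from SuStaIn). Column names are assumptions.
import pandas as pd
from scipy import stats

def per_subtype_treatment_effect(df: pd.DataFrame) -> pd.DataFrame:
    """df columns (assumed): 'subtype', 'arm' ('treatment' or 'placebo'),
    'outcome_change' (e.g., change in a cognitive composite)."""
    rows = []
    for subtype, grp in df.groupby("subtype"):
        treated = grp.loc[grp["arm"] == "treatment", "outcome_change"]
        placebo = grp.loc[grp["arm"] == "placebo", "outcome_change"]
        # Welch's t-test: does outcome change differ between arms in this subtype?
        t_stat, p_value = stats.ttest_ind(treated, placebo, equal_var=False)
        rows.append({
            "subtype": subtype,
            "n_treated": len(treated),
            "n_placebo": len(placebo),
            "effect": treated.mean() - placebo.mean(),
            "p_value": p_value,
        })
    return pd.DataFrame(rows)
```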

Presenter

Neil Oxtoby, University College London, London
United Kingdom

A Real-world Clinical Validation for AI-based MRI Monitoring in Multiple Sclerosis

Modern management of multiple sclerosis (MS) targets No Evidence of Disease Activity (NEDA): no clinical relapses, no magnetic resonance imaging (MRI) disease activity, and no disability worsening. While MRI is the principal tool available to neurologists for monitoring clinically silent MS disease activity and, where appropriate, escalating treatment, standard radiology reports are qualitative and may be insensitive to the development of new or enlarging lesions. Existing quantitative neuroimaging tools lack adequate clinical validation. In this talk, we will discuss our recent work addressing these gaps. In 397 multi-center MRI scan pairs acquired in routine practice, we demonstrate superior case-level sensitivity of a clinically integrated AI-based tool over standard radiology reports (93.3% vs 58.3%), relative to a consensus ground truth, with minimal loss of specificity. We also demonstrate equivalence of the AI tool with a core clinical trial imaging laboratory for lesion activity and quantitative brain volumetric measures, including percentage brain volume change (PBVC), an accepted biomarker of neurodegeneration in MS (mean PBVC −0.32% vs −0.36%, respectively); in contrast, even severe atrophy (>0.8% loss) was not appreciated in radiology reports. Finally, the AI tool embeds a clinically meaningful, experiential comparator that returns a relevant MS patient centile for lesion burden, revealing, in our cohort, inconsistencies in the qualitative descriptors used in radiology reports. AI-based image quantitation enhances the accuracy of, and adds value to, qualitative radiology reporting. Scaled deployment of these tools will open a path to precision management for patients with MS. 
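
For readers unfamiliar with the quantities quoted above, the sketch below shows how case-level sensitivity and specificity against a consensus ground truth, and PBVC between two time points, can be computed. The function and variable names are illustrative assumptions; in practice, dedicated tools (e.g., SIENA) estimate PBVC from image registration rather than from precomputed volumes.

```python
# Minimal sketch of the case-level agreement metrics quoted above: sensitivity
# and specificity of binary "new/enlarging lesion activity" calls against a
# consensus ground truth. Names and inputs are illustrative assumptions.
import numpy as np

def sensitivity_specificity(predicted: np.ndarray, truth: np.ndarray):
    """predicted, truth: boolean arrays, one entry per scan pair
    (True = lesion activity present)."""
    tp = np.sum(predicted & truth)    # activity correctly detected
    fn = np.sum(~predicted & truth)   # activity missed
    tn = np.sum(~predicted & ~truth)  # stability correctly reported
    fp = np.sum(predicted & ~truth)   # activity falsely reported
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return float(sensitivity), float(specificity)

def pbvc(baseline_volume_ml: float, followup_volume_ml: float) -> float:
    """Percentage brain volume change between two time points, relative to
    baseline; negative values indicate atrophy."""
    return 100.0 * (followup_volume_ml - baseline_volume_ml) / baseline_volume_ml
```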

Presenter

Chenyu Wang, The University of Sydney, Sydney, New South Wales 
Australia

Brain Dynamics Foundation Model: Pretraining and Adaptation for Disease Prognosis and Trait Prediction

Foundation models have emerged as powerful tools for analyzing large-scale brain activity data. In this talk, I will present recent advances in brain foundation models, focusing on our work on Brain-JEPA, which introduces a Joint-Embedding Predictive Architecture (JEPA) for brain dynamics. Brain-JEPA achieves outstanding performance across multiple tasks, including demographic prediction, disease diagnosis, and trait prediction. The model features two key innovations: Brain Gradient Positioning, which introduces a functional coordinate system for brain parcellation, and Spatiotemporal Masking, which addresses the unique challenges of heterogeneous fMRI time-series patches. I will also briefly discuss our complementary approaches, including Scaffold Prompt Tuning (ScaPT) for efficient adaptation of brain foundation models and the Brain Tokenized Graph Transformer (TokenGT) for longitudinal analysis. These advances demonstrate the potential of foundation models to transform our understanding and analysis of brain activity data. 
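
To make the Spatiotemporal Masking idea concrete, the sketch below generates a random mask over patches of a parcel-by-time fMRI matrix, so that a JEPA-style predictor must infer representations of the masked patches from the visible context. The patch sizes, masking ratio, and function name are assumptions chosen for illustration; this is not the Brain-JEPA implementation.

```python
# Minimal sketch (assumed details): cut an fMRI matrix (parcels x timepoints)
# into a grid of spatiotemporal patches and mask a random subset, yielding
# masked "target" patches and visible "context" patches.
import numpy as np

def spatiotemporal_mask(fmri: np.ndarray, patch_parcels: int, patch_time: int,
                        mask_ratio: float = 0.5, seed: int = 0) -> np.ndarray:
    """fmri: array of shape (n_parcels, n_timepoints). Returns a boolean mask
    over the patch grid: True = masked (to predict), False = visible (context)."""
    n_parcels, n_time = fmri.shape
    grid = (n_parcels // patch_parcels, n_time // patch_time)
    n_patches = grid[0] * grid[1]
    n_masked = int(round(mask_ratio * n_patches))
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_patches, dtype=bool)
    mask[rng.choice(n_patches, size=n_masked, replace=False)] = True
    return mask.reshape(grid)

# Example: 400 parcels x 490 timepoints, 20-parcel x 49-timepoint patches,
# half of the resulting 200 patches masked.
mask = spatiotemporal_mask(np.random.randn(400, 490), patch_parcels=20, patch_time=49)
```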

Presenters

Zijian Dong, National University of Singapore, Singapore
Singapore

Juan Helen Zhou, Ph.D., National University of Singapore, Singapore
Singapore

Deep Generative Modeling for Latent Source Separation and Psychosis Continuum Estimation from Neuroimaging Data

In this talk, we will introduce Deep Multidataset Independent Subspace Analysis (DeepMISA), a unified framework that encompasses multiple linear and nonlinear multivariate methods, including Multimodal Independent Vector Analysis (MMIVA) (Silva et al. 2024), Multimodal Subspace Independent Vector Analysis (MSIVA) (Li et al. 2024a), and Deep Independent Vector Analysis (DeepIVA) (Li et al., in preparation). We demonstrate that DeepMISA methods successfully recover multimodal sources that are linearly or nonlinearly mixed from various synthetic datasets, significantly outperforming baseline methods. We then show that DeepMISA methods reveal linked sources associated with phenotypic measures such as age, sex and psychosis in large-scale multimodal neuroimaging datasets. Next, we will present a functional network connectivity (FNC) interpolation framework (Li et al. 2024b), which uses an unsupervised generative model to capture the neuropsychiatric continuum and heterogeneity. We apply this framework to interpolate static FNC (sFNC) and dynamic FNC (dFNC) data from controls and patients with schizophrenia or autism spectrum disorder. Our results show that the proposed framework captures individual variability, sFNC progression patterns, and group-specific dFNC states, providing new insights into personalized mental disorder characterization and progression prediction. Finally, we highlight the advantages of deep generative models in neuroimaging analysis and discuss future directions. 
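
As a conceptual sketch of how a generative model can interpolate along a clinical continuum, the snippet below encodes FNC matrices, interpolates between group-mean latent codes, and decodes the intermediate points. The encode/decode callables, the linear interpolation path, and all names are assumptions for illustration rather than the published framework.

```python
# Conceptual sketch (assumed names; not the authors' code): latent-space
# interpolation between control and patient groups with a trained generative
# model, decoding intermediate points to traverse a putative continuum of FNC.
import numpy as np

def interpolate_fnc(encode, decode, fnc_controls, fnc_patients, n_steps=5):
    """encode/decode: callables of a trained generative model (assumption).
    fnc_controls, fnc_patients: arrays of vectorized FNC matrices, one row per
    subject. Returns decoded FNC estimates at evenly spaced interpolation steps."""
    z_control = encode(fnc_controls).mean(axis=0)   # group-mean latent code
    z_patient = encode(fnc_patients).mean(axis=0)
    alphas = np.linspace(0.0, 1.0, n_steps)
    # Decode points along the straight line between the two group means.
    return np.stack([decode((1 - a) * z_control + a * z_patient) for a in alphas])
```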

Presenter

Xinhui Li, Georgia Institute of Technology, Atlanta, GA 
United States