Explainable AI (XAI) in neuroimaging: challenges and future directions

James Cole, PhD, Organizer
University College London
Computer Science
London, London 
United Kingdom
 
Mohamad Habes, Co-Organizer
University of Texas Health San Antonio
San Antonio, TX 
United States
 
Wednesday, Jun 26: 3:45 PM - 5:00 PM
1054 
Symposium 
COEX 
Room: Hall D 2 
While XAI is an important topic for wider society, given the increasingly pervasive nature of AI in many domains, the importance of XAI for neuroimaging in particular is rapidly increasing with the more widespread adoption of AI/ML methods. Most importantly, the performance of AI/ML models is now approaching levels appropriate for clinical use. However, without intuitive explanations of model predictions, clinical trust will be reduced, limiting the wider use of neuroimaging to help patients with brain disorders. This makes XAI a pivotal component of clinical neuroimaging and a very timely topic, since commercial applications for clinical neuroimaging are becoming increasingly available.

We aim to elucidate XAI and its importance for neuroimaging, with the goal of presenting the state of the art to the field and providing an overview of where XAI in neuroimaging is heading in the future. Our specific learning outcomes are:
- To understand what XAI is.
- To appreciate how XAI will help clinical adoption of neuroimaging analysis.
- To recognise the challenges in validating XAI methods.
- To learn about future directions for XAI in neuroimaging.

Objective

- To understand what XAI is.
- To appreciate how XAI will help clinical adoption of neuroimaging analysis.
- To recognise the challenges in validating XAI methods.
 

Target Audience

Neuroimagers using artificial intelligence, deep learning or machine learning.
Clinicians interested in clinical applications of neuroimaging.
 

Presentations

A systematic evaluation and validation of explainable deep learning methods using amyloid-PET classification in dementia as a ground truth

One of the earliest signs of Alzheimer's disease (AD) is the accumulation of abnormal amyloid-beta protein in the brain. Imaging techniques such as positron emission tomography (PET) can detect these amyloid deposits, identifying individuals with a high amyloid burden at an early stage and signalling those at risk of developing dementia.

Deep learning models have been shown to be highly effective at pattern recognition, with high diagnostic performance across several medical tasks. However, the complex nature of these models makes it difficult to trust their predictions, which is a key obstacle to adopting these tools in clinical practice. Explainable AI (XAI) is an emerging research area aiming to produce human-interpretable, robust, and useful explanations. Validating model explanations has proved a challenge across existing studies, particularly in the medical imaging domain, due to difficulties in obtaining a ground-truth representation of the disease-specific pathology (Martin et al. 2023, https://doi.org/10.1002/alz.12948). Efforts to validate model explanations have so far relied on synthetic datasets, in which sources of bias or synthetically generated features can be controlled (Stanley et al. 2023, https://arxiv.org/pdf/2311.02115.pdf). However, these studies are often limited to localised features such as brain lesions and do not evaluate a model's ability to identify sparser patterns across the images. Moreover, few studies focus on validating model explanations in the context of dementia with structural imaging, as it is difficult to define a suitable “explanation” given patient heterogeneity and the widespread, global nature of typical AD pathology.
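As a minimal sketch of the synthetic-ground-truth idea cited above (illustrative shapes, intensities, and function names, not the cited authors' pipeline), one can inject a known localised feature into images so that the "correct" explanation is defined by construction:

import numpy as np

def inject_blob(image, center, radius=4, intensity=2.0):
    """Add a spherical hyperintensity at a known location; the returned mask
    serves as the ground-truth 'explanation' for the synthetic feature."""
    zz, yy, xx = np.indices(image.shape)
    dist2 = (zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2
    mask = dist2 <= radius ** 2
    out = image.copy()
    out[mask] += intensity
    return out, mask

# Example: add a synthetic feature to one 'positive-class' volume.
volume = np.random.rand(64, 64, 64)
lesioned, truth_mask = inject_blob(volume, center=(32, 20, 40))

A classifier trained on such data can then be explained, and the attribution map checked against truth_mask; the limitation noted above is that this only probes localised features.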

In this work, we utilised PET imaging and deep learning to classify amyloid-positive and amyloid-negative individuals based on visual reads from the AMYPAD PNHS dataset. We employed a 3D convolutional neural network, a powerful framework for image classification but one with limited interpretability due to its black-box nature. Although identifying amyloid positivity from PET images is not clinically challenging, the task offers a unique opportunity to assess whether the brain regions deemed important by model explanations correlate with the underlying disease pathology. Specifically, we leveraged the fact that amyloid uptake is highly correlated with the prevalence of the disease and is clearly visible via PET imaging. We compared the heatmaps produced by state-of-the-art XAI methods by correlating regional importance scores with regional visual reads, SUVR values, and Centiloid quantification. In this talk, I will share the results of this project, demonstrating a systematic approach to XAI method validation in the context of neuroimaging and dementia research.
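As a rough illustration of this validation strategy (a sketch, not the presenters' exact pipeline), a voxel-wise attribution map can be summarised within atlas regions and rank-correlated with regional SUVR values; heatmap, suvr_map, and atlas are hypothetical names for co-registered 3D arrays.

import numpy as np
from scipy.stats import spearmanr

def regional_means(volume, atlas):
    """Average a voxel-wise map within each atlas region (label 0 = background)."""
    labels = np.unique(atlas)
    labels = labels[labels != 0]
    return np.array([volume[atlas == lab].mean() for lab in labels])

def heatmap_vs_suvr(heatmap, suvr_map, atlas):
    """Rank-correlate regional XAI importance scores with regional SUVR values."""
    importance = regional_means(np.abs(heatmap), atlas)
    suvr = regional_means(suvr_map, atlas)
    return spearmanr(importance, suvr)

The same regional summary could be correlated against visual reads or Centiloid values in place of SUVR.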
 

Presenter

Sophie Martin, MRes, University College London
London, United Kingdom

Cortical mapping of kinematic parameters during upper limb movement using explainable AI

Cortical representation of motor kinematics is crucial for understanding human motor behaviour. Although conventional single-neuron studies have established a relationship between neuronal activity and motor kinematics such as acceleration, velocity, and position, it is hard to distinguish these neural representations with macroscopic modalities such as electroencephalography (EEG) and magnetoencephalography (MEG) due to their limited spatial resolution.
Deep neural network (DNN) models have shown excellent performance in predicting movement characteristics. This presentation demonstrates that the neural features of each kinematic parameter can be identified by applying an explainable AI method to a time-series DNN decoding model. We computed integrated gradients between cortical activity and the predicted kinematic parameters during reaching movements (Kim et al., 2023, https://doi.org/10.1016/j.neuroimage.2022.119783), and extracted from the DNN model the cortical regions that contribute most strongly to decoding each kinematic parameter.
There are common regions across the kinematic parameters, including the bilateral supramarginal gyri and superior parietal lobules, which are known to be related to the goal of movement and to sensory integration. There are also dominant regions for each kinematic parameter (acceleration, velocity, and position). In addition, by evaluating differences in cortical contribution values across movement directions, we characterised the global contribution of the brain to movement. Movement prediction also required ipsilateral as well as contralateral activity. The explainable AI approach can thus decompose brain processes into various kinematic components.
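A minimal sketch of the attribution step described above, using the Captum implementation of integrated gradients on a toy regressor; the model architecture, tensor shapes, and variable names are placeholders rather than the actual MEG/EEG decoder.

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

n_sources, n_times = 68, 200  # e.g. cortical parcels x time samples (illustrative)

# Toy regressor mapping source-space activity to a single kinematic value.
model = nn.Sequential(nn.Flatten(), nn.Linear(n_sources * n_times, 1))
model.eval()

x = torch.randn(1, n_sources, n_times)  # one simulated trial
ig = IntegratedGradients(model)
attributions = ig.attribute(x, baselines=torch.zeros_like(x), target=0)

# Contribution of each cortical source = mean absolute attribution over time.
source_contribution = attributions.abs().mean(dim=2).squeeze()

Ranking source_contribution then gives the cortical regions that dominate the decoding of a given kinematic parameter.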
 

Presenter

June Sic Kim, Konkuk University Medical Center
Clinical Research Institute
Seoul
Republic of Korea

Leveraging Model Explanations to Uncover Hidden Insights in Neuroimaging Data

Machine learning model explanations provide transparency into how models make decisions. However, the utility of these explanations extends far beyond model interpretation. As demonstrated by recent studies, model explanations can be quantitatively analyzed to directly test scientific hypotheses and gain new insights.
This talk will discuss how explanation methods have been used for subtype discovery and group-level inference in two recent neuroimaging studies. Schulz et al. (2020) performed clustering analysis directly on explanation images from a diagnostic classifier. By amplifying disease-relevant variations, this allowed more reliable discovery of disease subtypes compared to clustering on the original data. In a separate study, Schulz et al. (2023) tested whether psychological stress impacts structural brain health through similar mechanisms in healthy individuals and MS patients. To do so, they conducted inferential statistics on explanation images for a brain age prediction model. Again, the explanation space provided a representation optimized for the scientific question.
These examples demonstrate the potential of explanations not just for transparency, but as a refined data representation that can empower new research strategies. This talk will review these studies and discuss opportunities and challenges for quantitative analysis of explanations in future neuroimaging studies. 
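As an illustrative sketch of the subtype-discovery idea (an assumed workflow, not the published code), subjects can be clustered directly on their flattened attribution maps rather than on the raw images:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical input: one flattened attribution map per subject.
n_subjects, n_voxels = 200, 50000
explanations = np.random.rand(n_subjects, n_voxels)

embedded = PCA(n_components=20).fit_transform(explanations)  # reduce dimensionality
subtypes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedded)

The same explanation matrix can instead be fed into standard group-level inferential statistics, as in the brain-age example above.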

Presenter

Marc-Andre Schulz, Charité Berlin
Germany

Explainable AI methods reveal Alzheimer’s disease patterns aligning with a large meta-analysis of neuroimaging studies

While deep neural networks have proved to be superior in the early detection of Alzheimer's disease (AD), the inherent complexity of such models poses challenges to their interpretability. In this talk, I will discuss state-of-the-art heatmap-based explainable artificial intelligence (XAI) techniques, which we have adopted to provide visual interpretations of deep learning decisions. Nonetheless, the absence of a definitive ground truth for comparison complicates the validation of these interpretive heatmaps.

Our team addresses this issue by comparing heatmaps generated from deep neural networks trained for AD classification against an established ground truth derived from a comprehensive meta-analysis of 77 independent voxel-based morphometry (VBM) studies. Utilizing T1-weighted MRI images from the ADNI database, we developed 3D CNN classifiers and applied three leading XAI heatmap methodologies: Layer-wise Relevance Propagation (LRP), Integrated Gradients (IG), and Guided Grad-CAM (GGC). We then obtained precise quantitative measures by computing overlap with the ground truth.

The findings were significant: all three XAI methods consistently highlighted brain regions agreeing with the meta-analytic map, with IG showing superior alignment. Moreover, the three heatmap methods outperformed linear Support Vector Machine (SVM) models, indicating that applying the latest heatmap techniques to deep nonlinear models can generate more meaningful brain maps than linear and shallow models. Ultimately, our research underscores the efficacy of XAI methods in elucidating the impact of Alzheimer's disease, thereby enhancing the biological interpretability and utility of deep learning in neuroimaging research.
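A hedged sketch of the kind of overlap measure referred to above, comparing a thresholded group-level XAI heatmap against a binarised meta-analytic map; the threshold choices and variable names are illustrative assumptions rather than the study's exact metric.

import numpy as np

def dice_overlap(heatmap, meta_map, heatmap_pct=95, meta_thresh=0.0):
    """Dice coefficient between the top heatmap voxels and the meta-analytic mask."""
    hm_mask = heatmap >= np.percentile(heatmap, heatmap_pct)
    meta_mask = meta_map > meta_thresh
    intersection = np.logical_and(hm_mask, meta_mask).sum()
    return 2.0 * intersection / (hm_mask.sum() + meta_mask.sum())

Computing this score for LRP, IG, and GGC heatmaps (and for an SVM weight map) allows the methods to be ranked against the same meta-analytic reference.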

Presenter

Mohamad Habes, University of Texas Health San Antonio
San Antonio, TX
United States