Explainable AI methods reveal Alzheimer’s disease patterns aligning with a large meta-analysis of neuroimaging studies

Mohamad Habes (Presenter)
University of Texas Health San Antonio
San Antonio, TX 
United States
 
Wednesday, Jun 26: 3:45 PM - 5:00 PM
Symposium 
COEX 
Room: Hall D 2 
While deep neural networks have shown strong performance in the early detection of Alzheimer's disease (AD), the inherent complexity of such models poses challenges to their interpretability. In this talk, I will discuss state-of-the-art heatmap-based explainable artificial intelligence (XAI) techniques, which we have adopted to provide visual interpretations of deep learning decisions. Nonetheless, the absence of a definitive ground truth complicates the validation of these interpretive heatmaps. Our team addresses this issue by comparing heatmaps generated from deep neural networks trained for AD classification against an established ground truth derived from a comprehensive meta-analysis of 77 independent voxel-based morphometry (VBM) studies. Using T1-weighted MRI images from the ADNI database, we developed 3D CNN classifiers and applied three leading XAI heatmap methodologies: Layer-wise Relevance Propagation (LRP), Integrated Gradients (IG), and Guided Grad-CAM (GGC). We then obtained precise quantitative measures by computing the overlap of each heatmap with the ground truth. The findings were consistent: all three XAI methods highlighted brain regions agreeing with the meta-analytic map, with IG showing the closest alignment. Moreover, all three heatmap methods outperformed linear Support Vector Machine (SVM) models, indicating that applying the latest heatmap techniques to deep nonlinear models can generate more meaningful brain maps than linear and shallow models. Ultimately, our research underscores the efficacy of XAI methods in elucidating the effects of Alzheimer's disease on the brain, thereby enhancing the biological interpretability and utility of deep learning in neuroimaging research.
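For readers who want a concrete sense of the pipeline described above, the following is a minimal sketch, not the study's actual code: it computes an Integrated Gradients heatmap for a toy 3D CNN with Captum and measures its overlap with a binary ground-truth mask via the Dice coefficient. The network architecture, input shape, attribution threshold, and mask below are placeholder assumptions; in the actual work, the input would be a preprocessed T1-weighted MRI volume and the mask would come from the meta-analytic VBM map.

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class Tiny3DCNN(nn.Module):
    """Minimal stand-in for an AD-vs-control 3D CNN classifier (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(4),
        )
        self.classifier = nn.Linear(8 * 4 * 4 * 4, 2)  # two classes: control vs. AD

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = Tiny3DCNN().eval()

# Dummy volume standing in for a preprocessed T1-weighted MRI scan.
volume = torch.randn(1, 1, 32, 32, 32, requires_grad=True)

# Integrated Gradients: accumulate gradients along a straight path from an
# all-zero baseline image to the input, attributing the AD class logit.
ig = IntegratedGradients(model)
attr = ig.attribute(volume, baselines=volume * 0, target=1, n_steps=32)

# Binarize the absolute attribution map at an arbitrary percentile (90th
# here, purely illustrative) and compare it against a binary mask.
heat = attr.abs().squeeze()
pred_mask = heat > torch.quantile(heat, 0.90)
gt_mask = torch.zeros_like(pred_mask)      # placeholder for a meta-analytic mask
gt_mask[8:24, 8:24, 8:24] = True

# Dice coefficient as one simple quantitative overlap measure.
inter = (pred_mask & gt_mask).sum().item()
dice = 2 * inter / (pred_mask.sum().item() + gt_mask.sum().item())
print(f"Dice overlap with ground-truth mask: {dice:.3f}")

The same scaffold extends to the other heatmap methods mentioned in the talk, since Captum also provides LRP and Guided Grad-CAM attribution classes; only the attribution step changes, while the overlap computation stays identical.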