Leveraging Model Explanations to Uncover Hidden Insights in Neuroimaging Data

Marc-André Schulz, Presenter
Charité
Berlin
Germany
 
Wednesday, Jun 26: 3:45 PM - 5:00 PM
Symposium 
COEX 
Room: Hall D 2 
Machine learning model explanations provide transparency into how models make decisions. However, the utility of these explanations extends far beyond model interpretation. As recent studies demonstrate, model explanations can themselves be analyzed quantitatively to test scientific hypotheses and uncover new insights.
This talk will discuss how explanation methods have been used for subtype discovery and group-level inference in two recent neuroimaging studies. Schulz et al. (2020) performed a clustering analysis directly on explanation images from a diagnostic classifier. Because the explanation images amplify disease-relevant variation, this enabled more reliable discovery of disease subtypes than clustering on the original data. In a separate study, Schulz et al. (2023) tested whether psychological stress affects structural brain health through similar mechanisms in healthy individuals and patients with multiple sclerosis (MS). To do so, they conducted inferential statistics on explanation images for a brain age prediction model. Again, the explanation space provided a representation better suited to the scientific question.
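To make the general idea concrete, the sketch below illustrates the workflow of analyzing explanations rather than raw images. It is not the authors' actual pipeline: it assumes synthetic data, a simple linear classifier, and an input-times-weights attribution as a stand-in for the attribution methods used in the cited studies.

```python
# Minimal sketch (illustrative only, not the published pipeline):
# derive per-subject explanation maps from a diagnostic classifier
# and cluster them in "explanation space" for subtype discovery.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for vectorized brain images: n_subjects x n_voxels.
X = rng.normal(size=(200, 500))
y = rng.integers(0, 2, size=200)  # 0 = control, 1 = patient (toy labels)

# 1) Fit the diagnostic classifier on the original images.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# 2) Per-subject explanation maps. For a linear model, a simple choice is
#    the elementwise product of input and weights ("input x gradient");
#    gradient- or relevance-based attributions would replace this step
#    for deep models.
explanations = X * clf.coef_  # shape: n_subjects x n_voxels

# 3) Subtype discovery: cluster patients on their explanation maps rather
#    than on the raw images, so clusters reflect disease-relevant variation.
patient_maps = explanations[y == 1]
subtypes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(patient_maps)
print("patients per putative subtype:", np.bincount(subtypes))

# 4) Group-level inference would proceed analogously: mass-univariate
#    statistics (e.g., voxelwise two-sample tests) applied to the
#    explanation maps of two groups instead of to the raw images.
```

The same pattern carries over to the brain-age example: compute explanation images for the prediction model, then run standard inferential statistics on those images across groups.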
These examples demonstrate that explanations are valuable not only for transparency but also as a refined data representation that can enable new research strategies. This talk will review these studies and discuss opportunities and challenges for the quantitative analysis of explanations in future neuroimaging studies.