Multi-modal predictors of fMRI-identified brain states

Poster No:


Submission Type:

Abstract Submission 


Karl-Heinz Nenning1, Arielle Tambini1, Eduardo Gonzalez-Moreira1, Ting Xu2, Takuya Ito3, Francisco Castellanos4, Stanley Colcombe1, Alexandre Franco1, Michael Milham2


1Nathan Kline Institute, Orangeburg, NY, 2Child Mind Institute, New York, NY, 3IBM Research, Yorktown Heights, NY, 4NYU Grossman School of Medicine, New York, NY

First Author:

Karl-Heinz Nenning  
Nathan Kline Institute
Orangeburg, NY


Arielle Tambini  
Nathan Kline Institute
Orangeburg, NY
Eduardo Gonzalez-Moreira  
Nathan Kline Institute
Orangeburg, NY
Ting Xu  
Child Mind Institute
New York, NY
Takuya Ito, PhD  
IBM Research
Yorktown Heights, NY
Francisco Castellanos  
NYU Grossman School of Medicine
New York, NY
Stanley Colcombe  
Nathan Kline Institute
Orangeburg, NY
Alexandre Franco  
Nathan Kline Institute
Orangeburg, NY
Michael Milham  
Child Mind Institute
New York, NY


Human behavior and attentional focus vary over time. Functional magnetic resonance imaging (fMRI) is well suited to track the associated dynamic and spatial patterns of widespread network interactions, referred to as brain states [1]. Previous work has shown that fMRI-identified brain states are related to fluctuations in ongoing behavior and may be promising markers of psychiatric disorders [2,3]. However, the practical challenges of high-quality fMRI data collection (extensive infrastructure, cost, and difficulties for patient populations), especially at large scales, fundamentally limit its utility. In contrast, pupillometry and electroencephalography (EEG) are readily available, less expensive methods that are better suited for large-scale data collection and clinical applications. Here, we examine the utility of non-fMRI signals (pupillometry, EEG) for characterizing and predicting fMRI-identified brain states, bridging the gap between sensitive fMRI markers and more accessible physiological measures.


We studied an openly available dataset that includes the simultaneous collection of fMRI, EEG, eye tracking, and other physiological measures [4]. Data from 22 individuals were acquired across two sessions while participants performed multiple tasks, including resting-state, flickering-checkerboard, and naturalistic movie-viewing paradigms, allowing us to characterize brain states associated with activity across stimulus-driven task conditions and task-free states. Co-activation pattern (CAP) analysis [6] was used to characterize fMRI-identified brain states across task conditions. We first examined whether the strength of each CAP (i.e., a graded measure of brain state) cross-correlated reliably with non-fMRI signals (i.e., pupil diameter, PD) at varying temporal lags. We next adapted a previously introduced regression framework to establish pupillometry and EEG predictors of time-varying fMRI-identified brain states [5]. The fMRI-identified CAP strength served as the regression target; for each time point, non-fMRI features within a preceding time window were used to train the coefficients of the prediction model. We performed cross-subject prediction (leave-one-participant-out) to test the generalizability of the non-fMRI predictors and quantified performance as the Pearson correlation between predicted and actual data.
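The lagged cross-correlation and the windowed leave-one-participant-out regression described above can be sketched as follows. This is a minimal illustration, not the authors' implementation (which follows the framework of [5]): the function names, window length, and the plain least-squares fit are assumptions for clarity.

```python
import numpy as np

def lagged_xcorr(cap_strength, pupil, max_lag):
    """Pearson correlation between CAP strength and pupil diameter
    at temporal lags from -max_lag to +max_lag (in samples/TRs)."""
    lags = list(range(-max_lag, max_lag + 1))
    out = []
    for lag in lags:
        if lag < 0:
            a, b = cap_strength[:lag], pupil[-lag:]
        elif lag > 0:
            a, b = cap_strength[lag:], pupil[:-lag]
        else:
            a, b = cap_strength, pupil
        out.append(np.corrcoef(a, b)[0, 1])
    return np.array(lags), np.array(out)

def window_features(signal, window):
    """For each time point, stack the preceding `window` samples of a
    non-fMRI signal as predictors (the first `window` points are dropped,
    so targets must be aligned as target[window:])."""
    return np.stack([signal[t - window:t] for t in range(window, len(signal))])

def loso_predict(features_per_subj, targets_per_subj):
    """Leave-one-subject-out regression via ordinary least squares
    (a stand-in for the adapted framework of [5]); returns the Pearson r
    between predicted and actual CAP strength for each held-out subject."""
    n = len(features_per_subj)
    rs = []
    for test in range(n):
        X_train = np.vstack([features_per_subj[i] for i in range(n) if i != test])
        y_train = np.concatenate([targets_per_subj[i] for i in range(n) if i != test])
        coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
        y_pred = features_per_subj[test] @ coef
        rs.append(np.corrcoef(y_pred, targets_per_subj[test])[0, 1])
    return np.array(rs)
```

In practice, a regularized regression and task-specific window lengths would likely be needed; the sketch only conveys the structure of the cross-subject evaluation.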


Temporal clustering of the fMRI data yielded 4 pairs of CAPs comprising distinct modes of time-varying brain states: one CAP strongly weighted the visual network, another reflected internal vs. external focus (differentially weighting the default mode vs. dorsal attention networks), and others weighted frontoparietal networks. Preliminary results of the cross-subject regression framework revealed that PD was predictive of the visual CAPs during the checkerboard stimulus (r=0.3) and of the internal-focus CAP during movie watching (r=0.25), but less reliable during rest (r<0.05 for all CAPs). Single-electrode EEG prediction showed reliable predictive performance (r>0.5) for all tasks and CAPs, and the electrodes associated with CAP-specific spatial patterns exhibited the most reliable predictions, demonstrating the feasibility of the approach.


Our preliminary findings suggest the feasibility of bridging the gap between sensitive fMRI markers of dynamic brain states and more readily scalable physiological measures such as EEG and pupillometry. A generalizable model could have practical implications, enhancing the value of more accessible physiological measures. Further research is necessary to identify the most sensitive non-fMRI features and their optimal predictive potential.

Modeling and Analysis Methods:

Classification and Predictive Modeling 1
EEG/MEG Modeling and Analysis
Other Methods 2

Novel Imaging Acquisition Methods:



Electroencephalography (EEG)
Other - eyetracking; brain states; EEG-fMRI; co-activation pattern

1|2 Indicates the priority used for review

References:

1. Greene, A.S. (2023), ‘Why is everyone talking about brain state?’, Trends in Neurosciences, vol. 46, pp. 508–524.
2. Marshall, E. (2020), ‘Coactivation pattern analysis reveals altered salience network dynamics in children with autism spectrum disorder’, Network Neuroscience, vol. 4, pp. 1219–1234.
3. Cai, W. (2021), ‘Latent brain state dynamics distinguish behavioral variability, impaired decision-making, and inattention’, Molecular Psychiatry, vol. 26, pp. 4944–4957.
4. Telesford, Q.K. (2023), ‘An open-access dataset of naturalistic viewing using simultaneous EEG-fMRI’, Scientific Data, vol. 10, pp. 1–13.
5. Meir-Hasson, Y. (2014), ‘An EEG Finger-Print of fMRI deep regional activation’, NeuroImage, vol. 102, pp. 128–141.
6. Liu, X. (2018), ‘Co-activation patterns in resting-state fMRI signals’, NeuroImage, vol. 180, pp. 485–494.