Poster No:
1092
Submission Type:
Abstract Submission
Authors:
Chang Li1, Yamin Li1, Haatef Pourmotabbed1, Shengchao Zhang2, Jorge Salas1, Roza Bayrak1, Catie Chang1
Institutions:
1Vanderbilt University, Nashville, TN, 2Rhode Island Hospital (Brown University Health), Providence, RI
First Author:
Chang Li
Vanderbilt University
Nashville, TN
Co-Author(s):
Yamin Li
Vanderbilt University
Nashville, TN
Haatef Pourmotabbed
Vanderbilt University
Nashville, TN
Shengchao Zhang
Rhode Island Hospital (Brown University Health)
Providence, RI
Jorge Salas
Vanderbilt University
Nashville, TN
Roza Bayrak
Vanderbilt University
Nashville, TN
Catie Chang
Vanderbilt University
Nashville, TN
Introduction:
Detecting vigilance states from resting-state fMRI scans is challenging due to the lack of overt behavioral responses, and vigilance indicators (such as EEG and pupillometry) are typically absent from fMRI datasets. Previous work has demonstrated that vigilance levels can be inferred directly from fMRI scans (Tagliazucchi & Laufs, 2014; Goodale et al., 2021; Zhang et al., 2023). However, current methods for vigilance-state classification are limited either in temporal resolution or in their ability to quantify corresponding states across scans (as opposed to relative variations within scans).
EEG can be acquired simultaneously with fMRI and provides well-established indicators of vigilance. Leveraging EEG information together with fMRI allows brain patterns to be learned with high temporal and spatial resolution. We propose a deep learning vigilance detection model that is trained on paired fMRI and EEG data to capture intrinsic vigilance-related brain patterns. At test time, the model uses fMRI data alone for vigilance-state detection, compensating for the typical absence of paired vigilance indicators.
Methods:
We collected 29 resting-state fMRI scans from 22 healthy subjects (3T scanner, TR=2.1s) with simultaneous 32-channel EEG. After removing the EMG/ECG channels, 26 EEG channels are used. From the fMRI data, time series are extracted from 64 regions of interest (ROIs) defined by the Dictionaries of Functional Modes (DiFuMo) atlas (Dadi et al., 2020). We use an 80%/20% train/test split, with no data from test subjects seen during training. Another dataset collected at a different site, consisting of 28 resting-state EEG-fMRI scans from 14 healthy subjects (3T, TR=2.1s), is used for external validation.
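As a concrete illustration of the ROI extraction step, the sketch below uses nilearn to fetch the 64-component DiFuMo atlas cited above and project a scan onto it. The abstract does not name the software used for this step, so the tooling and the input file name are assumptions.

from nilearn.datasets import fetch_atlas_difumo
from nilearn.maskers import NiftiMapsMasker

# Fetch the 64-component DiFuMo atlas (Dadi et al., 2020)
difumo = fetch_atlas_difumo(dimension=64)

# Project the 4D fMRI scan onto the 64 functional modes;
# "sub-01_rest_bold.nii.gz" is a placeholder file name
masker = NiftiMapsMasker(maps_img=difumo.maps, standardize=True)
roi_timeseries = masker.fit_transform("sub-01_rest_bold.nii.gz")
print(roi_timeseries.shape)  # (n_frames, 64): one time course per ROI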
Our objective is to classify each 10-frame interval of fMRI data into one of two vigilance states (alert, drowsy). Frame-wise labels are calculated from the EEG using the Vigilance Algorithm Leipzig (VIGALL) (Olbrich et al., 2015) and are thresholded to derive these two states. We extract fMRI and EEG features using separate encoders and perform intramodal contrastive learning (Chen, Kornblith, Norouzi, & Hinton, 2020). Each encoder comprises two transformers that learn spatial and temporal attention, with feature fusion. To incorporate EEG knowledge into the fMRI domain, we use a convolutional neural network to map the extracted fMRI and EEG features into a common space and then perform contrastive learning on the latent features, as sketched below. The fMRI features (from a given 10-frame interval) are fed to a three-layer MLP for prediction. At test time, the model takes only fMRI data as input. We implement our model following prior work (Misra, Girdhar, & Joulin, 2021; Lu et al., 2023).
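A minimal sketch of the cross-modal contrastive objective, in the InfoNCE style of Chen et al. (2020), is given below. The temperature, batch handling, and symmetric formulation are illustrative assumptions, not the exact implementation.

import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(z_fmri, z_eeg, temperature=0.1):
    # z_fmri, z_eeg: (batch, dim) latent features after the CNN
    # projections into the common space; row i of each tensor comes
    # from the same 10-frame interval
    z_fmri = F.normalize(z_fmri, dim=1)
    z_eeg = F.normalize(z_eeg, dim=1)
    logits = z_fmri @ z_eeg.t() / temperature   # pairwise similarities
    targets = torch.arange(z_fmri.size(0))      # matched pairs lie on the diagonal
    # Symmetric InfoNCE: fMRI-to-EEG and EEG-to-fMRI retrieval
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))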

Results:
For vigilance-state classification on the test set at the 10-frame level, our model achieves a macro-averaged F1 (mF1; per-class F1 = 2 × precision × recall / (precision + recall)) score of 79.01%, with an F1 of 82.32% for the drowsy class and 75.69% for the alert class. In comparison, a model that uses only a three-layer MLP with fMRI data as input has an mF1 score of 62.93% (drowsy: 73.23%, alert: 52.63%), suggesting that our encoders help to capture hidden brain patterns linked to drowsiness and alertness. Using only intra-fMRI contrastive training yields an mF1 score of 69.51% (drowsy: 77.00%, alert: 62.02%), indicating that integrating knowledge from the EEG modality substantially improves performance. For reference, random guessing yields an mF1 of 49.19% (drowsy: 52.60%, alert: 45.77%). In external validation, our model's mF1 score is 71.74% (drowsy: 74.72%, alert: 68.77%), suggesting the potential to generalize across datasets.
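For clarity, the mF1 reported above is the macro average of the two per-class F1 scores; a short sketch of how it can be computed follows, with illustrative labels (0 = alert, 1 = drowsy).

from sklearn.metrics import f1_score

# Illustrative per-interval labels; 0 = alert, 1 = drowsy
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1]

f1_drowsy = f1_score(y_true, y_pred, pos_label=1)
f1_alert = f1_score(y_true, y_pred, pos_label=0)
mf1 = f1_score(y_true, y_pred, average="macro")  # mean of the two class F1s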
Conclusions:
Our model achieves competitive performance in classifying alert/drowsy states within scans at a 10-frame granularity. Our work supports the notion that vigilance states can be detected from fMRI data alone, and indicates that leveraging knowledge from EEG data can enhance model performance. Future work will focus on interpreting the fMRI features and probing vigilance-related patterns in the human brain.
Modeling and Analysis Methods:
Classification and Predictive Modeling 1
fMRI Connectivity and Network Modeling 2
Keywords:
Electroencephalography (EEG)
FUNCTIONAL MRI
Modeling
Other - Deep Learning
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio, print, and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.
Please indicate below if your study was a "resting state" or "task-activation" study.
Resting state
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Healthy subjects
Was this research conducted in the United States?
Yes
Are you Institutional Review Board (IRB) certified?
Please note: Failure to have IRB approval, if applicable, will lead to automatic rejection of the abstract.
Yes, I have IRB or AUCC approval
Was any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Yes
Was any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
Functional MRI
EEG/ERP
For human MRI, what field strength scanner do you use?
3.0T
Which processing packages did you use for your study?
FSL
AFNI
Provide references using APA citation style.
Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020). A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning (pp. 1597–1607).
Dadi, K., Varoquaux, G., Machlouzarides-Shalit, A., Gorgolewski, K. J., Wassermann, D., Thirion, B., & Mensch, A. (2020). Fine-grain atlases of functional modes for fMRI analysis. NeuroImage, 221, 117126.
Goodale, S. E., Ahmed, N., Zhao, C., de Zwart, J. A., Özbay, P. S., Picchioni, D., . . . Chang, C. (2021). fMRI-based detection of alertness predicts behavioral response variability. eLife, 10, e62376.
Lu, Y., Xu, C., Wei, X., Xie, X., Tomizuka, M., Keutzer, K., & Zhang, S. (2023). Open-vocabulary point-cloud object detection without 3D annotation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1190–1199).
Misra, I., Girdhar, R., & Joulin, A. (2021). An end-to-end transformer model for 3D object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 2906–2917).
Olbrich, S., Fischer, M. M., Sander, C., Hegerl, U., Wirtz, H., & Bosse-Henck, A. (2015). Objective markers for sleep propensity: Comparison between the Multiple Sleep Latency Test and the Vigilance Algorithm Leipzig. Journal of Sleep Research, 24(4), 450–457.
Prerau, M. J., Brown, R. E., Bianchi, M. T., Ellenbogen, J. M., & Purdon, P. L. (2017). Sleep neurophysiological dynamics through the lens of multitaper spectral analysis. Physiology, 32(1), 60–92.
Tagliazucchi, E., & Laufs, H. (2014). Decoding wakefulness levels from typical fMRI resting-state data reveals reliable drifts between wakefulness and sleep. Neuron, 82(3), 695–708.
Zhang, S., Goodale, S. E., Gold, B. P., Morgan, V. L., Englot, D. J., & Chang, C. (2023). Vigilance associates with the low-dimensional structure of fMRI data. NeuroImage, 267, 119818.