Inferring vigilance fluctuations from functional MRI using automated sleep stage decoding

Poster No:

2087 

Submission Type:

Abstract Submission 

Authors:

Rezwana Rosen Razzaque1, Jim Pollaro1, Hong Chen1, Donna Dierker1, Zachary Markow1, Jiaqi Li2, Helmut Laufs3, Muriah Wheelock1

Institutions:

1Department of Radiology, Washington University in St. Louis, St. Louis, MO, United States, 2Department of Statistics, The University of Chicago, Chicago, IL, United States, 3Department of Neurology, Christian-Albrechts University Kiel, Arnold-Heller-Strasse 3, 24105, Kiel, Germany

First Author:

Rezwana Rosen Razzaque, B.S. in EEE, IUT, BD  
Department of Radiology, Washington University in St. Louis
St. Louis, MO, United States

Co-Author(s):

Jim Pollaro  
Department of Radiology, Washington University in St. Louis
St. Louis, MO, United States
Hong Chen  
Department of Radiology, Washington University in St. Louis
St. Louis, MO, United States
Donna Dierker  
Department of Radiology, Washington University in St. Louis
St. Louis, MO, United States
Zachary Markow, PhD  
Department of Radiology, Washington University in St. Louis
St. Louis, MO, United States
Jiaqi Li  
Department of Statistics, The University of Chicago
Chicago, IL, United States
Helmut Laufs  
Department of Neurology, Christian-Albrechts University Kiel
Arnold-Heller-Strasse 3, 24105, Kiel, Germany
Muriah Wheelock  
Department of Radiology, Washington University in St. Louis
St. Louis, MO, United States

Introduction:

Functional Magnetic Resonance Imaging (fMRI) provides a non-invasive way to study neural dynamics during periods of reduced vigilance, including unintentional Non-Rapid Eye Movement (NREM) sleep in resting-state (RS) scans. Sleep stage transitions from light (N1) to deep (N2, N3) sleep alter functional connectivity (FC), reducing the accuracy of RS-FC measurements (Tagliazucchi et al., 2012). While EEG is the gold standard for sleep staging, it requires additional setup, has reliability issues, and is often unavailable (Goodale et al., 2021). The short-term goal of our study is to replicate Tagliazucchi's 2012 findings using template-based sleep stage decoding with concurrent EEG-fMRI data, while our long-term goal is to extend this approach to EEG-independent datasets, such as the ABCD study.

Methods:

We used a previously established EEG-fMRI dataset (Tagliazucchi et al., 2012) of 74 adults scanned during 52-minute, eyes-closed fMRI (TR/TE = 2080/30 ms). The dataset included concurrent 30-channel EEG recordings, which were manually scored for wake and sleep stages (N1, N2, N3) according to AASM criteria. Unlike the original methodology, in which no motion censoring or global signal regression (GSR) was applied (Tagliazucchi et al., 2012), our RS-FC data underwent motion censoring (framewise displacement < 0.2 mm). We implemented two RS-FC preprocessing pipelines: 1) DCAN BOLD processing (DCAN-Labs/abcd-hcp-pipeline, 2019/2024) with GSR (30 nuisance parameters), and 2) XCP-D (Mehta et al., 2024) without GSR (24 nuisance parameters). Data were divided into a training set (n=56) and two testing sets: subjects experiencing all four sleep stages (n=9) and subjects awake throughout (n=9). The template-based decoding involved two stages (Figure 1A). First, templates were generated by parsing RS-fMRI data according to EEG-identified sleep stage and computing 394x394 FC matrices as Pearson correlations across 394 ROIs (333 cortical and 61 subcortical) (Gordon et al., 2016; Seitzman et al., 2020). These matrices were then averaged across the 56 training subjects to create stage-specific templates (Figure 1B). In testing, 2-minute sliding windows segmented each time course into epochs; each epoch's FC matrix was matched against the templates (Figure 1A) and assigned the sleep stage of the template with the highest Pearson correlation. Ground truth for each window was defined as the predominant EEG-scored sleep stage within that window (a minimal code sketch of these steps follows Figure 1).
Supporting Image: OHBM_Figure_1.png
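
The following is a minimal sketch of the template-generation and sliding-window matching steps described above, using NumPy only. The array shapes, variable names (train_ts, train_labels), integer stage coding, and the window length derived from TR = 2.08 s are illustrative assumptions, not the exact implementation.

# Minimal sketch of the template-based decoder described above (NumPy only).
# Assumptions (not from the abstract): ROI time series are (n_timepoints, 394)
# arrays, EEG stage labels are integers per fMRI frame (0=wake, 1=N1, 2=N2,
# 3=N3), and TR = 2.08 s, so a 2-minute window is ~58 frames.
import numpy as np

STAGES = [0, 1, 2, 3]          # wake, N1, N2, N3
TR = 2.08                      # seconds
WIN = int(round(120 / TR))     # ~2-minute sliding window, in frames


def fc_matrix(ts):
    """Upper-triangle Pearson FC vector from a (time, ROI) array."""
    fc = np.corrcoef(ts.T)                      # 394 x 394 correlation matrix
    iu = np.triu_indices_from(fc, k=1)
    return fc[iu]


def build_templates(train_ts, train_labels):
    """Average stage-specific FC across training subjects (cf. Figure 1B)."""
    templates = {}
    for stage in STAGES:
        per_subject = []
        for ts, labels in zip(train_ts, train_labels):
            frames = labels == stage
            if frames.sum() >= WIN:             # require enough frames of this stage
                per_subject.append(fc_matrix(ts[frames]))
        templates[stage] = np.mean(per_subject, axis=0)
    return templates


def decode(ts, templates, step=1):
    """Assign each 2-minute window the stage of the best-matching template."""
    decoded = []
    for start in range(0, ts.shape[0] - WIN + 1, step):
        window_fc = fc_matrix(ts[start:start + WIN])
        corrs = {s: np.corrcoef(window_fc, templates[s])[0, 1] for s in STAGES}
        decoded.append(max(corrs, key=corrs.get))   # highest Pearson correlation wins
    return np.array(decoded)

Under the same assumptions, the ground-truth label for a window starting at frame start would be the predominant EEG stage in that window, e.g., np.bincount(labels[start:start + WIN]).argmax().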
 

Results:

Figure 2A shows that DCAN with GSR achieved an overall accuracy of 59.3% and a balanced accuracy of 53.9% on test set 1, with sensitivities for wake, N1, N2, and N3 of 66.4%, 59.6%, 86.8%, and 2.7%, respectively. On test set 2, wake sensitivity reached 92.4%. XCP-D without GSR achieved overall and balanced accuracies of 58.2% and 58.0% on test set 1, with sensitivities for wake, N1, N2, and N3 of 54.2%, 58.8%, 51.9%, and 67.0%, respectively, but showed a reduced wake sensitivity of 33.9% on test set 2. Notably, N3 was the most misclassified stage with GSR, though its sensitivity improved markedly without GSR. Decoded hypnograms from XCP-D without GSR matched the actual hypnograms more closely than those from the GSR pipeline (Figure 2B). The probability of being awake dropped sharply within the first 5 minutes, reaching a minimum of ~0.40 for both ground truth and DCAN with GSR and <0.20 for XCP-D without GSR, before gradually increasing (Figure 2C). N1 sleep was most prominent, peaking at 10 minutes, while N2 peaked at 20 minutes for ground truth and the GSR pipeline. N3 sleep likelihood peaked at about 30 minutes, consistent across ground truth and both pipelines. (The reported metrics are sketched in code below Figure 2.)
Supporting Image: OHBM_Figure_2.png
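
For reference, the following is a minimal sketch of the reported evaluation metrics, assuming integer stage labels per window (0=wake, 1=N1, 2=N2, 3=N3); the function name and label coding are illustrative. Balanced accuracy is taken as the mean of the per-stage sensitivities, which is consistent with the values reported above.

# Minimal sketch of overall accuracy, balanced accuracy, and per-stage
# sensitivity from decoded vs. ground-truth window labels (assumed coding).
import numpy as np

def stage_metrics(y_true, y_pred, stages=(0, 1, 2, 3)):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    overall = np.mean(y_true == y_pred)                 # overall accuracy
    sensitivity = {}
    for s in stages:
        mask = y_true == s
        # Sensitivity (recall) for stage s: fraction of true-s windows decoded as s.
        sensitivity[s] = np.mean(y_pred[mask] == s) if mask.any() else np.nan
    balanced = np.nanmean(list(sensitivity.values()))   # mean of per-stage sensitivities
    return overall, balanced, sensitivity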
 

Conclusions:

Our findings differed from those of Tagliazucchi's 2012 study, likely due to differences in FC preprocessing and decoding models. Future work will revisit multiclass SVM decoding to improve accuracy. Retaining the global signal (i.e., omitting GSR) introduced FC variability among sleep stages, enhancing N3 detection but reducing wake accuracy. Such decoders show promise for analyzing sleep stages in RS-fMRI without concurrent EEG, enabling deeper insights into sleep's impact on brain function.

Modeling and Analysis Methods:

Classification and Predictive Modeling
Connectivity (e.g., functional, effective, structural)
Task-Independent and Resting-State Analysis 2

Perception, Attention and Motor Behavior:

Sleep and Wakefulness 1

Keywords:

FUNCTIONAL MRI
Multivariate
Sleep
Other - Decoding

1|2 indicates the priority used for review

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio, print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I am submitting this abstract as the outcome of the OHBM-OSSIG reproducibility challenge, having reproduced previously submitted work with the original author(s)’ agreement. I have cited the original work and acknowledged the origin team in the abstract.

Please indicate below if your study was a “resting state” or “task-activation” study.

Resting state

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was this research conducted in the United States?

No

Were any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Were any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

No

Please indicate which methods were used in your research:

Functional MRI
EEG/ERP

For human MRI, what field strength scanner do you use?

3.0T

Provide references using APA citation style.

DCAN-Labs/abcd-hcp-pipeline. (2024). Developmental Cognition and Neuroimaging Labs. https://github.com/DCAN-Labs/abcd-hcp-pipeline (Original work published 2019)

Goodale, S. E. et al. (2021). fMRI-based detection of alertness predicts behavioral response variability. eLife, 10, e62376. https://doi.org/10.7554/eLife.62376

Gordon, E. M. et al. (2016). Generation and Evaluation of a Cortical Area Parcellation from Resting-State Correlations. Cerebral Cortex, 26(1), 288–303. https://doi.org/10.1093/cercor/bhu239

Mehta, K. et al. (2024). XCP-D: A robust pipeline for the post-processing of fMRI data. Imaging Neuroscience, 2, 1–26. https://doi.org/10.1162/imag_a_00257

Seitzman, B. A. et al. (2020). A set of functionally-defined brain regions with improved representation of the subcortex and cerebellum. NeuroImage, 206, 116290. https://doi.org/10.1016/j.neuroimage.2019.116290

Tagliazucchi, E., & Laufs, H. (2014). Decoding Wakefulness Levels from Typical fMRI Resting-State Data Reveals Reliable Drifts between Wakefulness and Sleep. Neuron, 82(3), 695–708. https://doi.org/10.1016/j.neuron.2014.03.020

Tagliazucchi, E. et al. (2012). Automatic sleep staging using fMRI functional connectivity data. NeuroImage, 63(1), 63–72. https://doi.org/10.1016/j.neuroimage.2012.06.036

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries list provided.

No