Poster No:
1113
Submission Type:
Abstract Submission
Authors:
Yufei Dong1,2, Jingyuan Li1,2, Xiao Fan1,2, Jinxu Zhang1,2, Shilong Yu1, Hailin Huang1,2, Wenchao Zhang1,2, Yang Hu1,2, Guanya Li1,2, Yi Zhang1,2, Yang Liu1,2
Institutions:
1Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi'an, Shaanxi 710126, China, 2International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi 710126, China
First Author:
Yufei Dong
Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education|International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University
Xi'an, Shaanxi 710126, China|Xi'an, Shaanxi 710126, China
Co-Author(s):
Jingyuan Li
Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education|International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University
Xi'an, Shaanxi 710126, China|Xi'an, Shaanxi 710126, China
Xiao Fan
Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education|International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University
Xi'an, Shaanxi 710126, China|Xi'an, Shaanxi 710126, China
Jinxu Zhang
Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education|International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University
Xi'an, Shaanxi 710126, China|Xi'an, Shaanxi 710126, China
Shilong Yu
Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education
Xi'an, Shaanxi 710126, China
Hailin Huang
Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education|International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University
Xi'an, Shaanxi 710126, China|Xi'an, Shaanxi 710126, China
Wenchao Zhang
Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education|International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University
Xi'an, Shaanxi 710126, China|Xi'an, Shaanxi 710126, China
Yang Hu
Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education|International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University
Xi'an, Shaanxi 710126, China|Xi'an, Shaanxi 710126, China
Guanya Li
Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education|International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University
Xi'an, Shaanxi 710126, China|Xi'an, Shaanxi 710126, China
Yi Zhang
Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education|International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University
Xi'an, Shaanxi 710126, China|Xi'an, Shaanxi 710126, China
Yang Liu
Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education|International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University
Xi'an, Shaanxi 710126, China|Xi'an, Shaanxi 710126, China
Introduction:
Many neuroimaging studies using deep learning methods have shown strong performance in decoding brain activity patterns, detecting brain diseases, and analyzing pathological patterns. However, most deep learning architectures focus on either local or global time-series features; few take both into account. In addition, simultaneous modeling of the spatial and temporal domains is seldom adopted, which limits the ability to characterize functional brain networks. To overcome these limitations, we propose an enhanced temporal-spatial feature fusion model (ETSFF) that not only captures local and global representations but also maps the spatial-temporal features of functional magnetic resonance imaging (fMRI) data.
Methods:
The overall architecture of ETSFF is presented in Fig 1. We first introduce the multi-window temporal fusion module (MWTF), which operates on temporal windows of different scales to capture both local and global context (Fig 2(a)). Then, to capture the spatial features embedded in fMRI data, an enhanced spatial self-attention module (ESSA) is placed after MWTF, and the functional connectivity matrix between ROIs is introduced to help ESSA capture dynamic spatial dependencies (Fig 2(b)). Finally, a self-attention module with a standard transformer encoder layer builds the interaction between local and global features. A total of 172 participants were recruited and completed a 2-back fMRI task scanned on a 3.0-T GE scanner. Functional images were acquired with a gradient-echo T2*-weighted echo-planar sequence using the following parameters: TR = 2 s, TE = 30 ms, matrix size = 64×64, FOV = 256×256 mm², flip angle = 90°, in-plane resolution = 4×4 mm², and 32 axial slices. The fMRI experiment contained two runs, each with six task epochs: three "2-back" and three "0-back" conditions (Zhang, 2023). Task fMRI data were preprocessed with Statistical Parametric Mapping 8 (SPM8); preprocessing included slice-timing correction, head motion correction, spatial normalization, and smoothing, and the final fMRI data were split into trial segments.
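The two modules described above can be sketched in simplified form. The NumPy snippet below is a hypothetical illustration of the general ideas only, not the authors' PyTorch implementation: the window sizes, feature dimensions, pooling choices, and the additive functional-connectivity bias on the attention logits are all assumptions made for this sketch.

```python
import numpy as np

def multi_window_fusion(x, window_sizes=(5, 15, 45)):
    """MWTF-style sketch: average-pool the time axis at several window
    scales and concatenate the per-scale summaries, mixing local
    (small-window) and global (large-window) temporal context.
    x: (T, R) ROI time series."""
    T, R = x.shape
    feats = []
    for w in window_sizes:
        n = T // w
        pooled = x[: n * w].reshape(n, w, R).mean(axis=1)  # (n, R) windowed means
        feats.append(pooled.mean(axis=0))                  # (R,) summary at this scale
    return np.concatenate(feats)                           # (len(window_sizes) * R,)

def connectivity_biased_attention(x, fc):
    """ESSA-style sketch: ROI-to-ROI self-attention whose logits are
    biased additively by the functional connectivity matrix.
    x: (R, D) per-ROI features; fc: (R, R) ROI correlation matrix."""
    d = x.shape[1]
    logits = x @ x.T / np.sqrt(d) + fc            # FC matrix acts as a spatial prior
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
    return attn @ x                               # (R, D) spatially attended features

rng = np.random.default_rng(0)
ts = rng.standard_normal((90, 32))       # 90 time points x 32 ROIs (illustrative sizes)
fc = np.corrcoef(ts.T)                   # (32, 32) functional connectivity matrix
fused = multi_window_fusion(ts)          # (96,) multi-scale temporal features
roi_feat = rng.standard_normal((32, 8))  # hypothetical per-ROI embeddings
attended = connectivity_biased_attention(roi_feat, fc)  # (32, 8)
```

In this sketch the connectivity matrix enters as an additive bias on the attention scores, so ROI pairs with stronger functional coupling receive larger attention weights; the actual ETSFF mechanism may differ.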

Results:
The fMRI data of the 172 participants, comprising 2064 trials, were split into 1651/206/206 for training/validation/testing. All models were optimized with Adam and a cross-entropy loss at a batch size of 64. The initial learning rate and weight decay were set to 0.005 and 2×10⁻⁴, respectively. ETSFF and all competing methods were implemented in PyTorch and trained on an NVIDIA GTX 1080 Ti. To quantify the effectiveness of the proposed framework, we compared ETSFF with several popular models designed for fMRI data (Table 1); ETSFF achieved the best performance with an accuracy of 84.38%. Compared with the second-best method, BolT (Bedel, 2023), ETSFF achieved average improvements of 7.75%, 8.73%, and 6.08% in accuracy, precision, and recall, respectively. In addition, ablation studies were conducted to assess the contributions of MWTF, ESSA, and the feature aggregation module in ETSFF (Table 2). Specifically, to evaluate the effectiveness of MWTF and ESSA, MWTF was replaced with a global temporal self-attention transformer module and ESSA with a fully connected module (T. & D., 2023). As shown in Table 2, MWTF yields improvements in accuracy and recall, and performance drops markedly when a fully connected module replaces ESSA. Furthermore, the self-attention module effectively helps the model achieve better performance.
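The comparisons above rest on accuracy, precision, and recall over the held-out test trials. A minimal sketch of how these metrics are computed for binary 2-back vs. 0-back labels follows; the labels and predictions are invented for illustration and do not reflect the study's data.

```python
import numpy as np

def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, and recall for binary task-condition labels
    (e.g. 1 = "2-back", 0 = "0-back")."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))  # true positives
    fp = np.sum((y_pred == positive) & (y_true != positive))  # false positives
    fn = np.sum((y_pred != positive) & (y_true == positive))  # false negatives
    accuracy = float(np.mean(y_pred == y_true))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

# Toy example (not the study's data): 8 trials, one missed 2-back, one false alarm
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, prec, rec = classification_metrics(y_true, y_pred)
print(acc, prec, rec)  # 0.75 0.75 0.75
```

Precision penalizes false "2-back" calls while recall penalizes missed "2-back" trials, so reporting both alongside accuracy (as in Table 1) guards against a model that over-predicts one condition.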

Conclusions:
In the current study, we introduced ETSFF, which uses MWTF to fuse local and global temporal features and ESSA to capture spatial-temporal characteristics. The results show that local and global context are both essential for task-fMRI classification, and that dynamic spatial dependencies help the model achieve better performance in 2-back vs. 0-back classification.
Learning and Memory:
Working Memory 2
Modeling and Analysis Methods:
Classification and Predictive Modeling 1
Connectivity (eg. functional, effective, structural)
Keywords:
FUNCTIONAL MRI
Other - deep learning, temporal-spatial feature fusion, working memory
1|2 indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.
Please indicate below if your study was a "resting state" or "task-activation” study.
Task-activation
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Patients
Was this research conducted in the United States?
No
Were any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Yes
Were any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
Functional MRI
Computational modeling
For human MRI, what field strength scanner do you use?
3.0T
Which processing packages did you use for your study?
SPM
Provide references using APA citation style.
In-text citation:
(Zhang, 2023), (Kan, 2022), (Arslan, 2018), (Bedel, 2023), (T. & D., 2023)
Reference citation:
1. Arslan, S. (2018). Graph saliency maps through spectral convolutional networks: Application to sex classification with brain connectivity.
2. Bedel, H. A. (2023). BolT: Fused window transformers for fMRI time series analysis. Medical Image Analysis, 88, 102841.
3. Kan, X. (2022). FBNetGen: Task-aware GNN-based fMRI analysis via functional brain network generation. Proceedings of Machine Learning Research, 172, 618-637.
4. T., M. (2023). AVE-CLIP: AudioCLIP-based multi-window temporal transformer for audio visual event localization. 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 5147-5156.
5. Zhang, Y. (2023). Associations among body mass index, working memory performance, gray matter volume, and brain activation in healthy children. Cerebral Cortex, 33, 6335-6344.
No