Poster No:
1273
Submission Type:
Late-Breaking Abstract Submission
Authors:
Selim Süleymanoğlu1, Dong Hye Ye2
Institutions:
1Georgia State University, Atlanta, GA, 2Georgia State University, Atlanta, GA
First Author:
Co-Author:
Late Breaking Reviewer(s):
Wei Zhang
Washington University in St. Louis
Saint Louis, MO
Introduction:
Clinical research has revealed abnormalities in both functional network connectivity (FNC) and structural connectivity (SC) in schizophrenia (SZ) (Mazumder, 2024). Despite advances in MRI, distinguishing SZ from healthy controls (HC) using a single modality remains challenging. Meta-analyses show that multimodal neuroimaging captures complementary information, although its performance gain over unimodal methods is modest (Porter, 2023). Nevertheless, fusing FNC and SC has been shown to improve classification accuracy (Gutiérrez-Gómez, 2020). Here, we propose an explainable cross-attention fusion pipeline that jointly models FNC and SC data. Our approach uses separate Vision Transformer (ViT) branches to extract 1×1024 latent features from zero-padded 64×64 FNC and SC images. A cross-attention block fuses these features, using the SC embedding as both query and key and the FNC embedding as value. The normalized fused representation is then fed to a linear classifier for SZ prediction. We evaluated our method with 5-fold cross-validation, concatenating out-of-sample predictions from 20 repetitions to compute overall accuracy, precision, and F1-score.
Methods:
Dataset and preprocessing
We used a subset of the FBIRN dataset (Keator, 2016; 165 subjects: 93 SZ, 72 HC). Resting-state fMRI data were processed with the Neuromark pipeline to yield FNC matrices (53 components), while diffusion MRI data produced SC matrices via deterministic tractography. Both 53×53 matrices were reshaped to include a channel dimension and zero-padded to 64×64.
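The reshape-and-pad step above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name `pad_connectivity` and the random stand-in matrix are ours:

```python
import numpy as np

def pad_connectivity(matrix, target=64):
    """Zero-pad a square connectivity matrix to target x target
    and prepend a channel dimension, giving shape (1, target, target)."""
    n = matrix.shape[0]
    padded = np.zeros((target, target), dtype=matrix.dtype)
    padded[:n, :n] = matrix          # original 53x53 values in the top-left corner
    return padded[np.newaxis, ...]   # add the channel axis

# Stand-in for a 53-component Neuromark FNC (or tractography SC) matrix.
fnc = np.random.rand(53, 53)
print(pad_connectivity(fnc).shape)  # (1, 64, 64)
```

The same transform is applied to both modalities so that each fits the fixed ViT input size.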
Proposed method
Separate ViT branches (Dosovitskiy, 2020) processed the FNC and SC images, dividing each padded image into fixed-size patches with positional embeddings and yielding a 1×1024 latent feature per modality. A cross-attention mechanism then fused these features: the SC embedding served as both query and key, and the FNC embedding as value. The attention output was normalized (without a residual connection) to produce a joint 1×1024 representation, which was passed through a linear classifier for binary prediction (SZ vs. HC). We adopted 5-fold cross-validation; within each fold, the training data were further split 80%/20% into training and validation sets, and the model with the best validation accuracy was retained. Out-of-sample test predictions from all folds were concatenated to cover the entire dataset, and this process was repeated 20 times to obtain the final metrics.
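The fusion step can be sketched in NumPy as below. This is a simplified single-head illustration under our own assumptions: learned W_q/W_k/W_v projections and multi-head structure are omitted, and with a single 1×1024 token per modality the attention weight trivially reduces to 1, so the sketch only shows the data flow (SC as query/key, FNC as value, layer normalization, no residual):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def cross_attention_fuse(sc_emb, fnc_emb):
    """Scaled dot-product attention with the SC embedding as both
    query and key and the FNC embedding as value; the output is
    layer-normalized with no residual connection."""
    d = sc_emb.shape[-1]
    scores = sc_emb @ sc_emb.T / np.sqrt(d)  # query = key = SC
    attn = softmax(scores, axis=-1)
    fused = attn @ fnc_emb                   # value = FNC
    return layer_norm(fused)

sc = np.random.randn(1, 1024)   # 1x1024 latent from the SC ViT branch
fnc = np.random.randn(1, 1024)  # 1x1024 latent from the FNC ViT branch
joint = cross_attention_fuse(sc, fnc)
print(joint.shape)  # (1, 1024)
```

The joint 1×1024 vector would then be passed to the linear classifier.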

Figure 1
Results:
Unimodal analysis showed that FNC alone achieved 77.24% accuracy (77.20% precision, 72.27% F1-score), while SC alone yielded 62.48% accuracy (61.08% precision, 47.36% F1-score). Notably, fusing SC and FNC via cross-attention improved performance to 77.55% accuracy (78.06% precision, 72.39% F1-score), indicating that integrating complementary connectivity information enhances SZ classification.
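The metrics above are computed over concatenated out-of-fold predictions, repeated across runs. A minimal sketch of that bookkeeping follows; `fit_predict` is a placeholder of our own for any train-and-predict routine (e.g., the ViT pipeline), and only the fold logic is illustrated:

```python
import numpy as np

def repeated_cv_predictions(X, y, fit_predict, n_folds=5, n_repeats=20, seed=0):
    """For each repeat, shuffle the subjects, split into n_folds folds,
    and fill in out-of-fold predictions so every subject is predicted
    exactly once per repeat; all repeats are then concatenated."""
    rng = np.random.default_rng(seed)
    n = len(y)
    all_preds = []
    for _ in range(n_repeats):
        folds = np.array_split(rng.permutation(n), n_folds)
        preds = np.empty(n, dtype=int)
        for k in range(n_folds):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            preds[test] = fit_predict(X[train], y[train], X[test])
        all_preds.append(preds)
    return np.concatenate(all_preds)  # length n * n_repeats
```

Accuracy, precision, and F1 can then be computed over the concatenated prediction vector against the correspondingly repeated labels.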

Table 1
Conclusions:
We introduced an explainable cross-attention fusion framework for SZ classification that integrates FNC and SC. By extracting modality-specific features with separate ViT branches and fusing them via cross-attention (SC as query/key, FNC as value), our pipeline produces a joint representation that is classified by a linear layer. Experiments on the FBIRN dataset demonstrate that this fusion strategy outperforms unimodal approaches, highlighting its potential for improved clinical diagnosis.
Disorders of the Nervous System:
Psychiatric (eg. Depression, Anxiety, Schizophrenia) 2
Modeling and Analysis Methods:
Connectivity (eg. functional, effective, structural) 1
Keywords:
Data analysis
Machine Learning
Psychiatric Disorders
Schizophrenia
1|2Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.
Please indicate below if your study was a "resting state" or "task-activation" study.
Resting state
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Patients
Was this research conducted in the United States?
Yes
Are you Internal Review Board (IRB) certified?
Please note: Failure to have IRB, if applicable will lead to automatic rejection of abstract.
Yes, I have IRB or AUCC approval
Were any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Yes
Were any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
Functional MRI
Diffusion MRI
Provide references using APA citation style.
1. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., ... & Houlsby, N. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
2. Gutiérrez-Gómez, L., Vohryzek, J., Chiêm, B., Baumann, P. S., Conus, P., Do Cuenod, K., ... & Delvenne, J. C. (2020). Stable biomarker identification for predicting schizophrenia in the human connectome. NeuroImage: Clinical, 27, 102316.
3. Keator, D. B., van Erp, T. G., Turner, J. A., Glover, G. H., Mueller, B. A., Liu, T. T., ... & Potkin, S. G. (2016). The function biomedical informatics research network data repository. Neuroimage, 124, 1074-1079.
4. Mazumder, B., Kanyal, A., Wu, L., Calhoun, V. D., & Ye, D. H. (2024, October). Physics-guided multi-view graph neural network for schizophrenia classification via structural-functional coupling. In International Workshop on PRedictive Intelligence In MEdicine (pp. 61-73). Cham: Springer Nature Switzerland.
5. Porter, A., Fei, S., Damme, K. S., Nusslock, R., Gratton, C., & Mittal, V. A. (2023). A meta-analysis and systematic review of single vs. multimodal neuroimaging techniques in the classification of psychosis. Molecular Psychiatry, 28(8), 3278-3292.
No