Poster No:
1118
Submission Type:
Abstract Submission
Authors:
Reihaneh Hassanzadeh1, Vince Calhoun2
Institutions:
1Georgia Institute of Technology, Atlanta, GA, 2GSU/GATech/Emory, Atlanta, GA
First Author:
Reihaneh Hassanzadeh, Georgia Institute of Technology, Atlanta, GA
Co-Author:
Vince Calhoun, GSU/GATech/Emory, Atlanta, GA
Introduction:
Multimodal data analysis can lead to more accurate diagnoses of brain disorders because of the complementary information that each modality adds. However, a major challenge in using multimodal datasets in the neuroimaging field is incomplete data, where some modalities are missing for certain subjects. Hence, effective strategies are needed to complete the data. In this study, we proposed a generative adversarial network method designed to reconstruct missing modalities from existing ones while preserving disease-related patterns. We used T1-weighted structural magnetic resonance imaging (T1) and functional network connectivity (FNC) as the two modalities. Our findings showed a 9% improvement in classification accuracy for Alzheimer's disease (AD) versus cognitively normal (CN) groups when using our generative imputation method compared to traditional approaches.
Methods:
We developed a cycle-GAN [1] network to translate FNC to T1 and vice versa. The cycle-GAN learns the underlying distributions of the two domains and maps them to each other from unpaired data in an unsupervised setting. We also incorporated weak supervision when paired data were available. Our proposed model, as illustrated in Fig. 1, consists of two generators, G1 and G2, that translate data between the two domains, and two discriminators, D1 and D2, that distinguish real samples from generated ones.

Fig. 1. Generative model architecture.
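The training objective combines adversarial losses with a cycle-consistency term and, when paired samples are available, a weakly supervised reconstruction term. The following is a minimal sketch in PyTorch; the MLP architectures, feature dimensions, and loss weights are illustrative assumptions, not the exact configuration used in this work.

# Minimal sketch of the cycle-GAN objective (PyTorch). Feature dimensions,
# MLP generators/discriminators, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

FNC_DIM, T1_DIM = 1378, 4096  # hypothetical flattened feature sizes

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

G1, G2 = mlp(FNC_DIM, T1_DIM), mlp(T1_DIM, FNC_DIM)  # FNC -> T1, T1 -> FNC
D1, D2 = mlp(T1_DIM, 1), mlp(FNC_DIM, 1)             # real vs. generated T1 / FNC

adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def generator_loss(fnc, t1, paired=False, lam_cyc=10.0, lam_sup=5.0):
    """Adversarial + cycle-consistency loss, plus a weakly supervised
    reconstruction term when paired (FNC, T1) samples are available.
    The discriminator update is omitted for brevity."""
    fake_t1, fake_fnc = G1(fnc), G2(t1)
    pred_t1, pred_fnc = D1(fake_t1), D2(fake_fnc)
    # Generators try to fool the discriminators
    loss = adv(pred_t1, torch.ones_like(pred_t1)) + adv(pred_fnc, torch.ones_like(pred_fnc))
    # Cycle consistency: FNC -> T1 -> FNC and T1 -> FNC -> T1
    loss = loss + lam_cyc * (l1(G2(fake_t1), fnc) + l1(G1(fake_fnc), t1))
    # Weak supervision when both modalities come from the same subject
    if paired:
        loss = loss + lam_sup * (l1(fake_t1, t1) + l1(fake_fnc, fnc))
    return loss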
Results:
We used 2910 T1 images and 414 FNC maps from the Alzheimer's Disease Neuroimaging Initiative (ADNI). To evaluate the quality of the generated samples, we computed the structural similarity index measure (SSIM) between the real T1 images and their corresponding generated T1 images, and the Pearson correlation between the real FNC features and the generated ones. Our results showed an SSIM of 0.89 ± 0.003 and a Pearson correlation of 0.71 ± 0.004. Using the real T1 and FNC data along with the generated data, we trained a multi-modal classification model of AD vs. CN and measured the performance of the model with accuracy, precision, recall, and F1 score, as shown in Fig. 2. Furthermore, we compared the performance of the model with the following baselines: 1) subsampling, where the input data include only subjects for whom both modalities are available, and 2) zero-imputation, where the missing modality is replaced with zeros. According to our results, our generative-imputation approach achieved an accuracy of 86.87% ± 2.9%, outperforming the subsampling and zero-imputation approaches by 8.6% and 9.4%, respectively. Additionally, our proposed approach achieved an F1 score of 0.88, a recall of 0.86, and a precision of 0.91, all of which were superior to the baselines.

Fig. 2. Classification performance of AD vs. CN.
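The reported metrics can be computed with standard libraries; the snippet below is a minimal sketch assuming scikit-image, SciPy, and scikit-learn, with placeholder arrays standing in for the actual ADNI data.

# Minimal sketch of the reported evaluation metrics, assuming scikit-image,
# SciPy, and scikit-learn; all arrays below are placeholders, not ADNI data.
import numpy as np
from skimage.metrics import structural_similarity as ssim
from scipy.stats import pearsonr
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
real_t1 = rng.random((121, 145, 121))                    # placeholder T1 volume
gen_t1 = real_t1 + 0.05 * rng.random(real_t1.shape)      # placeholder generated T1
real_fnc = rng.random(1378)                              # placeholder flattened FNC
gen_fnc = real_fnc + 0.1 * rng.random(1378)              # placeholder generated FNC

# Quality of the generated samples
ssim_val = ssim(real_t1, gen_t1, data_range=gen_t1.max() - gen_t1.min())
r_val, _ = pearsonr(real_fnc, gen_fnc)

# Classification performance of AD (1) vs. CN (0); labels are placeholders
y_true = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1])
print(f"SSIM={ssim_val:.2f}, r={r_val:.2f}, "
      f"acc={accuracy_score(y_true, y_pred):.2f}, "
      f"prec={precision_score(y_true, y_pred):.2f}, "
      f"rec={recall_score(y_true, y_pred):.2f}, "
      f"F1={f1_score(y_true, y_pred):.2f}")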
Conclusions:
In this study, we explored the capability of generative models for brain function-structure translation within the context of Alzheimer's disease. We developed a cycle-GAN adapted to our data to synthesize functional connectivity maps and T1 images from each other. Our findings suggested that this approach could learn distinctive brain patterns associated with Alzheimer's disease. We then applied our generative method to address missing modality data by integrating the generated samples into a multi-modal classification model. This generative imputation method resulted in a 9% improvement in classification accuracy compared to the baselines.
Disorders of the Nervous System:
Neurodegenerative/ Late Life (eg. Parkinson’s, Alzheimer’s) 2
Modeling and Analysis Methods:
Classification and Predictive Modeling 1
Keywords:
Other - Generative Adversarial Networks, Multi-Modal Classification, Alzheimer’s Disease
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I do not want to participate in the reproducibility challenge.
Please indicate below if your study was a "resting state" or "task-activation" study.
Resting state
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Patients
Was this research conducted in the United States?
Yes
Are you Internal Review Board (IRB) certified?
Please note: Failure to have IRB, if applicable will lead to automatic rejection of abstract.
Yes, I have IRB or AUCC approval
Were any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Yes
Were any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
Functional MRI
Structural MRI
For human MRI, what field strength scanner do you use?
1T
Provide references using APA citation style.
[1] Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232).