Effects of style images on brain MR harmonization using style transfer generative networks

Poster No:

1873 

Submission Type:

Abstract Submission 

Authors:

Shruti Gadewar1, Siddharth Narula1, Elizabeth Haddad2, Alyssa Zhu2, Sunanda Somu3, Iyad Ba Gari1, Neda Jahanshad4

Institutions:

1University of Southern California, Los Angeles, CA, 2USC, Marina del Rey, CA, 3Mark & Mary Stevens Neuroimaging & Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, 4University of Southern California, Marina del Rey, CA

First Author:

Shruti Gadewar  
University of Southern California
Los Angeles, CA

Co-Author(s):

Siddharth Narula  
University of Southern California
Los Angeles, CA
Elizabeth Haddad  
USC
Marina del Rey, CA
Alyssa Zhu  
USC
Marina del Rey, CA
Sunanda Somu  
Mark & Mary Stevens Neuroimaging & Informatics Institute, Keck School of Medicine, University of Southern California
Los Angeles, CA
Iyad Ba Gari  
University of Southern California
Los Angeles, CA
Neda Jahanshad  
University of Southern California
Marina del Rey, CA

Introduction:

Differences in imaging protocols and scanner hardware can considerably affect multisite neuroimaging findings. AI models, including generative adversarial networks, diffusion models, and transformers, have been developed to harmonize MR images. These models aim to disentangle style (scanner-induced contrast) from content (brain anatomy) and follow an image-to-image translation approach in which a single target (style) image is used to transform each input (content) image. When choosing a model, the implications of selecting a specific target MRI, and the extent to which pathological "content/anatomy" variations might be inadvertently embedded in "contrast/style", should be considered. Here, we tested the sensitivity of three models, StarGANv2[1], ArtFlow[2], and StyTr2[3], to the choice of target image using 100 different target T1-weighted (T1w) images with varying levels of white matter hyperintensity (WMH; pathology), and also compared their harmonization performance in a traveling-subjects dataset.

Methods:

Fig 1A shows the architectural details of the StarGANv2, ArtFlow and StyTr2 harmonization models, trained on subsets of T1w images from multiple datasets (Fig 1B). T1w images were skull stripped using HD-BET[4], bias-field corrected using ANTs[5], registered to the MNI template using FSL FLIRT[6], zero padded, and transformed into a series of 3-channel axial slices. FreeSurfer v7.1 (FS)[7] was run on the unharmonized and harmonized T1w images to extract WMH volume, average cortical thickness, total surface area, and hippocampal volume. Performance was evaluated using 10 subjects scanned on 6 different scanners in the ON-Harmony dataset[8]. All subjects in this dataset were harmonized to their respective baseline scan (Siemens Prisma, 32-channel coil). Fréchet inception distance (FID), spatial FID (sFID), the universal image quality index (UIQI), and the intraclass correlation coefficient (ICC2k)[9] were calculated for all FS-extracted measures to compare variability after harmonization. The influence of WMH pathology was assessed using T1w images from a single UK Biobank site[10]. We harmonized 4 subjects with different WMH volumes to 100 target subjects (Sh: 50 with WMH > 20,000 mm³; Sl: 50 with WMH < 1,000 mm³). To assess the consistency of each model across style images, we performed a voxel-wise variance analysis on the harmonized outputs of each UKB content subject across all 100 style images and assessed differences with an F-test. We also ran paired t-tests on the FS-extracted measures between harmonized outputs and compared results across the three harmonization methods.
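The ICC2k reliability measure used above can be computed directly from the two-way ANOVA mean squares. A minimal sketch, assuming the standard ICC(2,k) two-way random-effects, absolute-agreement, average-measures formula described by Koo & Li[9]; the function and variable names are illustrative, not from the study code:

```python
import numpy as np

def icc2k(data):
    """ICC(2,k): two-way random effects, absolute agreement, average measures.

    data: (n_subjects, k_raters) array -- e.g. one FreeSurfer measure per
    subject (rows) across scanners or harmonized outputs (columns).
    """
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-scanner means
    # Two-way ANOVA mean squares
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # rows (subjects)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # columns (scanners)
    sse = np.sum((data - row_means[:, None]
                  - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)
```

A well-harmonized measure yields columns (scanners) that agree closely for each subject, pushing ICC2k toward 1, which is the pattern reported for StyTr2 in Fig 2B.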
Supporting Image: Picture1.jpg
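The voxel-wise variance analysis can be sketched as follows, assuming each content subject's 100 harmonized outputs are stacked into one array and the two models' per-voxel variances are compared with a two-sided F-test; shapes and function names are illustrative, not the study's actual code:

```python
import numpy as np
from scipy import stats

def voxelwise_variance(stack):
    """Per-voxel variance across harmonized outputs of one content subject.

    stack: array of shape (n_styles, X, Y, Z) -- the same content image
    harmonized to n_styles different style images.
    """
    return np.var(stack, axis=0, ddof=1)

def variance_f_test(var_a, var_b, n_a, n_b):
    """Two-sided F-test of equal variance at each voxel.

    var_a, var_b: per-voxel sample variances from two models;
    n_a, n_b: number of style images each variance was computed over.
    """
    f = var_a / var_b
    # Two-sided p-value: double the smaller tail of the F distribution
    p = 2.0 * np.minimum(stats.f.sf(f, n_a - 1, n_b - 1),
                         stats.f.cdf(f, n_a - 1, n_b - 1))
    return f, np.minimum(p, 1.0)
```

A model that is robust to the style image produces low variance at every voxel; voxels where one model's variance is significantly higher (small p, F far from 1) mark anatomy that the less robust model alters depending on the chosen style image.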
 

Results:

Harmonized ON-Harmony outputs from StyTr2 showed the lowest FID and sFID (Fig 2A), indicating that more structural information was retained in the outputs. ICC2k for the FS measures was consistently high (>0.95) for StyTr2 compared with the unharmonized data and the other models (Fig 2B). The root mean squared error between FS measures extracted from StyTr2 outputs was the lowest, and StyTr2 was the only method that produced identical FS outputs when a target image was harmonized to itself (Fig 2C). In UKB, WMH volumes were consistent across different style images when harmonizing with StyTr2, whereas ArtFlow tended to underestimate WMH and StarGANv2 produced more variable results (Fig 2D). Voxel-wise comparison showed significantly higher variance in harmonized WMH volumes for StarGANv2 and ArtFlow across different style subjects (Fig 2E, 2F).
Supporting Image: Picture2.jpg
 

Conclusions:

StyTr2 was more robust to the selection of the target image than StarGANv2 and ArtFlow, as it did not add or remove WMH from the content T1w after harmonization. Further research is needed to determine how harmonized outputs are affected by the presence of other pathologies in the target image, and whether transformers are indeed a more appropriate choice for MRI harmonization tasks.

Modeling and Analysis Methods:

Other Methods 2

Novel Imaging Acquisition Methods:

Anatomical MRI 1

Keywords:

STRUCTURAL MRI
Other - harmonization, generative AI

1|2Indicates the priority used for review

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.

Please indicate below if your study was a "resting state" or "task-activation” study.

Other

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Patients

Was this research conducted in the United States?

Yes

Are you Internal Review Board (IRB) certified? Please note: Failure to have IRB, if applicable will lead to automatic rejection of abstract.

Not applicable

Were any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Not applicable

Were any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Structural MRI

For human MRI, what field strength scanner do you use?

1.5T
2.0T
3.0T

Which processing packages did you use for your study?

Free Surfer

Provide references using APA citation style.

[1]Choi, Yunjey, et al. "Stargan v2: Diverse image synthesis for multiple domains." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
[2]An, Jie, et al. "ArtFlow: Unbiased image style transfer via reversible neural flows." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.
[3]Deng, Yingying, et al. "Stytr2: Image style transfer with transformers." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[4]Isensee, Fabian, et al. "Automated brain extraction of multisequence MRI using artificial neural networks." Human brain mapping 40.17 (2019): 4952-4964.
[5]Tustison, Nicholas J., et al. "N4ITK: improved N3 bias correction." IEEE transactions on medical imaging 29.6 (2010): 1310-1320.
[6]Jenkinson, Mark, et al. "Improved optimization for the robust and accurate linear registration and motion correction of brain images." Neuroimage 17.2 (2002): 825-841.
[7]Fischl, Bruce. "FreeSurfer." Neuroimage 62.2 (2012): 774-781.
[8]Warrington, Shaun, et al. "A resource for development and comparison of multimodal brain 3 T MRI harmonisation approaches." Imaging Neuroscience 1 (2023): 1-27.
[9]Koo, Terry K., and Mae Y. Li. "A guideline of selecting and reporting intraclass correlation coefficients for reliability research." Journal of chiropractic medicine 15.2 (2016): 155-163.
[10]Alfaro-Almagro, Fidel, et al. "Image processing and Quality Control for the first 10,000 brain imaging datasets from UK Biobank." Neuroimage 166 (2018): 400-424.

Acknowledgements:
The work is supported by NIH grants: R01AG059874, R01AG058854, U01AG068057, RF1NS136995, and S10OD032285. This work was completed under UK Biobank Resource under application number 11559. Data used in the preparation of this article were obtained from the Human Connectome Project (HCP; https://db.humanconnectome.org/), Adolescent Brain Cognitive Development (ABCD; https://nda.nih.gov/abcd), Open Access Series of Imaging Studies-3 (OASIS-3; https://www.nitrc.org/project/OASIS3), Alzheimer’s Disease Neuroimaging Initiative (ADNI; https://ida.loni.usc.edu/login.jsp?project=ADNI), International Consortium for Brain Mapping (ICBM; https://ida.loni.usc.edu/home/projectPage.jsp?project=ICBM), Parkinson’s Progression Markers Initiative (PPMI; https://www.ppmi-info.org/access-data-specimens/data), ON-HARMONY (https://doi.org/10.1162/imag_a_00042), and UK Biobank (https://www.ukbiobank.ac.uk/enable-your-research/).

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries list provided.

No