Poster No:
1500
Submission Type:
Abstract Submission
Authors:
Ruo-Ci Yu1, Yi-Ping Chao1
Institutions:
1Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan, Taiwan
First Author:
Ruo-Ci Yu
Department of Computer Science and Information Engineering, Chang Gung University
Taoyuan, Taiwan
Co-Author:
Yi-Ping Chao
Department of Computer Science and Information Engineering, Chang Gung University
Taoyuan, Taiwan
Introduction:
Parkinson's Disease (PD) is a neurodegenerative disorder that affects motor function. In advanced stages, Deep Brain Stimulation (DBS) delivers high-frequency stimulation via electrodes implanted in the subthalamic nucleus (STN) to alleviate symptoms. However, post-operative imaging for DBS patients is challenging: while high-field 3T MRI provides superior image quality, it poses risks such as heating or displacement of metallic implants due to stronger RF pulses and magnetic forces. In contrast, 1.5T MRI is safer but yields lower-quality images with reduced signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Leveraging advancements in GAN-based style transfer, we aim to synthesize high-field quality images from low-field MRI scans using paired real-world 1.5T and 3T MRI data. This approach enhances image quality and clinical feasibility, enabling safer and more effective post-operative monitoring of DBS patients.
Methods:
The study used 102 paired T1-weighted MRI volumes from the ADNI (Alzheimer's Disease Neuroimaging Initiative) dataset, including healthy controls and PD patients. Data were split into 80% training, 10% validation, and 10% testing. Preprocessing included zero-padding, linear registration for spatial alignment, intensity normalization, and masking to remove non-brain areas. Histogram equalization was also tested to address contrast discrepancies. Three models were evaluated: mDCSRN-GAN (multi-level densely connected super-resolution network), Pix2pix, and Pix2pixHD. First, mDCSRN-GAN is a 3D DenseNet-based GAN tailored to volumetric MRI; patches of size (64, 64, 64) were extracted for training. Second, Pix2pix is a 2D conditional GAN with a U-Net generator and PatchGAN discriminator, efficient but limited in capturing fine details. Last, Pix2pixHD is an advanced GAN with multi-scale generators and discriminators, optimized with NVIDIA's MONAI (Medical Open Network for AI) components and mixed-precision training for high-resolution image synthesis.
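The patch-based preprocessing described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the study's actual pipeline: the min-max normalization scheme, the threshold-based stand-in for a brain mask, and the synthetic volume (in place of a registered scan) are all illustrative choices.

```python
import numpy as np

def normalize(vol):
    """Min-max normalize a volume to [0, 1] (illustrative scheme)."""
    vmin, vmax = vol.min(), vol.max()
    return (vol - vmin) / (vmax - vmin + 1e-8)

def extract_patches(vol, size=64, stride=64):
    """Extract non-overlapping (size, size, size) patches,
    as used for mDCSRN-GAN training."""
    patches = []
    x, y, z = vol.shape
    for i in range(0, x - size + 1, stride):
        for j in range(0, y - size + 1, stride):
            for k in range(0, z - size + 1, stride):
                patches.append(vol[i:i + size, j:j + size, k:k + size])
    return np.stack(patches)

# Demo on a synthetic 128^3 "volume" standing in for a registered scan
vol = np.random.rand(128, 128, 128).astype(np.float32)
mask = vol > 0.05               # stand-in for a real brain mask
masked = normalize(vol) * mask  # zero out non-brain voxels
patches = extract_patches(masked)
print(patches.shape)            # (8, 64, 64, 64)
```

In practice the registration step would be performed first (e.g. with FSL, as listed under processing packages), and the mask would come from a brain-extraction tool rather than a threshold.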
Results:
Performance of mDCSRN-GAN, Pix2pix, and Pix2pixHD was compared across multiple experiments and evaluated using SSIM, PSNR, MSE, and RMSE. The quantitative results are presented in Figure 1, and the corresponding visualizations are shown in Figure 2. With mDCSRN-GAN, basic preprocessing achieved SSIM = 0.7481 and PSNR = 31.63; histogram equalization improved SSIM to 0.8186 but increased MSE, indicating a trade-off between structural similarity and pixel-level accuracy. The 3D architecture struggled to balance perceptual quality with spatial fidelity. With Pix2pix, optimized preprocessing improved SSIM and RMSE, demonstrating its ability to enhance structural similarity; however, its simpler architecture lacked the capacity to generate fine details compared to Pix2pixHD. Pix2pixHD outperformed the other models, with SSIM = 0.8367 and PSNR = 36.83 in its standard configuration. Incorporating MONAI methods and mixed-precision training further improved SSIM to 0.8536 while slightly reducing PSNR, reflecting a focus on perceptual quality. Overall, Pix2pixHD achieved superior structural fidelity and minimized reconstruction errors, making it the most effective model.
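The per-voxel metrics used above can be sketched with plain NumPy. This is a minimal illustration, not the study's evaluation code; SSIM involves local windowed statistics and is typically computed with a library such as scikit-image, so it is omitted here. The `data_range` of 1.0 assumes images normalized to [0, 1].

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images/volumes."""
    return float(np.mean((a - b) ** 2))

def rmse(a, b):
    """Root mean squared error."""
    return float(np.sqrt(mse(a, b)))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(a, b)
    return float(10 * np.log10(data_range ** 2 / m)) if m > 0 else float("inf")

# Toy example: a flat offset of 0.1 against a zero reference
ref = np.zeros((8, 8), dtype=np.float32)   # e.g. the real 3T image
gen = np.full((8, 8), 0.1, dtype=np.float32)  # e.g. the synthesized image
print(mse(ref, gen))   # ≈ 0.01
print(rmse(ref, gen))  # ≈ 0.1
print(psnr(ref, gen))  # ≈ 20.0 dB
```

Note the trade-off visible in the results: SSIM rewards preserved local structure, while MSE/PSNR penalize any pixel-level deviation, so an operation like histogram equalization can raise one while worsening the other.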

·Figure 1. Evaluation Results for 10 Testing data

·Figure 2. Comparison of Sagittal Slice Results Across Different Experiments
Conclusions:
This study demonstrates the effectiveness of GAN-based methods for generating high-field-quality MRI images from low-field data. Pix2pixHD outperformed the other models in both perceptual quality and structural preservation, making it the most suitable model for high-resolution MRI image generation. While mDCSRN-GAN showed promise for volumetric data, it faced challenges in optimizing 3D spatial fidelity. Pix2pix, despite its efficiency, lacked the ability to preserve high-frequency details. Future work will address dataset limitations by exploring unsupervised approaches such as 3D-CycleGAN to utilize unpaired 1.5T and 3T MRI data. This could improve scalability and generalization while preserving volumetric information for enhanced clinical applications.
Modeling and Analysis Methods:
Methods Development 1
Neuroinformatics and Data Sharing:
Informatics Other 2
Novel Imaging Acquisition Methods:
Anatomical MRI
Keywords:
Machine Learning
MRI
Open Data
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.
Please indicate below if your study was a "resting state" or "task-activation" study.
Resting state
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Patients
Was this research conducted in the United States?
No
Was any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Yes
Was any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
Structural MRI
For human MRI, what field strength scanner do you use?
1.5T
3.0T
Which processing packages did you use for your study?
FSL
Provide references using APA citation style.
[1] Chen, Y. (2018). Efficient and Accurate MRI Super-Resolution Using a Generative Adversarial Network and 3D Multi-level Densely Connected Network. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part I. Springer-Verlag, Berlin, Heidelberg, 91–99.
[2] Isola, P., Zhu, J., Zhou, T., & Efros, A. A. (2017). Image-to-Image Translation with Conditional Adversarial Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017, pp. 5967-5976.
[3] Wang, T., Liu, M., Zhu, J., Tao, A., Kautz, J., & Catanzaro, B. (2018). High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 8798-8807, doi: 10.1109/CVPR.2018.00917.
No