Foundation model for accurate brain segmentation in neonatal MRI

Poster No:

1655 

Submission Type:

Late-Breaking Abstract Submission 

Authors:

Bowen Xin1, Alex Pagnozzi2, Tisha Jhabak1, Jess Bugeja2, Kerstin Pannek2, DanaKai Bradford2, Jurgen Fripp2

Institutions:

1Australian e-Health Research Centre, CSIRO, Sydney, Australia, 2Australian e-Health Research Centre, CSIRO, Brisbane, Australia

First Author:

Bowen Xin  
Australian e-Health Research Centre, CSIRO
Sydney, Australia

Co-Author(s):

Alex Pagnozzi  
Australian e-Health Research Centre, CSIRO
Brisbane, Australia
Tisha Jhabak  
Australian e-Health Research Centre, CSIRO
Sydney, Australia
Jess Bugeja  
Australian e-Health Research Centre, CSIRO
Brisbane, Australia
Kerstin Pannek  
Australian e-Health Research Centre, CSIRO
Brisbane, Australia
DanaKai Bradford  
Australian e-Health Research Centre, CSIRO
Brisbane, Australia
Jurgen Fripp  
Australian e-Health Research Centre, CSIRO
Brisbane, Australia

Late Breaking Reviewer(s):

Fernando Barrios, Ph.D.  
Universidad Nacional Autónoma de México
Querétaro, Querétaro
Yi-Ju Lee, Dr.  
Academia Sinica
Taipei City, Taipei City
Casey Paquola  
Institute for Neuroscience and Medicine, INM-7, Forschungszentrum Jülich
Jülich, NA

Introduction:

Accurate segmentation of neonatal brain MRI is of fundamental importance for studying normal and abnormal brain development, and for enabling early identification of at-risk infants for timely intervention. However, neonatal brain segmentation is challenging for several reasons, including the overlapping intensities of white matter and gray matter, which cause low tissue contrast in brain MRI (Sun et al., 2021). Conventional atlas-based methods have difficulty accurately predicting regions with significant anatomical differences (Cabezas et al., 2011), while supervised deep learning methods often struggle to generalise beyond the training domain (Shen et al., 2023). Recently, foundation models (such as MedSAM (Ma et al., 2024) and BrainSegFounder (Cox et al., 2024)) have shown great potential to improve segmentation accuracy and model generalisability by pretraining deep learning models on large and diverse datasets. In this study, we investigated the feasibility of a foundation model (pretrained on a large-scale brain dataset before fine-tuning on a neonatal dataset) for accurate brain tissue segmentation in neonatal MRI.

Methods:

The foundation model was pretrained on 82,800 unlabelled T1 and T2 brain MRIs from the UK Biobank (UKB) dataset (Littlejohns et al., 2020), and then fine-tuned on 885 neonatal T2 brain MRIs with 87 brain region labels from the developing Human Connectome Project (dHCP) dataset (Makropoulos et al., 2018). A vision transformer model, SwinUNETR (Tang et al., 2022), was used, with the workflow illustrated in Figure 1. In the first stage (Figure 1a), the model encoded anatomical brain structures from the large-scale unlabelled UKB dataset using self-supervised mechanisms, including masked volume inpainting, 3D image rotation prediction, and contrastive coding (Cox et al., 2024). The foundation model then learned downstream neonate-specific attributes from the neonatal dHCP dataset (Makropoulos et al., 2018) via self-supervised learning (Figure 1b) and fine-tuning (Figure 1c). Tissue predictions were independently assessed on the dHCP testing set using the Dice score, Jaccard index, recall, and precision (Figure 1d); mean values and standard deviations across all brain regions were reported. We statistically compared SwinUNETR pretrained on the UKB dataset against the same model without UKB pretraining using paired t-tests.
Supporting Image: Figure1.png
   ·Figure 1. Overall study design. (a) Firstly, the SwinUNETR model was pretrained on 82,800 T1 and T2 MRIs from the UKB dataset via self-supervised learning. (b) Secondly, the SwinUNETR model was pretrained on the neonatal dHCP dataset via self-supervised learning, then fine-tuned for segmentation.
 
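The per-region evaluation metrics named above can be computed directly from the predicted and reference label maps. The following is an illustrative sketch (not the authors' code), assuming integer label volumes in which each of the 87 dHCP regions is a distinct label value:

```python
import numpy as np

def region_metrics(pred, truth, label):
    """Overlap metrics for one labelled region.

    pred, truth: integer label arrays of identical shape
                 (e.g. 3D segmentation volumes).
    label: the region index to evaluate.
    """
    p = (pred == label)
    t = (truth == label)
    tp = np.logical_and(p, t).sum()   # voxels correctly assigned to the region
    fp = np.logical_and(p, ~t).sum()  # voxels wrongly assigned to the region
    fn = np.logical_and(~p, t).sum()  # region voxels that were missed
    # Guard against empty regions (define the metric as 1.0 when undefined).
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    jaccard = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    return dice, jaccard, recall, precision

# Toy 1D example: 2 true positives, 1 false positive, 1 false negative.
pred = np.array([0, 1, 1, 1, 0])
truth = np.array([0, 0, 1, 1, 1])
d, j, r, p = region_metrics(pred, truth, label=1)
```

In a study like this one, these per-region values would be averaged over regions and subjects to produce the reported means and standard deviations.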

Results:

The neonatal brain model was trained on 80% of the dHCP dataset (708 T2 MRIs) and independently tested on a holdout set (the remaining 20%, 177 T2 MRIs). Figure 2a-2d shows that our foundation model (SwinUNETR with UKB pretraining) outperformed its counterpart (SwinUNETR without UKB pretraining) in terms of Dice score (0.869 +/- 0.077 vs 0.858 +/- 0.110, p-value 0.025), Jaccard index (0.773 +/- 0.080 vs 0.761 +/- 0.107, p-value 0.006), and recall (0.850 +/- 0.087 vs 0.836 +/- 0.115, p-value 0.005). The foundation model also achieved higher precision (0.897 +/- 0.044 vs 0.893 +/- 0.085), although this difference was not statistically significant (p-value 0.64). Notably, the foundation model produced more stable segmentation results on all metrics (lower standard deviations). Figure 2e-2f shows a visual comparison of the input MRIs, predictions from SwinUNETR without UKB pretraining, predictions from SwinUNETR with UKB pretraining, and the ground truth.
Supporting Image: Figure2.png
   ·Figure 2. Quantitative and qualitative comparison of the foundation model (SwinUNETR with UKB pretraining) and SwinUNETR without UKB pretraining. Figure 2a-2d shows the bar plot comparisons of Dice score, Jaccard index, recall, and precision.
 
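The paired t-test used for these comparisons tests whether the per-subject metric differences between the two models have zero mean. A minimal sketch is below; the Dice values are illustrative placeholders, not the study's data, and in practice one would typically call scipy.stats.ttest_rel to obtain the p-value as well:

```python
import math

def paired_t(a, b):
    """Paired t-statistic for matched samples a_i, b_i (e.g. per-subject
    Dice scores from two models evaluated on the same test set)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                              # standard error of mean diff
    return mean / se  # compare against a t distribution with n - 1 dof

# Hypothetical per-subject Dice scores for the two models.
with_pretraining = [0.87, 0.88, 0.86, 0.89, 0.87]
without_pretraining = [0.85, 0.86, 0.85, 0.88, 0.84]
t_stat = paired_t(with_pretraining, without_pretraining)
```

The pairing matters here because both models are evaluated on the same 177 test subjects, so per-subject differences remove between-subject variability from the comparison.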

Conclusions:

This study highlights the strength of foundation models: leveraging brain anatomy knowledge gained from large-scale adult datasets improved prediction accuracy, segmentation stability, training convergence, and generalisability across the lifespan.

Disorders of the Nervous System:

Neurodevelopmental/ Early Life (eg. ADHD, autism) 2

Modeling and Analysis Methods:

Methods Development
Segmentation and Parcellation 1

Novel Imaging Acquisition Methods:

Anatomical MRI

Keywords:

Machine Learning
Neurological
STRUCTURAL MRI

1|2Indicates the priority used for review


Please indicate below if your study was a "resting state" or "task-activation” study.

Other

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was this research conducted in the United States?

No

Were any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Were any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

No

Please indicate which methods were used in your research:

Structural MRI

References:

Cabezas, M., Oliver, A., Lladó, X., Freixenet, J., & Bach Cuadra, M. (2011). A review of atlas-based segmentation for magnetic resonance brain images. Computer Methods and Programs in Biomedicine, 104(3), e158–e177. https://doi.org/10.1016/j.cmpb.2011.07.015

Cox, J., Liu, P., Stolte, S. E., Yang, Y., Liu, K., See, K. B., Ju, H., & Fang, R. (2024). BrainSegFounder: Towards 3D foundation models for neuroimage segmentation. Medical Image Analysis, 97, 103301. https://doi.org/10.1016/j.media.2024.103301

Makropoulos, A., Robinson, E. C., Schuh, A., Wright, R., Fitzgibbon, S., Bozek, J., Counsell, S. J., Steinweg, J., Vecchiato, K., Passerat-Palmbach, J., Lenz, G., Mortari, F., Tenev, T., Duff, E. P., Bastiani, M., Cordero-Grande, L., Hughes, E., Tusor, N., Tournier, J.-D., … Rueckert, D. (2018). The developing human connectome project: A minimal processing pipeline for neonatal cortical surface reconstruction. NeuroImage, 173, 88–112. https://doi.org/10.1016/j.neuroimage.2018.01.054

Ma, J., He, Y., Li, F., Han, L., You, C., & Wang, B. (2024). Segment anything in medical images. Nature Communications, 15(1), 654. https://doi.org/10.1038/s41467-024-44824-z

Shen, D. D., Bao, S. L., Wang, Y., Chen, Y. C., Zhang, Y. C., Li, X. C., Ding, Y. C., & Jia, Z. Z. (2023). An automatic and accurate deep learning-based neuroimaging pipeline for the neonatal brain. Pediatric Radiology, 53(8), 1685–1697. https://doi.org/10.1007/s00247-023-05620-x

Sun, Y., Gao, K., Wu, Z., Li, G., Zong, X., Lei, Z., Wei, Y., Ma, J., Yang, X., Feng, X., Zhao, L., Le Phan, T., Shin, J., Zhong, T., Zhang, Y., Yu, L., Li, C., Basnet, R., Ahmad, M. O., … Wang, L. (2021). Multi-Site Infant Brain Segmentation Algorithms: The iSeg-2019 Challenge. IEEE Transactions on Medical Imaging, 40(5), 1363–1376. IEEE Transactions on Medical Imaging. https://doi.org/10.1109/TMI.2021.3055428

Tang, Y., Yang, D., Li, W., Roth, H. R., Landman, B., Xu, D., Nath, V., & Hatamizadeh, A. (2022). Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis. 20730–20740. https://openaccess.thecvf.com/content/CVPR2022/html/Tang_Self-Supervised_Pre-Training_of_Swin_Transformers_for_3D_Medical_Image_Analysis_CVPR_2022_paper.html

Littlejohns, T. J., Holliday, J., Gibson, L. M., Garratt, S., Oesingmann, N., Alfaro-Almagro, F., … Allen, N. E. (2020). The UK Biobank imaging enhancement of 100,000 participants: Rationale, data collection, management and future directions. Nature Communications, 11(1), 2624. https://doi.org/10.1038/s41467-020-15948-9
