A Comprehensive Framework for Automated Segmentation of Perivascular Spaces in MRI with the nnU-Net

Poster No:

1642 

Submission Type:

Abstract Submission 

Authors:

William Pham1, Alexander Jarema2, Donggyu Rim1, Zhibin Chen3, Mohamed Salah Khlif3, Vaughan Macefield1, Luke Henderson4, Amy Brodtmann5

Institutions:

1Monash University, Melbourne, Victoria, 2Alfred Hospital, Melbourne, Victoria, 3Monash University, Melbourne, Victoria, 4University of Sydney, Sydney, New South Wales, 5Cognitive Health Initiative, School of Translational Medicine (STM), Monash University, Melbourne, Victoria

First Author:

William Pham  
Monash University
Melbourne, Victoria

Co-Author(s):

Alexander Jarema  
Alfred Hospital
Melbourne, Victoria
Donggyu Rim  
Monash University
Melbourne, Victoria
Zhibin Chen  
Monash University
Melbourne, Victoria
Mohamed Salah Khlif  
Monash University
Melbourne, Victoria
Vaughan Macefield  
Monash University
Melbourne, Victoria
Luke Henderson  
University of Sydney
Sydney, New South Wales
Amy Brodtmann  
Cognitive Health Initiative, School of Translational Medicine (STM), Monash University
Melbourne, Victoria

Introduction:

Perivascular spaces (PVS) are fluid-filled channels between the vascular lumen, where blood flows, and the interstitial space, where neurons reside (Wardlaw et al., 2020). Enlargement of PVS is common in neurodegenerative disorders, including cerebral small vessel disease, Alzheimer's disease, and Parkinson's disease, and may reflect impaired clearance pathways. Advances in biomedical image processing have produced various approaches for the automated segmentation of PVS from anatomical MRI (Pham et al., 2022; Waymont et al., 2024). We aimed to optimise a widely used deep learning framework, the no-new-U-Net with residual encoder (nnU-Net; Isensee et al., 2021, 2024), for PVS segmentation.

Methods:

In 30 healthy participants (mean±SD age: 50±18.9 years; 13 women), T1-weighted images were acquired on three MRI scanners (3T Siemens Tim Trio, 3T Philips Achieva, and 7T Siemens Magnetom). After standard preprocessing, PVS were manually delineated on ten predefined axial slices in each participant's T1-weighted image. nnU-Net was then used to segment the remaining unlabelled slices via a sparse annotation strategy (Gotkowski et al., 2024); these predictions were manually corrected and formed the training dataset. In total, 11 models were compared using different strategies for image handling, preprocessing, and semi-supervised learning with pseudo-labels. Model performance was evaluated using five-fold cross-validation (5FCV), with the Sørensen–Dice similarity coefficient (DSC) as the primary metric.
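For illustration, a minimal Python sketch of the evaluation metric is given below. It computes the DSC restricted to the manually annotated axial slices, mirroring the sparse-annotation setup; the function name, array shapes, and slice indices are illustrative assumptions, not the authors' implementation (model training itself used the standard nnU-Net pipeline).

# Minimal sketch, not the authors' code: Sørensen–Dice similarity coefficient (DSC)
# evaluated only on manually annotated axial slices, as in a sparse-annotation setup
# where unlabelled slices carry no ground truth.
import numpy as np

def dice_on_annotated_slices(pred, truth, annotated_slices):
    """DSC between binary 3D masks (z, y, x), restricted to the given axial slices."""
    p = pred[annotated_slices].astype(bool)
    t = truth[annotated_slices].astype(bool)
    intersection = np.logical_and(p, t).sum()
    denom = p.sum() + t.sum()
    # Convention: DSC = 1 when both masks are empty on the evaluated slices.
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Hypothetical example with random masks and ten predefined axial slices.
rng = np.random.default_rng(0)
pred = rng.integers(0, 2, size=(160, 256, 256), dtype=np.uint8)
truth = rng.integers(0, 2, size=(160, 256, 256), dtype=np.uint8)
slices = [20, 35, 50, 65, 80, 95, 110, 125, 140, 155]
print(f"DSC on annotated slices: {dice_on_annotated_slices(pred, truth, slices):.3f}")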

Results:

The voxel-spacing-agnostic model (mean±SD DSC=64.3±3.3%) outperformed models that resampled images to a common resolution (DSC=40.5–55%). Model performance improved substantially following iterative label cleaning (DSC=85.7±1.2%). Semi-supervised learning with pseudo-labels (n=12,740) from 18 additional datasets improved the agreement between actual and predicted PVS cluster counts (DSC=85.6±1.4%; Lin's concordance correlation coefficient=0.89, 95% CI: 0.82–0.94). The final model demonstrated robust performance across 3T and 7T MRI scans, achieving DSCs of 87.3±3.3% and 82.1±8.4%, respectively. Moreover, the model's capabilities were extended to detect PVS in the midbrain (DSC=64.3±6.5%) and hippocampus (DSC=67.8±5%).
Supporting Image: Figure_1_OHBM_v2.png
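For reference, Lin's concordance correlation coefficient reported above can be computed as in the following minimal Python sketch; the function and the cluster counts shown are hypothetical illustrations, not study data.

# Minimal sketch, not the authors' code: Lin's concordance correlation coefficient (CCC)
# between manually counted and model-predicted PVS clusters per scan.
import numpy as np

def lins_ccc(x, y):
    """Lin's CCC between two paired 1D samples (population moments)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical PVS cluster counts for eight scans (illustration only).
manual    = [42, 57, 31, 68, 49, 75, 22, 60]
predicted = [40, 55, 35, 70, 45, 72, 25, 63]
print(f"Lin's CCC: {lins_ccc(manual, predicted):.3f}")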
 

Conclusions:

Our deep learning models provide a robust and comprehensive framework for the automated quantification of PVS in brain MRI.

Modeling and Analysis Methods:

Methods Development 2
Segmentation and Parcellation 1

Keywords:

Data analysis
Machine Learning
MRI
Open-Source Code
Open-Source Software
Segmentation
STRUCTURAL MRI
Other - Perivascular spaces

1|2 Indicates the priority used for review

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.

Please indicate below if your study was a "resting state" or "task-activation" study.

Other

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Patients

Was this research conducted in the United States?

No

Was any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Was any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Structural MRI

For human MRI, what field strength scanner do you use?

3.0T
7T

Which processing packages did you use for your study?

FSL
FreeSurfer
Other, please list - FastSurfer, nnU-Net

Provide references using APA citation style.

1. Gotkowski, K., Lüth, C., Jäger, P. F., Ziegler, S., Krämer, L., Denner, S., Xiao, S., Disch, N., Maier-Hein, K. H., & Isensee, F. (2024). Embarrassingly Simple Scribble Supervision for 3D Medical Segmentation (arXiv:2403.12834). arXiv. http://arxiv.org/abs/2403.12834
2. Isensee, F., Jaeger, P. F., Kohl, S. A. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18(2), 203–211. https://doi.org/10.1038/s41592-020-01008-z
3. Isensee, F., Wald, T., Ulrich, C., Baumgartner, M., Roy, S., Maier-Hein, K., & Jaeger, P. F. (2024). nnU-Net Revisited: A Call for Rigorous Validation in 3D Medical Image Segmentation (arXiv:2404.09556). arXiv. http://arxiv.org/abs/2404.09556
4. Pham, W., Lynch, M., Spitz, G., O’Brien, T., Vivash, L., Sinclair, B., & Law, M. (2022). A critical guide to the automated quantification of perivascular spaces in magnetic resonance imaging. Frontiers in Neuroscience, 16, 1–27. https://doi.org/10.3389/fnins.2022.1021311
5. Wardlaw, J. M., Benveniste, H., Nedergaard, M., Zlokovic, B. V., Mestre, H., Lee, H., Doubal, F. N., Brown, R., Ramirez, J., MacIntosh, B. J., Tannenbaum, A., Ballerini, L., Rungta, R. L., Boido, D., Sweeney, M., Montagne, A., Charpak, S., Joutel, A., Smith, K. J., & Black, S. E. (2020). Perivascular spaces in the brain: Anatomy, physiology and pathology. Nature Reviews Neurology, 16(3), 137–153. https://doi.org/10.1038/s41582-020-0312-z
6. Waymont, J. M. J., Valdés Hernández, M. D. C., Bernal, J., Duarte Coello, R., Brown, R., Chappell, F. M., Ballerini, L., & Wardlaw, J. M. (2024). Systematic review and meta-analysis of automated methods for quantifying enlarged perivascular spaces in the brain. NeuroImage, 297, 120685. https://doi.org/10.1016/j.neuroimage.2024.120685

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries provided.

No