Poster No:
1638
Submission Type:
Abstract Submission
Authors:
Jong Sung Park1, Juhyung Ha1, Siddhesh Thakur2, Spyridon Bakas2, Eleftherios Garyfallidis1
Institutions:
1Indiana University, Bloomington, IN, 2Indiana University, Indianapolis, IN
First Author:
Co-Author(s):
Introduction:
Despite recent advances, skull stripping remains an unsolved challenge in medical imaging. Classical approaches aimed to provide a fast and robust solution (Smith, 2002). Deep learning models have since been introduced, focusing on enhancing detail (Park, 2024), accommodating multimodal data (Hoopes, 2022), or addressing specific scenarios such as non-human (Hsu, 2020) and pathological imaging (Thakur, 2020). While each method excels in its focus area, creating a single model that addresses multiple tasks remains challenging.
Training a model with diverse data may help, but there is no guarantee of having enough training data for unique cases. Synthetic data (Hoopes, 2022) hold promise for generalizability by allowing greater flexibility in brain shape and intensity variation. However, current models rely heavily on anatomical segmentation priors, limiting their application mostly to human brains.
Methods:
We create random brain images under two assumptions: (1) the brain is the primary object, generally ellipsoidal in shape, and (2) it is encased by a surrounding hollow ellipsoid representing the head.
The steps include creating a hollow, head-sized ellipsoid in a 128×128×128-voxel cube and adding a smaller ellipsoid inside it. Intensities are sampled from random Gaussian distributions. Linear and non-linear transformations are applied to simulate various shapes, modalities, and orientations (see Fig. 1a). Fig. 1b shows example images.
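The two-ellipsoid construction above can be sketched as follows. This is a minimal illustration, not the training pipeline itself: the semi-axis fractions, intensity ranges, and transform strength are all assumed values, and only a linear transform is shown (the non-linear deformations are omitted).

```python
import numpy as np
from scipy.ndimage import affine_transform

def random_synthetic_head(size=128, rng=None):
    """Sketch of the two-ellipsoid synthetic image: a hollow head shell
    enclosing a smaller brain ellipsoid, with random Gaussian intensities
    and a random linear transform. All parameter ranges are illustrative."""
    rng = np.random.default_rng(rng)
    coords = np.indices((size, size, size)) - size // 2  # centered voxel grid

    # Outer (head) and inner (brain) ellipsoid semi-axes, as fractions of size.
    head_ax = rng.uniform(0.35, 0.45, 3) * size
    brain_ax = rng.uniform(0.20, 0.30, 3) * size
    d_head = ((coords / head_ax[:, None, None, None]) ** 2).sum(axis=0)
    d_brain = ((coords / brain_ax[:, None, None, None]) ** 2).sum(axis=0)

    shell = (d_head <= 1.0) & (d_brain > 1.0)  # hollow shell representing the head
    brain = d_brain <= 1.0                     # primary object (brain)

    # Intensities drawn from a random Gaussian distribution per structure.
    img = np.zeros((size, size, size), dtype=np.float32)
    for mask in (shell, brain):
        img[mask] = rng.normal(rng.uniform(0.2, 1.0), rng.uniform(0.01, 0.1), mask.sum())

    # Random linear transform about the center to vary shape and orientation.
    A = np.eye(3) + rng.uniform(-0.15, 0.15, (3, 3))
    offset = (np.eye(3) - A) @ np.full(3, size / 2)
    img = affine_transform(img, A, offset=offset, order=1)
    label = affine_transform(brain.astype(np.float32), A, offset=offset, order=0)
    return img, label > 0.5
```

Because every image is generated on the fly, the training set is effectively unlimited and carries no anatomical prior beyond the two shape assumptions stated above.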
The model is trained to predict the main object (the brain) and its boundaries. After prediction, only the main object is selected. Dilation is applied to compensate for the loss of boundary information, and the overlapping regions between the dilated mask and the predicted boundaries are added back. Fig. 1b shows the post-processing effects on a T1-weighted image from the IXI dataset.
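The post-processing steps described above might be sketched as follows. The threshold and dilation count are assumed values, and selecting the largest connected component is one plausible reading of "only the main object is selected":

```python
import numpy as np
from scipy.ndimage import label, binary_dilation

def postprocess(brain_prob, boundary_prob, thresh=0.5, dilate_iter=2):
    """Keep the largest predicted object, then recover boundary voxels that
    overlap a dilation of that object. Thresholds are assumptions."""
    brain = brain_prob > thresh
    labels, n = label(brain)
    if n == 0:
        return brain
    # Select the largest connected component as the main object.
    sizes = np.bincount(labels.ravel())[1:]
    main = labels == (np.argmax(sizes) + 1)
    # Dilate to compensate for lost boundary information, then take the
    # overlap between the dilated mask and the predicted boundary class.
    dilated = binary_dilation(main, iterations=dilate_iter)
    recovered = dilated & (boundary_prob > thresh)
    return main | recovered
```

This keeps spurious detections away from the final mask while restoring thin cortical boundary voxels that the object class alone would miss.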
Results:
Our model was tested qualitatively and quantitatively on various datasets representing multimodal, multispecies, and pathological data. BET (Smith, 2002) and SynthStrip (Hoopes, 2022) were used for comparison due to their generalizability without fine-tuning.
Fig. 2a shows predicted masks for multiple datasets: IXI (healthy adults), TCGA (Scarpace, 2016) (tumor), MINDS (Hata, 2023) (marmosets), and CAMRI (Hsu, 2021) (rodents). Despite the variation in brain types, our method provides robust results on all images without any fine-tuning on real data. However, small segmentation errors remain in low-contrast regions.
Quantitative results across three datasets, TCGA, MINDS, and LPBA40 (Shattuck, 2008) (healthy adults), are plotted in Fig. 2b. The metrics show the stability of our model's predictions, suggesting high generalizability across data.
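The abstract does not name the specific metrics plotted in Fig. 2b; as an illustration, the Dice similarity coefficient is a standard choice for scoring a predicted brain mask against a ground-truth mask:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0
```

A narrow spread of such per-subject scores across TCGA, MINDS, and LPBA40 is what "stability of our model's predictions" refers to.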
Overall, our method demonstrates robustness across human and non-human mammalian datasets, with consistently high performance even on challenging data such as tumor cases and non-human species.

Conclusions:
Recent work (Ma, 2024) has proposed that a single deep learning model can generalize to many types of images when trained on multiple unique datasets. Building on this idea, our findings suggest that even models trained on synthetic data with minimal assumptions can achieve strong performance across diverse skull-stripping tasks, despite the lack of any anatomical priors from a template or an image.
Nevertheless, the model suffers from segmentation errors, especially around the cortical surface and the eyes. Adding regularized deformations or weak assumptions about non-brain tissues to the dataset could prove advantageous. Future work may incorporate additional common brain features to enhance model performance on complex regions.
In conclusion, while specialized, data-specific models exist, the inevitable priors and assumptions in their training data have limited their generalizability. By demonstrating that completely synthetic data with minimal assumptions can be sufficient, our approach offers a foundation for various semi-supervised medical image segmentation tasks.
Modeling and Analysis Methods:
Methods Development 2
Segmentation and Parcellation 1
Keywords:
Machine Learning
Segmentation
1|2Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.
Please indicate below if your study was a "resting state" or "task-activation" study.
Other
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Patients
Was this research conducted in the United States?
Yes
Are you Internal Review Board (IRB) certified?
Please note: Failure to have IRB, if applicable will lead to automatic rejection of abstract.
Not applicable
Were any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Not applicable
Were any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
Structural MRI
Diffusion MRI
For human MRI, what field strength scanner do you use?
If Other, please list
-
Varies by dataset
Which processing packages did you use for your study?
Other, Please list
-
DIPY
Provide references using APA citation style.
Hata, J. (2023). Multi-modal brain magnetic resonance imaging database covering marmosets with a wide age range. Scientific Data, 10(1), 221.
Hoopes, A. (2022). SynthStrip: skull-stripping for any brain image. NeuroImage, 260, 119474.
Hsu, L. M. (2020). Automatic skull stripping of rat and mouse brain MRI data using U-Net. Frontiers in neuroscience, 14, 568614.
Hsu, L.-M. (2021). CAMRI Mouse Brain MRI Data [Dataset]. https://doi.org/10.18112/openneuro.ds002868.v1.0.1
Ma, J. (2024). Segment anything in medical images. Nature Communications, 15(1), 654.
Park, J. S. (2024). Multi-scale V-net architecture with deep feature CRF layers for brain extraction. Communications Medicine, 4(1), 29.
Scarpace, L. (2016). Radiology data from the cancer genome atlas glioblastoma multiforme [TCGA-GBM] collection. The Cancer Imaging Archive, 11(4), 1.
Shattuck, D. W. (2008). Construction of a 3D probabilistic atlas of human cortical structures. Neuroimage, 39(3), 1064-1080.
Smith, S. M. (2002). Fast robust automated brain extraction. Human brain mapping, 17(3), 143-155.
Thakur, S. (2020). Brain extraction on MRI scans in presence of diffuse glioma: Multi-institutional performance evaluation of deep learning methods and robust modality-agnostic training. Neuroimage, 220, 117081.
No