Toward full head tissue segmentation: adding the skull

Poster No:

1648 

Submission Type:

Abstract Submission 

Authors:

Romain VALABREGUE1, Ines Khemir1, Mélanie Didier1

Institutions:

1Paris Brain Institute (ICM), Paris, France

First Author:

Romain VALABREGUE, PhD  
Paris Brain Institute (ICM)
Paris, France

Co-Author(s):

Ines Khemir  
Paris Brain Institute (ICM)
Paris, France
Mélanie Didier  
Paris Brain Institute (ICM)
Paris, France

Introduction:

Although regions outside the brain are usually treated as background when performing full tissue segmentation, we believe that full head tissue segmentation is of great interest. We propose extending the SynthSeg deep learning model (Billot et al., 2023) by adding one additional class: the skull. By training the model with labels from only five subjects, we achieve a robust, contrast-agnostic model, which we evaluate across different contrast types.

Methods:

We built our synthetic training dataset from labels drawn from five subjects who underwent a multi-contrast MRI session and a CT scan. The MP2RAGE volume was used to segment the following labels: gray/white matter (GM/WM), amygdala, hippocampus and ventricle labels were computed with FreeSurfer; deep brain nuclei were obtained from AssemblyNet (Coupé et al., 2020) and cerebellar GM from DeepCERES (Morell-Ortega et al., 2024). The skull was manually segmented on the CT scan, and a CSF label was created from the space between the GM and the skull. Finally, we used the MIDA template to obtain the remaining tissue labels outside the skull (Valabregue et al., 2024a). These choices were motivated by the importance of choosing anatomically correct labels, even for synthetic training (Valabregue et al., 2024b). The synthetic dataset with data augmentation was generated as previously described (Valabregue et al., 2024a), and we trained a standard nnU-Net model (Isensee et al., 2021).
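
As an illustration of the label-fusion step, the sketch below merges the different label sources into a single map. File names, label values, and the hole-filling heuristic for the CSF gap are our illustrative assumptions, not the exact pipeline:

    import numpy as np
    import nibabel as nib
    from scipy import ndimage

    # Illustrative file names; all volumes are assumed already co-registered
    # and resampled to the MP2RAGE space.
    aseg = nib.load('freesurfer_labels.nii.gz')         # GM/WM, amygdala, hippocampus, ventricles
    nuclei = nib.load('assemblynet_nuclei.nii.gz')      # deep brain nuclei
    cereb = nib.load('deepceres_cerebellar_gm.nii.gz')  # cerebellar GM
    skull = nib.load('skull_from_ct.nii.gz')            # manual CT segmentation

    merged = np.asarray(aseg.dataobj).astype(np.int16)
    merged[np.asarray(nuclei.dataobj) > 0] = 100        # illustrative label values
    merged[np.asarray(cereb.dataobj) > 0] = 101
    skull_mask = np.asarray(skull.dataobj) > 0
    merged[skull_mask] = 102

    # CSF: the space enclosed by the skull that belongs to neither the skull
    # nor an existing brain label. (Extra-cranial MIDA labels omitted here.)
    inner_cavity = ndimage.binary_fill_holes(skull_mask) & ~skull_mask
    merged[inner_cavity & (merged == 0)] = 103

    nib.save(nib.Nifti1Image(merged, aseg.affine), 'merged_labels.nii.gz')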

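The synthetic image generation itself follows the SynthSeg recipe of drawing random intensities per label, which TorchIO exposes as the RandomLabelsToImage transform. The minimal sketch below uses placeholder transform choices and parameter values rather than our exact augmentation settings:

    import torchio as tio

    # One training subject built from the fused label map.
    subject = tio.Subject(tissues=tio.LabelMap('merged_labels.nii.gz'))

    synth = tio.Compose([
        tio.RandomAffine(scales=0.15, degrees=15),  # spatial augmentation
        tio.RandomElasticDeformation(),
        tio.RandomLabelsToImage(label_key='tissues', image_key='mri'),
        tio.RandomBiasField(),                      # MR-like artifacts
        tio.RandomBlur(),
        tio.RandomNoise(),
        tio.RescaleIntensity(out_min_max=(0, 1)),
    ])

    sample = synth(subject)
    # sample['mri'] is one synthetic image and sample['tissues'] the matching
    # (deformed) label map; such pairs are exported in nnU-Net's dataset
    # format for standard training.

Drawing a new random intensity per label at every iteration is what makes the trained model contrast-agnostic.
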
Results:

We report Dice scores evaluated on 9 subjects with different inputs: a CT scan and four MRI contrasts (FLAIR, MP2RAGE INV2, MP2RAGE UNI, and a UTE (ultra-short echo time) acquisition). We observe that the model automatically adapts to the specific information present in each dataset. As expected, the UNI image yields the best prediction for gray matter (GM), as it provides the best gray/white matter contrast. The CT scan is an extreme case: mostly the skull is visible, leading to the best prediction for the skull and the poorest for the other tissues.
Visual inspection of the results also shows relatively good predictions for the ventricles in the CT scans, despite their varying shapes between subjects. By adjusting the windowing of the CT display, a (noisy) contrast for the cerebrospinal fluid (CSF) can be observed. Additionally, a good prediction for the ventricles improves the accuracy of the spatial prior for the deep nuclei, for which there is no direct contrast.
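
For completeness, the per-label Dice score used in this evaluation is the standard overlap measure; a minimal NumPy implementation (label values are illustrative):

    import numpy as np

    def dice(pred, ref, label):
        # Dice overlap for one label between predicted and reference maps.
        p, r = pred == label, ref == label
        denom = p.sum() + r.sum()
        return 2.0 * np.logical_and(p, r).sum() / denom if denom else float('nan')

    # Example: score every tissue class of one subject for one input contrast.
    # labels = {'GM': 1, 'WM': 2, 'skull': 102}   # illustrative label values
    # scores = {name: dice(pred, ref, lab) for name, lab in labels.items()}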

Conclusions:

Regardless of the potential impact of full-head tissue segmentation, we are convinced that adding anatomically precise tissue labels will enhance the synthetic approach. In the future, we plan to include additional missing tissue compartments, such as the dura mater and vascular tree, which should further improve the precision of GM segmentation.
When examining predictions under very different contrasts, we observe that the model can either precisely follow the tissue contrast or fall back on a spatial prior when no information is present in the data. While this is a convenient feature, it leads us to an important open question: how can we quantify the amount of spatial prior involved in predicting a given tissue boundary?

Modeling and Analysis Methods:

Methods Development 2
Segmentation and Parcellation 1

Keywords:

Data analysis
Segmentation
Other - skull

1|2 Indicates the priority used for review
Supporting Image: FIG_OHBM.png

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio, print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.

Please indicate below if your study was a "resting state" or "task-activation" study.

Other

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was this research conducted in the United States?

No

Was any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Was any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Structural MRI

For human MRI, what field strength scanner do you use?

3.0T

Which processing packages did you use for your study?

Other, please list - torchio and nnU-Net

Provide references using APA citation style.

Billot, B., et al. (2023). SynthSeg: Segmentation of brain MRI scans of any contrast and resolution without retraining. Medical Image Analysis, 86, 102789.
Coupé, P., et al. (2020). AssemblyNet: A large ensemble of CNNs for 3D whole brain MRI segmentation. NeuroImage, 219, 117026.
Isensee, F., et al. (2021). nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18(2), 203–211.
Morell-Ortega, S., et al. (2024). DeepCERES: A deep learning method for cerebellar lobule segmentation using ultra-high resolution multimodal MRI. arXiv:2401.12074.
Valabregue, R., et al. (2024a). Comprehensive analysis of synthetic learning applied to neonatal brain MRI segmentation. Human Brain Mapping.
Valabregue, R., et al. (2024b). Unraveling systematic biases in brain segmentation: Insights from synthetic training. Medical Imaging with Deep Learning.

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries list provided.

No