Poster No:
1644
Submission Type:
Abstract Submission
Authors:
Leen Hakki1, Melisa Özakçakaya2, Belal Tavashi2, Uluç Pamuk2, Oğuzhan Hüraydın2, Esin Öztürk Işık2, Pınar Özbay2
Institutions:
1Boğaziçi University, Beylikdüzü, Istanbul, 2Boğaziçi University, Istanbul, Istanbul
First Author:
Leen Hakki
Boğaziçi University
Beylikdüzü, Istanbul
Co-Author(s):
Introduction:
Brain extraction is an essential preprocessing step that removes non-brain tissue from MRI images, and accurate extraction is critical for downstream analyses in preclinical MRI. Traditional brain extraction tools such as FSL BET or AFNI often fail to extract rodent brains accurately and require manual correction.
Several methods for rodent brain extraction have been developed, such as Rapid Automatic Tissue Segmentation (RATS) (Oguz et al., 2014) and SHape descriptor selected External Regions after Morphologically filtering (SHERM) (Liu et al., 2020). However, these methods perform differently depending on brain size, shape, and image contrast (Lin et al., 2024). Recent U-Net-based deep learning methods have shown promise for rodent brain extraction by improving robustness (Hsu et al., 2020; Lin et al., 2024).
Errors in brain extraction propagate through downstream analysis pipelines, including segmentation, registration, and morphometry studies, reducing the accuracy of registration and of grey matter (GM) and white matter (WM) segmentation. To overcome these problems, this study uses a U-Net model with a VGG19 encoder (Simonyan & Zisserman, 2014) for automated rat brain extraction. By incorporating U-Net brain extraction into the segmentation pipeline, we ensure reliable inputs for GM and WM segmentation, increasing the efficiency of preclinical MRI workflows.
Methods:
Data Acquisition:
Imaging was performed on 36 female Wistar rats (4-8 weeks old) using a 7T preclinical MRI scanner (MR Solutions Ltd., Guildford, UK). The imaging protocol included the following MRI sequences: T1-weighted, T2-weighted, Inversion Recovery FLASH (IR FLASH), Multiple Echo Multi-Slice (MEMS), and Multi Gradient Echo (MGE). For U-Net model training, T2-weighted images were manually segmented using 3D Slicer, according to the anatomical Sigma Rat Brain Atlas (Liu et al., 2019).
Preprocessing:
MRI images and their corresponding masks were preprocessed by slicing, resizing, normalizing, and applying data augmentation using in-house MATLAB and Python scripts. A total of 26 brain images (572 slices) were used for training, five images (129 slices) for testing, and five images (123 slices) for validation.
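The slicing, resizing, and normalization steps above can be sketched as follows. This is an illustrative reconstruction, not the authors' in-house scripts; the function name, target slice size, and nearest-neighbor resampling are our assumptions.

```python
# Hypothetical preprocessing sketch: split a 3D volume into 2D slices,
# min-max normalize each slice, and resize it for U-Net training.
# (Names and sizes are assumptions, not the study's actual scripts.)
import numpy as np

def preprocess_volume(volume, target_shape=(256, 256)):
    """Turn a 3D volume (H, W, n_slices) into normalized, resized 2D slices."""
    slices = []
    for k in range(volume.shape[-1]):
        sl = volume[..., k].astype(np.float32)
        # Min-max normalize to [0, 1]; guard against constant slices.
        rng = sl.max() - sl.min()
        if rng > 0:
            sl = (sl - sl.min()) / rng
        # Nearest-neighbor resize via index mapping (illustrative only;
        # an interpolating resize would normally be preferred).
        rows = np.linspace(0, sl.shape[0] - 1, target_shape[0]).astype(int)
        cols = np.linspace(0, sl.shape[1] - 1, target_shape[1]).astype(int)
        slices.append(sl[np.ix_(rows, cols)])
    return np.stack(slices)
```

Data augmentation (e.g., flips and small rotations) would then be applied to the stacked slices before training.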
U-Net Deep Learning Method:
This study used a pre-trained VGG19 network (transfer learning) as the encoder to enhance model performance. Attention gates helped the model focus on key regions, and an Atrous Spatial Pyramid Pooling (ASPP) module (Chen et al., 2018) was included to capture multi-scale features. Model performance was evaluated using the Tversky index, Dice coefficient, Jaccard index, and sensitivity.
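The evaluation metrics listed above have standard definitions on binary masks, sketched below for reference. The function names and the Tversky weights (α, β) are our choices; the abstract does not specify the weights used.

```python
# Standard overlap metrics between a predicted and a ground-truth
# binary brain mask (formulas are standard; names are ours).
import numpy as np

def dice(pred, truth, eps=1e-7):
    inter = np.logical_and(pred, truth).sum()
    return (2 * inter + eps) / (pred.sum() + truth.sum() + eps)

def jaccard(pred, truth, eps=1e-7):
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return (inter + eps) / (union + eps)

def tversky(pred, truth, alpha=0.3, beta=0.7, eps=1e-7):
    # alpha/beta weight false positives vs. false negatives;
    # alpha = beta = 0.5 reduces to the Dice coefficient.
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def sensitivity(pred, truth, eps=1e-7):
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return (tp + eps) / (tp + fn + eps)
```

Note that the Jaccard index is always less than or equal to the Dice coefficient for the same pair of masks.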
Segmentation Pipeline:
Brain extraction using the U-Net model was integrated into a segmentation pipeline, and SPM was used to create tissue probability maps (TPMs) for morphometric analysis. Establishing a standardized pipeline ensures reproducibility by minimizing variability from manual adjustments in the brain extraction step.
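The hand-off from U-Net output to SPM can be sketched as below: binarize the predicted probability map, optionally keep only the largest connected component, and mask the image so only brain voxels enter tissue segmentation. This is our own post-processing assumption for illustration, not necessarily the authors' exact pipeline.

```python
# Hedged sketch of the brain-extraction step that feeds SPM:
# threshold the U-Net probability map, keep the largest connected
# component, and zero out non-brain voxels in the image.
import numpy as np
from scipy import ndimage

def extract_brain(image, prob_mask, threshold=0.5):
    """Apply a predicted brain mask to an image array."""
    binary = prob_mask >= threshold
    labels, n = ndimage.label(binary)
    if n > 1:
        # Keep only the largest connected component (assumed to be
        # the brain), discarding small spurious detections.
        sizes = ndimage.sum(binary, labels, range(1, n + 1))
        binary = labels == (np.argmax(sizes) + 1)
    return image * binary
```

The masked image would then be passed to SPM's tissue segmentation to produce the GM, WM, and CSF probability maps.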
Results:
The model achieved a Dice coefficient of 0.97, a Jaccard index of 0.95, and a Tversky index of 0.98. Although the model was trained only on T2-weighted images, it effectively segmented other MRI contrasts, including MGE and T1-weighted images. Figure 1 illustrates how the model closely matches the manual masks and outperforms FSL in segmenting brain regions across different contrasts. Figure 2 shows the pipeline used to generate the TPMs after brain extraction. Mean tissue volumes, calculated on five rats of the same age, were 1.29×10⁶ mm³ (GM), 6.87×10⁵ mm³ (WM), and 4.04×10⁵ mm³ (CSF).

Figure 1. Comparison of FSL and U-Net brain extraction results for T2-weighted, T1-weighted, and MGE MRI contrasts.

Figure 2. Workflow for rat brain segmentation and tissue probability map (TPM) generation.
Conclusions:
Integrating the model into the pipeline automates brain extraction, reducing the time spent on manual corrections. This enables researchers to devote more time to higher-level analyses, especially in large datasets or longitudinal studies.
Acknowledgments
This study is funded by the TUBITAK 1004 grant (22AG016).
Modeling and Analysis Methods:
Segmentation and Parcellation 1
Neuroinformatics and Data Sharing:
Workflows 2
Keywords:
ANIMAL STUDIES
Data Registration
MRI
Segmentation
Workflows
Other - Deep Learning
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio, print, and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I do not want to participate in the reproducibility challenge.
Please indicate below if your study was a "resting state" or "task-activation" study.
Other
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Healthy subjects
Was this research conducted in the United States?
No
Was any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Not applicable
Was any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Yes
Please indicate which methods were used in your research:
Structural MRI
Which processing packages did you use for your study?
SPM
FSL
Other, Please list
-
3D Slicer
Provide references using APA citation style.
1. Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., & Yuille, A. L. (2018). DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4), 834–848.
2. Hsu, C., et al. (2020). Deep-learning-based framework, U-Net, for automatic delineation of rodent brain boundaries in MR images. Journal of Neuroscience Methods, 337, 108669.
3. Lin, Y., et al. (2024). RS2-Net: An end-to-end deep learning framework for rodent skull stripping in multi-center brain MRI. NeuroImage, 298, 120769.
4. Liu, M., et al. (2019). The SIGMA rat brain templates and atlases for multimodal MRI data analysis and visualization. ResearchGate.
5. Liu, Y., et al. (2020). Automatic brain extraction for rodent MRI images. Neuroinformatics, 18, 395–406.
6. Oguz, I., et al. (2014). RATS: Rapid automatic tissue segmentation in rodent brain MRI. Journal of Neuroscience Methods, 221, 175–182.
7. Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
No