Poster No:
1085
Submission Type:
Abstract Submission
Authors:
Henry Njoku1, Chika Ojiako2
Institutions:
1University of Ibadan, Nigeria, 2University of Lagos, Nigeria
First Author:
Co-Author:
Introduction:
Brain tumors, particularly gliomas, are among the deadliest cancers, with gliomas representing about 30% of all brain and central nervous system tumors and 80% of malignant brain tumors. The prognosis for glioma patients remains poor, especially for those with glioblastoma, which has a median survival of just 12–15 months. This high mortality rate is largely due to the infiltrative and complex nature of gliomas, which complicates complete surgical removal and often leads to recurrence from residual tumor cells.
Magnetic resonance imaging (MRI) is essential in diagnosing and planning treatment for brain tumors because it offers high soft tissue contrast and detailed insights into tumor size, location, and relation to nearby structures. However, manually segmenting tumors on MRI scans is time-intensive, subjective, and prone to variability among clinicians. This challenge has motivated the development of automated brain tumor segmentation methods.
The Brain Tumor Segmentation (BraTS) Challenge has emerged as a key platform for testing advanced machine learning models in glioma detection, segmentation, and classification. It provides a large, annotated dataset of multi-modal MRI scans: T1-weighted, T1 contrast-enhanced, T2-weighted, and FLAIR images. Each MRI modality contributes unique and complementary information about the tumor and surrounding edema, supporting greater segmentation accuracy.
Methods:
The Swin UNETR architecture integrates the hierarchical feature extraction of Swin transformers with the localized processing of CNNs in a U-shaped design. It pairs a Swin transformer encoder with a CNN-based decoder, connected by multi-resolution skip connections, capturing both global and local features for enhanced segmentation accuracy.

The model is trained with the soft Dice loss, which optimizes the overlap between predicted and ground-truth segmentations. This loss effectively addresses the class imbalance common in medical images, where background voxels dominate. Data augmentation techniques such as random flips, rotations, and intensity shifts are applied to improve generalization and prevent overfitting, thereby increasing performance on unseen data. The model is trained for 300 epochs at an initial learning rate of 0.0004, with a cosine annealing scheduler gradually reducing the learning rate to stabilize training and avoid premature convergence to suboptimal solutions.

Swin UNETR was implemented in PyTorch and MONAI, leveraging PyTorch's flexibility and MONAI's specialization in medical image analysis. Input images are normalized to zero mean and unit standard deviation computed over non-zero voxels, ensuring consistent intensity scaling for stable training. During training, random 128×128×128 patches are cropped from the 3D image volumes, exposing the model to varied image regions and increasing data diversity.
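As a minimal sketch of the soft Dice loss described above, here is the standard single-class formulation in NumPy (illustrative only; the actual training used MONAI's loss implementation, and the smoothing constant `eps` is an assumed detail):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for one class: 1 - 2|P∩T| / (|P| + |T|).
    pred: predicted probabilities in [0, 1]; target: binary mask.
    eps avoids division by zero when both volumes are empty."""
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

# Toy 8x8x8 volume with a cubic "tumor" region.
mask = np.zeros((8, 8, 8), dtype=np.float32)
mask[2:6, 2:6, 2:6] = 1.0

print(round(soft_dice_loss(mask, mask), 4))        # perfect overlap -> 0.0
print(round(soft_dice_loss(1.0 - mask, mask), 4))  # no overlap -> 1.0
```

Because the loss is computed from soft probabilities rather than thresholded labels, its gradient rewards every voxel of overlap, which is what makes it robust to heavy background/foreground imbalance.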

Figure: Swin UNETR model architecture
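The preprocessing steps described in the Methods (non-zero-voxel normalization and random 128×128×128 patch sampling) can be sketched as follows. The function names here are illustrative; a production pipeline would typically use MONAI's `NormalizeIntensityd` and `RandSpatialCropd` transforms:

```python
import numpy as np

def normalize_nonzero(vol):
    """Zero-mean, unit-std normalization computed on non-zero voxels only,
    so the background of a skull-stripped MRI stays exactly zero."""
    mask = vol != 0
    nz = vol[mask]
    out = np.zeros_like(vol, dtype=np.float32)
    out[mask] = (nz - nz.mean()) / (nz.std() + 1e-8)
    return out

def random_patch(vol, size=(128, 128, 128), rng=None):
    """Crop a random sub-volume of the given size from a 3D array."""
    rng = rng if rng is not None else np.random.default_rng()
    starts = [int(rng.integers(0, d - s + 1)) for d, s in zip(vol.shape, size)]
    slices = tuple(slice(st, st + s) for st, s in zip(starts, size))
    return vol[slices]

# Synthetic 160^3 volume: zero background, noisy foreground block.
vol = np.zeros((160, 160, 160), dtype=np.float32)
vol[20:140, 20:140, 20:140] = np.random.default_rng(0).normal(
    100.0, 10.0, (120, 120, 120))

norm = normalize_nonzero(vol)
patch = random_patch(norm)
print(patch.shape)  # (128, 128, 128)
```

Normalizing over non-zero voxels only is important for BraTS-style data: including the large zero background would shrink the apparent standard deviation and shift the mean, distorting tissue contrast.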
Results:
Our experiments show that Swin UNETR attains Dice scores of 0.971, 0.975, and 0.967 for the Tumor Core (TC), Whole Tumor (WT), and Enhancing Tumor (ET) regions, respectively, on the BraTS dataset. These results underscore the model's proficiency in accurately segmenting different tumor subregions, which is vital for reliable diagnosis and treatment planning.
Conclusions:
Swin UNETR advances brain tumor segmentation by combining the strengths of transformers and CNNs, allowing it to capture both global and local features effectively. This hybrid architecture makes the model robust to variations in image quality, enhancing its adaptability and performance. High accuracy on BraTS and Sub-Saharan African datasets demonstrates its promise for aiding brain tumor diagnosis and treatment planning, especially in low-resource settings. By addressing limitations of existing models and utilizing hierarchical feature extraction, Swin UNETR sets a strong benchmark for automated brain tumor segmentation.
Modeling and Analysis Methods:
Classification and Predictive Modeling 1
Image Registration and Computational Anatomy
Methods Development 2
PET Modeling and Analysis
Segmentation and Parcellation
Keywords:
Computing
Machine Learning
Modeling
MRI
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio, print, and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.
Please indicate below if your study was a "resting state" or "task-activation” study.
Task-activation
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Patients
Was this research conducted in the United States?
No
Were any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Yes
Were any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
Structural MRI
Optical Imaging
Computational modeling
For human MRI, what field strength scanner do you use?
3.0T
Provide references using APA citation style.
Bakas, S., et al. (2021). The RSNA-ASNR-MICCAI BraTS 2021 benchmark on brain tumor segmentation and radiogenomic classification. arXiv preprint arXiv:2107.02314.
Devlin, J., et al. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Hatamizadeh, A., et al. (2022). Swin UNETR: Swin transformers for semantic segmentation of brain tumors in MRI images. arXiv preprint arXiv:2201.01266.
Louis, D. N., et al. (2007). The 2007 WHO classification of tumors of the central nervous system. Acta Neuropathologica.
Menze, B. H., et al. (2015). The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging.
Milletari, F., et al. (2016). V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV).
Wang, W., et al. (2021). TransBTS: Multimodal brain tumor segmentation using transformer. In International Conference on Medical Image Computing and Computer-Assisted Intervention.
Yes
Please select the country that the first author on this abstract resides and works in from the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries (based on gross national income per capita).
Nigeria