Unsupervised Fetal Brain MRI Quality Assessment based on Orientation Prediction Uncertainty

Poster No:

1515 

Submission Type:

Abstract Submission 

Authors:

Mingxuan Liu1, Haoxiang Li1, Zihan Li1, Hongjia Yang1, Jialan Zheng2, Haibo Qu3, Qiyuan Tian1,4

Institutions:

1School of Biomedical Engineering, Tsinghua University, Beijing, China, 2Tanwei College, Tsinghua University, Beijing, China, 3Department of Radiology, West China Second University Hospital, Sichuan University, Chengdu, China, 4Tsinghua Laboratory of Brain and Intelligence, Beijing, China

First Author:

Mingxuan Liu  
School of Biomedical Engineering, Tsinghua University
Beijing, China

Co-Author(s):

Haoxiang Li  
School of Biomedical Engineering, Tsinghua University
Beijing, China
Zihan Li  
School of Biomedical Engineering, Tsinghua University
Beijing, China
Hongjia Yang  
School of Biomedical Engineering, Tsinghua University
Beijing, China
Jialan Zheng  
Tanwei College, Tsinghua University
Beijing, China
Haibo Qu  
Department of Radiology, West China Second University Hospital, Sichuan University
Chengdu, China
Qiyuan Tian  
School of Biomedical Engineering, Tsinghua University; Tsinghua Laboratory of Brain and Intelligence
Beijing, China

Introduction:

MRI is crucial for assessing fetal brain development and pathology (Manganaro et al., 2023). Nevertheless, the acquisition of 2D thick-slice T2-weighted images is vulnerable to inter- and intra-slice motion (Xu et al., 2020). Recent efforts employ deep learning models for image quality assessment (IQA) in fetal MRI (Largent et al., 2021; Sanchez et al., 2023; Xu et al., 2020; Zhang et al., 2024), aiming for on-the-fly IQA and image re-acquisition during scans. However, most methods require image quality labels, obtained through time-consuming and subjective visual inspection by expert radiologists. Furthermore, the heterogeneity of fetal MRI data, acquired using different scanners and imaging sequences across hospitals, makes it challenging to directly apply pre-trained models to clinical data. To address the aforementioned challenges, we propose an orientation recognition KAN model (OR-KAN) for unsupervised fetal brain IQA that requires no quality labels for training, enhancing robustness against domain shifts caused by clinical data heterogeneity.

Methods:

Data Acquisition. 784 pregnant women (20-36 weeks of gestation) carrying fetuses with normal brains were enrolled, with written informed consent and IRB approval. 2D T2-weighted images were acquired in the axial, coronal, and sagittal planes. The single-shot turbo spin echo (TSE-SSH) sequence was used for 708 cases, and the balanced turbo field echo (BTFE) sequence for 76 cases. For the TSE-SSH data, image quality was annotated by obstetricians, yielding 568 high-quality stacks and 140 low-quality stacks. Since all 76 BTFE stacks were of high quality, we employed the method proposed by Duffy et al. (2021) to generate low-quality stacks with simulated motion artifacts.
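The simulated degradation can be illustrated with a generic k-space motion model: random in-plane shifts are applied to a subset of phase-encoding lines via the Fourier shift theorem. The sketch below is an illustrative simplification, not the exact method of Duffy et al. (2021); the fraction of corrupted lines and the shift range are assumed parameters.

```python
import numpy as np

def add_motion_artifacts(img, corrupted_frac=0.3, max_shift=4.0, seed=0):
    """Corrupt a 2D slice with simulated inter-shot motion by applying
    random in-plane translations to a subset of k-space lines."""
    rng = np.random.default_rng(seed)
    k = np.fft.fft2(img)
    ny, nx = img.shape
    rows = rng.choice(ny, size=int(corrupted_frac * ny), replace=False)
    freqs = np.fft.fftfreq(nx)
    for r in rows:
        dx = rng.uniform(-max_shift, max_shift)
        k[r] *= np.exp(-2j * np.pi * freqs * dx)  # Fourier shift theorem
    return np.abs(np.fft.ifft2(k))

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
corrupted = add_motion_artifacts(img)
```

Because only some phase-encoding lines are shifted, the reconstructed magnitude image exhibits the ghosting and blurring typical of motion-corrupted fetal MRI.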

Network Architecture (Fig. 1A). The OR-KAN deep learning model classifies fetal brain MRI slices into axial, coronal, and sagittal orientations. The first five blocks extract features from the input, the sixth block increases the non-linearity of the model, and the final block maps features to classification vectors using a Kolmogorov-Arnold Network (KAN) (Liu et al., 2024). OR-KAN was trained on slices sampled from several fetal brain MRI atlases (Ciceri et al., 2024), eliminating the need for manual annotation.
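As a rough illustration of the KAN classification head (not the authors' implementation), each input-output edge learns its own 1D function. The toy forward pass below models that function as a SiLU base term plus a weighted sum of fixed Gaussian basis functions; the original KAN formulation uses learnable B-splines, so the basis choice and all dimensions here are assumptions.

```python
import numpy as np

def silu(x):
    return x / (1.0 + np.exp(-x))

class KANLayerSketch:
    """Toy KAN-style layer: every input-output edge applies its own
    learnable 1D function (SiLU base + Gaussian basis expansion)."""

    def __init__(self, in_dim, out_dim, num_basis=8, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(-2.0, 2.0, num_basis)  # fixed basis grid
        self.coef = rng.normal(scale=0.1, size=(out_dim, in_dim, num_basis))
        self.base_w = rng.normal(scale=0.1, size=(out_dim, in_dim))

    def __call__(self, x):  # x: (batch, in_dim)
        # Gaussian basis responses per input: (batch, in_dim, num_basis)
        basis = np.exp(-(x[..., None] - self.centers) ** 2)
        # sum edge functions over inputs -> (batch, out_dim)
        spline = np.einsum('bik,oik->bo', basis, self.coef)
        base = silu(x) @ self.base_w.T
        return base + spline

head = KANLayerSketch(in_dim=16, out_dim=3)   # 3 orientation classes
logits = head(np.zeros((4, 16)))              # shape (4, 3)
```

In the actual model, such a layer would sit after the convolutional feature-extraction blocks and its outputs would be passed through a softmax to yield orientation probabilities.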

Quality Assessment Pipeline (Fig. 1B). In the IQA phase, a stack of fetal MRI images is first processed by a brain-extraction network, followed by background removal and resizing. The processed slices are then fed into OR-KAN, which outputs a predictive vector of orientation probabilities for each slice. Significant motion degrades image quality, leading to inconsistent and more variable predictions across slices. The quality score is therefore derived from the predictive entropy, with a small constant added to avoid taking the logarithm of zero and a normalization factor included to keep scores comparable across stacks.
Supporting Image: Fig11.jpg
   ·Figure 1. Proposed OR-KAN Model and Quality Assessment Pipeline.
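The entropy-based score can be sketched as follows. The exact constant and normalizer are not specified in the text, so the `eps` value and the `log K` normalization (which bounds the score to [0, 1]) are assumptions.

```python
import numpy as np

def stack_quality_score(probs, eps=1e-8):
    """Entropy-based uncertainty score for one stack.
    probs: (num_slices, K) softmax orientation probabilities per slice.
    Returns a value in [0, 1]; higher entropy = lower image quality."""
    probs = np.asarray(probs, dtype=float)
    # per-slice predictive entropy; eps avoids log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # average over slices, normalize by max entropy log(K)
    return float(np.mean(entropy) / np.log(probs.shape[1]))

confident = np.tile([0.98, 0.01, 0.01], (10, 1))  # consistent predictions
uncertain = np.full((10, 3), 1.0 / 3.0)           # maximally uncertain
```

A high-quality stack with consistent orientation predictions scores near 0, while a motion-corrupted stack with near-uniform predictions scores near 1.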
 

Results:

Two test sets were created from TSE-SSH scans (354 stacks) and BTFE scans (152 stacks). We compared OR-KAN against four supervised deep learning methods: DL_slice (Xu et al., 2020), DL_stack (Sanchez et al., 2023), ResNet18 (He et al., 2016), and CoAtNet (Dai et al., 2021). ResNet18 and CoAtNet were trained on the remaining TSE-SSH scans (354 stacks), while DL_slice and DL_stack used published pre-trained models.

Fig. 2 (A-C) shows that OR-KAN achieves high IQA accuracy without the need for manually labeled datasets. On the TSE-SSH dataset, AUROC reached 0.818, AUPR 0.943, precision 0.917, and recall 0.916, second only to the DL_slice method trained on 7,177 labeled slices. On the BTFE dataset, AUROC was 0.886, AUPR 0.880, precision 0.803, and recall 0.816, outperforming all other methods. This advantage arises mainly because the supervised methods trained on TSE-SSH data are not robust to the domain shift.
Supporting Image: Fig21.jpg
   ·Figure 2. Quantitative Results on the Test Dataset.
 

Conclusions:

We present an unsupervised approach for fetal brain MRI quality assessment based on the orientation recognition KAN model (OR-KAN). Requiring no quality labels for training, OR-KAN matches or outperforms supervised baselines and remains robust to the domain shifts introduced by heterogeneous clinical acquisition protocols.

Modeling and Analysis Methods:

Methods Development 1
Motion Correction and Preprocessing 2

Neuroinformatics and Data Sharing:

Workflows
Informatics Other

Novel Imaging Acquisition Methods:

Anatomical MRI

Keywords:

Data analysis
Design and Analysis
Machine Learning
MRI
PEDIATRIC
STRUCTURAL MRI
Other - Fetal Brain, Quality Assessment

1|2 indicates the priority used for review

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I do not want to participate in the reproducibility challenge.

Please indicate below if your study was a "resting state" or "task-activation" study.

Other

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was this research conducted in the United States?

No

Were any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Were any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Structural MRI
Other, Please specify  -   Deep Learning

For human MRI, what field strength scanner do you use?

1.5T

Which processing packages did you use for your study?

Other, Please list  -   PyTorch

Provide references using APA citation style.

1. Ciceri, T. (2024). Fetal brain MRI atlases and datasets: A review. NeuroImage, 292, 120603. https://doi.org/10.1016/j.neuroimage.2024.120603

2. Dai, Z. (2021). CoAtNet: Marrying convolution and attention for all data sizes. arXiv. http://arxiv.org/abs/2106.04803

3. Duffy, B. A. (2021). Retrospective motion artifact correction of structural MRI images using deep learning. NeuroImage, 230, 117756. https://doi.org/10.1016/j.neuroimage.2021.117756

4. He, K. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.

5. Largent, A. (2021). Image quality assessment of fetal brain MRI using multi-instance deep learning methods. Journal of Magnetic Resonance Imaging, 54(3), Article 3. https://doi.org/10.1002/jmri.27649

6. Liu, Z. (2024). KAN: Kolmogorov-Arnold networks. arXiv. https://doi.org/10.48550/arXiv.2404.19756

7. Manganaro, L. (2023). Fetal MRI: What’s new? A short review. European Radiology Experimental, 7(1), Article 1. https://doi.org/10.1186/s41747-023-00358-5

8. Sanchez, T. (2023). FetMRQC: An open-source machine learning framework for multi-centric fetal brain MRI quality control. arXiv. http://arxiv.org/abs/2311.04780

9. Xu, J. (2020). Semi-supervised learning for fetal brain MRI quality assessment with ROI consistency. arXiv. http://arxiv.org/abs/2006.12704

10. Zhang, W. (2024). A joint brain extraction and image quality assessment framework for fetal brain MRI slices. NeuroImage, 290, 120560. https://doi.org/10.1016/j.neuroimage.2024.120560

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries list provided.

No