Poster No:
1653
Submission Type:
Abstract Submission
Authors:
Brady Williamson1, Owen Phillips2, Nathan Strong2, Kevin Aquino2
Institutions:
1University of Cincinnati, Cincinnati, OH, 2BrainKey, San Francisco, CA
First Author:
Brady Williamson, University of Cincinnati, Cincinnati, OH
Co-Author(s):
Owen Phillips, BrainKey, San Francisco, CA
Nathan Strong, BrainKey, San Francisco, CA
Kevin Aquino, BrainKey, San Francisco, CA
Introduction:
Background: Brain volumetrics are increasingly used in precision medicine to understand the pathophysiology underlying several neurological diseases. In particular, regional brain volumes have been an essential predictor of neurodegenerative diseases such as Alzheimer's disease (Cheung et al., 2021). Options for automated brain segmentation, the most crucial step in calculating regional volumes, exist both in research and commercially. However, studies consistently find discrepancies among current solutions (Koussis et al., 2023), and commercial software often requires specific sequences that are not always clinically feasible. BrainKey aimed to overcome these limitations by developing an automated brain segmentation tool designed for clinical utility that delivers accurate volumetric data across various sites and scanners.
Methods:
Preprocessing: MRI scans are converted to NIfTI and preprocessed to harmonize inputs for the KeyLayer algorithm. Preprocessing includes image alignment, N4 model-based bias-field correction (Tustison et al., 2010), and adaptive non-local means denoising to reduce spatially varying image noise (Manjón et al., 2010).

T1 tissue segmentation: Segmentation is produced fully automatically by a machine learning model. The training data, totaling 800 scans, were hand labeled in house by neuroimaging experts. Labels cover the major cortical lobes (occipital, parietal, temporal, frontal), subcortical regions (the striatum), the cerebellum, the brainstem, and the major ventricles (lateral, third, and fourth). The training dataset is agnostic to scanner model and manufacturer, spanning a range of signal-to-noise ratios and resolutions of conventional T1-weighted images. The learning scheme is based on a U-Net and incorporates image augmentations (e.g., noise, artifact injection, image rotations) to increase the accuracy and robustness of the trained model. Because of this wide range of training inputs, the KeyLayer algorithm can handle a variety of imaging inputs, allowing results from different imaging sites to be included. Volume is calculated for each brain region and presented as a percentage of intracranial volume (ICV), where ICV is calculated as the total segmented brain volume.

Validation dataset: Validation has two parts: testing on a held-out test set and comparison against independent software, FastSurfer, a deep-learning variant of the FreeSurfer workflow (Henschel et al., 2020). We used 100 hand-labeled test scans to calculate mean DICE scores and compared our performance against FastSurfer. The cortical parcellation from the FastSurfer segmentation was coarse-grained into lobar segments to match the BrainKey segmentation.
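To illustrate the preprocessing stage described above, the following is a minimal sketch of how N4 bias-field correction and non-local means denoising could be chained on a T1-weighted NIfTI scan, assuming SimpleITK and scikit-image as stand-ins. The actual KeyLayer preprocessing is proprietary, so every function and parameter here is an illustrative assumption, not the production implementation.

```python
# Sketch only: SimpleITK + scikit-image stand-ins for the preprocessing steps
# described above (N4 bias correction, non-local means denoising).
import SimpleITK as sitk
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def preprocess_t1(nifti_path: str) -> sitk.Image:
    """Harmonize a T1-weighted scan: bias-field correction, then denoising."""
    image = sitk.ReadImage(nifti_path, sitk.sitkFloat32)

    # Rough foreground mask so the bias field is estimated from head voxels.
    mask = sitk.OtsuThreshold(image, 0, 1)

    # N4 bias-field correction (Tustison et al., 2010).
    corrected = sitk.N4BiasFieldCorrection(image, mask)

    # Non-local means denoising in the spirit of Manjón et al. (2010).
    array = sitk.GetArrayFromImage(corrected)
    sigma = float(np.mean(estimate_sigma(array)))
    denoised = denoise_nl_means(array, h=1.15 * sigma, sigma=sigma,
                                patch_size=3, patch_distance=5, fast_mode=True)

    out = sitk.GetImageFromArray(denoised.astype(np.float32))
    out.CopyInformation(corrected)  # keep spacing, origin, and orientation
    return out
```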
Results:
BrainKey achieved a mean DICE score of 0.92 across all regions on the held-out dataset, with high accuracy on the hippocampus (>0.95) and on total gray and white matter (>0.95). We found good agreement with FastSurfer's segmentation results (Fig. 1A) across all regions (DICE = 0.8 in this example). In some regions, however, BrainKey produced superior segmentations, notably in the hippocampus (Fig. 1B), which lowered the DICE agreement with FastSurfer. Because BrainKey is trained on varied datasets, it can also segment poorer-quality scans with significant artifacts (see Fig. 2).
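For readers who want to see how the reported overlap values could be reproduced, below is a minimal NumPy sketch of per-region DICE agreement between two label maps (e.g., BrainKey output vs. hand labels, or vs. FastSurfer coarse-grained to lobes). The label IDs in the example are hypothetical and do not reflect BrainKey's or FastSurfer's actual label encodings.

```python
# Minimal per-region DICE sketch; assumes both segmentations are integer
# label volumes on the same voxel grid. Label IDs below are hypothetical.
import numpy as np

def dice(seg_a: np.ndarray, seg_b: np.ndarray, label: int) -> float:
    """DICE = 2 * |A ∩ B| / (|A| + |B|) for a single label."""
    a = seg_a == label
    b = seg_b == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else float("nan")

def regional_dice(seg_a: np.ndarray, seg_b: np.ndarray, labels: dict) -> dict:
    """Per-region overlap between two label maps."""
    return {name: dice(seg_a, seg_b, lab) for name, lab in labels.items()}

# Hypothetical usage:
# scores = regional_dice(brainkey_seg, fastsurfer_lobar_seg,
#                        {"frontal": 1, "parietal": 2, "hippocampus": 17})
# mean_dice = np.nanmean(list(scores.values()))
```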
Conclusions:
In this work, we present an efficient, cloud-based deep learning method that accurately segments several brain regions from T1-weighted MRI scans of varying quality acquired across multiple sites. Lack of generalizability is a key issue hindering clinical application of deep learning-based segmentation (Eche et al., 2021). We have shown that our approach provides a potential solution to this issue by maintaining accuracy across a wide range of scan qualities, acquisition parameters, scanners, and sites.
Modeling and Analysis Methods:
Segmentation and Parcellation 1
Neuroinformatics and Data Sharing:
Informatics Other 2
Keywords:
Machine Learning
MRI
Segmentation
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I do not want to participate in the reproducibility challenge.
Please indicate below if your study was a "resting state" or "task-activation" study.
Other
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Healthy subjects
Was this research conducted in the United States?
Yes
Are you Institutional Review Board (IRB) certified?
Please note: Failure to have IRB, if applicable will lead to automatic rejection of abstract.
Not applicable
Was any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Not applicable
Was any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
Structural MRI
For human MRI, what field strength scanner do you use?
1.5T
3.0T
Which processing packages did you use for your study?
Other, Please list
BrainKey (proprietary)
Provide references using APA citation style.
1. Cheung, E. Y., Chiu, P. K., Kwan, J. S., Shea, Y. F., & Mak, H. (2021). Brain regional volume analysis to differentiate Alzheimer’s disease (AD) and vascular dementia (VD) from healthy control (HC): Machine learning approach. Alzheimer’s & Dementia, 17(S5). doi: 10.1002/alz.058343
2. Eche, T., Schwartz, L. H., Mokrane, F.-Z., & Dercle, L. (2021). Toward Generalizability in the Deployment of Artificial Intelligence in Radiology: Role of Computation Stress Testing to Overcome Underspecification. Radiology: Artificial Intelligence, 3(6), e210097. doi: 10.1148/ryai.2021210097
3. Henschel, L., Conjeti, S., Estrada, S., Diers, K., Fischl, B., & Reuter, M. (2020). FastSurfer - A fast and accurate deep learning based neuroimaging pipeline. NeuroImage, 219, 117012. doi: 10.1016/j.neuroimage.2020.117012
4. Koussis, P., Toulas, P., Glotsos, D., Lamprou, E., Kehagias, D., & Lavdas, E. (2023). Reliability of automated brain volumetric analysis: A test by comparing NeuroQuant and volBrain software. Brain and Behavior, 13(12), e3320. doi: 10.1002/brb3.3320
5. Manjón, J. V., Coupé, P., Martí-Bonmatí, L., Collins, D. L., & Robles, M. (2010). Adaptive non-local means denoising of MR images with spatially varying noise levels. Journal of Magnetic Resonance Imaging, 31(1), 192–203. doi: 10.1002/jmri.22003
6. Tustison, N. J., Avants, B. B., Cook, P. A., Zheng, Y., Egan, A., Yushkevich, P. A., & Gee, J. C. (2010). N4ITK: Improved N3 Bias Correction. IEEE Transactions on Medical Imaging, 29(6), 1310–1320. doi: 10.1109/tmi.2010.2046908
No