Poster No:
1845
Submission Type:
Abstract Submission
Authors:
Thuy Dao1, Xincheng Ye1, Chris Rorden2, Steffen Bollmann1
Institutions:
1University of Queensland, Brisbane, Queensland, 2University of South Carolina, Columbia, SC
First Author:
Thuy Dao
University of Queensland
Brisbane, Queensland
Co-Author(s):
Xincheng Ye
University of Queensland
Brisbane, Queensland
Chris Rorden
University of South Carolina
Columbia, SC
Steffen Bollmann
University of Queensland
Brisbane, Queensland
Introduction:
Recent advances in deep learning (DL) have the potential to enable automatic extraction of meaningful insights from complex medical imaging data (Liyanage et al., 2019).
However, deploying these models effectively in clinical settings often requires finetuning on domain-specific data. This is particularly critical because medical experts need tailored assistance across diverse tasks, each demanding its own annotated ground truth. Optimizing for this local variability is hindered by several factors, including patient privacy considerations, complex software setups, and limited hardware resources.
While most pathological data useful for training is not shared publicly due to patient privacy regulations, some open neuroimaging datasets are publicly accessible. However, these typically lack the detailed annotations, which require substantial domain knowledge to produce, that are needed to train a DL model.
As a result, many DL models underperform in real-world applications due to the lack of high-quality annotated data and the variability of image features (e.g. signal-to-noise ratio, contrast, geometric distortion, resolution) across sites. Developers of these DL models often lack access to the domain experts needed to prepare ground truths, while clinicians and researchers face significant technical and resource barriers to finetuning models for their specific tasks.
Therefore, we aim to develop a zero-footprint, user-friendly, interactive, and secure browser-based platform for finetuning neuroimaging DL models. This abstract showcases a proof of concept that finetunes a classification model client-side in the browser (https://iishiishii.github.io/onnx-training/).
Methods:
The proposed platform uses the NiiVue (Niivue, n.d.) package for a versatile viewing experience and flexible access to diverse medical image formats, including their data arrays and metadata.
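To illustrate, the following TypeScript sketch shows how a volume could be loaded with NiiVue and its voxel data accessed client-side; it assumes the published @niivue/niivue API, an existing canvas element with id "gl", and an illustrative file name, so details may differ from the actual implementation.

    import { Niivue } from '@niivue/niivue';

    // Attach the viewer to an existing <canvas id="gl"> element and load a volume.
    const nv = new Niivue();
    await nv.attachTo('gl');
    await nv.loadVolumes([{ url: 'example.nii.gz' }]);

    // The decoded voxel intensities and header metadata are now available in the browser,
    // e.g. as input for preprocessing slices before training.
    const volume = nv.volumes[0];
    console.log(volume.hdr.dims);   // image dimensions from the NIfTI header
    console.log(volume.img.length); // typed array of voxel values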
The training process leverages ONNX Runtime, which enables single-threaded CPU training with a WebAssembly backend optimized for fast loading and efficient execution in all modern browsers. While training is constrained to a single CPU thread, this design aligns with the computational resources typically available in clinical settings, where GPUs are often unavailable. Hence, the only software dependency on the clinician's computer is a web browser.
To train a model in the browser, its computational graph is exported to the ONNX format (e.g. from a TensorFlow or PyTorch model) and converted into training artifacts: a training model, an optimizer model, an evaluation model, and a checkpoint file. These artifacts are then loaded by the web application and executed by ONNX Runtime in the browser.
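A minimal sketch of this loading step, assuming the training build of onnxruntime-web and illustrative artifact paths (option names follow the ONNX Runtime Web training API; the real application's file layout may differ):

    import * as ort from 'onnxruntime-web/training';

    // Keep execution on a single WebAssembly thread, matching the CPU-only setting described above.
    ort.env.wasm.numThreads = 1;

    // Load the four training artifacts generated offline (e.g. with onnxruntime-training in Python).
    const session = await ort.TrainingSession.create({
      checkpointState: 'artifacts/checkpoint',
      trainModel: 'artifacts/training_model.onnx',
      evalModel: 'artifacts/eval_model.onnx',
      optimizerModel: 'artifacts/optimizer_model.onnx',
    });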
Crucially, image data never leaves the browser sandbox: all computation is performed client-side, behind a user-friendly interface.
Results:
The proof-of-concept application runs fully client-side and implements a MobileNetV2 (Sandler et al., 2019) model to classify neuroimages with and without brain lesions (Figure 1). The training artifacts, including the model weights, take up 8.6 MB of storage. Finetuning on 100 slices for 5 epochs with a batch size of 5 takes 279.4 s and uses 2.2 GB of RAM on an Intel i5-9400H CPU @ 2.5 GHz with 16 GB of RAM in the Chrome browser.
The interface lets users customize the training process by adjusting parameters such as the number of training epochs, the batch size, and the maximum number of images. Users can train on their own data by following a file naming convention that includes "yes-lesion" or "no-lesion" to provide the training labels.
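The underlying finetuning loop can be sketched as follows, assuming the session created above; the feed names ('input', 'labels') and the 3x224x224 input shape are assumptions tied to how the MobileNetV2 training artifacts were exported, not confirmed details of the application.

    import * as ort from 'onnxruntime-web/training';

    // Illustrative batch type; in the application, batches are assembled from slices loaded via NiiVue.
    interface Batch { pixels: Float32Array; labels: BigInt64Array; }

    async function finetune(session: ort.TrainingSession, batches: Batch[],
                            numEpochs = 5, batchSize = 5): Promise<void> {
      for (let epoch = 0; epoch < numEpochs; epoch++) {
        for (const batch of batches) {
          await session.lazyResetGrad(); // clear gradients from the previous step
          const feeds = {
            input: new ort.Tensor('float32', batch.pixels, [batchSize, 3, 224, 224]),
            labels: new ort.Tensor('int64', batch.labels, [batchSize]),
          };
          await session.runTrainStep(feeds); // forward and backward pass
          await session.runOptimizerStep();  // apply the weight update
        }
      }
    }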
Conclusions:
The goal of this project is an open-source platform supporting DL model training without complex installation, fostering collaboration among institutions. This capability opens up new opportunities for federated learning in the browser to improve the general performance of a model without having the data leave the site.
Modeling and Analysis Methods:
Classification and Predictive Modeling 2
Neuroinformatics and Data Sharing:
Workflows 1
Keywords:
Computational Neuroscience
Computing
Data analysis
Open-Source Code
Open-Source Software
Workflows
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.
Please indicate below if your study was a "resting state" or "task-activation" study.
Other
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Healthy subjects
Was this research conducted in the United States?
No
Were any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Not applicable
Were any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
Structural MRI
Provide references using APA citation style.
Liyanage, H. (2019). Artificial Intelligence in Primary Health Care: Perceptions, Issues, and Challenges: Primary Health Care Informatics Working Group Contribution to the Yearbook of Medical Informatics 2019. Yearbook of Medical Informatics, 28(01), 041–046. https://doi.org/10.1055/s-0039-1677901
Niivue. (n.d.). https://github.com/niivue/niivue
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L.-C. (2019). MobileNetV2: Inverted Residuals and Linear Bottlenecks (arXiv:1801.04381). arXiv. https://doi.org/10.48550/arXiv.1801.04381
No