Poster No:
1834
Submission Type:
Abstract Submission
Authors:
Taylor Bolt1, Lucina Uddin2
Institutions:
1UCLA Health, Los Angeles, CA, 2Department of Psychology, University of California Los Angeles, Los Angeles, CA
First Author:
Co-Author:
Lucina Uddin, Ph.D.
Department of Psychology, University of California Los Angeles
Los Angeles, CA
Introduction:
Skilled observation of scientific measurements is a crucial component of both quality assurance and scientific discovery. For complex data sources, visualization software enables trained observers to spot and extract phenomena of interest from their data. In fMRI research, statistical and machine learning analyses have served as the principal means of extracting and representing phenomena from data, while visualization has largely served as a means of quality assurance for preprocessing pipelines and post-hoc interpretation of analysis outputs. However, this 'data-analysis-first' approach ignores the scientific potential of careful observation by a trained observer. We developed a specialized visualization tool, the FMRI Interactive Navigation and Discovery (FIND) viewer [1], to facilitate observation and interpretation of spatiotemporal patterns in fMRI recordings. The tool provides dynamic visualization of the temporal progression of fMRI recordings over the course of a scan, along with integrated visualization of simultaneously collected task presentation and/or physiological time courses. In addition, preprocessing and analytic tools that facilitate pattern discovery and complement visual observation are integrated into the tool.
Methods:
FIND viewer is a locally hosted web application displayed in a web browser (e.g., Google Chrome). The back-end of the application is built with Flask [2] (v3.0.3) and Python (v3.11.9). Reading, writing, and preprocessing of fMRI files (.nii, .gii) are performed with nibabel [4] (v5.2.1) and nilearn [5] (v0.10.4). The front-end of the application is built with Bootstrap (v4.5.2) and custom HTML, CSS, and JavaScript. Volume- and surface-based fMRI visualization, as well as time course visualization, is performed with Plotly [3] (v3.0.0).
Results:
FIND viewer (https://github.com/tsb46/fmri-findviz) supports visualization of NIfTI volume (.nii; Figure 1) and GIFTI surface (.gii; Figure 2) fMRI file formats. Key features of the tool include:
• FMRI Visualization: orthogonal (sagittal, coronal, and axial) and montage (consecutive slices along the same axis) views are supported for NIfTI files, and 3D surface views are supported for GIFTI files. Plotly-supported user interactions (e.g., hover info) are available for both file formats, as are standard fMRI plot features: colormaps, color-scale changes, direction markers (e.g., left-right), and world-coordinate information.
• FMRI Preprocessing: several preprocessing options that facilitate fMRI pattern discovery are available, including normalization of time courses (z-scoring or mean centering), temporal filtering (Butterworth bandpass filter), and Gaussian spatial smoothing. Note that FIND viewer does not perform end-to-end fMRI preprocessing.
• Analytics: whole-brain temporal similarity/distance calculation, peak finding, time point averaging, and cross-correlation.
• Time Course Visualization: integrated time course visualization with task stimuli and/or physiological recordings (Figure 3). Visualization of user-selected voxel/vertex fMRI time courses is also available.
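To illustrate the preprocessing options above, the sketch below applies z-score normalization and a Butterworth bandpass filter to a synthetic voxel-by-time matrix using NumPy and SciPy. The function names and parameters are our own assumptions for the illustration, not FIND viewer's API:

```python
# Illustrative sketch of two preprocessing options listed above: z-score
# normalization and Butterworth bandpass filtering of voxel time courses.
# Function names and parameters are assumptions, not FIND viewer's API.
import numpy as np
from scipy.signal import butter, filtfilt

def zscore(ts):
    """Z-score each voxel time course (rows) to zero mean, unit variance."""
    return (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)

def bandpass(ts, low, high, tr, order=2):
    """Zero-phase Butterworth bandpass along the time axis (sampling rate 1/TR)."""
    nyquist = 0.5 / tr  # highest resolvable frequency in Hz
    b, a = butter(order, [low / nyquist, high / nyquist], btype="band")
    return filtfilt(b, a, ts, axis=1)

rng = np.random.default_rng(0)
ts = rng.standard_normal((100, 200))  # 100 voxels, 200 time points
filtered = bandpass(zscore(ts), low=0.01, high=0.1, tr=2.0)
```

The 0.01-0.1 Hz band shown here is a common choice for resting-state fMRI, but any band below the Nyquist frequency (1/(2*TR)) could be substituted.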


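The cross-correlation analytic, for example, relates a voxel time course to a task or physiological regressor across a range of temporal lags. A minimal sketch of lagged cross-correlation (our own illustrative implementation, not FIND viewer's code):

```python
# Minimal sketch of lagged cross-correlation between two time courses, as
# might be used to relate an fMRI signal to a task/physiological regressor.
# This is an illustrative implementation, not FIND viewer's actual code.
import numpy as np

def cross_correlation(x, y, max_lag):
    """Normalized cross-correlation of x and y at lags -max_lag..max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.empty(len(lags))
    for i, lag in enumerate(lags):
        if lag < 0:    # y leads x
            cc[i] = np.dot(x[:lag], y[-lag:]) / n
        elif lag > 0:  # x leads y
            cc[i] = np.dot(x[lag:], y[:-lag]) / n
        else:
            cc[i] = np.dot(x, y) / n
    return lags, cc

# A periodic signal delayed by 10 samples peaks at lag -10:
t = np.arange(200)
x = np.sin(2 * np.pi * t / 50)
y = np.roll(x, 10)  # y[t] = x[t - 10]
lags, cc = cross_correlation(x, y, max_lag=20)
```

Here `lags[np.argmax(cc)]` evaluates to -10, recovering the imposed delay.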
Conclusions:
FIND viewer (https://github.com/tsb46/fmri-findviz) is under active development, and we are working toward a first stable release (v0.1.0) in early 2025. Several visual and analytic features remain on the to-do list, including brain parcellation/label visualization and dimension-reduction analyses (e.g., PCA, ICA). Feedback and contributions are welcome.
Modeling and Analysis Methods:
Exploratory Modeling and Artifact Removal 2
Neuroinformatics and Data Sharing:
Informatics Other 1
Keywords:
Informatics
Open-Source Software
Other - visualization
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio, print, and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I do not want to participate in the reproducibility challenge.
Please indicate below if your study was a "resting state" or "task-activation" study.
Other
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Healthy subjects
Was this research conducted in the United States?
Yes
Are you Internal Review Board (IRB) certified?
Please note: Failure to have IRB approval, if applicable, will lead to automatic rejection of the abstract.
Not applicable
Was any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Not applicable
Was any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
Functional MRI
Structural MRI
For human MRI, what field strength scanner do you use?
3.0T
Which processing packages did you use for your study?
Other, please list:
nilearn, nibabel
Provide references using APA citation style.
References
1. Bolt, T. (2024). tsb46/fmri-findviz [Computer software]. GitHub. https://github.com/tsb46/fmri-findviz
2. Pallets. (2024). Flask (Version 3.0.3) [Computer software]. https://flask.palletsprojects.com
3. Plotly Technologies Inc. (2015). Collaborative data science. https://plot.ly
4. Brett, M., et al. (2024). nipy/nibabel: 5.2.1 [Computer software]. Zenodo. https://doi.org/10.5281/zenodo.10714563
5. Abraham, A., et al. (2014). Machine learning for neuroimaging with scikit-learn. Frontiers in Neuroinformatics, 8, 14.