Building accessible and validated neuroimaging software with NMIND

Poster No:

1835 

Submission Type:

Abstract Submission 

Authors:

Jason Kai1, Lucile Moore2, Jon Cluce1, Begim Fayzullobekova2, Mathias Goncalves3, Timothy Hendrickson2, Connor Lane1, Erik Lee2, Jacob Lundquist2, Christopher Markiewicz3, rae McCollum2, Laura Newman1, Paul Reiners2, Tamsin Rogers1, Florian Rupprecht1, Biraj Shrestha1, Damien Fair2, Michael Milham1, Gregory Kiar1

Institutions:

1Child Mind Institute, New York, NY, 2University of Minnesota, Minneapolis, MN, 3Stanford University, Stanford, CA

First Author:

Jason Kai  
Child Mind Institute
New York, NY

Co-Author(s):

Lucile Moore, PhD  
University of Minnesota
Minneapolis, MN
Jon Cluce  
Child Mind Institute
New York, NY
Begim Fayzullobekova  
University of Minnesota
Minneapolis, MN
Mathias Goncalves  
Stanford University
Stanford, CA
Timothy Hendrickson, PhD  
University of Minnesota
Minneapolis, MN
Connor Lane  
Child Mind Institute
New York, NY
Erik Lee  
University of Minnesota
Minneapolis, MN
Jacob Lundquist  
University of Minnesota
Minneapolis, MN
Christopher Markiewicz  
Stanford University
Stanford, CA
rae McCollum  
University of Minnesota
Minneapolis, MN
Laura Newman  
Child Mind Institute
New York, NY
Paul Reiners  
University of Minnesota
Minneapolis, MN
Tamsin Rogers  
Child Mind Institute
New York, NY
Florian Rupprecht  
Child Mind Institute
New York, NY
Biraj Shrestha  
Child Mind Institute
New York, NY
Damien Fair, PhD  
University of Minnesota
Minneapolis, MN
Michael Milham  
Child Mind Institute
New York, NY
Gregory Kiar  
Child Mind Institute
New York, NY

Introduction:

Brain imaging researchers have historically found it necessary to develop their own sets of tools to work with large, heterogeneous datasets [1]. This practice raises concerns about long-term viability, in terms of both engineering practices and funding availability [2]. Furthermore, the validation and accessibility of robust, well-designed tools are crucial for ensuring reproducible research, particularly as neuroimaging datasets grow in scale and complexity [3]. The NMIND consortium was established to curate community standards from existing software standards, foster collaboration on scientific software, minimize redundant effort, and improve reproducibility in neuroimaging [2]. As part of this effort, a software standards checklist was created for evaluating developed tools. Here, we highlight a recent collaborative effort to evaluate several tools designed for processing and analyzing neuroimaging data.

Methods:

A one-day, hackathon-style session was organized to perform initial evaluations of tools against the software standards checklist. To simulate real-world use, developers completed an initial evaluation of their own tools in the NMIND repository, and a third party then conducted an editorial review to ensure the submitted checklist was complete and accurate. Where necessary, requested revisions were communicated to the checklist submission author. Fig. 1A illustrates the checklist submission and review workflow. Upon approval, the reviewed tool's evaluation was added to the proceedings page, along with relevant links (e.g., documentation, source code), or used to update an existing entry. For this effort, tool evaluation was split between two developer groups, each associated with the primary development of a set of the tools under review. One group performed the initial checklist submissions for the tools it owned, while the other performed the reviews. Toward the end of the session, feedback was collected from all participants to guide improvements to the tool evaluation process.
Supporting Image: figure1.png
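For illustration, each checklist evaluation can be viewed as a structured record pairing a tool with the standards items it satisfies, which the editorial review then confirms. The minimal Python sketch below shows one way such a record could be represented; the field names and structure are hypothetical and do not correspond to the actual NMIND checklist schema, which is defined by the consortium's submission template.

    # Hypothetical sketch of an evaluation record; names are illustrative,
    # not the actual NMIND checklist schema.
    from dataclasses import dataclass, field

    @dataclass
    class ChecklistItem:
        identifier: str                    # e.g. "documentation/installation-instructions"
        satisfied: bool                    # developer's self-evaluation
        reviewer_confirmed: bool = False   # set during the editorial review

    @dataclass
    class ToolEvaluation:
        tool_name: str
        repository_url: str
        items: list[ChecklistItem] = field(default_factory=list)

        def summary(self) -> str:
            # Count items the developer marked satisfied and the reviewer confirmed.
            done = sum(i.satisfied for i in self.items)
            confirmed = sum(i.reviewer_confirmed for i in self.items)
            return f"{done}/{len(self.items)} satisfied, {confirmed} confirmed by review"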
 

Results:

During the session, participants performed initial evaluations of 17 new tools and updated the evaluations of 2 existing tools. The editorial process involved communication between reviewers and developers that directly led to improvements in the respective tool repositories to satisfy the evaluation criteria. All evaluated tools were either added to or updated on the NMIND proceedings page once reviews were finalized. Fig. 2 shows examples of the editorial review process, as well as the NMIND proceedings page. Feedback from participants highlighted areas for enhancement, such as a way to mark a checklist item as "not applicable" to the evaluated tool, along with associated free-form text fields for direct comments.

One change implemented as a result of session feedback was aimed at reducing reviewer friction. Originally, reviewing a submitted checklist involved the additional steps of saving the evaluated checklist as a file and uploading it to the checklist page. Because evaluations are submitted in a structured template, the entire checklist is encapsulated in a code block that can be extracted directly. To simplify the process, an import capability was added that reads the checklist from pasted text, allowing reviewers to render a checklist from an ongoing review with a simple copy and paste.
Supporting Image: figure2.png
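As a rough illustration of this copy-paste import, the sketch below extracts a checklist encoded in a code block from pasted review text and parses it; the fenced-block convention, JSON payload, and function name are assumptions made for the example and may differ from the actual NMIND template and import implementation.

    # Illustrative sketch only: pull a checklist code block out of pasted
    # review text and parse it. The real NMIND template and import logic may differ.
    import json
    import re

    def extract_checklist(review_text: str) -> dict:
        # Find the first fenced code block and parse its contents as JSON.
        match = re.search(r"```(?:json)?\s*\n(.*?)```", review_text, flags=re.DOTALL)
        if match is None:
            raise ValueError("No checklist code block found in the pasted text.")
        return json.loads(match.group(1))

    # Example: paste the body of a submitted checklist and import it directly.
    pasted = 'Initial evaluation for ExampleTool:\n```json\n{"documentation": {"installation": true}}\n```'
    print(extract_checklist(pasted))  # {'documentation': {'installation': True}}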
 

Conclusions:

This collaborative effort highlights ongoing developments toward making robust scientific software accessible. Feedback from contributors leads to stronger community standards. By making the checklist available through NMIND and keeping the review process open, we hope to engage the broader scientific community in converging on shared scientific software standards through submitted tool evaluations, while minimizing friction in the process. Combined community efforts will be essential for research and clinical advances.

Neuroinformatics and Data Sharing:

Workflows 2
Informatics Other 1

Keywords:

Informatics
Open-Source Code
Open-Source Software
Workflows
Other - Standardization

1|2 Indicates the priority used for review


References:

[1] Yarkoni, T., Markiewicz, C. J., de la Vega, A., Gorgolewski, K. J., Salo, T., Halchenko, Y. O., McNamara, Q., DeStasio, K., Poline, J. B., Petrov, D., Hayot-Sasson, V., Nielson, D. M., Carlin, J., Kiar, G., Whitaker, K., DuPre, E., Wagner, A., Tirrell, L. S., Jas, M., Hanke, M., … Blair, R. (2019). PyBIDS: Python tools for BIDS datasets. Journal of Open Source Software, 4(40), 1294. https://doi.org/10.21105/joss.01294

[2] Kiar, G., Clucas, J., Feczko, E., Goncalves, M., Jarecka, D., Markiewicz, C. J., … Fair, D. (2023). Align with the NMIND consortium for better neuroimaging. Nature Human Behaviour, 7(7), 1027–1028.

[3] Szucs, D., & Ioannidis, J. P. (2020). Sample size evolution in neuroimaging research: An evaluation of highly-cited studies (1990–2012) and of latest practices (2017–2018) in high-impact journals. NeuroImage, 221, 117164.
