Poster No:
1975
Submission Type:
Abstract Submission
Authors:
Nathalia Bianchini Esper1, Adam Santorelli1, Bryan Gonzalez1, Nicole Burke1, Samuel Louviot2, Alp Erkent1, Apurva Gokhe1, Camilla Strauss1, Celia Maiorano1, Iktae Kim1, Freymon Perez1, John d’Antonio-Bertagnolli1, Stan Colcombe1, Alexandre Franco2, Gregory Kiar3, Michelle Freund1, Michael Milham1
Institutions:
1Child Mind Institute, New York City, NY, 2Nathan Kline Institute, Orangeburg, NY, 3Center for Data Analytics, Innovation, and Rigor, Child Mind Institute, NYC, NY
First Author:
Nathalia Bianchini Esper
Child Mind Institute
New York City, NY
Co-Author(s):
Iktae Kim
Child Mind Institute
New York City, NY
Gregory Kiar
Center for Data Analytics, Innovation, and Rigor, Child Mind Institute
NYC, NY
Introduction:
With the continuous refinement of brain imaging methods such as MRI and EEG, researchers are gaining better insights into brain structure, function, and connectivity, enabling advances in understanding neurological and psychiatric disorders, as well as the neural basis of cognition, behavior, and emotion (Warbrick, 2022). A multimodal approach allows researchers to integrate complementary data types, addressing limitations of single-modality experiments and providing a broader perspective on brain function, connectivity, facial expressions, body language, and environmental context (Calhoun & Sui, 2016; Wagner et al., 2019). This approach is particularly valuable in social and cognitive sciences, offering deeper insights into communication, emotion regulation, and social interactions (Madsen & Parra, 2024). Here, we present a laboratory design for the next generation of data collection: a multimodal, mobile brain/body imaging (MoBI) approach. A MoBI laboratory (Makeig et al., 2009) uses various techniques and data sources to examine brain activity in dynamic, interactive scenarios, along with other physiological and behavioral measures, including EEG, eye-tracking, motion capture, electromyography (EMG), electrocardiography (ECG), galvanic skin response, and audio/video recordings.
Methods:
Multimodal data collection presents two primary challenges. First, each modality's hardware and software requirements often rely on multiple computing systems and peripheral devices, creating logistical and operational burdens. Second, the independent acquisition of data streams for each modality results in separate files, complicating synchronization of timestamps across devices. To overcome these challenges, we developed a centralized hub and data collection system for MoBI setups. This approach consolidates data streams from multiple modalities onto a single computer using the Lab Streaming Layer (LSL) framework (Kothe et al., 2024). By leveraging a shared system clock, we achieve precise synchronization and effective correction for time drift across devices, eliminating the need for additional synchronization hardware or extensive post hoc adjustments. An integrated architecture reduces latency and hardware complexity, ensures real-time monitoring, and maintains the temporal fidelity of multimodal datasets. To support this implementation, we developed comprehensive documentation that provides detailed guidance on every process step, from equipment evaluation through data collection. We include specifications for the necessary hardware components to build a MoBI system, insights into the configuration of our centralized computer, and data acquisition best practices. This documentation aims to streamline the process for researchers and facilitate the adoption of robust and synchronized multimodal data collection workflows.
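As a minimal, hedged sketch of this acquisition pattern (not our production code), the snippet below uses the pylsl bindings to publish a task-marker stream and subscribe to an EEG stream, applying time_correction() so that remote timestamps land on the hub's shared clock; the stream names and source IDs are hypothetical.

```python
# Minimal sketch (assumed setup, not the authors' acquisition code): one LSL outlet
# and one inlet on the centralized hub, with time_correction() mapping remote
# timestamps onto the shared local clock. Requires pylsl and an EEG device
# already streaming over LSL; names and IDs below are hypothetical.
from pylsl import StreamInfo, StreamOutlet, StreamInlet, resolve_byprop, local_clock

# Publish a task-event marker stream (irregular rate, string markers).
marker_info = StreamInfo(name="TaskMarkers", type="Markers", channel_count=1,
                         nominal_srate=0, channel_format="string",
                         source_id="task_markers_001")
marker_outlet = StreamOutlet(marker_info)
marker_outlet.push_sample(["trial_start"])  # timestamped on the local LSL clock

# Subscribe to any EEG stream on the network and correct its clock offset.
eeg_streams = resolve_byprop("type", "EEG", timeout=5.0)
if eeg_streams:
    inlet = StreamInlet(eeg_streams[0])
    offset = inlet.time_correction()           # remote-to-local clock offset (seconds)
    sample, ts = inlet.pull_sample(timeout=1.0)
    if sample is not None:
        aligned_ts = ts + offset               # timestamp expressed on the hub's clock
        print(f"EEG sample at hub time {aligned_ts:.6f} (now {local_clock():.6f})")
```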
Results:
We have successfully integrated a range of data acquisition systems into our MoBI framework, including (but not limited to) dry and semi-dry EEG systems; table-top and wearable eye-tracking devices; and physiological measurement modalities such as electrocardiography, electrodermal activity, inductive respiration, and peripheral oxygen saturation. Additionally, the setup supports multiple audio and video streams, motion capture systems, and cognitive tasks presented with PsychoPy and MindLogger (Klein et al., 2021). Figure 1 shows a schema of our setup. By centralizing computing resources on a single device, we achieve a cost reduction of up to 50% compared to a traditional laboratory design.

Figure 1. Basic schema of a MoBI lab setup, showing the connection between each piece of equipment and the core computer.
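Under the same assumptions (pylsl installed and each device exposing an LSL stream), the sketch below illustrates how the single hub computer can discover every available stream and pull data with all timestamps referenced to one clock; it is a simplified illustration rather than our recording pipeline.

```python
# Hypothetical sketch: enumerate every LSL stream visible to the hub (EEG,
# eye tracking, physiology, audio/video markers, task events) and pull data
# with per-stream clock offsets applied, so all modalities share one timeline.
from pylsl import StreamInlet, resolve_streams

inlets = []
for info in resolve_streams(wait_time=2.0):       # discover all streams on the network
    inlet = StreamInlet(info, max_buflen=360)     # buffer up to 360 s per stream
    inlets.append((info.name(), info.type(), inlet))
    print(f"Found stream: {info.name()} ({info.type()}, {info.nominal_srate()} Hz)")

for name, stype, inlet in inlets:
    offset = inlet.time_correction()              # remote-to-local offset for this device
    chunk, timestamps = inlet.pull_chunk(timeout=0.0)
    aligned = [t + offset for t in timestamps]    # timestamps on the hub's clock
    if aligned:
        print(f"{name} ({stype}): {len(chunk)} samples, latest at {aligned[-1]:.3f}")
    else:
        print(f"{name} ({stype}): no new samples yet")
```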
Conclusions:
Preliminary testing has demonstrated the LSL framework's robust capability to achieve clock synchronization and time drift correction across diverse devices, ensuring accurate temporal alignment of multimodal datasets. Our detailed documentation, publicly hosted at childmindresearch.github.io/MoBI_Docs, provides the research community with a valuable resource for replicating and adapting these methodologies for multimodal studies.
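For illustration, one way to quantify residual clock drift over a recording is to sample the remote-to-local offset repeatedly and fit a line to it, as in the hedged sketch below (assuming pylsl and NumPy; this is not the validation procedure reported above).

```python
# Hedged sketch: estimate residual drift between a device clock and the hub clock
# by sampling LSL's time_correction() over time and fitting a linear trend.
import time
import numpy as np
from pylsl import StreamInlet, resolve_byprop, local_clock

streams = resolve_byprop("type", "EEG", timeout=5.0)   # any stream type works
if streams:
    inlet = StreamInlet(streams[0])
    times, offsets = [], []
    for _ in range(10):                                 # ten measurements, 1 s apart
        offsets.append(inlet.time_correction())
        times.append(local_clock())
        time.sleep(1.0)

    slope, intercept = np.polyfit(times, offsets, deg=1)
    print(f"Mean offset: {np.mean(offsets) * 1e3:.3f} ms")
    print(f"Residual drift: {slope * 3.6e6:.3f} ms per hour")
    # Recorded remote timestamps can then be shifted by the fitted offset
    # (offset ≈ slope * t + intercept) to land on the hub clock.
```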
Neuroinformatics and Data Sharing:
Informatics Other 2
Novel Imaging Acquisition Methods:
Multi-Modal Imaging 1
Keywords:
Acquisition
Informatics
Workflows
Other - Multimodal data
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio, print, and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I do not want to participate in the reproducibility challenge.
Please indicate below if your study was a "resting state" or "task-activation" study.
Other
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Patients
Was this research conducted in the United States?
Yes
Are you Internal Review Board (IRB) certified?
Please note: Failure to have IRB approval, if applicable, will lead to automatic rejection of the abstract.
Not applicable
Were any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Not applicable
Were any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
EEG/ERP
Behavior
Other, Please specify:
eye-tracking, physiological measures
Provide references using APA citation style.
Calhoun, V. D., & Sui, J. (2016). Multimodal fusion of brain imaging data: A key to finding the missing link(s) in complex mental illness. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 1(3), 230–244. https://doi.org/10.1016/j.bpsc.2015.12.005
Klein, A., Clucas, J., Krishnakumar, A., Ghosh, S. S., Van Auken, W., Thonet, B., Sabram, I., Acuna, N., Keshavan, A., Rossiter, H., Xiao, Y., Semenuta, S., Badioli, A., Konishcheva, K., Abraham, S. A., Alexander, L. M., Merikangas, K. R., Swendsen, J., Lindner, A. B., & Milham, M. P. (2021). Remote digital psychiatry for mobile mental health assessment and therapy: MindLogger platform development study. Journal of Medical Internet Research, 23(11), e22369. https://doi.org/10.2196/22369
Kothe, C., Shirazi, S. Y., Stenner, T., Medine, D., Boulay, C., Grivich, M. I., Mullen, T., Delorme, A., & Makeig, S. (2024). The Lab Streaming Layer for synchronized multimodal recording. bioRxiv. https://doi.org/10.1101/2024.02.13.580071
Madsen, J., & Parra, L. C. (2024). Bidirectional brain-body interactions during natural story listening. Cell Reports, 43(4), 114081. https://doi.org/10.1016/j.celrep.2024.114081
Makeig, S., Gramann, K., Jung, T.-P., Sejnowski, T. J., & Poizner, H. (2009). Linking brain, mind and behavior. International Journal of Psychophysiology, 73(2), 95–100. https://doi.org/10.1016/j.ijpsycho.2008.11.008
Wagner, J., Martinez-Cancino, R., Delorme, A., Makeig, S., Solis-Escalante, T., Neuper, C., & Mueller-Putz, G. (2019). High-density EEG mobile brain/body imaging data recorded during a challenging auditory gait pacing task. Scientific Data, 6(1), 211. https://doi.org/10.1038/s41597-019-0223-2
Warbrick, T. (2022). Simultaneous EEG-fMRI: What have we learned and what does the future hold? Sensors (Basel, Switzerland), 22(6), 2262. https://doi.org/10.3390/s22062262