Bridging Two Minds: Neural Evidence of Shared Understanding in Constrained Communication

Poster No:

656 

Submission Type:

Abstract Submission 

Authors:

Yulei Shen1,2, Takahiko Koike1, Shohei Tsuchimoto3, Ayumi Yoshioka3, Kanae Ogasawara1, Norihiro Sadato4, Hiroki Tanabe2

Institutions:

1Inter-Brain Dynamics Collaboration Unit, RIKEN Center for Brain Science, Wako, Saitama, 2Department of Cognitive & Psychological Sciences, Graduate School of Informatics, Nagoya, Aichi, 3National Institute for Physiological Sciences, Okazaki, Aichi, 4Ritsumeikan University, Kusatsu, Shiga

First Author:

Yulei Shen  
Inter-Brain Dynamics Collaboration Unit, RIKEN Center for Brain Science|Department of Cognitive & Psychological Sciences, Graduate School of Informatics
Wako, Saitama|Nagoya, Aichi

Co-Author(s):

Takahiko Koike  
Inter-Brain Dynamics Collaboration Unit, RIKEN Center for Brain Science
Wako, Saitama
Shohei Tsuchimoto  
National Institute for Physiological Sciences
Okazaki, Aichi
Ayumi Yoshioka  
National Institute for Physiological Sciences
Okazaki, Aichi
Kanae Ogasawara  
Inter-Brain Dynamics Collaboration Unit, RIKEN Center for Brain Science
Wako, Saitama
Norihiro Sadato  
Ritsumeikan University
Kusatsu, Shiga
Hiroki Tanabe  
Department of Cognitive & Psychological Sciences, Graduate School of Informatics
Nagoya, Aichi

Introduction:

The ability to mentally reconstruct visual information from verbal descriptions is fundamental to human communication [4], particularly when direct visual sharing is constrained. While previous research has explored natural communication processes [1-3], information transfer in real-world contexts frequently occurs under constraints that pose distinct challenges for the transmission and reconstruction of complex visual representations.
To investigate this process, we developed a hyperscanning fMRI paradigm that addresses three inherent constraints of real-world communication: limited descriptive capacity, temporal adaptation demands, and fragmentary information reconstruction. By computing neural pattern similarity between interacting individuals, this approach reveals how two brains establish shared representations despite communicative constraints.

Methods:

Forty-six participants (23 same-sex dyads) completed a face information transfer task. On each trial, the sender viewed a face image and verbally described it to the receiver within 16 seconds, guided by experimenter-provided hints. The receiver then had 6 seconds to form a mental image of the described face, after which both participants completed a four-alternative forced-choice (4AFC) identification task. Sender and receiver roles alternated in a pseudo-randomized order to examine how the quantity of transferred information affected the effectiveness of mental imagery (Fig. 1).
fMRI time-series were acquired from both dyad members simultaneously using two MRI scanners. We examined three aspects of information transfer using dyad-wise spatial pattern similarity (DSPS) (Fig. 1): transmission quality, measuring how faithfully visual information is preserved from the sender's initial perception to the receiver's final reconstruction; encoding-decoding alignment, quantifying how precisely the receiver's mental imagery matches the sender's verbally encoded representation; and temporal convergence, comparing DSPS between the sender's description and the receiver's listening during early versus late phases of the interaction, capturing the progressive alignment of neural activity patterns as understanding develops. Statistical significance was determined using 5000-iteration Monte-Carlo permutation tests with FDR correction (q < .05).
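The DSPS computation and significance procedure can be illustrated with a minimal Python sketch. It assumes DSPS is the Fisher z-transformed Pearson correlation between the two dyad members' voxel-wise spatial patterns within a region of interest, and that the Monte-Carlo null is built by re-pairing senders with receivers from other dyads (pseudo-dyads); the array shapes and the roi_patterns variable are illustrative assumptions, not details given in the abstract.

import numpy as np
from statsmodels.stats.multitest import fdrcorrection

def dsps(sender_pattern, receiver_pattern):
    """Fisher z-transformed Pearson correlation between two voxel-wise spatial patterns."""
    r = np.corrcoef(sender_pattern, receiver_pattern)[0, 1]
    return np.arctanh(r)

def permutation_test(sender, receiver, n_perm=5000, seed=0):
    """One-sided Monte-Carlo permutation test for one ROI.

    sender, receiver : (n_dyads, n_voxels) arrays; row i holds the
    patterns of dyad i. The null distribution comes from re-pairing
    receivers across dyads (pseudo-dyads) n_perm times -- an assumed,
    commonly used hyperscanning null.
    """
    rng = np.random.default_rng(seed)
    n = sender.shape[0]
    observed = np.mean([dsps(sender[i], receiver[i]) for i in range(n)])
    null = np.empty(n_perm)
    for k in range(n_perm):
        perm = rng.permutation(n)
        null[k] = np.mean([dsps(sender[i], receiver[perm[i]]) for i in range(n)])
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)  # unbiased Monte-Carlo estimate
    return observed, p

# FDR correction (q < .05) across ROIs, matching the abstract:
# p_values = np.array([permutation_test(s, r)[1] for s, r in roi_patterns])
# rejected, q_values = fdrcorrection(p_values, alpha=0.05)

The +1 in the numerator and denominator is the standard correction that keeps the Monte-Carlo p-value estimate from being exactly zero.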

Results:

DSPS revealed distinct neural patterns across the three aspects of information transfer. Transmission quality (Fig. 2) was characterized by robust DSPS in two primary networks: a visual-semantic network comprising bilateral occipitotemporal regions (left FG1 and FG2: z = 0.11; right V3, V4, and OFG: z = 0.026; right pMTG and V5/MT: z = 0.064; right aSTG, STS, and MTG: z = 0.023), and an attention-control network including left pSMG and AG/IPL (z = 0.035), left SPL and IPS (z = 0.029), right Ins (z = 0.033), and right FOC and FP (z = 0.020).
Enhanced encoding-decoding alignment (Fig. 3) emerged in language- and memory-related regions, particularly left vlPFC (z = 0.023) and left PHC (z = 0.026), extending to right PreCG (z = 0.042) and right MTG (z = 0.040). Analysis of temporal convergence revealed increasing DSPS from the early to the late phase of the interaction (Fig. 4) in temporal-frontal areas, including left STG and MTG (z = 0.047), left FP (z = 0.037), right pSTS (z = 0.042), right pMTG (z = 0.014), and right Ins and FOC (z = 0.063), demonstrating progressive convergence of comprehension during information exchange.
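To illustrate the temporal convergence contrast behind Fig. 4, a companion sketch in the same style: it assumes each interaction is split into early and late halves and that DSPS is averaged across dyads within each phase; the split point and array layout are assumptions, as the abstract does not specify them.

import numpy as np

def temporal_convergence(sender_phases, receiver_phases):
    """Late-minus-early DSPS contrast for one ROI.

    sender_phases, receiver_phases : (n_dyads, 2, n_voxels) arrays
    holding each member's spatial pattern in the early (index 0) and
    late (index 1) phase of the interaction.
    """
    def fisher_z(a, b):
        return np.arctanh(np.corrcoef(a, b)[0, 1])

    early = [fisher_z(s[0], r[0]) for s, r in zip(sender_phases, receiver_phases)]
    late = [fisher_z(s[1], r[1]) for s, r in zip(sender_phases, receiver_phases)]
    return np.mean(late) - np.mean(early)

A positive value corresponds to the progressive alignment reported above; significance would again be assessed with the pseudo-dyad permutation scheme sketched in the Methods.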
Supporting Image: ohbm.png
 

Conclusions:

This study demonstrates representational alignment of spatial activity patterns between communicating dyads under information constraints. During live interaction, markedly enhanced DSPS emerged in left FG when dyads shared unique facial features. Stronger DSPS in left vlPFC and PHC suggests a shared neural mechanism for transforming between external and internal representations during verbal encoding and visual decoding. Over the course of the interaction, DSPS in STG/MTG and fronto-opercular regions increased, reflecting developing comprehension. Notably, MTG and STG displayed enhanced interaction-specific DSPS across all analyses, highlighting their modality-independent role in visual-verbal integration during real-time conversation.

Emotion, Motivation and Social Neuroscience:

Social Interaction 1
Social Neuroscience Other 2

Keywords:

Social Interactions

1|2 Indicates the priority used for review

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio, print, and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I do not want to participate in the reproducibility challenge.

Please indicate below if your study was a "resting state" or "task-activation" study.

Task-activation

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was this research conducted in the United States?

No

Was any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Was any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Functional MRI
Behavior

For human MRI, what field strength scanner do you use?

3.0T

Which processing packages did you use for your study?

AFNI
FSL

Provide references using APA citation style.

[1] Nguyen, M., Vanderwal, T., & Hasson, U. (2019). Shared understanding of narratives is correlated with shared neural responses. NeuroImage, 184, 161-170.
[2] Stephens, G. J., Silbert, L. J., & Hasson, U. (2010). Speaker-listener neural coupling underlies successful communication. Nature, 466(7308), 571-576.
[3] Zadbood, A., Chen, J., Leong, Y. C., Norman, K. A., & Hasson, U. (2017). How we transmit memories to other brains: Constructing shared neural representations via communication. Cerebral Cortex, 27(10), 4988-5000.
[4] Zwaan, R. A., & Radvansky, G. A. (1998). Situation models in language comprehension and memory. Psychological Bulletin, 123(2), 162-185.

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries list provided.

No