Poster No:
771
Submission Type:
Abstract Submission
Authors:
Risa Takeuchi1, Haruki Takeshima2, Takuya Niikawa1, Satoshi Nishida3
Institutions:
1Kobe University, Kobe, Hyogo, 2Osaka University, Suita, Osaka, 3National Institute of Information and Communications Technology, Suita, Osaka
First Author:
Co-Author(s):
Satoshi Nishida
National Institute of Information and Communications Technology
Suita, Osaka
Introduction:
Although the characteristics of mental imagery, such as vividness, complexity, and dynamicity, vary across individuals (Pearson, 2019), the inherently subjective nature of mental imagery prevents us from directly observing it in others, making its individuality difficult to explore. However, brain decoding methods, which recover the content of mental imagery from neural signals, offer a promising tool for investigating this subjective experience from various perspectives. A decoding method proposed for assessing semantic experiences from fMRI signals via a word vector space (Nishida & Nishimoto, 2018) is well suited for this purpose. The method was originally developed to quantify perceptual experiences evoked by naturalistic audiovisual stimuli using thousands of words. To establish a methodological basis for investigating the individuality of mental imagery, the present study adapted this method for decoding mental imagery. We validated the adapted method by assessing its performance in decoding the semantic content of mental imagery associated with various words, and we examined the relationship between decoding performance and the subjective vividness of mental imagery.
Methods:
Brain responses from 44 participants were measured using fMRI while they performed two tasks. In the first task, participants watched natural movies with sound for three hours. The second task, inspired by a previous study (Horikawa & Kamitani, 2017), required participants to recall mental imagery corresponding to 30 specific words ("imagery words") and then rate the vividness of each imagined image. We constructed a decoding model from the brain responses recorded during the first task, based on a pretrained word2vec space (Mikolov et al., 2013). This model, originally proposed in our prior study (Nishida & Nishimoto, 2018), estimates word vectors corresponding to descriptions of movie scenes from whole-brain responses to those scenes. We applied this model to estimate word vectors from the brain responses elicited during imagery recall in the second task. The performance of this mental imagery decoding was evaluated by pair-matching accuracy, defined as the likelihood that the Pearson correlation between a decoded word vector and the word2vec vector of its matched imagery word exceeds the correlation with the vectors of unmatched imagery words. The statistical significance of the decoding performance was assessed by comparing the actual performance to the chance level of 0.5.
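The pair-matching evaluation can be sketched as follows. This is an illustrative implementation, not the authors' code: it assumes the decoded vectors and the target word2vec vectors are available as NumPy arrays of shape (n_words, n_dims), and all function names are hypothetical.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pair_matching_accuracy(decoded, targets):
    """For each imagery word i, the fraction of non-matching words j
    for which decoded[i] correlates more strongly with its own target
    vector than with word j's target vector (chance level = 0.5)."""
    n = len(decoded)
    acc = np.empty(n)
    for i in range(n):
        matched = pearson(decoded[i], targets[i])
        wins = sum(pearson(decoded[i], targets[j]) < matched
                   for j in range(n) if j != i)
        acc[i] = wins / (n - 1)
    return acc
```

With 30 imagery words, each word's accuracy can then be tested against the 0.5 chance level.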
Results:
We found that the decoding performance for 20 of the 30 words was significantly higher than the chance level, demonstrating successful decoding of mental imagery for these words (Figure 1). Words with the highest performance, including "dog face," "woman face," "house," "worm in the back," and "man," are typically linked to specific brain regions and evoke strong emotions such as disgust or fear. In contrast, words with the lowest performance, such as "red," "happiness," "English," "sad," and "friendship," often represent abstract concepts or emotions other than disgust and fear. To further examine the behavioral correlates of our decoding performance, we assessed the Spearman correlation between word-wise decoding performance and the corresponding ratings of imagery vividness. We observed that this correlation was significantly higher than zero at the group level (Figure 2), suggesting that decoding performance reflects the vividness of mental imagery.
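One way to implement the group-level test of the accuracy–vividness relationship is to compute a Spearman correlation per participant and test the resulting coefficients against zero. The sketch below is an assumption about the analysis, not the authors' code: it takes per-participant accuracies and vividness ratings as arrays of shape (n_subjects, n_words) and omits tie handling in the ranking for brevity.

```python
import numpy as np

def ranks(x):
    """Integer ranks of the entries of x (ties not handled in this sketch)."""
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(len(x))
    return r

def spearman(a, b):
    """Spearman rho = Pearson correlation of the ranks."""
    ra = ranks(a) - (len(a) - 1) / 2  # center the ranks
    rb = ranks(b) - (len(b) - 1) / 2
    return float(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb)))

def group_level_t(accuracy, vividness):
    """Per-participant Spearman rhos and a one-sample t statistic
    testing whether their mean differs from zero."""
    rhos = np.array([spearman(a, v) for a, v in zip(accuracy, vividness)])
    t = rhos.mean() / (rhos.std(ddof=1) / np.sqrt(len(rhos)))
    return rhos, t
```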
Conclusions:
This study demonstrated that our decoding method can recover the semantic content of mental imagery evoked by words, with performance that reflects the vividness of such imagery. The evaluation framework used here quantitatively assesses mental imagery and its subjective vividness via word vectors, offering a potentially useful tool for investigating the subjective experience of mental imagery.
Higher Cognitive Functions:
Imagery 1
Modeling and Analysis Methods:
Activation (eg. BOLD task-fMRI)
Classification and Predictive Modeling 2
Novel Imaging Acquisition Methods:
BOLD fMRI
Keywords:
Cortex
FUNCTIONAL MRI
Language
Machine Learning
Modeling
Statistical Methods
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I do not want to participate in the reproducibility challenge.
Please indicate below if your study was a "resting state" or "task-activation" study.
Task-activation
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Healthy subjects
Was this research conducted in the United States?
No
Were any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Yes
Were any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
Functional MRI
For human MRI, what field strength scanner do you use?
3.0T
Which processing packages did you use for your study?
SPM
FreeSurfer
Provide references using APA citation style.
Pearson, J. (2019). The human imagination: the cognitive neuroscience of visual mental imagery. Nature Reviews. Neuroscience, 20(10), 624–634. https://doi.org/10.1038/s41583-019-0202-9
Nishida, S., & Nishimoto, S. (2018). Decoding naturalistic experiences from human brain activity via distributed representations of words. NeuroImage, 180(A), 232–242. https://doi.org/10.1016/j.neuroimage.2017.08.017
Horikawa, T., & Kamitani, Y. (2017). Generic decoding of seen and imagined objects using hierarchical visual features. Nature Communications, 8(1), 15037. https://doi.org/10.1038/ncomms15037
Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26, 3111–3119.