Poster No:
666
Submission Type:
Abstract Submission
Authors:
Wenlu Li1, Jin Li2, Weiyang Shi3, Congying Chu3, Tianzi Jiang3
Institutions:
1Institute of Automation, Chinese Academy of Sciences, Beijing, Beijing, 2School of Psychology, Capital Normal University, Beijing, Beijing, 3Institute of Automation, Chinese Academy of Sciences, Beijing, China
First Author:
Wenlu Li
Institute of Automation, Chinese Academy of Sciences
Beijing, Beijing
Co-Author(s):
Jin Li
School of Psychology, Capital Normal University
Beijing, Beijing
Weiyang Shi
Institute of Automation, Chinese Academy of Sciences
Beijing, China
Congying Chu
Institute of Automation, Chinese Academy of Sciences
Beijing, China
Tianzi Jiang
Institute of Automation, Chinese Academy of Sciences
Beijing, China
Introduction:
Facial identity and facial social traits profoundly shape our daily interactions and decision-making (Sutherland, 2022; Todorov, 2015). Faces convey both identity and social-trait information, yet how these two types of information interact in the human brain remains unclear. Studies suggest that facial identity and social traits are related, with faces perceived as more similar when they are rated similarly on social traits (Hassin, 2000). Research in primates and humans shows that the medial temporal lobe (MTL), including the amygdala and hippocampus, processes both identity and social-trait information (Cao, 2022; Tyree, 2023). However, how these two types of information interact within the MTL is not well understood. In this study, we used intracranial recordings to examine whether identity information in the human MTL contributes to the extraction of social-trait information. Specifically, we explored whether the neural representation of social traits in the human MTL aligns with that of a deep convolutional neural network (DCNN) with facial-identification experience.
Methods:
The single-neuron recordings in the human MTL analyzed in this study were obtained from a publicly available dataset (Cao, 2022). The dataset comprises recordings from 12 participants, each of whom viewed 500 face images; each image was rated on 8 social traits (warm, critical, competent, practical, feminine, strong, youthful, and charismatic). A total of 1577 MTL units were included in the subsequent analysis: 753 from the amygdala and 824 from the hippocampus. Three DCNNs with nearly identical architectures but different visual experience (VGG-face for face identification, VGG-16 for object classification, and VGG-untrained with no visual experience) were also included in this study. The same 500 face images viewed by the participants served as inputs to these DCNNs, and the activation values of their last convolutional layer (conv5-3) were used to compute the neural representations of social traits. A unit in the human MTL or in a DCNN was considered selective for a specific social trait if its output was significantly correlated with the corresponding trait ratings (p < .05, Spearman correlation). For each social trait, the population activity of its selective units was defined as its neural representation. We calculated the pairwise coupling effect between the 8 social-trait neural representations separately for the human MTL and each DCNN, yielding 4 social-trait representational geometric matrices. Specifically, we decomposed the rank correlation matrix between two neural representations using singular value decomposition (SVD), applied the first column of the singular vector matrix to the neural representations, and quantified the coupling effect as the correlation between the transformed representations (Waschke, 2023). We then computed the correlations between the human MTL and the DCNNs, both for the social-trait representational geometric matrices and for the neural representations of individual social traits. The former captures the relationship between the social-trait spaces of the two systems (a two-dimensional relationship), whereas the latter reflects a one-dimensional relationship between individual trait representations.
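As an illustrative sketch (not the authors' code), the selectivity screening and SVD-based coupling computation described above could look roughly as follows in Python; the variable names and the random stand-in data (firing_rates, trait_ratings) are assumptions introduced here for demonstration.

import numpy as np
from scipy.stats import spearmanr, rankdata

# Simulated stand-ins for per-face unit responses and per-face social-trait ratings.
n_faces, n_units, n_traits = 500, 1577, 8
rng = np.random.default_rng(0)
firing_rates = rng.random((n_faces, n_units))
trait_ratings = rng.random((n_faces, n_traits))

def trait_representations(responses, ratings, alpha=0.05):
    # For each trait, keep the units whose responses correlate with that trait's
    # ratings (Spearman, p < alpha); their population activity is the trait's
    # neural representation.
    reps = []
    for t in range(ratings.shape[1]):
        keep = [u for u in range(responses.shape[1])
                if spearmanr(responses[:, u], ratings[:, t])[1] < alpha]
        reps.append(responses[:, keep])
    return reps

def cross_rank_corr(a, b):
    # Spearman (rank) correlation matrix between the columns of a and b,
    # computed as the Pearson correlation of standardized ranks.
    ra = np.apply_along_axis(rankdata, 0, a)
    rb = np.apply_along_axis(rankdata, 0, b)
    ra = (ra - ra.mean(0)) / ra.std(0)
    rb = (rb - rb.mean(0)) / rb.std(0)
    return ra.T @ rb / a.shape[0]

def coupling_matrix(reps):
    # Pairwise coupling: SVD of the rank-correlation matrix between two trait
    # representations, projection of each population onto its leading singular
    # vector, then correlation of the two projected signals (cf. Waschke, 2023).
    k = len(reps)
    coupling = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            u, s, vt = np.linalg.svd(cross_rank_corr(reps[i], reps[j]))
            x = reps[i] @ u[:, 0]
            y = reps[j] @ vt[0, :]
            coupling[i, j] = spearmanr(x, y)[0]
    return coupling

mtl_reps = trait_representations(firing_rates, trait_ratings)
mtl_geometry = coupling_matrix(mtl_reps)  # 8 x 8 representational geometric matrix

Applying trait_representations and coupling_matrix to the conv5-3 activations of each DCNN instead of firing_rates would yield the corresponding DCNN geometric matrices.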
Results:
There was a significant correlation between the social-trait representational geometric matrices of the human MTL and VGG-face (r = .49, p = .009), but no significant correlation between the human MTL and either of the other two DCNNs (VGG-16: r = .27, p = .16; VGG-untrained: r = .35, p = .09). Moreover, VGG-face showed higher correlations with the human MTL than the other two DCNNs for every individual social trait.
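A minimal sketch of how two 8 x 8 representational geometric matrices might be compared is given below; the use of the unique upper-triangular entries and of a Spearman correlation are assumptions for illustration, since the abstract does not specify these details.

import numpy as np
from scipy.stats import spearmanr

def compare_geometries(mtl_geometry, dcnn_geometry):
    # Correlate the unique off-diagonal entries of two 8 x 8 coupling matrices.
    iu = np.triu_indices_from(mtl_geometry, k=1)
    r, p = spearmanr(mtl_geometry[iu], dcnn_geometry[iu])
    return r, p

# Usage with random symmetric matrices standing in for the real results.
rng = np.random.default_rng(1)
a, b = rng.random((8, 8)), rng.random((8, 8))
a, b = (a + a.T) / 2, (b + b.T) / 2
print(compare_geometries(a, b))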
Conclusions:
These results suggest that, in the human MTL, the information supporting the recognition of facial identity is also embedded in the neural representations that support judgments of facial social traits.

Brain-like Facial Social-trait Representations Emerge in a Facial Identification DCNN
Emotion, Motivation and Social Neuroscience:
Social Neuroscience Other 1
Higher Cognitive Functions:
Higher Cognitive Functions Other 2
Modeling and Analysis Methods:
Classification and Predictive Modeling
Perception, Attention and Motor Behavior:
Perception: Visual
Keywords:
Cognition
Computational Neuroscience
Data analysis
Perception
Single unit recording
Other - Facial Identity; Facial Social Traits; Deep Convolutional Neural Networks
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.
Please indicate below if your study was a "resting state" or "task-activation" study.
Task-activation
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Patients
Was this research conducted in the United States?
No
Were any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Yes
Were any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
Computational modeling
Provide references using APA citation style.
Cao, R. (2022). A human single-neuron dataset for face perception. Scientific Data, 9(1), 365.
Cao, R. (2022). A neuronal social trait space for first impressions in the human amygdala and hippocampus. Molecular Psychiatry, 27(8), 3501-3509.
Hassin, R. (2000). Facing faces: Studies on the cognitive aspects of physiognomy. Journal of Personality and Social Psychology, 78(5), 837.
Sutherland, C. A. (2022). Understanding trait impressions from faces. British Journal of Psychology, 113(4), 1056-1078.
Todorov, A. (2015). Social attributions from faces: Determinants, consequences, accuracy, and functional significance. Annual Review of Psychology, 66(1), 519-545.
Tyree, T. J. (2023). Cross-modal representation of identity in the primate hippocampus. Science, 382(6669), 417-423.
Waschke, L. (2023). Single-neuron spiking variability in hippocampus dynamically tracks sensory content during memory formation in humans. bioRxiv.
No