Beauty beyond categories: A domain-general neural signature of visual aesthetics

Poster No:

757 

Submission Type:

Abstract Submission 

Authors:

Xinyu Liang1, Kaixiang Zhuang1, Yun Wang1, Daniel Kaiser2,3, Martin Hebart4,5, Deniz Vatansever1

Institutions:

1Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai, China, 2Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Giessen, Giessen, Germany, 3Center for Mind, Brain and Behavior (CMBB), University of Marburg, Justus Liebig University Giessen and Technical University of Darmstadt, Marburg, Germany, 4Department of Medicine, Justus Liebig University Giessen, Giessen, Germany, 5Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

First Author:

Xinyu Liang  
Institute of Science and Technology for Brain-inspired Intelligence, Fudan University
Shanghai, China

Co-Author(s):

Kaixiang Zhuang  
Institute of Science and Technology for Brain-inspired Intelligence, Fudan University
Shanghai, China
Yun Wang  
Institute of Science and Technology for Brain-inspired Intelligence, Fudan University
Shanghai, China
Daniel Kaiser  
Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg, Justus Liebig University Giessen and Technical University of Darmstadt, Marburg, Germany
Martin Hebart  
Department of Medicine, Justus Liebig University Giessen, Giessen, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Deniz Vatansever  
Institute of Science and Technology for Brain-inspired Intelligence, Fudan University
Shanghai, China

Introduction:

Visual experiences of beauty play a central role in our daily interactions with the world around us, influencing our decisions in contexts ranging from consumer purchases to social engagement[1]. Recent research suggests that beauty judgments engage several neurocognitive processes, including visual semantics such as object recognition[2]. However, some researchers have also proposed a domain-general mechanism that represents beauty irrespective of object category, a proposal that requires further investigation[3]. To address this gap, we conducted a large-scale 7T fMRI experiment in which participants viewed over 1,280 unique object images drawn from 1,854 concepts, each rated for beauty by an independent sample. Using multivariate pattern analysis (MVPA), we examined the neural underpinnings of beauty judgments across high-level semantic categories.

Methods:

Twenty participants (aged 20-29 years, mean = 24.56, SD = 2.42 years; 14 female, 6 male) were densely sampled with 7T fMRI (TR = 1.5 s, TE = 25 ms, 1.5 mm isotropic voxels) while performing 60 runs of a continuous object recognition task (old/new) across five consecutive daily sessions. In an event-related design (3 s ON, 1 s OFF), participants viewed images from the 1,854 concepts included in the THINGS database[4-5]. An independent sample of 3,750 raters judged the beauty of the objects on a nine-point Likert scale ranging from "not beautiful at all" to "very beautiful". The WordNet taxonomy was used to group objects into high-level semantic categories[4]. High-resolution fMRI data were preprocessed with HCP pipelines and entered into GLMsingle for trial-wise beta estimation[6-7]. To minimize semantic biases within high-level categories, beauty ratings were quantized into four levels, which were then merged so that each level contained an equal number of objects across categories. A linear support vector regression (SVR) model (C = 1, 10×10-fold cross-validation) was trained to predict beauty levels from average cortical responses[8]. Statistical significance was assessed with a 5,000-sample bootstrapping procedure (FDR q < 0.05). Regions consistently contributing to the prediction model were identified by the overlap between significant weights and Haufe-transformed activations[9].
Supporting Image: 20241212_Beauty_OHBMfig1.png
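The core prediction pipeline (quantized beauty levels, linear SVR with k-fold cross-validation, and a Haufe transformation of the learned weights) can be sketched as follows. This is an illustrative sketch only: the synthetic data, variable names, and scikit-learn usage are assumptions, not the authors' actual implementation (which used CANlab tools, 10×10-fold cross-validation, and real trial-wise betas); a single cross-validation repeat is shown.

```python
import numpy as np
from sklearn.svm import LinearSVR
from sklearn.model_selection import KFold
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_objects, n_voxels = 400, 1000

# Synthetic stand-in for trial-averaged cortical responses (objects x voxels)
X = rng.standard_normal((n_objects, n_voxels))
true_w = rng.standard_normal(n_voxels)
# Beauty ratings quantized into four levels (1-4), as in the abstract
scores = X @ true_w
y = np.digitize(scores, np.quantile(scores, [0.25, 0.5, 0.75])) + 1

# Linear SVR (C=1) with 10-fold cross-validation (one repeat of the
# abstract's 10x10-fold scheme); out-of-fold predictions are pooled
preds = np.zeros(n_objects)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = LinearSVR(C=1.0, max_iter=10000).fit(X[train], y[train])
    preds[test] = model.predict(X[test])

# Prediction accuracy as the correlation between predicted and true levels
r, _ = pearsonr(preds, y)

# Haufe transformation: project the model weights back into activation
# space, so each voxel's value reflects its covariance with the model score
final_model = LinearSVR(C=1.0, max_iter=10000).fit(X, y)
s = X @ final_model.coef_                       # model scores per object
activation = (X - X.mean(0)).T @ (s - s.mean()) / (len(s) - 1)  # cov(X, s)
```

In practice the significance of each voxel's weight and Haufe-transformed activation would then be assessed with bootstrapping (here omitted), and consistently contributing regions identified by their overlap.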
 

Results:

The analysis of object beauty ratings revealed a clear transition in beauty judgments across 16 high-level semantic categories. Animate and natural objects received higher beauty ratings than man-made and inanimate objects (Fig. 1d). Despite these behavioural differences, an SVR model identified whole-brain neural response patterns that accurately predicted the four quantized levels of beauty across all semantic categories in our sample (r = 0.60, p < 0.001) (Fig. 2a-b). The reconstructed activation patterns showed that the ventromedial prefrontal and posterior cingulate cortices (i.e. core DMN regions) were most consistently and positively associated with beauty levels. Conversely, secondary visual regions showed a negative association with beauty. In addition, our neural signature successfully predicted beauty levels within high-level semantic categories (Fig. 2d), though performance varied: the animal, container and food categories showed the highest accuracy, whereas tools and musical instruments showed lower performance.
Supporting Image: 20241212_Beauty_OHBMfig2.png
 

Conclusions:

Our findings reveal a neural signature that predicts object beauty across multiple semantic categories, demonstrating a domain-general neural mechanism for processing visual beauty. The identified neural pattern prominently featured regions within the DMN, previously linked to mnemonic and value-based processing across visual and non-visual domains. Although prediction accuracy varied between semantic categories, the consistency of this neural signature provides strong evidence for a shared mechanism underlying beauty judgments, regardless of object type. Collectively, these findings advance our understanding of how the human brain processes visual beauty and aesthetic perception.

Higher Cognitive Functions:

Higher Cognitive Functions Other 1

Modeling and Analysis Methods:

Activation (eg. BOLD task-fMRI)
Classification and Predictive Modeling 2

Perception, Attention and Motor Behavior:

Perception: Visual

Keywords:

FUNCTIONAL MRI
Multivariate
Other - 7T MRI; Neuroaesthetics; Beauty; Visual semantics; Object recognition; Neural signature

1|2Indicates the priority used for review

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a "source" party and a "reproducing" party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.

Please indicate below if your study was a "resting state" or "task-activation” study.

Task-activation

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was this research conducted in the United States?

No

Were any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Were any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Functional MRI
Behavior
Other, Please specify  -   Online testing

For human MRI, what field strength scanner do you use?

7T

Which processing packages did you use for your study?

FSL
FreeSurfer
Other, Please list  -   QuNex; GLMsingle; CANlab

Provide references using APA citation style.

1. Skov, M., & Nadal, M. (2021). The nature of beauty: Behavior, cognition, and neurobiology. Annals of the New York Academy of Sciences, 1488(1), 44–55.
2. Chatterjee, A., & Cardilo, E. (Eds.). (2022). Brain, Beauty, and Art: Essays Bringing Neuroaesthetics into Focus (1st ed.). Oxford University Press.
3. Vessel, E. A., Isik, A. I., Belfi, A. M., Stahl, J. L., & Starr, G. G. (2019). The default-mode network represents aesthetic appeal that generalizes across visual domains. Proceedings of the National Academy of Sciences, 116(38), 19155–19164.
4. Hebart, M. N., Dickter, A. H., Kidder, A., Kwok, W. Y., Corriveau, A., Wicklin, C. V., & Baker, C. I. (2019). THINGS: A database of 1,854 object concepts and more than 26,000 naturalistic object images. PLOS ONE, 14(10).
5. Allen, E. J., St-Yves, G., Wu, Y., Breedlove, J. L., Prince, J. S., Dowdle, L. T., Nau, M., Caron, B., Pestilli, F., Charest, I., Hutchinson, J. B., Naselaris, T., & Kay, K. (2021). A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence. Nature Neuroscience, 1–11.
6. Ji, J. L., Demšar, J., Fonteneau, C., Tamayo, Z., Pan, L., Kraljič, A., Matkovič, A., Purg, N., Helmer, M., Warrington, S., Winkler, A., Zerbi, V., Coalson, T. S., Glasser, M. F., Harms, M. P., Sotiropoulos, S. N., Murray, J. D., Anticevic, A., & Repovš, G. (2023). QuNex—An integrative platform for reproducible neuroimaging analytics. Frontiers in Neuroinformatics, 17.
7. Prince, J. S., Charest, I., Kurzawski, J. W., Pyles, J. A., Tarr, M. J., & Kay, K. N. (2022). Improving the accuracy of single-trial fMRI response estimates using GLMsingle. eLife, 11, e77599.
8. Kohoutová, L., Heo, J., Cha, S., Lee, S., Moon, T., Wager, T. D., & Woo, C.-W. (2020). Toward a unified framework for interpreting machine-learning models in neuroimaging. Nature Protocols, 15(4), 1399–1435.
9. Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D., Blankertz, B., & Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96–110.

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries list provided.

No