Anger Approach-Avoidance Training Modulates Visual Cortex Representations of Emotion and Intensity

Poster No:

1108 

Submission Type:

Abstract Submission 

Authors:

SoJung Lee1, Heungsik Yoon2, Ji-Hye Lim1, Jin-Young Chung1, Hyun-Chul Kim1, Sang-Hee Kim2

Institutions:

1Kyungpook National University, Daegu, Korea, Republic of, 2Korea University, Seoul, Korea, Republic of

First Author:

SoJung Lee  
Kyungpook National University
Daegu, Korea, Republic of

Co-Author(s):

Heungsik Yoon  
Korea University
Seoul, Korea, Republic of
Ji-Hye Lim  
Kyungpook National University
Daegu, Korea, Republic of
Jin-Young Chung  
Kyungpook National University
Daegu, Korea, Republic of
Hyun-Chul Kim  
Kyungpook National University
Daegu, Korea, Republic of
Sang-Hee Kim  
Korea University
Seoul, Korea, Republic of

Introduction:

The tendency to perceive and interpret social cues as hostile is closely linked to aggressive behavior (Dodge, 1993). Increasing research has been directed at developing interventions that target this socio-cognitive bias, such as anger approach-avoidance modification training (AAMT) using angry facial expressions (Carver & Harmon-Jones, 2009). This study examined how AAMT modulates the neural representation of emotional faces, with a particular focus on the visual cortex, given its critical role in processing facial emotions and their intensities, as well as its involvement in emotion regulation (Liu et al., 2024; Taschereau-Dumouchel et al., 2018). However, the high dimensionality of fMRI data, combined with the limited number of training samples, has traditionally limited the application of conventional machine learning techniques (Rastegarnia et al., 2023). To overcome these challenges, we employed a multi-task learning (MTL)-based deep learning model adapted to hierarchical data. This model enabled simultaneous classification of emotion and intensity while providing insights into the effects of the anger AAMT intervention.

Methods:

Forty-eight right-handed female participants (age = 22.5 ± 2.6 years) were randomly assigned to one of three groups: Angry Approach (A-apr), Angry Avoidance (A-avd), and Control. During the AAMT, the A-apr group pulled a joystick to bring angry faces closer, while the A-avd group pushed the joystick to move them away; the Control group pushed and pulled equally often. Participants underwent three sessions (Fig 1a): pre-fMRI, AAMT training, and post-fMRI. During the fMRI tasks, participants viewed facial stimuli (Lee et al., 2013), including happy and angry faces at varying intensities, as well as neutral and inverted faces (Fig 1b). Each trial consisted of a fixation-cross period followed by a sequence of stimuli, repeated eight times per run across three runs. Neural responses to the facial stimuli were processed in SPM8 to generate whole-brain beta maps, which were then transformed into 1D patterns representing visual cortex activity (Glasser et al., 2016). For classification analysis, a multi-layer perceptron (MLP)-based MTL model with ElasticNet regularization was employed. This architecture enabled simultaneous classification of emotional expressions and intensity levels (Fig 1c). Model performance was evaluated using subject-wise 10-fold cross-validation to assess AAMT effects.
Supporting Image: figure1.png
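The abstract does not specify the network's exact dimensions; the shared-trunk, two-head design it describes can be sketched minimally in NumPy as below. All layer sizes, label counts, and penalty weights are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical dimensions: 1D visual-cortex pattern -> shared layer -> two heads
n_voxels, n_hidden = 5000, 128
n_emotions, n_intensities = 3, 3   # illustrative label counts

W_shared = rng.normal(0, 0.01, (n_voxels, n_hidden))
W_emo = rng.normal(0, 0.01, (n_hidden, n_emotions))
W_int = rng.normal(0, 0.01, (n_hidden, n_intensities))

def forward(X):
    # One shared representation feeds both task-specific heads,
    # which is what lets the model classify emotion and intensity jointly.
    h = relu(X @ W_shared)
    return softmax(h @ W_emo), softmax(h @ W_int)

def elasticnet_penalty(weights, l1=1e-4, l2=1e-4):
    # ElasticNet = L1 + L2 penalty, added to the combined task losses
    return sum(l1 * np.abs(W).sum() + l2 * (W ** 2).sum() for W in weights)

X = rng.normal(size=(8, n_voxels))   # a batch of 8 trial patterns
p_emo, p_int = forward(X)
penalty = elasticnet_penalty([W_shared, W_emo, W_int])
```

Sharing the trunk regularizes both tasks against the small-sample, high-dimensional regime the Introduction describes, while the ElasticNet term encourages sparse, stable voxel weightings.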
 

Results:

All accuracies from the varying MLP models were significantly higher than the chance level of 16.7% (p < 10⁻³). The MTL model with ElasticNet outperformed conventional machine learning models in classification accuracy (Fig 2a). For joint emotion-intensity classification accuracy (Fig 2b), the pre- and post-training accuracies were as follows: A-apr (35.4 ± 1.0% vs. 35.6 ± 1.2%), A-avd (35.7 ± 1.0% vs. 35.0 ± 0.8%), and Control (36.0 ± 0.9% vs. 35.5 ± 0.9%). Among these, only the A-avd group (Fig 2b) showed significant pre-to-post differences in joint accuracy (p < .05) and emotion classification (p < 10⁻²). Saliency maps (Zhu et al., 2024) highlighted the regions the model deemed important for emotion and intensity classification (Figs 2c and 2d).
Supporting Image: figure2.png
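Saliency maps of the kind used here attribute a classifier's decision back to individual input voxels. As a minimal illustration (not the authors' method): for a linear read-out the gradient of a class score with respect to the input is simply that class's weight vector, and gradient × input highlights the voxels driving the decision. The model and sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear read-out over a 1D visual-cortex pattern
n_voxels, n_classes = 5000, 6    # 6 joint emotion-intensity classes (chance = 16.7%)
W = rng.normal(0, 0.01, (n_voxels, n_classes))
x = rng.normal(size=n_voxels)

def saliency(x, W, target):
    # For a linear score s_c = x @ W[:, c], d s_c / d x = W[:, c];
    # |gradient * input| scores each voxel's contribution to class c
    return np.abs(W[:, target] * x)

s = saliency(x, W, target=2)
top_voxels = np.argsort(s)[::-1][:10]   # voxels most salient for class 2
```

For a deep model such as the MTL MLP, the same idea applies with the gradient computed by backpropagation; averaging the resulting voxel scores over trials yields a map like Figs 2c and 2d.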
 

Conclusions:

This study highlights the potential of the AAMT intervention to modulate neural representations of facial expressions, particularly in the A-avd group, where a significant reduction in emotion classification accuracy was observed. The MTL model effectively classified emotions and intensities, offering a robust framework for decoding complex fMRI data. The saliency maps revealed task-specific shifts across primary, mid-level, and higher-order visual processing regions. These findings suggest that AAMT may influence visual cortex activity, shaping emotional experiences and facilitating adaptive socioemotional processing.

Emotion, Motivation and Social Neuroscience:

Emotional Perception 2

Modeling and Analysis Methods:

Activation (eg. BOLD task-fMRI)
Classification and Predictive Modeling 1

Keywords:

Emotions
FUNCTIONAL MRI
Other - Approach-Avoidance Modification Training; Facial Perception; Multi-Task Learning

1|2Indicates the priority used for review

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I do not want to participate in the reproducibility challenge.

Please indicate below if your study was a "resting state" or "task-activation" study.

Task-activation

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was this research conducted in the United States?

No

Was any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Was any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Functional MRI

For human MRI, what field strength scanner do you use?

3.0T

Which processing packages did you use for your study?

SPM

Provide references using APA citation style.

1. Carver, C. S., & Harmon-Jones, E. (2009). Anger is an approach-related affect: Evidence and implications. Psychological Bulletin, 135(2), 183.
2. Dodge, K. A. (1993). Social-cognitive mechanisms in the development of conduct disorder and depression. Annual Review of Psychology, 44, 559.
3. Glasser, M. F., Coalson, T. S., Robinson, E. C., Hacker, C. D., Harwell, J., Yacoub, E., ... & Van Essen, D. C. (2016). A multi-modal parcellation of human cerebral cortex. Nature, 536(7615), 171-178.
4. Lee, K. U., Kim, J., Yeon, B., Kim, S. H., & Chae, J. H. (2013). Development and standardization of extended ChaeLee Korean facial expressions of emotions. Psychiatry Investigation, 10(2), 155.
5. Liu, P., Bo, K., Ding, M., & Fang, R. (2024). Emergence of Emotion Selectivity in Deep Neural Networks Trained to Recognize Visual Objects. PLOS Computational Biology, 20(3), e1011943.
6. Rastegarnia, S., St-Laurent, M., DuPre, E., Pinsard, B., & Bellec, P. (2023). Brain decoding of the Human Connectome Project tasks in a dense individual fMRI dataset. NeuroImage, 283, 120395.
7. Taschereau-Dumouchel, V., Cortese, A., Chiba, T., Knotts, J. D., Kawato, M., & Lau, H. (2018). Towards an unconscious neural reinforcement intervention for common fears. Proceedings of the National Academy of Sciences, 115(13), 3470-3475.
8. Zhu, J., Wei, B., Tian, J., Jiang, F., & Yi, C. (2024). An Adaptively Weighted Averaging Method for Regional Time Series Extraction of fMRI-based Brain Decoding. IEEE Journal of Biomedical and Health Informatics.

Hyun-Chul Kim and Sang-Hee Kim are co-corresponding authors.

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2022-00166735 & No. RS-2023-00218987) and the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant RS-2023-00251002).

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries list provided.

No