Poster No:
1337
Submission Type:
Abstract Submission
Authors:
Chia-Chi Hsu1, Yi-Ping Chao1
Institutions:
1Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan City, Taiwan
First Author:
Chia-Chi Hsu
Department of Computer Science and Information Engineering, Chang Gung University
Taoyuan City, Taiwan
Co-Author:
Yi-Ping Chao
Department of Computer Science and Information Engineering, Chang Gung University
Taoyuan City, Taiwan
Introduction:
Emotions deeply shape human behavior and cognition, triggering distinct brain response patterns that reflect the complexity of emotional processing. Understanding the neural impact of emotions is crucial for advancing mental health research and human-computer interaction. With its high temporal resolution, electroencephalography (EEG) captures dynamic emotional changes, making it a vital tool for emotion recognition. This study uses the SEED dataset (SJTU Emotion EEG Dataset), in which emotional videos are categorized as positive, neutral, and negative. Preprocessing was conducted with MATLAB's EEGLAB, and the EEG Conformer model was employed for classification. Recognizing that valence levels fluctuate during films, we compared two feature strategies, normal movie segments and valence-specific segments, and evaluated their effects on emotion classification performance. This research aims to refine emotion classification models and provide deeper insights into EEG-based emotion analysis.
Methods:
EEG signals were preprocessed through bandpass filtering, artifact removal, and electrode re-referencing. To ensure data uniformity, three-minute segments were extracted from the four-minute videos and further divided into sixteen 30-second segments with a 20-second overlap (a 10-second step). Spectrogram analysis transformed these segments into time-frequency representations. Two feature sets were explored for emotion classification: normal EEG features and valence features, the latter defined by the F4/F3 alpha power ratio (positive ≥ 1.1, neutral 0.9–1.1, negative ≤ 0.9). These features were fed into the EEG Conformer model, which combines convolutional and transformer-based layers to capture both local and global temporal dependencies. Model performance was evaluated using two strategies: (1) 5-fold cross-validation for cross-subject generalizability, in which the 15 participants were split into five groups, and (2) within-subject evaluation, in which two of the three experimental sessions were used for training and the third for testing. Each strategy was scored in two ways: segments-based (accuracy at the 30-second segment level) and movies-based (segment predictions aggregated by majority voting over the full video). The sketches below illustrate these steps.
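As an illustration of the windowing and spectrogram step, the following Python sketch cuts one three-minute trial into the sixteen overlapping 30-second segments and converts them to time-frequency maps. The 200 Hz sampling rate, the 62-channel layout, and the use of SciPy's spectrogram are assumptions made for the example, not details taken from the abstract.

    # Minimal sketch of the segmentation and spectrogram step.
    # Assumed: preprocessed 62-channel EEG sampled at 200 Hz.
    import numpy as np
    from scipy.signal import spectrogram

    FS = 200                 # assumed sampling rate (Hz)
    WIN_S, STEP_S = 30, 10   # 30-s windows, 20-s overlap -> 10-s step

    def segment_trial(eeg):
        """Split a (channels, samples) trial into 30-s windows with 20-s overlap."""
        win, step = WIN_S * FS, STEP_S * FS
        n = (eeg.shape[1] - win) // step + 1   # 180-s trial -> 16 windows
        return np.stack([eeg[:, i * step : i * step + win] for i in range(n)])

    def to_spectrograms(segments):
        """Time-frequency representation of each segment (1-s FFT windows)."""
        f, t, S = spectrogram(segments, fs=FS, nperseg=FS, axis=-1)
        return f, t, S           # S: (n_segments, channels, freqs, times)

    trial = np.random.randn(62, 180 * FS)   # stand-in for one 3-min trial
    segs = segment_trial(trial)             # (16, 62, 6000)
    f, t, S = to_spectrograms(segs)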
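
The threshold-based valence labeling can be sketched in the same spirit. The 8-13 Hz alpha band, the Welch power estimate, and the F3/F4 channel indices are assumptions; only the ratio thresholds come from the abstract.

    # Hedged sketch of the F4/F3 alpha-power valence labels.
    import numpy as np
    from scipy.signal import welch

    FS = 200
    F3_IDX, F4_IDX = 3, 4      # placeholder indices; depend on the montage

    def alpha_power(x, fs=FS, band=(8.0, 13.0)):
        """Mean Welch PSD of one channel inside the (assumed) alpha band."""
        f, pxx = welch(x, fs=fs, nperseg=2 * fs)
        mask = (f >= band[0]) & (f <= band[1])
        return pxx[mask].mean()

    def valence_label(segment):
        """Label a (channels, samples) segment from the F4/F3 alpha ratio."""
        ratio = alpha_power(segment[F4_IDX]) / alpha_power(segment[F3_IDX])
        if ratio >= 1.1:
            return "positive"
        if ratio <= 0.9:
            return "negative"
        return "neutral"       # 0.9 < ratio < 1.1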
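
For orientation, a schematic of the convolution-then-transformer idea behind EEG Conformer is given below in PyTorch. This is a simplified stand-in, not the published implementation; all hyperparameters are illustrative.

    # Schematic EEG-Conformer-style model: conv feature extractor for local
    # patterns, transformer encoder for global temporal dependencies.
    import torch
    import torch.nn as nn

    class ConformerSketch(nn.Module):
        def __init__(self, n_channels=62, n_classes=3, d_model=40):
            super().__init__()
            self.embed = nn.Sequential(
                nn.Conv2d(1, d_model, kernel_size=(1, 25)),               # temporal conv
                nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1)), # spatial conv
                nn.BatchNorm2d(d_model),
                nn.ELU(),
                nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15)),        # downsample time
            )
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, n_classes)

        def forward(self, x):                 # x: (batch, channels, samples)
            z = self.embed(x.unsqueeze(1))    # -> (batch, d_model, 1, tokens)
            z = z.squeeze(2).transpose(1, 2)  # -> (batch, tokens, d_model)
            z = self.encoder(z)
            return self.head(z.mean(dim=1))   # pool tokens, classify 3 classes

    logits = ConformerSketch()(torch.randn(2, 62, 6000))  # -> shape (2, 3)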
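
Finally, the movies-based score aggregates a video's sixteen segment-level predictions by majority vote; only that rule comes from the abstract, and the helper names here are hypothetical.

    # Movies-based evaluation: majority vote over a video's segment predictions.
    from collections import Counter

    def movie_prediction(segment_preds):
        """Most frequent class among one video's segment-level predictions."""
        return Counter(segment_preds).most_common(1)[0][0]

    def movie_accuracy(preds_by_movie, true_labels):
        """Fraction of videos whose majority-vote label matches the truth."""
        votes = [movie_prediction(p) for p in preds_by_movie]
        return sum(v == y for v, y in zip(votes, true_labels)) / len(true_labels)

    # e.g. movie_prediction(["positive"] * 9 + ["neutral"] * 7) -> "positive"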

Results:
The results reveal a clear performance advantage for the movies-based approach over the segments-based approach under both training strategies, 5-fold cross-validation (cross-subject) and experiment split (within-subject). This suggests that full movies better preserve temporal emotional dynamics than fragmented 30-second segments do. Additionally, the higher accuracy observed in the within-subject setting underscores the challenge of inter-subject variability in EEG responses, while highlighting the greater consistency of individual emotional patterns over time. However, the lower performance of valence features compared with normal features suggests that valence alone inadequately captures the complexity of emotional states. Misclassifications likely stem from the limitations of the threshold-based method, which fails to reflect the nuanced nature of emotional responses. Without subjective feedback to validate the EEG-derived valence features, their alignment with participants' actual emotional experiences remains uncertain. These findings highlight the need for more robust and comprehensive feature extraction methods to enhance emotion classification.

Conclusions:
This study demonstrates the advantage of movie-level classification for EEG-based emotion recognition: aggregating segment predictions over full videos captures temporal dynamics effectively and improves performance, particularly within subjects. The contrast between cross-subject and within-subject evaluations highlights the challenge of inter-subject variability and the need for models with better generalization. The limited performance of valence features suggests that advancing emotion classification requires more comprehensive feature extraction and the incorporation of subjective feedback.
Modeling and Analysis Methods:
Classification and Predictive Modeling 2
EEG/MEG Modeling and Analysis 1
Novel Imaging Acquisition Methods:
EEG
Keywords:
Electroencephalography (EEG)
Emotions
Machine Learning
Other - Deep Learning
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.
Please indicate below if your study was a "resting state" or "task-activation" study.
Resting state
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Healthy subjects
Was this research conducted in the United States?
No
Were any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Not applicable
Were any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
EEG/ERP
Computational modeling
Which processing packages did you use for your study?
Other, Please list
EEGLAB
Provide references using APA citation style.
Bos, D. P. (2007). EEG-based emotion recognition: The influence of visual and auditory stimuli.
Duan, R.-N., Zhu, J.-Y., & Lu, B.-L. (2013). Differential entropy feature for EEG-based emotion classification. 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER). https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6695876&isnumber=6695843
Song, Y., Zheng, Q., Liu, B., & Gao, X. (2023). EEG Conformer: Convolutional transformer for EEG decoding and visualization. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 31, 710-719. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9991178&isnumber=10031624
Wang, J., & Wang, M. (2021). Review of the emotional feature extraction and classification using EEG signals. Cognitive Robotics, 1, 29-40. https://www.sciencedirect.com/science/article/pii/S2667241321000033
Zheng, W.-L., & Lu, B.-L. (2015). Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Transactions on Autonomous Mental Development, 7(3), 162-175. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7104132&isnumber=7317835
No