Poster No:
1328
Submission Type:
Abstract Submission
Authors:
Jaehyun Jeon1, Seungwoo Jeong1, Heung-Il Suk1
Institutions:
1Korea University, Seoul, Republic of Korea
First Author:
Co-Author(s):
Introduction:
Recently, EEG foundation models (EFMs), which capture general representations of EEG data, have emerged as the standard approach for processing EEG signals (Yang, 2024). However, the high inter-subject variability inherent in EEG signals (Ko, 2022) often leads to poor performance when models trained on one subject are applied to others. Consequently, fine-tuning is required to adapt EFMs to individual subjects, yet this necessity introduces challenges such as reduced generalization capability and significant computational costs (Jiang, 2024). Parameter-efficient fine-tuning (PEFT) methods, which adapt foundation models to specific tasks by updating only a small fraction of the model's total parameters, have proven effective in vision and natural language processing (NLP) domains. However, their efficacy in the context of EEG foundation models remains insufficiently validated. Therefore, in our study, we aim to demonstrate the effectiveness of PEFT methods in fine-tuning EFMs.
Methods:
In this study, our method involves two main stages: 1) pre-training an EFM and 2) applying various PEFT methods to downstream tasks, specifically resting-state recognition and workload classification. In the first stage, we used the released pre-trained parameters for LaBraM. For BIOT, however, since its released pre-trained weights were trained on a limited-domain dataset, we trained it from scratch using a subset of the dataset originally used to pre-train LaBraM. In the second stage, only a subset of the EFM's parameters is fine-tuned for the downstream tasks. Figure 1 illustrates the three PEFT approaches analyzed in this study: 1) fine-tuning learnable additional modules attached to the existing layers of the EFM, such as SeriesAdapter (Houlsby, 2019) and LoRA (Hu, 2021); 2) Prefix-Tuning, which concatenates learnable prefix vectors to the keys and values within the attention layers (Li, 2021); and 3) hybrid approaches that combine 1) and 2), including AdaptFormer (Chen, 2022) and UniPELT (Mao, 2021).

Figure 1. (a) Tuning with learnable additional modules. (b) Using learnable prefix vectors. (c) Hybrid approach.
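For clarity, the sketch below illustrates the three PEFT families in a generic PyTorch transformer setting. The module names, hyper-parameters (bottleneck size, LoRA rank, prefix length), and the name-based freezing heuristic are illustrative assumptions, not the exact implementations applied to LaBraM and BIOT.

```python
# Minimal PyTorch sketch of the three PEFT families in Figure 1 (illustrative only).
import torch
import torch.nn as nn


class SeriesAdapter(nn.Module):
    """(a) Bottleneck adapter inserted after an existing layer (Houlsby, 2019)."""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.adapter_down = nn.Linear(dim, bottleneck)
        self.adapter_up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual form: the frozen backbone's output passes through unchanged at init.
        return h + self.adapter_up(self.act(self.adapter_down(h)))


class LoRALinear(nn.Module):
    """(a) Low-rank update W x + (alpha / r) * B A x around a frozen linear layer (Hu, 2021)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


class PrefixKV(nn.Module):
    """(b) Learnable prefix vectors concatenated to keys and values in attention (Li, 2021)."""
    def __init__(self, n_prefix: int, n_heads: int, head_dim: int):
        super().__init__()
        self.prefix_k = nn.Parameter(torch.randn(n_prefix, n_heads, head_dim) * 0.01)
        self.prefix_v = nn.Parameter(torch.randn(n_prefix, n_heads, head_dim) * 0.01)

    def forward(self, k: torch.Tensor, v: torch.Tensor):
        # k, v: (batch, seq_len, n_heads, head_dim); prefixes are broadcast over the batch.
        b = k.size(0)
        pk = self.prefix_k.unsqueeze(0).expand(b, -1, -1, -1)
        pv = self.prefix_v.unsqueeze(0).expand(b, -1, -1, -1)
        return torch.cat([pk, k], dim=1), torch.cat([pv, v], dim=1)


# (c) Hybrid methods such as AdaptFormer and UniPELT combine modules like the ones above.

def freeze_backbone_except_peft(model: nn.Module) -> float:
    """Freeze every EFM parameter except PEFT modules; return the trainable-parameter ratio."""
    for name, p in model.named_parameters():
        p.requires_grad = any(tag in name for tag in ("adapter", "lora", "prefix"))
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable / total
```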
Results:
We analyze whether PEFT methods function effectively by conducting experiments on two prominent EFMs, LaBraM (Jiang, 2024) and BIOT (Yang, 2024). All PEFT methods were evaluated under three settings, with trainable-parameter ratios determined based on LoRA with r = 4, 8, and 16 (Hu, 2021). For this evaluation, we used two public datasets: Crowdsourced (Williams, 2023) and STEW (Lim, 2018).
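As a rough illustration of how such settings can be derived, the snippet below estimates the trainable-parameter budget implied by a given LoRA rank, which the other PEFT methods are then configured to approximately match; the hidden size and layer count are placeholders, not the actual LaBraM/BIOT configurations.

```python
# Hypothetical sketch: map a LoRA rank to a trainable-parameter budget.
# hidden_dim and n_adapted_linears are placeholders, not LaBraM / BIOT values.
def lora_budget(hidden_dim: int, n_adapted_linears: int, r: int) -> int:
    # Each adapted linear layer adds A (r x d) and B (d x r): 2 * r * d extra parameters.
    return 2 * r * hidden_dim * n_adapted_linears


for r in (4, 8, 16):
    budget = lora_budget(hidden_dim=200, n_adapted_linears=24, r=r)
    print(f"r={r}: ~{budget:,} trainable parameters to match")
```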
Figure 2 presents the experimental results of applying the PEFT methods. Compared to linear probing, most PEFT methods exhibit superior performance on the EFMs. Prefix-Tuning demonstrates the best performance across both datasets, whereas for BIOT, SeriesAdapter performs best. Considering both EFMs, the hybrid PEFT methods consistently achieve high performance at a 9% parameter ratio. However, the performance gap between existing PEFT methods and full fine-tuning remains an issue to be addressed in future work. Additionally, the performance of LoRA and AdaptFormer varies considerably with the parameter ratio, which also needs to be addressed to ensure stability.

Figure 2. Performance comparison of PEFT methods across different EFMs and datasets: (A) LaBraM with Crowdsourced, (B) LaBraM with STEW, (C) BIOT with Crowdsourced, and (D) BIOT with STEW.
Conclusions:
To address the weakened generalization and inefficiency that arise when fully fine-tuning EFMs for downstream tasks, we demonstrate the effectiveness of PEFT methods on EFMs. Through extensive experiments, we validated their applicability and efficiency even when only a small set of parameters is updated. However, we found that no existing PEFT method consistently delivers strong performance across settings and datasets. Furthermore, we identified performance gaps between full fine-tuning and PEFT, as well as performance variability across parameter ratios. Based on these findings, we believe future research on PEFT for EFMs should be directed at addressing these issues.
Modeling and Analysis Methods:
Classification and Predictive Modeling 2
EEG/MEG Modeling and Analysis 1
Keywords:
Electroencephalography (EEG)
Modeling
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio, print, and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I do not want to participate in the reproducibility challenge.
Please indicate below if your study was a "resting state" or "task-activation" study.
Other
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Healthy subjects
Was this research conducted in the United States?
No
Were any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
Not applicable
Were any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
Not applicable
Please indicate which methods were used in your research:
EEG/ERP
Provide references using APA citation style.
[1] Chen, S., et al. (2022). AdaptFormer: Adapting vision transformers for scalable visual recognition. Advances in Neural Information Processing Systems, 35, 16664–16678.
[2] Houlsby, N., et al. (2019). Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning (pp. 2790–2799). PMLR.
[3] Hu, E. J., et al. (2021). LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
[4] Jiang, W.-B., et al. (2024). Large brain model for learning generic representations with tremendous EEG data in BCI. arXiv preprint arXiv:2405.18765.
[5] Ko, W., et al. (2022). Semi-supervised generative and discriminative adversarial learning for motor imagery-based brain-computer interface. Scientific Reports, 12(1), 4587.
[6] Li, X. L., et al. (2021). Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190.
[7] Lim, W. L., et al. (2018). STEW: Simultaneous task EEG workload dataset. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 26(11), 2106–2114.
[8] Mao, Y., et al. (2021). UniPELT: A unified framework for parameter-efficient language model tuning. arXiv preprint arXiv:2110.07577.
[9] Williams, N. S., et al. (2023). Crowdsourced EEG experiments: A proof of concept for remote EEG acquisition using EmotivPRO Builder and EmotivLABS. Heliyon, 9(8).
[10] Yang, C., et al. (2024). BIOT: Biosignal transformer for cross-data learning in the wild. Advances in Neural Information Processing Systems, 36.