3260 papers • 126 benchmarks • 313 datasets
Emotion Recognition using EEG signals
(Image credit: Papersgraph)
These leaderboards are used to track progress in EEG-based emotion recognition.
Use these libraries to find EEG-based emotion recognition models and implementations.
Electroencephalogram (EEG)-based emotion recognition is promising yet limited by the need for a large amount of training data. Collecting enough labeled samples in the training trials is the key to generalization on the test trials, but this process is time-consuming and laborious. In recent years, several studies have proposed semi-supervised learning (e.g., active learning) and transfer learning (e.g., domain adaptation, style transfer mapping) methods to reduce the amount of training data required. However, most of them are iterative methods, which need considerable training time and are infeasible in practice. To tackle this problem, we present Fast Online Instance Transfer (FOIT) for improved affective brain-computer interfaces (aBCIs). FOIT heuristically selects auxiliary data from historical sessions and/or other subjects, which are then combined with the training data for supervised training. Predictions on the test trials are made by an ensemble classifier. As a one-shot algorithm, FOIT avoids time-consuming iterations. Experimental results show that FOIT brings significant accuracy improvements for three-category classification (1%-8%) on the SEED dataset and four-category classification (1%-14%) on the SEED-IV dataset in the cross-subject, cross-session, and cross-all scenarios. The time cost over the baselines is moderate (~35 s on average on our machine), whereas the iterative methods require much more time (~45 s-~900 s) to achieve comparable accuracies. FOIT provides a simple, fast, and practically feasible solution to improve the generalization of aBCIs and allows free choice of classifiers without constraints. Our code is available online.
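The one-shot transfer idea in the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's exact method: the centroid-distance selection heuristic, the two-member ensemble, and all data here are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for EEG feature vectors and 3-class emotion labels.
X_train = rng.normal(0.0, 1.0, (60, 16))
y_train = rng.integers(0, 3, 60)
X_aux = rng.normal(0.2, 1.0, (300, 16))   # historical / other-subject data
y_aux = rng.integers(0, 3, 300)
X_test = rng.normal(0.0, 1.0, (40, 16))

def select_auxiliary(X_tr, X_ax, y_ax, k=100):
    """One-shot heuristic: keep the k auxiliary samples closest to the
    training-set centroid (an illustrative stand-in for FOIT's rule)."""
    centroid = X_tr.mean(axis=0)
    dist = np.linalg.norm(X_ax - centroid, axis=1)
    idx = np.argsort(dist)[:k]
    return X_ax[idx], y_ax[idx]

X_sel, y_sel = select_auxiliary(X_train, X_aux, y_aux)

# Ensemble: one classifier on the training data alone, one on the
# combined set; test predictions are made by averaging probabilities.
clf_a = LogisticRegression(max_iter=1000).fit(X_train, y_train)
clf_b = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_train, X_sel]), np.concatenate([y_train, y_sel]))
proba = (clf_a.predict_proba(X_test) + clf_b.predict_proba(X_test)) / 2
y_pred = proba.argmax(axis=1)
```

Because there is no iterative refinement, the only extra cost over plain supervised training is one distance computation and one additional classifier fit, which matches the abstract's point about moderate time cost.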
A regularized graph neural network for EEG-based emotion recognition that exploits the biological topology among different brain regions to capture both local and global relations among EEG channels. Ablation studies show that the proposed adjacency matrix and two regularizers contribute consistent and significant gains to the model's performance.
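The core mechanism described above, propagating per-channel EEG features over a topology-derived adjacency matrix, can be sketched as a single graph-convolution step. The electrode coordinates, the Gaussian distance kernel, and the untrained weights here are illustrative assumptions; this is a generic normalized graph convolution, not the paper's specific architecture or regularizers.

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_feats = 6, 4

# Toy 2-D electrode coordinates; adjacency weights decay with distance,
# a simple stand-in for a biologically motivated brain-region topology.
coords = rng.normal(0.0, 1.0, (n_channels, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
A = np.exp(-dist ** 2)          # dense symmetric adjacency, self-loops = 1

# Symmetric normalization D^{-1/2} A D^{-1/2}, as in standard GCN layers.
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_norm = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

X = rng.normal(size=(n_channels, n_feats))  # per-channel EEG features
W = rng.normal(size=(n_feats, 3))           # layer weights (untrained)
H = np.maximum(A_norm @ X @ W, 0.0)         # one graph-conv layer + ReLU
```

Each row of `H` mixes a channel's own features with those of nearby channels (local relations), while the dense adjacency also lets distant regions exchange information (global relations).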
The multi-source marginal distribution adaptation (MS-MDA) method for EEG emotion recognition takes both domain-invariant and domain-specific features into consideration. It outperforms the comparison methods and state-of-the-art models in cross-session and cross-subject transfer scenarios in the authors' settings.
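The multi-source idea can be illustrated with a deliberately crude sketch: align each source subject's marginal feature statistics to the target's, train one branch per source, and average the branch predictions. The mean/std matching and per-source classifiers here are stand-ins chosen for brevity, not MS-MDA's actual deep adaptation networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Three source subjects with shifted feature statistics (a toy stand-in
# for cross-subject EEG variability), plus one unlabeled target subject.
sources = []
for shift in (0.5, -0.3, 1.0):
    X = rng.normal(shift, 1.0 + abs(shift), (80, 8))
    y = rng.integers(0, 4, 80)  # four emotion categories
    sources.append((X, y))
X_target = rng.normal(0.0, 1.0, (50, 8))

def align_marginal(X_src, X_tgt):
    """Illustrative marginal alignment: match each source's per-feature
    mean/std to the target's (a crude stand-in for MS-MDA's adaptation)."""
    z = (X_src - X_src.mean(axis=0)) / (X_src.std(axis=0) + 1e-8)
    return z * X_tgt.std(axis=0) + X_tgt.mean(axis=0)

# One domain-specific branch per source; predictions are averaged.
probas = []
for X_src, y_src in sources:
    clf = LogisticRegression(max_iter=1000)
    clf.fit(align_marginal(X_src, X_target), y_src)
    probas.append(clf.predict_proba(X_target))
y_pred = np.mean(probas, axis=0).argmax(axis=1)
```

Keeping one branch per source preserves domain-specific structure, while the shared alignment step plays the role of the domain-invariant component.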
This study is the first to bridge previous neuroscience and ASD research findings to feature-relevance calculation for CNN-based EEG emotion recognition in typically developing (TD) and ASD individuals.
Experiments on SEED, SEED-IV, and MPED datasets show that the proposed GMSS has remarkable advantages in learning more discriminative and general features for EEG emotional signals.