3260 papers • 126 benchmarks • 313 datasets
A Brain-Computer Interface (BCI), also known as a Brain-Machine Interface (BMI), is a technology that enables direct, bidirectional communication between the brain and an external device, such as a computer or a machine, without relying on any muscular or peripheral nerve activity. BCIs work by detecting and interpreting brain signals, which are then translated into commands that control external devices or into feedback for the user. These signals can be acquired through various methods, including electroencephalography (EEG), which measures the brain's electrical activity through electrodes placed on the scalp, or invasive techniques such as implanted electrodes.
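To make that signal-to-command loop concrete, here is a minimal, self-contained sketch of one decoding step: band-pass filter a windowed EEG buffer, estimate band power, and emit a discrete command. The sampling rate, channel layout, and threshold rule are illustrative assumptions, not any particular system's design.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 250  # assumed sampling rate in Hz

def bandpass(x, lo, hi, fs=FS, order=4):
    """Butterworth band-pass filter applied along the time axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, x, axis=-1)

def decode_window(eeg):
    """Map one (channels, samples) EEG window to a toy command.

    Hypothetical rule: compare mu-band (8-12 Hz) power over left- vs
    right-hemisphere channels, as in simple motor-imagery demos.
    Real systems replace this with a trained classifier.
    """
    mu = bandpass(eeg, 8.0, 12.0)
    power = mu.var(axis=-1)          # band power per channel
    n = len(power) // 2              # assume first half = left hemisphere
    left, right = power[:n].mean(), power[n:].mean()
    # Event-related desynchronization: lower mu power = more motor activity,
    # and motor control is contralateral.
    return "RIGHT_HAND" if left < right else "LEFT_HAND"

# Synthetic 8-channel, 2-second window standing in for real EEG.
rng = np.random.default_rng(0)
print(decode_window(rng.standard_normal((8, 2 * FS))))
```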
A novel EEG decoding method built mainly on the attention mechanism, presented as the first detailed and complete method based on the transformer architecture for this task, with good potential to improve the practicality of brain-computer interfaces (BCIs).
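The core ingredient of such methods, self-attention over EEG time steps, can be sketched with standard PyTorch modules. The shapes and token scheme below are illustrative assumptions, not the paper's configuration.

```python
import torch
from torch import nn

# Treat each EEG time step (or learned patch) as a token of shape
# (batch, tokens, dim); the sizes here are illustrative assumptions.
embed_dim, n_heads = 64, 4
attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

x = torch.randn(8, 250, embed_dim)   # 8 trials, 250 time tokens each
out, weights = attn(x, x, x)         # self-attention over the time axis
print(out.shape, weights.shape)      # (8, 250, 64) and (8, 250, 250)
```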
A Siamese deep domain adaptation (SDDA) framework for cross-session motor imagery (MI) classification, grounded in domain adaptation theory, that can be applied to most existing artificial neural networks without altering the network structure, giving the method great flexibility and transferability.
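Frameworks in this family typically minimize a distribution-discrepancy loss between source- and target-session features alongside the classification loss. As a generic illustration of that idea (not the exact SDDA objective), here is a biased RBF-kernel maximum mean discrepancy (MMD) estimator in PyTorch; the feature sizes are made up.

```python
import torch

def rbf_kernel(x, y, gamma=1.0):
    """RBF kernel matrix k(x_i, y_j) = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = torch.cdist(x, y).pow(2)
    return torch.exp(-gamma * d2)

def mmd_loss(src, tgt, gamma=1.0):
    """Biased estimate of squared MMD between two feature batches."""
    k_ss = rbf_kernel(src, src, gamma).mean()
    k_tt = rbf_kernel(tgt, tgt, gamma).mean()
    k_st = rbf_kernel(src, tgt, gamma).mean()
    return k_ss + k_tt - 2 * k_st

# Features from a shared encoder applied to two MI sessions (synthetic here).
src_feat = torch.randn(64, 128)        # session 1 embeddings
tgt_feat = torch.randn(64, 128) + 0.5  # session 2, shifted distribution
print(mmd_loss(src_feat, tgt_feat).item())
```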
An online algorithm that is evaluated against state-of-the-art SSVEP methods based on Canonical Correlation Analysis (CCA) and shown to improve both classification accuracy and information transfer rate in an online, asynchronous setup.
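For reference, the CCA baseline those comparisons are made against works by correlating multichannel EEG with sinusoidal templates at each candidate stimulus frequency and choosing the best-matching frequency. The sketch below shows that baseline; the frequencies, harmonic count, and window length are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                    # assumed sampling rate (Hz)
FREQS = [8.0, 10.0, 12.0]   # candidate SSVEP stimulus frequencies
N_HARMONICS = 2

def reference_signals(freq, n_samples, fs=FS, n_harm=N_HARMONICS):
    """Sin/cos templates at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harm + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(refs, axis=1)            # (samples, 2 * n_harm)

def cca_score(eeg, refs):
    """Largest canonical correlation between EEG and the templates."""
    x_s, y_s = CCA(n_components=1).fit_transform(eeg, refs)
    return np.corrcoef(x_s[:, 0], y_s[:, 0])[0, 1]

def classify(eeg):
    """Pick the candidate frequency with the highest CCA score."""
    scores = [cca_score(eeg, reference_signals(f, eeg.shape[0])) for f in FREQS]
    return FREQS[int(np.argmax(scores))]

# Synthetic 1-second, 8-channel window dominated by a 10 Hz response.
t = np.arange(FS) / FS
rng = np.random.default_rng(1)
eeg = rng.standard_normal((FS, 8)) + np.sin(2 * np.pi * 10 * t)[:, None]
print(classify(eeg))
```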
This work focuses on the well-known common spatial pattern (CSP) and Riemannian covariance methods, and significantly extends these two feature extractors to multiscale temporal and spectral cases.
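For context, a minimal single-scale CSP implementation looks like the sketch below: spatial filters come from a generalized eigendecomposition of the two class covariance matrices, and the log-variance of the filtered signals serves as features. This is the classic baseline, not the multiscale extension the paper proposes.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial patterns from two classes of (trial, ch, time) EEG.

    Solves the generalized eigenproblem C_a w = lambda (C_a + C_b) w and
    keeps the eigenvectors at both ends of the spectrum, which maximize
    variance for one class while minimizing it for the other.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    c_a, c_b = mean_cov(trials_a), mean_cov(trials_b)
    _, eigvecs = eigh(c_a, c_a + c_b)        # ascending eigenvalues
    idx = np.concatenate([np.arange(n_pairs), np.arange(-n_pairs, 0)])
    return eigvecs[:, idx].T                 # (2 * n_pairs, channels)

def csp_features(trial, filters):
    """Log-variance of the spatially filtered trial, the usual CSP feature."""
    var = (filters @ trial).var(axis=1)
    return np.log(var / var.sum())

# Synthetic two-class data: 20 trials, 8 channels, 500 samples each.
rng = np.random.default_rng(2)
class_a = rng.standard_normal((20, 8, 500))
class_b = rng.standard_normal((20, 8, 500)) * np.linspace(0.5, 2, 8)[None, :, None]
W = csp_filters(class_a, class_b)
print(csp_features(class_a[0], W))
```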
A novel deep neural network based learning framework that offers insight into the relationship between MI-EEG data and brain activity, and outperforms a series of baselines as well as competitive state-of-the-art methods.
A dataset of physiological signals collected from an experiment on auditory attention to natural speech is presented, four different predictive tasks on the dataset are formulated, and a feature extraction framework is developed.
BEATS collects 32-channel EEG signals at a guaranteed sampling rate of 4 kHz with wireless transmission, a higher sampling rate than state-of-the-art systems used in many EEG fields, and its design allows it to be quickly reproduced.
Combining colored inverted-face stimulation with convolutional neural network classification, under the hard conditions of dry electrodes and fast-flashing single-trial ERP-based BCI, demonstrates the approach's potential to improve the practicality of ERP-based BCIs.
A Brain-Computer Interface system developed for the BCI discipline of the Cybathlon 2020 competition, in which the range40 method combined with an ensemble SVM classifier reached the highest accuracy (0.4607) on a 4-class classification task and outperformed the state-of-the-art EEGNet.
The brain-computer interface (BCI) is a cutting-edge technology that has the potential to change the world. Electroencephalogram (EEG) motor imagery (MI) signals have been used extensively in many BCI applications to assist disabled people, control devices or environments, and even augment human capabilities. However, the limited performance of brain signal decoding is restricting the broad growth of the BCI industry. In this article, we propose an attention-based temporal convolutional network (ATCNet) for EEG-based motor imagery classification. The ATCNet model utilizes multiple techniques to boost the performance of MI classification with a relatively small number of parameters: scientific machine learning to design a domain-specific deep learning model with interpretable and explainable features, multi-head self-attention to highlight the most valuable features in MI-EEG data, a temporal convolutional network to extract high-level temporal features, and a convolution-based sliding window to augment the MI-EEG data efficiently. The proposed model outperforms the current state-of-the-art techniques on the BCI Competition IV-2a dataset with accuracies of 85.38% and 70.97% in the subject-dependent and subject-independent modes, respectively.
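Of the components listed, the sliding-window augmentation is the easiest to illustrate in isolation: each MI trial is cut into overlapping crops that inherit the trial's label, enlarging the effective training set. The window and stride sizes below are illustrative, not ATCNet's published configuration.

```python
import numpy as np

def sliding_windows(trial, label, win_len, stride):
    """Cut one (channels, samples) trial into overlapping labeled crops.

    Each crop inherits the trial label; at inference, crop-level
    predictions can be averaged back into a single trial prediction.
    """
    n_samples = trial.shape[-1]
    crops, labels = [], []
    for start in range(0, n_samples - win_len + 1, stride):
        crops.append(trial[:, start:start + win_len])
        labels.append(label)
    return np.stack(crops), np.array(labels)

# One synthetic 22-channel, 1000-sample MI trial (illustrative sizes).
rng = np.random.default_rng(3)
trial = rng.standard_normal((22, 1000))
crops, labels = sliding_windows(trial, label=2, win_len=750, stride=50)
print(crops.shape, labels.shape)   # (6, 22, 750) (6,)
```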