3260 papers • 126 benchmarks • 313 datasets
A Brain-Computer Interface (BCI), also known as a Brain-Machine Interface (BMI), is a technology that enables direct communication between the brain and an external device, such as a computer or a machine, without relying on muscular or peripheral nerve activity. By establishing this direct pathway, BCIs allow bidirectional communication between the brain and the device. BCIs typically work by detecting and interpreting brain signals, which are then translated into commands that control external devices or provide feedback to the user. These signals can be recorded in various ways, including electroencephalography (EEG), which measures the brain's electrical activity through electrodes placed on the scalp, or invasive techniques such as implanted electrodes.
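As a concrete illustration of that signal-to-command pipeline, here is a minimal sketch of an EEG-based decoder: band-pass filtering, simple band-power features, and a classifier whose output is mapped to a device command. The sampling rate, channel count, frequency band, classifier, and command names are assumptions for illustration, and the data are synthetic.

```python
# Minimal sketch of a generic EEG-based BCI decoding loop (illustrative only;
# the channel count, sampling rate, band limits and classifier are assumptions,
# not taken from any specific system described on this page).
import numpy as np
from scipy.signal import butter, lfilter
from sklearn.linear_model import LogisticRegression

FS = 250          # assumed sampling rate in Hz
N_CHANNELS = 8    # assumed number of scalp electrodes

def bandpass(eeg, low=8.0, high=30.0, fs=FS, order=4):
    """Band-pass filter each channel to the mu/beta band commonly used for motor imagery."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return lfilter(b, a, eeg, axis=-1)

def extract_features(epoch):
    """Log-variance of each channel: a simple, classic band-power feature."""
    return np.log(np.var(epoch, axis=-1) + 1e-12)

# Hypothetical calibration data: 40 epochs of shape (channels, samples), binary labels.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((40, N_CHANNELS, FS * 2))
y_train = rng.integers(0, 2, size=40)

clf = LogisticRegression().fit(
    np.stack([extract_features(bandpass(e)) for e in X_train]), y_train
)

# Online use: turn each incoming epoch into a device command.
new_epoch = rng.standard_normal((N_CHANNELS, FS * 2))
command = "move_left" if clf.predict([extract_features(bandpass(new_epoch))])[0] == 0 else "move_right"
```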
The proposed Siamese deep domain adaptation (SDDA) framework for cross-session MI classification, grounded in mathematical models from domain adaptation theory, can easily be applied to most existing artificial neural networks without altering the network structure, which gives the method great flexibility and transferability.
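The general mechanism behind this kind of Siamese domain-adaptation training, in which an unchanged backbone is trained with an added distribution-alignment loss, can be sketched as follows. The encoder, the linear-kernel MMD discrepancy, and the loss weighting are illustrative assumptions, not the SDDA specifics.

```python
# Sketch of Siamese/domain-adaptation training for cross-session EEG: the same
# encoder processes source- and target-session batches, and a discrepancy loss
# (here a simple linear-kernel MMD, an assumption, not necessarily the SDDA loss)
# aligns their feature distributions without changing the network structure.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(22 * 250, 64), nn.ReLU())  # any backbone
classifier = nn.Linear(64, 2)

def mmd_linear(f_src, f_tgt):
    """Linear-kernel maximum mean discrepancy between two feature batches."""
    delta = f_src.mean(dim=0) - f_tgt.mean(dim=0)
    return delta.dot(delta)

x_src, y_src = torch.randn(16, 22, 250), torch.randint(0, 2, (16,))  # session 1, labeled
x_tgt = torch.randn(16, 22, 250)                                     # session 2, unlabeled

f_src, f_tgt = encoder(x_src), encoder(x_tgt)
loss = nn.functional.cross_entropy(classifier(f_src), y_src) + 0.5 * mmd_linear(f_src, f_tgt)
loss.backward()   # the backbone is unchanged; only the training loss is augmented
```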
A novel EEG decoding method that relies mainly on the attention mechanism, has good potential to improve the practicality of brain-computer interfaces (BCIs), and is presented as the first detailed and complete method based on the transformer idea.
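A rough sketch of how transformer-style attention can be applied to EEG epochs is shown below; the patching scheme, layer sizes, and classification head are illustrative assumptions, not the paper's architecture.

```python
# Toy transformer-style decoder over temporal patches of a multichannel EEG epoch.
import torch
import torch.nn as nn

class ToyEEGTransformer(nn.Module):
    def __init__(self, n_channels=22, patch_len=25, d_model=64, n_classes=4):
        super().__init__()
        # Each non-overlapping temporal patch across all channels becomes one token.
        self.embed = nn.Linear(n_channels * patch_len, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)
        self.patch_len = patch_len

    def forward(self, x):                      # x: (batch, channels, samples)
        b, c, t = x.shape
        x = x[..., : t - t % self.patch_len]   # drop the ragged tail
        x = x.reshape(b, c, -1, self.patch_len)                        # (b, c, n_patches, patch_len)
        x = x.permute(0, 2, 1, 3).reshape(b, -1, c * self.patch_len)   # token sequence
        z = self.encoder(self.embed(x))        # self-attention over time patches
        return self.head(z.mean(dim=1))        # average-pool tokens, then classify

logits = ToyEEGTransformer()(torch.randn(8, 22, 1000))   # dummy 4 s epochs at 250 Hz
```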
The proposed online algorithm is evaluated against state-of-the-art SSVEP methods based on Canonical Correlation Analysis (CCA) and is shown to improve both classification accuracy and information transfer rate in an online, asynchronous setup.
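The CCA baseline family referred to above can be sketched in a few lines: the EEG epoch is correlated against sine/cosine reference templates at each candidate stimulus frequency, and the best-correlated frequency is taken as the target. The sampling rate, flicker frequencies, harmonics, and epoch length below are assumptions.

```python
# Standard CCA-based SSVEP target identification (illustrative parameters).
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]   # assumed flicker frequencies (Hz)
N_HARMONICS = 2

def reference_signals(freq, n_samples, fs=FS, n_harmonics=N_HARMONICS):
    """Sine/cosine templates at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(refs, axis=1)             # (n_samples, 2 * n_harmonics)

def cca_classify(epoch):
    """Pick the stimulus whose reference signals correlate best with the EEG epoch."""
    epoch = epoch.T                            # (n_samples, n_channels)
    scores = []
    for f in STIM_FREQS:
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(epoch, reference_signals(f, epoch.shape[0]))
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return STIM_FREQS[int(np.argmax(scores))]

predicted = cca_classify(np.random.randn(8, FS * 2))   # 8-channel, 2 s epoch
```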
A novel deep neural network based learning framework is proposed that offers insight into the relationship between MI-EEG data and brain activity, and it outperforms a series of baselines as well as competitive state-of-the-art methods.
This work focuses on the well-known common spatial pattern (CSP) and Riemannian covariance methods and significantly extends these two feature extractors to multiscale temporal and spectral cases.
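For reference, a bare-bones version of the classical two-class CSP feature extractor mentioned above looks like the sketch below; the multiscale temporal/spectral extension and the Riemannian variant are not reproduced, and the data here are synthetic.

```python
# Bare-bones common spatial patterns (CSP) for two-class motor imagery.
import numpy as np
from scipy.linalg import eigh

def csp_filters(epochs_a, epochs_b, n_filters=6):
    """Spatial filters maximizing variance for class A relative to class B.

    epochs_*: arrays of shape (n_epochs, n_channels, n_samples).
    """
    cov = lambda E: np.mean([e @ e.T / np.trace(e @ e.T) for e in E], axis=0)
    Ca, Cb = cov(epochs_a), cov(epochs_b)
    # Generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    pick = np.concatenate([order[: n_filters // 2], order[-n_filters // 2:]])
    return vecs[:, pick].T                     # (n_filters, n_channels)

def csp_features(epoch, W):
    """Normalized log-variance of the CSP-projected signals."""
    proj = W @ epoch
    var = np.var(proj, axis=1)
    return np.log(var / var.sum())

rng = np.random.default_rng(0)
W = csp_filters(rng.standard_normal((30, 22, 500)), rng.standard_normal((30, 22, 500)))
feats = csp_features(rng.standard_normal((22, 500)), W)
```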
BEATS collects 32-channel EEG signals at a guaranteed sampling rate of 4 kHz with wireless transmission, achieving a higher sampling rate than state-of-the-art systems used in many EEG fields, and it can be quickly reproduced.
A dataset of physiological signals collected from an experiment on auditory attention to natural speech is presented; four different predictive tasks involving the dataset are formulated, and a feature extraction framework is developed.
Combining colored inverted-face stimulation with convolutional neural network classification in the hard setting of dry electrodes and fast-flashing single-trial ERP-based BCI demonstrates the approach's potential to improve the practicality of ERP-based BCIs.
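As a generic stand-in for such a CNN-based single-trial ERP classifier (the paper's actual architecture is not given here), a tiny 1-D convolutional network might look like this; the channel count, epoch length, and layer sizes are assumptions.

```python
# Tiny 1-D convolutional classifier for single-trial ERP epochs (illustrative).
import torch
import torch.nn as nn

erp_cnn = nn.Sequential(
    nn.Conv1d(in_channels=8, out_channels=16, kernel_size=15, padding=7),  # temporal filters
    nn.BatchNorm1d(16),
    nn.ELU(),
    nn.AvgPool1d(4),
    nn.Flatten(),
    nn.Linear(16 * (200 // 4), 2),   # target vs. non-target flash
)

epochs = torch.randn(32, 8, 200)     # 32 single trials, 8 dry electrodes, 0.8 s at 250 Hz
logits = erp_cnn(epochs)
```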
The Brain-Computer Interface system developed for the BCI discipline of the Cybathlon 2020 competition, using the range40 method combined with an ensemble SVM classifier, reached the highest accuracy (0.4607) on a 4-class classification task and significantly outperformed the state-of-the-art EEGNet.
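An ensemble of SVM classifiers for a 4-class problem of this kind can be assembled in a few lines with scikit-learn, as sketched below; the features are synthetic placeholders, since the range40 extraction step is specific to the paper.

```python
# Illustrative ensemble of SVM classifiers for a 4-class EEG problem; the
# "range40" feature extraction from the paper is replaced by random placeholders.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))         # hypothetical per-epoch feature vectors
y = rng.integers(0, 4, size=200)           # 4 classes, as in the Cybathlon task

ensemble = make_pipeline(
    StandardScaler(),
    BaggingClassifier(estimator=SVC(kernel="rbf", C=1.0), n_estimators=10, random_state=0),
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```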