3260 papers • 126 benchmarks • 313 datasets
Emotion Recognition from facial images
These leaderboards are used to track progress in Facial Emotion Recognition.
Use these libraries to find Facial Emotion Recognition models and implementations.
No subtasks available.
A novel pipeline strategy is introduced in which the newly added dense layer(s) are trained first and each pre-trained DCNN block is then fine-tuned successively, gradually raising FER accuracy to a higher level.
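A minimal sketch of that block-wise fine-tuning schedule, assuming torchvision's VGG-16 as the pre-trained DCNN; the 7-class head, the learning rates, and the commented-out train_one_epoch() helper are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 7)          # new dense layer for 7 emotion classes

# Stage 0: freeze all convolutional weights, train only the dense head.
for p in model.features.parameters():
    p.requires_grad = False

# VGG-16 convolutional blocks, listed from the deepest block to the shallowest.
blocks = [model.features[24:], model.features[17:24],
          model.features[10:17], model.features[5:10], model.features[:5]]

def trainable_params(m):
    return [p for p in m.parameters() if p.requires_grad]

# optimizer = torch.optim.Adam(trainable_params(model), lr=1e-3)
# train_one_epoch(model, optimizer)               # hypothetical training helper

# Stages 1..5: unfreeze one pre-trained block at a time and continue training.
for block in blocks:
    for p in block.parameters():
        p.requires_grad = True
    optimizer = torch.optim.Adam(trainable_params(model), lr=1e-4)
    # train_one_epoch(model, optimizer)
```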
This work adopts the VGGNet architecture, rigorously fine-tunes its hyperparameters, and experiments with various optimization methods to achieve the highest single-network classification accuracy on the FER2013 dataset.
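A hedged sketch of the optimizer comparison: the same VGG-style network is rebuilt and trained under several optimizers and the best validation score would be kept. The small VGG variant for 48x48 grayscale FER2013 inputs and the commented-out train/evaluate calls are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def make_vgg_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
        nn.MaxPool2d(2))

def make_model():
    # 48x48 grayscale input -> 3x3 spatial map after four pooling stages
    return nn.Sequential(
        make_vgg_block(1, 64), make_vgg_block(64, 128),
        make_vgg_block(128, 256), make_vgg_block(256, 512),
        nn.Flatten(), nn.Linear(512 * 3 * 3, 256), nn.ReLU(),
        nn.Dropout(0.5), nn.Linear(256, 7))        # 7 FER2013 emotion classes

optimizers = {
    "sgd_nesterov": lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9, nesterov=True),
    "adam":         lambda p: torch.optim.Adam(p, lr=1e-3),
    "rmsprop":      lambda p: torch.optim.RMSprop(p, lr=1e-3),
}

for name, make_opt in optimizers.items():
    model = make_model()
    opt = make_opt(model.parameters())
    # train(model, opt); acc = evaluate(model)     # training/eval loop assumed
    print(name, "configured:", sum(p.numel() for p in model.parameters()), "params")
```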
This work introduces the new problem of facial emotion recognition with noisy multi-task annotations and formulates it as a joint distribution matching problem, aiming to learn more reliable correlations between raw facial images and multi-task labels and thereby reduce the influence of label noise.
This paper presents a method of optimizing the hyperparameters of a convolutional neural network in order to increase accuracy in the context of facial emotion recognition. The optimal hyperparameters were determined by generating and training models with the Random Search algorithm applied to a search space defined by discrete hyperparameter values. The best resulting model was trained and evaluated on the FER2013 database, obtaining an accuracy of 72.16%.
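A minimal sketch of random search over a discrete hyperparameter space in the spirit of the approach above; the specific values, the build_cnn() factory, and train_and_score() are illustrative assumptions rather than the paper's search space.

```python
import random

search_space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size":    [32, 64, 128],
    "num_filters":   [32, 64, 128],
    "dropout":       [0.3, 0.4, 0.5],
}

def sample_config():
    # Draw one discrete value per hyperparameter, uniformly at random.
    return {k: random.choice(v) for k, v in search_space.items()}

best = (None, 0.0)
for trial in range(20):                      # number of sampled models
    cfg = sample_config()
    # model = build_cnn(cfg)                 # hypothetical model factory
    # acc = train_and_score(model, cfg)      # hypothetical FER2013 train/eval
    acc = 0.0                                # placeholder score
    if acc >= best[1]:
        best = (cfg, acc)
    print(f"trial {trial}: {cfg}")

print("best config:", best[0])
```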
This paper proposes a multi-task learning algorithm in which a single CNN detects the gender, age, and race of the subject along with their emotion, and shows that this approach significantly outperforms the current state-of-the-art algorithms for this task.
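A hedged sketch of one way to realize such a model: a single shared CNN backbone with separate output heads for emotion, gender, age group, and race, trained with a weighted sum of per-task losses. The backbone, head sizes, and loss weights are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskFER(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature extractor for all four tasks.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.emotion = nn.Linear(128, 7)     # 7 basic emotions
        self.gender  = nn.Linear(128, 2)
        self.age     = nn.Linear(128, 8)     # e.g. 8 age buckets (assumption)
        self.race    = nn.Linear(128, 5)

    def forward(self, x):
        h = self.backbone(x)
        return self.emotion(h), self.gender(h), self.age(h), self.race(h)

model = MultiTaskFER()
x = torch.randn(4, 1, 48, 48)                # batch of 48x48 grayscale faces
emo, gen, age, race = model(x)

# Joint loss: weighted sum of per-task cross-entropies (weights are illustrative).
ce = nn.CrossEntropyLoss()
targets = [torch.randint(0, n, (4,)) for n in (7, 2, 8, 5)]
loss = (ce(emo, targets[0]) + 0.5 * ce(gen, targets[1])
        + 0.5 * ce(age, targets[2]) + 0.5 * ce(race, targets[3]))
```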
This study is the first to bridge previous neuroscience and ASD research findings with feature-relevance calculation for EEG-based emotion recognition using CNNs in typically developing (TD) and ASD individuals.
Results demonstrated that these modalities carry information relevant to detecting users' emotional state and that combining them improves the final system's performance.
This paper proposes an architecture capable of learning from raw data and describes three variants with distinct modality fusion mechanisms that drastically improve performance when one modality is absent or noisy, and also improve performance in the standard ideal setting, outperforming competing methods.
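A rough sketch of one fusion variant that stays usable when a modality is missing or noisy: each raw modality is encoded separately and the available embeddings are averaged before the classifier. The encoder shapes, feature dimensions, and averaging strategy are illustrative assumptions, not the paper's exact fusion mechanisms.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, dim=128, num_classes=7):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(40, dim), nn.ReLU())   # e.g. 40 audio features
        self.video_enc = nn.Sequential(nn.Linear(512, dim), nn.ReLU())  # e.g. 512 visual features
        self.head = nn.Linear(dim, num_classes)

    def forward(self, audio=None, video=None):
        embeddings = []
        if audio is not None:
            embeddings.append(self.audio_enc(audio))
        if video is not None:
            embeddings.append(self.video_enc(video))
        # Average over whichever modalities are actually present.
        fused = torch.stack(embeddings).mean(dim=0)
        return self.head(fused)

model = FusionClassifier()
logits_both = model(audio=torch.randn(2, 40), video=torch.randn(2, 512))
logits_no_video = model(audio=torch.randn(2, 40))    # prediction with one modality absent
```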