Facial micro-expression spotting is the challenging task of identifying the onset, apex, and/or offset frames of a micro-expression within a short or long video sequence.
This paper presents baseline results for the Third Facial Micro-Expression Grand Challenge (MEGC 2020). Both macro- and micro-expression intervals in CAS(ME)2 and SAMM Long Videos are spotted using Main Directional Maximal Difference Analysis (MDMD), which uses the maximal magnitude difference in the main direction of optical flow features to spot facial movements. The single-frame predictions of the original MDMD method are post-processed into plausible video intervals. Baseline results are evaluated with the F1-score: for CAS(ME)2, the F1-scores are 0.1196 and 0.0082 for macro- and micro-expressions respectively, with an overall F1-score of 0.0376; for SAMM Long Videos, the F1-scores are 0.0629 and 0.0364 for macro- and micro-expressions respectively, with an overall F1-score of 0.0445. The baseline code is publicly available at https://github.com/HeyingGithub/Baseline-project-for-MEGC2020_spotting.
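The two steps described above, turning per-frame detections into intervals and scoring them with an interval-level F1, can be sketched as follows. This is a minimal illustration, not the challenge's official evaluation code; the minimum interval length and the IoU threshold of 0.5 are assumptions chosen for the example.

```python
def frames_to_intervals(flags, min_len=2):
    """Merge runs of consecutive flagged frames into (start, end) intervals,
    discarding runs shorter than min_len (an assumed post-processing rule)."""
    intervals, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_len:
                intervals.append((start, i - 1))
            start = None
    if start is not None and len(flags) - start >= min_len:
        intervals.append((start, len(flags) - 1))
    return intervals

def iou(a, b):
    """Intersection-over-union of two inclusive frame intervals."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    union = (a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter
    return inter / union

def spotting_f1(pred, truth, thr=0.5):
    """F1 over intervals: a prediction counts as a true positive if it
    overlaps some ground-truth interval with IoU >= thr."""
    tp = sum(any(iou(p, t) >= thr for t in truth) for p in pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

For example, frame flags `[0, 1, 1, 1, 0, 0, 1, 1, 0]` yield the intervals `[(1, 3), (6, 7)]`, which can then be matched against annotated onset/offset intervals.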
A shallow optical flow three-stream CNN (SOFTNet) is proposed to predict a score capturing the likelihood that a frame lies within an expression interval. The spotting task is cast as a regression problem, and pseudo-labeling is introduced to facilitate learning.
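One way to frame spotting as regression with pseudo-labels is to assign each frame a soft target score derived from the annotated interval. The scheme below (full score inside the interval, linear decay over `k` frames outside it) is an illustrative assumption, not necessarily the paper's exact formulation.

```python
import numpy as np

def pseudo_labels(n_frames, onset, offset, k=4):
    """Soft per-frame regression targets: 1.0 inside [onset, offset],
    decaying linearly to 0 over k frames outside the interval.
    (Illustrative pseudo-labeling scheme; k is an assumed hyperparameter.)"""
    labels = np.zeros(n_frames)
    for i in range(n_frames):
        if onset <= i <= offset:
            labels[i] = 1.0
        else:
            d = min(abs(i - onset), abs(i - offset))  # distance to interval
            labels[i] = max(0.0, 1.0 - d / k)
    return labels
```

A network trained against such targets can then produce a per-frame score curve, which is thresholded and post-processed into predicted intervals.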