3260 papers • 126 benchmarks • 313 datasets
Facial action unit detection is the task of detecting action units (AUs), such as lip tightening and cheek raising, from a video of a face. (Image credit: Self-supervised Representation Learning from Videos for Facial Action Unit Detection)
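AU detection is commonly cast as multi-label binary classification: each AU is scored independently per frame, since several AUs can be active at once. A minimal sketch of that framing, with entirely synthetic features, toy linear weights, and illustrative AU names (none of this comes from a specific paper on this page):

```python
import numpy as np

# Illustrative sketch only: AU detection as per-frame multi-label classification.
# Feature dimensions, weights, and the AU list are hypothetical placeholders.
AU_NAMES = ["AU6_cheek_raiser", "AU12_lip_corner_puller", "AU23_lip_tightener"]

rng = np.random.default_rng(0)
W = rng.standard_normal((128, len(AU_NAMES)))  # toy linear scoring weights
b = np.zeros(len(AU_NAMES))

def detect_aus(frame_features: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Score each AU independently and threshold a per-AU sigmoid probability."""
    logits = frame_features @ W + b            # (n_frames, n_aus)
    probs = 1.0 / (1.0 + np.exp(-logits))      # independent sigmoids, not softmax
    return probs >= threshold                  # boolean presence per frame and AU

frames = rng.standard_normal((10, 128))        # 10 frames of toy 128-d features
active = detect_aus(frames)
print(active.shape)                            # one boolean decision per frame/AU
```

The key design point is the independent sigmoid per AU: unlike expression classification, AU labels are not mutually exclusive, so a softmax over AUs would be the wrong output layer.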
These leaderboards are used to track progress in Facial Action Unit Detection.
Use these libraries to find Facial Action Unit Detection models and implementations.
This work trains a unified model to perform three tasks: facial action unit detection, expression classification, and valence-arousal estimation, and proposes an algorithm for the multitask model to learn from missing (incomplete) labels.
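Learning from missing labels, as in the multitask work above, is often handled by masking the loss so that only annotated entries contribute. A hedged sketch of that idea with synthetic data (this is an illustration of the general technique, not the paper's actual algorithm; NaN marks a missing annotation):

```python
import numpy as np

# Sketch of learning from incomplete labels: the loss is averaged only over
# entries that are actually annotated. Shapes and values are synthetic.
def masked_bce(probs: np.ndarray, labels: np.ndarray) -> float:
    """Binary cross-entropy computed over observed (non-NaN) labels only."""
    mask = ~np.isnan(labels)
    p = np.clip(probs[mask], 1e-7, 1 - 1e-7)  # guard against log(0)
    y = labels[mask]
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

probs = np.array([[0.9, 0.2], [0.4, 0.8]])      # model outputs for 2 samples, 2 AUs
labels = np.array([[1.0, np.nan], [0.0, 1.0]])  # one AU label is missing
loss = masked_bce(probs, labels)
```

Because the missing entry is excluded from the mean, a sample with partial annotations still contributes gradient signal for the labels it does have, instead of being dropped entirely.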
This work presents a comprehensive evaluation benchmark for facial representation learning consisting of 5 important face analysis tasks, and systematically investigates two approaches to large-scale representation learning applied to faces: supervised and unsupervised pre-training.
This paper proposes an elegant linear model to untangle facial actions from expressive face videos which contain a mixture of linearly-representable attributes, and exploits the low-rank property across frames to implicitly subtract the intrinsic neutral face.
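The low-rank intuition behind this approach can be illustrated with a toy SVD sketch (all data here is synthetic, and this is not the paper's actual model): stacking vectorized frames of one video, the dominant singular component approximates the shared neutral face, and subtracting it leaves the expressive residual.

```python
import numpy as np

# Toy illustration of the low-rank intuition: frames of one video share a
# neutral face, so the leading singular component of the stacked frame
# matrix approximates it, and the residual carries the facial actions.
rng = np.random.default_rng(1)
neutral = rng.standard_normal(64)                   # synthetic "neutral face"
expressions = 0.05 * rng.standard_normal((20, 64))  # small per-frame variation
frames = neutral[None, :] + expressions             # (n_frames, n_pixels)

U, s, Vt = np.linalg.svd(frames, full_matrices=False)
rank1 = s[0] * np.outer(U[:, 0], Vt[0])   # dominant rank-1 component
residual = frames - rank1                 # implicit neutral-face subtraction

# When expressions are small, the rank-1 part explains most of the energy.
ratio = np.linalg.norm(rank1) / np.linalg.norm(frames)
```

On real videos the expressive part is sparser and more structured, which is why the paper can separate it with a linear model rather than a plain truncated SVD; the sketch only shows why the neutral face behaves like a low-rank term across frames.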
The results show that GPT-4V achieves high accuracy in facial action unit recognition and micro-expression detection, while its general facial expression recognition performance is less accurate. The study provides valuable insights into the potential applications and challenges of MLLMs in human-centric computing.
This paper proposes an AU relationship modelling approach that learns a unique graph explicitly describing the relationship between each pair of AUs of the target facial display, and demonstrates large performance improvements for CNN- and transformer-based backbones.
This work proposes a novel convolutional neural network approach to the fine-grained recognition problem of multi-view dynamic facial action unit detection, formulating the prediction of the presence or absence of a specific action unit in a still image of a human face as holistic classification.
A novel self-adjusting AU-correlation learning (SACL) method for AU detection that requires less computation, outperforms state-of-the-art methods on widely used AU detection benchmark datasets, and obtains a more robust feature representation for the final AU detection.
A novel end-to-end deep learning framework for joint AU detection and face alignment, which has not been explored before: multi-scale shared features are learned first, and high-level face alignment features are fed into AU detection.
This paper proposes an end-to-end unconstrained facial AU detection framework based on domain adaptation, which transfers accurate AU labels from a constrained source domain to an unconstrained target domain by exploiting labels of AU-related facial landmarks.