Facial anti-spoofing is the task of preventing false facial verification by using a photo, video, mask, or other substitute for an authorized person's face. Some examples of attacks:

- Print attack: the attacker uses someone's photo, printed or displayed on a digital device.
- Replay/video attack: a more sophisticated way to trick the system, usually requiring a looped video of the victim's face. This makes behaviour and facial movements look more 'natural' than holding up someone's photo.
- 3D mask attack: a mask is used as the spoofing tool. This is an even more sophisticated attack than replaying a face video: in addition to natural facial movements, it can deceive extra layers of protection such as depth sensors.

(Image credit: Learning Generalizable and Identity-Discriminative Representations for Face Anti-Spoofing)
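In practice, face anti-spoofing (presentation attack detection) is most often framed as a binary live-vs-spoof decision on a face crop. The sketch below illustrates this framing only; the backbone, weights, and threshold are illustrative assumptions, not a specific published model.

```python
# Minimal sketch of face anti-spoofing as binary live/spoof classification.
# The backbone and the 0.5 threshold are placeholders, not a real detector.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Any image backbone with a 2-way head fits this framing.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # [spoof, live]
backbone.eval()

def liveness_score(image_path: str) -> float:
    """Return the probability that the face crop is live (not an attack)."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(backbone(x), dim=1)
    return probs[0, 1].item()

# A print, replay, or mask attack is rejected when the score falls below a
# threshold tuned on a validation set (0.5 here is only a placeholder).
# accept = liveness_score("face_crop.jpg") > 0.5
```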
These leaderboards are used to track progress in Face Anti-Spoofing
Use these libraries to find Face Anti-Spoofing models and implementations
No subtasks available.
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
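The core idea is that each block learns a residual function and adds it back to its input through a skip connection, which makes very deep networks easier to optimize. A minimal sketch is below; the layer sizes are illustrative, not the paper's exact ResNet configuration.

```python
# Hedged sketch of a basic residual block: the block learns F(x) and the
# output is x + F(x) via an identity shortcut.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(x + residual)  # identity shortcut eases optimization

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```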
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
This paper reformulates FAS from an anomaly-detection perspective and proposes a residual-learning framework to learn the discriminative live-spoof differences, defined as spoof cues, which outperforms state-of-the-art methods.
A Convolutional Neural Network (CNN)-based framework for presentation attack detection with deep pixel-wise supervision is introduced, suitable for deployment on smart devices with minimal computational and time overhead.
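Pixel-wise binary supervision replaces the single live/spoof label with a spatial map: every location of a shallow feature map is supervised (ones for live faces, zeros for attacks), and the mean of the predicted map gives the liveness score. The sketch below illustrates that idea under assumed feature shapes and backbone; it is not the paper's exact architecture.

```python
# Hedged sketch of deep pixel-wise binary supervision for presentation
# attack detection. Feature shapes and the backbone are assumptions.
import torch
import torch.nn as nn

class PixelWiseHead(nn.Module):
    def __init__(self, in_channels: int = 128):
        super().__init__()
        self.head = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(feat))  # (B, 1, H, W) liveness map

criterion = nn.BCELoss()
feat = torch.randn(4, 128, 14, 14)    # features from any backbone
target = torch.ones(4, 1, 14, 14)     # all ones for live, all zeros for spoof
pred_map = PixelWiseHead()(feat)
loss = criterion(pred_map, target)    # supervision at every pixel
score = pred_map.mean(dim=(1, 2, 3))  # per-image liveness score
```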
A novel frame-level FAS method based on Central Difference Convolution (CDC) is proposed, which is able to capture intrinsic detailed patterns by aggregating both intensity and gradient information.
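A CDC layer combines a vanilla convolution response (intensity) with a central-difference term (gradient-like information). Because the central-difference term reduces to the kernel's summed weights applied to the centre pixel, it can be computed with a 1x1 convolution. The sketch below follows that simplification; theta = 0.7 is used only as an illustrative default.

```python
# Hedged sketch of a Central Difference Convolution (CDC) layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, theta: float = 0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.theta = theta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        vanilla = self.conv(x)
        # Central-difference term: summed kernel weights applied to the
        # centre pixel, implemented as a 1x1 convolution.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        centre = F.conv2d(x, kernel_sum)
        return vanilla - self.theta * centre

x = torch.randn(2, 3, 64, 64)
print(CentralDifferenceConv2d(3, 16)(x).shape)  # torch.Size([2, 16, 64, 64])
```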
A new approach is presented to detect presentation attacks from multiple frames based on two insights: it captures discriminative details via a Residual Spatial Gradient Block (RSGB) and efficiently encodes spatio-temporal information via a Spatio-Temporal Propagation Module (STPM).
Instead of hand-crafting features, a deep convolutional neural network is relied on to learn highly discriminative features in a supervised manner; combined with some data pre-processing, face anti-spoofing performance improves drastically.
A method is proposed to synthesize virtual spoof data in 3D space, alleviating the problem of expensive spoof data acquisition and opening up new possibilities for advancing face anti-spoofing with cheap, large-scale synthetic data.
A large-scale multi-modal dataset, namely CASIA-SURF, is introduced, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and visual modalities, and a new multi-modal fusion method is presented, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality.
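The sketch below illustrates the general idea of channel re-weighting for multi-modal fusion (e.g. RGB, depth, IR): per-modality features are concatenated and each channel is scaled by a learned gate. This is a squeeze-and-excitation-style illustration with assumed shapes, not the exact fusion architecture from the CASIA-SURF paper.

```python
# Hedged sketch of channel re-weighting fusion across modalities.
import torch
import torch.nn as nn

class ChannelReweightFusion(nn.Module):
    def __init__(self, channels_per_modality: int = 128,
                 num_modalities: int = 3, reduction: int = 16):
        super().__init__()
        total = channels_per_modality * num_modalities
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(total, total // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(total // reduction, total),
            nn.Sigmoid(),
        )

    def forward(self, rgb, depth, ir):
        fused = torch.cat([rgb, depth, ir], dim=1)               # (B, 3C, H, W)
        weights = self.gate(fused).unsqueeze(-1).unsqueeze(-1)   # per-channel gate
        return fused * weights   # informative channels boosted, weak ones suppressed

rgb = depth = ir = torch.randn(2, 128, 14, 14)
print(ChannelReweightFusion()(rgb, depth, ir).shape)  # torch.Size([2, 384, 14, 14])
```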
An extremely lightweight network architecture (FeatherNet A/B) is proposed with a streaming module that fixes the weakness of Global Average Pooling and uses fewer parameters, and a novel fusion procedure with an "ensemble + cascade" structure is presented to satisfy performance-preferred use cases.
Adding a benchmark result helps the community track progress.