3260 papers • 126 benchmarks • 313 datasets
Face anti-spoofing is the task of preventing false facial verification with a photo, video, mask, or other substitute for an authorized person's face. Some examples of attacks:
- Print attack: the attacker presents someone's photo, either printed on paper or displayed on a digital device.
- Replay/video attack: a more sophisticated attack that usually plays a looped video of the victim's face, so behaviour and facial movements look more 'natural' than a held photo.
- 3D mask attack: the attacker wears a mask of the victim's face. Beyond reproducing natural facial movements, a mask can also defeat extra layers of protection such as depth sensors.
(Image credit: Learning Generalizable and Identity-Discriminative Representations for Face Anti-Spoofing)
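In practice, face anti-spoofing is commonly framed as binary classification of a face crop into live vs. spoof (print, replay, or mask). The following is a minimal sketch of such a classifier in PyTorch; the architecture, input size, and class names are illustrative assumptions, not taken from any specific paper on this page.

```python
# Minimal sketch: binary live-vs-spoof classifier for face anti-spoofing.
# All layer sizes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class SpoofClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 1)  # single logit: live (1) vs. spoof (0)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SpoofClassifier()
face_crops = torch.randn(4, 3, 224, 224)   # batch of RGB face crops (dummy data)
labels = torch.tensor([1., 0., 1., 0.])    # 1 = live, 0 = spoof (print/replay/mask)
logits = model(face_crops).squeeze(1)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
live_prob = torch.sigmoid(logits)          # threshold (e.g. 0.5) to accept or reject
```

Published methods typically replace this plain classifier with stronger cues (depth supervision, rPPG signals, domain-generalized features), but the live-vs-spoof decision at the end is the same.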
These leaderboards are used to track progress in Face Anti-Spoofing.
Use these libraries to find Face Anti-Spoofing models and implementations.