Multimodal forgery detection is the task of identifying deepfake (forged) videos by jointly analyzing both the video and audio modalities.
(Image credit: Papersgraph)
These leaderboards are used to track progress in Multimodal Forgery Detection
Use these libraries to find Multimodal Forgery Detection models and implementations
No subtasks available.
Recent rapid advances in Artificial Intelligence (AI) technology have enabled the creation of hyper-realistic deepfakes, making the detection of deepfake (AI-synthesized) videos a critical task. Existing systems generally do not fully exploit the unified processing of audio and video data, leaving room for improvement. In this paper, we focus on the multimodal forgery detection task and propose a deep forgery detection method based on audiovisual ensemble learning. The proposed method consists of four parts: a Video Network, an Audio Network, an Audiovisual Network, and a Voting Module. Given a video, the proposed multimodal ensemble learning system identifies whether it is fake or real. Experimental results on the recently released multimodal FakeAVCeleb dataset show that the proposed method achieves 89% accuracy, significantly outperforming existing models.
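The four-part design described in the abstract can be sketched in code. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the three branches are placeholder classifiers standing in for the Video, Audio, and Audiovisual Networks, the input feature dimensions are assumed values, and the Voting Module is realized here as a simple majority vote over per-sample predictions.

```python
# Hypothetical sketch of an audiovisual ensemble detector (assumes PyTorch).
# The sub-networks below are placeholder stubs, not the paper's architectures.
import torch
import torch.nn as nn


class PlaceholderBranch(nn.Module):
    """Stand-in for one modality-specific real/fake classifier."""

    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2),  # logits: [real, fake]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class AudioVisualEnsemble(nn.Module):
    """Video, audio, and fused audiovisual branches combined by majority vote."""

    def __init__(self, video_dim: int = 512, audio_dim: int = 128):
        super().__init__()
        self.video_net = PlaceholderBranch(video_dim)
        self.audio_net = PlaceholderBranch(audio_dim)
        self.av_net = PlaceholderBranch(video_dim + audio_dim)

    def forward(self, video_feat: torch.Tensor, audio_feat: torch.Tensor) -> torch.Tensor:
        # Per-branch class predictions (0 = real, 1 = fake), shape (batch, 3).
        preds = torch.stack(
            [
                self.video_net(video_feat).argmax(dim=1),
                self.audio_net(audio_feat).argmax(dim=1),
                self.av_net(torch.cat([video_feat, audio_feat], dim=1)).argmax(dim=1),
            ],
            dim=1,
        )
        # Voting Module: a sample is labeled fake if at least 2 of 3 branches agree.
        return (preds.sum(dim=1) >= 2).long()


# Example usage on dummy feature batches.
model = AudioVisualEnsemble()
video_feat = torch.randn(4, 512)   # e.g. pooled frame embeddings
audio_feat = torch.randn(4, 128)   # e.g. pooled spectrogram embeddings
print(model(video_feat, audio_feat))  # tensor of 0/1 labels per sample
```

Hard majority voting is only one possible choice for the Voting Module; averaging the branch probabilities or learning the combination weights are common alternatives.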