Delocate: Detection and Localization for Deepfake Videos with Randomly-Located Tampered Traces
Juan Hu, Xin Liao, Difei Gao, Satoshi Tsutsui, Qian Wang, Zheng Qin, Mike Zheng Shou
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 5862-5871.
https://doi.org/10.24963/ijcai.2024/648
Deepfake videos are becoming increasingly realistic, exhibiting only subtle tampering traces on facial areas that vary from frame to frame. Consequently, existing Deepfake detection methods struggle to detect videos from unknown domains while accurately locating the tampered regions. To address this limitation, we propose Delocate, a novel Deepfake detection model that can both recognize and localize unknown-domain Deepfake videos. Our method consists of two stages: Recovering and Localization. In the Recovering stage, the model randomly masks regions of interest (ROIs) and reconstructs real faces without tampering traces, yielding a relatively good recovery effect for real faces and a poor recovery effect for fake faces. In the Localization stage, the output of the Recovering stage and the forgery ground-truth mask serve as supervision to guide the forgery localization process. This process strategically exploits the poorly recovered regions of fake faces, facilitating the localization of tampered areas. Our extensive experiments on four widely used benchmark datasets demonstrate that Delocate not only excels at localizing tampered areas but also enhances cross-domain detection performance.
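A minimal PyTorch-style sketch of the two-stage idea described in the abstract, assuming a toy encoder-decoder for the Recovering stage and a small convolutional head for the Localization stage. The names `random_roi_mask`, `RecoveryNet`, and `LocalizationHead`, and the architectures themselves, are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

def random_roi_mask(frames, roi_boxes, mask_ratio=0.5):
    """Randomly zero out a subset of facial ROIs (e.g., eyes, nose, mouth).

    frames:    (B, 3, H, W) face crops
    roi_boxes: per-frame list of (x1, y1, x2, y2) boxes (hypothetical format)
    """
    masked = frames.clone()
    for b, boxes in enumerate(roi_boxes):
        for (x1, y1, x2, y2) in boxes:
            if torch.rand(1).item() < mask_ratio:
                masked[b, :, y1:y2, x1:x2] = 0.0
    return masked

class RecoveryNet(nn.Module):
    """Encoder-decoder that reconstructs the unmasked face.

    Trained on real faces only, so reconstruction error stays low on real
    faces and rises on tampered regions of fake faces.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class LocalizationHead(nn.Module):
    """Predicts a per-pixel tampering mask from the recovery residual,
    supervised by the forgery ground-truth mask during training."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, residual):
        return torch.sigmoid(self.net(residual))
```

A toy forward pass under the same assumptions, where the recovery residual (large where reconstruction is poor) drives the predicted tamper mask:

```python
frames = torch.rand(2, 3, 64, 64)                 # toy batch of face crops
boxes = [[(10, 10, 30, 30)], [(20, 20, 50, 50)]]  # per-frame ROI boxes
masked = random_roi_mask(frames, boxes)
recovered = RecoveryNet()(masked)                 # Recovering stage
residual = torch.abs(recovered - frames)          # poor recovery => large residual
pred_mask = LocalizationHead()(residual)          # Localization stage, (2, 1, 64, 64)
```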
Keywords:
Multidisciplinary Topics and Applications: MTA: Security and privacy
Computer Vision: CV: Biometrics, face, gesture and pose recognition