This study explores a multi-stream fusion approach with one-class learning for audio-visual deepfake detection.
This paper addresses the challenge of building a robust audio-visual deepfake detection model. In practical use cases, new generation algorithms continually emerge, and these algorithms are not encountered when detection methods are developed, so a detector must generalize to unseen manipulations. Additionally, to make detection results credible, it is beneficial for the model to interpret which cues in a video indicate that it is fake. To this end, the researchers propose a multi-stream fusion approach with one-class learning used as a representation-level regularization technique. To study the generalization problem of audio-visual deepfake detection, they create a new benchmark by extending and re-splitting the existing FakeAVCeleb dataset; the benchmark contains four categories of fake videos: Real Audio-Fake Visual, Fake Audio-Fake Visual, Fake Audio-Real Visual, and unsynchronized videos. Experimental results demonstrate that the approach surpasses previous models by a large margin. Furthermore, the proposed framework offers interpretability, indicating which modality the model identifies as more likely to be fake.
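To give a concrete sense of what "one-class learning as representation-level regularization" can mean, the following is a minimal illustrative sketch assuming a Deep-SVDD-style compactness objective: embeddings of real samples are pulled toward a learned center while fake samples are pushed at least a margin away. The function name, the margin parameter, and the use of a single Euclidean center are assumptions for illustration; the paper's actual loss formulation may differ.

```python
import numpy as np

def one_class_loss(embeddings, labels, center, margin=1.0):
    """Illustrative one-class regularizer over fused embeddings (a sketch,
    not the paper's exact loss). Real samples (label 0) are pulled toward
    `center`; fake samples (label 1) are repelled to at least `margin` away."""
    d = np.linalg.norm(embeddings - center, axis=1)       # distance to center
    real = labels == 0
    loss_real = d[real] ** 2                              # compactness for real
    loss_fake = np.maximum(0.0, margin - d[~real]) ** 2   # repel fakes inside margin
    return (loss_real.sum() + loss_fake.sum()) / len(labels)

# Toy usage: fake embeddings sitting near the real-class center are penalized.
rng = np.random.default_rng(0)
center = np.zeros(4)
emb = np.vstack([rng.normal(0, 0.1, (3, 4)),   # real samples, near center
                 rng.normal(0, 0.1, (3, 4))])  # fake samples, also near center
labels = np.array([0, 0, 0, 1, 1, 1])
loss = one_class_loss(emb, labels, center)
```

At inference time, the distance of an embedding from the real-class center can serve as a fakeness score, which is what allows a one-class objective to flag manipulations never seen during training.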