NCJRS Virtual Library


A Multi-Stream Fusion Approach with One-Class Learning for Audio-Visual Deepfake Detection

NCJ Number
309835
Author(s)
Kyungbok Lee; You Zhang; Zhiyao Duan
Date Published
2024
Length
6 pages
Annotation

This study explores a multi-stream fusion approach with one-class learning for audio-visual deepfake detection.

Abstract

This paper addresses the challenge of developing a robust audio-visual deepfake detection model. In practical use cases, new generation algorithms continually emerge, and these algorithms are not encountered during the development of detection methods; this calls for strong generalization ability. Additionally, to ensure the credibility of detection methods, it is beneficial for the model to interpret which cues in a video indicate that it is fake. The researchers propose a multi-stream fusion approach with one-class learning as a representation-level regularization technique. To study the generalization problem of audio-visual deepfake detection, they create a new benchmark by extending and re-splitting the existing FakeAVCeleb dataset. The benchmark contains four categories of fake videos: Real Audio-Fake Visual, Fake Audio-Fake Visual, Fake Audio-Real Visual, and Unsynchronized videos. Experimental results demonstrate that the proposed approach surpasses previous models by a large margin. Furthermore, the framework offers interpretability, indicating which modality the model identifies as more likely to be fake.
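The two ideas in the abstract can be illustrated together: a one-class objective pulls embeddings of real samples toward a learned center while pushing fakes away (a representation-level regularization), and fusing per-stream fakeness scores lets the per-modality scores indicate which modality looks fake. The sketch below is only illustrative, in plain Python; the function names, margin values, and cosine-similarity formulation are assumptions for demonstration, not the paper's actual loss or architecture.

```python
import math

def one_class_loss(embedding, center, is_real, margin_real=0.9, margin_fake=0.2):
    """Hypothetical compactness-style one-class loss.

    Real samples are penalized when their cosine similarity to the
    learned center falls below margin_real; fake samples are penalized
    when it rises above margin_fake.
    """
    dot = sum(e * c for e, c in zip(embedding, center))
    norm = math.sqrt(sum(e * e for e in embedding)) * math.sqrt(sum(c * c for c in center))
    sim = dot / norm
    if is_real:
        return max(0.0, margin_real - sim)  # real: similarity should exceed margin_real
    return max(0.0, sim - margin_fake)      # fake: similarity should stay below margin_fake

def fuse_streams(audio_score, visual_score, av_score, weights=(1/3, 1/3, 1/3)):
    """Weighted fusion of per-stream fakeness scores.

    The individual audio/visual scores remain inspectable, which is the
    kind of modality-level interpretability the abstract describes.
    """
    w_a, w_v, w_av = weights
    return w_a * audio_score + w_v * visual_score + w_av * av_score
```

For example, a video with a high audio fakeness score but a low visual score would yield a moderate fused score while still revealing that the audio modality is the likely forgery.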