
Exposing DeepFake Videos By Detecting Face Warping Artifacts

Yuezun Li & Siwei Lyu, Exposing DeepFake Videos By Detecting Face Warping Artifacts, IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (2019)

One of the most-cited articles on the use of artificial intelligence and machine learning to detect content manipulated with deepfake technology, by detecting visual artifacts that characterize the graphics processing of such content.


In this work, we describe a new deep learning based method that can effectively distinguish AI-generated fake videos (referred to as DeepFake videos hereafter) from real videos. Our method is based on the observation that current DeepFake algorithms can only generate images of limited resolution, which must then be warped to match the original faces in the source video. These transforms leave distinctive artifacts in the resulting DeepFake videos, and we show that they can be effectively captured by convolutional neural networks (CNNs). Compared to previous methods, which use large amounts of real and DeepFake-generated images to train a CNN classifier, our method does not need DeepFake-generated images as negative training examples, since we target the artifacts of affine face warping as the distinguishing feature between real and fake images. The advantages of our method are two-fold: (1) such artifacts can be simulated directly by applying simple image processing operations to an image, turning it into a negative example. Since training a DeepFake model to generate negative examples is time-consuming and resource-demanding, our method saves considerable time and resources in training data collection; (2) since such artifacts are generally present in DeepFake videos from different sources, our method is more robust than others. Our method is evaluated on two sets of DeepFake video datasets, demonstrating its effectiveness in practice.
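The key idea in the abstract — that negative examples can be produced by simple image processing rather than by running a DeepFake generator — can be sketched as follows. This is an illustrative approximation, not the authors' exact pipeline: the paper also applies Gaussian blurring and a full affine warp, whereas this sketch only reproduces the resolution-mismatch part of the artifact by down-sampling a face region and up-sampling it back. The function name and `scale` parameter are our own.

```python
import numpy as np

def simulate_warp_artifact(face, scale=4):
    """Simulate the resolution-mismatch artifact on a face crop.

    Down-sample the face region, then up-sample it back to the original
    size (nearest-neighbour in both directions). The repeated resampling
    leaves blur/blocking artifacts similar to those introduced when a
    low-resolution generated face is warped onto a higher-resolution
    source frame. Sketch only; the paper's pipeline also blurs and
    affine-warps the region.
    """
    h, w = face.shape[:2]
    # Nearest-neighbour down-sample: keep every `scale`-th pixel.
    small = face[::scale, ::scale]
    # Nearest-neighbour up-sample back to at least (h, w), then crop.
    up = np.repeat(np.repeat(small, scale, axis=0), scale, axis=1)
    return up[:h, :w]
```

The degraded crop would then be pasted back into the original frame and labeled as a negative example, so the classifier learns to spot the warping artifact itself rather than any particular generator's signature.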

Link to the article: