Deepfake Forensics via an Adversarial Game
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 31 (2022), dated: 11., pp. 3541-3552
Author:
Other authors:
Format: Online article
Language: English
Published: 2022
Parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Abstract: With the progress in AI-based facial forgery (i.e., deepfake), people are concerned about its abuse. Although efforts have been made to train models to recognize such forgeries, existing models suffer from poor generalization to unseen forgery technologies and high sensitivity to changes in image/video quality. In this paper, we advocate robust training for improving the generalization ability. We believe that training with samples adversarially crafted to attack the classification models considerably improves generalization. Considering that AI-based face manipulation often leaves high-frequency artifacts that are easily spotted by models yet difficult to generalize from, we further propose a new adversarial training method that attempts to blur out these artifacts by introducing pixel-wise Gaussian blurring. Extensive empirical evidence shows that, with adversarial training, models are forced to learn more discriminative and generalizable features. Our code: https://github.com/ah651/deepfake_adv
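The core idea in the abstract, adversarially training a deepfake detector against pixel-wise Gaussian blurring, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' released implementation (see https://github.com/ah651/deepfake_adv for that): spatially varying blur is approximated here by blending each pixel between the clean image and one fixed Gaussian-blurred copy, with the per-pixel blend weight `alpha` optimized to maximize the detector's loss. The function names, kernel size, step sizes, and iteration counts are placeholder choices.

```python
# Hedged sketch: adversarial training against pixel-wise blurring.
# NOT the authors' exact method; hyperparameters are illustrative.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def pixelwise_blur_attack(model, x, y, steps=5, step_size=0.1, sigma=2.0):
    """Craft a per-pixel blur map that maximizes the classification loss."""
    blurred = TF.gaussian_blur(x, kernel_size=7, sigma=sigma)  # one fixed blurred copy
    alpha = torch.zeros_like(x[:, :1])  # one blend weight per pixel, shared across channels
    alpha.requires_grad_(True)
    for _ in range(steps):
        # Spatially varying blur: alpha=0 keeps the pixel clean, alpha=1 fully blurs it.
        x_adv = (1 - alpha) * x + alpha * blurred
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, alpha)
        with torch.no_grad():
            alpha += step_size * grad.sign()  # ascend the loss (FGSM-style step)
            alpha.clamp_(0.0, 1.0)            # keep blend weights valid
    return ((1 - alpha) * x + alpha * blurred).detach()

def train_step(model, optimizer, x, y):
    """One robust-training step on adversarially blurred samples."""
    model.eval()  # freeze batch-norm statistics while crafting the attack
    x_adv = pixelwise_blur_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's framing, training the detector on such worst-case blurred fakes discourages it from relying on brittle high-frequency artifacts; a full implementation would also mix in clean samples and tune the attack budget.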
Description: Date Revised: 19.05.2022; Published: Print-Electronic; Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2022.3172845