DotFAN : A Domain-Transferred Face Augmentation Net



Bibliographic Details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society, Vol. 30 (2021), pp. 8759-8772
First author: Shao, Hao-Chiang (Author)
Other authors: Liu, Kang-Yu, Su, Weng-Tai, Lin, Chia-Wen, Lu, Jiwen
Format: Online article
Language: English
Published: 2021
Parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Summary: The performance of a convolutional neural network (CNN) based face recognition model largely relies on the richness of labeled training data. However, it is expensive to collect a training set with large variations of a face identity under different poses and illumination changes, so the diversity of within-class face images becomes a critical issue in practice. In this paper, we propose a 3D model-assisted domain-transferred face augmentation network (DotFAN) that can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets of other domains. Extending StarGAN's architecture, DotFAN integrates two additional subnetworks, i.e., a face expert model (FEM) and a face shape regressor (FSR), for latent facial code control. While the FSR aims to extract face attributes, the FEM is designed to capture a face identity. With their aid, DotFAN can separately learn facial feature codes and effectively generate face images with various facial attributes while keeping the identity of the augmented faces unaltered. Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity, so that a better face recognition model can be learned from the augmented dataset.
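The key idea in the summary above is a factorized latent code: an identity code (from the FEM) is held fixed while attribute codes (from the FSR) are varied, and a generator decodes each combination into a new face variant. The following is a minimal NumPy sketch of that data flow only, not the paper's actual networks; all layer names, shapes, and the random linear "encoders" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for DotFAN's subnetworks (illustrative shapes only):
# FEM -> identity code f_id; FSR -> attribute code f_attr; generator decodes both.
W_fem = rng.standard_normal((128, 64))       # "face expert model" (identity encoder)
W_fsr = rng.standard_normal((128, 16))       # "face shape regressor" (attribute encoder)
W_gen = rng.standard_normal((64 + 16, 128))  # "generator": concatenated codes -> features

def fem(x):
    # Identity code: kept fixed across all generated variants.
    return x @ W_fem

def fsr(x):
    # Attribute code: pose / illumination / shape factors to be varied.
    return x @ W_fsr

def generate(f_id, f_attr):
    # Decode one (identity, attribute) pair into a face-feature vector.
    return np.concatenate([f_id, f_attr]) @ W_gen

x = rng.standard_normal(128)  # one input face, as a feature-vector stand-in
f_id = fem(x)

# Augmentation loop: resample attribute codes, reuse the same identity code,
# so every variant depicts the "same person" under different attributes.
variants = [generate(f_id, rng.standard_normal(16)) for _ in range(5)]
print(len(variants), variants[0].shape)  # → 5 (128,)
```

The point of the factorization is visible in the loop: only the attribute code changes between variants, which is how within-class diversity is increased without altering identity.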
Description: Date Completed 10.12.2021
Date Revised 14.12.2021
published: Print-Electronic
Citation Status MEDLINE
ISSN:1941-0042
DOI:10.1109/TIP.2021.3120313