Face Frontalization Using an Appearance-Flow-Based Convolutional Neural Network


Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 28 (2019), No. 5, May 29, pp. 2187-2199
First author: Zhang, Zhihong (Author)
Other authors: Chen, Xu, Wang, Beizhan, Hu, Guosheng, Zuo, Wangmeng, Hancock, Edwin R
Format: Online article
Language: English
Published: 2019
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Abstract: Facial pose variation is one of the major factors making face recognition (FR) a challenging task. One popular solution is to convert non-frontal faces to frontal ones on which FR is performed. Rotating faces causes facial pixel value changes. Therefore, existing CNN-based methods learn to synthesize frontal faces in color space. However, this learning problem in a color space is highly non-linear, causing the synthetic frontal faces to lose fine facial textures. In this paper, we take the view that the non-frontal-to-frontal pixel changes are essentially caused by geometric transformations (rotation, translation, and so on) in space. Therefore, we aim to learn the non-frontal-to-frontal facial conversion in the spatial domain rather than the color domain to ease the learning task. To this end, we propose an appearance-flow-based face frontalization convolutional neural network (A3F-CNN). Specifically, A3F-CNN learns to establish the dense correspondence between the non-frontal and frontal faces. Once the correspondence is built, frontal faces are synthesized by explicitly "moving" pixels from the non-frontal one. In this way, the synthetic frontal faces can preserve fine facial textures. To improve the convergence of training, an appearance-flow-guided learning strategy is proposed. In addition, a generative adversarial network loss is applied to achieve a more photorealistic face, and a face mirroring method is introduced to handle the self-occlusion problem. Extensive experiments are conducted on face synthesis and pose-invariant FR. Results show that our method can synthesize more photorealistic faces than the existing methods in both controlled and uncontrolled lighting environments. Moreover, we achieve a very competitive FR performance on the Multi-PIE, LFW, and IJB-A databases.
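To illustrate the "moving pixels" idea behind appearance flow, the sketch below warps an image by copying each output pixel from a flow-indicated location in the source. This is a minimal nearest-neighbour toy in NumPy, not the authors' A3F-CNN (which learns a dense flow field with a CNN and uses differentiable bilinear sampling); the function name and the constant-shift flow are illustrative assumptions.

```python
import numpy as np

def warp_with_flow(src, flow):
    """Warp a grayscale image (H, W) with a per-pixel appearance flow.

    flow[y, x] = (dy, dx): output pixel (y, x) is copied from
    src[y + dy, x + dx], with nearest-neighbour rounding and border
    clamping. A3F-CNN-style methods use differentiable bilinear
    sampling instead so the flow can be learned end to end.
    """
    H, W = src.shape
    out = np.zeros_like(src)
    for y in range(H):
        for x in range(W):
            # Sample location pointed to by the flow, clamped to the image.
            sy = min(max(int(round(y + flow[y, x, 0])), 0), H - 1)
            sx = min(max(int(round(x + flow[y, x, 1])), 0), W - 1)
            out[y, x] = src[sy, sx]
    return out

# Toy example: a uniform one-pixel horizontal shift, standing in for the
# geometric transformation induced by a small pose change.
src = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 1] = 1.0  # every output pixel is fetched from one pixel to the right
warped = warp_with_flow(src, flow)
```

Because the output is assembled from actual source pixels rather than regressed color values, fine textures survive the warp; this is the property the abstract highlights over color-space synthesis.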
Description: Date Completed 24.01.2019
Date Revised 24.01.2019
Published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN:1941-0042
DOI:10.1109/TIP.2018.2883554