Learning Pose-Aware Models for Pose-Invariant Face Recognition in the Wild

We propose a method designed to push the frontiers of unconstrained face recognition in the wild with an emphasis on extreme out-of-plane pose variations. Existing methods either expect a single model to learn pose invariance by training on massive amounts of data or else normalize images by aligning faces to a single frontal pose. Contrary to these, our method is designed to explicitly tackle pose variations. Our proposed Pose-Aware Models (PAM) process a face image using several pose-specific, deep convolutional neural networks (CNN). 3D rendering is used to synthesize multiple face poses from input images, both to train these models and to provide additional robustness to pose variations at test time. Our paper presents an extensive analysis of the IARPA Janus Benchmark A (IJB-A), evaluating the effects that landmark detection accuracy, CNN layer selection, and pose model selection all have on the performance of the recognition pipeline. It further provides comparative evaluations on IJB-A and the PIPA dataset. These tests show that our approach outperforms existing methods, surprisingly even matching the accuracy of methods that were specifically fine-tuned to the target dataset. Parts of this work previously appeared in [1] and [2].
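The abstract describes an ensemble of pose-specific CNNs fed with 3D-rendered views of the input face, with per-pose results combined at test time. The following is a minimal illustrative sketch of that ensemble idea only; the pose bins, the stubbed rendering and embedding functions, and the average-score fusion rule are assumptions for illustration and are not taken from the paper.

# Illustrative sketch (assumed design, not the authors' implementation):
# several pose-specific embedders, each fed a rendering of the input face
# at its pose, with per-pose match scores fused by simple averaging.
import numpy as np

POSE_BINS = ("frontal", "half_profile", "profile")  # assumed pose buckets

def render_to_pose(image: np.ndarray, pose: str) -> np.ndarray:
    """Placeholder for 3D rendering of the face to a target pose."""
    return image  # a real system would synthesize the new view here

def embed(image: np.ndarray, pose: str) -> np.ndarray:
    """Placeholder for a pose-specific CNN embedding (here: a fixed random projection)."""
    rng = np.random.default_rng(POSE_BINS.index(pose))
    proj = rng.standard_normal((128, image.size))
    v = proj @ image.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def match_score(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Fuse cosine similarities from all pose-specific models (simple average)."""
    scores = []
    for pose in POSE_BINS:
        ea = embed(render_to_pose(img_a, pose), pose)
        eb = embed(render_to_pose(img_b, pose), pose)
        scores.append(float(ea @ eb))
    return float(np.mean(scores))

if __name__ == "__main__":
    a = np.random.rand(64, 64)
    b = np.random.rand(64, 64)
    print(f"fused similarity: {match_score(a, b):.3f}")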

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - 41(2019), 2, 25 Feb., pages 379-393
First author: Masi, Iacopo (author)
Other authors: Chang, Feng-Ju, Choi, Jongmoo, Harel, Shai, Kim, Jungyeon, Kim, KangGeon, Leksut, Jatuporn, Rawls, Stephen, Wu, Yue, Hassner, Tal, AbdAlmageed, Wael, Medioni, Gerard, Morency, Louis-Philippe, Natarajan, Prem, Nevatia, Ram
Format: Online article
Language: English
Published: 2019
Parent work: IEEE Transactions on Pattern Analysis and Machine Intelligence
Subjects: Journal Article; Research Support, U.S. Gov't, Non-P.H.S.
LEADER 01000naa a22002652 4500
001 NLM286370395
003 DE-627
005 20231225051541.0
007 cr uuu---uuuuu
008 231225s2019 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2018.2792452  |2 doi 
028 5 2 |a pubmed24n0954.xml 
035 |a (DE-627)NLM286370395 
035 |a (NLM)29994497 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Masi, Iacopo  |e verfasserin  |4 aut 
245 1 0 |a Learning Pose-Aware Models for Pose-Invariant Face Recognition in the Wild 
264 1 |c 2019 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 20.11.2019 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a We propose a method designed to push the frontiers of unconstrained face recognition in the wild with an emphasis on extreme out-of-plane pose variations. Existing methods either expect a single model to learn pose invariance by training on massive amounts of data or else normalize images by aligning faces to a single frontal pose. Contrary to these, our method is designed to explicitly tackle pose variations. Our proposed Pose-Aware Models (PAM) process a face image using several pose-specific, deep convolutional neural networks (CNN). 3D rendering is used to synthesize multiple face poses from input images, both to train these models and to provide additional robustness to pose variations at test time. Our paper presents an extensive analysis of the IARPA Janus Benchmark A (IJB-A), evaluating the effects that landmark detection accuracy, CNN layer selection, and pose model selection all have on the performance of the recognition pipeline. It further provides comparative evaluations on IJB-A and the PIPA dataset. These tests show that our approach outperforms existing methods, surprisingly even matching the accuracy of methods that were specifically fine-tuned to the target dataset. Parts of this work previously appeared in [1] and [2].
650 4 |a Journal Article 
650 4 |a Research Support, U.S. Gov't, Non-P.H.S. 
700 1 |a Chang, Feng-Ju  |e verfasserin  |4 aut 
700 1 |a Choi, Jongmoo  |e verfasserin  |4 aut 
700 1 |a Harel, Shai  |e verfasserin  |4 aut 
700 1 |a Kim, Jungyeon  |e verfasserin  |4 aut 
700 1 |a Kim, KangGeon  |e verfasserin  |4 aut 
700 1 |a Leksut, Jatuporn  |e verfasserin  |4 aut 
700 1 |a Rawls, Stephen  |e verfasserin  |4 aut 
700 1 |a Wu, Yue  |e verfasserin  |4 aut 
700 1 |a Hassner, Tal  |e verfasserin  |4 aut 
700 1 |a AbdAlmageed, Wael  |e verfasserin  |4 aut 
700 1 |a Medioni, Gerard  |e verfasserin  |4 aut 
700 1 |a Morency, Louis-Philippe  |e verfasserin  |4 aut 
700 1 |a Natarajan, Prem  |e verfasserin  |4 aut 
700 1 |a Nevatia, Ram  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 41(2019), 2 vom: 25. Feb., Seite 379-393  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:41  |g year:2019  |g number:2  |g day:25  |g month:02  |g pages:379-393 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2018.2792452  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 41  |j 2019  |e 2  |b 25  |c 02  |h 379-393