MC-GCN : A Multi-Scale Contrastive Graph Convolutional Network for Unconstrained Face Recognition With Image Sets


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - 31 (2022), dated: 01., pages 3046-3055
First author: Shi, Xiao (Author)
Other authors: Chai, Xiujuan, Xie, Jiake, Sun, Tan
Format: Online article
Language: English
Published: 2022
Access to the parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Description
Abstract: In this paper, a Multi-scale Contrastive Graph Convolutional Network (MC-GCN) method is proposed for unconstrained face recognition with image sets, which takes a set of media (orderless images and videos) as a face subject instead of a single medium (an image or a video). Due to factors such as illumination, posture, and media source, there are large intra-set variances within a face set, and the importance of different face prototypes varies considerably. How to model the attention mechanism according to the relationships between prototypes or images in a set is the main focus of this paper. In this work, we formulate a framework based on a graph convolutional network (GCN), which treats face prototypes as nodes to build relations. Specifically, we first present a multi-scale graph module to learn the relationships between prototypes at multiple scales. Moreover, a Contrastive Graph Convolutional (CGC) block is introduced to build an attention control model, which focuses on frames with similar prototypes (contrastive information) between pairs of sets instead of simply evaluating frame quality. Experiments on IJB-A, YouTube Face, and an animal face dataset clearly demonstrate that our proposed MC-GCN significantly outperforms state-of-the-art methods.
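The core idea of treating face prototypes as graph nodes can be illustrated with a minimal sketch. This is not the authors' implementation: the similarity-based adjacency, the temperature, and all dimensions below are illustrative assumptions, showing only the generic "prototypes as nodes, relation-weighted aggregation" pattern that the abstract describes.

```python
import numpy as np

def gcn_layer(prototypes, weights, temperature=10.0):
    """One illustrative graph-convolution step over face prototypes.

    prototypes: (N, D) array, one row per face prototype in the set.
    weights:    (D, D_out) projection matrix (random here, learned in practice).
    Edges are built from pairwise cosine similarity and softmax-normalized
    per node, so each prototype aggregates features from similar prototypes.
    """
    # Cosine similarity between all prototype pairs.
    unit = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = unit @ unit.T  # (N, N)
    # Row-wise softmax turns similarities into attention-like edge weights.
    logits = temperature * (sim - sim.max(axis=1, keepdims=True))
    adj = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # Aggregate neighbor features, project, apply ReLU.
    return np.maximum(adj @ prototypes @ weights, 0.0)

rng = np.random.default_rng(0)
protos = rng.normal(size=(5, 16))  # 5 prototypes, 16-dim features (assumed)
W = rng.normal(size=(16, 8))
out = gcn_layer(protos, W)
print(out.shape)  # (5, 8): one refined feature vector per prototype
```

In the paper's framework this relational weighting is what lets the network emphasize informative prototypes within a set, rather than scoring each frame's quality in isolation; the CGC block additionally conditions these weights on a paired set, which the sketch above does not model.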
Description: Date Completed 13.04.2022
Date Revised 13.04.2022
published: Print-Electronic
Citation Status MEDLINE
ISSN:1941-0042
DOI:10.1109/TIP.2022.3163851