Zero-VAE-GAN : Generating Unseen Features for Generalized and Transductive Zero-Shot Learning

Full Description

Bibliographic Details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - (2020), 13 Jan.
First Author: Gao, Rui (Author)
Other Authors: Hou, Xingsong, Qin, Jie, Chen, Jiaxin, Liu, Li, Zhu, Fan, Zhang, Zhao, Shao, Ling
Format: Online Article
Language: English
Published: 2020
Parent Work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Description
Abstract: Zero-shot learning (ZSL) is a challenging task due to the lack of unseen class data during training. Existing works attempt to establish a mapping between the visual and class spaces through a common intermediate semantic space. The main limitation of existing methods is a strong bias towards seen classes, known as the domain shift problem, which leads to unsatisfactory performance in both conventional and generalized ZSL tasks. To tackle this challenge, we propose to convert ZSL into conventional supervised learning by generating features for unseen classes. To this end, a joint generative model that couples a variational autoencoder (VAE) and a generative adversarial network (GAN), called Zero-VAE-GAN, is proposed to generate high-quality unseen features. To enhance class-level discriminability, an adversarial categorization network is incorporated into the joint framework. In addition, we propose two self-training strategies to augment unlabeled unseen features for the transductive extension of our model, addressing the domain shift problem to a large extent. Experimental results on five standard benchmarks and a large-scale dataset demonstrate the superiority of our generative model over state-of-the-art methods for conventional and, especially, generalized ZSL tasks. Moreover, the further improvement in the transductive setting demonstrates the effectiveness of the proposed self-training strategies.
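
Illustrative sketch: the abstract describes the approach only at a high level. The following is a minimal PyTorch sketch (not the authors' released code) of a conditional VAE-GAN feature generator of the kind described: an encoder and a shared generator form the VAE path, a conditional discriminator supplies the adversarial loss, and features synthesized from class attributes can then train an ordinary classifier. All dimensions, network sizes, and loss weights are assumptions for illustration; the adversarial categorization network and self-training strategies are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed dimensions: e.g. ResNet visual features and attribute vectors.
FEAT_DIM, ATTR_DIM, Z_DIM = 2048, 85, 64

class Encoder(nn.Module):
    """VAE encoder: maps a visual feature plus class attribute to a latent Gaussian."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(FEAT_DIM + ATTR_DIM, 512)
        self.mu = nn.Linear(512, Z_DIM)
        self.logvar = nn.Linear(512, Z_DIM)

    def forward(self, x, a):
        h = F.relu(self.fc(torch.cat([x, a], dim=1)))
        return self.mu(h), self.logvar(h)

class Generator(nn.Module):
    """Shared decoder/generator: reconstructs (VAE path) or synthesizes (GAN path) features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + ATTR_DIM, 512), nn.ReLU(),
            nn.Linear(512, FEAT_DIM), nn.ReLU())

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=1))

class Discriminator(nn.Module):
    """Distinguishes real from generated features, conditioned on class attributes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + ATTR_DIM, 512), nn.ReLU(),
            nn.Linear(512, 1))

    def forward(self, x, a):
        return self.net(torch.cat([x, a], dim=1))

def generator_step(enc, gen, dis, x, a):
    """One illustrative generator-side step combining VAE and adversarial losses."""
    mu, logvar = enc(x, a)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
    x_rec = gen(z, a)                                          # VAE reconstruction
    x_fake = gen(torch.randn_like(mu), a)                      # GAN sample from the prior
    recon = F.mse_loss(x_rec, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    adv = F.binary_cross_entropy_with_logits(
        dis(x_fake, a), torch.ones(x.size(0), 1))
    return recon + kld + adv  # discriminator is updated separately

Once such a generator is trained on seen classes, features for unseen classes would be sampled from their attribute vectors and used to train a standard softmax classifier, which is the sense in which ZSL is reduced to supervised learning.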
Description: Date Revised 27.02.2024
Published: Print-Electronic
Citation Status: Publisher
ISSN: 1941-0042
DOI: 10.1109/TIP.2020.2964429