Rethinking Generalized Zero-Shot Learning: A Synthesized Per-Instance Attribute Perspective

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 34 (2025), dated: 12., pp. 5847-5859
First author: Tang, Chenwei (Author)
Other authors: Wang, Ying, Xie, Wei, Zhang, Qianjun, Xiao, Rong, He, Zhenan, Lv, Jiancheng
Format: Online article
Language: English
Published: 2025
Parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Description
Abstract: Generalized zero-shot learning (GZSL) shows great potential for improving generalization to unseen classes in real-world scenarios. However, most GZSL methods depend on benchmark datasets with per-class attribute annotations, which creates a large semantic gap and worsens the domain shift problem in the visual-semantic space. To address these challenges, instance-level attributes offer an intuitive solution, but they require expensive manual annotation. In this paper, we propose a simple yet effective approach called per-instance attribute synthesis (PIAS) to generate diverse semantic representations for each instance. Our method first uses the Vision Transformer (ViT) model to extract visual features and then generates per-instance attributes. The patch splitting, positional embedding, and multi-head self-attention mechanisms in ViT improve the discriminability of both visual and semantic representations. Next, we define the generated attributes of class-average images as class anchor points. These anchor points are calibrated in the semantic space by minimizing the cosine similarity between the anchor points and per-class attribute annotations. Finally, we improve the diversity of generated per-instance attributes by aligning the topological structure between per-class attribute annotations and synthesized per-instance attributes with that between class-average visual features and per-instance visual features. We conduct comprehensive experiments on three challenging ZSL datasets: AWA2, CUB, and SUN. The results show that PIAS significantly outperforms state-of-the-art methods under both ZSL and GZSL settings. We further demonstrate the generalization ability of PIAS by applying it to attribute-based zero-shot image retrieval tasks.
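The abstract describes two learning objectives: calibrating class anchor points (attributes generated from class-average images) against per-class attribute annotations via a cosine-similarity criterion, and aligning the topological structure of the semantic space with that of the visual space. The sketch below (PyTorch; all tensor names, shapes, and the exact loss forms are assumptions, not the authors' released implementation) illustrates one plausible reading of these two losses. The calibration term is written here as minimizing the cosine distance (1 - cosine similarity), since the goal is to pull anchor points toward the per-class annotations.

```python
import torch
import torch.nn.functional as F

def anchor_calibration_loss(anchor_attrs, class_attrs):
    """Calibrate class anchor points against per-class attribute annotations.

    anchor_attrs: (num_classes, attr_dim), attributes generated from class-average images
    class_attrs:  (num_classes, attr_dim), per-class attribute annotations
    """
    cos = F.cosine_similarity(anchor_attrs, class_attrs, dim=-1)  # (num_classes,)
    return (1.0 - cos).mean()  # cosine distance; assumed interpretation of the calibration step

def topology_alignment_loss(inst_attrs, class_attrs, inst_feats, class_feats):
    """Align the instance-to-class similarity structure in the semantic space
    with the corresponding structure in the visual space.

    inst_attrs:  (batch, attr_dim), synthesized per-instance attributes
    class_attrs: (num_classes, attr_dim), per-class attribute annotations
    inst_feats:  (batch, feat_dim), per-instance ViT visual features
    class_feats: (num_classes, feat_dim), class-average ViT visual features
    """
    sem_rel = F.normalize(inst_attrs, dim=-1) @ F.normalize(class_attrs, dim=-1).T
    vis_rel = F.normalize(inst_feats, dim=-1) @ F.normalize(class_feats, dim=-1).T
    return F.mse_loss(sem_rel, vis_rel)

if __name__ == "__main__":
    # Hypothetical dimensions for a quick smoke test (e.g. AWA2 has 85 attributes).
    C, B, A, D = 50, 16, 85, 768
    anchor_attrs = torch.randn(C, A)
    class_attrs = torch.randn(C, A)
    inst_attrs = torch.randn(B, A)
    inst_feats = torch.randn(B, D)
    class_feats = torch.randn(C, D)
    loss = anchor_calibration_loss(anchor_attrs, class_attrs) \
        + topology_alignment_loss(inst_attrs, class_attrs, inst_feats, class_feats)
    print(loss.item())
```

In practice the two terms would be weighted and combined with the attribute-generation objective on top of the ViT features; those weights and the generator architecture are not specified in the abstract and are omitted here.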
Description: Date Revised 22.09.2025
Published: Print
Citation Status PubMed-not-MEDLINE
ISSN:1941-0042
DOI:10.1109/TIP.2025.3607612