Generative Zero-Shot Learning via Low-Rank Embedded Semantic Dictionary

Zero-shot learning for visual recognition, which identifies unseen categories through a shared visual-semantic function learned on the seen categories and expected to adapt well to unseen categories, has received considerable research attention recently. However, the semantic gap between discriminant visual features and their underlying semantics remains the biggest obstacle, because domain disparity usually exists across the seen and unseen classes. To address this challenge, we design two-stage generative adversarial networks that enhance the generalizability of the semantic dictionary through low-rank embedding for zero-shot learning. Specifically, we formulate a novel framework that simultaneously seeks a two-stage generative model and a semantic dictionary to connect visual features with their semantics under a low-rank embedding. Our first-stage generative model augments semantic features for the unseen classes, which are then used in the second stage to generate more discriminant visual features and expand the seen visual feature space. We can therefore seek a better semantic dictionary, constituting the latent basis for the unseen classes, from the augmented semantic and visual data. Finally, our approach captures a variety of visual characteristics from the seen classes that are "ready-to-use" for new classes. Extensive experiments on four zero-shot benchmarks demonstrate that the proposed algorithm outperforms state-of-the-art zero-shot algorithms.
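
To make the two-stage generative idea above concrete, here is a minimal, hypothetical PyTorch-style sketch: a first-stage generator that augments semantic features from class attributes, a second-stage generator that turns those semantics into visual features, and a semantic dictionary fitted under a low-rank (nuclear-norm) penalty. All module names, layer sizes, and loss weights are illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch of the two-stage generative pipeline described above.
# Stage 1 augments semantic features for (unseen) classes from attributes;
# stage 2 maps augmented semantics to visual features; dictionary D links
# visual features to semantic codes under a low-rank (nuclear-norm) penalty.
import torch
import torch.nn as nn

ATTR_DIM, SEM_DIM, VIS_DIM, NOISE_DIM = 85, 300, 2048, 64  # assumed sizes

class StageOneGenerator(nn.Module):
    """Class attributes + noise -> augmented semantic features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ATTR_DIM + NOISE_DIM, 512), nn.ReLU(),
            nn.Linear(512, SEM_DIM),
        )

    def forward(self, attrs, noise):
        return self.net(torch.cat([attrs, noise], dim=1))

class StageTwoGenerator(nn.Module):
    """Augmented semantics + noise -> synthetic visual features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SEM_DIM + NOISE_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, VIS_DIM),
        )

    def forward(self, sem, noise):
        return self.net(torch.cat([sem, noise], dim=1))

def fit_low_rank_dictionary(visual, semantic, rank_weight=0.1, steps=100):
    """Fit D minimizing ||X - D S||^2 + rank_weight * ||D||_*, where
    X (vis_dim x n) holds visual features and S (sem_dim x n) semantic codes;
    the nuclear-norm term encourages the low-rank embedding."""
    D = (0.01 * torch.randn(visual.size(0), semantic.size(0))).requires_grad_(True)
    opt = torch.optim.Adam([D], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        recon = ((visual - D @ semantic) ** 2).mean()
        low_rank = torch.linalg.matrix_norm(D, ord='nuc')
        (recon + rank_weight * low_rank).backward()
        opt.step()
    return D.detach()

if __name__ == "__main__":
    g1, g2 = StageOneGenerator(), StageTwoGenerator()
    attrs = torch.rand(8, ATTR_DIM)              # toy unseen-class attributes
    sem = g1(attrs, torch.randn(8, NOISE_DIM))   # stage 1: augment semantics
    vis = g2(sem, torch.randn(8, NOISE_DIM))     # stage 2: synthesize visual features
    D = fit_low_rank_dictionary(vis.T.detach(), sem.T.detach())
    print(D.shape)  # torch.Size([2048, 300])

In the paper the two stages are trained adversarially against discriminators and the dictionary couples seen and unseen classes; the sketch above only illustrates the data flow and the low-rank dictionary objective.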

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 41(2019), issue 12, 01 Dec., pages 2861-2874
Main author: Ding, Zhengming (Author)
Other authors: Shao, Ming; Fu, Yun
Format: Online article
Language: English
Published: 2019
Parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM288142764
003 DE-627
005 20231225055613.0
007 cr uuu---uuuuu
008 231225s2019 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2018.2867870  |2 doi 
028 5 2 |a pubmed24n0960.xml 
035 |a (DE-627)NLM288142764 
035 |a (NLM)30176581 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Ding, Zhengming  |e verfasserin  |4 aut 
245 1 0 |a Generative Zero-Shot Learning via Low-Rank Embedded Semantic Dictionary 
264 1 |c 2019 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 04.03.2020 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Zero-shot learning for visual recognition, which identifies unseen categories through a shared visual-semantic function learned on the seen categories and expected to adapt well to unseen categories, has received considerable research attention recently. However, the semantic gap between discriminant visual features and their underlying semantics remains the biggest obstacle, because domain disparity usually exists across the seen and unseen classes. To address this challenge, we design two-stage generative adversarial networks that enhance the generalizability of the semantic dictionary through low-rank embedding for zero-shot learning. Specifically, we formulate a novel framework that simultaneously seeks a two-stage generative model and a semantic dictionary to connect visual features with their semantics under a low-rank embedding. Our first-stage generative model augments semantic features for the unseen classes, which are then used in the second stage to generate more discriminant visual features and expand the seen visual feature space. We can therefore seek a better semantic dictionary, constituting the latent basis for the unseen classes, from the augmented semantic and visual data. Finally, our approach captures a variety of visual characteristics from the seen classes that are "ready-to-use" for new classes. Extensive experiments on four zero-shot benchmarks demonstrate that the proposed algorithm outperforms state-of-the-art zero-shot algorithms.
650 4 |a Journal Article 
700 1 |a Shao, Ming  |e verfasserin  |4 aut 
700 1 |a Fu, Yun  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 41(2019), 12 vom: 01. Dez., Seite 2861-2874  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:41  |g year:2019  |g number:12  |g day:01  |g month:12  |g pages:2861-2874 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2018.2867870  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 41  |j 2019  |e 12  |b 01  |c 12  |h 2861-2874