Learning to Model Relationships for Zero-Shot Video Classification


Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 43(2021), issue 10 of 30 Oct., pages 3476-3491
First author: Gao, Junyu (Author)
Other authors: Zhang, Tianzhu; Xu, Changsheng
Format: Online article
Language: English
Published: 2021
Parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM308906799
003 DE-627
005 20231225132739.0
007 cr uuu---uuuuu
008 231225s2021 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2020.2985708  |2 doi 
028 5 2 |a pubmed24n1029.xml 
035 |a (DE-627)NLM308906799 
035 |a (NLM)32305892 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Gao, Junyu  |e verfasserin  |4 aut 
245 1 0 |a Learning to Model Relationships for Zero-Shot Video Classification 
264 1 |c 2021 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 03.09.2021 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a With the explosive growth of video categories, zero-shot learning (ZSL) in video classification has become a promising research direction in pattern analysis and machine learning. Based on auxiliary information such as word embeddings and attributes, the key to a robust ZSL method is to transfer the learned knowledge from seen classes to unseen classes, which requires modeling the relationships between these concepts (e.g., categories and attributes). However, most existing approaches fail to model these explicit relationships in an end-to-end manner, resulting in low effectiveness of knowledge transfer. To tackle this problem, we recast the video ZSL task as a task-driven message passing process that jointly enjoys several merits, including an alleviated heterogeneity gap, low domain shift, and robust temporal modeling. Specifically, we propose a prototype-sample GNN (PS-GNN) consisting of a prototype branch and a sample branch to directly and adaptively model all the category-attribute, category-category, and attribute-attribute relationships. The prototype branch aims to learn robust representations of video categories, taking as input a set of word-embedding vectors corresponding to the concepts. The sample branch is designed to generate features of a video sample by leveraging its object semantics. Through the co-adaptation and cooperation of both branches, a unified and robust ZSL framework is achieved. Extensive experiments demonstrate that PS-GNN consistently obtains favorable performance on five popular video benchmarks.
650 4 |a Journal Article 
700 1 |a Zhang, Tianzhu  |e verfasserin  |4 aut 
700 1 |a Xu, Changsheng  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 43(2021), 10 vom: 30. Okt., Seite 3476-3491  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:43  |g year:2021  |g number:10  |g day:30  |g month:10  |g pages:3476-3491 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2020.2985708  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 43  |j 2021  |e 10  |b 30  |c 10  |h 3476-3491
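The abstract in field 520 describes graph message passing between category "prototype" nodes (initialized from word embeddings) and a "sample" node (initialized from video features). The following minimal Python sketch illustrates only that general idea, not the authors' PS-GNN: the cosine-affinity graph, the mean-aggregation update, the tanh nonlinearity, and all shapes are assumptions made for illustration.

```python
# Illustrative sketch only (NOT the paper's PS-GNN): one round of message
# passing between category prototype nodes (word embeddings) and a sample
# node (pooled video features), followed by cosine-similarity classification.
import numpy as np

def normalize(x, axis=-1, eps=1e-8):
    # L2-normalize along the last axis so dot products become cosine scores.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def message_passing_step(prototypes, sample, w_self, w_msg):
    """One update: every node aggregates neighbors weighted by cosine affinity."""
    nodes = np.vstack([prototypes, sample[None, :]])          # (C+1, d)
    affinity = normalize(nodes) @ normalize(nodes).T           # (C+1, C+1)
    np.fill_diagonal(affinity, 0.0)                            # no self-messages
    affinity = np.clip(affinity, 0.0, None)                    # nonnegative edges
    weights = affinity / (affinity.sum(axis=1, keepdims=True) + 1e-8)
    messages = weights @ nodes                                  # neighbor aggregation
    updated = np.tanh(nodes @ w_self + messages @ w_msg)        # simple nonlinear update
    return updated[:-1], updated[-1]                            # new prototypes, new sample

def classify(prototypes, sample):
    # Score the video sample against each (possibly unseen) category prototype.
    return normalize(prototypes) @ normalize(sample)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, num_classes = 300, 5                                     # e.g. word2vec-sized vectors
    prototypes = rng.standard_normal((num_classes, d))          # category word embeddings
    sample = rng.standard_normal(d)                             # pooled video/object features
    w_self = rng.standard_normal((d, d)) * 0.01
    w_msg = rng.standard_normal((d, d)) * 0.01
    for _ in range(2):                                          # two propagation rounds
        prototypes, sample = message_passing_step(prototypes, sample, w_self, w_msg)
    print("class scores:", classify(prototypes, sample))
```

In this toy setup the classification rule is simply the cosine similarity between the updated sample node and each updated prototype; the actual paper's branch designs, attribute nodes, and training objectives are not reproduced here.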