Neural Multimodal Cooperative Learning Toward Micro-Video Understanding

The prevailing characteristics of micro-videos limit the descriptive power of each individual modality. Several pioneering efforts have proposed micro-video representations, but they are limited to implicitly exploring the consistency between the modalities and ignore their complementarity. In this paper, we focus on explicitly separating the consistent features and the complementary features from the mixed information and harnessing their combination to improve the expressiveness of each modality. Toward this end, we present a neural multimodal cooperative learning (NMCL) model that splits the consistent component and the complementary component via a novel relation-aware attention mechanism. Specifically, the computed attention score measures the correlation between the features extracted from different modalities. A threshold is then learned for each modality to distinguish the consistent and complementary features according to the score. Thereafter, we integrate the consistent parts to enhance the representations and supplement the complementary ones to reinforce the information in each modality. To address redundant information, which may cause overfitting and is hard to distinguish, we devise an attention network that dynamically captures the features closely related to the category and outputs a discriminative representation for prediction. Experimental results on a real-world micro-video dataset show that NMCL outperforms state-of-the-art methods. Further studies verify the effectiveness and the cooperative effects brought by the attention mechanism.
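
The split described in the abstract can be pictured with a short sketch. The following Python/PyTorch toy is illustrative only and not the authors' NMCL implementation: a learned relation score between two modality features is compared against a learned per-modality threshold to softly route each feature dimension into a consistent part (used to enhance the representation) or a complementary part (used to supplement it). All names (RelationAwareSplit, d_model, host, guest) are hypothetical.

    import torch
    import torch.nn as nn

    class RelationAwareSplit(nn.Module):
        """Toy sketch: split one modality's features into consistent vs.
        complementary parts, gated by a cross-modal relation score and a
        learned threshold (hypothetical names throughout)."""

        def __init__(self, d_model: int):
            super().__init__()
            # Per-dimension relation score between the two modality features.
            self.score = nn.Linear(2 * d_model, d_model)
            # One learnable threshold vector for this ("host") modality.
            self.threshold = nn.Parameter(torch.zeros(d_model))

        def forward(self, host: torch.Tensor, guest: torch.Tensor):
            # host, guest: (batch, d_model) features from two modalities.
            s = torch.sigmoid(self.score(torch.cat([host, guest], dim=-1)))
            # Soft comparison of the score against the learned threshold.
            gate = torch.sigmoid(s - torch.sigmoid(self.threshold))
            consistent = gate * host             # correlated with the other modality
            complementary = (1.0 - gate) * host  # information the other modality lacks
            # Enhance the host with the consistent part of the guest modality.
            enhanced = host + gate * guest
            return consistent, complementary, enhanced

A prediction head, such as the attention network mentioned in the abstract, would then consume the enhanced and complementary representations of all modalities to produce the final discriminative representation.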

Detailed description

Bibliographic details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 29 (2020), dated: 15, pages 1-14
First author: Wei, Yinwei (author)
Other authors: Wang, Xiang, Guan, Weili, Nie, Liqiang, Lin, Zhouchen, Chen, Baoquan
Format: Online article
Language: English
Published: 2020
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM298790696
003 DE-627
005 20231225094930.0
007 cr uuu---uuuuu
008 231225s2020 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2019.2923608  |2 doi 
028 5 2 |a pubmed24n0995.xml 
035 |a (DE-627)NLM298790696 
035 |a (NLM)31265394 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Wei, Yinwei  |e verfasserin  |4 aut 
245 1 0 |a Neural Multimodal Cooperative Learning Toward Micro-Video Understanding 
264 1 |c 2020 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 27.02.2020 
500 |a Date Revised 27.02.2020 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a The prevailing characteristics of micro-videos limit the descriptive power of each individual modality. Several pioneering efforts have proposed micro-video representations, but they are limited to implicitly exploring the consistency between the modalities and ignore their complementarity. In this paper, we focus on explicitly separating the consistent features and the complementary features from the mixed information and harnessing their combination to improve the expressiveness of each modality. Toward this end, we present a neural multimodal cooperative learning (NMCL) model that splits the consistent component and the complementary component via a novel relation-aware attention mechanism. Specifically, the computed attention score measures the correlation between the features extracted from different modalities. A threshold is then learned for each modality to distinguish the consistent and complementary features according to the score. Thereafter, we integrate the consistent parts to enhance the representations and supplement the complementary ones to reinforce the information in each modality. To address redundant information, which may cause overfitting and is hard to distinguish, we devise an attention network that dynamically captures the features closely related to the category and outputs a discriminative representation for prediction. Experimental results on a real-world micro-video dataset show that NMCL outperforms state-of-the-art methods. Further studies verify the effectiveness and the cooperative effects brought by the attention mechanism.
650 4 |a Journal Article 
700 1 |a Wang, Xiang  |e verfasserin  |4 aut 
700 1 |a Guan, Weili  |e verfasserin  |4 aut 
700 1 |a Nie, Liqiang  |e verfasserin  |4 aut 
700 1 |a Lin, Zhouchen  |e verfasserin  |4 aut 
700 1 |a Chen, Baoquan  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 29(2020) vom: 15., Seite 1-14  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:29  |g year:2020  |g day:15  |g pages:1-14 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2019.2923608  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 29  |j 2020  |b 15  |h 1-14