Re-Attention for Visual Question Answering

A simultaneous understanding of questions and images is crucial in Visual Question Answering (VQA). While the existing models have achieved satisfactory performance by associating questions with key objects in images, the answers also contain rich information that can be used to describe the visual contents in images. In this paper, we propose a re-attention framework to utilize the information in answers for the VQA task. The framework first learns the initial attention weights for the objects by calculating the similarity of each word-object pair in the feature space. Then, the visual attention map is reconstructed by re-attending the objects in images based on the answer. By keeping the initial visual attention map and the reconstructed one consistent, the learned visual attention map can be corrected by the answer information. Besides, we introduce a gate mechanism to automatically control the contribution of re-attention to model training based on the entropy of the learned initial visual attention maps. We conduct experiments on three benchmark datasets, and the results demonstrate that the proposed model performs favorably against state-of-the-art methods.
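The mechanism described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature shapes, the dot-product similarity, the squared-error consistency term, and the normalized-entropy gate are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, objects):
    # Attention over image objects: softmax of dot-product similarity
    # between a pooled text feature and each object feature.
    return softmax(objects @ query)

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps))

rng = np.random.default_rng(0)
objects = rng.normal(size=(5, 8))   # 5 detected-object features (dim 8)
question = rng.normal(size=8)       # pooled question feature
answer = rng.normal(size=8)         # pooled answer feature (train-time only)

# Initial attention map from the question; re-attention map from the answer.
a_init = attend(question, objects)
a_re = attend(answer, objects)

# Consistency term keeps the two maps close, so the answer can
# correct the question-driven attention during training.
consistency = np.sum((a_init - a_re) ** 2)

# Entropy gate: a flat (high-entropy, i.e. uncertain) initial map
# lets re-attention contribute more; a peaked map is left mostly alone.
gate = entropy(a_init) / np.log(len(a_init))  # normalized to [0, 1]
re_attention_loss = gate * consistency
```

At inference time only the question-driven branch would be used; the answer branch and the gated consistency term affect training alone.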

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 30(2021) from: 15., pages 6730-6743
Main Author: Guo, Wenya (Author)
Other Authors: Zhang, Ying, Yang, Jufeng, Yuan, Xiaojie
Format: Online Article
Language: English
Published: 2021
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM328288438
003 DE-627
005 20231225202631.0
007 cr uuu---uuuuu
008 231225s2021 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2021.3097180  |2 doi 
028 5 2 |a pubmed24n1094.xml 
035 |a (DE-627)NLM328288438 
035 |a (NLM)34283714 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Guo, Wenya  |e verfasserin  |4 aut 
245 1 0 |a Re-Attention for Visual Question Answering 
264 1 |c 2021 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 27.07.2021 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a A simultaneous understanding of questions and images is crucial in Visual Question Answering (VQA). While the existing models have achieved satisfactory performance by associating questions with key objects in images, the answers also contain rich information that can be used to describe the visual contents in images. In this paper, we propose a re-attention framework to utilize the information in answers for the VQA task. The framework first learns the initial attention weights for the objects by calculating the similarity of each word-object pair in the feature space. Then, the visual attention map is reconstructed by re-attending the objects in images based on the answer. By keeping the initial visual attention map and the reconstructed one consistent, the learned visual attention map can be corrected by the answer information. Besides, we introduce a gate mechanism to automatically control the contribution of re-attention to model training based on the entropy of the learned initial visual attention maps. We conduct experiments on three benchmark datasets, and the results demonstrate that the proposed model performs favorably against state-of-the-art methods. 
650 4 |a Journal Article 
700 1 |a Zhang, Ying  |e verfasserin  |4 aut 
700 1 |a Yang, Jufeng  |e verfasserin  |4 aut 
700 1 |a Yuan, Xiaojie  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 30(2021) vom: 15., Seite 6730-6743  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:30  |g year:2021  |g day:15  |g pages:6730-6743 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2021.3097180  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 30  |j 2021  |b 15  |h 6730-6743