Structured Multimodal Attentions for TextVQA


Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 44(2022), 12, from 02 Dec., Pages 9603-9614
First Author: Gao, Chenyu (Author)
Other Authors: Zhu, Qi, Wang, Peng, Li, Hui, Liu, Yuliang, Hengel, Anton van den, Wu, Qi
Format: Online Article
Language: English
Published: 2022
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Keywords: Journal Article; Research Support, Non-U.S. Gov't
LEADER 01000naa a22002652 4500
001 NLM333924185
003 DE-627
005 20231225222641.0
007 cr uuu---uuuuu
008 231225s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2021.3132034  |2 doi 
028 5 2 |a pubmed24n1113.xml 
035 |a (DE-627)NLM333924185 
035 |a (NLM)34855584 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Gao, Chenyu  |e verfasserin  |4 aut 
245 1 0 |a Structured Multimodal Attentions for TextVQA 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 09.11.2022 
500 |a Date Revised 19.11.2022 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a Text-based Visual Question Answering (TextVQA) is a recently raised challenge requiring models to read text in images and answer natural language questions by jointly reasoning over the question, textual information, and visual content. The introduction of this new modality - Optical Character Recognition (OCR) tokens - ushers in demanding reasoning requirements. Most state-of-the-art (SoTA) VQA methods fail when answering these questions for three reasons: (1) poor text reading ability; (2) lack of textual-visual reasoning capacity; and (3) the choice of a discriminative answering mechanism over a generative counterpart (although this has been further addressed by M4C). In this paper, we propose an end-to-end structured multimodal attention (SMA) neural network that mainly addresses the first two issues above. SMA first uses a structural graph representation to encode the object-object, object-text and text-text relationships appearing in the image, and then designs a multimodal graph attention network to reason over it. Finally, the outputs from the above modules are processed by a global-local attentional answering module to produce an answer, iteratively splicing together tokens from both the OCR results and the general vocabulary, following M4C. Our proposed model outperforms the SoTA models on the TextVQA dataset and on two tasks of the ST-VQA dataset among all models except the pre-training-based TAP. Demonstrating strong reasoning ability, it also won first place in the TextVQA Challenge 2020. We extensively test different OCR methods on several reasoning models and investigate the impact of gradually increasing OCR performance on the TextVQA benchmark. With better OCR results, all models show dramatic improvements in VQA accuracy, but our model benefits most, thanks to its strong textual-visual reasoning ability. 
To provide an upper bound for our method and make a fair testing base available for further work, we also provide human-annotated ground-truth OCR annotations for the TextVQA dataset, which were not given in the original release. The code and ground-truth OCR annotations for the TextVQA dataset are available at https://github.com/ChenyuGAO-CS/SMA 
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Zhu, Qi  |e verfasserin  |4 aut 
700 1 |a Wang, Peng  |e verfasserin  |4 aut 
700 1 |a Li, Hui  |e verfasserin  |4 aut 
700 1 |a Liu, Yuliang  |e verfasserin  |4 aut 
700 1 |a Hengel, Anton van den  |e verfasserin  |4 aut 
700 1 |a Wu, Qi  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 44(2022), 12 vom: 02. Dez., Seite 9603-9614  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:44  |g year:2022  |g number:12  |g day:02  |g month:12  |g pages:9603-9614 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2021.3132034  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 44  |j 2022  |e 12  |b 02  |c 12  |h 9603-9614