LEADER 01000naa a22002652 4500
001 NLM359117864
003 DE-627
005 20231226080212.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7  |a 10.1109/TIP.2023.3286259 |2 doi
028 52 |a pubmed24n1197.xml
035    |a (DE-627)NLM359117864
035    |a (NLM)37410654
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Li, Zhenyang |e verfasserin |4 aut
245 10 |a Joint Answering and Explanation for Visual Commonsense Reasoning
264  1 |c 2023
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 13.07.2023
500    |a Date Revised 18.07.2023
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a Visual Commonsense Reasoning (VCR), regarded as a challenging extension of Visual Question Answering (VQA), pursues higher-level visual comprehension. VCR comprises two complementary processes: question answering over a given image and rationale inference that explains the answer. Over the years, a variety of VCR methods have steadily advanced performance on the benchmark dataset. Despite their significance, these methods often treat the two processes separately and hence decompose VCR into two independent VQA instances. As a result, the pivotal connection between question answering and rationale inference is broken, rendering existing efforts less faithful to visual reasoning. To study this issue empirically, we perform in-depth explorations of both language shortcuts and generalization capability. Based on our findings, we propose a plug-and-play knowledge distillation enhanced framework that couples the question answering and rationale inference processes. The key contribution is the introduction of a new branch, which serves as a relay bridging the two processes. Given that our framework is model-agnostic, we apply it to existing popular baselines and validate its effectiveness on the benchmark dataset. As the experimental results demonstrate, these baselines, when equipped with our method, all achieve consistent and significant performance improvements, verifying the viability of coupling the two processes.
650  4 |a Journal Article
700 1  |a Guo, Yangyang |e verfasserin |4 aut
700 1  |a Wang, Kejie |e verfasserin |4 aut
700 1  |a Wei, Yinwei |e verfasserin |4 aut
700 1  |a Nie, Liqiang |e verfasserin |4 aut
700 1  |a Kankanhalli, Mohan |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society |d 1992 |g 32(2023) vom: 06., Seite 3836-3846 |w (DE-627)NLM09821456X |x 1941-0042 |7 nnns
773 18 |g volume:32 |g year:2023 |g day:06 |g pages:3836-3846
856 40 |u http://dx.doi.org/10.1109/TIP.2023.3286259 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 32 |j 2023 |b 06 |h 3836-3846