CRIC : A VQA Dataset for Compositional Reasoning on Vision and Commonsense

Detailed description

Bibliographic details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - 45 (2023), no. 5, 01 May, pages 5561-5578
First author: Gao, Difei (author)
Other authors: Wang, Ruiping, Shan, Shiguang, Chen, Xilin
Format: Online article
Language: English
Published: 2023
Access to parent work: IEEE Transactions on Pattern Analysis and Machine Intelligence
Keywords: Journal Article
Description
Abstract: Alternately reasoning over visual facts and commonsense is fundamental for an advanced visual question answering (VQA) system. This ability requires models to go beyond a literal understanding of commonsense. The system should not just treat objects as an entry point for querying background knowledge, but fully ground commonsense in the visual world and imagine the possible relationships between objects, e.g., "fork, can lift, food". To comprehensively evaluate such abilities, we propose a VQA benchmark, Compositional Reasoning on vIsion and Commonsense (CRIC), which introduces new types of questions requiring compositional reasoning on vision and commonsense, together with an evaluation metric that integrates the correctness of both answering and commonsense grounding. To collect such questions, along with the rich additional annotations needed to support the metric, we also propose an automatic algorithm that generates question samples from the scene graph associated with each image and a relevant knowledge graph. We further analyze several representative types of VQA models on the CRIC dataset. Experimental results show that grounding commonsense in image regions and jointly reasoning on vision and commonsense remain challenging for current approaches. The dataset is available at https://cricvqa.github.io
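The generation procedure described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual algorithm: the graph schemas, the question template, and every function name here are hypothetical assumptions. The idea is to pair a commonsense triple from the knowledge graph with objects that the scene graph confirms are visible, so that each generated question comes with the grounding annotation the evaluation metric needs.

```python
# Hypothetical sketch of composing commonsense questions from a scene graph
# and a knowledge graph. All structures and names are illustrative assumptions.

# Scene graph for one image: (subject, relation, object) visual facts.
scene_graph = [
    ("fork", "on", "table"),
    ("food", "on", "plate"),
]

# Knowledge graph: (concept, relation, concept) commonsense facts.
knowledge_graph = [
    ("fork", "can lift", "food"),
    ("knife", "can cut", "food"),   # "knife" is not visible in this image
]

def generate_questions(scene_graph, knowledge_graph):
    """Pair each commonsense fact whose concepts are both visible in the
    image with a simple question template, keeping the grounded fact as
    an annotation for the grounding-aware evaluation metric."""
    visible = {s for s, _, _ in scene_graph} | {o for _, _, o in scene_graph}
    samples = []
    for subj, rel, obj in knowledge_graph:
        if subj in visible and obj in visible:
            samples.append({
                "question": f"Which object in the image {rel} the {obj}?",
                "answer": subj,
                # Annotation supporting the commonsense-grounding metric:
                "grounded_fact": (subj, rel, obj),
            })
    return samples

samples = generate_questions(scene_graph, knowledge_graph)
```

Only the "fork, can lift, food" fact yields a question here, since "knife" does not appear in the scene graph; filtering on visibility is what forces the question to be answerable from the image rather than from the knowledge graph alone.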
Description: Date Completed 10.04.2023
Date Revised 11.04.2023
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN:1939-3539
DOI:10.1109/TPAMI.2022.3210780