Depth and Video Segmentation Based Visual Attention for Embodied Question Answering


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - Vol. 45 (2023), No. 6, 4 June, pp. 6807-6819
First author: Luo, Haonan (Author)
Other authors: Lin, Guosheng, Yao, Yazhou, Liu, Fayao, Liu, Zichuan, Tang, Zhenmin
Format: Online article
Language: English
Published: 2023
Access to parent work: IEEE Transactions on Pattern Analysis and Machine Intelligence
Keywords: Journal Article
Description
Abstract: Embodied Question Answering (EQA) is a newly defined research area in which an agent must answer a user's questions by exploring a real-world environment. It has attracted increasing research interest due to its broad applications in personal assistants and in-home robots. Most existing methods perform poorly in answering and navigation accuracy because they lack fine-grained semantic information, robustness to ambiguity, and 3D spatial information about the virtual environment. To tackle these problems, we propose a depth- and segmentation-based visual attention mechanism for Embodied Question Answering. First, we extract local semantic features by introducing a novel high-speed video segmentation framework. Then, guided by the extracted semantic features, a depth- and segmentation-based visual attention mechanism is proposed for the Visual Question Answering (VQA) sub-task. Further, a feature fusion strategy is designed to guide the navigator's training process without much additional computational cost. Ablation experiments show that our method effectively boosts the performance of the VQA and navigation modules, leading to overall improvements of 4.9% and 5.6% in EQA accuracy on the House3D and Matterport3D datasets, respectively.
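The abstract only outlines the method at a high level. For illustration, the following is a minimal PyTorch sketch of one way a depth- and segmentation-guided, question-conditioned visual attention module could be wired: RGB, depth, and segmentation feature maps are projected into a shared space, combined with the question embedding, and used to produce spatial attention over the RGB features. All module names, tensor shapes, and the additive fusion are assumptions made for this sketch, not the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthSegGuidedAttention(nn.Module):
    """Hypothetical sketch: question-conditioned spatial attention guided by
    depth and segmentation features (illustrative, not the paper's model)."""

    def __init__(self, rgb_dim=128, depth_dim=32, seg_dim=32, q_dim=128):
        super().__init__()
        # 1x1 convolutions project each modality into a shared attention space.
        self.rgb_proj = nn.Conv2d(rgb_dim, q_dim, kernel_size=1)
        self.depth_proj = nn.Conv2d(depth_dim, q_dim, kernel_size=1)
        self.seg_proj = nn.Conv2d(seg_dim, q_dim, kernel_size=1)
        self.score = nn.Conv2d(q_dim, 1, kernel_size=1)

    def forward(self, rgb, depth, seg, question_emb):
        # rgb: (B, rgb_dim, H, W); depth: (B, depth_dim, H, W);
        # seg: (B, seg_dim, H, W); question_emb: (B, q_dim).
        fused = self.rgb_proj(rgb) + self.depth_proj(depth) + self.seg_proj(seg)
        q = question_emb[:, :, None, None]            # broadcast over H and W
        scores = self.score(torch.tanh(fused + q))    # (B, 1, H, W)
        b, _, h, w = scores.shape
        attn = F.softmax(scores.view(b, 1, h * w), dim=-1).view(b, 1, h, w)
        # Attention-weighted pooling of RGB features for the VQA answer head.
        pooled = (rgb * attn).sum(dim=(2, 3))         # (B, rgb_dim)
        return pooled, attn

# Example usage with dummy tensors:
# module = DepthSegGuidedAttention()
# pooled, attn = module(torch.randn(2, 128, 14, 14),
#                       torch.randn(2, 32, 14, 14),
#                       torch.randn(2, 32, 14, 14),
#                       torch.randn(2, 128))

In such a design, the attention-pooled feature would feed the VQA answering head, while the same fused multimodal features could, as the abstract suggests, also supervise the navigator via a feature fusion strategy.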
Description: Date Completed 07.05.2023
Date Revised 07.05.2023
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN: 1939-3539
DOI: 10.1109/TPAMI.2021.3139957