Fs-DSM : Few-Shot Diagram-Sentence Matching via Cross-Modal Attention Graph Model

Bibliographic Details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 30 (2021), 23, pages 8102-8115
First author: Hu, Xin (author)
Other authors: Zhang, Lingling, Liu, Jun, Zheng, Qinghua, Zhou, Jianlong
Format: Online article
Language: English
Published: 2021
Access to parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Description
Summary: Diagram-sentence matching is a valuable research topic because it can help learners understand diagrams effectively with the assistance of sentences. However, diagrams and sentences contain many uncommon objects, i.e., few-shot contents. Existing methods for image-sentence matching have great limitations when applied to diagrams, because they focus on high-frequency objects during training and ignore uncommon objects. In addition, the specialized nature of diagrams makes their semantics non-intuitive. In this work, we propose a cross-modal attention graph model for the few-shot diagram-sentence matching task, named Fs-DSM. It is composed of three modules. The graph initialization module regards the region-level diagram features and word-level sentence features as the nodes of Fs-DSM, with edges represented as similarities between nodes. The information propagation module is the key component of Fs-DSM: few-shot contents are recognized by an uncommon-object recognition strategy, and the nodes are then updated by a neighborhood aggregation procedure with cross-modal propagation between all visual and textual nodes, while the edges are recomputed from the new node features. The global association module integrates the features of regions and words to represent the global diagrams and sentences. Through comprehensive experiments on both few-shot and conventional image-sentence matching, we demonstrate that Fs-DSM achieves superior performance over competitors on the AI2D [Formula: see text] diagram dataset and two public benchmark datasets with natural images.
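The three modules described in the abstract can be illustrated with a minimal numerical sketch. This is not the paper's implementation: cosine similarity for the edges, a softmax-weighted neighborhood aggregation, and mean pooling for the global representation are all assumptions made here for illustration, and the uncommon-object recognition strategy is omitted.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def init_graph(regions, words):
    # Graph initialization: nodes are region-level diagram features
    # stacked with word-level sentence features; edges are the
    # similarities between all node pairs.
    nodes = np.vstack([regions, words])
    edges = cosine_sim(nodes, nodes)
    return nodes, edges

def propagate(nodes, edges, steps=2):
    # Information propagation: each node is updated with a
    # similarity-weighted aggregate over ALL nodes (visual and
    # textual, i.e., cross-modal), then the edges are recomputed
    # from the new node features.
    for _ in range(steps):
        w = np.exp(edges)
        w = w / w.sum(axis=1, keepdims=True)        # softmax over neighbors
        nodes = 0.5 * nodes + 0.5 * (w @ nodes)     # residual-style update (assumed)
        edges = cosine_sim(nodes, nodes)
    return nodes, edges

def global_score(regions, words, steps=2):
    # Global association: pool the updated region and word nodes into
    # global diagram/sentence vectors and score the pair.
    nodes, edges = init_graph(regions, words)
    nodes, _ = propagate(nodes, edges, steps)
    d = nodes[: len(regions)].mean(axis=0)
    s = nodes[len(regions):].mean(axis=0)
    return float(d @ s / (np.linalg.norm(d) * np.linalg.norm(s)))
```

In a matching setting, `global_score` would be computed for every candidate diagram-sentence pair and the pairs ranked by score; the cross-modal aggregation lets textual nodes pull region features toward sentence semantics (and vice versa) before the global comparison.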
Description: Date Revised 29.09.2021
published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN:1941-0042
DOI:10.1109/TIP.2021.3112294