Enhancing Visual Grounding in Vision-Language Pre-Training With Position-Guided Text Prompts

Vision-Language Pre-Training (VLP) has demonstrated remarkable potential in aligning image and text pairs, paving the way for a wide range of cross-modal learning tasks. Nevertheless, we have observed that VLP models often fall short in terms of visual grounding and localization capabilities, which...

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - 46 (2024), No. 5, 01 Apr., pp. 3406-3421
Main Author: Wang, Alex Jinpeng (Author)
Other Authors: Zhou, Pan, Shou, Mike Zheng, Yan, Shuicheng
Format: Online Article
Language: English
Published: 2024
Parent Work: IEEE Transactions on Pattern Analysis and Machine Intelligence
Subjects: Journal Article