Multi-Stage Image-Language Cross-Generative Fusion Network for Video-Based Referring Expression Comprehension

Video-based referring expression comprehension is a challenging task that requires locating the referred object in each video frame of a given video. While many existing approaches treat this task as an object-tracking problem, their performance is heavily reliant on the quality of the tracking templates. Furthermore, when there is not enough annotation data to assist in template selection, the tracking may fail. Other approaches are based on object detection, but they often use only one adjacent frame of the key frame for feature learning, which limits their ability to establish the relationship between different frames. In addition, improving the fusion of features from multiple frames and referring expressions to effectively locate the referents remains an open problem. To address these issues, we propose a novel approach called the Multi-Stage Image-Language Cross-Generative Fusion Network (MILCGF-Net), which is based on one-stage object detection. Our approach includes a Frame Dense Feature Aggregation module for dense feature learning of adjacent time sequences. Additionally, we propose an Image-Language Cross-Generative Fusion module as the main body of multi-stage learning to generate cross-modal features by calculating the similarity between video and expression, and then refining and fusing the generated features. To further enhance the cross-modal feature generation capability of our model, we introduce a consistency loss that constrains the image-language similarity and language-image similarity matrices during feature generation. We evaluate our proposed approach on three public datasets and demonstrate its effectiveness through comprehensive experimental results.
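The abstract describes two mechanisms concretely enough to sketch: similarity-driven cross-modal feature generation, and a consistency loss tying the image-language and language-image similarity matrices together. The PyTorch sketch below is a minimal illustration only; the class name CrossGenerativeFusionSketch, the query/key projections, the tensor shapes, and the MSE form of the consistency term are all assumptions, since this record does not give the actual MILCGF-Net formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossGenerativeFusionSketch(nn.Module):
    """Hypothetical sketch of similarity-driven cross-modal generation.

    Separate query/key projections per direction mean the image->language
    and language->image similarity matrices are NOT trivially transposes
    of each other, which is what makes a consistency loss meaningful.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.img_q = nn.Linear(dim, dim)   # image features as queries
        self.img_k = nn.Linear(dim, dim)   # image features as keys
        self.lang_q = nn.Linear(dim, dim)  # language features as queries
        self.lang_k = nn.Linear(dim, dim)  # language features as keys

    def forward(self, img_feats: torch.Tensor, lang_feats: torch.Tensor):
        # img_feats: (N_img, D) frame/region features
        # lang_feats: (N_word, D) word-level expression features
        iq = F.normalize(self.img_q(img_feats), dim=-1)
        ik = F.normalize(self.img_k(img_feats), dim=-1)
        lq = F.normalize(self.lang_q(lang_feats), dim=-1)
        lk = F.normalize(self.lang_k(lang_feats), dim=-1)

        sim_il = iq @ lk.t()  # image->language similarity, (N_img, N_word)
        sim_li = lq @ ik.t()  # language->image similarity, (N_word, N_img)

        # Generate each modality's features from the other, weighted by
        # the softmax-normalized similarity rows.
        lang_generated = F.softmax(sim_il, dim=-1) @ lang_feats  # (N_img, D)
        img_generated = F.softmax(sim_li, dim=-1) @ img_feats    # (N_word, D)
        return lang_generated, img_generated, sim_il, sim_li


def consistency_loss(sim_il: torch.Tensor, sim_li: torch.Tensor) -> torch.Tensor:
    """Assumed form of the consistency term: penalize disagreement between
    the image-language matrix and the transposed language-image matrix."""
    return F.mse_loss(sim_il, sim_li.t())


if __name__ == "__main__":
    fusion = CrossGenerativeFusionSketch(dim=256)
    img = torch.randn(8, 256)   # e.g. 8 aggregated frame features
    txt = torch.randn(5, 256)   # e.g. 5 word embeddings
    lang_gen, img_gen, s_il, s_li = fusion(img, txt)
    print(lang_gen.shape, img_gen.shape, consistency_loss(s_il, s_li).item())
```

Using distinct projections per direction is a design choice made here so the two similarity matrices can actually disagree; with a single shared projection they would be exact transposes and the consistency term would be identically zero.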

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 33(2024), dated: 01., pages 3256-3270
First Author: Zhang, Yujia (Author)
Other Authors: Li, Qianzhong, Pan, Yi, Zhao, Xiaoguang, Tan, Min
Format: Online Article
Language: English
Published: 2024
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM371850339
003 DE-627
005 20240509232746.0
007 cr uuu---uuuuu
008 240503s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2024.3394260  |2 doi 
028 5 2 |a pubmed24n1402.xml 
035 |a (DE-627)NLM371850339 
035 |a (NLM)38696298 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Zhang, Yujia  |e verfasserin  |4 aut 
245 1 0 |a Multi-Stage Image-Language Cross-Generative Fusion Network for Video-Based Referring Expression Comprehension 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 08.05.2024 
500 |a Date Revised 09.05.2024 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a Video-based referring expression comprehension is a challenging task that requires locating the referred object in each video frame of a given video. While many existing approaches treat this task as an object-tracking problem, their performance is heavily reliant on the quality of the tracking templates. Furthermore, when there is not enough annotation data to assist in template selection, the tracking may fail. Other approaches are based on object detection, but they often use only one adjacent frame of the key frame for feature learning, which limits their ability to establish the relationship between different frames. In addition, improving the fusion of features from multiple frames and referring expressions to effectively locate the referents remains an open problem. To address these issues, we propose a novel approach called the Multi-Stage Image-Language Cross-Generative Fusion Network (MILCGF-Net), which is based on one-stage object detection. Our approach includes a Frame Dense Feature Aggregation module for dense feature learning of adjacent time sequences. Additionally, we propose an Image-Language Cross-Generative Fusion module as the main body of multi-stage learning to generate cross-modal features by calculating the similarity between video and expression, and then refining and fusing the generated features. To further enhance the cross-modal feature generation capability of our model, we introduce a consistency loss that constrains the image-language similarity and language-image similarity matrices during feature generation. We evaluate our proposed approach on three public datasets and demonstrate its effectiveness through comprehensive experimental results.
650 4 |a Journal Article 
700 1 |a Li, Qianzhong  |e verfasserin  |4 aut 
700 1 |a Pan, Yi  |e verfasserin  |4 aut 
700 1 |a Zhao, Xiaoguang  |e verfasserin  |4 aut 
700 1 |a Tan, Min  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 33(2024) vom: 01., Seite 3256-3270  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:33  |g year:2024  |g day:01  |g pages:3256-3270 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2024.3394260  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 33  |j 2024  |b 01  |h 3256-3270