Referring Segmentation via Encoder-Fused Cross-Modal Attention Network

This paper focuses on referring segmentation, which aims to selectively segment the visual region in an image (or video) that corresponds to a given referring expression. However, existing methods usually consider the interaction between multi-modal features only at the decoding end of the network. Specifically, they fuse the visual features of each scale with the language separately, thus ignoring the correlation between multi-scale features. In this work, we present an encoder fusion network (EFN), which transfers the multi-modal feature learning process from the decoding end to the encoding end and realizes the gradual refinement of multi-modal features by the language. In EFN, we also adopt a co-attention mechanism to promote the mutual alignment of language and visual information in feature space. In the decoding stage, a boundary enhancement module (BEM) is proposed to strengthen the network's attention to the details of the target. For video data, we introduce an asymmetric cross-frame attention module (ACFM) that effectively captures temporal information by computing the relationship between each pixel of the current frame and each pooled sub-region of the reference frames. Extensive experiments on referring image/video segmentation datasets show that our method outperforms the state of the art.
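
The co-attention mechanism is only named in the abstract, not specified. The sketch below shows one common bidirectional formulation in which pixels attend to words and words attend to pixels over a shared affinity matrix; the class name, the linear projections, and the single-head residual design are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of vision-language co-attention, assuming word-level
# language features and flattened visual features. All names are hypothetical.
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.vis_proj = nn.Linear(dim, dim)
        self.lang_proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, vis: torch.Tensor, lang: torch.Tensor):
        # vis:  (B, N, C) flattened visual features (N = H*W pixels)
        # lang: (B, T, C) word-level language features (T words)
        affinity = self.vis_proj(vis) @ self.lang_proj(lang).transpose(1, 2) * self.scale  # (B, N, T)
        # Each pixel aggregates words and each word aggregates pixels, so the
        # two modalities are mutually aligned in the shared feature space.
        vis_out = vis + torch.softmax(affinity, dim=-1) @ lang                  # (B, N, C)
        lang_out = lang + torch.softmax(affinity.transpose(1, 2), dim=-1) @ vis # (B, T, C)
        return vis_out, lang_out
```

Because both modalities are updated, a block like this can be applied at each encoder stage, which is consistent with the abstract's idea of gradually refining multi-modal features during encoding.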
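The asymmetric cross-frame attention module is described concretely enough to sketch: each pixel of the current frame attends to pooled sub-regions of a reference frame, so the key/value set is much smaller than the query set. The pooled grid size, the 1x1 projection layers, and the residual fusion below are assumptions for illustration, not the published architecture.

```python
# A minimal sketch of the asymmetric cross-frame attention idea: full-resolution
# queries from the current frame, pooled keys/values from a reference frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricCrossFrameAttention(nn.Module):
    def __init__(self, channels: int, pooled_size: int = 6):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.pooled_size = pooled_size  # hypothetical sub-region grid size
        self.scale = channels ** -0.5

    def forward(self, current: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        # current:   (B, C, H, W) features of the frame being segmented
        # reference: (B, C, H, W) features of a reference frame
        b, c, h, w = current.shape
        q = self.query(current).flatten(2).transpose(1, 2)        # (B, H*W, C)
        # Pool the reference frame into a small grid of sub-regions; the
        # attention is "asymmetric": H*W queries vs. only P*P keys.
        ref = F.adaptive_avg_pool2d(reference, self.pooled_size)  # (B, C, P, P)
        k = self.key(ref).flatten(2)                              # (B, C, P*P)
        v = self.value(ref).flatten(2).transpose(1, 2)            # (B, P*P, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)          # (B, H*W, P*P)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return current + out                                      # residual fusion
```

Pooling the reference frame keeps the attention cost at O(HW * P^2) rather than O((HW)^2), which matches the pixel-to-pooled-sub-region relationship the abstract describes.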


Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 45(2023), 6, 11 June, pages 7654-7667
Main Author: Feng, Guang (Author)
Other Authors: Zhang, Lihe, Sun, Jiayu, Hu, Zhiwei, Lu, Huchuan
Format: Online Article
Language: English
Published: 2023
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM348814933
003 DE-627
005 20231226041145.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2022.3221387  |2 doi 
028 5 2 |a pubmed24n1162.xml 
035 |a (DE-627)NLM348814933 
035 |a (NLM)36367919 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Feng, Guang  |e verfasserin  |4 aut 
245 1 0 |a Referring Segmentation via Encoder-Fused Cross-Modal Attention Network 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 07.05.2023 
500 |a Date Revised 07.05.2023 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a This paper focuses on referring segmentation, which aims to selectively segment the visual region in an image (or video) that corresponds to a given referring expression. However, existing methods usually consider the interaction between multi-modal features only at the decoding end of the network. Specifically, they fuse the visual features of each scale with the language separately, thus ignoring the correlation between multi-scale features. In this work, we present an encoder fusion network (EFN), which transfers the multi-modal feature learning process from the decoding end to the encoding end and realizes the gradual refinement of multi-modal features by the language. In EFN, we also adopt a co-attention mechanism to promote the mutual alignment of language and visual information in feature space. In the decoding stage, a boundary enhancement module (BEM) is proposed to strengthen the network's attention to the details of the target. For video data, we introduce an asymmetric cross-frame attention module (ACFM) that effectively captures temporal information by computing the relationship between each pixel of the current frame and each pooled sub-region of the reference frames. Extensive experiments on referring image/video segmentation datasets show that our method outperforms the state of the art.
650 4 |a Journal Article 
700 1 |a Zhang, Lihe  |e verfasserin  |4 aut 
700 1 |a Sun, Jiayu  |e verfasserin  |4 aut 
700 1 |a Hu, Zhiwei  |e verfasserin  |4 aut 
700 1 |a Lu, Huchuan  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 45(2023), 6 vom: 11. Juni, Seite 7654-7667  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:45  |g year:2023  |g number:6  |g day:11  |g month:06  |g pages:7654-7667 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2022.3221387  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 45  |j 2023  |e 6  |b 11  |c 06  |h 7654-7667