CCNet : Criss-Cross Attention for Semantic Segmentation

Contextual information is vital in visual understanding problems, such as semantic segmentation and object detection. We propose a criss-cross network (CCNet) for obtaining full-image contextual information in a very effective and efficient way. Concretely, for each pixel, a novel criss-cross attention module harvests the contextual information of all the pixels on its criss-cross path. By taking a further recurrent operation, each pixel can finally capture the full-image dependencies. Besides, a category consistent loss is proposed to enforce the criss-cross attention module to produce more discriminative features. Overall, CCNet has the following merits: 1) GPU memory friendly: compared with the non-local block, the proposed recurrent criss-cross attention module requires 11× less GPU memory. 2) High computational efficiency: the recurrent criss-cross attention reduces the FLOPs of the non-local block by about 85 percent. 3) State-of-the-art performance: we conduct extensive experiments on the semantic segmentation benchmarks Cityscapes and ADE20K, the human parsing benchmark LIP, the instance segmentation benchmark COCO, and the video segmentation benchmark CamVid. In particular, our CCNet achieves mIoU scores of 81.9, 45.76 and 55.47 percent on the Cityscapes test set, the ADE20K validation set and the LIP validation set respectively, which are new state-of-the-art results. The source code is available at https://github.com/speedinghzl/CCNet
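The single-pass criss-cross attention described in the abstract can be sketched in plain NumPy: for each pixel, attention is computed only over the H + W - 1 pixels in its row and column. This is a minimal illustration assuming per-pixel query/key/value feature maps are already given; it is not the authors' implementation (their official CUDA/PyTorch code is in the linked repository).

```python
import numpy as np

def criss_cross_attention(q, k, v):
    """One criss-cross attention pass over (H, W, C) feature maps.

    For each pixel (i, j), softmax attention is computed only over the
    H + W - 1 pixels on its criss-cross path (row i plus column j,
    counting (i, j) once), instead of all H * W pixels as in a
    non-local block.
    """
    H, W, C = q.shape
    out = np.zeros((H, W, C))
    for i in range(H):
        for j in range(W):
            # Criss-cross path: whole row i, plus column j without (i, j).
            path = [(i, jj) for jj in range(W)] + \
                   [(ii, j) for ii in range(H) if ii != i]
            keys = np.stack([k[a, b] for a, b in path])   # (H+W-1, C)
            vals = np.stack([v[a, b] for a, b in path])   # (H+W-1, C)
            scores = keys @ q[i, j]                       # (H+W-1,)
            w = np.exp(scores - scores.max())
            w /= w.sum()                                  # softmax weights
            out[i, j] = w @ vals                          # weighted sum
    return out
```

Applying this pass twice (the recurrent operation, R = 2, in the paper) lets any pixel influence any other, since every pair of pixels shares an intersection point of their criss-cross paths.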

Detailed Description

Bibliographic Details

Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 45(2023), 6, 30 June, pages 6896-6908
Main author: Huang, Zilong (Author)
Other authors: Wang, Xinggang, Wei, Yunchao, Huang, Lichao, Shi, Humphrey, Liu, Wenyu, Huang, Thomas S
Format: Online article
Language: English
Published: 2023
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Keywords: Journal Article
LEADER 01000naa a22002652 4500
001 NLM313260427
003 DE-627
005 20231225150209.0
007 cr uuu---uuuuu
008 231225s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2020.3007032  |2 doi 
028 5 2 |a pubmed24n1044.xml 
035 |a (DE-627)NLM313260427 
035 |a (NLM)32750802 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Huang, Zilong  |e verfasserin  |4 aut 
245 1 0 |a CCNet  |b Criss-Cross Attention for Semantic Segmentation 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 07.05.2023 
500 |a Date Revised 07.05.2023 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Contextual information is vital in visual understanding problems, such as semantic segmentation and object detection. We propose a criss-cross network (CCNet) for obtaining full-image contextual information in a very effective and efficient way. Concretely, for each pixel, a novel criss-cross attention module harvests the contextual information of all the pixels on its criss-cross path. By taking a further recurrent operation, each pixel can finally capture the full-image dependencies. Besides, a category consistent loss is proposed to enforce the criss-cross attention module to produce more discriminative features. Overall, CCNet has the following merits: 1) GPU memory friendly: compared with the non-local block, the proposed recurrent criss-cross attention module requires 11× less GPU memory. 2) High computational efficiency: the recurrent criss-cross attention reduces the FLOPs of the non-local block by about 85 percent. 3) State-of-the-art performance: we conduct extensive experiments on the semantic segmentation benchmarks Cityscapes and ADE20K, the human parsing benchmark LIP, the instance segmentation benchmark COCO, and the video segmentation benchmark CamVid. In particular, our CCNet achieves mIoU scores of 81.9, 45.76 and 55.47 percent on the Cityscapes test set, the ADE20K validation set and the LIP validation set respectively, which are new state-of-the-art results. The source code is available at https://github.com/speedinghzl/CCNet 
650 4 |a Journal Article 
700 1 |a Wang, Xinggang  |e verfasserin  |4 aut 
700 1 |a Wei, Yunchao  |e verfasserin  |4 aut 
700 1 |a Huang, Lichao  |e verfasserin  |4 aut 
700 1 |a Shi, Humphrey  |e verfasserin  |4 aut 
700 1 |a Liu, Wenyu  |e verfasserin  |4 aut 
700 1 |a Huang, Thomas S  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 45(2023), 6 vom: 30. Juni, Seite 6896-6908  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:45  |g year:2023  |g number:6  |g day:30  |g month:06  |g pages:6896-6908 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2020.3007032  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 45  |j 2023  |e 6  |b 30  |c 06  |h 6896-6908