SPA2Net : Structure-Preserved Attention Activated Network for Weakly Supervised Object Localization

Bibliographic Details

Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 32(2023), from the 17th, pages 5779-5793
Main author: Chen, Dong (Author)
Other authors: Pan, Xingjia, Tang, Fan, Dong, Weiming, Xu, Changsheng
Format: Online article
Language: English
Published: 2023
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM363401822
003 DE-627
005 20231226093253.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2023.3323793  |2 doi 
028 5 2 |a pubmed24n1211.xml 
035 |a (DE-627)NLM363401822 
035 |a (NLM)37847621 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Chen, Dong  |e verfasserin  |4 aut 
245 1 0 |a SPA2Net  |b Structure-Preserved Attention Activated Network for Weakly Supervised Object Localization 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 27.10.2023 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a By exploring the localizable representations in deep CNNs, weakly supervised object localization (WSOL) methods can determine the position of the object in each image while being trained only on a classification task. However, the partial activation problem caused by the discriminative function prevents the network from locating objects accurately. To alleviate this problem, we propose the Structure-Preserved Attention Activated Network (SPA2Net), a simple and effective one-stage WSOL framework that explores the structure-preserving ability of deep features. Unlike traditional WSOL approaches, we decouple the object localization task from the classification branch to reduce their mutual influence by introducing a localization branch that is refined online by a self-supervised, structure-preserving localization mask. Specifically, we employ high-order self-correlation as a structural prior to enhance the perception of spatial interactions within convolutional features. By succinctly combining this structural prior with spatial attention, the activations of SPA2Net spread from object parts to the whole object during training. To avoid the structure-missing issue caused by the classification network, we further employ a restricted activation loss (RAL) to distinguish foreground from background along the channel dimension. In conjunction with the self-supervised localization branch, SPA2Net can directly predict a class-irrelevant localization map while prompting the network to pay more attention to the target region for accurate localization. Extensive experiments on two publicly available benchmarks, CUB-200-2011 and ILSVRC, show that SPA2Net achieves substantial and consistent performance gains over baseline approaches. The code and models are available at https://github.com/MsterDC/SPA2Net 
650 4 |a Journal Article 
700 1 |a Pan, Xingjia  |e verfasserin  |4 aut 
700 1 |a Tang, Fan  |e verfasserin  |4 aut 
700 1 |a Dong, Weiming  |e verfasserin  |4 aut 
700 1 |a Xu, Changsheng  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 32(2023) vom: 17., Seite 5779-5793  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:32  |g year:2023  |g day:17  |g pages:5779-5793 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2023.3323793  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 32  |j 2023  |b 17  |h 5779-5793