Synthesizing Supervision for Learning Deep Saliency Network without Human Annotation


Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 42(2020), 7, 20 July, pages 1755-1769
Main Author: Zhang, Dingwen (Author)
Other Authors: Han, Junwei, Zhang, Yu, Xu, Dong
Format: Online Article
Language: English
Published: 2020
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article, Research Support, Non-U.S. Gov't
LEADER 01000naa a22002652 4500
001 NLM294194630
003 DE-627
005 20231225080946.0
007 cr uuu---uuuuu
008 231225s2020 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2019.2900649  |2 doi 
028 5 2 |a pubmed24n0980.xml 
035 |a (DE-627)NLM294194630 
035 |a (NLM)30794509 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Zhang, Dingwen  |e verfasserin  |4 aut 
245 1 0 |a Synthesizing Supervision for Learning Deep Saliency Network without Human Annotation 
264 1 |c 2020 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 16.04.2021 
500 |a Date Revised 16.04.2021 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a Recently, the research field of salient object detection has undergone rapid and remarkable development along with the wide usage of deep neural networks. Trained on a large number of images annotated with strong pixel-level ground-truth masks, deep salient object detectors have achieved state-of-the-art performance. However, it is expensive and time-consuming to provide pixel-level ground-truth masks for each training image. To address this problem, this paper proposes one of the earliest frameworks for learning deep salient object detectors without requiring any human annotation. The supervisory signals used in our learning framework are generated through a novel supervision synthesis scheme, whose key insights are "knowledge source transition" and "supervision by fusion". Specifically, in the proposed learning framework, both an external knowledge source and an internal knowledge source are explored dynamically to provide informative cues for synthesizing the required supervision, while a two-stream fusion mechanism is established to implement the supervision synthesis process. Comprehensive experiments on four benchmark datasets demonstrate that the deep salient object detector trained by our newly proposed learning framework works well without any human-annotated masks, even approaching the upper bound obtained under fully supervised learning (within only a 3 percent performance gap). In addition, we apply the salient object detector learned with our annotation-free framework to assist the weakly supervised semantic segmentation task, demonstrating that our approach can also alleviate the heavy supplementary supervision required by existing weakly supervised semantic segmentation frameworks.
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Han, Junwei  |e verfasserin  |4 aut 
700 1 |a Zhang, Yu  |e verfasserin  |4 aut 
700 1 |a Xu, Dong  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 42(2020), 7 vom: 20. Juli, Seite 1755-1769  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:42  |g year:2020  |g number:7  |g day:20  |g month:07  |g pages:1755-1769 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2019.2900649  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 42  |j 2020  |e 7  |b 20  |c 07  |h 1755-1769