LEADER 01000naa a22002652 4500
001 NLM271849355
003 DE-627
005 20231224233618.0
007 cr uuu---uuuuu
008 231224s2017 xx |||||o 00| ||eng c
024 7  |a 10.1109/TIP.2017.2703081 |2 doi
028 52 |a pubmed24n0906.xml
035    |a (DE-627)NLM271849355
035    |a (NLM)28500000
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Qing Liu |e verfasserin |4 aut
245 10 |a Hierarchical Contour Closure-Based Holistic Salient Object Detection
264  1 |c 2017
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 11.12.2018
500    |a Date Revised 11.12.2018
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a Most existing salient object detection methods compute the saliency of pixels, patches, or superpixels by contrast. Such fine-grained, contrast-based methods suffer from saliency attenuation on the salient object and saliency overestimation of the background when the image is complex. To better compute saliency for complex images, we propose a hierarchical contour closure-based holistic salient object detection method in which two saliency cues, closure completeness and closure reliability, are thoroughly exploited. The former pops out holistic homogeneous regions bounded by completely closed outer contours, and the latter highlights holistic homogeneous regions bounded by outer contours that are, on average, highly reliable. Accordingly, we propose two computational schemes to compute the corresponding saliency maps in a hierarchical segmentation space. Finally, we propose a framework that combines the two saliency maps into the final saliency map. Experimental results on three publicly available datasets show that each single saliency map alone reaches state-of-the-art performance. Furthermore, our framework, which combines the two saliency maps, outperforms the state of the art. Additionally, we show that the proposed framework can easily be used to extend existing methods and substantially improve their performance.
650  4 |a Journal Article
700 1  |a Xiaopeng Hong |e verfasserin |4 aut
700 1  |a Beiji Zou |e verfasserin |4 aut
700 1  |a Jie Chen |e verfasserin |4 aut
700 1  |a Zailiang Chen |e verfasserin |4 aut
700 1  |a Guoying Zhao |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society |d 1992 |g 26(2017), 9 vom: 25. Sept., Seite 4537-4552 |w (DE-627)NLM09821456X |x 1941-0042 |7 nnns
773 18 |g volume:26 |g year:2017 |g number:9 |g day:25 |g month:09 |g pages:4537-4552
856 40 |u http://dx.doi.org/10.1109/TIP.2017.2703081 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 26 |j 2017 |e 9 |b 25 |c 09 |h 4537-4552
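
Note: the 520 abstract above describes fusing two saliency maps (closure completeness and closure reliability) into a final map, but the record gives no formula for that combination. The sketch below is only a generic illustration of such a fusion step, assuming a simple convex combination of min-max-normalized maps; the function names and the weighting scheme are hypothetical and do not reproduce the authors' actual framework.

import numpy as np

def normalize(saliency_map: np.ndarray) -> np.ndarray:
    # Rescale a saliency map to [0, 1]; return zeros if the map is constant.
    lo, hi = float(saliency_map.min()), float(saliency_map.max())
    if hi - lo < 1e-12:
        return np.zeros_like(saliency_map, dtype=float)
    return (saliency_map - lo) / (hi - lo)

def fuse_saliency_maps(completeness_map: np.ndarray,
                       reliability_map: np.ndarray,
                       weight: float = 0.5) -> np.ndarray:
    # Hypothetical fusion: convex combination of the two normalized cue maps.
    # The paper's actual combination framework is not specified in this record.
    s1 = normalize(completeness_map)
    s2 = normalize(reliability_map)
    return normalize(weight * s1 + (1.0 - weight) * s2)

if __name__ == "__main__":
    # Toy example: random arrays stand in for the two per-pixel saliency cues.
    rng = np.random.default_rng(0)
    m1 = rng.random((4, 4))
    m2 = rng.random((4, 4))
    print(fuse_saliency_maps(m1, m2).round(2))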