LEADER |
01000caa a22002652 4500 |
001 |
NLM282900918 |
003 |
DE-627 |
005 |
20250223102010.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2018 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2018.2813165
|2 doi
|
028 |
5 |
2 |
|a pubmed25n0942.xml
|
035 |
|
|
|a (DE-627)NLM282900918
|
035 |
|
|
|a (NLM)29641411
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Chen, Yuhuan
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a SCOM
|b Spatiotemporal Constrained Optimization for Salient Object Detection
|
264 |
|
1 |
|c 2018
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 30.07.2018
|
500 |
|
|
|a Date Revised 30.07.2018
|
500 |
|
|
|a published: Print
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a This paper presents a novel model for video salient object detection, called the spatiotemporal constrained optimization model (SCOM), which exploits spatial and temporal cues, as well as a local constraint, to achieve a global saliency optimization. For a robust motion estimation of salient objects, we propose a novel approach to modeling the motion cues from the optical flow field, the saliency map of the prior video frame, and the motion history of change detection, which is able to distinguish the moving salient objects from diverse changing background regions. Furthermore, an effective objectness measure is proposed with an intuitive geometrical interpretation to extract some reliable object and background regions, which serve as the basis to define the foreground potential, the background potential, and the constraint to support saliency propagation. These potentials and the constraint are formulated into the proposed SCOM framework to generate an optimal saliency map for each frame in a video. The proposed model is extensively evaluated on widely used, challenging benchmark data sets. Experiments demonstrate that our proposed SCOM substantially outperforms the state-of-the-art saliency models.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Zou, Wenbin
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Tang, Yi
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Xia
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Xu, Chen
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Komodakis, Nikos
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 27(2018), 7 vom: 31. Juli, Seite 3345-3357
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:27
|g year:2018
|g number:7
|g day:31
|g month:07
|g pages:3345-3357
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2018.2813165
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 27
|j 2018
|e 7
|b 31
|c 07
|h 3345-3357
|