LEADER |
01000naa a22002652 4500 |
001 |
NLM268249628 |
003 |
DE-627 |
005 |
20231224222314.0 |
007 |
cr uuu---uuuuu |
008 |
231224s2017 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2017.2656628
|2 doi
|
028 |
5 |
2 |
|a pubmed24n0894.xml
|
035 |
|
|
|a (DE-627)NLM268249628
|
035 |
|
|
|a (NLM)28113343
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Gao, Junyu
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Deep Relative Tracking
|
264 |
|
1 |
|c 2017
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 20.11.2019
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Most existing tracking methods are direct trackers, which directly exploit foreground and/or background information for object appearance modeling and decide whether an image patch is the target object or not. As a result, these trackers cannot perform well when the target appearance changes heavily and becomes different from its model. To deal with this issue, we propose a novel relative tracker, which can effectively exploit the relative relationship among image patches from both foreground and background for object appearance modeling. Different from direct trackers, the proposed relative tracker robustly localizes the target object by using the image patch with the highest relative score with respect to the target appearance model. To model the relative relationship among large-scale image patch pairs, we propose a novel and effective deep relative learning algorithm via a Convolutional Neural Network. We test the proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that our method consistently outperforms state-of-the-art trackers due to the powerful capacity of the proposed deep relative model.
|
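[Editorial note: the 520 abstract above describes scoring candidate image patches relative to a target appearance model with a CNN and localizing the target as the highest-scoring candidate. The following is a minimal illustrative sketch of that idea only; the siamese-style architecture, layer sizes, and all names below are assumptions for exposition, not the authors' published network.]

```python
# Hedged sketch of the relative-scoring idea from the abstract: embed the
# target patch and each candidate patch with a shared small CNN, score each
# (target, candidate) pair, and pick the candidate with the highest score.
# Everything here (RelativeScorer, layer sizes, the pairwise head) is an
# illustrative assumption, not the paper's actual model.
import torch
import torch.nn as nn

class RelativeScorer(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional embedding for target and candidate patches.
        self.embed = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Pairwise head: concatenated embeddings -> one relative score.
        self.score = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, target, candidates):
        # target: (1, 3, H, W); candidates: (N, 3, H, W)
        t = self.embed(target).expand(candidates.size(0), -1)
        c = self.embed(candidates)
        return self.score(torch.cat([t, c], dim=1)).squeeze(1)  # (N,)

# Tracking step: localize the target as the best-scoring candidate patch.
model = RelativeScorer()
target = torch.randn(1, 3, 32, 32)
candidates = torch.randn(8, 3, 32, 32)
best = model(target, candidates).argmax().item()
```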
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Zhang, Tianzhu
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Yang, Xiaoshan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Xu, Changsheng
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 26(2017), 4 vom: 15. Apr., Seite 1845-1858
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:26
|g year:2017
|g number:4
|g day:15
|g month:04
|g pages:1845-1858
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2017.2656628
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 26
|j 2017
|e 4
|b 15
|c 04
|h 1845-1858
|