LEADER |
01000naa a22002652 4500 |
001 |
NLM331596954 |
003 |
DE-627 |
005 |
20231225213818.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2021 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2021.3117077
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1105.xml
|
035 |
|
|
|a (DE-627)NLM331596954
|
035 |
|
|
|a (NLM)34618673
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Zhang, Zhipeng
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Toward Accurate Pixelwise Object Tracking via Attention Retrieval
|
264 |
|
1 |
|c 2021
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 14.10.2021
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Pixelwise single object tracking is challenging because of the trade-off between running speed and segmentation accuracy. Current state-of-the-art real-time approaches seamlessly connect tracking and segmentation by sharing the computation of the backbone network; e.g., SiamMask and D3S fork a light branch from the tracking model to predict the segmentation mask. Although efficient, directly reusing features from the tracking network may harm segmentation accuracy, since background clutter in the backbone features tends to introduce false positives into the segmentation. To mitigate this problem, we propose a unified tracking-retrieval-segmentation framework consisting of an attention retrieval network (ARN) and an iterative feedback network (IFN). Instead of segmenting the target inside the bounding box, the proposed framework applies soft spatial constraints to the backbone features to obtain an accurate global segmentation map. Concretely, in ARN, a look-up table (LUT) is first built from the information in the first frame. By retrieving from it, a target-aware attention map is generated to suppress the negative influence of background clutter. To further refine the contour of the segmentation, IFN iteratively enhances the features at different resolutions by taking the predicted mask as feedback guidance. Our framework sets a new state of the art on the recent pixelwise tracking benchmark VOT2020 and runs at 40 fps. Notably, the proposed model surpasses SiamMask by 11.7/4.2/5.5 points on VOT2020, DAVIS2016, and DAVIS2017, respectively. Code is available at https://github.com/JudasDie/SOTS.
|
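[Editor's sketch] The ARN step described in the abstract can be made concrete with a minimal sketch: build a look-up table (LUT) of foreground and background feature vectors from the first frame, then retrieve against it to score every location of a new frame, producing a soft target-aware attention map rather than a hard box crop. The Python/NumPy code below is an illustration under assumptions only; the function names (build_lut, retrieve_attention), feature shapes, and the sigmoid sharpening factor are invented for this sketch and are not taken from the authors' SOTS code linked in the abstract.

import numpy as np

def build_lut(first_frame_feats, first_frame_mask):
    # Collect L2-normalized foreground/background feature vectors from
    # the first frame into a look-up table (LUT). Shapes are assumptions:
    # feats (C, H, W), mask (H, W) with 1 = target, 0 = background.
    c, h, w = first_frame_feats.shape
    feats = first_frame_feats.reshape(c, h * w).T          # (HW, C)
    feats = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    fg = first_frame_mask.reshape(-1).astype(bool)
    return feats[fg], feats[~fg]                            # foreground / background keys

def retrieve_attention(lut_fg, lut_bg, frame_feats):
    # For each location of the current frame, compare its feature to the
    # LUT via cosine similarity; locations closer to foreground keys than
    # to background keys get high attention, suppressing background clutter.
    c, h, w = frame_feats.shape
    q = frame_feats.reshape(c, h * w).T                     # (HW, C)
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    sim_fg = (q @ lut_fg.T).max(axis=1)                     # best foreground match
    sim_bg = (q @ lut_bg.T).max(axis=1)                     # best background match
    attn = 1.0 / (1.0 + np.exp(-(sim_fg - sim_bg) * 10.0))  # soft map, not a hard crop
    return attn.reshape(h, w)

# Toy usage: random backbone features for a 16x16 map with a known first-frame mask.
rng = np.random.default_rng(0)
feats0 = rng.standard_normal((64, 16, 16)).astype(np.float32)
mask0 = np.zeros((16, 16), dtype=np.uint8)
mask0[4:12, 4:12] = 1
lut_fg, lut_bg = build_lut(feats0, mask0)
attn = retrieve_attention(lut_fg, lut_bg, feats0)
print(attn.shape, float(attn[8, 8]), float(attn[0, 0]))

Per the abstract, the IFN stage would then repeat mask prediction while feeding the previous mask back as guidance at several feature resolutions; that refinement loop is omitted from this sketch.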
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Liu, Yufan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Bing
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Hu, Weiming
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Peng, Houwen
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 30(2021) vom: 07., Seite 8553-8566
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:30
|g year:2021
|g day:07
|g pages:8553-8566
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2021.3117077
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 30
|j 2021
|b 07
|h 8553-8566
|