LEADER 01000naa a22002652 4500
001 NLM362949867
003 DE-627
005 20231226092317.0
007 cr uuu---uuuuu
008 231226s2024 xx |||||o 00| ||eng c
024 7  |a 10.1109/TPAMI.2023.3322549 |2 doi
028 52 |a pubmed24n1209.xml
035    |a (DE-627)NLM362949867
035    |a (NLM)37801376
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Sun, Libo |e verfasserin |4 aut
245 10 |a SC-DepthV3 |b Robust Self-Supervised Monocular Depth Estimation for Dynamic Scenes
264  1 |c 2024
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Revised 06.12.2023
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a Self-supervised monocular depth estimation has shown impressive results in static scenes. It relies on the multi-view consistency assumption for training networks; however, this assumption is violated in dynamic object regions and occlusions. Consequently, existing methods show poor accuracy in dynamic scenes, and the estimated depth maps are blurred at object boundaries because these regions are usually occluded in other training views. In this paper, we propose SC-DepthV3 to address these challenges. Specifically, we introduce an external pretrained monocular depth estimation model to generate a single-image depth prior, namely pseudo-depth, on which we build novel losses that boost self-supervised training. As a result, our model predicts sharp and accurate depth maps even when trained on monocular videos of highly dynamic scenes. We demonstrate that our method significantly outperforms previous methods on six challenging datasets, and we provide detailed ablation studies for the proposed terms.
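Note on the abstract: the pseudo-depth idea can be illustrated with a minimal sketch in which a frozen, pretrained single-image depth network supplies a prior and an auxiliary consistency term regularizes the self-supervised prediction toward it. The scale-invariant log-depth loss and the names used here (frozen_depth_net, photometric_loss, lambda_pd) are illustrative assumptions, not the exact losses or API of SC-DepthV3.

import torch

def pseudo_depth_loss(pred_depth, pseudo_depth, mask=None, eps=1e-6):
    """Scale-invariant log-depth consistency against a pseudo-depth prior.

    Illustrative placeholder, not the exact SC-DepthV3 formulation.
    """
    log_diff = torch.log(pred_depth + eps) - torch.log(pseudo_depth + eps)
    if mask is not None:                 # e.g. restrict to dynamic/occluded regions
        log_diff = log_diff[mask]
    # Subtract the mean so only relative depth structure is penalized,
    # making the term invariant to a global scale of the prediction.
    return (log_diff - log_diff.mean()).abs().mean()

# Hypothetical usage (assumed names, not from the paper):
# with torch.no_grad():
#     pseudo_depth = frozen_depth_net(image)        # single-image depth prior
# loss = photometric_loss + lambda_pd * pseudo_depth_loss(pred_depth, pseudo_depth)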
650  4 |a Journal Article
700 1  |a Bian, Jia-Wang |e verfasserin |4 aut
700 1  |a Zhan, Huangying |e verfasserin |4 aut
700 1  |a Yin, Wei |e verfasserin |4 aut
700 1  |a Reid, Ian |e verfasserin |4 aut
700 1  |a Shen, Chunhua |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on pattern analysis and machine intelligence |d 1979 |g 46(2023), 1 vom: 06. Jan., Seite 497-508 |w (DE-627)NLM098212257 |x 1939-3539 |7 nnns
773 18 |g volume:46 |g year:2023 |g number:1 |g day:06 |g month:01 |g pages:497-508
856 40 |u http://dx.doi.org/10.1109/TPAMI.2023.3322549 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 46 |j 2023 |e 1 |b 06 |c 01 |h 497-508