Self-Supervised Monocular Depth Estimation With Multiscale Perception
Published in: IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society), Vol. 31 (2022), pp. 3251-3266
Format: Online article
Language: English
Published: 2022
Parent work: IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)
Keywords: Journal Article
Abstract: Extracting 3D information from a single optical image is very attractive. Recently emerging self-supervised methods can learn depth representations without ground-truth depth maps as training data by transforming the depth prediction task into an image synthesis task. However, existing methods rely on a differentiable bilinear sampler for image synthesis, so each pixel in a synthetic image is derived from only four pixels in the source image, and each pixel in the depth map therefore perceives only a few pixels in the source image. In addition, when computing the photometric error between a synthetic image and its corresponding target image, existing methods consider only a small neighborhood around each pixel and ignore correlations between larger areas, which causes the model to fall into local optima for small patches. To extend the perceptual area of the depth map over the source image, we propose a novel multi-scale method that downsamples the predicted depth map and performs image synthesis at different resolutions, enabling each pixel in the depth map to perceive more pixels in the source image and improving the performance of the model. To address the locality of the photometric error, we propose a structural similarity (SSIM) pyramid loss that lets the model sense differences between images over areas of multiple sizes. Experimental results show that our method achieves superior performance on both outdoor and indoor benchmarks.
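The locality issue the abstract attributes to the bilinear sampler can be seen directly from the interpolation formula: a sampled value is a weighted blend of exactly the four nearest source pixels, so gradients from a synthesized pixel reach only those four locations. Below is a minimal NumPy sketch of that operation (not the paper's implementation; the function name `bilinear_sample` and the single-point interface are illustrative assumptions):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a 2D image at continuous coordinates (x, y).

    Illustrative sketch: the result is a weighted blend of only the
    four nearest source pixels, which is why each synthesized pixel
    (and hence each depth value) "sees" so little of the source image.
    Assumes 0 <= x <= W-1 and 0 <= y <= H-1 with x, y not on the last
    row/column boundary for simplicity.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))       # top-left corner
    x1, y1 = x0 + 1, y0 + 1                            # bottom-right corner
    wx, wy = x - x0, y - y0                            # fractional offsets
    return ((1 - wx) * (1 - wy) * img[y0, x0] +        # blend 4 neighbors
            wx       * (1 - wy) * img[y0, x1] +
            (1 - wx) * wy       * img[y1, x0] +
            wx       * wy       * img[y1, x1])
```

For example, sampling a 4x4 ramp image at (0.5, 0.5) averages its four top-left pixels; the paper's multi-scale scheme effectively enlarges this receptive area by synthesizing at coarser resolutions, where each coarse pixel corresponds to a larger source region.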
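The SSIM pyramid loss described in the abstract can be sketched as computing (1 - SSIM)/2 at several progressively downsampled resolutions and averaging, so the loss compares image statistics over areas of multiple sizes. The following is a simplified single-statistic version under stated assumptions (grayscale images in [0, 1] with even, power-of-two-friendly dimensions; global rather than sliding-window SSIM; uniform level weights) and is not the authors' exact formulation:

```python
import numpy as np

def ssim(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    """Simplified SSIM: statistics over the whole patch, not a sliding window."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def downsample(img):
    """2x2 average pooling (assumes even spatial dimensions)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def ssim_pyramid_loss(pred, target, levels=3):
    """Average (1 - SSIM)/2 over a pyramid of downsampled image pairs,
    so differences are penalized over areas of several sizes."""
    loss = 0.0
    for _ in range(levels):
        loss += (1.0 - ssim(pred, target)) / 2.0
        pred, target = downsample(pred), downsample(target)
    return loss / levels
```

Identical images give a loss of exactly 0 at every level, while coarse-scale structural mismatches that a single-scale, per-pixel photometric loss would underweight contribute at the downsampled levels.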
Description: Date Revised 27.04.2022; published: Print-Electronic; Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2022.3167307