LEADER 01000caa a22002652c 4500
001 NLM335304788
003 DE-627
005 20250302210756.0
007 cr uuu---uuuuu
008 231225s2022 xx |||||o 00| ||eng c
024 7  |a 10.1109/TPAMI.2022.3140750 |2 doi
028 52 |a pubmed25n1117.xml
035    |a (DE-627)NLM335304788
035    |a (NLM)34995180
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Shi, Yujiao |e verfasserin |4 aut
245 10 |a Geometry-Guided Street-View Panorama Synthesis From Satellite Imagery
264  1 |c 2022
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Revised 08.11.2022
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a This paper presents a new approach for synthesizing a novel street-view panorama given a satellite image, as if captured from the geographical location at the center of the satellite image. Existing works approach this as an image generation problem, adopting generative adversarial networks to implicitly learn the cross-view transformations, but ignore the geometric constraints. In this paper, we make the geometric correspondences between the satellite and street-view images explicit so as to facilitate the transfer of information between domains. Specifically, we observe that when a 3D point is visible in both views, and the height of the point relative to the camera is known, there is a deterministic mapping between the projected points in the images. Motivated by this, we develop a novel satellite to street-view projection (S2SP) module which learns the height map and projects the satellite image to the ground-level viewpoint, explicitly connecting corresponding pixels. With these projected satellite images as input, we next employ a generator to synthesize realistic street-view panoramas that are geometrically consistent with the satellite images. Our S2SP module is differentiable and the whole framework is trained in an end-to-end manner. Extensive experimental results on two cross-view benchmark datasets demonstrate that our method generates more accurate and consistent images than existing approaches.
650  4 |a Journal Article
700 1  |a Campbell, Dylan |e verfasserin |4 aut
700 1  |a Yu, Xin |e verfasserin |4 aut
700 1  |a Li, Hongdong |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on pattern analysis and machine intelligence |d 1979 |g 44(2022), 12 vom: 01. Dez., Seite 10009-10022 |w (DE-627)NLM098212257 |x 1939-3539 |7 nnas
773 18 |g volume:44 |g year:2022 |g number:12 |g day:01 |g month:12 |g pages:10009-10022
856 40 |u http://dx.doi.org/10.1109/TPAMI.2022.3140750 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 44 |j 2022 |e 12 |b 01 |c 12 |h 10009-10022
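
The 520 abstract above describes a deterministic mapping between street-view panorama pixels and satellite pixels once the height of a 3D point relative to the street-view camera is known. The following is a minimal illustrative sketch of such a mapping, assuming an equirectangular panorama captured at the centre of a north-up satellite image; it is not the authors' S2SP module, and the function name panorama_to_satellite and the parameters gsd (ground sampling distance) and sat_size are assumptions made for illustration only.

import numpy as np

def panorama_to_satellite(u, v, h, pano_w, pano_h, sat_size, gsd):
    """Map an equirectangular panorama pixel (u, v) to satellite pixel coordinates.

    u, v     : panorama pixel column / row
    h        : height of the observed 3D point relative to the camera (metres, up is positive)
    pano_w/h : panorama width / height in pixels
    sat_size : satellite image side length in pixels (camera at the image centre)
    gsd      : satellite ground sampling distance (metres per pixel)
    """
    # Pixel -> spherical angles: azimuth in [-pi, pi), elevation in (-pi/2, pi/2).
    azimuth = (u / pano_w) * 2.0 * np.pi - np.pi
    elevation = np.pi / 2.0 - (v / pano_h) * np.pi

    # Horizontal distance from the camera to the point along the viewing ray:
    # tan(elevation) = h / r  =>  r = h / tan(elevation).
    # Points at the camera's own height (elevation 0) lie at infinity.
    r = h / np.tan(elevation)

    # Ground-plane offset in metres, assuming azimuth 0 points north.
    east = r * np.sin(azimuth)
    north = r * np.cos(azimuth)

    # Metres -> satellite pixels (north-up image, y grows downward, camera at centre).
    cx = cy = (sat_size - 1) / 2.0
    sat_x = cx + east / gsd
    sat_y = cy - north / gsd
    return sat_x, sat_y

In the pipeline described by the abstract, a learned height map supplies h for each panorama pixel, so a projection of this kind can be applied densely and differentiably before the generator synthesizes the street-view panorama.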