360° Stereo Image Composition With Depth Adaption

360° images and videos have become an economical and popular way to provide VR experiences using real-world content. However, the manipulation of stereo panoramic content remains less explored. In this article, we focus on the 360° image composition problem and develop a solution that can take...

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - 30(2024), 9, 01 Aug., pages 6177-6191
Main author: Huang, Kun (Author)
Other authors: Zhang, Fang-Lue, Zhao, Junhong, Li, Yiheng, Dodgson, Neil
Format: Online article
Language: English
Published: 2024
Access to parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM363818804
003 DE-627
005 20240801232607.0
007 cr uuu---uuuuu
008 231226s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2023.3327943  |2 doi 
028 5 2 |a pubmed24n1488.xml 
035 |a (DE-627)NLM363818804 
035 |a (NLM)37889815 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Huang, Kun  |e verfasserin  |4 aut 
245 1 0 |a 360° Stereo Image Composition With Depth Adaption 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 31.07.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a 360° images and videos have become an economical and popular way to provide VR experiences using real-world content. However, the manipulation of stereo panoramic content remains less explored. In this article, we focus on the 360° image composition problem and develop a solution that can take an object from a stereo image pair and insert it at a given 3D position in a target stereo panorama, with well-preserved geometry information. Our method uses recovered 3D point clouds to guide the composited image generation. More specifically, we observe that using only a one-off operation to insert objects into equirectangular images will never produce satisfactory depth perception, and generates ghost artifacts when users watch the result from different view directions. Therefore, we propose a novel per-view projection method that segments the object in 3D spherical space with the stereo camera pair facing in that direction. A deep depth densification network is proposed to generate depth guidance for the stereo image generation of each view segment according to the desired position and pose of the inserted object. We finally combine the synthesized view segments and blend the objects into the target stereo 360° scene. A user study demonstrates that our method provides good depth perception and removes ghost artifacts. The per-view solution is a potential paradigm for other content manipulation methods for 360° images and videos. 
650 4 |a Journal Article 
700 1 |a Zhang, Fang-Lue  |e verfasserin  |4 aut 
700 1 |a Zhao, Junhong  |e verfasserin  |4 aut 
700 1 |a Li, Yiheng  |e verfasserin  |4 aut 
700 1 |a Dodgson, Neil  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 30(2024), 9 vom: 01. Aug., Seite 6177-6191  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:30  |g year:2024  |g number:9  |g day:01  |g month:08  |g pages:6177-6191 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2023.3327943  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 30  |j 2024  |e 9  |b 01  |c 08  |h 6177-6191