Real-Time View Correction for Mobile Devices

We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. The inpainting takes the location of strong image gradients into account as likely depth discontinuities. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. Our method has use cases in both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video.
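To make the core idea of the abstract concrete, the sketch below shows the basic reprojection step: unprojecting a source RGB-D frame with its depth and intrinsics, transforming the points into a virtual camera, and splatting them with a z-buffer. The holes that remain are the disocclusions the paper's inpainting would fill. This is a minimal illustration, not the authors' implementation; the intrinsics K_src/K_dst and the relative pose (R, t) are assumed inputs.

```python
# Minimal sketch of forward-warping an RGB-D frame into a novel virtual view.
# Not the authors' method: it only illustrates where disocclusion holes arise,
# which their temporally consistent, gradient-aware inpainting would then fill.
import numpy as np

def reproject_rgbd(color, depth, K_src, K_dst, R, t):
    """color: (H, W, 3) uint8, depth: (H, W) metric depth,
    K_src/K_dst: 3x3 intrinsics, (R, t): source-to-virtual rigid transform.
    Returns the warped color image and a boolean mask of unfilled pixels."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0

    # Unproject source pixels to 3D points in the source camera frame.
    z = depth[valid]
    x = (u[valid] - K_src[0, 2]) * z / K_src[0, 0]
    y = (v[valid] - K_src[1, 2]) * z / K_src[1, 1]
    pts = np.stack([x, y, z], axis=1)

    # Transform into the virtual camera and project with its intrinsics.
    pts_v = pts @ R.T + t
    z_v = pts_v[:, 2]
    u_v = np.round(K_dst[0, 0] * pts_v[:, 0] / z_v + K_dst[0, 2]).astype(int)
    v_v = np.round(K_dst[1, 1] * pts_v[:, 1] / z_v + K_dst[1, 2]).astype(int)

    inside = (z_v > 0) & (u_v >= 0) & (u_v < W) & (v_v >= 0) & (v_v < H)
    u_v, v_v, z_v = u_v[inside], v_v[inside], z_v[inside]
    src_colors = color[valid][inside]

    # Z-buffered splatting: paint far-to-near so nearer points overwrite farther ones.
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf, dtype=np.float32)
    order = np.argsort(-z_v)
    out[v_v[order], u_v[order]] = src_colors[order]
    zbuf[v_v[order], u_v[order]] = z_v[order]

    holes = np.isinf(zbuf)  # disocclusions / missing input -> candidates for inpainting
    return out, holes
```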

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - 23(2017), 11, 28 Nov., pages 2455-2462
Main Author: Schops, Thomas (Author)
Other Authors: Oswald, Martin R, Speciale, Pablo, Yang, Shuoran, Pollefeys, Marc
Format: Online Article
Language: English
Published: 2017
Access to the parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article, Research Support, Non-U.S. Gov't
LEADER 01000naa a22002652 4500
001 NLM274830027
003 DE-627
005 20231225004331.0
007 cr uuu---uuuuu
008 231225s2017 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2017.2734578  |2 doi 
028 5 2 |a pubmed24n0916.xml 
035 |a (DE-627)NLM274830027 
035 |a (NLM)28809696 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Schops, Thomas  |e verfasserin  |4 aut 
245 1 0 |a Real-Time View Correction for Mobile Devices 
264 1 |c 2017 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 11.12.2018 
500 |a Date Revised 11.12.2018 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. The inpainting takes the location of strong image gradients into account as likely depth discontinuities. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. Our method has use cases in both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video 
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Oswald, Martin R  |e verfasserin  |4 aut 
700 1 |a Speciale, Pablo  |e verfasserin  |4 aut 
700 1 |a Yang, Shuoran  |e verfasserin  |4 aut 
700 1 |a Pollefeys, Marc  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 23(2017), 11 vom: 28. Nov., Seite 2455-2462  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:23  |g year:2017  |g number:11  |g day:28  |g month:11  |g pages:2455-2462 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2017.2734578  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 23  |j 2017  |e 11  |b 28  |c 11  |h 2455-2462