Integrating Both Parallax and Latency Compensation into Video See-through Head-mounted Display

This work introduces a perspective-corrected video see-through mixed-reality head-mounted display with edge-preserving occlusion and low-latency capabilities. To realize the consistent spatial and temporal composition of a captured real world containing virtual objects, we perform three essential tasks: 1) reconstructing captured images to match the user's view; 2) occluding virtual objects with nearer real objects, to provide users with correct depth cues; and 3) reprojecting the virtual and captured scenes to match each other and to keep up with the user's head motion.
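The three tasks above amount to a per-frame compositing loop: reproject the camera image to the eye viewpoint, depth-test virtual content against real geometry, then warp the fused result toward the newest head pose just before display. A minimal Python sketch of the last two steps under toy assumptions (the paper publishes no code; all names, the integer-pixel stand-in warp, and the toy data are illustrative only):

```python
import numpy as np

def depth_composite(real_rgb, real_depth, virt_rgb, virt_depth):
    # Task 2: virtual pixels survive only where no nearer real
    # surface exists, so occlusion-based depth cues stay correct.
    real_in_front = real_depth < virt_depth
    return np.where(real_in_front[..., None], real_rgb, virt_rgb)

def late_warp_shift(frame, dx, dy):
    # Task 3 (phase-2 warp), crudely approximated: a 2D integer-pixel
    # shift standing in for the image-space correction that tracks the
    # head pose sampled just before scan-out.
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

# Toy frames: a real wall at 1.5 m in front of a virtual plane at 2.0 m.
h, w = 4, 4
real_rgb   = np.full((h, w, 3), 0.2)
real_depth = np.full((h, w), 1.5)
virt_rgb   = np.full((h, w, 3), 0.9)
virt_depth = np.full((h, w), 2.0)

out = depth_composite(real_rgb, real_depth, virt_rgb, virt_depth)
out = late_warp_shift(out, dx=1, dy=0)  # warp toward the newest pose
assert np.allclose(out, 0.2)  # the nearer wall occludes everywhere
```

In the actual system the second-phase warp is a full reprojection driven by fresh tracker poses, not a pixel shift; the sketch only mirrors the ordering the abstract describes: composite first, warp last.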

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - PP(2023), 27 Feb.
Main Author: Ishihara, Atsushi (Author)
Other Authors: Aga, Hiroyuki, Ishihara, Yasuko, Ichikawa, Hirotake, Kaji, Hidetaka, Kobayashi, Daita, Kobayashi, Toshimi, Nishida, Ken, Hamasaki, Takumi, Mori, Hideto, Kawasaki, Koichi, Morikubo, Yuki
Format: Online Article
Language: English
Published: 2023
Parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM355322382
003 DE-627
005 20231226064114.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2023.3247460  |2 doi 
028 5 2 |a pubmed24n1184.xml 
035 |a (DE-627)NLM355322382 
035 |a (NLM)37027581 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Ishihara, Atsushi  |e verfasserin  |4 aut 
245 1 0 |a Integrating Both Parallax and Latency Compensation into Video See-through Head-mounted Display 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 07.04.2023 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a This work introduces a perspective-corrected video see-through mixed-reality head-mounted display with edge-preserving occlusion and low-latency capabilities. To realize the consistent spatial and temporal composition of a captured real world containing virtual objects, we perform three essential tasks: 1) to reconstruct captured images so as to match the user's view; 2) to occlude virtual objects with nearer real objects, to provide users with correct depth cues; and 3) to reproject the virtual and captured scenes to be matched and to keep up with users' head motions. Captured image reconstruction and occlusion-mask generation require dense and accurate depth maps. However, estimating these maps is computationally difficult, which results in longer latencies. To obtain an acceptable balance between spatial consistency and low latency, we rapidly generate depth maps by focusing on edge smoothness and disocclusion (instead of fully accurate maps), to shorten the processing time. Our algorithm refines edges via a hybrid method involving infrared masks and color-guided filters, and it fills disocclusions using temporally cached depth maps. Our system combines these algorithms in a two-phase temporal warping architecture based upon synchronized camera pairs and displays. The first phase of warping reduces registration errors between the virtual and captured scenes. The second presents virtual and captured scenes that correspond with the user's head motion. We implemented these methods on our wearable prototype and performed end-to-end measurements of its accuracy and latency. We achieved an acceptable latency due to head motion (less than 4 ms) and spatial accuracy (less than 0.1° in size and less than 0.3° in position) in our test environment. We anticipate that this work will help improve the realism of mixed reality systems. 
650 4 |a Journal Article 
700 1 |a Aga, Hiroyuki  |e verfasserin  |4 aut 
700 1 |a Ishihara, Yasuko  |e verfasserin  |4 aut 
700 1 |a Ichikawa, Hirotake  |e verfasserin  |4 aut 
700 1 |a Kaji, Hidetaka  |e verfasserin  |4 aut 
700 1 |a Kobayashi, Daita  |e verfasserin  |4 aut 
700 1 |a Kobayashi, Toshimi  |e verfasserin  |4 aut 
700 1 |a Nishida, Ken  |e verfasserin  |4 aut 
700 1 |a Hamasaki, Takumi  |e verfasserin  |4 aut 
700 1 |a Mori, Hideto  |e verfasserin  |4 aut 
700 1 |a Kawasaki, Koichi  |e verfasserin  |4 aut 
700 1 |a Morikubo, Yuki  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g PP(2023) vom: 27. Feb.  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:PP  |g year:2023  |g day:27  |g month:02 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2023.3247460  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2023  |b 27  |c 02
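The abstract (field 520 above) attributes the system's speed to two shortcuts: refining depth-map edges with a color-guided filter and filling disocclusions from temporally cached depth maps. A rough Python sketch of both operations, using a generic joint bilateral filter as one plausible stand-in for the paper's color-guided filter (the filter choice, parameter values, and the zero-as-hole convention are assumptions, not details from the paper):

```python
import numpy as np

def joint_bilateral_depth(depth, guide_gray, radius=2,
                          sigma_s=2.0, sigma_r=0.1):
    """Smooth depth edges using the color (grayscale) image as guidance.

    A textbook joint bilateral filter: pixels whose guide intensity is
    close to the center's contribute more, so depth edges snap to color
    edges. The paper's exact filter may differ.
    """
    h, w = depth.shape
    out = np.zeros_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    d = np.pad(depth, radius, mode='edge')
    g = np.pad(guide_gray, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            dwin = d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-(gwin - guide_gray[y, x])**2
                             / (2 * sigma_r**2))
            wgt = spatial * range_w
            out[y, x] = (wgt * dwin).sum() / wgt.sum()
    return out

def fill_disocclusions(depth, cached_depth, hole_value=0.0):
    # Pixels with no current estimate inherit the temporally cached
    # value from the previous frame, as the abstract describes.
    holes = depth == hole_value
    return np.where(holes, cached_depth, depth)

# Toy example: zero-valued holes filled from the previous frame's
# cache, then edge-refined against a flat guide image.
depth  = np.array([[1.5, 1.5, 0.0],
                   [1.5, 2.0, 2.0],
                   [0.0, 2.0, 2.0]])
cached = np.full((3, 3), 1.8)
refined = joint_bilateral_depth(fill_disocclusions(depth, cached),
                                guide_gray=np.ones((3, 3)))
```

The brute-force double loop is for clarity only; a real-time version would vectorize or run on the GPU, and the paper additionally gates the refinement with infrared masks, which this sketch omits.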