UNOC: Understanding Occlusion for Embodied Presence in Virtual Reality

Tracking body and hand motions in 3D space is essential for social and self-presence in augmented and virtual environments. Unlike the popular 3D pose estimation setting, the problem is often formulated as egocentric tracking based on embodied perception (e.g., egocentric cameras, handheld sensors)....

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - 28(2022), 12, 01 Dec., pages 4240-4251
First author: Parger, Mathias (author)
Other authors: Tang, Chengcheng, Xu, Yuanlu, Twigg, Christopher D, Tao, Lingling, Li, Yijing, Wang, Robert, Steinberger, Markus
Format: Online article
Language: English
Published: 2022
Access to parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM326102620
003 DE-627
005 20231225193849.0
007 cr uuu---uuuuu
008 231225s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2021.3085407  |2 doi 
028 5 2 |a pubmed24n1086.xml 
035 |a (DE-627)NLM326102620 
035 |a (NLM)34061744 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Parger, Mathias  |e verfasserin  |4 aut 
245 1 0 |a UNOC  |b Understanding Occlusion for Embodied Presence in Virtual Reality 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 28.10.2022 
500 |a Date Revised 15.11.2022 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a Tracking body and hand motions in 3D space is essential for social and self-presence in augmented and virtual environments. Unlike the popular 3D pose estimation setting, the problem is often formulated as egocentric tracking based on embodied perception (e.g., egocentric cameras, handheld sensors). In this article, we propose a new data-driven framework for egocentric body tracking, targeting challenges of omnipresent occlusions in optimization-based methods (e.g., inverse kinematics solvers). We first collect a large-scale motion capture dataset with both body and finger motions using optical markers and inertial sensors. This dataset focuses on social scenarios and captures ground truth poses under self-occlusions and body-hand interactions. We then simulate the occlusion patterns in head-mounted camera views on the captured ground truth using a ray casting algorithm and learn a deep neural network to infer the occluded body parts. Our experiments show that the proposed method generates high-fidelity embodied poses when applied to real-time egocentric body tracking, finger motion synthesis, and 3-point inverse kinematics.
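The abstract's occlusion-simulation step — casting rays from a head-mounted camera to each body joint and marking joints whose rays are blocked by other body parts — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual ray caster: the camera position, joint list, and sphere-shaped blockers (`simulate_occlusion`, `radius`) are all hypothetical stand-ins for the real capsule-based body model.

```python
import numpy as np

def simulate_occlusion(camera, joints, blockers, radius=0.1):
    """Toy ray-casting occlusion test (hypothetical sketch, not the
    paper's implementation). A joint is occluded if the ray from the
    head-mounted camera to the joint hits a blocker sphere first."""
    occluded = []
    for j in joints:
        ray = j - camera
        dist = np.linalg.norm(ray)
        d = ray / dist  # unit ray direction from camera to joint
        hit = False
        for c in blockers:
            # Parameter t of the closest point on the ray to center c.
            t = np.dot(c - camera, d)
            # Blocker must lie strictly between camera and joint.
            if 0.0 < t < dist - 1e-6:
                closest = camera + t * d
                if np.linalg.norm(closest - c) < radius:
                    hit = True
                    break
        occluded.append(hit)
    return occluded

# A torso sphere at (0, 0, 1) hides a joint directly behind it,
# while a joint off to the side stays visible.
cam = np.zeros(3)
joints = [np.array([0.0, 0.0, 2.0]), np.array([0.0, 2.0, 0.0])]
blockers = [np.array([0.0, 0.0, 1.0])]
print(simulate_occlusion(cam, joints, blockers))  # [True, False]
```

In the paper this visibility labeling is applied to captured ground-truth poses to generate training data, from which a network learns to infer the occluded parts.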
650 4 |a Journal Article 
700 1 |a Tang, Chengcheng  |e verfasserin  |4 aut 
700 1 |a Xu, Yuanlu  |e verfasserin  |4 aut 
700 1 |a Twigg, Christopher D  |e verfasserin  |4 aut 
700 1 |a Tao, Lingling  |e verfasserin  |4 aut 
700 1 |a Li, Yijing  |e verfasserin  |4 aut 
700 1 |a Wang, Robert  |e verfasserin  |4 aut 
700 1 |a Steinberger, Markus  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 28(2022), 12 vom: 01. Dez., Seite 4240-4251  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:28  |g year:2022  |g number:12  |g day:01  |g month:12  |g pages:4240-4251 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2021.3085407  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 28  |j 2022  |e 12  |b 01  |c 12  |h 4240-4251