Towards Fully Mobile 3D Face, Body, and Environment Capture Using Only Head-worn Cameras

We propose a new approach for 3D reconstruction of dynamic indoor and outdoor scenes in everyday environments, leveraging only cameras worn by a user. This approach allows 3D reconstruction of experiences at any location and virtual tours from anywhere. The key innovation of the proposed ego-centric...

Bibliographic details

Published in: IEEE transactions on visualization and computer graphics. - 1996. - 24(2018), no. 11, 10 Nov., pages 2993-3004
First author: Cha, Young-Woon (author)
Other authors: Price, True; Wei, Zhen; Lu, Xinran; Rewkowski, Nicholas; Chabra, Rohan; Qin, Zihe; Kim, Hyounghun; Su, Zhaoqi; Liu, Yebin; Ilie, Adrian; State, Andrei; Xu, Zhenlin; Frahm, Jan-Michael; Fuchs, Henry
Format: Online article
Language: English
Published: 2018
Parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article; Research Support, Non-U.S. Gov't; Research Support, U.S. Gov't, Non-P.H.S.
LEADER 01000naa a22002652 4500
001 NLM288451325
003 DE-627
005 20231225060333.0
007 cr uuu---uuuuu
008 231225s2018 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2018.2868527  |2 doi 
028 5 2 |a pubmed24n0961.xml 
035 |a (DE-627)NLM288451325 
035 |a (NLM)30207957 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Cha, Young-Woon  |e verfasserin  |4 aut 
245 1 0 |a Towards Fully Mobile 3D Face, Body, and Environment Capture Using Only Head-worn Cameras 
264 1 |c 2018 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 17.09.2019 
500 |a Date Revised 10.12.2019 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a We propose a new approach for 3D reconstruction of dynamic indoor and outdoor scenes in everyday environments, leveraging only cameras worn by a user. This approach allows 3D reconstruction of experiences at any location and virtual tours from anywhere. The key innovation of the proposed ego-centric reconstruction system is to capture the wearer's body pose and facial expression from near-body views, e.g. cameras on the user's glasses, and to capture the surrounding environment using outward-facing views. The main challenge of the ego-centric reconstruction, however, is the poor coverage of the near-body views - that is, the user's body and face are observed from vantage points that are convenient for wear but inconvenient for capture. To overcome this challenge, we propose a parametric-model-based approach to user motion estimation. This approach utilizes convolutional neural networks (CNNs) for near-view body pose estimation, and we introduce a CNN-based approach for facial expression estimation that combines audio and video. For each time-point during capture, the intermediate model-based reconstructions from these systems are used to re-target a high-fidelity pre-scanned model of the user. We demonstrate that the proposed self-sufficient, head-worn capture system is capable of reconstructing the wearer's movements and their surrounding environment in both indoor and outdoor situations without any additional views. As a proof of concept, we show how the resulting 3D-plus-time reconstruction can be immersively experienced within a virtual reality system (e.g., the HTC Vive). We expect that the size of the proposed egocentric capture-and-reconstruction system will eventually be reduced to fit within future AR glasses, and will be widely useful for immersive 3D telepresence, virtual tours, and general use-anywhere 3D content creation.
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
650 4 |a Research Support, U.S. Gov't, Non-P.H.S. 
700 1 |a Price, True  |e verfasserin  |4 aut 
700 1 |a Wei, Zhen  |e verfasserin  |4 aut 
700 1 |a Lu, Xinran  |e verfasserin  |4 aut 
700 1 |a Rewkowski, Nicholas  |e verfasserin  |4 aut 
700 1 |a Chabra, Rohan  |e verfasserin  |4 aut 
700 1 |a Qin, Zihe  |e verfasserin  |4 aut 
700 1 |a Kim, Hyounghun  |e verfasserin  |4 aut 
700 1 |a Su, Zhaoqi  |e verfasserin  |4 aut 
700 1 |a Liu, Yebin  |e verfasserin  |4 aut 
700 1 |a Ilie, Adrian  |e verfasserin  |4 aut 
700 1 |a State, Andrei  |e verfasserin  |4 aut 
700 1 |a Xu, Zhenlin  |e verfasserin  |4 aut 
700 1 |a Frahm, Jan-Michael  |e verfasserin  |4 aut 
700 1 |a Fuchs, Henry  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 24(2018), 11 vom: 10. Nov., Seite 2993-3004  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:24  |g year:2018  |g number:11  |g day:10  |g month:11  |g pages:2993-3004 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2018.2868527  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 24  |j 2018  |e 11  |b 10  |c 11  |h 2993-3004
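For readers who want to work with the raw record above programmatically: the block from the LEADER line down is a standard MARC 21 bibliographic record. The sketch below shows one way to pull out the citation fields with the pymarc library, assuming the record has first been exported as binary MARC (ISO 2709) to record.mrc; that filename is a placeholder.

# A minimal sketch, assuming the record above was saved as binary
# MARC (ISO 2709) in "record.mrc" (hypothetical filename).
from pymarc import MARCReader

with open("record.mrc", "rb") as fh:
    for record in MARCReader(fh):
        title = record["245"]["a"]          # field 245 $a: title
        doi = record["024"]["a"]            # field 024 $a: DOI
        authors = [record["100"]["a"]]      # field 100 $a: first author
        authors += [f["a"] for f in record.get_fields("700")]  # 700s: co-authors
        host = record["773"]                # field 773: host journal
        print(title)
        print("DOI:", doi)
        print("Authors:", "; ".join(authors))
        print("In:", host["t"], "-", host["g"])

Run against this record, the script would print the article title, the DOI 10.1109/TVCG.2018.2868527, the fifteen authors, and the journal citation from field 773.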
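The abstract (field 520) describes a per-frame pipeline: CNN-based body pose estimation from near-body views, facial expression estimation from combined audio and video, and retargeting of the intermediate results onto a pre-scanned model of the user. Below is a minimal PyTorch sketch of the audio-visual fusion idea only; the two-branch late-fusion layout, layer sizes, and blendshape-weight output are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only: a two-branch CNN that fuses a near-view
# face crop with an audio spectrogram window to regress expression
# (blendshape-style) weights. All architecture choices are assumptions.
import torch
import torch.nn as nn

class AudioVisualExpressionNet(nn.Module):
    def __init__(self, n_expressions: int = 51):
        super().__init__()
        # Video branch: small CNN over a 64x64 grayscale near-view crop.
        self.video = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (B, 32)
        )
        # Audio branch: 1-D CNN over a 40-mel spectrogram window.
        self.audio = nn.Sequential(
            nn.Conv1d(40, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),      # -> (B, 32)
        )
        # Late fusion: concatenate both embeddings, regress weights in [0, 1].
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_expressions), nn.Sigmoid(),
        )

    def forward(self, frame, spectrogram):
        fused = torch.cat([self.video(frame), self.audio(spectrogram)], dim=1)
        return self.head(fused)

# One 64x64 crop plus a 40-mel x 20-frame audio window.
net = AudioVisualExpressionNet()
out = net(torch.randn(1, 1, 64, 64), torch.randn(1, 40, 20))
print(out.shape)  # torch.Size([1, 51])

In the pipeline the abstract describes, per-frame expression estimates of this kind, together with the body pose estimates, would drive the high-fidelity pre-scanned model during retargeting.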