LEADER |
01000caa a22002652 4500 |
001 |
NLM355329433 |
003 |
DE-627 |
005 |
20240628231900.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2024 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TVCG.2023.3255991
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1454.xml
|
035 |
|
|
|a (DE-627)NLM355329433
|
035 |
|
|
|a (NLM)37028286
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Han, DongHeun
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a VR-HandNet
|b A Visually and Physically Plausible Hand Manipulation System in Virtual Reality
|
264 |
|
1 |
|c 2024
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 28.06.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a This study aims to allow users to perform dexterous hand manipulation of objects in virtual environments with hand-held VR controllers. To this end, the VR controller is mapped to the virtual hand, and hand motions are dynamically synthesized when the virtual hand approaches an object. At each frame, given information about the virtual hand, the VR controller input, and hand-object spatial relations, a deep neural network determines the desired joint orientations of the virtual hand model for the next frame. The desired orientations are then converted into a set of torques acting on the hand joints and applied in a physics simulation to determine the hand pose at the next frame. The deep neural network, named VR-HandNet, is trained with a reinforcement learning-based approach. It can therefore produce physically plausible hand motion, since the trial-and-error training process learns how the hand-object interaction is performed in an environment simulated by a physics engine. Furthermore, we adopted an imitation learning paradigm to increase visual plausibility by mimicking reference motion datasets. Through ablation studies, we validated that the proposed method is effectively constructed and successfully serves our design goal. A live demo is presented in the supplementary video.
|
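The abstract above describes a per-frame pipeline: a policy network maps the virtual hand state, controller input, and hand-object relations to desired joint orientations, which are converted into joint torques and applied in a physics simulation. The Python sketch below is not taken from the paper or the record; it is a minimal illustration of such a loop under stated assumptions. The policy function, joint count, gains, input dimensions, and the mocked physics_step are hypothetical stand-ins; a real system would query the trained VR-HandNet and a physics engine.

import numpy as np

NUM_JOINTS = 20  # hypothetical joint count for the hand model

def policy(hand_state, controller_input, hand_object_relation):
    """Hypothetical stand-in for VR-HandNet: returns desired joint orientations."""
    features = np.concatenate([hand_state, controller_input, hand_object_relation])
    # Placeholder mapping; the real network is trained with RL plus imitation learning.
    return np.tanh(features[:NUM_JOINTS])

def pd_torques(q, qdot, q_desired, kp=50.0, kd=2.0):
    """Convert desired joint orientations into joint torques (assumed PD control)."""
    return kp * (q_desired - q) - kd * qdot

def physics_step(q, qdot, torques, dt=1.0 / 90.0):
    """Mock physics update; a real system would step a physics engine here."""
    qddot = torques  # unit joint inertia assumed for illustration
    qdot = qdot + qddot * dt
    q = q + qdot * dt
    return q, qdot

# One simulated frame of the control loop described in the abstract.
q = np.zeros(NUM_JOINTS)            # current joint angles
qdot = np.zeros(NUM_JOINTS)         # current joint velocities
hand_state = np.concatenate([q, qdot])
controller_input = np.zeros(7)      # e.g. controller pose (position + quaternion), assumed
hand_object_relation = np.zeros(13) # hand-object spatial features, assumed

q_desired = policy(hand_state, controller_input, hand_object_relation)
torques = pd_torques(q, qdot, q_desired)
q, qdot = physics_step(q, qdot, torques)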
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Lee, RoUn
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Kim, KyeongMin
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Kang, HyeongYeop
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on visualization and computer graphics
|d 1996
|g 30(2024), 7 vom: 05. Juni, Seite 4170-4182
|w (DE-627)NLM098269445
|x 1941-0506
|7 nnns
|
773 |
1 |
8 |
|g volume:30
|g year:2024
|g number:7
|g day:05
|g month:06
|g pages:4170-4182
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TVCG.2023.3255991
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 30
|j 2024
|e 7
|b 05
|c 06
|h 4170-4182
|