LEADER 01000caa a22002652c 4500
001 NLM386051453
003 DE-627
005 20250906233503.0
007 cr uuu---uuuuu
008 250508s2025 xx |||||o 00| ||eng c
024 7  |a 10.1109/TVCG.2025.3553975 |2 doi
028 52 |a pubmed25n1558.xml
035    |a (DE-627)NLM386051453
035    |a (NLM)40131751
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Liu, Xinxin |e verfasserin |4 aut
245 10 |a H2O-NeRF |b Radiance Fields Reconstruction for Two-Hand-Held Objects
264  1 |c 2025
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Revised 05.09.2025
500    |a published: Print
500    |a Citation Status PubMed-not-MEDLINE
520    |a Our work aims to reconstruct the appearance and geometry of a two-hand-held object from a sequence of color images. In contrast to traditional single-hand-held manipulation, two-hand holding allows more flexible interaction and thereby provides back views of the object, which is particularly convenient for reconstruction but generates complex view-dependent occlusions. Recent developments in neural rendering offer new potential for hand-held object reconstruction. In this paper, we propose a novel neural representation-based framework, named H2O-NeRF, to recover radiance fields of the two-hand-held object. We first design an object-centric semantic module based on geometric signed distance function cues to predict 3D object-centric regions, and we develop a view-dependent visible module based on image-related cues to label 2D occluded regions. We then combine them to obtain a 2D visible mask that adaptively guides ray sampling on the object for optimization. We also provide a newly collected H2O dataset to validate the proposed method. Experiments show that our method achieves superior performance in reconstruction completeness and view-consistent synthesis compared to state-of-the-art methods.
650  4 |a Journal Article
700 1  |a Zhang, Qi |e verfasserin |4 aut
700 1  |a Huang, Xin |e verfasserin |4 aut
700 1  |a Feng, Ying |e verfasserin |4 aut
700 1  |a Zhou, Guoqing |e verfasserin |4 aut
700 1  |a Wang, Qing |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on visualization and computer graphics |d 1996 |g 31(2025), 10 vom: 03. Sept., Seite 7696-7710 |w (DE-627)NLM098269445 |x 1941-0506 |7 nnas
773 18 |g volume:31 |g year:2025 |g number:10 |g day:03 |g month:09 |g pages:7696-7710
856 40 |u http://dx.doi.org/10.1109/TVCG.2025.3553975 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 31 |j 2025 |e 10 |b 03 |c 09 |h 7696-7710