EventPointMesh : Human Mesh Recovery Solely From Event Point Clouds

How much can we infer about human shape using an event camera that only detects the pixel position where the luminance changed and its timestamp? This neuromorphic vision technology captures changes in pixel values at ultra-high speeds, regardless of the variations in environmental lighting brightness.
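The abstract's core idea is to treat the raw event stream, each event being a pixel position plus a timestamp, as a three-dimensional spatio-temporal point cloud. A minimal sketch of that conversion step is shown below; the function name, input layout, and normalization scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def events_to_point_cloud(events, width, height):
    """Convert raw events (x, y, t) into a normalized 3D point cloud.

    `events` is an (N, 3) sequence of pixel x, pixel y, and timestamp.
    Names and the [0, 1] normalization are illustrative assumptions,
    not the EventPointMesh implementation.
    """
    events = np.asarray(events, dtype=np.float64)
    x = events[:, 0] / (width - 1)    # normalize x to [0, 1]
    y = events[:, 1] / (height - 1)   # normalize y to [0, 1]
    t = events[:, 2]
    # normalize timestamps to [0, 1]; guard against a zero time span
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    return np.stack([x, y, t], axis=1)  # (N, 3) spatio-temporal points

# Example: three events on a hypothetical 640x480 event sensor
cloud = events_to_point_cloud(
    [(0, 0, 1000), (319, 239, 1500), (639, 479, 2000)], 640, 480)
print(cloud.shape)  # (3, 3)
```

Once in this form, the points can be processed with standard point-cloud networks; the paper's coarse-to-fine strategy additionally groups points by body segment to extract local features alongside global ones.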

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - PP(2024), 18 Sept.
Main Author: Hori, Ryosuke (Author)
Other Authors: Isogawa, Mariko; Mikami, Dan; Saito, Hideo
Format: Online Article
Language: English
Published: 2024
Access to parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM37779564X
003 DE-627
005 20240919233203.0
007 cr uuu---uuuuu
008 240919s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2024.3462816  |2 doi 
028 5 2 |a pubmed24n1539.xml 
035 |a (DE-627)NLM37779564X 
035 |a (NLM)39292570 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Hori, Ryosuke  |e verfasserin  |4 aut 
245 1 0 |a EventPointMesh  |b Human Mesh Recovery Solely From Event Point Clouds 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 18.09.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a How much can we infer about human shape using an event camera that only detects the pixel position where the luminance changed and its timestamp? This neuromorphic vision technology captures changes in pixel values at ultra-high speeds, regardless of the variations in environmental lighting brightness. Existing methods for human mesh recovery (HMR) from event data need to utilize intensity images captured with a generic frame-based camera, rendering them vulnerable to low-light conditions, energy/memory constraints, and privacy issues. In contrast, we explore the potential of solely utilizing event data to alleviate these issues and ascertain whether it offers adequate cues for HMR, as illustrated in Fig. 1. This is a quite challenging task due to the substantially limited information ensuing from the absence of intensity images. To this end, we propose EventPointMesh, a framework which treats event data as a three-dimensional (3D) spatio-temporal point cloud for reconstructing the human mesh. By employing a coarse-to-fine pose feature extraction strategy, we extract both global features and local features. The local features are derived by processing the spatio-temporally dispersed event points into groups associated with individual body segments. This combination of global and local features allows the framework to achieve a more accurate HMR, capturing subtle differences in human movements. Experiments demonstrate that our method with only sparse event data outperforms baseline methods. The dataset and code will be available at https://github.com/RyosukeHori/EventPointMesh 
650 4 |a Journal Article 
700 1 |a Isogawa, Mariko  |e verfasserin  |4 aut 
700 1 |a Mikami, Dan  |e verfasserin  |4 aut 
700 1 |a Saito, Hideo  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g PP(2024) vom: 18. Sept.  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:PP  |g year:2024  |g day:18  |g month:09 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2024.3462816  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2024  |b 18  |c 09