LEADER |
01000naa a22002652 4500 |
001 |
NLM297140965 |
003 |
DE-627 |
005 |
20231225091341.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2020 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2019.2916873
|2 doi
|
028 |
5 |
2 |
|a pubmed24n0990.xml
|
035 |
|
|
|a (DE-627)NLM297140965
|
035 |
|
|
|a (NLM)31095476
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Liu, Jun
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a NTU RGB+D 120
|b A Large-Scale Benchmark for 3D Human Activity Understanding
|
264 |
|
1 |
|c 2020
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 25.06.2021
|
500 |
|
|
|a Date Revised 25.06.2021
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status MEDLINE
|
520 |
|
|
|a Research on depth-based human activity analysis has achieved outstanding performance and demonstrated the effectiveness of 3D representation for action recognition. The existing depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of large-scale training samples, a realistic number of distinct class categories, diversity in camera views, varied environmental conditions, and variety of human subjects. In this work, we introduce a large-scale dataset for RGB+D human action recognition, which is collected from 106 distinct subjects and contains more than 114 thousand video samples and 8 million frames. This dataset contains 120 different action classes including daily, mutual, and health-related activities. We evaluate the performance of a series of existing 3D activity analysis methods on this dataset, and show the advantage of applying deep learning methods for 3D-based human action recognition. Furthermore, we investigate a novel one-shot 3D activity recognition problem on our dataset, and a simple yet effective Action-Part Semantic Relevance-aware (APSR) framework is proposed for this task, which yields promising results for recognition of the novel action classes. We believe the introduction of this large-scale dataset will enable the community to apply, adapt, and develop various data-hungry learning techniques for depth-based and RGB+D-based human activity understanding.
|
650 |
|
4 |
|a Journal Article
|
650 |
|
4 |
|a Research Support, Non-U.S. Gov't
|
700 |
1 |
|
|a Shahroudy, Amir
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Perez, Mauricio
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wang, Gang
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Duan, Ling-Yu
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Kot, Alex C
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 42(2020), 10 vom: 01. Okt., Seite 2684-2701
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:42
|g year:2020
|g number:10
|g day:01
|g month:10
|g pages:2684-2701
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2019.2916873
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 42
|j 2020
|e 10
|b 01
|c 10
|h 2684-2701
|