Learning Clip Representations for Skeleton-Based 3D Action Recognition

This paper presents a new representation of skeleton sequences for 3D action recognition. Existing methods based on hand-crafted features or recurrent neural networks cannot adequately capture the complex spatial structures and the long-term temporal dynamics of the skeleton sequences, which are very important for recognizing the actions. In this paper, we propose to transform each channel of the 3D coordinates of a skeleton sequence into a clip. Each frame of the generated clip represents the temporal information of the entire skeleton sequence and one particular spatial relationship between the skeleton joints. The entire clip incorporates multiple frames with different spatial relationships, which provide useful spatial structural information about the human skeleton. We also propose a multitask convolutional neural network (MTCNN) to learn from the generated clips for action recognition. The proposed MTCNN processes all the frames of the generated clips in parallel to explore the spatial and temporal information of the skeleton sequences. The proposed method has been extensively tested on six challenging benchmark datasets. Experimental results consistently demonstrate the superiority of the proposed clip representation and the feature learning method for 3D action recognition compared to existing techniques.
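The clip construction described in the abstract can be illustrated with a minimal sketch in Python/NumPy. The choice of reference joints and the use of relative joint offsets as the "spatial relationships" are assumptions made here for illustration; the record does not specify which relationships the authors actually use. The sketch only shows the general idea: each coordinate channel becomes a clip whose frames each span the full sequence in time while encoding one spatial relationship.

import numpy as np

def skeleton_to_clips(seq, ref_joints=(0, 4, 8, 12)):
    """Turn a skeleton sequence into one clip per coordinate channel.

    seq: float array of shape (T, J, 3) -- T time steps, J joints, (x, y, z).
    ref_joints: hypothetical reference joints; each choice yields one clip
        frame encoding a different spatial relationship between joints.

    Returns a list of 3 clips, one per channel, each of shape
    (len(ref_joints), T, J). Every clip frame spans the whole sequence in
    time, so it carries the long-term temporal dynamics, while the chosen
    reference joint fixes the spatial relationship that the frame encodes.
    """
    T, J, C = seq.shape
    clips = []
    for c in range(C):                       # one clip per coordinate channel
        frames = []
        for r in ref_joints:                 # one frame per spatial relationship
            rel = seq[:, :, c] - seq[:, r:r + 1, c]   # joints relative to joint r
            # Rescale to [0, 255] so each frame can be treated as a grayscale image.
            lo, hi = rel.min(), rel.max()
            frames.append(255.0 * (rel - lo) / (hi - lo + 1e-8))
        clips.append(np.stack(frames))       # shape (num_frames, T, J)
    return clips

# Usage: a random 100-frame, 25-joint sequence yields 3 clips of 4 frames each.
clips = skeleton_to_clips(np.random.randn(100, 25, 3))
print([c.shape for c in clips])              # [(4, 100, 25), (4, 100, 25), (4, 100, 25)]

A convolutional network can then process the frames of all clips in parallel, in the spirit of the multitask network described in the abstract.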

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 27(2018), issue 6, 30 June, pages 2842-2855
Main Author: Ke, Qiuhong (Author)
Other Authors: Bennamoun, Mohammed; An, Senjian; Sohel, Ferdous; Boussaid, Farid
Format: Online Article
Language: English
Published: 2018
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM282255370
003 DE-627
005 20231225033514.0
007 cr uuu---uuuuu
008 231225s2018 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2018.2812099  |2 doi 
028 5 2 |a pubmed24n0940.xml 
035 |a (DE-627)NLM282255370 
035 |a (NLM)29570086 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Ke, Qiuhong  |e verfasserin  |4 aut 
245 1 0 |a Learning Clip Representations for Skeleton-Based 3D Action Recognition 
264 1 |c 2018 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 30.07.2018 
500 |a Date Revised 30.07.2018 
500 |a published: Print 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a This paper presents a new representation of skeleton sequences for 3D action recognition. Existing methods based on hand-crafted features or recurrent neural networks cannot adequately capture the complex spatial structures and the long-term temporal dynamics of the skeleton sequences, which are very important for recognizing the actions. In this paper, we propose to transform each channel of the 3D coordinates of a skeleton sequence into a clip. Each frame of the generated clip represents the temporal information of the entire skeleton sequence and one particular spatial relationship between the skeleton joints. The entire clip incorporates multiple frames with different spatial relationships, which provide useful spatial structural information about the human skeleton. We also propose a multitask convolutional neural network (MTCNN) to learn from the generated clips for action recognition. The proposed MTCNN processes all the frames of the generated clips in parallel to explore the spatial and temporal information of the skeleton sequences. The proposed method has been extensively tested on six challenging benchmark datasets. Experimental results consistently demonstrate the superiority of the proposed clip representation and the feature learning method for 3D action recognition compared to existing techniques.
650 4 |a Journal Article 
700 1 |a Bennamoun, Mohammed  |e verfasserin  |4 aut 
700 1 |a An, Senjian  |e verfasserin  |4 aut 
700 1 |a Sohel, Ferdous  |e verfasserin  |4 aut 
700 1 |a Boussaid, Farid  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 27(2018), 6 vom: 30. Juni, Seite 2842-2855  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:27  |g year:2018  |g number:6  |g day:30  |g month:06  |g pages:2842-2855 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2018.2812099  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 27  |j 2018  |e 6  |b 30  |c 06  |h 2842-2855