|
|
|
|
LEADER |
01000naa a22002652 4500 |
001 |
NLM295306971 |
003 |
DE-627 |
005 |
20231225083357.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2019 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2019.2907048
|2 doi
|
028 |
5 |
2 |
|a pubmed24n0984.xml
|
035 |
|
|
|a (DE-627)NLM295306971
|
035 |
|
|
|a (NLM)30908224
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Nie, Qiang
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a View-Invariant Human Action Recognition Based on a 3D Bio-Constrained Skeleton Model
|
264 |
|
1 |
|c 2019
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 02.01.2020
|
500 |
|
|
|a Date Revised 02.01.2020
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status MEDLINE
|
520 |
|
|
|a Skeleton-based human action recognition has been a hot topic in recent years. Most existing studies are based on the skeleton data obtained from Kinect, which is noisy and unstable, particularly in the case of occlusions. To cope with noisy skeleton data and variation of viewpoints, this paper presents a view-invariant method for human action recognition that recovers corrupted skeletons based on a 3D bio-constrained skeleton model and visualizes the body-level motion features obtained during the recovery process as images. The bio-constrained skeleton model is defined with two types of constraints: 1) constant bone lengths and 2) motion limits of joints. Based on the bio-constrained model, an effective method is proposed for skeleton recovery. Two new types of motion features, the Euclidean distance matrix between joints (JEDM), which contains the global structure information of the body, and the local dynamic variation of the joint Euler angles (JEAs), are used to describe human action. These two types of features are encoded into different motion images, which are fed into a two-stream convolutional neural network to learn different action patterns. Experiments on three benchmark datasets achieve better accuracy than state-of-the-art approaches, demonstrating the effectiveness of the proposed method.
|
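[Annotation] The abstract above describes encoding a joint Euclidean distance matrix (JEDM) computed from 3D skeleton joints as a motion image for a two-stream CNN. As a rough illustrative sketch only (not the authors' code; the joint count, array shapes, and use of NumPy are assumptions), the per-frame JEDM could be computed along these lines:

    import numpy as np

    def joint_euclidean_distance_matrix(joints):
        """Pairwise Euclidean distances for one skeleton frame.

        joints: array of shape (J, 3) holding 3D coordinates of J body
        joints (joint count and coordinate convention are assumptions).
        Returns a (J, J) matrix whose entry (i, k) is the distance between
        joints i and k, capturing the global body structure mentioned in
        the abstract.
        """
        diff = joints[:, None, :] - joints[None, :, :]   # (J, J, 3) pairwise differences
        return np.sqrt((diff ** 2).sum(axis=-1))         # (J, J) distance matrix

    # Hypothetical example with a 25-joint skeleton (e.g. a Kinect v2-style layout):
    frame = np.random.rand(25, 3)
    jedm = joint_euclidean_distance_matrix(frame)
    # Stacking per-frame JEDMs over time and rescaling to an image range is
    # one plausible way to form the "motion image" input the abstract mentions.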
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Wang, Jiangliu
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wang, Xin
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Liu, Yunhui
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 28(2019), 8 vom: 22. Aug., Seite 3959-3972
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:28
|g year:2019
|g number:8
|g day:22
|g month:08
|g pages:3959-3972
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2019.2907048
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 28
|j 2019
|e 8
|b 22
|c 08
|h 3959-3972
|