View-Invariant Human Action Recognition Based on a 3D Bio-Constrained Skeleton Model

Bibliographic details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - 28(2019), 8, 22 Aug., pages 3959-3972
First author: Nie, Qiang (author)
Other authors: Wang, Jiangliu, Wang, Xin, Liu, Yunhui
Format: Online article
Language: English
Published: 2019
Access to parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Abstract: Skeleton-based human action recognition has been a hot topic in recent years. Most existing studies are based on skeleton data obtained from Kinect, which are noisy and unstable, particularly in the case of occlusions. To cope with noisy skeleton data and variation of viewpoints, this paper presents a view-invariant method for human action recognition that recovers corrupted skeletons based on a 3D bio-constrained skeleton model and visualizes the body-level motion features obtained during the recovery process as images. The bio-constrained skeleton model is defined with two types of constraints: 1) constant bone lengths and 2) motion limits of joints. Based on this model, an effective skeleton recovery method is proposed. Two new types of motion features are used to describe human action: the Euclidean distance matrix between joints (JEDM), which captures the global structure of the body, and the local dynamic variation of the joint Euler angles (JEAs). These features are encoded into different motion images, which are fed into a two-stream convolutional neural network to learn different action patterns. Experiments on three benchmark datasets achieve higher accuracy than state-of-the-art approaches, demonstrating the effectiveness of the proposed method.
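
As a rough illustration of the features described in the abstract, the sketch below computes a per-frame joint Euclidean distance matrix (JEDM), stacks it into an image-like tensor, and flags frames that violate the constant-bone-length constraint. This is a minimal sketch under stated assumptions, not the authors' implementation: the joint count (25, Kinect v2 layout), the bone list, the normalization scheme, and the tolerance are all assumptions.

```python
import numpy as np

def jedm(joints: np.ndarray) -> np.ndarray:
    """Pairwise Euclidean distance matrix for one (J, 3) frame of joints."""
    diff = joints[:, None, :] - joints[None, :, :]   # (J, J, 3)
    return np.linalg.norm(diff, axis=-1)             # (J, J)

def to_motion_image(frames: np.ndarray) -> np.ndarray:
    """Stack per-frame JEDMs of a (T, J, 3) sequence and rescale to [0, 255],
    giving an image-like (T, J, J) tensor that a CNN stream can consume."""
    mats = np.stack([jedm(f) for f in frames])       # (T, J, J)
    mats = mats / (mats.max() + 1e-8)                # normalization scheme is an assumption
    return (mats * 255).astype(np.uint8)

def bone_length_violations(frames, bones, ref_lengths, tol=0.05):
    """Flag (frame, bone) pairs whose length deviates from the reference
    skeleton by more than `tol` (relative) -- candidates for recovery."""
    i, j = zip(*bones)
    lengths = np.linalg.norm(frames[:, list(i)] - frames[:, list(j)], axis=-1)  # (T, B)
    return np.abs(lengths - ref_lengths) / ref_lengths > tol

# Toy usage with a random 25-joint sequence (Kinect v2 joint count assumed):
seq = np.random.rand(30, 25, 3).astype(np.float32)
img = to_motion_image(seq)                        # (30, 25, 25) uint8 motion image
bones = [(0, 1), (1, 20), (20, 2)]                # hypothetical subset of the bone list
ref_lengths = np.linalg.norm(seq[0, [0, 1, 20]] - seq[0, [1, 20, 2]], axis=-1)
mask = bone_length_violations(seq, bones, ref_lengths)
print(img.shape, mask.shape)                      # (30, 25, 25) (30, 3)
```

In the paper, motion images of this kind (together with JEA-based images) feed a two-stream convolutional network; the sketch stops at feature extraction and constraint checking.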
Description: Date Completed 02.01.2020
Date Revised 02.01.2020
Published: Print-Electronic
Citation Status: MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2019.2907048