Human Action Recognition From Various Data Modalities: A Review

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - Vol. 45 (2023), No. 3, 14 March, pp. 3200-3225
First Author: Sun, Zehua (Author)
Other Authors: Ke, Qiuhong; Rahmani, Hossein; Bennamoun, Mohammed; Wang, Gang; Liu, Jun
Format: Online Article
Language: English
Published: 2023
Access to parent work: IEEE Transactions on Pattern Analysis and Machine Intelligence
Subjects: Review; Journal Article; Research Support, Non-U.S. Gov't
Description
Abstract: Human Action Recognition (HAR) aims to understand human behavior and assign a label to each action. It has a wide range of applications and has therefore been attracting increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, radar, and WiFi signals, which encode different sources of useful yet distinct information and offer various advantages depending on the application scenario. Consequently, many existing works have investigated different types of approaches for HAR using various modalities. In this article, we present a comprehensive survey of recent progress in deep learning methods for HAR based on the type of input data modality. Specifically, we review the current mainstream deep learning methods for single data modalities and multiple data modalities, including fusion-based and co-learning-based frameworks. We also present comparative results on several benchmark datasets for HAR, together with insightful observations and inspiring future research directions.
Description: Date Completed 10.04.2023
Date Revised 05.05.2023
Published: Print-Electronic
Citation Status: MEDLINE
ISSN: 1939-3539
DOI: 10.1109/TPAMI.2022.3183112