Simultaneously-Collected Multimodal Lying Pose Dataset: Enabling In-Bed Human Pose Monitoring

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45 (2023), no. 1, 3 Jan., pp. 1106-1118
Main author: Liu, Shuangjun (Author)
Other authors: Huang, Xiaofei; Fu, Nihang; Li, Cheng; Su, Zhongnan; Ostadabbas, Sarah
Format: Online article
Language: English
Published: 2023
Parent work: IEEE Transactions on Pattern Analysis and Machine Intelligence
Subjects: Journal Article; Research Support, U.S. Gov't, Non-P.H.S.
Description
Abstract: The computer vision field has achieved great success in interpreting semantic meanings from images, yet its algorithms can be brittle for tasks with adverse vision conditions or limited data/label pairs. Among these tasks is in-bed human pose monitoring, which has significant value in many healthcare applications. In-bed pose monitoring in natural settings involves pose estimation in complete darkness or under full occlusion. The lack of publicly available in-bed pose datasets hinders the applicability of many successful human pose estimation algorithms to this task. In this paper, we introduce our Simultaneously-collected multimodal Lying Pose (SLP) dataset, which includes in-bed pose images from 109 participants captured using multiple imaging modalities: RGB, long-wave infrared (LWIR), depth, and pressure map. We also present a physical hyperparameter tuning strategy for ground-truth pose label generation under adverse vision conditions. The SLP design is compatible with mainstream human pose datasets; therefore, state-of-the-art 2D pose estimation models can be trained effectively on the SLP data, with promising performance as high as 95% at PCKh@0.5 on a single modality. The pose estimation performance of these models can be further improved by including additional modalities through the proposed collaborative scheme.
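For reference, the PCKh@0.5 figure quoted in the abstract is the standard head-normalized Percentage of Correct Keypoints: a predicted joint counts as correct if its distance to the ground-truth joint is below 0.5 times the head segment length. The sketch below illustrates the metric only; the array shapes, the MPII-style head-segment normalization, and the toy data are assumptions for illustration and are not the SLP evaluation code.

import numpy as np

def pckh(pred, gt, head_sizes, visible, thresh=0.5):
    """Percentage of Correct Keypoints, head-normalized (PCKh).

    pred, gt   : (N, J, 2) predicted / ground-truth joint coordinates
    head_sizes : (N,) head segment lengths (e.g. head-top to upper-neck distance)
    visible    : (N, J) boolean mask of annotated joints
    thresh     : a joint is correct if its error < thresh * head size
    """
    dists = np.linalg.norm(pred - gt, axis=-1)      # (N, J) per-joint pixel errors
    norm = dists / head_sizes[:, None]              # normalize by head segment length
    correct = (norm < thresh) & visible             # correct among annotated joints
    return correct.sum() / visible.sum() * 100.0    # percentage over all annotated joints

# Toy example with 14 joints, just to show the call signature.
rng = np.random.default_rng(0)
pred = rng.uniform(0, 256, size=(8, 14, 2))
gt = pred + rng.normal(0, 5, size=(8, 14, 2))
head = np.full(8, 30.0)
vis = np.ones((8, 14), dtype=bool)
print(f"PCKh@0.5 = {pckh(pred, gt, head, vis):.1f}%")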
Description: Date Completed 06.04.2023
Date Revised 05.05.2023
Published: Print-Electronic
Citation Status MEDLINE
ISSN:1939-3539
DOI:10.1109/TPAMI.2022.3155712