A Dual-Branch Self-Boosting Framework for Self-Supervised 3D Hand Pose Estimation


Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 31 (2022), dated: 09., pages 5052-5066
First author: Ren, Pengfei (author)
Other authors: Sun, Haifeng, Hao, Jiachang, Qi, Qi, Wang, Jingyu, Liao, Jianxin
Format: Online article
Language: English
Published: 2022
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Description
Abstract: Although 3D hand pose estimation has made significant progress in recent years with the development of deep neural networks, most learning-based methods require a large amount of labeled data that is time-consuming to collect. In this paper, we propose a dual-branch self-boosting framework for self-supervised 3D hand pose estimation from depth images. First, we adopt a simple yet effective image-to-image translation technique to generate realistic depth images from synthetic data for network pre-training. Second, we propose a dual-branch network that performs 3D hand model estimation and pixel-wise pose estimation in a decoupled way. Through a part-aware model-fitting loss, the network can be updated according to the fine-grained differences between the hand model and the unlabeled real image. Through an inter-branch loss, the two complementary branches can boost each other continuously during self-supervised learning. Furthermore, we adopt a refinement stage to better utilize the prior structural information in the estimated hand model for a more accurate and robust estimation. Our method outperforms previous self-supervised methods by a large margin without using paired multi-view images, and achieves results comparable to strongly supervised methods. In addition, by adopting our regenerated pose annotations, the performance of skeleton-based gesture recognition is significantly improved.
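The inter-branch loss mentioned in the abstract can be sketched, in a minimal and purely illustrative form, as an L2 consistency term between the 3D joint predictions of the two branches, so that each branch can supervise the other on unlabeled images. The function name, joint count, and array shapes below are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

def inter_branch_loss(joints_model, joints_pixel):
    """Mean squared 3D distance between the two branches' joint predictions.

    joints_model: (J, 3) joints from the hand-model-estimation branch.
    joints_pixel: (J, 3) joints from the pixel-wise-estimation branch.
    """
    diff = joints_model - joints_pixel
    # Squared Euclidean distance per joint, averaged over all J joints.
    return float(np.mean(np.sum(diff ** 2, axis=-1)))

# Toy example: 21 hand joints with a small disagreement between branches.
rng = np.random.default_rng(0)
joints_a = rng.normal(size=(21, 3))
joints_b = joints_a + 0.01 * rng.normal(size=(21, 3))
loss = inter_branch_loss(joints_a, joints_b)
```

In a self-supervised setting such a term would typically be combined with the part-aware model-fitting loss, with gradients flowing into both branches so they boost each other as training proceeds.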
Description: Date Completed 04.08.2022
Date Revised 04.08.2022
published: Print-Electronic
Citation Status MEDLINE
ISSN:1941-0042
DOI:10.1109/TIP.2022.3192708