Boosting Monocular 3D Human Pose Estimation With Part Aware Attention

Monocular 3D human pose estimation is challenging due to depth ambiguity. Convolution-based and Graph-Convolution-based methods have been developed to extract 3D information from temporal cues in motion videos. Typically, in the lifting-based methods, most recent works adopt the transformer to model the temporal relationship of 2D keypoint sequences ...
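The two attention modules named in the abstract lend themselves to a compact illustration. What follows is a minimal PyTorch sketch, assuming a hypothetical five-part grouping of the common 17-joint Human3.6M skeleton and made-up feature sizes; it is not the authors' released implementation, which is available at the GitHub link in the full abstract below. The first module restricts temporal self-attention to each body part's own features; the second cross-attends each part to a dictionary of part features, which the paper samples from the training set but which this sketch fills with random placeholders.

    # A minimal sketch, assuming PyTorch, a hypothetical five-part split
    # of one common 17-joint Human3.6M ordering, and made-up dimensions.
    # Not the authors' released code (see the GitHub link in the abstract).
    import torch
    import torch.nn as nn

    # Hypothetical joint grouping; the paper's exact partition may differ.
    PARTS = {
        "torso":     [0, 7, 8, 9, 10],
        "right_leg": [1, 2, 3],
        "left_leg":  [4, 5, 6],
        "left_arm":  [11, 12, 13],
        "right_arm": [14, 15, 16],
    }

    class PartAwareTemporalAttention(nn.Module):
        """Temporal self-attention computed separately per body part, so
        each part's motion pattern gets its own attention weights."""
        def __init__(self, dim_per_joint=32, heads=4):
            super().__init__()
            self.attn = nn.ModuleDict({
                name: nn.MultiheadAttention(dim_per_joint * len(idx),
                                            heads, batch_first=True)
                for name, idx in PARTS.items()
            })

        def forward(self, x):                  # x: (batch, frames, 17, dim)
            b, t, _, c = x.shape
            out = x.clone()
            for name, idx in PARTS.items():
                part = x[:, :, idx, :].reshape(b, t, len(idx) * c)
                part, _ = self.attn[name](part, part, part)  # over frames
                out[:, :, idx, :] = part.reshape(b, t, len(idx), c)
            return out

    class PartAwareDictionaryAttention(nn.Module):
        """Cross-attention from each part's features to a per-part
        dictionary; the paper draws the dictionary from training-set
        skeletons, while this sketch uses random placeholder entries."""
        def __init__(self, dim_per_joint=32, dict_size=64, heads=4):
            super().__init__()
            self.attn = nn.ModuleDict()
            for name, idx in PARTS.items():
                e = dim_per_joint * len(idx)
                self.attn[name] = nn.MultiheadAttention(e, heads,
                                                        batch_first=True)
                self.register_buffer(f"dict_{name}",
                                     torch.randn(1, dict_size, e))

        def forward(self, x):                  # x: (batch, frames, 17, dim)
            b, t, _, c = x.shape
            out = x.clone()
            for name, idx in PARTS.items():
                q = x[:, :, idx, :].reshape(b, t, len(idx) * c)
                kv = getattr(self, f"dict_{name}").expand(b, -1, -1)
                q, _ = self.attn[name](q, kv, kv)  # beyond the input clip
                out[:, :, idx, :] = q.reshape(b, t, len(idx), c)
            return out

    x = torch.randn(2, 243, 17, 32)            # a 243-frame keypoint clip
    x = PartAwareDictionaryAttention()(PartAwareTemporalAttention()(x))

Splitting the skeleton this way keeps a single attention map from averaging a fast-moving arm with a near-static torso, which is exactly the part-wise motion inconsistency the abstract points to; the dictionary module then supplies part-wise correlation at a distance, across periods, actions, and subjects.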

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 31(2022), of the 16th, pages 4278-4291
First author: Xue, Youze (Author)
Other authors: Chen, Jiansheng, Gu, Xiangming, Ma, Huimin, Ma, Hongbing
Format: Online article
Language: English
Published: 2022
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM342304755
003 DE-627
005 20231226013808.0
007 cr uuu---uuuuu
008 231226s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2022.3182269  |2 doi 
028 5 2 |a pubmed24n1140.xml 
035 |a (DE-627)NLM342304755 
035 |a (NLM)35709111 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Xue, Youze  |e verfasserin  |4 aut 
245 1 0 |a Boosting Monocular 3D Human Pose Estimation With Part Aware Attention 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 01.07.2022 
500 |a Date Revised 01.07.2022 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a Monocular 3D human pose estimation is challenging due to depth ambiguity. Convolution-based and Graph-Convolution-based methods have been developed to extract 3D information from temporal cues in motion videos. Typically, in the lifting-based methods, most recent works adopt the transformer to model the temporal relationship of 2D keypoint sequences. These previous works usually consider all the joints of a skeleton as a whole and then calculate the temporal attention based on the overall characteristics of the skeleton. Nevertheless, the human skeleton exhibits obvious part-wise inconsistency of motion patterns. It is therefore more appropriate to consider each part's temporal behaviors separately. To deal with such part-wise motion inconsistency, we propose the Part Aware Temporal Attention module to extract the temporal dependency of each part separately. Moreover, the conventional attention mechanism in 3D pose estimation usually calculates attention within a short time interval. This indicates that only the correlation within the temporal context is considered. However, we find that the part-wise structure of the human skeleton repeats across different periods, actions, and even subjects. Therefore, the part-wise correlation at a distance can be utilized to further boost 3D pose estimation. We thus propose the Part Aware Dictionary Attention module to calculate the attention for the part-wise features of input in a dictionary, which contains multiple 3D skeletons sampled from the training set. Extensive experimental results show that our proposed part aware attention mechanism helps a transformer-based model to achieve state-of-the-art 3D pose estimation performance on two widely used public datasets. The code and the trained models are released at https://github.com/thuxyz19/3D-HPE-PAA 
650 4 |a Journal Article 
700 1 |a Chen, Jiansheng  |e verfasserin  |4 aut 
700 1 |a Gu, Xiangming  |e verfasserin  |4 aut 
700 1 |a Ma, Huimin  |e verfasserin  |4 aut 
700 1 |a Ma, Hongbing  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 31(2022) vom: 16., Seite 4278-4291  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:31  |g year:2022  |g day:16  |g pages:4278-4291 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2022.3182269  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 31  |j 2022  |b 16  |h 4278-4291