LEADER 01000naa a22002652c 4500
001 NLM391241001
003 DE-627
005 20250815232642.0
007 cr uuu---uuuuu
008 250815s2025 xx |||||o 00| ||eng c
024 7  |a 10.1109/TPAMI.2025.3599313 |2 doi
028 52 |a pubmed25n1531.xml
035    |a (DE-627)NLM391241001
035    |a (NLM)40811156
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Zhou, Wengang |e verfasserin |4 aut
245 10 |a Scaling up Multimodal Pre-Training for Sign Language Understanding
264  1 |c 2025
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Revised 14.08.2025
500    |a published: Print-Electronic
500    |a Citation Status Publisher
520    |a Sign language pre-training (SLP) has significantly improved the performance of diverse sign language understanding (SLU) tasks. However, many existing methods employ pre-training techniques tailored to a specific task on small-scale data, resulting in limited model generalization. Others focus solely on visual cues, neglecting the semantic textual cues embedded in sign translation texts. These limitations inherently diminish the representational capacity of pre-trained models. To this end, we present a multimodal SLP framework that leverages rich visual contextual information and vision-language semantic consistency over massively available data to enhance the representational capability for sign language video. Specifically, we first curate a large-scale text-labeled sign pose dataset (~1.5M samples), namely SL-1.5M, from various sources to alleviate the scarcity of pre-training data. Subsequently, we propose a pre-training framework that integrates sign-text contrastive learning with masked pose modeling as the pretext task. In this way, our framework is empowered to effectively capture contextual cues within sign pose sequences and to learn visual representations by aligning them with semantically rich textual features in a latent space. Moreover, to grasp the comprehensive meaning of sign language videos, we concurrently model manual and non-manual information to ensure the holistic integrity of the visual content. To validate the generalization and superiority of the proposed pre-training framework, we conduct extensive experiments on diverse SLU tasks without intricate task-specific design, achieving new state-of-the-art performance on multiple benchmarks.
650  4 |a Journal Article
700 1  |a Zhao, Weichao |e verfasserin |4 aut
700 1  |a Hu, Hezhen |e verfasserin |4 aut
700 1  |a Li, Zecheng |e verfasserin |4 aut
700 1  |a Li, Houqiang |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on pattern analysis and machine intelligence |d 1979 |g PP(2025) vom: 14. Aug. |w (DE-627)NLM098212257 |x 1939-3539 |7 nnas
773 18 |g volume:PP |g year:2025 |g day:14 |g month:08
856 40 |u http://dx.doi.org/10.1109/TPAMI.2025.3599313 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d PP |j 2025 |b 14 |c 08