LEADER |
01000naa a22002652 4500 |
001 |
NLM329504584 |
003 |
DE-627 |
005 |
20231225205248.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2022 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2021.3105387
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1098.xml
|
035 |
|
|
|a (DE-627)NLM329504584
|
035 |
|
|
|a (NLM)34406937
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Zhang, Yi
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Widar3.0
|b Zero-Effort Cross-Domain Gesture Recognition With Wi-Fi
|
264 |
|
1 |
|c 2022
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 06.10.2022
|
500 |
|
|
|a Date Revised 19.11.2022
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status MEDLINE
|
520 |
|
|
|a With the development of signal processing technology, ubiquitous Wi-Fi devices open an unprecedented opportunity to solve the challenging human gesture recognition problem by learning motion representations from wireless signals. Although Wi-Fi-based gesture recognition systems yield good performance on specific data domains, they remain difficult to use in practice without explicit adaptation efforts to new domains. Various pioneering approaches have been proposed to resolve this contradiction, but extra training effort is still necessary, for either data collection or model re-training, whenever new data domains appear. To advance cross-domain recognition and achieve fully zero-effort recognition, we propose Widar3.0, a Wi-Fi-based zero-effort cross-domain gesture recognition system. The key insight of Widar3.0 is to derive and extract domain-independent features of human gestures at the lower signal level, which represent unique kinetic characteristics of gestures and are irrespective of domains. On this basis, we develop a one-fits-all general model that requires only one-time training but can adapt to different data domains. Experiments on various domain factors (i.e., environments, locations, and orientations of persons) demonstrate an accuracy of 92.7% for in-domain recognition and 82.6%-92.4% for cross-domain recognition without model re-training, outperforming state-of-the-art solutions.
|
650 |
|
4 |
|a Journal Article
|
650 |
|
4 |
|a Research Support, Non-U.S. Gov't
|
700 |
1 |
|
|a Zheng, Yue
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Qian, Kun
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhang, Guidong
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Liu, Yunhao
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wu, Chenshu
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Yang, Zheng
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 44(2022), 11 vom: 04. Nov., Seite 8671-8688
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:44
|g year:2022
|g number:11
|g day:04
|g month:11
|g pages:8671-8688
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2021.3105387
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 44
|j 2022
|e 11
|b 04
|c 11
|h 8671-8688
|