LEADER |
01000naa a22002652 4500 |
001 |
NLM329274864 |
003 |
DE-627 |
005 |
20231225204748.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2022 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2021.3103390
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1097.xml
|
035 |
|
|
|a (DE-627)NLM329274864
|
035 |
|
|
|a (NLM)34383644
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Liang, Jian
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Source Data-Absent Unsupervised Domain Adaptation Through Hypothesis Transfer and Labeling Transfer
|
264 |
|
1 |
|c 2022
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 06.10.2022
|
500 |
|
|
|a Date Revised 19.11.2022
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status MEDLINE
|
520 |
|
|
|a Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain. Most existing UDA methods require access to the source data and are therefore not applicable when the data are confidential and cannot be shared due to privacy concerns. This paper tackles a realistic setting in which only a classification model trained on the source data is available, rather than the source data itself. To effectively utilize the source model for adaptation, we propose a novel approach called Source HypOthesis Transfer (SHOT), which learns the feature extraction module for the target domain by fitting the target data features to the frozen source classification module (representing the classification hypothesis). Specifically, SHOT exploits both information maximization and self-supervised learning to learn the feature extraction module, ensuring that the target features are implicitly aligned with the features of the unseen source data via the same hypothesis. Furthermore, we propose a new labeling transfer strategy, which separates the target data into two splits based on the confidence of the predictions (labeling information) and then employs semi-supervised learning to improve the accuracy of the less-confident predictions in the target domain. We denote labeling transfer as SHOT++ when the predictions are obtained by SHOT. Extensive experiments on both digit classification and object recognition tasks show that SHOT and SHOT++ achieve results surpassing or comparable to the state of the art, demonstrating the effectiveness of our approaches for various visual domain adaptation problems. Code will be available at https://github.com/tim-learn/SHOT-plus
|
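[Editor's sketch] The abstract above describes SHOT's core objective in prose; the lines below give a minimal PyTorch sketch of that information-maximization idea (confident per-sample predictions under the frozen source classifier, combined with a diverse marginal class distribution). The names feature_extractor, frozen_classifier, target_batch, and optimizer are illustrative assumptions, not the authors' code; the official implementation is linked at https://github.com/tim-learn/SHOT-plus.

import torch
import torch.nn.functional as F

def information_maximization_loss(logits, eps=1e-6):
    """Encourage confident (low-entropy) yet diverse (high marginal entropy)
    target predictions, as described for SHOT in the abstract above."""
    probs = F.softmax(logits, dim=1)                        # per-sample class probabilities
    ent = -(probs * torch.log(probs + eps)).sum(dim=1)      # per-sample entropy (confidence term)
    mean_probs = probs.mean(dim=0)                          # marginal class distribution over the batch
    div = (mean_probs * torch.log(mean_probs + eps)).sum()  # minimizing this maximizes marginal entropy (diversity term)
    return ent.mean() + div

# Illustrative usage (hypothetical module names): only the target feature
# extractor is updated, while the source classification module stays frozen.
#   features = feature_extractor(target_batch)   # trainable target feature extractor
#   logits = frozen_classifier(features)         # frozen source hypothesis
#   loss = information_maximization_loss(logits)
#   loss.backward(); optimizer.step()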
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Hu, Dapeng
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wang, Yunbo
|e verfasserin
|4 aut
|
700 |
1 |
|
|a He, Ran
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Feng, Jiashi
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 44(2022), 11 vom: 24. Nov., Seite 8602-8617
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:44
|g year:2022
|g number:11
|g day:24
|g month:11
|g pages:8602-8617
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2021.3103390
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 44
|j 2022
|e 11
|b 24
|c 11
|h 8602-8617
|