LEADER 01000naa a22002652 4500
001 NLM33619451X
003 DE-627
005 20231225231602.0
007 cr uuu---uuuuu
008 231225s2023 xx |||||o 00| ||eng c
024 7  |a 10.1109/TPAMI.2022.3146234 |2 doi
028 52 |a pubmed24n1120.xml
035    |a (DE-627)NLM33619451X
035    |a (NLM)35085072
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Fang, Zhen |e verfasserin |4 aut
245 10 |a Semi-Supervised Heterogeneous Domain Adaptation |b Theory and Algorithms
264  1 |c 2023
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 05.04.2023
500    |a Date Revised 05.04.2023
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a Semi-supervised heterogeneous domain adaptation (SsHeDA) aims to train a classifier for the target domain, in which only unlabeled data and a small number of labeled data are available. This is done by leveraging knowledge acquired from a heterogeneous source domain. From an algorithmic perspective, several methods have been proposed to solve the SsHeDA problem; yet there is still no theoretical foundation to explain the nature of the SsHeDA problem or to guide new and better solutions. Motivated by the compatibility condition in semi-supervised probably approximately correct (PAC) theory, we explain the SsHeDA problem by proving its generalization error bound, that is, why labeled heterogeneous source data and unlabeled target data help to reduce the target risk. Guided by our theory, we devise two algorithms as proof of concept. One, kernel heterogeneous domain alignment (KHDA), is a kernel-based algorithm; the other, joint mean embedding alignment (JMEA), is a neural network-based algorithm. When a dataset is small, KHDA's training time is less than JMEA's. When a dataset is large, JMEA is more accurate in the target domain. Comprehensive experiments with image/text classification tasks show KHDA to be the most accurate among all non-neural network baselines, and JMEA to be the most accurate among all baselines.
650  4 |a Journal Article
700 1  |a Lu, Jie |e verfasserin |4 aut
700 1  |a Liu, Feng |e verfasserin |4 aut
700 1  |a Zhang, Guangquan |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on pattern analysis and machine intelligence |d 1979 |g 45(2023), 1 vom: 27. Jan., Seite 1087-1105 |w (DE-627)NLM098212257 |x 1939-3539 |7 nnns
773 18 |g volume:45 |g year:2023 |g number:1 |g day:27 |g month:01 |g pages:1087-1105
856 40 |u http://dx.doi.org/10.1109/TPAMI.2022.3146234 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 45 |j 2023 |e 1 |b 27 |c 01 |h 1087-1105