LEADER |
01000caa a22002652 4500 |
001 |
NLM303178930 |
003 |
DE-627 |
005 |
20240229162411.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2019 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2019.2950768
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1308.xml
|
035 |
|
|
|a (DE-627)NLM303178930
|
035 |
|
|
|a (NLM)31714227
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Chadha, Aaron
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Improved Techniques for Adversarial Discriminative Domain Adaptation
|
264 |
|
1 |
|c 2019
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 27.02.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status Publisher
|
520 |
|
|
|a Adversarial discriminative domain adaptation (ADDA) is an efficient framework for unsupervised domain adaptation in image classification, where the source and target domains are assumed to have the same classes, but no labels are available for the target domain. While ADDA has already achieved better training efficiency and competitive accuracy on image classification in comparison to other adversarial-based methods, we investigate whether we can improve its performance with a new framework and new loss formulations. Following the framework of semi-supervised GANs, we first extend the discriminator output over the source classes, in order to model the joint distribution over domain and task. We thus leverage the distribution over the source encoder posteriors (which is fixed during adversarial training) and propose maximum mean discrepancy (MMD) and reconstruction-based loss functions for aligning the target encoder distribution to the source domain. We compare and provide a comprehensive analysis of how our framework and loss formulations extend over simple multi-class extensions of ADDA and other discriminative variants of semi-supervised GANs. In addition, we introduce various forms of regularization for stabilizing training, including treating the discriminator as a denoising autoencoder and regularizing the target encoder with source examples to reduce overfitting under a contraction mapping (i.e., when the target per-class distributions are contracting during alignment with the source). Finally, we validate our framework on standard datasets like MNIST, USPS, SVHN, MNIST-M and Office-31. We additionally examine how the proposed framework benefits recognition problems based on sensing modalities that lack training data. This is realized by introducing and evaluating on a neuromorphic vision sensing (NVS) sign language recognition dataset, where the source domain constitutes emulated neuromorphic spike events converted from conventional pixel-based video and the target domain is experimental (real) spike events from an NVS camera. Our results on all datasets show that our proposal is both simple and efficient, as it competes with or outperforms the state-of-the-art in unsupervised domain adaptation, such as DIFA and MCDDA, whilst offering lower complexity than other recent adversarial methods.
|
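A minimal sketch (not part of this catalog record) of the MMD-based alignment loss mentioned in the abstract above, assuming PyTorch and a Gaussian kernel; all function and variable names are illustrative and not taken from the authors' code:

import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel values between rows of x and rows of y.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def mmd_loss(source_posteriors, target_posteriors, sigma=1.0):
    # Biased estimate of squared MMD between the (fixed) source encoder
    # posteriors and the target encoder posteriors being aligned to them.
    k_ss = gaussian_kernel(source_posteriors, source_posteriors, sigma).mean()
    k_tt = gaussian_kernel(target_posteriors, target_posteriors, sigma).mean()
    k_st = gaussian_kernel(source_posteriors, target_posteriors, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

In training, only the target encoder would receive gradients from this loss, since the abstract states that the source encoder distribution stays fixed during adversarial training.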
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Andreopoulos, Yiannis
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g (2019) vom: 06. Nov.
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g year:2019
|g day:06
|g month:11
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2019.2950768
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|j 2019
|b 06
|c 11
|