LEADER |
01000caa a22002652c 4500 |
001 |
NLM337448256 |
003 |
DE-627 |
005 |
20250303021529.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2022 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2022.3152631
|2 doi
|
028 |
5 |
2 |
|a pubmed25n1124.xml
|
035 |
|
|
|a (DE-627)NLM337448256
|
035 |
|
|
|a (NLM)35213308
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Feng, Liangjun
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Bias-Eliminated Semantic Refinement for Any-Shot Learning
|
264 |
|
1 |
|c 2022
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 05.04.2022
|
500 |
|
|
|a Date Revised 06.01.2025
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status MEDLINE
|
520 |
|
|
|a When training samples are scarce, the semantic embedding technique, i.e., describing class labels with attributes, provides a condition to generate visual features for unseen objects by transferring the knowledge from seen objects. However, semantic descriptions are usually obtained in an external paradigm, such as manual annotation, resulting in weak consistency between descriptions and visual features. In this paper, we refine the coarse-grained semantic description for any-shot learning tasks, i.e., zero-shot learning (ZSL), generalized zero-shot learning (GZSL), and few-shot learning (FSL). A new model, namely, the semantic refinement Wasserstein generative adversarial network (SRWGAN) model, is designed with the proposed multihead representation and hierarchical alignment techniques. Unlike conventional methods, semantic refinement is performed with the aim of identifying a bias-eliminated condition for disjoint-class feature generation and is applicable in both inductive and transductive settings. We extensively evaluate model performance on six benchmark datasets and observe state-of-the-art results for any-shot learning; e.g., we obtain 70.2% harmonic accuracy for the Caltech UCSD Birds (CUB) dataset and 82.2% harmonic accuracy for the Oxford Flowers (FLO) dataset in the standard GZSL setting. Various visualizations are also provided to show the bias-eliminated generation of SRWGAN. Our code is available.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Zhao, Chunhui
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Xi
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 31(2022) vom: 15., Seite 2229-2244
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnas
|
773 |
1 |
8 |
|g volume:31
|g year:2022
|g day:15
|g pages:2229-2244
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2022.3152631
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 31
|j 2022
|b 15
|h 2229-2244
|