|
|
|
|
LEADER |
01000naa a22002652 4500 |
001 |
NLM32629936X |
003 |
DE-627 |
005 |
20231225194307.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2022 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2021.3086140
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1087.xml
|
035 |
|
|
|a (DE-627)NLM32629936X
|
035 |
|
|
|a (NLM)34081579
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Wang, Yikai
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a How to Trust Unlabeled Data? Instance Credibility Inference for Few-Shot Learning
|
264 |
|
1 |
|c 2022
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 16.09.2022
|
500 |
|
|
|a Date Revised 19.11.2022
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status MEDLINE
|
520 |
|
|
|a Deep learning based models have excelled in many computer vision tasks and appear to surpass human performance. However, these models require an avalanche of expensive, human-labeled training data and many iterations to train their large number of parameters. This severely limits their scalability to real-world long-tail distributed categories, some of which have a large number of instances but only a few manually annotated ones. Learning from such extremely limited labeled examples is known as Few-Shot Learning (FSL). Different from prior work that leverages meta-learning or data-augmentation strategies to alleviate this extreme data scarcity, this paper presents a statistical approach, dubbed Instance Credibility Inference (ICI), that exploits the support of unlabeled instances for few-shot visual recognition. Specifically, we repurpose the self-taught learning paradigm to predict pseudo-labels for unlabeled instances with an initial classifier trained on the few-shot examples, and then select the most confident ones to augment the training set and re-train the classifier. This is achieved by constructing a (Generalized) Linear Model (LM/GLM) with incidental parameters to model the mapping from (un-)labeled features to their (pseudo-)labels, in which the sparsity of the incidental parameters indicates the credibility of the corresponding pseudo-labeled instance. We rank the credibility of pseudo-labeled instances along the regularization path of their corresponding incidental parameters, and the most trustworthy pseudo-labeled examples are preserved as augmented labeled instances. This process is repeated until all unlabeled samples are included in the expanded training set. Theoretically, under the restricted eigenvalue, irrepresentability, and large-error conditions, our approach is guaranteed to collect all correctly predicted pseudo-labeled instances from the noisy pseudo-labeled set. 
Extensive experiments under two few-shot settings show the effectiveness of our approach on four widely used few-shot visual recognition benchmark datasets: miniImageNet, tieredImageNet, CIFAR-FS, and CUB. Code and models are released at https://github.com/Yikai-Wang/ICI-FSL
|
650 |
|
4 |
|a Journal Article
|
650 |
|
4 |
|a Research Support, Non-U.S. Gov't
|
700 |
1 |
|
|a Zhang, Li
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Yao, Yuan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Fu, Yanwei
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 44(2022), 10 vom: 03. Okt., Seite 6240-6253
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:44
|g year:2022
|g number:10
|g day:03
|g month:10
|g pages:6240-6253
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2021.3086140
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 44
|j 2022
|e 10
|b 03
|c 10
|h 6240-6253
|