LEADER |
01000caa a22002652 4500 |
001 |
NLM373030258 |
003 |
DE-627 |
005 |
20240605232928.0 |
007 |
cr uuu---uuuuu |
008 |
240531s2024 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2024.3404663
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1429.xml
|
035 |
|
|
|a (DE-627)NLM373030258
|
035 |
|
|
|a (NLM)38814769
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Dang, Zhuohang
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Disentangled Generation With Information Bottleneck for Enhanced Few-Shot Learning
|
264 |
|
1 |
|c 2024
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 05.06.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Few-shot learning (FSL) poses a significant challenge in classifying unseen classes with limited samples, primarily stemming from the scarcity of data. Although numerous generative approaches have been investigated for FSL, their generation process often results in entangled outputs, exacerbating the distribution shift inherent in FSL and considerably hampering the quality of the generated samples. To address this concern, we present DisGenIB, a framework that leverages an Information Bottleneck (IB) approach for Disentangled Generation and simultaneously ensures both discrimination and diversity in the generated samples. Specifically, we introduce an information-theoretic objective that unifies disentangled representation learning and sample generation within a single framework. In contrast to previous IB-based methods that struggle to leverage priors, DisGenIB incorporates priors as invariant domain knowledge of sub-features, exploiting them to their full potential and thereby enhancing disentanglement. Moreover, we establish a theoretical foundation showing that certain prior generative and disentanglement methods are special instances of DisGenIB, underscoring the versatility of the proposed framework. Comprehensive experiments on demanding FSL benchmarks confirm the efficacy and superiority of DisGenIB and substantiate our theoretical analyses. Our code is available at https://github.com/eric-hang/DisGenIB
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Luo, Minnan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wang, Jihong
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Jia, Chengyou
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Yan, Caixia
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Dai, Guang
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Chang, Xiaojun
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zheng, Qinghua
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 33(2024) vom: 30., Seite 3520-3535
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:33
|g year:2024
|g day:30
|g pages:3520-3535
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2024.3404663
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 33
|j 2024
|b 30
|h 3520-3535
|