Understanding Episode Hardness in Few-Shot Learning

Detailed Description

Achieving generalization for deep learning models has usually suffered from the bottleneck of annotated sample scarcity. As a common way of tackling this issue, few-shot learning focuses on "episodes", i.e. sampled tasks that help the model acquire generalizable knowledge that transfers to unseen categories - the better the episodes, the higher a model's generalizability. Despite extensive research, the characteristics of episodes and their potential effects remain relatively unexplored. A recent paper observed that different episodes exhibit different prediction difficulties and coined a new metric, "hardness", to quantify episodes; this metric, however, is too wide-ranging for an arbitrary dataset and thus remains impractical for realistic applications. In this paper we therefore conduct, for the first time, an algebraic analysis of the critical factors influencing episode hardness, supported by experimental demonstrations, which reveals that episode hardness largely depends on the classes within an episode. Importantly, we propose an efficient pre-sampling hardness assessment technique named the Inverse-Fisher Discriminant Ratio (IFDR). This enables sampling hard episodes at the class level via a class-level (cl) sampling scheme that drastically decreases the quantification cost. Delving deeper, we also develop a variant called class-pair-level (cpl) sampling, which further reduces the sampling cost while guaranteeing the sampled distribution. Finally, comprehensive experiments conducted on benchmark datasets verify the efficacy of our proposed method. Code is available at: https://github.com/PRIS-CV/class-level-sampling
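
The abstract names an Inverse-Fisher Discriminant Ratio (IFDR) and a class-level (cl) sampling scheme but gives neither a formula nor pseudocode. The sketch below is only meant to illustrate the general idea, under the assumption that IFDR inverts the classical Fisher discriminant ratio (within-class scatter divided by between-class scatter) computed on pre-extracted features, so that higher scores mark harder-to-separate class pairs. The function names, the greedy class selection, and the `class_features` input are hypothetical and not taken from the paper; see the linked repository for the authors' actual implementation.

```python
# Illustrative sketch, not the paper's method: IFDR is assumed here to be the
# inverse of the classical Fisher discriminant ratio on pre-extracted features.
import itertools
import numpy as np

def inverse_fisher_discriminant_ratio(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Score how hard two classes are to separate.

    feats_a, feats_b: arrays of shape (n_samples, feat_dim) holding features
    of one class each. Returns within-class scatter divided by between-class
    scatter, so a larger value means the two classes overlap more.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    between = np.sum((mu_a - mu_b) ** 2)                             # between-class scatter
    within = feats_a.var(axis=0).sum() + feats_b.var(axis=0).sum()   # within-class scatter
    return float(within / (between + 1e-12))

def sample_hard_episode(class_features: dict, n_way: int, rng=None) -> list:
    """Class-level sampling sketch: greedily assemble an n-way episode from
    classes whose pairwise IFDR-style scores are high (hypothetical procedure).

    class_features: mapping from class name (str) to a feature array.
    """
    assert n_way <= len(class_features), "cannot pick more classes than available"
    rng = rng or np.random.default_rng()
    classes = sorted(class_features)
    # Pre-sampling assessment: score every class pair once, before episode sampling.
    pair_score = {
        (a, b): inverse_fisher_discriminant_ratio(class_features[a], class_features[b])
        for a, b in itertools.combinations(classes, 2)
    }
    episode = [classes[int(rng.integers(len(classes)))]]  # random seed class
    while len(episode) < n_way:
        remaining = [c for c in classes if c not in episode]
        # Add the class that is, on average, hardest against those already chosen.
        episode.append(max(
            remaining,
            key=lambda c: np.mean([pair_score[tuple(sorted((c, e)))] for e in episode]),
        ))
    return episode
```

Because the pairwise scores are computed once per dataset rather than per sampled episode, the per-episode cost of such a scheme stays low; the cpl variant mentioned in the abstract presumably works directly on these class pairs, but its exact procedure is not described in this record.
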

Bibliographic Details

Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - PP(2024), 08 Oct.
Main Author: Guo, Yurong (Author)
Other Authors: Du, Ruoyi, Sain, Aneeshan, Liang, Kongming, Dong, Yuan, Song, Yi-Zhe, Ma, Zhanyu
Format: Online Article
Language: English
Published: 2024
Access to Parent Work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM378648365
003 DE-627
005 20241009232532.0
007 cr uuu---uuuuu
008 241009s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3476075  |2 doi 
028 5 2 |a pubmed24n1562.xml 
035 |a (DE-627)NLM378648365 
035 |a (NLM)39378258 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Guo, Yurong  |e verfasserin  |4 aut 
245 1 0 |a Understanding Episode Hardness in Few-Shot Learning 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 08.10.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a Achieving generalization for deep learning models has usually suffered from the bottleneck of annotated sample scarcity. As a common way of tackling this issue, few-shot learning focuses on "episodes", i.e. sampled tasks that help the model acquire generalizable knowledge that transfers to unseen categories - the better the episodes, the higher a model's generalizability. Despite extensive research, the characteristics of episodes and their potential effects remain relatively unexplored. A recent paper observed that different episodes exhibit different prediction difficulties and coined a new metric, "hardness", to quantify episodes; this metric, however, is too wide-ranging for an arbitrary dataset and thus remains impractical for realistic applications. In this paper we therefore conduct, for the first time, an algebraic analysis of the critical factors influencing episode hardness, supported by experimental demonstrations, which reveals that episode hardness largely depends on the classes within an episode. Importantly, we propose an efficient pre-sampling hardness assessment technique named the Inverse-Fisher Discriminant Ratio (IFDR). This enables sampling hard episodes at the class level via a class-level (cl) sampling scheme that drastically decreases the quantification cost. Delving deeper, we also develop a variant called class-pair-level (cpl) sampling, which further reduces the sampling cost while guaranteeing the sampled distribution. Finally, comprehensive experiments conducted on benchmark datasets verify the efficacy of our proposed method. Code is available at: https://github.com/PRIS-CV/class-level-sampling 
650 4 |a Journal Article 
700 1 |a Du, Ruoyi  |e verfasserin  |4 aut 
700 1 |a Sain, Aneeshan  |e verfasserin  |4 aut 
700 1 |a Liang, Kongming  |e verfasserin  |4 aut 
700 1 |a Dong, Yuan  |e verfasserin  |4 aut 
700 1 |a Song, Yi-Zhe  |e verfasserin  |4 aut 
700 1 |a Ma, Zhanyu  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g PP(2024) vom: 08. Okt.  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:PP  |g year:2024  |g day:08  |g month:10 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3476075  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2024  |b 08  |c 10