Learning to Learn Task-Adaptive Hyperparameters for Few-Shot Learning

The objective of few-shot learning is to design a system that can adapt to a given task with only a few examples while still generalizing well. Model-agnostic meta-learning (MAML), which has recently gained popularity for its simplicity and flexibility, learns a good initialization for fast adaptation to a task under a few-data regime. However, its performance has been relatively limited, especially when novel tasks differ from the tasks seen during training. In this work, instead of searching for a better initialization, we focus on designing a better fast adaptation process. Consequently, we propose a new task-adaptive weight update rule, ALFA, that greatly enhances the fast adaptation process. Specifically, we introduce a small meta-network that generates per-step hyperparameters for each given task: learning rate and weight decay coefficients. The experimental results validate that learning a good weight update rule for fast adaptation is an equally important component that has drawn relatively little attention in recent few-shot learning approaches. Surprisingly, fast adaptation from random initialization with ALFA can already outperform MAML. Furthermore, the proposed weight update rule is shown to consistently improve the task-adaptation capability of MAML across diverse problem domains: few-shot classification, cross-domain few-shot classification, regression, visual tracking, and video frame interpolation.
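The abstract describes the core mechanism: a small meta-network generates per-step, per-task learning rates and weight decay coefficients that drive the inner-loop (fast adaptation) update. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; the names (HyperparamGenerator, inner_adapt) and the choice of per-tensor mean statistics as the task state are illustrative assumptions.

import torch
import torch.nn as nn

class HyperparamGenerator(nn.Module):
    # Small meta-network: maps a task state built from per-tensor gradient and
    # weight statistics to per-step learning rates and weight decay coefficients.
    def __init__(self, num_param_tensors, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * num_param_tensors, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * num_param_tensors),
        )

    def forward(self, grads, params):
        # Task state: the mean of each gradient tensor and each weight tensor.
        state = torch.stack([g.mean() for g in grads] + [p.mean() for p in params])
        out = self.net(state)
        n = len(params)
        lr = torch.sigmoid(out[:n])   # one learning rate per parameter tensor
        wd = torch.sigmoid(out[n:])   # one weight-decay factor per parameter tensor
        return lr, wd

def inner_adapt(params, loss_fn, support_x, support_y, generator, steps=5):
    # Fast adaptation on a task's support set: at every step the generator
    # produces task- and step-specific hyperparameters for the update
    #     theta <- wd * theta - lr * grad.
    # create_graph=True keeps the inner loop differentiable so the generator
    # (and, optionally, the initialization) can be meta-trained in an outer loop.
    params = [p.clone() for p in params]
    for _ in range(steps):
        loss = loss_fn(params, support_x, support_y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        lr, wd = generator(grads, params)
        params = [wd[i] * p - lr[i] * g for i, (p, g) in enumerate(zip(params, grads))]
    return params

In contrast to MAML's single fixed inner-loop learning rate, lr and wd here depend on both the task and the adaptation step, which is what the abstract calls a task-adaptive weight update rule.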


Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 46(2024), 3, from: 01 Feb., pages 1441-1454
Main Author: Baik, Sungyong (Author)
Other Authors: Choi, Myungsub, Choi, Janghoon, Kim, Heewon, Lee, Kyoung Mu
Format: Online Article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM355353253
003 DE-627
005 20240207232011.0
007 cr uuu---uuuuu
008 231226s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2023.3261387  |2 doi 
028 5 2 |a pubmed24n1283.xml 
035 |a (DE-627)NLM355353253 
035 |a (NLM)37030677 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Baik, Sungyong  |e verfasserin  |4 aut 
245 1 0 |a Learning to Learn Task-Adaptive Hyperparameters for Few-Shot Learning 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 07.02.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a The objective of few-shot learning is to design a system that can adapt to a given task with only a few examples while still generalizing well. Model-agnostic meta-learning (MAML), which has recently gained popularity for its simplicity and flexibility, learns a good initialization for fast adaptation to a task under a few-data regime. However, its performance has been relatively limited, especially when novel tasks differ from the tasks seen during training. In this work, instead of searching for a better initialization, we focus on designing a better fast adaptation process. Consequently, we propose a new task-adaptive weight update rule, ALFA, that greatly enhances the fast adaptation process. Specifically, we introduce a small meta-network that generates per-step hyperparameters for each given task: learning rate and weight decay coefficients. The experimental results validate that learning a good weight update rule for fast adaptation is an equally important component that has drawn relatively little attention in recent few-shot learning approaches. Surprisingly, fast adaptation from random initialization with ALFA can already outperform MAML. Furthermore, the proposed weight update rule is shown to consistently improve the task-adaptation capability of MAML across diverse problem domains: few-shot classification, cross-domain few-shot classification, regression, visual tracking, and video frame interpolation.
650 4 |a Journal Article 
700 1 |a Choi, Myungsub  |e verfasserin  |4 aut 
700 1 |a Choi, Janghoon  |e verfasserin  |4 aut 
700 1 |a Kim, Heewon  |e verfasserin  |4 aut 
700 1 |a Lee, Kyoung Mu  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 46(2024), 3 vom: 01. Feb., Seite 1441-1454  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:46  |g year:2024  |g number:3  |g day:01  |g month:02  |g pages:1441-1454 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2023.3261387  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 46  |j 2024  |e 3  |b 01  |c 02  |h 1441-1454