LEADER |
01000naa a22002652 4500 |
001 |
NLM317153668 |
003 |
DE-627 |
005 |
20231225162636.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2021 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2020.3035351
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1057.xml
|
035 |
|
|
|a (DE-627)NLM317153668
|
035 |
|
|
|a (NLM)33147140
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Zhang, Miao
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a One-Shot Neural Architecture Search
|b Maximising Diversity to Overcome Catastrophic Forgetting
|
264 |
|
1 |
|c 2021
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 29.09.2021
|
500 |
|
|
|a Date Revised 29.09.2021
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a One-shot neural architecture search (NAS) has recently become mainstream in the NAS community because it significantly improves computational efficiency through weight sharing. However, the supernet training paradigm in one-shot NAS introduces catastrophic forgetting, where each training step can deteriorate the performance of other architectures whose weights are partially shared with the current architecture. To overcome this problem of catastrophic forgetting, we formulate supernet training for one-shot NAS as a constrained continual learning optimization problem such that learning the current architecture does not degrade the validation accuracy of previous architectures. The key to solving this constrained optimization problem is a novelty search based architecture selection (NSAS) loss function that regularizes the supernet training by using a greedy novelty search method to find the most representative subset. We apply the NSAS loss function to two one-shot NAS baselines and extensively test them on both a common search space and a NAS benchmark dataset. We further derive three variants of the NSAS loss function: NSAS with a depth constraint (NSAS-C) to improve transferability, and NSAS-G and NSAS-LG to handle situations with a limited number of constraints. Experiments on the common NAS search space demonstrate that NSAS and its variants improve the predictive ability of supernet training in one-shot NAS, with remarkable and efficient performance on the CIFAR-10, CIFAR-100, and ImageNet datasets. The results on the NAS benchmark dataset also confirm the significant improvements these one-shot NAS baselines can achieve.
|
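Note: the abstract above describes an algorithmic idea — supernet training constrained so that updating the currently sampled architecture does not degrade previously visited architectures, with the constraint set chosen by a greedy novelty search. The following is only a minimal, hypothetical sketch of that idea, not the authors' implementation; all names (supernet, snapshot_forward, novelty, etc.) are assumptions introduced for illustration.

import torch
import torch.nn.functional as F

def greedy_novelty_subset(candidates, subset_size, novelty):
    # Greedily pick architectures that are maximally different from those already chosen,
    # a stand-in for the "most representative subset" found by the novelty search.
    chosen = []
    for _ in range(min(subset_size, len(candidates))):
        remaining = [a for a in candidates if a not in chosen]
        best = max(remaining,
                   key=lambda a: min((novelty(a, b) for b in chosen), default=float("inf")))
        chosen.append(best)
    return chosen

def constrained_supernet_step(supernet, optimizer, batch, current_arch, constraint_archs, lam=1.0):
    # One weight-sharing update: task loss on the sampled architecture plus a penalty
    # that keeps the supernet's outputs stable on the constraint architectures,
    # approximating "learning the current architecture must not degrade previous ones".
    x, y = batch
    optimizer.zero_grad()
    loss = F.cross_entropy(supernet(x, current_arch), y)
    for arch in constraint_archs:
        with torch.no_grad():
            reference = supernet.snapshot_forward(x, arch)  # hypothetical: outputs before this update
        loss = loss + lam * F.mse_loss(supernet(x, arch), reference)
    loss.backward()
    optimizer.step()
    return loss.item()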
650 |
|
4 |
|a Journal Article
|
650 |
|
4 |
|a Research Support, Non-U.S. Gov't
|
650 |
|
4 |
|a Research Support, U.S. Gov't, Non-P.H.S.
|
700 |
1 |
|
|a Li, Huiqi
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Pan, Shirui
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Chang, Xiaojun
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhou, Chuan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Ge, Zongyuan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Su, Steven
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 43(2021), 9 vom: 04. Sept., Seite 2921-2935
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:43
|g year:2021
|g number:9
|g day:04
|g month:09
|g pages:2921-2935
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2020.3035351
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 43
|j 2021
|e 9
|b 04
|c 09
|h 2921-2935
|