LEADER |
01000naa a22002652 4500 |
001 |
NLM33829015X |
003 |
DE-627 |
005 |
20231226000335.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2023 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2022.3160328
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1127.xml
|
035 |
|
|
|a (DE-627)NLM33829015X
|
035 |
|
|
|a (NLM)35298374
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Ye, Han-Jia
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Generalized Knowledge Distillation via Relationship Matching
|
264 |
|
1 |
|c 2023
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 25.09.2023
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a The knowledge of a well-trained deep neural network (a.k.a. the "teacher") is valuable for learning similar tasks. Knowledge distillation extracts knowledge from the teacher and integrates it with the target model (a.k.a. the "student"), which expands the student's knowledge and improves its learning efficacy. Instead of enforcing the teacher to work on the same task as the student, we borrow the knowledge from a teacher trained from a general label space - in this "Generalized Knowledge Distillation (GKD)," the classes of the teacher and the student may be the same, completely different, or partially overlapped. We claim that the comparison ability between instances acts as an essential factor threading knowledge across tasks, and propose the RElationship FacIlitated Local cLassifiEr Distillation (ReFilled) approach, which decouples the GKD flow of the embedding and the top-layer classifier. In particular, different from reconciling the instance-label confidence between models, ReFilled requires the teacher to reweight the hard tuples pushed forward by the student and then matches the similarity comparison levels between instances. An embedding-induced classifier based on the teacher model supervises the student's classification confidence and adaptively emphasizes the most related supervision from the teacher. ReFilled demonstrates strong discriminative ability when the classes of the teacher vary from the same to a fully non-overlapped set w.r.t. the student. It also achieves state-of-the-art performance on standard knowledge distillation, one-step incremental learning, and few-shot learning tasks.
|
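Note: the 520 abstract above contrasts ReFilled with standard knowledge distillation, which reconciles instance-label confidence between teacher and student. For orientation only, the following is a minimal sketch of that standard soft-target distillation loss (Hinton-style), not the paper's ReFilled relationship-matching method; the function and tensor names are hypothetical.
    # Minimal sketch of standard knowledge distillation, assuming PyTorch;
    # this is NOT the ReFilled method described in the abstract.
    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        # Soft-target term: match the student's softened predictions to the teacher's.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Hard-target term: ordinary cross-entropy against the ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard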
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Lu, Su
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhan, De-Chuan
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 45(2023), 2 vom: 17. Feb., Seite 1817-1834
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:45
|g year:2023
|g number:2
|g day:17
|g month:02
|g pages:1817-1834
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2022.3160328
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 45
|j 2023
|e 2
|b 17
|c 02
|h 1817-1834
|