|
|
|
|
LEADER |
01000naa a22002652 4500 |
001 |
NLM356623106 |
003 |
DE-627 |
005 |
20231226070844.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2023 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2023.3274593
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1188.xml
|
035 |
|
|
|a (DE-627)NLM356623106
|
035 |
|
|
|a (NLM)37159309
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Chang, Dongliang
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Making a Bird AI Expert Work for You and Me
|
264 |
|
1 |
|c 2023
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 06.09.2023
|
500 |
|
|
|a Date Revised 20.09.2023
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status MEDLINE
|
520 |
|
|
|a As powerful as fine-grained visual classification (FGVC) is, responding to your query with a bird name of "Whip-poor-will" or "Mallard" probably does not make much sense. This, however commonly accepted in the literature, underlines a fundamental question interfacing AI and human - what constitutes transferable knowledge for humans to learn from AI? This paper sets out to answer this very question using FGVC as a test bed. Specifically, we envisage a scenario where a trained FGVC model (the AI expert) functions as a knowledge provider in enabling average people (you and me) to become better domain experts ourselves. Assuming an AI expert trained using expert human labels, we anchor our focus on asking and providing solutions for two questions: (i) what is the best transferable knowledge we can extract from AI, and (ii) what is the most practical means to measure the gains in expertise given that knowledge? We propose to represent knowledge as highly discriminative visual regions that are expert-exclusive and to instantiate it via a novel multi-stage learning framework. A human study of 15,000 trials shows that our method is able to consistently improve people of divergent bird expertise to recognise once unrecognisable birds. We further propose a crude but benchmarkable metric, TEMI, thereby allowing future efforts in this direction to be comparable to ours without the need for large-scale human studies.
|
650 |
|
4 |
|a Journal Article
|
650 |
|
4 |
|a Research Support, Non-U.S. Gov't
|
700 |
1 |
|
|a Pang, Kaiyue
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Du, Ruoyi
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Tong, Yujun
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Song, Yi-Zhe
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Ma, Zhanyu
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Guo, Jun
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 45(2023), 10 vom: 18. Okt., Seite 12068-12084
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:45
|g year:2023
|g number:10
|g day:18
|g month:10
|g pages:12068-12084
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2023.3274593
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 45
|j 2023
|e 10
|b 18
|c 10
|h 12068-12084
|