Student Loss: Towards the Probability Assumption in Inaccurate Supervision

Noisy labels are often encountered in datasets, but learning with them is challenging. Although natural discrepancies exist between clean and mislabeled samples in a noisy category, most techniques in this field still treat them indiscriminately, which leaves their performance only partially robust. In this paper, we reveal both empirically and theoretically that learning robustness can be improved by assuming that deep features with the same label follow a student distribution, resulting in a more intuitive method called student loss. By embedding the student distribution and exploiting the sharpness of its curve, our method is naturally data-selective and offers extra strength to resist mislabeled samples. This ability makes clean samples aggregate tightly around the center while mislabeled samples scatter, even if they share the same label. Additionally, we employ a metric learning strategy and develop a large-margin student (LT) loss for better capability. It should be noted that our approach is the first to adopt a prior probability assumption on the feature representation to decrease the contributions of mislabeled samples. This strategy can bring various losses, even already-robust ones, into the student loss family. Experiments demonstrate that our approach is more effective under inaccurate supervision: enhanced LT losses significantly outperform various state-of-the-art methods in most cases, and improvements of over 50% can be obtained under some conditions.
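To make the mechanism concrete, below is a minimal PyTorch sketch of the idea described above: same-label deep features are modeled as a Student's t-distribution around a learnable class center, and training minimizes the resulting negative log-likelihood. Because the t log-density grows only logarithmically in the squared distance, far-away (likely mislabeled) samples receive bounded gradients, which is the data-selective behavior the abstract describes. The class name StudentLoss, the learnable centers, and the degrees-of-freedom parameter nu are illustrative assumptions, not the authors' implementation; the large-margin LT variant is not shown.

import torch
import torch.nn as nn

class StudentLoss(nn.Module):
    # Illustrative sketch: per-class Student's t negative log-likelihood.
    def __init__(self, num_classes, feat_dim, nu=1.0):
        super().__init__()
        # One learnable center per class; same-label features are assumed
        # to follow a Student's t-distribution around their center.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.nu = nu        # degrees of freedom: small nu -> heavy tails
        self.dim = feat_dim

    def forward(self, features, labels):
        # Squared distance from each feature to the center of its
        # (possibly noisy) label.
        sq_dist = ((features - self.centers[labels]) ** 2).sum(dim=1)
        # Up to an additive constant, the multivariate t NLL is
        # (nu + D)/2 * log(1 + d^2/nu); log1p flattens for large d^2,
        # so outliers contribute bounded gradients.
        return (0.5 * (self.nu + self.dim)
                * torch.log1p(sq_dist / self.nu)).mean()

# Toy usage: 32 features of dimension 128 over 10 classes.
loss_fn = StudentLoss(num_classes=10, feat_dim=128, nu=1.0)
loss = loss_fn(torch.randn(32, 128), torch.randint(0, 10, (32,)))

Under a Gaussian assumption the penalty would instead grow linearly in the squared distance, letting mislabeled outliers dominate training; the heavier tails are what allow clean samples to aggregate at the center while mislabeled ones scatter.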


Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 46(2024), 6, 01 May, pages 4460-4475
Main Author: Zhang, Shuo (Author)
Other Authors: Li, Jian-Qing, Fujita, Hamido, Li, Yu-Wen, Wang, Deng-Bao, Zhu, Ting-Ting, Zhang, Min-Ling, Liu, Cheng-Yu
Format: Online Article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM367519771
003 DE-627
005 20240508232309.0
007 cr uuu---uuuuu
008 240124s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3357518  |2 doi 
028 5 2 |a pubmed24n1401.xml 
035 |a (DE-627)NLM367519771 
035 |a (NLM)38261485 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Zhang, Shuo  |e verfasserin  |4 aut 
245 1 0 |a Student Loss  |b Towards the Probability Assumption in Inaccurate Supervision 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 08.05.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Noisy labels are often encountered in datasets, but learning with them is challenging. Although natural discrepancies exist between clean and mislabeled samples in a noisy category, most techniques in this field still treat them indiscriminately, which leaves their performance only partially robust. In this paper, we reveal both empirically and theoretically that learning robustness can be improved by assuming that deep features with the same label follow a student distribution, resulting in a more intuitive method called student loss. By embedding the student distribution and exploiting the sharpness of its curve, our method is naturally data-selective and offers extra strength to resist mislabeled samples. This ability makes clean samples aggregate tightly around the center while mislabeled samples scatter, even if they share the same label. Additionally, we employ a metric learning strategy and develop a large-margin student (LT) loss for better capability. It should be noted that our approach is the first to adopt a prior probability assumption on the feature representation to decrease the contributions of mislabeled samples. This strategy can bring various losses, even already-robust ones, into the student loss family. Experiments demonstrate that our approach is more effective under inaccurate supervision: enhanced LT losses significantly outperform various state-of-the-art methods in most cases, and improvements of over 50% can be obtained under some conditions. 
650 4 |a Journal Article 
700 1 |a Li, Jian-Qing  |e verfasserin  |4 aut 
700 1 |a Fujita, Hamido  |e verfasserin  |4 aut 
700 1 |a Li, Yu-Wen  |e verfasserin  |4 aut 
700 1 |a Wang, Deng-Bao  |e verfasserin  |4 aut 
700 1 |a Zhu, Ting-Ting  |e verfasserin  |4 aut 
700 1 |a Zhang, Min-Ling  |e verfasserin  |4 aut 
700 1 |a Liu, Cheng-Yu  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 46(2024), 6 vom: 01. Mai, Seite 4460-4475  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:46  |g year:2024  |g number:6  |g day:01  |g month:05  |g pages:4460-4475 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3357518  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 46  |j 2024  |e 6  |b 01  |c 05  |h 4460-4475