Regularly Truncated M-Estimators for Learning With Noisy Labels

The sample selection approach is very popular in learning with noisy labels. As deep networks "learn pattern first", prior methods built on sample selection share a similar training procedure: the small-loss examples can be regarded as clean examples and used for helping generalization, while the large-loss examples are treated as mislabeled ones and excluded from network parameter updates. However, such a procedure is debatable in two respects: (a) it does not consider the bad influence of noisy labels in selected small-loss examples; (b) it does not make good use of the discarded large-loss examples, which may be clean or carry meaningful information for generalization. In this paper, we propose regularly truncated M-estimators (RTME) to address the above two issues simultaneously. Specifically, RTME can alternately switch modes between truncated M-estimators and original M-estimators. The former can adaptively select small-loss examples without knowing the noise rate and reduce the side-effects of noisy labels in them. The latter brings the possibly clean but large-loss examples back into training to help generalization. Theoretically, we demonstrate that our strategies are label-noise-tolerant. Empirically, comprehensive experimental results show that our method can outperform multiple baselines and is robust to broad noise types and levels.
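The alternation between a truncated loss (which silences large-loss, possibly mislabeled examples) and the original loss (which lets them contribute again) can be sketched as below. This is a minimal illustration, not the paper's method: the threshold `tau` and alternation `period` are hypothetical fixed values, whereas RTME selects the truncation level adaptively without knowing the noise rate.

```python
import numpy as np

def truncated_loss(losses, tau):
    # Truncated M-estimator: per-example losses are capped at tau, so
    # examples with loss above tau contribute a constant (zero gradient)
    # and are effectively excluded from the parameter update.
    return np.minimum(losses, tau)

def epoch_loss(losses, epoch, period=2, tau=1.0):
    # Regular alternation: on truncation epochs only small-loss examples
    # drive learning; on other epochs the original (untruncated) loss lets
    # large-loss but possibly clean examples help generalization.
    if epoch % period == 0:
        return truncated_loss(losses, tau).mean()
    return losses.mean()
```

For example, with per-example losses `[0.2, 0.5, 3.0]` and `tau=1.0`, a truncation epoch averages `[0.2, 0.5, 1.0]`, while the next epoch averages the raw losses, restoring the influence of the large-loss example.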

Detailed description

Bibliographic details

Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 46(2024), 5, 25 Apr., pages 3522-3536
First author: Xia, Xiaobo (author)
Other authors: Lu, Pengqian, Gong, Chen, Han, Bo, Yu, Jun, Liu, Tongliang
Format: Online article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM366444751
003 DE-627
005 20240405233329.0
007 cr uuu---uuuuu
008 240108s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2023.3347850  |2 doi 
028 5 2 |a pubmed24n1366.xml 
035 |a (DE-627)NLM366444751 
035 |a (NLM)38153827 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Xia, Xiaobo  |e verfasserin  |4 aut 
245 1 0 |a Regularly Truncated M-Estimators for Learning With Noisy Labels 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 05.04.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a The sample selection approach is very popular in learning with noisy labels. As deep networks "learn pattern first", prior methods built on sample selection share a similar training procedure: the small-loss examples can be regarded as clean examples and used for helping generalization, while the large-loss examples are treated as mislabeled ones and excluded from network parameter updates. However, such a procedure is debatable in two respects: (a) it does not consider the bad influence of noisy labels in selected small-loss examples; (b) it does not make good use of the discarded large-loss examples, which may be clean or carry meaningful information for generalization. In this paper, we propose regularly truncated M-estimators (RTME) to address the above two issues simultaneously. Specifically, RTME can alternately switch modes between truncated M-estimators and original M-estimators. The former can adaptively select small-loss examples without knowing the noise rate and reduce the side-effects of noisy labels in them. The latter brings the possibly clean but large-loss examples back into training to help generalization. Theoretically, we demonstrate that our strategies are label-noise-tolerant. Empirically, comprehensive experimental results show that our method can outperform multiple baselines and is robust to broad noise types and levels.
650 4 |a Journal Article 
700 1 |a Lu, Pengqian  |e verfasserin  |4 aut 
700 1 |a Gong, Chen  |e verfasserin  |4 aut 
700 1 |a Han, Bo  |e verfasserin  |4 aut 
700 1 |a Yu, Jun  |e verfasserin  |4 aut 
700 1 |a Liu, Tongliang  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 46(2024), 5 vom: 25. Apr., Seite 3522-3536  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:46  |g year:2024  |g number:5  |g day:25  |g month:04  |g pages:3522-3536 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2023.3347850  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 46  |j 2024  |e 5  |b 25  |c 04  |h 3522-3536