LEADER |
01000naa a22002652 4500 |
001 |
NLM355275279 |
003 |
DE-627 |
005 |
20231226064015.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2023 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2023.3236459
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1184.xml
|
035 |
|
|
|a (DE-627)NLM355275279
|
035 |
|
|
|a (NLM)37022833
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Zhou, Xiong
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Asymmetric Loss Functions for Noise-Tolerant Learning
|b Theory and Applications
|
264 |
|
1 |
|c 2023
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 06.06.2023
|
500 |
|
|
|a Date Revised 06.06.2023
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Supervised deep learning has achieved tremendous success in many computer vision tasks, but it is prone to overfitting noisy labels. To mitigate the undesirable influence of noisy labels, robust loss functions offer a feasible approach to achieving noise-tolerant learning. In this work, we systematically study the problem of noise-tolerant learning for both classification and regression. Specifically, we propose a new class of loss functions, namely asymmetric loss functions (ALFs), which are tailored to satisfy the Bayes-optimal condition and are therefore robust to noisy labels. For classification, we investigate the general theoretical properties of ALFs on categorical noisy labels and introduce the asymmetry ratio to measure the asymmetry of a loss function. We extend several commonly used loss functions and establish the necessary and sufficient conditions that make them asymmetric and thus noise-tolerant. For regression, we extend the concept of noise-tolerant learning to image restoration with continuous noisy labels. We theoretically prove that lp loss ( ) is noise-tolerant for targets with additive white Gaussian noise. For targets with general noise, we introduce two losses as surrogates of the l0 loss, which seeks the mode when clean pixels remain dominant. Experimental results demonstrate that ALFs achieve better or comparable performance compared with state-of-the-art methods. The source code of our method is available at: https://github.com/hitcszx/ALFs
|
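Note: the 520 abstract above describes bounded, noise-tolerant classification losses and lp / l0-surrogate regression losses. The Python/PyTorch snippet below is only an illustrative sketch of that general noise-tolerance idea, not the paper's actual ALF definitions, asymmetry ratio, or l0 surrogates; the names bounded_classification_loss and lp_regression_loss and the generalized-cross-entropy-style form (1 - p_y^q)/q are assumptions made for illustration. The authors' real implementations are in the linked repository (https://github.com/hitcszx/ALFs).

    import torch
    import torch.nn.functional as F

    def bounded_classification_loss(logits, targets, q=0.7):
        # Generalized-cross-entropy-style loss (1 - p_y^q) / q: bounded in the
        # true-class probability p_y, so a single mislabeled sample contributes
        # at most 1/q to the objective, unlike unbounded cross entropy.
        probs = F.softmax(logits, dim=1)
        p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
        return ((1.0 - p_y.pow(q)) / q).mean()

    def lp_regression_loss(pred, target, p=1.0):
        # Elementwise |pred - target|^p; smaller p down-weights large residuals
        # (e.g. outlier pixels under continuous label noise) relative to l2.
        return (pred - target).abs().pow(p).mean()

Usage sketch: loss = bounded_classification_loss(model(x), y) with y given as a long tensor of class indices; lp_regression_loss(restored, target, p=1.0) for image restoration targets.
|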
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Liu, Xianming
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhai, Deming
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Jiang, Junjun
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Ji, Xiangyang
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 45(2023), 7 vom: 09. Juli, Seite 8094-8109
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:45
|g year:2023
|g number:7
|g day:09
|g month:07
|g pages:8094-8109
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2023.3236459
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 45
|j 2023
|e 7
|b 09
|c 07
|h 8094-8109
|