LEADER |
01000naa a22002652 4500 |
001 |
NLM355269104 |
003 |
DE-627 |
005 |
20231226064007.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2023 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2023.3240337
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1184.xml
|
035 |
|
|
|a (DE-627)NLM355269104
|
035 |
|
|
|a (NLM)37022219
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Han, Xinzhe
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a General Greedy De-Bias Learning
|
264 |
|
1 |
|c 2023
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 03.07.2023
|
500 |
|
|
|a Date Revised 03.07.2023
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Neural networks often make predictions relying on spurious correlations in the datasets rather than on the intrinsic properties of the task of interest, and therefore suffer sharp degradation on out-of-distribution (OOD) test data. Existing de-bias learning frameworks try to capture specific dataset biases through annotations, but they fail to handle complicated OOD scenarios. Others implicitly identify the dataset bias with specially designed low-capability biased models or losses, but they degrade when the training and testing data come from the same distribution. In this paper, we propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model. The base model is encouraged to focus on examples that are hard to solve with the biased models, and thus remains robust against spurious correlations at test time. GGD largely improves models' OOD generalization ability on various tasks, but sometimes over-estimates the bias level and degrades on in-distribution tests. We further re-analyze the ensemble process of GGD and introduce Curriculum Regularization, inspired by curriculum learning, which achieves a good trade-off between in-distribution (ID) and out-of-distribution performance. Extensive experiments on image classification, adversarial question answering, and visual question answering demonstrate the effectiveness of our method. GGD can learn a more robust base model both with task-specific biased models that use prior knowledge and with a self-ensemble biased model that requires no prior knowledge. Code is available at https://github.com/GeraldHan/GGD
|
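The 520 abstract above describes the core GGD mechanism: a biased model is trained first, and the base model is then pushed toward the examples the biased model cannot solve. The following is a minimal, hedged sketch of that re-weighting idea in PyTorch; debias_step, base_model, biased_model, and alpha are illustrative assumptions rather than names from the authors' repository, and the weighting formula is one common realization of the idea, not the exact GGD loss.

import torch
import torch.nn.functional as F

def debias_step(base_model, biased_model, x, y, optimizer, alpha=1.0):
    # One base-model update, re-weighted by the biased model's confidence.
    with torch.no_grad():
        p_bias = F.softmax(biased_model(x), dim=-1)
        # Examples the biased model already solves confidently get a small
        # weight; bias-conflicting (hard) examples get a weight close to 1.
        w = (1.0 - p_bias.gather(1, y.unsqueeze(1)).squeeze(1)) ** alpha
    logits = base_model(x)
    per_example = F.cross_entropy(logits, y, reduction="none")
    loss = (w * per_example).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, x is a batch of inputs and y a batch of integer class labels; calling debias_step inside an ordinary training loop down-weights bias-aligned examples, which is the ensemble intuition the abstract attributes to GGD.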
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Wang, Shuhui
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Su, Chi
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Huang, Qingming
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Tian, Qi
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 45(2023), 8 vom: 06. Aug., Seite 9789-9805
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:45
|g year:2023
|g number:8
|g day:06
|g month:08
|g pages:9789-9805
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2023.3240337
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 45
|j 2023
|e 8
|b 06
|c 08
|h 9789-9805
|