Revisiting the Trade-Off Between Accuracy and Robustness via Weight Distribution of Filters

Adversarial attacks have been proven to be potential threats to Deep Neural Networks (DNNs), and many methods have been proposed to defend against them. However, while robustness is enhanced, the accuracy on clean examples declines to a certain extent, implying that a trade-off exists between accuracy and adversarial robustness. In this paper, to address this trade-off, we theoretically explore the underlying reason for the difference in the filters' weight distributions between standard-trained and robust-trained models, and argue that this is an intrinsic property of static neural networks, which makes it difficult for them to fundamentally improve accuracy and adversarial robustness at the same time. Based on this analysis, we propose a sample-wise dynamic network architecture named Adversarial Weight-Varied Network (AW-Net), which handles clean and adversarial examples with a "divide and rule" weight strategy. AW-Net adaptively adjusts the network's weights based on regulation signals generated by an adversarial router, which is directly influenced by the input sample. Benefiting from the dynamic network architecture, clean and adversarial examples can be processed with different network weights, which provides the potential to enhance both accuracy and adversarial robustness. A series of experiments demonstrates that AW-Net is architecture-friendly for handling both clean and adversarial examples and achieves a better trade-off than state-of-the-art robust models.
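
To make the routing idea concrete, below is a minimal PyTorch sketch of the mechanism the abstract describes: an auxiliary router inspects the input sample and emits per-filter regulation signals that rescale a convolution layer's filters, so clean and adversarial inputs are effectively processed by different weights. This is an illustrative sketch under stated assumptions, not the authors' implementation; the class names, layer sizes, and the sigmoid gating are all hypothetical choices.

    import torch
    import torch.nn as nn

    class AdversarialRouter(nn.Module):
        """Maps an input image to per-filter regulation signals.
        (Hypothetical structure; the paper's router may differ.)"""
        def __init__(self, in_channels: int, num_filters: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(16, num_filters)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            z = self.features(x).flatten(1)      # (B, 16) pooled descriptor of the input
            return torch.sigmoid(self.fc(z))     # (B, num_filters), signals in (0, 1)

    class WeightVariedConv(nn.Module):
        """Convolution whose output filters are rescaled per sample by the router."""
        def __init__(self, in_channels: int, out_channels: int):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

        def forward(self, x: torch.Tensor, signals: torch.Tensor) -> torch.Tensor:
            out = self.conv(x)                   # shared static filters
            # Per-sample, per-filter modulation: clean and adversarial inputs
            # end up being processed with effectively different weights.
            return out * signals.view(signals.size(0), -1, 1, 1)

    # Usage: the router sees the raw input and regulates the main branch's filters.
    router = AdversarialRouter(in_channels=3, num_filters=32)
    layer = WeightVariedConv(in_channels=3, out_channels=32)
    x = torch.randn(8, 3, 32, 32)                # a batch of (possibly adversarial) images
    y = layer(x, router(x))                      # shape (8, 32, 32, 32)

In the paper the regulation presumably acts on filter weights throughout the network; the sketch only shows the single-layer gating pattern to illustrate how one set of static parameters can behave differently per input sample.
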

Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 46(2024), 12, 09 Nov., pages 8870-8882
First author: Wei, Xingxing (Author)
Other authors: Zhao, Shiji, Li, Bo
Format: Online article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM373364016
003 DE-627
005 20241107232050.0
007 cr uuu---uuuuu
008 240608s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3411035  |2 doi 
028 5 2 |a pubmed24n1593.xml 
035 |a (DE-627)NLM373364016 
035 |a (NLM)38848237 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Wei, Xingxing  |e verfasserin  |4 aut 
245 1 0 |a Revisiting the Trade-Off Between Accuracy and Robustness via Weight Distribution of Filters 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 07.11.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Adversarial attacks have been proven to be potential threats to Deep Neural Networks (DNNs), and many methods have been proposed to defend against them. However, while robustness is enhanced, the accuracy on clean examples declines to a certain extent, implying that a trade-off exists between accuracy and adversarial robustness. In this paper, to address this trade-off, we theoretically explore the underlying reason for the difference in the filters' weight distributions between standard-trained and robust-trained models, and argue that this is an intrinsic property of static neural networks, which makes it difficult for them to fundamentally improve accuracy and adversarial robustness at the same time. Based on this analysis, we propose a sample-wise dynamic network architecture named Adversarial Weight-Varied Network (AW-Net), which handles clean and adversarial examples with a "divide and rule" weight strategy. AW-Net adaptively adjusts the network's weights based on regulation signals generated by an adversarial router, which is directly influenced by the input sample. Benefiting from the dynamic network architecture, clean and adversarial examples can be processed with different network weights, which provides the potential to enhance both accuracy and adversarial robustness. A series of experiments demonstrates that AW-Net is architecture-friendly for handling both clean and adversarial examples and achieves a better trade-off than state-of-the-art robust models.
650 4 |a Journal Article 
700 1 |a Zhao, Shiji  |e verfasserin  |4 aut 
700 1 |a Li, Bo  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 46(2024), 12 vom: 09. Nov., Seite 8870-8882  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:46  |g year:2024  |g number:12  |g day:09  |g month:11  |g pages:8870-8882 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3411035  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 46  |j 2024  |e 12  |b 09  |c 11  |h 8870-8882