Anatomical Context Protects Deep Learning from Adversarial Perturbations in Medical Imaging


Bibliographic Details
Published in: Neurocomputing. - 1998. - 379 (2020), 28 Feb., pages 370-378
Main Author: Li, Yi (Author)
Other Authors: Zhang, Huahong, Bermudez, Camilo, Chen, Yifan, Landman, Bennett A, Vorobeychik, Yevgeniy
Format: Online Article
Language: English
Published: 2020
Access to parent work: Neurocomputing
Subjects: Journal Article
Description
Summary: Deep learning has achieved impressive performance across a variety of tasks, including medical image processing. However, recent research has shown that deep neural networks are susceptible to small adversarial perturbations in the image. We study the impact of such adversarial perturbations in medical image processing where the goal is to predict an individual's age based on a 3D MRI brain image. We consider two models: a conventional deep neural network, and a hybrid deep learning model which additionally uses features informed by anatomical context. We find that we can introduce significant errors in predicted age by adding imperceptible noise to an image, can accomplish this even for large batches of images using a single perturbation, and that the hybrid model is much more robust to adversarial perturbations than the conventional deep neural network. Our work highlights limitations of current deep learning techniques in clinical applications, and suggests a path forward.
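The perturbation attack described in the summary can be illustrated with a minimal sketch. This is not the paper's method: the paper attacks deep networks on 3D MRI volumes, while the toy model below is a plain linear regressor in NumPy, chosen so the loss gradient is closed-form. All variable names (`w`, `x`, `eps`) and sizes are illustrative assumptions; the gradient-sign step mirrors the fast-gradient-sign style of attack commonly used in this literature.

```python
import numpy as np

# Toy stand-in for an age-prediction model: a linear regressor on a
# flattened "image". The attack adds a small L_inf-bounded perturbation
# in the direction that increases the prediction error.
rng = np.random.default_rng(0)
w = rng.normal(size=100)          # illustrative model weights
x = rng.normal(size=100)          # illustrative input voxels
y_true = 30.0                     # true age (arbitrary)

def predict(img):
    return float(w @ img)

# Squared-error loss L = (w.x - y)^2; its gradient w.r.t. the input
# is 2 * (w.x - y) * w, available in closed form for this toy model.
grad = 2.0 * (predict(x) - y_true) * w

eps = 0.05                        # small perturbation budget
x_adv = x + eps * np.sign(grad)   # step up the loss surface

err_clean = abs(predict(x) - y_true)
err_adv = abs(predict(x_adv) - y_true)
print(err_clean, err_adv)         # the adversarial error is strictly larger
```

For the linear model the effect is deterministic: the prediction moves away from `y_true` by exactly `eps * sum(|w|)`, so even a tiny budget produces a visible error shift, echoing the "imperceptible noise, significant error" finding in the summary.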
Description: Date Revised 25.10.2024
published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN:1872-8286
DOI:10.1016/j.neucom.2019.10.085