Anatomical Context Protects Deep Learning from Adversarial Perturbations in Medical Imaging

Bibliographic details
Published in: Neurocomputing. - 1998. - 379 (2020), 28 Feb., pages 370-378
Main author: Li, Yi (Author)
Other authors: Zhang, Huahong, Bermudez, Camilo, Chen, Yifan, Landman, Bennett A, Vorobeychik, Yevgeniy
Format: Online article
Language: English
Published: 2020
Collection access: Neurocomputing
Subjects: Journal Article
Description
Abstract: Deep learning has achieved impressive performance across a variety of tasks, including medical image processing. However, recent research has shown that deep neural networks are susceptible to small adversarial perturbations in the image. We study the impact of such adversarial perturbations in medical image processing where the goal is to predict an individual's age based on a 3D MRI brain image. We consider two models: a conventional deep neural network, and a hybrid deep learning model which additionally uses features informed by anatomical context. We find that we can introduce significant errors in predicted age by adding imperceptible noise to an image, can accomplish this even for large batches of images using a single perturbation, and that the hybrid model is much more robust to adversarial perturbations than the conventional deep neural network. Our work highlights limitations of current deep learning techniques in clinical applications, and suggests a path forward.
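The abstract's attack — shifting a predicted age with an imperceptible, gradient-aligned perturbation — can be illustrated with a minimal sketch. This is not the paper's code: a toy linear age regressor stands in for the deep 3D-MRI model so the gradient is analytic, and the single FGSM-style step, the dimension `d`, and the budget `eps` are assumptions for illustration only.

```python
import numpy as np

def predict_age(x, w, b):
    """Toy stand-in regressor: predicted age = w . x + b."""
    return float(w @ x + b)

def fgsm_perturb(x, w, eps):
    """One FGSM-style step that increases the predicted age.

    For this linear model the gradient of the prediction w.r.t. the
    input is simply w, so stepping eps * sign(w) maximally raises the
    output under an L-infinity budget of eps per voxel.
    """
    return x + eps * np.sign(w)

rng = np.random.default_rng(0)
d = 1000                       # stand-in for the voxel count of a 3D MRI
w = rng.normal(size=d) / np.sqrt(d)
b = 50.0
x = rng.normal(size=d)         # stand-in "image"

eps = 0.01                     # small per-voxel change ("imperceptible noise")
x_adv = fgsm_perturb(x, w, eps)

print("clean prediction:", predict_age(x, w, b))
print("adversarial prediction:", predict_age(x_adv, w, b))
print("max per-voxel change:", np.max(np.abs(x_adv - x)))
```

For this linear model the induced age shift is exactly `eps * sum(|w_i|)`, so a tiny per-voxel budget accumulates into a visible prediction error over many voxels — the same leverage the paper exploits against deep networks, where the hybrid anatomically-informed model proved far harder to move.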
Description: Date Revised 25.10.2024
published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN: 1872-8286
DOI: 10.1016/j.neucom.2019.10.085