MA-GANet: A Multi-Attention Generative Adversarial Network for Defocus Blur Detection

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 31 (2022), dated: 09., pages 3494-3508
First author: Jiang, Zeyu (Author)
Other authors: Xu, Xun; Zhang, Le; Zhang, Chao; Foo, Chuan Sheng; Zhu, Ce
Format: Online article
Language: English
Published: 2022
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Abstract: Background clutter poses challenges to defocus blur detection. Existing approaches often produce artifact predictions in cluttered background areas and relatively low-confidence predictions in boundary areas. In this work, we tackle these issues from two perspectives. Firstly, inspired by the recent success of the self-attention mechanism, we introduce channel-wise and spatial-wise attention modules that attentively aggregate features across different channels and spatial locations to obtain more discriminative features. Secondly, we propose a generative adversarial training strategy to suppress spurious and unreliable predictions. This is achieved by using a discriminator to distinguish predicted defocus maps from ground-truth ones; the defocus network (generator) therefore needs to produce 'realistic' defocus maps to minimize the discriminator loss. We further demonstrate that generative adversarial training allows exploiting additional unlabeled data to improve performance, a.k.a. semi-supervised learning, and we provide the first benchmark on semi-supervised defocus detection. Finally, we demonstrate that existing evaluation metrics for defocus detection generally fail to quantify robustness with respect to thresholding. For a fair and practical evaluation, we introduce an effective yet efficient AUFβ metric. Extensive experiments on three public datasets verify the superiority of the proposed method over state-of-the-art approaches.
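The abstract does not spell out the attention modules, but a dual-attention design in the spirit it describes might look like the following PyTorch sketch; SpatialAttention, ChannelAttention, and all layer sizes are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative channel-wise and spatial-wise self-attention modules in the
# spirit the abstract describes; the actual MA-GANet modules may differ.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Re-weights each spatial location by its affinity to all others."""
    def __init__(self, ch):
        super().__init__()
        self.query = nn.Conv2d(ch, ch // 8, 1)
        self.key = nn.Conv2d(ch, ch // 8, 1)
        self.value = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual scale

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (b, hw, c/8)
        k = self.key(x).flatten(2)                        # (b, c/8, hw)
        attn = torch.softmax(q @ k, dim=-1)               # (b, hw, hw)
        v = self.value(x).flatten(2)                      # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class ChannelAttention(nn.Module):
    """Re-weights each channel by its affinity to all other channels."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        f = x.flatten(2)                                     # (b, c, hw)
        attn = torch.softmax(f @ f.transpose(1, 2), dim=-1)  # (b, c, c)
        out = (attn @ f).view(b, c, h, w)
        return self.gamma * out + x
```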
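The adversarial strategy (a discriminator tries to tell predicted defocus maps from ground-truth ones, and the generator is penalized when it is caught) could be sketched as below; PatchDiscriminator, train_step, and the loss weighting are hypothetical stand-ins, not the paper's implementation. For the semi-supervised setting the abstract mentions, unlabeled images would presumably contribute only the adversarial term.

```python
# Hypothetical sketch of the adversarial training the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Scores an (image, defocus map) pair as real (GT) or fake (predicted)."""
    def __init__(self, in_ch=4):  # 3 image channels + 1 defocus channel
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-level logits
        )

    def forward(self, image, defocus_map):
        return self.net(torch.cat([image, defocus_map], dim=1))

def train_step(gen, disc, opt_g, opt_d, image, gt_map, adv_weight=0.01):
    """One labeled-data step; gen is assumed to end with a sigmoid."""
    # Discriminator: distinguish ground-truth maps from predicted maps.
    pred_map = gen(image).detach()
    d_real = disc(image, gt_map)
    d_fake = disc(image, pred_map)
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: supervised loss plus fooling the discriminator.
    pred_map = gen(image)
    d_fake = disc(image, pred_map)
    loss_g = (F.binary_cross_entropy(pred_map, gt_map)
              + adv_weight * F.binary_cross_entropy_with_logits(
                    d_fake, torch.ones_like(d_fake)))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_g.item(), loss_d.item()
```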
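The abstract does not define AUFβ; a plausible reading, given the complaint that existing metrics ignore robustness to thresholding, is the area under the Fβ-versus-threshold curve. The sketch below encodes that guess, with β² = 0.3 as in common saliency-style evaluation; both choices are assumptions, not the paper's definition.

```python
import numpy as np

def auf_beta(pred, gt, beta2=0.3, num_thresholds=256):
    """Area under the F-beta vs. binarization-threshold curve (assumed form).

    pred: float array with values in [0, 1]; gt: binary mask of equal shape.
    Averaging F-beta over uniformly spaced thresholds approximates the area
    under the curve over [0, 1].
    """
    thresholds = np.linspace(0.0, 1.0, num_thresholds)
    gt = gt.astype(bool)
    f_scores = []
    for t in thresholds:
        binary = pred >= t
        tp = np.logical_and(binary, gt).sum()
        precision = tp / max(binary.sum(), 1)
        recall = tp / max(gt.sum(), 1)
        denom = beta2 * precision + recall
        f = (1 + beta2) * precision * recall / denom if denom > 0 else 0.0
        f_scores.append(f)
    return float(np.mean(f_scores))
```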
Description: Date Revised 19.05.2022
published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2022.3171424