LEADER |
01000naa a22002652 4500 |
001 |
NLM340615370 |
003 |
DE-627 |
005 |
20231226005753.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2022 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2022.3171424
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1135.xml
|
035 |
|
|
|a (DE-627)NLM340615370
|
035 |
|
|
|a (NLM)35533163
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Jiang, Zeyu
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a MA-GANet
|b A Multi-Attention Generative Adversarial Network for Defocus Blur Detection
|
264 |
|
1 |
|c 2022
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 19.05.2022
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Background clutter poses challenges to defocus blur detection. Existing approaches often produce artifact predictions in cluttered background areas and relatively low-confidence predictions in boundary areas. In this work, we tackle the above issues from two perspectives. Firstly, inspired by the recent success of the self-attention mechanism, we introduce channel-wise and spatial-wise attention modules that attentively aggregate features across channels and spatial locations to obtain more discriminative features. Secondly, we propose a generative adversarial training strategy to suppress spurious and unreliable predictions. This is achieved by using a discriminator to distinguish predicted defocus maps from ground-truth ones. As such, the defocus network (generator) must produce 'realistic' defocus maps to minimize the discriminator loss. We further demonstrate that generative adversarial training allows additional unlabeled data to be exploited to improve performance, a.k.a. semi-supervised learning, and we provide the first benchmark on semi-supervised defocus detection. Finally, we demonstrate that existing evaluation metrics for defocus detection generally fail to quantify robustness with respect to thresholding. For a fair and practical evaluation, we introduce an effective yet efficient AUFβ metric. Extensive experiments on three public datasets verify the superiority of the proposed method over state-of-the-art approaches.
|
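The adversarial training strategy summarized in the abstract above can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the toy generator G, discriminator D, loss weighting, and tensor shapes below are all assumptions made purely for illustration.

# Minimal sketch (assumptions, not the paper's code): a generator G maps an
# RGB image to a 1-channel defocus map; a discriminator D scores whether a
# defocus map looks like a ground-truth one.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

pix_loss = nn.BCELoss()              # pixel-wise supervision on the map
adv_loss = nn.BCEWithLogitsLoss()    # adversarial term (D outputs logits)

image  = torch.rand(4, 3, 64, 64)    # toy labeled batch
gt_map = torch.rand(4, 1, 64, 64)    # ground-truth defocus maps in [0, 1]

pred = G(image)
real, fake = torch.ones(4, 1), torch.zeros(4, 1)

# Discriminator step: separate ground-truth maps from predicted ones.
d_loss = adv_loss(D(gt_map), real) + adv_loss(D(pred.detach()), fake)

# Generator step: pixel supervision plus an adversarial term that pushes
# predictions toward "realistic" maps (the generator tries to fool D).
# On unlabeled images the pixel term is dropped, which is what enables the
# semi-supervised use of extra data mentioned in the abstract; the 0.1
# weight is an arbitrary placeholder.
g_loss = pix_loss(pred, gt_map) + 0.1 * adv_loss(D(pred), real)
|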
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Xu, Xun
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhang, Le
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhang, Chao
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Foo, Chuan Sheng
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhu, Ce
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 31(2022) vom: 09., Seite 3494-3508
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:31
|g year:2022
|g day:09
|g pages:3494-3508
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2022.3171424
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 31
|j 2022
|b 09
|h 3494-3508
|