Global-Feature Encoding U-Net (GEU-Net) for Multi-Focus Image Fusion

Convolutional neural network (CNN)-based multi-focus image fusion methods, which learn the focus map from the source images, have greatly enhanced fusion performance compared with traditional methods. However, these methods have not yet reached satisfactory fusion results, since the convolution operation pays too much attention to the local region and treats the generation of the focus map as a local classification problem (classifying each pixel as focused or defocused). In this article, a global-feature encoding U-Net (GEU-Net) is proposed for multi-focus image fusion. In the proposed GEU-Net, the U-Net network treats the generation of the focus map as a global two-class segmentation task, segmenting the focused and defocused regions from a global view. To improve the global feature encoding capability of U-Net, a global feature pyramid extraction module (GFPE) and a global attention connection upsample module (GACU) are introduced to effectively extract and utilize global semantic and edge information. A perceptual loss is added to the loss function, and a large-scale dataset is constructed to boost the performance of GEU-Net. Experimental results show that the proposed GEU-Net achieves fusion performance superior to several state-of-the-art methods in terms of human visual quality, objective assessment, and network complexity.
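The final fusion step implied by the abstract, namely using a two-class (focused/defocused) focus map to select pixels from the two source images, can be sketched as follows. This is a minimal, generic illustration of focus-map-based fusion, not the authors' GEU-Net implementation; the function and array names are hypothetical.

```python
import numpy as np

def fuse_with_focus_map(src_a: np.ndarray, src_b: np.ndarray,
                        focus_map: np.ndarray) -> np.ndarray:
    """Fuse two source images with a binary focus map.

    focus_map[i, j] == 1 means pixel (i, j) is in focus in src_a;
    0 means it is in focus in src_b. This mirrors the two-class
    segmentation view of focus-map generation in the abstract.
    """
    focus = focus_map.astype(src_a.dtype)
    # Pixel-wise selection: take src_a where focused, src_b elsewhere.
    return focus * src_a + (1.0 - focus) * src_b

# Toy example: the left half of the scene is sharp in A, the right in B.
a = np.full((4, 4), 10.0)
b = np.full((4, 4), 20.0)
fmap = np.zeros((4, 4))
fmap[:, :2] = 1  # left half focused in source A
fused = fuse_with_focus_map(a, b, fmap)
```

In the paper the focus map itself is produced by the GEU-Net segmentation network rather than hand-crafted; the sketch above only shows how such a map, once predicted, combines the two sources.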

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 30 (2021), day 27, pages 163-175
Main author: Xiao, Bin (Author)
Other authors: Xu, Bocheng, Bi, Xiuli, Li, Weisheng
Format: Online article
Language: English
Published: 2021
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
LEADER 01000naa a22002652 4500
001 NLM316813605
003 DE-627
005 20231225161908.0
007 cr uuu---uuuuu
008 231225s2021 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2020.3033158  |2 doi 
028 5 2 |a pubmed24n1056.xml 
035 |a (DE-627)NLM316813605 
035 |a (NLM)33112746 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Xiao, Bin  |e verfasserin  |4 aut 
245 1 0 |a Global-Feature Encoding U-Net (GEU-Net) for Multi-Focus Image Fusion 
264 1 |c 2021 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 19.11.2020 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Convolutional neural network (CNN)-based multi-focus image fusion methods, which learn the focus map from the source images, have greatly enhanced fusion performance compared with traditional methods. However, these methods have not yet reached satisfactory fusion results, since the convolution operation pays too much attention to the local region and treats the generation of the focus map as a local classification problem (classifying each pixel as focused or defocused). In this article, a global-feature encoding U-Net (GEU-Net) is proposed for multi-focus image fusion. In the proposed GEU-Net, the U-Net network treats the generation of the focus map as a global two-class segmentation task, segmenting the focused and defocused regions from a global view. To improve the global feature encoding capability of U-Net, a global feature pyramid extraction module (GFPE) and a global attention connection upsample module (GACU) are introduced to effectively extract and utilize global semantic and edge information. A perceptual loss is added to the loss function, and a large-scale dataset is constructed to boost the performance of GEU-Net. Experimental results show that the proposed GEU-Net achieves fusion performance superior to several state-of-the-art methods in terms of human visual quality, objective assessment, and network complexity.
650 4 |a Journal Article 
700 1 |a Xu, Bocheng  |e verfasserin  |4 aut 
700 1 |a Bi, Xiuli  |e verfasserin  |4 aut 
700 1 |a Li, Weisheng  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 30(2021) vom: 27., Seite 163-175  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:30  |g year:2021  |g day:27  |g pages:163-175 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2020.3033158  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 30  |j 2021  |b 27  |h 163-175