DDcGAN: A Dual-discriminator Conditional Generative Adversarial Network for Multi-resolution Image Fusion

In this paper, we propose a new end-to-end model, termed the dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Our method establishes an adversarial game between a generator and two discriminators. The generator a...
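The multi-resolution constraint described in the abstract (the downsampled fused image should resemble the low-resolution infrared input, while the fused gradients should follow the high-resolution visible input) can be illustrated with a small numerical sketch. This is a hypothetical Python/NumPy example only: the average-pooling downsampler, the gradient penalty, and the parameters `factor` and `lam` are illustrative assumptions, not the paper's actual loss formulation.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool `img` by `factor`; a stand-in (assumed here) for the
    low-pass downsampling operator applied to the fused image."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def gradient(img):
    """Forward-difference image gradients, used as a simple texture proxy."""
    gx = np.diff(img, axis=1, prepend=img[:, :1])
    gy = np.diff(img, axis=0, prepend=img[:1, :])
    return gx, gy

def content_loss(fused, ir_lowres, vis, factor=4, lam=0.5):
    """Sketch of a DDcGAN-style content loss: the downsampled fused image
    matches the low-resolution infrared image (thermal intensity), while
    the fused gradients match the visible image (texture detail).
    `factor` and `lam` are illustrative values, not from the paper."""
    intensity_term = np.mean((downsample(fused, factor) - ir_lowres) ** 2)
    fx, fy = gradient(fused)
    vx, vy = gradient(vis)
    texture_term = np.mean((fx - vx) ** 2 + (fy - vy) ** 2)
    return intensity_term + lam * texture_term
```

In training, a term like this would be minimized by the generator alongside the two adversarial losses; the sketch only shows why the downsampling constraint avoids choosing between blurred thermal information and lost visible texture.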

Full description

Bibliographic details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - (2020), 10 March
Main author: Ma, Jiayi (Author)
Other authors: Xu, Han, Jiang, Junjun, Mei, Xiaoguang, Zhang, Xiao-Ping
Format: Online article
Language: English
Published: 2020
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM307559912
003 DE-627
005 20240229162645.0
007 cr uuu---uuuuu
008 231225s2020 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2020.2977573  |2 doi 
028 5 2 |a pubmed24n1308.xml 
035 |a (DE-627)NLM307559912 
035 |a (NLM)32167894 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Ma, Jiayi  |e verfasserin  |4 aut 
245 1 0 |a DDcGAN  |b A Dual-discriminator Conditional Generative Adversarial Network for Multi-resolution Image Fusion 
264 1 |c 2020 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 27.02.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a In this paper, we propose a new end-to-end model, termed the dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Our method establishes an adversarial game between a generator and two discriminators. The generator aims to generate a real-like fused image, guided by a specifically designed content loss, to fool the two discriminators, while the two discriminators aim to distinguish the structural differences between the fused image and the two source images, respectively. Consequently, the fused image is forced to simultaneously preserve the thermal radiation of the infrared image and the texture details of the visible image. Moreover, to fuse source images of different resolutions, e.g., a low-resolution infrared image and a high-resolution visible image, our DDcGAN constrains the downsampled fused image to have properties similar to those of the infrared image. This avoids the blurring of thermal radiation information and the loss of visible texture detail that typically occur in traditional methods. In addition, we apply our DDcGAN to fusing multi-modality medical images of different resolutions, e.g., a low-resolution positron emission tomography image and a high-resolution magnetic resonance image. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our DDcGAN over the state of the art, in terms of both visual effect and quantitative metrics. 
650 4 |a Journal Article 
700 1 |a Xu, Han  |e verfasserin  |4 aut 
700 1 |a Jiang, Junjun  |e verfasserin  |4 aut 
700 1 |a Mei, Xiaoguang  |e verfasserin  |4 aut 
700 1 |a Zhang, Xiao-Ping  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g (2020) vom: 10. März  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g year:2020  |g day:10  |g month:03 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2020.2977573  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |j 2020  |b 10  |c 03