On the Diversity of Conditional Image Synthesis with Semantic Layouts

Full Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - (2019), 10 Jan.
Main Author: Yang, Zichen (Author)
Other Authors: Liu, Haifeng, Cai, Deng
Format: Online Article
Language: English
Published: 2019
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Summary: Many image processing tasks can be formulated as translating images between two image domains, such as colorization, super-resolution, and conditional image synthesis. In most of these tasks, an input image may correspond to multiple valid outputs. However, existing approaches exhibit only minor stochasticity in their outputs. In this paper, we present a novel approach to synthesize diverse realistic images corresponding to a semantic layout. We introduce a diversity loss objective, which maximizes the distance between synthesized image pairs and relates the input noise to the semantic segments in the synthesized images. Thus, our approach can not only produce multiple diverse images but also allow users to manipulate the output images by adjusting the noise manually. Experimental results show that images synthesized by our approach are more diverse than those of existing works, and that adding our diversity loss does not degrade the realism of the base networks. Moreover, our approach can be applied to unpaired datasets.
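The diversity loss described in the summary can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function name, the use of mean absolute (L1) distance, and the normalization by noise distance are all assumptions based on common formulations of pairwise diversity objectives.

```python
import numpy as np

def diversity_loss(img_a, img_b, z_a, z_b, eps=1e-8):
    """Pairwise diversity objective for two images synthesized from
    different noise vectors (hypothetical sketch).

    The loss is the negative ratio of image distance to noise distance,
    so minimizing it pushes outputs generated from different noise
    vectors apart, encouraging diverse synthesis.
    """
    # L1 distance between the two synthesized images.
    img_dist = np.mean(np.abs(img_a - img_b))
    # L1 distance between the corresponding noise vectors.
    z_dist = np.mean(np.abs(z_a - z_b))
    # Negated so that a gradient-descent optimizer maximizes diversity.
    return -img_dist / (z_dist + eps)
```

In a full training pipeline this term would be added, with a weighting coefficient, to the usual adversarial loss of the base synthesis network, so that realism and diversity are optimized jointly.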
Description: Date Revised 27.02.2024
Published: Print-Electronic
Citation Status Publisher
ISSN:1941-0042
DOI:10.1109/TIP.2019.2891935