Continuous Conditional Generative Adversarial Networks: Novel Empirical Losses and Label Input Mechanisms

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - 45(2023), issue 7, 12 July, pages 8143-8158
First Author: Ding, Xin (Author)
Other Authors: Wang, Yongwei; Xu, Zuheng; Welch, William J.; Wang, Z. Jane
Format: Online Article
Language: English
Published: 2023
Parent work: IEEE Transactions on Pattern Analysis and Machine Intelligence
Subjects: Journal Article
Description
Abstract: This article focuses on conditional generative modeling (CGM) for image data with continuous, scalar conditions (termed regression labels). We propose the first model for this task, called the continuous conditional generative adversarial network (CcGAN). Existing conditional GANs (cGANs) are mainly designed for categorical conditions (e.g., class labels). Conditioning on regression labels is mathematically distinct and raises two fundamental problems: (P1) since there may be very few (even zero) real images for some regression labels, minimizing existing empirical versions of cGAN losses (a.k.a. empirical cGAN losses) often fails in practice; and (P2) since regression labels are scalar and infinitely many, conventional label input mechanisms (e.g., combining a hidden map of the generator/discriminator with a one-hot encoded label) are not applicable. We solve these problems by (S1) reformulating existing empirical cGAN losses to be appropriate for the continuous scenario, and (S2) proposing a naive label input (NLI) mechanism and an improved label input (ILI) mechanism to incorporate regression labels into the generator and the discriminator. The reformulation in (S1) leads to two novel empirical discriminator losses, termed the hard vicinal discriminator loss (HVDL) and the soft vicinal discriminator loss (SVDL), and a novel empirical generator loss. Hence, we propose four versions of CcGAN employing different combinations of the proposed losses and label input mechanisms. Error bounds for the discriminator trained with HVDL and SVDL are derived under mild assumptions. To evaluate the performance of CcGANs, two new benchmark datasets (RC-49 and Cell-200) are created. A novel evaluation metric, the Sliding Fréchet Inception Distance (SFID), is also proposed to replace Intra-FID when Intra-FID is not applicable. Our extensive experiments on several benchmark datasets (i.e., RC-49, UTKFace, Cell-200, and Steering Angle, at both low and high resolutions) support the following findings: the proposed CcGAN generates diverse, high-quality samples from the image distribution conditional on a given regression label, and it substantially outperforms cGAN both visually and quantitatively.
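
The abstract only names the vicinal losses; the sketch below (plain Python/NumPy, with the hyperparameter names kappa and nu chosen here for illustration) shows the weighting idea behind HVDL and SVDL: when few or no real images carry the exact target label, nearby labels are borrowed, either with a hard cutoff or with a smooth exponential down-weighting. This is a schematic reading of the abstract, not the paper's implementation.

    import numpy as np

    def hard_vicinal_weights(train_labels, target_label, kappa=0.05):
        # Hard vicinity (HVDL idea): an image contributes to the loss at
        # target_label only if its label falls within +/- kappa.
        return (np.abs(train_labels - target_label) <= kappa).astype(float)

    def soft_vicinal_weights(train_labels, target_label, nu=1000.0):
        # Soft vicinity (SVDL idea): every image contributes, down-weighted
        # smoothly by its label distance to target_label.
        return np.exp(-nu * (train_labels - target_label) ** 2)

    # Labels normalized to [0, 1]; samples near 0.50 dominate while distant
    # ones contribute little or nothing, so sparse labels borrow neighbors.
    labels = np.array([0.10, 0.48, 0.50, 0.53, 0.90])
    print(hard_vicinal_weights(labels, 0.50))  # [0. 1. 1. 1. 0.]
    print(soft_vicinal_weights(labels, 0.50))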
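
For the label input side, the abstract contrasts one-hot encoding (impossible for infinitely many scalar labels) with learned mechanisms. Below is a minimal PyTorch sketch of the general idea: embed the scalar label with a small MLP whose output can then be injected into the generator or discriminator. The module name LabelEmbed and the layer sizes are illustrative, not the paper's architecture.

    import torch
    import torch.nn as nn

    class LabelEmbed(nn.Module):
        # Map a scalar regression label to a dense code that the
        # generator/discriminator can consume, in place of a one-hot
        # vector (which does not exist for continuous labels).
        def __init__(self, dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(1, dim), nn.ReLU(),
                nn.Linear(dim, dim), nn.ReLU(),
            )

        def forward(self, y):  # y: (batch, 1), labels scaled to [0, 1]
            return self.net(y)

    embed = LabelEmbed()
    y = torch.rand(4, 1)   # four random regression labels
    code = embed(y)        # (4, 128) label representation
    print(code.shape)

In a typical conditional GAN, such a code would modulate generator feature maps (e.g., via conditional normalization) and enter the discriminator through a projection-style inner product; the abstract's NLI/ILI distinction concerns whether the raw label or a learned embedding of it is fed in.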
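
Finally, the abstract introduces the Sliding Fréchet Inception Distance as a replacement for Intra-FID when too few images share any single label. A plausible schematic is to average FID over a window sliding across the label range; in the sketch below, fid_fn is a hypothetical helper that scores two index sets, and the window centers and radius are illustrative.

    import numpy as np

    def sliding_fid(real_labels, fake_labels, fid_fn, centers, radius=0.05):
        # At each center c, pool real and fake images whose labels fall
        # in [c - radius, c + radius], score them with FID, and average.
        scores = []
        for c in centers:
            real_idx = np.where(np.abs(real_labels - c) <= radius)[0]
            fake_idx = np.where(np.abs(fake_labels - c) <= radius)[0]
            if len(real_idx) and len(fake_idx):
                scores.append(fid_fn(real_idx, fake_idx))
        return float(np.mean(scores))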
Description: Date Completed 06.06.2023
Date Revised 06.06.2023
published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN: 1939-3539
DOI: 10.1109/TPAMI.2022.3228915