Dual Color Space Guided Sketch Colorization


Full Description

Bibliographic Details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 30 (2021), dated: 17., pp. 7292-7304
First author: Dou, Zhi (author)
Other authors: Wang, Ning, Li, Baopu, Wang, Zhihui, Li, Haojie, Liu, Bin
Format: Online article
Language: English
Published: 2021
Access to the parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Description
Abstract: Automatic sketch colorization is a challenging task in both computer graphics and computer vision, since color, texture, and shading must all be generated from an abstract sketch. It is also a subjective task in the painting process, requiring illustrators to comprehend drawing priors (DP), such as hue variation, saturation contrast, and gray contrast, and to apply them in the HSV color space, which is closer to the human visual cognition system. As such, incorporating supplementary supervision in the HSV color space may be beneficial to sketch colorization. However, previous methods improve colorization quality only in the RGB color space without considering the HSV color space, often producing results with dull colors, inappropriate saturation contrast, and artifacts. To address this issue, we propose a novel sketch colorization method, the dual color space guided generative adversarial network (DCSGAN), which exploits the complementary information contained in both the RGB and HSV color spaces. Specifically, we incorporate the HSV color space to construct dual color spaces for supervision, using a color space transformation (CST) network that learns the transformation from the RGB to the HSV color space. We then propose a DP loss that enables DCSGAN to generate vivid color images with pixel-level supervision. Additionally, a novel dual color space adversarial (DCSA) loss is designed to guide the generator at the global level, reducing artifacts to meet audiences' aesthetic expectations. Extensive experiments and ablation studies demonstrate the superiority of the proposed method over previous state-of-the-art (SOTA) methods.
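To make the dual color space idea concrete, the sketch below shows a minimal, differentiable RGB-to-HSV conversion and an L1 reconstruction loss evaluated in both color spaces. This is only an illustrative PyTorch sketch based on the abstract, not the authors' DCSGAN implementation: it uses the analytic RGB-to-HSV formula in place of the learned CST network, and the names rgb_to_hsv and dual_space_l1_loss are hypothetical.

# Illustrative sketch only: dual color space (RGB + HSV) pixel-level supervision,
# loosely following the abstract above. NOT the authors' DCSGAN code; the analytic
# conversion below stands in for their learned CST network, and all names are hypothetical.
import torch
import torch.nn.functional as F


def rgb_to_hsv(rgb: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Differentiable RGB -> HSV conversion.

    rgb: tensor of shape (N, 3, H, W) with values in [0, 1].
    Returns a tensor of the same shape with H, S, V channels in [0, 1].
    """
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    maxc, _ = rgb.max(dim=1)
    minc, _ = rgb.min(dim=1)
    delta = maxc - minc

    # Value and saturation.
    v = maxc
    s = delta / (maxc + eps)

    # Hue: piecewise definition depending on which channel is the maximum.
    h_r = ((g - b) / (delta + eps)) % 6.0
    h_g = (b - r) / (delta + eps) + 2.0
    h_b = (r - g) / (delta + eps) + 4.0
    h = torch.where(maxc == r, h_r, torch.where(maxc == g, h_g, h_b))
    h = torch.where(delta > eps, h / 6.0, torch.zeros_like(h))

    return torch.stack([h, s, v], dim=1)


def dual_space_l1_loss(fake_rgb: torch.Tensor, real_rgb: torch.Tensor,
                       hsv_weight: float = 1.0) -> torch.Tensor:
    """L1 reconstruction loss computed in both the RGB and HSV color spaces."""
    loss_rgb = F.l1_loss(fake_rgb, real_rgb)
    loss_hsv = F.l1_loss(rgb_to_hsv(fake_rgb), rgb_to_hsv(real_rgb))
    return loss_rgb + hsv_weight * loss_hsv


if __name__ == "__main__":
    fake = torch.rand(2, 3, 64, 64, requires_grad=True)
    real = torch.rand(2, 3, 64, 64)
    loss = dual_space_l1_loss(fake, real)
    loss.backward()  # gradients flow through both color spaces
    print(float(loss))

In this simplified form, the HSV term directly penalizes hue and saturation mismatches that an RGB-only L1 loss can underweight, which is the intuition behind the paper's pixel-level DP supervision; the adversarial DCSA loss described in the abstract would be added on top of such a reconstruction term.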
Description: Date Revised 23.08.2021
Published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2021.3104190