A Unified Framework for Generalizable Style Transfer: Style and Content Separation
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - (2020), 31 Jan.
Author:
Other authors:
Format: Online article
Language: English
Published: 2020
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Abstract: Image style transfer has drawn broad attention recently. However, most existing methods aim to explicitly model the transformation between different styles, and the learned model is often not generalizable to new styles. Based on the idea of style and content separation, we propose a unified style transfer framework that consists of a style encoder, a content encoder, a mixer, and a decoder. The style encoder and the content encoder extract the style and content representations from the corresponding reference images. The two representations are integrated by the mixer and fed to the decoder, which generates images with the target style and content. Assuming the same encoder can be shared among different styles/contents, the style/content encoder explores a generalizable way to represent style/content information, i.e., the encoders are expected to capture the underlying representation for different styles/contents and generalize to new styles/contents. Trained simultaneously on a number of styles and contents, the framework builds one single transfer network for multiple styles, which leads to a key merit of the framework: its generalizability to new styles and contents. To evaluate the proposed framework, we apply it to both supervised and unsupervised style transfer, using character typeface transfer and neural style transfer as respective examples. For character typeface transfer, to separate the style features and content features, we leverage the conditional dependence of styles and contents given an image. For neural style transfer, we leverage the statistical information of feature maps in certain layers to represent style. Extensive experimental results have demonstrated the effectiveness and robustness of the proposed methods. Furthermore, models learned under the proposed framework are shown to be more generalizable to new styles and contents.
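The abstract describes an encoder-mixer-decoder layout: a style encoder and a content encoder extract representations from two reference images, a mixer integrates the two codes, and a decoder renders an image with the target style and content. The sketch below illustrates that layout in PyTorch; it is a minimal reading of the abstract, not the authors' published architecture. The layer sizes, the concatenation-based mixer, and the per-channel mean/std style statistics (one common interpretation of "statistical information of feature maps") are all illustrative assumptions.

```python
# Minimal sketch of the style/content separation framework from the abstract
# (style encoder + content encoder + mixer + decoder). Layer sizes, the
# concatenation-based mixer, and the mean/std style statistics are
# illustrative assumptions, not the authors' published architecture.
import torch
import torch.nn as nn


class ConvEncoder(nn.Module):
    """Convolutional encoder; one instance serves as the style encoder and
    another as the content encoder, each shared across all styles/contents."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_dim, 4, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class Mixer(nn.Module):
    """Integrates the style and content codes (here: channel concat + 1x1 conv)."""
    def __init__(self, dim=128):
        super().__init__()
        self.fuse = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, style_code, content_code):
        return self.fuse(torch.cat([style_code, content_code], dim=1))


class Decoder(nn.Module):
    """Upsamples the mixed representation back to an image."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)


class StyleTransferNet(nn.Module):
    """One network for many styles: changing the style reference image
    changes the output style, with no per-style retraining."""
    def __init__(self, dim=128):
        super().__init__()
        self.style_enc = ConvEncoder(dim)
        self.content_enc = ConvEncoder(dim)
        self.mixer = Mixer(dim)
        self.decoder = Decoder(dim)

    def forward(self, style_ref, content_ref):
        s = self.style_enc(style_ref)
        c = self.content_enc(content_ref)
        return self.decoder(self.mixer(s, c))


def style_statistics(feat):
    """Per-channel mean and std of a feature map: one common way to summarize
    style from intermediate activations, in the spirit of the abstract's
    'statistical information of feature maps'."""
    return torch.cat([feat.mean(dim=(2, 3)), feat.std(dim=(2, 3))], dim=1)


if __name__ == "__main__":
    net = StyleTransferNet()
    style_img = torch.randn(1, 3, 64, 64)    # reference image supplying the style
    content_img = torch.randn(1, 3, 64, 64)  # reference image supplying the content
    out = net(style_img, content_img)
    print(out.shape)                          # torch.Size([1, 3, 64, 64])
    print(style_statistics(net.style_enc(style_img)).shape)  # torch.Size([1, 256])
```

Because the same encoders serve every style and content, transferring to an unseen style in this layout only requires supplying a new style reference image, which is the generalizability property the abstract emphasizes.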
Description: Date Revised 27.02.2024; published: Print-Electronic; Citation Status: Publisher
ISSN: 1941-0042
DOI: 10.1109/TIP.2020.2969081