LEADER 01000naa a22002652 4500
001    NLM336118201
003    DE-627
005    20231225231417.0
007    cr uuu---uuuuu
008    231225s2023 xx |||||o 00| ||eng c
024 7  |a 10.1109/TVCG.2022.3146000 |2 doi
028 52 |a pubmed24n1120.xml
035    |a (DE-627)NLM336118201
035    |a (NLM)35077365
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Shi, Min |e verfasserin |4 aut
245 10 |a Reference-Based Deep Line Art Video Colorization
264  1 |c 2023
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 04.05.2023
500    |a Date Revised 04.05.2023
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a Coloring line art images based on the colors of reference images is a crucial but time-consuming and tedious stage in animation production. This paper proposes a deep architecture to automatically color line art videos in the same color style as given reference images. Our framework consists of a color transform network and a temporal refinement network based on 3D U-Net. The color transform network takes the target line art images, together with the line art and color images of the references, as input and generates the corresponding target color images. To cope with the large differences between each target line art image and the reference color images, we propose a distance attention layer that uses non-local similarity matching to determine region correspondences between the target image and the reference images and transfers local color information from the references to the target. To ensure global color style consistency, we further incorporate Adaptive Instance Normalization (AdaIN), with transformation parameters derived from a style embedding that describes the global color style of the references, extracted by an embedder network. The temporal refinement network learns spatiotemporal features through 3D convolutions to ensure the temporal color consistency of the results. When dealing with an animation of a new style, our model can achieve even better coloring results by fine-tuning its parameters on only a small number of samples. To evaluate our method, we build a line art coloring dataset. Experiments show that our method achieves the best performance on line art video coloring compared with current state-of-the-art methods.
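
[Note: The 520 abstract describes two generic mechanisms, non-local similarity matching for region-wise color transfer and AdaIN for global color style. Below is a minimal, hypothetical PyTorch sketch of how such components could look; the function names, tensor shapes, and scaled-softmax weighting are illustrative assumptions, not the authors' published implementation.]

import torch


def adain(content, style_mean, style_std, eps=1e-5):
    # Re-normalize content features so their per-channel statistics match
    # the style statistics (assumed to come from an embedder network).
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    return style_std * (content - c_mean) / c_std + style_mean


def non_local_color_transfer(target_feat, ref_feat, ref_color_feat):
    # Match every target-feature location against all reference locations
    # and transfer reference color features weighted by feature similarity.
    b, c, h, w = target_feat.shape
    q = target_feat.flatten(2).transpose(1, 2)      # (B, HW, C)
    k = ref_feat.flatten(2)                         # (B, C, HW_ref)
    v = ref_color_feat.flatten(2).transpose(1, 2)   # (B, HW_ref, C)
    attn = torch.softmax(q @ k / c ** 0.5, dim=-1)  # (B, HW, HW_ref)
    return (attn @ v).transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    target = torch.randn(1, 64, 32, 32)     # target line-art features
    ref = torch.randn(1, 64, 32, 32)        # reference line-art features
    ref_color = torch.randn(1, 64, 32, 32)  # reference color features
    out = non_local_color_transfer(target, ref, ref_color)
    out = adain(out,
                ref_color.mean(dim=(2, 3), keepdim=True),
                ref_color.std(dim=(2, 3), keepdim=True))
    print(out.shape)  # torch.Size([1, 64, 32, 32])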
650  4 |a Journal Article
700 1  |a Zhang, Jia-Qi |e verfasserin |4 aut
700 1  |a Chen, Shu-Yu |e verfasserin |4 aut
700 1  |a Gao, Lin |e verfasserin |4 aut
700 1  |a Lai, Yu-Kun |e verfasserin |4 aut
700 1  |a Zhang, Fang-Lue |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on visualization and computer graphics |d 1996 |g 29(2023), 6 vom: 02. Juni, Seite 2965-2979 |w (DE-627)NLM098269445 |x 1941-0506 |7 nnns
773 18 |g volume:29 |g year:2023 |g number:6 |g day:02 |g month:06 |g pages:2965-2979
856 40 |u http://dx.doi.org/10.1109/TVCG.2022.3146000 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 29 |j 2023 |e 6 |b 02 |c 06 |h 2965-2979