LEADER |
01000naa a22002652 4500 |
001 |
NLM266083900 |
003 |
DE-627 |
005 |
20231224213713.0 |
007 |
cr uuu---uuuuu |
008 |
231224s2017 xx |||||o 00| ||eng c |
028 |
5 |
2 |
|a pubmed24n0886.xml
|
035 |
|
|
|a (DE-627)NLM266083900
|
035 |
|
|
|a (NLM)27831874
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Zhang, Dongyu
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Content-Adaptive Sketch Portrait Generation by Decompositional Representation Learning
|
264 |
|
1 |
|c 2017
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 30.07.2018
|
500 |
|
|
|a Date Revised 30.07.2018
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Sketch portrait generation benefits a wide range of applications such as digital entertainment and law enforcement. Although plenty of efforts have been dedicated to this task, several issues still remain unsolved for generating vivid and detail-preserving personal sketch portraits. For example, quite a few artifacts may exist in synthesizing hairpins and glasses, and textural details may be lost in the regions of hair or mustache. Moreover, the generalization ability of current systems is somewhat limited, since they usually require an elaborately collected dictionary of examples or carefully tuned features/components. In this paper, we present a novel representation learning framework that generates an end-to-end photo-sketch mapping through structure and texture decomposition. In the training stage, we first decompose the input face photo into different components according to their representational contents (i.e., structural and textural parts) by using a pre-trained convolutional neural network (CNN). Then, we utilize a branched fully convolutional network for learning structural and textural representations, respectively. In addition, we design a sorted matching mean square error metric to measure texture patterns in the loss function. In the sketch rendering stage, our approach automatically generates structural and textural representations for the input photo and produces the final result via a probabilistic fusion scheme. Extensive experiments on several challenging benchmarks suggest that our approach outperforms example-based synthesis algorithms in terms of both perceptual and objective metrics. In addition, the proposed method also has better generalization ability across data sets without additional training.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Lin, Liang
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Chen, Tianshui
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wu, Xian
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Tan, Wenwei
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Izquierdo, Ebroul
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 26(2017), 1 vom: 04. Jan., Seite 328-339
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:26
|g year:2017
|g number:1
|g day:04
|g month:01
|g pages:328-339
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 26
|j 2017
|e 1
|b 04
|c 01
|h 328-339
|