Video texture synthesis with multi-frame LBP-TOP and diffeomorphic growth model



Bibliographic Details

Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 22 (2013), No. 10, 17 Oct., pp. 3879-3891
Main Author: Guo, Yimo (Author)
Other Authors: Zhao, Guoying, Zhou, Ziheng, Pietikainen, Matti
Format: Online Article
Language: English
Published: 2013
Parent Work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article, Research Support, Non-U.S. Gov't
Description

Abstract: Video texture synthesis is the process of providing a continuous and infinitely varying stream of frames, which plays an important role in computer vision and graphics. However, it remains a challenging problem to generate high-quality synthesis results. Considering the two key factors that affect synthesis performance, frame representation and blending artifacts, we improve the synthesis performance from two aspects: 1) an effective frame representation is designed to capture both the image appearance information in the spatial domain and the longitudinal information in the temporal domain; 2) artifacts that degrade the synthesis quality are significantly suppressed on the basis of a diffeomorphic growth model. The proposed video texture synthesis approach has two major stages: a video stitching stage and a transition smoothing stage. In the first stage, a video texture synthesis model is proposed to generate an infinite video flow. To find similar frames for stitching video clips, we present a new spatial-temporal descriptor that provides an effective representation for different types of dynamic textures. In the second stage, a smoothing method is proposed to improve synthesis quality, especially with respect to temporal continuity. It establishes a diffeomorphic growth model to emulate local dynamics around stitched frames. The proposed approach is thoroughly tested on public databases and videos from the Internet, and is evaluated both qualitatively and quantitatively.
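The spatial-temporal descriptor referenced in the abstract belongs to the LBP-TOP family, which computes Local Binary Pattern histograms on three orthogonal planes (XY, XT, YT) of a video volume. As a rough, hedged illustration only (a minimal single-plane-per-axis sketch, not the authors' multi-frame formulation), the idea can be coded as:

```python
import numpy as np

def lbp_8bit(plane):
    """Basic 8-neighbour LBP codes for a 2-D array (borders excluded)."""
    c = plane[1:-1, 1:-1]
    # offsets of the 8 neighbours, clockwise from top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = plane[1 + dy:plane.shape[0] - 1 + dy,
                   1 + dx:plane.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_top_descriptor(volume):
    """Concatenate LBP histograms from one XY, one XT and one YT slice
    of a video volume shaped (T, H, W) -- the LBP-TOP idea in miniature.
    (The paper aggregates over many planes and frames; this is a sketch.)"""
    t, h, w = volume.shape
    planes = [volume[t // 2],        # XY plane at the middle frame
              volume[:, h // 2, :],  # XT plane at the middle row
              volume[:, :, w // 2]]  # YT plane at the middle column
    hist = np.concatenate([
        np.bincount(lbp_8bit(p).ravel(), minlength=256) for p in planes
    ]).astype(float)
    return hist / hist.sum()         # normalised 768-bin descriptor
```

For the stitching stage, descriptors like this could be compared between all frame pairs (e.g. by chi-square or Euclidean distance) and the closest pairs chosen as transition points; the paper's multi-frame variant and the diffeomorphic smoothing model go beyond this sketch.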
Description: Date Completed 01.04.2014
Date Revised 02.09.2013
published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN:1941-0042
DOI:10.1109/TIP.2013.2263148