Video registration using dynamic textures
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33 (2011), no. 1, 04 Jan. 2011, pp. 158-171
Format: Online article
Language: English
Published: 2011
Parent work: IEEE Transactions on Pattern Analysis and Machine Intelligence
Keywords: Journal Article; Research Support, Non-U.S. Gov't; Research Support, U.S. Gov't, Non-P.H.S.
Abstract: We consider the problem of spatially and temporally registering multiple video sequences of dynamical scenes which contain, but are not limited to, nonrigid objects such as fireworks, flags fluttering in the wind, etc., taken from different vantage points. This problem is extremely challenging due to the presence of complex variations in the appearance of such dynamic scenes. In this paper, we propose a simple algorithm for matching such complex scenes. Our algorithm does not require the cameras to be synchronized, and is not based on frame-by-frame or volume-by-volume registration. Instead, we model each video as the output of a linear dynamical system and transform the task of registering the video sequences to that of registering the parameters of the corresponding dynamical models. As these parameters are not uniquely defined, one cannot directly compare them to perform registration. We resolve these ambiguities by jointly identifying the parameters from multiple video sequences, and converting the identified parameters to a canonical form. This reduces the video registration problem to a multiple image registration problem, which can be efficiently solved using existing image matching techniques. We test our algorithm on a wide variety of challenging video sequences and show that it matches the performance of significantly more computationally expensive existing methods.
Description: Date Completed 14.03.2011; Date Revised 22.11.2010; published: Print; Citation Status: MEDLINE
ISSN: 1939-3539
DOI: 10.1109/TPAMI.2010.61
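The abstract describes modeling each video as the output of a linear dynamical system (a dynamic texture) and registering the identified model parameters after converting them to a canonical form. The Python sketch below illustrates the standard SVD-based dynamic-texture identification step and the basis ambiguity the paper must resolve; it is not the authors' reference implementation, and the function name, state dimension, and frame layout are illustrative assumptions.

```python
import numpy as np

def fit_dynamic_texture(frames, n_states=10):
    """Fit a linear dynamical system x_{t+1} = A x_t, y_t = C x_t + y_mean
    to a grayscale video of shape (T, H, W). Illustrative sketch only."""
    T = frames.shape[0]
    Y = frames.reshape(T, -1).T.astype(float)      # (pixels, T) data matrix
    y_mean = Y.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(Y - y_mean, full_matrices=False)
    C = U[:, :n_states]                            # observation matrix
    X = s[:n_states, None] * Vt[:n_states, :]      # (n_states, T) state sequence
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])       # least-squares dynamics
    return A, C, y_mean, X

# The identified parameters are only defined up to an invertible change of
# basis P: the system (P @ A @ inv(P), C @ inv(P)) with states P @ x_t
# generates exactly the same frames, so raw (A, C) pairs estimated from
# different cameras cannot be compared directly. This is the ambiguity the
# paper resolves by jointly identifying the parameters and converting them
# to a canonical form before reducing the task to image registration.

if __name__ == "__main__":
    video = np.random.rand(50, 32, 32)             # placeholder 50-frame video
    A, C, y_mean, X = fit_dynamic_texture(video, n_states=5)
    print(A.shape, C.shape)                        # (5, 5) and (1024, 5)
```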