Inference-Reconstruction Variational Autoencoder for Light Field Image Reconstruction


Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 31 (2022), dated: 22, pages 5629-5644
First author: Han, Kang (author)
Other authors: Xiang, Wei
Format: Online article
Language: English
Published: 2022
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Description
Summary: Light field cameras can capture the radiance and direction of light rays in a single exposure, providing a new perspective on photography and 3D geometry perception. However, existing sub-aperture based light field cameras are limited by their sensor resolution and cannot obtain high spatial and angular resolution images simultaneously. In this paper, we propose an inference-reconstruction variational autoencoder (IR-VAE) to reconstruct a dense light field image from the four corner reference views of a light field image. The proposed IR-VAE comprises one inference network and one reconstruction network, where the inference network infers novel views from existing reference views and viewpoint conditions, and the reconstruction network reconstructs novel views from a latent variable that contains the information of reference views, novel views, and viewpoints. The conditional latent variable in the inference network is regularized by the latent variable in the reconstruction network to facilitate information flow between the conditional latent variable and novel views. We also propose a statistical distance measure dubbed the mean local maximum mean discrepancy (MLMMD) to enable the measurement of the statistical distance between two distributions with high-resolution latent variables, which can capture richer information than their low-resolution counterparts. Finally, we propose a viewpoint-dependent indirect view synthesis method to synthesize novel views more efficiently by leveraging adaptive convolution. Experimental results show that our proposed methods outperform state-of-the-art methods on different light field datasets.
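The abstract describes MLMMD only at a high level, as a local variant of the maximum mean discrepancy (MMD) suited to high-resolution latent variables. As a point of reference, a minimal NumPy sketch of the standard (squared, biased) MMD estimate that MLMMD builds on might look like this; the function names and Gaussian-kernel choice are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy:
    #   MMD^2 = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]
    return (gaussian_kernel(x, x, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(256, 8))  # samples from P
b = rng.normal(0.0, 1.0, size=(256, 8))  # samples from Q = P
c = rng.normal(3.0, 1.0, size=(256, 8))  # samples from a shifted Q

# Matching distributions yield a (near-)zero MMD; a shift yields a larger one.
print(mmd2(a, b) < mmd2(a, c))
```

Intuitively, regularizing a conditional latent distribution against a target distribution with such a kernel distance penalizes mismatch without requiring a closed-form density, which is what makes MMD-style measures attractive for latent-variable models like the IR-VAE.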
Description: Date Revised 06.09.2022
Published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2022.3197976