Ghost-Free Deep High-Dynamic-Range Imaging Using Focus Pixels for Complex Motion Scenes


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 30 (2021), pp. 5001-5016
Main Author: Woo, Sung-Min (author)
Other Authors: Ryu, Je-Ho; Kim, Jong-Ok
Format: Online article
Language: English
Published: 2021
Access to the parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Abstract: Multi-exposure image fusion inevitably causes ghost artifacts owing to inaccurate image registration. In this study, we propose a deep learning technique for the seamless fusion of multi-exposed low dynamic range (LDR) images using a focus-pixel sensor. For auto-focusing in mobile cameras, a focus-pixel sensor natively provides left (L) and right (R) luminance images simultaneously with a full-resolution RGB image. These L/R images are less saturated than the RGB image because they sum to a normal pixel value in the RGB image of the focus-pixel sensor. Two features of the focus-pixel image, namely its relatively short exposure and its perfect alignment, are exploited in this study to provide fusion cues for high dynamic range (HDR) imaging. To minimize fusion artifacts, luminance and chrominance fusion are performed separately in two sub-nets. In a luminance recovery network, two heterogeneous images, the focus-pixel image and the corresponding overexposed LDR image, are first fused by joint learning to produce an HDR luminance image. Subsequently, a chrominance network fuses the color components of the misaligned underexposed LDR input to obtain a three-channel HDR image. Existing deep-neural-network-based HDR fusion methods fuse misaligned multi-exposed inputs directly and suffer from visual artifacts, observed mostly in saturated regions, because pixel values are clipped. In contrast, the proposed method first reconstructs the missing luminance from the aligned, unsaturated focus-pixel image, so that the luma-recovered image provides cues for accurate color fusion. Experimental results show that the proposed method not only accurately restores fine details in saturated areas but also produces ghost-free, high-quality HDR images without pre-alignment.
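The core sensor property the abstract relies on can be illustrated with a toy NumPy sketch: because the L and R focus-pixel images sum to the normal pixel value, each half-image saturates later than the RGB image, so the aligned L+R sum can stand in for clipped luminance. All function names, thresholds, and the fallback rule below are illustrative assumptions, not the authors' learned sub-network.

```python
import numpy as np

def simulate_focus_pixels(scene_luma, full_well=1.0):
    """Split scene luminance into L/R focus-pixel halves and a clipped RGB-style luma.
    Each half collects roughly half the light, so it clips at twice the scene level."""
    left = np.clip(scene_luma / 2.0, 0.0, full_well)
    right = np.clip(scene_luma / 2.0, 0.0, full_well)
    rgb_luma = np.clip(scene_luma, 0.0, full_well)  # saturates earlier than L or R
    return left, right, rgb_luma

def recover_luma(left, right, rgb_luma, sat_thresh=0.99):
    """Saturation-aware fusion (a hand-crafted stand-in for the paper's
    luminance sub-network): where the RGB luma is clipped, fall back to the
    perfectly aligned, less-saturated focus-pixel sum."""
    fp_sum = left + right                  # aligned, wider effective range
    saturated = rgb_luma >= sat_thresh
    return np.where(saturated, fp_sum, rgb_luma)

# True scene luminance, including values above the RGB full-well level.
scene = np.array([0.2, 0.8, 1.5, 1.9])
L, R, luma = simulate_focus_pixels(scene)
recovered = recover_luma(L, R, luma)       # clipped entries restored from L+R
```

In this toy model the L+R sum recovers luminance up to twice the RGB saturation level, which mirrors why the (roughly one-stop-darker) focus-pixel image supplies reliable cues in regions where the overexposed LDR frame is clipped.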
Description: Date Revised 20.05.2021
Published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2021.3077137