Video Diffusion Posterior Sampling for Seeing Beyond Dynamic Scattering Layers


Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - PP(2025), 13 Aug.
First Author: Kwon, Taesung (Author)
Other Authors: Song, Gookho, Kim, Yoosun, Kim, Jeongsol, Ye, Jong Chul, Jang, Mooseok
Format: Online Article
Language: English
Published: 2025
Parent Work: IEEE transactions on pattern analysis and machine intelligence
Keywords: Journal Article
Description
Abstract: Imaging through scattering is challenging, as even a thin layer can randomly perturb light propagation and obscure hidden objects. Accurate closed-form modeling of forward scattering remains difficult, particularly for dynamically varying or thick layers. Here, we introduce a plug-and-play inverse solver based on video diffusion models with a physically grounded forward model tailored to dynamic scattering layers. Our method extends Diffusion Posterior Sampling (DPS) to the spatio-temporal domain, thereby capturing statistical correlations between video frames and scattered signals more effectively. Leveraging these temporal correlations, our approach recovers high-resolution spatial details that spatial-only methods typically fail to reconstruct. We also propose an inference-time optimization with a lightweight mapping network, enabling joint estimation of low-dimensional forward-model parameters without additional training. This joint optimization significantly enhances adaptability to unknown, time-varying degradations, making our method suitable for blind inverse scattering problems. We validate our method across diverse conditions, including different scene types, layer thicknesses, and scene-layer distances, and real-world experiments using multiple datasets confirm its robustness and effectiveness even under real noise and forward-model approximation mismatches. Finally, we validate our method as a general video-restoration framework across dehazing, deblurring, inpainting, and blind restoration under complex optical aberrations. Our implementation is available at: https://github.com/star-kwon/VDPS
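As a rough illustration of the procedure the abstract describes, the sketch below shows one DPS-style reverse-diffusion step applied to a video tensor, with a joint gradient update on low-dimensional forward-model parameters. This is a minimal PyTorch sketch under assumed interfaces, not the authors' implementation (see the linked repository for that): the denoiser eps_net, the scattering forward model scatter_forward, the parameter vector theta (standing in for the output of the lightweight mapping network), and the step sizes zeta and eta_theta are illustrative placeholders.

import torch

def dps_video_step(x_t, t, y, eps_net, scatter_forward, theta,
                   alpha_bar_t, alpha_bar_prev, zeta=1.0, eta_theta=1e-2):
    """One reverse-diffusion step on a video tensor x_t of shape (B, T, C, H, W):
    a DDIM-style update guided by the measurement gradient (DPS), plus a
    gradient step on the low-dimensional forward-model parameters theta."""
    x_t = x_t.detach().requires_grad_(True)

    # Predict noise and form the clean-video estimate x0_hat (Tweedie estimate).
    eps = eps_net(x_t, t)
    x0_hat = (x_t - (1 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()

    # Data-consistency loss: simulated scattered measurement vs. observation y.
    residual = y - scatter_forward(x0_hat, theta)
    loss = residual.pow(2).sum()
    grad_x, grad_theta = torch.autograd.grad(loss, (x_t, theta))

    # Unconditional DDIM update toward the previous noise level ...
    x_prev = alpha_bar_prev.sqrt() * x0_hat + (1 - alpha_bar_prev).sqrt() * eps
    # ... corrected by the measurement gradient (the DPS guidance step).
    x_prev = x_prev - zeta * grad_x

    # Inference-time update of the forward-model parameters (blind setting).
    theta = (theta - eta_theta * grad_theta).detach().requires_grad_(True)
    return x_prev.detach(), theta

# Toy usage with stand-in components (not the paper's networks):
B, T, C, H, W = 1, 8, 1, 32, 32
x_t = torch.randn(B, T, C, H, W)
y = torch.randn(B, T, C, H, W)
theta = torch.zeros(4, requires_grad=True)        # hypothetical low-dim scattering parameters
eps_net = lambda x, t: torch.zeros_like(x)        # placeholder denoiser
scatter_forward = lambda x, th: x + th.mean()     # placeholder forward model
x_prev, theta = dps_video_step(x_t, torch.tensor(500), y, eps_net, scatter_forward,
                               theta, alpha_bar_t=torch.tensor(0.5),
                               alpha_bar_prev=torch.tensor(0.6))

Because the data-consistency gradient is taken over the whole video tensor at once, temporal correlations captured by the video diffusion prior inform every frame of the reconstruction, which is the key difference from running DPS frame by frame.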
Description: Date Revised 13.08.2025
Published: Print-Electronic
Citation Status: Publisher
ISSN: 1939-3539
DOI: 10.1109/TPAMI.2025.3598457