Single-Image Real-Time Rain Removal Based on Depth-Guided Non-Local Features
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society, vol. 30 (2021), pp. 1759-1770
Author:
Other authors:
Format: Online article
Language: English
Published: 2021
Parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Abstract: Rain is a common weather phenomenon that affects environmental monitoring and surveillance systems. According to an established rain model (Garg and Nayar, 2007), scene visibility in the rain varies with depth from the camera: faraway objects are visually obscured more by fog than by rain streaks. However, existing datasets and methods for rain removal ignore these physical properties, which limits their effectiveness on real photos. In this work, we analyze the visual effects of rain subject to scene depth and formulate a rain imaging model that jointly considers rain streaks and fog. We also prepare a dataset, called RainCityscapes, based on real outdoor photos. Furthermore, we design a novel real-time end-to-end deep neural network, which we train to learn depth-guided non-local features and to regress a residual map that yields a rain-free output image. We performed various experiments to visually and quantitatively compare our method with several state-of-the-art methods, demonstrating its superiority over them.
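Since the abstract rests on two ideas, a depth-dependent rain imaging model and a residual-regression formulation, a minimal NumPy sketch may help make both concrete. The fog transmission t(x) = exp(-beta * d(x)) is the standard atmospheric scattering model; the function names, the depth-weighted streak term, and all parameter values here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def add_depth_dependent_rain(clean, depth, streaks, beta=0.05, airlight=0.9):
    """Synthesize a rainy/foggy image from a clean image (H, W, 3),
    a depth map (H, W), and a rain-streak layer (H, W, 3)."""
    # Fog transmission from the standard atmospheric scattering model:
    # t(x) = exp(-beta * d(x)); distant pixels (large d) receive more fog.
    t = np.exp(-beta * depth)[..., None]           # (H, W, 1) for broadcasting
    fogged = clean * t + airlight * (1.0 - t)      # attenuate scene, add airlight
    # Illustrative assumption: streak visibility also decays with depth,
    # so nearby streaks dominate while faraway regions look mostly foggy.
    return np.clip(fogged + streaks * t, 0.0, 1.0)

def remove_rain(rainy, predicted_residual):
    """Residual formulation from the abstract: the network regresses a
    residual map, and subtracting it gives the rain-free estimate."""
    return np.clip(rainy - predicted_residual, 0.0, 1.0)
```

Under these assumptions, `add_depth_dependent_rain(clean, depth, streaks)` pairs a clean photo with a synthetic rainy counterpart, which is presumably how a depth-aware dataset such as RainCityscapes pairs its training samples.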
Description: Date Revised 15.01.2021; published: Print-Electronic; Citation Status: PubMed-not-MEDLINE
ISSN: | 1941-0042 |
DOI: | 10.1109/TIP.2020.3048625 |