Cascaded Attention Guidance Network for Single Rainy Image Restoration


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - PP(2020), dated 23 Sept.
Main Author: Wang, Guoqing (Author)
Other Authors: Sun, Changming; Sowmya, Arcot
Format: Online Article
Language: English
Published: 2020
Access to parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Summary: Restoring a rainy image with raindrops or rain streaks of varying scales, directions, and densities is an extremely challenging task. Recent approaches attempt to leverage the rain distribution (e.g., location) as a prior to generate satisfactory results. However, concatenating a single distribution map with the rainy image or with intermediate feature maps is too simplistic to fully exploit the advantages of such priors. To further explore this valuable information, an advanced cascaded attention guidance network, dubbed CAG-Net, is formulated and designed as a three-stage model. In the first stage, a multitask learning network is constructed to produce the attention map and coarse de-raining results simultaneously. Subsequently, the coarse results and the rain distribution map are concatenated and fed to the second stage for result refinement. In this stage, the attention map generation network from the first stage is used to formulate a novel semantic consistency loss for better detail recovery. In the third stage, a novel pyramidal "where-and-how" learning mechanism is formulated. At each pyramid level, a two-branch network is designed to take the features from previous stages as inputs and generate better attention-guidance features and de-raining features, which are then combined via a gating scheme to produce the final de-raining results. Moreover, uncertainty maps are also generated in this stage for more accurate pixel-wise loss calculation. Extensive experiments are carried out for removing raindrops or rain streaks from both synthetic and real rainy images, and CAG-Net is demonstrated to produce significantly better results than state-of-the-art models. Code will be publicly available after paper acceptance.
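The abstract's third stage combines "where" (attention-guidance) and "how" (de-raining) features through a gating scheme and weights the pixel-wise loss with predicted uncertainty maps. The snippet below is a minimal, hypothetical sketch of these two ideas, not the authors' released code; all module names, layer sizes, and the particular uncertainty-weighted L1 formulation are illustrative assumptions.

```python
# Hypothetical sketch of one pyramid level of a gated two-branch block and an
# uncertainty-weighted pixel-wise loss, as loosely described in the abstract.
import torch
import torch.nn as nn


class GatedTwoBranchBlock(nn.Module):
    """Fuse attention-guidance ("where") and de-raining ("how") features."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.attn_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.derain_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        # Gate predicts per-pixel, per-channel mixing weights in [0, 1].
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        a = self.attn_branch(feats)    # attention-guidance features
        d = self.derain_branch(feats)  # de-raining features
        g = self.gate(torch.cat([a, d], dim=1))
        return g * a + (1.0 - g) * d   # gated combination


def uncertainty_weighted_l1(pred, target, log_var):
    """Pixel-wise L1 loss attenuated by a predicted log-variance map."""
    return (torch.exp(-log_var) * (pred - target).abs() + log_var).mean()


if __name__ == "__main__":
    block = GatedTwoBranchBlock(64)
    feats = torch.randn(2, 64, 32, 32)
    out = block(feats)
    loss = uncertainty_weighted_l1(out, torch.randn_like(out),
                                   torch.zeros_like(out))
    print(out.shape, loss.item())
```

The gating design lets the network decide, per pixel, how much to trust the attention-guidance branch versus the de-raining branch, while the log-variance term down-weights pixels where the prediction is uncertain.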
Description: Date Revised 22.02.2024
Published: Print-Electronic
Citation Status: Publisher
ISSN:1941-0042
DOI:10.1109/TIP.2020.3023773