Self-Parameter Distillation Dehazing

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 32 (2023), dated: 06., pages 631-642
First Author: Kim, Guisik (Author)
Other Authors: Kwon, Junseok
Format: Online Article
Language: English
Published: 2023
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Description
Abstract: In this paper, we propose a novel dehazing method based on self-distillation. In contrast to conventional knowledge distillation approaches that transfer knowledge from large models (teacher networks) to small models (student networks), we introduce a single knowledge distillation network that transfers network parameters to itself for dehazing. In the early stages, the proposed network transfers scene content (identity) information to the next stage of itself using haze-free data. In the later stages, the network transfers haze information to itself using hazy data, enabling accurate dehazing of input images with the scene information learned in the early stages. Within a single network, parameters are seamlessly updated from extracting global scene features to dehazing the scene. During training, forward propagation acts as the teacher network, whereas backward propagation acts as the student network. The experimental results demonstrate that the proposed method considerably outperforms other state-of-the-art dehazing methods.
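The abstract describes a two-phase, single-network training schedule: early stages learn scene content (identity) from haze-free images, and later stages learn dehazing from hazy images while staying close to what the earlier stage learned, with the forward pass playing the teacher role. The following PyTorch-style sketch illustrates that schedule only in broad strokes; the toy network SmallDehazeNet, the loader interfaces, the epoch counts, and the feature-distillation loss weight are all illustrative assumptions and not the authors' implementation.

import copy
import torch
import torch.nn as nn

class SmallDehazeNet(nn.Module):
    # Toy encoder-decoder standing in for the paper's dehazing network (assumption).
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(16, 3, 3, padding=1)

    def forward(self, x):
        feats = self.encoder(x)
        return self.decoder(feats), feats

def train_self_distillation(net, clear_loader, hazy_loader,
                            id_epochs=10, dehaze_epochs=30, distill_weight=0.1):
    # Two-phase schedule sketched from the abstract: identity (scene content) first,
    # then dehazing with a distillation term toward the earlier-stage network.
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    l1 = nn.L1Loss()

    # Early stages: transfer scene content (identity) using haze-free data.
    for _ in range(id_epochs):
        for clear in clear_loader:
            recon, _ = net(clear)
            loss = l1(recon, clear)
            opt.zero_grad(); loss.backward(); opt.step()

    # Freeze a copy of the early-stage network; its forward pass acts as the teacher.
    teacher = copy.deepcopy(net).eval()
    for p in teacher.parameters():
        p.requires_grad_(False)

    # Later stages: learn dehazing from hazy/clear pairs while matching the
    # teacher's scene features (a stand-in for the parameter self-transfer).
    for _ in range(dehaze_epochs):
        for hazy, clear in hazy_loader:
            pred, feats = net(hazy)
            with torch.no_grad():
                _, teacher_feats = teacher(clear)
            loss = l1(pred, clear) + distill_weight * l1(feats, teacher_feats)
            opt.zero_grad(); loss.backward(); opt.step()
    return net

In this reading, clear_loader yields haze-free images and hazy_loader yields (hazy, clear) pairs; the feature-matching term is only one plausible way to realize the "network transfers parameters to itself" idea stated in the abstract.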
Description: Date Revised 04.04.2025
published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2022.3231122