Self-Parameter Distillation Dehazing

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 32, pp. 631-642
Main Authors: Kim, Guisik; Kwon, Junseok
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023

Summary: In this paper, we propose a novel dehazing method based on self-distillation. In contrast to conventional knowledge distillation approaches that transfer knowledge from large models (teacher networks) to small models (student networks), we introduce a single knowledge distillation network that transfers network parameters to itself for dehazing. In the early stages, the proposed network transfers scene content (identity) information to the next stage of itself using haze-free data. In the later stages, however, the network transfers haze information to itself using hazy data, enabling accurate dehazing of input images using the scene information from the early stages. In a single network, parameters are seamlessly updated from extracting global scene features to dehazing the scene. During training, forward propagation acts as a teacher network, whereas backward propagation acts as a student network. The experimental results demonstrate that the proposed method considerably outperforms other state-of-the-art dehazing methods.
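
The summary outlines a two-stage scheme: an identity stage trained on haze-free data, then a dehazing stage that reuses the earlier stage's scene knowledge, with the forward pass playing teacher to the backward pass. As a rough illustration of that general idea only, the PyTorch sketch below trains a toy network on haze-free images first, then distills a frozen snapshot of the early-stage features into the dehazing stage. The architecture, the losses, the distillation weight, and the stage split are all assumptions made for this sketch; the record does not describe the authors' actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical encoder-decoder; a stand-in, not the paper's architecture.
class DehazeNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        feat = self.encoder(x)
        return self.decoder(feat), feat

net = DehazeNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

# Stage 1 (assumed): learn scene content (identity) from haze-free images.
for _ in range(100):
    clear = torch.rand(4, 3, 64, 64)        # dummy haze-free batch
    out, _ = net(clear)
    loss = F.l1_loss(out, clear)            # identity reconstruction
    opt.zero_grad(); loss.backward(); opt.step()

# Snapshot the early-stage parameters; the forward pass through this
# frozen copy plays the "teacher" role in the later stage.
teacher = DehazeNet()
teacher.load_state_dict(net.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)

# Stage 2 (assumed): learn dehazing while distilling stage-1 scene features.
for _ in range(100):
    hazy = torch.rand(4, 3, 64, 64)         # dummy hazy batch
    clear = torch.rand(4, 3, 64, 64)        # dummy ground-truth batch
    out, feat = net(hazy)
    with torch.no_grad():
        _, t_feat = teacher(clear)          # scene features from stage 1
    loss = F.l1_loss(out, clear) + 0.1 * F.mse_loss(feat, t_feat)
    opt.zero_grad(); loss.backward(); opt.step()
```

The frozen snapshot here stands in for the teacher role of early-stage forward propagation; in the authors' single-network formulation the transfer happens within one set of parameters rather than through an explicit copy.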
ISSN: 1057-7149
EISSN: 1941-0042
DOI: 10.1109/TIP.2022.3231122