Adaptive Explicit Knowledge Transfer for Knowledge Distillation
Format: Journal Article
Language: English
Published: 03.09.2024
Summary: Logit-based knowledge distillation (KD) for classification is cost-efficient compared to feature-based KD but often suffers from inferior performance. Recently, it was shown that the performance of logit-based KD can be improved by effectively delivering the probability distribution over the non-target classes from the teacher model, known as "implicit (dark) knowledge", to the student model. Through gradient analysis, we first show that this in fact adaptively controls the learning of implicit knowledge. We then propose a new loss that enables the student to learn explicit knowledge (i.e., the teacher's confidence about the target class) along with implicit knowledge in an adaptive manner. Furthermore, we propose to separate the classification and distillation tasks for effective distillation and inter-class relationship modeling. Experimental results demonstrate that the proposed method, called adaptive explicit knowledge transfer (AEKT), achieves improved performance compared to state-of-the-art KD methods on the CIFAR-100 and ImageNet datasets.
DOI: 10.48550/arxiv.2409.01679
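
The abstract does not give the exact AEKT loss. As a minimal sketch, assuming PyTorch, the following illustrates the generic target/non-target decomposition of a logit-based KD loss that the abstract builds on: a "target-class" (explicit) term on the teacher's target-class confidence and a "non-target" (implicit) term on the distribution over the remaining classes. The function name `decoupled_logit_kd_loss`, the temperature `T`, and the weights `alpha`/`beta` are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def decoupled_logit_kd_loss(student_logits, teacher_logits, target,
                            T=4.0, alpha=1.0, beta=1.0):
    """Illustrative decoupled logit-based KD loss (not the exact AEKT loss)."""
    num_classes = student_logits.size(1)
    # Boolean mask marking each sample's ground-truth (target) class.
    target_mask = F.one_hot(target, num_classes).bool()

    # Softened class probabilities at temperature T.
    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)

    # "Explicit" knowledge: binary distribution (target vs. all non-target classes).
    pt_s = p_s[target_mask].clamp(1e-8, 1.0 - 1e-8)  # student confidence on the target class
    pt_t = p_t[target_mask].clamp(1e-8, 1.0 - 1e-8)  # teacher confidence on the target class
    b_s = torch.stack([pt_s, 1.0 - pt_s], dim=1)
    b_t = torch.stack([pt_t, 1.0 - pt_t], dim=1)
    explicit_kd = F.kl_div(b_s.log(), b_t, reduction="batchmean") * (T ** 2)

    # "Implicit" (dark) knowledge: distribution over the non-target classes only,
    # obtained by masking out the target logit before the softmax.
    nt_s = student_logits.masked_fill(target_mask, -1e9)
    nt_t = teacher_logits.masked_fill(target_mask, -1e9)
    log_q_s = F.log_softmax(nt_s / T, dim=1)
    q_t = F.softmax(nt_t / T, dim=1)
    implicit_kd = F.kl_div(log_q_s, q_t, reduction="batchmean") * (T ** 2)

    # Fixed weights here; the abstract indicates AEKT instead balances the two
    # terms adaptively, which this sketch does not attempt to reproduce.
    return alpha * explicit_kd + beta * implicit_kd
```

In practice such a distillation term is added to the usual cross-entropy loss on the student's own logits; the abstract further proposes separating the classification and distillation tasks, which is not reflected in this sketch.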