AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation

We present a novel adversarially penalized self-knowledge distillation method, named adversarial learning and implicit regularization for self-knowledge distillation (AI-KD), which regularizes the training procedure through adversarial learning and implicit distillation. Our model not only distills deterministic and progressive knowledge, derived from the predictive probabilities of the pre-trained model and of the previous epoch, but also transfers the knowledge of the deterministic predictive distributions using adversarial learning. The motivation is that self-knowledge distillation methods regularize the predictive probabilities with soft targets, but the exact distributions may be hard to predict. The proposed method deploys a discriminator to distinguish between the distributions of the pre-trained and student models, while the student model is trained to fool the discriminator during training. Thus, the student model not only learns the pre-trained model's predictive probabilities but also aligns its distribution with that of the pre-trained model. We demonstrate the effectiveness of the proposed method with various network architectures on multiple datasets and show that it achieves better performance than existing approaches.
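For illustration, the following is a minimal PyTorch-style sketch of one training step in the spirit of the abstract: the student is supervised by ground-truth labels, distilled from the pre-trained model (deterministic knowledge) and from its own previous-epoch snapshot (progressive knowledge), and trained to fool a discriminator that separates pre-trained from student output distributions. The discriminator architecture, the loss weights alpha/beta/gamma, and the temperature T are illustrative assumptions, not the paper's exact formulation; the actual objective is given in the article (DOI below).

```python
# Hedged sketch of an AI-KD-style training step (PyTorch assumed).
# All hyperparameters and the discriminator design are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Separates pre-trained-model softmax outputs from student outputs."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),  # raw logit: "real" = pre-trained model
        )

    def forward(self, probs: torch.Tensor) -> torch.Tensor:
        return self.net(probs)

def aikd_step(student, pretrained, prev_epoch, disc, opt_s, opt_d,
              x, y, T=4.0, alpha=0.5, beta=0.5, gamma=0.1):
    bce = nn.BCEWithLogitsLoss()
    real = torch.ones(x.size(0), 1, device=x.device)
    fake = torch.zeros(x.size(0), 1, device=x.device)

    with torch.no_grad():
        p_teacher = F.softmax(pretrained(x) / T, dim=1)  # deterministic knowledge
        p_prev = F.softmax(prev_epoch(x) / T, dim=1)     # progressive knowledge

    s_logits = student(x)
    p_student = F.softmax(s_logits / T, dim=1)

    # Discriminator update: label pre-trained outputs real, student outputs fake.
    d_loss = bce(disc(p_teacher), real) + bce(disc(p_student.detach()), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Student update: task loss + two soft-target distillations + adversarial
    # term that rewards the student for being mistaken for the pre-trained model.
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1), p_teacher,
                  reduction="batchmean") * T * T
    prog = F.kl_div(F.log_softmax(s_logits / T, dim=1), p_prev,
                    reduction="batchmean") * T * T
    adv = bce(disc(p_student), real)  # fool the discriminator
    s_loss = F.cross_entropy(s_logits, y) + alpha * kd + beta * prog + gamma * adv
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    return s_loss.item(), d_loss.item()
```

Here `prev_epoch` would be a frozen copy of the student from the previous epoch, refreshed once per epoch; `opt_s` and `opt_d` update only the student and discriminator parameters, respectively.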
Publisher
ELSEVIER
Issue Date
2024-06
Language
English
Article Type
Article
Citation

KNOWLEDGE-BASED SYSTEMS, v.293

ISSN
0950-7051
DOI
10.1016/j.knosys.2024.111692
URI
http://hdl.handle.net/10203/323108
Appears in Collection
EE-Journal Papers (Journal Papers)
