Feature-based knowledge distillation for image recognition

The performance of models for visual recognition tasks, such as image classification and object detection, has improved markedly with the development of deep learning. However, training a deep learning-based model requires a model with many parameters and a large amount of data. This thesis examines knowledge distillation methods. First, we propose an attention-based meta-network that models the relationships between the teacher model layers and the student model layers in order to distill knowledge effectively. We show empirically that the proposed method is more effective than heuristically designating the layer links between teacher and student. Second, for self-knowledge distillation, in which the student model distills knowledge by itself without a teacher model, we propose a novel method that exploits spatial information. To this end, we introduce an auxiliary network for training, built by adapting an architecture used in existing object detection models.
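To make the first idea concrete, the following is a minimal, illustrative sketch of attention-weighted feature distillation. It is not the thesis's actual method: in the thesis the weights over teacher-student layer pairs come from a learned meta-network, whereas here they are approximated by a softmax over negative feature distances, so that closer teacher layers contribute more to each student layer's loss. All function names and the toy feature vectors are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def attention_weighted_distillation_loss(student_feats, teacher_feats):
    """For each student layer, weight its distance to every teacher layer
    by a softmax over negative distances (a stand-in for the learned
    attention in the thesis), then accumulate the weighted distances."""
    total = 0.0
    for s in student_feats:
        dists = [mse(s, t) for t in teacher_feats]
        # Closer teacher layers receive larger attention weights.
        weights = softmax([-d for d in dists])
        total += sum(w * d for w, d in zip(weights, dists))
    return total
```

In this toy form, a student feature that matches some teacher layer incurs a small loss, while a student feature far from every teacher layer incurs a large one; replacing the distance-based softmax with a trained attention module recovers the spirit of the meta-network approach described above.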
Advisors
Moon, Il-Chul (문일철)
Description
KAIST: Department of Industrial and Systems Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2022
Identifier
325007
Language
eng
Description

Doctoral thesis - Korea Advanced Institute of Science and Technology (KAIST): Department of Industrial and Systems Engineering, 2022.8, [vi, 55 p.]

Keywords

Transfer learning; Knowledge distillation; Image classification

URI
http://hdl.handle.net/10203/308394
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1007809&flag=dissertation
Appears in Collection
IE-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
