A Study on Effective Knowledge Distillation Methods for Compressing Large-Scale Speech Self-Supervised Learning Models

The success of self-supervised learning in the speech domain has led to the development of large-scale self-supervised models. However, deploying such large models in practice can be costly, which limits their use, especially in resource-constrained settings. We therefore propose FitHuBERT, a model compression method that uses a thinner and deeper architecture across almost all model components compared to prior work. In addition, we propose knowledge distillation with hints to improve performance and time-reduction layers to increase efficiency. Evaluation on the SUPERB benchmark shows that our model outperforms previous work, especially on content-related tasks, while having fewer parameters and faster inference.
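To make the two ideas in the abstract concrete, the sketch below illustrates, in PyTorch, a hint-based distillation loss on intermediate layers and a time-reduction layer that shortens the student's sequence. This is not the thesis code; all module names, dimensions, and loss terms are illustrative assumptions.

# Minimal sketch (assumed design, not the thesis implementation) of
# hint-based distillation and a time-reduction layer for a thin, deep student.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TimeReduction(nn.Module):
    """Halves the time axis by concatenating adjacent frames and projecting back."""
    def __init__(self, dim: int, stride: int = 2):
        super().__init__()
        self.stride = stride
        self.proj = nn.Linear(dim * stride, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, time, dim)
        b, t, d = x.shape
        t = t - t % self.stride                  # drop trailing frames that don't fit
        x = x[:, :t].reshape(b, t // self.stride, d * self.stride)
        return self.proj(x)


def hint_loss(student_h: torch.Tensor, teacher_h: torch.Tensor,
              regressor: nn.Module) -> torch.Tensor:
    """L1 + cosine distance between a projected student layer and a teacher layer."""
    s = regressor(student_h)                     # map thin student dim -> teacher dim
    if s.size(1) != teacher_h.size(1):           # re-stretch after time reduction
        s = F.interpolate(s.transpose(1, 2), size=teacher_h.size(1),
                          mode="nearest").transpose(1, 2)
    l1 = F.l1_loss(s, teacher_h)
    cos = 1.0 - F.cosine_similarity(s, teacher_h, dim=-1).mean()
    return l1 + cos


# Toy usage with random tensors standing in for teacher/student hidden states.
if __name__ == "__main__":
    batch, frames, d_student, d_teacher = 2, 100, 480, 768
    student_hidden = torch.randn(batch, frames, d_student)
    teacher_hidden = torch.randn(batch, frames, d_teacher)

    reduce = TimeReduction(d_student)            # shortens the student's sequence
    regress = nn.Linear(d_student, d_teacher)    # hint regressor for a distilled layer

    reduced = reduce(student_hidden)             # (2, 50, 480)
    loss = hint_loss(reduced, teacher_hidden, regress)
    print(reduced.shape, loss.item())

The time-reduction layer trades a small projection for a shorter sequence, which is where the inference speedup would come from; the hint regressor lets a narrower student match a wider teacher's hidden states layer by layer.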
Advisors
Kim, Hoirin (김회린)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2023.2, [iv, 24 p.]

Keywords

Self-supervised Learning; Knowledge Distillation; Representation Learning

URI
http://hdl.handle.net/10203/309881
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032916&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
