DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Hoirin | - |
dc.contributor.advisor | 김회린 | - |
dc.contributor.author | Lee, Yeonghyeon | - |
dc.date.accessioned | 2023-06-26T19:33:54Z | - |
dc.date.available | 2023-06-26T19:33:54Z | - |
dc.date.issued | 2023 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032916&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/309881 | - |
dc.description | Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2023.2, [iv, 24 p.] | - |
dc.description.abstract | The success of self-supervised learning in the speech domain has led to the development of large-scale self-supervised models. Deploying such large models in practice, however, can be costly, limiting their use especially in resource-constrained settings. We therefore propose FitHuBERT, a model compression method that uses a thinner and deeper architecture across almost all model components than prior work. In addition, we propose hint-based knowledge distillation to improve performance and time-reduction layers to increase efficiency. Evaluation on the SUPERB benchmark shows that our model outperforms previous work, especially on content-related tasks, while having fewer parameters and faster inference. | - |
dc.language | eng | - |
dc.publisher | Korea Advanced Institute of Science and Technology (KAIST) | - |
dc.subject | Self-supervised Learning; Knowledge Distillation; Representation Learning | - |
dc.subject | 자기지도학습; 지식 증류; 표현 학습 | - |
dc.title | (A) study on effective knowledge distillation methods for compressing large-scale speech self-supervised learning models | - |
dc.title.alternative | 음성 자기지도학습 모델 압축을 위한 효과적인 지식 증류기법에 관한 연구 | - |
dc.type | Thesis (Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering | - |
dc.contributor.alternativeauthor | 이영현 | - |
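The abstract's "knowledge distillation with hints" refers to matching intermediate student representations to selected teacher layers in addition to the final output. Below is a minimal NumPy sketch of such a hint loss; the layer counts, the student-to-teacher layer mapping, and the plain L2 objective are illustrative assumptions, not the thesis's exact recipe.

```python
import numpy as np

def hint_distillation_loss(teacher_feats, student_feats, hint_pairs):
    """Hint-based distillation loss (FitNets-style sketch).

    teacher_feats / student_feats: lists of per-layer feature arrays,
    each of shape (time, hidden). hint_pairs maps a student layer index
    to the teacher layer it should mimic. Returns the mean L2 (MSE)
    over all hinted layer pairs.
    """
    losses = []
    for s_idx, t_idx in hint_pairs.items():
        diff = student_feats[s_idx] - teacher_feats[t_idx]
        losses.append(np.mean(diff ** 2))
    return float(np.mean(losses))

# Toy setup (assumed sizes): a 12-layer teacher and a 4-layer student,
# hidden size 8, 10 time frames of random features.
rng = np.random.default_rng(0)
teacher = [rng.normal(size=(10, 8)) for _ in range(12)]
student = [rng.normal(size=(10, 8)) for _ in range(4)]

# Hypothetical mapping: student layer 1 mimics teacher layer 4,
# student layer 3 mimics the final teacher layer.
loss = hint_distillation_loss(teacher, student, {1: 4, 3: 11})
```

In training, this hint term would typically be added to the main distillation objective with a weighting coefficient, so earlier student layers are guided toward intermediate teacher representations rather than only the final output.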
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.