DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Junmo | - |
dc.contributor.advisor | 김준모 | - |
dc.contributor.author | Shon, Hyounguk | - |
dc.date.accessioned | 2023-06-26T19:33:49Z | - |
dc.date.available | 2023-06-26T19:33:49Z | - |
dc.date.issued | 2022 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=997205&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/309865 | - |
dc.description | Master's thesis (학위논문(석사)) - 한국과학기술원 (KAIST) : 전기및전자공학부 (School of Electrical Engineering), 2022.2, [iii, 21 p.] | - |
dc.description.abstract | We propose a continual learning algorithm that effectively mitigates the catastrophic forgetting that occurs when a deep neural network is trained on multiple tasks sequentially. Our method leverages pre-trained neural networks for effective continual learning. Based on the observation that quadratic parameter regularization achieves the optimal continual learning policy for linear models, our algorithm $\textit{linearizes}$ the neural network and applies a quadratic penalty to the parameters by estimating the Fisher information matrix. We show that the proposed method prevents forgetting while achieving high performance on image classification tasks. Our method applies to both data-incremental and task-incremental learning problems. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.title | Continual learning with linearized deep neural networks | - |
dc.title.alternative | 선형화된 심층 신경망을 이용한 지속 학습 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 전기및전자공학부 | - |
dc.contributor.alternativeauthor | 손형욱 | - |
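The abstract describes an EWC-style quadratic parameter penalty weighted by an estimate of the Fisher information matrix, applied around the parameters learned on earlier tasks. A minimal sketch of that penalty, assuming a diagonal Fisher approximation estimated from squared per-sample gradients; the names (`diagonal_fisher`, `penalized_loss`, `w_star`) are illustrative, not the thesis's actual implementation:

```python
import numpy as np

def diagonal_fisher(per_sample_grads):
    """Diagonal Fisher estimate: mean of squared per-sample gradients."""
    return np.mean(np.square(per_sample_grads), axis=0)

def penalized_loss(w, w_star, fisher, task_loss, lam=1.0):
    """Task loss plus the quadratic penalty (lam/2) * sum_i F_i (w_i - w*_i)^2,
    which anchors the weights to w_star learned on previous tasks."""
    penalty = 0.5 * lam * np.sum(fisher * (w - w_star) ** 2)
    return task_loss(w) + penalty

# Toy usage on a linear model y = X @ w with squared-error task loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0])

w_star = np.zeros(4)                          # weights after the previous task
per_sample_grads = rng.normal(size=(32, 4))   # stand-in for per-sample gradients
fisher = diagonal_fisher(per_sample_grads)

loss = penalized_loss(w_star.copy(), w_star, fisher,
                      task_loss=lambda w: np.mean((X @ w - y) ** 2))
```

At `w = w_star` the penalty vanishes and the objective reduces to the new task's loss; as the weights drift, directions with large estimated Fisher values (those the old tasks were sensitive to) are penalized most.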