Reversible and mergeable learning for continual learning of artificial neural networks

Deep learning with artificial neural networks is one of the most powerful AI technologies, achieving state-of-the-art performance in many fields. To reduce network training time and improve performance, transfer learning is commonly used. Transfer learning achieves good performance on the target task, but performance on the source task degrades due to catastrophic forgetting. We propose a novel continual learning method called Reversible and Mergeable Learning (RML). RML keeps all of the original parameters intact, making the additional training on the target task reversible. After training, the original network and the newly learned network can be merged into a single network of the same size as the original. A network trained with RML outperforms fine-tuning and, surprisingly, can even outperform the original network on the source task, without seeing any source data, when the source and target tasks are similar. RML can also be used for data-incremental learning.
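The abstract describes the mechanism only at a high level. As a minimal illustrative sketch (not the thesis's actual formulation), one way to realize a "reversible and mergeable" layer is to freeze the source weights and train an additive residual: the source network is then recoverable at any time (reversible), and the residual can be folded back into a single weight matrix of the original size (mergeable). The class and method names below (ReversibleMergeableLinear, revert, merge) are hypothetical.

```python
# Illustrative sketch only: the abstract does not give the exact RML
# mechanism, so this shows ONE plausible reading of "reversible and
# mergeable" for a single linear layer. All names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReversibleMergeableLinear(nn.Module):
    """Freezes the source-task weights and trains an additive residual:
    the source network stays intact (reversible), and the residual can be
    folded into one weight matrix of the original size (mergeable)."""

    def __init__(self, src_linear: nn.Linear):
        super().__init__()
        # Frozen copies of the original (source-task) parameters.
        # Assumes src_linear was created with bias=True.
        self.weight_src = nn.Parameter(src_linear.weight.detach().clone(),
                                       requires_grad=False)
        self.bias_src = nn.Parameter(src_linear.bias.detach().clone(),
                                     requires_grad=False)
        # Trainable residuals, zero-initialized so the layer starts out
        # exactly equal to the source layer.
        self.delta_w = nn.Parameter(torch.zeros_like(self.weight_src))
        self.delta_b = nn.Parameter(torch.zeros_like(self.bias_src))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.weight_src + self.delta_w,
                        self.bias_src + self.delta_b)

    def revert(self) -> None:
        # Reversible: zeroing the residuals recovers the source network.
        with torch.no_grad():
            self.delta_w.zero_()
            self.delta_b.zero_()

    def merge(self) -> nn.Linear:
        # Mergeable: collapse back to a plain layer of the original size.
        out_features, in_features = self.weight_src.shape
        merged = nn.Linear(in_features, out_features)
        with torch.no_grad():
            merged.weight.copy_(self.weight_src + self.delta_w)
            merged.bias.copy_(self.bias_src + self.delta_b)
        return merged
```

Under this reading, one would wrap each pretrained layer, train only the residuals on the target task, and call merge() afterwards; because the residuals start at zero, the initial target-task behavior equals the source network's.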
Advisors
Kim, Jun Mo (김준모)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2017
Identifier
325007
Language
eng
Description
Thesis (Master's) - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2017.8, [iii, 14 p.]

Keywords

Continual Learning; Multitask Learning; Data Incremental Learning; Artificial Neural Network; Deep Learning

URI
http://hdl.handle.net/10203/243370
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=718703&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
