DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Jong-Hwan | - |
dc.contributor.advisor | 김종환 | - |
dc.contributor.author | Yoo, Sahng-Min | - |
dc.date.accessioned | 2023-06-23T19:33:36Z | - |
dc.date.available | 2023-06-23T19:33:36Z | - |
dc.date.issued | 2022 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1007870&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/309082 | - |
dc.description | 학위논문(박사) - 한국과학기술원 : 전기및전자공학부, 2022.8,[v, 53 p. :] | - |
dc.description.abstract | With the development of deep learning, Artificial Intelligence (AI) has come to solve a diverse range of tasks with capabilities beyond those of humans, including computer vision, natural language processing, and robotics. However, because each task is trained under the common assumption that the training and test data share the same distribution, a neural network trained for one task cannot perform others. In the real world, AI must respond to multiple tasks or data domains that appear simultaneously or sequentially. This thesis defines three cases in which a single neural network can continuously apply its knowledge while training on multiple tasks, and presents knowledge-transfer and training-strategy solutions for each: 1) transfer learning, to address the lack of data for a single target task; 2) domain generalization, which can handle multiple target domains with a small domain gap without additional training; and 3) continual learning, which sequentially trains on multiple target domains with a large domain gap while minimizing catastrophic forgetting. We perform transfer learning on the invisible mobile keyboard decoding task, study domain generalization on the face swapping task, and, for the first time, apply continual learning to the unsupervised domain adaptation task. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Knowledge transfer; Transfer learning; Domain generalization; Continual learning; Lifelong learning | - |
dc.subject | 지식 전이; 전이 학습; 도메인 일반화; 연속 학습; 평생 학습 | - |
dc.title | Knowledge transfer and training strategies to train multiple tasks with a lifelong learning network | - |
dc.title.alternative | 평생학습 신경망을 이용한 다중 테스크 학습을 위한 지식 전이와 학습 전략 | - |
dc.type | Thesis(Ph.D) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 전기및전자공학부 | - |
dc.contributor.alternativeauthor | 유상민 | - |