DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Moon, Jaekyun | - |
dc.contributor.advisor | 문재균 | - |
dc.contributor.author | Lee, Namjin | - |
dc.date.accessioned | 2023-06-26T19:34:01Z | - |
dc.date.available | 2023-06-26T19:34:01Z | - |
dc.date.issued | 2023 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032863&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/309902 | - |
dc.description | Thesis (Master's) - KAIST : School of Electrical Engineering, 2023.2, [iv, 19 p.] | - |
dc.description.abstract | Few-shot class-incremental learning (FSCIL) aims to enable a neural network to learn new information well from only a few data samples as new data is continually provided. In FSCIL, it is crucial to prevent catastrophic forgetting, the loss of previously acquired knowledge. An FSCIL method consists of two stages: pretraining and fine-tuning. In the pretraining stage, the neural network is trained with plenty of samples of the base classes. In the fine-tuning stage, the pretrained network is fine-tuned with a few samples of the novel classes. Most algorithms proposed to date focus on improving the fine-tuning stage. However, according to F2M, a model that uses class prototypes as its classifier outperforms existing algorithms even without fine-tuning. This discovery cast doubt on the effect of fine-tuning and led us to focus on the pretraining stage. Indeed, when we improve the pretraining stage, the resulting performance is superior to existing algorithms even when the model is not fine-tuned. Based on this result, we conclude that learning a good embedding in pretraining is much more effective than trying to learn new data well in fine-tuning. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Few-shot class-incremental learning; Catastrophic forgetting; Pretraining; Fine-tuning | - |
dc.subject | Incremental few-shot learning; Catastrophic forgetting; Pretraining; Fine-tuning | - |
dc.title | Enhancing incremental few-shot learning algorithm | - |
dc.title.alternative | Improving the performance of an incremental few-shot learning algorithm using a good embedding | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | KAIST : School of Electrical Engineering | - |
dc.contributor.alternativeauthor | 이남진 | - |
dc.title.subtitle | good embedding is all we need | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
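The abstract's key observation rests on using class prototypes, i.e. mean embeddings, as the classifier instead of a learned classification head. As a minimal illustrative sketch of that idea (not the thesis's actual implementation — the array shapes, function names, and distance metric here are assumptions):

```python
# Hypothetical sketch of a prototype-based classifier: each class is
# represented by the mean ("prototype") of its sample embeddings, and a
# query embedding is assigned to the nearest prototype.
import numpy as np

def build_prototypes(embeddings, labels):
    """Average the embeddings of each class into one prototype vector."""
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query, classes, prototypes):
    """Assign the query embedding to the class of the closest prototype."""
    dists = np.linalg.norm(prototypes - query, axis=1)
    return classes[np.argmin(dists)]

# Toy usage: two well-separated classes in a 2-D embedding space.
emb = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.1]])
lab = np.array([0, 0, 1, 1])
cls, protos = build_prototypes(emb, lab)
print(classify(np.array([4.9, 5.0]), cls, protos))  # → 1
```

Because prototypes are just per-class averages, adding a novel class only requires averaging its few support embeddings; no gradient update touches existing prototypes, which is why this classifier sidesteps catastrophic forgetting when the pretrained embedding is good.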