Overcoming catastrophic forgetting by deep visualization

Explaining the behavior of deep neural networks, usually treated as black boxes, is critical now that they are being adopted across diverse aspects of human life. Taking advantage of interpretable machine learning (interpretable ML), this work proposes a novel tool called Catastrophic Forgetting Dissector (CFD) to explain catastrophic forgetting in continual learning settings. We also introduce a new method, Critical Freezing, based on the observations of our tool. Experiments on ResNet articulate how catastrophic forgetting happens, in particular showing which components of this well-known network are forgetting. Our new continual learning algorithm outperforms various recent techniques by a significant margin, demonstrating the value of the investigation. Critical Freezing not only mitigates catastrophic forgetting but also provides explainability.
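The abstract describes Critical Freezing as freezing the network components that the dissector identifies as critical, so that training on a new task cannot overwrite them. A minimal framework-free sketch of that idea is below; the block names, criticality choice, and learning rate are hypothetical placeholders, not the thesis's actual procedure.

```python
def critical_freezing(params, critical_blocks, grads, lr=0.1):
    """One SGD step that skips parameters belonging to frozen (critical) blocks.

    params: dict mapping "block.param" names to scalar weights (toy model).
    critical_blocks: set of block names flagged as critical for the old task.
    grads: gradients for the new task, keyed like params.
    """
    updated = {}
    for name, value in params.items():
        block = name.split(".")[0]
        if block in critical_blocks:
            updated[name] = value              # frozen: preserve old-task knowledge
        else:
            updated[name] = value - lr * grads[name]  # normal update
    return updated


# Toy "ResNet": two blocks, one weight each (values are illustrative).
params = {"conv1.w": 1.0, "layer4.w": 2.0}
grads = {"conv1.w": 0.5, "layer4.w": 0.5}

# Suppose the dissector flagged conv1 as critical for the old task.
new_params = critical_freezing(params, {"conv1"}, grads)
print(new_params)  # conv1.w stays 1.0; layer4.w becomes 1.95
```

In a real deep learning framework, the same effect is typically achieved by disabling gradient tracking for the frozen blocks' parameters before training on the new task.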
Advisors
Kim, Daeyoung
Description
Korea Advanced Institute of Science and Technology (KAIST): School of Computing
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Computing, 2020.8, [iii, 30 p.]

Keywords

Image captioning; Interpretable ML; Continual learning; Catastrophic forgetting; Deep visualization

URI
http://hdl.handle.net/10203/285006
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925167&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
