Explaining How Deep Neural Networks Forget by Deep Visualization

Explaining the behavior of deep neural networks, usually considered black boxes, is critical, especially now that they are being adopted across diverse aspects of human life. Taking advantage of interpretable machine learning (interpretable ML), this paper proposes a novel tool called Catastrophic Forgetting Dissector (CFD) to explain catastrophic forgetting in continual learning settings. We also introduce a new method called Critical Freezing based on the observations made with this tool. Experiments on ResNet-50 articulate how catastrophic forgetting happens, showing in particular which components of this well-known network are forgetting. Our new continual learning algorithm outperforms various recent techniques by a significant margin, demonstrating the value of the investigation. Critical Freezing not only mitigates catastrophic forgetting but also provides explainability. © 2021, Springer Nature Switzerland AG.
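
The abstract describes Critical Freezing as freezing the network components identified as most responsible for forgetting before training on a new task. A minimal sketch of this kind of selective layer freezing in PyTorch is shown below; it is an illustration under assumed names, not the authors' implementation, and the choice of blocks ("layer1", "layer2") is a hypothetical stand-in for whatever a CFD-like analysis would identify.

```python
# Sketch of selective block freezing on ResNet-50 (assumed PyTorch/torchvision API).
# The set of "critical" blocks below is hypothetical, standing in for the blocks
# that a dissection tool such as CFD would flag as important to preserve.
import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V1")

critical_blocks = {"layer1", "layer2"}  # hypothetical output of the analysis

for name, param in model.named_parameters():
    # Freeze parameters belonging to a critical block; leave the rest trainable.
    if name.split(".")[0] in critical_blocks:
        param.requires_grad = False

# Only the unfrozen parameters are updated when fine-tuning on the new task.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
```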
Publisher
Springer Science and Business Media Deutschland GmbH
Issue Date
2021-01
Language
English
Citation
25th International Conference on Pattern Recognition Workshops, ICPR 2020, pp. 162-173
ISSN
0302-9743
DOI
10.1007/978-3-030-68796-0_12
URI
http://hdl.handle.net/10203/288813
Appears in Collection
CS-Conference Papers (Conference Papers)