Interpreting and Explaining Deep Neural Networks: A Perspective on Time Series Data

Explainable and interpretable machine learning models and algorithms are important topics that have received growing attention from researchers, practitioners, and regulators. Many complex Deep Neural Networks (DNNs) are often perceived as black boxes. Researchers would like to be able to interpret what a DNN has learned in order to identify biases and failure modes and to improve models. In this tutorial, we provide a comprehensive overview of methods for analyzing deep neural networks and insight into how these interpretability and explainability methods help us understand time series data.
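One common family of model-agnostic explanation methods the tutorial's topic covers is perturbation-based relevance: occlude parts of the input and measure how much the model's score drops. The sketch below illustrates this idea on a time series with a hypothetical stand-in scorer (the linear-plus-sigmoid "model", the weight placement, and the window size are all illustrative assumptions, not from the tutorial itself).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in "model": a logistic scorer over a univariate
# time series whose weights are concentrated on timesteps 40-59.
w = np.zeros(100)
w[40:60] = 0.1

def model_score(x):
    # Probability-like score for the positive class.
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# Synthetic series: noise plus a bump in the region the model uses.
x = rng.normal(size=100)
x[40:60] += 2.0

base = model_score(x)
relevance = np.zeros_like(x)
win = 5
for t in range(0, len(x) - win + 1):
    x_occ = x.copy()
    x_occ[t:t + win] = 0.0            # occlude one sliding window
    drop = base - model_score(x_occ)  # score drop = importance
    relevance[t:t + win] += drop / win

# The relevance curve should peak inside the 40-59 region and be
# exactly zero where the model ignores the input.
print(int(np.argmax(relevance)))
```

Gradient-based saliency methods follow the same logic analytically (relevance ≈ sensitivity of the score to each timestep), while occlusion needs only black-box access to the model.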
Publisher
Association for Computing Machinery
Issue Date
2020-08-23
Language
English
Citation

26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2020, pp.3563 - 3564

DOI
10.1145/3394486.3406478
URI
http://hdl.handle.net/10203/286463
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.
