Interpreting Internal Activation Patterns in Deep Temporal Neural Networks by Finding Prototypes

Deep neural networks have demonstrated competitive performance on classification tasks for sequential data. However, it remains difficult to understand which temporal patterns the internal channels of deep neural networks capture for decision-making on sequential data. To address this issue, we propose a new framework for visualizing the temporal representations learned in deep neural networks without hand-crafted segmentation labels. Given input data, our framework extracts highly activated temporal regions that contribute to activating internal nodes and characterizes such regions with a prototype selection method based on Maximum Mean Discrepancy. The representative temporal patterns, referred to here as Prototypes of Temporally Activated Patterns (PTAP), provide core examples of subsequences in the sequential data for interpretability. We also analyze the role of each channel through Value-LRP plots, which combine representative prototypes with the distribution of the input attribution. Input attribution plots give visual information for recognizing the shapes that a channel focuses on for decision-making.
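The abstract's prototype selection step can be illustrated with a generic greedy MMD-based selector in the style of MMD-critic (Kim et al., 2016). This is not the authors' implementation: the function names, the RBF kernel choice, and the greedy strategy are assumptions made for illustration; the paper applies such selection to highly activated temporal subsequences rather than raw feature rows.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel between rows of X and rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def select_prototypes(X, m, gamma=1.0):
    """Greedily pick m rows of X whose empirical distribution has
    low squared MMD to the full data set (MMD-critic-style sketch)."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    # Mean kernel similarity of each candidate to the whole data set.
    colsum = K.sum(axis=0) / n
    selected = []
    for _ in range(m):
        best_j, best_gain = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            cand = selected + [j]
            s = len(cand)
            # Maximizing this gain is equivalent to minimizing
            # MMD^2 between the data and the prototype set.
            gain = (2.0 / s) * colsum[cand].sum() \
                 - K[np.ix_(cand, cand)].sum() / (s * s)
            if gain > best_gain:
                best_j, best_gain = j, gain
        selected.append(best_j)
    return selected
```

On two well-separated clusters, for example, the selector tends to return one prototype per cluster, since adding a second point near an already-selected prototype is penalized by the within-prototype kernel term.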
Publisher
Association for Computing Machinery
Issue Date
2021-08-16
Language
English
Citation
27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2021, pp. 158-166
DOI
10.1145/3447548.3467346
URI
http://hdl.handle.net/10203/289707
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.
