EDiT: Interpreting ensemble models via compact soft decision trees

Cited 4 times in Web of Science; cited 0 times in Scopus
Given feature-based data, how can we accurately classify an individual input and interpret the resulting decision? Ensemble models are often the best choice in terms of accuracy when dealing with feature-based datasets. However, interpreting the decision an ensemble model makes for an individual input is intractable. Decision trees, on the other hand, although prone to overfitting, are considered the most interpretable models because the decision process for an individual input can be traced. In this work, we propose Ensemble to Distilled Tree (EDiT), a novel distillation method that generates compact soft decision trees from ensemble models. EDiT exploits the interpretability of a tree-based structure by removing redundant branches and learning sparse weights, while enhancing accuracy by distilling the knowledge of ensemble models such as random forests (RF). Our experiments on eight datasets show that EDiT reduces the number of parameters of an RF by a factor of 6.4 to 498.4 with only a minor loss in classification accuracy.
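The core distillation idea in the abstract — train an ensemble teacher, then fit a single compact tree to the teacher's soft (probability) outputs rather than the hard labels — can be sketched as follows. This is a minimal illustration, not EDiT itself: EDiT learns a *soft* decision tree with sparse weights and pruned branches, whereas this sketch uses an ordinary multi-output regression tree as a stand-in student; the dataset and hyperparameters are arbitrary choices for the example.

```python
# Sketch of ensemble-to-tree distillation in the spirit of EDiT.
# Stand-in student: a small multi-output regression tree is fit to the
# random forest's class probabilities (EDiT would instead train a soft
# decision tree with sparse weights and removed redundant branches).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Teacher: the ensemble to be distilled.
teacher = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Soft labels: the teacher's class-probability outputs on the training set.
soft_labels = teacher.predict_proba(X_tr)

# Student: a small tree fit to the soft labels instead of the hard labels,
# so it inherits the ensemble's smoothed decision boundary.
student = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, soft_labels)

def student_predict(X):
    # Predicted class = argmax over the regressed probability vector.
    return student.predict(X).argmax(axis=1)

teacher_acc = teacher.score(X_te, y_te)
student_acc = (student_predict(X_te) == y_te).mean()

# Rough size comparison: total node count of the forest vs. the student tree.
forest_nodes = sum(t.tree_.node_count for t in teacher.estimators_)
student_nodes = student.tree_.node_count
print(f"teacher acc={teacher_acc:.3f}  student acc={student_acc:.3f}")
print(f"forest nodes={forest_nodes}  student nodes={student_nodes}")
```

The compression reported in the paper (6.4x to 498.4x fewer parameters than the RF) comes from the same structural asymmetry visible here: one shallow student replaces an entire forest of trees.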
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2019-11-09
Language
English
Citation

19th IEEE International Conference on Data Mining, ICDM 2019, pp.1438 - 1443

ISSN
1550-4786
DOI
10.1109/ICDM.2019.00187
URI
http://hdl.handle.net/10203/311559
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
