A Comparative Analysis on Interpretability of Explainable AI (XAI) for Neural Network Based Fault Detection Methods

dc.contributor.author: Sohn, Su Yeon
dc.contributor.author: Lee, Jay Hyung
dc.identifier.citation: 2021 AIChE Annual Meeting
dc.description.abstract: Artificial intelligence (AI) technology has spread to many fields, achieving astounding performance in certain cases. The single most important factor responsible for this is the development of deep learning using multiple hidden layers. Despite their popularity, deep neural networks (DNNs) are black-box models and have important limitations, first and foremost of which is the lack of physical interpretability. Explainable AI (XAI), which attempts to explain how the AI reached its decisions, has been getting a lot of attention lately in the expectation of drawing physical insights from black-box models. Recent developments in the field of XAI enable humans to comprehend the decision making of AI by analyzing the contribution of input features. For example, an explanation can be represented as a heatmap highlighting which pixels of an input image are most relevant to the classification decision [1-3], or as highlighted text in sentences [4, 5]. A large number of XAI methods have been put forward in the domains of image classification and natural language processing. Surveys providing a comparative overview of these techniques have also appeared [6, 7]. In the field of process systems engineering, many efforts have been reported to apply AI technologies to process modeling, control, and optimization. Fault detection and isolation (FDI) is one of the areas where AI technologies have been popular, with various detection and classification models ranging from feedforward neural networks to SVMs and LSTMs [8-10]. As FDI's goal is to ensure safety and on-spec product quality, the lack of interpretability of these approaches hinders their widespread application. Motivated by this, this work applies representative XAI methods to DNN models for fault detection and compares their performance in identifying the input features most important for the detection. The XAI methods we examine are Integrated Gradients, DeepLIFT, Kernel SHAP, and Gradient SHAP.
Integrated Gradients computes the average gradient along the path from a given baseline to the input [11]. DeepLIFT is based on a backpropagation approach, which attributes a change in the output to the inputs based on the differences between the inputs and the baselines [12]. Kernel SHAP uses a specially weighted local linear regression to estimate the Shapley values [13]. Gradient SHAP is a gradient-based method to compute the Shapley values [13]. For comparison, DNNs are trained to perform fault detection for the Tennessee Eastman process according to [14], and each XAI method is applied to the models to compute each input feature's attribution to detecting the fault. Once the most relevant input variables are chosen, the neural networks are then trained again without those variables. We compare the interpretation ability of the various methods by computing the difference in the fault detection rate between the original and newly trained models and analyzing the difference. As a result, among the four methods tried, Gradient SHAP showed the largest difference in the fault detection rate. As proper interpretability enhances the acceptance of an AI model for fault detection, effective application of XAI methods is expected to accelerate the adoption of AI in the process engineering field.
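The Integrated Gradients idea described above — averaging the model's gradient along a straight-line path from a baseline to the input, then scaling by the input-baseline difference — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a toy logistic model with an analytic gradient in place of the DNN trained on the Tennessee Eastman process, and the names `model`, `model_grad`, and `integrated_gradients` are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w):
    # Toy stand-in for a fault-detection network: logistic output of a linear score.
    return sigmoid(w @ x)

def model_grad(x, w):
    # Analytic gradient of the toy model w.r.t. the input features.
    s = model(x, w)
    return s * (1.0 - s) * w

def integrated_gradients(x, baseline, w, steps=200):
    # Approximate the path integral of the gradient with a midpoint Riemann sum,
    # then scale each feature by its displacement from the baseline.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([model_grad(baseline + a * (x - baseline), w) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

w = np.array([2.0, -1.0, 0.5])       # fixed toy weights
x = np.array([1.0, 0.5, -1.0])       # "faulty" input sample
baseline = np.zeros_like(x)          # all-zeros baseline, a common default

attr = integrated_gradients(x, baseline, w)
# Completeness property: attributions sum to F(x) - F(baseline).
print(attr, attr.sum(), model(x, w) - model(baseline, w))
```

The completeness check at the end is the key sanity test: per-feature attributions must sum to the change in model output, which is what makes ranking features by attribution (and then retraining without the top-ranked ones, as in the study) a meaningful comparison.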
dc.title: A Comparative Analysis on Interpretability of Explainable AI (XAI) for Neural Network Based Fault Detection Methods
dc.citation.publicationname: 2021 AIChE Annual Meeting
dc.identifier.conferencelocation: Boston, MA
dc.contributor.localauthor: Lee, Jay Hyung
dc.contributor.nonIdAuthor: Sohn, Su Yeon
Appears in Collection
CBE-Conference Papers (학술회의논문)