Abstract
With the increasing complexity and size of modern industrial processes, process monitoring is attracting growing attention as a means of ensuring safe and economical process operation. Various data-driven fault diagnosis models that capture the underlying complexity of processes with high classification performance have been actively investigated, but they cannot explain their prediction outcomes. That is, such classification models are unable to answer the fundamental question of "Why did this fault occur?". In this study, a framework for an explainable fault diagnosis model is proposed. A stacked autoencoder (SAE) achieves high classification performance, and the model predictions are explained using Kernel Shapley additive explanations (Kernel SHAP). The resulting explanation, which includes local feature importance and a global interpretation of the model behavior, reinforces the reliability of the model and makes it possible to take corrective actions when a fault occurs. The efficacy of the proposed framework is evaluated on a dataset from the Tennessee Eastman process.
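As a rough illustration of the pipeline summarized above (not the authors' implementation), the sketch below pretrains a small stacked autoencoder layer by layer, fine-tunes it as a classifier, and explains one prediction with Kernel SHAP via the open-source shap package. The synthetic data, layer sizes, and training settings are placeholder assumptions standing in for the Tennessee Eastman measurements and fault classes.

    # Minimal sketch: SAE classifier + Kernel SHAP (illustrative, not the paper's code)
    import numpy as np
    import shap
    from tensorflow import keras
    from tensorflow.keras import layers

    rng = np.random.default_rng(0)
    n_features, n_classes = 52, 3                      # TEP has 52 process variables (assumed here)
    X = rng.normal(size=(600, n_features)).astype("float32")
    y = rng.integers(0, n_classes, size=600)           # synthetic stand-in labels

    # Greedy layer-wise pretraining of two autoencoder layers
    dims, inputs, enc_layers = [32, 16], X, []
    for d in dims:
        ae = keras.Sequential([layers.Dense(d, activation="relu"),
                               layers.Dense(inputs.shape[1])])
        ae.compile(optimizer="adam", loss="mse")
        ae.fit(inputs, inputs, epochs=10, batch_size=64, verbose=0)
        enc_layers.append(ae.layers[0])                # keep the trained encoder layer
        inputs = ae.layers[0](inputs).numpy()          # encode data for the next layer

    # Stack the pretrained encoders and fine-tune with a softmax head
    clf = keras.Sequential(enc_layers + [layers.Dense(n_classes, activation="softmax")])
    clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    clf.fit(X, y, epochs=20, batch_size=64, verbose=0)

    # Kernel SHAP: local feature importance for one diagnosed sample
    background = shap.sample(X, 50)                    # background set summarizing the data
    explainer = shap.KernelExplainer(lambda x: clf.predict(x, verbose=0), background)
    shap_values = explainer.shap_values(X[:1], nsamples=200)
    print(np.array(shap_values).shape)                 # per-class attribution for each feature

The per-sample attributions give the local feature importance mentioned in the abstract; averaging their magnitudes over many samples yields a global view of which process variables drive each fault class.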