Deep learning has recently been applied to various artificial intelligence (AI) fields and has demonstrated excellent performance. However, many deep learning models suffer from the black-box problem, which makes it difficult to interpret the models and understand their predictions. This hinders the application of deep learning to real-world problems, especially in critical systems such as those in the defense, aerospace, and security domains. To overcome this issue, the concept of explainable AI was introduced. Various approaches have been proposed to visually explain model predictions for image and text classification. A common approach to visual explanation is layer-wise relevance propagation (LRP), which produces a heatmap in which each pixel value represents that pixel's contribution to the model's prediction. Advanced versions of LRP have been proposed, but these methods still have limitations. In this study, we propose selective layer-wise relevance propagation, which produces a clearer heatmap than existing methods by combining relevance-based and gradient-based methods. Experimental results are presented qualitatively and quantitatively to evaluate the proposed method and verify its effectiveness.
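To make the idea of relevance propagation concrete, the following is a minimal sketch of the standard LRP-ε rule on a toy two-layer ReLU network. It is not the selective method proposed here; the function name `lrp_epsilon`, the network sizes, and the random weights are illustrative assumptions, not from the paper.

```python
import numpy as np

def lrp_epsilon(W, b, a, R_out, eps=1e-6):
    # One LRP-epsilon step for a dense layer z = W @ a + b:
    # redistribute the output relevance R_out onto the inputs a.
    z = W @ a + b                                   # pre-activations
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilised ratio
    return a * (W.T @ s)                            # relevance per input unit

rng = np.random.default_rng(0)
x = rng.random(8)                                   # toy "pixel" input
W1, b1 = rng.standard_normal((6, 8)), np.zeros(6)
W2, b2 = rng.standard_normal((3, 6)), np.zeros(3)

a1 = np.maximum(0.0, W1 @ x + b1)                   # ReLU hidden layer
out = W2 @ a1 + b2                                  # class scores

R2 = np.zeros(3)
R2[out.argmax()] = out.max()                        # start relevance at the predicted class
R1 = lrp_epsilon(W2, b2, a1, R2)                    # hidden-layer relevance
R0 = lrp_epsilon(W1, b1, x, R1)                     # input-layer relevance "heatmap"
```

With zero biases and a small ε, the total relevance is (approximately) conserved from layer to layer, i.e. `R0.sum()` stays close to `R2.sum()`; reshaping `R0` to the input's spatial dimensions yields the heatmap described above.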