Chemical structural information, such as Morgan fingerprints, is increasingly being modeled with artificial neural networks (ANNs) in a variety of chemical studies. Despite the growing volume of such studies, the optimal architecture and prediction performance of ANNs have not been sufficiently examined. Open questions include whether multilayer perceptrons (MLPs) always outperform single-layer perceptrons (SLPs) and shallow neural networks (SNNs) on classification problems involving chemical structures, and specifically what deep features are learned for prediction. In this study, we compared the performance of SLPs and SNNs to that of MLPs using Morgan fingerprints as input, and we interpreted the MLPs through model similarity and deep-feature homogeneity. We found that MLPs performed better only when the chemical dataset involved multiple chemical systems; otherwise, MLPs did not learn diverse deep features and performed similarly to or worse than SNNs and SLPs. We also showed that Morgan fingerprints with an SNN using a sigmoid activation function outperformed DeepDDI, a recently developed deep neural network for drug-drug interaction prediction that uses eight hidden layers. This study suggests that the hidden layers of an ANN may reveal the chemical systems underlying a given dataset, which could help scientists design the hyperparameters of artificial neural networks more deliberately. With this information, future DNNs could be customized to improve predictive performance on chemical datasets.
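To make the SLP baseline concrete, the following is a minimal sketch of a single-layer perceptron (here, logistic regression with a sigmoid output, i.e., no hidden layer) trained on binary fingerprint-like bit vectors. The data is synthetic and purely illustrative of the input/output shapes; the bit length, sample size, and learning rate are assumptions, not the settings or datasets used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_bits = 64       # real Morgan fingerprints are typically 1024 or 2048 bits
n_samples = 200

# Synthetic binary "fingerprints": class-1 molecules always set the
# first 8 bits, class-0 molecules set each bit with probability 0.2.
y = rng.integers(0, 2, n_samples)
X = (rng.random((n_samples, n_bits)) < 0.2).astype(float)
X[y == 1, :8] = 1.0


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# SLP: one weight per fingerprint bit plus a bias, trained by
# gradient descent on the logistic (cross-entropy) loss.
w = np.zeros(n_bits)
b = 0.0
lr = 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)          # predicted class-1 probabilities
    w -= lr * (X.T @ (p - y)) / n_samples
    b -= lr * np.mean(p - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

Because the SLP is linear in the fingerprint bits, each learned weight can be read directly as the contribution of one substructure bit to the prediction, which is what makes the SLP/SNN baselines easy to interpret relative to an MLP.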