Bayesian networks are a useful tool for understanding and representing knowledge structures in research fields such as AI, medicine, biology, education, social science, business, and management. Their popularity stems from the high interpretability of phenomena expressed in the form of a Bayesian network: the directed arrows in the network indicate relationships between the random variables, most often read as cause-effect relationships, with the variable at the tail interpreted as a cause and the one at the head as an effect.
As with most modeling problems, learning the structure of a model is difficult when many variables are involved, both in computation time and in model complexity. We propose a structure-learning method under the assumption that the true model is a Bayesian network. The method applies a regression tree method and an entropy measure to discover how the variables are related to one another. The two approaches produce broadly similar model structures on the whole, but combining their results yields an improved structure. We applied the method to two sets of artificial data, one of 20 binary variables and the other of 40 binary variables. The results strongly support the proposed method for structure learning.
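The abstract does not specify which entropy measure is used; a common choice for detecting dependence between binary variables is mutual information, I(X;Y) = H(X) + H(Y) - H(X,Y), which is zero exactly when the variables are independent. The sketch below (an illustrative assumption, not the paper's actual procedure) scores pairs of synthetic binary variables this way, so that a genuinely dependent pair stands out from an independent one:

```python
import math
import random
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of a sample of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def mutual_information(x, y):
    """Sample mutual information I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    return entropy(x) + entropy(y) - entropy(list(zip(x, y)))

# Toy binary data: z is a noisy copy of x, while y is independent noise.
random.seed(0)
x = [random.randint(0, 1) for _ in range(2000)]
y = [random.randint(0, 1) for _ in range(2000)]
z = [xi if random.random() < 0.9 else 1 - xi for xi in x]

# The dependent pair (x, z) should score much higher than the independent pair (y, z).
print(mutual_information(x, z), mutual_information(y, z))
```

In a structure-learning setting, such pairwise scores would only suggest candidate edges; orienting them and pruning indirect dependencies requires further steps, which is where a second method such as regression trees could complement the entropy ranking.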