Bayesian reinforcement learning (BRL) provides a formal framework for optimally trading off exploration and exploitation in reinforcement learning. Unfortunately, computing the Bayes-optimal behavior is generally intractable, since the uncertainty in the model of the environment must be taken into account. In this paper, we present a heuristic search approach to model-based BRL. In addition, we present potential-based reward shaping for model-based BRL that makes the search more effective. The potential functions we propose are domain-independent in the sense that they do not require any knowledge of the actual environment model. We show that the proposed potential functions generally improve the quality of the search, enabling our heuristic search algorithm to outperform state-of-the-art BRL algorithms on standard benchmark domains.
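For reference, the standard form of potential-based reward shaping (Ng, Harada, and Russell, 1999) augments the reward with the difference of a potential function Φ over states; this is a generic sketch of the technique named above, not necessarily the exact formulation used in this paper, whose potential functions are defined in the body:

\[
  F(s, a, s') \;=\; \gamma\,\Phi(s') - \Phi(s), \qquad
  \tilde{R}(s, a, s') \;=\; R(s, a, s') + F(s, a, s'),
\]

where γ is the discount factor. Shaping of this form is known to preserve the optimal policy of the underlying problem while potentially guiding search more effectively.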