Graph Neural Networks (GNNs) have proven effective on non-Euclidean data such as social, biological, and chemical networks. Among the many ways to improve network performance, numerous papers have found that simply enlarging model size (e.g., stacking more layers) is not a viable option for GNNs because of over-smoothing. We therefore turned to ensembling as a feasible alternative and, as an extension, applied BatchEnsemble, a more efficient ensemble method, to graph-structured datasets and GNNs to examine its effectiveness in this setting. In this paper, we show that BatchEnsemble achieves better performance than the naive Ensemble with GNNs. To substantiate this, we evaluated both Ensemble and BatchEnsemble on node classification (Cora, CiteSeer, and PubMed) and graph classification (MUTAG, PROTEINS, and COLLAB) tasks using three representative GNNs: the Graph Convolutional Network, the Graph Isomorphism Network, and the Graph Attention Network. BatchEnsemble yielded better accuracy and uncertainty measures than the Ensemble method on both node and graph classification tasks, and both training and inference were faster in the BatchEnsemble setting.
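To make the efficiency claim concrete, the core idea of BatchEnsemble is that all members share one weight matrix and each member only owns cheap rank-1 factors. The following is a minimal NumPy sketch of such a layer under that assumption; the class and variable names (`BatchEnsembleLinear`, `r`, `s`) are illustrative, not the paper's actual implementation.

```python
import numpy as np

class BatchEnsembleLinear:
    """Sketch of a BatchEnsemble linear layer: member i's effective weight
    is W * outer(r_i, s_i), so only the rank-1 factors are per-member."""

    def __init__(self, in_dim, out_dim, n_members, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.standard_normal((out_dim, in_dim)) * 0.1  # shared weight
        self.r = np.ones((n_members, out_dim))   # per-member output factors
        self.s = np.ones((n_members, in_dim))    # per-member input factors
        self.b = np.zeros((n_members, out_dim))  # per-member bias

    def forward(self, x, member):
        # Equivalent to x @ (W * outer(r, s)).T + b, but uses element-wise
        # scalings so a full per-member weight is never materialized.
        return ((x * self.s[member]) @ self.W.T) * self.r[member] + self.b[member]

    def forward_all(self, x):
        # Ensemble prediction: average the member outputs.
        return np.mean([self.forward(x, m) for m in range(len(self.r))], axis=0)
```

Because the shared matrix dominates the parameter count, training and inference cost grow only marginally with the number of members, which is the efficiency advantage the abstract refers to.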