The vast majority of recent machine learning models are based on neural networks. Although neural
networks perform very well in many domains, they still struggle to capture logical constraints from
data. Among prior work on this problem, i.e. finding a neural network architecture that supports
logical reasoning, SATNet is one of the first models that both captures logical relations and
produces solutions consistent with the relations it has learned. However, it still lacks several
desirable properties: group equivariance, interpretability, and low computation cost. We propose a
method that improves performance by exploiting group symmetries, infers the learned symmetries, and
reduces computation cost. Furthermore, we analyze the weaknesses and limitations of SATNet and
present an improved method for solving group-equivariant logical problems.