Because the Boltzmann Machine requires a huge amount of computational resources when the network is large or applied to real problems, this cost has been a main obstacle preventing its wide use. To overcome this obstacle, an efficient method for mapping Boltzmann Machine computations onto a distributed memory multiprocessor, which exploits synchronous spatial parallelism, is presented. In this method, the neurons of the Boltzmann Machine are partitioned into P disjoint sets, and each set is mapped onto a processor of a P-processor system. Parallel convergence and learning algorithms for the Boltzmann Machine, the communication patterns required among the processors, and their time complexities under this partitioning and mapping are investigated. A theoretical P-processor speedup of the parallelizing scheme over a single processor is analyzed, and an experimental speedup is also presented to demonstrate the usefulness of our parallel implementation. These results can serve as a basis for determining the most cost-effective or optimal number of processors with respect to the communication capabilities and interconnection topologies of a given distributed memory multiprocessor.
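The partition-and-update scheme summarized above can be sketched as follows. This is a minimal single-process simulation of the idea, not the paper's actual implementation: the block partition, the stochastic update rule, and all function names here are illustrative assumptions, and the per-block loop stands in for work that would run on separate processors with state exchange between rounds.

```python
import numpy as np

def partition_neurons(n_neurons, P):
    """Split neuron indices 0..n_neurons-1 into P disjoint, nearly equal
    blocks (an assumed block partition; other mappings are possible)."""
    return [list(range(p * n_neurons // P, (p + 1) * n_neurons // P))
            for p in range(P)]

def synchronous_round(weights, states, blocks, T, rng):
    """Simulate one synchronous round: each 'processor' updates its own
    block of neurons using the previous global state vector, after which
    all new states would be exchanged (here merged into one array)."""
    new_states = states.copy()
    for block in blocks:                       # each block ~ one processor
        for i in block:
            energy_gap = weights[i] @ states   # net input to neuron i
            p_on = 1.0 / (1.0 + np.exp(-energy_gap / T))
            new_states[i] = 1.0 if rng.random() < p_on else 0.0
    return new_states

rng = np.random.default_rng(0)
n, P = 10, 4
W = rng.normal(size=(n, n))
W = (W + W.T) / 2.0            # symmetric weights, as in a Boltzmann Machine
np.fill_diagonal(W, 0.0)       # no self-connections
s = rng.integers(0, 2, size=n).astype(float)
blocks = partition_neurons(n, P)
s = synchronous_round(W, s, blocks, T=1.0, rng=rng)
```

Because every block reads only the previous round's states, the per-block updates are independent within a round; on a real P-processor system each round would end with the processors broadcasting their updated blocks to one another, which is the communication pattern whose cost the analysis in the paper accounts for.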