In this paper, we learn a diffusion model to generate 3D data on a scene scale. Specifically, our model crafts a 3D scene consisting of multiple objects, whereas recent diffusion research has focused on single objects. To realize our goal, we represent a scene with discrete class labels, i.e., a categorical distribution, to assign multiple objects to semantic categories. We thus extend discrete diffusion models to learn scene-scale categorical distributions. In addition, we validate that a latent diffusion model can reduce computation costs for training and deployment. To the best of our knowledge, our work is the first to apply discrete and latent diffusion to 3D categorical data on a scene scale. We further propose to perform semantic scene completion (SSC) by learning a conditional distribution with our diffusion model, where the condition is a partial observation in the form of a sparse point cloud. In experiments, we empirically show that our diffusion models not only generate reasonable scenes but also perform the scene completion task better than a discriminative model. Our code and models are available at https://github.com/zoomin-lee/scene-scale-diffusion.