Diffusion Probabilistic Models for Scene-Scale 3D Categorical Data

Abstract
In this paper, we learn a diffusion model to generate 3D data on a scene scale. Specifically, our model crafts a 3D scene consisting of multiple objects, while recent diffusion research has focused on a single object. To realize our goal, we represent a scene with discrete class labels, i.e., a categorical distribution, to assign multiple objects to semantic categories. Thus, we extend discrete diffusion models to learn scene-scale categorical distributions. In addition, we validate that a latent diffusion model can reduce computation costs for training and deployment. To the best of our knowledge, our work is the first to apply discrete and latent diffusion to 3D categorical data on a scene scale. We further propose to perform semantic scene completion (SSC) by learning a conditional distribution using our diffusion model, where the condition is a partial observation in a sparse point cloud. In experiments, we empirically show that our diffusion models not only generate reasonable scenes, but also perform the scene completion task better than a discriminative model. Our code and models are available at https://github.com/zoomin-lee/scene-scale-diffusion.
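Because the model operates on discrete class labels rather than continuous signals, the forward corruption process is multinomial rather than Gaussian. Below is a minimal, hypothetical sketch of a uniform-transition (D3PM-style) forward step over a voxel grid of semantic labels; the class count, step count, noise schedule, and function names are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch (not the authors' code): a uniform-transition, D3PM-style
# forward corruption step applied to a voxel grid of semantic class labels.
# The class count, number of steps, and beta schedule are assumptions.
import torch

NUM_CLASSES = 20                         # assumed number of semantic categories
T = 100                                  # assumed number of diffusion steps
betas = torch.linspace(1e-4, 0.1, T)     # assumed per-step corruption rates


def q_sample(x0: torch.Tensor, t: torch.Tensor, betas: torch.Tensor,
             num_classes: int = NUM_CLASSES) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) for uniform-transition discrete diffusion.

    x0: integer class labels, shape (B, X, Y, Z)
    t:  timesteps, shape (B,)
    Each voxel keeps its label with probability alpha_bar_t and is otherwise
    resampled uniformly over all classes.
    """
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t]               # (B,)
    keep = torch.rand(x0.shape) < alpha_bar.view(-1, 1, 1, 1)      # per-voxel mask
    noise = torch.randint(0, num_classes, x0.shape)                # uniform labels
    return torch.where(keep, x0, noise)


# Example: corrupt two 32^3 semantic voxel scenes at random timesteps.
x0 = torch.randint(0, NUM_CLASSES, (2, 32, 32, 32))
t = torch.randint(0, T, (2,))
xt = q_sample(x0, t, betas)
```

A reverse network would then be trained to predict the original labels (or the categorical posterior) from x_t and t; the scene completion variant described in the abstract would additionally condition that network on the partial observation from the sparse point cloud.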
Publisher
Korean Institute of Broadcast and Media Engineers (한국방송미디어공학회)
Issue Date
2023-02-10
Language
English
Citation
Workshop on Image Processing and Image Understanding 2023
URI
http://hdl.handle.net/10203/314620
Appears in Collection
CS-Conference Papers(학술회의논문)