Differentiable Diffusion for Dense Depth Estimation from Multi-view Images

Abstract
We present a method to estimate dense depth by optimizing a sparse set of points such that their diffusion into a depth map minimizes a multi-view reprojection error from RGB supervision. We optimize point positions, depths, and weights with respect to the loss by differential splatting that models points as Gaussians with analytic transmittance. Further, we develop an efficient optimization routine that can simultaneously optimize the 50k+ points required for complex scene reconstruction. We validate our routine using ground-truth data and show high reconstruction quality. Then, we apply this to light field and wider-baseline images via self-supervision, and show improvements in both average and outlier error for depth maps diffused from inaccurate sparse points. Finally, we compare qualitative and quantitative results to image processing and deep learning methods.
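
The abstract outlines the core idea: a set of learnable sparse points is splatted as Gaussians into a dense depth map, and the point positions, depths, and weights are optimized by gradient descent against a multi-view reprojection loss. The sketch below is only an illustration of that idea, not the authors' implementation: the splat routine, the fixed isotropic sigma, and the L1 loss against a placeholder reference depth are assumptions standing in for the paper's analytic-transmittance compositing and RGB reprojection supervision.

```python
# Minimal sketch (not the authors' code): splat N sparse points as isotropic
# Gaussians into a dense depth map and optimize their positions, depths, and
# weights by gradient descent. An L1 loss against a placeholder reference
# depth stands in for the paper's multi-view reprojection loss, and normalized
# Gaussian weighting replaces transmittance-based compositing for brevity.
import torch

H, W, N = 64, 64, 500                          # image size and number of sparse points

# Learnable point parameters: (x, y) positions, depths, and confidence weights.
xy     = torch.rand(N, 2, requires_grad=True)  # positions in [0, 1]^2
depth  = torch.rand(N, requires_grad=True)
weight = torch.zeros(N, requires_grad=True)    # passed through a sigmoid below

ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                        torch.linspace(0, 1, W), indexing="ij")
grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (H*W, 2) pixel coordinates

def splat(xy, depth, weight, sigma=0.02):
    """Diffuse sparse point depths into a dense map via Gaussian splatting."""
    d2 = ((grid[:, None, :] - xy[None, :, :]) ** 2).sum(-1)          # (H*W, N)
    w  = torch.sigmoid(weight)[None, :] * torch.exp(-d2 / (2 * sigma ** 2))
    dense = (w * depth[None, :]).sum(-1) / (w.sum(-1) + 1e-8)        # weighted average
    return dense.reshape(H, W)

target = torch.rand(H, W)                      # placeholder for multi-view supervision
opt = torch.optim.Adam([xy, depth, weight], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = (splat(xy, depth, weight) - target).abs().mean()
    loss.backward()
    opt.step()
```

In the paper itself, supervision comes from reprojecting RGB views using the diffused depth rather than from a reference depth map, and the point sets are much larger (50k+), which is what motivates the efficient optimization routine mentioned in the abstract.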
Publisher
IEEE
Issue Date
2021-06-23
Language
English
Citation

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8908-8917

ISSN
1063-6919
DOI
10.1109/CVPR46437.2021.00880
URI
http://hdl.handle.net/10203/285741
Appears in Collection
CS-Conference Papers