Are Multi-view Edges Incomplete for Depth Estimation?

DC Field | Value | Language
dc.contributor.author | Khan, Numair | ko
dc.contributor.author | Kim, Min H. | ko
dc.contributor.author | Tompkin, James | ko
dc.date.accessioned | 2024-08-29T08:00:06Z | -
dc.date.available | 2024-08-29T08:00:06Z | -
dc.date.created | 2024-05-18 | -
dc.date.issued | 2024-07 | -
dc.identifier.citation | INTERNATIONAL JOURNAL OF COMPUTER VISION, v.132, no.7, pp.2639 - 2673 | -
dc.identifier.issn | 0920-5691 | -
dc.identifier.uri | http://hdl.handle.net/10203/322456 | -
dc.description.abstract | Depth estimation tries to obtain 3D scene geometry from low-dimensional data like 2D images. This is a vital operation in computer vision and any general solution must preserve all depth information of potential relevance to support higher-level tasks. For scenes with well-defined depth, this work shows that multi-view edges can encode all relevant information: that multi-view edges are complete. For this, we follow Elder's complementary work on the completeness of 2D edges for image reconstruction. We deploy an image-space geometric representation: an encoding of multi-view scene edges as constraints and a diffusion reconstruction method for inverting this code into depth maps. Due to inaccurate constraints, diffusion-based methods have previously underperformed against deep learning methods; however, we will reassess the value of diffusion-based methods and show their competitiveness without requiring training data. To begin, we work with structured light fields and epipolar plane images (EPIs). EPIs present high-gradient edges in the angular domain: with correct processing, EPIs provide depth constraints with accurate occlusion boundaries and view consistency. Then, we present a differentiable representation form that allows the constraints and the diffusion reconstruction to be optimized in an unsupervised way via a multi-view reconstruction loss. This is based around point splatting via radiative transport, and extends to unstructured multi-view images. We evaluate our reconstructions for accuracy, occlusion handling, view consistency, and sparsity to show that they retain the geometric information required for higher-level tasks. | -
dc.language | English | -
dc.publisher | SPRINGER | -
dc.title | Are Multi-view Edges Incomplete for Depth Estimation? | -
dc.type | Article | -
dc.identifier.wosid | 001159361200001 | -
dc.identifier.scopusid | 2-s2.0-85184885550 | -
dc.type.rims | ART | -
dc.citation.volume | 132 | -
dc.citation.issue | 7 | -
dc.citation.beginningpage | 2639 | -
dc.citation.endingpage | 2673 | -
dc.citation.publicationname | INTERNATIONAL JOURNAL OF COMPUTER VISION | -
dc.identifier.doi | 10.1007/s11263-023-01890-y | -
dc.contributor.localauthor | Kim, Min H. | -
dc.contributor.nonIdAuthor | Khan, Numair | -
dc.contributor.nonIdAuthor | Tompkin, James | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Diffusion | -
dc.subject.keywordAuthor | Light fields | -
dc.subject.keywordAuthor | Multi-view reconstruction | -
dc.subject.keywordAuthor | Edges | -
dc.subject.keywordAuthor | Depth reconstruction | -
dc.subject.keywordPlus | FIELD | -
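
The abstract above describes inverting sparse, edge-located depth constraints into dense depth maps by diffusion. The following sketch is not the authors' implementation; it only illustrates the general idea, assuming a toy grid, arbitrary constraint placement, and a fixed number of Jacobi iterations of the Laplace equation with Dirichlet (fixed-value) constraints.

```python
# Illustrative sketch only: diffusion-based depth reconstruction from
# sparse constraints. Depth values at constrained (edge) pixels are held
# fixed; the rest of the map is filled by Jacobi iterations of the
# Laplace equation. Grid size and iteration count are assumptions.
import numpy as np

def diffuse_depth(sparse_depth, mask, num_iters=2000):
    """Fill a depth map by diffusing sparsely constrained values.

    sparse_depth : (H, W) array with depth values at constrained pixels.
    mask         : (H, W) boolean array, True where depth is constrained.
    """
    # Initialize unconstrained pixels with the mean constrained depth.
    depth = np.where(mask, sparse_depth, sparse_depth[mask].mean())
    for _ in range(num_iters):
        # Average of the four neighbours (Jacobi update for the Laplacian).
        # np.roll wraps at the borders, a simplification for this sketch.
        avg = 0.25 * (np.roll(depth, 1, 0) + np.roll(depth, -1, 0) +
                      np.roll(depth, 1, 1) + np.roll(depth, -1, 1))
        # Re-impose the constrained pixels (Dirichlet boundary conditions).
        depth = np.where(mask, sparse_depth, avg)
    return depth

# Toy example: two vertical depth "edges" at known depths on a 64x64 grid.
H, W = 64, 64
sparse = np.zeros((H, W))
mask = np.zeros((H, W), dtype=bool)
sparse[:, 16], mask[:, 16] = 1.0, True   # near edge
sparse[:, 48], mask[:, 48] = 3.0, True   # far edge
depth_map = diffuse_depth(sparse, mask)
```

Per the abstract, the paper goes beyond this plain diffusion by extracting accurate, view-consistent constraints from epipolar plane images and by making the constraints and reconstruction differentiable via point splatting, so they can be optimized with a multi-view reconstruction loss.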
Appears in Collection: CS-Journal Papers (Journal Papers)
Files in This Item: There are no files associated with this item.
