VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction

Cited 3 times in Web of Science · Cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Choe, Jaesung | ko
dc.contributor.author | Im, Sunghoon | ko
dc.contributor.author | Rameau, Francois | ko
dc.contributor.author | Kang, Minjun | ko
dc.contributor.author | Kweon, In-So | ko
dc.date.accessioned | 2022-11-28T02:02:44Z | -
dc.date.available | 2022-11-28T02:02:44Z | -
dc.date.created | 2022-11-25 | -
dc.date.issued | 2021-10-17 | -
dc.identifier.citation | 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021, pp.16066 - 16075 | -
dc.identifier.issn | 1550-5499 | -
dc.identifier.uri | http://hdl.handle.net/10203/301085 | -
dc.description.abstract | To reconstruct a 3D scene from a set of calibrated views, traditional multi-view stereo techniques rely on two distinct stages: local depth-map computation and global depth-map fusion. Recent studies concentrate either on deep neural architectures for depth estimation followed by a conventional depth-fusion method, or on direct 3D reconstruction networks that regress a Truncated Signed Distance Function (TSDF). In this paper, we advocate that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results. Accordingly, our network operates in two steps: 1) the local computation of depth maps with a deep MVS technique, and 2) the fusion of the depth maps and image features to build a single TSDF volume. To improve the matching performance between images acquired from very different viewpoints (e.g., large baselines and rotations), we introduce a rotation-invariant 3D convolution kernel called PosedConv. The effectiveness of the proposed architecture is underlined via a large series of experiments conducted on the ScanNet dataset, where our approach compares favorably against both traditional and deep learning techniques. | -
dc.language | English | -
dc.publisher | IEEE Computer Society | -
dc.title | VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction | -
dc.type | Conference | -
dc.identifier.wosid | 000798743206025 | -
dc.identifier.scopusid | 2-s2.0-85121119056 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 16066 | -
dc.citation.endingpage | 16075 | -
dc.citation.publicationname | 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021 | -
dc.identifier.conferencecountry | CN | -
dc.identifier.conferencelocation | Virtual | -
dc.identifier.doi | 10.1109/ICCV48922.2021.01578 | -
dc.contributor.localauthor | Kweon, In-So | -
dc.contributor.nonIdAuthor | Choe, Jaesung | -
dc.contributor.nonIdAuthor | Im, Sunghoon | -
dc.contributor.nonIdAuthor | Rameau, Francois | -
dc.contributor.nonIdAuthor | Kang, Minjun | -
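The abstract's second stage — fusing per-view depth maps into a single TSDF volume — builds on the classical volumetric-fusion recipe: project each voxel centre into the depth map, compute a truncated signed distance to the observed surface, and average it into the volume. The following is a minimal NumPy sketch of that classical fusion step, not the paper's learned fusion network; the function name, grid layout, and uniform weighting scheme are illustrative assumptions.

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, K, pose, voxel_size, trunc):
    """Fuse one depth map into a running TSDF volume (classical volumetric fusion).

    tsdf, weights : (X, Y, Z) arrays -- running signed-distance average and
                    per-voxel observation counts, updated in place.
    depth         : (H, W) depth map in metres (0 marks invalid pixels).
    K             : (3, 3) camera intrinsics.
    pose          : (4, 4) camera-to-world transform.
    """
    X, Y, Z = tsdf.shape
    # World coordinates of every voxel centre (grid origin at the world origin).
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij")
    pts = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size
    # Bring voxel centres into the camera frame and project them with K.
    w2c = np.linalg.inv(pose)
    cam = pts @ w2c[:3, :3].T + w2c[:3, 3]
    z = cam[:, 2]
    uvw = cam @ K.T
    u = np.round(uvw[:, 0] / np.maximum(z, 1e-9)).astype(int)
    v = np.round(uvw[:, 1] / np.maximum(z, 1e-9)).astype(int)
    H, W = depth.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0
    # Truncated signed distance: positive in front of the observed surface.
    sdf = np.clip((d - z) / trunc, -1.0, 1.0)
    upd = valid & (sdf > -1.0)          # skip voxels far behind the surface
    ft, fw = tsdf.reshape(-1), weights.reshape(-1)  # flat views into the volumes
    ft[upd] = (ft[upd] * fw[upd] + sdf[upd]) / (fw[upd] + 1.0)
    fw[upd] += 1.0
```

Calling this once per view yields a per-voxel running average of the truncated distances; VolumeFusion replaces this hand-crafted averaging with a network that also consumes image features, but the voxel-to-camera projection geometry is the same.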
Appears in Collection: EE-Conference Papers (학술회의논문, conference papers)
Files in This Item
There are no files associated with this item.