DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jang, Hyeonjoong | ko |
dc.contributor.author | Jeon, Daniel | ko |
dc.contributor.author | Ha, Hyunho | ko |
dc.contributor.author | Kim, Min Hyuk | ko |
dc.date.accessioned | 2019-12-13T09:30:29Z | - |
dc.date.available | 2019-12-13T09:30:29Z | - |
dc.date.created | 2019-11-11 | - |
dc.date.issued | 2019-10-08 | - |
dc.identifier.citation | 14th International Symposium on Visual Computing (ISVC), pp.683 - 694 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/10203/269271 | - |
dc.description.abstract | Omnidirectional cameras are commonly equipped with fish-eye lenses to capture 360-degree visual information, and severe spherical projective distortion occurs when a 360-degree image is stored as a two-dimensional image array. As a consequence, traditional depth estimation methods are not directly applicable to omnidirectional cameras. Dense depth estimation for omnidirectional imaging has been achieved by applying several offline processes, such as patch-matching, optical flow, and convolutional propagation filtering, resulting in additional heavy computation. As yet, no dense depth estimation method is available for real-time applications. In response, we propose an efficient depth densification method designed for omnidirectional imaging to achieve 360-degree dense depth video with an omnidirectional camera. First, we compute sparse depth estimates using a conventional simultaneous localization and mapping (SLAM) method, and then use these estimates as input to our depth densification. We propose a novel densification approach based on a spherical pull-push method, devising a joint spherical pyramid for color and depth built on multi-level icosahedron subdivision surfaces. This allows us to propagate the sparse depth continuously over 360-degree angles efficiently in an edge-aware manner. The results demonstrate that our real-time densification method is comparable to state-of-the-art offline methods in terms of per-pixel depth accuracy. Combining our depth densification with a conventional SLAM method allows us to capture real-time 360-degree RGB-D video with a single omnidirectional camera. | - |
dc.language | English | - |
dc.publisher | Springer | - |
dc.title | Fast Omnidirectional Depth Densification | - |
dc.type | Conference | - |
dc.identifier.wosid | 000582481300053 | - |
dc.identifier.scopusid | 2-s2.0-85076169317 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 683 | - |
dc.citation.endingpage | 694 | - |
dc.citation.publicationname | 14th International Symposium on Visual Computing (ISVC) | - |
dc.identifier.conferencecountry | US | - |
dc.identifier.conferencelocation | Lake Tahoe, Nevada | - |
dc.identifier.doi | 10.1007/978-3-030-33720-9_53 | - |
dc.contributor.localauthor | Kim, Min Hyuk | - |
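The abstract describes densifying sparse SLAM depth estimates with a pull-push scheme over a multi-level pyramid. The snippet below is a minimal sketch of generic pull-push hole filling on a plain 2D image grid, not the paper's implementation: the paper builds a joint color-depth spherical pyramid from icosahedron subdivision surfaces and propagates depth in an edge-aware manner, whereas this sketch only illustrates the core pull (downsample valid samples) and push (fill holes from coarser levels) idea. The function name `pull_push` and all details are illustrative assumptions.

```python
import numpy as np

def pull_push(depth, mask):
    """Fill holes in a sparse depth map via pull-push interpolation.

    depth: 2D float array of depth values (meaningful where mask is True).
    mask:  2D bool array marking valid (sparse) samples.

    Plain image-grid sketch; the paper instead operates on a spherical
    pyramid from icosahedron subdivision with edge-aware weights.
    Assumes at least one valid sample exists.
    """
    h, w = depth.shape
    if h <= 1 and w <= 1:
        return depth, mask  # coarsest level: nothing left to aggregate

    # Pull: average valid samples into a half-resolution level.
    ph, pw = (h + 1) // 2, (w + 1) // 2
    coarse = np.zeros((ph, pw))
    count = np.zeros((ph, pw))
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                coarse[i // 2, j // 2] += depth[i, j]
                count[i // 2, j // 2] += 1
    coarse_mask = count > 0
    coarse[coarse_mask] /= count[coarse_mask]

    # Recurse so every coarse-level hole is eventually filled from above.
    coarse, coarse_mask = pull_push(coarse, coarse_mask)

    # Push: copy coarse values into the remaining fine-level holes,
    # keeping the original sparse samples untouched.
    out = depth.copy()
    for i in range(h):
        for j in range(w):
            if not mask[i, j]:
                out[i, j] = coarse[i // 2, j // 2]
    return out, np.ones_like(mask)
```

With a single valid sample, the whole map is filled with that value; with several, each hole inherits the value of its nearest covering pyramid cell, which is the coarse-to-fine propagation behavior the abstract exploits for real-time performance.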