DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Jihyun | ko |
dc.contributor.author | Sung, Minhyuk | ko |
dc.contributor.author | Choi, Honggyu | ko |
dc.contributor.author | Kim, Tae-Kyun | ko |
dc.date.accessioned | 2023-11-27T01:01:48Z | - |
dc.date.available | 2023-11-27T01:01:48Z | - |
dc.date.created | 2023-11-24 | - |
dc.date.issued | 2023-06 | - |
dc.identifier.citation | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | - |
dc.identifier.uri | http://hdl.handle.net/10203/315197 | - |
dc.description.abstract | We present Implicit Two Hands (Im2Hands), the first neural implicit representation of two interacting hands. Unlike existing methods on two-hand reconstruction that rely on a parametric hand model and/or low-resolution meshes, Im2Hands can produce fine-grained geometry of two hands with high hand-to-hand and hand-to-image coherency. To handle the shape complexity and interaction context between two hands, Im2Hands models the occupancy volume of two hands – conditioned on an RGB image and coarse 3D keypoints – by two novel attention-based modules responsible for (1) initial occupancy estimation and (2) context-aware occupancy refinement, respectively. Im2Hands first learns per-hand neural articulated occupancy in the canonical space designed for each hand using query-image attention. It then refines the initial two-hand occupancy in the posed space to enhance the coherency between the two hand shapes using query-anchor attention. In addition, we introduce an optional keypoint refinement module to enable robust two-hand shape estimation from predicted hand keypoints in a single-image reconstruction scenario. We experimentally demonstrate the effectiveness of Im2Hands on two-hand reconstruction in comparison to related methods, where ours achieves state-of-the-art results. | - |
dc.language | English | - |
dc.publisher | IEEE | - |
dc.title | Im2Hands: Learning Attentive Implicit Representation of Interacting Two-Hand Shapes | - |
dc.type | Conference | - |
dc.type.rims | CONF | - |
dc.citation.publicationname | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | - |
dc.identifier.conferencecountry | CA | - |
dc.identifier.conferencelocation | Vancouver, BC | - |
dc.identifier.doi | 10.1109/cvpr52729.2023.02028 | - |
dc.contributor.localauthor | Sung, Minhyuk | - |
dc.contributor.nonIdAuthor | Lee, Jihyun | - |
dc.contributor.nonIdAuthor | Choi, Honggyu | - |
dc.contributor.nonIdAuthor | Kim, Tae-Kyun | - |