Im2Hands: Learning Attentive Implicit Representation of Interacting Two-Hand Shapes

DC Field | Value | Language
dc.contributor.author | Lee, Jihyun | ko
dc.contributor.author | Sung, Minhyuk | ko
dc.contributor.author | Choi, Honggyu | ko
dc.contributor.author | Kim, Tae-Kyun | ko
dc.date.accessioned | 2023-11-27T01:01:48Z | -
dc.date.available | 2023-11-27T01:01:48Z | -
dc.date.created | 2023-11-24 | -
dc.date.issued | 2023-06 | -
dc.identifier.citation | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | -
dc.identifier.uri | http://hdl.handle.net/10203/315197 | -
dc.description.abstract | We present Implicit Two Hands (Im2Hands), the first neural implicit representation of two interacting hands. Unlike existing methods on two-hand reconstruction that rely on a parametric hand model and/or low-resolution meshes, Im2Hands can produce fine-grained geometry of two hands with high hand-to-hand and hand-to-image coherency. To handle the shape complexity and interaction context between two hands, Im2Hands models the occupancy volume of two hands – conditioned on an RGB image and coarse 3D keypoints – by two novel attention-based modules responsible for (1) initial occupancy estimation and (2) context-aware occupancy refinement, respectively. Im2Hands first learns per-hand neural articulated occupancy in the canonical space designed for each hand using query-image attention. It then refines the initial two-hand occupancy in the posed space to enhance the coherency between the two hand shapes using query-anchor attention. In addition, we introduce an optional keypoint refinement module to enable robust two-hand shape estimation from predicted hand keypoints in a single-image reconstruction scenario. We experimentally demonstrate the effectiveness of Im2Hands on two-hand reconstruction in comparison to related methods, where ours achieves state-of-the-art results. | -
dc.language | English | -
dc.publisher | IEEE | -
dc.title | Im2Hands: Learning Attentive Implicit Representation of Interacting Two-Hand Shapes | -
dc.type | Conference | -
dc.type.rims | CONF | -
dc.citation.publicationname | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | -
dc.identifier.conferencecountry | CN | -
dc.identifier.conferencelocation | Vancouver, BC | -
dc.identifier.doi | 10.1109/cvpr52729.2023.02028 | -
dc.contributor.localauthor | Sung, Minhyuk | -
dc.contributor.nonIdAuthor | Lee, Jihyun | -
dc.contributor.nonIdAuthor | Choi, Honggyu | -
dc.contributor.nonIdAuthor | Kim, Tae-Kyun | -
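
The abstract describes attention-based occupancy estimation: embedded 3D query points attend to image features (query-image attention) and, in the refinement stage, to per-hand anchor features (query-anchor attention), and the attended features are decoded into an occupancy probability. The following is a rough illustrative sketch only, not the authors' implementation: it uses a single-head, NumPy-only cross-attention with hypothetical shapes and a toy linear-plus-sigmoid occupancy head.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention (single head, no projections).

    queries: (Q, d) embedded 3D query points
    keys/values: (K, d) image-feature (or anchor-feature) tokens
    Returns (Q, d) per-query attended features.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # (Q, K) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ values                         # (Q, d)

def occupancy_head(attended, w, b):
    """Toy occupancy decoder: linear layer + sigmoid -> probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(attended @ w + b)))

# Hypothetical sizes: 5 query points, 16 image tokens, feature dim 8.
rng = np.random.default_rng(0)
d = 8
queries = rng.normal(size=(5, d))
image_tokens = rng.normal(size=(16, d))
feat = cross_attention(queries, image_tokens, image_tokens)
occ = occupancy_head(feat, rng.normal(size=d), 0.0)
```

In the paper's pipeline the same attention pattern is applied twice: once against image features for the initial per-hand occupancy in canonical space, and once against anchor features in posed space to make the two hand shapes mutually coherent; this sketch only shows the generic attention-then-decode step.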
Appears in Collection
CS-Conference Papers
Files in This Item
There are no files associated with this item.
