Im2Hands: Learning Attentive Implicit Representation of Interacting Two-Hand Shapes

We present Implicit Two Hands (Im2Hands), the first neural implicit representation of two interacting hands. Unlike existing methods on two-hand reconstruction that rely on a parametric hand model and/or low-resolution meshes, Im2Hands can produce fine-grained geometry of two hands with high hand-to-hand and hand-to-image coherency. To handle the shape complexity and interaction context between two hands, Im2Hands models the occupancy volume of two hands – conditioned on an RGB image and coarse 3D keypoints – by two novel attention-based modules responsible for (1) initial occupancy estimation and (2) context-aware occupancy refinement, respectively. Im2Hands first learns per-hand neural articulated occupancy in the canonical space designed for each hand using query-image attention. It then refines the initial two-hand occupancy in the posed space to enhance the coherency between the two hand shapes using query-anchor attention. In addition, we introduce an optional keypoint refinement module to enable robust two-hand shape estimation from predicted hand keypoints in a single-image reconstruction scenario. We experimentally demonstrate the effectiveness of Im2Hands on two-hand reconstruction in comparison to related methods, where ours achieves state-of-the-art results.
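The two-stage pipeline the abstract describes (query-image attention for initial per-hand occupancy, then query-anchor attention for context-aware refinement) can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, feature dimensions, and the simple residual-refinement head are all illustrative assumptions; only the overall query-attend-predict structure follows the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # Scaled dot-product attention: each query aggregates context features.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores) @ values

def initial_occupancy(query_feats, image_feats, head_w):
    # Stage 1 (illustrative): a 3D query point, mapped to the hand's
    # canonical space, attends to image features (query-image attention);
    # a small head then predicts an occupancy probability in [0, 1].
    ctx = cross_attention(query_feats, image_feats, image_feats)
    return 1.0 / (1.0 + np.exp(-(ctx @ head_w)))

def refine_occupancy(initial_occ, query_feats, anchor_feats, head_w):
    # Stage 2 (illustrative): in the posed space, queries attend to anchor
    # features drawn from BOTH hands (query-anchor attention), so each
    # hand's occupancy is refined with the other hand as context. The
    # residual correction here is a hypothetical stand-in for the paper's
    # refinement head.
    ctx = cross_attention(query_feats, anchor_feats, anchor_feats)
    residual = np.tanh(ctx @ head_w)
    return np.clip(initial_occ + residual, 0.0, 1.0)

rng = np.random.default_rng(0)
q = rng.normal(size=(5, 8))          # 5 query points, 8-dim features (toy sizes)
img = rng.normal(size=(16, 8))       # 16 image feature tokens
anchors = rng.normal(size=(12, 8))   # anchor features from both hands
w1, w2 = rng.normal(size=(8,)), rng.normal(size=(8,))

occ0 = initial_occupancy(q, img, w1)
occ = refine_occupancy(occ0, q, anchors, w2)
print(occ.shape)  # one occupancy value per query point: (5,)
```

The point of the two stages is that the first predicts each hand's shape independently in its canonical space, while the second conditions on features from both hands so that interacting surfaces stay coherent (e.g. avoiding interpenetration).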
Publisher
IEEE
Issue Date
2023-06
Language
English
Citation

2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

DOI
10.1109/cvpr52729.2023.02028
URI
http://hdl.handle.net/10203/315197
Appears in Collection
CS-Conference Papers(학술회의논문)
