PrimA6D: Rotational Primitive Reconstruction for Enhanced and Robust 6D Pose Estimation

Cited 6 times in Web of Science · Cited 2 times in Scopus
DC Field | Value | Language
dc.contributor.author | Jeon, Myung-Hwan | ko
dc.contributor.author | Kim, Ayoung | ko
dc.date.accessioned | 2021-03-26T02:52:00Z | -
dc.date.available | 2021-03-26T02:52:00Z | -
dc.date.created | 2020-07-27 | -
dc.date.issued | 2020-07 | -
dc.identifier.citation | IEEE ROBOTICS AND AUTOMATION LETTERS, v.5, no.3, pp.4955 - 4962 | -
dc.identifier.issn | 2377-3766 | -
dc.identifier.uri | http://hdl.handle.net/10203/281978 | -
dc.description.abstract | In this letter, we introduce rotational-primitive-prediction-based 6D object pose estimation using a single image as input. We solve for the 6D pose of a known object relative to the camera from a single image, even under occlusion. Many recent state-of-the-art (SOTA) two-step approaches exploit image keypoint extraction followed by PnP regression for pose estimation. Instead of relying on a bounding box or keypoints on the object, we propose to learn an orientation-induced primitive so as to achieve pose estimation accuracy regardless of object size. We leverage a Variational AutoEncoder (VAE) to learn this underlying primitive and its associated keypoints. The keypoints inferred from the reconstructed primitive image are then used to regress the rotation via PnP. Lastly, we compute the translation in a separate localization module to complete the full 6D pose estimation. When evaluated on public datasets, the proposed method yields a notable improvement on the LINEMOD, Occlusion LINEMOD, and YCB-Video datasets. We further provide a synthetic-only training case with performance comparable to existing methods that require real images in the training phase. (A minimal sketch of the PnP rotation step follows this record.) | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | PrimA6D: Rotational Primitive Reconstruction for Enhanced and Robust 6D Pose Estimation | -
dc.type | Article | -
dc.identifier.wosid | 000546883300008 | -
dc.identifier.scopusid | 2-s2.0-85088146709 | -
dc.type.rims | ART | -
dc.citation.volume | 5 | -
dc.citation.issue | 3 | -
dc.citation.beginningpage | 4955 | -
dc.citation.endingpage | 4962 | -
dc.citation.publicationname | IEEE ROBOTICS AND AUTOMATION LETTERS | -
dc.identifier.doi | 10.1109/LRA.2020.3004322 | -
dc.contributor.localauthor | Kim, Ayoung | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Perception for grasping and manipulation | -
dc.subject.keywordAuthor | deep learning for visual perception | -
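
The abstract above describes a two-step pipeline: keypoints inferred from the reconstructed primitive image regress the rotation via PnP, while translation comes from a separate localization module. Below is a minimal, hypothetical sketch of only the PnP rotation step using OpenCV. The primitive geometry, 2D keypoints, and camera intrinsics are illustrative placeholders, not values from the paper, and the PrimA6D networks themselves are not reproduced here.

```python
# Sketch of the second stage of a two-step 6D pose pipeline:
# 2D keypoints (assumed here to come from a reconstructed primitive image)
# are fed to PnP to recover rotation; translation is handled separately.
# All numeric values below are illustrative placeholders.
import numpy as np
import cv2

# 3D keypoints of the primitive in the object frame
# (hypothetical cuboid corners, 10 cm edge length).
object_points = np.array([
    [-0.05, -0.05, -0.05],
    [ 0.05, -0.05, -0.05],
    [ 0.05,  0.05, -0.05],
    [-0.05,  0.05, -0.05],
    [-0.05, -0.05,  0.05],
    [ 0.05, -0.05,  0.05],
    [ 0.05,  0.05,  0.05],
    [-0.05,  0.05,  0.05],
], dtype=np.float64)

# 2D keypoints a primitive-reconstruction network might output (placeholders).
image_points = np.array([
    [320.0, 240.0], [400.0, 238.0], [405.0, 310.0], [318.0, 315.0],
    [330.0, 228.0], [392.0, 226.0], [398.0, 300.0], [328.0, 305.0],
], dtype=np.float64)

# Pinhole camera intrinsics (placeholder focal lengths / principal point).
K = np.array([[572.4,   0.0, 325.3],
              [  0.0, 573.6, 242.0],
              [  0.0,   0.0,   1.0]])

# Solve PnP for the object-to-camera pose; lens distortion assumed zero.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
assert ok
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix from the Rodrigues vector

# In a pipeline like PrimA6D, translation would come from a separate
# localization module; the PnP translation serves as a stand-in here.
t = tvec.reshape(3)
print("Rotation:\n", R)
print("Translation (stand-in):", t)
```

The structural idea is the decoupling: rotation is recovered from primitive keypoints via PnP, so a real implementation would replace the placeholder `image_points` with network outputs and take `t` from the localization module instead.
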
Appears in Collection
CE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.