Category-Level Metric Scale Object Shape and Pose Estimation

Cited 1 time in Web of Science; cited 0 times in Scopus
DC Field / Value
dc.contributor.author: Lee, Taeyeop
dc.contributor.author: Lee, Byeong-Uk
dc.contributor.author: Kim, Myungchul
dc.contributor.author: Kweon, I. S.
dc.date.accessioned: 2021-10-11T04:50:10Z
dc.date.available: 2021-10-11T04:50:10Z
dc.date.created: 2021-10-11
dc.date.issued: 2021-10
dc.identifier.citation: IEEE ROBOTICS AND AUTOMATION LETTERS, v.6, no.4, pp.8575 - 8582
dc.identifier.issn: 2377-3766
dc.identifier.uri: http://hdl.handle.net/10203/288122
dc.description.abstract: Advances in deep learning recognition have led to accurate object detection with 2D images. However, these 2D perception methods are insufficient to capture complete 3D world information. Concurrently, advanced 3D shape estimation approaches focus on the shape itself, without considering metric scale, and therefore cannot determine the accurate location and orientation of objects. To tackle this problem, we propose a framework that jointly estimates a metric scale shape and pose from a single RGB image. Our framework has two branches: the Metric Scale Object Shape branch (MSOS) and the Normalized Object Coordinate Space branch (NOCS). The MSOS branch estimates the metric scale shape observed in the camera coordinates. The NOCS branch predicts the normalized object coordinate space (NOCS) map and performs a similarity transformation with the depth map rendered from the predicted metric scale mesh to obtain the 6D pose and size. Additionally, we introduce the Normalized Object Center Estimation (NOCE) to estimate the geometrically aligned distance from the camera to the object center. We validate our method on both synthetic and real-world datasets, evaluating category-level object pose and shape.
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: Category-Level Metric Scale Object Shape and Pose Estimation
dc.type: Article
dc.identifier.wosid: 000701239400003
dc.identifier.scopusid: 2-s2.0-85114737677
dc.type.rims: ART
dc.citation.volume: 6
dc.citation.issue: 4
dc.citation.beginningpage: 8575
dc.citation.endingpage: 8582
dc.citation.publicationname: IEEE ROBOTICS AND AUTOMATION LETTERS
dc.identifier.doi: 10.1109/LRA.2021.3110538
dc.embargo.liftdate: 9999-12-31
dc.embargo.terms: 9999-12-31
dc.contributor.localauthor: Kweon, I. S.
dc.contributor.nonIdAuthor: Kim, Myungchul
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Robot manipulation
dc.subject.keywordAuthor: augmented reality
dc.subject.keywordAuthor: object shape estimation
dc.subject.keywordAuthor: object pose estimation
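The abstract describes recovering 6D pose and size by applying a similarity transformation between the predicted NOCS map and the depth map rendered from the metric scale mesh. A standard way to solve for such a transform (scale, rotation, translation) between two corresponding point sets is Umeyama's least-squares alignment; the sketch below is a generic illustration of that alignment step, not the paper's actual implementation, and all function and variable names are assumptions.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Least-squares similarity transform (s, R, t) so that dst ~= s * R @ src + t.

    src: (N, 3) source points, e.g. predicted NOCS coordinates.
    dst: (N, 3) target points, e.g. metric points from a rendered depth map.
    """
    n = src.shape[0]
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    # Cross-covariance between centered point sets, then its SVD.
    cov = dst_c.T @ src_c / n
    U, S, Vt = np.linalg.svd(cov)

    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])

    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / n
    s = (S * np.diag(D)).sum() / var_src      # recovered metric scale
    t = mu_dst - s * R @ mu_src               # recovered translation
    return s, R, t
```

The recovered scale gives the object size, and (R, t) give the 6D pose of the normalized object frame in camera coordinates; in practice the correspondences would come from pixels shared by the NOCS map and the rendered depth map.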
Appears in Collection
EE-Journal Papers (Journal Papers)
