Learning 3D local surface descriptor for point cloud images of objects in the real-world

Cited 2 times in Web of Science; cited 0 times in Scopus
DC Field / Value / Language
dc.contributor.author: Seo, Juhwan (ko)
dc.contributor.author: Kwon, Dong-Soo (ko)
dc.date.accessioned: 2019-05-29T02:25:06Z
dc.date.available: 2019-05-29T02:25:06Z
dc.date.created: 2019-05-28
dc.date.issued: 2019-06
dc.identifier.citation: ROBOTICS AND AUTONOMOUS SYSTEMS, v.116, pp.64-79
dc.identifier.issn: 0921-8890
dc.identifier.uri: http://hdl.handle.net/10203/262252
dc.description.abstract: Surface descriptors, which represent the surface characteristics of an image numerically, are fundamental elements in many vision applications. Although traditional surface descriptors that are handcrafted or learned using machine learning techniques have been applied in many different vision applications, difficulty remains in handling large amounts of noise and variance in 3D data. To resolve this difficulty, recent studies have applied deep learning techniques to the development of surface descriptors. Unlike other techniques based on a complete 3D CAD model or pre-known mesh information of the object, we consider the constraint of robotic applications, in which such information is difficult to preload. In this paper, we propose a new 3D surface descriptor that requires neither pre-loaded topological information of the objects nor a mesh construction, which may occasionally fail with new or previously unknown objects. Further, we propose a voxel representation that adapts to the density of the points, resolving the problem of varying densities in point cloud data. Finally, we adopt domain-adversarial learning, which leads the network to learn features that are discriminative for similarity measurement while remaining invariant to different point densities. We gathered approximately 5,000 point-cloud images of objects along with their position and orientation information. We then constructed approximately half a million pairs of point clouds indicating identical and different parts of the objects, labeled as true and false, respectively. The dataset of constructed pairs was used to learn 3D surface descriptors using a Siamese convolutional neural network (SCNN) with a domain-adversarial characteristic. The results indicate that the proposed descriptor outperforms other descriptors. (C) 2019 Published by Elsevier B.V.
dc.language: English
dc.publisher: ELSEVIER SCIENCE BV
dc.title: Learning 3D local surface descriptor for point cloud images of objects in the real-world
dc.type: Article
dc.identifier.wosid: 000466820600005
dc.identifier.scopusid: 2-s2.0-85063468684
dc.type.rims: ART
dc.citation.volume: 116
dc.citation.beginningpage: 64
dc.citation.endingpage: 79
dc.citation.publicationname: ROBOTICS AND AUTONOMOUS SYSTEMS
dc.identifier.doi: 10.1016/j.robot.2019.03.009
dc.contributor.localauthor: Kwon, Dong-Soo
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: 3D local surface descriptor
dc.subject.keywordAuthor: RGB-D sensor
dc.subject.keywordAuthor: Point cloud
dc.subject.keywordAuthor: Convolutional neural network
dc.subject.keywordPlus: REPRESENTATION
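The abstract's density-adaptive voxel representation can be illustrated with a minimal sketch. This is not the paper's formulation: the function name `adaptive_voxelize`, the parameter `target_points_per_voxel`, and the uniform-density heuristic for choosing the cell size are all hypothetical, shown only to convey the idea of scaling the voxel grid to the point cloud's density.

```python
import numpy as np

def adaptive_voxelize(points, target_points_per_voxel=8):
    """Voxelize a point cloud with a cell size that adapts to the
    cloud's density (hypothetical sketch, not the paper's method)."""
    n = len(points)
    mins = points.min(axis=0)
    extent = points.max(axis=0) - mins
    volume = np.prod(np.maximum(extent, 1e-9))
    # Pick the voxel edge length so that, under a uniform-density
    # assumption, each occupied voxel holds roughly
    # target_points_per_voxel points: denser clouds get smaller cells.
    voxel_size = (volume * target_points_per_voxel / n) ** (1.0 / 3.0)
    # Assign each point to an integer voxel index and count occupancy.
    idx = np.floor((points - mins) / voxel_size).astype(np.int64)
    keys, counts = np.unique(idx, axis=0, return_counts=True)
    return voxel_size, keys, counts

# Usage: a denser cloud over the same extent yields a finer grid.
rng = np.random.default_rng(0)
sparse_size, _, _ = adaptive_voxelize(rng.random((1000, 3)))
dense_size, _, _ = adaptive_voxelize(rng.random((8000, 3)))
```

Because the cell size shrinks with density, each voxel carries a comparable amount of surface evidence regardless of how close the sensor was to the object, which is the intuition behind a density-invariant input representation.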
Appears in Collection
ME-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
