Autonomous 3D model generation of unknown objects for dual-manipulator humanoid robots

Cited 0 times in Web of Science · Cited 0 times in Scopus
  • Hit: 51
  • Download: 0
DC Field | Value | Language
dc.contributor.author | Llopart, Adrian | ko
dc.contributor.author | Ravn, Ole | ko
dc.contributor.author | Andersen, Nils A | ko
dc.contributor.author | Kim, Jong-Hwan | ko
dc.date.accessioned | 2023-08-17T05:00:33Z | -
dc.date.available | 2023-08-17T05:00:33Z | -
dc.date.created | 2023-07-07 | -
dc.date.issued | 2019-12 | -
dc.identifier.citation | 5th International Conference on Robot Intelligence Technology and Applications, RiTA 2017, pp.515 - 530 | -
dc.identifier.issn | 2194-5357 | -
dc.identifier.uri | http://hdl.handle.net/10203/311623 | -
dc.description.abstract | This paper proposes a novel approach for the autonomous 3D model generation of unknown objects. A humanoid robot (or any setup with two manipulators) holds the object to be modeled in one hand, views it from different perspectives and registers the depth information using an RGB-D sensor. Occlusions due to the limited movement of the manipulator and to the gripper itself covering the object are avoided by switching the object from one hand to the other, which allows additional viewpoints and therefore the registration of more depth information about the object. The contributions of this paper are as follows: (1) a humanoid robot that manipulates objects and obtains depth information; (2) tracking the hand movements with the robot's head so that the object remains visible at every moment; (3) filtering the point clouds to remove parts of the robot from them; (4) using the Normal Iterative Closest Point algorithm (depth points, surface normals and curvature information) to register point clouds over time, where registration is run on the point clouds that still include the robot's gripper for optimal convergence and the resulting transform is then applied to the point clouds that describe only the segmented object; (5) changing the object from one hand to the other; (6) merging the object's partial point clouds obtained from the left and right hands; (7) generating a mesh of the object by triangulating the final points of the object's surface. No prior knowledge of the objects is necessary, and no human intervention or external help (e.g. visual markers, turntables) is required. | -
dc.language | English | -
dc.publisher | Springer Verlag | -
dc.title | Autonomous 3D model generation of unknown objects for dual-manipulator humanoid robots | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85048229582 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 515 | -
dc.citation.endingpage | 530 | -
dc.citation.publicationname | 5th International Conference on Robot Intelligence Technology and Applications, RiTA 2017 | -
dc.identifier.conferencecountry | KO | -
dc.identifier.conferencelocation | Daejeon | -
dc.identifier.doi | 10.1007/978-3-319-78452-6_41 | -
dc.contributor.localauthor | Kim, Jong-Hwan | -
dc.contributor.nonIdAuthor | Llopart, Adrian | -
dc.contributor.nonIdAuthor | Ravn, Ole | -
dc.contributor.nonIdAuthor | Andersen, Nils A | -
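
The registration step in contribution (4) of the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the paper uses the Normal Iterative Closest Point algorithm (points, surface normals and curvature), while the sketch below substitutes Open3D's point-to-plane ICP as a normal-aware stand-in; the function name register_views, the parameter values and the cloud variable names are illustrative assumptions only.

    # Sketch only: point-to-plane ICP stands in for the paper's Normal ICP.
    import numpy as np
    import open3d as o3d

    def register_views(source_pcd, target_pcd, voxel=0.005, max_dist=0.02):
        """Estimate the 4x4 transform aligning source_pcd onto target_pcd."""
        # Downsample and estimate surface normals, which the
        # point-to-plane objective requires on the target cloud.
        src = source_pcd.voxel_down_sample(voxel)
        tgt = target_pcd.voxel_down_sample(voxel)
        for pcd in (src, tgt):
            pcd.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=4 * voxel, max_nn=30))
        result = o3d.pipelines.registration.registration_icp(
            src, tgt, max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        return result.transformation

    # Per the abstract, the transform is estimated on clouds that still contain
    # the gripper (more overlap, better convergence) and then applied to the
    # segmented, object-only clouds before merging (cloud names are placeholders):
    # T = register_views(cloud_with_gripper_t1, cloud_with_gripper_t0)
    # object_only_t1.transform(T)
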
Appears in Collection
EE-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.
