Online urban object recognition in point clouds using consecutive point information for urban robotic missions

Cited 16 times in Web of Science · Cited 17 times in Scopus
DC Field | Value | Language
dc.contributor.author | Choe, Yungeun | ko
dc.contributor.author | Ahn, Seunguk | ko
dc.contributor.author | Chung, Myung Jin | ko
dc.date.accessioned | 2014-09-02T02:33:49Z | -
dc.date.available | 2014-09-02T02:33:49Z | -
dc.date.created | 2014-08-12 | -
dc.date.issued | 2014-08 | -
dc.identifier.citation | ROBOTICS AND AUTONOMOUS SYSTEMS, v.62, no.8, pp.1130 - 1152 | -
dc.identifier.issn | 0921-8890 | -
dc.identifier.uri | http://hdl.handle.net/10203/189846 | -
dc.description.abstract | Urban object recognition is the ability to categorize ambient objects into several classes, and it plays an important role in various urban robotic missions, such as surveillance, rescue, and SLAM. However, several difficulties arise when previous studies on urban object recognition in point clouds are adopted for robotic missions: offline batch processing, deterministic classification results, and the need for many training examples. The aim of this paper is to propose an urban object recognition algorithm for urban robotic missions with useful properties: online processing, classification results with probabilistic outputs, and training with a few examples based on a generative model. To achieve this, the proposed algorithm utilizes the consecutive point information (CPI) of a 2D LIDAR sensor. This additional information was useful for designing an online algorithm consisting of segmentation and classification. Experimental results show that the proposed algorithm using CPI enhances the applicability of urban object recognition for various urban robotic missions. | -
dc.language | English | -
dc.publisher | ELSEVIER SCIENCE BV | -
dc.subject | LASER DATA | -
dc.subject | CLASSIFICATION | -
dc.subject | SEGMENTATION | -
dc.subject | ENVIRONMENTS | -
dc.subject | TRANSFORM | -
dc.title | Online urban object recognition in point clouds using consecutive point information for urban robotic missions | -
dc.type | Article | -
dc.identifier.wosid | 000338405000004 | -
dc.identifier.scopusid | 2-s2.0-84902174297 | -
dc.type.rims | ART | -
dc.citation.volume | 62 | -
dc.citation.issue | 8 | -
dc.citation.beginningpage | 1130 | -
dc.citation.endingpage | 1152 | -
dc.citation.publicationname | ROBOTICS AND AUTONOMOUS SYSTEMS | -
dc.identifier.doi | 10.1016/j.robot.2014.04.007 | -
dc.contributor.localauthor | Chung, Myung Jin | -
dc.contributor.nonIdAuthor | Ahn, Seunguk | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Urban object recognition | -
dc.subject.keywordAuthor | Online | -
dc.subject.keywordAuthor | Generative model | -
dc.subject.keywordAuthor | LIDAR | -
dc.subject.keywordAuthor | Point cloud | -
dc.subject.keywordAuthor | Urban environment | -
dc.subject.keywordPlus | LASER DATA | -
dc.subject.keywordPlus | CLASSIFICATION | -
dc.subject.keywordPlus | SEGMENTATION | -
dc.subject.keywordPlus | ENVIRONMENTS | -
dc.subject.keywordPlus | TRANSFORM | -
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in Web of Science.