Learning robust perceptive locomotion for quadrupedal robots in the wild

Cited 180 times in Web of Science; cited 0 times in Scopus
  • Hits: 214
  • Downloads: 0
DC Field: Value (Language)
dc.contributor.author: Miki, Takahiro (ko)
dc.contributor.author: Lee, Joonho (ko)
dc.contributor.author: Hwangbo, Jemin (ko)
dc.contributor.author: Wellhausen, Lorenz (ko)
dc.contributor.author: Koltun, Vladlen (ko)
dc.contributor.author: Hutter, Marco (ko)
dc.date.accessioned: 2022-02-08T06:43:51Z
dc.date.available: 2022-02-08T06:43:51Z
dc.date.created: 2022-02-08
dc.date.issued: 2022-01
dc.identifier.citation: SCIENCE ROBOTICS, v.7, no.62
dc.identifier.issn: 2470-9476
dc.identifier.uri: http://hdl.handle.net/10203/292125
dc.description.abstract: Legged robots that can operate autonomously in remote and hazardous environments will greatly increase opportunities for exploration into underexplored areas. Exteroceptive perception is crucial for fast and energy-efficient locomotion: Perceiving the terrain before making contact with it enables planning and adaptation of the gait ahead of time to maintain speed and stability. However, using exteroceptive perception robustly for locomotion has remained a grand challenge in robotics. Snow, vegetation, and water visually appear as obstacles on which the robot cannot step or are missing altogether due to high reflectance. In addition, depth perception can degrade due to difficult lighting, dust, fog, reflective or transparent surfaces, sensor occlusion, and more. For this reason, the most robust and general solutions to legged locomotion to date rely solely on proprioception. This severely limits locomotion speed because the robot has to physically feel out the terrain before adapting its gait accordingly. Here, we present a robust and general solution to integrating exteroceptive and proprioceptive perception for legged locomotion. We leverage an attention-based recurrent encoder that integrates proprioceptive and exteroceptive input. The encoder is trained end to end and learns to seamlessly combine the different perception modalities without resorting to heuristics. The result is a legged locomotion controller with high robustness and speed. The controller was tested in a variety of challenging natural and urban environments over multiple seasons and completed an hour-long hike in the Alps in the time recommended for human hikers.
dc.language: English
dc.publisher: AMER ASSOC ADVANCEMENT SCIENCE
dc.title: Learning robust perceptive locomotion for quadrupedal robots in the wild
dc.type: Article
dc.identifier.wosid: 000745636700003
dc.identifier.scopusid: 2-s2.0-85123560399
dc.type.rims: ART
dc.citation.volume: 7
dc.citation.issue: 62
dc.citation.publicationname: SCIENCE ROBOTICS
dc.identifier.doi: 10.1126/scirobotics.abk2822
dc.contributor.localauthor: Hwangbo, Jemin
dc.contributor.nonIdAuthor: Miki, Takahiro
dc.contributor.nonIdAuthor: Lee, Joonho
dc.contributor.nonIdAuthor: Wellhausen, Lorenz
dc.contributor.nonIdAuthor: Koltun, Vladlen
dc.contributor.nonIdAuthor: Hutter, Marco
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordPlus: ROUGH-TERRAIN LOCOMOTION
dc.subject.keywordPlus: MOBILE ROBOTS
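The abstract describes an attention-based recurrent encoder that learns to blend proprioceptive and exteroceptive inputs, down-weighting exteroception when it is unreliable. A minimal NumPy sketch of that idea, gated modality fusion inside a recurrent update, is shown below; all layer sizes, weight initializations, and names here are hypothetical illustrations, not the paper's actual network:

```python
import numpy as np

class AttentionRecurrentEncoder:
    """Minimal sketch of a recurrent encoder that gates between
    exteroceptive and proprioceptive features. All dimensions and
    weights are hypothetical; the paper's actual architecture differs."""

    def __init__(self, proprio_dim, extero_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 0.1
        self.Wp = rng.normal(0.0, scale, (proprio_dim, hidden_dim))
        self.We = rng.normal(0.0, scale, (extero_dim, hidden_dim))
        self.Wh = rng.normal(0.0, scale, (hidden_dim, hidden_dim))
        # The gate conditions on proprioception and the recurrent state,
        # so the encoder can learn when exteroception is trustworthy.
        self.Wg = rng.normal(0.0, scale, (proprio_dim + hidden_dim, hidden_dim))

    def step(self, h, proprio, extero):
        p = np.tanh(proprio @ self.Wp)   # proprioceptive features
        e = np.tanh(extero @ self.We)    # exteroceptive features
        # attention-style gate in (0, 1): how much to trust exteroception
        gate = 1.0 / (1.0 + np.exp(-(np.concatenate([proprio, h]) @ self.Wg)))
        fused = gate * e + (1.0 - gate) * p   # blend the two modalities
        return np.tanh(h @ self.Wh + fused)   # recurrent state update

# Roll the encoder over a short sequence of random observations.
enc = AttentionRecurrentEncoder(proprio_dim=4, extero_dim=6, hidden_dim=8)
rng = np.random.default_rng(1)
h = np.zeros(8)
for _ in range(10):
    h = enc.step(h, rng.normal(size=4), rng.normal(size=6))
```

In the actual system the gate and weights are trained end to end with the controller, so no hand-written heuristic decides when to fall back to proprioception; this sketch only illustrates the fusion mechanism's shape.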
Appears in Collection
ME-Journal Papers (저널논문, "Journal Papers")
Files in This Item
There are no files associated with this item.