Not Only Rewards but Also Constraints: Applications on Legged Robot Locomotion

DC Field: Value (Language)
dc.contributor.author: Kim, Yunho (ko)
dc.contributor.author: Oh, Hyunsik (ko)
dc.contributor.author: Lee, Jeonghyun (ko)
dc.contributor.author: Choi, Jinhyeok (ko)
dc.contributor.author: Ji, Gwanghyeon (ko)
dc.contributor.author: Jung, Moonkyu (ko)
dc.contributor.author: Youm, Donghoon (ko)
dc.contributor.author: Hwangbo, Jemin (ko)
dc.date.accessioned: 2024-09-05T09:00:26Z
dc.date.available: 2024-09-05T09:00:26Z
dc.date.created: 2024-08-29
dc.date.issued: 2024
dc.identifier.citation: IEEE TRANSACTIONS ON ROBOTICS, v.40, pp.2984 - 3003
dc.identifier.issn: 1552-3098
dc.identifier.uri: http://hdl.handle.net/10203/322698
dc.description.abstract: Several earlier studies have demonstrated impressive control performance in complex robotic systems by designing the controller as a neural network and training it with model-free reinforcement learning. However, these outstanding controllers, with their natural motion style and high task performance, are developed through extensive reward engineering: a laborious and time-consuming process of designing numerous reward terms and determining suitable reward coefficients. In this article, we propose a novel reinforcement learning framework for training neural network controllers for complex robotic systems that uses both rewards and constraints. To let engineers appropriately express their intent through constraints and handle them with minimal computational overhead, two constraint types and an efficient policy optimization algorithm are proposed. The learning framework is applied to train locomotion controllers for several legged robots with different morphologies and physical attributes to traverse challenging terrains. Extensive simulation and real-world experiments demonstrate that performant controllers can be trained with significantly less reward engineering, by tuning only a single reward coefficient. Furthermore, thanks to the interpretability and generalizability of constraints, a more straightforward and intuitive engineering process can be used.
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: Not Only Rewards but Also Constraints: Applications on Legged Robot Locomotion
dc.type: Article
dc.identifier.wosid: 001240051600003
dc.identifier.scopusid: 2-s2.0-85193256463
dc.type.rims: ART
dc.citation.volume: 40
dc.citation.beginningpage: 2984
dc.citation.endingpage: 3003
dc.citation.publicationname: IEEE TRANSACTIONS ON ROBOTICS
dc.identifier.doi: 10.1109/TRO.2024.3400935
dc.contributor.localauthor: Hwangbo, Jemin
dc.contributor.nonIdAuthor: Kim, Yunho
dc.contributor.nonIdAuthor: Oh, Hyunsik
dc.contributor.nonIdAuthor: Lee, Jeonghyun
dc.contributor.nonIdAuthor: Choi, Jinhyeok
dc.contributor.nonIdAuthor: Jung, Moonkyu
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Robots
dc.subject.keywordAuthor: Legged locomotion
dc.subject.keywordAuthor: Reinforcement learning
dc.subject.keywordAuthor: Optimization
dc.subject.keywordAuthor: Neural networks
dc.subject.keywordAuthor: Quadrupedal robots
dc.subject.keywordAuthor: Training
dc.subject.keywordAuthor: Constrained reinforcement learning (RL)
dc.subject.keywordAuthor: legged locomotion
dc.subject.keywordAuthor: RL
Appears in Collection
ME-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.