Distilling a Hierarchical Policy for Planning and Control via Representation and Reinforcement Learning

DC Field | Value | Language
dc.contributor.author | Ha, Jung-Su | ko
dc.contributor.author | Park, Young-Jin | ko
dc.contributor.author | Chae, Hyeok-Joo | ko
dc.contributor.author | Park, Soon-Seo | ko
dc.contributor.author | Choi, Han-Lim | ko
dc.date.accessioned | 2023-09-07T01:01:58Z | -
dc.date.available | 2023-09-07T01:01:58Z | -
dc.date.created | 2023-09-07 | -
dc.date.issued | 2021-05-30 | -
dc.identifier.citation | 2021 IEEE International Conference on Robotics and Automation (ICRA), pp.4459 - 4466 | -
dc.identifier.issn | 1050-4729 | -
dc.identifier.uri | http://hdl.handle.net/10203/312282 | -
dc.description.abstract | We present a hierarchical planning and control framework that enables an agent to perform various tasks and to adapt flexibly to a new task. Rather than learning an individual policy for each particular task, the proposed framework, DISH, distills a hierarchical policy from a set of tasks by representation and reinforcement learning. The framework is based on the idea of latent variable models, which represent high-dimensional observations with low-dimensional latent variables. The resulting policy consists of two levels of hierarchy: (i) a planning module that reasons about a sequence of latent intentions that would lead to an optimistic future, and (ii) a feedback control policy, shared across the tasks, that executes the inferred intention. Because planning is performed in the low-dimensional latent space, the learned policy can immediately be used to solve or adapt to new tasks without additional training. We demonstrate that the proposed framework can learn compact representations (3- and 1-dimensional latent states and commands for a humanoid with 197- and 36-dimensional state features and actions) while solving a small number of imitation tasks, and that the resulting policy is directly applicable to other types of tasks, i.e., navigation in cluttered environments. | -
dc.language | English | -
dc.publisher | IEEE | -
dc.title | Distilling a Hierarchical Policy for Planning and Control via Representation and Reinforcement Learning | -
dc.type | Conference | -
dc.identifier.wosid | 000765738803071 | -
dc.identifier.scopusid | 2-s2.0-85125471487 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 4459 | -
dc.citation.endingpage | 4466 | -
dc.citation.publicationname | 2021 IEEE International Conference on Robotics and Automation (ICRA) | -
dc.identifier.conferencecountry | CC | -
dc.identifier.conferencelocation | Xi'an | -
dc.identifier.doi | 10.1109/icra48506.2021.9561017 | -
dc.contributor.localauthor | Choi, Han-Lim | -
dc.contributor.nonIdAuthor | Ha, Jung-Su | -
dc.contributor.nonIdAuthor | Park, Young-Jin | -
Appears in Collection
AE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
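The abstract describes a two-level hierarchy: a planner that reasons over low-dimensional latent intentions, and a shared feedback policy that turns the current state and the inferred intention into an action. The minimal sketch below illustrates that structure only; it is not the paper's implementation. All class names, the random-shooting planner, the linear maps, and the cost function are hypothetical stand-ins, and the dimensions are taken from the abstract (197-D state, 36-D action, 3-D latent state, 1-D latent command).

```python
import numpy as np

# Dimensions quoted in the abstract; everything else here is a hypothetical sketch.
STATE_DIM, ACTION_DIM = 197, 36
LATENT_STATE_DIM, LATENT_CMD_DIM = 3, 1

rng = np.random.default_rng(0)

class Encoder:
    """Stand-in for the learned representation model: maps a
    high-dimensional observation to a low-dimensional latent state."""
    def __init__(self):
        self.W = rng.standard_normal((LATENT_STATE_DIM, STATE_DIM)) * 0.01

    def __call__(self, state):
        return self.W @ state

class Planner:
    """High level: searches for a sequence of latent intentions (commands)
    in the low-dimensional latent space. Random shooting is used here
    purely for illustration."""
    def __init__(self, horizon=5, n_candidates=64):
        self.horizon, self.n_candidates = horizon, n_candidates

    def plan(self, z, cost_fn):
        cands = rng.standard_normal(
            (self.n_candidates, self.horizon, LATENT_CMD_DIM))
        costs = [cost_fn(z, c) for c in cands]
        return cands[int(np.argmin(costs))]  # lowest-cost command sequence

class SharedPolicy:
    """Low level: feedback control policy shared across tasks, conditioned
    on the full state and the inferred latent intention."""
    def __init__(self):
        self.Ws = rng.standard_normal((ACTION_DIM, STATE_DIM)) * 0.01
        self.Wc = rng.standard_normal((ACTION_DIM, LATENT_CMD_DIM)) * 0.01

    def act(self, state, command):
        return self.Ws @ state + self.Wc @ command

def hierarchical_step(state, encoder, planner, policy, cost_fn):
    z = encoder(state)                     # compress the observation
    commands = planner.plan(z, cost_fn)    # plan in latent space
    return policy.act(state, commands[0])  # execute the first intention
```

Because the planner only ever touches the 3-D/1-D latent quantities, swapping in a new task means swapping in a new `cost_fn`; the encoder and shared policy are reused unchanged, which is the property the abstract highlights.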
