Distilling a Hierarchical Policy for Planning and Control via Representation and Reinforcement Learning

Abstract
We present a hierarchical planning and control framework that enables an agent to perform various tasks and to adapt flexibly to new ones. Rather than learning an individual policy for each task, the proposed framework, DISH, distills a single hierarchical policy from a set of tasks via representation and reinforcement learning. The framework builds on latent variable models, which represent high-dimensional observations with low-dimensional latent variables. The resulting policy has two levels of hierarchy: (i) a planning module that reasons about a sequence of latent intentions leading to an optimistic future, and (ii) a feedback control policy, shared across tasks, that executes each inferred intention. Because planning is performed in the low-dimensional latent space, the learned policy can be applied immediately to solve or adapt to new tasks without additional training. We demonstrate that the framework learns compact representations (3-dimensional latent states and 1-dimensional latent commands for a humanoid with 197-dimensional state features and 36-dimensional actions) while solving a small number of imitation tasks, and that the resulting policy transfers directly to other types of tasks, e.g., navigation in cluttered environments.
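For intuition, the two-level structure described in the abstract can be sketched in code. The Python snippet below is a minimal, hypothetical illustration of the encode-plan-control loop (a 197-D state compressed to a 3-D latent state, a planner producing 1-D latent commands, and a shared feedback policy emitting 36-D actions). The class HierarchicalPolicy, its method names, and the placeholder linear maps are illustrative assumptions, not the paper's implementation.

import numpy as np

# Dimensions from the abstract's humanoid example:
# 197-D state features, 36-D actions, 3-D latent states, 1-D latent commands.
STATE_DIM, ACTION_DIM = 197, 36
LATENT_STATE_DIM, LATENT_CMD_DIM = 3, 1

class HierarchicalPolicy:
    """Two-level policy sketch: encode -> plan in latent space -> control.

    The linear maps below are placeholders; in DISH these components are
    learned via representation and reinforcement learning.
    """

    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = 0.01 * rng.normal(size=(LATENT_STATE_DIM, STATE_DIM))
        self.W_ctrl = 0.01 * rng.normal(
            size=(ACTION_DIM, LATENT_STATE_DIM + LATENT_CMD_DIM))

    def encode(self, state):
        # Compress the high-dimensional observation into a 3-D latent state.
        return self.W_enc @ state

    def plan(self, z, horizon=10):
        # High level: infer a sequence of 1-D latent intentions (commands).
        # A real planner would optimize predicted latent rollouts toward a
        # desirable future; this stub just returns a zero sequence.
        return np.zeros((horizon, LATENT_CMD_DIM))

    def control(self, z, u):
        # Low level: shared feedback policy mapping latent state and latent
        # command to a 36-D action; reused unchanged across tasks.
        return self.W_ctrl @ np.concatenate([z, u])

    def act(self, state):
        z = self.encode(state)
        u_seq = self.plan(z)
        return self.control(z, u_seq[0])  # execute the first planned command

policy = HierarchicalPolicy()
action = policy.act(np.zeros(STATE_DIM))
assert action.shape == (ACTION_DIM,)

Because the planner operates only on the 3-D latent state and 1-D commands, adapting to a new task amounts to changing the planning objective; the shared low-level control policy needs no retraining, which is the property the abstract highlights.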
Publisher
IEEE
Issue Date
2021-05-30
Language
English
Citation
2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 4459-4466
ISSN
1050-4729
DOI
10.1109/icra48506.2021.9561017
URI
http://hdl.handle.net/10203/312282
Appears in Collection
AE-Conference Papers (Conference Papers)