LESSON: Learning to Integrate Exploration Strategies for Reinforcement Learning via an Option Framework

In this paper, a unified framework for exploration in reinforcement learning (RL) is proposed based on an option-critic model. The proposed framework learns to integrate a set of diverse exploration strategies so that the agent can adaptively select the most effective strategy over time, realizing an appropriate exploration-exploitation trade-off for each given task. The effectiveness of the proposed exploration framework is demonstrated through experiments in the MiniGrid and Atari environments.
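The core idea of the abstract — treating each exploration strategy as an option and learning which one to invoke — can be illustrated with a minimal sketch. This is not the paper's implementation (which uses an option-critic architecture); it is a simplified bandit-style stand-in where an option-level epsilon-greedy policy selects among hypothetical exploration strategies based on their running return estimates:

```python
import random

# Minimal sketch (NOT the paper's method): an option-level selector that
# treats each exploration strategy as an "option" and learns, bandit-style,
# which strategy yields the highest return.

class StrategySelector:
    def __init__(self, strategies, lr=0.1, eps=0.2):
        self.strategies = list(strategies)
        self.values = {s: 0.0 for s in strategies}  # running value per strategy
        self.lr = lr    # step size for the value update
        self.eps = eps  # exploration rate of the option-level policy itself

    def select(self):
        # Epsilon-greedy over strategies (stands in for the option policy).
        if random.random() < self.eps:
            return random.choice(self.strategies)
        return max(self.strategies, key=lambda s: self.values[s])

    def update(self, strategy, episode_return):
        # Move the chosen strategy's value toward the observed return.
        self.values[strategy] += self.lr * (episode_return - self.values[strategy])

random.seed(0)
# Hypothetical strategy names and payoffs, chosen only for illustration:
# suppose count-based exploration happens to suit this toy task best.
payoff = {"epsilon_greedy": 0.2, "boltzmann": 0.5, "count_based": 1.0}
sel = StrategySelector(payoff.keys())
for _ in range(500):
    s = sel.select()
    sel.update(s, payoff[s] + random.gauss(0, 0.1))
best = max(sel.values, key=sel.values.get)
```

After enough episodes the selector concentrates on the strategy with the highest noisy return, mirroring the adaptive strategy selection described above at a much coarser granularity.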
Publisher
ML Research Press
Issue Date
2023-07
Language
English
Citation

40th International Conference on Machine Learning, ICML 2023, pp.16619 - 16638

URI
http://hdl.handle.net/10203/315837
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
