Bayesian Optimistic Kullback-Leibler Exploration

Cited 2 times in Web of Science; cited 1 time in Scopus
We consider a Bayesian approach to model-based reinforcement learning, in which the agent uses a distribution over environment models to find the action that optimally trades off exploration and exploitation. Unfortunately, computing the Bayes-optimal solution is intractable except in restricted cases. In this paper, we present BOKLE, a simple algorithm that uses Kullback–Leibler divergence to constrain the set of plausible models for guiding exploration. We provide a formal analysis showing that the algorithm is near Bayes-optimal with high probability. We also show an asymptotic relation between the solution pursued by BOKLE and the well-known Bayesian exploration bonus algorithm. Finally, we present experimental results that clearly demonstrate the exploration efficiency of the algorithm.
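The core idea in the abstract — retaining only those environment models whose KL divergence from a reference model is small — can be illustrated with a minimal sketch. This is not the paper's BOKLE algorithm; the function names, the posterior-mean reference model, and the threshold `epsilon` are all illustrative assumptions for a single (state, action) pair with categorical next-state distributions.

```python
import numpy as np

def kl_categorical(p, q):
    """KL(p || q) between two categorical distributions (illustrative helper)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute 0 to the sum
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def plausible_models(candidates, reference, epsilon):
    """Keep candidate transition models within KL distance epsilon of the
    reference model (a hypothetical stand-in for a posterior-mean model)."""
    return [m for m in candidates if kl_categorical(reference, m) <= epsilon]

# Toy example: three candidate next-state distributions for one (s, a) pair.
mean = [0.5, 0.3, 0.2]
cands = [[0.5, 0.3, 0.2],   # identical to the reference: KL = 0
         [0.4, 0.4, 0.2],   # small perturbation: KL ≈ 0.025
         [0.1, 0.1, 0.8]]   # far from the reference: KL ≈ 0.86
kept = plausible_models(cands, mean, epsilon=0.05)  # keeps the first two
```

An optimistic variant would then pick, among the retained models, the one with the highest optimal value — the KL constraint keeps that optimism statistically plausible.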
Publisher
SPRINGER
Issue Date
2019-05
Language
English
Article Type
Article; Proceedings Paper
Citation

MACHINE LEARNING, v.108, no.5, pp.765 - 783

ISSN
0885-6125
DOI
10.1007/s10994-018-5767-4
URI
http://hdl.handle.net/10203/262791
Appears in Collection
AI-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.