Final Iteration Convergence Bound of Q-Learning: Switching System Approach

Q-learning is one of the fundamental reinforcement learning (RL) algorithms, and its convergence has been the focus of extensive research over the past several decades. Recently, a new finite-time error bound and analysis for Q-learning was introduced using a switching system framework, which views the dynamics of Q-learning as a discrete-time stochastic switching system. The prior study established a finite-time error bound on the averaged iterates using Lyapunov functions, offering further insights into Q-learning. While valuable, that analysis bounds only the averaged iterate, which has inherent disadvantages: it requires extra averaging steps, which can slow convergence. Moreover, the final iterate, being the original form of Q-learning, is more commonly used and is generally regarded as the more intuitive and natural output of iterative algorithms. In this article, we present a finite-time error bound on the final iterate of Q-learning based on the switching system framework. The proposed error bounds differ in character from previous results and cover different scenarios. Finally, we expect that the proposed results provide additional insights into Q-learning through its connection with discrete-time switching systems, and may serve as a template for the finite-time analysis of more general RL algorithms.
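To fix the terminology the abstract relies on, the sketch below contrasts the final iterate of tabular Q-learning with a running (Polyak-style) average of the iterates. This is a minimal illustration, not the paper's construction: the toy MDP, the i.i.d. uniform sampling of state-action pairs, and the constant step size are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch: tabular Q-learning on a random toy MDP, tracking both the
# final iterate Q_k and the running average Qbar_k = (1/k) * sum_{i<=k} Q_i.
# The paper's bound concerns Q_k itself; the prior switching-system analysis
# bounded the averaged iterate Qbar_k.

rng = np.random.default_rng(0)
n_states, n_actions, gamma, alpha = 4, 2, 0.9, 0.1

# Random toy MDP (assumption): transition kernel P[s, a] and rewards R[s, a].
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

Q = np.zeros((n_states, n_actions))   # final iterate
Q_avg = np.zeros_like(Q)              # averaged iterate

for k in range(1, 20001):
    s = rng.integers(n_states)        # i.i.d. state sampling (simplification)
    a = rng.integers(n_actions)       # uniform exploration (simplification)
    s_next = rng.choice(n_states, p=P[s, a])
    # Standard asynchronous Q-learning update on the visited pair (s, a).
    td_target = R[s, a] + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    # Incremental average over all iterates produced so far.
    Q_avg += (Q - Q_avg) / k

print("final iterate Q:\n", Q)
print("averaged iterate Q_avg:\n", Q_avg)
```

The averaged iterate requires the extra accumulation step inside the loop and lags behind the final iterate early in training, which is the disadvantage the abstract points to; the final iterate is simply the table the algorithm already maintains.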
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2024-07
Language
English
Article Type
Article
Citation

IEEE TRANSACTIONS ON AUTOMATIC CONTROL, v.69, no.7, pp.4765 - 4772

ISSN
0018-9286
DOI
10.1109/TAC.2024.3355326
URI
http://hdl.handle.net/10203/322400
Appears in Collection
EE-Journal Papers (Journal Papers)