Distributed Off-Policy Temporal Difference Learning Using Primal-Dual Method

The goal of this paper is to provide theoretical analysis and additional insights into a distributed temporal-difference (TD) learning algorithm for multi-agent Markov decision processes (MDPs) from a saddle-point viewpoint. (Single-agent) TD-learning is a reinforcement learning (RL) algorithm for evaluating a given policy based on reward feedback. In multi-agent settings, multiple RL agents act concurrently, and each agent receives only its own local rewards. The goal of each agent is to evaluate the given policy with respect to the global reward, defined as the average of the local rewards, by sharing learning parameters through random network communications. In this paper, we propose a distributed TD-learning algorithm based on a saddle-point framework, and provide a rigorous finite-time convergence analysis of the algorithm and its solution using tools from optimization theory. The results provide a general and unified perspective on the distributed policy evaluation problem, and theoretically complement previous works.
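As a concrete illustration of the setting described in the abstract, the sketch below assumes linear value-function approximation V(s) ~ phi(s)^T theta and a doubly stochastic mixing matrix W over the agent network: each agent performs a GTD2-style saddle-point TD update using only its local reward, then averages its parameters with its neighbours. The function name, variables (phi, W, alpha, beta, gamma), and step sizes are illustrative assumptions for this sketch, not the specific algorithm analyzed in the paper.

```python
# Illustrative sketch (not the paper's exact algorithm): distributed policy
# evaluation with linear value functions V(s) ~ phi(s)^T theta. Each agent i
# observes only its local reward r_i; the global reward is the average of the
# local rewards. Every agent runs a primal-dual (GTD2-style saddle-point) TD
# update on (theta_i, w_i) and then mixes its parameters with neighbours via a
# doubly stochastic matrix W (consensus step). All names are assumptions.

import numpy as np

def distributed_primal_dual_td(trajectories, W, d, gamma=0.99,
                               alpha=0.01, beta=0.01):
    """trajectories[i][t] = (phi_s, r_i, phi_s_next) for agent i at step t."""
    N = W.shape[0]
    theta = np.zeros((N, d))   # primal variables (value-function weights)
    w = np.zeros((N, d))       # dual variables of the saddle-point problem

    T = len(trajectories[0])
    for t in range(T):
        theta_new = np.zeros_like(theta)
        w_new = np.zeros_like(w)
        for i in range(N):
            phi_s, r_i, phi_s_next = trajectories[i][t]
            # TD error computed with the agent's *local* reward only.
            delta_i = r_i + gamma * phi_s_next @ theta[i] - phi_s @ theta[i]
            # GTD2-style saddle-point updates (illustrative):
            w_new[i] = w[i] + beta * (delta_i - phi_s @ w[i]) * phi_s
            theta_new[i] = theta[i] + alpha * (phi_s - gamma * phi_s_next) * (phi_s @ w[i])
        # Consensus: averaging with neighbours is what lets each agent track
        # the value function of the averaged (global) reward.
        theta = W @ theta_new
        w = W @ w_new
    return theta

# Example usage with random data (2 agents, 4-dim features, 100 steps):
# rng = np.random.default_rng(0)
# traj = [[(rng.normal(size=4), rng.normal(), rng.normal(size=4))
#          for _ in range(100)] for _ in range(2)]
# W = np.full((2, 2), 0.5)   # doubly stochastic mixing matrix
# theta = distributed_primal_dual_td(traj, W, d=4)
```

Intuitively, the local TD step alone would only estimate each agent's own value function; under the usual connectivity assumptions on W, the consensus step is what drives all agents toward a common estimate associated with the averaged reward.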
Publisher
IEEE
Issue Date
2022-10
Language
English
Article Type
Article
Citation

IEEE ACCESS, v.10, pp.107077 - 107094

ISSN
2169-3536
DOI
10.1109/ACCESS.2022.3211395
URI
http://hdl.handle.net/10203/299091
Appears in Collection
EE-Journal Papers (Journal Papers)