Utilizing Skipped Frames in Action Repeats for Improving Sample Efficiency in Reinforcement Learning

Action repeat has become the de facto mechanism in deep reinforcement learning (RL) for stabilizing training and enhancing exploration. Under this mechanism, an action is chosen at a decision point and executed repeatedly for a designated number of steps until the next decision point. Despite its advantages, the mechanism discards the intermediate states produced by the repeated action when training the agent, causing sample inefficiency. Utilizing these discarded states as training data is nontrivial because the action that causes the transitions between them is unavailable. This paper proposes to infer the actions at the intermediate states via an inverse dynamics model. The proposed method is simple and easily incorporated into existing off-policy RL algorithms; integrating it with SAC shows consistent improvement across various tasks.
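The core idea from the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: an inverse dynamics model g(s_t, s_{t+1}) -> a_t is trained on transitions whose actions are known, then used to label the intermediate states skipped by action repeat so they can be stored in an off-policy replay buffer (e.g., SAC's). All names (InverseDynamicsModel, label_skipped_frames), network sizes, and the assumption of continuous states and actions in [-1, 1] are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    """Predicts the action that caused the transition s_t -> s_{t+1}."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # assumes actions in [-1, 1]
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, s_next], dim=-1))

def train_step(model, optimizer, s, a, s_next):
    """One regression step on transitions whose actions are known."""
    loss = nn.functional.mse_loss(model(s, s_next), a)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def label_skipped_frames(model, frames, rewards, dones):
    """Convert the intermediate frames of one action repeat into transitions.

    `frames` holds the K+1 states visited while one action was repeated K
    times; per-step rewards and done flags are assumed available from the
    environment. Returns (s, a_hat, r, s_next, done) tuples ready for a
    replay buffer.
    """
    transitions = []
    for t in range(len(frames) - 1):
        s, s_next = frames[t], frames[t + 1]
        a_hat = model(s, s_next)  # inferred (pseudo) action for this step
        transitions.append((s, a_hat, rewards[t], s_next, dones[t]))
    return transitions
```

In a full agent, the pseudo-labeled transitions returned by label_skipped_frames would be pushed into the same replay buffer that SAC samples from, alongside the ordinary decision-point transitions.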
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Issue Date
2022
Language
English
Article Type
Article
Citation

IEEE Access, vol. 10, pp. 64965-64975

ISSN
2169-3536
DOI
10.1109/access.2022.3182107
URI
http://hdl.handle.net/10203/297080
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.