Approximate dynamic programming strategies and their applicability for process control: A review and future directions

This paper reviews dynamic programming (DP), surveys approximate solution methods for it, and considers their applicability to process control problems. Reinforcement Learning (RL) and Neuro-Dynamic Programming (NDP), which can be viewed as approximate DP techniques, are already established methods for solving difficult multi-stage decision problems in operations research, computer science, and robotics. Owing to the significant disparity in problem formulations and objectives, however, the algorithms and techniques available from these fields are not directly applicable to process control problems, and reformulations based on an accurate understanding of these techniques are needed. We categorize the currently available approximate solution techniques for dynamic programming and identify those most suitable for process control problems. Several open issues are also identified and discussed.
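The abstract centers on the cost-to-go (value) function that DP computes and that RL/NDP methods approximate. As a minimal orientation sketch, with generic notation assumed here rather than taken from the paper, the optimal cost-to-go J^* satisfies the Bellman optimality equation

\[
J^*(x) \;=\; \min_{u \in U(x)} \Big\{ g(x,u) \;+\; \gamma\, \mathbb{E}_{w}\big[\, J^*\!\big(f(x,u,w)\big) \big] \Big\},
\]

where x is the state, u an admissible control, g the stage cost, f the (possibly stochastic) state transition driven by a disturbance w, and \gamma \in (0,1] a discount factor. Approximate DP strategies replace J^* with a parameterized approximation \tilde{J}(x;\theta) fitted from simulation or closed-loop operating data, and the control is then obtained by minimizing the right-hand side with \tilde{J} in place of J^*.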
Publisher
Institute of Control, Automation and Systems Engineers
Issue Date
2004-09
Language
English
Article Type
Review
Citation

International Journal of Control, Automation, and Systems, vol. 2, no. 3, pp. 263-278

ISSN
1598-6446
URI
http://hdl.handle.net/10203/82068
Appears in Collection
CBE-Journal Papers (Journal Papers)