Approximate dynamic programming approach for process control

Cited 44 times in Web of Science; cited 0 times in Scopus
  • Hit : 539
  • Download : 0
DC Field | Value | Language
dc.contributor.author | Lee, Jay H. | ko
dc.contributor.author | Wong, Weechin | ko
dc.date.accessioned | 2013-03-09T00:27:41Z | -
dc.date.available | 2013-03-09T00:27:41Z | -
dc.date.created | 2012-02-06 | -
dc.date.issued | 2010-10 | -
dc.identifier.citation | JOURNAL OF PROCESS CONTROL, v.20, no.9, pp.1038 - 1048 | -
dc.identifier.issn | 0959-1524 | -
dc.identifier.uri | http://hdl.handle.net/10203/94805 | -
dc.description.abstract | We assess the potential of the approximate dynamic programming (ADP) approach for process control, especially as a method to complement the model predictive control (MPC) approach. In the artificial intelligence (AI) and operations research (OR) communities, ADP has recently seen significant activity as an effective method for solving Markov decision processes (MDPs), which represent a class of multi-stage decision problems under uncertainty. Process control problems are similar to MDPs, with the key difference being continuous rather than discrete state and action spaces. In addition, unlike in other popular ADP application areas such as robotics or games, in process control applications the first and foremost concern is the safety and economics of the ongoing operation rather than efficient learning. We explore different options within ADP design, such as the pre-decision vs. post-decision state value function, parametric vs. nonparametric value function approximators, batch-mode vs. continuous-mode learning, and exploration vs. robustness. We argue that ADP possesses great potential, especially for obtaining effective control policies for stochastic constrained nonlinear or linear systems and for continually improving them towards optimality. (C) 2010 Elsevier Ltd. All rights reserved. | -
dc.language | English | -
dc.publisher | ELSEVIER SCI LTD | -
dc.subject | MODEL-PREDICTIVE CONTROL | -
dc.subject | NONLINEAR PROCESSES | -
dc.subject | REACTOR | -
dc.subject | DESIGN | -
dc.subject | ISSUES | -
dc.title | Approximate dynamic programming approach for process control | -
dc.type | Article | -
dc.identifier.wosid | 000282868600008 | -
dc.identifier.scopusid | 2-s2.0-77956439408 | -
dc.type.rims | ART | -
dc.citation.volume | 20 | -
dc.citation.issue | 9 | -
dc.citation.beginningpage | 1038 | -
dc.citation.endingpage | 1048 | -
dc.citation.publicationname | JOURNAL OF PROCESS CONTROL | -
dc.identifier.doi | 10.1016/j.jprocont.2010.06.007 | -
dc.contributor.localauthor | Lee, Jay H. | -
dc.contributor.nonIdAuthor | Wong, Weechin | -
dc.type.journalArticle | Article; Proceedings Paper | -
dc.subject.keywordAuthor | Stochastic process control | -
dc.subject.keywordAuthor | Stochastic dynamic programming | -
dc.subject.keywordAuthor | Approximate dynamic programming | -
dc.subject.keywordAuthor | Dual control | -
dc.subject.keywordAuthor | Constrained control | -
dc.subject.keywordPlus | MODEL-PREDICTIVE CONTROL | -
dc.subject.keywordPlus | NONLINEAR PROCESSES | -
dc.subject.keywordPlus | REACTOR | -
dc.subject.keywordPlus | DESIGN | -
dc.subject.keywordPlus | ISSUES | -
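
The abstract above mentions several ADP design options (parametric value function approximation, batch-mode learning). As an illustration only, and not the authors' implementation, the following Python sketch shows generic fitted value iteration with a quadratic parametric approximator, fit in batch mode by least squares, for an assumed scalar stochastic linear plant; the plant model, cost weights, and all numerical parameters are illustrative assumptions.

# Illustrative sketch (not from the paper): approximate value iteration with a
# parametric (quadratic) value function approximator for an assumed scalar
# stochastic linear plant x_next = a*x + b*u + w, w ~ N(0, sigma^2).
import numpy as np

rng = np.random.default_rng(0)

a, b, sigma = 0.9, 0.5, 0.1            # assumed plant parameters
q, r, gamma = 1.0, 0.1, 0.95           # stage cost q*x^2 + r*u^2, discount factor

def stage_cost(x, u):
    return q * x**2 + r * u**2

def features(x):
    # Quadratic parametric approximator: V(x) ~ theta[0] + theta[1]*x^2
    return np.stack([np.ones_like(x), x**2], axis=-1)

theta = np.zeros(2)                      # value function parameters
states = rng.uniform(-2.0, 2.0, 200)     # sampled (pre-decision) states
actions = np.linspace(-2.0, 2.0, 41)     # coarse action grid
noise = sigma * rng.standard_normal(30)  # fixed Monte Carlo disturbance samples

for sweep in range(50):                  # fitted value-iteration sweeps
    targets = np.empty_like(states)
    for i, x in enumerate(states):
        # Bellman backup: min over u of stage cost + discounted expected V(x_next)
        x_next = a * x + b * actions[:, None] + noise[None, :]   # shape (|U|, |W|)
        v_next = features(x_next) @ theta
        q_vals = stage_cost(x, actions) + gamma * v_next.mean(axis=1)
        targets[i] = q_vals.min()
    # Batch-mode least-squares fit of the parametric approximator to the targets
    Phi = features(states)
    theta, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

def policy(x):
    # Greedy policy induced by the learned approximate value function
    x_next = a * x + b * actions[:, None] + noise[None, :]
    q_vals = stage_cost(x, actions) + gamma * (features(x_next) @ theta).mean(axis=1)
    return actions[q_vals.argmin()]

print("theta =", theta, " u(1.0) =", policy(1.0))

The quadratic feature map corresponds to the parametric-approximator option discussed in the abstract; a nonparametric alternative (e.g., kernel regression over the sampled states) would replace the least-squares fit, and a post-decision-state formulation would instead define the value function on the state reached after the control is applied but before the disturbance is realized.
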
Appears in Collection
CBE-Journal Papers
Files in This Item
There are no files associated with this item.