Strategic Capacity Decision-Making in a Stochastic Manufacturing Environment Using Real-Time Approximate Dynamic Programming

Cited 8 times in Web of Science · Cited 0 times in Scopus
  • Hits: 314
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Pratikakis, Nikolaos E. | ko
dc.contributor.author | Realff, Matthew J. | ko
dc.contributor.author | Lee, JayHyung | ko
dc.date.accessioned | 2013-03-09T05:20:09Z | -
dc.date.available | 2013-03-09T05:20:09Z | -
dc.date.created | 2012-02-06 | -
dc.date.issued | 2010-04 | -
dc.identifier.citation | NAVAL RESEARCH LOGISTICS, v.57, no.3, pp.211 - 224 | -
dc.identifier.issn | 0894-069X | -
dc.identifier.uri | http://hdl.handle.net/10203/95468 | -
dc.description.abstract | In this study, we illustrate a real-time approximate dynamic programming (RTADP) method for solving multistage capacity decision problems in a stochastic manufacturing environment, using an exemplary three-stage manufacturing system with recycle. The system is a moderate-size queuing network that experiences stochastic variations in demand and product yield. The dynamic capacity decision problem is formulated as a Markov decision process (MDP). The proposed RTADP method starts with a set of heuristics and learns a superior-quality solution by interacting with the stochastic system via simulation. The curse of dimensionality associated with DP methods is alleviated by adopting several notions, including an "evolving set of relevant states," for which the value function table is built and updated, an "adaptive action set" for keeping track of attractive action candidates, and a "nonparametric k nearest neighbor averager" for value function approximation. The performance of the learned solution is evaluated against (1) an "ideal" solution derived using a mixed integer programming (MIP) formulation, which assumes full knowledge of the future realized values of the stochastic variables, (2) a myopic heuristic solution, and (3) a sample-path-based rolling-horizon MIP solution. The policy learned through the RTADP method turned out to be superior to the policies of (2) and (3). (C) 2010 Wiley Periodicals, Inc. Naval Research Logistics 57: 211-224, 2010 | -
dc.language | English | -
dc.publisher | WILEY-BLACKWELL | -
dc.title | Strategic Capacity Decision-Making in a Stochastic Manufacturing Environment Using Real-Time Approximate Dynamic Programming | -
dc.type | Article | -
dc.identifier.wosid | 000275776800001 | -
dc.identifier.scopusid | 2-s2.0-77950829394 | -
dc.type.rims | ART | -
dc.citation.volume | 57 | -
dc.citation.issue | 3 | -
dc.citation.beginningpage | 211 | -
dc.citation.endingpage | 224 | -
dc.citation.publicationname | NAVAL RESEARCH LOGISTICS | -
dc.identifier.doi | 10.1002/nav.20384 | -
dc.contributor.localauthor | Lee, JayHyung | -
dc.contributor.nonIdAuthor | Pratikakis, Nikolaos E. | -
dc.contributor.nonIdAuthor | Realff, Matthew J. | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | queueing networks | -
dc.subject.keywordAuthor | approximate dynamic programming | -
dc.subject.keywordAuthor | real-time dynamic programming | -
dc.subject.keywordAuthor | capacity planning | -
dc.subject.keywordPlus | IMPROVING HEURISTIC SOLUTIONS | -
dc.subject.keywordPlus | TRAVELING SALESMAN PROBLEM | -
dc.subject.keywordPlus | ALGORITHMIC FRAMEWORK | -
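The abstract mentions a "nonparametric k nearest neighbor averager" for value function approximation. The following is a minimal illustrative sketch of that general idea, not the authors' implementation: the value of an unvisited state is estimated by averaging the stored values of its k nearest states in the evolving set of relevant states. The state encoding and Euclidean distance metric here are hypothetical choices for the example.

```python
import numpy as np

def knn_value_estimate(query_state, visited_states, values, k=3):
    """Estimate V(query_state) by averaging the stored values of the
    k nearest states (Euclidean distance) in the visited-state set."""
    dists = np.linalg.norm(visited_states - query_state, axis=1)
    nearest = np.argsort(dists)[:k]  # indices of the k closest states
    return float(np.mean(values[nearest]))

# Toy example: states are hypothetical (backlog, capacity) pairs.
states = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 3.0], [5.0, 0.0]])
vals = np.array([10.0, 12.0, 20.0, 4.0])

# Query a state near the first two entries; their values are averaged.
v = knn_value_estimate(np.array([0.5, 1.0]), states, vals, k=2)
```

In an RTADP loop, such an averager would back up simulated costs only for states actually encountered, which is one way the curse of dimensionality described in the abstract can be sidestepped.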
Appears in Collection
CBE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS