Maintenance policy analysis for multi-unit systems with dynamic programming and reinforcement learning

This thesis investigates engineering systems consisting of multiple units subject to gradual performance degradation, which eventually results in catastrophic system failure. Conventionally, system engineers try to reduce system failures by performing periodic maintenance. However, such time-periodic maintenance policies typically consider the state of each unit in isolation and ignore the inter-relationships among the states of different units. Meanwhile, advances in sensing and monitoring technology now make real-time state identification possible. In this thesis, we analyze a state-dependent maintenance policy that accounts for both the states of the individual units and the inter-relationships among them. The maintenance decision process is formulated as a Markov decision process (MDP), and stochastic dynamic programming (DP) is used to find optimal maintenance policies. Building on the insights gained from the MDP/DP approach, a model-free methodology using reinforcement learning (RL) is proposed, and its validity is verified.
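
The abstract does not give the model details, so the sketch below is only a hypothetical illustration of the kind of formulation it describes: a two-unit degradation MDP in which each unit's degradation rate depends on the other unit's state, solved by value iteration (stochastic DP). The number of degradation levels K, the coupling in degrade_prob, the cost figures, and the discount factor GAMMA are assumed placeholders, not values from the thesis. A model-free RL variant would estimate the same quantities from simulated or observed transitions instead of calling the transitions() model directly.

import itertools

K = 4                       # degradation levels per unit (assumed)
STATES = list(itertools.product(range(K), range(K)))
ACTIONS = [0, 1, 2]         # 0 = do nothing, 1 = repair unit 1, 2 = repair unit 2
REPAIR_COST = 5.0           # hypothetical cost figures
FAILURE_COST = 50.0
GAMMA = 0.95                # discount factor

def degrade_prob(own, other):
    """Probability that a unit degrades one level this period.
    Assumed to grow with the other unit's level (the inter-relationship)."""
    return min(1.0, 0.2 + 0.1 * other) if own < K - 1 else 0.0

def transitions(state, action):
    """All (probability, next_state, cost) outcomes of one decision epoch."""
    s1, s2 = state
    cost = 0.0
    if action == 1:
        cost, s1 = cost + REPAIR_COST, 0
    elif action == 2:
        cost, s2 = cost + REPAIR_COST, 0
    p1, p2 = degrade_prob(s1, s2), degrade_prob(s2, s1)
    outcomes = []
    for d1, d2 in itertools.product([0, 1], [0, 1]):
        p = (p1 if d1 else 1 - p1) * (p2 if d2 else 1 - p2)
        if p == 0.0:
            continue
        n1, n2 = min(s1 + d1, K - 1), min(s2 + d2, K - 1)
        c = cost + (FAILURE_COST if K - 1 in (n1, n2) else 0.0)
        outcomes.append((p, (n1, n2), c))
    return outcomes

# Value iteration: iterate the Bellman update on expected discounted cost.
V = {s: 0.0 for s in STATES}
for _ in range(500):
    V = {s: min(sum(p * (c + GAMMA * V[ns]) for p, ns, c in transitions(s, a))
                for a in ACTIONS)
         for s in STATES}

# Greedy (state-dependent) maintenance policy extracted from V.
policy = {s: min(ACTIONS, key=lambda a: sum(p * (c + GAMMA * V[ns])
                                            for p, ns, c in transitions(s, a)))
          for s in STATES}
print(policy[(2, 3)])       # recommended action when unit 2 has already failed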
Advisors
Jang, Young Jae
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST), Department of Industrial and Systems Engineering, 2020.8, [iv, 39 p.]

Keywords

Degradation; Multiple unit systems; Maintenance; Dynamic programming; Reinforcement learning

URI
http://hdl.handle.net/10203/284900
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925050&flag=dissertation
Appears in Collection
IE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
