Application of multi-agent Markov Decision Process to operational planning of energy grid

A smart grid is an intelligent energy grid that combines advanced information sharing with energy sources such as renewables in the network. Flexible management of smart grids is increasingly required for more efficient energy generation and distribution. The Markov Decision Process (MDP) has been widely applied to such planning problems. Since several spatially distributed energy generation and dispatch units coexist in a smart grid, a multi-agent MDP (MMDP), which extends the single-agent MDP to multiple agents, can be an improved approach in terms of reliability and scalability. However, when a system involves uncertainty, such as solar and wind power generation, solving the planning problem with dynamic programming is computationally infeasible. Instead, it can be solved by Reinforcement Learning (RL). In this study, the planning of a microgrid, which aims to supply energy and match demand, is solved using the concepts of MMDP and RL. The target microgrid draws energy from renewable generation units and is also connected to a main grid, with which it exchanges surplus or deficit energy. The results of solving this problem with single-agent and multi-agent formulations are compared.
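The abstract does not give the paper's exact formulation, but the single-agent RL approach it describes can be sketched with tabular Q-learning on a toy microgrid. All state/action spaces, reward terms, and parameters below are illustrative assumptions, not the authors' actual model: the state is a discretized (battery level, demand level) pair, the actions are charging storage, discharging to meet demand, or trading with the main grid, and renewable output is random to reflect solar/wind uncertainty.

```python
import random

# Illustrative sketch only: a tabular Q-learning agent for a toy microgrid.
# The discretization, reward shaping, and transaction cost are assumptions.

random.seed(0)

BATTERY_LEVELS = range(5)                    # discretized storage state 0..4
DEMAND_LEVELS = range(3)                     # discretized demand level 0..2
ACTIONS = ("charge", "discharge", "trade")   # trade = exchange with main grid

# Q-table over all (state, action) pairs
Q = {((b, d), a): 0.0
     for b in BATTERY_LEVELS for d in DEMAND_LEVELS for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.2       # assumed learning parameters

def step(state, action):
    """Stochastic transition: renewable output is random (generation uncertainty)."""
    battery, demand = state
    renewable = random.randint(0, 2)         # uncertain solar/wind generation
    if action == "charge":
        battery = min(battery + renewable, 4)
        reward = -demand                     # demand goes unmet locally
    elif action == "discharge":
        supplied = min(battery, demand)
        battery -= supplied
        reward = supplied - (demand - supplied)
    else:  # trade: buy the shortfall from / sell the surplus to the main grid
        reward = renewable - demand - 0.5    # 0.5 = assumed transaction cost
    next_demand = random.choice(list(DEMAND_LEVELS))
    return (battery, next_demand), reward

state = (2, 1)
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # standard Q-learning update
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state
```

In the multi-agent (MMDP) setting the paper compares against, each generation or dispatch unit would maintain its own policy over a shared or local state, which is what makes the approach attractive for reliability and scalability.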
Publisher
KIChE
Issue Date
2018-10-26
Citation

2018 KIChE Fall Meeting

URI
http://hdl.handle.net/10203/272684
Appears in Collection
CBE-Conference Papers(학술회의논문)
