Deep-learning- and reinforcement-learning-based profitable strategy of a grid-level energy storage system for the smart grid

Cited 15 times in Web of Science · Cited 0 times in Scopus
  • Hits: 363
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Han, Gwangwoo | ko
dc.contributor.author | Lee, Sanghun | ko
dc.contributor.author | Lee, Jaemyung | ko
dc.contributor.author | Lee, Kangyong | ko
dc.contributor.author | Bae, Joongmyeon | ko
dc.date.accessioned | 2021-08-03T00:50:08Z | -
dc.date.available | 2021-08-03T00:50:08Z | -
dc.date.created | 2021-08-03 | -
dc.date.issued | 2021-09 | -
dc.identifier.citation | JOURNAL OF ENERGY STORAGE, v.41 | -
dc.identifier.issn | 2352-152X | -
dc.identifier.uri | http://hdl.handle.net/10203/286954 | -
dc.description.abstract | A profitable operation strategy for an energy storage system (ESS) could play a pivotal role in the smart grid by balancing electricity supply with demand. Here, we propose a novel AI-based arbitrage strategy to maximize operating profit in an electricity market composed of a grid operator (GO), an ESS, and customers (CUs). This strategy, which buys and sells electricity to profit from price imbalances, can also shift the peak load from on-peak to off-peak, a win-win approach for both the ESS operator (EO) and the GO. In particular, to maximize the EO's profit and further reduce the GO's on-peak power, we introduce a stimulus-integrated arbitrage algorithm that provides an additional reward to the EO from the GO, with different weights for each peak period. The algorithm consists of two parts: the first is recurrent neural network-based deep learning, which overcomes the future uncertainty of electricity prices and load demands; the second is reinforcement learning, which derives the optimal charging or discharging policy considering the grid peak states, the EO's profit, and the CUs' load demand. Notably, the suggested approach increases operating profit 2.4-fold and decreases the GO's on-peak power by 30%. | -
dc.language | English | -
dc.publisher | ELSEVIER | -
dc.title | Deep-learning- and reinforcement-learning-based profitable strategy of a grid-level energy storage system for the smart grid | -
dc.type | Article | -
dc.identifier.wosid | 000674626500011 | -
dc.identifier.scopusid | 2-s2.0-85109466142 | -
dc.type.rims | ART | -
dc.citation.volume | 41 | -
dc.citation.publicationname | JOURNAL OF ENERGY STORAGE | -
dc.identifier.doi | 10.1016/j.est.2021.102868 | -
dc.contributor.localauthor | Bae, Joongmyeon | -
dc.contributor.nonIdAuthor | Han, Gwangwoo | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | AI | -
dc.subject.keywordAuthor | Deep learning | -
dc.subject.keywordAuthor | Reinforcement learning | -
dc.subject.keywordAuthor | Recurrent neural network | -
dc.subject.keywordAuthor | Energy storage system | -
dc.subject.keywordAuthor | Smart grid | -
dc.subject.keywordPlus | DEMAND RESPONSE | -
dc.subject.keywordPlus | POWER-PLANTS | -
dc.subject.keywordPlus | BATTERY | -
dc.subject.keywordPlus | ARBITRAGE | -
dc.subject.keywordPlus | FUTURE | -
dc.subject.keywordPlus | MODEL | -
dc.subject.keywordPlus | WIND | -
dc.subject.keywordPlus | CELL | -
dc.subject.keywordPlus | ESS | -
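
The abstract's first component is recurrent neural network-based forecasting of electricity prices and load demands. The paper's actual architecture is not reproduced in this record, so the following is a minimal sketch of that idea in PyTorch: a GRU that maps a week of hourly (price, load) history to a 24-hour forecast. The lookback window, layer sizes, and synthetic training data are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch (not the authors' code): a GRU forecaster for day-ahead
# electricity price and load, per the abstract's RNN-based deep learning
# component. All hyperparameters and data here are assumptions.
import torch
import torch.nn as nn

class PriceLoadForecaster(nn.Module):
    def __init__(self, n_features=2, hidden=64, horizon=24):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon * n_features)
        self.horizon, self.n_features = horizon, n_features

    def forward(self, x):              # x: (batch, lookback, 2)
        _, h = self.gru(x)             # h: (1, batch, hidden), last hidden state
        out = self.head(h.squeeze(0))  # (batch, horizon * 2)
        return out.view(-1, self.horizon, self.n_features)

# Toy training loop on synthetic (price, load) sequences.
model = PriceLoadForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(32, 168, 2)           # one week of hourly history per sample
y = torch.randn(32, 24, 2)            # next-day (price, load) targets
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```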
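The second component is reinforcement learning of the charging/discharging policy, with a stimulus reward from the GO for on-peak discharging. The paper's RL formulation is not given in this record, so the sketch below swaps in plain tabular Q-learning over a (hour, state-of-charge) state space with a hypothetical flat on-peak stimulus; the tariff curve, discretization, and stimulus weight are all assumptions made for illustration.

```python
# Minimal sketch (assumptions throughout): tabular Q-learning for an
# arbitrage charge/discharge policy. Reward = arbitrage profit plus a
# hypothetical "stimulus" bonus for discharging during on-peak hours,
# echoing the abstract's stimulus-integrated arbitrage algorithm.
import numpy as np

rng = np.random.default_rng(0)
HOURS, SOC_LEVELS = 24, 11             # SoC discretized from 0% to 100%
ACTIONS = (-1, 0, +1)                  # discharge, idle, charge (one SoC step)
price = 40 + 30 * np.sin(np.linspace(0, 2 * np.pi, HOURS))  # toy tariff
stimulus = np.where(price > 60, 15.0, 0.0)  # assumed on-peak bonus from the GO

Q = np.zeros((HOURS, SOC_LEVELS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(h, soc, a_idx):
    a = ACTIONS[a_idx]
    soc2 = int(np.clip(soc + a, 0, SOC_LEVELS - 1))
    moved = soc2 - soc                 # energy actually moved this hour
    r = -moved * price[h]              # pay to charge, earn by selling
    if moved < 0:                      # discharging on-peak earns the stimulus
        r += stimulus[h]
    return (h + 1) % HOURS, soc2, r

for episode in range(2000):
    h, soc = 0, SOC_LEVELS // 2
    for _ in range(HOURS):             # epsilon-greedy exploration
        a = rng.integers(3) if rng.random() < eps else int(Q[h, soc].argmax())
        h2, soc2, r = step(h, soc, a)
        Q[h, soc, a] += alpha * (r + gamma * Q[h2, soc2].max() - Q[h, soc, a])
        h, soc = h2, soc2

greedy = Q.argmax(axis=2)              # (hour, SoC) -> best action table
```

With this toy tariff the learned policy charges in the price trough and discharges at the peak, which is the peak-shifting behavior the abstract describes; the paper itself additionally conditions on forecast uncertainty and per-period stimulus weights.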
Appears in Collection
ME-Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.