An adaptive dual-level reinforcement learning approach for optimal trade execution

DC Field | Value | Language
dc.contributor.author | Kim, Soohan | ko
dc.contributor.author | Kim, Jimyeong | ko
dc.contributor.author | Sul, Hong Kee | ko
dc.contributor.author | Hong, Youngjoon | ko
dc.date.accessioned | 2024-09-05T07:00:06Z | -
dc.date.available | 2024-09-05T07:00:06Z | -
dc.date.created | 2024-08-29 | -
dc.date.issued | 2024-10 | -
dc.identifier.citation | EXPERT SYSTEMS WITH APPLICATIONS, v.252 | -
dc.identifier.issn | 0957-4174 | -
dc.identifier.uri | http://hdl.handle.net/10203/322645 | -
dc.description.abstract | The purpose of this research is to devise a tactic that can closely track the daily cumulative volume-weighted average price (VWAP) using reinforcement learning while minimizing the deviation from the VWAP. Previous studies often choose a relatively short trading horizon to implement their models, making it difficult to accurately track the daily cumulative VWAP, since stock price movement is often insignificant within a short trading horizon. On the other hand, training reinforcement learning models directly over a longer, daily horizon is burdensome due to the extensive sequence length. Hence, there is a need for a method that can divide the long daily horizon into smaller, more manageable segments. We propose a method that leverages the U-shaped pattern of intraday stock trade volumes and uses Proximal Policy Optimization (PPO) as the learning algorithm. Our method follows a dual-level approach: a Transformer model that captures the overall (global) U-shaped distribution of daily volumes, and an LSTM model that handles the distribution of orders within smaller (local) time intervals. The results from our experiments suggest that this dual-level architecture improves cumulative VWAP tracking accuracy compared to previous reinforcement learning approaches. The key finding is that explicitly accounting for the U-shaped intraday volume pattern leads to better performance in approximating the cumulative daily VWAP. This has implications for developing trading strategies that need to efficiently track VWAP over a full trading day. | -
dc.language | English | -
dc.publisher | PERGAMON-ELSEVIER SCIENCE LTD | -
dc.title | An adaptive dual-level reinforcement learning approach for optimal trade execution | -
dc.type | Article | -
dc.identifier.wosid | 001245090900001 | -
dc.identifier.scopusid | 2-s2.0-85193754510 | -
dc.type.rims | ART | -
dc.citation.volume | 252 | -
dc.citation.publicationname | EXPERT SYSTEMS WITH APPLICATIONS | -
dc.identifier.doi | 10.1016/j.eswa.2024.124263 | -
dc.contributor.localauthor | Hong, Youngjoon | -
dc.contributor.nonIdAuthor | Kim, Soohan | -
dc.contributor.nonIdAuthor | Kim, Jimyeong | -
dc.contributor.nonIdAuthor | Sul, Hong Kee | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Volume-weighted average price | -
dc.subject.keywordAuthor | Reinforcement learning | -
dc.subject.keywordAuthor | Optimal trade execution | -
dc.subject.keywordAuthor | Proximal policy optimization | -
dc.subject.keywordAuthor | Markov decision process | -
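The abstract centers on tracking the daily cumulative VWAP, i.e. the running volume-weighted average price over all trades so far in the day. As a minimal sketch of that quantity (illustrative only; the prices and volumes below are hypothetical, and this is not the authors' implementation), it can be computed as:

```python
def cumulative_vwap(prices, volumes):
    """Running volume-weighted average price after each trade.

    Each entry i is sum(p*v for first i+1 trades) / sum(v for first i+1 trades).
    """
    total_pv = 0.0  # cumulative price * volume
    total_v = 0.0   # cumulative volume
    out = []
    for p, v in zip(prices, volumes):
        total_pv += p * v
        total_v += v
        out.append(total_pv / total_v)
    return out

# Hypothetical intraday trades:
prices = [100.0, 101.0, 99.5, 100.5]
volumes = [500, 300, 200, 1000]
print(cumulative_vwap(prices, volumes))
# → [100.0, 100.375, 100.2, 100.35]
```

An execution strategy of the kind described in the abstract would aim to place its own orders so that its realized average fill price stays close to this running benchmark over the full trading day.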
Appears in Collection
MA-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
