DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Soohan | ko |
dc.contributor.author | Kim, Jimyeong | ko |
dc.contributor.author | Sul, Hong Kee | ko |
dc.contributor.author | Hong, Youngjoon | ko |
dc.date.accessioned | 2024-09-05T07:00:06Z | - |
dc.date.available | 2024-09-05T07:00:06Z | - |
dc.date.created | 2024-08-29 | - |
dc.date.issued | 2024-10 | - |
dc.identifier.citation | EXPERT SYSTEMS WITH APPLICATIONS, v.252 | - |
dc.identifier.issn | 0957-4174 | - |
dc.identifier.uri | http://hdl.handle.net/10203/322645 | - |
dc.description.abstract | The purpose of this research is to devise a tactic that can closely track the daily cumulative volume-weighted average price (VWAP) using reinforcement learning while minimizing the deviation from the VWAP. Previous studies often choose a relatively short trading horizon to implement their models, making it difficult to accurately track the daily cumulative VWAP since the stock price movement is often insignificant within the short trading horizon. On the other hand, training reinforcement learning models directly over a longer, daily horizon is burdensome due to extensive sequence length. Hence, there is a need for a method that can divide the long daily horizon into smaller, more manageable segments. We propose a method that leverages the U-shaped pattern of intraday stock trade volumes and uses Proximal Policy Optimization (PPO) as the learning algorithm. Our method follows a dual-level approach: a Transformer model that captures the overall (global) distribution of daily volumes in a U-shape, and an LSTM model that handles the distribution of orders within smaller (local) time intervals. The results from our experiments suggest that this dual-level architecture improves cumulative VWAP tracking accuracy compared to previous reinforcement learning approaches. The key finding is that explicitly accounting for the U-shaped intraday volume pattern leads to better performance in approximating the cumulative daily VWAP. This has implications for developing trading strategies that need to efficiently track VWAP over a full trading day. | - |
dc.language | English | - |
dc.publisher | PERGAMON-ELSEVIER SCIENCE LTD | - |
dc.title | An adaptive dual-level reinforcement learning approach for optimal trade execution | - |
dc.type | Article | - |
dc.identifier.wosid | 001245090900001 | - |
dc.identifier.scopusid | 2-s2.0-85193754510 | - |
dc.type.rims | ART | - |
dc.citation.volume | 252 | - |
dc.citation.publicationname | EXPERT SYSTEMS WITH APPLICATIONS | - |
dc.identifier.doi | 10.1016/j.eswa.2024.124263 | - |
dc.contributor.localauthor | Hong, Youngjoon | - |
dc.contributor.nonIdAuthor | Kim, Soohan | - |
dc.contributor.nonIdAuthor | Kim, Jimyeong | - |
dc.contributor.nonIdAuthor | Sul, Hong Kee | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Volume-weighted average price | - |
dc.subject.keywordAuthor | Reinforcement learning | - |
dc.subject.keywordAuthor | Optimal trade execution | - |
dc.subject.keywordAuthor | Proximal policy optimization | - |
dc.subject.keywordAuthor | Markov decision process | - |
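The abstract's target quantity, the daily cumulative VWAP, is simple to state concretely. The sketch below is not from the paper; it is a minimal illustration of computing a running cumulative VWAP over intraday intervals, with placeholder prices and a U-shaped volume profile (heavy at the open and close, light midday) as assumed inputs.

```python
def cumulative_vwap(prices, volumes):
    """Running volume-weighted average price after each intraday interval."""
    vwaps = []
    pv_sum = 0.0   # cumulative sum of price * volume
    vol_sum = 0.0  # cumulative traded volume
    for p, v in zip(prices, volumes):
        pv_sum += p * v
        vol_sum += v
        vwaps.append(pv_sum / vol_sum)
    return vwaps

# Illustrative U-shaped intraday volumes and placeholder prices.
volumes = [500, 300, 150, 100, 150, 300, 500]
prices = [10.0, 10.1, 10.2, 10.15, 10.1, 10.05, 10.0]

daily_vwap = cumulative_vwap(prices, volumes)[-1]
print(daily_vwap)  # the full-day cumulative VWAP a strategy would track
```

An execution strategy of the kind the abstract describes would aim to keep the average price of its own fills close to this running benchmark at every interval, which is why the U-shaped volume weighting matters.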
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.