Market Making under Order Stacking Framework: A Deep Reinforcement Learning Approach

DC Field: Value
dc.contributor.author: Chung, GuHyuk
dc.contributor.author: Chung, Munki
dc.contributor.author: Lee, Yongjae
dc.contributor.author: Kim, Woo Chang
dc.date.accessioned: 2023-09-12T12:00:32Z
dc.date.available: 2023-09-12T12:00:32Z
dc.date.created: 2023-09-12
dc.date.issued: 2022-11
dc.identifier.citation: 3rd ACM International Conference on AI in Finance, ICAIF 2022, pp. 223-231
dc.identifier.uri: http://hdl.handle.net/10203/312493
dc.description.abstract: Market making is one of the most popular high-frequency trading strategies: a market maker continuously quotes on both the bid and ask sides of the limit order book to profit from capturing the bid-ask spread and to provide liquidity to the market. A market maker must consider three types of risk: 1) inventory risk, 2) adverse selection risk, and 3) non-execution risk. While there have been many studies on market making via deep reinforcement learning, most focus on the first risk. In highly competitive markets, however, the latter two risks are crucial for earning a stable profit from market making, and controlling them requires securing good queue positions for the resting limit orders. For this purpose, practitioners frequently adopt an order stacking framework, in which limit orders are quoted at multiple price levels beyond the best limit price. To the best of our knowledge, no prior study has adopted an order stacking framework for market making. We therefore develop a deep reinforcement learning model for market making under an order stacking framework, using a modified state representation that efficiently encodes the queue positions of the resting limit orders. A comprehensive ablation study shows that, by utilizing deep reinforcement learning, a market making agent under the order stacking framework successfully learns to improve P&L while reducing various risks. For training and testing, we use complete limit order book data of KOSPI 200 Index Futures from November 1, 2019 to January 31, 2020, comprising 61 trading days.
dc.language: English
dc.publisher: Association for Computing Machinery, Inc.
dc.title: Market Making under Order Stacking Framework: A Deep Reinforcement Learning Approach
dc.type: Conference
dc.identifier.scopusid: 2-s2.0-85142529974
dc.type.rims: CONF
dc.citation.beginningpage: 223
dc.citation.endingpage: 231
dc.citation.publicationname: 3rd ACM International Conference on AI in Finance, ICAIF 2022
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: New York
dc.identifier.doi: 10.1145/3533271.3561789
dc.contributor.localauthor: Kim, Woo Chang
dc.contributor.nonIdAuthor: Lee, Yongjae
Appears in Collection
IE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
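The abstract describes a modified state representation that encodes the queue positions of resting limit orders stacked at multiple price levels. The paper's actual encoding is not given in this record, but the idea could be sketched as follows; the function name, the (level offset, queue ratio) feature pair, and the padding scheme are all assumptions for illustration, not the authors' design.

```python
# Hypothetical sketch: encode resting bid orders under an order stacking
# framework as a fixed-length feature vector for an RL agent. Each order
# contributes two features: its price-level offset from the best bid (in
# ticks) and the fraction of the queue ahead of it at that level.

def encode_order_state(resting_orders, best_bid, tick_size=0.05, max_levels=5):
    """resting_orders: list of dicts with keys 'price', 'queue_ahead',
    'queue_total'. Returns a flat vector of length 2 * max_levels,
    padded with -1.0 for unused slots."""
    features = [-1.0] * (2 * max_levels)
    # Process orders from the best (highest) bid price downward.
    ordered = sorted(resting_orders, key=lambda o: -o['price'])[:max_levels]
    for i, order in enumerate(ordered):
        # Offset from the best bid in ticks (0 = resting at the best bid).
        level = round((best_bid - order['price']) / tick_size)
        # Fraction of the queue ahead of our order (0.0 = front of queue).
        queue_ratio = order['queue_ahead'] / max(order['queue_total'], 1)
        features[2 * i] = float(level)
        features[2 * i + 1] = queue_ratio
    return features
```

A fixed-length vector like this keeps the network input size constant regardless of how many levels the agent is currently stacked on, which is one plausible reason an order-stacking agent needs a modified state representation in the first place.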
