Open Access

ARTICLE

Trading in Fast-Changing Markets with Meta-Reinforcement Learning

Yutong Tian1, Minghan Gao2, Qiang Gao1,*, Xiao-Hong Peng3

1 School of Electronic and Information Engineering, Beihang University, Beijing, 100191, China
2 School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, 100191, China
3 Faculty of Computing, Engineering and the Built Environment, Birmingham City University, Birmingham, B5 5JU, UK

* Corresponding Author: Qiang Gao.

Intelligent Automation & Soft Computing 2024, 39(2), 175-188. https://doi.org/10.32604/iasc.2024.042762

Abstract

Finding an effective trading policy remains an open question, mainly due to the nonlinear and non-stationary dynamics of financial markets. Deep reinforcement learning, which has recently been used to develop trading strategies by automatically extracting complex features from large amounts of data, struggles in fast-changing markets because of sample inefficiency. This paper applies meta-reinforcement learning, for the first time, to tackle the trading challenges that conventional reinforcement learning (RL) approaches face in non-stationary markets. In our work, the historical trading data is divided into multiple task datasets, within each of which the market condition is relatively stationary. A model-agnostic meta-learning (MAML)-based trading method involving a meta-learner and a normal learner is then proposed. A trading policy is learned by the meta-learner across the multiple task datasets and is then fine-tuned by the normal learner on a small amount of data from a new market task before trading in that market. To improve the adaptability of the MAML-based method, an ordered multiple-step updating mechanism is also proposed to track the changing dynamics within a task market. Simulation results demonstrate that, compared to the traditional RL approach, the proposed MAML-based trading methods increase the annualized return rate by approximately 180%, 200%, and 160%, increase the Sharpe ratio by 180%, 90%, and 170%, and decrease the maximum drawdown by 30%, 20%, and 40% in three stock index futures markets, respectively.
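The full method is described in the paper itself; the following is only a minimal first-order MAML (FOMAML) sketch of the meta-train/fine-tune loop summarized in the abstract. The synthetic task data, the surrogate loss (negative average PnL of a linear position rule), and the step sizes alpha and beta are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# First-order MAML (FOMAML) sketch: meta-train a linear trading policy
# across tasks carved from history, then fine-tune on a new task.
# Everything here is synthetic and assumed for illustration only.

rng = np.random.default_rng(0)

def make_task(n=200, d=5):
    """Synthetic 'task': features X and next-step returns r with a
    task-specific (locally stationary) linear relation."""
    w_true = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    r = X @ w_true * 0.01 + rng.normal(scale=0.005, size=n)
    return X, r

def loss_and_grad(theta, X, r):
    """Negative average PnL of position tanh(X @ theta); analytic grad."""
    pos = np.tanh(X @ theta)
    loss = -(pos * r).mean()
    # d loss / d theta = -mean(r * (1 - tanh^2) * x)
    grad = -((r * (1 - pos**2))[:, None] * X).mean(axis=0)
    return loss, grad

d, alpha, beta = 5, 0.5, 0.1    # inner / outer step sizes (assumed)
theta = np.zeros(d)             # meta-parameters (the meta-learner's policy)

for it in range(500):           # outer loop over sampled task data
    meta_grad = np.zeros(d)
    for _ in range(4):          # batch of tasks per meta-update
        X, r = make_task()
        Xs, rs, Xq, rq = X[:100], r[:100], X[100:], r[100:]
        # inner step: the normal learner adapts on the support split
        _, g = loss_and_grad(theta, Xs, rs)
        theta_adapt = theta - alpha * g
        # first-order approximation: query-split gradient at adapted params
        _, gq = loss_and_grad(theta_adapt, Xq, rq)
        meta_grad += gq
    theta -= beta * meta_grad / 4   # meta-learner update

# Deployment: a few gradient steps on a small slice of the new market
# before trading, i.e., fine-tuning the learned meta-initialization.
Xn, rn = make_task()
for _ in range(3):
    _, g = loss_and_grad(theta, Xn[:50], rn[:50])
    theta -= alpha * g
```

The paper's ordered multiple-step updating mechanism would replace the single inner step with several steps taken in temporal order across a task's data; one step is shown above for brevity.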

Cite This Article

APA Style
Tian, Y., Gao, M., Gao, Q., & Peng, X.-H. (2024). Trading in fast-changing markets with meta-reinforcement learning. Intelligent Automation & Soft Computing, 39(2), 175-188. https://doi.org/10.32604/iasc.2024.042762
Vancouver Style
Tian Y, Gao M, Gao Q, Peng XH. Trading in fast-changing markets with meta-reinforcement learning. Intell Automat Soft Comput. 2024;39(2):175-188. https://doi.org/10.32604/iasc.2024.042762
IEEE Style
Y. Tian, M. Gao, Q. Gao, and X.-H. Peng, "Trading in Fast-Changing Markets with Meta-Reinforcement Learning," Intell. Automat. Soft Comput., vol. 39, no. 2, pp. 175-188, 2024. https://doi.org/10.32604/iasc.2024.042762



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.