Research on offline deep reinforcement learning based on a temporal logic framework for algorithmic trading in financial markets
Deep Reinforcement Learning (DRL) can effectively handle large amounts of sequential data in highly complex, non-linear, and dynamic financial markets. To reduce the risks associated with real-time interaction and to improve the ability to process sequential data, an offline deep reinforcement learning trading framework based on temporal logic was proposed. The framework combined DRL with Long Short-Term Memory (LSTM) and Transformer deep neural networks to process large amounts of financial sequence data, and the models were trained and evaluated in an offline environment. Training and evaluation were conducted over different time periods using two data sets, crude oil futures and cotton futures. The results showed that the framework outperformed DRL and benchmark trading strategies on several risk and return metrics.
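The core idea of the offline setting described above is that the agent learns entirely from a logged dataset of historical market transitions, with no live interaction. The following is a minimal illustrative sketch of that training loop, not the paper's method: it substitutes a linear Q-function and fitted Q-iteration for the LSTM/Transformer networks, uses a synthetic price series in place of the crude oil and cotton futures data, and all names (`make_state`, `ACTIONS`, the window length) are illustrative assumptions.

```python
import numpy as np

# Synthetic stand-in for logged historical prices (the paper uses
# crude oil and cotton futures data; this is only an illustration).
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100.0

WINDOW = 5
ACTIONS = (-1, 0, 1)  # short, flat, long


def make_state(t):
    # State: recent log-returns over a short lookback window.
    return np.diff(np.log(prices[t - WINDOW:t + 1]))


# Build an offline replay buffer of (state, action, reward, next_state).
# Rewards here depend only on price moves, so counterfactual actions can
# be logged too -- a simplification specific to this toy setup.
buffer = []
for t in range(WINDOW, len(prices) - 1):
    s, s2 = make_state(t), make_state(t + 1)
    step_return = np.log(prices[t + 1]) - np.log(prices[t])
    for ai, a in enumerate(ACTIONS):
        buffer.append((s, ai, a * step_return, s2))

# Linear Q-function Q(s, a) = w[a] . s, trained by fitted Q-iteration
# over the fixed buffer -- the offline analogue of DRL training.
w = np.zeros((len(ACTIONS), WINDOW))
gamma, lr = 0.9, 0.05
for epoch in range(20):
    for s, ai, rwd, s2 in buffer:
        target = rwd + gamma * np.max(w @ s2)   # Bellman backup
        w[ai] += lr * (target - w[ai] @ s) * s  # gradient step

# In-sample sanity check: cumulative log-return of the greedy policy.
pnl = 0.0
for t in range(WINDOW, len(prices) - 1):
    a = ACTIONS[int(np.argmax(w @ make_state(t)))]
    pnl += a * (np.log(prices[t + 1]) - np.log(prices[t]))
print(f"greedy-policy cumulative log-return: {pnl:.4f}")
```

In the paper's framework, the linear Q-function would be replaced by LSTM and Transformer networks that consume the raw sequence, and evaluation would be performed on held-out time periods rather than in-sample.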
Keywords: deep reinforcement learning; offline reinforcement learning; artificial neural networks; algorithmic trading