Enhancing trading strategies by combining incremental reinforcement learning and self-supervised prediction
Published in | Expert Systems with Applications, Vol. 289, p. 128297 |
Format | Journal Article |
Language | English |
Published | Elsevier Ltd, 15.09.2025 |
Summary: | •Propose a DRL framework combining incremental learning and self-supervised learning. •Develop a new type of data that consists of daily and weekly OHLCV values. •Incorporate a self-supervised network, AutoConNet, to predict future price data. •Exploit online EWC during the testing phase to improve the prediction accuracy of AutoConNet. •Compare the performance of the proposed method with several baselines across six datasets.
Incremental learning provides critical adaptability in dynamic environments, enabling models to adjust continuously to new data and improve prediction accuracy. This adaptability is especially valuable in volatile financial markets, where incremental learning helps models capture the emerging patterns of time series data. Self-supervised learning, meanwhile, has gained significant attention in recent years for its ability to exploit abundant unlabeled data to uncover complex structures and temporal dependencies, improving model generalization and pattern detection; it has gradually become a powerful tool for time series analysis, especially in the financial domain. In parallel, deep reinforcement learning (DRL) has shown great potential in decision-making tasks, particularly in financial strategy optimization. However, most DRL approaches in finance rely solely on raw market data, limiting the model’s ability to extract key insights and often ignoring the future trends essential for profitable trading. To address these challenges, this paper proposes Incremental Forecast Fusion Deep Reinforcement Learning (IFF-DRL), a framework that combines incremental learning and self-supervised learning with DRL to continually optimize trading strategies. A self-supervised network, AutoConNet, is incorporated to forecast future OHLCV (Open, High, Low, Close, and Volume) data from observed values. During testing, incremental learning dynamically refines the predictive model so that it stays aligned with market trends; with these incremental updates, AutoConNet achieves an average MSE reduction of 5.93% across six datasets during the testing phase. The predicted OHLCV is combined with the actual observed OHLCV to generate “weekly data”, forming the reinforcement learning state space termed “daily & weekly data”. Experiments on six datasets show that IFF-DRL significantly improves trading performance, delivering an annualized return of 103.19% on the HSI index. By combining incremental learning with reinforcement learning, IFF-DRL enables trading agents to adapt more effectively in fast-paced markets, capturing profit opportunities and offering a more responsive, future-oriented approach to financial decision-making. Code for this research is available at https://github.com/AndyZCJ/IFF-DRL-Inference. |
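
The summary names two mechanisms that carry the technical load: online EWC, which regularizes incremental updates of AutoConNet during the testing phase, and the fusion of predicted OHLCV with observed OHLCV into a “daily & weekly” state. The sketch below illustrates both ideas in PyTorch under stated assumptions; it is not the paper’s implementation (see the linked repository for that), and the names `OnlineEWC`, `incremental_step`, `build_state`, the hyperparameters, the tensor shapes, and the four-observed-days-plus-forecast definition of a “week” are all hypothetical.

```python
import torch
import torch.nn as nn


class OnlineEWC:
    """Online Elastic Weight Consolidation (illustrative): keeps a decayed
    running estimate of the Fisher information and penalizes drift away
    from the most recently anchored parameters."""

    def __init__(self, model, ewc_lambda=100.0, gamma=0.9):
        self.model = model
        self.ewc_lambda = ewc_lambda  # strength of the consolidation penalty (assumed value)
        self.gamma = gamma            # decay of the running Fisher estimate (assumed value)
        self.fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        self.anchor = {n: p.detach().clone() for n, p in model.named_parameters()}

    def penalty(self):
        # Quadratic penalty weighted by the running Fisher information.
        loss = 0.0
        for n, p in self.model.named_parameters():
            loss = loss + (self.fisher[n] * (p - self.anchor[n]) ** 2).sum()
        return 0.5 * self.ewc_lambda * loss

    def update_fisher(self, task_loss):
        # Accumulate squared gradients of the task loss as a Fisher estimate.
        grads = torch.autograd.grad(task_loss, list(self.model.parameters()), retain_graph=True)
        for (n, _), g in zip(self.model.named_parameters(), grads):
            self.fisher[n] = self.gamma * self.fisher[n] + g.detach() ** 2

    def re_anchor(self):
        # Anchor the just-updated parameters for the next penalty term.
        for n, p in self.model.named_parameters():
            self.anchor[n] = p.detach().clone()


def incremental_step(forecaster, ewc, optimizer, daily_window, target_ohlcv):
    """One incremental update on newly observed test data.
    Hypothetical shapes: daily_window [1, T, 5], target_ohlcv [1, 5]."""
    pred = forecaster(daily_window)
    mse = nn.functional.mse_loss(pred, target_ohlcv)
    loss = mse + ewc.penalty()          # task loss plus consolidation penalty
    optimizer.zero_grad()
    loss.backward(retain_graph=True)
    ewc.update_fisher(mse)              # accumulate Fisher before the weights change
    optimizer.step()
    ewc.re_anchor()
    return pred.detach()


def build_state(observed_daily, predicted_ohlcv):
    """Fuse observed and predicted OHLCV into a 'daily & weekly' state.
    A 'week' is assumed here to be the last four observed days plus the
    one-step forecast appended as the final row."""
    weekly = torch.cat([observed_daily[:, -4:, :], predicted_ohlcv.unsqueeze(1)], dim=1)
    return torch.cat([observed_daily, weekly], dim=1)
```

In a setup of this kind, the EWC penalty discourages each incremental update from erasing what the forecaster learned during offline training, while the fused state gives the trading agent a view of both the recent past and the predicted near future.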
ISSN: | 0957-4174 |
DOI: | 10.1016/j.eswa.2025.128297 |