Enhancing Multivariate Time Series Forecasting: A Novel Approach with Mallows Model Averaging and Graph Neural Networks

Bibliographic Details
Published in: Journal of Systems Science and Complexity, Vol. 38, No. 4, pp. 1707-1729
Main Authors: Zhang, Haili; Wang, Jiawei; Liu, Zhaobo; Dong, Hailing
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.08.2025
Summary: Multivariate time series forecasting holds substantial practical significance, facilitates precise predictions, and informs decision-making. The complexity of nonlinear relationships and the presence of higher-order features in multivariate time series data have sparked a burgeoning interest in leveraging deep learning approaches for such forecasting tasks. Existing methods often use pre-scaled neural networks, whose reliability and generalization can be difficult to guarantee. In this study, the authors propose an instance-wise graph-based Mallows model averaging (IGMMA) framework for multivariate time series prediction. The framework incorporates a model averaging module into the network: the extracted features serve as inputs to a set of candidate linear models, and these models are combined with weights to form a new linear layer, yielding a novel graph neural network model. Moreover, the network loss function is modified based on the Mallows criterion, with penalties imposed separately on the parameters and on the averaging weights. The authors apply the proposed method to predicting multi-commodity futures prices, and the empirical results show that IGMMA achieves superior predictive accuracy even when small neural networks are used. This indicates that the model averaging module significantly reduces the number of parameters required for deep learning training, enabling the training of multiple small models as an alternative to training one large model.
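
The abstract describes the core mechanism only at a high level, so the following Python (PyTorch) sketch is an illustrative reading of it rather than the authors' implementation: features extracted by an upstream graph neural network feed a set of candidate linear models, softmax-normalized weights average the candidates into a single linear layer, and the loss adds separate quadratic penalties on the network parameters and on the averaging weights in the spirit of a Mallows-type criterion. All names (ModelAveragingHead, mallows_style_loss), the softmax weighting, and the penalty coefficients lam_theta and lam_w are assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class ModelAveragingHead(nn.Module):
    """Weighted average of K candidate linear models over extracted features.
    Illustrative sketch only; the paper's exact module is not reproduced."""

    def __init__(self, feature_dim: int, horizon: int, num_candidates: int):
        super().__init__()
        # K candidate linear models, each mapping features to the forecast horizon.
        self.candidates = nn.ModuleList(
            nn.Linear(feature_dim, horizon) for _ in range(num_candidates)
        )
        # Unconstrained logits; softmax keeps the averaging weights on the simplex.
        self.weight_logits = nn.Parameter(torch.zeros(num_candidates))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.weight_logits, dim=0)                        # (K,)
        preds = torch.stack([m(features) for m in self.candidates], dim=0)  # (K, B, H)
        return torch.einsum("k,kbh->bh", w, preds)                          # weighted average

def mallows_style_loss(pred, target, head, lam_theta=1e-4, lam_w=1e-2):
    """Squared-error fit plus separate penalties on the parameters and on the
    averaging weights; a loose stand-in for the Mallows-criterion-based loss."""
    fit = torch.mean((pred - target) ** 2)
    theta_pen = sum(p.pow(2).sum() for name, p in head.named_parameters()
                    if name != "weight_logits")
    w = torch.softmax(head.weight_logits, dim=0)
    return fit + lam_theta * theta_pen + lam_w * w.pow(2).sum()

# Hypothetical usage with randomly generated stand-in data:
head = ModelAveragingHead(feature_dim=64, horizon=1, num_candidates=8)
features = torch.randn(32, 64)   # e.g., features extracted by an upstream GNN
target = torch.randn(32, 1)
loss = mallows_style_loss(head(features), target, head)
loss.backward()

Keeping each candidate small is what allows several lightweight models to substitute for one large network, which is the trade-off the summary highlights.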
ISSN: 1009-6124, 1559-7067
DOI: 10.1007/s11424-024-4044-9