Hierarchical Attention Network for Multivariate Time Series Long-term Forecasting

Bibliographic Details
Published in: Applied Intelligence (Dordrecht, Netherlands), Vol. 53, No. 5, pp. 5060-5071
Main Authors: Bi, Hongjing; Lu, Lilei; Meng, Yizhen
Format: Journal Article
Language: English
Published: New York: Springer US, 01.03.2023 (Springer Nature B.V.)

Summary: Multivariate time series long-term forecasting has long been a subject of research in fields such as economics, finance, and traffic. In recent years, attention-based recurrent neural networks (RNNs) have received attention for their ability to reduce error accumulation. However, existing attention-based RNNs fail to eliminate the negative influence of irrelevant factors on prediction and ignore the conflict between exogenous factors and the target factor. To tackle these problems, we propose a novel Hierarchical Attention Network (HANet) for multivariate time series long-term forecasting. First, HANet designs a factor-aware attention network (FAN) as the first component of the encoder. FAN weakens the negative impact of irrelevant exogenous factors on predictions by assigning them small weights. HANet then introduces a multi-modal fusion network (MFN) as the second component of the encoder. MFN employs a specially designed multi-modal fusion gate to adaptively select how much of the representation at the current time step comes from the target and exogenous factors. Experiments on two real-world datasets reveal that HANet not only outperforms state-of-the-art methods but also provides interpretability for its predictions.
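
The two encoder components summarized above (FAN's re-weighting of exogenous factors and MFN's gated fusion of target and exogenous information) can be illustrated with a minimal PyTorch-style sketch. The class names, tensor shapes, and scoring functions below are assumptions made for illustration; they are not the authors' implementation.

```python
# Minimal sketch of the two encoder ideas described in the abstract.
# All names and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class FactorAwareAttention(nn.Module):
    """Assigns small weights to irrelevant exogenous factors (FAN-style idea)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Scores each factor from the previous hidden state and the factor value.
        self.score = nn.Linear(hidden_size + 1, 1)

    def forward(self, exog: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        # exog:   (batch, num_factors) exogenous factor values at time t
        # hidden: (batch, hidden_size) previous encoder hidden state
        batch, num_factors = exog.shape
        h = hidden.unsqueeze(1).expand(-1, num_factors, -1)          # (B, F, H)
        scores = self.score(torch.cat([h, exog.unsqueeze(-1)], -1))  # (B, F, 1)
        weights = torch.softmax(scores.squeeze(-1), dim=-1)          # (B, F)
        return weights * exog                                        # re-weighted factors


class MultiModalFusionGate(nn.Module):
    """Adaptively mixes target and exogenous representations (MFN-style gate)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, target_repr: torch.Tensor, exog_repr: torch.Tensor) -> torch.Tensor:
        z = torch.sigmoid(self.gate(torch.cat([target_repr, exog_repr], dim=-1)))
        return z * target_repr + (1.0 - z) * exog_repr


if __name__ == "__main__":
    B, F, H = 4, 8, 16
    fan = FactorAwareAttention(hidden_size=H)
    gate = MultiModalFusionGate(hidden_size=H)
    weighted = fan(torch.randn(B, F), torch.randn(B, H))        # (4, 8)
    fused = gate(torch.randn(B, H), torch.randn(B, H))          # (4, 16)
    print(weighted.shape, fused.shape)
```

In this sketch the sigmoid gate z plays the role the abstract attributes to the multi-modal fusion gate: when z is close to 1 the fused state is dominated by the target representation, and when it is close to 0 the exogenous representation dominates.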
ISSN: 0924-669X, 1573-7497
DOI: 10.1007/s10489-022-03825-5