Hyperparameters Tuning for Machine Learning Models for Time Series Forecasting

Bibliographic Details
Published in: 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS), pp. 328-332
Main Authors: Gladilin, Peter; Matskevichus, Maria
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2019
DOI: 10.1109/SNAMS.2019.8931860

Summary: In this study we experimentally test the accuracy of time series forecasting for three different neural network architectures with varying numbers of layers and neurons per layer: recurrent neural networks with LSTM cells, one-dimensional convolutional neural networks, and multi-layer perceptrons (fully-connected models). We fit every model on a set of 100 time series from the M4 Kaggle competition to evaluate the optimal configuration in terms of forecasting accuracy and model complexity. The experimental results show that: (i) one-layer recurrent neural networks with LSTM cells generally have the best prediction accuracy; (ii) there is no obvious dependence of predictive accuracy on the number of layers; and (iii) in terms of model complexity, fully-connected models and convolutional neural networks are the best choice.
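The record does not state which accuracy metric was used; the headline metric of the M4 competition is the symmetric mean absolute percentage error (sMAPE), so a reader reproducing this kind of comparison might score forecasts along these lines (a minimal sketch; the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric MAPE, in percent, as commonly defined for the M4 competition."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2.0
    return 100.0 * np.mean(np.abs(forecast - actual) / denom)

# Toy usage: score two candidate forecasts of the same held-out values.
y_true = [10.0, 12.0, 13.0, 14.0]
print(smape(y_true, y_true))                      # perfect forecast -> 0.0
print(smape(y_true, [11.0, 12.5, 13.0, 15.0]))    # imperfect forecast -> positive
```

Averaging such per-series scores over the 100 M4 series would give a single accuracy figure per model configuration, which can then be traded off against parameter count.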