Neural network structure identification in inflation forecasting


Bibliographic Details
Published in: Journal of Forecasting, Vol. 40, No. 1, pp. 62-79
Main Authors: Šestanović, Tea; Arnerić, Josip
Format: Journal Article
Language: English
Published: Chichester: Wiley Periodicals Inc, 01.01.2021

Summary: Neural networks (NNs) are well suited to time series analysis when standard assumptions, i.e., normality and linearity, are not fulfilled. The aim of this paper is to address identified shortcomings of earlier approaches and thereby identify an appropriate NN structure for inflation forecasting. The research is based on a theoretical model that captures the characteristics of demand-pull and cost-push inflation; i.e., it uses labor market, financial, and external factors together with lagged inflation variables. It is conducted at the aggregate level of the euro area countries from January 1999 to January 2017. For the 90 estimated feedforward NNs (FNNs) and 450 Jordan NNs (JNNs), which differ in their tuning parameters (number of iterations, learning rate, initial weight value intervals, number of hidden neurons, and the weight of the context unit), the mean squared error (MSE) and the Akaike information criterion (AIC) are calculated for two periods: in-sample and out-of-sample. Ranking the NNs on both periods simultaneously by either MSE or AIC does not yield a single 'best' NN, because the NN that is optimal in-sample by the MSE and/or AIC criteria often has high out-of-sample values of both indicators. To achieve the best compromise solution, i.e., to select an optimal NN, the preference ranking organization method for enrichment of evaluations (PROMETHEE) is used. Comparing the optimal FNN and JNN, i.e., FNN(4,5,1) and JNN(4,3,1), it is concluded that under approximately equal conditions JNN requires fewer hidden-layer neurons than FNN, confirming that JNN is more parsimonious than FNN. Moreover, JNN has better forecasting performance than FNN.
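The Jordan architecture compared in the abstract differs from a plain feedforward network only by a context unit that feeds the previous output back into the hidden layer, scaled by a fixed context weight. A minimal sketch of this forward pass, with illustrative weights (not the authors' estimates) and sizes mirroring JNN(4,3,1), i.e., 4 inputs, 3 hidden neurons, 1 output:

```python
import math
import random

def jordan_forward(inputs_seq, w_hidden, w_out, context_weight):
    """Forward pass of a simple Jordan NN.

    inputs_seq: list of 4-element input vectors, one per time step.
    w_hidden: 3 rows of 5 weights each (4 inputs + 1 context unit).
    w_out: 3 output weights, one per hidden neuron.
    context_weight: fixed weight on the fed-back previous output.
    """
    context = 0.0  # the context unit starts empty
    outputs = []
    for x in inputs_seq:
        extended = x + [context]  # append the context unit to the inputs
        hidden = [math.tanh(sum(w * v for w, v in zip(row, extended)))
                  for row in w_hidden]
        y = sum(w * h for w, h in zip(w_out, hidden))  # linear output neuron
        context = context_weight * y  # feed the output back for the next step
        outputs.append(y)
    return outputs

# Illustrative random weights, not estimated from inflation data
random.seed(1)
w_hidden = [[random.uniform(-0.5, 0.5) for _ in range(5)] for _ in range(3)]
w_out = [random.uniform(-0.5, 0.5) for _ in range(3)]
forecasts = jordan_forward([[1.0, 0.2, -0.1, 0.3],
                            [0.9, 0.1, 0.0, 0.2]], w_hidden, w_out, 0.5)
```

With `context_weight = 0` the feedback vanishes and each step reduces to a feedforward pass, which is consistent with the paper treating the context-unit weight as one of the varied parameters.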
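The PROMETHEE selection step described in the summary ranks alternatives by net outranking flows computed from pairwise criterion comparisons. A minimal sketch follows; the candidate networks and criterion values are illustrative, not the paper's results, and a "usual" (strict-preference) criterion function with equal weights is assumed:

```python
def promethee_net_flows(alternatives, weights):
    """PROMETHEE II net flows with the 'usual' preference function.

    alternatives: dict name -> list of criterion values, all to be minimized.
    weights: criterion weights summing to 1.
    """
    names = list(alternatives)
    n = len(names)
    flows = {}
    for a in names:
        plus = minus = 0.0
        for b in names:
            if a == b:
                continue
            # aggregated preference of a over b: sum the weights of the
            # criteria on which a is strictly better (smaller), and vice versa
            plus += sum(w for w, va, vb in
                        zip(weights, alternatives[a], alternatives[b]) if va < vb)
            minus += sum(w for w, va, vb in
                         zip(weights, alternatives[a], alternatives[b]) if vb < va)
        flows[a] = (plus - minus) / (n - 1)  # net outranking flow
    return flows

# Hypothetical criteria: [MSE in-sample, MSE out-of-sample, AIC in-sample, AIC out-of-sample]
nets = {
    "FNN(4,5,1)": [0.010, 0.015, -210.0, -190.0],
    "JNN(4,3,1)": [0.011, 0.012, -215.0, -200.0],
    "FNN(4,8,1)": [0.008, 0.020, -205.0, -180.0],
}
flows = promethee_net_flows(nets, [0.25, 0.25, 0.25, 0.25])
best = max(flows, key=flows.get)  # the compromise-optimal network
```

The conflict the abstract describes shows up directly here: FNN(4,8,1) wins in-sample on MSE yet loses on the out-of-sample criteria, so no single criterion picks a winner, while the net flow aggregates all four into one ranking.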
ISSN: 0277-6693
EISSN: 1099-131X
DOI: 10.1002/for.2698