Forecasting using econometric models
The non-stationary nature of many economic time series has a bearing on virtually all aspects of econometrics, including forecasting. Recent developments in forecasting theory have taken this into account, and provide a framework for understanding typical findings in forecast evaluations: for example, why certain types of models are more prone to forecast failure than others. In this chapter we discuss the sources of forecast failure most likely to occur in practice, and we compare the forecasts of a large econometric forecasting model with the forecasts stemming from simpler forecasting systems, such as dVARs. The large-scale model holds its ground in our experiment, but the theoretical discussion about vulnerability to deterministic shifts is very relevant for our understanding of the instances where a dVAR does better. We also analyse the theoretical and empirical forecast properties of two wage-price models that have appeared earlier in the book: the dynamic incomplete competition model (ICM) and the Phillips curve model (PCM). The analysis shows that although the PCM shares some of the robustness of dVARs, it also embodies equilibrium correction, in the form of natural rate dynamics. Since that form of correction mechanism is rejected empirically, the PCM forecasts are harmed both by excessive uncertainty (from its dVAR aspect) and by the econometric mis-specification of the equilibrium-correction mechanism in wage formation.
Economic forecasts are statements about the future generated with a range of methods, from the wholly informal (‘gut feeling’) to sophisticated statistical techniques and the use of econometric models. However, professional forecasters never stick to only one method of forecasting, so formal and informal
forecasting methods both have an impact on the final (published) forecast. The use of judgemental correction of forecasts from econometric models is one example.
It is fair to say that the combined use of different forecasting methods reflects how practitioners have discovered that there is no undisputed and overall ‘best’ way of constructing forecasts. A related observation, introduced to the literature as early as Bates and Granger (1969), is that a combination of forecasts of an economic variable often turns out to be more accurate than the individual projections that go into it.
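The gain from combining forecasts can be illustrated with a small simulation. The sketch below is illustrative only: the two forecast error variances and the optimal combination weight for uncorrelated errors, w* = σ₂²/(σ₁² + σ₂²), are assumed for the example and are not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
y = rng.normal(size=n)                  # target variable
f1 = y + rng.normal(scale=1.0, size=n)  # forecast 1: error variance 1.0
f2 = y + rng.normal(scale=1.5, size=n)  # forecast 2: error variance 2.25

# Combination weight minimising MSE when the errors are uncorrelated
w = 1.5**2 / (1.0**2 + 1.5**2)
fc = w * f1 + (1 - w) * f2

def mse(f):
    return np.mean((y - f) ** 2)

# The combined forecast has lower MSE than either individual forecast
print(mse(f1), mse(f2), mse(fc))
```

In this setup the combination's error variance is σ₁²σ₂²/(σ₁² + σ₂²) ≈ 0.69, below both individual variances, which is the Bates–Granger point in its simplest form.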
Nevertheless, intercept correction and pooling are still looked upon with suspicion in wide circles. Hence, being open-minded about intercept correction often has a cost in terms of credibility loss. For example, the forecaster will often find herself accused of an inconsistency (i.e. ‘if you believe in the model, why do you overrule its forecasts?’), or the model can be denounced on the logic that ‘if intercept correction is needed, why use a model in the first place?’.
It is probable that such reactions are based on an unrealistic description of the forecasting situation, namely that the econometric model in question is a correctly specified simplification of the data generation process, which in turn is assumed to be without regime shifts in the forecasting period. Realistically, however, there is genuine uncertainty about how good a model is, even within the sample. Moreover, since the economy is evolving, we can take it for granted that the data generation process will change in the forecast period, causing any model of it to become mis-specified over that period, and this is ultimately the main problem in economic forecasting. The inevitable conclusion is that there is no way of knowing ex ante the degree of mis-specification of an econometric model over the forecast period. The implication is that all measures of forecast uncertainty based on within-sample model fit underestimate the true forecast uncertainty. Sometimes, when regime shifts affect parameters like growth rates and the means and coefficients of cointegration relationships, one is going to experience forecast failure, that is, ex post forecast errors that are systematically larger than indicated by within-sample fit.
On the basis of a realistic description of the forecasting problem it thus becomes clear that intercept corrections have a pivotal role in robustifying the forecasts from econometric models when the forecaster has other information indicating that structural changes are ‘imminent’; see Hendry (2001a). Moreover, correcting a model’s forecast through intercept correction does not incriminate the use of that model for policy analysis. That issue hinges more precisely on which parameters of the model are affected by the regime shift. Clearly, if the regime shift entails significant changes in the parameters that determine the (dynamic) multipliers, then the continued use of the model is untenable. However, if it is the intercepts and long-run means of cointegrating relationships which are affected by a regime shift, the model may still be valid for policy analysis, in spite of forecast failure. Clements and Hendry (1999a) provide a comprehensive exposition of the theory of forecasting non-stationary
time series under realistic assumptions about regime shifts, and Hendry and Mizon (2000) specifically discuss the consequences of forecast failure for policy analysis. Ericsson and Hendry (2001) is a non-technical presentation of recent developments in economic forecasting theory and practice.
A simple example may be helpful in defining the main issues at stake. Let M1 in equation (11.1) represent a model of the rate of inflation π_t (i.e. denoted Δp_t in the earlier chapters). In equation (11.1), μ denotes the unconditional mean of the rate of inflation, while z_t denotes an exogenous variable whose change affects the rate of inflation with semi-elasticity γ (hence γ is the derivative coefficient of this model). Assume next that M1 corresponds to the data generation process over the sample period t = 1, 2, ..., T; hence, as in earlier chapters, ε_t denotes a white noise innovation (with respect to π_{t−1} and Δz_t), and follows a normal distribution with zero mean and constant variance.
M1: Δπ_t = δ − α(π_{t−1} − μ) + γΔz_t + ε_t.  (11.1)
By definition, any alternative model is mis-specified over the sample period. M2 in equation (11.2) is an example of a simple model in differenced form, a dVAR, often used as a benchmark in forecast comparisons since it produces the naive forecasts that tomorrow’s rate of inflation is identical to today’s rate. The M2-disturbance is clearly not an innovation, but is instead given by the equation below M2.
M2: Δ²π_t = v_t,  (11.2)

v_t = −αΔπ_{t−1} + γΔ²z_t + ε_t − ε_{t−1}.
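That v_t is not an innovation is easy to verify by simulation. In the sketch below, the parameter values (δ = 0, α = 0.5, γ = 0, i.e. no exogenous variable, and σ_ε = 0.5) are arbitrary illustrative choices: data are generated from M1 as the data generation process, and the M2 disturbance v_t = Δ²π_t turns out to be strongly negatively autocorrelated, while ε_t is white noise.

```python
import numpy as np

rng = np.random.default_rng(1)
T, alpha, mu = 100_000, 0.5, 2.0
eps = rng.normal(scale=0.5, size=T)

# Simulate M1 as the DGP: pi_t = pi_{t-1} - alpha*(pi_{t-1} - mu) + eps_t
pi = np.empty(T)
pi[0] = mu
for t in range(1, T):
    pi[t] = pi[t - 1] - alpha * (pi[t - 1] - mu) + eps[t]

# M2 treats the second difference of inflation as its disturbance v_t
v = np.diff(pi, n=2)

def acf1(x):
    """First-order sample autocorrelation."""
    x = x - x.mean()
    return np.dot(x[1:], x[:-1]) / np.dot(x, x)

print(acf1(eps), acf1(v))  # eps is close to zero; v is clearly autocorrelated
```

With these parameter values the first-order autocorrelation of v_t is around −0.5, confirming that the M2 disturbance is a moving-average process rather than an innovation.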
As noted above, M2 is by definition inferior to M1 when viewed as an alternative model of the rate of inflation. However, our concern now is a different one: if we use M1 and M2 to forecast inflation over H periods T+1, T+2, ..., T+H, which set of forecasts is the best or most accurate? In other words: which of M1 and M2 provides the best forecast mechanism? Perhaps surprisingly, the answer depends on which additional assumptions we make about the forecasting situation. Take for instance the exogenous variable z_t in (11.1): only if the forecasting agency also controls the process governing this variable in the forecast period can we assume that the conditional inflation forecast based on M1 uses the correct sequence of exogenous variables (z_{T+1}, z_{T+2}, ..., z_{T+H}). Thus, as is well documented, errors in the projections of the exogenous variables are important contributors to forecast errors. Nevertheless, for the purpose of the example, we shall assume that the future z’s are correctly forecasted. Another simplifying assumption is to abstract from estimation uncertainty, that is, we evaluate the properties of forecasting mechanism M1 as if the coefficients δ, α, and γ are known. Intuitively, given the first assumption that M1 corresponds to the data generating process, the assumption of no parameter uncertainty is of second- or third-order importance.
Given this description of the forecasting situation, we can concentrate on the impact of deterministic non-stationarities, or structural change, on the forecasts of M1 and M2. Assume first that there is no structural change. In this case, M1 delivers the predictor with the minimum mean squared forecast error (MMSFE): see, for example, Clements and Hendry (1998: ch. 2.7). Evidently, the imputed forecast errors from M2, and hence the conventional 95% prediction intervals, are too large (notably by 100% for the T+1 forecast).
However, if there is a structural change in the long-run mean of the rate of inflation, μ, it is no longer obvious that M1 is the winning forecasting mechanism. Exactly when μ shifts to its new value, μ*, before or after the preparation of the forecast in period T, is important, as shown by the biases of the two 1-step ahead forecasts:
E[π_{T+1} − π̂_{M1,T+1} | I_T] = α(μ* − μ),

E[π_{T+1} − π̂_{M2,T+1} | I_T] = −αΔπ_T + γΔ²z_{T+1},

E[π_{T+1} − π̂_{M2,T+1} | I_T] = α(μ* − μ),

demonstrating that:
• The forecast mechanism corresponding to M2 ‘error corrects’ to the structural change occurring before the forecast period. Hence, M2 post-break forecasts are robust.
• M1 produces forecast failure even when the M2 forecasts do not break down; that is, M1 post-break forecasts are not robust.
• Both forecasts are damaged if the regime shift occurs after the forecast is made (i.e. in the forecast period). In fact, M1 and M2 share a common bias in this pre-break case (see the first and third lines).
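These bias results can be checked by Monte Carlo. The sketch below is illustrative only: the numbers (α = 0.5, a shift in the mean from μ = 2 to μ* = 4 ten periods before the forecast origin T = 200, γ = 0, σ_ε = 0.5) are assumptions, not from the text. M1 keeps using the old mean μ after the break, while M2 simply extrapolates the latest observed change in inflation.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, mu, mu_star, sigma = 0.5, 2.0, 4.0, 0.5
T, t_break, reps = 200, 190, 5_000       # mean shifts at t = 190; forecast made at T = 200
e1 = np.empty(reps)                      # 1-step forecast errors of M1
e2 = np.empty(reps)                      # 1-step forecast errors of M2

for r in range(reps):
    pi = np.empty(T + 2)
    pi[0] = mu
    for t in range(1, T + 2):
        m = mu if t < t_break else mu_star           # post-break DGP uses mu_star
        pi[t] = pi[t - 1] - alpha * (pi[t - 1] - m) + rng.normal(scale=sigma)
    # Forecasts of pi_{T+1} made at the origin T
    f1 = pi[T] - alpha * (pi[T] - mu)                # M1 still uses the old mean mu
    f2 = pi[T] + (pi[T] - pi[T - 1])                 # M2: extrapolate today's change
    e1[r] = pi[T + 1] - f1
    e2[r] = pi[T + 1] - f2

# M1's mean error is roughly alpha times the size of the mean shift (here 1.0 in
# absolute value), while M2's mean error is close to zero after the break.
print(e1.mean(), e2.mean())
```

The simulation reproduces the two bullet points above: after a break that predates the forecast origin, M2 is back on track while M1 fails systematically.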
Thus, apart from the special case where the econometric model corresponds to the true mechanism in the forecast period, it is impossible to prove that it provides the best forecasting mechanism, which establishes the role of supplementary forecasting mechanisms in economic forecasting. Moreover, in this example, forecast failure of M1 is due to a change in a parameter which does not affect the dynamic multipliers (the relevant parameters being α and γ in this example). Thus, forecast failure per se does not entail that the model is invalid for policy analysis.
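The point that a shift in μ leaves the dynamic multipliers unaffected can be illustrated numerically. In the sketch below the parameter values (α = 0.5, γ = 0.8, and a shifted mean of 4 instead of 2) are made-up illustrations: the response of inflation to a permanent unit step in z_t is computed under the old and the new long-run mean, and the two multiplier paths coincide because the model is linear and μ enters only through the intercept.

```python
import numpy as np

def multipliers(alpha, gamma, mu, horizon=8):
    """Response of pi to a permanent unit step in z at t = 1 (deterministic paths, no noise)."""
    base = np.full(horizon + 1, mu)
    shocked = np.full(horizon + 1, mu)
    z_base = np.zeros(horizon + 1)
    z_shock = np.ones(horizon + 1)
    z_shock[0] = 0.0                       # the step in z occurs at t = 1
    for t in range(1, horizon + 1):
        base[t] = base[t-1] - alpha * (base[t-1] - mu) + gamma * (z_base[t] - z_base[t-1])
        shocked[t] = shocked[t-1] - alpha * (shocked[t-1] - mu) + gamma * (z_shock[t] - z_shock[t-1])
    return shocked - base                  # the dynamic multiplier path

m_old = multipliers(0.5, 0.8, mu=2.0)
m_new = multipliers(0.5, 0.8, mu=4.0)
print(np.allclose(m_old, m_new))  # identical paths: the multipliers do not depend on mu
```

The impact multiplier is γ and the effect then dies out geometrically at rate (1 − α), regardless of the value of μ, which is why a shift in the long-run mean invalidates the forecasts but not the policy analysis.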
For M1, regime shifts that occur prior to the forecast period are detectable, in principle, and the forecaster therefore has an opportunity to avoid forecast failure by intercept correction. In comparison, the simple forecast mechanism M2 has intercept correction built in: its forecast is back on track in the first period after the break. Intriguingly, M2 has this capability almost by virtue of being a wrong model of the economy.
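A simple form of intercept correction can be sketched as follows: add the model's latest observed residual to its forecast, which mimics M2's self-correcting property while retaining M1's structure. The numbers below (α = 0.5, a mean shift from 2 to 4 before the forecast origin, γ = 0) are illustrative assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, mu, mu_star, sigma = 0.5, 2.0, 4.0, 0.5
T, t_break, reps = 200, 190, 5_000
err_m1 = np.empty(reps)     # 1-step errors of the uncorrected M1 forecast
err_ic = np.empty(reps)     # 1-step errors of the intercept-corrected forecast

for r in range(reps):
    pi = np.empty(T + 2)
    pi[0] = mu
    for t in range(1, T + 2):
        m = mu if t < t_break else mu_star           # mean shifts before the origin T
        pi[t] = pi[t - 1] - alpha * (pi[t - 1] - m) + rng.normal(scale=sigma)
    f_m1 = pi[T] - alpha * (pi[T] - mu)              # M1 forecast with the old mean
    resid_T = pi[T] - (pi[T - 1] - alpha * (pi[T - 1] - mu))  # last in-sample residual
    f_ic = f_m1 + resid_T                            # intercept-corrected forecast
    err_m1[r] = pi[T + 1] - f_m1
    err_ic[r] = pi[T + 1] - f_ic

print(err_m1.mean(), err_ic.mean())  # the correction removes most of the post-break bias
```

The corrected forecast error reduces to ε_{T+1} − ε_T: the bias vanishes, at the cost of a doubled error variance, exactly the trade-off that makes M2-type mechanisms robust.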
In the rest of this chapter, we investigate the relevance of these insights for macroeconometric forecasting. Section 11.2 contains a broader discussion of
the relative merits of equilibrium-correction models (EqCMs) and differenced VARs (dVARs) in macroeconometric forecasting. This is done by first giving an extended algebraic example, in Section 11.2.1. In Section 11.2.2, we turn to the theory’s practical relevance for understanding the forecasts of the Norwegian economy in the 1990s. The model that takes the role of the EqCM is the macroeconometric model RIMINI. The rival forecasting systems are dVARs derived from the full-scale model as well as univariate autoregressive models.
So far we have discussed forecasting mechanisms as if the choice of forecasting method is clear-cut and between using a statistical model, M2, and a well-defined econometric model, M1. In practice, the forecaster has not one but many econometric models to choose from. In earlier chapters of this book, we have seen that different dynamic model specifications can be compatible with the same theoretical framework. We showed, for example, that the open economy Phillips curve model with a constant NAIRU can be seen as an EqCM version of a bargaining model (see Chapter 4), but also that there was an alternative EqCM which did not imply a supply-side natural rate (in Chapter 6 we referred to it as the dynamic incomplete competition model). In Section 11.3 we discuss the forecasting properties of the two contending specifications of inflation, theoretically and empirically.