Forecast properties (Chapter 11)
When studies of macroeconometric models’ forecast performance started to appear in the 1960s and 1970s, it came as a surprise that the models were outperformed by very simple forecasting mechanisms. As pointed out by Granger and Newbold (1986), many theory-driven macro models largely ignored dynamics and the temporal properties of the data, so it should be no surprise that they produced suboptimal forecasts. Forecasting is a time-oriented activity, and a procedure that pays only rudimentary attention to temporal aspects is likely to lose out to rival procedures that put dynamics in the foreground. Such competing procedures were developed and gained ground in the 1970s in the form of Box-Jenkins time-series analysis and ARIMA models.
As we alluded to in Section 1.1, macroeconometric modelling has progressed in the last two decades through the adoption of new techniques and insights from time-series econometrics, with more emphasis on dynamic specification and testing against mis-specification. The dangers of spurious regressions have been reduced as a consequence of the adoption of new inference procedures for integrated variables. As a result, modern macroeconometric forecasting models are less exposed to Granger and Newbold’s diagnosis.
In particular, one might reasonably expect that equilibrium-correcting models (EqCMs) will forecast better than models that use only differenced data, so-called differenced vector autoregressions (dVARs), or other members of the class of pure time-series models. In fact, the typical case will be that the dVAR is mis-specified relative to an econometrically specified EqCM, and dVAR forecasts will therefore be suboptimal.
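To fix ideas, consider a stylised single-equation example (the notation here is ours and is used purely for illustration; it is not the specification analysed in Chapter 11). An EqCM for $y_t$, with cointegrating relation $y_t - \beta x_t$, equilibrium mean $\mu$ and autonomous growth rate $\gamma$, can be written

\[
\Delta y_t = \gamma + \alpha\,(y_{t-1} - \beta x_{t-1} - \mu) + \varepsilon_t, \qquad -1 < \alpha < 0,
\]

whereas the corresponding dVAR equation retains only the growth term,

\[
\Delta y_t = \gamma + \varepsilon_t .
\]

If $\alpha \neq 0$ in the data-generating process, the dVAR omits the equilibrium-correction term and is therefore mis-specified, which is why its forecasts are expected to be suboptimal as long as the parameters remain constant.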
However, as shown by the work of Michael Clements and David Hendry in several books and papers, the expected favourable forecasting properties of econometric models rest on the assumption of constant parameters in the forecast period. This is of course an untenable basis for the theory and practice of economic forecasting. The frequent occurrence of structural changes and regime shifts tilts the balance of the argument back in favour of dVARs. One reason is that if key parameters, such as the means of the cointegrating relationships, change after the forecast is made, then the EqCM forecasts are damaged while the dVAR forecasts are robust, since the affected relationships are omitted from the dVAR forecasting mechanism in the first place. Hence, in practice, EqCM forecasts may turn out to be less accurate than forecasts derived from a dVAR. Nevertheless, the EqCM may be the right model to use for policy simulations (e.g. the analysis of the transmission mechanism). Specifically, this is true if the source of forecast failure turns out to be location shifts in, for example, the means of cointegration relationships or in autonomous growth rates, rather than in the model’s ‘derivative’ coefficients, which are the parameters of interest in policy analysis. Theoretical and empirical research indicates that this is a typical situation. Conversely, the ‘best model’ in terms of economic interpretation and econometrics may not be the best model for forecasting. In Chapter 11, we investigate the practical relevance of these theoretical developments for forecasts of the Norwegian economy in the 1990s. The model that takes the role of the EqCM is the RIMINI model mentioned earlier.
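The robustness argument can be sketched in the stylised notation introduced above (again, ours and for illustration only). Suppose the equilibrium mean shifts from $\mu$ to $\mu^{*} = \mu + \nabla\mu$; the asymmetry is easiest to see for forecast origins $T$ that fall after the shift, while the forecaster still uses the model based on the pre-shift mean $\mu$. The one-step forecast errors are then

\[
\Delta y_{T+1} - \Delta \hat{y}^{\,\mathrm{EqCM}}_{T+1} = -\alpha\,\nabla\mu + \varepsilon_{T+1},
\qquad
\Delta y_{T+1} - \Delta \hat{y}^{\,\mathrm{dVAR}}_{T+1} = \alpha\,(y_{T} - \beta x_{T} - \mu^{*}) + \varepsilon_{T+1}.
\]

The EqCM error has a non-zero mean, $-\alpha\nabla\mu$, at every forecast origin until the model is revised (for example by an intercept correction), because the model keeps correcting towards the old equilibrium. The dVAR error, by contrast, has mean close to zero once the data have adjusted to the new equilibrium ($y_{T} - \beta x_{T} \approx \mu^{*}$): the shifted parameter simply does not enter the dVAR forecasting mechanism.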