Role of economic theory in macroeconometrics
The Cowles Commission research agenda focused on simultaneous equation models (SEMs) and put much weight on the issue of identification, an issue in which economic theory plays an important part. Lawrence Klein, a prominent representative of this tradition, writes in a very readable survey of the interaction between statistics and economics in the context of macroeconometric modelling (Klein 1988) that the model-building approach can be contrasted with pure statistical analysis, which is empirical and not as closely related to received economic theory as model building is.
Still, it is on this score that traditional macroeconomic model building has come under attack (see Favero 2001). Whereas the LSE methodology largely ascribes the failure of those early macroeconomic models to missing dynamics or model mis-specification (omitted energy price effects), other critics, such as Robert Lucas and Christopher Sims, have claimed that the cause is rather a weak theoretical basis. The Lucas critique (see, for example, Lucas 1976) holds that the failure of conditional models is caused by regime shifts, arising from policy changes and shifts in expectations. The critique carries over to SEMs if expectations are left unmodelled. Sims (1980), on the other hand, argued that SEMs embodied ‘incredible’ identifying restrictions: the restrictions needed to claim exogeneity for certain variables would not be valid in an environment where agents optimise intertemporally.
Sims instead advocated the use of a low-order vector autoregression (VAR) to analyse economic time series. This approach appeared to have the advantage that it neither depended on an ‘arbitrary’ division between exogenous and endogenous variables nor required ‘incredible’ identifying restrictions. Instead, Sims introduced identifying restrictions on the error structure of the model, an approach that has in turn been criticised as equally arbitrary. Later developments have led to structural VAR models in which cointegration defines long-run relationships between non-stationary variables and where exogenous variables are reintroduced (see Pesaran and Smith 1998 for a survey in which they reanalyse an early model by King et al. 1991).[7]
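To make the idea of identification through the error structure concrete, the following sketch fits a low-order reduced-form VAR and orthogonalises its residuals with a Cholesky factorisation, so that the ordering of the variables supplies the recursive identifying assumption. It is a minimal illustration in Python using statsmodels, not part of any model discussed in this book; the simulated series and the names 'output' and 'inflation' are hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulate a bivariate VAR(1) process purely for illustration; the series
# names 'output' and 'inflation' are hypothetical.
rng = np.random.default_rng(0)
T = 300
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])              # autoregressive coefficient matrix
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.5, size=2)
data = pd.DataFrame(y, columns=["output", "inflation"])

# Fit a low-order (here first-order) reduced-form VAR.
results = VAR(data).fit(1)
print(results.summary())

# Identification via the error structure: orthogonalised impulse responses
# rest on a Cholesky factorisation of the residual covariance matrix, so the
# chosen variable ordering embodies the recursive identifying assumption.
irf = results.irf(10)
print(irf.orth_irfs.shape)              # (periods + 1, neqs, neqs)
```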
Ever since the Keynes-Tinbergen controversy (see Morgan 1990 and Hendry and Morgan 1995), the role of theory in model specification has been a major source of controversy in econometrics (cf. Granger 1990, 1999 for recent surveys). At one end of the theory-empiricism line we have theory-driven models that take the received theory for granted and do not test it. Prominent examples are the general equilibrium models, dubbed real business cycle models, that have gained a dominant position in academia (see, for example, Kydland and Prescott 1991). There is also a new breed of macroeconometric models which assume intertemporally optimising agents endowed with rational forward-looking expectations, leading to a set of Euler equations (see Poloz et al. 1994, Willman et al. 2000, Hunt et al. 2000, and Nilsson 2002 for models from the central banks of Canada, Finland, New Zealand, and Sweden, respectively). At the other extreme we have data-based VAR models, which initially were statistical devices that made only minimal use of economic theory. As noted above, in the less extreme case of structural VARs, theory restrictions can be imposed as testable cointegrating relationships in levels, as sketched below, or they can be imposed on the error structure of the model.
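As an illustration of treating a theory-implied long-run relation as a testable cointegrating relationship in levels, the following sketch applies the Johansen trace test via statsmodels. The two simulated I(1) series and their names are hypothetical and serve only to show the mechanics of the test.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Simulate two I(1) series that share a common stochastic trend, so that one
# cointegrating (long-run) relationship exists; illustrative data only.
rng = np.random.default_rng(1)
T = 400
trend = np.cumsum(rng.normal(size=T))          # common random-walk component
x = trend + rng.normal(scale=0.3, size=T)
z = 0.8 * trend + rng.normal(scale=0.3, size=T)
levels = pd.DataFrame({"x": x, "z": z})

# Johansen trace test: det_order=0 includes a constant, k_ar_diff=1 lagged
# difference in the underlying VAR.
result = coint_johansen(levels, det_order=0, k_ar_diff=1)
print("trace statistics:   ", result.lr1)
print("5% critical values: ", result.cvt[:, 1])
# A trace statistic above its critical value rejects 'no cointegration',
# which is how a long-run relation in levels can be tested against the data.
```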
The approach we are advocating has much in common with the LSE methodology referred to above, and it focuses on evaluation as recommended in Granger (1999). It represents a compromise between data-based (purely statistical) models and economic theory: on the one hand, learning from the process of trying to take serious account of the data; on the other, avoiding the strong theoretical assumptions (needed to make theories ‘complete’) which may not make much sense empirically, that is, which are not supported by the data.[8] Moreover, there are common-sense arguments in favour of not adopting a theory-driven model as a basis for policy decisions, which indeed affect reality, until it has gained convincing empirical support (see Granger 1992).