Introduction: small vs. large models
Macroeconometric modelling aims at explaining the empirical behaviour of an actual economic system. Such models are systems of interlinked equations estimated from time-series data using statistical or econometric techniques.
A conceptual starting point is the idea of a general stochastic process that has generated all the data we observe for the economy, and that this process can be summarised in terms of the joint probability distribution of random observable variables in a stochastic equation system: see Section 2.3. For a modern economy, the complexity of such a system, and of the corresponding joint probability distribution, is evident. Nevertheless, it is always possible to take a highly aggregated approach in order to represent the behaviour of a few ‘headline’ variables (e.g. inflation, GDP growth, unemployment) in a small-scale model. If small enough, the estimation of such econometric models can be based on formally established statistical theory (as with low-dimensional vector autoregressive models [VARs]), where the statistical theory has recently been extended to cointegrated variables.
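To fix ideas, a low-dimensional VAR of the kind referred to above can be simulated and estimated in a few lines of code. The sketch below uses illustrative (not estimated) coefficients and plain least squares in place of a full econometric package: it generates a stationary bivariate VAR(1) and recovers its coefficient matrix equation by equation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative coefficient matrix for a stylised two-variable VAR(1);
# the numbers are hypothetical, chosen so that the process is stationary.
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])

T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# OLS equation by equation: regress y_t on y_{t-1}.
X, Y = y[:-1], y[1:]
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = B.T  # each row holds one equation's estimated coefficients

print(np.round(A_hat, 2))
```

With a sample this long the estimates lie close to the true matrix, which is the sense in which formally established statistical theory underwrites inference in small systems.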
In this book, we supplement the existing literature by suggesting the following operational procedure:
1. By relevant choices of variables we define and analyse subsectors of the economy (by marginalisation).
2. By distinguishing between exogenous and endogenous variables we construct (by conditioning) relevant partial models, which we will call models of type A.
3. Finally, we need to combine these submodels in order to obtain a Model B for the entire economy.
Our thesis is that, since Model A is a part of Model B, it is possible to learn about Model B from Model A. The alternative to this thesis amounts to a kind of creationism, unless of course macroeconometrics is to be restricted to aggregate models.
Examples of properties that can be discovered using our procedure include cointegration in Model B. This follows from a corollary of the theory of cointegrated systems: any nonzero linear combination of cointegrating vectors is also a cointegrating vector. In the simplest case, if there are two cointegrating vectors in Model B, there always exists a linear combination of those cointegrating vectors that ‘nets out’ one of the variables. Cointegration analysis of the subset of variables (i.e. Model A) excluding that variable will then result in a cointegrating vector corresponding to that linear combination. Thus, although cointegration is a property of Model B, cointegration analysis of the subsystem (Model A) identifies one cointegrating vector. Whether that identification is economically meaningful remains in general an open issue, and any such claim must be substantiated in each separate case. We provide several examples in this book: in Section 2.4 we discuss the identification of a consumption function as a cointegrating relationship, and link that discussion to the concept of partial structure. In Chapter 5, the identification of cointegrating relationships corresponding to price- and wage-setting is discussed in detail.
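The ‘netting out’ argument can be illustrated with simple linear algebra. In this sketch the two cointegrating vectors are hypothetical numbers for a three-variable system (x, y, z), not estimates from any actual model:

```python
import numpy as np

# Two (hypothetical) cointegrating vectors of the full Model B,
# over the variables (x, y, z):
beta1 = np.array([1.0, -1.0, 0.0])   # x - y stationary
beta2 = np.array([0.0, 1.0, -1.0])   # y - z stationary

# Any nonzero linear combination is again a cointegrating vector.
# Choose the weight that nets out y (position 1):
lam = -beta2[1] / beta1[1]
combo = lam * beta1 + beta2

print(combo)  # the coefficient on y is zero
```

The resulting vector involves only x and z, so a cointegration analysis of the subsystem containing those two variables alone (Model A) would identify exactly this vector, even though cointegration is a property of Model B.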
Other important properties of the full model that can be tested from subsystems include the existence of a natural rate of unemployment (see Chapter 6), and the relevance of forward-looking terms in wage- and price-setting (see Chapter 7).
Nevertheless, as pointed out by Johansen (2002), there is a Catch-22 in the above procedure: a general theory for the three steps will contain criteria and conditions which are formulated for the full system. However, sophisticated piecewise modelling can be seen as a sort of gradualism: seeking to establish submodels that represent partial structure, that is, partial models that are invariant to extensions of the sample period, invariant to changes elsewhere in the economy (e.g. due to regime shifts), and that remain the same for extensions of the information set. Gradualism also implies a readiness to revise a submodel. Revisions are sometimes triggered by forecast failure, but perhaps less often than believed in academic circles: see Section 2.3.2. More mundane reasons include data revisions and data extensions, which allow more precise and improved model specifications. The dialogue between model builders and model users often results in revisions too. For example, experienced model users are usually able to pinpoint unfortunate and unintended implications of a single equation’s (or submodel’s) specification for the properties of the full model.
Obviously, gradualism does not preclude thorough testing of a submodel. On the contrary, the first two steps in the operational procedure above do not require that we know the full model, and testing those conditions has some intuitive appeal, since real life provides ‘new evidence’ through the arrival of new data and through ‘natural experiments’ such as regime shifts, for example changes in government or the financial deregulation of many European economies in the recent past. For the last of the three steps, we could in principle think of the full model as the ultimate extension of the information set, so establishing structure or partial structure represents a way to meet Johansen’s observation. In practice, we know that the full model is not attainable. What we do then is to graft the sector model into simplified approximations of Model B, and test the relevant exogeneity assumptions of the partial model within that framework. To the extent that the likelihood function of the simplified Model B adequately represents or approximates the likelihood function of the full Model B, there is no serious problem left. It is also possible to corroborate the entire procedure, since Model A can be tested and improved gradually against new information, a way of gaining knowledge that parallels modern Darwinism in the natural sciences. We develop these views further in Section 2.5.
A practical reason for focusing on submodels is that modellers may have good reasons to study some parts of the economy more carefully than others. For a central bank that targets inflation, there is a strong case for getting the model of the inflationary process right. This calls for careful modelling of wage and price formation conditional on the institutional arrangements for wage bargaining, developments in financial markets, and the evolving real economy, in order to answer a number of important questions: Is there a natural rate (of unemployment) that anchors unemployment as well as inflation? What is the importance of expectations for inflation, and how should they be modelled? What is the role of money in the inflationary process?
We find that, in order to answer such questions, and to probe the competing hypotheses regarding supply-side economics, detailed modelling that draws on information specific to the economy under study is necessary. Taking account of simultaneity is to a large extent a matter of estimation efficiency. If there is a tradeoff between such efficiency and getting the economic mechanisms right, practitioners of macroeconometric modelling should give priority to the latter.