Methodological issues (Chapter 2)
The specification of a macroeconomic model rests on both economic theory and the econometric analysis of historical data. Different model builders place different weights on these two inputs, which is one reason why models differ and controversies remain; cf. the report on macroeconomic modelling and forecasting at the Bank of England (Pagan 2003).
The balance between theoretical consistency and empirical relevance is also of interest to model users, model owners, and research funding institutions. When the model is used in a policy context, model users may tend to put relatively more weight on ‘closeness to theory’, on the grounds that theory consistency ensures model properties (e.g. impulse responses or dynamic multipliers) that are easy to understand and to communicate to the general public. While a high degree of theory consistency is desirable in our discipline, it does not by itself imply unique models, basically because no universally accepted theory exists in macroeconomics. Thus, there is little reason to renounce the requirement that empirical modelling and the confrontation of theories with data are essential ingredients in the process of specifying a serious macro model. In particular, care must be taken to ensure that theory consistency is not used rhetorically to impose specific and controversial properties on the models that influence policy-making.
Recently, Pagan (2003) claimed that ‘state of the art modelling’ in economics would entail a dynamic stochastic general equilibrium (DSGE) model, since that would continue the trend taken by macroeconomic modelling in academia into the realm of policy-oriented modelling. However, despite its theoretical underpinnings, it is unclear whether DSGE models have structural properties in the sense of being invariant over time, across regimes, and with respect to additional information (e.g. the information embedded in existing studies; see Chapter 7).
A failure on any of these three requirements means that the model is non-structural according to the wider understanding of ‘structure’ adopted in this book: a structural representation of an economy embodies not only theory content but also explanatory power, stability, and robustness to regime shifts (see Hendry (1995a) and Section 2.3.2 for an example). Since structural representation is a many-faceted model feature, it cannot be uniquely identified with closeness to theory. Indeed, theory-driven models are prone to well-known econometric problems, which may signal mis-specification with damaging implications for policy recommendations (see Nymoen 2002).
The approach advocated in this book therefore takes a more balanced view. Although theory is a necessary ingredient in the modelling process, empirical determination is always needed to specify the ‘final model’. Moreover, as noted, since many different theoretical approaches are already available in macroeconomics, DSGE representing only one of them, there is always the question of which theory to use. In our view, economists have been too ready to accept theoretical elegance and rigour as a basis for macroeconomic relationships, even though the underlying assumptions are unrealistic and the representative agent is a dubious construct at the macro level. Our approach is instead to favour models based on realistic assumptions that are at least consistent with such well-documented phenomena as, for example, involuntary unemployment, a non-unique ‘natural rate’, and the role of fairness in wage-setting. Such theories belong to behavioural macroeconomics as defined by Akerlof (2002). In Chapters 3-7 of this book, one recurrent theme is to gauge the credibility and usefulness of rival theories of wage- and price-setting from that perspective.
Many macroeconometric models are rather large systems of equations constructed piece by piece: equation by equation or, at best, sector by sector (the consumption expenditure system, the module for labour demand and investment, etc.). Thus, there is no way around the implication that the models’ overall properties can only be known once construction is complete. The literature on macroeconometric modelling has produced methods for evaluating the system of equations as a whole (see, for example, Klein et al. 1999).
Nevertheless, the piecewise construction of macroeconometric models is the source of much of the criticism levelled against them. First, the specification process may become inefficient, as a seemingly valid single equation or module may lead to unexpected or unwanted model properties. This point is related to the critique of structural econometric models in Sims (1980), where the author argues that such models can only be identified by imposing ‘incredible’ identifying restrictions on the system of equations (see Section 2.2.2). Second, the statistical assumptions underlying single-equation analysis may be invalidated when the equation is grafted into the full model. The most common examples are probably that the single-equation estimation method becomes inconsistent with the statistical model implied by the full set of equations, or that the equation is too simple in the light of the whole model (e.g. omits a variable). These concerns are real, but they may also be seen as unavoidable costs of formulating models that go beyond a handful of equations, costs which must be balanced against the benefits of more detailed modelling of the functional relationships of the macro economy. Chapter 2 discusses operational strategies that promise to reduce the cost of piece-by-piece model specification.
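The inconsistency problem just mentioned can be illustrated with a small simulation. The following is our own stylised two-equation example, not a model from this book: when one equation of a simultaneous system is estimated by OLS in isolation, the endogenous regressor is correlated with the disturbance and the estimate is inconsistent, whereas an instrumental-variables (two-stage least squares) estimator that exploits the exogenous variables of the full system recovers the structural coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative structural parameters for the system
#   y1 = b12*y2 + g11*x1 + u1
#   y2 = b21*y1 + g22*x2 + u2
b12, b21, g11, g22 = 0.5, 0.3, 1.0, 1.0

x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
u1 = rng.normal(size=n)
u2 = rng.normal(size=n)

# Solve the system for its reduced form to generate the data
det = 1.0 - b12 * b21
y2 = (b21 * g11 * x1 + g22 * x2 + b21 * u1 + u2) / det
y1 = b12 * y2 + g11 * x1 + u1

def ols(X, y):
    """Least-squares coefficients of y on the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Single-equation OLS for equation 1 is inconsistent:
# y2 depends on u1 through the second equation
beta_ols = ols(np.column_stack([y2, x1]), y1)

# 2SLS: replace y2 by its projection on the system's
# exogenous variables (x1, x2), then run OLS
Z = np.column_stack([x1, x2])
y2_hat = Z @ ols(Z, y2)
beta_2sls = ols(np.column_stack([y2_hat, x1]), y1)

print("OLS: ", beta_ols)   # coefficient on y2 biased upwards
print("2SLS:", beta_2sls)  # close to the true values (0.5, 1.0)
```

The point of the sketch is that consistency of the single-equation estimator depends on the full system: information outside the equation (here, the exogenous variable x2 from the second equation) is needed to estimate it correctly.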
In Section 1.4, we briefly outline the transmission mechanism as represented in the medium-scale macroeconometric model RIMINI (an acronym for a model of the Real economy and Income accounts: a MINI version), which illustrates the complexity and interdependencies of a realistic macroeconometric model, and also why one has to make sense of bits and pieces rather than handle the complete model at once. Modelling subsystems implies simplifying the joint distribution of all observable variables in the model through sequential conditioning and marginalisation, as discussed in Section 2.3.
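The factorisation underlying such subsystem modelling can be sketched generically as follows (the notation here is ours, not taken from the model documentation). Partition the vector of observable variables at time $t$ into the variables to be modelled, $\mathbf{y}_t$, and those conditioned upon, $\mathbf{z}_t$. The joint density given the history $\mathcal{I}_{t-1}$ then factorises as

```latex
D(\mathbf{y}_t, \mathbf{z}_t \mid \mathcal{I}_{t-1}; \boldsymbol{\theta})
  = D(\mathbf{y}_t \mid \mathbf{z}_t, \mathcal{I}_{t-1}; \boldsymbol{\theta}_1)
    \, D(\mathbf{z}_t \mid \mathcal{I}_{t-1}; \boldsymbol{\theta}_2).
```

A submodel for a sector works with the conditional density $D(\mathbf{y}_t \mid \mathbf{z}_t, \mathcal{I}_{t-1}; \boldsymbol{\theta}_1)$ alone, marginalising with respect to the process generating $\mathbf{z}_t$; this simplification is valid for inference about the parameters of interest when $\mathbf{z}_t$ is weakly exogenous for those parameters.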
The methodological approach of sequential subsector modelling is highlighted by means of two case studies. First, the strategy of sequential simplification is illustrated for the household sector in RIMINI (see Section 2.4). The empirical consumption function we derive has been stable for more than a decade, so it is of particular interest to compare it with rival models in the literature, as we do in Section 2.4.2. Second, in Chapter 9 we describe a stepwise procedure for modelling wages and prices, an exercise that includes all the ingredients regarded as important for establishing an econometrically relevant submodel. In this case we in fact entertain two models: a core model for wage and price determination, in which we condition on a number of explanatory variables, and a second, embedding model, which is a small aggregated econometric model for the entire economy. Although different, the embedding model shares many properties with the full RIMINI model.
The credentials of the core model within the embedding aggregated model can be seen as indirect evidence for the validity of the assumptions underlying the use of the core model as part of the larger model, that is, RIMINI. The small econometric model is, however, of interest in its own right. It is small enough to be estimated as a simultaneous system of equations, and its size makes it suitable for model developments and experiments that are cumbersome, time-consuming, and in some cases impossible to carry out with the full-blown RIMINI model. Accordingly, when we analyse the transmission mechanism in the context of inflation targeting in Chapter 9 and evaluate different monetary policy rules in Chapter 10, we do so by means of the small econometric model (cf. Section 9.5).