Introduction: small vs. large models

Macroeconometric modelling aims at explaining the empirical behaviour of an actual economic system. Such models will be systems of inter-linked equations estimated from time-series data using statistical or econometric techniques.

A conceptual starting point is the idea of a general stochastic process that has generated all the data we observe for the economy, and that this process can be summarised in terms of the joint probability distribution of random observable variables in a stochastic equation system: see Section 2.3. For a modern economy, the complexity of such a system, and of the corresponding joint probability distribution, is evident. Nevertheless, it is always possible to take a highly aggregated approach in order to represent the behaviour of a few ‘headline’ variables (e.g. inflation, GDP growth, unemployment) in a small-scale model. If small enough, the estimation of such econometric models can be based on formally established statistical theory (as with low-dimensional vector autoregressive models [VARs]), where the statistical theory has recently been extended to cointegrated variables.

However, it takes surprisingly little in terms of user-instigated detailing of model features—for example, more than one production sector, or separate modelling of consumption and investment—to render simultaneous modelling of all equations impossible in practice. Hence, models that are used for analysing the impact of the government budget on the economy are typically very large systems of equations. Even in cases where the model user from the outset targets only one variable, as with the recently introduced practice of inflation targeting, policy choices are made against the backdrop of a broader analysis of the effects of the interest rate on the economy (the nominal and real exchange rates, output growth, employment and unemployment, housing prices, credit growth, and financial stability). Thus, it has been a long-standing task of model builders to establish good practice and to develop operational procedures for model building which ensure that the end product of piecewise modelling is tenable and useful. Important contributions in the literature include Christ (1966), Klein (1983), Fair (1984, 1994), Klein et al. (1999), and the surveys in Bodkin et al. (1991) and Wallis (1994).

In this book, we supplement the existing literature by suggesting the following operational procedure[5]:

1. By relevant choices of variables we define and analyse subsectors of the economy (by marginalisation).

2. By distinguishing between exogenous and endogenous variables, we construct (by conditioning) relevant partial models, which we will call models of type A.

3. Finally, we need to combine these submodels in order to obtain a Model B for the entire economy.
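In the spirit of Section 2.3, the first two steps can be sketched as factorisations of the joint density of the observables. The following is a stylised illustration only: the partition of the variables and the parameter symbols are our notational choices, not taken from the text.

```latex
% Step 1 (marginalisation): the subsector variables x_t are obtained by
% integrating out the variables w_t that lie outside the subsector:
D(x_t \mid \lambda) \;=\; \int D(x_t, w_t \mid \lambda^{*})\, \mathrm{d}w_t .

% Step 2 (conditioning): with x_t partitioned into endogenous y_t and
% exogenous z_t, the joint density factors into a conditional model
% (a model of type A) and a marginal model for the conditioning variables:
D(y_t, z_t \mid \lambda) \;=\; D(y_t \mid z_t;\, \varphi_1)\, D(z_t \mid \varphi_2).
```

Inference about the parameters of interest can then validly be based on the conditional model alone when $z_t$ is weakly exogenous for those parameters, in the classic sense of Engle, Hendry, and Richard (1983).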

Our thesis is that, given that Model A is a part of Model B, it is possible to learn about Model B from Model A. The alternative to this thesis amounts to a kind of creationism,[6] unless, of course, macroeconometrics is to be restricted to aggregate models.

Examples of properties that can be discovered using our procedure include cointegration in Model B. This follows from a corollary of the theory of cointegrated systems: any nonzero linear combination of cointegrating vectors is also a cointegrating vector. In the simplest case, if there are two cointegrating vectors in Model B, there always exists a linear combination of those cointegrating vectors that ‘nets out’ one of the variables. Cointegration analysis of the subset of variables (i.e. Model A) excluding that variable will then result in a cointegrating vector corresponding to that linear combination. Thus, despite cointegration being a property of Model B, cointegration analysis of the subsystem (Model A) identifies one cointegrating vector. Whether that identification is economically meaningful or not remains in general an open issue, and any such claim must be substantiated in each separate case. We provide several examples in this book: in Section 2.4 we discuss the identification of a consumption function as a cointegrating relationship, and link that discussion to the concept of partial structure. In Chapter 5, the identification of cointegrating relationships corresponding to price- and wage-setting is discussed in detail.

Other important properties of the full model that can be tested from subsystems include the existence of a natural rate of unemployment (see Chapter 6), and the relevance of forward-looking terms in wage- and price-setting (see Chapter 7).

Nevertheless, as pointed out by Johansen (2002), there is a Catch 22 to the above procedure: a general theory for the three steps will contain criteria and conditions which are formulated for the full system. However, sophisticated piecewise modelling can be seen as a sort of gradualism—seeking to establish submodels that represent partial structure: that is, partial models that are invariant to extensions of the sample period, to changes elsewhere in the economy (e.g. due to regime shifts), and that remain the same for extensions of the information set. At the same time, gradualism also implies a readiness to revise a submodel. Revisions are sometimes triggered by forecast failure, but perhaps less often than believed in academic circles: see Section 2.3.2. More mundane reasons include data revisions and data extensions which allow more precise and improved model specifications. The dialogue between model builders and model users often results in revisions too. For example, experienced model users are usually able to pinpoint unfortunate and unintended implications of a single equation's (or submodel's) specification for the properties of the full model.

Obviously, gradualism does not preclude thorough testing of a submodel. On the contrary, the first two steps in the operational procedure above do not require that we know the full model, and testing those conditions has some intuitive appeal, since real life provides ‘new evidence’ through the arrival of new data and ‘natural experiments’ through regime shifts such as changes in government or the financial deregulation seen in many European economies in the recent past. For the last of the three steps, we could in principle think of the full model as the ultimate extension of the information set, and so establishing structure or partial structure represents a way to meet Johansen's observation. In practice, we know that the full model is not attainable. What we do then is to graft the sector model into simplified approximations of Model B, and test the relevant exogeneity assumptions of the partial model within that framework. To the extent that the likelihood function of the simplified Model B adequately represents or approximates the likelihood function of the full Model B, there is no serious problem left. It is also possible to corroborate the entire procedure, since Model A can be tested and improved gradually on new information, which is a way of gaining knowledge that parallels modern Darwinism in the natural sciences. We develop these views further in Section 2.5.

A practical reason for focusing on submodels is that modellers may have good reasons to study some parts of the economy more carefully than others. For a central bank that targets inflation, there is a strong case for getting the model of the inflationary process right. This calls for careful modelling of wage and price formation, conditional on the institutional arrangements for wage bargaining, developments in financial markets, and the evolving real economy, in order to answer a number of important questions: Is there a natural rate (of unemployment) that anchors unemployment as well as inflation? What is the importance of expectations for inflation, and how should they be modelled? What is the role of money in the inflationary process?

We find that, in order to answer such questions, and to probe the competing hypotheses regarding supply-side economics, detailed modelling that draws on information specific to the economy under study is necessary. Taking account of the simultaneity is to a large extent a matter of estimation efficiency. If there is a tradeoff between such efficiency and the issue of getting the economic mechanisms right, the practitioners of macroeconometric modelling should give priority to the latter.



