The roles of statistics and economic theory in macroeconometrics

Macroeconometrics draws upon and combines two academic disciplines: economics and statistics. There is hardly any doubt that statisticians have had a decisive influence on quantitative economics in general and on modern macroeconometric modelling in particular.

2.2.1 The influx of statistics into economics

The history of macroeconomic modelling starts with the Dutch economist Jan Tinbergen, who built and estimated the first macroeconometric models in the mid-1930s (Tinbergen 1937). Tinbergen showed how one could build a system of equations into an econometric model of the business cycle, using economic theory to derive behaviourally motivated dynamic equations and statistical methods (of that time) to test them against data. However, there seems to be universal agreement that statistics entered the discipline of economics and econometrics with the contributions of the Norwegian economist Trygve Haavelmo in his treatise ‘The Probability Approach in Econometrics’ (Haavelmo 1944; see Royal Swedish Academy of Science 1990, Klein 1988, Morgan 1990, or Hendry and Morgan 1995). Haavelmo was inspired by some of the greatest statisticians of that time. As Morgan (1990, p. 242) points out, he was converted to the usefulness of probability ideas by Jerzy Neyman, and he was also influenced by Abraham Wald, whom Haavelmo credited as the source of his understanding of statistical theory.

For our purpose, it is central to note that Haavelmo recognised, and explained in the context of an economic model, that the joint distribution of all observable variables for the whole sample period provides the most general framework for statistical inference (see Hendry et al. 1989). This applies to specification (op. cit., pp. 48-49), as well as identification, estimation, and hypothesis testing:

all come down to one and the same thing, namely to study the properties of the joint probability distribution of random (observable) variables in a stochastic equation system (Haavelmo 1944, p. 85).

Haavelmo’s probabilistic revolution changed econometrics. His thoughts were immediately adopted by Jacob Marschak—a Russian-born scientist who had studied statistics with Slutsky—as the research agenda for the Cowles Commission for the period 1943-47, in reconsidering Tinbergen’s work on business cycles cited above. Marschak was joined by a group of statisticians, mathematicians, and economists, including Haavelmo himself. Their work was to set the standards for modern econometrics and found its way into the textbooks of econometrics from Tintner (1952) and Klein (1953) onwards.

The work of the Cowles Commission also laid the foundations for the development of macroeconomic models and model building, which grew into a large industry in the United States over the next three decades (see Bodkin et al. 1991 and Wallis 1994). These models were mainly designed for short (and medium) term forecasting, that is, modelling business cycles. The first model (Klein 1950) was made with the explicit aim of implementing Haavelmo’s ideas into Tinbergen’s modelling framework for the United States economy. Like Tinbergen’s model, it was a small model, and Klein put much weight on the modelling of simultaneous equations. Later models became extremely large systems in which more than 1000 equations were used to describe the behaviour of a modern industrial economy. In such models, less care could be taken about each econometric specification, and simultaneity could not be treated in a satisfactory way. The forecasting purpose of these models meant that they were evaluated on their forecasting performance. When the models failed to forecast the effects on the industrial economies of the oil price shocks in 1973 and 1979, the macroeconomic modelling industry lost much of its position, particularly in the United States.

In the 1980s, macroeconometric models took advantage of the methodological and conceptual advances in time-series econometrics. Box and Jenkins (1970) had provided and made popular a purely statistical tool for modelling and forecasting univariate time-series. The second influx of statistical methodology into econometrics has its roots in the study of the non-stationary nature of economic data series. Clive Granger—with his background in statistics—has in a series of influential papers shown the importance of an econometric equation being balanced: a stationary variable cannot be explained by a non-stationary variable and vice versa (see, for example, Granger 1990). Moreover, the concept of cointegration (see Granger 1981; Engle and Granger 1987, 1991)—that a linear combination of two or more non-stationary variables can be stationary—has proven useful and important in macroeconometric modelling. Within the framework of a general VAR, the statistician Søren Johansen has provided (see Johansen 1988, 1991, 1995b) the most widely used tools for testing for cointegration in a multivariate setting, drawing on the analytical framework of canonical correlation and multivariate reduced rank regression in Anderson (1951).
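The cointegration idea can be made concrete with a small simulation: two non-stationary series that share a common stochastic trend admit a stationary linear combination, and an ordinary least-squares regression of one on the other recovers the cointegrating coefficient. The sketch below is purely illustrative (the series, coefficient, and sample size are invented for the example, not taken from the text):

```python
import numpy as np

# Illustrative sketch: two I(1) series sharing a common stochastic trend
# are cointegrated -- a linear combination of them is I(0).
rng = np.random.default_rng(0)
n = 5000
x = np.cumsum(rng.normal(size=n))   # random walk: non-stationary, I(1)
u = rng.normal(size=n)              # stationary noise, I(0)
y = 2.0 * x + u                     # y inherits the stochastic trend of x

# OLS of y on x (no intercept, for simplicity) recovers the cointegrating
# coefficient; the residual y - beta*x is then approximately stationary.
beta = np.sum(x * y) / np.sum(x * x)
resid = y - beta * x

print(abs(beta - 2.0) < 0.1)        # the estimate is close to the true value 2
print(resid.var() < 0.1 * x.var())  # residual variance is small next to x's
```

In practice one would test the residual for stationarity formally, for example with an Engle–Granger two-step test, rather than compare variances; the variance comparison here only conveys the intuition that the common trend cancels out of the linear combination.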

Also, there has been increasing attention to the role of evaluation in modern econometrics (see Granger 1990, 1999). The so-called LSE methodology emphasises the importance of testing and evaluating econometric models (see Hendry 1993a, 1995a, Mizon 1995, and Ericsson 2005). Interestingly, Hendry et al. (1989) claim that many aspects of the Haavelmo research agenda were ignored for a long time. For instance, the joint distribution function for observable variables was recognised by the Cowles Commission as central to solving problems of statistical inference, but the ideas did not influence empirical modelling strategies for decades. By contrast, many developments in econometrics after 1980 are in line with this and other aspects of Haavelmo’s research programme. This is also true for the role of economic theory in econometrics:

Theoretical models are necessary tools in our attempts to understand and ‘explain’ events in real life. (Haavelmo 1944, p. 1)

But whatever ‘explanations’ we prefer, it is not to be forgotten that they are all our own artificial inventions in a search for an understanding of real life; they are not hidden truths to be ‘discovered’. (Haavelmo 1944, p. 3)

With this starting point, one would not expect the facts or the observations to agree with any precise statement derived from a theoretical model. Economic theories must then be formulated as probabilistic statements and Haavelmo viewed probability theory as indispensable in formalising the notion of models being approximations to reality.



