THE ECONOMETRICS OF MACROECONOMIC MODELLING

# The roles of statistics and economic theory in macroeconometrics

Macroeconometrics draws upon and combines two academic disciplines—economics and statistics. There is hardly any doubt that statisticians have had a decisive influence on quantitative economics in general and on modern macroeconometric modelling in particular.

2.2.1 The influx of statistics into economics

The history of macroeconomic modelling starts with the Dutch economist Jan Tinbergen, who built and estimated the first macroeconometric models in the mid-1930s (Tinbergen 1937). Tinbergen showed how one could build a system of equations into an econometric model of the business cycle, using economic theory to derive behaviourally motivated dynamic equations and statistical methods (of that time) to test them against data. However, there seems to be universal agreement that statistics entered the discipline of economics and econometrics with the contributions of the Norwegian economist Trygve Haavelmo in his treatise ‘The Probability Approach in Econometrics’ (Haavelmo 1944; see Royal Swedish Academy of Science 1990, Klein 1988, Morgan 1990, or Hendry and Morgan 1995). Haavelmo was inspired by some of the greatest statisticians of that time. As Morgan (1990, p. 242) points out, he was converted to the usefulness of probability ideas by Jerzy Neyman, and he was also influenced by Abraham Wald, whom Haavelmo credited as the source of his understanding of statistical theory.

For our purpose, it is central to note that Haavelmo recognised, and explained in the context of an economic model, that the joint distribution of all observable variables for the whole sample period provides the most general framework for statistical inference (see Hendry et al. 1989). This applies to specification (op. cit., pp. 48–49), as well as identification, estimation, and hypothesis testing:

all come down to one and the same thing, namely to study the properties of the joint probability distribution of random (observable) variables in a stochastic equation system (Haavelmo 1944, p. 85).

Haavelmo’s probabilistic revolution changed econometrics. His thoughts were immediately adopted by Jacob Marschak—a Russian-born scientist who had studied statistics with Slutsky—as the research agenda for the Cowles Commission for the period 1943–47, in reconsidering Tinbergen’s work on business cycles cited above. Marschak was joined by a group of statisticians, mathematicians, and economists, including Haavelmo himself. Their work was to set the standards for modern econometrics and found its way into the textbooks of econometrics from Tintner (1952) and Klein (1953) onwards.

The work of the Cowles Commission also laid the foundations for the development of macroeconomic models and model building, which grew into a large industry in the United States over the next three decades (see Bodkin et al. 1991 and Wallis 1994). These models were mainly designed for short- (and medium-) term forecasting, that is, modelling business cycles. The first model (Klein 1950) was made with the explicit aim of implementing Haavelmo’s ideas into Tinbergen’s modelling framework for the United States economy. Like Tinbergen’s model, it was a small model, and Klein put much weight on the modelling of simultaneous equations. Later models became extremely large systems in which more than 1000 equations were used to describe the behaviour of a modern industrial economy. In such models, less care could be taken about each econometric specification, and simultaneity could not be treated in a satisfactory way. Since the purpose of these models was forecasting, they were evaluated primarily on their forecasting performance. When the models failed to forecast the effects on the industrial economies of the oil price shocks in 1973 and 1979, the macroeconomic modelling industry lost much of its position, particularly in the United States.

In the 1980s, macroeconometric models took advantage of the methodological and conceptual advances in time-series econometrics. Box and Jenkins (1970) had provided and popularised a purely statistical tool for modelling and forecasting univariate time series. The second influx of statistical methodology into econometrics has its roots in the study of the non-stationary nature of economic data series. Clive Granger—with his background in statistics—showed in a series of influential papers the importance of an econometric equation being balanced: a stationary variable cannot be explained by a non-stationary variable, and vice versa (see, for example, Granger 1990). Moreover, the concept of cointegration (see Granger 1981; Engle and Granger 1987, 1991)—that a linear combination of two or more non-stationary variables can be stationary—has proven useful and important in macroeconometric modelling. Within the framework of a general VAR, the statistician Søren Johansen has provided (see Johansen 1988, 1991, 1995b) the most widely used tools for testing for cointegration in a multivariate setting, drawing on the analytical framework of canonical correlation and multivariate reduced rank regression in Anderson (1951).
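The idea of cointegration can be illustrated with a few lines of simulated data. The following is a minimal sketch (the series, the coefficient 2, and the variance thresholds are invented for illustration, not taken from the literature): a random walk x, a second series y that inherits x's stochastic trend, and their linear combination y − 2x, which is stationary even though both x and y are not.

```python
import random
import statistics

random.seed(1)

n = 5000
# x is a random walk: non-stationary (I(1)), wandering ever further
# from its starting point as t grows.
x = [0.0]
for _ in range(n):
    x.append(x[-1] + random.gauss(0, 1))

# y shares x's stochastic trend plus stationary noise, so y is also
# non-stationary -- but the linear combination y - 2x is stationary.
y = [2.0 * xt + random.gauss(0, 1) for xt in x]
z = [yt - 2.0 * xt for yt, xt in zip(y, x)]

# The levels have a large sample variance, while the cointegrating
# combination stays bounded around zero (variance near 1).
print(statistics.pvariance(x) > 50 * statistics.pvariance(z))  # True
```

A balanced equation in Granger's sense would relate x and y through this stationary combination; regressing a stationary variable directly on either non-stationary level would not be balanced.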

2.2.2 The role of economic theory in macroeconometrics

Theoretical models are necessary tools in our attempts to understand and ‘explain’ events in real life. (Haavelmo 1944, p. 1)

But whatever ‘explanations’ we prefer, it is not to be forgotten that they are all our own artificial inventions in a search for an understanding of real life; they are not hidden truths to be ‘discovered’. (Haavelmo 1944, p. 3)

With this starting point, one would not expect the facts or the observations to agree exactly with any precise statement derived from a theoretical model. Economic theories must therefore be formulated as probabilistic statements, and Haavelmo viewed probability theory as indispensable in formalising the notion of models as approximations to reality.