Mostly Harmless Econometrics: An Empiricist’s Companion

Asymptotic OLS Inference

In practice, we don’t usually know what the CEF or the population regression vector is. We therefore draw statistical inferences about these quantities using samples. Statistical inference is what much of traditional econometrics is about. Although this material is covered in any econometrics text, we don’t want to skip the inference step completely. A review of basic asymptotic theory allows us to highlight the important fact that the process of statistical inference is entirely distinct from the question of how a particular set of regression estimates should be interpreted. Whatever a regression coefficient may mean, it has a sampling distribution that is easy to describe and use for statistical inference.[9]

Figure 3.1.2: Regression threads the CEF of average weekly wages given schooling. Sample is limited to white men, age 40-49. Data are from Census IPUMS 1980, 5% sample.

We are interested in the distribution of the sample analog of

$$\beta = E[X_i X_i']^{-1} E[X_i Y_i]$$

in repeated samples. Suppose the vector $W_i = [Y_i, X_i']'$ is independently and identically distributed in a sample of size $N$. A natural estimator of the first population moment, $E[W_i]$, is the sample mean, $\frac{1}{N}\sum_{i=1}^{N} W_i$. By the law of large numbers, this sample moment gets arbitrarily close to the corresponding population moment as the sample size grows. We might similarly consider higher-order moments of the elements of $W_i$, e.g., the matrix of second moments, $E[W_i W_i']$, with sample analog $\frac{1}{N}\sum_{i=1}^{N} W_i W_i'$. Following this principle, the method of moments estimator of $\beta$ replaces each expectation by a sum. This logic leads to the Ordinary Least Squares (OLS) estimator

$$\hat{\beta} = \left[\sum_i X_i X_i'\right]^{-1} \sum_i X_i Y_i.$$

Although we derived $\hat{\beta}$ as a method of moments estimator, it is called the OLS estimator of $\beta$ because it solves the sample analog of the least-squares problem described at the beginning of Section 3.1.2.[10]
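To make the method of moments construction concrete, here is a minimal sketch in Stata's matrix language, Mata. The dataset (Stata's bundled auto data) and the variables are our illustrative choices, not the schooling example in the text; the point is only that the moment formula reproduces what regress reports.

* A minimal sketch: OLS as a method of moments estimator.
* Dataset and variables are illustrative (Stata's bundled auto data).
sysuse auto, clear
mata:
    y = st_data(., "price")
    X = st_data(., "mpg"), J(rows(y), 1, 1)   // regressor plus a constant
    b = invsym(X'X) * (X'y)                   // [sum XiXi']^-1 sum XiYi
    b
end
regress price mpg   // same coefficients from Stata's built-in command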

A - Individual-level data

. regress earnings school, robust

      Source |       SS         df        MS           Number of obs =  409435
-------------+------------------------------           F(  1,409433) = 49118.25
       Model |  22631.4793      1   22631.4793         Prob > F      =  0.0000
    Residual |   188648.31 409433   .460755019         R-squared     =  0.1071
-------------+------------------------------           Adj R-squared =  0.1071
       Total |  211279.789 409434    .51602893         Root MSE      =  .67879

[The coefficient table survives only as an image in the source. As reported in the text below, the schooling coefficient is .0674387, with robust standard error .0003447 and old-fashioned standard error .0003043.]

B - Means by years of schooling

. regress average_earnings school [aweight=count], robust
(sum of wgt is 4.0944e+05)

      Source |       SS         df        MS           Number of obs =      21
-------------+------------------------------           F(  1,    19) =  540.31
       Model |  1.16077332      1   1.16077332         Prob > F      =  0.0000
    Residual |  .040818796     19   .002148358         R-squared     =  0.9660
-------------+------------------------------           Adj R-squared =  0.9642
       Total |  1.20159212     20   .060079606         Root MSE      =  .04635

     average |              Old Fashioned
    earnings |      Coef.      Std. Err.        t
-------------+------------------------------------
      school |   .0674387      .0029013     23.24
      const. |   5.835761      .0381792    152.85

Figure 3.1.3: Micro-data and grouped-data estimates of returns to schooling. Source: 1980 Census IPUMS, 5 percent sample. Sample is limited to white men, age 40-49. Derived from Stata regression output. Old-fashioned standard errors are the default reported. Robust standard errors are heteroscedasticity-consistent. Panel A uses individual-level data. Panel B uses earnings averaged by years of schooling.

The asymptotic sampling distribution of $\hat{\beta}$ depends solely on the definition of the estimand (i.e., the nature of the thing we’re trying to estimate, $\beta$) and the assumption that the data constitute a random sample. Before deriving this distribution, it helps to record the general asymptotic distribution theory that covers our needs. This basic theory can be stated mostly in words. For the purposes of these statements, we assume the reader is familiar with the core terms and concepts of statistical theory (e.g., moments, mathematical expectation, probability limits, and asymptotic distributions). For definitions of these terms and a formal mathematical statement of the theoretical propositions given below, see, e.g., Knight (2000).

THE LAW OF LARGE NUMBERS Sample moments converge in probability to the corresponding population moments. In other words, the probability that the sample mean is close to the population mean can be made as high as you like by taking a large enough sample.

THE CENTRAL LIMIT THEOREM Sample moments are asymptotically Normally distributed (after subtracting the corresponding population moment and multiplying by the square root of the sample size). The covariance matrix is given by the variance of the underlying random variable. In other words, in large enough samples, appropriately normalized sample moments are approximately Normally distributed.
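A hedged simulation sketch of these two results in Stata (the sample size, replication count, and chi-square parent distribution are our choices, purely illustrative): standardized sample means from a skewed distribution come out approximately Normal.

* A minimal simulation sketch of the LLN and CLT (parameters illustrative).
* Each replication draws N=1000 observations from a skewed chi2(1)
* distribution (mean 1, variance 2) and stores sqrt(N)*(sample mean - 1).
clear
set seed 12345
capture program drop onedraw
program define onedraw, rclass
    clear
    set obs 1000
    gen x = rchi2(1)
    summarize x
    return scalar z = sqrt(1000) * (r(mean) - 1)
end
simulate z = r(z), reps(2000) nodots: onedraw
summarize z          // mean near 0, variance near 2 (the LLN at work)
histogram z, normal  // roughly Normal despite the skewed parent (the CLT)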

SLUTSKY’S THEOREM

(a) Consider the sum of two random variables, one of which converges in distribution and the other of which converges in probability to a constant: the asymptotic distribution of this sum is unaffected by replacing the one that converges to a constant by this constant. Formally, let $a_N$ be a statistic with a limiting distribution and let $b_N$ be a statistic with probability limit $b$. Then $a_N + b_N$ and $a_N + b$ have the same limiting distribution.

(b) Consider the product of two random variables, one of which converges in distribution and the other of which converges in probability to a constant: the asymptotic distribution of this product is unaffected by replacing the one that converges to a constant by this constant. This allows us to replace some sample moments by population moments (i.e., by their probability limits) when deriving distributions. Formally, let $a_N$ be a statistic with a limiting distribution and let $b_N$ be a statistic with probability limit $b$. Then $a_N b_N$ and $a_N b$ have the same asymptotic distribution.

THE CONTINUOUS MAPPING THEOREM Probability limits pass through continuous functions. For example, the probability limit of any continuous function of a sample moment is the function evaluated at the corresponding population moment. Formally, the probability limit of $h(b_N)$ is $h(b)$, where $\text{plim}\, b_N = b$ and $h(\cdot)$ is continuous at $b$.

THE DELTA METHOD Consider a vector-valued random variable that is asymptotically Normally distributed. Most scalar functions of this random variable are also asymptotically Normally distributed, with covariance matrix given by a quadratic form with the covariance matrix of the random variable on the inside and the gradient of the function evaluated at the probability limit of the random variable on the outside. Formally, the asymptotic distribution of $h(b_N)$ is Normal with covariance matrix $\nabla h(b)' \Omega \nabla h(b)$, where $\text{plim}\, b_N = b$, $h(\cdot)$ is continuously differentiable at $b$ with gradient $\nabla h(b)$, and $b_N$ has asymptotic covariance matrix $\Omega$.[11]
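For instance, a minimal scalar example (our own, not from the text): suppose $b_N$ is asymptotically Normal with probability limit $b > 0$ and asymptotic variance $\omega^2$, and take $h(b_N) = \ln b_N$. Then

$$\sqrt{N}\left(\ln b_N - \ln b\right) \;\xrightarrow{d}\; \mathcal{N}\!\left(0,\; \frac{\omega^2}{b^2}\right),$$

since $h(b) = \ln b$ has derivative $h'(b) = 1/b$, so the quadratic form $\nabla h(b)'\,\Omega\,\nabla h(b)$ collapses to $\omega^2/b^2$.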

We can use these results to derive the asymptotic distribution of $\hat{\beta}$ in two ways. A conceptually straightforward but somewhat inelegant approach is to use the delta method: $\hat{\beta}$ is a function of sample moments, and is therefore asymptotically Normally distributed. It remains only to find the covariance matrix of the asymptotic distribution from the gradient of this function. (Note that consistency of $\hat{\beta}$ comes immediately from the continuous mapping theorem.) An easier and more instructive derivation uses the Slutsky and central limit theorems. Note first that we can write

$$Y_i = X_i'\beta + [Y_i - X_i'\beta] = X_i'\beta + e_i, \qquad (3.1.6)$$

where the residual $e_i$ is defined as the difference between the dependent variable and the population regression function, as before. This is as good a place as any to point out that these residuals are uncorrelated with the regressors by definition of $\beta$. In other words, $E[X_i e_i] = 0$ is a consequence of $\beta = E[X_i X_i']^{-1} E[X_i Y_i]$ and $e_i = Y_i - X_i'\beta$, and not an assumption about an underlying economic relation. We return to this important point in the discussion of causal regression models in Section 3.2.[12]
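The sample analog of this orthogonality holds exactly as well: OLS residuals are orthogonal to the regressors by construction. A quick check in Mata, continuing the illustrative auto-data sketch above:

* Residuals are orthogonal to regressors by construction (illustrative data).
sysuse auto, clear
mata:
    y = st_data(., "price")
    X = st_data(., "mpg"), J(rows(y), 1, 1)
    b = invsym(X'X) * (X'y)
    e = y - X*b
    X'e          // numerically zero, up to rounding error
end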

Substituting the identity (3.1.6) for $Y_i$ in the formula for $\hat{\beta}$, we have

$$\hat{\beta} = \beta + \left[\sum_i X_i X_i'\right]^{-1} \sum_i X_i e_i.[13]$$

The asymptotic distribution of $\hat{\beta}$ is the asymptotic distribution of $\sqrt{N}(\hat{\beta} - \beta) = \left[\frac{1}{N}\sum_i X_i X_i'\right]^{-1} \frac{1}{\sqrt{N}}\sum_i X_i e_i$. By the Slutsky theorem, this has the same asymptotic distribution as $E[X_i X_i']^{-1} \frac{1}{\sqrt{N}}\sum_i X_i e_i$. Since $E[X_i e_i] = 0$, $\frac{1}{\sqrt{N}}\sum_i X_i e_i$ is a root-$N$-normalized and centered sample moment. By the central limit theorem, this is asymptotically Normally distributed with mean zero and covariance matrix $E[X_i X_i' e_i^2]$, since this fourth moment is the covariance matrix of $X_i e_i$. Therefore, $\hat{\beta}$ has an asymptotic Normal distribution, with probability limit $\beta$, and covariance matrix

$$E[X_i X_i']^{-1} E[X_i X_i' e_i^2] E[X_i X_i']^{-1}. \qquad (3.1.7)$$

The standard errors used to construct t-statistics are the square roots of the diagonal elements of this matrix. In practice these standard errors are estimated by substituting sums for expectations, and using the estimated residuals, $\hat{e}_i = Y_i - X_i'\hat{\beta}$, to form the empirical fourth moment, $\sum_i X_i X_i' \hat{e}_i^2 / N$.
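Here is a hedged Mata sketch of that sandwich calculation, again on the illustrative auto data. Note that Stata's regress, robust additionally applies a finite-sample degrees-of-freedom adjustment, so its reported standard errors differ slightly from the raw formula below.

* A minimal sketch: Eicker-White standard errors from the sandwich formula.
* Illustrative data; regress, robust adds a finite-sample adjustment factor.
sysuse auto, clear
mata:
    y = st_data(., "price")
    X = st_data(., "mpg"), J(rows(y), 1, 1)
    A = invsym(X'X)                    // "bread": [sum XiXi']^-1
    e = y - X*A*(X'y)                  // OLS residuals
    meat = (X :* e)' (X :* e)          // "meat": sum XiXi' ei^2
    V = A * meat * A                   // sandwich covariance, as in (3.1.7)
    sqrt(diagonal(V))                  // robust standard errors
end
regress price mpg, robust   // nearly identical, up to the df adjustment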

Asymptotic standard errors computed in this way are known as heteroskedasticity-consistent standard errors, White (1980a) standard errors, or Eicker-White standard errors in recognition of Eicker’s (1967) derivation. They are also known as “robust” standard errors (e.g., in Stata). These standard errors are said to be robust because, in large enough samples, they provide accurate hypothesis tests and confidence intervals given minimal assumptions about the data and model. In particular, our derivation of the limiting distribution makes no assumptions other than those needed to ensure that basic statistical results like the central limit theorem go through. These are not, however, the standard errors that you get by default from packaged software. Default standard errors are derived under a homoskedasticity assumption, specifically, that $E[e_i^2 | X_i] = \sigma^2$, a constant. Given this assumption, we have

$$E[X_i X_i' e_i^2] = E\left(X_i X_i' E[e_i^2 | X_i]\right) = \sigma^2 E[X_i X_i'],$$

by iterating expectations. The asymptotic covariance matrix of $\hat{\beta}$ then simplifies to

$$E[X_i X_i']^{-1} E[X_i X_i' e_i^2] E[X_i X_i']^{-1} = E[X_i X_i']^{-1} \sigma^2 E[X_i X_i'] E[X_i X_i']^{-1} = E[X_i X_i']^{-1} \sigma^2. \qquad (3.1.8)$$

The diagonal elements of (3.1.8) are what SAS or Stata report unless you request otherwise.

Our view of regression as an approximation to the CEF makes heteroskedasticity seem natural. If the CEF is nonlinear and you use a linear model to approximate it, then the quality of the fit between the regression line and the CEF will vary with $X_i$. Hence, the residuals will be larger, on average, at values of $X_i$ where the fit is poorer. Even if you are prepared to assume that the conditional variance of $Y_i$ given $X_i$ is constant, the fact that the CEF is nonlinear means that $E[(Y_i - X_i'\beta)^2 | X_i]$ will vary with $X_i$. To see this, note that, as a rule,

$$E[(Y_i - X_i'\beta)^2 | X_i] = E\{[(Y_i - E[Y_i|X_i]) + (E[Y_i|X_i] - X_i'\beta)]^2 | X_i\} = V[Y_i|X_i] + (E[Y_i|X_i] - X_i'\beta)^2. \qquad (3.1.9)$$

Therefore, even if $V[Y_i|X_i]$ is constant, the residual variance increases with the square of the gap between the regression line and the CEF, a fact noted in White (1980b).[14]

In the same spirit, it’s also worth noting that while a linear CEF makes homoskedasticity possible, this is not a sufficient condition for homoskedasticity. Our favorite example in this context is the linear probability model (LPM). A linear probability model is any regression where the dependent variable is zero-one, i.e., a dummy variable such as an indicator for labor force participation. Suppose the regression model is saturated, so the CEF is linear. Because the CEF is linear, the residual variance is also the conditional variance, $V[Y_i|X_i]$. But the dependent variable is a Bernoulli trial, and the conditional variance of a Bernoulli trial is $E[Y_i|X_i](1 - E[Y_i|X_i])$, which varies with $X_i$. We conclude that LPM residuals are necessarily heteroskedastic unless the only regressor is a constant.
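A hedged simulation sketch of this point (all parameters are our own): with a binary outcome whose success probability varies across the two values of a dummy regressor, the CEF is exactly linear, yet robust and conventional standard errors diverge.

* A minimal LPM sketch (simulated, illustrative parameters): a saturated
* regression of a binary outcome on a dummy has a linear CEF, but the
* Bernoulli variance p(1-p) differs across groups, so the residuals are
* heteroskedastic by construction.
clear
set obs 10000
set seed 12345
gen d = (runiform() < 0.5)        // dummy regressor
gen p = cond(d, 0.9, 0.5)         // P[Y=1|d]: 0.9 vs 0.5
gen y = (runiform() < p)          // Bernoulli outcome
regress y d                       // conventional standard errors
regress y d, robust               // heteroskedasticity-consistent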

These points of principle notwithstanding, as an empirical matter, heteroskedasticity may matter little. In the micro-data schooling regression depicted in Figure 3.1.3, the robust standard error is .0003447, while the old-fashioned standard error is .0003043, only slightly smaller. The standard errors from the grouped-data regression, where the errors are necessarily heteroskedastic if group sizes differ, change somewhat more: compare the .004 robust standard error to the .0029 conventional standard error. Based on our experience, these differences are typical. If heteroskedasticity matters too much, say, more than a 30% increase or any marked decrease in standard errors, you should worry about possible programming errors or other problems (for example, robust standard errors below conventional ones may be a sign of finite-sample bias in the robust calculation; see Chapter 8, below).
