Mostly Harmless Econometrics: An Empiricist’s Companion

Control for Covariates Using the Propensity Score

The most important result in regression theory is the omitted variables bias formula: coefficients on included variables are unaffected by the omission of variables when the variables omitted are uncorrelated with the variables included. The propensity score theorem, due to Rosenbaum and Rubin (1983), extends this idea to estimation strategies that rely on matching instead of regression, where the causal variable of interest is a treatment dummy.[31]

The propensity score theorem states that if potential outcomes are independent of treatment status conditional on a multivariate covariate vector, Xi, then potential outcomes are independent of treatment status conditional on a scalar function of covariates, the propensity score, defined as p(Xi) = E[Di|Xi]. Formally, we have

Theorem 3.3.1 The Propensity-Score Theorem.

Suppose the CIA holds for Yji, j = 0, 1. Then Yji ⫫ Di | p(Xi).

Proof. The claim is true if P[Di = 1|Yji, p(Xi)] does not depend on Yji. Now,

$$
\begin{aligned}
P[D_i = 1 \mid Y_{ji}, p(X_i)] &= E[D_i \mid Y_{ji}, p(X_i)] \\
&= E\big\{E[D_i \mid Y_{ji}, p(X_i), X_i] \mid Y_{ji}, p(X_i)\big\} \\
&= E\big\{E[D_i \mid Y_{ji}, X_i] \mid Y_{ji}, p(X_i)\big\} \\
&= E\big\{E[D_i \mid X_i] \mid Y_{ji}, p(X_i)\big\}, \quad \text{by the CIA.}
\end{aligned}
$$

But E{E[Di|Xi]|Yji, p(Xi)} = E{p(Xi)|Yji, p(Xi)}, which is clearly just p(Xi). ∎

Like the OVB formula for regression, the propensity score theorem says you need only control for covariates that affect the probability of treatment. But it also says something more: the only covariate you really need to control for is the probability of treatment itself. In practice, the propensity score theorem is usually used for estimation in two steps: first, p(Xi) is estimated using some kind of parametric model, say, Logit or Probit. Then estimates of the effect of treatment are computed either by matching on the fitted values from this first step, or by a weighting scheme described below (see Imbens, 2004, for an overview).
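As a concrete illustration of the first step, here is a minimal sketch in Python. The DataFrame layout, the variable names (d for the treatment dummy, x1 and x2 for covariates), and the use of statsmodels are assumptions for the example, not part of the original text.

```python
# First step: fit a parametric model (here a Logit) for the propensity
# score p(X_i) = E[D_i | X_i] and return the fitted values, which feed
# the matching or weighting schemes described in the text.
import statsmodels.api as sm

def estimate_pscore(df, treatment="d", covariates=("x1", "x2")):
    X = sm.add_constant(df[list(covariates)])
    logit = sm.Logit(df[treatment], X).fit(disp=0)
    return logit.predict(X)  # estimated p(X_i) for each observation
```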

In practice there are many ways to use the propensity score theorem for estimation. Direct propensity-score matching works like covariate matching, except that we match on the score instead of the covariates directly. By the propensity score theorem and the CIA,

$$
E[Y_{1i} - Y_{0i} \mid D_i = 1] = E\big\{E[Y_i \mid p(X_i), D_i = 1] - E[Y_i \mid p(X_i), D_i = 0] \,\big|\, D_i = 1\big\}.
$$

Estimates of the effect of treatment on the treated can therefore be obtained by stratifying on an estimate of p(Xi) and substituting conditional sample averages for expectations, or by matching each treated observation to controls with the same or similar values of the propensity score (both of these approaches were used by Dehejia and Wahba, 1999). Alternatively, a model-based or non-parametric estimate of E[Yi|p(Xi), Di] can be substituted for these conditional mean functions and the outer expectation replaced with a sum (as in Heckman, Ichimura, and Todd, 1998).
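A stratification estimator along these lines can be sketched in a few lines of Python; the block boundaries (quintiles of the estimated score) and the rule that a block must contain both treated and control observations are illustrative choices, not prescriptions from the text.

```python
# Stratify on the estimated score, compute the treatment-control
# difference in mean outcomes within each block, and average the
# differences with weights proportional to the number of treated
# observations (an estimate of the effect on the treated).
import numpy as np
import pandas as pd

def stratification_tot(y, d, pscore, n_blocks=5):
    df = pd.DataFrame({"y": y, "d": d, "p": pscore})
    df["block"] = pd.qcut(df["p"], q=n_blocks, labels=False, duplicates="drop")
    effects, weights = [], []
    for _, cell in df.groupby("block"):
        treated, controls = cell[cell.d == 1], cell[cell.d == 0]
        if len(treated) and len(controls):  # need both groups in a block
            effects.append(treated.y.mean() - controls.y.mean())
            weights.append(len(treated))
    return np.average(effects, weights=weights)
```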

The somewhat niftier weighting approach to propensity-score estimation skips the cumbersome matching step by exploiting the fact that the CIA implies E[YiDi/p(Xi)] = E[Y1i] and E[Yi(1 − Di)/(1 − p(Xi))] = E[Y0i]. Therefore, given a scheme for estimating p(Xi), we can construct estimates of the average treatment effect from the sample analog of

$$
E[Y_{1i} - Y_{0i}]
= E\!\left[\frac{Y_iD_i}{p(X_i)} - \frac{Y_i(1 - D_i)}{1 - p(X_i)}\right]
= E\!\left[\frac{(D_i - p(X_i))\,Y_i}{p(X_i)(1 - p(X_i))}\right]. \qquad (3.3.11)
$$
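To see why the first of these identities holds, iterate expectations over Xi and use the fact that YiDi = Y1iDi together with the CIA; the second identity follows from the same argument applied to the control observations:

$$
E\!\left[\frac{Y_iD_i}{p(X_i)}\right]
= E\!\left\{\frac{E[Y_{1i}D_i \mid X_i]}{p(X_i)}\right\}
= E\!\left\{\frac{E[Y_{1i} \mid X_i]\,p(X_i)}{p(X_i)}\right\}
= E\big[E[Y_{1i} \mid X_i]\big] = E[Y_{1i}].
$$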

The last expression in (3.3.11) is an estimand of the form suggested by Newey (1990) and Robins, Mark, and Newey (1992). We can similarly calculate the effect of treatment on the treated from the sample analog of

$$
E[Y_{1i} - Y_{0i} \mid D_i = 1]
= E\!\left[\frac{(D_i - p(X_i))\,Y_i}{P[D_i = 1]\,(1 - p(X_i))}\right]. \qquad (3.3.12)
$$

The idea that you can correct for non-random sampling by weighting by the reciprocal of the probability of selection dates back to Horvitz and Thompson (1952). Of course, to make this approach feasible, and for the resulting estimates to be consistent, we need a consistent estimator for p(Xi).
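Given fitted scores from a first-step model like the Logit sketched earlier, the sample analogs of (3.3.11) and (3.3.12) take only a few lines; the function and variable names below are illustrative.

```python
# Horvitz-Thompson-style weighting estimators: sample analogs of
# (3.3.11) for the average treatment effect and (3.3.12) for the
# effect on the treated, with P[D_i = 1] replaced by the treated share.
import numpy as np

def ate_weighting(y, d, p_hat):
    return np.mean((d - p_hat) * y / (p_hat * (1.0 - p_hat)))

def tot_weighting(y, d, p_hat):
    return np.mean((d - p_hat) * y / (1.0 - p_hat)) / np.mean(d)
```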

The Horvitz-Thompson version of the propensity-score approach is appealing since the estimator is essentially automated, with no cumbersome matching required. The Horvitz-Thompson approach also highlights the close link between propensity-score matching and regression, much as discussed for covariate matching in Section 3.3.1. Consider again the regression estimand, δR, for the population regression of Yi on Di, controlling for a saturated model for covariates. This estimand can be written

$$
\delta_R = \frac{E[(D_i - p(X_i))\,Y_i]}{E[p(X_i)(1 - p(X_i))]}. \qquad (3.3.13)
$$

The two Horvitz-Thompson matching estimands and the regression estimand are all members of the class of weighted average estimands considered by Hirano, Imbens, and Ridder (2003):

$$
E\!\left[g(X_i)\left(\frac{Y_iD_i}{p(X_i)} - \frac{Y_i(1 - D_i)}{1 - p(X_i)}\right)\right], \qquad (3.3.14)
$$

where g(Xi) is a known weighting function (to go from estimand to estimator, replace p(Xi) with a consistent estimator and expectations with sums). For the average treatment effect, set g(Xi) = 1; for the effect on the treated, set g(Xi) = p(Xi)/P[Di = 1]; and for regression, set

$$
g(X_i) = \frac{p(X_i)(1 - p(X_i))}{E[p(X_i)(1 - p(X_i))]}.
$$
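A short sketch makes the unification concrete: the same sample analog of (3.3.14), evaluated at the three weighting functions above, reproduces the estimands discussed so far. Names are illustrative and p_hat is a vector of fitted scores.

```python
# Sample analog of the weighted-average estimand (3.3.14) for a given
# weighting function g(X_i), plus the three choices from the text.
import numpy as np

def weighted_estimand(y, d, p_hat, g):
    return np.mean(g * (y * d / p_hat - y * (1 - d) / (1 - p_hat)))

def ate(y, d, p_hat):                      # g(X_i) = 1
    return weighted_estimand(y, d, p_hat, np.ones_like(p_hat))

def tot(y, d, p_hat):                      # g(X_i) = p(X_i)/P[D_i = 1]
    return weighted_estimand(y, d, p_hat, p_hat / np.mean(d))

def regression_estimand(y, d, p_hat):      # g(X_i) = p(1-p)/E[p(1-p)]
    g = p_hat * (1 - p_hat) / np.mean(p_hat * (1 - p_hat))
    return weighted_estimand(y, d, p_hat, g)
```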

This similarity highlights once again the fact that regression and matching—including propensity score matching—are not really different animals, at least not until we specify a model for the propensity score.

A big question here is how best to model and estimate p(Xi), or how much smoothing or stratification to use when estimating E[Yi|p(Xi), Di], especially if the covariates are continuous. The regression analog of this question is how to parametrize the control variables (e.g., polynomials or main effects and interaction terms if the covariates are coded as discrete). The answer is inherently application-specific. A growing empirical literature suggests that a Logit model for the propensity score with a few polynomial terms in continuous covariates works well in practice, though this cannot be a theorem (see, e.g., Dehejia and Wahba, 1999).
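In code, such a specification might look like the sketch below; the covariate names (age, earnings) and the particular polynomial and interaction terms are assumptions for illustration rather than a recommendation from the text.

```python
# A Logit for the propensity score with low-order polynomial terms in
# continuous covariates, specified with a patsy formula.
import statsmodels.formula.api as smf

def flexible_pscore(df):
    formula = "d ~ age + I(age**2) + earnings + I(earnings**2) + I(age*earnings)"
    fit = smf.logit(formula, data=df).fit(disp=0)
    return fit.predict(df)
```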

A developing theoretical literature has produced some thought-provoking theorems on efficient use of the
propensity score. First, from the point of view of asymptotic efficiency, there is usually a cost to matching on the propensity score instead of full covariate matching. We can get lower asymptotic standard errors by matching on any covariate that explains outcomes, whether or not it turns up in the propensity score. This we know from Hahn’s (1998) investigation of the maximal precision attainable for estimates of treatment effects under the CIA, with and without knowledge of the propensity score. For example, in Angrist (1998), there is an efficiency gain from matching on year of birth, even if the probability of serving in the military is unrelated to birth year, because earnings are related to birth year. A regression analog for this point is the result that, even with no omitted variables bias, the long regression generates more precise estimates of the coefficients on the variables included in the short regression whenever the additional covariates have some predictive power for outcomes, since they reduce the residual variance (see Section 3.1.3).

Hahn’s (1998) results raise the question of why we should ever bother with estimators that use the propensity score. A philosophical argument is that the propensity score rightly focuses researcher attention on models for treatment assignment, something about which we may have reasonably good information, instead of the typically more complex and mysterious process determining outcomes. This view seems especially compelling when treatment assignment is the outcome of human institutions or government regulations while the process determining outcomes is more anonymous (e.g., a market). For example, in a time series evaluation of the causal effects of monetary policy, Angrist and Kuersteiner (2004) argue that we know more about how the Federal Reserve sets interest rates than about the process determining GDP. In the same spirit, it may also be easier to validate a model for treatment assignment than to validate a model for outcomes (see, e.g., Rosenbaum and Rubin, 1985, for a version of this argument).

A more precise though purely statistical argument for using the propensity score is laid out in Angrist and Hahn (2004). This paper shows that even though there is no asymptotic efficiency gain from the use of estimators based on the propensity score, there will often be a gain in precision in finite samples. Since all real data sets are finite, this result is empirically relevant. Intuitively, if the covariates omitted from the propensity score explain little of the variation in outcomes (in a purely statistical sense), it may then be better to ignore them than to bear the statistical burden imposed by the need to estimate their effects. This is easy to see in studies using data sets such as the NLSY where there are hundreds of covariates that might predict outcomes. In practice, we focus on a small subset of all possible covariates. This subset is chosen with an eye to what predicts treatment as well as outcomes.

Finally, Hirano, Imbens, and Ridder (2003) provide an alternative asymptotic resolution of the “propensity score paradox” generated by Hahn’s (1998) theorems. They show that even though estimates of treatment effects based on a known propensity score are inefficient, for models with continuous covariates, a Horvitz-Thompson-type weighting estimator is efficient when weighting uses a non-parametric estimate of the score. The fact that the propensity score is estimated and the fact that it is estimated non-parametrically are both key for the Hirano, Imbens, and Ridder conclusions.

Do the Hirano, Imbens, and Ridder (2003) results resolve the propensity-score paradox? For the moment, we prefer the finite-sample resolution given by Angrist and Hahn (2004). Their results highlight the fact that it is the researchers’ willingness to impose some restrictions on the score which gives propensity-score-based inference its conceptual and statistical power. In Angrist (1998), for example, an application with high-dimensional though discrete covariates, the unrestricted non-parametric estimator of the score is just the empirical probability of treatment in each covariate cell. With this nonparametric estimator plugged in for p(Xi), it’s straightforward to show that the sample analogs of (3.3.11) and (3.3.12) are algebraically equivalent to the corresponding full-covariate matching estimators. Hence, it’s no surprise that score-based estimation comes out efficient, since full-covariate matching is the asymptotically efficient benchmark. An essential element of propensity score methods is the use of prior knowledge for dimension reduction. The statistical payoff is an improvement in finite-sample behavior. If you’re not prepared to smooth, restrict, or otherwise reduce the dimensionality of the matching problem in a manner that has real empirical consequences, then you might as well go for full covariate matching or saturated regression control.
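The algebraic equivalence claimed for the cell-level score is easy to verify numerically. The simulation below is purely illustrative (made-up data-generating process and cell structure); it plugs the within-cell empirical treatment probability into the sample analog of (3.3.12) and compares the result to the treated-weighted average of within-cell treatment-control contrasts, keeping only cells that contain both treated and control observations.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
x = rng.integers(0, 10, size=n)                      # discrete covariate cells
d = rng.binomial(1, 0.2 + 0.05 * x)                  # treatment probability varies by cell
y = 1.0 + 0.3 * x + 2.0 * d + rng.normal(size=n)     # outcome
df = pd.DataFrame({"x": x, "d": d, "y": y})

# keep cells containing both treated and control observations
cells = df.groupby("x").filter(lambda c: 0 < c["d"].mean() < 1).copy()

# unrestricted nonparametric score: empirical treatment probability per cell
cells["p_hat"] = cells.groupby("x")["d"].transform("mean")

# sample analog of (3.3.12)
tot_weighting = (np.mean((cells["d"] - cells["p_hat"]) * cells["y"]
                         / (1 - cells["p_hat"])) / cells["d"].mean())

# full-covariate matching: treated-weighted average of within-cell contrasts
by_cell = cells.groupby("x").apply(
    lambda c: pd.Series({"diff": c["y"][c["d"] == 1].mean() - c["y"][c["d"] == 0].mean(),
                         "n1": c["d"].sum()}))
tot_matching = np.average(by_cell["diff"], weights=by_cell["n1"])

assert np.isclose(tot_weighting, tot_matching)
```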
