Results of Manski and McFadden
Manski and McFadden (1981) presented a comprehensive summary of all the types of models and estimators under choice-based sampling, including the results of the other papers discussed elsewhere in Section 9.5. However, we shall discuss only those topics that are not dealt with in the other papers, namely, the consistency and the asymptotic normality of CBMLE in the cases where f is known and Q is either known or unknown, and of MME in the case where f is unknown and Q is known. Because the results presented here are straightforward, we shall only sketch the proofs of consistency and asymptotic normality and shall not spell out all the conditions needed.
First, consider the case where both f and Q are known. CBMLE maximizes the likelihood function given in (9.5.3) subject to the condition
Q_0(j) = \int P(j|x, \beta) f(x)\, dx, \qquad j = 1, 2, \ldots, m. \qquad (9.5.31)
The condition corresponding to j = 0 is redundant because \sum_{j=0}^{m} P(j|x, \beta) = 1, which will be implicitly observed throughout the following analysis. Ignoring the known components of the likelihood function, CBMLE is defined as maximizing \sum_{i=1}^{n} \log P(j_i|x_i, \beta) with respect to \beta subject to (9.5.31). Because this maximand is the essential part of the log likelihood function under random sampling, we have CBMLE = RSMLE in this case. However, the properties of the estimator must be derived under the assumption of choice-based sampling and are therefore different from the results of Section 9.3.
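To make this concrete, here is a minimal numerical sketch (ours, not from Manski and McFadden: the binary logit form of P(1|x, \beta), the four-point known density f, and all variable names are illustrative assumptions), imposing (9.5.31) through the optimizer's equality-constraint interface:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Known density f of x: discrete here, so the integral in (9.5.31) is a sum.
x_pts = np.array([-1.0, 0.0, 1.0, 2.0])
f_x = np.full(4, 0.25)
beta_true = np.array([0.5, 1.0])

def P1(x, b):  # P(1|x, beta) under the assumed logit form
    return 1.0 / (1.0 + np.exp(-(b[0] + b[1] * x)))

Q1 = float(f_x @ P1(x_pts, beta_true))  # known Q_0(1); Q_0(0) = 1 - Q1

# Choice-based sample: x drawn from f(x|j) = P(j|x)f(x)/Q_0(j), with H(1) = 1/2.
n, n1 = 1000, 500
x1 = rng.choice(x_pts, n1, p=f_x * P1(x_pts, beta_true) / Q1)
x0 = rng.choice(x_pts, n - n1, p=f_x * (1 - P1(x_pts, beta_true)) / (1 - Q1))
x_obs = np.concatenate([x1, x0])
j_obs = np.r_[np.ones(n1), np.zeros(n - n1)]

def neg_loglik(b):  # -sum_i log P(j_i|x_i, beta), the CBMLE maximand (negated)
    p = np.clip(P1(x_obs, b), 1e-12, 1 - 1e-12)
    return -np.sum(j_obs * np.log(p) + (1 - j_obs) * np.log(1 - p))

con = {"type": "eq", "fun": lambda b: f_x @ P1(x_pts, b) - Q1}  # constraint (9.5.31)
print(minimize(neg_loglik, np.zeros(2), constraints=[con], method="SLSQP").x)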
To prove consistency, we need merely note that consistency of the unconstrained CBMLE follows from the general result of Section 4.2.2 and that the probability limit of the constrained CBMLE should be the same as that of the unconstrained CBMLE if the constraint is true.
To prove asymptotic normality, it is convenient to rewrite the constraint
for any w and G. This inequality follows straightforwardly from (9.5.19).
Second, consider the case where f is known and Q is unknown. Here, CBMLE is defined as maximizing (9.5.2) without constraint. The estimator is consistent by the result of Section 4.2.2, and its asymptotic normality follows from Theorem 4.2.4. Thus
\sqrt{n}(\hat{\beta}_{ML} - \beta_0) \to N[0, (E\eta\eta' - E\delta\delta')^{-1}], \qquad (9.5.37)

where \delta = [\partial \log Q/\partial \beta]_{\beta_0}. As we would expect, \hat{\beta}_{ML} is not as efficient as the constrained CBMLE obtained when Q is known, because
G(G' E\eta\eta' G)^{-1} G' \leqq (E\eta\eta')^{-1} \leqq (E\eta\eta' - E\delta\delta')^{-1}. \qquad (9.5.38)
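Both inequalities in (9.5.38) are instances of standard matrix facts: G(G'AG)^{-1}G' \leqq A^{-1} for positive definite A and full-column-rank G, and the monotonicity of inversion on positive definite matrices. A quick numerical spot check (ours; the random A, B, and G merely stand in for E\eta\eta', E\delta\delta', and the G arising from the constraint):

import numpy as np

rng = np.random.default_rng(1)
K, r = 4, 2
M = rng.standard_normal((K, K))
A = M @ M.T + K * np.eye(K)   # positive definite; stands in for E(eta eta')
v = rng.standard_normal((K, 1))
B = 0.1 * v @ v.T             # psd, small enough that A - B is pd; stands in for E(delta delta')
G = rng.standard_normal((K, r))

V_con = G @ np.linalg.inv(G.T @ A @ G) @ G.T  # left-hand term of (9.5.38)
V_mid = np.linalg.inv(A)                      # middle term
V_unk = np.linalg.inv(A - B)                  # right-hand term

def psd(X):  # are all eigenvalues of the symmetrized X nonnegative?
    return bool(np.all(np.linalg.eigvalsh((X + X.T) / 2) >= -1e-9))

print(psd(V_mid - V_con), psd(V_unk - V_mid))  # True True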
Finally, consider the case where f is unknown and Q is known. We shall discuss CBMLE for this model in Section 9.5.5. Here, we shall consider the Manski-McFadden estimator (MME), which maximizes
\Psi = \prod_{i=1}^{n} \frac{P(j_i|x_i, \beta)\, Q_0(j_i)^{-1} H(j_i)}{\sum_{j=0}^{m} P(j|x_i, \beta)\, Q_0(j)^{-1} H(j)}. \qquad (9.5.39)
The motivation for this estimator is the following: As we can see from (9.5.3), the joint probability of j and x under the present assumption is
h(j, x) = P(j|x, \beta_0)\, f(x)\, Q_0(j)^{-1} H(j). \qquad (9.5.40)
Therefore the conditional probability of j given x is
h(j|x) = \frac{h(j, x)}{\sum_{j=0}^{m} h(j, x)}, \qquad (9.5.41)
which leads to the conditional likelihood function (9.5.39). The estimator is computationally attractive because the right-hand side of (9.5.39) does not depend on f(x), which is assumed unknown and requires a nonstandard analysis of estimation, as we shall see in Sections 9.5.4 and 9.5.5.
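As a computational sketch of MME (ours, not from the text: the binary logit specification, the simulated choice-based design, and all variable names are illustrative assumptions), note that maximizing (9.5.39) requires only Q_0 and H, never f(x):

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
beta_true = np.array([0.5, 1.0])
H = np.array([0.5, 0.5])  # H(0), H(1): sampling proportions of the two alternatives

# Simulate a large "population," then resample by chosen alternative j.
x_pop = rng.normal(size=200_000)
j_pop = (rng.uniform(size=x_pop.size) < expit(beta_true[0] + beta_true[1] * x_pop)).astype(int)
Q0 = np.bincount(j_pop) / j_pop.size  # Q_0(0), Q_0(1), treated as known
n = 1000
idx = np.concatenate([rng.choice(np.flatnonzero(j_pop == j), int(n * H[j]))
                      for j in (0, 1)])
x_obs, j_obs = x_pop[idx], j_pop[idx]

def neg_loglik(beta):  # minus the log of (9.5.39)
    p1 = expit(beta[0] + beta[1] * x_obs)
    P = np.column_stack([1 - p1, p1])   # P(j|x_i, beta) for j = 0, 1
    w = H / Q0                          # the weights Q_0(j)^{-1} H(j); f(x) never enters
    num = P[np.arange(n), j_obs] * w[j_obs]
    return -np.sum(np.log(num / (P @ w)))

print(minimize(neg_loglik, np.zeros(2), method="BFGS").x)  # should be near beta_true

Only the weights H(j)/Q_0(j) enter the objective, which is precisely why the estimator sidesteps the unknown f(x).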
To prove the consistency of the estimator, we observe that
\text{plim}\, n^{-1} \log \Psi = E \log \frac{P(j|x, \beta)\, Q_0(j)^{-1} H(j)}{\sum_{j=0}^{m} P(j|x, \beta)\, Q_0(j)^{-1} H(j)} \qquad (9.5.42)

= \int [E^+ \log h(j|x)]\, \zeta(x)\, dx,

where \zeta(x) = \sum_{j=0}^{m} P(j|x, \beta_0)\, Q_0(j)^{-1} H(j)\, f(x) and E^+ is the expectation taken with respect to the true conditional probability h_0(j|x). Equation (9.5.42) is maximized at \beta_0 because \zeta(x) > 0 and
E^+ \log h(j|x) \leqq E^+ \log h_0(j|x), \qquad (9.5.43)

which, like (9.5.8), is a consequence of Jensen's inequality (4.2.6).
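In detail, spelling out the Jensen step in the notation above: because E^+ averages over h_0(j|x) and \sum_{j=0}^{m} h(j|x) = 1,

E^+ \log \frac{h(j|x)}{h_0(j|x)} \leqq \log E^+ \frac{h(j|x)}{h_0(j|x)} = \log \sum_{j=0}^{m} h_0(j|x)\, \frac{h(j|x)}{h_0(j|x)} = \log 1 = 0.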
By a straightforward application of Theorem 4.1.3, we can show

\sqrt{n}(\hat{\beta}_{MME} - \beta_0) \to N[0, (E\epsilon\epsilon')^{-1}], \qquad (9.5.44)

where \epsilon = \left[ \partial \log \left\{ P(j|x, \beta)\, Q_0(j)^{-1} H(j) \Big/ \sum_{j=0}^{m} P(j|x, \beta)\, Q_0(j)^{-1} H(j) \right\} \big/ \partial \beta \right]_{\beta_0}. The asymptotic covariance matrix in (9.5.44) is neither larger nor smaller (in the matrix sense) than the asymptotic covariance matrix of WMLE given in (9.5.17).^{15}