Advanced Econometrics Takeshi Amemiya

Results of Manski and McFadden

Manski and McFadden (1981) presented a comprehensive summary of all the types of models and estimators under choice-based sampling, including the results of the other papers discussed elsewhere in Section 9.5. However, we shall discuss only those topics that are not dealt with in the other papers, namely, the consistency and asymptotic normality of CBMLE in the cases where f is known and Q is either known or unknown, and of MME in the case where f is unknown and Q is known. Because the results presented here are straightforward, we shall only sketch the proofs of consistency and asymptotic normality and shall not spell out all the conditions needed.

First, consider the case where both f and Q are known. CBMLE maximizes the likelihood function given in (9.5.3) subject to the condition

$$Q_0(j) = \int P(j|x, \beta) f(x)\,dx, \qquad j = 1, 2, \ldots, m. \tag{9.5.31}$$

The condition corresponding to j = 0 is redundant because $\sum_{j=0}^{m} P(j|x, \beta) = 1$, which will be implicitly observed throughout the following analysis. Ignoring the known components of the likelihood function, CBMLE is defined as maximizing $\sum_{i=1}^{n} \log P(j_i|x_i, \beta)$ with respect to β subject to (9.5.31). Because this maximand is the essential part of the log likelihood under random sampling, we have CBMLE = RSMLE in this case. However, the properties of the estimator must be derived under the assumption of choice-based sampling and are therefore different from the results of Section 9.3.
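The constrained maximization defining CBMLE can be sketched numerically. The following is a minimal illustration, not from the text: a binary logit with an assumed discrete, known density f, so that the constraint (9.5.31) reduces to a finite sum. The logit form, parameter values, and sampling shares are all assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Binary logit P(1|x, b) = 1 / (1 + exp(-(b0 + b1*x)))  (illustrative assumption)
def p1(beta, x):
    return 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x)))

# Known, discrete density f(x) on three support points (assumed for the sketch)
xs = np.array([-1.0, 0.0, 1.0])
f = np.array([0.3, 0.4, 0.3])
beta_true = np.array([0.2, 1.0])

# Known aggregate share Q0(1) = sum_x P(1|x, beta0) f(x)
Q0 = float(np.sum(p1(beta_true, xs) * f))

# Simulate a choice-based sample: draw j with design share H(1), then x given j
H1 = 0.5
n = 5000
pj1 = p1(beta_true, xs)
px_given_1 = pj1 * f / Q0                   # f(x | j = 1)
px_given_0 = (1.0 - pj1) * f / (1.0 - Q0)   # f(x | j = 0)
j = rng.random(n) < H1
x = np.where(j,
             rng.choice(xs, size=n, p=px_given_1),
             rng.choice(xs, size=n, p=px_given_0))

# CBMLE: maximize sum_i log P(j_i | x_i, beta) subject to (9.5.31)
def negloglik(beta):
    p = p1(beta, x)
    return -np.sum(np.where(j, np.log(p), np.log(1.0 - p)))

constraint = {"type": "eq",
              "fun": lambda beta: np.sum(p1(beta, xs) * f) - Q0}

res = minimize(negloglik, x0=np.zeros(2), method="SLSQP",
               constraints=[constraint])
print(res.x)  # consistent for beta_true under choice-based sampling
```

Note that the same objective maximized without the constraint would not, in general, be consistent under choice-based sampling; the constraint carries the information that f and Q are known.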

To prove consistency, we need merely note that consistency of the unconstrained CBMLE follows from the general result of Section 4.2.2 and that the probability limit of the constrained CBMLE should be the same as that of the unconstrained CBMLE if the constraint is true.

To prove asymptotic normality, it is convenient to rewrite the constraint (9.5.31) in the form

$$\beta = g(\alpha), \tag{9.5.32}$$

where α is a (k − m)-vector, as we did in Section 4.5.1. Then, by a straightforward application of Theorem 4.2.4, we obtain

$$\sqrt{n}\,(\hat\alpha_{ML} - \alpha_0) \to N\left[0,\ (G' E\gamma\gamma' G)^{-1}\right], \tag{9.5.33}$$

where $G = [\partial\beta/\partial\alpha']_{\alpha_0}$ and $\gamma = [\partial \log P(j|x, \beta)/\partial\beta]_{\beta_0}$. Therefore, using the Taylor series approximation

$$\hat\beta_{ML} - \beta_0 \cong G(\hat\alpha_{ML} - \alpha_0), \tag{9.5.34}$$

we obtain

$$\sqrt{n}\,(\hat\beta_{ML} - \beta_0) \to N\left[0,\ G(G' E\gamma\gamma' G)^{-1} G'\right]. \tag{9.5.35}$$

As we would expect, $\hat\beta_{ML}$ is asymptotically more efficient than WMLE $\hat\beta_W$. In other words, we should have

$$G(G' E\gamma\gamma' G)^{-1} G' \leqq (E w\gamma\gamma')^{-1}\, E w^2\gamma\gamma'\, (E w\gamma\gamma')^{-1} \tag{9.5.36}$$

for any w and G. This inequality follows straightforwardly from (9.5.19).

Second, consider the case where f is known and Q is unknown. Here, CBMLE is defined as maximizing (9.5.2) without constraint. The estimator is consistent by the result of Section 4.2.2, and its asymptotic normality follows from Theorem 4.2.4. Thus

$$\sqrt{n}\,(\hat\beta_{ML} - \beta_0) \to N\left[0,\ (E\gamma\gamma' - E\delta\delta')^{-1}\right], \tag{9.5.37}$$

where $\delta = [\partial \log Q(j)/\partial\beta]_{\beta_0}$. As we would expect, $\hat\beta_{ML}$ here is not as efficient as the constrained CBMLE of the preceding case because

$$G(G' E\gamma\gamma' G)^{-1} G' \leqq (E\gamma\gamma')^{-1} \leqq (E\gamma\gamma' - E\delta\delta')^{-1}. \tag{9.5.38}$$
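The second inequality in (9.5.38) is an instance of the general matrix fact that $A \geqq B > 0$ implies $B^{-1} \geqq A^{-1}$. A quick numerical illustration, with arbitrary randomly generated matrices standing in for $E\gamma\gamma'$ and $E\delta\delta'$ (the values have no econometric content):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4
M = rng.standard_normal((k, k))
A = M @ M.T + k * np.eye(k)      # stands in for E[gamma gamma'], positive definite
d = 0.5 * rng.standard_normal((k, 1))
D = d @ d.T                      # stands in for E[delta delta'], rank one, PSD

# The comparison requires A - D to be positive definite
assert np.all(np.linalg.eigvalsh(A - D) > 0)

# (A - D)^{-1} - A^{-1} should be positive semidefinite
diff = np.linalg.inv(A - D) - np.linalg.inv(A)
print(np.linalg.eigvalsh(diff))  # all eigenvalues nonnegative
```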

Finally, consider the case where f is unknown and Q is known. We shall discuss CBMLE for this model in Section 9.5.5. Here, we shall consider the Manski-McFadden estimator (MME), which maximizes

$$\Psi = \prod_{i=1}^{n} \frac{P(j_i|x_i, \beta)\, Q_0(j_i)^{-1} H(j_i)}{\sum_{j=0}^{m} P(j|x_i, \beta)\, Q_0(j)^{-1} H(j)}. \tag{9.5.39}$$

The motivation for this estimator is the following: As we can see from (9.5.3), the joint probability of j and x under the present assumption is

$$h(j, x) = P(j|x, \beta_0)\, f(x)\, Q_0(j)^{-1} H(j). \tag{9.5.40}$$

Therefore the conditional probability of j given x is

$$h(j|x) = \frac{h(j, x)}{\sum_{j=0}^{m} h(j, x)}, \tag{9.5.41}$$

which leads to the conditional likelihood function (9.5.39). The estimator is computationally attractive because the right-hand side of (9.5.39) does not depend on f(x), which is assumed unknown and requires a nonstandard analysis of estimation, as we shall see in Sections 9.5.4 and 9.5.5.
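To make this computational point concrete, here is a minimal sketch, not from the text, of MME for a binary logit with assumed values of H(j) and Q₀(j). The unknown density f(x) appears nowhere in the objective, only in the (here artificial) data generation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
beta_true = np.array([0.2, 1.0])

def p1(beta, x):
    return 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x)))

# Population: x has an unknown continuous density (normal here only to
# generate data; the estimator below never uses it)
pool = rng.standard_normal(200_000)
j_pool = rng.random(200_000) < p1(beta_true, pool)

# Q0(j): population shares, treated as known; H(j): design sampling shares
Q0 = np.array([1.0 - j_pool.mean(), j_pool.mean()])
H = np.array([0.5, 0.5])

# Choice-based sample: n*H(j) observations drawn from each j-stratum
n = 8000
idx0 = rng.choice(np.flatnonzero(~j_pool), size=int(n * H[0]))
idx1 = rng.choice(np.flatnonzero(j_pool), size=int(n * H[1]))
x = np.concatenate([pool[idx0], pool[idx1]])
j = np.concatenate([np.zeros(len(idx0), bool), np.ones(len(idx1), bool)])

w = H / Q0   # the weights H(j) / Q0(j) appearing in (9.5.39)

# Conditional log likelihood from (9.5.39): f(x) has dropped out entirely
def neg_cond_loglik(beta):
    p = p1(beta, x)
    den = p * w[1] + (1.0 - p) * w[0]
    num = np.where(j, p * w[1], (1.0 - p) * w[0])
    return -np.sum(np.log(num / den))

res = minimize(neg_cond_loglik, np.zeros(2), method="BFGS")
print(res.x)  # consistent for beta_true despite f being unknown
```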

To prove the consistency of the estimator, we observe that

$$\operatorname{plim}\; n^{-1} \log \Psi = E \log \frac{P(j|x, \beta)\, Q_0(j)^{-1} H(j)}{\sum_{j=0}^{m} P(j|x, \beta)\, Q_0(j)^{-1} H(j)} \tag{9.5.42}$$

$$\qquad = \int \left[ E^{+} \log h(j|x) \right] \zeta(x)\, dx,$$

where $\zeta(x) = \sum_{j=0}^{m} P(j|x, \beta_0)\, Q_0(j)^{-1} H(j)\, f(x)$ and $E^{+}$ is the expectation taken with respect to the true conditional probability $h_0(j|x)$. Equation (9.5.42) is maximized at $\beta_0$ because $\zeta(x) > 0$ and

$$E^{+} \log h(j|x) \leqq E^{+} \log h_0(j|x), \tag{9.5.43}$$

which, like (9.5.8), is a consequence of Jensen’s inequality (4.2.6). By a straightforward application of Theorem 4.1.3, we can show

$$\sqrt{n}\,(\hat\beta_{MME} - \beta_0) \to N\left[0,\ (E\epsilon\epsilon')^{-1}\right], \tag{9.5.44}$$

where $\epsilon = [\partial \log h(j|x)/\partial\beta]_{\beta_0}$, with $h(j|x)$ given by (9.5.41) and regarded as a function of β. The asymptotic covariance matrix in (9.5.44) is neither larger nor smaller (in the matrix sense) than the asymptotic covariance matrix of WMLE given in (9.5.17).¹⁵
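The Jensen step behind (9.5.43) can be spelled out in one line: since $E^{+}$ is taken with respect to $h_0(j|x)$ and log is concave,

$$E^{+} \log \frac{h(j|x)}{h_0(j|x)} \leqq \log E^{+} \frac{h(j|x)}{h_0(j|x)} = \log \sum_{j=0}^{m} h(j|x) = 0$$

for any β, with equality when $h(j|x) = h_0(j|x)$.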
