Maximum marginal likelihood estimation
There is a mounting literature suggesting that the method of maximum likelihood estimation outlined in Section 3.1 can lead to biased estimates and to inaccurate asymptotic test procedures based on those estimates. The problem is that, for the purpose of estimating ρ and γ, β and σ2 are nuisance parameters. An early contribution on methods of overcoming the problem of nuisance parameters was made by Kalbfleisch and Sprott (1970), who proposed the use of marginal likelihood estimation. This approach does not work for all nuisance-parameter problems, but fortunately it works very well for our problem of estimating ρ and γ in the presence of β and σ2.
An important contribution to this literature was made by Tunnicliffe Wilson (1989), who showed that the marginal log-likelihood for ρ and γ in (3.22) is
\[
\ell_m(\rho, \gamma) = -\tfrac{1}{2}\log\lvert\Omega(\rho, \gamma)\rvert
- \tfrac{1}{2}\log\lvert X^{*}(\rho, \gamma)' X^{*}(\rho, \gamma)\rvert
- \tfrac{n-k}{2}\log\bigl(e^{*}(\rho, \gamma)' e^{*}(\rho, \gamma)\bigr)
\qquad (3.35)
\]
where X*(ρ, γ) and e*(ρ, γ) are given by (3.28) and (3.31), respectively. Maximum marginal likelihood (MML) estimates can be obtained by maximizing (3.35) with respect to ρ and γ. The various tricks outlined in Section 3.1 can be used to evaluate (3.35) efficiently for given ρ and γ.
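To fix ideas, the following is a minimal sketch of MML estimation for the special case of a linear regression with AR(1) errors. It assumes, as is standard in this literature, that X*(ρ) is the GLS-transformed regressor matrix and e*(ρ) the corresponding GLS residual vector (the precise definitions in (3.28) and (3.31) are not reproduced here); all function and variable names are illustrative, not taken from the chapter.

```python
# Sketch: marginal log-likelihood (3.35) for a regression with AR(1) errors,
# maximized over the single autoregressive parameter rho.
import numpy as np
from scipy.optimize import minimize_scalar

def marginal_loglik(rho, y, X):
    """Evaluate the marginal log-likelihood (3.35) for an AR(1) error model."""
    n, k = X.shape
    # Omega(rho): AR(1) error covariance matrix up to the scale factor sigma^2,
    # with (i, j) element rho^|i-j| / (1 - rho^2).
    idx = np.arange(n)
    Omega = rho ** np.abs(idx[:, None] - idx[None, :]) / (1.0 - rho ** 2)
    # GLS transformation: premultiply y and X by the inverse Cholesky factor.
    L = np.linalg.cholesky(Omega)
    y_star = np.linalg.solve(L, y)
    X_star = np.linalg.solve(L, X)
    # GLS residuals e*(rho) from regressing y* on X*.
    beta_hat, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    e_star = y_star - X_star @ beta_hat
    # The three terms of (3.35); sigma^2 has already been integrated out.
    _, logdet_Omega = np.linalg.slogdet(Omega)
    _, logdet_XX = np.linalg.slogdet(X_star.T @ X_star)
    return (-0.5 * logdet_Omega
            - 0.5 * logdet_XX
            - 0.5 * (n - k) * np.log(e_star @ e_star))

def mml_estimate_rho(y, X):
    """MML estimate of rho: maximize (3.35) by minimizing its negative."""
    result = minimize_scalar(lambda r: -marginal_loglik(r, y, X),
                             bounds=(-0.99, 0.99), method="bounded")
    return result.x
```

For general ARMA(p, q) errors the same structure applies, with Omega built from both ρ and γ and the scalar search replaced by a multivariate optimizer; in practice the determinant and residual terms would be computed using the efficient methods of Section 3.1 rather than by forming Omega explicitly.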
Levenbach (1972) considered MML estimation for the AR(1) model, and Cooper and Thompson (1977) demonstrated that its use reduces estimation bias for γ1 in the MA(1) model. Corduas (1986) demonstrated that MML estimation removes estimation bias in estimates of ρ1 when estimating regressions with trending regressors and an AR(1) error term. Tunnicliffe Wilson (1989) also presented evidence that the MML reduces estimation bias. In addition, see Rahman and King (1998) and Laskar and King (1998). The evidence is clear: in order to reduce estimation bias in estimates of ρ and γ, it is better to use the MML rather than the profile or concentrated likelihood.