Bootstrap and Jackknife Methods
In this subsection we shall consider briefly two methods of approximating the distribution of the nonlinear least squares estimator: the bootstrap and the jackknife methods (see Efron, 1982, for details). As the reader will see from the following discussion, they can be applied to many situations other than the nonlinear regression model.
The bootstrap method is carried out in the following steps:
1. Calculate $\hat{u}_t = y_t - f_t(\hat{\beta})$, where $\hat{\beta}$ is the NLLS estimator.
2. Calculate the empirical distribution function $\hat{F}$ of $\{\hat{u}_t\}$.
3. Generate $NT$ random variables $\{u_{it}^*\}$, $i = 1, 2, \ldots, N$ and $t = 1, 2, \ldots, T$, according to $\hat{F}$, and calculate $y_{it}^* = f_t(\hat{\beta}) + u_{it}^*$.
4. Calculate the NLLS estimator $\hat{\beta}_i^*$ that minimizes $\sum_{t=1}^{T} \left[ y_{it}^* - f_t(\beta) \right]^2$ for $i = 1, 2, \ldots, N$.
5. Approximate the distribution of $\hat{\beta}$ by the empirical distribution function of $\{\hat{\beta}_i^*\}$ (a code sketch follows this list).
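To make the steps concrete, here is a minimal Python sketch of the residual bootstrap for NLLS. The exponential regression function f, the simulated data, and names such as nlls are illustrative assumptions, not part of the text; any NLLS routine could stand in for SciPy's least_squares.

```python
# A sketch of the residual bootstrap for NLLS (steps 1-5 above).
# The regression function f, the simulated data, and all variable
# names are illustrative assumptions, not from the text.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def f(x, beta):
    # Hypothetical nonlinear regression function playing the role of f_t(beta).
    return beta[0] * np.exp(beta[1] * x)

T = 50                                                # sample size
x = np.linspace(0.0, 1.0, T)
beta_true = np.array([1.0, 0.5])
y = f(x, beta_true) + 0.1 * rng.standard_normal(T)    # simulated y_t

def nlls(y_obs, beta0):
    # NLLS estimator: minimizes sum_t [y_t - f_t(beta)]^2.
    return least_squares(lambda b: y_obs - f(x, b), beta0).x

beta_hat = nlls(y, np.array([1.0, 0.0]))              # beta-hat from full sample
u_hat = y - f(x, beta_hat)                            # step 1: residuals u_t
# Step 2 is implicit: resampling u_hat with replacement is exactly
# drawing from the empirical distribution F of {u_t}.
N = 1000                                              # number of bootstrap samples
beta_star = np.empty((N, beta_hat.size))
for i in range(N):
    u_star = rng.choice(u_hat, size=T, replace=True)  # step 3: u*_it from F
    y_star = f(x, beta_hat) + u_star                  # step 3: y*_it
    beta_star[i] = nlls(y_star, beta_hat)             # step 4: beta*_i
# Step 5: the empirical distribution of {beta*_i} approximates that of beta-hat.
print("bootstrap estimate of V(beta-hat):\n", np.cov(beta_star, rowvar=False))
```

Resampling the residuals rather than the $(y, x)$ pairs matches the fixed-regressor setting of the text, in which only the errors are random.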
The jackknife method works as follows: Partition $\mathbf{y}$ as $\mathbf{y}' = (\mathbf{y}_1', \mathbf{y}_2', \ldots, \mathbf{y}_N')$, where each $\mathbf{y}_i$ is an $m$-vector such that $mN = T$. Let $\hat{\beta}$ be the NLLS estimator using all the data, and let $\hat{\beta}_{-i}$ be the NLLS estimator obtained by omitting $\mathbf{y}_i$. Then the “pseudovalues” $\hat{\beta}_i^* = N\hat{\beta} - (N-1)\hat{\beta}_{-i}$, $i = 1, 2, \ldots, N$, can be treated like $N$ observations (though not independent) on $\hat{\beta}$. Thus, for example, $V\hat{\beta}$ may be estimated by $[N(N-1)]^{-1} \sum_{i=1}^{N} (\hat{\beta}_i^* - \bar{\beta}^*)(\hat{\beta}_i^* - \bar{\beta}^*)'$, where $\bar{\beta}^* = N^{-1} \sum_{i=1}^{N} \hat{\beta}_i^*$.
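Under the same assumed setup (reusing f, x, y, T, beta_hat, and least_squares from the sketch above), a minimal sketch of the grouped jackknife might look as follows; the group size m = 5 is an arbitrary illustrative choice.

```python
# A sketch of the grouped jackknife for NLLS; reuses f, x, y, T,
# and beta_hat from the bootstrap sketch above.
import numpy as np
from scipy.optimize import least_squares

m = 5                       # observations per omitted group (assumed)
N = T // m                  # number of groups, so that mN = T

pseudo = np.empty((N, beta_hat.size))
for i in range(N):
    keep = np.ones(T, dtype=bool)
    keep[i * m:(i + 1) * m] = False                     # omit the i-th group y_i
    # NLLS estimator beta_{-i} on the remaining T - m observations
    beta_minus_i = least_squares(
        lambda b: y[keep] - f(x[keep], b), beta_hat).x
    pseudo[i] = N * beta_hat - (N - 1) * beta_minus_i   # pseudovalue beta*_i

beta_bar = pseudo.mean(axis=0)          # jackknife estimator (see next paragraph)
dev = pseudo - beta_bar
V_hat = dev.T @ dev / (N * (N - 1))     # estimate of V(beta-hat)
print("jackknife estimate of V(beta-hat):\n", V_hat)
```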
It is interesting to note that $\bar{\beta}^*$ may be regarded as an estimator of $\beta$ in its own right and is called the jackknife estimator. Akahira (1983) showed that in the i.i.d. sample case the jackknife estimator is asymptotically equivalent to the bias-corrected maximum likelihood estimator (see Section 4.2.4) to the order $T^{-1}$.