Uniform laws of large numbers
In the previous sections we have been concerned with various notions of convergence for sequences of random variables and random vectors. Sometimes one is confronted with sequences of random functions, say $Q_n(\theta)$, that depend on a parameter vector $\theta$ contained in some parameter space $\Theta$. That is, $Q_n(\theta)$ is a random variable for each fixed $\theta \in \Theta$.¹⁷ As an example of a random function consider the loglikelihood function of iid random variables $Z_1, \ldots, Z_n$ with a density that depends on a parameter vector $\theta$:
$$Q_n(\theta) = \frac{1}{n} \sum_{t=1}^{n} q(Z_t, \theta), \qquad (10.7)$$
where $q$ is the logarithm of the density of $Z_t$. Clearly, for every fixed $\theta \in \Theta$ we can apply the notions of convergence for random variables discussed above to $Q_n(\theta)$. However, for many purposes these notions of "pointwise" convergence are not sufficient and stronger notions of convergence are needed. For example, those stronger notions of convergence are often useful in proving consistency of maximum likelihood estimators: in many cases an explicit expression for the maximum likelihood estimator will not be available. In those cases one may try to deduce the convergence behavior of the estimator from the convergence behavior of the loglikelihood objective function $Q_n(\theta)$. By Kolmogorov's LLN we have
$$Q_n(\theta) \xrightarrow{a.s.} Q(\theta) \quad \text{for all } \theta \in \Theta, \qquad (10.8)$$
where $Q(\theta) = Eq(Z_t, \theta)$, provided $E|q(Z_t, \theta)| < \infty$. A well-established result from the theory of maximum likelihood estimation tells us furthermore that the limiting objective function $Q(\theta)$ is uniquely maximized at the true parameter value, say $\theta_0$, provided $\theta_0$ is identified. It is tempting to conclude that these two facts imply a.s. convergence of the maximum likelihood estimators, i.e. of the maximizers of the objective functions $Q_n(\theta)$, to $\theta_0$. Unfortunately, this line of reasoning is not conclusive in general, as can be seen from counterexamples; see, e.g., Amemiya (1985, p. 109). However, this line of reasoning can be salvaged if we can establish not only "pointwise" convergence a.s., i.e. (10.8), but also uniform convergence a.s., i.e.,
$$\sup_{\theta \in \Theta} |Q_n(\theta) - Q(\theta)| \xrightarrow{a.s.} 0, \qquad (10.9)$$
and if, for example, $\Theta$ is compact and $Q$ is continuous.¹⁸
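To see why uniform convergence, compactness, and continuity suffice, consider the following standard sketch of the consistency argument (a hedged outline under the assumptions just stated, not a derivation appearing in the original text). Let $\hat{\theta}_n$ denote a maximizer of $Q_n(\theta)$ over $\Theta$. Since $Q_n(\hat{\theta}_n) \geq Q_n(\theta_0)$ by construction,

$$0 \leq Q(\theta_0) - Q(\hat{\theta}_n) = [Q(\theta_0) - Q_n(\theta_0)] + [Q_n(\theta_0) - Q_n(\hat{\theta}_n)] + [Q_n(\hat{\theta}_n) - Q(\hat{\theta}_n)] \leq 2 \sup_{\theta \in \Theta} |Q_n(\theta) - Q(\theta)| \xrightarrow{a.s.} 0,$$

so $Q(\hat{\theta}_n) \to Q(\theta_0)$ a.s. Compactness of $\Theta$, continuity of $Q$, and uniqueness of the maximizer $\theta_0$ then force $\hat{\theta}_n \to \theta_0$ a.s., since any limit point of $\hat{\theta}_n$ must itself maximize $Q$.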
The above discussion motivates interest in results that establish uniform convergence of random functions $Q_n(\theta)$. In the important special case where $Q_n(\theta) = n^{-1} \sum_{t=1}^{n} q(Z_t, \theta)$ and $Q(\theta) = EQ_n(\theta)$, such results are called uniform laws of large numbers (ULLNs). We next present a ULLN for functions of iid random variables.
Theorem 23.¹⁹ Let $Z_t$ be a sequence of identically and independently distributed $k \times 1$ random vectors, let $\Theta$ be a compact subset of $\mathbb{R}^p$, and let $q$ be a real valued function on $\mathbb{R}^k \times \Theta$. Furthermore, let $q(\cdot, \theta)$ be Borel-measurable for each $\theta \in \Theta$, and let $q(z, \cdot)$ be continuous for each $z \in \mathbb{R}^k$. If $E \sup_{\theta \in \Theta} |q(Z_t, \theta)| < \infty$, then

$$\sup_{\theta \in \Theta} \left| \frac{1}{n} \sum_{t=1}^{n} [q(Z_t, \theta) - Eq(Z_t, \theta)] \right| \xrightarrow{a.s.} 0,$$
i.e. $Q_n(\theta) = n^{-1} \sum_{t=1}^{n} q(Z_t, \theta)$ satisfies a ULLN.
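As a purely illustrative numerical check of Theorem 23 (not part of the original text), the following Python sketch simulates the normal location model, where $q(z, \theta)$ is the $N(\theta, 1)$ log density, and evaluates the supremum in (10.9) over a finite grid; the parameter space $[-2, 4]$, the grid resolution, the true value $\theta_0 = 1$, and the sample sizes are arbitrary choices made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# q(z, theta): log density of N(theta, 1) evaluated at z.
def q(z, theta):
    return -0.5 * np.log(2.0 * np.pi) - 0.5 * (z - theta) ** 2

# Q(theta) = E q(Z_t, theta) for Z_t ~ N(theta0, 1); here
# E(Z_t - theta)^2 = 1 + (theta0 - theta)^2.
def Q(theta, theta0):
    return -0.5 * np.log(2.0 * np.pi) - 0.5 * (1.0 + (theta0 - theta) ** 2)

theta0 = 1.0
grid = np.linspace(-2.0, 4.0, 241)  # grid approximation of the compact set Theta

for n in [100, 1000, 10000]:
    z = rng.normal(theta0, 1.0, size=n)
    # Q_n(theta) on the grid: sample average of q(Z_t, theta) over the data.
    Qn = q(z[:, None], grid[None, :]).mean(axis=0)
    sup_dev = np.max(np.abs(Qn - Q(grid, theta0)))
    print(f"n = {n:5d}   sup |Q_n - Q| = {sup_dev:.5f}")
```

On a typical run the reported supremum shrinks toward zero as $n$ grows, consistent with the uniform a.s. convergence asserted by the theorem; the dominance condition $E \sup_{\theta \in \Theta} |q(Z_t, \theta)| < \infty$ holds in this model because $\Theta$ is bounded and $Z_t$ has finite second moments.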
The theorem also holds if the assumption that $Z_t$ is iid is replaced by the assumption that $Z_t$ is stationary and ergodic; see, e.g., Pötscher and Prucha (1986, Lemma A.2). Uniform laws of large numbers that also cover functions of dependent and heterogeneous random vectors are, for example, given in Andrews (1987) and Pötscher and Prucha (1989); for additional references see Pötscher and Prucha (1997, ch. 5).