First- and Second-Order Conditions

The following conditions guarantee that the first- and second-order conditions for a maximum hold.
Assumption 8.1: The parameter space $\Theta$ is convex and $\theta_0$ is an interior point of $\Theta$. The likelihood function $\hat L_n(\theta)$ is, with probability 1, twice continuously differentiable in an open neighborhood $\Theta_0$ of $\theta_0$, and, for $i_1, i_2 = 1, 2, 3, \ldots, m$,

$$E\left[\sup_{\theta\in\Theta_0}\left|\frac{\partial^2 \hat L_n(\theta)}{\partial\theta_{i_1}\,\partial\theta_{i_2}}\right|\right] < \infty \tag{8.21}$$

and

$$E\left[\sup_{\theta\in\Theta_0}\left|\frac{\partial^2 \ln(\hat L_n(\theta))}{\partial\theta_{i_1}\,\partial\theta_{i_2}}\right|\right] < \infty. \tag{8.22}$$
Theorem 8.2: Under Assumption 8.1,

$$E\left[\frac{\partial \ln(\hat L_n(\theta))}{\partial\theta^{\mathrm T}}\bigg|_{\theta=\theta_0}\right] = 0$$

and

$$E\left[\frac{\partial^2 \ln(\hat L_n(\theta))}{\partial\theta\,\partial\theta^{\mathrm T}}\bigg|_{\theta=\theta_0}\right] = -\mathrm{Var}\left(\frac{\partial \ln(\hat L_n(\theta))}{\partial\theta^{\mathrm T}}\bigg|_{\theta=\theta_0}\right).$$
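To make the two equalities concrete, here is a minimal Monte Carlo sketch (an illustration not in the text) for the family $f(z|\theta) = N(\theta, 1)$, for which the score is $z - \theta$ and the second derivative of the log-density is $-1$ identically; the simulated mean of the score should be near zero and its variance near one.

```python
import numpy as np

# Illustrative Monte Carlo check of Theorem 8.2 for the N(theta0, 1) family
# (hypothetical example, not from the text). For this density the score is
# d ln f(z|theta)/d theta = z - theta and d^2 ln f/d theta^2 = -1 identically.
rng = np.random.default_rng(0)
theta0 = 0.5
z = rng.normal(theta0, 1.0, size=1_000_000)

score = z - theta0            # score evaluated at theta = theta0
print(score.mean())           # ~ 0 : first part, E[score] = 0
print(score.var())            # ~ 1 : Var(score)
# Second part: E[d^2 ln f/d theta^2] = -1 = -Var(score) up to simulation error.
```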
Proof: For notational convenience I will prove this theorem for the univariate parameter case $m = 1$ only. Moreover, I will focus on the case that $Z = (z_1^{\mathrm T}, \ldots, z_n^{\mathrm T})^{\mathrm T}$ is a random sample from an absolutely continuous distribution with density $f(z|\theta_0)$.
Observe that

$$E[\ln(\hat L_n(\theta))/n] = \frac{1}{n}\sum_{j=1}^{n} E[\ln(f(Z_j|\theta))] = \int \ln(f(z|\theta))\,f(z|\theta_0)\,dz. \tag{8.23}$$
It follows from Taylor's theorem that, for $\theta\in\Theta_0$ and every $\delta\neq 0$ for which $\theta+\delta\in\Theta_0$, there exists a $\lambda(z,\delta)\in[0,1]$ such that

$$\ln(f(z|\theta+\delta)) - \ln(f(z|\theta)) = \delta\,\frac{d\ln(f(z|\theta))}{d\theta} + \frac{1}{2}\,\delta^2\,\frac{d^2\ln(f(z|\theta+\lambda(z,\delta)\delta))}{(d(\theta+\lambda(z,\delta)\delta))^2}. \tag{8.24}$$
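As an illustration (not from the text), for the $N(\theta,1)$ density the expansion (8.24) is exact: $\ln(f(z|\theta)) = -\tfrac{1}{2}\ln(2\pi) - \tfrac{1}{2}(z-\theta)^2$, so

$$\ln(f(z|\theta+\delta)) - \ln(f(z|\theta)) = \delta\,(z-\theta) - \tfrac{1}{2}\,\delta^2,$$

with second-derivative term equal to $-1$ regardless of $\lambda(z,\delta)$.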
Note that, by the convexity of $\Theta$, $\theta_0 + \lambda(z,\delta)\delta \in \Theta$. Therefore, it follows from condition (8.22), the definition of a derivative, and the dominated convergence theorem that

$$\frac{d}{d\theta}\int \ln(f(z|\theta))\,f(z|\theta_0)\,dz = \int \frac{d\ln(f(z|\theta))}{d\theta}\,f(z|\theta_0)\,dz. \tag{8.25}$$
Similarly, it follows from condition (8.21), Taylor's theorem, and the dominated convergence theorem that

$$\frac{d}{d\theta}\int f(z|\theta)\,dz = \int \frac{d f(z|\theta)}{d\theta}\,dz. \tag{8.26}$$
Moreover, because $\int f(z|\theta)\,dz = 1$ for all $\theta\in\Theta_0$,

$$\int \frac{d\ln(f(z|\theta))}{d\theta}\bigg|_{\theta=\theta_0} f(z|\theta_0)\,dz = \int \frac{d f(z|\theta)}{d\theta}\bigg|_{\theta=\theta_0} dz = \frac{d}{d\theta}\int f(z|\theta)\,dz\,\bigg|_{\theta=\theta_0} = 0. \tag{8.27}$$

The first part of Theorem 8.2 now follows from (8.23) through (8.27).
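For example (an illustration not in the text), for the exponential density $f(z|\theta) = \theta e^{-\theta z}$ on $(0,\infty)$ we have $d\ln(f(z|\theta))/d\theta = 1/\theta - z$, and (8.27) can be verified directly:

$$\int_0^\infty \left(\frac{1}{\theta_0} - z\right)\theta_0 e^{-\theta_0 z}\,dz = \frac{1}{\theta_0} - \frac{1}{\theta_0} = 0,$$

because a random variable with this density has expectation $1/\theta_0$.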
As is the case for (8.25) and (8.26), it follows from the mean value theorem and conditions (8.21) and (8.22) that

$$\frac{d^2}{d\theta^2}\int \ln(f(z|\theta))\,f(z|\theta_0)\,dz = \int \frac{d^2\ln(f(z|\theta))}{d\theta^2}\,f(z|\theta_0)\,dz \tag{8.28}$$

and

$$\frac{d^2}{d\theta^2}\int f(z|\theta)\,dz = \int \frac{d^2 f(z|\theta)}{d\theta^2}\,dz = 0. \tag{8.29}$$

The second part of the theorem follows now from (8.28), (8.29), and the identity $d^2\ln(f(z|\theta))/d\theta^2 = (d^2 f(z|\theta)/d\theta^2)/f(z|\theta) - (d\ln(f(z|\theta))/d\theta)^2$, which yields

$$\int \frac{d^2\ln(f(z|\theta))}{d\theta^2}\bigg|_{\theta=\theta_0} f(z|\theta_0)\,dz = \int \frac{d^2 f(z|\theta)}{d\theta^2}\bigg|_{\theta=\theta_0} dz - \int \left(\frac{d\ln(f(z|\theta))}{d\theta}\bigg|_{\theta=\theta_0}\right)^2 f(z|\theta_0)\,dz,$$

where the first integral on the right-hand side is zero by (8.29) and, by the first part of the theorem, the second is the variance of the score.
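Continuing the exponential illustration, $d^2\ln(f(z|\theta))/d\theta^2 = -1/\theta^2$, so

$$-E\left[\frac{d^2\ln(f(z|\theta))}{d\theta^2}\bigg|_{\theta=\theta_0}\right] = \frac{1}{\theta_0^2} = \mathrm{Var}(z) = \mathrm{Var}\left(\frac{d\ln(f(z|\theta))}{d\theta}\bigg|_{\theta=\theta_0}\right),$$

in accordance with the second part of the theorem.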
The adaptation of the proof to the general case is reasonably straightforward and is therefore left as an exercise. Q.E.D.
The matrix

$$H = \mathrm{Var}\left(\frac{\partial \ln(\hat L_n(\theta))}{\partial\theta^{\mathrm T}}\bigg|_{\theta=\theta_0}\right) \tag{8.30}$$

is called the Fisher information matrix. As we have seen in Chapter 5, the inverse of the Fisher information matrix is just the Cramér-Rao lower bound of the variance matrix of an unbiased estimator of $\theta_0$.
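As a quick illustration (not from the text), for a random sample $z_1, \ldots, z_n$ from $N(\theta, \sigma^2)$ with $\sigma^2$ known,

$$\frac{\partial\ln(\hat L_n(\theta))}{\partial\theta} = \frac{1}{\sigma^2}\sum_{j=1}^{n} (z_j - \theta), \qquad H = \mathrm{Var}\left(\frac{\partial\ln(\hat L_n(\theta))}{\partial\theta}\bigg|_{\theta=\theta_0}\right) = \frac{n}{\sigma^2},$$

so the Cramér-Rao bound $H^{-1} = \sigma^2/n$ is exactly the variance of the sample mean, which is therefore an efficient unbiased estimator of $\theta_0$.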