Restricted MLE and Restricted Least Squares
Maximizing the likelihood function given in (7.16) subject to $R\beta = r$ is equivalent to minimizing the residual sum of squares subject to $R\beta = r$. Forming the Lagrangian function

$\Psi(\beta, \mu) = (y - X\beta)'(y - X\beta) + 2\mu'(R\beta - r)$  (7.32)
and differentiating with respect to $\beta$ and $\mu$, one gets

$\partial \Psi(\beta, \mu)/\partial \beta = -2X'y + 2X'X\beta + 2R'\mu = 0$  (7.33)

$\partial \Psi(\beta, \mu)/\partial \mu = 2(R\beta - r) = 0$  (7.34)
Solving for $\mu$, we premultiply (7.33) by $R(X'X)^{-1}$ and use (7.34)

$\hat{\mu} = [R(X'X)^{-1}R']^{-1}(R\hat{\beta}_{OLS} - r)$  (7.35)
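In detail, premultiplying (7.33) by $R(X'X)^{-1}$ and noting that $(X'X)^{-1}X'y = \hat{\beta}_{OLS}$ gives

$-2R\hat{\beta}_{OLS} + 2R\beta + 2R(X'X)^{-1}R'\mu = 0$

Replacing $R\beta$ by $r$ using (7.34) and solving for $\mu$ yields (7.35).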
Substituting (7.35) in (7.33) we get
$\hat{\beta}_{RLS} = \hat{\beta}_{OLS} - (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(R\hat{\beta}_{OLS} - r)$  (7.36)
The restricted least squares estimator of $\beta$ differs from the unrestricted OLS estimator by the second term in (7.36), with the term in parentheses showing the extent to which the unrestricted OLS estimator satisfies the constraint. Problem 12 shows that $\hat{\beta}_{RLS}$ is biased unless the restriction $R\beta = r$ is satisfied. However, its variance is always less than that of $\hat{\beta}_{OLS}$. This brings in the trade-off between bias and variance and the MSE criterion discussed in Chapter 2.
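As a concrete illustration, here is a minimal NumPy sketch of (7.35) and (7.36). It is not code from the text: the function name `restricted_ls`, the simulated data, and the restriction that the first two coefficients sum to one are all illustrative assumptions.

```python
import numpy as np

def restricted_ls(y, X, R, r):
    """Restricted least squares via (7.35) and (7.36).

    y: (n,) response; X: (n, k) regressors of full column rank;
    R: (g, k) restriction matrix; r: (g,) so the restriction is R @ beta = r.
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    b_ols = XtX_inv @ X.T @ y                    # unrestricted OLS estimator

    # (7.35): Lagrange multiplier estimate, the "cost" of the restriction
    A = R @ XtX_inv @ R.T                        # g x g, nonsingular if R has full row rank
    mu_hat = np.linalg.solve(A, R @ b_ols - r)

    # (7.36): correct OLS by the extent to which it violates the restriction
    b_rls = b_ols - XtX_inv @ R.T @ mu_hat
    return b_ols, b_rls, mu_hat

# Illustrative data satisfying the (hypothetical) restriction beta_1 + beta_2 = 1
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
beta_true = np.array([0.3, 0.7, -0.5])          # first two coefficients sum to 1
y = X @ beta_true + rng.normal(size=n)
R = np.array([[1.0, 1.0, 0.0]])
r = np.array([1.0])

b_ols, b_rls, mu_hat = restricted_ls(y, X, R, r)
print(R @ b_rls - r)                             # [0.]: the constraint holds exactly
```

Note that $R\hat{\beta}_{RLS} = r$ holds exactly: premultiplying (7.36) by $R$ makes the correction term cancel $R\hat{\beta}_{OLS} - r$.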
The Lagrange multiplier estimator $\hat{\mu}$ is distributed $N(0, \sigma^2[R(X'X)^{-1}R']^{-1})$ under the null hypothesis. Therefore, to test $\mu = 0$, we use

$\hat{\mu}'[R(X'X)^{-1}R']\hat{\mu}/\sigma^2 = (R\hat{\beta}_{OLS} - r)'[R(X'X)^{-1}R']^{-1}(R\hat{\beta}_{OLS} - r)/\sigma^2$  (7.37)
Since $\hat{\mu}$ measures the cost of imposing the restriction $R\beta = r$, it is no surprise that the right-hand side of (7.37) was already encountered in (7.29) and is distributed as $\chi^2_g$, where $g$ is the number of restrictions.
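To make (7.37) operational, the following sketch computes the statistic and checks its size by simulation. It assumes, as in the text, that $\sigma^2$ is known; in practice $\sigma^2$ is replaced by an estimate, which changes the finite-sample distribution. All names and the simulated design are illustrative.

```python
import numpy as np
from scipy import stats

def lm_statistic(y, X, R, r, sigma2):
    """Right-hand side of (7.37); chi-squared with g = R.shape[0] degrees
    of freedom under H0: R beta = r, for known sigma2."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b_ols = XtX_inv @ X.T @ y
    d = R @ b_ols - r                            # departure from the restriction
    A = R @ XtX_inv @ R.T
    return d @ np.linalg.solve(A, d) / sigma2

# Size check under H0: at the 5% level we should reject about 5% of the time
rng = np.random.default_rng(1)
n, sigma2 = 100, 1.0
beta_true = np.array([0.3, 0.7, -0.5])           # satisfies the restriction below
R, r = np.array([[1.0, 1.0, 0.0]]), np.array([1.0])
crit = stats.chi2.ppf(0.95, df=R.shape[0])       # chi-squared critical value, g = 1

rejections = 0
for _ in range(2000):
    X = rng.normal(size=(n, 3))
    y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)
    rejections += lm_statistic(y, X, R, r, sigma2) > crit
print(rejections / 2000)                         # close to 0.05
```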