Hypothesis Testing
Theorem 5.19 is the basis for hypothesis testing in linear regression analysis. First, consider the problem of whether a particular component of the vector $X_j$ of explanatory variables in model (5.31) has an effect on $Y_j$ or not. If not, the corresponding component of $\beta$ is zero. Each component of $\beta$ corresponds to a component $\theta_{i,0}$, $i > 0$, of $\theta_0$. Thus, the null hypothesis involved is
$$H_0\colon \theta_{i,0} = 0. \tag{5.49}$$
Let $\hat{\theta}_i$ be component $i$ of $\hat{\theta}$, and let the vector $e_i$ be column $i$ of the unit matrix $I_k$. Then it follows from Theorem 5.19(a) that, under the null hypothesis (5.49),
$$\hat{t}_i = \frac{\hat{\theta}_i}{S\sqrt{e_i^{\mathrm{T}}(X^{\mathrm{T}}X)^{-1}e_i}} \sim t_{n-k}. \tag{5.50}$$
The statistic $\hat{t}_i$ in (5.50) is called the t-statistic or t-value of the coefficient $\theta_{i,0}$. If $\theta_{i,0}$ can take negative or positive values, the appropriate alternative hypothesis is
$$H_1\colon \theta_{i,0} \neq 0. \tag{5.51}$$
Given the size $\alpha \in (0, 1)$ of the test, the critical value $\gamma$ corresponds to $P[|T| > \gamma] = \alpha$, where $T \sim t_{n-k}$. Thus, the null hypothesis (5.49) is accepted if $|\hat{t}_i| < \gamma$ and is rejected in favor of the alternative hypothesis (5.51) if $|\hat{t}_i| > \gamma$. In the latter case, we say that $\theta_{i,0}$ is significant at the $\alpha \times 100\%$ significance level. This test is called the two-sided t-test. The critical value $\gamma$ can be found in Table IV.1 in Appendix IV for the 5% and 10% significance levels and degrees of freedom $n - k$ ranging from 1 to 30. As follows from the results in the next chapter, for larger values of $n - k$ one may use the critical values of the standard normal test in Table IV.3 of Appendix IV.
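As a concrete illustration of this decision rule, here is a minimal sketch in Python; the estimate, standard error, $n$, and $k$ are hypothetical numbers, and the critical value is taken from the Student $t$ quantile function rather than from Table IV.1.

```python
# Two-sided t-test for H0: theta_i0 = 0 (a minimal sketch; all numbers hypothetical).
from scipy.stats import t

n, k = 30, 7             # hypothetical sample size and number of regression parameters
theta_hat_i = 1.2        # hypothetical OLS estimate of component i of theta
se_i = 0.5               # hypothetical standard error S * sqrt(e_i' (X'X)^{-1} e_i)
alpha = 0.05             # size of the test

t_i = theta_hat_i / se_i                # t-value as in (5.50)
gamma = t.ppf(1 - alpha / 2, n - k)     # critical value with P[|T| > gamma] = alpha

print("reject H0" if abs(t_i) > gamma else "accept H0")
```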
If the possibility that $\theta_{i,0}$ is negative can be excluded, the appropriate alternative hypothesis is
$$H_1^{+}\colon \theta_{i,0} > 0. \tag{5.52}$$
Given the size $\alpha$, the critical value $\gamma_{+}$ involved now corresponds to $P[T > \gamma_{+}] = \alpha$, where again $T \sim t_{n-k}$. Thus, the null hypothesis (5.49) is accepted if $\hat{t}_i < \gamma_{+}$ and is rejected in favor of the alternative hypothesis (5.52) if $\hat{t}_i > \gamma_{+}$. This is the right-sided t-test. The critical value $\gamma_{+}$ can be found in Table IV.2 of Appendix IV for the 5% and 10% significance levels and degrees of freedom $n - k$ ranging from 1 to 30. Again, for larger values of $n - k$ one may use the critical values of the standard normal test in Table IV.3 of Appendix IV.
Similarly, if the possibility that $\theta_{i,0}$ is positive can be excluded, the appropriate alternative hypothesis is
$$H_1^{-}\colon \theta_{i,0} < 0. \tag{5.53}$$
Then the null hypothesis (5.49) is accepted if $\hat{t}_i > -\gamma_{+}$ and is rejected in favor of the alternative hypothesis (5.53) if $\hat{t}_i < -\gamma_{+}$. This is the left-sided t-test.
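The one-sided tests use the same machinery, with the tail probability no longer halved. A minimal sketch, again with hypothetical numbers:

```python
# Right- and left-sided t-tests (all numbers hypothetical, as before).
from scipy.stats import t

n, k, alpha = 30, 7, 0.05
t_i = 2.0                              # hypothetical t-value from (5.50)
gamma_plus = t.ppf(1 - alpha, n - k)   # critical value with P[T > gamma_plus] = alpha

print("right-sided: reject H0" if t_i > gamma_plus else "right-sided: accept H0")
print("left-sided:  reject H0" if t_i < -gamma_plus else "left-sided:  accept H0")
```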
If the null hypothesis (5.49) is not true, then one can show, using the results in the next chapter, that for $n \to \infty$ and arbitrary $M > 0$, $P[\hat{t}_i > M] \to 1$ if $\theta_{i,0} > 0$ and $P[\hat{t}_i < -M] \to 1$ if $\theta_{i,0} < 0$. Therefore, the t-tests involved are consistent.
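The consistency claim can be illustrated by simulation. In the sketch below (all settings are hypothetical choices), the right-sided test is applied to a simple regression without intercept whose true slope is positive; the rejection frequency rises toward 1 as $n$ grows.

```python
# Simulation sketch of t-test consistency: with a true positive slope, the
# right-sided test rejects H0 with frequency approaching 1 as n increases.
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(0)
theta = 0.2                                  # true (nonzero) slope
for n in (25, 100, 400, 1600):
    rejections = 0
    for _ in range(500):
        x = rng.normal(size=n)
        y = theta * x + rng.normal(size=n)   # model without intercept, k = 1
        b = x @ y / (x @ x)                  # OLS slope estimate
        s2 = (y - b * x) @ (y - b * x) / (n - 1)
        t_i = b / np.sqrt(s2 / (x @ x))      # t-value of the slope
        rejections += t_i > student_t.ppf(0.95, n - 1)
    print(n, rejections / 500)               # rejection frequency
```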
Finally, consider a null hypothesis of the form
$$H_0\colon R\theta_0 = q, \tag{5.54}$$
where $R$ is a given $m \times k$ matrix with rank $m \le k$ and $q$ is a given $m \times 1$ vector.
For example, the null hypothesis that the parameter vector $\beta$ in model (5.31) is a zero vector corresponds to $R = (0, I_{k-1})$, $q = 0 \in \mathbb{R}^{k-1}$, and $m = k - 1$. This hypothesis implies that none of the components of $X_j$ have any effect on $Y_j$. In that case $Y_j = \alpha + U_j$, and because $U_j$ and $X_j$ are independent, so are $Y_j$ and $X_j$.
It follows from Theorem 5.19(b) that, under the null hypothesis (5.54),

$$F = \frac{(R\hat{\theta} - q)^{\mathrm{T}}\left[R(X^{\mathrm{T}}X)^{-1}R^{\mathrm{T}}\right]^{-1}(R\hat{\theta} - q)}{m\,S^2} \sim F_{m,n-k}.$$
Given the size $\alpha$, the critical value $\gamma$ is chosen such that $P[F > \gamma] = \alpha$, where $F \sim F_{m,n-k}$. Thus, the null hypothesis (5.54) is accepted if $F < \gamma$ and is rejected in favor of the alternative hypothesis $R\theta_0 \neq q$ if $F > \gamma$. For obvious reasons, this test is called the F test. The critical value $\gamma$ can be found in Appendix IV for the 5% and 10% significance levels. Moreover, one can show, using the results in the next chapter, that if the null hypothesis (5.54) is false, then for any $M > 0$, $\lim_{n\to\infty} P[F > M] = 1$. Thus, the F test is a consistent test.
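To see the F test in action, the following sketch simulates a model of the form (5.31) under the null hypothesis that all slope coefficients are zero and computes the F statistic displayed above; the data-generating process, dimensions, and seed are all hypothetical choices, not part of the text.

```python
# A minimal sketch of the F test for H0: R theta_0 = q with simulated data.
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(0)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # intercept + regressors
theta0 = np.array([1.0, 0.0, 0.0])                              # true parameters under H0
y = X @ theta0 + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
theta_hat = XtX_inv @ X.T @ y                                   # OLS estimate
S2 = (y - X @ theta_hat) @ (y - X @ theta_hat) / (n - k)        # unbiased error variance

# H0: all k-1 slope coefficients are zero, i.e., R = (0, I_{k-1}), q = 0.
R = np.hstack([np.zeros((k - 1, 1)), np.eye(k - 1)])
q = np.zeros(k - 1)
m = R.shape[0]

d = R @ theta_hat - q
F = d @ np.linalg.solve(R @ XtX_inv @ R.T, d) / (m * S2)        # F statistic
gamma = f.ppf(0.95, dfn=m, dfd=n - k)                           # P[F > gamma] = 0.05
print(f"F = {F:.3f}, critical value = {gamma:.3f}, reject H0: {F > gamma}")
```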
Exercises

1. Let
5. Show that for a random sample $X_1, X_2, \ldots, X_n$ from a distribution with expectation $\mu$ and variance $\sigma^2$ the sample variance (5.15) is an unbiased estimator of $\sigma^2$ even if the distribution involved is not normal.
6. Prove (5.17).
7. Show that for a random sample $X_1, X_2, \ldots, X_n$ from a multivariate distribution with expectation vector $\mu$ and variance matrix $\Sigma$ the sample variance matrix (5.18) is an unbiased estimator of $\Sigma$.
8. Given a random sample of size $n$ from the $N(\mu, \sigma^2)$ distribution, prove that the Cramér-Rao lower bound for an unbiased estimator of $\sigma^2$ is $2\sigma^4/n$.
9. Prove Theorem 5.15.
10. Prove the second equalities in (5.34) and (5.35).
11. Show that the Cramér-Rao lower bound of an unbiased estimator of (5.37) is equal to $\sigma^2(E[X^{\mathrm{T}}X])^{-1}$.
12. Show that the matrix (5.38) is idempotent.
13. Why is (5.40) true?
14. Why does (5.43) imply (5.44)?
15. Suppose your econometric software package reports that the OLS estimate of a regression parameter is 1.5, with corresponding t-value 2.4. However, you are only interested in whether the true parameter value is 1 or not. How would you test these hypotheses? Compute the test statistic involved. Moreover, given that the sample size is n = 30 and that your model has five other parameters, conduct the test at the 5% significance level.
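One possible way to organize the computation in Exercise 15 is sketched below; treating the model as having $k = 6$ parameters in total (the parameter tested plus the five others) is an interpretation of the exercise, not something the text states.

```python
# A sketch of the computation in Exercise 15: test H0: theta = 1 against H1: theta != 1.
from scipy.stats import t

theta_hat = 1.5
t_value = 2.4                        # reported t-value, which tests H0: theta = 0
se = theta_hat / t_value             # implied standard error, 1.5 / 2.4 = 0.625
t_stat = (theta_hat - 1.0) / se      # t-statistic for H0: theta = 1, equals 0.8

n, k, alpha = 30, 6, 0.05            # k = 6 is an assumption about what counts as a parameter
gamma = t.ppf(1 - alpha / 2, n - k)  # two-sided 5% critical value

print(f"t = {t_stat:.2f}, gamma = {gamma:.2f}, reject H0: {abs(t_stat) > gamma}")
```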
APPENDIX
Note again that the condition $AB = O$ only makes sense if both $A$ and $B$ are singular; if not, either $A$ or $B$ or both must be $O$. Write $A = Q_A \Lambda_A Q_A^{\mathrm{T}}$, $B = Q_B \Lambda_B Q_B^{\mathrm{T}}$, where $Q_A$ and $Q_B$ are orthogonal matrices of eigenvectors and $\Lambda_A$ and $\Lambda_B$ are diagonal matrices of corresponding eigenvalues. Then $Z_1 = X^{\mathrm{T}} Q_A \Lambda_A Q_A^{\mathrm{T}} X$ and $Z_2 = X^{\mathrm{T}} Q_B \Lambda_B Q_B^{\mathrm{T}} X$. Because $A$ and $B$ are both singular, it follows that $\Lambda_A$ and $\Lambda_B$ are singular. Thus, let

$$\Lambda_A = \begin{pmatrix} \Lambda_1 & O & O \\ O & -\Lambda_2 & O \\ O & O & O \end{pmatrix},$$

where $\Lambda_1$ is the $k \times k$ diagonal matrix of positive eigenvalues and $-\Lambda_2$ the $m \times m$ diagonal matrix of negative eigenvalues of $A$ with $k + m < n$. Then

$$Z_1 = X^{\mathrm{T}} Q_A \begin{pmatrix} \Lambda_1^{1/2} & O & O \\ O & \Lambda_2^{1/2} & O \\ O & O & O \end{pmatrix} \begin{pmatrix} I_k & O & O \\ O & -I_m & O \\ O & O & O \end{pmatrix} \begin{pmatrix} \Lambda_1^{1/2} & O & O \\ O & \Lambda_2^{1/2} & O \\ O & O & O \end{pmatrix} Q_A^{\mathrm{T}} X.$$

Similarly, let

$$\Lambda_B = \begin{pmatrix} \Lambda_1^* & O & O \\ O & -\Lambda_2^* & O \\ O & O & O \end{pmatrix},$$

where $\Lambda_1^*$ is the $p \times p$ diagonal matrix of positive eigenvalues and $-\Lambda_2^*$ is the $q \times q$ diagonal matrix of negative eigenvalues of $B$ with $p + q < n$. Then

$$Z_2 = X^{\mathrm{T}} Q_B \begin{pmatrix} (\Lambda_1^*)^{1/2} & O & O \\ O & (\Lambda_2^*)^{1/2} & O \\ O & O & O \end{pmatrix} \begin{pmatrix} I_p & O & O \\ O & -I_q & O \\ O & O & O \end{pmatrix} \begin{pmatrix} (\Lambda_1^*)^{1/2} & O & O \\ O & (\Lambda_2^*)^{1/2} & O \\ O & O & O \end{pmatrix} Q_B^{\mathrm{T}} X.$$

Next, for instance, let

$$Y_1 = \begin{pmatrix} \Lambda_1^{1/2} & O & O \\ O & \Lambda_2^{1/2} & O \\ O & O & O \end{pmatrix} Q_A^{\mathrm{T}} X = M_1 X, \qquad Y_2 = \begin{pmatrix} (\Lambda_1^*)^{1/2} & O & O \\ O & (\Lambda_2^*)^{1/2} & O \\ O & O & O \end{pmatrix} Q_B^{\mathrm{T}} X = M_2 X.$$

Then, for instance,

$$Z_1 = Y_1^{\mathrm{T}} \begin{pmatrix} I_k & O & O \\ O & -I_m & O \\ O & O & O \end{pmatrix} Y_1 = Y_1^{\mathrm{T}} D_1 Y_1, \qquad Z_2 = Y_2^{\mathrm{T}} D_2 Y_2,$$

where the diagonal matrices $D_1$ and $D_2$ are possibly different. Clearly, $Z_1$ and $Z_2$ are independent if $Y_1$ and $Y_2$ are independent. Now observe that

$$AB = Q_A \begin{pmatrix} \Lambda_1^{1/2} & O & O \\ O & \Lambda_2^{1/2} & O \\ O & O & O \end{pmatrix} \begin{pmatrix} I_k & O & O \\ O & -I_m & O \\ O & O & O \end{pmatrix} \begin{pmatrix} \Lambda_1^{1/2} & O & O \\ O & \Lambda_2^{1/2} & O \\ O & O & O \end{pmatrix} Q_A^{\mathrm{T}} Q_B$$
$$\times \begin{pmatrix} (\Lambda_1^*)^{1/2} & O & O \\ O & (\Lambda_2^*)^{1/2} & O \\ O & O & O \end{pmatrix} \begin{pmatrix} I_p & O & O \\ O & -I_q & O \\ O & O & O \end{pmatrix} \begin{pmatrix} (\Lambda_1^*)^{1/2} & O & O \\ O & (\Lambda_2^*)^{1/2} & O \\ O & O & O \end{pmatrix} Q_B^{\mathrm{T}}.$$
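As an informal numerical check of the independence result this appendix is proving (not part of the proof itself), the following sketch constructs symmetric $A$ and $B$ with $AB = O$ from disjoint sets of orthonormal eigenvectors and verifies by simulation that $Z_1 = X^{\mathrm{T}}AX$ and $Z_2 = X^{\mathrm{T}}BX$ are uncorrelated for $X \sim N_n(0, I_n)$; all matrices, dimensions, and the sample size are arbitrary choices.

```python
# Numerical illustration: A and B built from disjoint eigenspaces, so AB = O;
# the quadratic forms Z1 = X'AX and Z2 = X'BX in X ~ N(0, I_n) are then independent.
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))        # random orthogonal matrix
A = Q[:, :2] @ np.diag([1.0, -2.0]) @ Q[:, :2].T    # eigenvalues 1 and -2 (k = m = 1)
B = Q[:, 2:3] @ np.array([[3.0]]) @ Q[:, 2:3].T     # one positive eigenvalue (p = 1)
assert np.allclose(A @ B, np.zeros((n, n)))         # AB = O by construction

X = rng.normal(size=(100_000, n))                    # draws of X ~ N(0, I_n)
Z1 = np.einsum('ij,jk,ik->i', X, A, X)               # X' A X for each draw
Z2 = np.einsum('ij,jk,ik->i', X, B, X)               # X' B X for each draw
print(np.corrcoef(Z1, Z2)[0, 1])                     # sample correlation, close to 0
```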