INTRODUCTION TO STATISTICS AND ECONOMETRICS
TESTS OF HYPOTHESES
10.3.1 Student's t Test
In Section 9.5 we showed that a hypothesis on the mean of a normal i.i.d. sample with an unknown variance can be tested using the Student's t statistic. A similar test can be devised for testing hypotheses on α and β in the bivariate regression model. Throughout this section we assume that {u_t} are normally distributed.
We shall consider the null hypothesis H_0: β = β_0, where β_0 is a known specified value. A hypothesis on α can be similarly dealt with. Since β̂ is a good estimator of β, it is reasonable to expect that a test statistic which essentially depends on β̂ is also a good one. A linear combination of normal random variables is normally distributed by Theorem 5.3.2, so we see from (10.2.19) that
(10.3.1)  \frac{\sqrt{\sum_{t=1}^{T} (x_t^*)^2}}{\sigma}\,(\hat{\beta} - \beta_0) \sim N(0, 1)
under the null hypothesis H_0. Therefore, if σ² were known, the distribution of β̂ would be completely specified and we could perform the standard normal test. If σ² is unknown, which is usually the case, we must use a Student's t test. From Definition 2 of the Appendix we know that in order to construct a Student's t statistic, we need a chi-square variable that is distributed independently of (10.3.1). In the next two paragraphs we show that σ^{-2} Σ û_t² fits this specification.
We state without proof that
(10.3.2)  \frac{\sum_{t=1}^{T} \hat{u}_t^2}{\sigma^2} \sim \chi^2_{T-2}
To prove (10.3.2) we must show that σ^{-2} Σ û_t² can be written as a sum of the squares of T − 2 independent standard normal variables. We can do so by the method of induction, as in the proof of Theorem 3 of the Appendix. Since this proof is rather cumbersome, we shall postpone it until Chapter 12, where a simpler proof using matrix analysis is given.
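The chi-square result (10.3.2) can be checked numerically. The following sketch (variable names and parameter values are my own, not from the text) simulates the normal bivariate regression model y_t = α + βx_t + u_t many times and averages the scaled residual sum of squares Σ û_t²/σ², which by (10.3.2) should have mean T − 2:

```python
# Monte Carlo check of (10.3.2): sum(uhat_t^2)/sigma^2 should behave
# like chi-square with T - 2 degrees of freedom, whose mean is T - 2.
# Model and parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, alpha, beta, sigma = 20, 1.0, 2.0, 1.5
x = np.linspace(0.0, 10.0, T)

def scaled_rss(rng):
    u = rng.normal(0.0, sigma, T)
    y = alpha + beta * x + u
    xs = x - x.mean()                      # x_t^* = x_t - xbar
    bhat = (xs * y).sum() / (xs**2).sum()  # least squares slope
    ahat = y.mean() - bhat * x.mean()      # least squares intercept
    uhat = y - ahat - bhat * x             # residuals
    return (uhat**2).sum() / sigma**2

draws = np.array([scaled_rss(rng) for _ in range(20000)])
print(draws.mean())   # should be close to T - 2 = 18
```

With 20,000 replications the simulated mean sits well within sampling error of T − 2 = 18.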
We now prove that (10.3.1) and (10.3.2) are independent. Using (10.2.5) and (10.2.11), we have
(10.3.3)  \hat{u}_t = u_t - \bar{u} - (\hat{\beta} - \beta)\,x_t^*
Therefore, using (10.2.19), we obtain
(10.3.4)  \mathrm{Cov}(\hat{\beta}, \hat{u}_t) = \sigma^2\left[\frac{x_t^*}{\sum (x_t^*)^2} - \frac{\sum x_t^*}{T \sum (x_t^*)^2} - \frac{x_t^*}{\sum (x_t^*)^2}\right] = 0,
since Σ x_t^* = 0 by (10.2.17). Equation (10.3.4) shows that the covariance between β̂ and û_t is zero; but since they are jointly normal, this implies their independence by Theorem 5.3.4. Therefore, (10.3.1) and (10.3.2) are independent by Theorem 3.5.1.
Using Definition 2 of the Appendix, we conclude that under H0
(10.3.5)  \frac{\sqrt{\sum_{t=1}^{T} (x_t^*)^2}}{\hat{\sigma}}\,(\hat{\beta} - \beta_0) \sim t_{T-2}  (Student's t with T − 2 degrees of freedom)

where σ̂² is the unbiased estimator of σ² defined in (10.2.45). Note that the left-hand side of (10.3.5) is simply β̂ − β_0 divided by the square root of an unbiased estimate of its variance. We could use either a one-tail or a two-tail test, depending on the alternative hypothesis.
The test is not exact if {u_t} are not normal. Because of the asymptotic normality given in (10.2.76), however, the test based on (10.3.5) is approximately correct for a large sample even if {u_t} are not normal, provided that the assumptions for the asymptotic normality are satisfied.
A test of the null hypothesis H_0: α = α_0 can be performed using a similar result:

(10.3.6)  \frac{\sqrt{T \sum_{t=1}^{T} (x_t^*)^2}}{\hat{\sigma}\sqrt{\sum_{t=1}^{T} x_t^2}}\,(\hat{\alpha} - \alpha_0) \sim t_{T-2}