Testing Parameter Hypotheses
Suppose you consider starting a business to sell a new product in the United States, such as a European car that is not yet being imported there. To determine whether there is a market for this car in the United States, you have randomly selected n persons from the population of potential buyers of this car. Each person j in the sample is asked how much he or she would be willing to pay for this car. Let the answer be Yj. Moreover, suppose that the cost of importing this car is a fixed amount Z per car. Denote Xj = ln(Yj/Z), and assume that Xj is N(μ, σ²) distributed. If μ > 0, then your planned car import business will be profitable; otherwise, you should forget about this idea.
To decide whether μ > 0 or μ ≤ 0, you need a decision rule based on the random sample X = (X1, X2, ..., Xn)ᵀ. Any decision rule takes the following form. Given a subset C of ℝⁿ, to be determined below in this section, decide that μ > 0 if X ∈ C, and decide that μ ≤ 0 if X ∉ C. Thus, you decide that the hypothesis μ ≤ 0 is true if I(X ∈ C) = 0, and you decide that the hypothesis μ > 0 is true if I(X ∈ C) = 1. In this case the hypothesis μ ≤ 0 is called the null hypothesis, usually denoted by H0: μ ≤ 0, and the hypothesis μ > 0 is called the alternative hypothesis, denoted by H1: μ > 0. The procedure itself is called a statistical test.
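To make the setup concrete, here is a minimal sketch in Python that simulates such a sample and frames the decision rule as an indicator function. The import cost Z, the parameters μ and σ, and the sample size n are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) quantities, not values from the text:
Z = 20000.0                     # fixed import cost per car
mu, sigma, n = 0.1, 0.5, 50     # population parameters of X_j and the sample size

# Simulate willingness-to-pay answers Y_j so that X_j = ln(Y_j / Z) is N(mu, sigma^2):
Y = Z * np.exp(rng.normal(mu, sigma, size=n))
X = np.log(Y / Z)               # the random sample X = (X_1, ..., X_n)^T

def decide(X, in_C):
    """Generic decision rule: return 1 (decide mu > 0, i.e., accept H1) if X lies in
    the critical region C, and 0 (decide mu <= 0, i.e., accept H0) otherwise."""
    return int(in_C(X))
```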
This decision rule yields two types of errors. In the first one, called the Type I error, you decide that H1 is true whereas in reality H0 is true. In the other error, called the Type II error, H0 is considered to be true whereas in reality H1 is true. Both errors come with costs. If the Type I error occurs, you will incorrectly assume your car import business to be profitable, and thus you will lose your investment if you start up your business. If the Type II error occurs, you will forgo a profitable business opportunity. Clearly, the Type I error is the more serious of the two.
Now choose C such that X ∈ C if and only if √n(X̄/S) > β for some fixed β > 0. Then
P[X ∈ C] = P[√n(X̄/S) > β] = P[√n(X̄ − μ)/S + √n μ/S > β]
         = P[√n(X̄ − μ)/σ + √n μ/σ > β·S/σ]
         = ∫_{−∞}^{∞} P[S/σ < (u + √n μ/σ)/β] exp(−u²/2)/√(2π) du,    (5.25)
where the last equality follows from Theorem 5.14 and (5.19). If μ ≤ 0, this probability is that of a Type I error. Clearly, the probability (5.25) is an increasing function of μ; hence, the maximum probability of a Type I error is obtained for μ = 0. But if μ = 0, then it follows from Theorem 5.15 that √n(X̄/S) ~ t_{n−1}; hence,
max_{μ≤0} P[X ∈ C] = ∫_{β}^{∞} h_{n−1}(u) du = α,    (5.26)
for instance, where h_{n−1} is the density of the t_{n−1} distribution (see (5.21)). The probability (5.26) is called the size of the test of the null hypothesis involved, which is the maximum risk of a Type I error, and α × 100% is called the significance level of the test. Depending on how risk averse you are, you have to choose a size α ∈ (0, 1), and therefore β = βₙ must be chosen such that ∫_{βₙ}^{∞} h_{n−1}(u) du = α. This value βₙ is called the critical value of the test involved, and because it is based on the distribution of √n(X̄/S), the latter is considered the test statistic involved.
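As a hedged illustration (not part of the text), the critical value βₙ is just the upper-α quantile of the t_{n−1} distribution, so it can be computed with scipy.stats rather than read from a table; α = 0.05 and n = 50 are assumed values.

```python
import numpy as np
from scipy.stats import t

alpha = 0.05                          # assumed size of the test
n = 50                                # assumed sample size
beta_n = t.ppf(1 - alpha, df=n - 1)   # critical value: integral of h_{n-1} from beta_n to infinity equals alpha

def one_sided_t_test(X, beta_n):
    """Reject H0: mu <= 0 in favor of H1: mu > 0 iff sqrt(n) * Xbar / S > beta_n."""
    m = len(X)
    return np.sqrt(m) * X.mean() / X.std(ddof=1) > beta_n

# Monte Carlo check of the size: under mu = 0 the test statistic is t_{n-1} distributed,
# so the rejection frequency should be close to alpha.
rng = np.random.default_rng(1)
rejections = [one_sided_t_test(rng.normal(0.0, 1.0, n), beta_n) for _ in range(20000)]
print(beta_n, np.mean(rejections))    # beta_n approx 1.68, rejection frequency approx 0.05
```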
If we replace β in (5.25) by βₙ, 1 minus the probability of a Type II error is the following function of μ/σ > 0:
ρₙ(μ/σ) = ∫_{−√n μ/σ}^{∞} P[S/σ < (u + √n μ/σ)/βₙ] exp(−u²/2)/√(2π) du,  μ > 0.    (5.27)
This function is called the power function, which is the probability of correctly rejecting the null hypothesis H0 in favor of the alternative hypothesis H1. Consequently, 1 − ρₙ(μ/σ), μ > 0, is the probability of a Type II error.
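A sketch of how the power function (5.27) can be evaluated numerically, using the facts that √n(X̄ − μ)/σ is standard normal and (n − 1)S²/σ² is χ²_{n−1} distributed, independently of each other. The cross-check via the noncentral t distribution is an equivalent representation of the same probability, not the derivation used in the text; the values of n, μ/σ, and α are assumed.

```python
import numpy as np
from scipy.stats import chi2, t, nct
from scipy.integrate import quad

def power_one_sided(delta, n, alpha=0.05):
    """Evaluate rho_n(mu/sigma) for delta = mu/sigma > 0 by numerical integration of (5.27),
    using that sqrt(n)(Xbar - mu)/sigma is N(0,1) and (n-1)S^2/sigma^2 is chi^2_{n-1}."""
    beta_n = t.ppf(1 - alpha, df=n - 1)
    shift = np.sqrt(n) * delta

    def integrand(u):
        c = (u + shift) / beta_n                     # threshold for S/sigma
        return chi2.cdf((n - 1) * c**2, df=n - 1) * np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)

    value, _ = quad(integrand, -shift, np.inf)
    return value

# Cross-check: sqrt(n) * Xbar/S has a noncentral t_{n-1} distribution with
# noncentrality sqrt(n) * mu/sigma, so the same power is nct.sf(beta_n, n-1, sqrt(n)*delta).
n, delta, alpha = 50, 0.2, 0.05
print(power_one_sided(delta, n, alpha))
print(nct.sf(t.ppf(1 - alpha, n - 1), df=n - 1, nc=np.sqrt(n) * delta))
```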
The test in this example is called a t-test because the critical value βₙ is derived from the t-distribution.
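In practice the same decision can be obtained from a standard library routine; a minimal sketch, assuming SciPy ≥ 1.6 (for the `alternative` keyword) and simulated data in place of survey answers. Rejecting when the p-value is below α is equivalent to the test statistic exceeding βₙ.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(2)
X = rng.normal(0.1, 0.5, size=50)   # simulated X_j = ln(Y_j/Z); mu = 0.1, sigma = 0.5 assumed

# One-sided test of H0: mu <= 0 against H1: mu > 0 at the 5% significance level.
result = ttest_1samp(X, popmean=0.0, alternative='greater')
print(result.statistic, result.pvalue)
print("reject H0" if result.pvalue < 0.05 else "accept H0")
```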
A test is said to be consistent if the power function converges to 1 as n → ∞ for all values of the parameter(s) under the alternative hypothesis. Using the results in the next chapter, one can show that the preceding test is consistent:
lim_{n→∞} ρₙ(μ/σ) = 1  if μ > 0.    (5.28)
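A numerical illustration of (5.28): because √n(X̄/S) has a noncentral t_{n−1} distribution with noncentrality parameter √n·μ/σ (a standard fact, not derived in this section), the power can be evaluated directly and is seen to approach 1 as n grows. The values μ/σ = 0.2 and α = 0.05 are assumed.

```python
import numpy as np
from scipy.stats import t, nct

# Power of the one-sided t-test for fixed mu/sigma = 0.2 (assumed) and alpha = 0.05:
delta, alpha = 0.2, 0.05
for n in (10, 25, 50, 100, 400, 1600):
    beta_n = t.ppf(1 - alpha, df=n - 1)
    power = nct.sf(beta_n, df=n - 1, nc=np.sqrt(n) * delta)   # P[sqrt(n) * Xbar/S > beta_n]
    print(n, round(float(power), 4))                          # increases toward 1 as n grows
```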
Now let us consider the test of the null hypothesis H0: μ = 0 against the alternative hypothesis H1: μ ≠ 0. Under the null hypothesis, √n(X̄/S) ~ t_{n−1} exactly. Given the size α ∈ (0, 1), choose the critical value βₙ > 0 as in (5.22). Then H0 is accepted if |√n(X̄/S)| ≤ βₙ and rejected in favor of H1 if |√n(X̄/S)| > βₙ. The power function of this test is
ρₙ(μ/σ) = ∫_{−∞}^{∞} P[S/σ < |u + √n μ/σ|/βₙ] exp(−u²/2)/√(2π) du,  μ ≠ 0.    (5.29)
This test is known as the two-sided t-test with significance level α × 100%. The critical values βₙ for the 5% and 10% significance levels can be found in Table IV.1 in Appendix IV. This test is also consistent:
lim_{n→∞} ρₙ(μ/σ) = 1  if μ ≠ 0.    (5.30)
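A sketch of the two-sided test and its power, again using the noncentral t representation of √n(X̄/S); the significance levels and μ/σ value are assumed for illustration, and t.ppf(1 − α/2, n − 1) should reproduce the critical values tabulated in Table IV.1.

```python
import numpy as np
from scipy.stats import t, nct

def two_sided_t_test(X, alpha=0.05):
    """Reject H0: mu = 0 in favor of H1: mu != 0 iff |sqrt(n) * Xbar / S| > beta_n,
    where beta_n is the upper alpha/2 quantile of the t_{n-1} distribution."""
    n = len(X)
    beta_n = t.ppf(1 - alpha / 2, df=n - 1)
    statistic = np.sqrt(n) * X.mean() / X.std(ddof=1)
    return abs(statistic) > beta_n

def power_two_sided(delta, n, alpha=0.05):
    """rho_n(mu/sigma) for the two-sided test, using that sqrt(n) * Xbar/S has a
    noncentral t_{n-1} distribution with noncentrality parameter sqrt(n) * delta."""
    beta_n = t.ppf(1 - alpha / 2, df=n - 1)
    nc = np.sqrt(n) * delta
    return nct.sf(beta_n, df=n - 1, nc=nc) + nct.cdf(-beta_n, df=n - 1, nc=nc)

print(t.ppf(0.975, df=49), t.ppf(0.95, df=49))    # 5% and 10% two-sided critical values for n = 50
for n in (10, 50, 200, 800):                      # power tends to 1 as n grows (consistency)
    print(n, round(float(power_two_sided(0.2, n)), 4))
```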