INTRODUCTION TO STATISTICS AND ECONOMETRICS
9.5 COMPOSITE AGAINST COMPOSITE
In this section we consider testing a composite null hypothesis against a composite alternative hypothesis. As noted earlier, this situation is the most realistic. Let the null and alternative hypotheses be H₀: θ ∈ Θ₀ and H₁: θ ∈ Θ₁, where Θ₀ and Θ₁ are subsets of the parameter space Θ. Here we define the concept of the UMP test as follows:
DEFINITION 9.5.1 A test R is the uniformly most powerful test of size (level) α if sup_{θ∈Θ₀} P(R | θ) = (≤) α and if, for any other test R₁ such that sup_{θ∈Θ₀} P(R₁ | θ) = (≤) α, we have P(R | θ) ≥ P(R₁ | θ) for any θ ∈ Θ₁.
For the present situation we define the likelihood ratio test as follows.
DEFINITION 9.5.2 Let L(x | θ) be the likelihood function. Then the likelihood ratio test of H₀ against H₁ is defined by the critical region
(9.5.1)  λ = sup_{θ∈Θ₀} L(θ) / sup_{θ∈Θ₀∪Θ₁} L(θ) < c,
where c is chosen to satisfy sup_{θ∈Θ₀} P(λ < c | θ) = α for a certain specified value of α.
The following are examples of the likelihood ratio test.
EXAMPLE 9.5.1 Consider the same model as in Example 9.4.2, but here test H₀: p ≤ p₀ against H₁: p > p₀. If x/n ≤ p₀, λ = 1; therefore, accept H₀. Henceforth suppose x/n > p₀. Since the numerator of the likelihood ratio attains its maximum at p = p₀, λ is the same as in (9.4.6). Therefore the critical region is again given by (9.4.8). Next we must determine d so as to satisfy

(9.5.2)  sup_{p≤p₀} P(X/n > d | p) = α.
But since P(X/n > d | p) can be shown to be a monotonically increasing function of p, we have
(9.5.3)  sup_{p≤p₀} P(X/n > d | p) = P(X/n > d | p₀).
Therefore the value of d is also the same as in Example 9.4.2.
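The determination of d can be illustrated by direct enumeration. The sketch below is plain Python; the sample size n = 20, null value p₀ = 0.4, and size α = 0.05 are hypothetical choices, not taken from the text. Because X is discrete, the sketch settles for the largest critical region whose size does not exceed α, and it also spot-checks the monotonicity claim behind (9.5.3).

```python
from math import comb

def binom_tail(n, x0, p):
    """P(X >= x0) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x0, n + 1))

# Hypothetical values, chosen only for illustration.
n, p0, alpha = 20, 0.4, 0.05

# Smallest integer cutoff x0 such that P(X >= x0 | p0) <= alpha.
# Because X is discrete, the attained size is at most alpha, not exactly alpha.
x0 = next(x for x in range(n + 1) if binom_tail(n, x, p0) <= alpha)

# The supremum over p <= p0 in (9.5.2) is attained at p0, because the
# tail probability is monotonically increasing in p -- cf. (9.5.3).
tails = [binom_tail(n, x0, p) for p in (0.1, 0.2, 0.3, 0.4)]
assert all(a < b for a, b in zip(tails, tails[1:]))

print("reject H0 when X >=", x0)   # the cutoff X/n > d corresponds to X >= x0
```

A continuity-corrected or randomized test would attain size exactly α; the sketch keeps the conservative discrete version for simplicity.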
This test is UMP. To see this, let R be the test defined above and let R₁ be some other test such that sup_{p≤p₀} P(R₁ | p) ≤ α. Then it follows that P(R₁ | p₀) ≤ α. But since R is the UMP test of H₀: p = p₀ against H₁: p > p₀, we have P(R₁ | p) ≤ P(R | p) for all p > p₀ by the result of Example 9.4.2.
EXAMPLE 9.5.2 Let the sample be Xᵢ ~ N(μ, σ²) with unknown σ², i = 1, 2, . . . , n. We are to test H₀: μ = μ₀ and 0 < σ² < ∞ against H₁: μ > μ₀ and 0 < σ² < ∞.
Denoting (μ, σ²) by θ, we have
(9.5.4)  L(θ) = (2π)^{-n/2} (σ²)^{-n/2} exp[−(2σ²)^{-1} Σ_{i=1}^n (xᵢ − μ)²].
Therefore
(9.5.5)  sup_{Θ₀} L(θ) = (2π)^{-n/2} (σ̂²)^{-n/2} exp[−(2σ̂²)^{-1} Σ_{i=1}^n (xᵢ − μ₀)²]
                       = (2π)^{-n/2} (σ̂²)^{-n/2} exp(−n/2),
where σ̂² = n^{-1} Σ_{i=1}^n (xᵢ − μ₀)². If x̄ ≤ μ₀, λ = 1; therefore, accept H₀. Henceforth suppose x̄ > μ₀. Then we have
(9.5.6)  sup_{Θ₀∪Θ₁} L(θ) = (2π)^{-n/2} (σ̃²)^{-n/2} exp(−n/2),

where σ̃² = n^{-1} Σ_{i=1}^n (xᵢ − x̄)². Therefore the critical region is
(9.5.7)  λ = (σ̃²/σ̂²)^{n/2} < c

for some c, which can be equivalently written as

(9.5.8)  √n (x̄ − μ₀)/s > k,

where s² is the unbiased estimator of σ² defined by s² = (n − 1)^{-1} Σ_{i=1}^n (xᵢ − x̄)². Since the left-hand side of (9.5.8) is distributed as Student's t with n − 1 degrees of freedom under H₀, k can be computed or read from the appropriate table. Note that since P(R | H₀) is uniquely determined in this example in spite of the composite null hypothesis, there is no need to compute the supremum.
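The equivalence of (9.5.7) and (9.5.8) amounts to the identity λ = [1 + t²/(n − 1)]^{−n/2}, where t denotes the left-hand side of (9.5.8); when x̄ > μ₀, λ < c corresponds to t > k. A minimal numerical check (the sample values and μ₀ below are made up for illustration, not taken from the text):

```python
from math import sqrt
from statistics import mean

# Made-up sample and null value, for illustration only.
xs = [2.1, 1.8, 2.5, 2.9, 2.3, 2.7]
mu0 = 2.0
n = len(xs)
xbar = mean(xs)
assert xbar > mu0        # the case in which the test is nontrivial

sigma_hat2 = sum((x - mu0) ** 2 for x in xs) / n    # maximizer under H0, cf. (9.5.5)
sigma_til2 = sum((x - xbar) ** 2 for x in xs) / n   # unrestricted maximizer, cf. (9.5.6)
s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)     # unbiased estimator of sigma^2

lam = (sigma_til2 / sigma_hat2) ** (n / 2)          # likelihood ratio, cf. (9.5.7)
t = sqrt(n) * (xbar - mu0) / sqrt(s2)               # t statistic, cf. (9.5.8)

# lam is a decreasing one-to-one function of t, so "lam < c" and "t > k" match.
assert abs(lam - (1 + t**2 / (n - 1)) ** (-n / 2)) < 1e-12
```

The identity follows from σ̂² = σ̃² + (x̄ − μ₀)² and t² = (n − 1)(x̄ − μ₀)²/σ̃².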
If the alternative hypothesis specifies μ ≠ μ₀, the critical region (9.5.8) should be modified by putting the absolute value sign around the left-hand side. In this case the same test can be performed using the confidence interval defined in Example 8.2.3.
In Section 9.3 we gave a Bayesian interpretation of the classical method for the case of testing a simple null hypothesis against a simple alternative hypothesis. Here we shall do the same for the composite against composite case, and we shall see that the classical theory of hypothesis testing becomes more problematic. Let us first see how the Bayesian would solve the problem of testing H₀: θ ≤ θ₀ against H₁: θ > θ₀. Let L₁(θ) be the loss incurred by choosing H₀, and L₂(θ) that incurred by choosing H₁. Then the Bayesian rejects H₀ if
(9.5.9)  ∫_{−∞}^{∞} L₁(θ) f(θ | x) dθ > ∫_{−∞}^{∞} L₂(θ) f(θ | x) dθ,

where f(θ | x) is the posterior density of θ. Suppose, for simplicity, that L₁(θ) and L₂(θ) are simple step functions defined by
(9.5.10)  L₁(θ) = 0 for θ ≤ θ₀
                = γ₁ for θ > θ₀

and

          L₂(θ) = 0 for θ > θ₀
                = γ₂ for θ ≤ θ₀.
In this case the losses are as given in Table 9.2; therefore (9.5.9), as can be seen from (9.3.8), reduces to
(9.5.11)  L(x | H₁)/L(x | H₀) > γ₂η₀/(γ₁η₁),

where η₀ and η₁ denote the prior probabilities of H₀ and H₁, respectively.
Recall that (9.5.11) is the basis for interpreting the Neyman-Pearson test. Here, in addition to the problem of not being able to evaluate η₀/η₁, the classical statistician faces the problem of not being able to make sense of L(x | H₁) and L(x | H₀).
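Under the step losses (9.5.10), the rule (9.5.9) reduces to rejecting H₀ whenever γ₁P(θ > θ₀ | x) > γ₂P(θ ≤ θ₀ | x). A minimal numerical sketch of this reduction, assuming (purely for illustration) a normal posterior for θ; the posterior mean, standard deviation, and the losses γ₁, γ₂ below are hypothetical:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical posterior theta | x ~ N(0.8, 0.5^2), and step losses as in (9.5.10).
theta0 = 0.0
post_mean, post_sd = 0.8, 0.5
gamma1, gamma2 = 1.0, 2.0

p_H0 = norm_cdf((theta0 - post_mean) / post_sd)   # P(theta <= theta0 | x)
p_H1 = 1 - p_H0                                   # P(theta >  theta0 | x)

# With the step losses, (9.5.9) becomes a comparison of posterior expected losses:
reject_H0 = gamma1 * p_H1 > gamma2 * p_H0
print("P(H0 | x) =", round(p_H0, 4), "-> reject H0:", reject_H0)
```

Note that only the posterior probabilities of the two composite hypotheses matter, which is exactly what the classical statistician cannot compute without a prior.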
The likelihood ratio test is essentially equivalent to rejecting H₀ if
(9.5.12)  sup_{θ>θ₀} L(x | θ) / sup_{θ≤θ₀} L(x | θ) > c.
A problem here is that the left-hand side of (9.5.12) may not be a good substitute for the left-hand side of (9.5.11).
Sometimes a statistical decision problem we face in practice need not, or cannot, be phrased as the problem of testing a hypothesis about a parameter. For example, consider the problem of deciding whether or not we should approve a certain drug on the basis of observing x cures in n independent trials. Let p be the probability of a cure when the drug is administered to a patient, and assume that the net benefit to society of approving the drug can be represented by a function U(p), nondecreasing in p. According to the Bayesian principle, we should approve the drug if
(9.5.13)  ∫₀¹ U(p) f(p | x) dp > 0,
where f(p | x) is the posterior density of p given x. Note that in this decision problem, hypothesis testing on the parameter p is not explicitly considered. The decision rule (9.5.13) is essentially of the same kind as (9.5.9), however.
Next we try to express (9.5.13) more explicitly as an inequality concerning x, assuming for simplicity that f(p | x) is derived from a uniform prior density; that is, from (8.3.7),

(9.5.14)  f(p | x) = (n + 1) Cⁿₓ pˣ (1 − p)^{n−x}.
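As a quick sanity check on (9.5.14), the sketch below (with hypothetical data n = 10, x = 7, not from the text) verifies by the midpoint rule that f(p | x) integrates to one and that its mean is (x + 1)/(n + 2), as expected of the Beta(x + 1, n − x + 1) density that (9.5.14) in fact is:

```python
from math import comb

def posterior(p, n, x):
    """f(p | x) of (9.5.14): posterior density of p under a uniform prior."""
    return (n + 1) * comb(n, x) * p**x * (1 - p)**(n - x)

n, x = 10, 7          # hypothetical data: 7 successes in 10 trials
m = 100_000           # midpoint-rule grid size

grid = [(i + 0.5) / m for i in range(m)]
integral = sum(posterior(p, n, x) for p in grid) / m
mean_post = sum(p * posterior(p, n, x) for p in grid) / m

assert abs(integral - 1) < 1e-6                     # a proper density
assert abs(mean_post - (x + 1) / (n + 2)) < 1e-6    # Beta(x+1, n-x+1) mean
```

The posterior mean (x + 1)/(n + 2) is Laplace's rule of succession, a convenient by-product of the uniform prior.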
Now suppose y > x. Then f(p | x) and f(p | y) cross only once, except possibly at p = 0 or 1. To see this, put f(p | x) = f(p | y). If p ≠ 0 or 1, this equality can be written as

(9.5.15)  [(1 − p)/p]^{y−x} = Cⁿᵧ/Cⁿₓ.
The left-hand side of (9.5.15) is 0 if p = 1 and increases monotonically to ∞ as p decreases to 0. Let p* be the solution to (9.5.15) such that p* ≠ 0 or 1, and define h(p) = f(p | y) − f(p | x) and k(p) = f(p | x) − f(p | y). Then h(p) > 0 for p* < p < 1 and k(p) > 0 for 0 < p < p*, and we have

(9.5.16)  ∫_{p*}^{1} U(p) h(p) dp / ∫_{p*}^{1} h(p) dp > ∫₀^{p*} U(p) k(p) dp / ∫₀^{p*} k(p) dp
because the left-hand side is a weighted average of U(p) for p > p* and hence is greater than U(p*), whereas the right-hand side is a weighted average of U(p) for p < p* and hence is smaller than U(p*). Since ∫_{p*}^{1} h(p) dp = ∫₀^{p*} k(p) dp, both f(p | x) and f(p | y) being densities that integrate to one, (9.5.16) is equivalent to

(9.5.17)  ∫₀¹ U(p) f(p | y) dp > ∫₀¹ U(p) f(p | x) dp,

which establishes the result that the left-hand side of (9.5.13) is an increasing function of x. Therefore (9.5.13) is equivalent to
(9.5.18)  x > c,
where c is determined by (9.5.13).
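The whole decision rule can be sketched numerically. The benefit function U(p) = p − 0.6 and the trial count n = 20 below are hypothetical choices for illustration only; the sketch verifies that the left-hand side of (9.5.13) is increasing in x and recovers the cutoff c of (9.5.18):

```python
from math import comb

def posterior(p, n, x):
    """Posterior density (9.5.14) of the cure probability under a uniform prior."""
    return (n + 1) * comb(n, x) * p**x * (1 - p)**(n - x)

def expected_utility(n, x, U, m=20_000):
    """Left-hand side of (9.5.13), approximated by the midpoint rule."""
    return sum(U((i + 0.5) / m) * posterior((i + 0.5) / m, n, x) for i in range(m)) / m

def U(p):
    return p - 0.6      # hypothetical nondecreasing net benefit to society

n = 20                  # hypothetical number of independent trials

utils = [expected_utility(n, x, U) for x in range(n + 1)]
assert all(a < b for a, b in zip(utils, utils[1:]))   # increasing in x, cf. (9.5.17)

c = next(x for x in range(n + 1) if utils[x] > 0)     # the cutoff of (9.5.18)
print("approve the drug when x >=", c)
```

With this linear U, the expected utility is simply E(p | x) − 0.6 = (x + 1)/(n + 2) − 0.6, so the cutoff can also be read off in closed form; the numerical search mimics what one would do for a general nondecreasing U.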
The classical statistician facing this decision problem will first paraphrase it as the problem of testing the hypothesis H₀: p > p₀ versus H₁: p < p₀ for a certain constant p₀ and then use the likelihood ratio test. Her decision rule is of the same form as (9.5.18), except that she will determine c so as to conform to a preassigned size α. If the classical statistician were to approximate the Bayesian decision, she would have to engage in a rather intricate thought process in order to let her p₀ and α reflect the utility consideration.