A Test of Structural Change when Variances Are Equal

Suppose we have two regression regimes
$y_1 = X_1\beta_1 + u_1$   (1.5.23)
and
$y_2 = X_2\beta_2 + u_2$,   (1.5.24)
where the vectors and matrices in (1.5.23) have $T_1$ rows and those in (1.5.24) $T_2$ rows, $X_1$ is a $T_1 \times K^*$ matrix and $X_2$ is a $T_2 \times K^*$ matrix, and $u_1$ and $u_2$ are normally distributed with zero means and variance-covariance matrix

$E\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}(u_1', u_2') = \begin{pmatrix} \sigma_1^2 I & 0 \\ 0 & \sigma_2^2 I \end{pmatrix}.$
We assume that both $X_1$ and $X_2$ have rank equal to $K^*$. We want to test the null hypothesis $\beta_1 = \beta_2$, assuming $\sigma_1^2 = \sigma_2^2$ ($= \sigma^2$) in the present section and $\sigma_1^2 \ne \sigma_2^2$ in the next section. This test is especially important in econometric time series because the econometrician often suspects the occurrence of a structural change from one era to another (say, from the prewar era to the postwar era), a change that manifests itself in the regression parameters. When $\sigma_1^2 = \sigma_2^2$, this test can be handled as a special case of the standard F test presented in the preceding section.
To apply the F test to the problem, combine Eqs. (1.5.23) and (1.5.24) as
$y = X\beta + u$,   (1.5.25)

where

$y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}, \quad X = \begin{pmatrix} X_1 & 0 \\ 0 & X_2 \end{pmatrix}, \quad \beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}, \quad u = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}.$
Then, since $\sigma_1^2 = \sigma_2^2$ ($= \sigma^2$), (1.5.25) is the same as Model 1 with normality; hence we can represent our hypothesis $\beta_1 = \beta_2$ as a standard linear hypothesis on Model 1 with normality by putting $T = T_1 + T_2$, $K = 2K^*$, $q = K^*$, $Q' = (I, -I)$, and $c = 0$. Inserting these values into (1.5.12) yields the test statistic
$\eta = \dfrac{T_1 + T_2 - 2K^*}{K^*} \cdot \dfrac{(\hat\beta_1 - \hat\beta_2)'[(X_1'X_1)^{-1} + (X_2'X_2)^{-1}]^{-1}(\hat\beta_1 - \hat\beta_2)}{y'[I - X(X'X)^{-1}X']y} \sim F(K^*,\, T_1 + T_2 - 2K^*),$   (1.5.26)

where $\hat\beta_1 = (X_1'X_1)^{-1}X_1'y_1$ and $\hat\beta_2 = (X_2'X_2)^{-1}X_2'y_2$.
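As a purely numerical illustration of (1.5.26), the following sketch computes the statistic with NumPy on simulated data. The function name chow_f and all arrays below are hypothetical, not from the text; only the formula itself comes from (1.5.26).

```python
import numpy as np
from scipy import stats

def chow_f(y1, X1, y2, X2):
    """Chow test statistic of Eq. (1.5.26) for H0: beta1 = beta2,
    assuming sigma1^2 = sigma2^2."""
    T1, Ks = X1.shape
    T2, _ = X2.shape
    b1 = np.linalg.solve(X1.T @ X1, X1.T @ y1)        # beta1-hat
    b2 = np.linalg.solve(X2.T @ X2, X2.T @ y2)        # beta2-hat
    d = b1 - b2
    # Middle matrix [(X1'X1)^{-1} + (X2'X2)^{-1}]^{-1}
    V = np.linalg.inv(np.linalg.inv(X1.T @ X1) + np.linalg.inv(X2.T @ X2))
    # Unrestricted residual sum of squares  y'[I - X(X'X)^{-1}X']y
    e1 = y1 - X1 @ b1
    e2 = y2 - X2 @ b2
    ssr = e1 @ e1 + e2 @ e2
    df = T1 + T2 - 2 * Ks
    eta = (df / Ks) * (d @ V @ d) / ssr
    return eta, stats.f.sf(eta, Ks, df)               # statistic and p-value

# Simulated example (hypothetical data, not from the text)
rng = np.random.default_rng(0)
T1, T2, Ks = 40, 60, 3
X1 = rng.normal(size=(T1, Ks))
X2 = rng.normal(size=(T2, Ks))
beta = np.array([1.0, -0.5, 2.0])
y1 = X1 @ beta + rng.normal(size=T1)
y2 = X2 @ beta + rng.normal(size=T2)                  # same beta: H0 is true here
eta, pval = chow_f(y1, X1, y2, X2)
print(eta, pval)
```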
We shall now give an alternative derivation of (1.5.26). In (1.5.25) we combined Eqs. (1.5.23) and (1.5.24) without making use of the hypothesis $\beta_1 = \beta_2$. If we make use of it, we can combine the two equations as
$y = \bar{X}\beta + u$,   (1.5.27)

where we have defined $\bar{X} = (X_1', X_2')'$ and $\beta = \beta_1 = \beta_2$. Let $S(\hat\beta)$ be the sum of squared residuals from (1.5.25), that is,
$S(\hat\beta) = y'[I - X(X'X)^{-1}X']y$   (1.5.28)
and let $S(\bar\beta)$ be the sum of squared residuals from (1.5.27), that is,

$S(\bar\beta) = y'[I - \bar{X}(\bar{X}'\bar{X})^{-1}\bar{X}']y.$   (1.5.29)
Then, using (1.5.9), we have

$\eta = \dfrac{T_1 + T_2 - 2K^*}{K^*} \cdot \dfrac{S(\bar\beta) - S(\hat\beta)}{S(\hat\beta)} \sim F(K^*,\, T_1 + T_2 - 2K^*).$   (1.5.30)
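The same value can be obtained from the restricted and unrestricted sums of squared residuals, as (1.5.30) indicates. A minimal sketch, continuing the simulated-data example above (all variable names are mine):

```python
# Restricted SSR: pool the two samples and impose beta1 = beta2, as in (1.5.27)
Xbar = np.vstack([X1, X2])                                  # X-bar = (X1', X2')'
y = np.concatenate([y1, y2])
b_bar = np.linalg.solve(Xbar.T @ Xbar, Xbar.T @ y)
S_r = np.sum((y - Xbar @ b_bar) ** 2)                       # S(beta-bar), Eq. (1.5.29)

# Unrestricted SSR: separate regressions, as in (1.5.28)
b1 = np.linalg.solve(X1.T @ X1, X1.T @ y1)
b2 = np.linalg.solve(X2.T @ X2, X2.T @ y2)
S_u = np.sum((y1 - X1 @ b1) ** 2) + np.sum((y2 - X2 @ b2) ** 2)   # S(beta-hat)

df = T1 + T2 - 2 * Ks
eta_ssr = (df / Ks) * (S_r - S_u) / S_u                     # Eq. (1.5.30)
print(np.isclose(eta, eta_ssr))                             # agrees with (1.5.26)
```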
To show the equivalence of (1.5.26) and (1.5.30), we must show

$S(\bar\beta) - S(\hat\beta) = (\hat\beta_1 - \hat\beta_2)'[(X_1'X_1)^{-1} + (X_2'X_2)^{-1}]^{-1}(\hat\beta_1 - \hat\beta_2).$   (1.5.31)
From (1.5.29) we have

$S(\bar\beta) = y'y - y'\bar{X}(\bar{X}'\bar{X})^{-1}\bar{X}'y = y_1'y_1 + y_2'y_2 - (y_1'X_1 + y_2'X_2)(X_1'X_1 + X_2'X_2)^{-1}(X_1'y_1 + X_2'y_2),$   (1.5.32)

and from (1.5.28) we have

$S(\hat\beta) = y_1'[I - X_1(X_1'X_1)^{-1}X_1']y_1 + y_2'[I - X_2(X_2'X_2)^{-1}X_2']y_2.$   (1.5.33)

Therefore, from (1.5.32) and (1.5.33) we get
$S(\bar\beta) - S(\hat\beta)$   (1.5.34)

$= (y_1'X_1,\ y_2'X_2)\begin{pmatrix} (X_1'X_1)^{-1} - (X_1'X_1 + X_2'X_2)^{-1} & -(X_1'X_1 + X_2'X_2)^{-1} \\ -(X_1'X_1 + X_2'X_2)^{-1} & (X_2'X_2)^{-1} - (X_1'X_1 + X_2'X_2)^{-1} \end{pmatrix}\begin{pmatrix} X_1'y_1 \\ X_2'y_2 \end{pmatrix}$

$= (y_1'X_1,\ y_2'X_2)\begin{pmatrix} (X_1'X_1)^{-1} \\ -(X_2'X_2)^{-1} \end{pmatrix}[(X_1'X_1)^{-1} + (X_2'X_2)^{-1}]^{-1}\big((X_1'X_1)^{-1},\ -(X_2'X_2)^{-1}\big)\begin{pmatrix} X_1'y_1 \\ X_2'y_2 \end{pmatrix},$
the last line of which is equal to the right-hand side of (1.5.31). The last equality of (1.5.34) follows from Theorem 19 of Appendix 1.
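The matrix identity invoked in the last step of (1.5.34) (Theorem 19 of Appendix 1) can also be checked numerically. The sketch below, continuing the simulated example, builds the partitioned matrix and its factored form and confirms that they agree (variable names are mine):

```python
A = X1.T @ X1                      # X1'X1
B = X2.T @ X2                      # X2'X2
Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
ABi = np.linalg.inv(A + B)         # (X1'X1 + X2'X2)^{-1}

# Left-hand side: the partitioned matrix appearing in (1.5.34)
lhs = np.block([[Ai - ABi, -ABi],
                [-ABi,      Bi - ABi]])

# Right-hand side: the factored form [A^{-1}; -B^{-1}] (A^{-1}+B^{-1})^{-1} (A^{-1}, -B^{-1})
L = np.vstack([Ai, -Bi])
rhs = L @ np.linalg.inv(Ai + Bi) @ L.T
print(np.allclose(lhs, rhs))       # True
```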
The hypothesis $\beta_1 = \beta_2$ is merely one of many linear hypotheses we can impose on the $\beta$ of the model (1.5.25). For instance, we might want to test the equality of a subset of $\beta_1$ with the corresponding subset of $\beta_2$. If the subset consists of the first $K_1^*$ elements of both $\beta_1$ and $\beta_2$, we should put $T = T_1 + T_2$ and $K = 2K^*$ as before, but $q = K_1^*$, $Q' = (I, 0, -I, 0)$, and $c = 0$ in the formula (1.5.12).
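For concreteness, here is a sketch of the selection matrix $Q' = (I, 0, -I, 0)$ and of the F statistic it enters, assuming (1.5.12) takes the usual form $\eta = (Q'\hat\beta - c)'[Q'(X'X)^{-1}Q]^{-1}(Q'\hat\beta - c)/(q\hat\sigma^2)$ with $\hat\sigma^2 = S(\hat\beta)/(T - K)$. The function names are hypothetical, and the snippet continues the simulated example above.

```python
def subset_Q(Ks, K1):
    """Q' = (I, 0, -I, 0): contrasts the first K1 elements of beta1
    with the first K1 elements of beta2 in the stacked beta of (1.5.25)."""
    Qt = np.zeros((K1, 2 * Ks))
    Qt[:, :K1] = np.eye(K1)
    Qt[:, Ks:Ks + K1] = -np.eye(K1)
    return Qt

def linear_hypothesis_f(y, X, Qt, c):
    """F statistic for H0: Q'beta = c (standard form assumed for (1.5.12))."""
    T, K = X.shape
    q = Qt.shape[0]
    b = np.linalg.solve(X.T @ X, X.T @ y)
    s2 = np.sum((y - X @ b) ** 2) / (T - K)                 # sigma-hat squared
    r = Qt @ b - c
    middle = Qt @ np.linalg.inv(X.T @ X) @ Qt.T
    return r @ np.linalg.solve(middle, r) / (q * s2)        # ~ F(q, T-K) under H0

# The stacked model (1.5.25): X is block diagonal
X = np.block([[X1, np.zeros((T1, Ks))],
              [np.zeros((T2, Ks)), X2]])
y = np.concatenate([y1, y2])
# With K1 = K*, Q' = (I, -I) and the statistic reproduces (1.5.26)
print(linear_hypothesis_f(y, X, subset_Q(Ks, Ks), np.zeros(Ks)))
```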
If, however, we wish to test the equality of a single element of $\beta_1$ with the corresponding element of $\beta_2$, we should use the t test rather than the F test for the reason given in Section 1.5.2. Suppose the null hypothesis is $\beta_{1i} = \beta_{2i}$, where these are the $i$th elements of $\beta_1$ and $\beta_2$, respectively. Let the $i$th columns of $X_1$ and $X_2$ be denoted by $x_{1i}$ and $x_{2i}$, and let $X_{1(i)}$ and $X_{2(i)}$ consist of the remaining $K^* - 1$ columns of $X_1$ and $X_2$, respectively. Define $M_{1(i)} = I - X_{1(i)}(X_{1(i)}'X_{1(i)})^{-1}X_{1(i)}'$, $\tilde{x}_{1i} = M_{1(i)}x_{1i}$, and $\tilde{y}_1 = M_{1(i)}y_1$, and similarly define $M_{2(i)}$, $\tilde{x}_{2i}$, and $\tilde{y}_2$. Then, using Eqs. (1.2.12) and (1.2.13), we have

$\hat\beta_{1i} = (\tilde{x}_{1i}'\tilde{x}_{1i})^{-1}\tilde{x}_{1i}'\tilde{y}_1 \sim N\big(\beta_{1i},\ \sigma_1^2(\tilde{x}_{1i}'\tilde{x}_{1i})^{-1}\big)$   (1.5.35)

and

$\hat\beta_{2i} = (\tilde{x}_{2i}'\tilde{x}_{2i})^{-1}\tilde{x}_{2i}'\tilde{y}_2 \sim N\big(\beta_{2i},\ \sigma_2^2(\tilde{x}_{2i}'\tilde{x}_{2i})^{-1}\big).$   (1.5.36)
Therefore, under the null hypothesis,
$\dfrac{\hat\beta_{1i} - \hat\beta_{2i}}{\big[\sigma_1^2(\tilde{x}_{1i}'\tilde{x}_{1i})^{-1} + \sigma_2^2(\tilde{x}_{2i}'\tilde{x}_{2i})^{-1}\big]^{1/2}} \sim N(0, 1).$   (1.5.37)
Also, by Theorem 2 of Appendix 2,

$\dfrac{y_1'M_1y_1}{\sigma_1^2} + \dfrac{y_2'M_2y_2}{\sigma_2^2} \sim \chi^2_{T_1 + T_2 - 2K^*},$   (1.5.38)

where $M_1 = I - X_1(X_1'X_1)^{-1}X_1'$ and $M_2 = I - X_2(X_2'X_2)^{-1}X_2'$. Because
(1.5.37) and (1.5.38) are independent, we have by Theorem 3 of Appendix 2
$\dfrac{\hat\beta_{1i} - \hat\beta_{2i}}{\big[\sigma_1^2(\tilde{x}_{1i}'\tilde{x}_{1i})^{-1} + \sigma_2^2(\tilde{x}_{2i}'\tilde{x}_{2i})^{-1}\big]^{1/2}\left[\dfrac{y_1'M_1y_1/\sigma_1^2 + y_2'M_2y_2/\sigma_2^2}{T_1 + T_2 - 2K^*}\right]^{1/2}} \sim t_{T_1 + T_2 - 2K^*}.$   (1.5.39)
Putting $\sigma_1^2 = \sigma_2^2$ ($= \sigma^2$) in (1.5.39) simplifies it to

$\dfrac{\hat\beta_{1i} - \hat\beta_{2i}}{\big[\hat\sigma^2(\tilde{x}_{1i}'\tilde{x}_{1i})^{-1} + \hat\sigma^2(\tilde{x}_{2i}'\tilde{x}_{2i})^{-1}\big]^{1/2}} \sim t_{T_1 + T_2 - 2K^*},$   (1.5.40)
where $\hat\sigma^2$ is the unbiased estimate of $\sigma^2$ obtained from the model (1.5.25), that is,

$\hat\sigma^2 = \dfrac{y_1'M_1y_1 + y_2'M_2y_2}{T_1 + T_2 - 2K^*}.$
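A sketch of the t test (1.5.40), again continuing the simulated example; the residualizing step follows the definitions of $\tilde{x}$ and $\tilde{y}$ given before (1.5.35), and the function name single_coef_t is mine.

```python
def single_coef_t(y1, X1, y2, X2, i):
    """t statistic of Eq. (1.5.40) for H0: beta_{1i} = beta_{2i},
    assuming sigma1^2 = sigma2^2."""
    def tilde(X, y, i):
        Xi = np.delete(X, i, axis=1)                     # X_(i): remaining K*-1 columns
        M = np.eye(len(y)) - Xi @ np.linalg.solve(Xi.T @ Xi, Xi.T)
        return M @ X[:, i], M @ y                        # x-tilde, y-tilde
    x1t, y1t = tilde(X1, y1, i)
    x2t, y2t = tilde(X2, y2, i)
    b1i = (x1t @ y1t) / (x1t @ x1t)                      # Eq. (1.5.35)
    b2i = (x2t @ y2t) / (x2t @ x2t)                      # Eq. (1.5.36)
    T1, Ks = X1.shape
    T2, _ = X2.shape
    df = T1 + T2 - 2 * Ks
    # sigma-hat^2 from the unrestricted model (1.5.25)
    e1 = y1 - X1 @ np.linalg.solve(X1.T @ X1, X1.T @ y1)
    e2 = y2 - X2 @ np.linalg.solve(X2.T @ X2, X2.T @ y2)
    s2 = (e1 @ e1 + e2 @ e2) / df
    t = (b1i - b2i) / np.sqrt(s2 * (1 / (x1t @ x1t) + 1 / (x2t @ x2t)))
    return t, df                                         # compare with t_{T1+T2-2K*}

t_stat, df = single_coef_t(y1, X1, y2, X2, i=0)
print(t_stat, 2 * stats.t.sf(abs(t_stat), df))           # two-sided p-value
```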