Springer Texts in Business and Economics

Seemingly Unrelated Regressions

10.1 When Is OLS as Efficient as Zellner’s SUR?

a. From (10.2), OLS on this system gives

$$
\hat{\beta}_{OLS} = \begin{pmatrix} \hat{\beta}_{1,OLS} \\ \hat{\beta}_{2,OLS} \end{pmatrix}
= \begin{pmatrix} (X_1'X_1)^{-1} & 0 \\ 0 & (X_2'X_2)^{-1} \end{pmatrix}
\begin{pmatrix} X_1'y_1 \\ X_2'y_2 \end{pmatrix}
= \begin{pmatrix} (X_1'X_1)^{-1}X_1'y_1 \\ (X_2'X_2)^{-1}X_2'y_2 \end{pmatrix}.
$$

This is OLS on each equation taken separately. For (10.2), the estimated $\mathrm{var}(\hat{\beta}_{OLS})$ is given by

$$
\widehat{\mathrm{var}}(\hat{\beta}_{OLS}) = s^2 (X'X)^{-1} = s^2\,\mathrm{diag}\big[(X_i'X_i)^{-1}\big]
$$

where $s^2 = RSS/(2T - (K_1 + K_2))$ and $RSS$ denotes the residual sum of squares of this system. In fact, $RSS = e_1'e_1 + e_2'e_2 = RSS_1 + RSS_2$ where

$$
e_i = y_i - X_i\hat{\beta}_{i,OLS} \quad \text{for } i = 1, 2.
$$

If OLS were applied to each equation separately, then

$$
\widehat{\mathrm{var}}(\hat{\beta}_{1,OLS}) = s_1^2 (X_1'X_1)^{-1} \quad \text{with } s_1^2 = RSS_1/(T - K_1)
$$

and

$$
\widehat{\mathrm{var}}(\hat{\beta}_{2,OLS}) = s_2^2 (X_2'X_2)^{-1} \quad \text{with } s_2^2 = RSS_2/(T - K_2).
$$

Therefore, the estimated variance-covariance matrix of OLS from the system of two equations differs from that of OLS on each equation separately only by a scalar: $s^2$ is used rather than $s_i^2$ for $i = 1, 2$.
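The equivalence in part (a) is easy to check numerically. The short SAS/IML sketch below stacks the two equations as in (10.2), runs OLS on the system and on each equation separately, and compares the variance scalars; the sample size, the number of regressors and the simulated data are assumptions used purely for illustration.

proc iml;
  /* A minimal sketch of part (a); T, K1, K2 and all data are assumed for illustration. */
  call randseed(1);
  T = 20;  K1 = 2;  K2 = 3;
  X1 = j(T,K1,.);  call randgen(X1,"Normal");
  X2 = j(T,K2,.);  call randgen(X2,"Normal");
  y1 = j(T,1,.);   call randgen(y1,"Normal");
  y2 = j(T,1,.);   call randgen(y2,"Normal");
  X = (X1 || j(T,K2,0)) // (j(T,K1,0) || X2);   /* X = diag(X1, X2) as in (10.2) */
  y = y1 // y2;
  b_sys = inv(X`*X)*X`*y;                       /* OLS on the stacked system     */
  b1 = inv(X1`*X1)*X1`*y1;                      /* OLS equation by equation      */
  b2 = inv(X2`*X2)*X2`*y2;
  print b_sys (b1//b2);                         /* identical point estimates     */
  s2_sys = ssq(y - X*b_sys)/(2*T - K1 - K2);    /* s^2 of the stacked system     */
  s2_1   = ssq(y1 - X1*b1)/(T - K1);            /* s1^2 from equation 1          */
  s2_2   = ssq(y2 - X2*b2)/(T - K2);            /* s2^2 from equation 2          */
  print s2_sys s2_1 s2_2;                       /* only these scalars differ     */
quit;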


b. For the system of equations given in (10.2), $X = \mathrm{diag}[X_i]$, $\Omega^{-1} = \Sigma^{-1} \otimes I_T$ and

$$
X'\Omega^{-1} = \begin{pmatrix} X_1' & 0 \\ 0 & X_2' \end{pmatrix}
\begin{pmatrix} \sigma^{11}I_T & \sigma^{12}I_T \\ \sigma^{21}I_T & \sigma^{22}I_T \end{pmatrix}
= \begin{pmatrix} \sigma^{11}X_1' & \sigma^{12}X_1' \\ \sigma^{21}X_2' & \sigma^{22}X_2' \end{pmatrix}.
$$

Also, $X'X = \mathrm{diag}[X_i'X_i]$ with $(X'X)^{-1} = \mathrm{diag}[(X_i'X_i)^{-1}]$. Therefore, $P_X = \mathrm{diag}[P_{X_i}]$ and $\bar{P}_X = \mathrm{diag}[\bar{P}_{X_i}]$.

Hence,

$$
X'\Omega^{-1}\bar{P}_X = \begin{pmatrix} \sigma^{11}X_1'\bar{P}_{X_1} & \sigma^{12}X_1'\bar{P}_{X_2} \\ \sigma^{21}X_2'\bar{P}_{X_1} & \sigma^{22}X_2'\bar{P}_{X_2} \end{pmatrix}.
$$

But $X_i'\bar{P}_{X_i} = 0$ for $i = 1, 2$. Hence,

$$
X'\Omega^{-1}\bar{P}_X = \begin{pmatrix} 0 & \sigma^{12}X_1'\bar{P}_{X_2} \\ \sigma^{21}X_2'\bar{P}_{X_1} & 0 \end{pmatrix}
$$

and this is zero if $\sigma^{ij}X_i'\bar{P}_{X_j} = 0$ for $i \neq j$ with $i, j = 1, 2$.

c. (i) If $\sigma_{ij} = 0$ for $i \neq j$, then $\Sigma$ is diagonal. Hence, $\Sigma^{-1}$ is diagonal and $\sigma^{ij} = 0$ for $i \neq j$. This automatically gives $\sigma^{ij}X_i'\bar{P}_{X_j} = 0$ for $i \neq j$ from part (b).

(ii) If all the $X_i$'s are the same, then $X_1 = X_2 = X^*$ and $P_{X_1} = P_{X_2} = P_{X^*}$. Hence,

$$
X_i'\bar{P}_{X_j} = X^{*\prime}\bar{P}_{X^*} = 0 \quad \text{for } i, j = 1, 2.
$$

This means that $\sigma^{ij}X_i'\bar{P}_{X_j} = 0$ from part (b) is satisfied for $i, j = 1, 2$.

d. If $X_i = X_jC$ where $C$ is a non-singular matrix, then $X_i'\bar{P}_{X_j} = C'X_j'\bar{P}_{X_j} = 0$. Alternatively, $X_i'X_i = C'X_j'X_jC$ and $(X_i'X_i)^{-1} = C^{-1}(X_j'X_j)^{-1}C'^{-1}$. Therefore,

$$
P_{X_i} = X_i(X_i'X_i)^{-1}X_i' = X_jC\,C^{-1}(X_j'X_j)^{-1}C'^{-1}C'X_j' = P_{X_j} \quad \text{and} \quad \bar{P}_{X_i} = \bar{P}_{X_j}.
$$

Hence, $X_i'\bar{P}_{X_j} = X_i'\bar{P}_{X_i} = 0$. Note that when $X_i = X_j$ then $C = I_K$ where $K$ is the number of regressors.

10.3 What Happens to Zellner's SUR Estimator When the Set of Regressors in One Equation Are Orthogonal to Those in the Second Equation?

a. If $X_1$ and $X_2$ are orthogonal, then $X_1'X_2 = 0$. From (10.6) we get

$$
\hat{\beta}_{GLS} = \begin{pmatrix} \sigma^{11}X_1'X_1 & 0 \\ 0 & \sigma^{22}X_2'X_2 \end{pmatrix}^{-1}
\begin{pmatrix} \sigma^{11}X_1'y_1 + \sigma^{12}X_1'y_2 \\ \sigma^{21}X_2'y_1 + \sigma^{22}X_2'y_2 \end{pmatrix}
= \begin{pmatrix} (X_1'X_1)^{-1}X_1'y_1 + \sigma^{12}(X_1'X_1)^{-1}X_1'y_2/\sigma^{11} \\ \sigma^{21}(X_2'X_2)^{-1}X_2'y_1/\sigma^{22} + (X_2'X_2)^{-1}X_2'y_2 \end{pmatrix}
$$

$$
= \begin{pmatrix} \hat{\beta}_{1,OLS} + (\sigma^{12}/\sigma^{11})(X_1'X_1)^{-1}X_1'y_2 \\ \hat{\beta}_{2,OLS} + (\sigma^{21}/\sigma^{22})(X_2'X_2)^{-1}X_2'y_1 \end{pmatrix}
$$

as required.

b. Since the matrix to be inverted in part (a) is block-diagonal,

$$
\mathrm{var}(\hat{\beta}_{GLS}) = \begin{pmatrix} \sigma^{11}X_1'X_1 & 0 \\ 0 & \sigma^{22}X_2'X_2 \end{pmatrix}^{-1}
= \begin{pmatrix} (X_1'X_1)^{-1}/\sigma^{11} & 0 \\ 0 & (X_2'X_2)^{-1}/\sigma^{22} \end{pmatrix}.
$$

If you have doubts, note that

$$
\hat{\beta}_{1,GLS} = \beta_1 + (X_1'X_1)^{-1}X_1'u_1 + (\sigma^{12}/\sigma^{11})(X_1'X_1)^{-1}X_1'u_2
$$

using $X_1'X_2 = 0$. Hence, $E(\hat{\beta}_{1,GLS}) = \beta_1$ and

$$
\mathrm{var}(\hat{\beta}_{1,GLS}) = E(\hat{\beta}_{1,GLS} - \beta_1)(\hat{\beta}_{1,GLS} - \beta_1)'
= \sigma_{11}(X_1'X_1)^{-1} + \sigma_{22}(\sigma^{12}/\sigma^{11})^2(X_1'X_1)^{-1} + 2\sigma_{12}(\sigma^{12}/\sigma^{11})(X_1'X_1)^{-1}.
$$

Since

$$
\Sigma^{-1} = \frac{1}{\sigma_{11}\sigma_{22} - \sigma_{12}^2}\begin{pmatrix} \sigma_{22} & -\sigma_{12} \\ -\sigma_{12} & \sigma_{11} \end{pmatrix},
$$

we have $\sigma^{12}/\sigma^{11} = -\sigma_{12}/\sigma_{22}$. Substituting that in $\mathrm{var}(\hat{\beta}_{1,GLS})$ we get

$$
\mathrm{var}(\hat{\beta}_{1,GLS}) = \sigma_{11}(X_1'X_1)^{-1} + (\sigma_{12}^2/\sigma_{22})(X_1'X_1)^{-1} - 2(\sigma_{12}^2/\sigma_{22})(X_1'X_1)^{-1}
= (X_1'X_1)^{-1}\big[\sigma_{11}\sigma_{22} - \sigma_{12}^2\big]/\sigma_{22} = (X_1'X_1)^{-1}/\sigma^{11}
$$

since $\sigma^{11} = \sigma_{22}/(\sigma_{11}\sigma_{22} - \sigma_{12}^2)$. Similarly, one can show that $\mathrm{var}(\hat{\beta}_{2,GLS}) = (X_2'X_2)^{-1}/\sigma^{22}$.

c. We know that $\mathrm{var}(\hat{\beta}_{1,OLS}) = \sigma_{11}(X_1'X_1)^{-1}$ and $\mathrm{var}(\hat{\beta}_{2,OLS}) = \sigma_{22}(X_2'X_2)^{-1}$. If $X_1$ and $X_2$ are single regressors, then $(X_i'X_i)$ are scalars for $i = 1, 2$. Hence, using part (b),

$$
\frac{\mathrm{var}(\hat{\beta}_{1,GLS})}{\mathrm{var}(\hat{\beta}_{1,OLS})} = \frac{1}{\sigma_{11}\sigma^{11}} = \frac{\sigma_{11}\sigma_{22} - \sigma_{12}^2}{\sigma_{11}\sigma_{22}} = 1 - \rho^2
$$

where $\rho = \sigma_{12}/(\sigma_{11}\sigma_{22})^{1/2}$, and similarly for $\hat{\beta}_{2,GLS}$ relative to $\hat{\beta}_{2,OLS}$.

10.4 From (10.13), $s_{ij} = e_i'e_j/[T - K_i - K_j + \mathrm{tr}(B)]$ for $i, j = 1, 2$, where $e_i = y_i - X_i\hat{\beta}_{i,OLS}$ for $i = 1, 2$. This can be rewritten as $e_i = \bar{P}_{X_i}y_i = \bar{P}_{X_i}u_i$ for $i = 1, 2$. Hence,

$$
E(e_i'e_j) = E(u_i'\bar{P}_{X_i}\bar{P}_{X_j}u_j) = E[\mathrm{tr}(u_i'\bar{P}_{X_i}\bar{P}_{X_j}u_j)] = E[\mathrm{tr}(u_ju_i'\bar{P}_{X_i}\bar{P}_{X_j})]
= \mathrm{tr}[E(u_ju_i')\bar{P}_{X_i}\bar{P}_{X_j}] = \sigma_{ji}\,\mathrm{tr}(\bar{P}_{X_i}\bar{P}_{X_j}).
$$

But $\sigma_{ji} = \sigma_{ij}$ and $\mathrm{tr}(\bar{P}_{X_i}\bar{P}_{X_j}) = \mathrm{tr}(I_T - P_{X_j} - P_{X_i} + P_{X_i}P_{X_j}) = T - K_j - K_i + \mathrm{tr}(B)$ where $B = P_{X_i}P_{X_j}$. Hence, $E(s_{ij}) = E(e_i'e_j)/\mathrm{tr}(\bar{P}_{X_i}\bar{P}_{X_j}) = \sigma_{ij}$. This proves that $s_{ij}$ is unbiased for $\sigma_{ij}$ for $i, j = 1, 2$.

10.5 Relative Efficiency of OLS in the Case of Simple Regressions. This is based on Kmenta (1986, pp. 641-643).

a. Using the results in Chap. 3, we know that for a simple regression $Y_i = \alpha + \beta X_i + u_i$, $\hat{\beta}_{OLS} = \sum x_iY_i/\sum x_i^2$ and $\mathrm{var}(\hat{\beta}_{OLS}) = \sigma^2/\sum x_i^2$, where $x_i = X_i - \bar{X}$ and $\mathrm{var}(u_i) = \sigma^2$. Hence, for the first equation in (10.15), we get $\mathrm{var}(\hat{\beta}_{12,OLS}) = \sigma_{11}/m_{x_1x_1}$ where $m_{x_1x_1} = \sum_{t=1}^T (X_{1t} - \bar{X}_1)^2$ and $\sigma_{11} = \mathrm{var}(u_1)$. Similarly, for the second equation in (10.15), we get $\mathrm{var}(\hat{\beta}_{22,OLS}) = \sigma_{22}/m_{x_2x_2}$ where $m_{x_2x_2} = \sum_{t=1}^T (X_{2t} - \bar{X}_2)^2$ and $\sigma_{22} = \mathrm{var}(u_2)$.

b. Transforming both equations in (10.15) into deviations from their means, i.e., premultiplying by $(I_T - \bar{J}_T)$, the orthogonal projection on the constant $\iota_T$ (see problem 7.2), yields

$$
\tilde{Y}_1 = \tilde{X}_1\beta_{12} + \tilde{u}_1, \qquad \tilde{Y}_2 = \tilde{X}_2\beta_{22} + \tilde{u}_2
$$

where $\tilde{Y}_i = (I_T - \bar{J}_T)Y_i$, $\tilde{X}_i = (I_T - \bar{J}_T)X_i$ and $\tilde{u}_i = (I_T - \bar{J}_T)u_i$ for $i = 1, 2$. Note that $Y_i' = (Y_{i1}, .., Y_{iT})$, $X_i' = (X_{i1}, .., X_{iT})$ and $u_i' = (u_{i1}, .., u_{iT})$ for $i = 1, 2$. GLS on this system of two equations yields the same GLS estimators of $\beta_{12}$ and $\beta_{22}$ as in (10.15). Note that

$$
\tilde{\Omega} = E\begin{pmatrix} \tilde{u}_1 \\ \tilde{u}_2 \end{pmatrix}(\tilde{u}_1', \tilde{u}_2') = \Sigma \otimes (I_T - \bar{J}_T) \quad \text{where } \Sigma = [\sigma_{ij}] \text{ for } i, j = 1, 2.
$$

Also, for this transformed system

$$
\tilde{X}'\tilde{\Omega}^{-}\tilde{X} = \begin{pmatrix} \sigma^{11}\tilde{X}_1'\tilde{X}_1 & \sigma^{12}\tilde{X}_1'\tilde{X}_2 \\ \sigma^{21}\tilde{X}_2'\tilde{X}_1 & \sigma^{22}\tilde{X}_2'\tilde{X}_2 \end{pmatrix}
$$

since $\Sigma^{-1} \otimes (I_T - \bar{J}_T)$ is the generalized inverse of $\Sigma \otimes (I_T - \bar{J}_T)$ and $(I_T - \bar{J}_T)\tilde{X}_i = \tilde{X}_i$ for $i = 1, 2$. But $\tilde{X}_i$ and $\tilde{X}_j$ are $T \times 1$ vectors, hence $\tilde{X}_i'\tilde{X}_j = m_{x_ix_j} = \sum_{t=1}^T (X_{it} - \bar{X}_i)(X_{jt} - \bar{X}_j)$ for $i, j = 1, 2$. So

$$
\tilde{X}'\tilde{\Omega}^{-}\tilde{X} = \begin{pmatrix} \sigma^{11}m_{x_1x_1} & \sigma^{12}m_{x_1x_2} \\ \sigma^{21}m_{x_1x_2} & \sigma^{22}m_{x_2x_2} \end{pmatrix}.
$$

Simple inversion of this $2 \times 2$ matrix yields

$$
\mathrm{var}(\hat{\beta}_{12,GLS}) = (\sigma_{11}\sigma_{22} - \sigma_{12}^2)\,\sigma_{11}m_{x_2x_2}/\big[\sigma_{11}\sigma_{22}m_{x_1x_1}m_{x_2x_2} - \sigma_{12}^2m_{x_1x_2}^2\big],
$$

where the denominator is proportional to the determinant of the matrix to be inverted. Also,

$$
\mathrm{var}(\hat{\beta}_{22,GLS}) = (\sigma_{11}\sigma_{22} - \sigma_{12}^2)\,\sigma_{22}m_{x_1x_1}/\big[\sigma_{11}\sigma_{22}m_{x_1x_1}m_{x_2x_2} - \sigma_{12}^2m_{x_1x_2}^2\big].
$$

c. Defining $\rho = \mathrm{correlation}(u_1, u_2) = \sigma_{12}/(\sigma_{11}\sigma_{22})^{1/2}$ and $r$ = the sample correlation coefficient between $X_1$ and $X_2$ = $m_{x_1x_2}/(m_{x_1x_1}m_{x_2x_2})^{1/2}$, then

$$
\mathrm{var}(\hat{\beta}_{12,GLS})/\mathrm{var}(\hat{\beta}_{12,OLS})
= (\sigma_{11}\sigma_{22} - \sigma_{12}^2)m_{x_1x_1}m_{x_2x_2}/\big[\sigma_{11}\sigma_{22}m_{x_1x_1}m_{x_2x_2} - \sigma_{12}^2m_{x_1x_2}^2\big]
= \frac{\sigma_{11}\sigma_{22}(1 - \rho^2)}{\sigma_{11}\sigma_{22}(1 - \rho^2r^2)} = \frac{1 - \rho^2}{1 - \rho^2r^2}.
$$

Similarly,

$$
\mathrm{var}(\hat{\beta}_{22,GLS})/\mathrm{var}(\hat{\beta}_{22,OLS})
= (\sigma_{11}\sigma_{22} - \sigma_{12}^2)m_{x_1x_1}m_{x_2x_2}/\big[\sigma_{11}\sigma_{22}m_{x_1x_1}m_{x_2x_2} - \sigma_{12}^2m_{x_1x_2}^2\big] = \frac{1 - \rho^2}{1 - \rho^2r^2}.
$$

d. Call the relative efficiency ratio $E = (1 - \rho^2)/(1 - \rho^2r^2)$. Let $\theta = \rho^2$, then $E = (1 - \theta)/(1 - \theta r^2)$, so that

$$
\frac{\partial E}{\partial \theta} = \frac{-(1 - \theta r^2) + r^2(1 - \theta)}{(1 - \theta r^2)^2} = \frac{-1 + \theta r^2 + r^2 - \theta r^2}{(1 - \theta r^2)^2} = \frac{-(1 - r^2)}{(1 - \theta r^2)^2} \leq 0
$$

since $r^2 \leq 1$. Hence, the relative efficiency ratio is a non-increasing function of $\theta = \rho^2$. Similarly, let $\lambda = r^2$, then $E = (1 - \theta)/(1 - \theta\lambda)$ and $\partial E/\partial \lambda = \theta(1 - \theta)/(1 - \theta\lambda)^2 \geq 0$ since $0 \leq \theta \leq 1$. Hence, the relative efficiency ratio is a non-decreasing function of $\lambda = r^2$. This relative efficiency can be computed for various values of $\rho^2$ and $r^2$ between 0 and 1 at 0.1 intervals. See Kmenta (1986, Table 12-1, p. 642).
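The sketch below tabulates this relative efficiency $E = (1 - \rho^2)/(1 - \rho^2r^2)$ over the 0.1 grid mentioned in the text, in the spirit of Kmenta (1986, Table 12-1); it is an illustration of part (d), not a reproduction of that table's layout.

proc iml;
  /* Relative efficiency E = (1 - rho^2)/(1 - rho^2 r^2) of part (d) over a grid. */
  rho2 = do(0, 0.9, 0.1);          /* values of rho-squared */
  r2   = do(0, 0.9, 0.1);          /* values of r-squared   */
  E = j(ncol(rho2), ncol(r2), .);
  do i = 1 to ncol(rho2);
    do j = 1 to ncol(r2);
      E[i,j] = (1 - rho2[i]) / (1 - rho2[i]*r2[j]);
    end;
  end;
  print E[format=6.3];             /* rows: rho^2 = 0,...,0.9; columns: r^2 = 0,...,0.9 */
quit;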

10.6 Relative Efficiency of OLS in the Case of Multiple Regressions. This is based on Binkley and Nelson (1988). Using partitioned inverse formulas, see the solution to problem 7.7(c), we get that the first block of the inverted matrix in (10.17) is

$$
\mathrm{var}(\hat{\beta}_{1,GLS}) = \big[\sigma^{11}X_1'X_1 - \sigma^{12}X_1'X_2(\sigma^{22}X_2'X_2)^{-1}\sigma^{21}X_2'X_1\big]^{-1}
= \big[\sigma^{11}X_1'X_1 - (\sigma^{12}\sigma^{21}/\sigma^{22})X_1'P_{X_2}X_1\big]^{-1}.
$$

Substituting $\sigma^{11} = \sigma_{22}/(\sigma_{11}\sigma_{22} - \sigma_{12}^2)$, $\sigma^{22} = \sigma_{11}/(\sigma_{11}\sigma_{22} - \sigma_{12}^2)$ and $\sigma^{12} = \sigma^{21} = -\sigma_{12}/(\sigma_{11}\sigma_{22} - \sigma_{12}^2)$ gives $\sigma^{12}\sigma^{21}/\sigma^{22} = \rho^2\sigma^{11}$, where $\rho^2 = \sigma_{12}^2/\sigma_{11}\sigma_{22}$. Hence,

$$
\mathrm{var}(\hat{\beta}_{1,GLS}) = (1/\sigma^{11})\big[X_1'X_1 - \rho^2X_1'P_{X_2}X_1\big]^{-1}.
$$

But $1/\sigma^{11} = (\sigma_{11}\sigma_{22} - \sigma_{12}^2)/\sigma_{22} = \sigma_{11}(1 - \rho^2)$, hence

$$
\mathrm{var}(\hat{\beta}_{1,GLS}) = \sigma_{11}(1 - \rho^2)\big[X_1'X_1 - \rho^2X_1'P_{X_2}X_1\big]^{-1}.
$$

Adding and subtracting $\rho^2X_1'X_1$ in the expression to be inverted, one gets

$$
\mathrm{var}(\hat{\beta}_{1,GLS}) = \sigma_{11}(1 - \rho^2)\big[(1 - \rho^2)X_1'X_1 + \rho^2X_1'\bar{P}_{X_2}X_1\big]^{-1}.
$$

Factoring out $(1 - \rho^2)$ in the matrix to be inverted, one gets

$$
\mathrm{var}(\hat{\beta}_{1,GLS}) = \sigma_{11}\big\{X_1'X_1 + [\rho^2/(1 - \rho^2)]E'E\big\}^{-1}
$$

where $E = \bar{P}_{X_2}X_1$ is the matrix whose columns are the OLS residuals of each variable in $X_1$ regressed on $X_2$.
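The equality of the partitioned-inverse form and the $E'E$ form can be checked numerically. In the SAS/IML sketch below, the elements of $\Sigma$, the dimensions and the simulated regressors are all assumptions chosen only for this check.

proc iml;
  /* Check: var(beta1_GLS) = sigma11*{X1'X1 + [rho^2/(1-rho^2)]*E'E}^(-1); inputs assumed. */
  call randseed(7);
  T = 30;  K1 = 2;  K2 = 2;
  X1 = j(T,K1,.);  call randgen(X1,"Normal");
  X2 = j(T,K2,.);  call randgen(X2,"Normal");
  s11 = 2;  s22 = 1;  s12 = 0.8;               /* assumed elements of Sigma */
  rho2 = s12##2/(s11*s22);
  Sinv = inv((s11 || s12)//(s12 || s22));       /* the sigma^{ij} */
  A = (Sinv[1,1]*X1`*X1 || Sinv[1,2]*X1`*X2) //
      (Sinv[2,1]*X2`*X1 || Sinv[2,2]*X2`*X2);
  Ainv = inv(A);
  V1 = Ainv[1:K1, 1:K1];                        /* first block of the inverse, as in (10.17) */
  E  = (i(T) - X2*inv(X2`*X2)*X2`)*X1;          /* residuals of X1 regressed on X2 */
  V2 = s11*inv(X1`*X1 + (rho2/(1-rho2))*E`*E);  /* the E'E form derived above */
  print V1 V2;                                  /* the two matrices coincide */
quit;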

10.7 When $X_1$ and $X_2$ are orthogonal, $X_1'X_2 = 0$, so that

$$
E = \bar{P}_{X_2}X_1 = X_1 - P_{X_2}X_1 = X_1 \quad \text{since } P_{X_2}X_1 = X_2(X_2'X_2)^{-1}X_2'X_1 = 0.
$$

$R_q^2$ is the $R^2$ of the regression of variable $X_q$ on the other $(K_1 - 1)$ regressors in $X_1$, and $R_q^{*2}$ is the $R^2$ of the regression of $e_q$ (the $q$-th column of $E$) on the other $(K_1 - 1)$ columns of $E$, which here coincide with the regressors in $X_1$. Hence, with $E = X_1$, $R_q^2 = R_q^{*2}$ and from (10.22) we get

$$
\mathrm{var}(\hat{\beta}_{q,GLS}) = \sigma_{11}\Big/\Big[\sum_{t=1}^T x_{tq}^2(1 - R_q^2) + \theta^2\sum_{t=1}^T e_{tq}^2(1 - R_q^{*2})\Big]
= \sigma_{11}\Big/\Big[\sum_{t=1}^T x_{tq}^2(1 - R_q^2)(1 + \theta^2)\Big]
= \sigma_{11}(1 - \rho^2)\Big/\sum_{t=1}^T x_{tq}^2(1 - R_q^2)
$$

since $1 + \theta^2 = 1/(1 - \rho^2)$.

10.8 SUR With Unequal Number of Observations. This is based on Schmidt (1977).

a. $\Omega$ given in (10.25) is block-diagonal. Therefore, $\Omega^{-1}$ is block-diagonal:

$$
\Omega^{-1} = \begin{pmatrix} \sigma^{11}I_T & \sigma^{12}I_T & 0 \\ \sigma^{21}I_T & \sigma^{22}I_T & 0 \\ 0 & 0 & \frac{1}{\sigma_{22}}I_N \end{pmatrix}
\quad \text{with } \Sigma^{-1} = [\sigma^{ij}] \text{ for } i, j = 1, 2.
$$

Also,

$$
X = \begin{pmatrix} X_1 & 0 \\ 0 & X_2^* \\ 0 & X_2^o \end{pmatrix} \quad \text{and} \quad y = \begin{pmatrix} y_1 \\ y_2^* \\ y_2^o \end{pmatrix}
$$

where $X_1$ is $T \times K_1$, $X_2^*$ is $T \times K_2$ and $X_2^o$ is $N \times K_2$. Similarly, $y_1$ is $T \times 1$, $y_2^*$ is $T \times 1$ and $y_2^o$ is $N \times 1$.

Hence,

$$
X'\Omega^{-1} = \begin{pmatrix} \sigma^{11}X_1' & \sigma^{12}X_1' & 0 \\ \sigma^{12}X_2^{*\prime} & \sigma^{22}X_2^{*\prime} & \frac{1}{\sigma_{22}}X_2^{o\prime} \end{pmatrix}.
$$

Therefore,

$$
\hat{\beta}_{GLS} = (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}y
= \begin{pmatrix} \sigma^{11}X_1'X_1 & \sigma^{12}X_1'X_2^* \\ \sigma^{12}X_2^{*\prime}X_1 & \sigma^{22}X_2^{*\prime}X_2^* + (X_2^{o\prime}X_2^o/\sigma_{22}) \end{pmatrix}^{-1}
\begin{pmatrix} \sigma^{11}X_1'y_1 + \sigma^{12}X_1'y_2^* \\ \sigma^{12}X_2^{*\prime}y_1 + \sigma^{22}X_2^{*\prime}y_2^* + (X_2^{o\prime}y_2^o/\sigma_{22}) \end{pmatrix}.
$$

b. If $\sigma_{12} = 0$, then from (10.25)

$$
\Omega = \begin{pmatrix} \sigma_{11}I_T & 0 & 0 \\ 0 & \sigma_{22}I_T & 0 \\ 0 & 0 & \sigma_{22}I_N \end{pmatrix}
\quad \text{and} \quad
\Omega^{-1} = \begin{pmatrix} \frac{1}{\sigma_{11}}I_T & 0 & 0 \\ 0 & \frac{1}{\sigma_{22}}I_T & 0 \\ 0 & 0 & \frac{1}{\sigma_{22}}I_N \end{pmatrix}
$$

so that $\sigma^{12} = 0$ and $\sigma^{ii} = 1/\sigma_{ii}$ for $i = 1, 2$. From (10.26)

$$
\hat{\beta}_{GLS} = \begin{pmatrix} X_1'X_1/\sigma_{11} & 0 \\ 0 & (X_2^{*\prime}X_2^* + X_2^{o\prime}X_2^o)/\sigma_{22} \end{pmatrix}^{-1}
\begin{pmatrix} X_1'y_1/\sigma_{11} \\ (X_2^{*\prime}y_2^* + X_2^{o\prime}y_2^o)/\sigma_{22} \end{pmatrix}
= \begin{pmatrix} (X_1'X_1)^{-1}X_1'y_1 \\ (X_2^{*\prime}X_2^* + X_2^{o\prime}X_2^o)^{-1}(X_2^{*\prime}y_2^* + X_2^{o\prime}y_2^o) \end{pmatrix}.
$$

Therefore, SUR with unequal number of observations reduces to OLS on each equation separately if $\sigma_{12} = 0$.

10.9 Grunfeld (1958) investment equation.

a. OLS on each firm.

Firm 1

Autoreg Procedure
Dependent Variable = INVEST
Ordinary Least Squares Estimates

SSE              275298.9      DFE                  16
MSE              17206.18      Root MSE       131.1723
SBC              244.7953      AIC             241.962
Reg Rsq            0.8411      Total Rsq        0.8411
Durbin-Watson      1.3985

Godfrey's Serial Correlation Test

Alternative          LM     Prob>LM
AR(+ 1)          2.6242      0.1052
AR(+ 2)          2.9592      0.2277
AR(+ 3)          3.8468      0.2785

Variable     DF       B Value    Std Error    t Ratio    Approx Prob
Intercept     1    -72.906480        154.0     -0.473         0.6423
VALUE1        1      0.101422       0.0371      2.733         0.0147
CAPITAL1      1      0.465852       0.0623      7.481         0.0001

Covariance of B-Values

               Intercept        VALUE1          CAPITAL1
Intercept   23715.391439    -5.465224899    0.9078247414
VALUE1      -5.465224899    0.0013768903    -0.000726424
CAPITAL1    0.9078247414    -0.000726424    0.0038773611

Firm 2

Autoreg Procedure
Dependent Variable = INVEST
Ordinary Least Squares Estimates

SSE              224688.6      DFE                  16
MSE              14043.04      Root MSE       118.5033
SBC              240.9356      AIC            238.1023
Reg Rsq            0.1238      Total Rsq        0.1238
Durbin-Watson      1.1116

Godfrey's Serial Correlation Test

Alternative          LM     Prob>LM
AR(+ 1)          4.6285      0.0314
AR(+ 2)         10.7095      0.0047
AR(+ 3)         10.8666      0.0125

Variable     DF       B Value    Std Error    t Ratio    Approx Prob
Intercept     1    306.273712        185.2      1.654         0.1177
VALUE1        1      0.015020       0.0913      0.165         0.8713
CAPITAL1      1      0.309876       0.2104      1.473         0.1602

Covariance of B-Values

               Intercept        VALUE1          CAPITAL1
Intercept   34309.129267    -15.87143322    -8.702731269
VALUE1      -15.87143322    0.0083279981    -0.001769902
CAPITAL1    -8.702731269    -0.001769902    0.0442679729

Firm 3

Autoreg Procedure
Dependent Variable = INVEST
Ordinary Least Squares Estimates

SSE              14390.83      DFE                  16
MSE              899.4272      Root MSE       29.99045
SBC              188.7212      AIC            185.8879
Reg Rsq            0.6385      Total Rsq        0.6385
Durbin-Watson      1.2413

Godfrey's Serial Correlation Test

Alternative          LM     Prob>LM
AR(+ 1)          2.7742      0.0958
AR(+ 2)          8.6189      0.0134
AR(+ 3)         11.2541      0.0104

Variable     DF       B Value    Std Error    t Ratio    Approx Prob
Intercept     1    -14.649578      39.6927     -0.369         0.7169
VALUE1        1      0.031508       0.0189      1.665         0.1154
CAPITAL1      1      0.162300       0.0311      5.213         0.0001

Covariance of B-Values

               Intercept        VALUE1          CAPITAL1
Intercept   1575.5078379    -0.706680779    -0.498664487
VALUE1      -0.706680779    0.0003581721      0.00007153
CAPITAL1    -0.498664487      0.00007153    0.0009691442

b. Plot of RESID*YEAR (the residual-versus-year plots for each firm are not reproduced here).

c. SUR model using the first 2 firms

Model: FIRM1
Dependent variable: FIRM1_I

Parameter Estimates

Variable     DF    Parameter Estimate    Standard Error    T for H0: Parameter=0    Prob > |T|
INTERCEP      1            -74.776254        153.974556                    -0.486        0.6338
FIRM1_F1      1              0.101594          0.037101                     2.738        0.0146
FIRM1_C1      1              0.467861          0.062263                     7.514        0.0001

Model: FIRM2
Dependent variable: FIRM2_I

Parameter Estimates

Variable     DF    Parameter Estimate    Standard Error    T for H0: Parameter=0    Prob > |T|
INTERCEP      1            309.978987        185.198638                     1.674        0.1136
FIRM2_F1      1              0.012508          0.091244                     0.137        0.8927
FIRM2_C1      1              0.314339          0.210382                     1.494        0.1546

SUR model using the first 2 firms
SYSLIN Procedure
Seemingly Unrelated Regression Estimation

Sigma               FIRM1             FIRM2
FIRM1        17206.183816      -355.9331435
FIRM2        -355.9331435      14043.039401

Cross Model Correlation

Corr                FIRM1             FIRM2
FIRM1                   1      -0.022897897
FIRM2        -0.022897897                 1

Cross Model Inverse Correlation

Inv Corr            FIRM1             FIRM2
FIRM1        1.0005245887      0.0229099088
FIRM2        0.0229099088      1.0005245887

Cross Model Inverse Covariance

Inv Sigma           FIRM1             FIRM2
FIRM1        0.0000581491      1.4738406E-6
FIRM2        1.4738406E-6       0.000071247

System Weighted MSE: 0.99992 with 32 degrees of freedom.
System Weighted R-Square: 0.7338

d. SUR model using the first three firms

Model: FIRM1
Dependent variable: FIRM1_I

Parameter Estimates

Variable     DF    Parameter Estimate    Standard Error    T for H0: Parameter=0    Prob > |T|
INTERCEP      1            -27.616240        147.621696                    -0.187        0.8540
FIRM1_F1      1              0.088732          0.035366                     2.509        0.0233
FIRM1_C1      1              0.481536          0.061546                     7.824        0.0001

Model: FIRM2
Dependent variable: FIRM2_I

Parameter Estimates

Variable     DF    Parameter Estimate    Standard Error    T for H0: Parameter=0    Prob > |T|
INTERCEP      1            255.523339        167.011628                     1.530        0.1456
FIRM2_F1      1              0.034710          0.081767                     0.425        0.6769
FIRM2_C1      1              0.353757          0.195529                     1.809        0.0892

Model: FIRM3
Dependent variable: FIRM3_I

Parameter Estimates

Variable     DF    Parameter Estimate    Standard Error    T for H0: Parameter=0    Prob > |T|
INTERCEP      1            -27.024792         35.444666                    -0.762        0.4569
FIRM3_F1      1              0.042147          0.016579                     2.542        0.0217
FIRM3_C1      1              0.141415          0.029134                     4.854        0.0002

SUR model using the first 3 firms
SYSLIN Procedure
Seemingly Unrelated Regression Estimation

Cross Model Covariance

Sigma               FIRM1             FIRM2             FIRM3
FIRM1        17206.183816      -355.9331435      1432.3838645
FIRM2        -355.9331435      14043.039401      1857.4410783
FIRM3        1432.3838645      1857.4410783       899.4271732

Cross Model Correlation

Corr                FIRM1             FIRM2             FIRM3
FIRM1                   1      -0.022897897      0.3641112847
FIRM2        -0.022897897                 1      0.5226386059
FIRM3        0.3641112847      0.5226386059                 1

Cross Model Inverse Correlation

Inv Corr            FIRM1             FIRM2             FIRM3
FIRM1        1.2424073467      0.3644181288      -0.642833518
FIRM2        0.3644181288      1.4826915084      -0.907600576
FIRM3        -0.642833518      -0.907600576      1.7084100377


Cross Model Inverse Covariance

Inv Sigma           FIRM1             FIRM2             FIRM3
FIRM1         0.000072207      0.0000234438      -0.000163408
FIRM2        0.0000234438       0.000105582      -0.000255377
FIRM3        -0.000163408      -0.000255377      0.0018994423

System Weighted MSE: 0.95532 with 48 degrees of freedom.
System Weighted R-Square: 0.6685

SAS PROGRAM

data A; infile 'B:/DATA/grunfeld.dat'; input firm year invest value capital;

data aa; set A; keep firm year invest value capital; if firm>3 then delete;

data A1; set aa; keep firm year invest value capital;

if firm>1 then delete;

data AA1; set A1;

value1=lag(value);

capital1=lag(capital);

Proc autoreg data=AA1;
model invest=value1 capital1/ godfrey=3 covb; output out=E1 r=resid; title 'Firm 1';
proc plot data=E1;
plot resid*year='*';
title 'Firm 1'; run;
********************************************************;

data A2; set aa; keep firm year invest value capital; if firm=1 or firm=3 then delete;

data AA2; set A2;

value1=lag(value);

capital1=lag(capital);

Proc autoreg data=AA2;
model invest=value1 capital1/ godfrey=3 covb; output out=E2 r=resid; title 'Firm 2';
proc plot data=E2;
plot resid*year='*';
title 'Firm 2'; run;

data A3; set aa; keep firm year invest value capital;

if firm<=2 then delete;

data AA3; set A3;

value1=lag(value);

capital1=lag(capital);

Proc autoreg data=AA3;
model invest=value1 capital1/ godfrey=3 covb; output out=E3 r=resid; title 'Firm 3';
proc plot data=E3;
plot resid*year='*'; title 'Firm 3'; run;

Proc iml; use aa;

read all into temp;

sur=temp[1:20,3:5]||temp[21:40,3:5]||temp[41:60,3:5];
c={"F1_i" "F1_f" "F1_c" "F2_i" "F2_f" "F2_c" "F3_i" "F3_f" "F3_c"};
create sur_data from sur [colname=c]; append from sur;

data surdata; set sur_data;
firm1_i=f1_i; firm1_f1=lag(f1_f); firm1_c1=lag(f1_c);
firm2_i=f2_i; firm2_f1=lag(f2_f); firm2_c1=lag(f2_c);
firm3_i=f3_i; firm3_f1=lag(f3_f); firm3_c1=lag(f3_c);

proc syslin sur data=surdata;
Firm1: model firm1_i=firm1_f1 firm1_c1;
Firm2: model firm2_i=firm2_f1 firm2_c1;
title 'SUR model using the first 2 firms'; run;

proc syslin sur data=surdata;
Firm1: model firm1_i=firm1_f1 firm1_c1;
Firm2: model firm2_i=firm2_f1 firm2_c1;
Firm3: model firm3_i=firm3_f1 firm3_c1;
title 'SUR model using the first 3 firms'; run;

10.11 Grunfeld (1958) Data-Unequal Observations. The SAS output is given below along with the program.

Ignoring the extra OBS

       BETA      STD_BETA
  48.910297     116.04249
  0.0834951     0.0257922
  0.2419409     0.0654934
  -58.84498     125.53869
  0.1804117     0.0629202
  0.3852054     0.1213539

WILKS METHOD

       BETA      STD_BETA
  47.177574     116.24131
  0.0835601     0.0258332
   0.245064     0.0656327
  -58.93095     135.57919
  0.1803626     0.0679503
  0.3858253     0.1309581

SRIVASTAVA & ZAATAR METHOD

       BETA      STD_BETA
  47.946223     116.04249
  0.0835461     0.0257922
  0.2435492     0.0654934
  -59.66478     135.42233
  0.1808292      0.067874
  0.3851937     0.1309081

HOCKING & SMITH METHOD

       BETA      STD_BETA
  48.769108     116.12029
   0.083533     0.0258131
  0.2419114     0.0655068
  -60.39359     135.24547
  0.1813054     0.0677879
   0.384481     0.1308514

SAS PROGRAM

data AA; infile 'a:/ch10/grunfeld.dat'; input firm year invest value capital;

data AAA; set AA;

keep firm year invest value capital;

if firm>=3 then delete;

Proc IML;

use AAA; read all into temp;

Y1=temp[1:15,3]; Y2=temp[21:40,3]; Y=Y1//Y2; X1=temp[1:15,4:5];X2=temp[21:40,4:5];

N1=15; N2=20; NN=5; K=3;

* SCHMIDT'S FEASIBLE GLS ESTIMATORS;
X1=J(N1,1,1)||X1;

X2=J(N2,1,1)||X2;

X=(X1//J(N2,K,0))||(J(N1,K,0)//X2);

BT1=INV(X1'*X1)*X1'*Y1;

BT2=INV(X2'*X2)*X2'*Y2;

e1=Y1-X1*BT1; e2=Y2-X2*BT2; e2_15=e2[1:N1,];ee=e2[16:N2,];

S11=e1'*e1/N1; S12=e1'*e2_15/N1; S22_15=e2_15'*e2_15/N1;

S22_4=ee'*ee/NN; S22=e2'*e2/N2;

S_12=S12*SQRT(S22/S22_15);
S_11=S11 - (NN/N2)*((S12/S22_15)**(2))*(S22_15-S22_4); S1_2=S12*S22/S22_15; ZERO=J(NN,2*N1,0); OMG1=((S11||S12)//(S12||S22_15))@I(N1);
OMG2=((S11||S12)//(S12||S22))@I(N1);
OMG3=((S11||S_12)//(S_12||S22))@I(N1);
OMG4=((S_11||S1_2)//(S1_2||S22))@I(N1);
OMEGA1=(OMG1//ZERO)||(ZERO'//(S22_15@I(NN)));

OMEGA2=(OMG2//ZERO)||(ZERO'//(S22@I(NN)));

OMEGA3=(OMG3//ZERO)||(ZERO'//(S22@I(NN)));

OMEGA4=(OMG4//ZERO)||(ZERO'//(S22@I(NN)));

OMG1_INV=INV(OMEGA1);OMG2_INV=INV(OMEGA2);

OMG3_INV=INV(OMEGA3);OMG4_INV=INV(OMEGA4);

********** Ignoring the extra OBS **********;

BETA1=INV(X'*OMG1_INV*X)*X'*OMG1_INV*Y;

VAR_BT1=INV(X'*OMG1_INV*X);

STD_BT1=SQRT(VECDIAG(VAR_BT1));

OUT1=BETA1||STD_BT1; C={"BETA" "STD_BETA"}; PRINT 'Ignoring the extra OBS',,OUT1(|COLNAME=C|);

********** WILKS METHOD **********;

BETA2=INV(X'*OMG2_INV*X)*X'*OMG2_INV*Y;

VAR_BT2=INV(X'*OMG2_INV*X);

STD_BT2=SQRT(VECDIAG(VAR_BT2));

OUT2=BETA2||STD_BT2;

PRINT 'WILKS METHOD',,OUT2(|COLNAME=C|);

********** SRIVASTAVA & ZAATAR METHOD **********; BETA3=INV(X'*OMG3_INV*X)*X'*OMG3_INV*Y;

VAR_BT3=INV(X'*OMG3_INV*X);

STD_BT3=SQRT(VECDIAG(VAR_BT3));

OUT3=BETA3||STD_BT3;

PRINT 'SRIVASTAVA & ZAATAR METHOD',,OUT3(|COLNAME=C|);

********** HOCKING & SMITH METHOD **********;
BETA4=INV(X'*OMG4_INV*X)*X'*OMG4_INV*Y; VAR_BT4=INV(X'*OMG4_INV*X); STD_BT4=SQRT(VECDIAG(VAR_BT4)); OUT4=BETA4||STD_BT4;

PRINT 'HOCKING & SMITH METHOD',,OUT4(|COLNAME=C|);

10.12 Baltagi and Griffin (1983) Gasoline Data.

a. SUR Model with the first two countries

SYSLIN Procedure
Seemingly Unrelated Regression Estimation

Model: AUSTRIA
Dependent variable: AUS_Y

Parameter Estimates

Variable     DF    Parameter Estimate    Standard Error    T for H0: Parameter=0    Prob > |T|
INTERCEP      1              3.713252          0.371877                     9.985        0.0001
AUS_X1        1              0.721405          0.208790                     3.455        0.0035
AUS_X2        1             -0.753844          0.146377                    -5.150        0.0001
AUS_X3        1             -0.496348          0.111424                    -4.455        0.0005

Model: BELGIUM
Dependent variable: BEL_Y

Parameter Estimates

Variable     DF    Parameter Estimate    Standard Error    T for H0: Parameter=0    Prob > |T|
INTERCEP      1              2.843323          0.445235                     6.386        0.0001
BEL_X1        1              0.835168          0.169508                     4.927        0.0002
BEL_X2        1             -0.130828          0.153945                    -0.850        0.4088
BEL_X3        1             -0.686411          0.092805                    -7.396        0.0001

b. SUR Model with the first three countries

SYSLIN Procedure
Seemingly Unrelated Regression Estimation

Model: AUSTRIA
Dependent variable: AUS_Y

Parameter Estimates

Variable     DF    Parameter Estimate    Standard Error    T for H0: Parameter=0    Prob > |T|
INTERCEP      1              3.842153          0.369141                    10.408        0.0001
AUS_X1        1              0.819302          0.205608                     3.985        0.0012
AUS_X2        1             -0.786415          0.145256                    -5.414        0.0001
AUS_X3        1             -0.547701          0.109719                    -4.992        0.0002

Model: BELGIUM
Dependent variable: BEL_Y

Parameter Estimates

Variable     DF    Parameter Estimate    Standard Error    T for H0: Parameter=0    Prob > |T|
INTERCEP      1              2.910755          0.440417                     6.609        0.0001
BEL_X1        1              0.887054          0.167952                     5.282        0.0001
BEL_X2        1             -0.128480          0.151578                    -0.848        0.4100
BEL_X3        1             -0.713870          0.091902                    -7.768        0.0001

Model: CANADA
Dependent variable: CAN_Y

Parameter Estimates

Variable     DF    Parameter Estimate    Standard Error    T for H0: Parameter=0    Prob > |T|
INTERCEP      1              3.001741          0.272684                    11.008        0.0001
CAN_X1        1              0.410169          0.076193                     5.383        0.0001
CAN_X2        1             -0.390490          0.086275                    -4.526        0.0004
CAN_X3        1             -0.462567          0.070169                    -6.592        0.0001

c. Cross Model Correlation

Corr        AUSTRIA    BELGIUM    CANADA
(the estimated cross-country residual correlations from the SYSLIN output are not reproduced here)

The Breusch and Pagan (1980) diagonality LM test statistic for the first three countries yields $\lambda_{LM} = T(\hat{r}_{12}^2 + \hat{r}_{13}^2 + \hat{r}_{23}^2) = 2.77$, which is asymptotically distributed as $\chi_3^2$ under the null hypothesis. This does not reject the diagonality of the variance-covariance matrix across the three countries.
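A minimal sketch of how this LM statistic is put together is given below. The three residual correlations are placeholders, not the actual SYSLIN values (which are not reproduced above); only T = 19 (the number of annual observations per country read by the program below) and the reported statistic of 2.77 come from the text.

proc iml;
  /* Breusch-Pagan (1980) diagonality LM test for three equations:
     lambda_LM = T*(r12^2 + r13^2 + r23^2), asymptotically chi-square(3) under the null. */
  T = 19;                                   /* annual observations per country            */
  r12 = 0.10;  r13 = 0.20;  r23 = 0.30;     /* hypothetical residual correlations         */
  lm   = T * (r12##2 + r13##2 + r23##2);
  crit = cinv(0.95, 3);                     /* 5% critical value of chi-square(3), ~7.81  */
  print lm crit;                            /* the reported statistic, 2.77, is below 7.81 */
quit;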

SAS PROGRAM

Data gasoline;

Input COUNTRY $ YEAR Y X1 X2 X3; CARDS;

Proc IML; use GASOLINE; read all into temp;

sur=temp[1:19,2:5]||temp[20:38,2:5]||temp[39:57,2:5];

c={"AUS_Y" "AUS_X1" "AUS_X2" "AUS_X3" "BEL_Y" "BEL_X1" "BEL_X2" "BEL_X3" "CAN_Y" "CAN_X1" "CAN_X2" "CAN_X3"};

create SUR_DATA from SUR [colname=c];

append from SUR;

proc syslin sur data=SUR_DATA;

AUSTRIA: model AUS_Y=AUS_X1 AUS_X2 AUS_X3;
BELGIUM: model BEL_Y=BEL_X1 BEL_X2 BEL_X3;
title 'SUR MODEL WITH THE FIRST 2 COUNTRIES';

proc syslin sur data=SUR_DATA;
AUSTRIA: model AUS_Y=AUS_X1 AUS_X2 AUS_X3;
BELGIUM: model BEL_Y=BEL_X1 BEL_X2 BEL_X3;
CANADA: model CAN_Y=CAN_X1 CAN_X2 CAN_X3;
title 'SUR MODEL WITH THE FIRST 3 COUNTRIES'; run;

10.13 Trace Minimization of Singular Systems with Cross-Equation Restrictions. This is based on Baltagi (1993).

a. Note that $\sum_{i=1}^3 y_{it} = 1$ implies that $\sum_{i=1}^3 \bar{y}_i = 1$, where $\bar{y}_i = \sum_{t=1}^T y_{it}/T$, for $i = 1, 2, 3$. This means that

$$
\sum_{t=1}^T x_t(y_{3t} - \bar{y}_3) = -\sum_{t=1}^T x_t(y_{2t} - \bar{y}_2) - \sum_{t=1}^T x_t(y_{1t} - \bar{y}_1)
$$

or $b_3 = -b_2 - b_1$. This shows that the $b_i$'s satisfy the adding up restriction $\beta_1 + \beta_2 + \beta_3 = 0$.

b. Note also that $\beta_1 = \beta_2$ and $\beta_1 + \beta_2 + \beta_3 = 0$ imply $\beta_3 = -2\beta_1$. If we ignore the first equation, and impose $\beta_1 = \beta_2$, we get

$$
\begin{pmatrix} y_2 \\ y_3 \end{pmatrix}
= \begin{pmatrix} \iota_T & 0 & x \\ 0 & \iota_T & -2x \end{pmatrix}
\begin{pmatrix} \alpha_2 \\ \alpha_3 \\ \beta_1 \end{pmatrix}
+ \begin{pmatrix} u_2 \\ u_3 \end{pmatrix}
$$

which means that the OLS normal equations yield

$$
T\hat{\alpha}_2 + \hat{\beta}_1\sum_{t=1}^T x_t = \sum_{t=1}^T y_{2t}, \qquad
T\hat{\alpha}_3 - 2\hat{\beta}_1\sum_{t=1}^T x_t = \sum_{t=1}^T y_{3t},
$$

$$
\hat{\alpha}_2\sum_{t=1}^T x_t - 2\hat{\alpha}_3\sum_{t=1}^T x_t + 5\hat{\beta}_1\sum_{t=1}^T x_t^2 = \sum_{t=1}^T x_ty_{2t} - 2\sum_{t=1}^T x_ty_{3t}.
$$

Substituting the expressions for $\hat{\alpha}_2$ and $\hat{\alpha}_3$ from the first two equations into the third equation, we get $\hat{\beta}_1 = 0.2b_2 - 0.4b_3$. Using $b_1 + b_2 + b_3 = 0$ from part (a), one gets $\hat{\beta}_1 = 0.4b_1 + 0.6b_2$.

c. Similarly, deleting the second equation and imposing $\beta_1 = \beta_2$ one gets

$$
\begin{pmatrix} y_1 \\ y_3 \end{pmatrix}
= \begin{pmatrix} \iota_T & 0 & x \\ 0 & \iota_T & -2x \end{pmatrix}
\begin{pmatrix} \alpha_1 \\ \alpha_3 \\ \beta_1 \end{pmatrix}
+ \begin{pmatrix} u_1 \\ u_3 \end{pmatrix}.
$$

Forming the OLS normal equations and solving for $\hat{\beta}_1$, one gets $\hat{\beta}_1 = 0.2b_1 - 0.4b_3$. Using $b_1 + b_2 + b_3 = 0$ gives $\hat{\beta}_1 = 0.6b_1 + 0.4b_2$.

d. Finally, deleting the third equation and imposing $\beta_1 = \beta_2$ one gets

$$
\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
= \begin{pmatrix} \iota_T & 0 & x \\ 0 & \iota_T & x \end{pmatrix}
\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \beta_1 \end{pmatrix}
+ \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}.
$$

Forming the OLS normal equations and solving for $\hat{\beta}_1$ one gets $\hat{\beta}_1 = 0.5b_1 + 0.5b_2$.

Therefore, the estimate of $\beta_1$ is not invariant to the deleted equation. Also, this non-invariance affects Zellner's SUR estimation if the restricted least squares residuals are used rather than the unrestricted least squares residuals in estimating the variance-covariance matrix of the disturbances. An alternative solution is given by Im (1994).
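The non-invariance can be seen numerically with a small SAS/IML sketch. Everything below (the sample size, the data generating values, the error scale) is an assumption chosen only to illustrate parts (b)-(d): the three formulas give different estimates whenever $b_1 \neq b_2$ in the sample.

proc iml;
  /* Illustration of the non-invariance in parts (b)-(d); simulated data only. */
  call randseed(42);
  T = 25;
  x  = j(T,1,.);  call randgen(x,"Normal");
  e1 = j(T,1,.);  call randgen(e1,"Normal");
  e2 = j(T,1,.);  call randgen(e2,"Normal");
  y1 = 0.3 + 0.1*x + 0.05*e1;              /* budget shares adding up to one */
  y2 = 0.3 + 0.1*x + 0.05*e2;
  y3 = 1 - y1 - y2;
  xd  = x - x[:];                          /* x in deviations from its mean  */
  mxx = ssq(xd);
  b1 = xd`*(y1 - y1[:])/mxx;               /* unrestricted slopes b_i        */
  b2 = xd`*(y2 - y2[:])/mxx;
  b3 = xd`*(y3 - y3[:])/mxx;               /* note b3 = -b1 - b2             */
  bhat_drop1 = 0.4*b1 + 0.6*b2;            /* part (b): equation 1 deleted   */
  bhat_drop2 = 0.6*b1 + 0.4*b2;            /* part (c): equation 2 deleted   */
  bhat_drop3 = 0.5*b1 + 0.5*b2;            /* part (d): equation 3 deleted   */
  print b1 b2 b3 bhat_drop1 bhat_drop2 bhat_drop3;
quit;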

10.17 The SUR results are replicated using Stata below:

. sureg (Growth: dly = yrt gov m2y inf swo dtot f_pcy d80 d90) (Inequality: gih = yrt m2y civ mlg mlgldc), corr

Seemingly unrelated regression

Equation       Obs   Parms       RMSE    "R-sq"      chi2        P
Growth         119       9   2.313764    0.4047     80.36   0.0000
Inequality     119       5   6.878804    0.4612    102.58   0.0000

                  Coef.   Std. Err.       z    P>|z|     [95% Conf. Interval]
Growth
  yrt        -.0497042    .1546178    -0.32    0.748    -.3527496    .2533412
  gov        -.0345058    .0354801    -0.97    0.331    -.1040455    .0350338
  m2y         .0084999    .0163819     0.52    0.604     -.023608    .0406078
  inf        -.0020648    .0013269    -1.56    0.120    -.0046655     .000536
  swo         3.263209      .60405     5.40    0.000     2.079292    4.447125
  dtot        17.74543     21.9798     0.81    0.419    -25.33419    60.82505
  f_pcy      -1.038173    .4884378    -2.13    0.034    -1.995494   -.0808529
  d80        -1.615472    .5090782    -3.17    0.002    -2.613247   -.6176976
  d90        -3.339514    .6063639    -5.51    0.000    -4.527965   -2.151063
  _cons       10.60415    3.471089     3.05    0.002     3.800944    17.40736

Inequality
  (the coefficient estimates for the Inequality equation are not reproduced here)

Correlation matrix of residuals:

                Growth   Inequality
Growth          1.0000
Inequality      0.0872       1.0000

Breusch-Pagan test of independence: chi2(1) = 0.905, Pr = 0.3415.

b. Note that the correlation between the residuals of the two equations is weak (0.0872) and the Breusch-Pagan test for diagonality of the variance-covariance matrix of the disturbances of the two equations is statistically insignificant, not rejecting zero correlation between the two equations.
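The reported Breusch-Pagan statistic can be verified directly from the correlation above: with T = 119 and r = 0.0872, $\lambda_{LM} = Tr^2 \approx 0.905$ with a $\chi^2(1)$ p-value of about 0.34. A short check, written in SAS/IML to match the other programs in this chapter, is sketched below.

proc iml;
  /* Check of the Breusch-Pagan test of independence reported by Stata above. */
  T   = 119;                      /* observations per equation            */
  r12 = 0.0872;                   /* cross-equation residual correlation  */
  lm   = T * r12##2;              /* = 0.905, matching chi2(1) above      */
  pval = 1 - probchi(lm, 1);      /* = 0.3415, matching Pr above          */
  print lm pval;
quit;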

References

Baltagi, B. H. (1993), “Trace Minimization of Singular Systems With Cross-Equation Restrictions,” Econometric Theory, Problem 93.2.4, 9: 314-315.

Baltagi, B. H. and J. M. Griffin (1983), "Gasoline Demand in the OECD: An Application of Pooling and Testing Procedures," European Economic Review, 22: 117-137.

Binkley, J. K. and C. H. Nelson (1988), “A Note on the Efficiency of Seemingly Unrelated Regression,” The American Statistician, 42: 137-139.

Breusch, T. S. and A. R. Pagan (1980), "The Lagrange Multiplier Test and Its Applications to Model Specification in Econometrics," Review of Economic Studies, 47: 239-253.

Grunfeld, Y. (1958), “The Determinants of Corporate Investment,” unpublished Ph. D. dissertation (University of Chicago: Chicago, IL).

Im, Eric Iksoon (1994), "Trace Minimization of Singular Systems With Cross-Equation Restrictions," Econometric Theory, Solution 93.2.4, 10: 450.

Kmenta, J. (1986), Elements of Econometrics (Macmillan: New York).

Schmidt, P. (1977), “Estimation of Seemingly Unrelated Regressions With Unequal Numbers of Observations,” Journal of Econometrics, 5: 365-377.
