
Simple Linear Regression

3.1 For least squares, the first-order conditions of minimization, given by Eqs. (3.2) and (3.3), yield immediately the first two numerical properties of OLS estimates, i.e., $\sum_{i=1}^n e_i = 0$ and $\sum_{i=1}^n e_i X_i = 0$. Now consider $\sum_{i=1}^n e_i \hat{Y}_i = \hat{\alpha} \sum_{i=1}^n e_i + \hat{\beta} \sum_{i=1}^n e_i X_i = 0$, where the first equality uses $\hat{Y}_i = \hat{\alpha} + \hat{\beta} X_i$ and the second equality uses the first two numerical properties of OLS. Using the fact that $e_i = Y_i - \hat{Y}_i$, we can sum both sides to get $\sum_{i=1}^n e_i = \sum_{i=1}^n Y_i - \sum_{i=1}^n \hat{Y}_i$, but $\sum_{i=1}^n e_i = 0$; therefore $\sum_{i=1}^n Y_i = \sum_{i=1}^n \hat{Y}_i$. Dividing both sides by n, we get $\bar{Y} = \bar{\hat{Y}}$.
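These numerical properties are easy to confirm on simulated data. A minimal sketch in Python (the intercept 2.0, slope 0.5, and the seed are arbitrary illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = rng.uniform(1, 10, size=n)
Y = 2.0 + 0.5 * X + rng.normal(size=n)      # illustrative DGP only

x, y = X - X.mean(), Y - Y.mean()
beta_hat = (x @ y) / (x @ x)                # OLS slope
alpha_hat = Y.mean() - beta_hat * X.mean()  # OLS intercept
Y_hat = alpha_hat + beta_hat * X
e = Y - Y_hat                               # OLS residuals

print(e.sum())                 # ~0: sum of residuals (Eq. 3.2)
print(e @ X)                   # ~0: sum of e_i X_i   (Eq. 3.3)
print(e @ Y_hat)               # ~0: sum of e_i Yhat_i
print(Y.mean(), Y_hat.mean())  # equal: Ybar = mean of the fitted values
```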

3.2 Minimizing $\sum_{i=1}^n (Y_i - \alpha)^2$ with respect to $\alpha$ yields $-2\sum_{i=1}^n (Y_i - \alpha) = 0$. Solving for $\alpha$ yields $\hat{\alpha}_{OLS} = \bar{Y}$. Averaging $Y_i = \alpha + u_i$, we get $\bar{Y} = \alpha + \bar{u}$. Hence $\hat{\alpha}_{OLS} = \alpha + \bar{u}$, with $E(\hat{\alpha}_{OLS}) = \alpha$ since $E(\bar{u}) = \sum_{i=1}^n E(u_i)/n = 0$, and $var(\hat{\alpha}_{OLS}) = E(\hat{\alpha}_{OLS} - \alpha)^2 = E(\bar{u})^2 = var(\bar{u}) = \sigma^2/n$. The residual sum of squares is $\sum_{i=1}^n (Y_i - \hat{\alpha}_{OLS})^2 = \sum_{i=1}^n (Y_i - \bar{Y})^2 = \sum_{i=1}^n y_i^2$.
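A quick Monte Carlo sketch of the unbiasedness and variance results ($\alpha = 3$, $\sigma = 2$, and $n = 25$ are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, sigma, n, reps = 3.0, 2.0, 25, 100_000

u = rng.normal(0.0, sigma, size=(reps, n))
Y = alpha + u
alpha_hats = Y.mean(axis=1)   # OLS estimator of alpha is the sample mean

print(alpha_hats.mean())      # ~3.0: E(alpha_hat) = alpha
print(alpha_hats.var())       # ~sigma^2 / n = 0.16
```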

3.3 a. Minimizing $\sum_{i=1}^n (Y_i - \beta X_i)^2$ with respect to $\beta$ yields $-2\sum_{i=1}^n (Y_i - \beta X_i)X_i = 0$. Solving for $\beta$ yields $\hat{\beta}_{OLS} = \sum_{i=1}^n Y_i X_i / \sum_{i=1}^n X_i^2$. Substituting $Y_i = \beta X_i + u_i$ yields $\hat{\beta}_{OLS} = \beta + \sum_{i=1}^n X_i u_i / \sum_{i=1}^n X_i^2$, with $E(\hat{\beta}_{OLS}) = \beta$ since $X_i$ is nonstochastic and $E(u_i) = 0$. Also, $var(\hat{\beta}_{OLS}) = E(\hat{\beta}_{OLS} - \beta)^2 = E\left(\sum_{i=1}^n X_i u_i / \sum_{i=1}^n X_i^2\right)^2 = \sigma^2 \sum_{i=1}^n X_i^2 \Big/ \left(\sum_{i=1}^n X_i^2\right)^2 = \sigma^2 \Big/ \sum_{i=1}^n X_i^2$.
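The same kind of Monte Carlo sketch for the regression without an intercept; X is drawn once and then held fixed across replications, since the text treats it as nonstochastic ($\beta = 0.7$, $\sigma = 1$, $n = 20$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
beta, sigma, n, reps = 0.7, 1.0, 20, 100_000
X = rng.uniform(1, 5, size=n)   # drawn once, then fixed (nonstochastic)

u = rng.normal(0.0, sigma, size=(reps, n))
Y = beta * X + u
beta_hats = (Y @ X) / (X @ X)   # OLS through the origin, one per replication

print(beta_hats.mean())         # ~0.7: unbiased
print(beta_hats.var())          # ~ theoretical variance below
print(sigma**2 / (X @ X))       # sigma^2 / sum(X_i^2)
```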

b. From the first-order condition in part (a), we get $\sum_{i=1}^n e_i X_i = 0$, where $e_i = Y_i - \hat{\beta}_{OLS} X_i$. However, $\sum_{i=1}^n e_i$ is not necessarily zero. Therefore, $\sum_{i=1}^n Y_i$ is not necessarily equal to $\sum_{i=1}^n \hat{Y}_i$.
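Part (b) can be seen numerically: in a regression through the origin the residuals are orthogonal to X but need not sum to zero. A minimal sketch (the DGP values are arbitrary, and the true model deliberately includes an intercept so that $\sum e_i \neq 0$ shows up clearly):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
X = rng.uniform(1, 5, size=n)
Y = 1.0 + 0.7 * X + rng.normal(size=n)   # true DGP has an intercept

beta_hat = (Y @ X) / (X @ X)             # regression through the origin
e = Y - beta_hat * X

print(e @ X)     # ~0 by the first-order condition
print(e.sum())   # generally nonzero: no normal equation forces it to zero
```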

[Parts (a) and (b) of the next solution were rendered as equation images in the source and are not recoverable here; the surviving fragments show only that the derivation uses $\sum_{i=1}^n w_i X_i = 1$ from (3.7) and ends in an expression involving $\sigma^2$ and $\sum_{i=1}^n x_i^2$.]

c. Any other linear estimator of $\alpha$ can be written as $\tilde{\alpha} = \sum_{i=1}^n b_i Y_i = \alpha \sum_{i=1}^n b_i + \beta \sum_{i=1}^n b_i X_i + \sum_{i=1}^n b_i u_i$, where the last equality is obtained by substituting Eq. (3.1) for $Y_i$. For $E(\tilde{\alpha})$ to equal $\alpha$, we must have $\sum_{i=1}^n b_i = 1$ and $\sum_{i=1}^n b_i X_i = 0$. Hence, $\tilde{\alpha} = \alpha + \sum_{i=1}^n b_i u_i$.
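The standard conclusion of this argument, that any such $\tilde{\alpha}$ has variance at least as large as $var(\hat{\alpha}_{OLS})$, can be illustrated numerically. In the sketch below, the OLS weights $\lambda_i = 1/n - \bar{X} x_i / \sum x_i^2$ and the perturbation construction are my own illustration, not text recovered from the source:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
X = rng.uniform(1, 10, size=n)
x = X - X.mean()

# OLS weights for alpha_hat: they satisfy sum(lam) = 1 and sum(lam * X) = 0
lam = 1.0 / n - X.mean() * x / (x @ x)
print(lam.sum(), lam @ X)        # 1 and ~0

# Any other unbiased linear estimator has b = lam + c with sum(c) = 0 and
# sum(c * X) = 0; build such a c by projecting out the span of [1, X].
Z = np.column_stack([np.ones(n), X])
c = rng.normal(size=n)
c -= Z @ np.linalg.solve(Z.T @ Z, Z.T @ c)
b = lam + c
print(b.sum(), b @ X)            # still 1 and ~0: unbiasedness preserved

# var/sigma^2 is the sum of squared weights; OLS gives the smaller value
print(lam @ lam, b @ b)
```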

 

[The conclusion of the variance comparison in part (c) and the statement of the maximum-likelihood problem were rendered as images in the source.] Setting $\partial \log L / \partial \alpha = 0$ in (3.9) leads to $-\frac{1}{2\sigma^2} \frac{\partial RSS}{\partial \alpha} = 0$. Solving this first-order condition yields $\hat{\alpha}_{OLS}$. Therefore, $\hat{\alpha}_{MLE} = \hat{\alpha}_{OLS}$. Similarly, $\frac{\partial \log L}{\partial \beta} = 0$ from (3.9) leads to $-\frac{1}{2\sigma^2} \frac{\partial RSS}{\partial \beta} = 0$. Solving for $\beta$ yields $\hat{\beta}_{OLS}$. Hence, $\hat{\beta}_{MLE} = \hat{\beta}_{OLS}$.

b. $\frac{\partial \log L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{\sum_{i=1}^n (Y_i - \alpha - \beta X_i)^2}{2\sigma^4}$. Setting this first-order condition to zero and evaluating it at $\hat{\alpha}_{MLE}$ and $\hat{\beta}_{MLE}$ yields $\hat{\sigma}^2_{MLE} = \sum_{i=1}^n e_i^2 / n$.
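A short numerical check that RSS/n maximizes the likelihood in $\sigma^2$ (the DGP values are again arbitrary), together with a reminder that this MLE sits below the unbiased $s^2 = RSS/(n-2)$:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 40
X = rng.uniform(0, 10, size=n)
Y = 1.0 + 0.5 * X + rng.normal(0, 2.0, size=n)

x, y = X - X.mean(), Y - Y.mean()
b = (x @ y) / (x @ x)
a = Y.mean() - b * X.mean()
e = Y - a - b * X
rss = e @ e

# grid check: RSS/n maximizes the log-likelihood concentrated in sigma^2
s2_grid = np.linspace(0.5 * rss / n, 2.0 * rss / n, 1001)
loglik = -0.5 * n * np.log(2 * np.pi * s2_grid) - rss / (2 * s2_grid)
print(s2_grid[loglik.argmax()], rss / n)   # agree up to grid resolution
print(rss / (n - 2))                       # unbiased s^2, slightly larger
```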

 

[Several intervening solutions were rendered as images in the source; the only recoverable fragment reads "... squares. Hence, $\sum_{i=1}^n \hat{y}_i y_i = \sum_{i=1}^n \hat{y}_i^2$. Therefore ...".]

The next solution compares three linear unbiased estimators of $\beta$ in the regression without an intercept. Each can be written as $\hat{\beta}_j = \sum_{i=1}^n w_i Y_i$, with weights $w_i = X_i / \sum_{i=1}^n X_i^2$ for $\hat{\beta}_1$, $w_i = 1/(n\bar{X})$ for $\hat{\beta}_2$, and $w_i = (X_i - \bar{X}) \Big/ \sum_{i=1}^n (X_i - \bar{X})^2$ for $\hat{\beta}_3$.

All satisfy $\sum_{i=1}^n w_i X_i = 1$, which is necessary for $\hat{\beta}_j$ to be unbiased for $j = 1, 2, 3$. Therefore, $\hat{\beta}_j = \beta + \sum_{i=1}^n w_i u_i$ for $j = 1, 2, 3$, with $E(\hat{\beta}_j) = \beta$ and $var(\hat{\beta}_j) = \sigma^2 \sum_{i=1}^n w_i^2$, since the $u_i$'s are IID$(0, \sigma^2)$. Hence, $var(\hat{\beta}_1) = \sigma^2 / \sum_{i=1}^n X_i^2$, $var(\hat{\beta}_2) = \sigma^2/(n\bar{X}^2)$, and $var(\hat{\beta}_3) = \sigma^2 \Big/ \sum_{i=1}^n (X_i - \bar{X})^2$.

a. $cov(\hat{\beta}_1, \hat{\beta}_2) = E\left[\left(\sum_{i=1}^n X_i u_i \Big/ \sum_{i=1}^n X_i^2\right)\left(\sum_{i=1}^n u_i \Big/ n\bar{X}\right)\right] = \sigma^2 \sum_{i=1}^n X_i \Big/ \left(n\bar{X} \sum_{i=1}^n X_i^2\right) = \sigma^2 \Big/ \sum_{i=1}^n X_i^2 = var(\hat{\beta}_1) > 0$. Hence,

$\rho_{12} = \frac{cov(\hat{\beta}_1, \hat{\beta}_2)}{\left[var(\hat{\beta}_1)\, var(\hat{\beta}_2)\right]^{1/2}} = \left[\frac{var(\hat{\beta}_1)}{var(\hat{\beta}_2)}\right]^{1/2} = \left[\frac{n\bar{X}^2}{\sum_{i=1}^n X_i^2}\right]^{1/2}$

with $0 < \rho_{12} < 1$. Samuel-Cahn (1994) showed that whenever the correlation coefficient between two unbiased estimators is equal to the square root of the ratio of their variances, the optimal combination of these two unbiased estimators is the one that weights the estimator with the smaller variance by 1. In this example, $\hat{\beta}_1$ is the best linear unbiased estimator (BLUE). Therefore, as expected, when we combine $\hat{\beta}_1$ with any other linear unbiased estimator of $\beta$, the optimal weight $\alpha^*$ turns out to be 1 for $\hat{\beta}_1$ and zero for the other linear unbiased estimator.
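A Monte Carlo sketch of part (a): the simulated correlation between $\hat{\beta}_1$ and $\hat{\beta}_2$ should match $\left[n\bar{X}^2/\sum X_i^2\right]^{1/2}$ (all DGP values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
beta, sigma, n, reps = 1.0, 1.0, 12, 200_000
X = rng.uniform(1, 5, size=n)    # fixed design across replications

u = rng.normal(0, sigma, size=(reps, n))
Y = beta * X + u
b1 = (Y @ X) / (X @ X)           # BLUE
b2 = Y.mean(axis=1) / X.mean()   # Ybar / Xbar

rho_sim = np.corrcoef(b1, b2)[0, 1]
rho_theory = np.sqrt(n * X.mean() ** 2 / (X @ X))
print(rho_sim, rho_theory)       # close, and both lie in (0, 1)
```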

b. Similarly, $cov(\hat{\beta}_1, \hat{\beta}_3) = E\left[\left(\sum_{i=1}^n X_i u_i \Big/ \sum_{i=1}^n X_i^2\right)\left(\sum_{i=1}^n (X_i - \bar{X}) u_i \Big/ \sum_{i=1}^n (X_i - \bar{X})^2\right)\right] = \sigma^2 \sum_{i=1}^n (X_i - \bar{X}) X_i \Big/ \left(\sum_{i=1}^n X_i^2 \sum_{i=1}^n (X_i - \bar{X})^2\right) = \sigma^2 \Big/ \sum_{i=1}^n X_i^2 = var(\hat{\beta}_1) > 0$, so the optimal weight on $\hat{\beta}_1$ is again 1. Moreover, $cov(\hat{\beta}_2, \hat{\beta}_3) = \sigma^2 \sum_{i=1}^n (X_i - \bar{X}) \Big/ \left(n\bar{X} \sum_{i=1}^n (X_i - \bar{X})^2\right) = 0$, so the minimum-variance combination of $\hat{\beta}_2$ and $\hat{\beta}_3$ weights them inversely to their variances:

$\hat{\beta} = \frac{var(\hat{\beta}_3)}{var(\hat{\beta}_2) + var(\hat{\beta}_3)}\, \hat{\beta}_2 + \frac{var(\hat{\beta}_2)}{var(\hat{\beta}_2) + var(\hat{\beta}_3)}\, \hat{\beta}_3 = \rho_{12}^2\, \hat{\beta}_2 + \left(1 - \rho_{12}^2\right) \hat{\beta}_3$

since $var(\hat{\beta}_3)/var(\hat{\beta}_2) = \rho_{12}^2/(1 - \rho_{12}^2)$, where $\rho_{12}^2 = n\bar{X}^2 \Big/ \sum_{i=1}^n X_i^2$ while $1 - \rho_{12}^2 = \sum_{i=1}^n (X_i - \bar{X})^2 \Big/ \sum_{i=1}^n X_i^2$. Hence,

$\hat{\beta} = \frac{n\bar{X}^2}{\sum_{i=1}^n X_i^2} \cdot \frac{\bar{Y}}{\bar{X}} + \frac{\sum_{i=1}^n (X_i - \bar{X})^2}{\sum_{i=1}^n X_i^2} \cdot \frac{\sum_{i=1}^n (X_i - \bar{X}) Y_i}{\sum_{i=1}^n (X_i - \bar{X})^2} = \frac{\sum_{i=1}^n X_i Y_i}{\sum_{i=1}^n X_i^2} = \hat{\beta}_1.$

See also the solution by Trenkler (1996).
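The concluding identity is exact in every sample, not just on average; a one-draw check (arbitrary DGP values):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 15
X = rng.uniform(1, 5, size=n)
Y = 0.9 * X + rng.normal(size=n)

xc = X - X.mean()
b1 = (Y @ X) / (X @ X)           # beta_hat_1
b2 = Y.mean() / X.mean()         # beta_hat_2
b3 = (xc @ Y) / (xc @ xc)        # beta_hat_3

rho12_sq = n * X.mean() ** 2 / (X @ X)
print(b1, rho12_sq * b2 + (1 - rho12_sq) * b3)   # identical
```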
