Simple Linear Regression
3.1 For least squares, the first-order conditions of minimization, given by Eqs. (3.2) and (3.3), yield immediately the first two numerical properties of OLS estimates, i.e., $\sum_{i=1}^n e_i = 0$ and $\sum_{i=1}^n e_i X_i = 0$. Now consider $\sum_{i=1}^n e_i \hat{Y}_i = \hat{\alpha} \sum_{i=1}^n e_i + \hat{\beta} \sum_{i=1}^n e_i X_i = 0$, where the first equality uses $\hat{Y}_i = \hat{\alpha} + \hat{\beta} X_i$ and the second equality uses the first two numerical properties of OLS. Using the fact that $e_i = Y_i - \hat{Y}_i$, we can sum both sides to get $\sum_{i=1}^n e_i = \sum_{i=1}^n Y_i - \sum_{i=1}^n \hat{Y}_i$, but $\sum_{i=1}^n e_i = 0$; therefore $\sum_{i=1}^n Y_i = \sum_{i=1}^n \hat{Y}_i$. Dividing both sides by $n$, we get $\bar{Y} = \bar{\hat{Y}}$.
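These numerical properties are easy to confirm on simulated data. The sketch below is only an illustration, with made-up parameters and numpy's least-squares routine standing in for the textbook formulas:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    X = rng.normal(5, 2, n)                    # hypothetical regressor
    Y = 1.0 + 0.5 * X + rng.normal(0, 1, n)    # hypothetical model with intercept

    # OLS of Y on a constant and X
    Z = np.column_stack([np.ones(n), X])
    (a_hat, b_hat), *_ = np.linalg.lstsq(Z, Y, rcond=None)
    Y_hat = a_hat + b_hat * X
    e = Y - Y_hat

    print(e.sum())                  # ~0: residuals sum to zero
    print((e * X).sum())            # ~0: residuals orthogonal to X
    print((e * Y_hat).sum())        # ~0: residuals orthogonal to fitted values
    print(Y.mean() - Y_hat.mean())  # ~0: mean of Y equals mean of the fitted values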
3.2 Minimizing $\sum_{i=1}^n (Y_i - \alpha)^2$ with respect to $\alpha$ yields $-2 \sum_{i=1}^n (Y_i - \alpha) = 0$. Solving for $\alpha$ yields $\hat{\alpha}_{OLS} = \bar{Y}$. Averaging $Y_i = \alpha + u_i$, we get $\bar{Y} = \alpha + \bar{u}$. Hence $\hat{\alpha}_{OLS} = \alpha + \bar{u}$ with $E(\hat{\alpha}_{OLS}) = \alpha$, since $E(\bar{u}) = \sum_{i=1}^n E(u_i)/n = 0$, and $\mathrm{var}(\hat{\alpha}_{OLS}) = E(\hat{\alpha}_{OLS} - \alpha)^2 = E(\bar{u}^2) = \mathrm{var}(\bar{u}) = \sigma^2/n$. The residual sum of squares is $\sum_{i=1}^n (Y_i - \hat{\alpha}_{OLS})^2 = \sum_{i=1}^n (Y_i - \bar{Y})^2 = \sum_{i=1}^n y_i^2$.
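A small Monte Carlo experiment, with hypothetical values for $\alpha$, $\sigma$, and $n$, illustrates the unbiasedness of $\hat{\alpha}_{OLS} = \bar{Y}$ and its variance $\sigma^2/n$:

    import numpy as np

    rng = np.random.default_rng(1)
    alpha, sigma, n, reps = 2.0, 3.0, 50, 20000   # hypothetical parameters

    # For Y_i = alpha + u_i, the OLS estimator of alpha is the sample mean
    a_hat = np.array([rng.normal(alpha, sigma, n).mean() for _ in range(reps)])

    print(a_hat.mean())   # ~alpha = 2.0: unbiasedness
    print(a_hat.var())    # ~sigma^2/n = 9/50 = 0.18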
3.3 a. Minimizing $\sum_{i=1}^n (Y_i - \beta X_i)^2$ with respect to $\beta$ yields $-2 \sum_{i=1}^n (Y_i - \beta X_i) X_i = 0$. Solving for $\beta$ yields $\hat{\beta}_{OLS} = \sum_{i=1}^n Y_i X_i / \sum_{i=1}^n X_i^2$. Substituting $Y_i = \beta X_i + u_i$ yields $\hat{\beta}_{OLS} = \beta + \sum_{i=1}^n X_i u_i / \sum_{i=1}^n X_i^2$ with $E(\hat{\beta}_{OLS}) = \beta$, since $X_i$ is nonstochastic and $E(u_i) = 0$. Also, $\mathrm{var}(\hat{\beta}_{OLS}) = E(\hat{\beta}_{OLS} - \beta)^2 = E\left(\sum_{i=1}^n X_i u_i / \sum_{i=1}^n X_i^2\right)^2 = \sigma^2 \sum_{i=1}^n X_i^2 / \left(\sum_{i=1}^n X_i^2\right)^2 = \sigma^2 / \sum_{i=1}^n X_i^2$.
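The following simulation sketch (hypothetical $\beta$, $\sigma$, and design; $X$ is drawn once and held fixed across replications, since the derivation treats it as nonstochastic) corroborates part (a):

    import numpy as np

    rng = np.random.default_rng(2)
    beta, sigma, n, reps = 0.7, 2.0, 40, 20000   # hypothetical parameters
    X = rng.uniform(1, 10, n)                    # fixed design

    b_hat = np.empty(reps)
    for r in range(reps):
        Y = beta * X + rng.normal(0, sigma, n)
        b_hat[r] = (Y * X).sum() / (X ** 2).sum()   # OLS through the origin

    print(b_hat.mean())                           # ~beta: unbiasedness
    print(b_hat.var(), sigma**2 / (X**2).sum())   # simulated vs. sigma^2/sum(X_i^2)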
b. From the first-order condition in part (a), we get $\sum_{i=1}^n e_i X_i = 0$, where $e_i = Y_i - \hat{\beta}_{OLS} X_i$. However, $\sum_{i=1}^n e_i$ is not necessarily zero. Therefore, $\sum_{i=1}^n Y_i$ is not necessarily equal to $\sum_{i=1}^n \hat{Y}_i$, and $\bar{Y}$ need not equal the mean of the fitted values.
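Part (b) can be seen on a single simulated sample (hypothetical values): residuals from a regression through the origin are orthogonal to $X$, but they do not generally sum to zero, so $\sum Y_i$ and $\sum \hat{Y}_i$ need not match:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 30
    X = rng.uniform(1, 5, n)                  # hypothetical data
    Y = 0.7 * X + rng.normal(0, 1, n)

    b_hat = (Y * X).sum() / (X ** 2).sum()    # no-intercept OLS
    e = Y - b_hat * X

    print((e * X).sum())               # ~0: the first-order condition holds
    print(e.sum())                     # generally nonzero: no intercept in the model
    print(Y.sum(), (b_hat * X).sum())  # the two sums differ by e.sum()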
squares. Hence, $\sum_{i=1}^n \hat{y}_i y_i = \sum_{i=1}^n \hat{y}_i^2$. Therefore, $r_{Y,\hat{Y}}^2 = \left(\sum_{i=1}^n y_i \hat{y}_i\right)^2 / \left(\sum_{i=1}^n y_i^2\right)\left(\sum_{i=1}^n \hat{y}_i^2\right) = \sum_{i=1}^n \hat{y}_i^2 / \sum_{i=1}^n y_i^2 = R^2$.
for " 3.
n
All satisfy J2 wiXi = 1, which is necessary for "i to be unbiased for i = 1,2,3.
i=i
Therefore, $\hat{\beta}_i = \beta + \sum_{i=1}^n w_i u_i$ for $i = 1, 2, 3$, with $E(\hat{\beta}_i) = \beta$ and $\mathrm{var}(\hat{\beta}_i) = \sigma^2 \sum_{i=1}^n w_i^2$, since the $u_i$'s are IID$(0, \sigma^2)$. Hence,
a. $\mathrm{cov}(\hat{\beta}_1, \hat{\beta}_2) = E\left(\sum_{i=1}^n X_i u_i / \sum_{i=1}^n X_i^2\right)\left(\sum_{i=1}^n u_i / \sum_{i=1}^n X_i\right) = \sigma^2 \sum_{i=1}^n X_i / \left(\sum_{i=1}^n X_i^2\right)\left(\sum_{i=1}^n X_i\right) = \sigma^2 / \sum_{i=1}^n X_i^2 = \mathrm{var}(\hat{\beta}_1) > 0$.
Hence, $\rho_{12} = \mathrm{cov}(\hat{\beta}_1, \hat{\beta}_2) / \left[\mathrm{var}(\hat{\beta}_1)\,\mathrm{var}(\hat{\beta}_2)\right]^{1/2} = \left[\mathrm{var}(\hat{\beta}_1) / \mathrm{var}(\hat{\beta}_2)\right]^{1/2} = \left(n\bar{X}^2 / \sum_{i=1}^n X_i^2\right)^{1/2}$, with $0 < \rho_{12} < 1$. Samuel-Cahn (1994) showed that whenever the correlation coefficient between two unbiased estimators is equal to the square root of the ratio of their variances, the optimal combination of these two unbiased estimators is the one that weights the estimator with the smaller variance by 1. In this example, $\hat{\beta}_1$ is the best linear unbiased estimator (BLUE). Therefore, as expected, when we combine $\hat{\beta}_1$ with any other linear unbiased estimator of $\beta$, the optimal weight $\alpha^*$ turns out to be 1 for $\hat{\beta}_1$ and zero for the other linear unbiased estimator.
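A Monte Carlo sketch (hypothetical parameters; the design is drawn once and held fixed) illustrates part (a): the simulated correlation between $\hat{\beta}_1$ and $\hat{\beta}_2$ matches $\left(n\bar{X}^2/\sum X_i^2\right)^{1/2}$:

    import numpy as np

    rng = np.random.default_rng(4)
    beta, sigma, n, reps = 1.0, 2.0, 25, 30000   # hypothetical parameters
    X = rng.uniform(1, 4, n)                     # fixed design

    b1, b2 = np.empty(reps), np.empty(reps)
    for r in range(reps):
        Y = beta * X + rng.normal(0, sigma, n)
        b1[r] = (X * Y).sum() / (X ** 2).sum()   # BLUE for Y_i = beta*X_i + u_i
        b2[r] = Y.mean() / X.mean()              # ratio estimator

    rho_sim = np.corrcoef(b1, b2)[0, 1]
    rho_theory = np.sqrt(n * X.mean() ** 2 / (X ** 2).sum())
    print(rho_sim, rho_theory)                   # approximately equal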
b. Similarly, $\mathrm{cov}(\hat{\beta}_1, \hat{\beta}_3) = E\left(\sum_{i=1}^n X_i u_i / \sum_{i=1}^n X_i^2\right)\left(\sum_{i=1}^n x_i u_i / \sum_{i=1}^n x_i^2\right) = \sigma^2 \sum_{i=1}^n X_i x_i / \left(\sum_{i=1}^n X_i^2\right)\left(\sum_{i=1}^n x_i^2\right) = \sigma^2 / \sum_{i=1}^n X_i^2 = \mathrm{var}(\hat{\beta}_1) > 0$, since $\sum_{i=1}^n X_i x_i = \sum_{i=1}^n x_i^2$.
The optimal combination of $\hat{\beta}_2$ and $\hat{\beta}_3$ weights each inversely to its variance, since they are uncorrelated: $\mathrm{cov}(\hat{\beta}_2, \hat{\beta}_3) = \sigma^2 \sum_{i=1}^n x_i / \left(n\bar{X} \sum_{i=1}^n x_i^2\right) = 0$. This gives

$\hat{\beta} = \dfrac{\mathrm{var}(\hat{\beta}_2)}{\mathrm{var}(\hat{\beta}_2) + \mathrm{var}(\hat{\beta}_3)}\,\hat{\beta}_3 + \dfrac{\mathrm{var}(\hat{\beta}_3)}{\mathrm{var}(\hat{\beta}_2) + \mathrm{var}(\hat{\beta}_3)}\,\hat{\beta}_2 = (1 - \rho_{12}^2)\hat{\beta}_3 + \rho_{12}^2\hat{\beta}_2$,

since $\mathrm{var}(\hat{\beta}_3)/\mathrm{var}(\hat{\beta}_2) = \rho_{12}^2 / (1 - \rho_{12}^2)$, where $\rho_{12}^2 = n\bar{X}^2 / \sum_{i=1}^n X_i^2$ while $1 - \rho_{12}^2 = \sum_{i=1}^n (X_i - \bar{X})^2 / \sum_{i=1}^n X_i^2$. Hence, $\hat{\beta} = \left(\sum_{i=1}^n x_i y_i + n\bar{X}\bar{Y}\right) / \sum_{i=1}^n X_i^2 = \sum_{i=1}^n X_i Y_i / \sum_{i=1}^n X_i^2 = \hat{\beta}_1$.
See also the solution by Trenkler (1996).
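As a check on the algebra, the sketch below (made-up sample) confirms that the combination $(1 - \rho_{12}^2)\hat{\beta}_3 + \rho_{12}^2\hat{\beta}_2$ reproduces $\hat{\beta}_1$ exactly, sample by sample:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 20
    X = rng.uniform(1, 6, n)                 # hypothetical data
    Y = 0.8 * X + rng.normal(0, 1.5, n)
    x, y = X - X.mean(), Y - Y.mean()        # deviations from sample means

    b1 = (X * Y).sum() / (X ** 2).sum()      # BLUE
    b2 = Y.mean() / X.mean()                 # ratio estimator
    b3 = (x * y).sum() / (x ** 2).sum()      # slope from the with-intercept regression

    rho12_sq = n * X.mean() ** 2 / (X ** 2).sum()
    b_combo = (1 - rho12_sq) * b3 + rho12_sq * b2
    print(b1, b_combo)                       # identical up to rounding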