 ## The General Linear Model: The Basics

7.1 Invariance of the fitted values and residuals to non-singular transformations of the independent variables. The regression model in (7.1) can be written as $y = XCC^{-1}\beta + u$ where …

## Regression Diagnostics and Specification Tests

8.1 Since $H = P_X$ is idempotent, it is positive semi-definite with $b'Hb \geq 0$ for any arbitrary vector $b$. Specifically, for $b' = (1, 0, \ldots, 0)$ we get $h_{11}$ …

## Generalized Least Squares

9.1 GLS Is More Efficient than OLS. a. Equation (7.5) of Chap. 7 gives $\hat{\beta}_{ols} = \beta + (X'X)^{-1}X'u$, so that $E(\hat{\beta}_{ols}) = \beta$ as long as $X$ and $u$ …

## Seemingly Unrelated Regressions

10.1 When Is OLS as Efficient as Zellner’s SUR? a. From (10.2), OLS on this system gives $\hat{\beta}_{ols} = \begin{pmatrix} \hat{\beta}_{1,ols} \\ \hat{\beta}_{2,ols} \end{pmatrix} = \begin{pmatrix} (X_1'X_1)^{-1} & 0 \\ \cdots & \cdots \end{pmatrix}$ …

## Simultaneous Equations Model

11.1 The Inconsistency of OLS. The OLS estimator from Eq. (11.14) yields $\hat{\delta}_{ols} = \sum_{t=1}^{T} p_t q_t \big/ \sum_{t=1}^{T} p_t^2$, where $p_t = P_t - \bar{P}$ and $q_t = Q_t$ …
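The source of the inconsistency can be illustrated with a short simulation sketch. The data-generating process below is hypothetical (chosen only for illustration, not taken from Eq. (11.14)): when the regressor $p_t$ is correlated with the disturbance $u_t$, the deviations-from-means OLS slope converges to the true coefficient plus $\mathrm{cov}(p_t, u_t)/\mathrm{var}(p_t)$ rather than to the true coefficient.

```python
import random

random.seed(42)
n = 200_000
delta = 1.0  # true structural slope (hypothetical value for illustration)

# Endogeneity: the regressor p_t is correlated with the disturbance u_t.
# Here p_t = u_t + v_t, so cov(p, u) = 1 and var(p) = 2,
# giving plim(delta_ols) = delta + 1/2 = 1.5.
u = [random.gauss(0, 1) for _ in range(n)]
v = [random.gauss(0, 1) for _ in range(n)]
p = [ui + vi for ui, vi in zip(u, v)]
q = [delta * pi + ui for pi, ui in zip(p, u)]

# OLS on deviations from means, as with p_t and q_t in the text
p_bar = sum(p) / n
q_bar = sum(q) / n
num = sum((pi - p_bar) * (qi - q_bar) for pi, qi in zip(p, q))
den = sum((pi - p_bar) ** 2 for pi in p)
delta_ols = num / den  # close to 1.5, not the true delta = 1.0
```

With this design the probability limit of the OLS estimator is 1.5, so even a very large sample does not recover the true slope of 1.0.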

## Pooling Time-Series of Cross-Section Data

12.1 Fixed Effects and the Within Transformation. a. Premultiplying (12.11) by $Q$ one gets $Qy = \alpha Q\iota_{NT} + QX\beta + QZ_\mu\mu + Q\nu$. But $PZ_\mu = Z_\mu$ and $QZ_\mu = $ …

## Variance-Covariance Matrix of Random Effects

a. From (12.17) we get $\Omega = \sigma_\mu^2 (I_N \otimes J_T) + \sigma_\nu^2 (I_N \otimes I_T)$. Replacing $J_T$ by $T\bar{J}_T$, and $I_T$ by $(E_T + \bar{J}_T)$, where $E_T$ is by definition $(I_T$ …

## Limited Dependent Variables

13.1 The Linear Probability Model.

| $y_i$ | $u_i$ | Prob. |
|-------|-------|-------|
| 1 | $1 - x_i'\beta$ | $\pi_i$ |
| 0 | $-x_i'\beta$ | $1 - \pi_i$ |

a. Let $\pi_i = \Pr[y_i = 1]$; then $y_i$ …

## Time-Series Analysis

14.1 The AR(1) Model. $y_t = \rho y_{t-1} + \epsilon_t$ with $|\rho| < 1$ and $\epsilon_t \sim \mathrm{IIN}(0, \sigma_\epsilon^2)$. Also, $y_0 \sim N(0, \sigma_\epsilon^2/(1 - \rho^2))$. a. By successive substitution …
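A quick Monte Carlo sketch (with arbitrary parameter values, not from the text) confirms the role of the stationary start: when $y_0$ is drawn from $N(0, \sigma_\epsilon^2/(1-\rho^2))$ and the recursion $y_t = \rho y_{t-1} + \epsilon_t$ is iterated, every $y_t$ keeps the stationary variance $\sigma_\epsilon^2/(1-\rho^2)$.

```python
import random

random.seed(0)

rho, sigma = 0.5, 1.0  # arbitrary illustrative values
stat_var = sigma**2 / (1 - rho**2)  # stationary variance = 4/3 here

N, T = 100_000, 5
draws = []
for _ in range(N):
    # y_0 drawn from the stationary distribution N(0, sigma^2/(1-rho^2))
    y = random.gauss(0, stat_var**0.5)
    for _ in range(T):
        y = rho * y + random.gauss(0, sigma)
    draws.append(y)

mean = sum(draws) / N
sample_var = sum((d - mean) ** 2 for d in draws) / (N - 1)
# sample_var should be close to stat_var for every horizon T
```

Starting $y_0$ at a fixed constant instead would make $\mathrm{var}(y_t)$ depend on $t$, converging to the stationary value only as $t$ grows.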

## Relative Efficiency of OLS Under Heteroskedasticity

a. From Eq. (5.9) we have $\mathrm{var}(\hat{\beta}_{ols}) = \sum_{i=1}^{n} x_i^2 \sigma_i^2 \big/ \big(\sum_{i=1}^{n} x_i^2\big)^2 = \sigma^2 \sum_{i=1}^{n} x_i^2 x_i^{\delta} \big/ \big(\sum_{i=1}^{n} x_i^2\big)^2$ …
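The variance formula $\mathrm{var}(\hat{\beta}_{ols}) = \sum x_i^2\sigma_i^2/(\sum x_i^2)^2$ can be checked by simulation. The design below is hypothetical (arbitrary regressor values, skedastic function $\sigma_i^2 = \sigma^2 X_i^\delta$ with $\delta = 2$); the empirical variance of the OLS slope across replications should match the formula.

```python
import random

random.seed(1)

# Fixed regressor values and heteroskedasticity sigma_i^2 = sigma^2 * X_i^delta
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]  # arbitrary design points
sigma2, delta = 1.0, 2.0
sig2 = [sigma2 * x**delta for x in X]

n = len(X)
xbar = sum(X) / n
xdev = [x - xbar for x in X]           # x_i in deviation form
sxx = sum(d * d for d in xdev)

# Theoretical var(beta_ols) = sum(x_i^2 sigma_i^2) / (sum x_i^2)^2
var_theory = sum(d * d * s for d, s in zip(xdev, sig2)) / sxx**2

reps = 20_000
alpha, beta = 1.0, 2.0                 # arbitrary true coefficients
slopes = []
for _ in range(reps):
    u = [random.gauss(0, s**0.5) for s in sig2]
    y = [alpha + beta * x + e for x, e in zip(X, u)]
    ybar = sum(y) / n
    slopes.append(sum(d * (yi - ybar) for d, yi in zip(xdev, y)) / sxx)

m = sum(slopes) / reps
var_emp = sum((b - m) ** 2 for b in slopes) / (reps - 1)
```

With 20,000 replications the empirical variance agrees with the formula to within a few percent.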

## Simple Versus Multiple Regression Coefficients. This is based on Baltagi (1987)

The OLS residuals from $Y_i = \gamma + \delta_2 \nu_{2i} + \delta_3 \nu_{3i} + w_i$, say $\hat{w}_i$, satisfy the following conditions: $\sum_{i=1}^{n} \hat{w}_i = 0$, $\sum_{i=1}^{n} \hat{w}_i \nu_{2i} = 0$, $\sum_{i=1}^{n} \hat{w}_i \nu_{3i}$ …

## Independence and Simple Correlation

a. Assume that X and Y are continuous random variables. The proof is similar if X and Y are discrete random variables and is left to the reader. If X …

## Distributed Lags and Dynamic Models

6.1 a. Using the linear arithmetic lag given in Eq. (6.2), a 6 year lag on income gives a regression of consumption on a constant and $Z_t = \sum_{i=0}^{6}$ …

## The Best Predictor

a. The problem is to minimize $E[Y - h(X)]^2$ with respect to $h(X)$. Add and subtract $E(Y|X)$ to get $E\{[Y - E(Y|X)] + [E(Y|X) - h(X)]\}^2 = E[Y - E(Y|X)]^2$ …

## Effect of Additional Regressors on R²

a. Least squares on the $K = K_1 + K_2$ regressors minimizes the sum of squared errors and yields $SSE_2 = \min \sum_{i=1}^{n} (Y_i - \alpha - \beta_2 X_{2i} - \ldots$ …

## The Binomial Distribution

a. $\Pr[X = 5 \text{ or } 6] = \Pr[X = 5] + \Pr[X = 6] = b(n = 20, X = 5, \theta = 0.1) + b(n = 20, X = 6, \theta$ …
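The two binomial terms are straightforward to evaluate numerically. A minimal sketch, using the standard pmf $b(n, X, \theta) = \binom{n}{X}\theta^X(1-\theta)^{n-X}$ with the values $n = 20$ and $\theta = 0.1$ from the problem:

```python
from math import comb

def binom_pmf(n, x, theta):
    # b(n, X, theta) = C(n, X) * theta^X * (1 - theta)^(n - X)
    return comb(n, x) * theta**x * (1 - theta) ** (n - x)

# Pr[X = 5 or 6] = Pr[X = 5] + Pr[X = 6] with n = 20, theta = 0.1
p = binom_pmf(20, 5, 0.1) + binom_pmf(20, 6, 0.1)
# p is approximately 0.0408
```

As a sanity check, the pmf sums to one over its support.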

## Simple Linear Regression

3.1 For least squares, the first-order conditions of minimization, given by Eqs. (3.2) and (3.3), yield immediately the first two numerical properties of OLS estimates: $\sum_{i=1}^{n} e_i = 0$ and $\sum_{i=1}^{n} e_i X_i$ …

## Violations of the Classical Assumptions

5.1 $s^2$ is Biased Under Heteroskedasticity. From Chap. 3 we have shown that $e_i = Y_i - \hat{\alpha}_{ols} - \hat{\beta}_{ols} X_i = y_i - \hat{\beta}_{ols} x_i = -(\hat{\beta}_{ols} - \beta) x_i + (u_i - \bar{u})$ for $i = $ …

## The Wald, LR, and LM Inequality. This is based on Baltagi (1994). The likelihood is given by Eq. (2.1) in the text

…where $I^{11}$ denotes the (1,1) element of the information matrix evaluated at the unrestricted maximum likelihood estimates. It …

## Efficiency as Correlation. This is based on Zheng (1994)

3.12 Since $\hat{\beta}$ and $\tilde{\beta}$ are linear unbiased estimators of $\beta$, it follows that $\hat{\beta} + \lambda(\tilde{\beta} - \hat{\beta})$ for any $\lambda$ is a linear unbiased estimator of $\beta$. Since $\hat{\beta}$ …

## Weighted Least Squares. This is based on Kmenta (1986)

a. From the first equation in (5.11), one could solve for $\hat{\alpha}$: $\hat{\alpha} \sum_{i=1}^{n} (1/\sigma_i^2) = \sum_{i=1}^{n} (Y_i/\sigma_i^2) - \hat{\beta} \sum_{i=1}^{n} (X_i/\sigma_i^2)$ …

## Poisson Distribution

a. Using the MGF for the Poisson derived in problem 2.14c, one gets $M_X(t) = e^{\lambda(e^t - 1)}$. Differentiating with respect to $t$ yields $M_X'(t) = e^{\lambda(e^t - 1)} \lambda e^t$. Evaluating $M_X'(t)$ at $t = $ …
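The derivative of the MGF at zero gives the mean, and this can be checked numerically. A small sketch with an arbitrary rate $\lambda = 2.5$ (not a value from the text), using a central finite difference to approximate $M_X'(0)$:

```python
from math import exp

lam = 2.5  # arbitrary rate, for illustration only

def mgf(t):
    # Poisson MGF: M_X(t) = exp(lambda * (e^t - 1))
    return exp(lam * (exp(t) - 1.0))

# Central finite difference approximates M_X'(0),
# which should equal E(X) = lambda for the Poisson
h = 1e-6
deriv_at_0 = (mgf(h) - mgf(-h)) / (2 * h)
```

The approximation agrees with $\lambda$ to several decimal places, confirming $M_X'(0) = E(X)$.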

## Adding 5 to Each Observation of Xi

Adding 5 to each observation of $X_i$ adds 5 to the sample average $\bar{X}$, which is now 12.5. This means that $x_i = X_i - \bar{X}$ is unaffected. Hence $\sum_{i=1}^{n} x_i^2$ is the same, and since $Y_i$ is unchanged, we conclude that …
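The invariance is easy to verify numerically. The sample below is made up (the text's actual data are not shown here); the point is only that shifting every $X_i$ by a constant leaves the deviations, hence $\sum x_i^2$ and the OLS slope, exactly unchanged.

```python
# Slope and sum of squared deviations are invariant to adding a constant to X
X = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical sample, for illustration only
Y = [2.0, 1.5, 3.0, 3.5, 5.0]

def slope_and_ssx(xs, ys):
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    ssx = sum((x - xbar) ** 2 for x in xs)          # sum of x_i^2 (deviations)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    return sxy / ssx, ssx

b1, ssx1 = slope_and_ssx(X, Y)
b2, ssx2 = slope_and_ssx([x + 5 for x in X], Y)     # shift every X_i by 5
# b1 == b2 and ssx1 == ssx2: the slope and sum of squares are unchanged
```

Only the intercept changes under such a shift, since it depends on $\bar{X}$ itself.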

## The AR(1) model. From (5.26), by continuous substitution just like (5.29), one could stop at u_{t-s} to get

$u_t = \rho^s u_{t-s} + \rho^{s-1}\epsilon_{t-s+1} + \rho^{s-2}\epsilon_{t-s+2} + \ldots + \rho\epsilon_{t-1} + \epsilon_t$ for $t > s$. Note that the power of $\rho$ and the subscript of $\epsilon$ …
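The partial-substitution identity can be verified directly: iterating the recursion and stopping $s$ steps back must give the same value as the recursion itself. The numbers below are arbitrary (a sketch, not the text's data).

```python
rho = 0.6
eps = [0.3, -0.7, 0.2, 0.8, -0.1, 0.4, -0.5]  # eps_1 .. eps_7, arbitrary values

# Recursion u_t = rho * u_{t-1} + eps_t, starting from u_0 = 0 for simplicity
u = [0.0]
for e in eps:
    u.append(rho * u[-1] + e)

# Stop the substitution at u_{t-s}:
# u_t = rho^s * u_{t-s} + sum_{j=0}^{s-1} rho^j * eps_{t-j}
t, s = 7, 3
partial = rho**s * u[t - s] + sum(rho**j * eps[t - 1 - j] for j in range(s))
# partial equals u[t] up to floating-point rounding
```

Each term's power of $\rho$ plus the lag of its $\epsilon$ subscript adds up consistently, which is the pattern the text goes on to note.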

## The Uniform Density

a. $E(X) = \int_0^1 x\,dx = \frac{1}{2}\big[x^2\big]_0^1 = \frac{1}{2}$ and $E(X^2) = \int_0^1 x^2\,dx = \frac{1}{3}\big[x^3\big]_0^1 = \frac{1}{3}$, so that $\mathrm{var}(X) = E(X^2) - (E(X))^2 = \frac{1}{3} - \frac{1}{4} = \frac{1}{12}$ …
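The two integrals and the resulting variance can be confirmed with simple numerical integration over $[0, 1]$ (a midpoint-rule sketch, assuming the standard uniform density as in the derivation above):

```python
def integrate(f, n=100_000):
    # Composite midpoint rule on [0, 1]
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

ex = integrate(lambda x: x)        # E(X)   = 1/2
ex2 = integrate(lambda x: x * x)   # E(X^2) = 1/3
var = ex2 - ex**2                  # 1/3 - 1/4 = 1/12
```

The numerical values match $1/2$, $1/3$, and $1/12$ to high precision.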

## Dependent Variable: LNC

Analysis of Variance:

| Source  | DF | Sum of Squares | Mean Square | F Value | Prob > F |
|---------|----|----------------|-------------|---------|----------|
| Model   | 1  | 0.04693        | 0.04693     | 1.288   | 0.2625   |
| Error   | 44 | 1.60260        | 0.03642     |         |          |
| C Total | 45 | 1.64953        |             |         |          |

Root …
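The internal consistency of the printed ANOVA table can be checked arithmetically: sums of squares and degrees of freedom are additive, and the F statistic is the ratio of the mean squares (up to rounding in the printed output).

```python
# Figures transcribed from the ANOVA table above
ss_model, ss_error, ss_total = 0.04693, 1.60260, 1.64953
df_model, df_error, df_total = 1, 44, 45
ms_model, ms_error = 0.04693, 0.03642
f_value = 1.288

# Sums of squares and degrees of freedom are additive
check_ss = abs((ss_model + ss_error) - ss_total)   # essentially zero
check_df = df_model + df_error == df_total         # True

# F = MS(Model) / MS(Error), up to rounding in the printed table
check_f = abs(ms_model / ms_error - f_value)       # small rounding difference
```

The same checks apply to the other regression printouts reproduced below.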

## Regressions with Non-zero Mean Disturbances

a. For the gamma distribution, $E(u_i) = \theta$ and $\mathrm{var}(u_i) = \theta$. Hence, the disturbances of the simple regression have non-zero mean but constant variance. Adding and subtracting $\theta$ on …

## The Exponential Distribution

a. Using the MGF for the exponential distribution derived in problem 2.14e, we get $M_X(t) = 1/(1 - \theta t)$. Differentiating with respect to $t$ yields $M_X'(t) = \theta/(1 - \theta t)^2$. Therefore $M_X'(0) = $ …

## Theil’s minimum mean square estimator of β can be written as

a. $\tilde{\beta} = \sum_{i=1}^{n} X_i Y_i \Big/ \Big[ \sum_{i=1}^{n} X_i^2 + (\sigma^2/\beta^2) \Big]$. Substituting $Y_i = \beta X_i + u_i$ we get $\tilde{\beta} = \Big( \beta \sum_{i=1}^{n} X_i^2 + \sum_{i=1}^{n} X_i u_i \Big) \Big/$ …
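Because Theil's estimator shares the OLS numerator but inflates the denominator by $\sigma^2/\beta^2 > 0$, it shrinks the OLS estimate toward zero. A minimal numerical sketch with made-up data and hypothetical values for $\sigma^2$ and $\beta^2$:

```python
# Theil's estimator shrinks OLS toward zero: same numerator,
# denominator inflated by sigma^2 / beta^2 > 0.
X = [1.0, 2.0, 3.0, 4.0]     # arbitrary data, for illustration only
Y = [1.1, 2.3, 2.8, 4.2]
sigma2, beta2 = 1.0, 1.0     # hypothetical sigma^2 and beta^2

sxy = sum(x * y for x, y in zip(X, Y))
sxx = sum(x * x for x in X)

beta_ols = sxy / sxx
beta_theil = sxy / (sxx + sigma2 / beta2)
# |beta_theil| < |beta_ols| whenever sxy != 0
```

The shrinkage trades a little bias for a smaller variance, which is what makes the estimator "minimum mean square".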

## ML Estimation of Linear Regression Model with AR(1) Errors and Two Observations

This is based on Baltagi and Li (1995). a. The OLS estimator of $\beta$ is given by $\hat{\beta}_{ols} = \sum_{i=1}^{2} x_i y_i \big/ \sum_{i=1}^{2} x_i^2 = (y_1 x_1 + y_2 x_2)/(x_1^2 + x_2^2)$ …

## The Gamma Distribution

a. Using the MGF for the Gamma distribution derived in problem 2.14f, we get $M_X(t) = (1 - \beta t)^{-\alpha}$. Differentiating with respect to $t$ yields $M_X'(t) = -\alpha(1 - \beta t)^{-\alpha - 1}(-\beta) = $ …

## Dependent Variable: LNEN

Analysis of Variance:

| Source  | DF | Sum of Squares | Mean Square | F Value | Prob > F |
|---------|----|----------------|-------------|---------|----------|
| Model   | 1  | 30.82384       | 30.82384    | 13.462  | 0.0018   |
| Error   | 18 | 41.21502       | 2.28972     |         |          |
| C Total | 19 | 72.03886       |             |         |          |

Root …

## The backup regressions are given below; these are performed using SAS

OLS REGRESSION OF LNC ON CONSTANT, LNP, AND LNY

Dependent Variable: LNC

Analysis of Variance:

| Source | DF | Sum of Squares | Mean Square | F Value | Prob > F |
|--------|----|----------------|-------------|---------|----------|
| Model  | 2  | 0.50098        | 0.25049     | 9.378   | …        |

## The t-distribution with r Degrees of Freedom

a. If $X_1, \ldots, X_n$ are IIN$(\mu, \sigma^2)$, then $\bar{X} \sim N(\mu, \sigma^2/n)$ and $z = (\bar{X} - \mu)/(\sigma/\sqrt{n})$ is $N(0, 1)$. b. $(n-1)s^2/\sigma^2 \sim \chi^2_{n-1}$. Dividing our $N(0, 1)$ random variable $z$ in part (a) by the …

## Dependent Variable: LNRGDP

Analysis of Variance:

| Source  | DF | Sum of Squares | Mean Square | F Value | Prob > F |
|---------|----|----------------|-------------|---------|----------|
| Model   | 1  | 53.88294       | 53.88294    | 535.903 | 0.0001   |
| Error   | 18 | 1.80983        | 0.10055     |         |          |
| C Total | 19 | 55.69277       |             |         |          |

Root MSE 0.31709 …
