5.9 Regressions with Non-zero Mean Disturbances
a. For the gamma distribution, $E(u_i) = \theta$ and $\mathrm{var}(u_i) = \theta$. Hence, the disturbances of the simple regression have non-zero mean but constant variance. Adding and subtracting $\theta$ on the right hand side of the regression, we get $Y_i = (\alpha + \theta) + \beta X_i + (u_i - \theta) = \alpha^* + \beta X_i + u_i^*$, where $\alpha^* = \alpha + \theta$ and $u_i^* = u_i - \theta$ with $E(u_i^*) = 0$ and $\mathrm{var}(u_i^*) = \theta$. OLS yields the BLU estimators of $\alpha^*$ and $\beta$ since the $u_i^*$ disturbances satisfy all the requirements of the Gauss-Markov Theorem, including the zero mean assumption. Hence, $E(\hat{\alpha}_{OLS}) = \alpha^* = \alpha + \theta$, which is biased for $\alpha$ by the mean of the disturbances $\theta$. But $E(s^2) = \mathrm{var}(u_i^*) = \mathrm{var}(u_i) = \theta$. Therefore,
$$E(\hat{\alpha}_{OLS} - s^2) = E(\hat{\alpha}_{OLS}) - E(s^2) = \alpha + \theta - \theta = \alpha.$$
b. Similarly, for the $\chi^2_\nu$ distribution, we have $E(u_i) = \nu$ and $\mathrm{var}(u_i) = 2\nu$. In this case, the same analysis applies except that $E(\hat{\alpha}_{OLS}) = \alpha + \nu$ and $E(s^2) = 2\nu$. Hence, $E(\hat{\alpha}_{OLS} - s^2/2) = \alpha$.
c. Finally, for the exponential distribution, we have $E(u_i) = \theta$ and $\mathrm{var}(u_i) = \theta^2$. In this case, $\mathrm{plim}\,\hat{\alpha}_{OLS} = \alpha + \theta$ and $\mathrm{plim}\,s^2 = \theta^2$, so that $\mathrm{plim}\,s = \theta$. Hence, $\mathrm{plim}(\hat{\alpha}_{OLS} - s) = \mathrm{plim}\,\hat{\alpha}_{OLS} - \mathrm{plim}\,s = \alpha$. A simulation check of all three corrections is sketched below.
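As a numerical check, the following Monte Carlo sketch (my own construction; the regressor design, sample size, and the values $\theta = 3$ and $\nu = 4$ are illustrative assumptions, not from the text) estimates each regression by OLS and averages the corrected intercept estimators. Note that the exponential correction $\hat{\alpha}_{OLS} - s$ is only consistent rather than unbiased, so its average is close to, but not exactly, $\alpha$ in finite samples.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 200, 5000
alpha, beta = 2.0, 1.5                 # illustrative true parameters
x = rng.uniform(0, 10, n)              # fixed regressor across replications
theta, nu = 3.0, 4.0                   # illustrative distribution parameters

# (disturbance generator, corrected intercept estimator) for parts (a), (b), (c)
cases = {
    "gamma(theta)": (lambda: rng.gamma(theta, 1.0, n),   lambda a, s2: a - s2),
    "chi2(nu)":     (lambda: rng.chisquare(nu, n),       lambda a, s2: a - s2 / 2),
    "expon(theta)": (lambda: rng.exponential(theta, n),  lambda a, s2: a - np.sqrt(s2)),
}

X = np.column_stack([np.ones(n), x])
for name, (draw, correct) in cases.items():
    est = []
    for _ in range(reps):
        y = alpha + beta * x + draw()
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        s2 = np.sum((y - X @ b) ** 2) / (n - 2)   # unbiased for var(u_i*) = var(u_i)
        est.append(correct(b[0], s2))
    print(f"{name:13s}: mean corrected alpha-hat = {np.mean(est):.3f}  (alpha = {alpha})")
```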
5.10 The Heteroskedastic Consequences of an Arbitrary Variance for the Initial Disturbance of an AR(1) Model. Parts (a), (b), and (c) are based on the solution given by Kim (1991).
a. Continual substitution into the process $u_t = \rho u_{t-1} + \epsilon_t$ yields
$$u_t = \rho^t u_0 + \sum_{j=0}^{t-1} \rho^j \epsilon_{t-j}.$$
Exploiting the independence conditions $E(u_0 \epsilon_j) = 0$ for all $j$ and $E(\epsilon_i \epsilon_j) = 0$ for all $i \neq j$, and using $\mathrm{var}(u_0) = \sigma_\epsilon^2/\tau$, the variance of $u_t$ is
$$\sigma_t^2 = \mathrm{var}(u_t) = \rho^{2t}\,\mathrm{var}(u_0) + \sum_{j=0}^{t-1} \rho^{2j}\,\mathrm{var}(\epsilon_{t-j}) = \rho^{2t}\sigma_\epsilon^2/\tau + \sigma_\epsilon^2 \sum_{j=0}^{t-1} \rho^{2j}$$
$$= \rho^{2t}\sigma_\epsilon^2/\tau + \sigma_\epsilon^2\big[(1 - \rho^{2t})/(1 - \rho^2)\big] = \big\{1/(1 - \rho^2) + \big[1/\tau - 1/(1 - \rho^2)\big]\rho^{2t}\big\}\,\sigma_\epsilon^2.$$
This expression for $\sigma_t^2$ depends on $t$. Hence, for an arbitrary value of $\tau$, the disturbances are rendered heteroskedastic. If $\tau = 1 - \rho^2$, then $\sigma_t^2$ does not depend on $t$ and the disturbances become homoskedastic.
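The closed form can be confirmed by iterating the variance recursion $\mathrm{var}(u_t) = \rho^2\,\mathrm{var}(u_{t-1}) + \sigma_\epsilon^2$ directly, as in the sketch below (my own check; the values $\rho = 0.8$ and $\sigma_\epsilon^2 = 1$ are assumptions). It also shows that the variance sequence is flat when $\tau = 1 - \rho^2$.

```python
import numpy as np

rho, sigma_eps2 = 0.8, 1.0   # assumed values

def sigma_t2_closed(t, tau):
    """Closed-form sigma_t^2 derived above."""
    return (1/(1 - rho**2) + (1/tau - 1/(1 - rho**2)) * rho**(2*t)) * sigma_eps2

for tau in (0.1, 1 - rho**2, 0.9):     # below, at, and above the stationary value
    var_t = sigma_eps2 / tau           # var(u_0) = sigma_eps^2 / tau
    for t in range(1, 6):
        var_t = rho**2 * var_t + sigma_eps2   # variance recursion of the AR(1)
        assert np.isclose(var_t, sigma_t2_closed(t, tau))
    print(f"tau = {tau:.2f}: sigma_5^2 = {var_t:.4f}")
```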
b. Using the above equation for $\sigma_t^2$, define
$$a = \sigma_t^2 - \sigma_{t-1}^2 = \big[(1/\tau) - 1/(1 - \rho^2)\big]\rho^{2t}(1 - \rho^{-2})\sigma_\epsilon^2 = bc,$$
where $b = (1/\tau) - 1/(1 - \rho^2)$ and $c = \rho^{2t}(1 - \rho^{-2})\sigma_\epsilon^2$. Note that $c < 0$, since $|\rho| < 1$ implies $\rho^{-2} > 1$. If $\tau > 1 - \rho^2$, then $b < 0$, implying that $a > 0$ and the variance is increasing; while if $\tau < 1 - \rho^2$, then $b > 0$, implying that $a < 0$ and the variance is decreasing. If $\tau = 1 - \rho^2$, then $a = 0$ and the process becomes homoskedastic.
Alternatively, one can show that
$$\frac{\partial \sigma_t^2}{\partial t} = \frac{2\sigma_\epsilon^2 \rho^{2t}\log\rho}{\tau(1 - \rho^2)}\big[(1 - \rho^2) - \tau\big],$$
where $\partial a^{bx}/\partial x = b\,a^{bx}\log a$ has been used. Since $\log\rho < 0$, the right hand side of the above equation is positive ($\sigma_t^2$ is increasing) if $\tau > 1 - \rho^2$, and is negative otherwise.
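The sign pattern can be checked numerically; the snippet below (same assumed $\rho$ and $\sigma_\epsilon^2$ as in the previous sketch) classifies the successive differences $a = \sigma_t^2 - \sigma_{t-1}^2$ for $\tau$ below, at, and above $1 - \rho^2$.

```python
import numpy as np

rho, sigma_eps2 = 0.8, 1.0
stationary_tau = 1 - rho**2            # 0.36 for rho = 0.8

def sigma_t2(t, tau):
    return (1/(1 - rho**2) + (1/tau - 1/(1 - rho**2)) * rho**(2*t)) * sigma_eps2

t = np.arange(0, 11)
for tau in (0.1, stationary_tau, 0.9):
    diffs = np.diff(sigma_t2(t, tau))  # a = sigma_t^2 - sigma_{t-1}^2
    if np.allclose(diffs, 0):
        trend = "constant"
    else:
        trend = "increasing" if (diffs > 0).all() else "decreasing"
    print(f"tau = {tau:.2f}: variance is {trend}")
```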
c. Using $u_t = \rho^t u_0 + \sum_{j=0}^{t-1} \rho^j \epsilon_{t-j}$, and noting that $E(u_t) = 0$ for all $t$, one finds, for $t \geq s$,
$$\mathrm{cov}(u_t, u_{t-s}) = E(u_t u_{t-s}) = E\left[\left(\rho^t u_0 + \sum_{i=0}^{t-1} \rho^i \epsilon_{t-i}\right)\left(\rho^{t-s} u_0 + \sum_{j=0}^{t-s-1} \rho^j \epsilon_{t-s-j}\right)\right]$$
$$= \rho^{2t-s}\,\mathrm{var}(u_0) + \rho^s \sum_{i=0}^{t-s-1} \rho^{2i}\,\mathrm{var}(\epsilon_{t-s-i}) = \rho^s \sigma_{t-s}^2,$$
where the cross-product terms vanish by the independence conditions in part (a).
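The covariance formula can also be verified by simulation; in the sketch below (my own check, with assumed values for $\rho$, $\tau$, $t$, and $s$, and normal draws for $u_0$ and $\epsilon_t$), the sample average of $u_t u_{t-s}$ across many simulated paths is compared with $\rho^s \sigma_{t-s}^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, sigma_eps2, tau = 0.8, 1.0, 0.1   # assumed values
t, s, reps = 8, 3, 200_000

u = rng.normal(0, np.sqrt(sigma_eps2 / tau), reps)   # u_0, var = sigma_eps^2 / tau
paths = [u]
for _ in range(t):
    u = rho * u + rng.normal(0, np.sqrt(sigma_eps2), reps)
    paths.append(u)

# closed-form sigma_{t-s}^2 from part (a)
sig2 = (1/(1 - rho**2) + (1/tau - 1/(1 - rho**2)) * rho**(2*(t - s))) * sigma_eps2
print("simulated cov :", np.mean(paths[t] * paths[t - s]))   # E(u_t u_{t-s}), means are zero
print("closed form   :", rho**s * sig2)
```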
d. The Bias of the Standard Errors of OLS for an AR(1) Process with an Arbitrary Variance on the Initial Observation. This part is based on the solution given by Koning (1992). Consider the time-series model $y_t = \beta x_t + u_t$ for $t = 1, \ldots, T$, where $u_t$ follows the AR(1) process described above. The true variance of $\hat{\beta}_{OLS} = \sum_t x_t y_t / \sum_t x_t^2$ is $\mathrm{var}(\hat{\beta}_{OLS}) = \sum_t \sum_s x_t x_s\,\mathrm{cov}(u_t, u_s)/(\sum_t x_t^2)^2$, which involves a double summation over the covariance terms derived in part (c), whereas the usual OLS formula estimates it by $s^2/\sum_t x_t^2$.
Also, note that in the stationary case ($\tau = 1 - \rho^2$), the true variance of $\hat{\beta}_{OLS}$ is greater than the estimated variance of $\hat{\beta}_{OLS}$, since each term in the double summation is non-negative. Therefore, the true t-ratio corresponding to $H_0\!: \beta = 0$ versus $H_1\!: \beta \neq 0$ is lower than the estimated t-ratio. In other words, OLS rejects too often.
From the equation for $\sigma_t^2$, it is easily seen that $\partial\sigma_t^2/\partial\tau < 0$, and hence that the true variance of $\hat{\beta}_{OLS}$ is even larger than the estimated $\mathrm{var}(\hat{\beta}_{OLS})$ when $\tau < 1 - \rho^2$. Hence, the true t-ratio decreases further and, compared to the stationary case, OLS rejects more often. The opposite holds for $\tau > 1 - \rho^2$.
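The over-rejection can be illustrated with a small Monte Carlo experiment. The sketch below (my own construction; the smooth positive regressor, sample size, and parameter values are assumptions chosen so that the cross-product terms in the double summation are positive) simulates the model under $H_0\!: \beta = 0$ and reports the empirical size of the usual OLS t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, reps, rho, sigma_eps, level = 50, 10_000, 0.8, 1.0, 0.05
x = 1 + 0.5 * np.sin(2 * np.pi * np.arange(T) / T)   # positive, slowly varying regressor
tcrit = stats.t.ppf(1 - level / 2, T - 1)

for tau in (0.1, 1 - rho**2, 0.9):
    u = rng.normal(0, sigma_eps / np.sqrt(tau), reps)    # u_0, var = sigma_eps^2 / tau
    U = np.empty((T, reps))
    for t in range(T):
        u = rho * u + rng.normal(0, sigma_eps, reps)
        U[t] = u
    bhat = x @ U / (x @ x)                               # OLS slope; y_t = u_t under H0
    resid = U - np.outer(x, bhat)
    se = np.sqrt((resid**2).sum(axis=0) / (T - 1) / (x @ x))   # usual OLS standard error
    rate = np.mean(np.abs(bhat / se) > tcrit)
    print(f"tau = {tau:.2f}: empirical size = {rate:.3f} (nominal {level})")
```

With positively autocorrelated disturbances and a positive regressor, the empirical size exceeds the nominal 5% level, illustrating the over-rejection discussed above; comparing the three values of $\tau$ mirrors the effect of the initial variance.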