Expectations of Products of Independent Random Variables
Let X and Y be independent random variables, and let f and g be Borel-measurable functions on $\mathbb{R}$. I will now show that

$$
E[f(X)g(Y)] = (E[f(X)])(E[g(Y)]). \tag{2.30}
$$
Without the independence assumption, (2.30) does not hold in general, although there are cases in which it holds for dependent X and Y. As an example of a case in which (2.30) does not hold, let $X = U_0 \cdot U_1$ and $Y = U_0 \cdot U_2$, where $U_0$, $U_1$, and $U_2$ are independent and uniformly [0, 1] distributed, and let $f(x) = x$, $g(x) = x$. The joint density of $U_0$, $U_1$, and $U_2$ is

$$
h(u_0, u_1, u_2) =
\begin{cases}
1 & \text{if } (u_0, u_1, u_2)^{\mathsf{T}} \in [0, 1] \times [0, 1] \times [0, 1], \\
0 & \text{elsewhere};
\end{cases}
$$

hence,
$$
E[X \cdot Y] = E\left[U_0^2 U_1 U_2\right]
= \int_0^1 \int_0^1 \int_0^1 u_0^2 u_1 u_2 \, du_0 \, du_1 \, du_2
= \left(\int_0^1 u_0^2 \, du_0\right) \left(\int_0^1 u_1 \, du_1\right) \left(\int_0^1 u_2 \, du_2\right)
= \frac{1}{3} \times \frac{1}{2} \times \frac{1}{2} = \frac{1}{12},
$$
whereas

$$
E[X] = E[U_0 U_1] = \int_0^1 \int_0^1 u_0 u_1 \, du_0 \, du_1 = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}
$$

and, similarly, $E[Y] = 1/4$. Hence, $(E[X])(E[Y]) = 1/16 \neq 1/12 = E[X \cdot Y]$, and thus (2.30) fails.

As an example of dependent random variables X and Y for which (2.30) does hold, let $X = U_0(U_1 - 0.5)$ and $Y = U_0(U_2 - 0.5)$, again with $f(x) = x$, $g(x) = x$. Then, because $U_0$, $U_1$, and $U_2$ are independent,

$$
E[X \cdot Y] = E[U_0^2]\, E[U_1 - 0.5]\, E[U_2 - 0.5] = \frac{1}{3} \times 0 \times 0 = 0
$$

and, likewise, $E[X] = E[U_0]\, E[U_1 - 0.5] = 0$ and $E[Y] = 0$, so both sides of (2.30) are zero even though X and Y are dependent through the common factor $U_0$.
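Both examples are easy to check by simulation. The following is a minimal Python sketch, assuming NumPy is available; it approximates the expectations involved by Monte Carlo sample means (the variable names u0, u1, u2 mirror $U_0$, $U_1$, $U_2$ above):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 1_000_000

# U0, U1, U2: independent and uniformly [0, 1] distributed
u0, u1, u2 = rng.uniform(size=(3, n))

# First example: X = U0*U1, Y = U0*U2 are dependent, and (2.30) fails
x, y = u0 * u1, u0 * u2
print(np.mean(x * y))            # approx 1/12 = 0.0833...  (E[X*Y])
print(np.mean(x) * np.mean(y))   # approx 1/16 = 0.0625     (E[X]E[Y])

# Second example: X = U0*(U1 - 0.5), Y = U0*(U2 - 0.5) are also
# dependent, yet (2.30) holds with both sides equal to zero
x, y = u0 * (u1 - 0.5), u0 * (u2 - 0.5)
print(np.mean(x * y))            # approx 0
print(np.mean(x) * np.mean(y))   # approx 0
```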
For independent X and Y, however, (2.30) does hold. If, for instance, X and Y have a joint density that factors as $h(x, y) = h_X(x) h_Y(y)$, then the double integral factors as well:

$$
E[f(X)g(Y)] = \int \int f(x) g(y)\, h_X(x) h_Y(y) \, dx \, dy
= \left(\int f(x) h_X(x) \, dx\right) \left(\int g(y) h_Y(y) \, dy\right).
$$

The general result is the following theorem.

Theorem 2.20: Let X and Y be random vectors in $\mathbb{R}^p$ and $\mathbb{R}^q$, respectively. Then X and Y are independent if and only if $E[f(X)g(Y)] = (E[f(X)])(E[g(Y)])$ for all Borel-measurable functions f and g on $\mathbb{R}^p$ and $\mathbb{R}^q$, respectively, for which the expectations involved are defined.
This theorem implies that independent random variables are uncorrelated. The reverse, however, is in general not true. A counterexample is the case I considered before, namely, $X = U_0(U_1 - 0.5)$ and $Y = U_0(U_2 - 0.5)$, where $U_0$, $U_1$, and $U_2$ are independent and uniformly [0, 1] distributed. In this case, $E[X \cdot Y] = E[X] = E[Y] = 0$; hence, cov(X, Y) = 0, but X and Y are dependent owing to the common factor $U_0$. The latter can be shown formally in various ways, but the easiest is to verify that, for example, $E[X^2 \cdot Y^2] \neq (E[X^2])(E[Y^2])$, so that the dependence of X and Y follows from Theorem 2.20.
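To carry out this verification, factor each expectation over the independent $U_0$, $U_1$, and $U_2$, using $E[U_0^4] = 1/5$, $E[U_0^2] = 1/3$, and $E[(U_j - 0.5)^2] = 1/12$ for $j = 1, 2$:

$$
E[X^2 \cdot Y^2] = E[U_0^4]\, E[(U_1 - 0.5)^2]\, E[(U_2 - 0.5)^2] = \frac{1}{5} \times \frac{1}{12} \times \frac{1}{12} = \frac{1}{720},
$$

whereas

$$
(E[X^2])(E[Y^2]) = \left(\frac{1}{3} \times \frac{1}{12}\right)^2 = \frac{1}{1296} \neq \frac{1}{720}.
$$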