Introduction to the Mathematical and Statistical Foundations of Econometrics

Weak Laws of Large Numbers for Stationary Processes

I will now show that covariance stationary time series processes with a vanishing memory obey a weak law of large numbers and then specialize this result to strictly stationary processes.

Let $X_t \in \mathbb{R}$ be a covariance stationary process, that is, for all $t$, $E[X_t] = \mu$, $\mathrm{var}(X_t) = \sigma^2$, and $\mathrm{cov}(X_t, X_{t-m}) = \gamma(m)$. If $X_t$ has a vanishing memory, then by Theorem 7.1 there exist uncorrelated random variables $U_t \in \mathbb{R}$ with zero expectations and common finite variance $\sigma_u^2$ such that $X_t - \mu = \sum_{m=0}^{\infty} \alpha_m U_{t-m}$, where $\sum_{m=0}^{\infty} \alpha_m^2 < \infty$. Then

$$\gamma(k) = E\left[\left(\sum_{m=0}^{\infty} \alpha_{m+k} U_{t-m}\right)\left(\sum_{m=0}^{\infty} \alpha_m U_{t-m}\right)\right] = \sigma_u^2 \sum_{m=0}^{\infty} \alpha_m \alpha_{m+k}. \tag{7.11}$$

Because $\sum_{m=0}^{\infty} \alpha_m^2 < \infty$, it follows that $\lim_{k\to\infty} \sum_{m=k}^{\infty} \alpha_m^2 = 0$. Hence, it follows from (7.11) and the Schwarz inequality that

$$|\gamma(k)| \le \sigma_u^2 \sqrt{\sum_{m=0}^{\infty} \alpha_m^2}\,\sqrt{\sum_{m=k}^{\infty} \alpha_m^2} \to 0 \quad \text{as } k \to \infty.$$
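As a numerical sanity check (added here; not part of the original text), the following sketch evaluates the series in (7.11) and the Schwarz bound above for the hypothetical geometric choice $\alpha_m = \rho^m$, truncating the infinite sums at $M$ terms:

```python
import numpy as np

# Hypothetical geometric coefficients alpha_m = rho^m (so sum alpha_m^2 < inf);
# the infinite sums in (7.11) and in the Schwarz bound are truncated at M terms.
rho, sigma_u2, M = 0.9, 1.0, 2000
alpha = rho ** np.arange(M)

def gamma(k):
    # gamma(k) = sigma_u^2 * sum_m alpha_m * alpha_{m+k}, truncated at M terms
    return sigma_u2 * np.sum(alpha[: M - k] * alpha[k:])

def schwarz_bound(k):
    # sigma_u^2 * sqrt(sum_{m>=0} alpha_m^2) * sqrt(sum_{m>=k} alpha_m^2)
    return sigma_u2 * np.sqrt(np.sum(alpha**2)) * np.sqrt(np.sum(alpha[k:] ** 2))

for k in (0, 10, 50):
    assert abs(gamma(k)) <= schwarz_bound(k) + 1e-9
    print(k, gamma(k), schwarz_bound(k))
```

For this geometric choice the Schwarz inequality happens to hold with equality, because $\alpha_{m+k} = \rho^k \alpha_m$ makes the two sequences proportional; for general square-summable coefficients it is only an upper bound.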

Consequently,

$$\mathrm{var}\left(\frac{1}{n}\sum_{t=1}^{n} X_t\right) = \frac{\sigma^2}{n} + \frac{2}{n^2}\sum_{m=1}^{n-1}(n-m)\gamma(m) \le \sigma^2/n + 2(1/n)\sum_{m=1}^{n-1}|\gamma(m)| \to 0 \quad \text{as } n \to \infty, \tag{7.12}$$

where the convergence follows because $\gamma(m) \to 0$ implies that the Cesàro average $(1/n)\sum_{m=1}^{n-1}|\gamma(m)|$ converges to zero as well. From Chebyshev's inequality, it now follows from (7.12) that

Theorem 7.3: If $X_t$ is a covariance stationary time series process with vanishing memory, then $\mathrm{plim}_{n\to\infty}(1/n)\sum_{t=1}^{n} X_t = E[X_1]$.
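Theorem 7.3 can be illustrated numerically. The sketch below (an assumed example, not from the text) simulates a Gaussian AR(1) process, which is covariance stationary with vanishing memory, and shows the sample mean settling down near $\mu = E[X_t]$ as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical AR(1) example: X_t = mu + rho*(X_{t-1} - mu) + U_t with
# |rho| < 1, so X_t - mu = sum_{m>=0} rho^m U_{t-m} has vanishing memory.
mu, rho, sigma_u = 2.0, 0.8, 1.0

def sample_mean(n):
    """Simulate n observations of the stationary AR(1) and average them."""
    u = rng.normal(0.0, sigma_u, size=n)
    x = np.empty(n)
    # Start from the stationary distribution so the whole path is stationary.
    x[0] = mu + rng.normal(0.0, sigma_u / np.sqrt(1.0 - rho**2))
    for t in range(1, n):
        x[t] = mu + rho * (x[t - 1] - mu) + u[t]
    return x.mean()

for n in (100, 10_000, 1_000_000):
    print(n, sample_mean(n))  # the averages approach mu = 2 as n grows
```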

This result requires that the second moment of $X_t$ be finite. However, this condition can be relaxed by assuming strict stationarity:

Theorem 7.4: If $X_t$ is a strictly stationary time series process with vanishing memory, and $E[|X_1|] < \infty$, then $\mathrm{plim}_{n\to\infty}(1/n)\sum_{t=1}^{n} X_t = E[X_1]$.

Proof: Assume first that $P[X_t \ge 0] = 1$. For any positive real number $M$, $X_t I(X_t \le M)$ is a covariance stationary process with vanishing memory; hence, by Theorem 7.3,

$$\mathrm{plim}_{n\to\infty}\,(1/n)\sum_{t=1}^{n}\left(X_t I(X_t \le M) - E[X_1 I(X_1 \le M)]\right) = 0. \tag{7.13}$$

Next, observe that

$$\left|(1/n)\sum_{t=1}^{n}(X_t - E[X_1])\right| \le \left|(1/n)\sum_{t=1}^{n}\left(X_t I(X_t \le M) - E[X_1 I(X_1 \le M)]\right)\right| + (1/n)\sum_{t=1}^{n} X_t I(X_t > M) + E[X_1 I(X_1 > M)]. \tag{7.14}$$

Hence, for arbitrary $\varepsilon > 0$,

$$P\left[\left|(1/n)\sum_{t=1}^{n}(X_t - E[X_1])\right| > \varepsilon\right] \le P\left[\left|(1/n)\sum_{t=1}^{n}\left(X_t I(X_t \le M) - E[X_1 I(X_1 \le M)]\right)\right| > \varepsilon/2\right] + P\left[(1/n)\sum_{t=1}^{n} X_t I(X_t > M) + E[X_1 I(X_1 > M)] > \varepsilon/2\right]. \tag{7.15}$$

Because $E[|X_1|] < \infty$, we have $E[X_1 I(X_1 > M)] \to 0$ as $M \to \infty$; thus, for arbitrary $\delta \in (0, 1)$ we can choose $M$ so large that $E[X_1 I(X_1 > M)] \le \varepsilon\delta/8$. Then, by Chebyshev's inequality for first moments, the last probability in (7.15) satisfies

$$P\left[(1/n)\sum_{t=1}^{n} X_t I(X_t > M) + E[X_1 I(X_1 > M)] > \varepsilon/2\right] \le P\left[(1/n)\sum_{t=1}^{n} X_t I(X_t > M) > \varepsilon/4\right] \le \frac{4\,E[X_1 I(X_1 > M)]}{\varepsilon} \le \delta/2. \tag{7.16}$$

Moreover, it follows from (7.13) that there exists a natural number $n_0(\varepsilon, \delta)$ such that

$$P\left[\left|(1/n)\sum_{t=1}^{n}\left(X_t I(X_t \le M) - E[X_1 I(X_1 \le M)]\right)\right| > \varepsilon/2\right] \le \delta/2 \quad \text{if } n \ge n_0(\varepsilon, \delta). \tag{7.17}$$

If we combine (7.15)–(7.17), the theorem follows for the case $P[X_t \ge 0] = 1$. The general case follows easily from $X_t = \max(0, X_t) - \max(0, -X_t)$ and Slutsky's theorem. Q.E.D.
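Theorem 7.4 can be illustrated with an i.i.d. sequence, which is trivially strictly stationary with vanishing memory. The sketch below (an assumed example, not from the text) uses Pareto draws with finite mean but infinite variance, so Theorem 7.3 does not apply while Theorem 7.4 does; it also shows the truncation device $E[X_1 I(X_1 \le M)] \to E[X_1]$ used in the proof:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example: classical Pareto with tail index a = 1.5 on [1, inf).
# E[X_1] = a/(a-1) = 3 is finite, but var(X_1) is infinite, so the
# second-moment condition of Theorem 7.3 fails while Theorem 7.4 applies.
a = 1.5
x = 1.0 + rng.pareto(a, size=1_000_000)  # numpy's pareto() is Lomax; shift by 1

print(x.mean())  # close to E[X_1] = 3 despite the infinite variance

# The truncation device from the proof: E[X_1 I(X_1 <= M)] -> E[X_1] as M grows.
for M in (10.0, 100.0, 1000.0):
    print(M, np.mean(x * (x <= M)))
```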

Most stochastic dynamic macroeconomic models assume that the model variables are driven by independent random shocks; thus, the model variables involved are functions of these independent random shocks and their past. These random shocks are said to form a base for the model variables involved:

Definition 7.4: A time series process $U_t$ is a base for a time series process $X_t$ if, for each $t$, $X_t$ is measurable with respect to $\mathcal{F}_{-\infty}^{t} = \sigma(U_t, U_{t-1}, U_{t-2}, \ldots)$.

If $X_t$ has an independent base, then it has a vanishing memory owing to Kolmogorov's zero-one law:

Theorem 7.5: (Kolmogorov's zero-one law) Let $X_t$ be a sequence of independent random variables or vectors, and let $\mathcal{F}_{-\infty}^{t} = \sigma(X_t, X_{t-1}, X_{t-2}, \ldots)$. Then the sets in the remote $\sigma$-algebra $\mathcal{R}_{-\infty} = \bigcap_{t} \mathcal{F}_{-\infty}^{t}$, where the intersection is taken over all integers $t$, have either probability 0 or 1.

Proof: Denote by $\mathcal{F}_{t}^{t+k}$ the $\sigma$-algebra generated by $X_t, \ldots, X_{t+k}$. Moreover, denote by $\mathcal{F}_{t-m}^{t-1}$ the $\sigma$-algebra generated by $X_{t-1}, \ldots, X_{t-m}$. Each set $A_1$ in $\mathcal{F}_{t}^{t+k}$ takes the form

$$A_1 = \{\omega \in \Omega : (X_t(\omega), \ldots, X_{t+k}(\omega))^{\mathrm{T}} \in B_1\}$$

for some Borel set $B_1$ in $\mathbb{R}^{k+1}$. Similarly, each set $A_2$ in $\mathcal{F}_{t-m}^{t-1}$ takes the form

$$A_2 = \{\omega \in \Omega : (X_{t-1}(\omega), \ldots, X_{t-m}(\omega))^{\mathrm{T}} \in B_2\}$$

for some $m \ge 1$ and some Borel set $B_2$ in $\mathbb{R}^{m}$. Clearly, $A_1$ and $A_2$ are independent.

I will now show that the same holds for sets $A_2$ in $\mathcal{F}_{-\infty}^{t-1} = \sigma(\bigcup_{m=1}^{\infty} \mathcal{F}_{t-m}^{t-1})$, the smallest $\sigma$-algebra containing $\bigcup_{m=1}^{\infty} \mathcal{F}_{t-m}^{t-1}$. Note that $\bigcup_{m=1}^{\infty} \mathcal{F}_{t-m}^{t-1}$ may not be a $\sigma$-algebra itself, but it is easy to verify that it is an algebra because $\mathcal{F}_{t-m}^{t-1} \subset \mathcal{F}_{t-m-1}^{t-1}$. For a given set $C$ in $\mathcal{F}_{t}^{t+k}$ with positive probability and for all sets $A$ in $\bigcup_{m=1}^{\infty} \mathcal{F}_{t-m}^{t-1}$, we have $P(A|C) = P(A)$. Thus, $P(\cdot|C)$ is a probability measure on the algebra $\bigcup_{m=1}^{\infty} \mathcal{F}_{t-m}^{t-1}$, which has a unique extension to the smallest $\sigma$-algebra containing $\bigcup_{m=1}^{\infty} \mathcal{F}_{t-m}^{t-1}$ (see Chapter 1). Consequently, $P(A|C) = P(A)$ is true for all sets $A$ in $\mathcal{F}_{-\infty}^{t-1}$. Moreover, if $C$ has probability zero, then $P(A \cap C) \le P(C) = 0 = P(A)P(C)$. Thus, for all sets $C$ in $\mathcal{F}_{t}^{t+k}$ and all sets $A$ in $\mathcal{F}_{-\infty}^{t-1}$, $P(A \cap C) = P(A)P(C)$.

Next, let $A \in \bigcap_{t} \mathcal{F}_{-\infty}^{t}$, where the intersection is taken over all integers $t$, and let $C \in \bigcup_{k=1}^{\infty} \mathcal{F}_{t}^{t+k}$. Then for some $k$, $C$ is a set in $\mathcal{F}_{t}^{t+k}$ and $A$ is a set in $\mathcal{F}_{-\infty}^{t-m}$ for all $m$; therefore, $A \in \mathcal{F}_{-\infty}^{t-1}$, and hence $P(A \cap C) = P(A)P(C)$. By a similar argument it can be shown that $P(A \cap C) = P(A)P(C)$ for all sets $A \in \bigcap_{t} \mathcal{F}_{-\infty}^{t}$ and $C \in \sigma(\bigcup_{k=1}^{\infty} \mathcal{F}_{t}^{t+k})$. But $\mathcal{R}_{-\infty} = \bigcap_{t} \mathcal{F}_{-\infty}^{t} \subset \sigma(\bigcup_{k=1}^{\infty} \mathcal{F}_{t}^{t+k})$, and thus we may choose $C = A$. Consequently, for all sets $A \in \bigcap_{t} \mathcal{F}_{-\infty}^{t}$, $P(A) = P(A)^2$, which implies that $P(A)$ is either zero or one. Q.E.D.
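As a concrete illustration (an added example, not from the original text), the zero-one law applies to any event that is unaffected by the most recent observations, for instance the convergence of a Cesàro average of past shocks:

```latex
% Illustrative example: a remote event for an independent base U_t.
A \;=\; \Bigl\{\omega \in \Omega \,:\,
    \lim_{m\to\infty} \tfrac{1}{m}\sum_{j=0}^{m-1} U_{t-j}(\omega)
    \text{ exists}\Bigr\}.
% Discarding any finite number of the most recent shocks leaves the limit
% unchanged, so A is measurable with respect to F_{-infty}^{t-m} for every m;
% hence A lies in the remote sigma-algebra R_{-infty}, and by Theorem 7.5
% P(A) is either 0 or 1.
```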
