Autoregressive Models
5.2.1 First-Order Autoregressive Model
Consider a sequence of random variables $\{y_t\}$, $t = 0, \pm 1, \pm 2, \ldots$, which follows

$$y_t = \rho y_{t-1} + \epsilon_t, \tag{5.2.1}$$
where we assume
Assumption A. $\{\epsilon_t\}$, $t = 0, \pm 1, \pm 2, \ldots$, are i.i.d. with $E\epsilon_t = 0$ and $E\epsilon_t^2 = \sigma^2$, and $\epsilon_t$ is independent of $y_{t-1}, y_{t-2}, \ldots$.

Assumption B. $|\rho| < 1$.

Assumption C. $Ey_t = 0$ and $Ey_t y_{t+h} = \gamma_h$ for all $t$. (That is, $\{y_t\}$ are weakly stationary.)
Model (5.2.1) with Assumptions A, B, and C is called a stationary first-order autoregressive model, abbreviated as AR(1).
From (5.2.1) we have
$$y_t = \rho^s y_{t-s} + \sum_{j=0}^{s-1} \rho^j \epsilon_{t-j}. \tag{5.2.2}$$
But $\lim_{s \to \infty} E(\rho^s y_{t-s})^2 = 0$ because of Assumptions B and C. Therefore we have
$$y_t = \sum_{j=0}^{\infty} \rho^j \epsilon_{t-j}, \tag{5.2.3}$$
which means that the partial sums of the right-hand side converge to $y_t$ in mean square. Model (5.2.1) with Assumptions A, B, and C is equivalent to model (5.2.3) with Assumptions A and B. The latter is called the moving-average representation of the former.
A quick mechanical way to obtain the moving-average representation (5.2.3) from (5.2.1), and vice versa, is to define the lag operator $L$ such that $Ly_t = y_{t-1}$, $L^2 y_t = y_{t-2}$, and so on. Then (5.2.1) can be written as
$$(1 - \rho L) y_t = \epsilon_t, \tag{5.2.4}$$
where $1$ is the identity operator such that $1 \cdot y_t = y_t$. Therefore
$$y_t = (1 - \rho L)^{-1} \epsilon_t = \sum_{j=0}^{\infty} \rho^j L^j \epsilon_t, \tag{5.2.5}$$
which is (5.2.3).
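As a numerical illustration (not from the original text; the parameter values and the truncation point are arbitrary choices), the following sketch checks that the truncated sum in (5.2.3) reproduces a simulated AR(1) path once enough lags are included:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, sigma, n = 0.8, 1.0, 700

# Generate an AR(1) path recursively from (5.2.1), started at y_0 = 0;
# for large t the effect of the starting value is negligible.
eps = sigma * rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + eps[t]

# Truncated moving-average representation (5.2.3):
# y_t is approximately sum_{j=0}^{J} rho^j eps_{t-j}.
t, J = 600, 50
ma_sum = sum(rho**j * eps[t - j] for j in range(J + 1))
print(y[t], ma_sum)  # nearly identical, since rho^J is negligible
```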
An AR(1) process can be generated as follows: Define $y_0$ as a random variable independent of $\epsilon_1, \epsilon_2, \ldots$, with $Ey_0 = 0$ and $Ey_0^2 = \sigma^2/(1 - \rho^2)$. Then define $y_t$ by (5.2.2) after putting $s = t$.
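A minimal sketch of this construction (drawing $y_0$ from a normal distribution is my own choice; the text fixes only the first two moments of $y_0$):

```python
import numpy as np

def ar1_path(rho, sigma, T, rng):
    """Stationary AR(1) path as constructed in the text: y_0 has mean 0
    and variance sigma^2/(1 - rho^2) and is independent of eps_1, eps_2,
    ...; thereafter y_t = rho*y_{t-1} + eps_t.  Gaussianity of y_0 is an
    illustrative assumption, not part of the text's recipe."""
    y = np.empty(T + 1)
    y[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - rho**2))
    for t in range(1, T + 1):
        y[t] = rho * y[t - 1] + rng.normal(0.0, sigma)
    return y

rng = np.random.default_rng(0)
y = ar1_path(rho=0.8, sigma=1.0, T=2000, rng=rng)
print(y.var())  # roughly sigma^2/(1 - rho^2) = 2.78 for long paths
```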
The autocovariances $\{\gamma_h\}$ can be expressed as functions of $\rho$ and $\sigma^2$ as follows: Multiplying (5.2.1) by $y_{t-h}$ and taking the expectation yields
$$\gamma_h = \rho \gamma_{h-1}, \qquad h = 1, 2, \ldots. \tag{5.2.6}$$
From (5.2.1), $E(y_t - \rho y_{t-1})^2 = E\epsilon_t^2$, so that we have
$$(1 + \rho^2)\gamma_0 - 2\rho\gamma_1 = \sigma^2. \tag{5.2.7}$$
Solving (5.2.6) and (5.2.7), we obtain

$$\gamma_h = \frac{\sigma^2 \rho^h}{1 - \rho^2}, \qquad h = 0, 1, 2, \ldots. \tag{5.2.8}$$

(Repeated application of (5.2.6) gives $\gamma_h = \rho^h \gamma_0$; substituting $\gamma_1 = \rho \gamma_0$ into (5.2.7) then yields $\gamma_0 = \sigma^2/(1 - \rho^2)$.)
Note that Assumption C implies $\gamma_{-h} = \gamma_h$.
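To illustrate (5.2.8), one can compare sample autocovariances of a long simulated path with $\sigma^2 \rho^h/(1 - \rho^2)$; a sketch under the same illustrative assumptions as above:

```python
import numpy as np

rng = np.random.default_rng(1)
rho, sigma, n = 0.6, 1.0, 100_000

# Long stationary path: y_0 drawn with the stationary variance, then (5.2.1).
y = np.empty(n)
y[0] = rng.normal(0.0, sigma / np.sqrt(1 - rho**2))
eps = sigma * rng.standard_normal(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + eps[t]

for h in range(4):
    sample = np.mean(y[h:] * y[:n - h])       # sample autocovariance at lag h
    exact = sigma**2 * rho**h / (1 - rho**2)  # formula (5.2.8)
    print(h, round(sample, 3), round(exact, 3))
```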
Arranging the autocovariances in the form of a matrix as in (5.1.1), we obtain the autocovariance matrix of AR(1),
$$\Sigma_1 = \frac{\sigma^2}{1 - \rho^2} \begin{bmatrix} 1 & \rho & \rho^2 & \cdots & \rho^{T-1} \\ \rho & 1 & \rho & \cdots & \rho^{T-2} \\ \rho^2 & \rho & 1 & \cdots & \rho^{T-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho^{T-1} & \rho^{T-2} & \rho^{T-3} & \cdots & 1 \end{bmatrix}. \tag{5.2.9}$$
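Since the $(t, s)$ element of $\Sigma_1$ is $\gamma_{|t-s|}$, the matrix is Toeplitz; as an illustration (assuming NumPy and SciPy are available, with arbitrary parameter values), it can be built in one line:

```python
import numpy as np
from scipy.linalg import toeplitz

# Sigma_1 from (5.2.9): Toeplitz with first column gamma_0, gamma_1, ...
rho, sigma, T = 0.5, 1.0, 5
Sigma1 = sigma**2 / (1 - rho**2) * toeplitz(rho ** np.arange(T))
print(Sigma1)
```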
Now let us examine an alternative derivation of $\Sigma_1$ that is useful for deriving the determinant and the inverse of $\Sigma_1$ and is easily generalizable to higher-order processes. Define the $T$-vectors $\mathbf{y} = (y_1, y_2, \ldots, y_T)'$ and $\boldsymbol{\epsilon}_{(1)} = [(1 - \rho^2)^{1/2} y_1, \epsilon_2, \epsilon_3, \ldots, \epsilon_T]'$ and the $T \times T$ matrix

$$R_1 = \begin{bmatrix} (1 - \rho^2)^{1/2} & 0 & 0 & \cdots & 0 \\ -\rho & 1 & 0 & \cdots & 0 \\ 0 & -\rho & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & -\rho & 1 \end{bmatrix}. \tag{5.2.10}$$
Then we have

$$\boldsymbol{\epsilon}_{(1)} = R_1 \mathbf{y}. \tag{5.2.11}$$
But, because $E\boldsymbol{\epsilon}_{(1)} \boldsymbol{\epsilon}_{(1)}' = \sigma^2 I$, we obtain
$$\Sigma_1 = \sigma^2 R_1^{-1} (R_1')^{-1}, \tag{5.2.12}$$
which can be shown to be identical with (5.2.9). Taking the determinant of both sides of (5.2.12) yields
$$|\Sigma_1| = \frac{\sigma^{2T}}{1 - \rho^2}. \tag{5.2.13}$$
Inverting both sides of (5.2.12) yields
$$\Sigma_1^{-1} = \sigma^{-2} R_1' R_1 = \frac{1}{\sigma^2} \begin{bmatrix} 1 & -\rho & 0 & \cdots & 0 \\ -\rho & 1 + \rho^2 & -\rho & \cdots & 0 \\ 0 & -\rho & 1 + \rho^2 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & -\rho \\ 0 & 0 & \cdots & -\rho & 1 \end{bmatrix}. \tag{5.2.14}$$
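The following sketch (illustrative values; SciPy assumed available) verifies (5.2.12) through (5.2.14) numerically for a small $T$ by building $R_1$ exactly as defined in (5.2.10):

```python
import numpy as np
from scipy.linalg import toeplitz

rho, sigma, T = 0.5, 1.0, 5

# R_1: (1 - rho^2)^{1/2} in the top-left corner, 1 on the rest of the
# diagonal, and -rho on the subdiagonal, as in (5.2.10).
R1 = np.eye(T) - rho * np.eye(T, k=-1)
R1[0, 0] = np.sqrt(1 - rho**2)

# Sigma_1 built directly from the autocovariances (5.2.8)/(5.2.9).
Sigma1 = sigma**2 / (1 - rho**2) * toeplitz(rho ** np.arange(T))

# (5.2.12): Sigma_1 = sigma^2 R_1^{-1} (R_1')^{-1}
print(np.allclose(sigma**2 * np.linalg.inv(R1) @ np.linalg.inv(R1.T), Sigma1))

# (5.2.13): |Sigma_1| = sigma^{2T} / (1 - rho^2)
print(np.isclose(np.linalg.det(Sigma1), sigma**(2 * T) / (1 - rho**2)))

# (5.2.14): Sigma_1^{-1} = R_1' R_1 / sigma^2, a tridiagonal matrix
print(np.allclose(np.linalg.inv(Sigma1), R1.T @ R1 / sigma**2))
```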
By inserting (5.2.8) into (5.1.2), we can derive the spectral density of AR(1):
$$\begin{aligned} f_1(\omega) &= \frac{\sigma^2}{1 - \rho^2} \sum_{h=-\infty}^{\infty} \rho^{|h|} e^{-i\omega h} \\ &= \frac{\sigma^2}{1 - \rho^2} \left[ 1 + \sum_{h=1}^{\infty} (\rho e^{-i\omega})^h + \sum_{h=1}^{\infty} (\rho e^{i\omega})^h \right] \\ &= \frac{\sigma^2}{1 - \rho^2} \left[ 1 + \frac{\rho e^{-i\omega}}{1 - \rho e^{-i\omega}} + \frac{\rho e^{i\omega}}{1 - \rho e^{i\omega}} \right] \\ &= \frac{\sigma^2}{1 - 2\rho \cos\omega + \rho^2}. \end{aligned} \tag{5.2.15}$$
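As a quick numerical check of this closed form (my own illustration with arbitrary values; note that no $1/2\pi$ factor appears, matching the convention of (5.1.2) as used here):

```python
import numpy as np

rho, sigma, omega = 0.6, 1.0, 1.0  # omega is an arbitrary frequency

# Partial sum of the defining series  sum_h gamma_h e^{-i omega h},
# with gamma_h from (5.2.8); the tail beyond |h| = 200 is negligible.
h = np.arange(-200, 201)
gamma = sigma**2 * rho ** np.abs(h) / (1 - rho**2)
series = np.sum(gamma * np.exp(-1j * omega * h)).real

closed = sigma**2 / (1 - 2 * rho * np.cos(omega) + rho**2)
print(series, closed)  # agree to within the truncation error
```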