A.2 Solving and estimating rational expectations models
To make the exposition self-contained, this appendix illustrates the solution and estimation of simple models with forward-looking variables, the illustration being the hybrid ‘New Keynesian Phillips curve’. Finally, we comment on a problem of observational equivalence, or lack of identification, within this class of models.
A sufficiently rich data generating process (DGP) to illustrate the techniques is
\Delta p_t = b_{p1}^{f} E_t \Delta p_{t+1} + b_{p1}^{b} \Delta p_{t-1} + b_{p2} x_t + \varepsilon_{pt},   (A.9)
x_t = b_x x_{t-1} + \varepsilon_{xt},   (A.10)
where all coefficients are assumed to be between zero and one. All of the techniques rely on the law of iterated expectations,
E_t E_{t+k} x_{t+j} = E_t x_{t+j}, \quad k < j,
saying that the expected revision of expectations, given more information, is zero.
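As an illustration, a small simulation (with hypothetical parameter values) confirms that, for the AR(1) forcing process in (A.10), revisions of expectations average to zero:

```python
import numpy as np

# Illustrative check of the law of iterated expectations for the AR(1)
# forcing process x_t = b_x * x_{t-1} + e_xt (equation (A.10)).
# Conditional on period-t information, E_t x_{t+2} = b_x^2 * x_t, while
# the period-(t+1) forecast is E_{t+1} x_{t+2} = b_x * x_{t+1}.  The
# revision E_{t+1} x_{t+2} - E_t x_{t+2} = b_x * e_{x,t+1} must average
# to zero.  Parameter values below are hypothetical.
rng = np.random.default_rng(0)
b_x, x_t, n = 0.8, 1.0, 200_000

x_t1 = b_x * x_t + rng.standard_normal(n)   # n draws of x_{t+1}
revision = b_x * x_t1 - b_x**2 * x_t        # forecast revisions

print(revision.mean())                      # close to zero
```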
The first method is the brute-force solution and is therefore cumbersome, but since it is instructive to see exactly what goes on, we begin with it.
We start by using a trick to get rid of the lagged dependent variable, following Pesaran (1987, pp. 108-109), by implicitly defining \eta_t as
\Delta p_t = \eta_t + a \Delta p_{t-1},   (A.11)
where a will turn out to be the backward stable root of the process for \Delta p_t. We take expectations one period ahead:
E_t \Delta p_{t+1} = E_t \eta_{t+1} + a E_t \Delta p_t,
E_t \Delta p_{t+1} = E_t \eta_{t+1} + a \eta_t + a^2 \Delta p_{t-1}.
Next, we substitute for E_t \Delta p_{t+1} in the original model:
\eta_t + a \Delta p_{t-1} = b_{p1}^{f} (E_t \eta_{t+1} + a \eta_t + a^2 \Delta p_{t-1}) + b_{p1}^{b} \Delta p_{t-1} + b_{p2} x_t + \varepsilon_{pt},
and solve for \eta_t:
\eta_t = \frac{b_{p1}^{f}}{1 - b_{p1}^{f} a} E_t \eta_{t+1} + \frac{b_{p1}^{f} a^2 - a + b_{p1}^{b}}{1 - b_{p1}^{f} a} \Delta p_{t-1} + \frac{b_{p2}}{1 - b_{p1}^{f} a} x_t + \frac{1}{1 - b_{p1}^{f} a} \varepsilon_{pt}.
The parameter a is defined by
b_{p1}^{f} a^2 - a + b_{p1}^{b} = 0,   (A.12)
so that the coefficient on \Delta p_{t-1} vanishes, with the solutions
a_{1,2} = \frac{1 \mp \sqrt{1 - 4 b_{p1}^{f} b_{p1}^{b}}}{2 b_{p1}^{f}}.   (A.13)
The model will typically have saddle-point behaviour, with one root bigger than one and one smaller than one in absolute value. In the following we use the backward stable solution, defined by |a_1| < 1.
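For concreteness, the characteristic equation can be solved numerically; with hypothetical coefficient values in (0, 1) the saddle-point pattern, one root inside and one outside the unit circle, emerges directly:

```python
import numpy as np

# Roots of the characteristic equation  b_f * a^2 - a + b_b = 0
# for hypothetical coefficient values b_f, b_b in (0, 1).
b_f, b_b = 0.6, 0.3

roots = np.roots([b_f, -1.0, b_b])   # solve b_f*a^2 - a + b_b = 0
a1 = roots[np.abs(roots) < 1][0]     # backward stable root, |a1| < 1
a2 = roots[np.abs(roots) > 1][0]     # explosive root, |a2| > 1

print(a1, a2)
```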
In passing it might be noted that the restriction b_{p1}^{b} = 1 - b_{p1}^{f}, often imposed in the literature, implies the roots
a_1 = \frac{1 - b_{p1}^{f}}{b_{p1}^{f}}, \quad a_2 = 1,
as given by (A.13) as before. We choose |a_1| < 1 in the following. So we now have a pure forward-looking model:
\eta_t = \frac{b_{p1}^{f}}{1 - b_{p1}^{f} a_1} E_t \eta_{t+1} + \frac{b_{p2}}{1 - b_{p1}^{f} a_1} x_t + \frac{1}{1 - b_{p1}^{f} a_1} \varepsilon_{pt}.
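The restricted case can be checked the same way: with the hypothetical value b_f = 0.6, so that b_b = 1 - b_f = 0.4, the quadratic factors into the roots (1 - b_f)/b_f and 1:

```python
import numpy as np

# Under the restriction b_b = 1 - b_f the roots of b_f*a^2 - a + b_b = 0
# are (1 - b_f)/b_f and 1.  The value of b_f is hypothetical; b_f > 1/2
# makes (1 - b_f)/b_f the stable root a_1.
b_f = 0.6
b_b = 1.0 - b_f

roots = sorted(np.roots([b_f, -1.0, b_b]))

print(roots)   # [(1 - b_f)/b_f, 1.0]
```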
Finally, using the relationship a_1 + a_2 = 1/b_{p1}^{f} between the roots, so that 1 - b_{p1}^{f} a_1 = b_{p1}^{f} a_2, the model becomes
\eta_t = \gamma E_t \eta_{t+1} + \delta x_t + v_{pt},   (A.14)
where \gamma \equiv 1/a_2, \delta \equiv b_{p2}/(b_{p1}^{f} a_2) and v_{pt} \equiv \varepsilon_{pt}/(b_{p1}^{f} a_2).
Following Davidson (2000, pp. 109-110), we now derive the solution in two steps:
1. Find E_t \eta_{t+1}.
2. Solve for \eta_t.
Find E_t \eta_{t+1}. Define the expectation errors as:
\bar{\eta}_{t+1} = \eta_{t+1} - E_t \eta_{t+1}.
We start by reducing the model to a single equation:
\eta_t = \gamma \eta_{t+1} + \delta b_x x_{t-1} + \delta \varepsilon_{xt} + v_{pt} - \gamma \bar{\eta}_{t+1}.
Solving forwards then produces:
\eta_t = \gamma (\gamma \eta_{t+2} + \delta b_x x_t + \delta \varepsilon_{x,t+1} + v_{p,t+1} - \gamma \bar{\eta}_{t+2}) + \delta b_x x_{t-1} + \delta \varepsilon_{xt} + v_{pt} - \gamma \bar{\eta}_{t+1}
= (\delta b_x x_{t-1} + \delta \varepsilon_{xt} + v_{pt} - \gamma \bar{\eta}_{t+1}) + \gamma (\delta b_x x_t + \delta \varepsilon_{x,t+1} + v_{p,t+1} - \gamma \bar{\eta}_{t+2}) + \gamma^2 \eta_{t+2}
\vdots
= \sum_{j=0}^{n} \gamma^j (\delta b_x x_{t+j-1} + \delta \varepsilon_{x,t+j} + v_{p,t+j} - \gamma \bar{\eta}_{t+j+1}) + \gamma^{n+1} \eta_{t+n+1}.
By imposing the transversality condition:
\lim_{n \to \infty} \gamma^{n+1} \eta_{t+n+1} = 0,
and then taking expectations conditional on information at time t, we get the ‘discounted solution’:
E_t \eta_{t+1} = \sum_{j=0}^{\infty} \gamma^j (\delta b_x E_t x_{t+j} + \delta E_t \varepsilon_{x,t+j+1} + E_t v_{p,t+j+1} - \gamma E_t \bar{\eta}_{t+j+2})
= \sum_{j=0}^{\infty} \gamma^j \delta b_x E_t x_{t+j},   (A.15)
since the expected future shocks and expectation errors are all zero.
However, we know the process for the forcing variable, so:
E_t x_t = x_t,
E_t x_{t+1} = b_x x_t,
E_t x_{t+2} = E_t (E_{t+1} x_{t+2}) = E_t b_x x_{t+1} = b_x^2 x_t,
\vdots
E_t x_{t+j} = b_x^j x_t.
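The j-step-ahead formula E_t x_{t+j} = b_x^j x_t can be checked by simulation (hypothetical parameter values):

```python
import numpy as np

# Monte Carlo check that E_t x_{t+j} = b_x^j * x_t for the AR(1) process
# x_t = b_x * x_{t-1} + e_xt.  Many paths are simulated forward j steps
# from the same x_t; the sample mean should be close to b_x^j * x_t.
# Parameter values are hypothetical.
rng = np.random.default_rng(1)
b_x, x_t, j, n_paths = 0.8, 1.5, 3, 500_000

x = np.full(n_paths, x_t)
for _ in range(j):
    x = b_x * x + rng.standard_normal(n_paths)

print(x.mean(), b_x**j * x_t)   # sample mean close to b_x^j * x_t
```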
Hence
E_t \eta_{t+1} = \sum_{j=0}^{\infty} \gamma^j \delta b_x b_x^j x_t = \delta b_x \sum_{j=0}^{\infty} (\gamma b_x)^j x_t = \frac{\delta b_x}{1 - \gamma b_x} x_t,   (A.16)
provided that |\gamma b_x| < 1.
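A quick numerical sketch (all parameter values hypothetical) confirms both the geometric sum and that the implied \eta_t solves the forward-looking model \eta_t = \gamma E_t \eta_{t+1} + \delta x_t + v_{pt}:

```python
# Check of the discounted solution with hypothetical parameter values.
# First, the truncated sum  sum_j gamma^j * delta * b_x * b_x^j * x_t
# converges to  delta * b_x / (1 - gamma * b_x) * x_t  when |gamma*b_x| < 1.
# Second, eta_t built from this closed form satisfies
# eta_t = gamma * E_t eta_{t+1} + delta * x_t + v_pt.
gamma, delta, b_x = 0.7, 0.5, 0.8
x_t, v_pt = 2.0, 0.3

truncated = sum((gamma * b_x) ** j for j in range(200)) * delta * b_x * x_t
closed_form = delta * b_x / (1.0 - gamma * b_x) * x_t
print(truncated, closed_form)        # the two agree

E_eta_next = closed_form             # = E_t eta_{t+1}
eta_t = gamma * E_eta_next + delta * x_t + v_pt
lhs = delta / (1.0 - gamma * b_x) * x_t + v_pt
print(eta_t, lhs)                    # eta_t = delta/(1-gamma*b_x)*x_t + v_pt
```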
Solve for \eta_t. Finally, substituting (A.16) into \eta_t = \gamma E_t \eta_{t+1} + \delta x_t + v_{pt} and using (A.11), we get the complete solution: