Nonlinear Least Squares
Perhaps the best way to estimate a linear model that is autocorrelated is using nonlinear least squares. As it turns out, the nonlinear least squares estimator only requires that the errors be stable (not necessarily stationary). The other methods commonly used make stronger demands on the data, namely that the errors be covariance stationary. Furthermore, the nonlinear least squares estimator gives you an unconditional estimate of the autocorrelation parameter, ρ, and yields a simple t-test of the hypothesis of no serial correlation. Monte Carlo studies show that it performs well in small samples as well. So with all this going for it, why not use it?
The biggest reason is that nonlinear least squares requires more computational power than linear estimation, though this is not much of a constraint these days. Also, in gretl it requires an extra step on your part. You have to type in an equation for gretl to estimate. This is the way one works in EViews and other software by default, so the burden here is relatively low.
Nonlinear least squares (and other nonlinear estimators) uses numerical methods rather than analytical ones to find the minimum of your sum of squared errors objective function. The routines that do this are iterative. You give the program a good first guess for the parameter values and it evaluates the sum of squares function at that guess. The program then looks at the slope of the sum of squares function at the guess, points you in a direction that leads toward smaller values of the objective function, and computes a step in the parameter space that takes you some distance toward the minimum (further down the hill). If an improvement in the sum of squared errors function is found, the new parameter values are used as the basis for another step. Iterations continue until no further significant reduction in the sum of squared errors function can be found.
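To make the idea concrete, here is a hand-rolled sketch of this kind of iteration: a plain steepest-descent loop on a one-parameter sum-of-squares function. The simulated data, starting value, and step length are all hypothetical choices for illustration, and gretl's own nls routine is considerably more sophisticated than this:

# Illustrative only: steepest descent on S(theta) = sum((y - theta*x)^2)
nulldata 50
set seed 1234
series x = normal()
series y = 0.5*x + normal()
scalar theta = 0              # initial guess
scalar step = 0.005           # step length (arbitrary)
loop i = 1..25
    # slope of the sum-of-squares function at the current guess
    scalar slope = -2 * sum(x * (y - theta*x))
    # take a step downhill in the parameter space
    scalar theta = theta - step * slope
endloop
printf "theta after 25 iterations: %g (OLS estimate: %g)\n", \
  theta, sum(x*y)/sum(x^2)

After enough iterations the estimate settles at the value that minimizes the sum of squares, which in this one-parameter linear case is just the OLS estimate.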
In the context of the Phillips curve, the AR(1) model is
\[ \mathit{inf}_t = \beta_1(1-\rho) + \beta_2(\Delta u_t - \rho\,\Delta u_{t-1}) + \rho\,\mathit{inf}_{t-1} + v_t \qquad (9.8) \]
The errors, vt, are random and the goal is to find the β1, β2, and ρ that minimize the sum of squared errors, Σvt². Ordinary least squares is a good place to start in this case. The OLS estimates are consistent, so we'll start our numerical routine there, setting ρ equal to zero. The gretl script to do this follows:
open "@gretldir\data\poe\phillips_aus.gdt"
diff u                      # create d_u, the first difference of u
ols inf const d_u --quiet   # OLS supplies consistent starting values

scalar beta1 = $coeff(const)
scalar beta2 = $coeff(d_u)
scalar rho = 0              # start the search at rho = 0

nls inf = beta1*(1-rho) + rho*inf(-1) + beta2*(d_u - rho*d_u(-1))
    params rho beta1 beta2
end nls
Magically, this yields the same result as the one in your text!
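As noted above, one payoff of this approach is a simple t-test of no serial correlation. A minimal sketch of how it might be computed after the nls block, assuming the $coeff and $stderr accessors pick up the parameter by name:

# hypothetical follow-up: t-test of rho = 0 from the nls output
scalar t_rho = $coeff(rho) / $stderr(rho)
pvalue t $df t_rho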
The nls command block is initiated with nls followed by the equation representing the systematic portion of your model, and is closed by the statement end nls. When possible, it is a good idea to supply analytical derivatives for nonlinear optimization. In this case I did not, opting instead to let gretl take numerical derivatives (a sketch using analytical derivatives appears below). When numerical derivatives are used, the params statement is required so that gretl knows what to take derivatives with respect to. In the script, I used gretl's built-in functions to take differences and lags. Hence, inf(-1) is the variable inf lagged by one period. In this way you can create lags or leads of various lengths in your gretl programs without explicitly having to create new variables via the genr or series command. The results of nonlinear least squares appear below in Figure 9.13.
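For completeness, here is a sketch of how the same model could be specified with analytical derivatives, using gretl's deriv statements inside the nls block (one per parameter, with the derivatives of the right-hand side of (9.8) worked out by hand); when derivatives are supplied, the params line is not needed:

nls inf = beta1*(1-rho) + rho*inf(-1) + beta2*(d_u - rho*d_u(-1))
    # analytical derivatives with respect to each parameter
    deriv rho = inf(-1) - beta1 - beta2*d_u(-1)
    deriv beta1 = 1 - rho
    deriv beta2 = d_u - rho*d_u(-1)
end nls

Supplying derivatives this way generally speeds convergence and avoids the small approximation error inherent in numerical differentiation.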