Bandwidth and Kernel
HAC is not quite as automatic as the heteroskedasticity consistent (HCCME) estimator in chapter 8. To be robust with respect to autocorrelation you have to specify how far away in time the autocorrelation is likely to be significant. Essentially, the autocorrelated errors over the chosen time window are averaged in the computation of the HAC standard errors; you have to specify how many periods to average over and how much weight to assign each residual in that average. The language of time-series analysis can often be opaque, and this is the case here. The weighting scheme is called a kernel and the number of terms to average is called the bandwidth. Just think of the kernel as another name for the weighted average and the bandwidth as the number of terms being averaged.
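To make the averaging idea concrete, the Bartlett kernel assigns declining weights to the autocovariances within the window; writing B for the bandwidth and j for the lag (symbols used here purely for illustration), the weight on lag j is

    w_j = 1 - j/(B + 1),   j = 0, 1, ..., B

so short lags get nearly full weight and lags near the edge of the window are shrunk toward zero.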
Now, what this has to do with gretl is fairly simple. You get to pick a method of averaging (Bartlett kernel or Parzen kernel) and a bandwidth (nw1, nw2, or some integer). Gretl defaults to the Bartlett kernel and the bandwidth nw1 = 0.75 × N^(1/3). As you can see, the bandwidth nw1 is computed based on the sample size, N. The nw2 bandwidth is nw2 = 4 × (N/100)^(2/9). This one appears to be the default in other programs like EViews.
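As a quick sketch of the arithmetic behind the two rules, the lines below evaluate them for the phillips_aus sample size used later (N = 90); the truncation to an integer lag with int() is only an illustration of how a fractional bandwidth ends up as a whole number of lags, not a claim about gretl's exact rounding.

set echo off
scalar N = 90
scalar bw1 = 0.75 * N^(1/3)      # gretl's default rule, nw1
scalar bw2 = 4 * (N/100)^(2/9)   # the nw2 rule
printf "nw1 = %.3f (lag %d), nw2 = %.3f (lag %d)\n", bw1, int(bw1), bw2, int(bw2)

For N = 90 both rules land on a bandwidth of 3, which matches the "bandwidth 3" reported in the model table below.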
Implicitly there is a trade-off to consider. Larger bandwidths reduce bias (good) but also reduce precision (bad). Smaller bandwidths exclude more of the relevant autocorrelations (and hence have more bias), but use more observations to compute the overall covariance and hence increase precision (smaller variance). The general principle is to choose a bandwidth that is large enough to contain the largest autocorrelations. The choice will ultimately depend on the frequency of observation and the length of time it takes for your system to adjust to shocks.
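One way to get a feel for this trade-off is simply to re-estimate the model with a couple of manual bandwidths and compare the reported HAC standard errors. The sketch below assumes phillips_aus.gdt is already open and d_u has been generated (as in the script that follows); the lags 2 and 8 are arbitrary illustrations, not recommendations.

set hac_lag 2
ols inf const d_u --robust
set hac_lag 8
ols inf const d_u --robust
set hac_lag nw1      # restore gretl's default bandwidth rule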
The bandwidth or kernel can be changed using the set command from the console or in a script. The set command is used to change various defaults in gretl and the relevant switches for our use are hac_lag and hac_kernel. The use of these is demonstrated below. The following script changes the kernel to bartlett and the bandwidth to nw2. Then the differences of the unemployment rate are generated. The Phillips curve is estimated by OLS using the ordinary covariance estimator and then by the HAC estimator. The results are collected in a model table.
open "@gretldir\data\poe\phillips_aus.gdt"
set hac_kernel bartlett
set hac_lag nw2
diff u
ols inf const d_u
modeltab add
ols inf const d_u --robust
modeltab add
modeltab show
The results from the model table are
OLS estimates
Dependent variable: inf

                 (OLS)        (OLS w/HAC)

  const         0.7776**       0.7776**
               (0.06582)      (0.1018)

  d_u          -0.5279**      -0.5279*
               (0.2294)       (0.3092)

  n              90             90
  R²             0.0568         0.0568
  ℓ            -83.96         -83.96

Standard errors in parentheses
* indicates significance at the 10 percent level
** indicates significance at the 5 percent level

HAC: bandwidth 3 - Bartlett kernel
Notice that the standard errors computed using HAC are a little different from those in Hill et al. (2011). No worries, though. They are statistically valid and suggest that EViews and gretl are doing the computations a bit differently.
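If you want to probe the discrepancy yourself, one possibility is to vary the kernel or impose an explicit integer bandwidth and see how the HAC standard errors move. The sketch below is only an experiment, and nothing in it asserts what EViews actually does.

set hac_kernel parzen    # try the alternative kernel
set hac_lag 4            # or impose an explicit integer bandwidth
ols inf const d_u --robust
set hac_kernel bartlett  # restore the defaults afterward
set hac_lag nw1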