Using gretl for Principles of Econometrics, 4th Edition

Another Test for Autocorrelation

Another way to determine whether or not your residuals are autocorrelated is to use an LM (Lagrange multiplier) test. For autocorrelation, this test is based on an auxiliary regression in which lagged least squares residuals are added to the original regression equation. If the coefficient on the lagged residual is significant, you conclude that the model's errors are autocorrelated. So, for a regression model $y_t = \beta_1 + \beta_2 x_t + e_t$ the first step is to estimate the parameters using least squares and save the residuals, $\hat{e}_t$. An auxiliary regression model is then formed using $\hat{e}_t$ as the dependent variable and the original regressors and the lagged value $\hat{e}_{t-1}$ as independent variables. The resulting auxiliary regression is

$\hat{e}_t = \gamma_1 + \gamma_2 x_t + \rho \hat{e}_{t-1} + v_t \qquad (9.7)$

Now, test the hypothesis $\rho = 0$ against the alternative that $\rho \neq 0$ and you are done. The test statistic is $TR^2$ from this regression, which has a $\chi^2(1)$ distribution if $H_0\colon \rho = 0$ is true. The script to accomplish this is:

ols inf const d_u            # original regression
series ehat = $uhat          # save the least squares residuals
ols ehat const d_u ehat(-1)  # auxiliary regression with the lagged residual
scalar TR2 = $trsq           # LM statistic = T times R-squared
pvalue X 1 TR2               # p-value from the chi-square(1) distribution

Estimating the statistic in this way causes the first observation to be dropped (since $\hat{e}_0$ is not observed). The result for the phillips_aus data reveals

Chi-square(1): area to the right of 27.6088 = 1.48501e-007 (to the left: 1)

The no autocorrelation null hypothesis is clearly rejected at any reasonable level of significance.

Gretl also includes a model test function that does the same thing. To use it, estimate the model of interest and then use modtest 1 --autocorr as shown here:

ols inf const d_u --quiet
modtest 1 --autocorr

The printout from modtest is fairly extensive, as shown here:

Breusch-Godfrey test for first-order autocorrelation
OLS, using observations 1987:2-2009:3 (T = 90)
Dependent variable: uhat

             coefficient    std. error   t-ratio    p-value
  const      -0.00216310    0.0551288    -0.03924   0.9688
  d_u        -0.151494      0.193671     -0.7822    0.4362
  uhat_1      0.558784      0.0900967     6.202     1.82e-08 ***

  Unadjusted R-squared = 0.306582

Test statistic: LMF = 38.465381,
with p-value = P(F(1,87) > 38.4654) = 1.82e-008

Alternative statistic: TR^2 = 27.592347,
with p-value = P(Chi-square(1) > 27.5923) = 1.5e-007

Ljung-Box Q' = 28.0056,
with p-value = P(Chi-square(1) > 28.0056) = 1.21e-007

Before explaining what is reported here, an important difference between the manual method and modtest needs to be pointed out. When modtest is used to perform this test, it sets $\hat{e}_0 = 0$, which is its expected value. By doing so, it is able to use the complete set of 90 observations in the data. The manual method used only 89. Hence, you'll get slightly different results depending on the size of your sample and the number of lags tested.
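If you want to reproduce the modtest numbers by hand, the same convention can be imposed in a script. The following is a minimal sketch, assuming the phillips_aus data with the variables inf and d_u are loaded; the gretl function misszero() replaces the missing initial lag with zero so that all 90 observations are used.

ols inf const d_u --quiet
series ehat = $uhat
series ehat_1 = ehat(-1)
series ehat_1 = misszero(ehat_1)   # unobserved e_0 replaced by its expected value, 0
ols ehat const d_u ehat_1 --quiet
scalar TR2 = $trsq
pvalue X 1 TR2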

The results themselves correspond to those found in POE4. The first thing to notice is that the t-ratio on uhat_1 is equal to 6.202, significantly different from zero at the 5% level. Next, the statistic named LMF actually performs an F-test of the no autocorrelation hypothesis based upon the regression. With only one autocorrelation parameter this is equivalent to the square of the t-ratio ($6.202^2 \approx 38.46$). The next test is the LM test, i.e., $TR^2$ from the auxiliary regression. Gretl also computes a Ljung-Box Q statistic whose null hypothesis is no autocorrelation. It too is significant at the 5% level. These results match those in POE4 exactly.
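If you would rather capture the statistic in a script than read it off the printout, gretl fills the $test and $pvalue accessors after modtest. A minimal sketch follows; exactly which of the reported variants $test stores after --autocorr is an assumption you should check against your own printout.

ols inf const d_u --quiet
modtest 1 --autocorr
scalar stat = $test      # statistic saved by modtest (which variant is an assumption; check)
scalar pval = $pvalue    # its p-value
printf "autocorrelation test = %g, p-value = %g\n", stat, pval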

If you prefer to use the dialogs, then estimate the model using least squares in the usual way (Model>Ordinary least squares). This generates a model window containing the regression results. From this select Tests>Autocorrelation to reveal a dialog box that allows you to choose the number of lagged values of $\hat{e}_t$ to include as regressors in the auxiliary regression. Choose the number of lags you want (in our case 4) and click OK. This will give you the same result as the script. The result appears in Figure 9.12. Note that the first statistic reported is simply the joint test that all the lagged values of $\hat{e}_t$ included in the auxiliary regression are jointly zero. The second one is the $TR^2$ version of the test done in the script. This example shows the relative strength of the LM test: it can be used to test for autocorrelation of any order. Other tests, like that of Durbin and Watson discussed later, are more difficult to carry out at higher orders. The LM test is also robust to having lag(s) of the dependent variable as a regressor.
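The same higher-order test can also be run from a script rather than the dialog; a minimal sketch for four lags, assuming the model above, is

ols inf const d_u --quiet
modtest 4 --autocorr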
