Testing linear restrictions
Under the standard assumptions of the basic SUR model, Atkinson and Wilson (1992) prove that:
$$\operatorname{var}[\hat{\beta}_F(\hat{\Sigma})] \;\geq\; \operatorname{var}[\hat{\beta}_G(\Sigma)] \;\geq\; E[X'(\hat{\Sigma}^{-1} \otimes I_T)X]^{-1} \qquad (5.11)$$

where $\hat{\Sigma}$ is any unbiased estimator of $\Sigma$ and the inequalities refer to matrix differences. The first inequality indicates that FGLS is inefficient relative to GLS. The second inequality provides an indication of the bias in the conventional estimator of the asymptotic covariance matrix of FGLS. While the result requires unbiased estimation of $\Sigma$, which does not hold for most conventional estimators of SUR models, it conveniently highlights an important testing problem in SUR models. Asymptotic Wald (W), Lagrange multiplier (LM), and likelihood ratio (LR) tests are prone to be biased towards overrejection.
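To make the practical import of (5.11) concrete, the following minimal Monte Carlo sketch (all design settings are illustrative and not taken from Atkinson and Wilson, 1992) computes the FGLS estimator of a two-equation SUR system together with the conventional covariance estimator $[X'(\hat{\Sigma}^{-1} \otimes I_T)X]^{-1}$; the simulated standard deviation of the FGLS estimates typically exceeds the average conventional standard error, which is the downward bias that produces overrejection.

```python
# Illustrative Monte Carlo sketch of the understatement in (5.11); the design
# (T, N, regressors, Sigma) is hypothetical, chosen only for demonstration.
import numpy as np

rng = np.random.default_rng(0)
T, N = 25, 2                                  # observations per equation, equations
X1 = np.column_stack([np.ones(T), rng.normal(size=T)])
X2 = np.column_stack([np.ones(T), rng.normal(size=T)])
X = np.zeros((N * T, 4))                      # block-diagonal stacked regressors
X[:T, :2], X[T:, 2:] = X1, X2
beta = np.array([1.0, 2.0, -1.0, 0.5])
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])    # disturbance covariance

def fgls(y, X):
    """FGLS from OLS residuals, plus the conventional covariance estimator."""
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    e = (y - X @ b_ols).reshape(N, T)         # rows = equations
    S = e @ e.T / T                           # usual estimator of Sigma
    Omega_inv = np.kron(np.linalg.inv(S), np.eye(T))
    V = np.linalg.inv(X.T @ Omega_inv @ X)    # [X'(S^-1 (x) I_T)X]^-1
    return V @ X.T @ Omega_inv @ y, V

draws, ses = [], []
for _ in range(2000):
    u = rng.multivariate_normal(np.zeros(N), Sigma, size=T).T.ravel()
    b, V = fgls(X @ beta + u, X)
    draws.append(b)
    ses.append(np.sqrt(np.diag(V)))

print("Monte Carlo sd of FGLS estimates: ", np.std(draws, axis=0))
print("mean conventional standard errors:", np.mean(ses, axis=0))
# The second line is typically smaller, consistent with the overrejection
# of asymptotic W, LM, and LR tests noted in the text.
```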
Fiebig and Theil (1983) and Freedman and Peters (1984) report Monte Carlo work and empirical examples in which the asymptotic standard errors tended to understate the true variability of FGLS. Rosalsky, Finke, and Theil (1984) and Jensen (1995) report similar understatement of asymptotic standard errors for maximum likelihood estimation of nonlinear systems. Work by Laitinen (1978), Meisner (1979), and Bewley (1983) alerted applied researchers to the serious size problems of testing cross-equation restrictions in linear demand systems, especially when the number of equations specified was large relative to the available number of observations. Similar problems have been observed in finance in relation to testing restrictions associated with the CAPM (capital asset pricing model); see, for example, MacKinlay (1987).
In some special cases exact tests have been provided. For example, Laitinen (1978) derived an exact test based on Hotelling's $T^2$ statistic for testing homogeneity in demand systems. Bewley (1983), de Jong and Thompson (1990), and Stewart (1997) discuss a somewhat wider class of problems where similar results are available, but these are very special cases, and the search for solutions that involve test statistics with tractable distributions remains an open area. One contribution to this end is provided by Hashimoto and Ohtani (1990), who derive an exact test for linear restrictions on the regression coefficients in an SUR model. Apart from being confined to an SUR system with the same regressors appearing in each equation, the practical drawbacks of this test are that it is computationally complicated, has low power, and, to be feasible, requires a large number of observations. This last problem is especially troublesome, as this is exactly the situation where the conventional asymptotic tests are most unreliable.
One approach designed to produce tests with improved sampling properties is to use bootstrap methods. Following the advice of Freedman and Peters (1984), the studies by Williams (1986) and Eakin, McMillen, and Buono (1990) both used bootstrap methods for their estimation of standard errors. Unconditional acceptance of this advice was questioned by Atkinson and Wilson (1992), who compared the bias in the conventional and bootstrap estimators of coefficient standard errors in SUR models. While their theoretical results were inconclusive, their simulation results cautioned that neither of the estimators uniformly dominated and hence that bootstrapping provides little improvement in the estimation of standard errors for the regression coefficients in an SUR model.
Rilstone and Veall (1996) argue that an important qualification needs to be made to this somewhat negative conclusion of Atkinson and Wilson (1992). They demonstrated that bootstrapping could result in an improvement in inferences. Rather than using the bootstrap to provide an estimate of standard errors, Rilstone and Veall (1996) recommend bootstrapping the t-ratios. The appropriate percentiles of the bootstrap distribution of standardized FGLS estimators are then used to construct the bootstrap percentile-t interval. This is an example of what are referred to as pivotal methods. A statistic is (asymptotically) pivotal if its "limiting" distribution does not depend on unknown quantities. Theoretical work indicates that pivotal bootstrap methods provide a higher order of accuracy compared to the basic non-pivotal methods and, in particular, provide confidence intervals with superior coverage properties; see, for example, work cited in Jeong and Maddala (1993). This is precisely what Rilstone and Veall (1996) found.
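As a concrete sketch of the percentile-t procedure, assume a residual bootstrap that resamples the T time periods jointly across equations, preserving the contemporaneous correlation; `fgls` is a hypothetical helper with signature `fgls(y, X) -> (estimate, covariance)`, for instance the one sketched above.

```python
# Sketch of the bootstrap percentile-t interval for one SUR coefficient,
# in the spirit of Rilstone and Veall (1996); details are illustrative.
import numpy as np

def percentile_t_ci(y, X, N, T, fgls, j=0, B=999, alpha=0.05, seed=1):
    """Bootstrap-t confidence interval for coefficient j of an SUR system.
    fgls(y, X) is assumed to return (coefficient vector, covariance matrix)."""
    rng = np.random.default_rng(seed)
    b, V = fgls(y, X)
    se = np.sqrt(V[j, j])
    resid = (y - X @ b).reshape(N, T)          # rows = equations
    t_stats = np.empty(B)
    for r in range(B):
        idx = rng.integers(T, size=T)          # resample time periods jointly
        y_star = X @ b + resid[:, idx].ravel()
        b_star, V_star = fgls(y_star, X)
        # studentize each replication with its own standard error (pivotal)
        t_stats[r] = (b_star[j] - b[j]) / np.sqrt(V_star[j, j])
    lo, hi = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
    # invert the bootstrap distribution of the t-ratio around the estimate
    return b[j] - hi * se, b[j] - lo * se
```

Because the t-ratio is asymptotically pivotal, an interval of this form inherits the higher-order accuracy referred to above.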
Another potential approach to providing better inferences involves the use of improved estimators for the disturbance covariance matrix. Fiebig and Theil (1983) and Ullah and Racine (1992) use nonparametric density estimation as an alternative source of moment estimators. In the tradition of the method of moments, the covariance matrix of a nonparametric density estimator is advocated as an estimator of the population covariance matrix. The proposed SUR estimators have the same structure as the FGLS estimator of equation (5.5), differing only in the choice of estimator for $\Sigma$. Ullah and Racine (1992) prove that their nonparametric density estimator of $\Sigma$ can be expressed as the usual estimator $S$ plus a positive definite matrix that depends on the smoothing parameter chosen for the density estimation. It is this structure of the estimator that suggests the approach is potentially useful in large equation systems.
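The structure of the result can be illustrated under one convenient assumption, a product Gaussian kernel with common bandwidth h, in which case the covariance matrix implied by the kernel density estimate is $S$ plus $h^2 I_N$; the sketch below is a stylized rendering under that assumption, not Ullah and Racine's (1992) code.

```python
# Stylized sketch of a kernel-based covariance estimator with the structure
# "usual estimator S plus a positive definite smoothing term"; the Gaussian
# product kernel with common bandwidth h is an assumption made here.
import numpy as np

def kernel_covariance(resid, h):
    """resid: N x T matrix of SUR residuals (rows = equations); h: bandwidth.
    Returns the covariance of a Gaussian-kernel density fitted to the
    residuals, i.e. S + h^2 * I_N."""
    N, T = resid.shape
    S = resid @ resid.T / T                   # usual estimator of Sigma
    return S + (h ** 2) * np.eye(N)           # positive definite adjustment
```

The ridge-like adjustment keeps the estimate nonsingular and well conditioned even when N is large relative to T, which is why this structure is attractive in large systems.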
Fiebig and Kim (2000) investigate the combination of both approaches, bootstrapping and improved estimation of the covariance matrix, especially in the context of large systems. They conclude that using the percentile-t method of bootstrapping in conjunction with the kernel-based estimator introduced by Ullah and Racine (1992) provides a very attractive estimator for large SUR models.
These studies that evaluate the effectiveness of the bootstrap typically rely on Monte Carlo simulations to validate the procedure and hence require a large amount of computation. Hill, Cartwright, and Arbaugh (1997) investigate the possibility of using Efron's (1992) jackknife-after-bootstrap as an alternative approach. Unfortunately, their results indicate that the jackknife-after-bootstrap substantially overestimates the standard deviation of the bootstrap standard errors.
Yet another approach to the testing problem in SUR models is the use of Bartlett-type corrections. For the basic SUR model, Attfield (1998) draws on his earlier work in Attfield (1995) to derive a Bartlett adjustment to the likelihood ratio test statistic for testing linear restrictions. Since the derivations require the assumption of normality and the absence of lagged dependent variables, the approach of Rocke (1989) may be a useful alternative. He suggests a computational rather than analytical approach to calculating the Bartlett adjustment, using the bootstrap. In conclusion, Rocke (1989) indicates that while his approach "achieves some improvement, this behavior is still not satisfactory when the sample size is small relative to the number of equations". But as we have observed, this is exactly the situation where the adjustment is most needed.
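A sketch of the computational version of the adjustment follows: the Bartlett idea is to rescale the LR statistic so that its mean matches the mean of its reference $\chi^2$ distribution, with the unknown mean estimated by bootstrapping under the null. The functions `lr_stat` and `simulate_null` are hypothetical stand-ins, not Rocke's (1989) code.

```python
# Sketch of a bootstrap Bartlett adjustment: rescale LR so its (estimated)
# mean equals df, the mean of the chi-square reference distribution.
# lr_stat and simulate_null are hypothetical user-supplied functions.
import numpy as np

def bartlett_adjusted_lr(y, lr_stat, simulate_null, df, B=499, seed=2):
    """lr_stat(y): LR statistic for the linear restrictions;
    simulate_null(rng): bootstrap sample generated with the restrictions
    imposed; df: number of restrictions."""
    rng = np.random.default_rng(seed)
    lr = lr_stat(y)
    boot = np.array([lr_stat(simulate_null(rng)) for _ in range(B)])
    return lr * df / boot.mean()              # refer to the chi2(df) critical value
```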
Silver and Ali (1989) also consider corrections that are derived computationally for the specific problem of testing Slutsky symmetry in a system of demand equations. On the basis of extensive Monte Carlo simulations they conclude that the covariance structure and the form of the regressors are relatively unimportant in describing the exact distribution of the F-statistic. Given this, they use their simulation results to suggest "average" corrections that are based only on the sample size, T, and the number of equations, N. A second approach they consider is approximating the exact distribution of the F-statistic for symmetry using the Pearson class of distributions.
In many modeling situations it is difficult to specify the form of the error covariance matrix with confidence. In the single-equation context it has become very popular amongst practitioners to construct tests based on the OLS parameter estimators combined with a heteroskedasticity and autocorrelation consistent (HAC) or "robust" covariance matrix estimator. Creel and Farell (1996) have considered extensions of this approach to the SUR model. They argue that in many applications the basic SUR stochastic specification will provide a reasonable approximation to a more general error structure. Proceeding in this manner, the usual estimator will be quasi-FGLS and will require a HAC covariance matrix to deal with the remaining heteroskedasticity and serial correlation.
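One way to make this operational is sketched below, assuming Bartlett-kernel (Newey-West) weights: the system is first whitened using the basic SUR covariance estimate, and the HAC sandwich is then built from period-by-period scores to absorb remaining serial correlation. The kernel and lag truncation are illustrative choices, not the specification in Creel and Farell (1996).

```python
# Sketch of a HAC ("sandwich") covariance for a quasi-FGLS estimator in an
# SUR system; Bartlett-kernel weights and the lag length L are assumptions.
import numpy as np

def qfgls_hac_cov(X, u, S, N, T, L=4):
    """X: (N*T, k) stacked regressors (equation-major order);
    u: quasi-FGLS residuals (length N*T); S: estimated disturbance
    covariance (N x N); L: lag truncation for the Bartlett kernel."""
    # whiten with the basic SUR structure: P = S^{-1/2} (x) I_T
    vals, vecs = np.linalg.eigh(np.linalg.inv(S))
    root = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    P = np.kron(root, np.eye(T))
    Xt, ut = P @ X, P @ u
    bread = np.linalg.inv(Xt.T @ Xt)
    # period-t score sums the N whitened rows sharing time index t
    rows = lambda t: np.arange(N) * T + t
    g = np.stack([Xt[rows(t)].T @ ut[rows(t)] for t in range(T)])
    meat = g.T @ g
    for l in range(1, L + 1):
        w = 1 - l / (L + 1)                   # Bartlett kernel weight
        Gamma = g[l:].T @ g[:-l]              # lag-l autocovariance of scores
        meat += w * (Gamma + Gamma.T)
    return bread @ meat @ bread               # HAC covariance for the estimator
```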
Dufour and Torres (1998) are also concerned with obtaining robust inferences. They show how to use union-intersection techniques to combine tests or confidence intervals based on different subsamples. Amongst their illustrations is an example showing how to test the null hypothesis that coefficients from different equations are equal without requiring any assumptions about how the equations are related, as would be needed in the basic SUR specification.
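One simple instance of the idea can be sketched with a Bonferroni argument (an illustrative construction in the spirit of, rather than reproduced from, Dufour and Torres, 1998): build a level $1 - \alpha/2$ confidence interval for the coefficient of interest from each equation separately, here by OLS with classical standard errors, and reject equality when the intervals fail to intersect. Because each interval covers its own coefficient with the stated probability no matter how the equations are related, the combined test has level at most $\alpha$.

```python
# Illustrative union-intersection style test of H0: beta1[j] == beta2[j]
# via non-overlapping OLS confidence intervals; classical per-equation
# standard errors are an assumption of this sketch.
import numpy as np
from scipy import stats

def equality_test(y1, X1, y2, X2, j=0, alpha=0.05):
    """Reject H0 iff the two level 1 - alpha/2 intervals are disjoint."""
    def ci(y, X):
        T, k = X.shape
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        e = y - X @ b
        s2 = e @ e / (T - k)                  # residual variance
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[j, j])
        c = stats.t.ppf(1 - alpha / 4, T - k) # two-sided 1 - alpha/2 interval
        return b[j] - c * se, b[j] + c * se
    (l1, u1), (l2, u2) = ci(y1, X1), ci(y2, X2)
    return u1 < l2 or u2 < l1                 # True = reject equality
```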