
A COMPANION TO THEORETICAL ECONOMETRICS

This is the first companion in econometrics. It comprises 32 chapters written by international experts in the field. The emphasis of this companion is on "keeping things simple" so as to give students of econometrics a guide through the maze of important topics in econometrics. These chapters are helpful for readers and users of econometrics who are not looking for exhaustive surveys on the subject. Instead, these chapters give the reader some of the basics and point to further readings on the subject. The topics covered vary from basic chapters on serial correlation and heteroskedasticity, which are found in standard econometrics texts, to specialized topics that are covered by Econometric Society monographs and advanced books on the subject, like count data, panel data, and spatial correlation. The authors have done their best to keep things simple. Space and time limitations prevented the inclusion of other important topics, problems and exercises, empirical applications, and exhaustive references. However, we believe that this is a good start and that the 32 chapters contain an important selection of topics in this young but fast-growing field.

Chapter 1 by Davidson and MacKinnon introduces the concept of an artificial regression and gives three conditions that an artificial regression must satisfy. The widely used Gauss-Newton regression (GNR) is used to show how artificial regressions can be used for minimizing criterion functions, computing one-step estimators, calculating covariance matrix estimates, and, more importantly, computing test statistics. This is illustrated for testing the null hypothesis that a subset of the parameters of a nonlinear regression model is zero. It is shown that the test statistic can be computed as the explained sum of squares of the GNR divided by a consistent estimate of the residual variance. Two ways of computing this statistic are: (i) the sample size times the uncentered R2 of the GNR, or (ii) an ordinary F-statistic from the GNR testing that the subset of parameters is zero. The two statistics are asymptotically equivalent under the null hypothesis. The GNR can be used for other types of specification tests, including tests for serial correlation, nonnested hypothesis tests, and Durbin-Wu-Hausman type tests. This chapter also shows how to make the GNR robust to heteroskedasticity of unknown form. It also develops an artificial regression for generalized method of moments (GMM) estimation. The outer-product-of-the-gradient (OPG) regression is also discussed. This is a simple artificial regression which can be used with most models estimated by maximum likelihood. It is shown that the OPG satisfies the three conditions of an artificial regression and has the usual uses of an artificial regression. It is appealing because it requires only first derivatives. However, it is demonstrated that the OPG regression yields relatively poor estimates of the covariance matrices and unreliable test statistics in small samples. In fact, test statistics based on the OPG regression tend to overreject, often very severely. Davidson and MacKinnon also discuss double-length or triple-length artificial regressions where each observation makes two or three contributions to the criterion function. In this case, the artificial regression has twice or three times the sample size. This artificial regression can be used for many purposes, including tests of models with different functional forms. Finally, this chapter extends the GNR to binary response models such as the logit and probit models. For further readings on this subject, see Davidson and MacKinnon (1993).
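To make the nR2 form of the test concrete, here is a minimal sketch in its simplest setting, a linear model in which the GNR collapses to a variable-addition regression; the data and variable names are purely illustrative and the code is not taken from the chapter.

```python
# A minimal sketch of the nR^2 form of the Gauss-Newton-regression test in the
# simplest case: a linear model y = X1*b1 + X2*b2 + u with H0: b2 = 0.
# All data below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])   # included regressors
X2 = rng.normal(size=(n, 2))                              # regressors tested under H0
y = X1 @ np.array([1.0, 0.5]) + rng.normal(size=n)        # H0 is true here

# Step 1: estimate the restricted model and keep its residuals.
b1 = np.linalg.lstsq(X1, y, rcond=None)[0]
u_hat = y - X1 @ b1

# Step 2: artificial regression of the residuals on all regressors; under H0,
# n times the uncentered R^2 is asymptotically chi-squared with as many
# degrees of freedom as there are columns in X2.
Z = np.hstack([X1, X2])
fitted = Z @ np.linalg.lstsq(Z, u_hat, rcond=None)[0]
uncentered_R2 = (fitted @ fitted) / (u_hat @ u_hat)
LM = n * uncentered_R2
print("nR^2 statistic:", LM)   # compare with chi-squared(2) critical values
```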

Chapter 2 by Bera and Premaratne gives a brief history of hypothesis testing in statistics. This journey takes the reader through the basic testing principles, leading naturally to several tests used by econometricians. These tests are then linked back to the basic principles using several examples. This chapter goes through the Neyman-Pearson lemma and the likelihood ratio test. It explains what is meant by locally most powerful tests and it gives the origins of the Rao-score test. Next, locally most powerful unbiased tests and Neyman's smooth test are reviewed. The interrelationship among the holy trinity of test statistics, i.e., the Wald, likelihood ratio, and Rao-score tests, is brought home to the reader by an amusing story. Neyman's C(α) test is derived and motivated. This approach provides an attractive way of dealing with nuisance parameters. Next, this chapter goes through some of the applications of testing principles in econometrics. Again a brief history of hypothesis testing in econometrics is given, beginning with the work of Ragnar Frisch and Jan Tinbergen and going into details through the Durbin-Watson statistic, linking its origins to the Neyman-Pearson lemma. The use of the popular Rao-score test in econometrics is reviewed next, emphasizing that several tests in econometrics, old and new, have been given a score test interpretation. In fact, Bera and Premaratne consider the conditional moment test developed by Newey (1985) and Tauchen (1985) and derive its score test interpretation. Applications of Neyman's C(α) test in econometrics are cited and some of the tests in econometrics are given a smooth test interpretation. The chapter finishes with a double warning about testing: be careful how you interpret the test results, and be careful what action you take when the null is rejected.
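For reference (standard notation, not reproduced from the chapter), the three classical statistics for testing H0: θ = θ0 with log-likelihood ℓ, score s, and information matrix I are

$$
W = (\hat{\theta} - \theta_0)'\, I(\hat{\theta})\, (\hat{\theta} - \theta_0), \qquad
LR = 2\{\ell(\hat{\theta}) - \ell(\theta_0)\}, \qquad
LM = s(\theta_0)'\, I(\theta_0)^{-1}\, s(\theta_0),
$$

all asymptotically chi-squared under the null with degrees of freedom equal to the number of restrictions tested.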

Chapter 3 by King surveys the problem of serial correlation in econometrics. Ignoring serial correlation in the disturbances can lead to inefficient parameter estimates and misleading inference. This chapter surveys the various ways of modeling serial correlation, including Box-Jenkins time series models. Autoregressive and moving average (ARMA) models are discussed, highlighting the contributions of Cochrane and Orcutt (1949) for the AR(1) model, Thomas and Wallis (1971) for the restricted AR(4) process, and Nichols, Pagan, and Terrell (1975) for the MA(1) process. King argues that modeling serial correlation involves taking care of the dynamic part of model specification. The simple version of a dynamic model includes the lagged value of the dependent variable among the regressors. The relationship between this simple dynamic model and the AR(1) model is explored. Estimation of the linear regression model with ARMA disturbances is considered next. Maximum likelihood estimation (MLE) under normality is derived and a number of practical computational issues are discussed. Marginal likelihood estimation methods are also discussed, which work well for estimating the ARMA process parameters in the presence of nuisance parameters. In this case, the nuisance parameters include the regression parameters and the residual variance. Maximizing the marginal likelihood was shown to reduce the estimation bias of maximum likelihood methods. Given the nonexperimental nature of economic data and the high potential for serial correlation, King argues that it is important to test for serial correlation. The von Neumann as well as Durbin-Watson (DW) tests are reviewed and their statistical properties are discussed. For example, the power of the DW test tends to decline to zero when the AR(1) parameter ρ tends to one. In this case, King suggests a class of point optimal tests that provide a solution to this problem. LM, Wald and LR tests for serial correlation are mentioned, but King suggests constructing these tests using the marginal likelihood rather than the full likelihood function. Testing for AR(1) disturbances in the dynamic linear regression model is also studied, and the difficulty of finding a satisfactory test for this model is explained by showing that the model may suffer from a local identification problem. The last section of this chapter takes up the problem of deciding what lags should be used in the ARIMA model. Model selection criteria are recommended rather than tests of hypotheses, and the Bayesian information criterion is favored because it is consistent. This means that as the sample size goes to infinity, this criterion selects the correct model from a finite number of models with probability one.
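As a reminder of the statistic at the center of this discussion (standard textbook form, not taken from the chapter), the Durbin-Watson statistic computed from the OLS residuals û_t is

$$
d = \frac{\sum_{t=2}^{T} (\hat{u}_t - \hat{u}_{t-1})^2}{\sum_{t=1}^{T} \hat{u}_t^2} \approx 2(1 - \hat{\rho}),
$$

so values near 2 indicate little first-order serial correlation, while values near 0 (or 4) point to strong positive (or negative) correlation.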

Chapter 4 by Griffiths gives a lucid treatment of the heteroskedasticity problem. The case of a known variance-covariance matrix is treated first and generalized least squares is derived. Finite sample inference under normality as well as large sample inference without the normality assumption are summarized in the context of testing linear restrictions on the regression coefficients. In addition, inference for nonlinear restrictions on the regression coefficients is given and the consequences of heteroskedasticity for the least squares estimator are explained. Next, the case of an unknown variance-covariance matrix is treated. In this case, several specifications of the form of heteroskedasticity are entertained and maximum likelihood estimation under normality is derived. Tests of linear restrictions on the regression coefficients are then formulated in terms of the ML estimates. Likelihood ratio, Wald and LM type tests of heteroskedasticity are given under normality of the disturbances. Other tests of heteroskedasticity as well as Monte Carlo experiments comparing these tests are cited. Adaptive estimators that assume no particular form of heteroskedasticity are briefly surveyed as well as several other miscellaneous extensions. Next, this chapter discusses Bayesian inference under heteroskedasticity. The joint posterior probability density function is specified assuming normality for the regression and uninformative priors on the heteroskedasticity parameters. An algorithm for obtaining the marginal posterior probability density function is given.
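For reference, when the error covariance matrix Ω is known (heteroskedasticity corresponds to a diagonal Ω), the generalized least squares estimator and its variance take the standard form (not reproduced from the chapter):

$$
\hat{\beta}_{GLS} = (X'\Omega^{-1}X)^{-1} X'\Omega^{-1} y, \qquad
\operatorname{var}(\hat{\beta}_{GLS}) = (X'\Omega^{-1}X)^{-1},
$$

which reduces to weighted least squares when Ω = diag(σ1², ..., σn²).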

Chapter 5 by Fiebig surveys the most recent developments on seemingly unrelated regressions (SUR), including both applied and theoretical work on the specification, estimation and testing of SUR models. This chapter updates the survey by Srivastava and Dwivedi (1979) and the book by Srivastava and Giles (1987). A basic introduction to the SUR model, introduced by Zellner (1962), is given along with extensions of the model to allow for more general stochastic specifications. These extensions are driven in part by diagnostic procedures as well as theoretical economic arguments presented for behavioral models of consumers and producers. Here the problem of testing linear restrictions, which is important for testing demand systems or estimating, say, a cost function with share equations, is studied. In addition, tests for the presence of contemporaneous correlation in SUR models as well as spatial autocorrelation are discussed. Next, SUR with missing observations and computational matters are reviewed. This leads naturally to a discussion of Bayesian methods for the SUR model and improved estimation methods for SUR, which include several variants of the Stein-rule family and the hierarchical Bayes estimator. Finally, a brief discussion of misspecification, robust estimation issues, as well as extensions of the SUR model to time series modeling and count data Poisson regressions, is given.

Chapter 6 by Mariano considers the problem of estimation in the simultaneous equation model. Both limited as well as full information estimators are discussed. The inconsistency of ordinary least squares (OLS) is demonstrated. Limited information instrumental variable estimators are reviewed, including two-stage least squares (2SLS), limited information instrumental variable efficient (LIVE), Theil's k-class, and limited information maximum likelihood (LIML). Full information methods, including three-stage least squares (3SLS), full information instrumental variables efficient (FIVE) and full information maximum likelihood (FIML), are studied next. Large sample properties of these limited and full information estimators are summarized and conditions for their consistency and asymptotic efficiency are stated without proof. In addition, the finite sample properties of these estimators are reviewed and illustrated using the case of two included endogenous variables. Last, but not least, practical implications of these finite sample results are given. These are tied to the recent literature on weak instruments.
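As a reference point for the limited information estimators discussed (standard notation, not taken from the chapter), write a structural equation as y = Wδ + u, where W collects the included endogenous and exogenous regressors and Z is the matrix of instruments. The 2SLS estimator is

$$
\hat{\delta}_{2SLS} = (W' P_Z W)^{-1} W' P_Z y, \qquad P_Z = Z (Z'Z)^{-1} Z',
$$

i.e., OLS after projecting the regressors onto the space spanned by the instruments.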

Chapter 7 by Bekker and Wansbeek discusses the problem of identification in parametric models. Roughly speaking, a model is identified when meaningful estimates of its parameters can be obtained. Otherwise, the model is underidentified. In the latter case, different sets of parameter values agree equally well with the statistical evidence, rendering scientific conclusions based on any estimates of this model void and dangerous. Bekker and Wansbeek define the basic concepts of observational equivalence of two parameter points and what is meant by local and global identification. They tie the notion of identification to the existence of a consistent estimator, and provide a link between identification and the rank of the information matrix. The latter is made practically useful by presenting it in terms of the rank of a Jacobian matrix. Although the chapter is limited to the problem of parametric identification based on sample information and exact restrictions on the parameters, extensions are discussed and the reader is referred to the book by Bekker, Merckens, and Wansbeek (1994) for further analysis.

Chapter 8 by Wansbeek and Meijer discusses the measurement error problem in econometrics. Many economic variables, like permanent income, the productivity of a worker, consumer satisfaction, the financial health of a firm, etc., are latent variables that are only observed with error. This chapter studies the consequences of measurement error and latent variables in econometric models and possible solutions to these problems. First, the linear regression model with errors in variables is considered, the bias and inconsistency of OLS is demonstrated and the attenuation phenomenon is explained. Next, bounds on the parameters of the model are obtained by considering the reverse regression. Solutions to the errors in variables problem include restrictions on the parameters to identify the model and hence yield consistent estimators of the parameters. Alternatively, instrumental variables estimation procedures can be employed, or nonnormality of the errors may be exploited to obtain consistent estimates of these parameters. Repeated measurements, like panel data on households, firms, regions, etc., can also allow consistent estimation of the parameters of the model. The second part of this chapter gives an extensive discussion of latent variable models, including factor analysis, the multiple indicators-multiple causes (MIMIC) model and a frequently used generalization of the MIMIC model known as the reduced rank regression model. In addition, general linear structural equation models estimated by LISREL are considered and maximum likelihood, generalized least squares, test statistics, and model fit are studied.
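The attenuation phenomenon can be summarized with the textbook errors-in-variables result (standard notation, not taken from the chapter): if y = βξ + ε but only x = ξ + ν is observed, with the measurement error ν uncorrelated with ξ and ε, then the OLS slope of y on x satisfies

$$
\operatorname{plim}\, \hat{\beta}_{OLS} = \beta\, \frac{\sigma_\xi^2}{\sigma_\xi^2 + \sigma_\nu^2},
$$

so the estimate is biased toward zero, and more severely so the noisier the measurement.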

Chapter 9 by Wooldridge provides a comprehensive account of diagnostic testing in econometrics. First, Wooldridge explains how diagnostic testing differs from classical testing. The latter assumes a correctly specified parametric model and uses standard statistics to test restrictions on the parameters of this model, while the former tests the model for various misspecifications. This chapter considers diagnostic testing in cross section applications. It starts with diagnostic tests for the conditional mean in the linear regression model. Conditional mean diagnostics are computed using variable addition statistics or artificial regressions (see Chapter 1 by Davidson and MacKinnon). Tests for functional form are given as an example and it is shown that a key auxiliary assumption needed to obtain a usable limiting distribution for the usual nR2 (LM) test statistic is homoskedasticity. Without this assumption, the limiting distribution of the LM statistic is not χ2 and the resulting test based on chi-squared critical values may be asymptotically undersized or oversized. This LM statistic is adjusted to allow for heteroskedasticity of unknown form under the null hypothesis. Next, testing for heteroskedasticity is considered. A joint test of the conditional mean and conditional variance is an example of an omnibus test. However, if this test rejects, it is difficult to know where to look. A popular omnibus test is White's (1982) information matrix test. This test is explicitly a test for homoskedasticity, conditional symmetry and homokurtosis. If we reject, it may be for any of these reasons and it is not clear why one would want to toss out a model because of asymmetry, or because its fourth and second moments do not satisfy the same relationship as that for a normal distribution. Extensions to nonlinear models are discussed next, as well as diagnostic tests for completely specified parametric models like limited dependent variable models: probit, logit, tobit, and count data type models. The last section deals with diagnostic testing in time series models. In this case, one can no longer assume that the observations are independent of one another and the discussion of auxiliary assumptions under the null is more complicated. Wooldridge discusses different ways to make conditional mean diagnostics robust to serial correlation as well as heteroskedasticity. Testing for heteroskedasticity in time series contexts and omnibus tests on the errors in time series regressions round out the chapter.

Chapter 10 by Potscher and Prucha gives the basic elements of asymptotic theory. This chapter discusses the crucial concepts of convergence in probability and distribution, the convergence properties of transformed random variables, orders of magnitude of the limiting behavior of sequences of random variables, laws of large numbers both for independent and dependent processes, and central limit theorems. This is illustrated for regression analysis. Further readings are suggested, including the recent book by Potscher and Prucha (1997).

Chapter 11 by Hall provides a thorough treatment of the generalized method of moments (GMM) and its applications in econometrics. Hall explains that the main advantage of GMM for econometric theory is that it provides a general framework which encompasses many estimators of interest. For econometric applications, it provides a convenient method of estimating nonlinear dynamic models without complete knowledge of the distribution of the data. This chapter gives a basic definition of the GMM estimation principle and shows how it is predicated on the assumption that the population moment condition provides sufficient information to uniquely determine the unknown parameters of the correctly specified model. This need not be the case, which leads naturally to a discussion of the concepts of local and global identification. In case the model is overidentified, Hall shows how the estimation effects a decomposition of the population moment condition into identifying restrictions, upon which the estimation is based, and overidentifying restrictions, which are ignored in the estimation. This chapter describes how the estimated sample moment can be used to construct the overidentifying restrictions test for the adequacy of the model specification, and derives the consistency and asymptotic distribution of the estimator. This chapter also characterizes the optimal choice of the weighting matrix and shows how the choice of the weight matrix impacts the GMM estimator via its asymptotic variance. MLE is shown to be a special case of GMM. However, MLE requires that we know the distribution of the data. GMM allows one to focus on the information used in the estimation and thereby determine the consequences of choosing the wrong distribution. Since the population moment condition is not known in practice, the researcher is faced with a large set of alternatives to choose from. Hall focuses on two extreme scenarios where the best and worst choices are made. The best choice is the population moment condition which leads to the estimator with the smallest asymptotic variance. The worst choice is when the population moment condition does not provide enough information to identify the unknown parameters of our model. This leads to a discussion of nearly uninformative population moment conditions, their consequences and how they might be circumvented.
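In standard notation (a reminder, not reproduced from the chapter), given a population moment condition E[g(x_t, θ0)] = 0, the GMM estimator minimizes a quadratic form in the sample moments, and the overidentifying restrictions test is n times the minimized criterion evaluated with the optimal weight matrix:

$$
\hat{\theta} = \arg\min_{\theta}\; \bar{g}_n(\theta)'\, W_n\, \bar{g}_n(\theta), \qquad
\bar{g}_n(\theta) = \frac{1}{n}\sum_{t=1}^{n} g(x_t, \theta), \qquad
J = n\, \bar{g}_n(\hat{\theta})'\, \hat{W}\, \bar{g}_n(\hat{\theta}),
$$

with J asymptotically chi-squared with degrees of freedom equal to the number of moment conditions minus the number of parameters.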

Chapter 12 by Hill and Adkins gives a lucid discussion of the collinearity problem in econometrics. They argue that collinearity takes three distinct forms. The first is where an explanatory variable exhibits little variability, which makes it difficult to estimate its effect in a linear regression model. The second is where two explanatory variables exhibit a large correlation, leaving little independent variation with which to estimate their separate effects. The third is where there may be one or more nearly exact linear relationships among the explanatory variables, hence obscuring the effects of each independent variable on the dependent variable. Hill and Adkins examine the damage that multicollinearity does to estimation and review collinearity diagnostics such as the variance decomposition of Belsley, Kuh, and Welsch (1980), the variance-inflation factor, and the determinant of X'X, the sum of squares and cross-products matrix of the regressor matrix X. Once collinearity is detected, this chapter discusses whether collinearity is harmful by using Belsley's (1982) test for an adequate signal-to-noise ratio in the regression model and data. Next, remedies for harmful collinearity are reviewed. Here the reader is warned that there are only two safe paths. The first is obtaining more and better data, which is usually not an option for practitioners. The second is imposing additional restrictions from economic theory or previous empirical research. Hill and Adkins emphasize that although this is a feasible option, only good nonsample information should be used, and it is never truly known whether the information introduced is good enough. Methods of introducing exact and inexact nonsample information, including restricted least squares, Stein-rule estimators, inequality restricted least squares, Bayesian methods, the mixed estimation procedure of Theil and Goldberger (1961) and the maximum entropy procedure of Golan, Judge, and Miller (1996), are reviewed. In addition, two estimation methods designed specifically for collinear data are discussed, if only to warn the readers about their use. These are ridge regression and principal components regression. Finally, this chapter extends the collinearity analysis to nonlinear models.
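A minimal sketch of one of the diagnostics mentioned above, the variance-inflation factor VIF_j = 1/(1 - R_j²), where R_j² comes from regressing column j of X on the remaining columns; the data and function names are illustrative, not from the chapter.

```python
# Variance-inflation factors for the columns of a regressor matrix X
# (no intercept column); purely illustrative.
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R_j^2) for each column j of X."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = x1 + 0.05 * rng.normal(size=100)      # nearly collinear with x1
print(vif(np.column_stack([x1, x2])))      # large VIFs signal harmful collinearity
```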

Chapter 13 by Pesaran and Weeks gives an overview of the problem of non-nested hypothesis testing in econometrics. This problem arises naturally when rival economic theories are used to explain the same phenomenon. For example, competing theories of inflation may suggest two different sets of regressors, neither of which is a special case of the other. Pesaran and Weeks define non-nested models as belonging to separate families of distributions in the sense that none of the individual models may be obtained from the remaining ones either by imposition of parameter restrictions or through a limiting process. This chapter discusses the problem of model selection and how it relates to non-nested hypothesis testing. By utilizing the linear regression model as a convenient framework, Pesaran and Weeks examine three broad approaches to non-nested hypothesis testing: (i) the modified (centered) log-likelihood ratio procedure, also known as the Cox test; (ii) the comprehensive models approach, whereby the non-nested models are tested against an artificially constructed general model that includes the non-nested models as special cases; and (iii) the encompassing approach, where the ability of one model to explain particular features of an alternative model is tested directly. This chapter also focuses on the Kullback-Leibler divergence measure, which has played a pivotal role in the development of a number of non-nested test statistics. In addition, the Vuong (1989) approach to model selection, viewed as a hypothesis testing problem, is also discussed. Finally, practical problems involved in the implementation of the Cox procedure are considered. This involves finding an estimate of the Kullback-Leibler measure of closeness of the alternative to the null hypothesis, which is not easy to compute. Two methods are discussed to circumvent this problem: the first uses a simulation approach and the second a parametric bootstrap approach.

Chapter 14 by Anselin provides an excellent review of spatial econometrics. These methods deal with the incorporation of spatial interaction and spatial structure into regression analysis. The field has seen recent and rapid growth, spurred both by theoretical concerns and by the need to apply econometric models to emerging large geocoded databases. This chapter outlines the basic terminology and discusses in some detail the specification of spatial effects, including the incorporation of spatial dependence in panel data models and models with qualitative variables. The estimation of spatial regression models, including maximum likelihood estimation, spatial 2SLS, method of moments estimators and a number of other approaches, is considered. In addition, specification tests for spatial effects as well as implementation issues are discussed.
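As a reminder of the kind of specification involved (standard notation, not reproduced from the chapter), the two workhorse models are the spatial lag and spatial error models,

$$
y = \rho W y + X\beta + \varepsilon \quad \text{(spatial lag)}, \qquad
y = X\beta + u, \;\; u = \lambda W u + \varepsilon \quad \text{(spatial error)},
$$

where W is a known spatial weights matrix describing which observations are neighbors, and ρ or λ measures the strength of the spatial dependence.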

Chapter 15 by Cameron and Trivedi gives a brief review of count data regressions. These are regressions that involve a dependent variable that is a count, such as the number of births in models of fertility, the number of accidents in studies of airline safety, hospital or doctor visits in health demand studies, the number of trips in models of recreational demand, the number of patents in research and development studies or the number of bids in auctions. In these examples, the sample is concentrated on a few small discrete values like 0, 1 and 2. The data are skewed to the right and intrinsically heteroskedastic, with the variance increasing with the mean. Two methods of dealing with these models are considered. The first is a fully parametric approach which completely specifies the distribution of the data and restricts the dependent variable to nonnegative integer values. This includes the Poisson regression model, which is studied in detail in this chapter, including its extensions to truncated and censored data. Limitations of the Poisson model, notably the excess zeros problem and the overdispersion problem, are explained and other parametric models, superior to the Poisson, are presented. These include continuous mixture models, finite mixture models, modified count models and discrete choice models. The second method of dealing with count data is a partially parametric method which focuses on modeling the data via the conditional mean and variance. This includes quasi-maximum likelihood estimation, least squares estimation and semiparametric models. Extensions to other types of data, notably time series, multivariate, and panel data, are discussed and the chapter concludes with some practical recommendations. For further readings and diagnostic procedures, the reader is referred to the recent Econometric Society monograph on count data models by Cameron and Trivedi (1998).
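A minimal sketch of the basic Poisson regression mentioned above, in which the conditional mean is modeled as E[y|x] = exp(x'β) and estimated by maximum likelihood; the simulated data and the use of statsmodels are illustrative choices, not taken from the chapter.

```python
# Poisson regression on simulated count data (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)               # intercept plus one regressor
mu = np.exp(0.2 + 0.5 * x)           # true conditional mean, exp(x'beta)
y = rng.poisson(mu)                  # count outcome

res = sm.Poisson(y, X).fit(disp=0)   # maximum likelihood estimation
print(res.params)                    # estimates close to (0.2, 0.5)
print(res.predict(X)[:5])            # fitted conditional means
```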

Chapter 16 by Hsiao gives a selected survey of panel data models. First, the benefits from using panels are discussed. These include more degrees of freedom, controlling for omitted variable bias, reducing the problem of multicollinearity and improving the accuracy of parameter estimates and predictions. A general encompassing linear panel data model is provided which includes as special cases the error components model, the random coefficients model and the mixed fixed and random coefficients model. These models assume that some variables are subject to stochastic constraints while others are subject to deterministic constraints. In practice, there is little knowledge about which variables are subject to stochastic constraints and which variables are subject to deterministic constraints. Hsiao recommends the Bayesian predictive density ratio method for selecting between two alternative formulations of the model. Dynamic panel data models are studied next and the importance of the initial observation with regard to the consistency and efficiency of the estimators is emphasized. Generalized method of moments estimators are proposed and the problem of too many orthogonality conditions is discussed. Hsiao suggests a transformed maximum likelihood estimator that is asymptotically more efficient than GMM. Hsiao also reports that the mean group estimator suggested by Pesaran and Smith (1995) does not perform well in finite samples. Alternatively, a hierarchical Bayesian approach performs well when T is small and the initial value is assumed to be a fixed constant. Next, the existence of individual specific effects in nonlinear models is discussed and the conditional MLE approach of Chamberlain (1980) is given. The problem becomes more complicated if lagged dependent variables are present. T > 4 is needed for the identification of a logit model and this conditional method will not work in the presence of exogenous variables. In this case, a consistent and asymptotically normal estimator proposed by Honore and Kyriazidou (1997) is suggested. An alternative semiparametric approach to estimating nonlinear panel models is the maximum score estimator proposed by Manski (1975). This applies a data transformation to eliminate the individual effects if the nonlinear model takes the form of a single index model with the index possessing a linear structure. This estimator is consistent but not root-n consistent. A third approach, proposed by Lancaster (1998), finds an orthogonal reparametrization of the fixed effects such that the new fixed effects are independent of the structural parameters in the information matrix sense. Hsiao discusses the limitations of all three methods, emphasizing that none of these approaches can claim general applicability and that the consistency of nonlinear panel data estimators must be established on a case by case basis. Finally, Hsiao treats missing observations in panels. If individuals are missing randomly, most estimators in the balanced panel case can be easily generalized to the unbalanced case. With sample selection, Hsiao emphasizes the dependence of the MLE and Heckman's (1979) two-step estimators on the exact specification of the joint error distribution. If this distribution is misspecified, then these estimators are inconsistent. Alternative semiparametric methods are discussed, based on the work of Ahn and Powell (1993), Kyriazidou (1997), and Honore and Kyriazidou (1998).

Chapter 17 by Maddala and Flores-Lagunes gives an update of the econometrics of qualitative response models. First, a brief introduction to the basic material on the estimation of binary and multinomial logit and probit models is given and the reader is referred to Maddala (1983) for details. Next, this chapter reviews specification tests in qualitative response models and the reader is referred to the recent review by Maddala (1995). Panel data with qualitative variables and semiparametric estimation methods for qualitative response models are reviewed, including Manski's maximum score, quasi-maximum likelihood, generalized maximum likelihood and the semi-nonparametric estimator. Maddala and Flores-Lagunes comment on the empirical usefulness and drawbacks of the different methods. Finally, simulation methods in qualitative response models are reviewed. The estimation methods discussed are the method of simulated moments, the method of simulated likelihood and the method of simulated scores. Some examples are given comparing these simulation methods.

Chapter 18 by Lee gives an extensive discussion of the problem of self-selection in econometrics. When the sample observed is distorted and is not representative of the population under study, sample selection bias occurs. This may be due to the way the sample was collected or it may be due to the self-selection decisions of the agents being studied. Such a sample may not represent the true population, no matter how large it is. This chapter discusses some of the conventional sample selection models and counterfactual outcomes. A major part of this chapter concentrates on the specification, estimation, and testing of parametric models of sample selection. This includes Heckman's two-stage estimation procedure as well as maximum likelihood methods, polychotomous choice sample selection models, simulation estimation methods, and the estimation of simultaneous equation sample selection models. Another major part of this chapter focuses on semiparametric and nonparametric approaches. This includes semiparametric two-stage estimation, the semiparametric efficiency bound and semiparametric maximum likelihood estimation. In addition, semiparametric instrumental variable estimation and conditional moment restrictions are reviewed, as well as sample selection models with a tobit selection rule. The chapter concludes with the identification and estimation of counterfactual outcomes.

Chapter 19 by Swamy and Tavlas describes the purpose, estimation, and use of random coefficient models. Swamy and Tavlas distinguish between first generation random coefficient models that sought to relax the constant coefficient assumption typically made by researchers in the classical tradition and second generation random coefficient models that relax the assumptions made regarding functional forms, excluded variables, and absence of measurement error. The authors argue that the latter are useful approximations to reality because they provide a reasonable approximation to the underlying "true" economic relationship. Several model validation criteria are provided. Throughout, a demand for money model is used as a backdrop to explain random coefficient models and an empirical application to United Kingdom data is given.

Chapter 20 by Ullah provides a systematic and unified treatment of estimation and hypothesis testing for nonparametric and semiparametric regression models. Parametric approaches to specifying functional form in econometrics may lead to misspecification. Nonparametric and semiparametric approaches provide alternative estimation procedures that are more robust to functional form misspecification. This chapter studies the developments in the area of kernel estimation in econometrics. Nonparametric estimates of conditional means as well as higher order moments and function derivatives are reviewed. In addition, some new goodness of fit procedures for nonparametric regressions are presented and their application to determining the window width and variable selection is discussed. Next, a combination of parametric and nonparametric regressions that takes into account the tradeoff between good fit (less bias) and smoothness (low variance) is suggested to mitigate (in a mean-squared error sense) the drawbacks of each approach used separately. Additive nonparametric regressions and semiparametric models are given as possible solutions to the curse of dimensionality. The chapter concludes with hypothesis testing in nonparametric and semiparametric models.
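As a reference point for the kernel methods discussed (standard notation, not reproduced from the chapter), the Nadaraya-Watson estimator of the conditional mean m(x) = E[y|x], with kernel K and window width h, is

$$
\hat{m}(x) = \frac{\sum_{i=1}^{n} K\!\left(\frac{x_i - x}{h}\right) y_i}{\sum_{i=1}^{n} K\!\left(\frac{x_i - x}{h}\right)},
$$

a locally weighted average of the y_i, with h governing the bias-variance tradeoff mentioned above.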

Chapter 21 by Gourieroux and Jasiak gives a review of duration models. These models have been used in labor economics to study the duration of individual unemployment spells and in health economics to study the length of hospital stays, to mention just two examples. Gourieroux and Jasiak discuss the main duration distribution families, including the exponential, gamma, Weibull, and lognormal distributions. They explain what is meant by survivor functions, hazard functions, and duration dependence. Maximum likelihood estimation for the exponential duration model without heterogeneity as well as for the gamma distributed heterogeneity model is given. The latter leads to the Pareto regression model. Next, the effect of unobservable heterogeneity and its relationship with negative duration dependence, as well as the problem of partial observability of duration variables due to truncation or censoring effects, are considered. Semiparametric specifications of duration models are also studied. These distinguish a parametric scoring function and an unconstrained baseline distribution. Accelerated and proportional hazard models are introduced and estimation methods for the finite dimensional and functional parameters are given. Finally, dynamic models for the analysis of time series of durations are given. These are especially useful for applications to financial transactions data. Some recent developments in this field are covered, including the Autoregressive Conditional Duration (ACD) model and the Stochastic Volatility Duration (SVD) model.
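For readers who want the definitions behind those terms (standard notation, not reproduced from the chapter): for a duration T with density f and distribution function F, the survivor and hazard functions are

$$
S(t) = P(T > t) = 1 - F(t), \qquad \lambda(t) = \frac{f(t)}{S(t)},
$$

so positive (negative) duration dependence corresponds to a hazard that rises (falls) with elapsed duration, and the exponential model is the special case of a constant hazard.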

Chapter 22 by Geweke, Houser, and Keane provides a detailed illustration of how to implement simulation methods for dynamic discrete choice models where one has available a panel of multinomial choice histories and partially observed payoffs. The advantage of this procedure is that it does not require the econometrician to solve the agents' dynamic optimization problem, or to make strong assumptions about the way individuals form expectations. The chapter focuses exclusively on simulation based Bayesian techniques. Monte Carlo results demonstrate that this method works well in relatively large state-space models with only partially observed payoffs, where very high dimensional integrations are required.

Chapter 23 by Dufour and Khalaf reviews Monte Carlo test methods in econometrics. Dufour and Khalaf demonstrate that this technique can provide exact randomized tests for any statistic whose finite sample distribution may be intractable but can be simulated. They illustrate this technique using several specification tests in linear regressions, including tests for normality, tests for independence, and tests for heteroskedasticity, ARCH, and GARCH. They also cover nonlinear hypotheses in univariate and SUR models, tests on structural parameters in instrumental variable regressions, tests on long-run multipliers in dynamic models, long-run identification constraints in VAR (vector autoregression) models, and confidence intervals on ratios of coefficients in discrete choice models. Dufour and Khalaf demonstrate, using several econometric examples, that standard testing procedures that rely on asymptotic theory can produce questionable p-values and confidence intervals. They recommend Monte Carlo test methods over these standard testing procedures because the former produce valid inference in small samples.
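A minimal sketch of the basic Monte Carlo test idea: when a statistic's null distribution is free of nuisance parameters but analytically awkward, simulate it under the null and rank the observed value among the simulated ones. The statistic, data and names below are made up for illustration and are not taken from the chapter.

```python
# Monte Carlo p-value for an illustrative residual-kurtosis statistic.
import numpy as np

rng = np.random.default_rng(0)

def statistic(u):
    """Example statistic: excess kurtosis of a sample (illustrative only)."""
    z = (u - u.mean()) / u.std()
    return np.mean(z**4) - 3.0

n = 50
observed = statistic(rng.standard_t(df=5, size=n))   # data from a fat-tailed DGP

# Simulate the statistic under the null of normal errors.
sims = np.array([statistic(rng.normal(size=n)) for _ in range(999)])

# Monte Carlo p-value based on the rank of the observed statistic.
p_value = (1 + np.sum(sims >= observed)) / (len(sims) + 1)
print("Monte Carlo p-value:", p_value)
```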

Chapter 24 by Koop and Steel uses stochastic frontier models to illustrate Bayesian methods in econometrics. This is contrasted with classical econometric stochastic frontier methods and the strengths and weaknesses of both approaches are reviewed. In particular, the stochastic frontier model with cross-sectional data is introduced with a simple log-linear model (e.g., Cobb-Douglas or translog) and a simple Gibbs sampler is used to carry out Bayesian inference. This is extended to nonlinear production frontiers (e.g., the constant elasticity of substitution or the asymptotically ideal model), where more complicated posterior simulation methods are necessary. Next, the stochastic frontier model with panel data is considered and the Bayesian fixed effects and random effects models are contrasted with their classical alternatives. Koop and Steel show that the Bayesian fixed effects model imposes strong and possibly unreasonable prior assumptions. In fact, only relative efficiencies, and not absolute efficiencies, can be computed by this model and it is shown to be quite sensitive to prior assumptions. In contrast, the Bayesian random effects model makes explicit distributional assumptions for the inefficiencies and allows robust inference on the absolute efficiencies.

Chapter 25 by Maasoumi summarizes some of the technical and conceptual advances in testing multivariate linear and nonlinear inequality hypotheses in econometrics. This is done in the context of substantive empirical settings in economics in which either the null, or the alternative, or both hypotheses define more limited domains than the two-sided alternatives typically tested in classical hypothesis testing. The desired goal is increased power. The impediments are a lack of familiarity with implementation procedures and characterization problems of distributions under some composite hypotheses. Several empirically important cases are identified in which practical "one-sided" tests can be conducted by either the mixed χ2 distribution, or the union intersection mechanisms based on the Gaussian variate, or the popular resampling/simulation techniques. Point optimal testing and its derivatives find a natural medium here whenever unique characterization of the null distribution for the "least favorable" cases is not possible. Applications in the parametric, semiparametric and nonparametric testing areas are cited.

Chapter 26 by Granger gives a brief introduction to the problem of spurious regressions in econometrics. This problem is traced historically from its origins (see Yule, 1926). A definition of spurious correlation is given and the problem is illustrated with simulations. The theoretical findings on spurious regressions undertaken by Phillips (1986) are summarized and some extensions of this work are cited. This chapter emphasizes that the spurious regression problem can also occur with stationary series. Applied econometricians have worried about spurious regressions only with nonstationary series, usually testing for unit roots before entering variables into a regression. Granger warns that one should also be worried about this problem with stationary series and recommends care in the proper specification of one's model using lags of all variables.
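A minimal sketch of the kind of simulation used to illustrate the phenomenon: two independent random walks, regressed on each other, routinely produce "significant" t-statistics and a nontrivial R2. The code and data are illustrative and not taken from the chapter.

```python
# Spurious regression between two independent random walks (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
T = 200
x = np.cumsum(rng.normal(size=T))        # independent random walk
y = np.cumsum(rng.normal(size=T))        # another, unrelated random walk

res = sm.OLS(y, sm.add_constant(x)).fit()
print("t-stat on x:", res.tvalues[1])    # typically far outside +/- 2
print("R-squared:  ", res.rsquared)      # often sizable despite no relationship
```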

Chapter 27 by Stock provides an introduction to the main methods used for forecasting economic time series. Throughout the chapter, Stock emphasizes the responsible production and interpretation of economic forecasts, which have entered into many aspects of economic life. This requires a clear understanding of the associated econometric tools, their limitations, and an awareness of common pitfalls in their application. Stock provides a theoretical framework for considering some of the tradeoffs in the construction of economic forecasts. In particular, he decomposes the forecast error into three terms and shows how two sources of this error entail a tradeoff. This tradeoff is between model approximation error and model estimation error. This leads naturally to considering information criteria that select among competing models. Stock also discusses prediction intervals and forecast comparison methods. Next, this chapter provides a glimpse at some of the relevant empirical features of five monthly macroeconomic time series for the USA. These include the rate of inflation, output growth, the unemployment rate, a short-term interest rate and total real manufacturing and trade inventories. In addition, this chapter provides an overview of univariate forecasts, which are made solely using past observations on the series. For linear models, simple forecasting methods like exponential smoothing and the mainstay of univariate forecasting, i.e., autoregressive moving average models, are discussed. Two nonlinear models are considered: smooth transition autoregression and artificial neural networks. These models provide parametric and nonparametric approaches to nonlinear forecasting. Finally, this chapter tackles multivariate forecasting, where forecasts are made using historical information on multiple time series. In particular, it considers vector autoregressions (see also Chapter 32 by Lutkepohl) and forecasting with leading economic indicators.

Chapter 28 by Spanos examines time series and dynamic models in econometrics. After providing a brief historical perspective and a probabilistic framework, the chapter reviews AR models, MA models and ARMA models, including VARs. Then it places these time series models in the perspective of econometric linear regression models. Throughout, the emphasis is on the statistical adequacy of the various models, and ways by which that adequacy may be evaluated.

Chapter 29 by Bierens explains the two most frequently applied unit root tests, namely the Augmented Dickey-Fuller tests and the Phillips-Perron tests. Bierens emphasizes three reasons why it is important to distinguish stationary processes from unit root processes. The first points to the fact that regressions involving unit roots may give spurious results (see Chapter 26 by Granger). The second reason is that for two or more unit root processes there may exist linear combinations which are stationary, and these linear combinations may be interpreted as long-run relationships (see Cointegration, Chapter 30, by Dolado, Gonzalo, and Marmol). The third reason emphasizes that tests for parameter restrictions in autoregressions involving unit root processes have in general different null distributions than in the case of stationary processes. Therefore, naive application of classical inference may give misleading results. Bierens considers in detail the Gaussian AR(1) case without an intercept; the Gaussian AR(1) case with an intercept under the alternative of stationarity; the general AR(p) process with a unit root and the Augmented Dickey-Fuller test; general ARIMA processes and the Phillips-Perron test; and the unit root with drift versus trend stationarity.
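A minimal sketch of an augmented Dickey-Fuller test: Δy_t is regressed on y_{t-1} and lagged differences, and the t-ratio on y_{t-1} is compared with Dickey-Fuller critical values. The ready-made statsmodels routine and the simulated series below are illustrative choices, not taken from the chapter.

```python
# Augmented Dickey-Fuller test on a simulated random walk (illustrative only).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300))      # unit root process: H0 should not be rejected

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression="c", autolag="AIC")
print("ADF statistic:", stat)
print("p-value:      ", pvalue)
print("5% critical:  ", crit["5%"])
```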

Chapter 30 by Dolado, Gonzalo, and Marmol discusses how the important concept of cointegration in econometrics bridged the gap between economic theorists, who had much to say about equilibrium but relatively little to say about dynamics, and the econometricians, whose models concentrated on short-run dynamics while disregarding the long-run equilibrium. "In addition to allowing the data to determine short-run dynamics, cointegration suggests that models can be significantly improved by including long-run equilibrium conditions suggested by economic theory." This chapter discusses the implications of cointegration and gives the basic estimation and testing procedures in a single equation framework, when variables have a single unit root. These include the Engle and Granger (1987) two-step procedure and the fully modified ordinary least squares procedure proposed by Phillips and Hansen (1990). Next, the analysis is extended to the multivariate setting where the estimation and testing of more general system based approaches to cointegration are considered. These include the Johansen (1995) maximum likelihood estimation procedure based on the reduced rank regression method, and the Stock and Watson (1988) methodology of testing for the dimension of "common trends." The latter is based on the dual relationship between the number of cointegrating vectors and the number of common trends in the system. Several extensions of this literature are discussed, including (i) higher order cointegrated systems; (ii) fractionally cointegrated systems; (iii) nearly cointegrated systems; (iv) nonlinear error correction models; and (v) structural breaks in cointegrated systems.

Modelling seasonality has progressed from the traditional view that seasonal patterns are a nuisance which needs to be removed to the current view, see Ghysels (1994), that they are an informative feature of economic time series that should be modeled explicitly. Chapter 31 by Ghysels, Osborn, and Rodrigues discusses the properties of stochastic seasonal nonstationary processes. In particular, they consider the characteristics of a simple seasonal random walk model and then generalize the discussion to seasonally integrated autoregressive moving average processes. In addition, they discuss the implications of misdiagnosing nonstationary stochastic seasonality as deterministic. Finally, the asymptotic properties of several seasonal unit root tests are studied and the results are generalized to the case of a near seasonally integrated framework.

Chapter 32 by Lutkepohl gives a concise review of the vector autoregression (VAR) literature in econometrics. The poor performance of simultaneous equations macroeconometric models resulted in a critical assessment by Sims (1980), who advocated the use of VAR models as alternatives. This chapter gives a brief introduction to these models, their specification and estimation. Special attention is given to cointegrated systems. Model specification and model checks studied include testing for the model order and exclusion restrictions, determining the autoregressive order by model selection criteria, specifying the cointegrating rank, as well as various model checks that are based on the residuals of the final model. Among the uses of VAR models discussed are forecasting and economic analysis. The concept of Granger causality, which is based on forecast performance, is introduced, and impulse responses are considered as instruments for analyzing causal relationships between variables. Finally, forecast error variance decompositions and policy analysis are discussed.
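For reference (standard notation, not reproduced from the chapter), a VAR of order p for a K-dimensional vector y_t is

$$
y_t = \nu + A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t,
$$

where the A_i are K × K coefficient matrices and u_t is a vector white noise process; the specification and checking steps described above amount to choosing p, restricting the A_i, and examining the estimated u_t.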
