Autoregressive Models
5.2.1 First-Order Autoregressive Model Consider a sequence of random variables {y_t}, t = 0, ±1, ±2, . . . , which follows y_t = ρy_{t−1} + ε_t, (5.2.1) where …
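A minimal numerical sketch (not part of the text; parameter values are hypothetical): simulate the AR(1) process (5.2.1) and estimate ρ by regressing y_t on y_{t−1}.

```python
import numpy as np

# Simulate y_t = rho * y_{t-1} + eps_t and estimate rho by least squares.
# rho = 0.6 and T = 5000 are hypothetical choices for illustration.
rng = np.random.default_rng(0)
rho, T = 0.6, 5000
eps = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + eps[t]

# LS estimator: regress y_t on y_{t-1} (no intercept; the process has mean zero)
rho_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
```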
Constrained Least Squares Estimator (CLS)
The constrained least squares estimator (CLS) of β, denoted by β̂, is defined to be the value of β that minimizes the sum of squared residuals S(β) = (y − Xβ)′(y − Xβ) (1.4.2) …
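A sketch of CLS under linear constraints Q′β = c (the data and the constraint below are hypothetical): the minimizer of S(β) subject to Q′β = c is given by the standard Lagrange-multiplier solution β̂_CLS = β̂ − (X′X)⁻¹Q[Q′(X′X)⁻¹Q]⁻¹(Q′β̂ − c).

```python
import numpy as np

# CLS under Q'beta = c via the Lagrange-multiplier solution
# b_cls = b_ls - (X'X)^{-1} Q [Q'(X'X)^{-1} Q]^{-1} (Q'b_ls - c).
# Data and constraint are hypothetical examples.
rng = np.random.default_rng(1)
T, K = 100, 3
X = rng.standard_normal((T, K))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.standard_normal(T)

XtX_inv = np.linalg.inv(X.T @ X)
b_ls = XtX_inv @ X.T @ y   # unconstrained LS

Q = np.array([[1.0], [1.0], [0.0]])   # constraint: beta_1 + beta_2 = 3
c = np.array([3.0])
b_cls = b_ls - XtX_inv @ Q @ np.linalg.inv(Q.T @ XtX_inv @ Q) @ (Q.T @ b_ls - c)
```

The constrained fit satisfies the restriction exactly and can only increase the residual sum of squares relative to the unconstrained fit.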
General Results
4.1.1 Consistency Because there is no essential difference between maximization and minimization, we shall consider an estimator that maximizes a certain function of the parameters. Let us denote the function …
The Case of an Unknown Covariance Matrix
In the remainder of this chapter, we shall consider Model 6 assuming that Σ is unknown and therefore must be estimated. Suppose we somehow obtain an estimator Σ̂. Then we …
Theil’s Corrected R̄²
Theil (1961, p. 213) proposed a correction of R² aimed at eliminating the aforementioned weakness of R². Theil’s corrected R², denoted by R̄², is defined by
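Theil's correction takes the form 1 − R̄² = [(T − 1)/(T − K)](1 − R²), so R̄² penalizes additional regressors; a quick numerical check on hypothetical data:

```python
import numpy as np

# R2 and Theil's corrected R2: 1 - R2_bar = (T - 1)/(T - K) * (1 - R2).
# Data are hypothetical; K counts all regressors including the constant.
rng = np.random.default_rng(2)
T, K = 50, 4
X = np.column_stack([np.ones(T), rng.standard_normal((T, K - 1))])
y = X[:, 1] + rng.standard_normal(T)

b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b
tss = (y - y.mean()) @ (y - y.mean())
R2 = 1.0 - (resid @ resid) / tss
R2_bar = 1.0 - (T - 1) / (T - K) * (1.0 - R2)
```

With K > 1 the corrected measure is strictly below the uncorrected one whenever R² < 1.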
Asymptotic Tests and Related Topics
4.5.1 Likelihood Ratio and Related Tests Let L(x, θ) be the joint density of a T-vector of random variables x = (x_1, x_2, . . . , x_T)′ characterized by a …
Theory of Least Squares
In this section we shall define the least squares estimator of the parameter β in Model 1 and shall show that it is the best linear unbiased estimator. We shall …
Independent and Identically Distributed Case
Let y_1, . . . , y_T be independent observations from a symmetric distribution function F[(y − μ)/σ] such that F(0) = 1/2. Thus μ is both the population mean and the median. …
Second-Order Autoregressive Model
A stationary second-order autoregressive model, abbreviated as AR(2), is defined by y_t = ρ_1 y_{t−1} + ρ_2 y_{t−2} + ε_t, t = 0, ±1, ±2, . . . , (5.2.16) where we assume Assumptions A, C, and Assumption …
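An illustrative sketch (hypothetical parameter values): stationarity of (5.2.16) requires the roots of 1 − ρ_1 z − ρ_2 z² to lie outside the unit circle, and the coefficients can then be estimated consistently by least squares.

```python
import numpy as np

# Simulate a stationary AR(2) and estimate (rho_1, rho_2) by least squares.
# Parameter values are hypothetical; stationarity requires the roots of
# 1 - rho_1*z - rho_2*z^2 to lie outside the unit circle.
rng = np.random.default_rng(3)
rho1, rho2, T = 0.5, 0.3, 5000
roots = np.roots([-rho2, -rho1, 1.0])   # highest-degree coefficient first

eps = rng.standard_normal(T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = rho1 * y[t - 1] + rho2 * y[t - 2] + eps[t]

Z = np.column_stack([y[1:-1], y[:-2]])   # regressors y_{t-1}, y_{t-2}
rho_hat = np.linalg.lstsq(Z, y[2:], rcond=None)[0]
```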
An Alternative Derivation of the Constrained Least Squares Estimator
Define a K × (K − q) matrix R such that the matrix (Q, R) is nonsingular and R′Q = 0. Such a matrix can always be found; it …
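One concrete way to construct such an R (an illustration, not necessarily the text's construction) uses the SVD of Q: the left singular vectors orthogonal to the column space of Q give R with R′Q = 0, and (Q, R) is then nonsingular.

```python
import numpy as np

# Given a K x q matrix Q of full column rank, construct R (K x (K - q))
# with R'Q = 0 and (Q, R) nonsingular, using the SVD of Q.
# Q here is a hypothetical random example.
rng = np.random.default_rng(10)
K, q = 5, 2
Q = rng.standard_normal((K, q))

U, s, Vt = np.linalg.svd(Q, full_matrices=True)
R = U[:, q:]   # left singular vectors orthogonal to the column space of Q
```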
Asymptotic Normality
In this subsection we shall show that under certain conditions a consistent root of Eq. (4.1.9) is asymptotically normal. The precise meaning of this statement will be made clear in …
Asymptotic Normality of the Least Squares Estimator
As was noted earlier, the first step in obtaining FGLS is calculating LS. Therefore the properties of FGLS depend on the properties of LS. The least squares estimator is consistent …
Prediction Criterion
A major reason why we would want to consider any measure of the goodness of fit is that the better the data fit in the sample period, the better …
Akaike Information Criterion
The Akaike information criterion in the context of the linear regression model was mentioned in Section 2.1.5. Here we shall consider it in a more general setting. Suppose we want …
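For the Gaussian linear regression model the criterion reduces, up to an additive constant, to T log σ̂² + 2K; a sketch comparing an underfitted and a correctly specified model on hypothetical data:

```python
import numpy as np

# Gaussian linear regression: AIC = T*log(sigma_hat^2) + 2*K,
# up to an additive constant. Data and models are hypothetical.
rng = np.random.default_rng(4)
T = 200
x = rng.standard_normal(T)
y = 1.0 + 2.0 * x + rng.standard_normal(T)

def aic(X, y):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    sigma2 = resid @ resid / len(y)   # ML estimate of the error variance
    return len(y) * np.log(sigma2) + 2 * X.shape[1]

X0 = np.ones((T, 1))                    # underfitted: intercept only
X1 = np.column_stack([np.ones(T), x])   # correctly specified
```

The criterion trades off fit (through σ̂²) against the number of parameters K.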
Least Squares Estimator of a Subset of β
It is sometimes useful to have an explicit formula for a subset of the least squares estimates β̂. Suppose we partition β′ = (β_1′, β_2′), where β_1 is a K_1-…
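The explicit subset formula is the Frisch–Waugh result: β̂_1 can be computed by regressing M_2 y on M_2 X_1, where M_2 = I − X_2(X_2′X_2)⁻¹X_2′ is the projector off the columns of X_2. A numerical check on hypothetical data:

```python
import numpy as np

# Frisch-Waugh check: the subset beta_1 of the full LS fit equals the
# LS fit of M_2 y on M_2 X_1, with M_2 = I - X_2 (X_2'X_2)^{-1} X_2'.
rng = np.random.default_rng(5)
T = 100
X1 = rng.standard_normal((T, 2))
X2 = rng.standard_normal((T, 3))
X = np.column_stack([X1, X2])
y = X @ np.arange(1.0, 6.0) + rng.standard_normal(T)

b_full = np.linalg.lstsq(X, y, rcond=None)[0]
M2 = np.eye(T) - X2 @ np.linalg.inv(X2.T @ X2) @ X2.T
b1 = np.linalg.lstsq(M2 @ X1, M2 @ y, rcond=None)[0]
```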
Regression Case
Let us generalize some of the estimation methods discussed earlier to the regression situation. M Estimator. The M estimator is easily generalized to the regression model: It minimizes Σ_{t=1}^T ρ[(y_t − …
Autoregressive Models with Moving-Average Residuals
A stationary autoregressive, moving-average model is defined by Σ_{j=0}^p ρ_j y_{t−j} = Σ_{j=0}^q β_j ε_{t−j}, ρ_0 = β_0 = −1, t = 0, ±1, ±2, . . . , …
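A simulation sketch of the ARMA(1, 1) special case, written in the common sign convention y_t = ρy_{t−1} + ε_t + θε_{t−1} (parameter values hypothetical): its autocovariances satisfy γ(k) = ργ(k − 1) for k ≥ 2, which can be checked numerically.

```python
import numpy as np

# ARMA(1,1): y_t = rho*y_{t-1} + eps_t + theta*eps_{t-1}.
# Autocovariances satisfy gamma(k) = rho * gamma(k-1) for k >= 2.
rng = np.random.default_rng(6)
rho, theta, T = 0.6, 0.4, 20000
eps = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + eps[t] + theta * eps[t - 1]

def acov(z, k):
    # Sample autocovariance at lag k
    zc = z - z.mean()
    return (zc[k:] @ zc[:-k]) / len(z) if k else (zc @ zc) / len(z)

ratio = acov(y, 2) / acov(y, 1)   # should be close to rho
```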
Constrained Least Squares Estimator as Best Linear Unbiased Estimator
That β̂ is the best linear unbiased estimator follows from the fact that γ̂_2 is the best linear unbiased estimator of γ_2 in (1.4.9); however, we also can prove it …
Maximum Likelihood Estimator
4.1.2 Definition Let L_T(θ) = L(y, θ) be the joint density of a T-vector of random variables y = (y_1, y_2, . . . , y_T)′ characterized by a …
Estimation of ρ
Because Σ defined in (5.2.9) depends on σ² only through a scalar multiplication, β̂_G defined in (6.1.3) does not depend on σ². Therefore, in obtaining FGLS (6.2.1), we need to …
Optimal Significance Level
In the preceding sections we have considered the problem of choosing between equations in which there is no explicit relationship between the competing regressor matrices X_1 and X_2. In a …
Tests of Separate Families of Hypotheses
So far we have considered testing or choosing hypotheses on parameters within one family of models or likelihood functions. The procedures discussed in the preceding sections cannot be used to …
The Mean and Variance of β̂ and σ̂²
1.2.3 Inserting (1.1.4) into (1.2.3), we have β̂ = (X′X)⁻¹X′y (1.2.15) = β + (X′X)⁻¹X′u. Clearly, Eβ̂ = β by the assumptions of Model 1. Using the second line of (1.2.15), …
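From (1.2.15), Eβ̂ = β and Vβ̂ = σ²(X′X)⁻¹ under Model 1 with fixed X; a Monte Carlo check on a hypothetical design:

```python
import numpy as np

# Monte Carlo check that E(beta_hat) = beta and V(beta_hat) = sigma^2 (X'X)^{-1}
# in the classical model with fixed X. Design, beta, and sigma are hypothetical.
rng = np.random.default_rng(7)
T = 50
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
beta = np.array([1.0, 2.0])
sigma = 1.5
A = np.linalg.inv(X.T @ X) @ X.T   # beta_hat = A y

R = 2000
draws = np.empty((R, 2))
for r in range(R):
    u = sigma * rng.standard_normal(T)
    draws[r] = A @ (X @ beta + u)

mean_hat = draws.mean(axis=0)
V_hat = np.cov(draws.T)
V_theory = sigma ** 2 * np.linalg.inv(X.T @ X)
```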
Large Sample Theory
Large sample theory plays a major role in the theory of econometrics because econometricians must frequently deal with more complicated models than the classical linear regression model of Chapter 1. …
Asymptotic Properties of Least Squares and Maximum Likelihood Estimator in the Autoregressive Model
We shall first consider the least squares (LS) estimation of the parameters ρ's and σ² of an AR(p) model (5.2.30) using T observations y_1, y_2, . . . , y_T. In …
Test of Linear Hypotheses
In this section we shall regard the linear constraints (1.4.1) as a testable hypothesis, calling it the null hypothesis. Throughout the section we shall assume Model 1 with normality because …
Concentrated Likelihood Function
We often encounter in practice the situation where the parameter vector θ_0 can be naturally partitioned into two subvectors α_0 and β_0 as θ_0 = (α_0′, β_0′)′. The regression model …
Feasible Generalized Least Squares Estimator
Inserting ρ̂ defined in (6.3.3) into Σ⁻¹ defined in (5.2.14) and then inserting the estimate of Σ⁻¹ into (6.2.1), we obtain the FGLS estimator β̂_F. As noted earlier, σ² drops out …
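A two-step sketch for the AR(1)-error case, in the Cochrane–Orcutt style (hypothetical data; quasi-differencing is used in place of the full Σ⁻¹ transformation): estimate ρ from the LS residuals, then transform the data and rerun LS. Note that σ² never needs to be estimated.

```python
import numpy as np

# Two-step FGLS sketch for AR(1) errors: estimate rho from LS residuals,
# then quasi-difference the data and rerun LS. Data are hypothetical.
rng = np.random.default_rng(8)
T = 2000
x = rng.standard_normal(T)
X = np.column_stack([np.ones(T), x])
beta = np.array([1.0, 2.0])
rho = 0.7

e = rng.standard_normal(T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + e[t]   # AR(1) errors
y = X @ beta + u

b_ls = np.linalg.lstsq(X, y, rcond=None)[0]
res = y - X @ b_ls
rho_hat = (res[1:] @ res[:-1]) / (res[:-1] @ res[:-1])

ys = y[1:] - rho_hat * y[:-1]   # quasi-differenced data
Xs = X[1:] - rho_hat * X[:-1]
b_fgls = np.linalg.lstsq(Xs, ys, rcond=None)[0]
```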
Ridge Regression and Stein’s Estimator
2.2.1 Introduction We proved in Section 1.2.5 that the LS estimator is best linear unbiased in Model 1 and proved in Section 1.3.3 that it is best unbiased in Model …
Least Absolute Deviations Estimator
The practical importance of the least absolute deviations (LAD) estimator as a robust estimator was noted in Section 2.3. Besides its practical importance, the LAD estimator poses an interesting theoretical …
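A common numerical approximation (not the text's treatment) computes the LAD fit by iteratively reweighted least squares: minimizing Σ|y_t − x_t′β| corresponds to weighted LS with weights proportional to 1/|residual|. Data and coefficients below are hypothetical.

```python
import numpy as np

# LAD fit by iteratively reweighted least squares (IRLS).
# Heavy-tailed (Laplace) errors, where LAD is attractive.
rng = np.random.default_rng(9)
T = 500
x = rng.standard_normal(T)
X = np.column_stack([np.ones(T), x])
y = X @ np.array([1.0, 2.0]) + rng.laplace(0.0, 1.0, T)

b = np.linalg.lstsq(X, y, rcond=None)[0]   # LS starting value
for _ in range(50):
    w = 1.0 / np.maximum(np.abs(y - X @ b), 1e-6)   # weight ~ 1/|residual|
    WX = w[:, None] * X
    b = np.linalg.solve(X.T @ WX, X.T @ (w * y))    # weighted LS step
```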
Definition of Best
Before we prove that the least squares estimator is best linear unbiased, we must define the term best. First we shall define it for scalar estimators, then for vector estimators. …
Random Variables
At the level of an intermediate textbook, a random variable is defined as a real-valued function over a sample space. But a sample space is not defined precisely, and once …
Distributed-Lag Models
5.6.1 The General Lag If we add an exogenous variable to the right-hand side of Eq. (5.3.2), we obtain ρ(L)y_t = αx_t + β(L)ε_t. (5.6.1) Such a model is called …
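In the geometric (Koyck) special case ρ(L) = 1 − ρL and β(L) = 1, the implied distributed-lag coefficient on x_{t−j} is αρ^j, with long-run multiplier α/(1 − ρ); a small arithmetic check (hypothetical values):

```python
import numpy as np

# Koyck (geometric) lag: (1 - rho*L) y_t = alpha * x_t + e_t implies
# lag coefficients alpha * rho^j on x_{t-j}. Values are hypothetical.
rho, alpha = 0.6, 2.0
J = 10
weights = alpha * rho ** np.arange(J)   # first J implied lag coefficients
long_run = alpha / (1.0 - rho)          # long-run multiplier
```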