Asymptotic Normality of Least Absolute Deviations Estimator

As noted in Section 4.6.1, the asymptotic normality of LAD cannot be proved by means of Theorem 4.1.3; nor is the proof of Section 4.6.1 easily generalizable to …

Cramér-Rao Lower Bound

The Cramér-Rao lower bound gives a useful lower bound (in the matrix sense) for the variance-covariance matrix of an unbiased vector estimator. In this section we shall prove a general …

Laws of Large Numbers and Central Limit Theorems

Given a sequence of random variables $\{X_t\}$, define $\bar{X}_n = n^{-1}\sum_{t=1}^{n} X_t$. A law of large numbers (LLN) specifies the conditions under which $\bar{X}_n - E\bar{X}_n$ converges to 0 either …
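As a quick numerical sketch of this convergence (my own illustration, not from the text), the gap $|\bar{X}_n - E\bar{X}_n|$ is small for a large sample of i.i.d. uniform draws:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_gap(n):
    """|X_bar_n - E X_bar_n| for n i.i.d. Uniform(0, 1) draws (mean 1/2)."""
    draws = rng.uniform(0.0, 1.0, size=n)
    return abs(draws.mean() - 0.5)

gap_large_n = mean_gap(1_000_000)   # close to 0 when n is large
```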

Generalized Least Squares Estimator

Because $\Sigma$ is positive definite, we can define $\Sigma^{-1/2}$ as $HD^{-1/2}H'$, where $H$ is an orthogonal matrix consisting of the characteristic vectors of $\Sigma$, $D$ is the diagonal …
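This construction is easy to check numerically; the following sketch (numpy, with an arbitrary $2\times 2$ positive definite matrix standing in for $\Sigma$) builds $\Sigma^{-1/2}$ from the eigendecomposition and verifies that $\Sigma^{-1/2}\Sigma\Sigma^{-1/2} = I$:

```python
import numpy as np

Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])           # arbitrary positive definite example

# Sigma = H diag(eigvals) H', with H orthogonal (characteristic vectors).
eigvals, H = np.linalg.eigh(Sigma)
Sigma_inv_half = H @ np.diag(eigvals ** -0.5) @ H.T

# Sigma^{-1/2} Sigma Sigma^{-1/2} should equal the identity matrix.
check = Sigma_inv_half @ Sigma @ Sigma_inv_half
```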

Prediction

We shall add to Model 1 the $p$th period relationship (where $p > T$) $y_p = x_p'\beta + u_p$, (1.6.1) where $y_p$ and $u_p$ are scalars and $x_p$ are the $p$th period observations …

Methods of Iteration

Whether for the maximum likelihood or the nonlinear least squares estimator, we cannot generally solve an equation of the form (4.1.9) explicitly for $\theta$. Instead, we must solve it …

Seemingly Unrelated Regression Model

The seemingly unrelated regression (SUR) model proposed by Zellner (1962) consists of the following N regression equations, each of which satisfies the assumptions of the standard regression model (Model 1): …

Classical Least Squares Theory

In this chapter we shall consider the basic results of statistical inference in the classical linear regression model—the model in which the regressors are independent of the error term and …

Stein’s Estimator: Heteroscedastic Case

Assume model (2.2.5), where $\Lambda$ is a general positive definite diagonal matrix. Two estimators for this case can be defined. Ridge estimator: $\alpha^* = (\Lambda + \gamma I)^{-1}\Lambda\hat{\alpha}$, $\alpha_i^* = (1 …
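A minimal numerical sketch of a ridge estimator of the form $\alpha^* = (\Lambda + \gamma I)^{-1}\Lambda\hat{\alpha}$ (my own toy values for $\Lambda$, $\gamma$, and $\hat{\alpha}$); because $\Lambda$ is diagonal, each component is shrunk by the factor $\lambda_i/(\lambda_i + \gamma)$:

```python
import numpy as np

Lambda = np.diag([4.0, 1.0, 0.25])       # hypothetical diagonal Lambda
gamma = 0.5
alpha_hat = np.array([1.0, -2.0, 3.0])   # hypothetical first-stage estimate

# alpha* = (Lambda + gamma I)^{-1} Lambda alpha_hat: componentwise shrinkage
# toward 0, with small-lambda components shrunk the most.
alpha_star = np.linalg.solve(Lambda + gamma * np.eye(3), Lambda @ alpha_hat)
```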

Time Series Analysis

Because there are many books concerned solely with time series analysis, this chapter is brief; only the most essential topics are considered. The reader who wishes to study this topic …

Least Squares Estimator as Best Unbiased Estimator (BUE)

In this section we shall show that under Model 1 with normality, the least squares estimator of the regression parameters $\beta$ attains the Cramér-Rao lower bound and hence is the …

Relationships among lim E, AE, and plim

Let $F_n$ be the distribution function of $X_n$ and $F_n \to F$ at continuity points of $F$. We have defined plim $X_n$ in Definition 3.2.1. We define lim E and AE …

Efficiency of Least Squares Estimator

It is easy to show that in Model 6 the least squares estimator $\hat{\beta}$ is unbiased with its covariance matrix given by $V_{\hat{\beta}} = (X'X)^{-1}X'\Sigma X(X'X)^{-1}$. (6.1.5) Because GLS is BLUE, …
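The comparison implied by the BLUE property can be checked numerically; a sketch (my own toy design matrix and heteroscedastic $\Sigma$) confirming that the difference between the LS covariance matrix in (6.1.5) and the GLS covariance matrix $(X'\Sigma^{-1}X)^{-1}$ is positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 50
X = np.column_stack([np.ones(T), rng.normal(size=T)])
Sigma = np.diag(rng.uniform(0.5, 2.0, size=T))   # toy heteroscedastic covariance

XtX_inv = np.linalg.inv(X.T @ X)
V_ls = XtX_inv @ X.T @ Sigma @ X @ XtX_inv               # LS covariance, (6.1.5)
V_gls = np.linalg.inv(X.T @ np.linalg.inv(Sigma) @ X)    # GLS covariance

# GLS is BLUE, so V_ls - V_gls must be positive semidefinite.
min_eig = np.linalg.eigvalsh(V_ls - V_gls).min()
```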

Recent Developments in Regression Analysis

In this chapter we shall present three additional topics. They can be discussed in the framework of Model 1 but are grouped here in a separate chapter because they involve …

Newton-Raphson Method

The Newton-Raphson method is based on the following quadratic approximation of the maximand (or minimand, as the case may be): $Q(\theta) \approx Q(\hat{\theta}_1) + g_1'(\theta - \hat{\theta}_1) + \tfrac{1}{2}(\theta - …
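Maximizing such a quadratic approximation leads to the familiar update $\theta \leftarrow \theta - H^{-1}g$; a minimal one-dimensional sketch (my own toy maximand, not from the text):

```python
# Maximize Q(theta) = -(theta - 3)^2 by Newton-Raphson:
# theta <- theta - Q'(theta) / Q''(theta).
def newton_raphson(grad, hess, theta0, steps=20):
    theta = theta0
    for _ in range(steps):
        theta -= grad(theta) / hess(theta)
    return theta

grad = lambda t: -2.0 * (t - 3.0)   # Q'(theta)
hess = lambda t: -2.0               # Q''(theta)
theta_max = newton_raphson(grad, hess, theta0=0.0)
```

Because this toy $Q$ is exactly quadratic, a single iteration already lands on the maximizer.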

Heteroscedasticity

A heteroscedastic regression model is Model 6 where $\Sigma$ is a diagonal matrix, the diagonal elements of which assume at least two different values. Heteroscedasticity is a common occurrence in …

Model 1

By writing $x_t = (1, x_t^{*\prime})'$, we can define Model 1 as follows. Assume $y_t = x_t'\beta + u_t$, $t = 1, 2, \ldots, T$, (1.1.2) where $y_t$ is a scalar observable random …

Monte Carlo and Applications

Thisted (1976) compared ridge 2, modified ridge 2, ridge 3, and generalized ridge 1 by the Monte Carlo method and found the somewhat paradoxical result that ridge 2, which is …

Stationary Time Series

A time series is a sequence of random variables $\{y_t\}$, $t = 0, \pm 1, \pm 2, \ldots$. We assume $Ey_t = 0$ for every $t$. (If $Ey_t$ …

The Cramér-Rao Lower Bound for Unbiased Estimators of σ²

From (1.3.8) and (1.3.20) the Cramér-Rao lower bound for unbiased estimators of $\sigma^2$ in Model 1 with normality is equal to $2\sigma^4 T^{-1}$. We shall examine whether it is attained by …

Consistency and Asymptotic Normality of Least Squares Estimator

The main purpose of this section is to prove the consistency and the asymptotic normality of the least squares estimators of $\beta$ and $\sigma^2$ in Model 1 (classical linear regression …

Consistency of the Least Squares Estimator

We shall obtain a useful set of conditions on $\Sigma$ and $X$ for the consistency of LS in Model 6. We shall use the following lemma in matrix analysis. Lemma …

Statistical Decision Theory

We shall briefly explain the terminology used in statistical decision theory. For a more thorough treatment of the subject, the reader should consult Zacks (1971). Statistical decision theory is a …

The Asymptotic Properties of the Second-Round Estimator in the Newton-Raphson Method

Ordinarily, iteration (4.4.2) is to be repeated until convergence takes place. However, if $\hat{\theta}_1$ is a consistent estimator of $\theta_0$ such that $\sqrt{T}(\hat{\theta}_1 - \theta_0)$ has a proper limit distribution, …

Unrestricted Heteroscedasticity

When heteroscedasticity is unrestricted, the heteroscedasticity is not parameterized. So we shall treat each of the $T$ variances $\{\sigma_t^2\}$ as an unknown parameter. Clearly, we cannot consistently estimate these variances …

Implications of Linearity

Suppose random variables $y_t$ and $x_t^*$ have finite second moments and their variance-covariance matrix is denoted by $\Sigma$. Then we can always write $y_t = \beta_1 + x_t^{*\prime}\beta_2 + v_t$, (1.1.3) where $\beta_2 = \Sigma_{22}^{-1}\sigma_{12}$, $\beta_1$ …

Stein’s Estimator versus Pre-Test Estimators

Let $\hat{\alpha} \sim N(\alpha, \sigma^2 I)$ and $S \sim \sigma^2\chi^2_n$ (independent of $\hat{\alpha}$). Consider the strategy: Test the hypothesis $\alpha = 0$ by the $F$ test and estimate $\alpha$ by 0 if the …

Autocovariances

Define $\gamma_h = Ey_t y_{t+h}$, $h = 0, 1, 2, \ldots$. A sequence $\{\gamma_h\}$ contains important information about the characteristics of a time series $\{y_t\}$. It is useful to …
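A sample analogue of $\gamma_h$ is straightforward to compute; a sketch (my own toy series, assuming a zero-mean series as in the definition above):

```python
import numpy as np

def autocovariance(y, h):
    """Sample analogue of gamma_h = E y_t y_{t+h} for a zero-mean series."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    return np.dot(y[: n - h], y[h:]) / n

y = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])  # toy alternating series
gamma0 = autocovariance(y, 0)   # the variance term, 1.0 here
gamma1 = autocovariance(y, 1)   # negative for an alternating series
```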

Model 1 with Linear Constraints

In this section we shall consider estimation of the parameters $\beta$ and $\sigma^2$ in Model 1 when there are certain linear constraints on the elements of $\beta$. We shall assume …

Asymptotic Properties of Extremum Estimators

By extremum estimators we mean estimators obtained by either maximizing or minimizing a certain function defined over the parameter space. First, we shall establish conditions for the consistency and the …

A Singular Covariance Matrix

If the covariance matrix $\Sigma$ is singular, we obviously cannot define GLS by (6.1.3). Suppose that the rank of $\Sigma$ is $S < T$. Then, by Theorem 3 of …

Bayesian Solution

The Bayesian solution to the selection-of-regressors problem provides a pedagogically useful starting point although it does not necessarily lead to a useful solution in practice. We can obtain …

Gauss-Newton Method

The Gauss-Newton method was specifically designed to calculate the nonlinear least squares estimator. Expanding $f_t(\beta)$ of Eq. (4.3.5) in a Taylor series around the initial estimate $\hat{\beta}_1$, we obtain (4.4.10). Substituting …
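A sketch of the resulting iteration on a toy problem (my own example, not from the text): the model $f_t(\beta) = \exp(\beta x_t)$ with noiseless data, updated by $\beta \leftarrow \beta + (F'F)^{-1}F'(y - f)$, where $F$ is the derivative of $f$ with respect to $\beta$:

```python
import numpy as np

# Toy nonlinear regression f_t(beta) = exp(beta * x_t), fitted by the
# Gauss-Newton iteration beta <- beta + (F'F)^{-1} F'(y - f).
x = np.linspace(0.0, 1.0, 30)
beta_true = 1.5
y = np.exp(beta_true * x)        # noiseless data, so the iteration recovers beta_true

beta = 1.0                       # initial estimate
for _ in range(50):
    f = np.exp(beta * x)
    F = x * f                    # derivative of f_t with respect to beta
    beta += np.dot(F, y - f) / np.dot(F, F)
```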

Matrix Notation

To facilitate the subsequent analysis, we shall write (1.1.2) in matrix notation as $y = X\beta + u$, where $y = (y_1, y_2, \ldots, y_T)'$, $u =$ …
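In this notation the least squares estimator takes the familiar form $\hat{\beta} = (X'X)^{-1}X'y$; a minimal numerical check (my own toy data, with no error term so that LS recovers $\beta$ exactly):

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])       # constant plus one regressor
beta = np.array([2.0, 0.5])
y = X @ beta                     # y = X beta + u with u = 0

# beta_hat = (X'X)^{-1} X'y, computed via a linear solve.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
```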

Robust Regression

In Chapter 1 we established that the least squares estimator is best linear unbiased under Model 1 and best unbiased under Model 1 with normality. The theme of …
