## EXAMPLES OF HYPOTHESIS TESTS

In the preceding sections we have studied the theory of hypothesis testing. In this section we shall apply it to various practical problems.

EXAMPLE 9.6.1 (mean of binomial) It is …

## Multinomial Model

We illustrate the multinomial model by considering the case of three alternatives, which for convenience we associate with three integers 1, 2, and 3. One example of the three-response model …

## Tests for Structural Change

Suppose we have two regression regimes

(10.3.7) y₁ₜ = α₁ + β₁x₁ₜ + u₁ₜ, t = 1, 2, . . . , T₁

and

(10.3.8) y₂ₜ = α₂ + β₂x₂ₜ + …
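The idea behind testing for structural change can be sketched numerically: fit each regime separately, fit the pooled sample once, and ask whether pooling worsens the fit by more than chance would allow (a Chow-type F statistic). The data and parameter values below are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: two regimes with different slopes (illustrative values).
T1, T2 = 50, 50
x1 = rng.normal(size=T1); y1 = 1.0 + 0.5 * x1 + 0.1 * rng.normal(size=T1)
x2 = rng.normal(size=T2); y2 = 1.0 + 1.5 * x2 + 0.1 * rng.normal(size=T2)

def ols_rss(x, y):
    """Fit y = a + b*x by least squares; return (coefficients, residual sum of squares)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, resid @ resid

# Separate regressions for each regime, and one pooled regression.
_, rss1 = ols_rss(x1, y1)
_, rss2 = ols_rss(x2, y2)
_, rss_pooled = ols_rss(np.concatenate([x1, x2]), np.concatenate([y1, y2]))

# Chow-type F statistic: does forcing a single regime fit significantly worse?
k = 2  # parameters per regime (intercept and slope)
F = ((rss_pooled - rss1 - rss2) / k) / ((rss1 + rss2) / (T1 + T2 - 2 * k))
```

A large F suggests the two regimes are genuinely different; under the null of no structural change, F follows an F(k, T₁+T₂−2k) distribution.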

## Continuous Sample

For the continuous case, the principle of the maximum likelihood estimator is essentially the same as for the discrete case, and we need to modify Definition 7.3.1 only slightly. DEFINITION …

## GENERALIZED LEAST SQUARES

In this section we consider the regression model (13.1.1) y = Xβ + u, where we assume that X is a full-rank T × K matrix of known constants and …
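The generalized least squares estimator is β̂ = (X′Σ⁻¹X)⁻¹X′Σ⁻¹y, where Σ is the variance-covariance matrix of u. A minimal numpy sketch, with an assumed diagonal (heteroscedastic) Σ and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)

T, K = 100, 2
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta_true = np.array([2.0, -1.0])   # illustrative true coefficients

# Assume a known diagonal variance-covariance matrix (heteroscedastic errors).
variances = np.linspace(0.5, 5.0, T)
u = rng.normal(size=T) * np.sqrt(variances)
y = X @ beta_true + u

# GLS estimator: (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} y.
Sigma_inv = np.diag(1.0 / variances)
beta_gls = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ y)
```

When Σ = σ²I, the weights cancel and β̂ reduces to ordinary least squares.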

## TESTING ABOUT A VECTOR PARAMETER

Those who are not familiar with matrix analysis should study Chapter 11 before reading this section. The results of this chapter will not be needed to understand Chapter 10. Insofar …

## CENSORED OR TRUNCATED REGRESSION MODEL (TOBIT MODEL)

Tobin (1958) proposed the following important model:

(13.6.1) y*ᵢ = xᵢ′β + uᵢ

and

(13.6.2) yᵢ = xᵢ′β + uᵢ if y*ᵢ > 0, yᵢ = 0 if y*ᵢ ≤ 0, i …
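The censoring mechanism of the Tobit model is easy to simulate: a latent y* is generated by the linear model, but we observe y* only when it is positive, and zero otherwise. A sketch with a single illustrative regressor (not the full maximum likelihood estimation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent variable y* = x*beta + u; we observe y = y* when y* > 0, else 0.
# Illustrative single-regressor setup with beta = 1.
n = 200
x = rng.normal(size=n)
u = rng.normal(size=n)
y_star = x + u                           # latent y*
y = np.where(y_star > 0, y_star, 0.0)    # observed y, censored at zero

censored_share = np.mean(y == 0.0)
```

Because a sizable fraction of the observations pile up at zero, running ordinary least squares of y on x would give a biased estimate of β; that is why the model requires its own likelihood-based estimator.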

## ELEMENTS OF MATRIX ANALYSIS

In Chapter 10 we discussed the bivariate regression model using summation notation. In this chapter we present basic results in matrix analysis. The multiple regression model with many independent variables …

## 7.4 MAXIMUM LIKELIHOOD ESTIMATOR: PROPERTIES

In Section 7.4.1 we show that the maximum likelihood estimator is the best unbiased estimator under certain conditions. We show this by means of the Cramer-Rao lower bound. In Sections …

## Known Variance-Covariance Matrix

In this subsection we develop the theory of generalized least squares under the assumption that Σ is known (known up to a scalar multiple, to be precise); in the remaining …

## Variance-Covariance Matrix Assumed Known

Consider the case of K = 2. We can write θ = (θ₁, θ₂)′ and θ₀ = (θ₁₀, θ₂₀)′. It is intuitively reasonable that an optimal critical region should …

## DURATION MODEL

The duration model purports to explain the distribution function of a duration variable as a function of independent variables. The duration variable may be human life, how long a patient …

## WHAT IS AN ESTIMATOR?

In Chapter 1 we stated that statistics is the science of estimating the probability distribution of a random variable on the basis of repeated observations drawn from the same random …

## DEFINITION OF BASIC TERMS

Matrix. A matrix, here denoted by a boldface capital letter, is a rectangular array of real numbers arranged as follows: A matrix such as A in (11.1.1), which has n …

## Cramer-Rao Lower Bound

We shall derive a lower bound to the variance of an unbiased estimator and show that in certain cases the variance of the maximum likelihood estimator attains the lower bound. …
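The bound can be checked by simulation in a case where it is known to be attained: for a Bernoulli(p) sample of size n, the Cramér-Rao lower bound for an unbiased estimator of p is p(1−p)/n, and the sample mean (the maximum likelihood estimator) attains it. Illustrative values of p and n:

```python
import numpy as np

rng = np.random.default_rng(10)

# Cramér-Rao lower bound for Bernoulli(p): Var(unbiased estimator) >= p(1-p)/n.
p, n, reps = 0.3, 100, 20_000
samples = rng.binomial(1, p, size=(reps, n))
estimates = samples.mean(axis=1)     # the MLE of p for each replication

crlb = p * (1 - p) / n               # theoretical lower bound = 0.0021
var_mle = estimates.var()            # simulated variance of the MLE
```

The simulated variance of the sample mean comes out essentially equal to the bound, illustrating that the MLE is the best unbiased estimator here.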

## Heteroscedasticity

In the classical regression model it is assumed that the variance of the error term is constant (homoscedastic). Here we relax this assumption and specify more generally that (13.1.12) Vuₜ …

## BIVARIATE REGRESSION MODEL

10.1 INTRODUCTION In Chapters 1 through 9 we studied statistical inference about the distribution of a single random variable on the basis of independent observations on the variable. Let {Xₜ}, …

## APPENDIX: DISTRIBUTION THEORY

DEFINITION 1 (Chi-square Distribution) Let {Zᵢ}, i = 1, 2, . . . , n, be i.i.d. as N(0, 1). Then the distribution of Σᵢ₌₁ⁿ Zᵢ² is called the …
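The definition can be verified by simulation: summing n squared standard normal draws should produce a variable with mean n and variance 2n, the known moments of the chi-square distribution with n degrees of freedom. Illustrative n and number of draws:

```python
import numpy as np

rng = np.random.default_rng(4)

# If Z_1,...,Z_n are i.i.d. N(0,1), then sum of Z_i^2 is chi-square with
# n degrees of freedom, which has mean n and variance 2n.
n, draws = 5, 100_000
Z = rng.normal(size=(draws, n))
chi2_samples = np.sum(Z**2, axis=1)

mean_est = chi2_samples.mean()   # should be close to n = 5
var_est = chi2_samples.var()     # should be close to 2n = 10
```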

## Sample Moments

In Chapter 4 we defined population moments of various kinds. Here we shall define the corresponding sample moments. Sample moments are “natural” estimators of the corresponding population moments. We define …
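A small simulation makes the "natural estimator" point concrete: the sample mean and sample variance computed from normal draws converge to the corresponding population moments. The distribution and its parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Draw from N(mu, sigma^2) with illustrative mu = 3, sigma = 2.
mu, sigma, n = 3.0, 2.0, 50_000
x = rng.normal(mu, sigma, size=n)

m1 = x.mean()                    # sample mean: estimates E[X] = 3
m2 = np.mean((x - m1) ** 2)      # sample variance: estimates Var(X) = 4
```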

## MATRIX OPERATIONS

Equality. If A and B are matrices of the same size and A = {aᵢⱼ} and B = {bᵢⱼ}, then we write A = B if and only if aᵢⱼ …

## Asymptotic Normality

THEOREM 7.4.3 Let the likelihood function be L(X₁, X₂, . . . , Xₙ | θ). Then, under general conditions, the maximum likelihood estimator θ̂ is asymptotically distributed as … (Here we interpret the maximum …

## Serial Correlation

In this section we allow a nonzero correlation between uₜ and uₛ for s ≠ t in the model (12.1.1). Correlation between the values at different periods of a time …
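The leading case of serial correlation is a first-order autoregressive error, uₜ = ρu₍ₜ₋₁₎ + εₜ. A short simulation, with an illustrative ρ, shows the resulting nonzero correlation between adjacent errors:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate AR(1) errors: u_t = rho * u_{t-1} + e_t (illustrative rho = 0.8).
T, rho = 500, 0.8
e = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + e[t]

# Sample first-order autocorrelation of u: should be close to rho.
r1 = np.corrcoef(u[:-1], u[1:])[0, 1]
```

A sample autocorrelation of the regression residuals far from zero is the usual symptom that the classical no-serial-correlation assumption fails.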

## LEAST SQUARES ESTIMATORS

10.2.1 Definition In this section we study the estimation of the parameters α, β, and σ² in the bivariate linear regression model (10.1.1). We first consider estimating α and β. …
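The closed-form least squares estimators for the bivariate model are β̂ = Σ(xₜ−x̄)(yₜ−ȳ)/Σ(xₜ−x̄)² and α̂ = ȳ − β̂x̄. A sketch on simulated data with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(6)

# Bivariate model y_t = alpha + beta * x_t + u_t (illustrative parameters).
T, alpha, beta = 200, 1.0, 2.0
x = rng.normal(size=T)
y = alpha + beta * x + 0.5 * rng.normal(size=T)

# Closed-form least squares estimators.
xbar, ybar = x.mean(), y.mean()
beta_hat = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
alpha_hat = ybar - beta_hat * xbar
```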

## Estimators in General

We may sometimes want to estimate a parameter of a distribution other than a moment. An example is the probability (pi) that the ace will turn up in a roll …

## DETERMINANTS AND INVERSES

Throughout this section, all the matrices are square and n × n. Before we give a formal definition of the determinant of a square matrix, let us give some examples. …
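For a concrete 2 × 2 example, the determinant is the familiar cross-product formula and the inverse satisfies AA⁻¹ = I; numpy computes both directly. The matrix below is illustrative:

```python
import numpy as np

# Determinant and inverse of a small square matrix (illustrative values).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
det_A = np.linalg.det(A)     # 2*3 - 1*1 = 5
A_inv = np.linalg.inv(A)     # satisfies A @ A_inv = I
```

A square matrix has an inverse exactly when its determinant is nonzero, which is the link between the two notions developed in this section.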

## CONFIDENCE INTERVALS

We shall assume that confidence is a number between 0 and 1 and use it in statements such as "a parameter θ lies in the interval [a, b] with 0.95 …
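In the simplest case of a normal sample with known σ, the 0.95 confidence interval for the mean is x̄ ± 1.96 σ/√n. A sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(7)

# 95% confidence interval for the mean of a normal sample with known sigma.
mu, sigma, n = 10.0, 3.0, 400
x = rng.normal(mu, sigma, size=n)

xbar = x.mean()
half_width = 1.96 * sigma / np.sqrt(n)
ci = (xbar - half_width, xbar + half_width)
```

The interpretation is about the procedure, not any single interval: in repeated samples, intervals constructed this way cover the true mean 95 percent of the time.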

## Error Components Model

The error components model is useful when we wish to pool time-series and cross-section data. For example, we may want to estimate production functions using data collected on the annual …

## Properties of α̂ and β̂

First, we obtain the means and the variances of the least squares estimators α̂ and β̂. For this purpose it is convenient to use the formulae (10.2.12) and (10.2.16) rather …

## Nonparametric Estimation

In parametric estimation we can use two methods. (1) Distribution-specific method. In the distribution-specific method, the distribution is assumed to belong to a class of functions that are characterized by …

## SIMULTANEOUS LINEAR EQUATIONS

Throughout this section, A will denote an n × n square matrix and X a matrix that is not necessarily square. Generally, we shall assume that X is n × …

## BAYESIAN METHOD

We have stated earlier that the goal of statistical inference is not merely to obtain an estimator but to be able to say, using the estimator, where the true value …
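The Bayesian approach can be illustrated with the conjugate binomial case: with a Beta(1, 1) (uniform) prior on a success probability, observing s successes in n trials gives a Beta(1+s, 1+n−s) posterior, whose mean is (1+s)/(2+n). The counts below are illustrative:

```python
# Bayesian updating for a binomial proportion with a Beta(1, 1) prior:
# after s successes in n trials the posterior is Beta(1 + s, 1 + n - s).
n, s = 20, 14
a_post, b_post = 1 + s, 1 + n - s
posterior_mean = a_post / (a_post + b_post)   # (1 + s) / (2 + n) = 15/22
```

Unlike a point estimate alone, the full posterior lets us make direct probability statements about where the true value lies, which is exactly the goal stated in this section.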

## TIME SERIES REGRESSION

In this section we consider the pth-order autoregressive model

(13.2.1) yₜ = Σⱼ₌₁ᵖ βⱼyₜ₋ⱼ + εₜ, t = p+1, p+2, . . . , T,

where …
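The autoregressive model can be estimated by least squares, regressing yₜ on its own lags over t = p+1, ..., T. A sketch for p = 2 with illustrative coefficients:

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulate an AR(2) series y_t = b1*y_{t-1} + b2*y_{t-2} + e_t
# (illustrative stationary coefficients).
T, b1, b2 = 2000, 0.5, 0.3
e = rng.normal(size=T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = b1 * y[t - 1] + b2 * y[t - 2] + e[t]

# Least squares: regress y_t on (y_{t-1}, y_{t-2}) for t = 3, ..., T.
Y = y[2:]
X = np.column_stack([y[1:-1], y[:-2]])
coef = np.linalg.solve(X.T @ X, X.T @ Y)
```

Although the regressors are lagged values of the dependent variable rather than fixed constants, least squares remains consistent here under stationarity.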

## Estimation of a2

We shall now consider the estimation of σ². If {uₜ} were observable, the most natural estimator of σ² would be the sample variance T⁻¹Σₜ₌₁ᵀ uₜ². Since {uₜ} are not observable, we …
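Since the errors are unobservable, the standard device is to replace them by the least squares residuals and divide by T − 2 (the degrees-of-freedom correction for the two estimated parameters), which gives an unbiased estimator of σ². A sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(9)

# Bivariate regression with illustrative parameters; sigma^2 = 2.25.
T, alpha, beta, sigma = 1000, 1.0, 2.0, 1.5
x = rng.normal(size=T)
y = alpha + beta * x + sigma * rng.normal(size=T)

# Least squares fit, then estimate sigma^2 from the residuals.
xbar, ybar = x.mean(), y.mean()
beta_hat = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
alpha_hat = ybar - beta_hat * xbar
resid = y - alpha_hat - beta_hat * x
sigma2_hat = np.sum(resid**2) / (T - 2)   # unbiased estimator of sigma^2
```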

## Various Measures of Closeness

The ambiguity of the first kind is resolved once we decide on a measure of closeness between the estimator and the parameter. There are many reasonable measures of closeness, however, …

## PROPERTIES OF THE SYMMETRIC MATRIX

Now we shall study the properties of symmetric matrices, which play a major role in multivariate statistical analysis. Throughout this section, A will denote an n × n symmetric matrix …
