Mathematical Expectation

With these new integrals introduced, we can now answer the second question stated at the end of the introduction: How do we define the mathematical expectation if the distribution of …

Hypotheses Testing

Theorem 5.19 is the basis for hypotheses testing in linear regression analysis. First, consider the problem of whether a particular component of the vector Xj of explanatory variables in model …

The Inverse and Transpose of a Matrix

I will now address the question of whether, for a given m × n matrix A, there exists an n × m matrix B such that, with y = Ax, …

Sets in Euclidean Spaces

An open ε-neighborhood of a point x in a Euclidean space ℝ^k is a set of the form Nε(x) = {y ∈ ℝ^k : ||y − x|| < ε}, …

The Poisson Distribution

A random variable X is Poisson(λ)-distributed if, for k = 0, 1, 2, 3, ... and some λ > 0, P(X = k) = exp(−λ)λ^k/k! (4.7). Recall that the Poisson probabilities …
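
A quick numerical illustration of (4.7), as a minimal Python sketch; the parameter value λ = 3.5 below is an arbitrary choice, not taken from the text:

```python
import math

lam = 3.5  # an arbitrary example value of the Poisson parameter lambda > 0

def poisson_pmf(k, lam):
    """P(X = k) = exp(-lambda) * lambda**k / k!  -- formula (4.7)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# The probabilities over k = 0, 1, 2, ... sum to one (up to truncation error).
total = sum(poisson_pmf(k, lam) for k in range(100))
print(total)                  # ~ 1.0
print(poisson_pmf(2, lam))    # P(X = 2)
```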

Convergence of Characteristic Functions and Distributions

In this appendix I will provide the proof of the univariate version of Theorem 6.22. Let Fn be a sequence of distribution functions on ℝ with corresponding characteristic functions φn …

Eigenvectors

By Definition I.21 it follows that if λ is an eigenvalue of an n × n matrix A, then A − λIn is a singular matrix (possibly complex-valued!). Suppose …
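
A small numerical check of this implication, sketched in Python with numpy (the 2 × 2 matrix A below is an invented example):

```python
import numpy as np

# An arbitrary (non-symmetric) example matrix; its eigenvalues happen to be complex.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigenvalues = np.linalg.eigvals(A)      # here +i and -i
for lam in eigenvalues:
    M = A - lam * np.eye(A.shape[0])    # A - lambda * I_n
    # Singularity: the determinant of A - lambda*I_n is (numerically) zero.
    print(lam, np.linalg.det(M))
```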

The Multivariate Normal Distribution

Now let the components of X = (x1, ..., xn)T be independent, standard normally distributed random variables. Then E(X) = 0 (∈ ℝ^n) and Var(X) = In. Moreover, the joint density f …
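
A simulation sketch of E(X) = 0 and Var(X) = In (assuming numpy; the dimension and the sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                        # dimension of X
draws = rng.standard_normal((100_000, n))    # rows are independent draws of X

print(draws.mean(axis=0))                    # ~ (0, 0, 0): sample analogue of E(X) = 0
print(np.cov(draws, rowvar=False))           # ~ identity matrix I_n: sample analogue of Var(X)
```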

Linear Regression with Normal Errors

Let Zj = (Yj, Xj)T, j = 1, ..., n, be independent random vectors such that Yj = α0 + β0^T Xj + Uj, Uj | Xj ~ N(0, σ0^2), where the latter …
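
To make the model concrete, a small simulation-and-OLS sketch in Python (numpy only; the "true" values of α0, β0, σ0 below are invented for the example, and OLS is used only to illustrate the setup, not as the estimator derived in the text):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
alpha0, beta0, sigma0 = 1.0, np.array([2.0, -0.5]), 1.5   # invented "true" values

X = rng.standard_normal((n, 2))              # explanatory variables X_j
U = rng.normal(0.0, sigma0, size=n)          # U_j | X_j ~ N(0, sigma0^2)
Y = alpha0 + X @ beta0 + U                   # Y_j = alpha0 + beta0' X_j + U_j

# OLS: regress Y on a constant and X
Z = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
print(coef)                                  # ~ [1.0, 2.0, -0.5]
```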

Appendix III — A Brief Review of Complex Analysis

III.1. The Complex Number System Complex numbers have many applications. The complex number system allows computations to be conducted that would be impossible to perform in the real world. In …

Liapounov’s Inequality

Liapounov’s inequality follows from Hölder’s inequality (2.22) by replacing Y with 1: E(|X|) ≤ (E(|X|^p))^(1/p), where p ≥ 1. 2.6.3. Minkowski’s Inequality If for some p ≥ 1, E[|X|^p] …
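
A Monte Carlo sanity check of Liapounov’s inequality (a sketch assuming numpy; the lognormal test distribution and p = 3 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)   # an arbitrary test variable
p = 3.0

lhs = np.mean(np.abs(x))                   # sample analogue of E|X|
rhs = np.mean(np.abs(x)**p) ** (1.0 / p)   # sample analogue of (E|X|^p)^(1/p)
print(lhs, rhs, lhs <= rhs)                # lhs should not exceed rhs
```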

Modes of Convergence

5.3. Introduction Toss a fair coin n times, and let Yj = 1 if the outcome of the jth toss is heads and Yj = −1 if the …
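
The coin-tossing example can be simulated directly; a minimal sketch assuming numpy, with the sample average of the Yj drifting toward zero as n grows:

```python
import numpy as np

rng = np.random.default_rng(7)
for n in (10, 100, 10_000, 1_000_000):
    # Y_j = +1 for heads, -1 for tails, each with probability 1/2
    y = rng.choice([1, -1], size=n)
    print(n, y.mean())    # sample average approaches E(Y_j) = 0 as n grows
```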

Elementary Matrices and Permutation Matrices

Let A be the m × n matrix in (I.14). An elementary m × m matrix E is a matrix such that the effect of EA is the addition of …
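
A small numpy sketch of such an elementary matrix in action (the 3 × 4 matrix A and the multiple added are invented for the example):

```python
import numpy as np

A = np.arange(12.0).reshape(3, 4)   # an arbitrary 3 x 4 matrix

# Elementary matrix: the identity with an extra 2 in position (3, 1),
# so E @ A adds 2 times row 1 of A to row 3 of A.
E = np.eye(3)
E[2, 0] = 2.0

print(E @ A)
print(A[2] + 2.0 * A[0])            # equals the third row of E @ A
```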

Transformations of Discrete Random Variables and Vectors

In the discrete case, the question “Given a random variable or vector X and a Borel-measurable function or mapping g(x), how is the distribution of Y = g(X) related …

Dependent Laws of Large Numbers and Central Limit Theorems

In Chapter 6 I focused on the convergence of sums of i.i.d. random variables, in particular the law of large numbers and the central limit theorem. However, macroeconomic and …

Eigenvalues and Eigenvectors of Symmetric Matrices

On the basis of (I.60) it is easy to show that, in the case of a symmetric matrix A, β = 0 and b = 0: Theorem I.34: The eigenvalues …
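
A numerical illustration that a symmetric matrix has real eigenvalues (a sketch assuming numpy; the matrix is an arbitrary symmetrized random matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = B + B.T                               # symmetrize an arbitrary matrix

eigenvalues = np.linalg.eigvals(A)        # general routine that allows complex output
print(eigenvalues)
print(np.max(np.abs(eigenvalues.imag)))   # ~ 0: imaginary parts vanish for symmetric A
```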

Follows now from Theorem 5.2. Q.E.D.

Note that this result holds regardless of whether the matrix BΣB^T is nonsingular or not. In the latter case the normal distribution involved is called “singular”: Definition 5.2: An n …

The Tobit Model

Let Zj = (Yj, Xj^T)T, j = 1, ..., n, be independent random vectors such that Yj = max(Yj*, 0), where Yj* = α0 + β0^T Xj + Uj with Uj …
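
A simulation sketch of the censoring mechanism in this model (assuming numpy; the parameter values are invented, and Y_star stands for the latent variable Yj*):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10
alpha0, beta0, sigma0 = 0.5, np.array([1.0]), 1.0   # invented "true" values

X = rng.standard_normal((n, 1))
U = rng.normal(0.0, sigma0, size=n)
Y_star = alpha0 + X @ beta0 + U      # latent variable Y_j*
Y = np.maximum(Y_star, 0.0)          # observed Y_j = max(Y_j*, 0): negative draws are censored to 0

print(np.column_stack([Y_star, Y]))
```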

The Complex Exponential Function

Recall that, for real-valued x, the exponential function e^x, also denoted by exp(x), has the series representation e^x = Σ_{k=0}^∞ x^k/k!. The property e^(x+y) = e^x e^y corresponds to the equality Σ_{k=0}^∞ (x + …
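
A numerical check of the series representation and of the property exp(x + y) = exp(x)exp(y), using only the Python standard library (the values of x, y and the truncation point of the series are arbitrary):

```python
import math

def exp_series(x, terms=30):
    """Partial sum of the series exp(x) = sum_{k>=0} x**k / k!."""
    return sum(x**k / math.factorial(k) for k in range(terms))

x, y = 0.7, -1.3
print(exp_series(x), math.exp(x))                        # series matches exp(x)
print(exp_series(x + y), exp_series(x) * exp_series(y))  # exp(x+y) = exp(x) * exp(y)
```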

Expectations of Products of Independent Random Variables

Let X and Y be independent random variables, and let f and g be Borel-measurable functions on ℝ. I will show now that E[f(X)g(Y)] = (E[f(X)])(E …
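
A Monte Carlo sketch of this factorization (assuming numpy; the distributions of X and Y and the functions f, g are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 1_000_000
X = rng.standard_normal(n)               # X and Y drawn independently
Y = rng.exponential(scale=2.0, size=n)

f = np.cos                               # arbitrary Borel-measurable f
g = np.sqrt                              # arbitrary Borel-measurable g (Y >= 0 here)

print(np.mean(f(X) * g(Y)))              # sample analogue of E[f(X)g(Y)]
print(np.mean(f(X)) * np.mean(g(Y)))     # sample analogue of E[f(X)]E[g(Y)]; agrees up to noise
```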

Convergence in Probability and the Weak Law of Large Numbers

Let Xn be a sequence of random variables (or vectors) and let X be a random or constant variable (or conformable vector). Definition 6.1: We say that Xn converges in …

Gaussian Elimination of a Square Matrix and the Gauss-Jordan Iteration for Inverting a Matrix

I.6.1. Gaussian Elimination of a Square Matrix The results in the previous section are the tools we need to derive the following result: Theorem I.8: Let A be a square …

Transformations of Absolutely Continuous Random Variables

If X is absolutely continuously distributed, with distribution function F(x) = ∫_{−∞}^x f(u)du, the derivation of the distribution function of Y = g(X) is less trivial. Let us assume …
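
For a strictly increasing g we have P(Y ≤ y) = P(X ≤ g^(-1)(y)); a simulation sketch assuming numpy, with X standard normal and g(x) = exp(x) as an invented example:

```python
import math
import numpy as np

rng = np.random.default_rng(2024)
x = rng.standard_normal(500_000)
y = np.exp(x)                                  # Y = g(X) with g(x) = exp(x), strictly increasing

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# For strictly increasing g: F_Y(y) = P(X <= g^{-1}(y)) = F_X(ln y).
for y0 in (0.5, 1.0, 2.0, 5.0):
    empirical = np.mean(y <= y0)               # empirical distribution function of Y at y0
    implied = std_normal_cdf(math.log(y0))     # F_X(g^{-1}(y0)) from the change of variables
    print(y0, empirical, implied)              # close up to simulation noise
```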

Weak Laws of Large Numbers for Stationary Processes

I will show now that covariance stationary time series processes with a vanishing memory obey a weak law of large numbers and then specialize this result to strictly stationary processes. …

Positive Definite and Semidefinite Matrices

Another set of corollaries of Theorem I.36 concerns positive (semi)definite matrices. Most of the symmetric matrices we will encounter in econometrics are positive (semi)definite or negative (semi)definite. Therefore, the following …

Conditional Distributions of Multivariate Normal Random Variables

Let Y be a scalar random variable and X be a k-dimensional random vector. Assume that Y and X are jointly normally distributed, where μY = E(Y), μX = E(X), and ΣYY = Var(Y), ΣYX = Cov(Y, …
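
A sketch of the standard conditional-moment formulas for the jointly normal case, E(Y|X) = μY + ΣYX ΣXX^(-1)(X − μX) and Var(Y|X) = ΣYY − ΣYX ΣXX^(-1) ΣXY, checked by a crude simulation (assuming numpy; the mean vector and covariance matrix below are invented):

```python
import numpy as np

rng = np.random.default_rng(8)

# Invented joint normal for (Y, X1, X2): mean vector and covariance matrix.
mu = np.array([1.0, 0.0, 2.0])                 # (mu_Y, mu_X)
Sigma = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])            # [[S_YY, S_YX], [S_XY, S_XX]]

S_YY, S_YX, S_XX = Sigma[0, 0], Sigma[0, 1:], Sigma[1:, 1:]

# Conditional moments of Y given X = x_obs:
x_obs = np.array([0.5, 1.0])
cond_mean = mu[0] + S_YX @ np.linalg.solve(S_XX, x_obs - mu[1:])
cond_var = S_YY - S_YX @ np.linalg.solve(S_XX, S_YX)
print(cond_mean, cond_var)

# Crude check: among draws with X close to x_obs, Y has roughly this mean and variance.
draws = rng.multivariate_normal(mu, Sigma, size=2_000_000)
near = np.all(np.abs(draws[:, 1:] - x_obs) < 0.05, axis=1)
print(draws[near, 0].mean(), draws[near, 0].var())
```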

Asymptotic Properties of ML Estimators

8.4.1. Introduction Without the conditions (c) in Definition 8.1, the solution θ0 = argmax_{θ∈Θ} E[ln(Ln(θ))] may not be unique. For example, if Zj = cos(Xj + θ0) with …

The Complex Logarithm

Like the natural logarithm ln(·), the complex logarithm log(z), z ∈ ℂ, is a complex number a + i·b = log(z) such that exp(a + i· …

Moment-Generating Functions and Characteristic Functions

2.8.1. Moment-Generating Functions The moment-generating function of a bounded random variable X (i.e., P[|X| ≤ M] = 1 for some positive real number M < ∞) is …
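
A numerical sketch (assuming numpy) of the moment-generating function of a bounded random variable, here a Bernoulli(p) example with p = 0.3 chosen arbitrarily; a numerical derivative of m(t) at t = 0 recovers E(X):

```python
import numpy as np

rng = np.random.default_rng(13)
p = 0.3
x = rng.binomial(1, p, size=1_000_000)     # bounded random variable: Bernoulli(p), |X| <= 1

def mgf(t):
    """Sample analogue of m(t) = E[exp(t*X)]."""
    return np.mean(np.exp(t * x))

print(mgf(1.0), 1 - p + p * np.exp(1.0))   # matches the known Bernoulli MGF at t = 1
h = 1e-4
print((mgf(h) - mgf(-h)) / (2 * h))        # central difference ~ m'(0) = E(X) = p
```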

The Uniform Law of Large Numbers and Its Applications

6.4.1. The Uniform Weak Law of Large Numbers In econometrics we often have to deal with means of random functions. A random function is a function that is a random …

The Gauss-Jordan Iteration for Inverting a Matrix

The Gaussian elimination of the matrix A in the first example in the previous section suggests that this method can also be used to compute the inverse of A as …
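
A minimal Python/numpy sketch of Gauss-Jordan inversion applied to the augmented matrix [A | I] (the pivoting details and the 2 × 2 test matrix are my own choices for the example, not taken from the text):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix by Gauss-Jordan iteration on the augmented matrix [A | I]."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])          # augmented matrix [A | I]
    for col in range(n):
        # simple partial pivoting: swap in the row with the largest pivot candidate
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]            # scale the pivot row so the pivot equals 1
        for row in range(n):
            if row != col:                   # eliminate the column entry in every other row
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                        # right-hand block now holds the inverse of A

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(gauss_jordan_inverse(A))               # [[ 3, -1], [-5, 2]]
print(np.linalg.inv(A))                      # agrees
```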

Transformations of Absolutely Continuous Random Vectors 4.4.1. The Linear Case

Let X = (X1, X2)T be a bivariate random vector with distribution function F(x) = ∫_{−∞}^{x1} ∫_{−∞}^{x2} f(u1, u2) du2 du1, where x = (x1, x2)T, u = (u1, u2)T. In this section I will derive the …

Mixing Conditions

Inspection of the proof of Theorem 7.5 reveals that the independence assumption can be relaxed. We only need independence of an arbitrary set A in F_{−∞}^t and an arbitrary set …

Generalized Eigenvalues and Eigenvectors

The concepts of generalized eigenvalues and eigenvectors play a key role in cointegration analysis. Cointegration analysis is an advanced econometric time series topic and will therefore not likely be covered …
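
Generalized eigenvalues λ solve det(A − λB) = 0; a small numpy sketch (the matrices A and B are invented, and B is taken nonsingular so the problem reduces to a standard eigenvalue problem for B^(-1)A):

```python
import numpy as np

# Arbitrary example matrices; B is positive definite, hence nonsingular.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Generalized eigenvalues solve det(A - lambda*B) = 0.  With B nonsingular this is
# equivalent to the standard eigenvalue problem for B^{-1} A.
lams = np.linalg.eigvals(np.linalg.solve(B, A))
for lam in lams:
    print(lam, np.linalg.det(A - lam * B))   # determinants ~ 0
```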

Independence of Linear and Quadratic Transformations of Multivariate Normal Random Variables

Let X be distributed Nn(0, In), that is, X is n-variate, standard, normally distributed. Consider the linear transformations Y = BX, where B is a k × n matrix …
