Introduction to the Mathematical and Statistical Foundations of Econometrics

Subspaces Spanned by the Columns and Rows of a Matrix

The result in Theorem I.9 also reads as follows: A = BU, where B = P^{-1}L is a nonsingular matrix. Moreover, note that the size of U is the same as the size of A; that is, if A is an m x n matrix, then so is U. If we denote the columns of U by u1, ..., un, it follows therefore that the columns a1, ..., an of A are equal to Bu1, ..., Bun, respectively. This suggests that the subspace spanned by the columns of A has the same dimension as the subspace spanned by the columns of U. To prove this conjecture, let VA be the subspace spanned by the columns of A and let VU be the subspace spanned by the columns of U. Without loss of generality we may reorder the columns of A such that the first k columns a1, ..., ak of A form a basis for VA. Now suppose that u1, ..., uk are linearly dependent, that is, there exist constants c1, ..., ck not all equal to zero such that ∑_{j=1}^{k} cj uj = 0. But then also ∑_{j=1}^{k} cj Buj = ∑_{j=1}^{k} cj aj = 0, which by the linear independence of a1, ..., ak implies that all the cj's are equal to zero.

Hence, u1, ..., uk are linearly independent, and therefore the dimension of VU is greater than or equal to the dimension of VA. But because U = B^{-1}A, the same argument applies the other way around: the dimension of VA is greater than or equal to the dimension of VU. Thus, we have

Theorem I.12: The subspace spanned by the columns of A has the same dimension as the subspace spanned by the columns of the corresponding echelon matrix U in Theorem I.9.
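As a numerical aside (not part of the original text), Theorem I.12 can be illustrated with Python's sympy package, whose rref() method returns a reduced row echelon form that plays the role of the echelon matrix U of Theorem I.9; the matrix A below is an arbitrary example chosen for the sketch.

```python
# A minimal sketch of Theorem I.12 using sympy's exact arithmetic.
# The matrix A is an arbitrary example; rref() returns a reduced row
# echelon form, which spans a column space of the same dimension as A's.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])          # rank 2 by construction (second row = 2 * first row)

U, pivot_cols = A.rref()         # reduced row echelon form and pivot column indices

print(A.rank(), U.rank())        # 2 2: dim of the column space of A and of U
print(pivot_cols)                # (0, 1): the columns of U with a pivot
```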

Next, I will show that

Theorem I.13: The subspace spanned by the columns of A^T is the same as the subspace spanned by the columns of the transpose U^T of the corresponding echelon matrix U in Theorem I.9.

Proof: Let A be an m x n matrix. The equality A = BU implies that A^T = U^T B^T. The subspace spanned by the columns of A^T consists of all vectors x ∈ ℝ^n for which there exists a vector c1 ∈ ℝ^m such that x = A^T c1, and similarly the subspace spanned by the columns of U^T consists of all vectors x ∈ ℝ^n for which there exists a vector c2 ∈ ℝ^m such that x = U^T c2. If we let c2 = B^T c1, the theorem follows. Q.E.D.
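A companion check of Theorem I.13 (again only an illustrative sketch, using rref() in place of the echelon matrix of Theorem I.9): if A and its echelon form U share the same row space, then stacking the rows of U underneath the rows of A adds no new directions, so the rank does not increase.

```python
# A minimal sketch of Theorem I.13: A and its echelon form U share the
# same row space (the column space of A^T).  The example matrix is arbitrary.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

U, _ = A.rref()                            # reduced row echelon form of A

stacked = Matrix.vstack(A, U)              # rows of A together with rows of U
print(A.rank(), U.rank(), stacked.rank())  # 2 2 2: the two row spaces coincide
```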

Now let us have a closer look at a typical echelon matrix:

⎛ 0 ... 0   ©  ... *   *  ... *   *  ... *   *  ... * ⎞
⎜ 0 ... 0   0  ... 0   ©  ... *   *  ... *   *  ... * ⎟
⎜ 0 ... 0   0  ... 0   0  ... 0   ©  ... *   *  ... * ⎟          (I.36)
⎜ 0 ... 0   0  ... 0   0  ... 0   0  ... 0   ©  ... * ⎟
⎝ 0 ... 0   0  ... 0   0  ... 0   0  ... 0   0  ... 0 ⎠

where each symbol © indicates the first nonzero element of the row involved, called the pivot. The elements indicated by * may be zero or nonzero. Because the elements below a pivot © are zero, the columns involved are linearly independent. In particular, it is impossible to write a column with a pivot as a linear combination of the previous ones. Moreover, it is easy to see that all the columns without a pivot can be formed as linear combinations of the columns with a pivot. Consequently, the columns of U with a pivot form a basis for the subspace spanned by the columns of U. Similarly, the columns of the transpose U^T that contain a pivot, which are just the nonzero rows of U, are linearly independent, and the remaining columns of U^T are zero vectors; hence, they form a basis for the subspace spanned by the columns of U^T. Because the number of rows of U with a pivot is the same as the number of columns with a pivot, we have

Theorem I.14: The dimension of the subspace spanned by the columns of an echelon matrix U is the same as the dimension of the subspace spanned by the columns of its transpose U^T.
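As an illustration of the pivot argument behind Theorem I.14 (a sketch, not from the book), take a small echelon matrix of the form (I.36) and compare the dimension of its column space with that of its transpose:

```python
# A minimal illustration of the pivot argument behind Theorem I.14.
from sympy import Matrix

# An echelon matrix with 3 pivots (in columns 1, 2, and 4) and one zero row.
U = Matrix([[0, 2, 1, 4, 5],
            [0, 0, 3, 1, 2],
            [0, 0, 0, 0, 7],
            [0, 0, 0, 0, 0]])

print(U.rank(), U.T.rank())        # 3 3: column space and row space have the same dimension

# The pivot columns are linearly independent and form a basis:
pivot_columns = Matrix.hstack(U[:, 1], U[:, 2], U[:, 4])
print(pivot_columns.rank())        # 3
```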

If we combine Theorems I.12, I.13, and I.14, it now follows that Theorem I.6 holds.

The subspace spanned by the columns of a matrix A is called the column space of A and is denoted by R(A). The row space of A is the space spanned by the columns of A^T; that is, the row space of A is R(A^T). Theorem I.14 implies that the dimension of R(A) is equal to the dimension of R(A^T).
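For completeness, sympy can also produce bases of R(A) and R(A^T) directly; the example matrix below is arbitrary, and the equality of the two dimensions is exactly what the preceding paragraph asserts.

```python
# Column space R(A) and row space R(A^T) of an example matrix.
from sympy import Matrix

A = Matrix([[1, 2, 0],
            [0, 1, 1],
            [1, 3, 1]])            # third row = first + second, so rank 2

col_basis = A.columnspace()        # list of basis vectors for R(A)
row_basis = A.rowspace()           # list of basis vectors for the row space R(A^T)

print(len(col_basis), len(row_basis))   # 2 2: the two dimensions are equal
```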

There is also another space associated with a matrix A, namely, the null space of A, denoted by N(A). This is the space of all vectors x for which Ax = 0, and it is easy to verify that it is itself a subspace (of ℝ^n if A is an m x n matrix). If A is square and nonsingular, then N(A) = {0}; if not, then because A = BU with B nonsingular, Ax = 0 if and only if Ux = 0, and thus N(A) = N(U), where U is the echelon matrix in Theorem I.9.

To determine the dimension of N(U), suppose that A is an m x n matrix with rank r, and thus U is an m x n matrix with rank r. Let R be an n x n permutation matrix such that the first r columns of UR are the r columns of U with a pivot. Clearly, the dimension of N(U) is the same as the dimension of N(UR). We can partition UR as (U_r, U_{n-r}), where U_r is the m x r matrix consisting of the columns of U with a pivot, and U_{n-r} is the m x (n - r) matrix consisting of the other columns of U. Partitioning a vector x in N(UR) accordingly, that is, x = (x_r^T, x_{n-r}^T)^T, we have

URx = U_r x_r + U_{n-r} x_{n-r} = 0.          (I.37)

It follows from Theorem I.5 that U_r^T U_r is invertible; hence, it follows from (I.37) and the partition x = (x_r^T, x_{n-r}^T)^T that

⎛   x_r   ⎞   ⎛ -(U_r^T U_r)^{-1} U_r^T U_{n-r} ⎞
⎜         ⎟ = ⎜                                 ⎟ x_{n-r}.          (I.38)
⎝ x_{n-r} ⎠   ⎝            I_{n-r}              ⎠

Therefore, N(UR) is spanned by the columns of the matrix in (I.38), which has rank n - r, and thus the dimension of N(A) is n - r. By the same argument it follows that the dimension of N(A^T) is m - r.
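The construction in (I.37) and (I.38) can be mimicked numerically. The sketch below is only an illustration with a made-up matrix A: it uses sympy's rref() in place of the echelon matrix U of Theorem I.9, rebuilds the matrix in (I.38), and checks that its rank, and hence the dimension of N(A), equals n - r.

```python
# Reproducing the matrix in (I.38) for an arbitrary example A with
# m = 3, n = 4, and rank r = 2.  Its columns span N(UR), which has the
# same dimension as N(A).
from sympy import Matrix, eye

A = Matrix([[1, 2, 1, 0],
            [0, 1, 1, 1],
            [1, 3, 2, 1]])              # third row = first + second

U, piv = A.rref()                       # echelon form and pivot column indices
r, n = len(piv), A.cols
nonpiv = [j for j in range(n) if j not in piv]

Ur  = Matrix.hstack(*[U[:, j] for j in piv])     # columns of U with a pivot (m x r)
Unr = Matrix.hstack(*[U[:, j] for j in nonpiv])  # the other columns (m x (n - r))

# Top block of (I.38): x_r = -(Ur^T Ur)^(-1) Ur^T Unr x_{n-r}, stacked on I_{n-r}.
top   = -(Ur.T * Ur).inv() * Ur.T * Unr
basis = Matrix.vstack(top, eye(n - r))

print(basis.rank(), n - r)              # 2 2: the matrix in (I.38) has rank n - r
print(len(A.nullspace()), n - r)        # 2 2: sympy agrees that dim N(A) = n - r
```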

The subspace N(A^T) is called the left null space of A because it consists of all vectors y for which y^T A = 0^T.

In summary, it has been shown that the following results hold.

Theorem I.15: Let A be an m x n matrix with rank r. Then R(A) and R(A^T) have dimension r, N(A) has dimension n - r, and N(A^T) has dimension m - r.
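Theorem I.15 is easy to spot-check numerically; the sketch below (an illustration only, with an arbitrary 3 x 5 matrix of rank 2) compares the four dimensions with the theorem's predictions.

```python
# Spot-checking Theorem I.15 on an arbitrary 3 x 5 matrix of rank 2.
from sympy import Matrix

A = Matrix([[1, 0, 2, 1, 3],
            [0, 1, 1, 1, 1],
            [1, 1, 3, 2, 4]])          # third row = first + second

m, n = A.shape
r = A.rank()

print(r, len(A.T.nullspace()), len(A.nullspace()))
# prints: 2 1 3  ->  dim R(A) = dim R(A^T) = r = 2,
#                    dim N(A^T) = m - r = 1,  dim N(A) = n - r = 3
```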


Figure I.4. Projection of b on the subspace spanned by a.

Note that in general the rank of a product AB is not determined by the ranks r and s of A and B, respectively. At first sight one might guess that the rank of AB is min(r, s), but that in general is not true. For example, let A = (1, 0) and B^T = (0, 1). Then A and B have rank 1, but AB = 0, which has rank zero. The only thing we know for sure is that the rank of AB cannot exceed min(r, s). Of course, if A and B are conformable, invertible matrices, then AB is invertible; hence, the rank of AB is equal to the rank of A and the rank of B, but that is a special case. The same applies to the case in Theorem I.5.
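The counterexample in the text, together with the bound that the rank of AB cannot exceed min(r, s), can be verified in a few lines (again just an illustrative sketch; the invertible matrices C and D are arbitrary examples):

```python
# rank(AB) can be strictly smaller than min(rank(A), rank(B)).
from sympy import Matrix

A = Matrix([[1, 0]])        # 1 x 2, rank 1
B = Matrix([[0], [1]])      # 2 x 1, rank 1 (so B^T = (0, 1))

print((A * B).rank())       # 0: the product is the 1 x 1 zero matrix

C = Matrix([[1, 2], [3, 4]])            # invertible, rank 2
D = Matrix([[2, 0], [1, 1]])            # invertible, rank 2
print((C * D).rank())                   # 2: a product of invertible matrices is invertible
```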
