Introduction to the Mathematical and Statistical Foundations of Econometrics
Determinants of Block-Triangular Matrices
Consider a square matrix A partitioned as

A = \begin{pmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{pmatrix},

where A_{1,1} and A_{2,2} are submatrices of size k × k and m × m, respectively, A_{1,2} is a k × m matrix, and A_{2,1} is an m × k matrix. The matrix A is block-triangular if either A_{1,2} or A_{2,1} is a zero matrix, and it is block-diagonal if both A_{1,2} and A_{2,1} are zero matrices. In the latter case,

A = \begin{pmatrix} A_{1,1} & O \\ O & A_{2,2} \end{pmatrix},
where the two O blocks represent zero elements. To each block A_{1,1} and A_{2,2} we can apply Theorem I.11, that is, A_{1,1} = P_1^T L_1 D_1 U_1 and A_{2,2} = P_2^T L_2 D_2 U_2; hence,

A = \begin{pmatrix} P_1^T L_1 D_1 U_1 & O \\ O & P_2^T L_2 D_2 U_2 \end{pmatrix}
  = \begin{pmatrix} P_1^T & O \\ O & P_2^T \end{pmatrix}
    \begin{pmatrix} L_1 & O \\ O & L_2 \end{pmatrix}
    \begin{pmatrix} D_1 & O \\ O & D_2 \end{pmatrix}
    \begin{pmatrix} U_1 & O \\ O & U_2 \end{pmatrix}
  = P^T L D U,

for instance. Then det(A) = det(P) · det(D) = det(P_1) · det(P_2) · det(D_1) · det(D_2) = det(A_{1,1}) · det(A_{2,2}). More generally, we have:
Theorem I.27: The determinant of a block-diagonal matrix is the product of the determinants of the diagonal blocks.
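Theorem I.27 is easy to check numerically. The following sketch (using NumPy, with arbitrary random blocks chosen purely for illustration) assembles a block-diagonal matrix and compares both sides of the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
A11 = rng.standard_normal((2, 2))   # k x k diagonal block
A22 = rng.standard_normal((3, 3))   # m x m diagonal block

# Assemble the block-diagonal matrix; the off-diagonal O blocks are zeros.
A = np.block([[A11, np.zeros((2, 3))],
              [np.zeros((3, 2)), A22]])

lhs = np.linalg.det(A)
rhs = np.linalg.det(A11) * np.linalg.det(A22)
assert np.isclose(lhs, rhs)  # det(A) = det(A11) * det(A22)
```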
Next, consider the lower block-triangular matrix

A = \begin{pmatrix} A_{1,1} & O \\ A_{2,1} & A_{2,2} \end{pmatrix},

where again A_{1,1} and A_{2,2} are k × k and m × m matrices, respectively, and A_{2,1} is an m × k matrix. Then it follows from Theorem I.25 that, for any m × k matrix C,

det(A) = det \begin{pmatrix} A_{1,1} & O \\ A_{2,1} - C A_{1,1} & A_{2,2} \end{pmatrix}.

If A_{1,1} is nonsingular, then we can choose C = A_{2,1} A_{1,1}^{-1} so that A_{2,1} - C A_{1,1} = O. In that case it follows from Theorem I.27 that det(A) = det(A_{1,1}) · det(A_{2,2}). If A_{1,1} is singular, then the rows of A_{1,1} are linearly dependent, and so are the first k rows of A. Hence, if A_{1,1} is singular, then A is singular; thus, by Theorem I.23, det(A) = det(A_{1,1}) · det(A_{2,2}) = 0. Consequently,
Theorem I.28: The determinant of a block-triangular matrix is the product of the determinants of the diagonal blocks.
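Both branches of the argument above can be illustrated numerically. This sketch (NumPy, random blocks chosen only for illustration) checks the nonsingular case and then the singular-A_{1,1} case, where the determinant must vanish:

```python
import numpy as np

rng = np.random.default_rng(1)
A11 = rng.standard_normal((2, 2))   # k x k (generically nonsingular)
A21 = rng.standard_normal((3, 2))   # m x k
A22 = rng.standard_normal((3, 3))   # m x m

# Lower block-triangular: the k x m upper-right block is zero.
A = np.block([[A11, np.zeros((2, 3))],
              [A21, A22]])
assert np.isclose(np.linalg.det(A),
                  np.linalg.det(A11) * np.linalg.det(A22))

# If A11 is singular, det(A) = 0 regardless of A21 and A22.
A11_singular = np.array([[1.0, 2.0],
                         [2.0, 4.0]])  # second row = 2 * first row
A_singular = np.block([[A11_singular, np.zeros((2, 3))],
                       [A21, A22]])
assert np.isclose(np.linalg.det(A_singular), 0.0)
```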
I.12. Determinants and Cofactors
Consider the n × n matrix

A = \begin{pmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{n,1} & \cdots & a_{n,n} \end{pmatrix}   (I.51)
and define the following matrix-valued function of A:
Definition I.18: The transformation ρ(A|i_1, i_2, ..., i_n) is a matrix formed by replacing all but the i_k-th element a_{k,i_k} by zeros in rows k = 1, ..., n of matrix (I.51). Similarly, the transformation κ(A|i_1, i_2, ..., i_n) is a matrix formed by replacing all but the i_k-th element a_{i_k,k} by zeros in columns k = 1, ..., n of matrix (I.51).
For example, in the 3 × 3 case,

ρ(A|2, 3, 1) = \begin{pmatrix} 0 & a_{1,2} & 0 \\ 0 & 0 & a_{2,3} \\ a_{3,1} & 0 & 0 \end{pmatrix},
\qquad
κ(A|2, 3, 1) = \begin{pmatrix} 0 & 0 & a_{1,3} \\ a_{2,1} & 0 & 0 \\ 0 & a_{3,2} & 0 \end{pmatrix}.
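The two transformations are straightforward to implement. In this sketch the helper names `rho` and `kappa` are mine, and the index lists are zero-based (so the text's ρ(A|2, 3, 1) becomes `rho(A, [1, 2, 0])`):

```python
import numpy as np

def rho(A, idx):
    """Keep only element A[k, idx[k]] in each row k (zero-based); zero the rest."""
    R = np.zeros_like(A)
    for k, i in enumerate(idx):
        R[k, i] = A[k, i]
    return R

def kappa(A, idx):
    """Keep only element A[idx[k], k] in each column k (zero-based); zero the rest."""
    K = np.zeros_like(A)
    for k, i in enumerate(idx):
        K[i, k] = A[i, k]
    return K

A = np.arange(1.0, 10.0).reshape(3, 3)
R = rho(A, [1, 2, 0])    # rho(A|2, 3, 1) in the text's one-based notation
K = kappa(A, [1, 2, 0])  # kappa(A|2, 3, 1)
# Each result keeps exactly one element per row (rho) or per column (kappa).
assert np.count_nonzero(R) == 3 and np.count_nonzero(K) == 3
```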
Recall that a permutation of the numbers 1, 2, ..., n is an ordered set of these n numbers and that there are n! of these permutations, including the trivial permutation 1, 2, ..., n. Moreover, it is easy to verify that, for each permutation i_1, i_2, ..., i_n of 1, 2, ..., n, there exists a unique permutation j_1, j_2, ..., j_n such that ρ(A|i_1, i_2, ..., i_n) = κ(A|j_1, j_2, ..., j_n) and vice versa. Now define the function
δ(A) = \sum \det[ρ(A|i_1, i_2, \ldots, i_n)]
     = \sum \det[κ(A|i_1, i_2, \ldots, i_n)],   (I.52)

where the summation is over all permutations i_1, i_2, ..., i_n of 1, 2, ..., n.
Note that det[ρ(A|i_1, i_2, ..., i_n)] = ±a_{1,i_1} a_{2,i_2} ⋯ a_{n,i_n}, where the sign depends on how many row or column exchanges are needed to convert ρ(A|i_1, i_2, ..., i_n) into a diagonal matrix. If the number of exchanges is even, the sign is +; the sign is − if this number is odd. Clearly, this sign is the same as the sign of the determinant of the permutation matrix ρ(E_n|i_1, i_2, ..., i_n), where E_n is the n × n matrix with all elements equal to 1.
I will now show that δ(A) in (I.52) satisfies the axioms in Definition I.17, and thus:

Theorem I.29: The function δ(A) in (I.52) is the determinant of A: δ(A) = det(A).
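Theorem I.29 can be verified numerically by computing (I.52) directly, i.e., by summing det[ρ(A|i_1, ..., i_n)] over all n! permutations. A sketch (NumPy plus the standard library's `itertools.permutations`; the test matrix is an arbitrary random one):

```python
import numpy as np
from itertools import permutations

def delta(A):
    """delta(A) from (I.52): sum of det[rho(A|i1,...,in)] over all permutations."""
    n = A.shape[0]
    total = 0.0
    for idx in permutations(range(n)):
        R = np.zeros_like(A)
        for k, i in enumerate(idx):
            R[k, i] = A[k, i]  # keep one element per row, zeros elsewhere
        total += np.linalg.det(R)
    return total

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
assert np.isclose(delta(A), np.linalg.det(A))  # delta(A) = det(A)
```

Each term det[ρ(A|i_1, ..., i_n)] is ±a_{1,i_1} ⋯ a_{n,i_n}, so this is the Leibniz expansion of the determinant evaluated term by term.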
Proof: First, exchange two rows of A, say rows 1 and 2. The new matrix is P_{1,2}A, where P_{1,2} is the elementary permutation matrix involved, that is, the unit matrix with the first two columns exchanged. Then ρ(P_{1,2}A|i_1, i_2, ..., i_n) = P_{1,2}ρ(A|i_1, i_2, ..., i_n); hence, δ(P_{1,2}A) = det(P_{1,2})δ(A) = −δ(A). Thus, δ(A) satisfies axiom (a) in Definition I.17.
Second, let A be lower triangular. Then ρ(A|i_1, i_2, ..., i_n) is lower triangular but has at least one zero diagonal element for all permutations i_1, i_2, ..., i_n except for the trivial permutation 1, 2, ..., n. Thus, in this case δ(A) = det[ρ(A|1, 2, ..., n)] = det(A). The same applies, of course, to upper-triangular and diagonal matrices. Consequently, δ(A) satisfies axiom (b) in Definition I.17.
Finally, observe that ρ(AB|i_1, i_2, ..., i_n) is a matrix with elements \sum_{k=1}^{n} a_{m,k} b_{k,i_m} in position (m, i_m), m = 1, ..., n, and zeros elsewhere. Hence,

ρ(AB|i_1, i_2, ..., i_n) = A · ρ(B|i_1, i_2, ..., i_n),

which implies that

δ(AB) = det(A) · δ(B).   (I.53)
Now write B as B = P^T L D U, and observe from (I.53) and axiom (b) that δ(B) = δ((P^T L D)U) = det(P^T L D)δ(U) = det(P^T L D) · det(U) = det(B). The same applies to A. Thus,

δ(AB) = det(A) · det(B) = δ(A) · δ(B).

Q.E.D.
Next, consider the following transformation.
Definition I.19: The transformation τ(A|k, m) is a matrix formed by replacing all elements in row k and column m by zeros except element a_{k,m} itself.
For example, in the 3 × 3 case,

τ(A|2, 3) = \begin{pmatrix} a_{1,1} & a_{1,2} & 0 \\ 0 & 0 & a_{2,3} \\ a_{3,1} & a_{3,2} & 0 \end{pmatrix}.   (I.55)
Then it follows from (I.52) and Theorem I.29 that

\det[τ(A|k, m)] = \sum_{i_k = m} \det[ρ(A|i_1, i_2, \ldots, i_n)]
               = \sum_{i_m = k} \det[κ(A|i_1, i_2, \ldots, i_n)];   (I.56)
hence,
Theorem I.30: For n × n matrices A, det(A) = \sum_{m=1}^{n} \det[τ(A|k, m)] for k = 1, 2, ..., n, and det(A) = \sum_{k=1}^{n} \det[τ(A|k, m)] for m = 1, 2, ..., n.
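Theorem I.30 can also be checked directly. In this sketch the helper name `tau` and the zero-based row index are mine; the test matrix is an arbitrary random one:

```python
import numpy as np

def tau(A, k, m):
    """tau(A|k,m): zero out row k and column m except element A[k, m] (zero-based)."""
    T = np.zeros_like(A)
    keep = np.ones(A.shape, dtype=bool)
    keep[k, :] = False
    keep[:, m] = False
    T[keep] = A[keep]   # elements outside row k and column m are unchanged
    T[k, m] = A[k, m]   # the single surviving element of row k / column m
    return T

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
k = 1  # expand along (zero-based) row k
row_sum = sum(np.linalg.det(tau(A, k, m)) for m in range(4))
assert np.isclose(row_sum, np.linalg.det(A))
```

The analogous column version sums `tau(A, k, m)` over `k` for a fixed `m`.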
Now let us evaluate the determinant of the matrix in (I.55). Swap rows 1 and 2, and then successively swap columns 2 and 3 and columns 1 and 2. The total number of row and column exchanges is 3; hence,
\det[τ(A|2, 3)] = (-1)^3 \det \begin{pmatrix} a_{2,3} & 0 & 0 \\ 0 & a_{1,1} & a_{1,2} \\ 0 & a_{3,1} & a_{3,2} \end{pmatrix}
= (-1)^{2+3} a_{2,3} \det \begin{pmatrix} a_{1,1} & a_{1,2} \\ a_{3,1} & a_{3,2} \end{pmatrix}
= a_{2,3} \, \mathrm{cof}_{2,3}(A),

for instance, where cof_{2,3}(A) is the cofactor of element a_{2,3} of A. Note that the second equality follows from Theorem I.27. Similarly, we need k − 1 row exchanges and m − 1 column exchanges to convert τ(A|k, m) into a block-diagonal matrix. More generally,
Definition I.20: The cofactor cof_{k,m}(A) of an n × n matrix A is the determinant of the (n − 1) × (n − 1) matrix formed by deleting row k and column m of A, multiplied by (−1)^{k+m}.
Thus, Theorem I.30 now reads as follows:
Theorem I.31: For n × n matrices A, det(A) = \sum_{m=1}^{n} a_{k,m} \mathrm{cof}_{k,m}(A) for k = 1, 2, ..., n, and also det(A) = \sum_{k=1}^{n} a_{k,m} \mathrm{cof}_{k,m}(A) for m = 1, 2, ..., n.
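Theorem I.31 is the familiar cofactor (Laplace) expansion. A sketch implementing Definition I.20 directly (the helper name `cof` and the zero-based indices are mine; the test matrix is an arbitrary random one):

```python
import numpy as np

def cof(A, k, m):
    """Cofactor of A[k, m] (zero-based): (-1)^(k+m) times the minor obtained
    by deleting row k and column m."""
    minor = np.delete(np.delete(A, k, axis=0), m, axis=1)
    return (-1) ** (k + m) * np.linalg.det(minor)

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))

# Expansion along (zero-based) row k = 0:
det_by_row = sum(A[0, m] * cof(A, 0, m) for m in range(4))
# Expansion along (zero-based) column m = 2:
det_by_col = sum(A[k, 2] * cof(A, k, 2) for k in range(4))

assert np.isclose(det_by_row, np.linalg.det(A))
assert np.isclose(det_by_col, np.linalg.det(A))
```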