Many courses in linear algebra discuss determinants, since such a discussion empowers us to compute eigenvalues, which find their way into many STEM applications. Yet the theory of determinants is rather challenging to set up in a rigorous manner. To motivate our discussion, we will analyse the $2 \times 2$ case.
Let
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
denote a $2 \times 2$ matrix.
Lemma 1. $A$ is injective if and only if $ad - bc \neq 0$.
Proof. Now if $a = c = 0$, then $A$ has a zero column and thus is not injective. Furthermore, $ad - bc = 0$. Therefore, suppose $a \neq 0$ without loss of generality. We make the row operation $R_2 \mapsto R_2 - \frac{c}{a} R_1$ to obtain
$$\begin{pmatrix} a & b \\ 0 & d - \frac{cb}{a} \end{pmatrix}.$$
Recall that $A$ is injective if and only if the row-reduced matrix is, which holds if and only if $d - \frac{cb}{a} \neq 0$, i.e. $ad - bc \neq 0$. $\square$
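As a quick sanity check (purely illustrative, and not needed for the proof), we can reproduce the elimination step in SymPy and confirm that the surviving pivot is $(ad - bc)/a$, and that a concrete matrix with $ad - bc = 0$ fails to have full rank, i.e. is not injective:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b],
               [c, d]])

# Eliminate the leading entry of row 2 (this assumes a != 0).
R1, R2 = A[0, :], A[1, :]
reduced = sp.Matrix.vstack(R1, R2 - (c / a) * R1)

# The surviving pivot is d - cb/a = (ad - bc)/a.
pivot = reduced[1, 1]
print(sp.simplify(pivot - (a*d - b*c) / a))   # 0

# A concrete matrix with ad - bc = 0 has rank 1, so it is not injective.
M = sp.Matrix([[1, 2],
               [2, 4]])
print(M.rank())                               # 1
```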
The quantity $ad - bc$ is precisely the determinant of a $2 \times 2$ matrix.
Theorem 1. The determinant of $A$ is defined by
$$\det(A) = ad - bc.$$
Consequently, $A$ is invertible if and only if $\det(A) \neq 0$.
Proof. If $\det(A) \neq 0$, then $A$ is injective by Lemma 1. By the fundamental theorem of invertible matrices, $A$ is bijective, hence invertible. Conversely, if $\det(A) = 0$, then Lemma 1 tells us $A$ is not injective, so it cannot be invertible. $\square$
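As a quick illustration (again, just a check rather than part of the theory), SymPy's built-in determinant agrees with the definition $\det(A) = ad - bc$, and a numeric example on each side of the criterion behaves as Theorem 1 predicts:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])

# SymPy's determinant matches the definition det(A) = ad - bc.
print(sp.simplify(A.det() - (a*d - b*c)))   # 0

# det != 0: the matrix is invertible.
M = sp.Matrix([[1, 2], [3, 4]])
print(M.det())        # -2
print(M.inv())        # Matrix([[-2, 1], [3/2, -1/2]])

# det == 0: the matrix is singular (its rank drops below 2).
N = sp.Matrix([[1, 2], [2, 4]])
print(N.det(), N.rank())   # 0 1
```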
The determinant, therefore, tells us whether a matrix is invertible or not. This still raises a challenging question: in the case $A^{-1}$ actually exists, how do we compute it?
Theorem 2. If $A$ is invertible, then
$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
Proof. We have
$$A \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} a \\ c \end{pmatrix}, \qquad A \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} b \\ d \end{pmatrix}.$$
Taking inverses,
$$\begin{pmatrix} 1 \\ 0 \end{pmatrix} = A^{-1} \begin{pmatrix} a \\ c \end{pmatrix}, \qquad \begin{pmatrix} 0 \\ 1 \end{pmatrix} = A^{-1} \begin{pmatrix} b \\ d \end{pmatrix}.$$
Multiplying by relevant constants,
$$\begin{pmatrix} d \\ 0 \end{pmatrix} = A^{-1} \begin{pmatrix} ad \\ cd \end{pmatrix}, \qquad \begin{pmatrix} 0 \\ c \end{pmatrix} = A^{-1} \begin{pmatrix} bc \\ cd \end{pmatrix}.$$
Subtracting the equations,
$$\begin{pmatrix} d \\ -c \end{pmatrix} = A^{-1} \begin{pmatrix} ad - bc \\ 0 \end{pmatrix} = (ad - bc)\, A^{-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
Hence,
$$A^{-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \frac{1}{ad - bc} \begin{pmatrix} d \\ -c \end{pmatrix}.$$
A similar computation for $A^{-1} \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ yields
$$A^{-1} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \frac{1}{ad - bc} \begin{pmatrix} -b \\ a \end{pmatrix}.$$
Therefore,
$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}. \qquad \square$$
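The formula is also easy to verify mechanically. Here is a SymPy sketch (illustrative only) confirming that the matrix in Theorem 2 really is a two-sided inverse of $A$ and agrees with SymPy's own `inv`:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])

# Candidate inverse from Theorem 2.
A_inv = (1 / (a*d - b*c)) * sp.Matrix([[d, -b],
                                       [-c, a]])

# Both products simplify to the 2x2 identity matrix.
print((A * A_inv).applyfunc(sp.simplify))   # Matrix([[1, 0], [0, 1]])
print((A_inv * A).applyfunc(sp.simplify))   # Matrix([[1, 0], [0, 1]])

# It also agrees with SymPy's own inverse.
print((A_inv - A.inv()).applyfunc(sp.simplify))   # zero matrix
```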
Corollary 1. For any $a, b, c, d, e, f \in \mathbb{R}$ such that $ad - bc \neq 0$, the system
$$ax + by = e, \qquad cx + dy = f$$
has the unique solution
$$x = \frac{de - bf}{ad - bc}, \qquad y = \frac{af - ce}{ad - bc}.$$
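To make the corollary concrete, here is a small numeric example (an illustration only, under the reading that the corollary is about solving $A\mathbf{x} = \mathbf{b}$): the explicit formulas and SymPy's linear solver produce the same answer.

```python
import sympy as sp

# Solve  1x + 2y = 5,  3x + 4y = 6;  here ad - bc = -2 != 0.
a, b, c, d, e, f = 1, 2, 3, 4, 5, 6
det = a*d - b*c

# Closed-form (Cramer-style) solution.
x = sp.Rational(d*e - b*f, det)
y = sp.Rational(a*f - c*e, det)
print(x, y)   # -4 9/2

# Cross-check with SymPy's linear solver.
A = sp.Matrix([[a, b], [c, d]])
rhs = sp.Matrix([e, f])
print(A.LUsolve(rhs))   # Matrix([[-4], [9/2]])
```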
We have listed many common computations involving matrices. Yet a simple-sounding but challenging problem arises: how do we compute the determinant of a $3 \times 3$ matrix? More crucially, what is the determinant of such a matrix?
We are going to proceed like before. Consider a $3 \times 3$ matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$
Our line of attack is to use the $2 \times 2$ determinant to make sense of the $3 \times 3$ determinant. If $a_{11} = a_{21} = a_{31} = 0$, then $A$ is not injective and thus not invertible. We would expect $\det(A) = 0$ in this setting. Now suppose $a_{11} \neq 0$ without loss of generality. Apply the elementary operations $R_2 \mapsto R_2 - \frac{a_{21}}{a_{11}} R_1$ and $R_3 \mapsto R_3 - \frac{a_{31}}{a_{11}} R_1$ to get rid of the leading terms in the last two rows:
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} - \frac{a_{21} a_{12}}{a_{11}} & a_{23} - \frac{a_{21} a_{13}}{a_{11}} \\ 0 & a_{32} - \frac{a_{31} a_{12}}{a_{11}} & a_{33} - \frac{a_{31} a_{13}}{a_{11}} \end{pmatrix}.$$
Since $a_{11} \neq 0$, $A$ will be injective if and only if the $2 \times 2$ submatrix
$$B = \begin{pmatrix} a_{22} - \frac{a_{21} a_{12}}{a_{11}} & a_{23} - \frac{a_{21} a_{13}}{a_{11}} \\ a_{32} - \frac{a_{31} a_{12}}{a_{11}} & a_{33} - \frac{a_{31} a_{13}}{a_{11}} \end{pmatrix}$$
is. Yet the entries of this matrix look very, very peculiar! Indeed, each entry is (up to a factor of $\frac{1}{a_{11}}$) itself the determinant of some $2 \times 2$ matrix. For instance,
$$a_{22} - \frac{a_{21} a_{12}}{a_{11}} = \frac{1}{a_{11}} \left( a_{11} a_{22} - a_{12} a_{21} \right) = \frac{1}{a_{11}} \det \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.$$
Let’s bravely compute the determinant of $B$:
$$\begin{aligned} \det(B) &= \left( a_{22} - \frac{a_{21} a_{12}}{a_{11}} \right) \left( a_{33} - \frac{a_{31} a_{13}}{a_{11}} \right) - \left( a_{23} - \frac{a_{21} a_{13}}{a_{11}} \right) \left( a_{32} - \frac{a_{31} a_{12}}{a_{11}} \right) \\ &= \frac{1}{a_{11}} \left( a_{11} a_{22} a_{33} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{13} a_{22} a_{31} \right). \end{aligned}$$
Then $\det(B) \neq 0$ if and only if
$$a_{11} \left( a_{22} a_{33} - a_{23} a_{32} \right) - a_{12} \left( a_{21} a_{33} - a_{23} a_{31} \right) + a_{13} \left( a_{21} a_{32} - a_{22} a_{31} \right) \neq 0.$$
The expression on the left-hand side turns out to be the determinant of the $3 \times 3$ matrix $A$. Rewriting the bracketed terms as $2 \times 2$ determinants yields what is known as the cofactor expansion of the matrix along its first row.
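The bookkeeping above is tedious, so it is reassuring to let SymPy redo it mechanically. The sketch below (again purely illustrative, not part of the development) performs the two row operations on a symbolic $3 \times 3$ matrix, extracts the $2 \times 2$ block $B$, and confirms both that $a_{11} \det(B) = \det(A)$ and that $\det(A)$ equals the cofactor expansion along the first row:

```python
import sympy as sp

# Symbolic 3x3 matrix with entries a11, ..., a33.
A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"a{i + 1}{j + 1}"))
a11 = A[0, 0]

# Row operations R2 -> R2 - (a21/a11) R1 and R3 -> R3 - (a31/a11) R1.
R1, R2, R3 = A[0, :], A[1, :], A[2, :]
reduced = sp.Matrix.vstack(R1,
                           R2 - (A[1, 0] / a11) * R1,
                           R3 - (A[2, 0] / a11) * R1)

# The bottom-right 2x2 block whose injectivity decides that of A.
B = reduced[1:, 1:]

# Cofactor expansion of A along its first row.
cofactor = (A[0, 0] * (A[1, 1]*A[2, 2] - A[1, 2]*A[2, 1])
            - A[0, 1] * (A[1, 0]*A[2, 2] - A[1, 2]*A[2, 0])
            + A[0, 2] * (A[1, 0]*A[2, 1] - A[1, 1]*A[2, 0]))

print(sp.simplify(a11 * B.det() - A.det()))   # 0
print(sp.expand(A.det() - cofactor))          # 0
```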
Now, contrary to usual practice, we won’t be defining the determinant of a $3 \times 3$ matrix formally, since we have too many questions to resolve:
- How do we generalise the definition of the determinant to $n \times n$ matrices?
- Does the invertibility property still hold in the general case?
- What can we actually compute using determinants?
- What properties do determinants actually possess?
- What is a determinant?
The next few posts aim to resolve these questions. Once we can actually define the determinant of a general square matrix $A$, we can practically discuss one of its most useful applications: eigenvectors and eigenvalues.
—Joel Kindiak, 4 Mar 25, 1815H