Let $V$ be a vector space over a field $F$, and let $T \colon V \to V$ be a linear transformation.
Lemma 1. Suppose $V$ is finite-dimensional with ordered bases $\beta$ and $\gamma$. Then $[T]_\beta$ is similar to $[T]_\gamma$. In particular, since similar matrices share the same characteristic polynomial, we may define $\chi_T(t) = \det([T]_\beta - tI)$.
Proof. Define the isomorphism $\phi_\beta \colon V \to F^n$ by $\phi_\beta(v) = [v]_\beta$. Then we leave it as an exercise to check that $[T]_\gamma = Q^{-1} [T]_\beta Q$, where $Q$ is the matrix representing $\phi_\beta \circ \phi_\gamma^{-1}$.
The transformation $T$ being diagonalisable is a special property, and it satisfies the rather elegant result below.
Definition 1. For any polynomial $f \in F[t]$ defined by $f(t) = \sum_{k=0}^n a_k t^k$, define the linear transformation $f(T) \colon V \to V$ by $$f(T) = \sum_{k=0}^n a_k T^k, \quad \text{where } T^0 = I_V.$$ Since $T^j T^k = T^{j+k} = T^k T^j$, we have $f(T)\,g(T) = g(T)\,f(T)$ for polynomials $f, g \in F[t]$.
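To see Definition 1 in action, here is a SymPy sketch (the helper name `poly_of_matrix` and the sample matrix are my own) that evaluates two polynomials at the same matrix and checks that the results commute:

```python
import sympy as sp

def poly_of_matrix(coeffs, A):
    """Evaluate f(A) = sum_k coeffs[k] * A**k, with A**0 = I (hypothetical helper)."""
    n = A.shape[0]
    result = sp.zeros(n, n)
    for k, a in enumerate(coeffs):
        result += a * A**k
    return result

A = sp.Matrix([[0, 1], [2, 3]])
f = [1, 0, 1]   # f(t) = 1 + t**2
g = [2, -1]     # g(t) = 2 - t

fA, gA = poly_of_matrix(f, A), poly_of_matrix(g, A)
print(fA * gA == gA * fA)  # True: polynomials in the same transformation commute
```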
Theorem 1. Define the zero transformation $T_0 \colon V \to V$ by $T_0(v) = 0$ for any $v \in V$. If $V$ is finite-dimensional and $T$ is diagonalisable, then $\chi_T(T) = T_0$.
Proof. Let $\beta = \{v_1, \dots, v_n\}$ be a basis of eigenvectors of $T$ for $V$, and let $\lambda_k$ denote the eigenvalue corresponding to $v_k$. Then each $(t - \lambda_k)$ is a factor of $\chi_T(t)$. Hence, for each $k$, there exists some polynomial $q_k(t)$ such that $$\chi_T(t) = q_k(t)\,(t - \lambda_k).$$ In particular, $$\chi_T(T) = q_k(T)\,(T - \lambda_k I_V).$$ Thus, for any $k$, $$\chi_T(T)(v_k) = q_k(T)\big(T(v_k) - \lambda_k v_k\big) = q_k(T)(0) = 0.$$ Now, for any $v \in V$, find unique constants $a_1, \dots, a_n \in F$ such that $$v = \sum_{k=1}^n a_k v_k.$$ By the linearity of $\chi_T(T)$, $$\chi_T(T)(v) = \sum_{k=1}^n a_k\,\chi_T(T)(v_k) = 0.$$
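Theorem 1 is easy to verify computationally. Below is a SymPy sketch with an arbitrarily chosen symmetric (hence diagonalisable) matrix:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 1], [1, 2]])             # symmetric, hence diagonalisable
chi = (A - t * sp.eye(2)).det().expand()    # chi(t) = t**2 - 4*t + 3

# Substitute A for t term by term: chi(A) = A**2 - 4*A + 3*I
chi_A = sp.zeros(2, 2)
for power, coeff in enumerate(reversed(sp.Poly(chi, t).all_coeffs())):
    chi_A += coeff * A**power
print(chi_A == sp.zeros(2, 2))  # True: chi_T(T) is the zero transformation
```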
Corollary 1. Suppose $V$ is finite-dimensional. Let $\chi_T(t) = \sum_{k=0}^n a_k t^k$. If $T$ is invertible and diagonalisable, then $$T^{-1} = -\frac{1}{a_0} \sum_{k=1}^n a_k T^{k-1}.$$
Proof. Since $T$ is invertible, $0$ is not an eigenvalue of $T$, so that $a_0 = \chi_T(0) \neq 0$. By Theorem 1: $$T_0 = \chi_T(T) = a_0 I_V + \sum_{k=1}^n a_k T^k \implies I_V = T\left( -\frac{1}{a_0} \sum_{k=1}^n a_k T^{k-1} \right).$$ Therefore, $T^{-1} = -\dfrac{1}{a_0} \displaystyle\sum_{k=1}^n a_k T^{k-1}$.
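Corollary 1 gives a concrete recipe for computing an inverse. A SymPy sketch (the sample matrix is my own choice):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 1], [1, 2]])              # invertible and diagonalisable
chi = sp.Poly((A - t * sp.eye(2)).det(), t)  # chi(t) = t**2 - 4*t + 3
a = list(reversed(chi.all_coeffs()))         # a[k] is the coefficient of t**k

# Corollary 1: A**-1 = -(1/a_0) * sum_{k>=1} a_k * A**(k-1)
inv = -sum((a[k] * A**(k - 1) for k in range(1, len(a))), sp.zeros(2, 2)) / a[0]
print(inv == A.inv())  # True
```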
Corollary 2. Suppose the characteristic polynomial of $A \in M_{n \times n}(F)$ is $\chi_A(t) = \sum_{k=0}^n a_k t^k$. If $A$ is diagonalisable and invertible, then there exists an invertible matrix $Q$ and a diagonal matrix $D$ such that $$A^{-1} = Q D^{-1} Q^{-1} = -\frac{1}{a_0} \sum_{k=1}^n a_k A^{k-1}.$$
It turns out that we don't actually require $T$ to be diagonalisable in Theorem 1. In fact, $T$ just needs to be a linear transformation from $V$ to $V$, and $V$ must be finite-dimensional, so that $\chi_T(T) = T_0$. This is the Cayley-Hamilton theorem, which we aim to prove in this post.
Definition 2. We call a subspace $W \subseteq V$ a $T$-invariant subspace of $V$ if $T(W) \subseteq W$. Equivalently, the restriction $T_W \colon W \to W$ defined by $T_W(w) = T(w)$ is well-defined.
Example 1. For any nonzero $v \in V$, the $T$-cyclic subspace generated by $v$ defined by $$W = \operatorname{span}\{v, T(v), T^2(v), \dots\}$$ is $T$-invariant. In particular, if $\lambda$ is an eigenvalue of $T$, then the eigenspace $E_\lambda = \{v \in V : T(v) = \lambda v\}$ is $T$-invariant.
Example 2. Define the differentiation operator $D \colon P(\mathbb{R}) \to P(\mathbb{R})$ by $D(f) = f'$, the derivative of $f$. For any $n \geq 0$, the subspace $P_n(\mathbb{R})$ of polynomials of degree at most $n$ is $D$-invariant.
In what follows, let $V$ be finite-dimensional with ordered basis $\beta = \{v_1, \dots, v_n\}$.
Definition 3. Let $T \colon V \to W$ be a linear transformation. Suppose $V$ and $W$ are finite-dimensional vector spaces with ordered bases $\beta = \{v_1, \dots, v_n\}$ and $\gamma = \{w_1, \dots, w_m\}$ respectively. For each $k$, define scalars $a_{1k}, \dots, a_{mk}$ by $$T(v_k) = \sum_{j=1}^m a_{jk} w_j.$$ Then define $[T]_\beta^\gamma = (a_{jk}) \in M_{m \times n}(F)$. If $V = W$ and $\beta = \gamma$, then define $[T]_\beta = [T]_\beta^\beta$.
Theorem 2. Suppose $W \subseteq V$ is $T$-invariant. Then $\chi_{T_W}(t)$ is a factor of $\chi_T(t)$.
Proof. Let $\gamma = \{v_1, \dots, v_k\}$ be a basis for $W$ and extend it to a basis $\beta = \{v_1, \dots, v_k, v_{k+1}, \dots, v_n\}$ for $V$. Then there exist matrices $B_2, B_3$ of appropriate sizes such that $$[T]_\beta = \begin{pmatrix} [T_W]_\gamma & B_2 \\ O & B_3 \end{pmatrix}, \quad \text{so} \quad \chi_T(t) = \det\big([T_W]_\gamma - tI_k\big)\det\big(B_3 - tI_{n-k}\big) = \chi_{T_W}(t)\det\big(B_3 - tI_{n-k}\big).$$ Therefore, $\chi_{T_W}(t)$ is a factor of $\chi_T(t)$.
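The determinant factorisation used in the proof can be checked directly. A SymPy sketch with an arbitrary block upper-triangular matrix:

```python
import sympy as sp

t = sp.symbols('t')
B1 = sp.Matrix([[1, 2], [0, 3]])  # plays the role of [T_W]_gamma
B3 = sp.Matrix([[6]])
M = sp.Matrix([[1, 2, 4],
               [0, 3, 5],
               [0, 0, 6]])        # top-left block B1, bottom-right block B3

chi_M = (M - t * sp.eye(3)).det().expand()
chi_B1 = (B1 - t * sp.eye(2)).det().expand()
chi_B3 = (B3 - t * sp.eye(1)).det().expand()
print(chi_M == (chi_B1 * chi_B3).expand())  # True: chi_B1 divides chi_M
```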
Lemma 2. For any nonzero $v \in V$, there exists some unique integer $k$ such that $\gamma = \{v, T(v), \dots, T^{k-1}(v)\}$ is a basis for the $T$-cyclic subspace $W$ generated by $v$. Furthermore, $$T^k(v) \in W = \operatorname{span}\{v, T(v), \dots, T^{k-1}(v)\}.$$
Proof. Since $W \subseteq V$ as a subspace, it is finite-dimensional with dimension $\dim W \leq \dim V = n$. Now, let $k$ be the smallest integer such that $$T^k(v) \in \operatorname{span}\{v, T(v), \dots, T^{k-1}(v)\}.$$ Then for any $m \geq k$, $T^m(v) \in \operatorname{span}\{v, T(v), \dots, T^{k-1}(v)\}$. Hence, $$W = \operatorname{span}\{v, T(v), \dots, T^{k-1}(v)\}.$$ Therefore, $\dim W \leq k$. For scalars $a_0, a_1, \dots, a_{k-1} \in F$, consider the equation $$a_0 v + a_1 T(v) + \dots + a_{k-1} T^{k-1}(v) = 0.$$ If $a_{k-1} \neq 0$, then $T^{k-1}(v) \in \operatorname{span}\{v, T(v), \dots, T^{k-2}(v)\}$ and $k-1$ would be a smaller such integer, a contradiction. Hence, $a_{k-1} = 0$. By induction, we can show that $a_j = 0$ for any $j$, so that $\gamma$ is linearly independent. Therefore, $\dim W \geq k$. Putting both results together yields the desired conclusion.
Finally, by considering the ordered basis $\gamma$ and writing $T^k(v) = \sum_{j=0}^{k-1} a_j T^j(v)$, we have $$[T_W]_\gamma = \begin{pmatrix} 0 & 0 & \cdots & 0 & a_0 \\ 1 & 0 & \cdots & 0 & a_1 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & a_{k-1} \end{pmatrix}, \qquad \chi_{T_W}(t) = (-1)^k \left( t^k - \sum_{j=0}^{k-1} a_j t^j \right)$$ by performing cofactor expansion on the first row and applying induction.
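This characteristic polynomial of the restriction $T_W$ in the cyclic basis can be verified computationally. A SymPy sketch with arbitrary coefficients $a_0, a_1, a_2$:

```python
import sympy as sp

t = sp.symbols('t')
a = [2, -1, 3]   # T^3(v) = 2v - T(v) + 3T^2(v), an arbitrary choice
k = len(a)

# [T_W]_gamma: 1s on the subdiagonal, (a_0, ..., a_{k-1}) in the last column
C = sp.zeros(k, k)
for j in range(k - 1):
    C[j + 1, j] = 1
for j in range(k):
    C[j, k - 1] = a[j]

chi = (C - t * sp.eye(k)).det().expand()
expected = ((-1)**k * (t**k - sum(a[j] * t**j for j in range(k)))).expand()
print(chi == expected)  # True
```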
When I learned about the Cayley-Hamilton theorem and saw its proof, I'm not kidding when I say I shed a tear.
Theorem 3 (Cayley-Hamilton Theorem). $\chi_T(T) = T_0$.
Proof. Fix a nonzero $v \in V$ for non-triviality. Write $$W = \operatorname{span}\{v, T(v), \dots, T^{k-1}(v)\}$$ for some integer $k$ as in Lemma 2. Since $T^k(v) \in W$, find coefficients $a_0, a_1, \dots, a_{k-1} \in F$ such that $$T^k(v) = \sum_{j=0}^{k-1} a_j T^j(v).$$ By Example 1, $W$ is $T$-invariant. By Lemma 2, $$\chi_{T_W}(t) = (-1)^k \left( t^k - \sum_{j=0}^{k-1} a_j t^j \right).$$ Therefore, $$\chi_{T_W}(T)(v) = (-1)^k \left( T^k(v) - \sum_{j=0}^{k-1} a_j T^j(v) \right) = 0.$$ By Theorem 2, there exists some polynomial $q(t)$ such that $$\chi_T(t) = q(t)\,\chi_{T_W}(t),$$ so that $\chi_T(T) = q(T)\,\chi_{T_W}(T)$. In particular, $$\chi_T(T)(v) = q(T)\big(\chi_{T_W}(T)(v)\big) = q(T)(0) = 0.$$ Since $v$ is arbitrary, we obtain $\chi_T(T) = T_0$, as required.
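Crucially, the theorem holds without diagonalisability. A SymPy sketch with a Jordan block, which is not diagonalisable:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 1], [0, 2]])              # a Jordan block: not diagonalisable
chi = sp.Poly((A - t * sp.eye(2)).det(), t)  # chi(t) = t**2 - 4*t + 4

chi_A = sp.zeros(2, 2)
for power, coeff in enumerate(reversed(chi.all_coeffs())):
    chi_A += coeff * A**power
print(chi_A == sp.zeros(2, 2))  # True, even without diagonalisability
```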
Given $T$, what polynomials $f$ would yield $f(T) = T_0$? By the Cayley-Hamilton theorem, $\chi_T$ is one such polynomial. But is $\chi_T$ the smallest such polynomial? The answer lies in the notion of a minimal polynomial, which intriguingly launches us into discussing generalised eigenstuff. More of such yapping in the next post.
—Joel Kindiak, 8 Mar 2258H