Let’s discuss polynomials. Let $F$ be any field.
Definition 1. Recall the constant function $1 : F \to F$, where $1(x) = 1$ for all $x \in F$. A function $f : F \to F$ is a monomial with degree $n \in \mathbb{N}$ if
$$f(x) = x^n$$
for any $x \in F$. By convention, we define $x^0 = 1$. We then define the set $P_n(F)$ of polynomials over $F$ with degree at most $n$ by
$$P_n(F) = \operatorname{span}\{1, x, x^2, \dots, x^n\}.$$
The set $P(F)$ of polynomials over $F$ is then defined to be the union of all such $P_n(F)$:
$$P(F) = \bigcup_{n \in \mathbb{N}} P_n(F).$$
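To make Definition 1 concrete, here is a minimal sketch over $F = \mathbb{Q}$, using Python's exact `Fraction` arithmetic. The representation of a polynomial by a coefficient list $[a_0, a_1, \dots, a_n]$ is an assumption of this sketch, not part of the definition; padding with a trailing zero exhibits the same function inside a larger $P_n(\mathbb{Q})$.

```python
from fractions import Fraction as Q

def evaluate(coeffs, x):
    """Evaluate a_0 + a_1 x + ... + a_n x^n, given as the list coeffs."""
    total = Q(0)
    for a in reversed(coeffs):  # Horner's rule
        total = total * x + a
    return total

# p = 2 + 3x lives in P_1(Q); padding with a zero coefficient
# presents the same function as an element of P_2(Q).
p_in_P1 = [Q(2), Q(3)]
p_in_P2 = [Q(2), Q(3), Q(0)]

assert all(evaluate(p_in_P1, Q(x)) == evaluate(p_in_P2, Q(x)) for x in range(-5, 6))
```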
It is clear that for any $n \in \mathbb{N}$, $P_n(F) \subseteq P_{n+1}(F)$ as a subspace. Furthermore, for any $m \le n$, $P_m(F) \subseteq P_n(F)$. In fact, more is true.
Theorem 1. For any $n \in \mathbb{N}$, the set $\{1, x, x^2, \dots, x^n\}$ forms a basis for $P_n(F)$.
Proof. We prove by induction. It is obvious that $\{1\}$ is linearly independent. Suppose for some $n \in \mathbb{N}$, the set $\{1, x, \dots, x^n\}$ is linearly independent. Consider the equation
$$a_0 + a_1 x + a_2 x^2 + \dots + a_{n+1} x^{n+1} = 0.$$
Setting $x = 0$, $a_0 x^0 = a_0$, while $a_k x^k = 0$ for $1 \le k \le n+1$. Hence, $a_0 = 0$, yielding
$$a_1 x + a_2 x^2 + \dots + a_{n+1} x^{n+1} = 0.$$
Factoring $x$,
$$x \left( a_1 + a_2 x + \dots + a_{n+1} x^n \right) = 0.$$
For $x \neq 0$, since a field has no zero divisors, we must have
$$a_1 + a_2 x + \dots + a_{n+1} x^n = 0.$$
By the induction hypothesis, since $\{1, x, \dots, x^n\}$ is linearly independent, we must have
$$a_1 = a_2 = \dots = a_{n+1} = 0.$$
Hence, $a_k = 0$ for $0 \le k \le n+1$. Therefore, the set $\{1, x, \dots, x^{n+1}\}$ is linearly independent. By induction, for any $n \in \mathbb{N}$, the set $\{1, x, \dots, x^n\}$ is linearly independent; since it spans $P_n(F)$ by definition, it forms a basis for $P_n(F)$. ∎
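The conclusion of Theorem 1 can also be seen concretely in a small case. Over $\mathbb{Q}$, if $a_0 + a_1 x + a_2 x^2$ vanishes at the three distinct points $0, 1, 2$, the coefficients satisfy a Vandermonde linear system whose determinant is nonzero, forcing $a_0 = a_1 = a_2 = 0$. The sketch below uses an ad hoc `det` helper, not a library routine; this is an independent illustration, not the proof above.

```python
from fractions import Fraction as Q

def det(m):
    """Determinant by fraction-exact Gaussian elimination."""
    m = [row[:] for row in m]
    n = len(m)
    d = Q(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Q(0)
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            d = -d
        d *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return d

# If a_0 + a_1 x + a_2 x^2 vanishes at the distinct points 0, 1, 2,
# then V a = 0 for the Vandermonde matrix V below.
points = [Q(0), Q(1), Q(2)]
V = [[x ** k for k in range(3)] for x in points]

# det(V) != 0, so V a = 0 forces a = 0: the functions 1, x, x^2 are
# linearly independent on Q.
assert det(V) != 0
```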
Corollary 1. The set $\{x^n : n \in \mathbb{N}\}$ forms a basis for $P(F)$.

Proof. To prove that $\operatorname{span}\{x^n : n \in \mathbb{N}\} = P(F)$, fix $p \in P(F)$. Then $p \in P_m(F)$ for some $m \in \mathbb{N}$, yielding
$$p \in \operatorname{span}\{1, x, \dots, x^m\} \subseteq \operatorname{span}\{x^n : n \in \mathbb{N}\}.$$
On the other hand, since $\operatorname{span}\{x^n : n \in \mathbb{N}\}$ is the smallest subspace of the space of functions $F \to F$ containing $\{x^n : n \in \mathbb{N}\}$, and $P(F)$ is a subspace containing $\{x^n : n \in \mathbb{N}\}$, we have
$$\operatorname{span}\{x^n : n \in \mathbb{N}\} \subseteq P(F),$$
as required. To prove that $\{x^n : n \in \mathbb{N}\}$ is linearly independent, let $S \subseteq \{x^n : n \in \mathbb{N}\}$ be a finite set. Define
$$m = \max\{n \in \mathbb{N} : x^n \in S\}.$$
Then $S \subseteq \{1, x, \dots, x^m\}$, which is linearly independent by Theorem 1, so $S$ is linearly independent, as required. Since any finite subset of $\{x^n : n \in \mathbb{N}\}$ is linearly independent, $\{x^n : n \in \mathbb{N}\}$ is linearly independent. Therefore, $\{x^n : n \in \mathbb{N}\}$ forms a basis for $P(F)$. ∎
Recall that differentiation is a linear transformation. In fact, techniques in calculus yield the famous result
$$\frac{d}{dx} x^n = n x^{n-1}.$$
In other words, $\frac{d}{dx}$ is defined on the basis $\{x^n : n \in \mathbb{N}\}$. It turns out that we can go the other way around: first start with a function defined on a basis, then extend it to the rest of the vector space.
Let $W$ be a vector space over $F$.

Theorem 2. Let $V$ be a vector space over $F$. Suppose $B \subseteq V$ forms a basis for $V$. For any function $f : B \to W$, there exists a unique linear transformation $T : V \to W$ such that $T|_B = f$. Furthermore, if $f[B] = \{f(v) : v \in B\}$ is linearly independent, then $f[B]$ forms a basis for $\operatorname{ran}(T)$.
Proof. For any $v \in V$, find unique vectors $v_1, \dots, v_n \in B$ and constants $a_1, \dots, a_n \in F$ such that
$$v = a_1 v_1 + \dots + a_n v_n.$$
Then define $T : V \to W$ by
$$T(v) = a_1 f(v_1) + \dots + a_n f(v_n),$$
which can be verified to be a well-defined linear transformation satisfying $T|_B = f$. For uniqueness, suppose there exists another linear transformation $S : V \to W$ such that $S|_B = f$. Then
$$S(v) = a_1 S(v_1) + \dots + a_n S(v_n) = a_1 f(v_1) + \dots + a_n f(v_n) = T(v).$$
Since $v$ was arbitrary, $S = T$, establishing uniqueness. ∎
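The construction in Theorem 2 is easy to sketch in coordinates. Assuming $V = \mathbb{Q}^n$ with its standard basis $e_0, \dots, e_{n-1}$ and an arbitrary assignment of images in $\mathbb{Q}^m$ (the particular images below are hypothetical), the extension $T(v) = \sum_i a_i f(v_i)$ becomes:

```python
from fractions import Fraction as Q

def extend_linearly(images):
    """Given f on the standard basis of Q^n, as images[i] = f(e_i) in Q^m,
    return the unique linear T with T(e_i) = f(e_i):
    T(v) = sum_i v_i * f(e_i)."""
    def T(v):
        m = len(images[0])
        out = [Q(0)] * m
        for a, w in zip(v, images):
            for j in range(m):
                out[j] += a * w[j]
        return out
    return T

# A hypothetical f on the basis of Q^2: e_0 -> (1, 1, 0), e_1 -> (0, 1, 1).
T = extend_linearly([[Q(1), Q(1), Q(0)], [Q(0), Q(1), Q(1)]])

# T is then determined on every vector: (2, 3) = 2 e_0 + 3 e_1.
assert T([Q(2), Q(3)]) == [Q(2), Q(5), Q(3)]
```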
Corollary 2. The map $f : \{x^n : n \in \mathbb{N}\} \to P(F)$ defined by
$$f(x^n) = n x^{n-1}$$
extends to a canonical linear transformation $D : P(F) \to P(F)$ called the formal derivative.

From now on, we will write $D = \frac{d}{dx}$ without ambiguity.
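Here is a sketch of the formal derivative over $\mathbb{Q}$, acting on coefficient lists $[a_0, a_1, \dots, a_n]$; the list representation is an assumption of this sketch, not part of Corollary 2.

```python
from fractions import Fraction as Q

def formal_derivative(coeffs):
    """Apply D to a_0 + a_1 x + ... + a_n x^n, represented by the
    coefficient list [a_0, a_1, ..., a_n]."""
    return [Q(k) * a for k, a in enumerate(coeffs)][1:]

# On the basis element x^3 (coefficients [0, 0, 0, 1]), D gives 3x^2.
assert formal_derivative([Q(0), Q(0), Q(0), Q(1)]) == [Q(0), Q(0), Q(3)]

# Extended linearly: D(1 + 2x + 3x^2) = 2 + 6x.
assert formal_derivative([Q(1), Q(2), Q(3)]) == [Q(2), Q(6)]
```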
Corollary 3. For any $n \in \mathbb{N}$, the linear transformation $T : P_n(F) \to F^{n+1}$ defined by
$$T(a_0 + a_1 x + \dots + a_n x^n) = (a_0, a_1, \dots, a_n)$$
is a bijective linear transformation.

Proof. Since $T(a_0 + a_1 x + \dots + a_n x^n) = (a_0, a_1, \dots, a_n)$ for every $(a_0, a_1, \dots, a_n) \in F^{n+1}$, we have $\operatorname{ran}(T) = F^{n+1}$, and thus $T$ is surjective. It suffices to prove that $T$ is injective. Fix $p \in P_n(F)$ and find unique scalars $a_0, a_1, \dots, a_n \in F$ such that
$$p = a_0 + a_1 x + \dots + a_n x^n.$$
If $T(p) = (0, 0, \dots, 0)$, then
$$a_0 = a_1 = \dots = a_n = 0.$$
Since $\{1, x, \dots, x^n\}$ forms a basis for $P_n(F)$, $p = 0$, so that $\ker(T) = \{0\}$, as required. ∎
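The bijection of Corollary 3 can be sketched for $n = 2$ over $\mathbb{Q}$: treating a member of $P_2(\mathbb{Q})$ as a Python callable, three sample values recover the coordinates, and the round trip is the identity. The sampling points $0, 1, -1$ are an arbitrary choice for this sketch.

```python
from fractions import Fraction as Q

def T(p):
    """Coordinate map P_2(Q) -> Q^3: recover (a_0, a_1, a_2) of
    p(x) = a_0 + a_1 x + a_2 x^2 from three sample points."""
    a0 = p(Q(0))
    a1 = (p(Q(1)) - p(Q(-1))) / 2
    a2 = (p(Q(1)) + p(Q(-1))) / 2 - p(Q(0))
    return (a0, a1, a2)

def T_inv(a):
    """Inverse map Q^3 -> P_2(Q): coefficients back to the function."""
    a0, a1, a2 = a
    return lambda x: a0 + a1 * x + a2 * x * x

# Round trip: T and T_inv compose to the identity, witnessing bijectivity.
coords = (Q(4), Q(-1), Q(7))
assert T(T_inv(coords)) == coords
```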
Corollary 3 tells us that $P_n(F)$ and $F^{n+1}$ are essentially the same as vector spaces. The technical term is that these two vector spaces are isomorphic to each other.

Definition 2. Two vector spaces $V$ and $W$ over a field $F$ are isomorphic if there exists a vector space isomorphism (i.e. a bijective linear transformation) $T : V \to W$. In this case, we denote $V \cong W$.
Example 1. For any $n \in \mathbb{N}$, $P_n(F) \cong F^{n+1}$.
As the infinite version of Example 1, is it true that $P(F) \cong F^{\mathbb{N}}$, the space of all sequences in $F$? No! Consider the case $F = \mathbb{Q}$. It turns out there isn’t even a bijection, since $P(\mathbb{Q})$ is a countably infinite set and $\mathbb{Q}^{\mathbb{N}}$ is uncountably infinite.
This surprising fact boils down to $P(\mathbb{Q})$ being defined as a union of a countably infinite number of countably infinite sets $P_n(\mathbb{Q})$. In a sense, $P(F)$ encodes the smallest infinity that encompasses the finite cases.
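The set-theoretic fact being used, that a countable union of countable sets is countable, is witnessed by the classical Cantor pairing function, which lists all index pairs $(n, k)$ in a single sequence. A quick sketch:

```python
def pair(n, k):
    """Cantor pairing: a bijection N x N -> N, enumerating diagonals
    n + k = 0, 1, 2, ... so that a countable union of countable sets
    (element k of set n) can be listed in one sequence."""
    return (n + k) * (n + k + 1) // 2 + k

# The first 50 diagonals hit every natural number up to pair(0, 49)
# exactly once: the map is injective and leaves no gaps.
seen = {pair(n, k) for n in range(50) for k in range(50) if n + k < 50}
assert seen == set(range(pair(0, 49) + 1))
```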
Yet, the finite-dimensional case is a fascinating study in its own right. We have seen that $P_n(F)$ behaves essentially like $F^{n+1}$, which we will define to have dimension $n + 1$. In fact, we will use this rather broad definition of dimension.
Definition 3. We say that $V$ is finite-dimensional if there exists some $n \in \mathbb{N}$ such that $V \cong F^n$. Otherwise, $V$ is infinite-dimensional.
Example 2. For any $n \in \mathbb{N}$, $P_n(F)$ is finite-dimensional, while $P(F)$ is infinite-dimensional.
Furthermore, the study of finite-dimensional vector spaces is the motivation for the rather strange objects known as matrices. In fact, studying matrices turns out to be the key to understanding dimensions, and so we turn our attention there in the next post.
—Joel Kindiak, 27 Feb 25, 2339H