Let $V$ be a vector space over a field $\mathbb{F}$. We have previously seen that for vectors $v_1, \dots, v_n \in V$,
$$\operatorname{span}(v_1, \dots, v_n) = \{c_1 v_1 + \dots + c_n v_n : c_1, \dots, c_n \in \mathbb{F}\}.$$
Furthermore, if $v \neq 0$, we require at least one vector, namely $v$ itself, to cook up the space $\operatorname{span}(v)$.
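For instance, in $\mathbb{R}^2$, the single nonzero ingredient $v = (1, 2)$ already cooks up an entire line:
$$\operatorname{span}(v) = \{c(1, 2) : c \in \mathbb{R}\} = \{(c, 2c) : c \in \mathbb{R}\}.$$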
We are tempted then to think that a two-vector span $\operatorname{span}(v_1, v_2)$ requires a minimum of two ingredients. However, we have seen that this does not always hold. For instance, suppose $V$ contains some $v \neq 0$. Define $v_1 = v$ and $v_2 = 2v$. Then
$$\operatorname{span}(v_1, v_2) = \{c_1 v + c_2(2v) : c_1, c_2 \in \mathbb{F}\} = \{(c_1 + 2c_2)v : c_1, c_2 \in \mathbb{F}\}$$
yields $\operatorname{span}(v)$, which requires only one ingredient. The key observation is that $v_2 \in \operatorname{span}(v_1)$. That is, $v_2$ doesn’t truly increase $\operatorname{span}(v_1)$ at all. We can consolidate by making the following observation.
Lemma 1. Let $S, T \subseteq V$ be sets of vectors. Suppose $S \subseteq T$. Then
$$\operatorname{span}(S) \subseteq \operatorname{span}(T)$$
as subspaces. Furthermore, if $T \subseteq \operatorname{span}(S)$, then
$$\operatorname{span}(S) = \operatorname{span}(T).$$
Proof. Firstly, $S \subseteq T \subseteq \operatorname{span}(T)$, so that $\operatorname{span}(T)$ is a subspace that contains $S$. Since $\operatorname{span}(S)$ is the smallest subspace containing $S$, we must have
$$\operatorname{span}(S) \subseteq \operatorname{span}(T).$$
If $T \subseteq \operatorname{span}(S)$, then the minimality of $\operatorname{span}(T)$ among subspaces containing $T$ implies
$$\operatorname{span}(T) \subseteq \operatorname{span}(S).$$
Hence, $\operatorname{span}(S) = \operatorname{span}(T)$.
The main point of the equality is that for any excess vectors $v \in T \setminus S$, if these vectors belong to $\operatorname{span}(S)$, then $T$ doesn’t add any new information to $\operatorname{span}(S)$, and thus does not increase the minimum required number of ingredients to generate the space.
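To see Lemma 1 with concrete numbers, take $S = \{(1, 0)\}$ and $T = \{(1, 0), (2, 0), (3, 0)\}$ in $\mathbb{R}^2$. Every vector of $T$ is a multiple of $(1, 0)$, so $T \subseteq \operatorname{span}(S)$, and the lemma gives
$$\operatorname{span}(T) = \operatorname{span}(S) = \{(c, 0) : c \in \mathbb{R}\}.$$
The excess vectors $(2, 0)$ and $(3, 0)$ are redundant ingredients.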
Recall our example $\operatorname{span}(v_1, v_2)$ where both vectors are nonzero. We have seen that if $v_2 \in \operatorname{span}(v_1)$, then
$$\operatorname{span}(v_1, v_2) = \operatorname{span}(v_1).$$
What if $v_2 \notin \operatorname{span}(v_1)$? Then there are no scalars $c \in \mathbb{F}$ such that $v_2 = cv_1$. In fact, something stronger happens.
Lemma 2. Let $v_1, v_2 \in V$ be nonzero vectors. Then $v_2 \notin \operatorname{span}(v_1)$ if and only if for any $c_1, c_2 \in \mathbb{F}$,
$$c_1 v_1 + c_2 v_2 = 0 \implies c_1 = c_2 = 0.$$
Proof. We first prove $(\Rightarrow)$. Fix scalars $c_1, c_2 \in \mathbb{F}$ such that
$$c_1 v_1 + c_2 v_2 = 0.$$
By algebra, $c_2 v_2 = -c_1 v_1$. If $c_2 = 0$, then we have $c_1 v_1 = 0$, which forces $c_1 = 0$ since $v_1 \neq 0$, and we are done. Otherwise, $v_2 = -\frac{c_1}{c_2} v_1 \in \operatorname{span}(v_1)$, a contradiction. Therefore, $c_1 = c_2 = 0$ necessarily.
We next prove $(\Leftarrow)$ by contrapositive. Suppose $v_2 \in \operatorname{span}(v_1)$. Then there exists some $c \in \mathbb{F}$ such that $v_2 = cv_1$. If $c = 0$ then $v_2 = 0$, a contradiction. Hence, $c \neq 0$. Therefore,
$$cv_1 - v_2 = cv_1 - cv_1 = 0.$$
Setting $c_1 = c$ and $c_2 = -1$,
$$c_1 v_1 + c_2 v_2 = 0,$$
and yet $c_1 = c_2 = 0$ is not true.
It is this latter condition that we will define as linear independence.
Definition 2. The finite set $\{v_1, \dots, v_n\} \subseteq V$ is called linearly independent if for any $c_1, \dots, c_n \in \mathbb{F}$,
$$c_1 v_1 + \dots + c_n v_n = 0 \implies c_1 = \dots = c_n = 0.$$
It is not hard to see that nonempty finite subsets of a linearly independent set are linearly independent as well. A singleton set $\{v\}$ with $v \neq 0$ is linearly independent too.
We use this property to generalise to infinite sets. For any set $S \subseteq V$, we say that $S$ is linearly independent if every nonempty finite subset of $S$ is linearly independent. We say that $S$ is linearly dependent if it is not linearly independent.
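As a quick sanity check in $\mathbb{R}^2$: the set $\{(1, 0), (0, 1)\}$ is linearly independent, since
$$c_1(1, 0) + c_2(0, 1) = (c_1, c_2) = (0, 0) \implies c_1 = c_2 = 0,$$
while $\{(1, 0), (2, 0)\}$ is linearly dependent, because
$$2 \cdot (1, 0) + (-1) \cdot (2, 0) = (0, 0)$$
with scalars that are not both zero.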
Corollary 1. Let $v_1, v_2 \in V$ be nonzero vectors. Then $v_2 \notin \operatorname{span}(v_1)$ if and only if $\{v_1, v_2\}$ is linearly independent.
Roughly speaking, a set $S$ is linearly independent if it contributes exactly $|S|$ pieces of information to its span. This number is what we define to be the dimension of a span.
However, let’s first return to mathematical earth and discuss $\mathbb{F}^n$. Intuitively, $\mathbb{F}^n$ ought to have $n$ dimensions. This is true.
Example 3. For each $i \in \{1, \dots, n\}$, define $e_i \in \mathbb{F}^n$ to be the vector whose $i$-th entry is $1$ and whose other entries are $0$, which means
$$e_i = (0, \dots, 0, \underbrace{1}_{i\text{-th entry}}, 0, \dots, 0).$$
Then $\{e_1, \dots, e_n\}$ is linearly independent, and
$$\operatorname{span}(e_1, \dots, e_n) = \mathbb{F}^n.$$
Proof. For linear independence, fix scalars $c_1, \dots, c_n \in \mathbb{F}$ such that
$$c_1 e_1 + \dots + c_n e_n = 0.$$
Then for any $i \in \{1, \dots, n\}$, comparing the $i$-th entries of both sides gives
$$c_i = 0.$$
Therefore, $c_1 = \dots = c_n = 0$, so that $\{e_1, \dots, e_n\}$ is linearly independent. For the spanning property, fix $v = (a_1, \dots, a_n) \in \mathbb{F}^n$, where $a_i \in \mathbb{F}$ for each $i \in \{1, \dots, n\}$. Then
$$v = a_1 e_1 + \dots + a_n e_n \in \operatorname{span}(e_1, \dots, e_n).$$
This yields $\mathbb{F}^n \subseteq \operatorname{span}(e_1, \dots, e_n)$. On the other hand, since $e_1, \dots, e_n \in \mathbb{F}^n$ and $\operatorname{span}(e_1, \dots, e_n)$ is the smallest vector space containing $e_1, \dots, e_n$, we automatically have $\operatorname{span}(e_1, \dots, e_n) \subseteq \mathbb{F}^n$. Therefore,
$$\operatorname{span}(e_1, \dots, e_n) = \mathbb{F}^n.$$
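For example, with $n = 3$ and $\mathbb{F} = \mathbb{R}$, the vector $(5, -2, 7)$ has the recipe
$$(5, -2, 7) = 5e_1 - 2e_2 + 7e_3.$$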
We call $\{e_1, \dots, e_n\}$ a basis for $\mathbb{F}^n$, and can generalise the idea to other vector spaces.
Definition 3. We call $B \subseteq V$ a basis for $V$ if $B$ is linearly independent and $\operatorname{span}(B) = V$.
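For instance, $\{(1, 1), (1, -1)\}$ is a basis for $\mathbb{R}^2$: it is linearly independent because
$$c_1(1, 1) + c_2(1, -1) = (c_1 + c_2, c_1 - c_2) = (0, 0) \implies c_1 = c_2 = 0,$$
and it spans $\mathbb{R}^2$ because any $(a, b)$ can be cooked up as
$$(a, b) = \tfrac{a + b}{2}(1, 1) + \tfrac{a - b}{2}(1, -1).$$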
Theorem 1. Suppose $\{v_1, \dots, v_n\}$ is a basis for $V$. Then for any vector $v \in V$, there exist unique scalars $c_1, \dots, c_n \in \mathbb{F}$ such that
$$v = c_1 v_1 + \dots + c_n v_n.$$
Proof. Existence is immediate from $\operatorname{span}(v_1, \dots, v_n) = V$. For uniqueness, suppose there are two representations
$$v = c_1 v_1 + \dots + c_n v_n = d_1 v_1 + \dots + d_n v_n.$$
Subtracting on both sides,
$$(c_1 - d_1)v_1 + \dots + (c_n - d_n)v_n = 0.$$
Since $\{v_1, \dots, v_n\}$ is linearly independent, $c_i = d_i$ for each $i \in \{1, \dots, n\}$, so that the “recipe” that creates $v$ is unique.
Corollary 2. Suppose $\{v_1, \dots, v_n\}$ is a basis for $V$. Define the map $T : V \to \mathbb{F}^n$ in the following manner: for any $v \in V$ and for any scalars $c_1, \dots, c_n \in \mathbb{F}$ with $v = c_1 v_1 + \dots + c_n v_n$,
$$T(v) = (c_1, \dots, c_n).$$
Then $T$ is a well-defined bijection. In fact, $T$ is a linear transformation that maps the dish $v$ to the recipe $(c_1, \dots, c_n)$ that is required to “cook” it.
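Continuing the $\mathbb{R}^2$ illustration with the basis $\{v_1, v_2\} = \{(1, 1), (1, -1)\}$, the dish $(3, 1)$ has the unique recipe
$$(3, 1) = 2(1, 1) + 1(1, -1),$$
so $T\big((3, 1)\big) = (2, 1)$.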
In a sense, any vector space that only requires a minimum of $n$ ingredients to cook all dishes is essentially the same as the vector space $\mathbb{F}^n$. We can formalise this idea using isomorphisms, which is our next topic of discussion.
—Joel Kindiak, 24 Feb 0007H