Let’s talk linear algebra. This subject involves two key words: linear, referring to some nice vector-ish objects and their related properties, and algebra, the manipulations and transformations we can perform on said vector-ish objects.
For an introduction to the topic, we will discuss 2D vectors. But we shall not (and will not) shy away from its more exciting abstractions.
Throughout this post, let $S$ denote any set and $\mathbb{F}$ denote any field, which roughly speaking refers to any set where addition, subtraction, multiplication, and division are sufficiently well-defined (for instance, $\mathbb{Q}$, $\mathbb{R}$, and $\mathbb{C}$ are fields, while $\mathbb{Z}$ is not, since we cannot always divide and stay within $\mathbb{Z}$).
Definition 1. The two-dimensional $\mathbb{F}$-space is defined to be
$$\mathbb{F}^2 := \{ (x, y) : x, y \in \mathbb{F} \},$$
where we will denote the ordered pairs in column notation $\begin{pmatrix} x \\ y \end{pmatrix}$. In particular, $\mathbb{R}^2$ denotes the two-dimensional real space that we all know and love.
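For example, the ordered pair $(2, 5)$ with real entries will be written as
$$\begin{pmatrix} 2 \\ 5 \end{pmatrix} \in \mathbb{R}^2.$$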
Very soon we will discuss these ideas in much broader generality. But perhaps to motivate the subject, we can first recall the usual operations on the two-dimensional vectors used in high school physics.
Definition 2. Define addition and scalar multiplication on $\mathbb{F}^2$ via
$$\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} := \begin{pmatrix} x_1 + x_2 \\ y_1 + y_2 \end{pmatrix} \qquad \text{and} \qquad c \begin{pmatrix} x \\ y \end{pmatrix} := \begin{pmatrix} cx \\ cy \end{pmatrix} \quad \text{for } c \in \mathbb{F}.$$
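As a quick numerical check over $\mathbb{R}$, these operations behave exactly like the arrow arithmetic from physics:
$$\begin{pmatrix} 1 \\ 2 \end{pmatrix} + \begin{pmatrix} 3 \\ -1 \end{pmatrix} = \begin{pmatrix} 4 \\ 1 \end{pmatrix}, \qquad 2 \begin{pmatrix} 3 \\ -1 \end{pmatrix} = \begin{pmatrix} 6 \\ -2 \end{pmatrix}.$$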
We expect these objects to behave like the vectors that we are familiar with, which, in essence, encode directed distance. We call the set of these vectors, equipped with these operations, a vector space.
Theorem 1. Let $V = \mathbb{F}^2$, equipped with the addition and scalar multiplication of Definition 2. Then $V$ satisfies the following additive properties:
- For $\mathbf{u}, \mathbf{v} \in V$, $\mathbf{u} + \mathbf{v} \in V$.
- For $\mathbf{u}, \mathbf{v}, \mathbf{w} \in V$, $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$.
- There exists an element $\mathbf{0} \in V$ such that for any $\mathbf{v} \in V$, $\mathbf{v} + \mathbf{0} = \mathbf{0} + \mathbf{v} = \mathbf{v}$.
- For any $\mathbf{v} \in V$, there exists an element $-\mathbf{v} \in V$ such that $\mathbf{v} + (-\mathbf{v}) = (-\mathbf{v}) + \mathbf{v} = \mathbf{0}$.
In this case, we call $V$ a group under $+$. In addition, we can add vectors in either order (i.e. this is the commutativity property):
- For any $\mathbf{u}, \mathbf{v} \in V$, $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$.
In this case, we call $V$ an abelian group under $+$. In addition, $V$ satisfies the following scaling properties:
- For any $c \in \mathbb{F}$ and $\mathbf{v} \in V$, $c\mathbf{v} \in V$.
- For any $\mathbf{v} \in V$, $1\mathbf{v} = \mathbf{v}$.
- For $c, d \in \mathbb{F}$ and any $\mathbf{v} \in V$, $(cd)\mathbf{v} = c(d\mathbf{v})$.
- For any $c, d \in \mathbb{F}$ and $\mathbf{u}, \mathbf{v} \in V$, $(c + d)\mathbf{v} = c\mathbf{v} + d\mathbf{v}$ and $c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$.
In this case, we call $V$ a vector space over $\mathbb{F}$.
Proof Sketch. The proof is a matter of definition-checking. Nevertheless, we will complete some proofs to illustrate some of the techniques being used.
For the second property, we take advantage of the associativity of $+$ in $\mathbb{F}$:
$$\left( \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} \right) + \begin{pmatrix} x_3 \\ y_3 \end{pmatrix} = \begin{pmatrix} (x_1 + x_2) + x_3 \\ (y_1 + y_2) + y_3 \end{pmatrix} = \begin{pmatrix} x_1 + (x_2 + x_3) \\ y_1 + (y_2 + y_3) \end{pmatrix} = \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + \left( \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} + \begin{pmatrix} x_3 \\ y_3 \end{pmatrix} \right).$$
For the third property, we define $\mathbf{0} := \begin{pmatrix} 0 \\ 0 \end{pmatrix}$ and check that it satisfies the required equations. For the fourth property, we define
$$-\begin{pmatrix} x \\ y \end{pmatrix} := \begin{pmatrix} -x \\ -y \end{pmatrix}$$
and do the needful bookkeeping.
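For instance, the commutativity property can be checked in the same way, this time borrowing the commutativity of $+$ in $\mathbb{F}$:
$$\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} = \begin{pmatrix} x_1 + x_2 \\ y_1 + y_2 \end{pmatrix} = \begin{pmatrix} x_2 + x_1 \\ y_2 + y_1 \end{pmatrix} = \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} + \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}.$$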
Notice that this idea is not unique to $\mathbb{F}^2$. It could apply to many, many other sets, as we are about to see.
Lemma 1. For any field $\mathbb{F}$, $\mathbb{F}$ itself forms a vector space over $\mathbb{F}$. In particular, $\mathbb{R}$ forms a vector space over $\mathbb{R}$.
Arguably the most important instances of vector spaces are the function spaces. These spaces don’t always share all of the same properties as $\mathbb{F}^2$, but when they do, they share these properties in a beautifully unified manner.
Theorem 2. For any vector space $V$ over $\mathbb{F}$, let $V^S$ denote the set of $V$-valued functions on $S$. Define addition and scalar multiplication according to the vector space structure of $V$: For any $f, g \in V^S$ and $c \in \mathbb{F}$,
$$(f + g)(s) := f(s) + g(s) \qquad \text{and} \qquad (cf)(s) := c \, f(s) \qquad \text{for all } s \in S.$$
Then $V^S$ forms a vector space over $\mathbb{F}$ with additive identity $\mathbf{0} \in V^S$ defined by
$$\mathbf{0}(s) := \mathbf{0}_V \qquad \text{for all } s \in S$$
(where $\mathbf{0}_V$ denotes the zero vector of $V$), and for any $f \in V^S$, additive inverse $-f \in V^S$ defined by
$$(-f)(s) := -f(s) \qquad \text{for all } s \in S.$$
In particular, $\mathbb{F}^S$ forms a vector space over $\mathbb{F}$.
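For a concrete instance, take $S = \mathbb{R}$ and $V = \mathbb{R}$, so that $\mathbb{R}^{\mathbb{R}}$ is the space of all real-valued functions of a real variable. If $f(x) = x^2$ and $g(x) = \sin x$, then
$$(f + g)(x) = x^2 + \sin x \qquad \text{and} \qquad (3f)(x) = 3x^2,$$
with the zero function $\mathbf{0}(x) = 0$ playing the role of the additive identity.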
It is this last example, $\mathbb{F}^S$, that we want to emphasise as the twin brother of $\mathbb{F}^2$.
Theorem 3. For any $\mathbf{v} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} \in \mathbb{F}^2$, define $f_{\mathbf{v}} : \{1, 2\} \to \mathbb{F}$ by
$$f_{\mathbf{v}}(1) := v_1 \qquad \text{and} \qquad f_{\mathbf{v}}(2) := v_2.$$
Then the function $T : \mathbb{F}^2 \to \mathbb{F}^{\{1, 2\}}$ defined by $T(\mathbf{v}) := f_{\mathbf{v}}$ satisfies the following property: For any $\mathbf{u}, \mathbf{v} \in \mathbb{F}^2$ and $c \in \mathbb{F}$,
$$T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v}) \qquad \text{and} \qquad T(c\mathbf{v}) = c \, T(\mathbf{v}).$$
In this case, we call $T$ a linear transformation. In addition, $T$ is bijective, and we call $T$ a vector space isomorphism. Therefore, we can write $\mathbb{F}^2 = \mathbb{F}^{\{1, 2\}}$ without ambiguity.
Proof Sketch. The proof is immediate after we recognise that for each $i \in \{1, 2\}$,
$$f_{\mathbf{u} + \mathbf{v}}(i) = u_i + v_i = f_{\mathbf{u}}(i) + f_{\mathbf{v}}(i) \qquad \text{and} \qquad f_{c\mathbf{v}}(i) = c \, v_i = c \, f_{\mathbf{v}}(i),$$
which implies that
$$T(\mathbf{u} + \mathbf{v}) = f_{\mathbf{u} + \mathbf{v}} = f_{\mathbf{u}} + f_{\mathbf{v}} = T(\mathbf{u}) + T(\mathbf{v}) \qquad \text{and} \qquad T(c\mathbf{v}) = f_{c\mathbf{v}} = c \, f_{\mathbf{v}} = c \, T(\mathbf{v}).$$
The bijectivity of $T$ is easily verifiable.
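To make the correspondence concrete, over $\mathbb{R}$ the isomorphism sends
$$\begin{pmatrix} 3 \\ 5 \end{pmatrix} \;\longmapsto\; f, \qquad \text{where } f(1) = 3 \text{ and } f(2) = 5.$$
Conversely, a function on $\{1, 2\}$ is completely determined by its two values, so it arises from exactly one such column vector; this is precisely the bijectivity of $T$.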
This connection allows us to define $n$-space as a function space.
Definition 3. For any vector space $V$ over $\mathbb{F}$ and positive integer $n$, we define the vector space $V^n := V^{\{1, 2, \dots, n\}}$, which is a vector space over $\mathbb{F}$. In particular, $\mathbb{F}^n := \mathbb{F}^{\{1, 2, \dots, n\}}$, of which $\mathbb{F}^2$ is a special case.
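Under this definition, an element of $\mathbb{R}^3$ is literally a function on $\{1, 2, 3\}$; for example,
$$\begin{pmatrix} 7 \\ 1 \\ 4 \end{pmatrix} \in \mathbb{R}^3 \quad \text{is the function } f : \{1, 2, 3\} \to \mathbb{R} \text{ with } f(1) = 7,\; f(2) = 1,\; f(3) = 4,$$
and the column notation simply lists its values in order.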
We insist on defining the vector space $\mathbb{F}^n$ as a function space, since we can make remarkable connections with other areas of mathematics, as we will see in the next post.
—Joel Kindiak, 19 Feb 25, 2233H