Previously, our discussion on the dot product began with a simple question: What is the angle between two vectors? Our study on the cross product shall also begin with a simple question: What is a vector that is perpendicular to two vectors?
One possible approach is to give the answer, and leave it as it is. However, part of this blog’s goal is to journey from the known to the unknown, and in this context, it means properly constructing the cross product. Just like with determinants, we are going to impose several conditions on the cross product, and the result from these conditions will bring us to one and only one definition for the cross product.
Just like determinants, we are going to build the cross product one step at a time. Here, we denote the cross product of $u$ and $v$ by $u \times v$ for readability. We will let $e_1, e_2, e_3$ denote the usual standard ordered basis on $\mathbb{R}^3$.
The first observation is that, as a basic normalising step, we want $e_1 \times e_2$ to be the vector perpendicular to $e_1$ and $e_2$. They are all unit vectors (orthonormal, in fact). Actually, both $e_3$ and $-e_3$ would be perpendicular to $e_1$ and $e_2$.
Lemma 1. In the case $\|w\| = 1$, we have $w \perp e_1$ and $w \perp e_2$ if and only if $w = e_3$ or $w = -e_3$. Equivalently, the only unit vectors perpendicular to both $e_1$ and $e_2$ are $e_3$ and $-e_3$.

Indeed, $w \perp e_1$ and $w \perp e_2$ force $w_1 = w_2 = 0$, and $\|w\| = 1$ then forces $w_3 = \pm 1$.
Almost by custom, we will define $e_1 \times e_2 := e_3$ and $e_2 \times e_1 := -e_3$.
Definition 1. Define $e_1 \times e_2 := e_3$, $e_2 \times e_3 := e_1$, and $e_3 \times e_1 := e_2$. For $i, j \in \{1, 2, 3\}$, define $e_j \times e_i := -(e_i \times e_j)$ whenever $i \neq j$.

By Lemma 1, we obtain the property

$$(e_i \times e_j) \cdot e_i = (e_i \times e_j) \cdot e_j = 0 \quad \text{whenever } i \neq j.$$
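As a quick sanity check (not part of the original construction), here is a minimal Python sketch that encodes the table in Definition 1 together with the sign rule; the helper `basis_cross` is a name of our own choosing, and we compare against NumPy's built-in `np.cross` for convenience.

```python
import numpy as np

e = [np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])]

# Definition 1: e1 x e2 = e3, e2 x e3 = e1, e3 x e1 = e2,
# plus e_j x e_i = -(e_i x e_j) whenever i != j.
table = {(0, 1): e[2], (1, 2): e[0], (2, 0): e[1]}

def basis_cross(i, j):
    """Cross product of the (i+1)-th and (j+1)-th basis vectors per Definition 1."""
    if i == j:
        return np.zeros(3)  # the alternating case, cf. Theorem 1 below
    return table[(i, j)] if (i, j) in table else -table[(j, i)]

for i in range(3):
    for j in range(3):
        assert np.allclose(basis_cross(i, j), np.cross(e[i], e[j]))
```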
Now, the stipulation $i \neq j$ is rather arbitrary. If instead we allow $e_j \times e_i = -(e_i \times e_j)$ for any $i, j \in \{1, 2, 3\}$, we actually obtain an alternating map: taking $i = j$ gives $e_i \times e_i = -(e_i \times e_i)$, which yields $e_i \times e_i = \mathbf{0}$.
Theorem 1. For any $i, j \in \{1, 2, 3\}$, $e_j \times e_i = -(e_i \times e_j)$. In particular,

$$e_i \times e_i = \mathbf{0}.$$
Next, and rather understandably, we would like $\times$ to be multi-linear (since there are two arguments, we say that $\times$ is bi-linear). It turns out that we are forced into one formula for $u \times v$.
Theorem 2. Suppose furthermore that $\times$ is linear in both arguments. Then for any $u, v \in \mathbb{R}^3$,

$$u \times v = (u_2 v_3 - u_3 v_2) e_1 + (u_3 v_1 - u_1 v_3) e_2 + (u_1 v_2 - u_2 v_1) e_3.$$

Proof. We first exploit the linearity of $\times$ in the second argument. Writing $v = v_1 e_1 + v_2 e_2 + v_3 e_3$,

$$e_i \times v = v_1 (e_i \times e_1) + v_2 (e_i \times e_2) + v_3 (e_i \times e_3).$$

In particular,

$$e_1 \times v = v_2 e_3 - v_3 e_2, \qquad e_2 \times v = v_3 e_1 - v_1 e_3, \qquad e_3 \times v = v_1 e_2 - v_2 e_1.$$

Hence, exploiting the linearity of $\times$ in the first argument, writing $u = u_1 e_1 + u_2 e_2 + u_3 e_3$,

$$u \times v = u_1 (e_1 \times v) + u_2 (e_2 \times v) + u_3 (e_3 \times v) = (u_2 v_3 - u_3 v_2) e_1 + (u_3 v_1 - u_1 v_3) e_2 + (u_1 v_2 - u_2 v_1) e_3. \qquad \blacksquare$$
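To see Theorem 2's formula in action, here is a short numerical sketch (our own illustration, assuming NumPy); the function `cross` below simply transcribes the component formula and is checked against `np.cross` on random vectors.

```python
import numpy as np

def cross(u, v):
    """Component formula from Theorem 2."""
    return np.array([
        u[1] * v[2] - u[2] * v[1],   # coefficient of e1
        u[2] * v[0] - u[0] * v[2],   # coefficient of e2
        u[0] * v[1] - u[1] * v[0],   # coefficient of e3
    ])

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(cross(u, v), np.cross(u, v))
```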
Corollary 1. For vectors $u, v, w \in \mathbb{R}^3$, and identifying $u = u_1 e_1 + u_2 e_2 + u_3 e_3$, $v = v_1 e_1 + v_2 e_2 + v_3 e_3$, and $w = w_1 e_1 + w_2 e_2 + w_3 e_3$, we have

$$u \times v = \det \begin{pmatrix} e_1 & e_2 & e_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{pmatrix} \quad \text{and} \quad w \cdot (u \times v) = \det \begin{pmatrix} w_1 & w_2 & w_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{pmatrix}.$$

Proof. By expanding each component along the first row,

$$\det \begin{pmatrix} e_1 & e_2 & e_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{pmatrix} = (u_2 v_3 - u_3 v_2) e_1 + (u_3 v_1 - u_1 v_3) e_2 + (u_1 v_2 - u_2 v_1) e_3 = u \times v.$$

Hence, taking the dot product with $w$,

$$w \cdot (u \times v) = w_1 (u_2 v_3 - u_3 v_2) + w_2 (u_3 v_1 - u_1 v_3) + w_3 (u_1 v_2 - u_2 v_1) = \det \begin{pmatrix} w_1 & w_2 & w_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{pmatrix}. \qquad \blacksquare$$
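Corollary 1's scalar triple product identity is also easy to probe numerically; in the sketch below (ours, assuming NumPy), we stack $w$, $u$, $v$ as rows and compare with `np.linalg.det`.

```python
import numpy as np

rng = np.random.default_rng(1)
u, v, w = rng.standard_normal((3, 3))

# w . (u x v) should equal the determinant with rows w, u, v.
lhs = w @ np.cross(u, v)
rhs = np.linalg.det(np.vstack([w, u, v]))
assert np.isclose(lhs, rhs)
```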
Interestingly however, if we began with this definition of the cross product, then we would recover all of our previous properties.
Corollary 2. For any $u, v \in \mathbb{R}^3$, define

$$u \times v := \det \begin{pmatrix} e_1 & e_2 & e_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{pmatrix}.$$

Then we recover the hypotheses in Definition 1, Theorem 1 and Theorem 2.

Proof. Since the determinant is alternating, swapping the rows of $u$ and $v$ negates the determinant, which recovers our hypotheses in Definition 1 and Theorem 1. Since the determinant is multilinear in its rows, we recover Theorem 2. $\blacksquare$
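As a hedged illustration of Corollary 2 (again our own check, not part of the original argument), the alternating and multilinear behaviour can be observed directly on `np.cross`:

```python
import numpy as np

rng = np.random.default_rng(2)
u, v, w = rng.standard_normal((3, 3))
a, b = 2.0, -3.0

assert np.allclose(np.cross(u, u), 0.0)                      # alternating (Theorem 1)
assert np.allclose(np.cross(v, u), -np.cross(u, v))          # antisymmetry (Definition 1)
assert np.allclose(np.cross(a * u + b * w, v),               # linearity in the first slot
                   a * np.cross(u, v) + b * np.cross(w, v))  # (Theorem 2)
```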
Using the usual dot product (which still works as a computation in the formal determinant above, barring some technical interpretive caveats about placing the vectors $e_1, e_2, e_3$ inside a matrix), we have that $u \times v$ is orthogonal to both $u$ and $v$.
Corollary 3. For vectors $u, v \in \mathbb{R}^3$,

$$u \cdot (u \times v) = 0 = v \cdot (u \times v).$$

Proof. The results follow since the determinant is alternating: by Corollary 1, each dot product is a determinant with a repeated row, which vanishes. $\blacksquare$
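Numerically, Corollary 3's orthogonality shows up as vanishing dot products (our own quick check):

```python
import numpy as np

rng = np.random.default_rng(3)
u, v = rng.standard_normal((2, 3))
n = np.cross(u, v)
assert np.isclose(u @ n, 0.0) and np.isclose(v @ n, 0.0)
```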
Now, the discussion on angles only really makes sense when $u$ and $v$ are nonzero, in that

$$\cos \theta = \frac{u \cdot v}{\|u\| \|v\|},$$

so let's zoom in on this situation. We observe that for any $u \in \mathbb{R}^3$, $u \times u = \mathbf{0}$. So what is it about $u \times v$ that makes it special? Perhaps $u \times v$ is intimately connected with the "degree" of separation, no pun intended, between $u$ and $v$. Obviously, due to linearity, $u \times v = \mathbf{0}$ whenever either $u$ or $v$ is the zero vector $\mathbf{0}$.
Assume therefore that $u, v \in \mathbb{R}^3$ are nonzero vectors. Due to bilinearity, if we can decode the case for unit vectors, then using the property that $u = \|u\| \hat{u}$ and $v = \|v\| \hat{v}$, we obtain

$$u \times v = \|u\| \|v\| \, (\hat{u} \times \hat{v}).$$
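For a concrete instance of this reduction (our own example), take $u = 2 e_1$ and $v = 3 e_2$: then $\hat{u} = e_1$, $\hat{v} = e_2$, and $u \times v = (2)(3)(e_1 \times e_2) = 6 e_3$.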
Therefore, let's work with the simplified case $\|u\| = \|v\| = 1$. At this point, we could bash the algebra out. However, taking inspiration from this thread, we will adopt a much cleaner approach. Define the matrix

$$A = \begin{pmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ (u \times v)_1 & (u \times v)_2 & (u \times v)_3 \end{pmatrix},$$

whose rows are $u$, $v$, and $u \times v$. Then

$$\det(A)^2 = \det(A) \det(A^T) = \det(A A^T).$$
On the left-hand side, cyclically permuting the rows (an even permutation) and applying Corollary 1,

$$\det(A) = (u \times v) \cdot (u \times v) = \|u \times v\|^2, \quad \text{so} \quad \det(A)^2 = \|u \times v\|^4.$$
On the right-hand side, since $\|u\| = \|v\| = 1$ and, by Corollary 3, $u \cdot (u \times v) = v \cdot (u \times v) = 0$,

$$A A^T = \begin{pmatrix} 1 & u \cdot v & 0 \\ u \cdot v & 1 & 0 \\ 0 & 0 & \|u \times v\|^2 \end{pmatrix}, \quad \text{so} \quad \det(A A^T) = \left( 1 - (u \cdot v)^2 \right) \|u \times v\|^2.$$
Therefore, dividing by $\|u \times v\|^2$ on both sides,

$$\|u \times v\|^2 = 1 - (u \cdot v)^2 = 1 - \cos^2 \theta = \sin^2 \theta.$$

This equation can only mean one thing: $\|u \times v\| = \sin \theta$, where we don't even need to worry about the absolute value $|\sin \theta|$ on the right-hand side because $\sin \theta \geq 0$ whenever $\theta \in [0, \pi]$. This yields a rather elegant geometric interpretation for the cross product:
Corollary 4. For $u, v \in \mathbb{R}^3$ with angle $\theta$ between them as computed by the dot product (and consider $\theta$ undefined when either $u$ or $v$ is the zero vector $\mathbf{0}$),

$$\|u \times v\| = \|u\| \|v\| \sin \theta.$$
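As a final numerical sanity check of Corollary 4 (our own sketch, assuming NumPy, with $u$ and $v$ random and hence almost surely nonzero):

```python
import numpy as np

rng = np.random.default_rng(4)
u, v = rng.standard_normal((2, 3))

cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))   # angle via the dot product
assert np.isclose(np.linalg.norm(np.cross(u, v)),
                  np.linalg.norm(u) * np.linalg.norm(v) * np.sin(theta))
```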
Letting $\hat{n}$ denote the unit vector of $u \times v$, we recover the classic cross product formula:

$$u \times v = \|u\| \|v\| \sin \theta \, \hat{n}.$$
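And, putting the magnitude and the unit normal together (same caveats as above, plus $u$ and $v$ not parallel so that $\hat{n}$ is defined), the classic formula reassembles the cross product itself:

```python
import numpy as np

rng = np.random.default_rng(5)
u, v = rng.standard_normal((2, 3))   # almost surely nonzero and non-parallel

cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
n_hat = np.cross(u, v) / np.linalg.norm(np.cross(u, v))   # unit normal

assert np.allclose(np.cross(u, v),
                   np.linalg.norm(u) * np.linalg.norm(v) * np.sin(theta) * n_hat)
```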
—Joel Kindiak, 13 Mar 25, 2236H