Defining Dimension

After discussing linear transformations and matrices, we are now ready to discuss dimensions rigorously. Let \mathbb K be any field.

Given any ordered set (\mathbf v_1, \dots, \mathbf v_k) of vectors in \mathbb K^n, we can define the matrix

\mathbf A := \begin{bmatrix}\mathbf v_1 & \cdots & \mathbf v_k\end{bmatrix} \in \mathcal M_{n \times k}(\mathbb K) = \mathcal L(\mathbb K^k, \mathbb K^n).

It is then not too hard to observe that

  • \mathbf x \in \mathrm{ker}(\mathbf A) \iff x_1 \mathbf v_1 + \cdots + x_k \mathbf v_k = \mathbf 0, and
  • \mathrm{span}\{\mathbf v_1, \dots, \mathbf v_k\} = \mathbf A(\mathbb K^k).
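To make these observations concrete, here is a minimal sketch in Python over \mathbb Q, using the standard fractions module; the vectors v1, v2 and the coefficients are hypothetical examples, not taken from the text. It checks that applying \mathbf A to \mathbf x produces exactly the linear combination x_1 \mathbf v_1 + \cdots + x_k \mathbf v_k.

```python
from fractions import Fraction as F

def matvec(cols, x):
    """Apply A = [v_1 | ... | v_k] to x by forming x_1 v_1 + ... + x_k v_k."""
    n = len(cols[0])
    result = [F(0)] * n
    for x_i, v in zip(x, cols):
        for row in range(n):
            result[row] += x_i * v[row]
    return result

# Hypothetical columns v_1, v_2 in Q^3 and coefficients x = (3, -1).
v1 = [F(1), F(0), F(2)]
v2 = [F(0), F(1), F(1)]
x = [F(3), F(-1)]

print([int(c) for c in matvec([v1, v2], x)])  # [3, -1, 5], i.e. 3 v1 - v2
```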

Lemma 1. Let K := \{\mathbf v_1, \dots, \mathbf v_k\} \subseteq \mathbb K^n. If K is linearly independent, then k \leq n. If \mathrm{span}(K) = \mathbb K^n, then n \leq k.

Consequently, if \{\mathbf v_1,\dots,\mathbf v_k\} forms a basis for \mathbb K^n, then k = n.

Proof. For scalars x_1,\dots,x_k \in \mathbb K, consider the expression

\mathbf A \mathbf x = \begin{bmatrix} \mathbf v_1 & \cdots & \mathbf v_k \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_k \end{bmatrix} = x_1 \mathbf v_1+ x_2 \mathbf v_2 + \cdots + x_k \mathbf v_k.

If K is linearly independent, then the right-hand side equalling \mathbf 0 implies that \mathbf x = \mathbf 0. Hence,

\mathbf A \mathbf x = \mathbf 0 \quad \Rightarrow \quad \mathbf x = \mathbf 0.

Thus, \mathbf A is injective, which implies that k \leq n, as required.

Suppose instead that \mathrm{span}(K) = \mathbb K^n. Then \mathbf A(\mathbb K^k) = \mathbb K^n, which means that \mathbf A is surjective. Therefore, n \leq k, as required.

Lemma 2. Let V \subseteq \mathbb K^n be a subspace. Then V has a finite basis (i.e. a linearly independent set K such that \mathrm{span}(K) = V).

Proof. If V = \{\mathbf 0\} then take \emptyset to be the required basis. Suppose V \neq \{\mathbf 0\}. Then there exists some nonzero \mathbf v_1 \in V.

  • If V_1 :=\mathrm{span}\{\mathbf v_1\} = V then we are done.
  • Inductively, given V_k := \mathrm{span}\{\mathbf v_1,\dots,\mathbf v_k\}, either V_k = V (and we are done), or there exists \mathbf v_{k+1} \in V \backslash V_k and we pass to V_{k+1}.

Since each \mathbf v_{k+1} lies outside V_k, the set \{\mathbf v_1,\dots,\mathbf v_{k+1}\} remains linearly independent at every step. The process must therefore terminate by the nth step: otherwise \{\mathbf v_1,\dots,\mathbf v_{n+1}\} \subseteq \mathbb K^n would be a linearly independent set of n+1 > n elements, which contradicts Lemma 1.
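The proof of Lemma 2 is effectively an algorithm: sweep through a spanning list and keep each vector that lies outside the span of those kept so far. Below is a minimal sketch over \mathbb Q with hypothetical input vectors; the membership test is exact Gaussian elimination with fractions.

```python
from fractions import Fraction as F

def add_if_independent(basis, v):
    """Reduce v against the rows kept so far; append the residual iff nonzero.

    Each stored row is zero at every earlier row's pivot column, so a single
    left-to-right pass is a valid membership test for the current span."""
    r = list(v)
    for b in basis:
        p = next(i for i, c in enumerate(b) if c != 0)  # pivot column of b
        if r[p] != 0:
            scale = r[p] / b[p]
            r = [r_i - scale * b_i for r_i, b_i in zip(r, b)]
    if any(c != 0 for c in r):
        basis.append(r)
        return True
    return False

def grow_basis(vectors):
    """Lemma 2's process: v_{k+1} joins the basis only if it lies in V \\ V_k."""
    basis, kept = [], []
    for v in vectors:
        if add_if_independent(basis, v):
            kept.append(list(v))
    return kept

# Hypothetical spanning list for a plane V inside Q^3.
vs = [[F(1), F(0), F(2)],
      [F(2), F(0), F(4)],   # 2 * first vector: already in V_1, skipped
      [F(0), F(1), F(1)],
      [F(1), F(1), F(3)]]   # sum of the two kept vectors: already in V_2
print(len(grow_basis(vs)))  # 2
```

The process keeps exactly two vectors, matching the fact that V is a plane.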

Lemma 3. Let V \subseteq \mathbb K^n be a subspace and \{\mathbf u_1,\dots, \mathbf u_k\} and \{\mathbf v_1,\dots,\mathbf v_m\} be bases for V. Then k = m.

Proof. Define the isomorphisms S : V \to \mathbb K^k and T : V \to \mathbb K^m by S(\mathbf u_i) = \mathbf e_i and T(\mathbf v_j) = \mathbf e_j. Then T \circ S^{-1} : \mathbb K^k \to \mathbb K^m is an isomorphism, so (T \circ S^{-1})(\{\mathbf e_1,\dots,\mathbf e_k\}) forms a basis of k elements for \mathbb K^m. By Lemma 1, k = m, as required.

Therefore, no matter which basis we use to define V, the number of basis elements will remain the same. This number is what we define to be the dimension of a subspace of \mathbb K^n.

Definition 1. For any linearly independent set K:= \{\mathbf v_1,\dots,\mathbf v_k\}, we define \dim(\mathrm{span}(K)) := k.

Example 1. For any subspace V \subseteq \mathbb K^n with basis \{\mathbf v_1,\dots,\mathbf v_k\}, we get \dim(V) = k. In particular, considering the standard basis \{\mathbf e_1,\dots,\mathbf e_n\}, \dim(\mathbb K^n) = n.

Now working in the special case of \mathbb K^n, let’s answer a crucial question: how does the dimension of a subspace compare with the dimension of the ambient space?

Theorem 1. Given subspaces W \subseteq V \subseteq \mathbb K^n, \dim(W) \leq \dim(V).

Proof. If W = V, then a basis for V is also a basis for W, and hence \dim(W) = \dim(V). If W \subsetneq V, let \{\mathbf v_1, \dots, \mathbf v_k\} be any basis for W. Running the process of Lemma 2 starting from this set (adjoining vectors of V outside the current span), we extend it to a basis

\{\mathbf v_1, \dots, \mathbf v_k, \mathbf u_1, \dots, \mathbf u_m\}

of V. Then

\dim(W) = k \leq k + m = \dim(V).
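The extension step in this proof can be carried out by the same greedy elimination process as in Lemma 2: seed it with a basis of W, then adjoin whichever spanning vectors of V fall outside the current span. A minimal sketch over \mathbb Q, with a hypothetical W = \mathrm{span}\{(1,1,0)\} inside V = \mathbb Q^3:

```python
from fractions import Fraction as F

def add_if_independent(basis, v):
    """Reduce v against the rows kept so far; append the residual iff nonzero."""
    r = list(v)
    for b in basis:
        p = next(i for i, c in enumerate(b) if c != 0)  # pivot column of b
        if r[p] != 0:
            scale = r[p] / b[p]
            r = [r_i - scale * b_i for r_i, b_i in zip(r, b)]
    if any(c != 0 for c in r):
        basis.append(r)
        return True
    return False

basis = []
for w in [[F(1), F(1), F(0)]]:          # hypothetical basis of W
    add_if_independent(basis, w)
dim_W = len(basis)

# The standard basis spans V = Q^3; adjoin whatever lies outside the span so far.
for v in [[F(1), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(1)]]:
    add_if_independent(basis, v)
dim_V = len(basis)

print(dim_W, dim_V)  # 1 3, so dim(W) = 1 <= 3 = dim(V)
```

The extended list contains the original basis of W, so k \leq k + m falls out of the construction.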

This definition certainly extends beyond \mathbb K^n.

Definition 2. Let V be a vector space over \mathbb K. We say that V is finite-dimensional if there exists n \in \mathbb N such that V \cong \mathbb K^n. In this case, we write \dim(V) = n. Otherwise, V is infinite-dimensional.

Corollary 1. Let V, W be vector spaces over \mathbb K. Suppose V is finite-dimensional. Then V \cong W if and only if W is finite-dimensional and \dim(V) = \dim(W).

Theorem 2. Let V be a finite-dimensional vector space over \mathbb K with \dim(V) = n. The following propositions hold:

  • V has a finite basis with n elements.
  • Any basis of V must have n elements.
  • For any subspace W \subseteq V, W is finite-dimensional and \dim(W) \leq \dim(V).

Proof. Let T : V \to \mathbb K^n be a vector space isomorphism. Since T^{-1} is bijective, T^{-1}(\{\mathbf e_1,\dots,\mathbf e_n\}) forms a basis for V. For any basis \{\mathbf u_1,\dots,\mathbf u_k\} of V, since T is bijective, T(\{\mathbf u_1,\dots,\mathbf u_k\}) forms a basis for \mathbb K^n, so that k = n.

Finally, let W \subseteq V be a subspace. Then T(W) \subseteq T(V) = \mathbb K^n is a subspace, and thus by Lemma 2 and Theorem 1 has a basis \{\mathbf w_1,\dots,\mathbf w_k\} of k \leq n elements. Since T^{-1} is bijective, T^{-1}(\{\mathbf w_1,\dots,\mathbf w_k\}) forms a basis for W with k elements. Hence, we can define the isomorphism R : W \to \mathbb K^k by R(T^{-1}(\mathbf w_i)) = \mathbf e_i, so that W \cong \mathbb K^k, and thus

\dim(W) = k \leq n = \dim(V).

Can we discuss infinite-dimensional vector spaces? The idea is to use our finite-dimensional intuition to shape the infinite-dimensional definitions. We observe that in the finite-dimensional setting, the dimension of a subspace V is the size of any basis K, i.e. if K is linearly independent and \mathrm{span}(K) = V, then \dim(V) = |K|. Unfortunately, making this precise in general requires more machinery, so we defer it to the setting where it becomes necessary.

For now, let’s explore isomorphisms in more detail. When are two vector spaces isomorphic to each other? It turns out that if we have a surjective linear transformation T : V \to W, then we can always find an isomorphism between a certain collection of subsets of V and W. This gives us the notion of the quotient space, encoded in the first isomorphism theorem of linear algebra.

—Joel Kindiak, 1 Mar 25, 2203H
