Many concepts regarding vectors in $\R^n $ can be extended to other mathematical systems. In this chapter we discuss collections of objects that behave like vectors do in $\R^n$, which we call vector spaces.
A vector space is a nonempty set $\cV$ of vectors on which we define two operations, addition and scalar multiplication, subject to the following rules.
For all $\vec{u}, \vec{v}, \vec{w}$ in $\cV$ and all scalars $c,d$ in $\R$:
$\vec{u} + \vec{v}$ is in $\cV$.
$\vec{u} + \vec{v} = \vec{v} + \vec{u}$.
$(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$.
There exists $\vec{0}$ (called the zero vector) in $\cV$ such that $\vec{u} + \vec{0} = \vec{u}$.
For each $\vec{u}$ in $\cV$, there exists $-\vec{u}$ in $\cV$ such that $\vec{u} + (-\vec{u}) = \vec{0}$.
$c \vec{u}$ is in $\cV$.
$c (\vec{u} + \vec{v}) = c \vec{u} + c \vec{v}$.
$(c+d) \vec{u} = c \vec{u} + d \vec{u}$.
$(cd) \vec{u} = c (d \vec{u})$.
$1 \vec{u} = \vec{u}$.
One can easily show the following properties:
The zero vector $\vec{0}$ is unique.
For each $\vec{u}$, the vector $-\vec{u}$ in statement 5 is unique.
$0 \vec{u} = \vec{0}$ for any $\vec{u}$.
$-\vec{u} = (-1) \vec{u}$ for any $\vec{u}$.
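For instance, $0 \vec{u} = \vec{0}$ and $-\vec{u} = (-1)\vec{u}$ follow from the axioms alone:
\[
0 \vec{u} \eq (0+0) \vec{u} \eq 0 \vec{u} + 0 \vec{u}
.\] Adding $-(0\vec{u})$ to both sides gives $0 \vec{u} = \vec{0}$. Then $\vec{u} + (-1)\vec{u} = (1+(-1))\vec{u} = 0\vec{u} = \vec{0}$, so by uniqueness of the negative, $(-1)\vec{u} = -\vec{u}$.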
Before looking at examples, note that $\R^n$ with vector addition and scalar multiplication (as defined in Section 1.3) is a vector space.
However, the “vectors” in a vector space can be many different kinds of objects: polynomials, continuous functions, matrices, sequences of numbers, operators, ...
Let $\cM_{2\times 2} = \cbr*{\mat{cc} a & b \\ c & d \rix \:\mid\: a,b,c,d \in \R}$ be the collection of $2\times 2$ matrices with real entries. Is $\cM_{2\times 2} $ a vector space?
For $n\ge 0$, define the set of polynomials of degree at most $n$ (in the variable $t$) by
\[
\cP_n \eq \cbr*{p(t) = a_0 + a_1t + \dots + a_nt^n \:\mid\: a_0, \dots , a_n \in \R }
.\] Is $\cP_n$ a vector space?
If $p(t) = a_0 + a_1t + \dots + a_n t^n$ and $q(t) = b_0 + b_1t + \dots + b_n t^n$, then ...
Let $\cS$ be the set of points inside and on the unit circle on the $xy$-plane:
\[
\cS \eq \cbr*{(x,y) \:\mid\: x^2+y^2 \le 1, \; x,y \in \R }
.\] Is $\cS$ a vector space?
The set of all real-valued functions defined on $[a,b]$ is denoted
\[
\cR[a,b] \eq \cbr*{f \:\mid\: f: [a,b] \to \R}
.\] Is $\cR[a,b]$ a vector space?
Notice that a subset of these functions are those which are continuous on $[a,b]$:
\[
C[a,b] \eq \cbr*{f \in \cR[a,b] \:\mid\: f \text{ is continuous} }
.\]
As the last example suggests, new vector spaces can be formed from subsets of other vector spaces.
A set $A$ is a subset of another set $B$, denoted $A \subseteq B$, if every element in $A$ is also in $B$.
A subset of a vector space is not always another vector space. More conditions are needed.
Let $\cV$ be a vector space. A subspace $\cH$ of $\cV$ is a subset of $\cV$ such that:
The zero vector $\vec{0}$ of $\cV$ is in $\cH$.
For all $\vec{u}$ and $\vec{v}$ in $\cH$, the sum $\vec{u} + \vec{v}$ is in $\cH$. ($\cH$ is closed under addition.)
For every $\vec{u}$ in $\cH$ and every scalar $c \in \R$, the product $c \vec{u}$ is in $\cH$. ($\cH$ is closed under scalar multiplication.)
Note that $\cH$ is a vector space on its own.
Every vector space $\cV$ is a subspace of itself, and $\cbr{\vec{0}}$ is a subspace of every $\cV$.
Both $C(\R)$ and $\cP_n$ are subspaces of $\cR(\R)$: each contains the zero function $0(t)=0$, the sum of two continuous functions (polynomials) is again continuous (a polynomial), and any scalar multiple of a continuous function (polynomial) is continuous (a polynomial).
Further, $\cP_n$ is a subspace of $C(\R )$.
Is the set $\cH = \cbr*{\mat{c} a \\ 0 \\ b \rix \:\mid\: a, b \in \R }$ a subspace of $\R^3$?
The zero vector $\vec{0} = \mat{c} 0 \\ 0 \\ 0 \rix$ is in $\cH$.
The sum of any two vectors in $\cH$ remains in $\cH$:
\[
\vec{u} + \vec{v} \eq
.\]
For any scalar $c \in \R $ and $\vec{u} \in \cH$,
\[
c \vec{u} \eq
.\]
Let $\cS = \cbr*{\mat{c} x \\ x +1 \rix \:\mid\: x\in \R}$. Is $\cS$ a subspace of $\R^2$?
Even though the answer in the last example was no, the set $\cS$ does remind us of the span of a set of vectors (here, a single vector). Under what conditions is the span of some vectors a subspace?
Let $\vec{v}_1$ and $\vec{v}_2$ be in a vector space $\cV$, and let $\cH = \Span \cbr*{\vec{v}_1, \vec{v}_2}$.
Is $\cH$ a subspace of $\cV$?
If $\vec{v}_1, \dots , \vec{v}_p$ are in a vector space $\cV$, then $\Span \cbr*{\vec{v}_1, \dots , \vec{v}_p}$ is a subspace of $\cV$.
The proof follows the same argument as above.
We call $\Span \cbr*{\vec{v}_1, \dots , \vec{v}_p}$ the subspace generated by or spanned by the vectors $\vec{v}_1, \dots , \vec{v}_p$.
Notice that our last example fails because the set was
\[
\cS \eq \mat{c} 0 \\ 1 \rix + \Span \cbr*{\mat{c} 1 \\ 1 \rix}
\] which is not the span of $(1,1)$.
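A quick numeric sanity check of why $\cS$ fails the subspace test (a sketch; the membership helper `in_S` is our own, not part of the notes):

```python
# Membership test for S = {(x, x+1) : x in R}.
def in_S(v):
    x, y = v
    return y == x + 1

# The zero vector of R^2 is not in S, so S cannot be a subspace.
assert not in_S((0.0, 0.0))

# S is also not closed under addition: (0,1) and (1,2) lie in S ...
assert in_S((0.0, 1.0)) and in_S((1.0, 2.0))
# ... but their sum (1, 3) does not.
assert not in_S((1.0, 3.0))
```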
Let $\cV = \cbr*{\mat{c} a + 2b \\ 2a-3b \rix \:\mid\: a,b \in \R}$. Is $\cV$ a subspace of $\R^2$?
Let $\cV = \cbr*{ \mat{c} a + 2b \\ a + 1 \\ a \rix \:\mid\: a, b \in \R}$. Is $\cV$ a subspace of $\R^3$?
Recall that $\cM_{2\times 2} $ is the vector space of real-valued $2\times 2$ matrices.
Is $\cH = \cbr*{\mat{cc} 2a & b \\ 3a+b & 3b \rix \:\mid\: a,b \in \R }$ a subspace of $\cM_{2\times 2} $?
Any matrix in $\cH$ can be written as
\[
\mat{cc} 2a & b \\ 3a+b & 3b \rix \eq \mat{cc} 2a & 0 \\ 3a & 0 \rix + \mat{cc} 0 & b \\ b & 3b \rix
\eq a \mat{rr} 2 & 0 \\ 3 & 0 \rix + b \mat{cc} 0 & 1 \\ 1 & 3 \rix
.\]
Therefore, we have
\[
\cH \eq \Span \cbr*{ \mat{rr} 2 & 0 \\ 3 & 0 \rix, \mat{cc} 0 & 1 \\ 1 & 3 \rix }
\] which makes $\cH$ a subspace of $\cM_{2\times 2} $.
Recall that $C[a,b] = \cbr*{f: [a,b] \to \R \:\mid\: f \text{ is continuous} } $.
Determine if $\cH = \cbr*{f \in C[a,b] \:\mid\: f(a) = f(b)} $ is a subspace of $C[a,b]$.
The zero function is clearly in $\cH$.
Let $f$ and $g$ be in $\cH$.
Let $c \in \R $. Then
Section 4.2: Null, Column, Row Spaces
In this section we investigate three important subspaces associated to a matrix.
The null space of an $m\times n$ matrix $A$, denoted $\Null (A)$, is the set of all solutions to $A \vec{x} = \vec{0}$. That is,
\[
\Null (A) \eq \cbr*{\vec{x} \in \R^n \:\mid\: A \vec{x} = \vec{0}}
.\]
Note that $A \vec{x}$ and $\vec{0}$ are vectors in $\R^m $, whereas $\vec{x} \in \R^n$.
Clearly, $\Null (A)$ contains at least $\vec{0}$. For example, for $A = \mat{rrr} 1 & -3 & -2 \\ -5 & 9 & 1 \rix$, we have
\[
\mat{r} 5 \\ 3 \\ -2 \rix \in \Null (A)
\qquad\text{and} \qquad
\mat{r} 0 \\ 0 \\ 0 \rix \in \Null (A)
.\]
The null space of an $m\times n$ matrix $A$ is a subspace of $\R^n $.
Clearly, $\vec{0}$ is in $\Null (A)$.
Let $\vec{u}, \vec{v}$ be in $\Null (A)$. Then $A \vec{u} = \vec{0}$ and $A \vec{v} = \vec{0}$, so
\[
A (\vec{u} + \vec{v}) \eq A \vec{u} + A \vec{v} \eq \vec{0}
.\] So $\vec{u} + \vec{v} \in \Null (A)$.
Let $\vec{u}$ be in $\Null (A)$ and $c$ be any scalar. Then
\[
A (c \vec{u}) \eq c \cdot A \vec{u} \eq c \cdot \vec{0} \eq \vec{0}
.\] So $c \vec{u} \in \Null (A)$.
Therefore, $\Null (A)$ is a subspace of $\R^n $.
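The subspace property can be checked computationally (a sketch assuming SymPy is available; `nullspace()` is SymPy's built-in, applied to the $2\times 3$ matrix of the earlier example):

```python
from sympy import Matrix

A = Matrix([[ 1, -3, -2],
            [-5,  9,  1]])

# SymPy returns a basis for the null space directly.
basis = A.nullspace()
assert len(basis) == 1          # one free variable, one basis vector

v = basis[0]
# The basis vector and any scalar multiple are sent to the zero vector of R^2.
assert A * v == Matrix([0, 0])
assert A * (7 * v) == Matrix([0, 0])

# The particular vector (5, 3, -2) from the example is also in Null(A).
u = Matrix([5, 3, -2])
assert A * u == Matrix([0, 0])
```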
Find a spanning set for the null space of
$A = \mat{rrrrr}
-3 & 6 & -1 & 1 & -7 \\
1 & -2 & 2 & 3 & -1 \\
2 & -4 & 5 & 8 & -4
\rix$
Since $x_2, x_4, x_5$ are free variables, the solution is
\begin{align*}
\mat{c}
x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \rix
& \eq \\
& \eq \\
& \eq
\end{align*}
Notice the following observations:
The spanning set $\cbr*{\vec{u}, \vec{v}, \vec{w}}$ in the last example is a linearly independent set.
\[
x_2 \mat{c} 2 \\ 1 \\ 0 \\ 0 \\ 0 \rix + x_4 \mat{r} 1 \\ 0 \\ -2 \\ 1 \\ 0 \rix + x_5 \mat{r} -3 \\ 0 \\ 2 \\ 0 \\ 1 \rix = \mat{c} 0 \\ 0 \\ 0 \\ 0 \\ 0 \rix
\qquad \implies \qquad
x_2 = x_4 = x_5 = 0
.\]
When $\Null (A)$ is nontrivial (i.e., contains at least one nonzero vector), then the number of vectors in the spanning set for $\Null (A)$ is the number of free variables in the equation $A \vec{x} = \vec{0}$. When $\Null (A) = \cbr*{\vec{0}}$, what can be said?
Summary of Null Space. For any $m\times n$ matrix $A$:
$\Null (A)$ is a subspace of $\R^n $.
$\Null (A)$ is the set of all vectors $\vec{x} \in \R^n $ such that $A \vec{x} = \vec{0} \in \R^m$.
To find an explicit description of $\Null (A)$ (i.e., a spanning set), perform row operations on $\mat{c|c} A & \vec{0} \rix$.
$\Null (A) = \cbr*{\vec{0}}$ if and only if $A \vec{x} = \vec{0}$ has only the trivial solution.
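This recipe can be verified mechanically on the $3\times 5$ matrix of the spanning-set example (a sketch assuming SymPy; three free variables give three basis vectors):

```python
from sympy import Matrix

A = Matrix([[-3,  6, -1, 1, -7],
            [ 1, -2,  2, 3, -1],
            [ 2, -4,  5, 8, -4]])

basis = A.nullspace()
assert len(basis) == 3                      # x2, x4, x5 are free

zero = Matrix([0, 0, 0])
for v in basis:
    # Each basis vector solves Ax = 0, so the set spans Null(A).
    assert A * v == zero
```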
The column space of an $m\times n$ matrix $A$, denoted $\Col (A)$, is the set of linear combinations of the columns of $A$. That is, for $A = \mat{ccc} \vec{a}_1 & \dots & \vec{a}_n \rix$, then
\[
\Col (A) \eq \Span \cbr*{\vec{a}_1, \dots , \vec{a}_n}
.\]
A vector in $\Col (A)$ can be written as $A \vec{x}$ for some $\vec{x}$ in $\R^n$ (Why?). Therefore,
\[
\Col (A) \eq \cbr*{\vec{b} \in \R^m \:\mid\: \vec{b} = A \vec{x} \quad \text{for some } \vec{x} \in \R^n}
\] is an alternative way to describe the column space.
The column space of an $m\times n$ matrix $A$ is a subspace of $\R^m $.
Let $\cW = \cbr*{\mat{c} x-2y \\ 3y \\ x + y \rix \:\mid\: x,y \in \R}$. Notice that
\[
\mat{c} x-2y \\ 3y \\ x + y \rix
\eq
.\]
Therefore, $\cW$ is given by $\Col (A)$ for
\[
A \eq
.\]
Observation: The range of the linear transformation $T(\vec{x}) = A \vec{x}$ from $\R^2$ to $\R^3$ is $\cW$. For this reason, the column space is also called the range of the matrix.
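Membership in a column space is a consistency question: $\vec{b}$ is in $\Col(A)$ exactly when $A\vec{x} = \vec{b}$ has a solution, i.e. when $\mathrm{rank}[A \mid \vec{b}] = \mathrm{rank}(A)$. A sketch assuming SymPy, using the matrix whose column space is the set $\cW$ above:

```python
from sympy import Matrix

# Columns (1,0,1) and (-2,3,1), read off from (x-2y, 3y, x+y).
A = Matrix([[1, -2],
            [0,  3],
            [1,  1]])

def in_col_space(A, b):
    # b is in Col(A)  <=>  appending b does not increase the rank.
    return A.row_join(b).rank() == A.rank()

# The first column of A is trivially a combination of the columns.
assert in_col_space(A, Matrix([1, 0, 1]))
# (0, 0, 1) is not of the form (x - 2y, 3y, x + y).
assert not in_col_space(A, Matrix([0, 0, 1]))
```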
Recall from Section 1.4 that the columns of an $m\times n$ matrix $A$ span $\R^m$ if and only if $A \vec{x} = \vec{b}$ has a solution for every $\vec{b} \in \R^m $. Hence, the following result.
The column space of an $m\times n$ matrix $A$ is all of $\R^m $ if and only if $A \vec{x} = \vec{b}$ has a solution for all $\vec{b} \in \R^m $.
Other equivalent conditions to $\Col (A) = \R^m $:
$A$ has a pivot position in every row.
Every $\vec{b} \in \R^m $ is a linear combination of the columns of $A$.
For any $m\times n$ matrix $A$: Null Space:
$\Null (A)$ is a subspace of $\R^n $.
$\Null (A)$ is the set of all $\vec{x} \in \R^n $ such that $A \vec{x} = \vec{0} \in \R^m$.
To find a spanning set, perform row operations on $\mat{c|c} A & \vec{0} \rix$.
$\Null (A) = \cbr*{\vec{0}}$ if and only if $A \vec{x} = \vec{0}$ has only the trivial solution.
Column Space:
$\Col (A)$ is a subspace of $\R^m $.
$\Col (A)$ is the set of all $\vec{b} \in \R^m $ such that $\vec{b} = A \vec{x}$ for some $\vec{x} \in \R^n $.
To find a spanning set, list the columns of $A$.
$\Col (A) = \R^m $ if and only if $A \vec{x} = \vec{b}$ has a solution for all $\vec{b} \in \R^m $.
The row space of an $m\times n$ matrix $A$, denoted $\Row (A)$, is the set of linear combinations of the rows of $A$.
Determine if the following sets are vector spaces. Verify your answer.
Hint: Use the theorems of this section!
$\cV = \cbr*{\mat{c} x \\ y \\ z \rix \:\mid\: x-y=0, \; y+z=0}$
$\cM = \cbr*{\mat{c} c-6d \\ d \\ c \rix \:\mid\: c,d \in \R } $
Section 4.3: Linearly Independent Sets, Bases
Here we answer: which subsets of vectors span a vector space as “efficiently” as possible?
\bigskip
Recall that a set of vectors $\cbr*{\vec{v}_1, \dots , \vec{v}_p}$ in a vector space $\cV$ is linearly independent if
\[
c_1 \vec{v}_1 + \dots + c_p \vec{v}_p \eq \vec{0}
\] has only the trivial solution $c_1 = 0$, \dots, $c_p=0$.
The following theorem from Section 1.7 applies to a general vector space.
A set of two or more vectors $\cbr*{\vec{v}_1, \dots , \vec{v}_p}$, with $\vec{v}_1 \neq \vec{0}$, is linearly dependent if and only if some $\vec{v}_j$ ($j>1$) is a linear combination of the preceding vectors $\vec{v}_1, \dots , \vec{v}_{j-1}$.
Are the following sets linearly independent or linearly dependent?
\begin{enumerate}
\item $\cbr*{p_1, p_2, p_3}$ in $\cP_2$ where $p_1(t)=t, p_2(t)=t^2, p_3(t)=4t+2t^2$.
\sol[4em]{Linearly dependent, since $p_3 = 4 p_1 + 2 p_2$.}
\item $\cbr*{p_1, p_2, p_3}$ in $\cP_3$, where $p_1(t) = (t-1)$, $p_2(t) = (t-1)(t-2)$, and
$p_3(t) = (t-1)(t-2)(t-3)$.
\bigskip
Notice that
\begin{align*}
c_1 p_1 + c_2 p_2 + c_3 p_3 & \eq 0 \\
c_1 (t-1) + c_2 (t-1)(t-2) + c_3 (t-1)(t-2)(t-3) & \eq 0 \\
-c_1+2c_2-6c_3 + t(c_1-3c_2+11c_3) + t^2(c_2-6c_3) + t^3(c_3) & \eq 0
\end{align*}
\sol{$-c_1+2c_2-6c_3=0$, $c_1-3c_2+11c_3=0$, $c_2-6c_3=0$, $c_3=0$; so $c_3=0$ forces $c_2=0$ and then $c_1=0$.}
\end{enumerate}
In Homework 2, you considered the matrix
\[
A \eq
\mat{rrrrr}
8 & 11 & -6 & -7 & 13 \\
-7 & -8 & 5 & 6 & -9 \\
11 & 7 & -7 & -9 & -6 \\
-3 & 4 & 1 & 8 & 7
\rix
\qquad \xrightarrow{\RREF} \quad
\mat{ccccc}
1 & 0 & -7/13 & 0 & 0 \\
0 & 1 & -2/13 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\rix
.\] The columns of $A$ span $\R^4$ (pivot position in every row) but are not linearly independent.
Indeed, $\vec{a}_3 = \textcolor{red}{-\frac{7}{13}\vec{a}_1 - \frac{2}{13} \vec{a}_2}$.
This tells us that
\begin{align*}
\Span \cbr*{\vec{a}_1, \vec{a}_2, \vec{a}_3, \vec{a}_4, \vec{a}_5}
& \eq \Span \cbr*{\vec{a}_1, \vec{a}_2, \textcolor{red}{-\frac{7}{13}\vec{a}_1 - \frac{2}{13} \vec{a}_2}, \vec{a}_4, \vec{a}_5} \\
& \eq \Span \cbr*{\vec{a}_1, \vec{a}_2, \vec{a}_4, \vec{a}_5 }.
\end{align*}
Why?
Let's prove a simpler version (cf. Homework 2!).
If $\cH = \cbr*{\vec{v}_1, \vec{v}_2, \vec{v}_3}$ and $\vec{v}_3 = \vec{v}_1 + \vec{v}_2$, then $\Span \cH = \Span \cbr*{\vec{v}_1, \vec{v}_2}$.
The goal: show 1) all vectors in $\Span \cH$ are also in $\Span \cbr*{\vec{v}_1, \vec{v}_2}$, and 2) all vectors in $\Span \cbr*{\vec{v}_1, \vec{v}_2}$ are in $\Span \cH$. I.e., $\Span \cH \subseteq \Span \cbr*{\vec{v}_1, \vec{v}_2}$ and $\Span \cbr*{\vec{v}_1, \vec{v}_2} \subseteq \Span \cH$.
\medskip
First, let $\vec{x}$ be in $\Span \cH$. Then for some $c_1, c_2, c_3$,
\[
\vec{x} \eq c_1 \vec{v}_1 + c_2 \vec{v}_2 + c_3 \pbr*{\vec{v}_1 + \vec{v}_2} \eq (c_1 + c_3) \vec{v}_1 + (c_2 + c_3) \vec{v}_2
.\] Therefore, $\vec{x}$ is in $\Span \cbr*{\vec{v}_1, \vec{v}_2}$.
On the other hand, let $\vec{y}$ be in $\Span \cbr*{\vec{v}_1, \vec{v}_2}$. Then for some $c_1, c_2$,
\[
\vec{y} \eq c_1 \vec{v}_1 + c_2 \vec{v}_2 \eq c_1 \vec{v}_1 + c_2 \vec{v}_2 + 0\cdot \vec{v}_3
.\] Therefore, $\vec{y}$ is in $\Span \cH$. So $\cH$ and $\cbr*{\vec{v}_1, \vec{v}_2}$ span the same space.
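A numeric instance of the argument above (a sketch assuming SymPy; the vectors $\vec{v}_1, \vec{v}_2$ are samples of our own choosing):

```python
from sympy import Matrix

v1 = Matrix([1, 0, 2])
v2 = Matrix([0, 1, 1])
v3 = v1 + v2                        # the dependent vector

M3 = v1.row_join(v2).row_join(v3)   # columns v1, v2, v3
M2 = v1.row_join(v2)                # columns v1, v2

# Both column sets span the same 2-dimensional space.
assert M3.rank() == M2.rank() == 2

# v3 is in Span{v1, v2}: appending it does not change the rank.
assert M2.row_join(v3).rank() == M2.rank()
```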
\bigskip
So columns 1, 2, 4, and 5 of $A$ span $\R^4$. This is a “more efficient” spanning set for $\R^4$ compared to all the columns of $A$.
This gives us the notion of a basis set—an “efficient” spanning set in that it does not contain unnecessary vectors.
Let $\cH$ be a subspace of a vector space $\cV$. A set of vectors $\cB$ in $\cV$ is a basis for $\cH$ if
\begin{enumerate}
\item $\cB$ is a linearly independent set, and
\item $\cH = \Span (\cB)$.
\end{enumerate}
That is: \fbox{a basis is a linearly independent spanning set} for the subspace.
A basis for $\R^4$ is given by
\[
\cB \eq \cbr*{\vec{e}_1, \vec{e}_2, \vec{e}_3, \vec{e}_4}
\] which is called the standard basis for $\R^4$. Why? Because
\[
A \eq \mat{cccc} \vec{e}_1 & \vec{e}_2 & \vec{e}_3 & \vec{e}_4 \rix \eq
\mat{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\rix
\] has a pivot in every row, so its columns span $\R^4$, and there is a pivot in every column, so its columns are linearly independent.
Show that $\cB = \cbr*{1, t, t^2, \dots , t^{n}}$ is a basis for $\cP_n$.
Clearly, $\cB$ spans $\cP_n$ because any polynomial in $\cP_n$ can be written as
\sol[4em]{
\[
c_0 \cdot 1 + c_1 \cdot t + c_2 \cdot t^2 + \dots + c_n \cdot t ^{n}
\]}
for some choice of coefficients $c_0, \dots , c_n$. And $\cB$ is linearly independent because
\sol[4em]{
\[
c_0 \cdot 1 + c_1 \cdot t + c_2 \cdot t^2 + \dots + c_n \cdot t ^{n} \eq 0
\]}
implies that all coefficients $c_0, \dots , c_n$ are $0$ (by simply matching).
The following (very important!) result, the Spanning Set Theorem, generalizes this observation.
Let $\cS = \cbr*{\vec{v}_1, \dots , \vec{v}_p}$ be a set of vectors in a vector space $\cV$ and $\cH = \Span \cbr*{\vec{v}_1, \dots , \vec{v}_p}$.
\begin{enumerate}
\item Suppose one of the vectors in $\cS$, say $\vec{v}_k$, is a linear combination of the other vectors in $\cS$. Then $\cS$ without this vector $\vec{v}_k$ still spans $\cH$.
\item If $\cH \neq \cbr*{\vec{0}}$, then some subset of $\cS$ is a basis for $\cH$.
\end{enumerate}
That is, by removing a vector that is a linear combination of the others, you obtain a new set which still spans the same space.
\bigskip
This theorem tells us that we can construct a basis for a vector space $\cV$ by starting with a spanning set of $\cV$ and then pruning it down to a linearly independent set.
\bigskip
In this view, a basis is a spanning set that is as small as possible.
Once a spanning set has been pruned down to a linearly independent set, removing any further vector breaks the spanning property: the deleted vector is not a linear combination of the remaining ones, so the smaller set no longer spans the space and is not a basis.
% If another vector was removed from this linearly independent set, then that vector would not be a linear combination from the set, and hence it would no longer span the space.
\vspace{8em}
Alternatively, a basis is a linearly independent set that is as large as possible.
If $\cS$ is linearly independent and spans $\cV$, then adding any other vector of $\cV$ to $\cS$ makes the enlarged set linearly dependent (the new vector is a linear combination from $\cS$), so it is no longer a basis.
% If $\cS$ is a basis for $\cV$ and another vector from $\cV$ is added to $\cS$, then the new set is not linearly independent (since $\cS$ spans $\cV$) and so this vector is a linear combination from $\cS$, which is a contradiction.
Given a matrix, what is a basis for its nullspace?
For illustration, recall the earlier example:
\[
A=\mat{rrrrr|c}
-3 & 6 & -1 & 1 & -7 & 0 \\
1 & -2 & 2 & 3 & -1 & 0\\
2 & -4 & 5 & 8 & -4 & 0
\rix
\quad
\xrightarrow{\text{\normalsize Row Ops.}}
\quad
\RREF(A)=
\mat{rrrrr|c}
1 & -2 & 0 & -1 & 3 & 0 \\
0 & 0 & 1 & 2 & -2 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\rix
\]
Since $x_2, x_4, x_5$ are free variables, the solution is
\begin{align*}
\mat{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \rix
& \eq
x_2 \mat{c} 2 \\ 1 \\ 0 \\ 0 \\ 0 \rix + x_4 \mat{r} 1 \\ 0 \\ -2 \\ 1 \\ 0 \rix + x_5 \mat{r} -3 \\ 0 \\ 2 \\ 0 \\ 1 \rix \eq x_2 \vec{u} + x_4 \vec{v} + x_5 \vec{w} .
\end{align*}
Therefore, we have
\[
\Null (A) \eq \sol[][13em]{\Span \cbr*{\vec{u}, \vec{v}, \vec{w}}}
\]
\sol[5em]{Because $\cbr*{\vec{u}, \vec{v}, \vec{w}}$ spans the nullspace and is a linearly independent set, it is a basis for the nullspace.}
What is a basis for the column space? Writing $\vec{a}_j$ for the columns of $A$ and $\vec{b}_j$ for the columns of its \RREF, observe that
\begin{align*}
\vec{a}_2 & = -2 \vec{a}_1 & \quad \vec{b}_2 &= -2 \vec{b}_1 \\
\vec{a}_4 & = -\vec{a}_1 + 2 \vec{a}_3 & \quad \vec{b}_4 & = - \vec{b}_1 + 2 \vec{b}_3 \\
\vec{a}_5 & = 3 \vec{a}_1 -2 \vec{a}_3 & \quad \vec{b}_5 & = 3 \vec{b}_1 -2 \vec{b}_3
.\end{align*}
Elementary row operations do not affect linear dependence relations among the columns!
The columns of $A$ form a spanning set for $\Col(A)$, but columns 2, 4, and 5 are linear combinations of columns 1 and 3. By the Spanning Set Theorem,
\[
\Span \cbr*{\vec{a}_1, \vec{a}_2, \vec{a}_3, \vec{a}_4, \vec{a}_5}
\eq
\sol[][16em]{\Span \cbr*{\vec{a}_1, \vec{a}_3}}
\]
\sol{and so a basis for $\Col(A)$ is $\cbr*{\vec{a}_1, \vec{a}_3} $.}
In summary, we have the following.
The pivot columns of a matrix form a basis for its column space.
Warning! The pivot columns of $A$ form the basis, not the columns in the \RREF.
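This warning can be checked directly (a sketch assuming SymPy; its `columnspace()` returns the pivot columns of the original matrix, and `rref()` reports the 0-based pivot column indices):

```python
from sympy import Matrix

A = Matrix([[-3,  6, -1, 1, -7],
            [ 1, -2,  2, 3, -1],
            [ 2, -4,  5, 8, -4]])

basis = A.columnspace()
assert len(basis) == 2                 # pivots in columns 1 and 3

# The basis vectors are columns 1 and 3 of the ORIGINAL matrix A,
# not the corresponding columns of the RREF.
assert basis[0] == A.col(0)
assert basis[1] == A.col(2)

_, pivots = A.rref()
assert pivots == (0, 2)
```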
\bigskip
What about a basis for the row space?
Two row equivalent matrices $A$ and $B$ have the same row space.
Furthermore, if $B = \RREF(A)$, then the nonzero rows of $B$ form a basis for $\Row(A)$.
Continuing with the same matrix $A$ as before:
\[
A=\mat{rrrrr}
-3 & 6 & -1 & 1 & -7 \\
1 & -2 & 2 & 3 & -1 \\
2 & -4 & 5 & 8 & -4
\rix
\quad
\xrightarrow{\text{\normalsize Row Ops.}}
\quad
B=
\mat{rrrrr}
1 & -2 & 0 & -1 & 3 \\
0 & 0 & 1 & 2 & -2 \\
0 & 0 & 0 & 0 & 0
\rix
.\] A basis for the row space of $A$ is:
\sol[12em]{the (transposes) of the first two rows of $B$:
\[
\cbr*{\mat{r} 1 \\ -2 \\ 0 \\ -1 \\ 3 \rix, \mat{r} 0 \\ 0 \\ 1 \\ 2 \\ -2 \rix}
.\] }
Summary: Given $A$ and its \RREF, then:
\medskip
\begin{enumerate}\itemsep=1em
\item A basis for $\Null (A)$ consists of the vectors used in the parametric form of the solution to $A \vec{x} = \vec{0}$.
\item A basis for $\Col (A)$ consists of the pivot columns of $A$.
\item A basis for $\Row(A)$ consists of the transposes of the nonzero rows in the \RREF.
\end{enumerate}
The matrix $A$ and its \RREF are given below:
\[
A =
\mat{rrr}
1 & -2 & 3 \\
2 & -4 & 6 \\
-3 & 6 & 9
\rix \qquad
\RREF(A)=
\mat{rrr} 1 & -2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \rix
.\]
Find bases for $\Null (A)$, $\Col (A)$, $\Row(A)$.
\bigskip
Solving $A \vec{x} = \vec{0}$ yields
\[
\mat{c} x_1 \\ x_2 \\ x_3 \rix
\eq
\sol[][20em]{
\mat{c} 2x_2 \\ x_2 \\ 0 \rix
\eq
x_2 \mat{c} 2 \\ 1 \\ 0 \rix
}
\]
Therefore, a basis for $\Null (A)$ is
\sol[8em]{\[
\cbr*{\mat{c} 2 \\ 1 \\ 0 \rix}
.\]}
A basis for $\Col (A)$ is
\sol[10em]{
\[
\cbr*{\mat{r} 1 \\ 2 \\ -3 \rix, \mat{c} 3 \\ 6 \\ 9 \rix }
.\] (cols 1 and 3 are basic)}
A basis for $\Row(A)$ is
\sol{
\[
\cbr*{\mat{r} 1 \\ -2 \\ 0 \rix, \mat{r} 0 \\ 0 \\ 1 \rix}
.\] (rows 1 and 2 are nonzero)}
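All three bases for this $3\times 3$ matrix can be confirmed at once (a sketch assuming SymPy):

```python
from sympy import Matrix

A = Matrix([[ 1, -2, 3],
            [ 2, -4, 6],
            [-3,  6, 9]])

# Null space: one free variable (x2), so one basis vector.
null_basis = A.nullspace()
assert len(null_basis) == 1
assert A * null_basis[0] == Matrix([0, 0, 0])

# Column space: the pivot columns (1 and 3) of A itself.
col_basis = A.columnspace()
assert col_basis == [A.col(0), A.col(2)]

# Row space: spanned by the nonzero rows of the RREF; dimension 2.
assert len(A.rowspace()) == 2
assert A.rank() == 2
```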
Section 4.5: Dimension of a Vector Space
How “large” is a vector space? What is a good measure to quantify its size?
We will answer this after two insightful results.
\bigskip
If a vector space $\cV$ has a basis of $n$ vectors, then any set in $\cV$ containing more than $n$ vectors must be linearly dependent.
Why? Think about $\R^n$: any set of $n+1$ vectors in $\R^n$ must be linearly dependent, since an $n\times (n+1)$ matrix cannot have a pivot in every column. A proof for a general vector space requires more tools than we currently have.
Put another way: If a vector space $\cV$ has a basis of $n$ vectors, then every linearly independent subset of $\cV$ has no more than $n$ vectors.
Be Careful! If $\cV$ has a basis of $n$ vectors, then a subset of $\cV$ with at most $n$ vectors may or may not be linearly dependent.
\vspace{4em}
If a vector space $\cV$ has a basis of $n$ vectors, then every basis of $\cV$ consists of exactly $n$ vectors.
Suppose $\cB_1$ is a basis with $n$ vectors and $\cB_2$ is any other basis. Because $\cB_2$ is linearly independent, the previous theorem tells us $\cB_2$ has no more than $n$ vectors.
\begin{itemize}
\item If $\cB_2$ has exactly $n$ vectors, we are done.
\item If $\cB_2$ has fewer than $n$ vectors, then applying the previous theorem with $\cB_2$ as the basis implies that $\cB_1$ must be linearly dependent (since $\cB_1$ has more vectors than $\cB_2$), which is a contradiction.
\end{itemize}
Therefore, $\cB_1$ and $\cB_2$ have the same number of vectors $n$.
This result means that the number of elements in any basis of a vector space is always the same. Even though a vector space can have many bases, they must all have the same number of vectors!
\medskip
This allows us to quantify the “size” of the vector space.
Suppose $\cV$ is a vector space spanned by a finite set. Then we say $\cV$ is finite dimensional, and the dimension of $\cV$, written $\Dim (\cV)$, is the number of vectors in any basis for $\cV$.
\begin{itemize}
\item The dimension of $\cV = \cbr*{\vec{0}}$ is defined to be $0$.
\item If $\cV$ is not spanned by a finite set, we say $\cV$ is infinite dimensional.
\end{itemize}
Determine the dimensions of the following vector spaces and provide a basis.
\begin{itemize}
\item $\R^2$ \sol[4em]{Dimension 2, $\cbr*{\mat{c} 1 \\ 0 \rix, \mat{c} 0 \\ 1 \rix }$ (the standard basis).}
\item $\R^n $ \sol[4em]{Dimension $n$, $\cbr*{\vec{e}_1, \vec{e}_2, \dots , \vec{e}_n}$ (the standard basis).}
\item $\cP_2$ \sol[3em]{Dimension 3, $\cbr*{1, t, t^2}$ (the standard basis).}
\item $\cP_n$ \sol[4em]{Dimension $n+1$, $\cbr*{1, t, t^2, \dots , t^n} $ (the standard basis).}
\item $\cM_{2\times 2} $ \sol[5em]{Dimension 4, $\cbr*{\mat{cc} 1 & 0 \\ 0 & 0 \rix, \mat{cc} 0 & 1 \\ 0 & 0 \rix, \mat{cc} 0 & 0 \\ 1 & 0 \rix, \mat{cc} 0 & 0 \\ 0 & 1 \rix }$.}
\item $\cP$ (polynomials of all degrees) \sol{Infinite dimensional space.}
\end{itemize}
Find a basis and the dimension of the subspace
\[
\cW \eq \cbr*{\mat{c} a + b + 2c \\ 2a + 2b + 4c + d \\ b + c + d \\ 3a + 3c + d \rix \:\mid\: a,b,c,d \in \R }
.\]
Notice that we can write
\[
\mat{c} a + b + 2c \\ 2a + 2b + 4c + d \\ b + c + d \\ 3a + 3c + d \rix
\eq
\sol[][20em]{
a \mat{c} 1 \\ 2 \\ 0 \\ 3 \rix +
b \mat{c} 1 \\ 2 \\ 1 \\ 0 \rix +
c \mat{c} 2 \\ 4 \\ 1 \\ 3 \rix +
d \mat{c} 0 \\ 1 \\ 1 \\ 1 \rix}
\]
\sol[5em]{Therefore, $\cW = \Span \cbr*{\vec{v}_1, \vec{v}_2, \vec{v}_3, \vec{v}_4}$.
Notice that $\vec{v}_3 = \vec{v}_1 + \vec{v}_2$, so we can remove $\vec{v}_3$ by the Spanning Set Theorem. What remains is
\[
\cbr*{\mat{c} 1 \\ 2 \\ 0 \\ 3 \rix, \mat{c} 1 \\ 2 \\ 1 \\ 0 \rix, \mat{c} 0 \\ 1 \\ 1 \\ 1 \rix}
\] and this set is linearly independent. Hence, it is a basis for $\cW$ and therefore $\Dim (\cW) = 3$.
}
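The dimension count can be double-checked by assembling the four spanning vectors as columns and computing the rank (a sketch assuming SymPy):

```python
from sympy import Matrix

v1 = Matrix([1, 2, 0, 3])
v2 = Matrix([1, 2, 1, 0])
v3 = Matrix([2, 4, 1, 3])
v4 = Matrix([0, 1, 1, 1])

M = v1.row_join(v2).row_join(v3).row_join(v4)

assert v3 == v1 + v2           # the dependence used to prune the set
assert M.rank() == 3           # so dim(W) = 3

# The pruned set {v1, v2, v4} is linearly independent.
assert v1.row_join(v2).row_join(v4).rank() == 3
```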
Let $\cH$ be a subspace of a finite dimensional vector space $\cV$. Any linearly independent subset in $\cH$ can be expanded, if necessary, to a basis for $\cH$.
Furthermore, $\cH$ is finite dimensional and
\[
\Dim (\cH) \eq[\le] \Dim (\cV)
.\]
Read the textbook (page 243).
\bigskip
\medskip
\begin{tabularx}{\linewidth}{XXX}
Start & Action & Result \\ [0.2em]\hline\hline
Spanning set & Remove L.D. vectors & \sol{Create a basis} \\ [2em]
Spanning set & Add vectors & \sol{Cannot create a basis} \\ [2em]
L.I. set & Remove vectors & \sol{Cannot create a basis} \\ [2em]
L.I. set & Add vectors & Create a basis
\end{tabularx}
\vspace{2em}
Let $\cH = \Span \cbr*{\vec{v}_1, \vec{v}_2}$ be a subspace of $\R^3$, where $\vec{v}_1 = \mat{c} 1 \\ 0 \\ 0 \rix, \; \vec{v}_2 = \mat{c} 1 \\ 1 \\ 0 \rix$.
\begin{enumerate}
\item What is the dimension of $\cH$?
\sol[6em]{The dimension is $2$, since the vectors are linearly independent.}
\item Is $\cbr*{\vec{v}_1, \vec{v}_2}$ a basis for $\R^3$? If not, create one using $\vec{v}_1$ and $\vec{v}_2$.
\sol{
No, because $2 < 3 = \Dim (\R^3)$. One vector that can be added is
$\vec{v}_3 = \mat{c} 0 \\ 0 \\ 1 \rix$
and then $\cbr*{\vec{v}_1, \vec{v}_2, \vec{v}_3}$ is a basis for $\R^3$.
}
\end{enumerate}
If you know the dimension of a vector space (say, $p$), you can find a basis by:
\begin{enumerate}
\item Finding a set of $p$ vectors, and
\item Checking if the set is linearly independent OR checking if the set spans the space.
\end{enumerate}
Namely, if you have the right number of vectors you do NOT have to check both!
\bigskip
Let $\cV$ be a $p$-dimensional vector space, $p\ge 1$. Then:
\begin{enumerate}
\item Any set of $p$ linearly independent vectors is a basis for $\cV$.
\item Any spanning set of $p$ vectors is a basis for $\cV$.
\end{enumerate}
Read the textbook (page 243).
Show that a basis for $\cP_2$ is $\cbr*{t, \, 1-t, \, 1+t-t^2}$.
\bigskip
We know $\cP_2$ has dimension $3$, which matches the number of elements in this set. Therefore, by the Basis Theorem, it suffices to check either that 1) the set is linearly independent, or that 2) it spans $\cP_2$.
\bigskip
For linear independence, form the homogeneous equation
\begin{align*}
c_1 \cdot t + c_2 \cdot (1-t) + c_3 \cdot (1+t-t^2) & \eq 0 \\
(c_2 + c_3) \cdot 1 + (c_1-c_2+c_3)\cdot t + (-c_3) \cdot t^2 &\eq 0
.\end{align*}
Matching coefficients, the only solution is $c_3=0$ (from $t^2$), then $c_2=0$ (from the constant term), and then $c_1=0$ (from $t$). Therefore the set is linearly independent; so it is a basis.
\bigskip
Showing directly that the set spans $\cP_2$ would take much more work!
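The independence check can also be done numerically (a sketch assuming SymPy; each column of the matrix records the coefficients of one polynomial in the standard basis $\cbr*{1, t, t^2}$):

```python
from sympy import Matrix

# Columns: coefficients of (1, t, t^2) for t, 1-t, and 1+t-t^2.
M = Matrix([[0,  1,  1],
            [1, -1,  1],
            [0,  0, -1]])

# Invertible coefficient matrix <=> independent columns <=> basis of P_2.
assert M.det() != 0
assert M.rank() == 3       # three independent vectors in a 3-dim space
```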
Recall how we found bases for $\Null (A), \Col (A), \Row(A)$:
Given $A$ and its \RREF, then:
\medskip
\begin{enumerate}\itemsep=1em
\item A basis for $\Null (A)$ consists of the vectors used in the parametric form of the solution to $A \vec{x} = \vec{0}$.
\item A basis for $\Col (A)$ consists of the pivot columns of $A$.
\item A basis for $\Row(A)$ consists of the transposes of the nonzero rows in $\RREF(A)$.
\end{enumerate}
The dimensions of these spaces are very important!
The rank of an $m\times n$ matrix is the dimension of its column space.
The nullity of an $m\times n$ matrix is the dimension of its null space.
\begin{itemize}\itemsep=4em
\item A basis for $\Col (A)$ is given by the pivot columns of $A$. Therefore,
\[
\Rank (A) \eq \Dim \Col(A) \eq \sol[][15em]{\text{ Number of pivot columns of } A}
\]
\item A basis for $\Row(A)$ is given by the (transposed) nonzero rows of the \RREF of $A$. The number of such rows equals the number of pivot columns, so
\[
\Dim \Row (A) \eq \Dim \Col (A^T) \eq \sol[][15em]{\Rank (A) \eq \Rank (A^T)}
\] (Recall: $\Col (A^T) = \Row(A)$.)
\item A basis for $\Null (A)$ is given by the vectors used in the parametric representation of the solution to $A \vec{x} = \vec{0}$, so
\[
\mathrm{Nullity}(A) \eq \Dim \Null (A) \eq \sol[][15em]{\text{ number of free variables of } A}
\]
\end{itemize}
Because a column of $A$ is either a pivot column or not, we have the following important result.
If $A$ is an $m\times n$ matrix, then
\[
\Rank (A) + \mathrm{Nullity}(A) \eq \text{ number of columns of } A \eq n
.\]
Find the rank and nullity of $A = \mat{rrrrr} -3 & 6 & -1 & 1 & -7 \\ 1 & -2 & 2 & 3 & -1 \\ 2 & -4 & 5 & 8 & -4 \rix$, with \RREF $\mat{rrrrr} 1 & -2 & 0 & -1 & 3 \\ 0 & 0 & 1 & 2 & -2 \\ 0 & 0 & 0 & 0 & 0 \rix$.
\sol[3em]{Since columns 1 and 3 are pivot columns, the rank of $A$ is $2$.
Since columns 2, 4, and 5 are not pivot columns (i.e., $x_2, x_4, x_5$ are free variables), then the nullity of $A$ is $3$. Indeed, $2 + 3 = 5$ which is the number of columns of $A$.}
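The rank-nullity count for this matrix can be verified mechanically (a sketch assuming SymPy):

```python
from sympy import Matrix

A = Matrix([[-3,  6, -1, 1, -7],
            [ 1, -2,  2, 3, -1],
            [ 2, -4,  5, 8, -4]])

rank = A.rank()                 # number of pivot columns
nullity = len(A.nullspace())    # number of free variables

assert rank == 2
assert nullity == 3
assert rank + nullity == A.cols     # = 5, the number of columns
```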
Suppose $A$ is $5\times 8$ with rank $5$. Find $\mathrm{Nullity}(A)$, $\Dim \Row(A)$, $\Rank (A)$, and $\Rank (A^T)$.
\sol[6em]{
By the rank-nullity theorem, $5 + \mathrm{Nullity}(A) = 8$ and therefore the nullity is $3$. Since the rank is $5$, then $\Dim \Row(A)=5 = \Rank (A^T)$.}
Suppose $A$ is a $9\times 12$ matrix. What is the smallest possible nullity of $A$?
\sol[6em]{
Since $\mathrm{Nullity}(A) = 12 - \Rank (A)$, and the largest possible rank of $A$ is $9$, the smallest nullity is $12-9=3$.}
A homogeneous system of $50$ equations in $54$ variables has $4$ free variables. Does the corresponding nonhomogeneous system $A \vec{x} = \vec{b}$ have a solution for every $\vec{b}$?
\sol{$A$ is $50 \times 54$. There are $4$ free variables, so $\Rank (A) + 4 = 54$, giving $\Rank (A)=50$.
We would say that $A$ has “full row rank”, and therefore has a pivot in every row. By previous theorems, this means $A \vec{x} = \vec{b}$ has a solution for every $\vec{b} \in \R^{50}$.
}
With our knowledge of rank, column space, and null space, we can extend the Invertible Matrix Theorem from Section 2.3.
Let $A$ be an $n\times n$ matrix. Then the following are equivalent (i.e., either all statements are true or all are false).
\begin{enumerate}
\item $A$ is an invertible matrix.
\item $A$ is row equivalent to the $n\times n$ identity matrix $I_n$.
\item $A$ has $n$ pivot positions.
\item The equation $A \vec{x} = \vec{0}$ has only the trivial solution.
\item The columns of $A$ form a linearly independent set.
\item The equation $A \vec{x} = \vec{b}$ has a unique solution for all $\vec{b} \in \R^n $.
\item The columns of $A$ span $\R^n $.
\item There exists an $n\times n$ matrix $C$ such that $CA = I_n$.
\item There exists an $n\times n$ matrix $D$ such that $AD = I_n$.
\item $A^T$ is an invertible matrix.
\end{enumerate}
\begin{enumerate}[resume]
\item The columns of $A$ form a basis of $\R^n $.
\item $\Col (A) = \R^n $.
\item $\Rank (A) = n$.
\item $\mathrm{Nullity}(A) = 0$.
\item $\Null (A) = \cbr*{\vec{0}}$.
\end{enumerate}
\vspace{3em}
Warning! Just as before, the IMT applies only to square matrices.
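To close, a sketch (assuming SymPy; the two $2\times 2$ matrices are samples of our own choosing) checking several IMT equivalences at once:

```python
from sympy import Matrix, eye

A = Matrix([[1, 2],
            [3, 4]])                   # det = -2, invertible
B = Matrix([[1, 2],
            [2, 4]])                   # det = 0, singular

# Invertible: rank n, trivial null space, RREF = I, and an inverse exists.
assert A.rank() == 2
assert A.nullspace() == []
assert A.rref()[0] == eye(2)
assert A * A.inv() == eye(2)

# Singular: rank < n and a nontrivial null space.
assert B.rank() == 1
assert len(B.nullspace()) == 1
```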