MTH 215 — Intro to Linear Algebra
Section 4.1: Vector Spaces and Subspaces
Many concepts regarding vectors in $\R^n $ can be extended to other mathematical systems. In this chapter we discuss collections of objects that behave like vectors do in $\R^n$, which we call vector spaces.
A vector space is a nonempty set $\cV$ of vectors on which we define two operations, addition and scalar multiplication, subject to the following rules.
For all $\vec{u}, \vec{v}, \vec{w}$ in $\cV$ and all scalars $c,d$ in $\R$:
- $\vec{u} + \vec{v}$ is in $\cV$.
- $\vec{u} + \vec{v} = \vec{v} + \vec{u}$.
- $(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$.
- There exists $\vec{0}$ (called the zero vector) in $\cV$ such that $\vec{u} + \vec{0} = \vec{u}$.
- For each $\vec{u}$ in $\cV$, there exists $-\vec{u}$ in $\cV$ such that $\vec{u} + (-\vec{u}) = \vec{0}$.
- $c \vec{u}$ is in $\cV$.
- $c (\vec{u} + \vec{v}) = c \vec{u} + c \vec{v}$.
- $(c+d) \vec{u} = c \vec{u} + d \vec{u}$.
- $(cd) \vec{u} = c (d \vec{u})$.
- $1 \vec{u} = \vec{u}$.
One can easily show the following consequences of these axioms:
- The zero vector $\vec{0}$ is unique.
- For each $\vec{u}$, the vector $-\vec{u}$ in statement 5 is unique.
- $0 \vec{u} = \vec{0}$ for any $\vec{u}$.
- $-\vec{u} = (-1) \vec{u}$ for any $\vec{u}$.
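For instance, the third statement follows from the axioms alone. A short derivation (using only the distributive axiom and the existence of additive inverses): \[ 0 \vec{u} \eq (0+0) \vec{u} \eq 0 \vec{u} + 0 \vec{u} ,\] and adding $-(0 \vec{u})$ to both sides gives $\vec{0} = 0 \vec{u}$.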
Before looking at examples, note that $\R^n$ with vector addition and scalar multiplication (as defined in Section 1.3) is a vector space.
However, the “vectors” in a vector space can be many different kinds of objects: polynomials, continuous functions, matrices, sequences of numbers, operators, ...
If $p(t) = a_0 + a_1t + \dots + a_n t^n$ and $q(t) = b_0 + b_1t + \dots + b_n t^n$, then ...
Notice that a subset of these functions are those which are continuous on $[a,b]$: \[ C[a,b] \eq \cbr{f \in \cR[a,b] \:\mid\: f \text{ is continuous} } .\]
A subset of a vector space is not automatically a vector space in its own right; additional conditions are needed. A subset $\cH$ of a vector space $\cV$ is a subspace of $\cV$ if:
- The zero vector $\vec{0}$ of $\cV$ is in $\cH$.
- For each $\vec{u}$ and $\vec{v}$ in $\cH$, the sum $\vec{u} + \vec{v}$ is in $\cH$. ($\cH$ is closed under addition.)
- For each $\vec{u}$ in $\cH$ and each scalar $c \in \R $, the product $c \vec{u}$ is in $\cH$. ($\cH$ is closed under scalar multiplication.)
Note that $\cH$ is a vector space on its own.
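As a quick illustration (a simple example of our own), the set $\cH = \cbr{\mat{c} x \\ 0 \rix \:\mid\: x \in \R}$ is a subspace of $\R^2$: it contains $\vec{0}$ (take $x = 0$), it is closed under addition since \[ \mat{c} x_1 \\ 0 \rix + \mat{c} x_2 \\ 0 \rix \eq \mat{c} x_1 + x_2 \\ 0 \rix \in \cH ,\] and it is closed under scalar multiplication since $c \mat{c} x \\ 0 \rix = \mat{c} cx \\ 0 \rix \in \cH$.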
- Every vector space $\cV$ is a subspace of itself, and $\cbr{\vec{0}}$ is a subspace of every $\cV$.
- Recall the following sets:
\begin{align*} \cR[a,b] & \eq \cbr{f \:\mid\: f: [a,b] \to \R} \\ C[a,b] & \eq \cbr{f \in \cR[a,b] \:\mid\: f \text{ is continuous} } \\ \cP_n & \eq \cbr{p(t) = a_0 + \dots + a_n t^n \:\mid\: a_0, \dots , a_n \in \R }. \end{align*}
Then $C( \R )$ and $\cP_n$ are subspaces of $\cR(\R )$, since they contain the zero function $0(t)=0$, and sums and scalar multiples of continuous functions (polynomials) are again continuous functions (polynomials). Further, $\cP_n$ is a subspace of $C(\R )$.
- The zero vector $\vec{0} = \mat{c} 0 \\ 0 \\ 0 \rix$ is in $\cH$.
- The sum of any two vectors in $\cH$ remains in $\cH$: \[ \vec{u} + \vec{v} \eq .\]
- For any scalar $c \in \R $ and $\vec{u} \in \cH$, \[ c \vec{u} \eq .\]
Even though the last example was negative, it does remind us of the span of a set of vectors (in this case, only one). Under what conditions can the span of some vectors be a subspace?
Let $\vec{v}_1$ and $\vec{v}_2$ be in a vector space $\cV$, and let $\cH = \Span \cbr{\vec{v}_1, \vec{v}_2}$. Is $\cH$ a subspace of $\cV$?
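One can check the three subspace conditions directly. First, $\vec{0} = 0 \vec{v}_1 + 0 \vec{v}_2$ is in $\cH$. Next, if $\vec{u} = a_1 \vec{v}_1 + a_2 \vec{v}_2$ and $\vec{w} = b_1 \vec{v}_1 + b_2 \vec{v}_2$ are in $\cH$ and $c$ is any scalar, then \[ \vec{u} + \vec{w} \eq (a_1 + b_1) \vec{v}_1 + (a_2 + b_2) \vec{v}_2 \qquad \text{and} \qquad c \vec{u} \eq (c a_1) \vec{v}_1 + (c a_2) \vec{v}_2 \] are again in $\cH$. So $\cH$ is a subspace of $\cV$.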
We call $\Span \cbr{\vec{v}_1, \dots , \vec{v}_p}$ the subspace generated by or spanned by the vectors $\vec{v}_1, \dots , \vec{v}_p$.
Notice that our last example fails because the set was \[ \cS \eq \mat{c} 0 \\ 1 \rix + \Span \cbr{\mat{c} 1 \\ 1 \rix} \] which is a translate of $\Span \cbr{\mat{c} 1 \\ 1 \rix}$, not the span itself; in particular, $\vec{0}$ is not in $\cS$.
Recall that $\cM_{2\times 2} $ is the vector space of real-valued $2\times 2$ matrices.
Is $\cH = \cbr{\mat{cc} 2a & b \\ 3a+b & 3b \rix \:\mid\: a,b \in \R }$ a subspace of $\cM_{2\times 2} $?
Any matrix in $\cH$ can be written as \[ \mat{cc} 2a & b \\ 3a+b & 3b \rix \eq \mat{cc} 2a & 0 \\ 3a & 0 \rix + \mat{cc} 0 & b \\ b & 3b \rix \eq a \mat{rr} 2 & 0 \\ 3 & 0 \rix + b \mat{cc} 0 & 1 \\ 1 & 3 \rix .\]
Therefore, we have \[ \cH \eq \Span \cbr{ \mat{rr} 2 & 0 \\ 3 & 0 \rix, \mat{cc} 0 & 1 \\ 1 & 3 \rix } \] which makes $\cH$ a subspace of $\cM_{2\times 2} $.
Recall that $C[a,b] = \cbr{f: [a,b] \to \R \:\mid\: f \text{ is continuous} } $.
Determine if $\cH = \cbr{f \in C[a,b] \:\mid\: f(a) = f(b)} $ is a subspace of $C[a,b]$.
- The zero function is clearly in $\cH$.
- Let $f$ and $g$ be in $\cH$. Then $(f+g)(a) = f(a) + g(a) = f(b) + g(b) = (f+g)(b)$, so $f + g \in \cH$.
- Let $c \in \R $. Then $(cf)(a) = c f(a) = c f(b) = (cf)(b)$, so $c f \in \cH$.
Section 4.2: Null, Column, Row Spaces
In this section we investigate three important subspaces associated to a matrix.
- Clearly, $\vec{0}$ is in $\Null (A)$.
- Let $\vec{u}, \vec{v}$ be in $\Null (A)$. Then $A \vec{u} = \vec{0}$ and $A \vec{v} = \vec{0}$, so \[ A (\vec{u} + \vec{v}) \eq A \vec{u} + A \vec{v} \eq \vec{0} .\] So $\vec{u} + \vec{v} \in \Null (A)$.
- Let $\vec{u}$ be in $\Null (A)$ and $c$ be any scalar. Then \[ A (c \vec{u}) \eq c \cdot A \vec{u} \eq c \cdot \vec{0} \eq \vec{0} .\] So $c \vec{u} \in \Null (A)$.
Find a spanning set for the null space of $A = \mat{rrrrr} -3 & 6 & -1 & 1 & -7 \\ 1 & -2 & 2 & 3 & -1 \\ 2 & -4 & 5 & 8 & -4 \rix$
Find the solution of $A \vec{x} = \vec{0}$ in terms of free variables by row reducing $\mat{c|c} A & \vec{0} \rix$: \[ \mat{rrrrr|c} -3 & 6 & -1 & 1 & -7 & 0 \\ 1 & -2 & 2 & 3 & -1 & 0\\ 2 & -4 & 5 & 8 & -4 & 0 \rix \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad \mat{rrrrr|c} 1 & -2 & 0 & -1 & 3 & 0 \\ 0 & 0 & 1 & 2 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \rix \]
Since $x_2, x_4, x_5$ are free variables, the solution is \begin{align*} \mat{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \rix & \eq \mat{c} 2x_2 + x_4 - 3x_5 \\ x_2 \\ -2x_4 + 2x_5 \\ x_4 \\ x_5 \rix \\ & \eq x_2 \mat{c} 2 \\ 1 \\ 0 \\ 0 \\ 0 \rix + x_4 \mat{r} 1 \\ 0 \\ -2 \\ 1 \\ 0 \rix + x_5 \mat{r} -3 \\ 0 \\ 2 \\ 0 \\ 1 \rix \\ & \eq x_2 \vec{u} + x_4 \vec{v} + x_5 \vec{w} . \end{align*}
Notice the following observations:
- The spanning set $\cbr{\vec{u}, \vec{v}, \vec{w}}$ in the last example is a linearly independent set. \[ x_2 \mat{c} 2 \\ 1 \\ 0 \\ 0 \\ 0 \rix + x_4 \mat{r} 1 \\ 0 \\ -2 \\ 1 \\ 0 \rix + x_5 \mat{r} -3 \\ 0 \\ 2 \\ 0 \\ 1 \rix = \mat{c} 0 \\ 0 \\ 0 \\ 0 \\ 0 \rix \qquad \implies \qquad x_2 = x_4 = x_5 = 0 .\]
- When $\Null (A)$ is nontrivial (i.e., contains at least one nonzero vector), then the number of vectors in the spanning set for $\Null (A)$ is the number of free variables in the equation $A \vec{x} = \vec{0}$. When $\Null (A) = \cbr{\vec{0}}$, what can be said?
For any $m\times n$ matrix $A$:
- $\Null (A)$ is a subspace of $\R^n $.
- $\Null (A)$ is the set of all vectors $\vec{x} \in \R^n $ such that $A \vec{x} = \vec{0} \in \R^m$.
- To find an explicit description of $\Null (A)$ (i.e., a spanning set), perform row operations on $\mat{c|c} A & \vec{0} \rix$.
- $\Null (A) = \cbr{\vec{0}}$ if and only if $A \vec{x} = \vec{0}$ has only the trivial solution.
A vector in $\Col (A)$ can be written as $A \vec{x}$ for some $\vec{x}$ in $\R^n$ (Why?). Therefore, \[ \Col (A) \eq \cbr{\vec{b} \in \R^m \:\mid\: \vec{b} = A \vec{x} \quad \text{for some } \vec{x} \in \R^n} \] is an alternative way to describe the column space.
Find $A$ so that $\Col (A) = \cW = \cbr{\mat{c} x-2y \\ 3y \\ x+y \rix \:\mid\: x,y \in \R }$.
Notice that \[ \mat{c} x-2y \\ 3y \\ x + y \rix \eq x \mat{c} 1 \\ 0 \\ 1 \rix + y \mat{r} -2 \\ 3 \\ 1 \rix .\]
Therefore, $\cW$ is given by $\Col (A)$ for \[ A \eq \mat{rr} 1 & -2 \\ 0 & 3 \\ 1 & 1 \rix .\]
Observation: The range of the linear transformation $T(\vec{x}) = A \vec{x}$ from $\R^2$ to $\R^3$ is $\cW$. So, we also call the column space the range of the matrix.
Recall from Section 1.4 that the columns of an $m\times n$ matrix $A$ span $\R^m$ if and only if $A \vec{x} = \vec{b}$ has a solution for every $\vec{b} \in \R^m $. Hence, the following result.
Other equivalent conditions to $\Col (A) = \R^m $:
- $A$ has a pivot position in every row.
- Every $\vec{b} \in \R^m $ is a linear combination of the columns of $A$.
Null Space:
- $\Null (A)$ is a subspace of $\R^n $.
- $\Null (A)$ is the set of all $\vec{x} \in \R^n $ such that $A \vec{x} = \vec{0} \in \R^m$.
- To find a spanning set, perform row operations on $\mat{c|c} A & \vec{0} \rix$.
- $\Null (A) = \cbr{\vec{0}}$ if and only if $A \vec{x} = \vec{0}$ has only the trivial solution.
Column Space:
- $\Col (A)$ is a subspace of $\R^m $.
- $\Col (A)$ is the set of all $\vec{b} \in \R^m $ such that $\vec{b} = A \vec{x}$ for some $\vec{x} \in \R^n $.
- To find a spanning set, list the columns of $A$.
- $\Col (A) = \R^m $ if and only if $A \vec{x} = \vec{b}$ has a solution for all $\vec{b} \in \R^m $.
\[ \text{If } A = \mat{c} \vec{r}_1^T \\ \vdots \\ \vec{r}_m^T \rix, \qquad \Row(A) \eq \Span \cbr{\vec{r}_1, \dots , \vec{r}_m} .\]
\begin{align*} \Row(A) & \eq \Span \cbr{\mat{r} -1 \\ 2 \\ 3 \\ 6 \rix, \mat{r} 2 \\ -5 \\ -6 \\ -12 \rix, \mat{r} 1 \\ -3 \\ -3 \\ -6 \rix } \quad & \subseteq \R^4 \\ \Col (A) & \eq \Span \cbr{\mat{r} -1 \\ 2 \\ 1 \rix, \mat{r} 2 \\ -5 \\ -3 \rix, \mat{r} 3 \\ -6 \\ -3 \rix, \mat{r} 6 \\ -12 \\ -6 \rix } \quad & \subseteq \R^3 \\ A^T & \eq \mat{rrr} -1 & 2 & 1 \\ 2 & -5 & -3 \\ 3 & -6 & -3 \\ 6 & -12 & -6 \rix \\ \Col(A^T) & \eq \Span \cbr{\mat{r} -1 \\ 2 \\ 3 \\ 6 \rix, \mat{r} 2 \\ -5 \\ -6 \\ -12 \rix, \mat{r} 1 \\ -3 \\ -3 \\ -6 \rix } \quad & \subseteq \R^4 \end{align*} This reveals that:
- $\Col (A)$ is a subspace of what?
- $\Row (A)$ is a subspace of what?
- $\Null (A)$ is a subspace of what?
- Find a nonzero vector in $\Col (A)$.
- Find a nonzero vector in $\Row(A)$.
- Find a nonzero vector in $\Null (A)$.
- $\cV = \cbr{\mat{c} x \\ y \\ z \rix \:\mid\: x-y=0, \; y+z=0}$
- $\cM = \cbr{\mat{c} c-6d \\ d \\ c \rix \:\mid\: c,d \in \R } $
Section 4.3: Linearly Independent Sets, Bases
Here we answer: which subsets of vectors span a vector space as “efficiently” as possible?
Recall that a set of vectors $\cbr{\vec{v}_1, \dots , \vec{v}_p}$ in a vector space $\cV$ is linearly independent if \[ c_1 \vec{v}_1 + \dots + c_p \vec{v}_p \eq \vec{0} \] has only the trivial solution $c_1 = 0, \dots, c_p=0$.
The following theorem from Section 1.7 applies to a general vector space.
- $\cbr{p_1, p_2, p_3}$ in $\cP_2$ where $p_1(t)=t, p_2(t)=t^2, p_3(t)=4t+2t^2$.
- $\cbr{p_1, p_2, p_3}$ in $\cP_3$, where $p_1(t) = (t-1)$, $p_2(t) = (t-1)(t-2)$, and $p_3(t) = (t-1)(t-2)(t-3)$. Notice that \begin{align*} c_1 p_1 + c_2 p_2 + c_3 p_3 & \eq 0 \\ c_1 (t-1) + c_2 (t-1)(t-2) + c_3 (t-1)(t-2)(t-3) & \eq 0 \\ (-c_1+2c_2-6c_3) + t(c_1-3c_2+11c_3) + t^2(c_2-6c_3) + t^3(c_3) & \eq 0 . \end{align*} Matching coefficients gives $c_3 = 0$ (from $t^3$), then $c_2 = 0$ (from $t^2$), then $c_1 = 0$ (from $t$), so the set is linearly independent.
In Homework 2, you considered the matrix \[ A \eq \mat{rrrrr} 8 & 11 & -6 & -7 & 13 \\ -7 & -8 & 5 & 6 & -9 \\ 11 & 7 & -7 & -9 & -6 \\ -3 & 4 & 1 & 8 & 7 \rix \qquad \xrightarrow{\RREF} \quad \mat{ccccc} 1 & 0 & -7/13 & 0 & 0 \\ 0 & 1 & -2/13 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \rix .\]
The columns of $A$ span $\R^4$ (pivot position in every row) but are not linearly independent. Indeed, $\vec{a}_3 = \textcolor{red}{-\frac{7}{13}\vec{a}_1 - \frac{2}{13} \vec{a}_2}$. This tells us that \begin{align*} \Span \cbr{\vec{a}_1, \vec{a}_2, \vec{a}_3, \vec{a}_4, \vec{a}_5} & \eq \Span \cbr{\vec{a}_1, \vec{a}_2, \textcolor{red}{-\frac{7}{13}\vec{a}_1 - \frac{2}{13} \vec{a}_2}, \vec{a}_4, \vec{a}_5} \\ & \eq \Span \cbr{\vec{a}_1, \vec{a}_2, \vec{a}_4, \vec{a}_5 }. \end{align*} Why? Let's prove a simpler version (c.f. homework 2!).
First, let $\vec{x}$ be in $\Span \cH$. Then for some $c_1, c_2, c_3$, \[ \vec{x} \eq c_1 \vec{v}_1 + c_2 \vec{v}_2 + c_3 \pbr{\vec{v}_1 + \vec{v}_2} \eq (c_1 + c_3) \vec{v}_1 + (c_2 + c_3) \vec{v}_2 .\] Therefore, $\vec{x}$ is in $\Span \cbr{\vec{v}_1, \vec{v}_2}$.
On the other hand, let $\vec{y}$ be in $\Span \cbr{\vec{v}_1, \vec{v}_2}$. Then for some $c_1, c_2$, \[ \vec{y} \eq c_1 \vec{v}_1 + c_2 \vec{v}_2 \eq c_1 \vec{v}_1 + c_2 \vec{v}_2 + 0\cdot \vec{v}_3 .\] Therefore, $\vec{y}$ is in $\Span \cH$. So $\cH$ and $\cbr{\vec{v}_1, \vec{v}_2}$ span the same space.
So columns 1, 2, 4, and 5 of $A$ span $\R^4$. This is a “more efficient” spanning set for $\R^4$ compared to all the columns of $A$.
This gives us the notion of a basis set—an “efficient” spanning set in that it does not contain unnecessary vectors.
- $\cB$ is a linearly independent set, and
- $\cH = \Span (\cB)$.
Clearly, $\cB$ spans $\cP_n$ because any polynomial in $\cP_n$ can be written as \[ c_0 \cdot 1 + c_1 t + \dots + c_n t^n \] for some choice of coefficients $c_0, \dots , c_n$. And $\cB$ is linearly independent because \[ c_0 \cdot 1 + c_1 t + \dots + c_n t^n \eq 0 \] implies that all coefficients $c_0, \dots , c_n$ are $0$ (by simply matching coefficients).
The following (very important!) theorem generalizes this observation.
- Suppose one of the vectors in $\cS$, say $\vec{v}_k$, is a linear combination of the other vectors in $\cS$. Then $\cS$ without this vector $\vec{v}_k$ still spans $\cH$.
- If $\cH \neq \cbr{\vec{0}}$, then some subset of $\cS$ is a basis for $\cH$.
That is, by removing a vector that is a linear combination of the others, you obtain a new set which still spans the same space.
This theorem tells us that we can construct a basis for a vector space $\cV$ by starting with a spanning set of $\cV$ and then pruning it down to a linearly independent set.
In this view, a basis is a spanning set that is as small as possible. If you remove any further vector from a spanning set that is already linearly independent, the deleted vector is not a linear combination of the remaining ones; hence the smaller set no longer spans the space, and thus is no longer a basis.
Alternatively, a basis is a linearly independent set that is as large as possible. If $\cS$ is linearly independent and spans $\cV$, and then you add another vector from $\cV$ into $\cS$, the enlarged set becomes linearly dependent and therefore not a basis.
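As a small illustration of both views (our own example): in $\R^2$, the set $\cbr{\vec{e}_1, \vec{e}_2}$ is a basis. Removing either vector leaves a set that no longer spans $\R^2$, while adding any third vector $\vec{v} = a \vec{e}_1 + b \vec{e}_2$ creates the nontrivial dependence relation \[ a \vec{e}_1 + b \vec{e}_2 - \vec{v} \eq \vec{0} .\]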
Given a matrix, what is a basis for its nullspace?
For illustration, recall the earlier null-space example: \[ A=\mat{rrrrr|c} -3 & 6 & -1 & 1 & -7 & 0 \\ 1 & -2 & 2 & 3 & -1 & 0\\ 2 & -4 & 5 & 8 & -4 & 0 \rix \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad \RREF(A)= \mat{rrrrr|c} 1 & -2 & 0 & -1 & 3 & 0 \\ 0 & 0 & 1 & 2 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \rix \]
Since $x_2, x_4, x_5$ are free variables, the solution is \begin{align*} \mat{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \rix & \eq x_2 \mat{c} 2 \\ 1 \\ 0 \\ 0 \\ 0 \rix + x_4 \mat{r} 1 \\ 0 \\ -2 \\ 1 \\ 0 \rix + x_5 \mat{r} -3 \\ 0 \\ 2 \\ 0 \\ 1 \rix \eq x_2 \vec{u} + x_4 \vec{v} + x_5 \vec{w} . \end{align*} Therefore, we have \[ \Null (A) \eq \Span \cbr{\vec{u}, \vec{v}, \vec{w}} .\]
What is a basis for the column space? Observe that \begin{align*} \vec{a}_2 & = -2 \vec{a}_1 & \quad \vec{b}_2 &= -2 \vec{b}_1 \\ \vec{a}_4 & = -\vec{a}_1 + 2 \vec{a}_3 & \quad \vec{b}_4 & = - \vec{b}_1 + 2 \vec{b}_3 \\ \vec{a}_5 & = 3 \vec{a}_1 -2 \vec{a}_3 & \quad \vec{b}_5 & = 3 \vec{b}_1 -2 \vec{b}_3 .\end{align*} Elementary row operations do not affect linear dependence relations among the columns!
Since all the columns of $A$ form a spanning set for $\Col(A)$, and columns 2, 4, and 5 are linear combinations of columns 1 and 3, the Spanning Set Theorem gives \[ \Span \cbr{\vec{a}_1, \vec{a}_2, \vec{a}_3, \vec{a}_4, \vec{a}_5} \eq \Span \cbr{\vec{a}_1, \vec{a}_3} .\]
In summary, we have the following.
What about a basis for the row space?
Continuing with the same matrix: \[ A=\mat{rrrrr} -3 & 6 & -1 & 1 & -7 \\ 1 & -2 & 2 & 3 & -1 \\ 2 & -4 & 5 & 8 & -4 \rix \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad B= \mat{rrrrr} 1 & -2 & 0 & -1 & 3 \\ 0 & 0 & 1 & 2 & -2 \\ 0 & 0 & 0 & 0 & 0 \rix .\] A basis for the row space of $A$ is \[ \cbr{\mat{r} 1 \\ -2 \\ 0 \\ -1 \\ 3 \rix, \mat{r} 0 \\ 0 \\ 1 \\ 2 \\ -2 \rix} .\]
- A basis for $\Null (A)$ consists of the vectors used in the parametric form of the solution to $A \vec{x} = \vec{0}$.
- A basis for $\Col (A)$ consists of the pivot columns of $A$.
- A basis for $\Row(A)$ consists of the transposes of the nonzero rows in the $\RREF$.
Solving $A \vec{x} = \vec{0}$ yields \[ \mat{c} x_1 \\ x_2 \\ x_3 \rix \eq \] Therefore, a basis for $\Null (A)$ is
A basis for $\Col (A)$ is
A basis for $\Row(A)$ is
Section 4.5: Dimension of a Vector Space
How “large” is a vector space? What is a good measure to quantify its size?
We will answer this after two insightful results.
Why? Think about $\R^n $. Clearly a set of $n+1$ vectors from $\R^n $ must be linearly dependent! A proof for a general vector space requires more tools than we currently have.
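To see the claim in $\R^n$: place the $n+1$ vectors as the columns of a matrix, \[ A \eq \mat{cccc} \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_{n+1} \rix , \] which is $n \times (n+1)$. Then $A$ has at most $n$ pivot positions but $n+1$ columns, so $A \vec{x} = \vec{0}$ has a free variable and hence a nontrivial solution, i.e., a dependence relation among $\vec{v}_1, \dots , \vec{v}_{n+1}$.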
Put another way: If a vector space $\cV$ has a basis of $n$ vectors, then every linearly independent subset of $\cV$ has no more than $n$ vectors.
- If $\cB_2$ has exactly $n$ vectors, we are done.
- If $\cB_2$ has fewer than $n$ vectors, then applying the previous theorem to the spanning set $\cB_2$ implies that $\cB_1$ must be linearly dependent (since $\cB_1$ has more vectors than $\cB_2$), which is a contradiction.
Therefore, $\cB_1$ and $\cB_2$ have the same number of vectors $n$.
This result means that the number of elements in any basis of a vector space is always the same. Even though a vector space can have many bases, they must all have the same number of vectors!
This allows us to quantify the “size” of the vector space.
- The dimension of $\cV = \cbr{\vec{0}}$ is defined to be $0$.
- If $\cV$ is not spanned by a finite set, we say $\cV$ is infinite dimensional.
- $\R^2$ has dimension $2$.
- $\R^n $ has dimension $n$.
- $\cP_2$ has dimension $3$ (basis $\cbr{1, t, t^2}$).
- $\cP_n$ has dimension $n+1$.
- $\cM_{2\times 2} $ has dimension $4$.
- $\cP$ (polynomials of all degrees) is infinite dimensional.
\begin{tabular}{l|l|l}
Start & Action & Result \\ \hline
Spanning set & Remove linearly dependent vectors & Create a basis \\
Spanning set & Add vectors & Cannot create a basis \\
Linearly independent set & Remove vectors & Cannot create a basis \\
Linearly independent set & Add vectors & Create a basis
\end{tabular}
- What is the dimension of $\cH$?
- Is $\cH$ a basis for $\R^3$? If not, create one using $\vec{v}_1 $ and $\vec{v}_2$.
If you know the dimension of a vector space (say, $p$), you can find a basis by:
- Finding a set of $p$ vectors, and
- Checking if the set is linearly independent OR checking if the set spans the space.
Namely, if you have the right number of vectors you do NOT have to check both!
- Any set of $p$ linearly independent vectors is a basis for $\cV$.
- Any spanning set of $p$ vectors is a basis for $\cV$.
We know $\cP_2$ has dimension $3$, which is the number of elements in this set. Therefore, we can check (via The Basis Theorem) if the set is either 1) linearly independent, or 2) a spanning set.
For linear independence, form the homogeneous equation \begin{align*} c_1 \cdot t + c_2 \cdot (1-t) + c_3 \cdot (1+t-t^2) & \eq 0 \\ (c_2 + c_3) \cdot 1 + (c_1-c_2+c_3)\cdot t + (-c_3) \cdot t^2 &\eq 0 .\end{align*} Matching coefficients, the only solution is $c_3=0$ (from $t^2$), then $c_2=0$ (from $1$), and then $c_1=0$ (from $t$). Therefore, the set is linearly independent; so it is a basis.
Showing directly that the set spans $\cP_2$ would take much more work!
Recall how we found bases for $\Null (A), \Col (A), \Row(A)$:
- A basis for $\Null (A)$ consists of the vectors used in the parametric form of the solution to $A \vec{x} = \vec{0}$.
- A basis for $\Col (A)$ consists of the pivot columns of $A$.
- A basis for $\Row(A)$ consists of the transposes of the nonzero rows in $\RREF(A)$.
The dimensions of these spaces are very important!
- A basis for $\Col (A)$ consists of the pivot columns of $A$. Therefore, \[ \Rank (A) \eq \Dim \Col(A) \eq \text{the number of pivot columns of } A .\]
- A basis for $\Row(A)$ consists of the pivot rows (transposed) from the $\RREF$ of $A$. The number of pivot rows is equal to the number of pivot columns, so \[ \Dim \Row (A) \eq \Dim \Col (A^T) \eq \Rank (A) .\] (Recall: $\Col (A^T) = \Row(A)$.)
- A basis for $\Null (A)$ consists of the vectors used in the parametric representation of the solution to $A \vec{x} = \vec{0}$, so \[ \mathrm{Nullity}(A) \eq \Dim \Null (A) \eq \text{the number of free variables in } A \vec{x} = \vec{0} .\]
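As a consistency check (using the $3 \times 5$ matrix from the null-space example earlier): its $\RREF$ has pivots in columns 1 and 3 and free variables $x_2, x_4, x_5$, so \[ \Rank (A) \eq 2, \qquad \mathrm{Nullity}(A) \eq 3, \qquad \Rank (A) + \mathrm{Nullity}(A) \eq 5 \eq \text{the number of columns of } A .\]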
With our knowledge of rank, column space, and null space, we can extend the Invertible Matrix Theorem from Section 2.3.
- $A$ is an invertible matrix.
- $A$ is row equivalent to the $n\times n$ identity matrix $I_n$.
- $A$ has $n$ pivot positions.
- The equation $A \vec{x} = \vec{0}$ has only the trivial solution.
- The columns of $A$ form a linearly independent set.
- The equation $A \vec{x} = \vec{b}$ has a unique solution for all $\vec{b} \in \R^n $.
- The columns of $A$ span $\R^n $.
- There exists an $n\times n$ matrix $C$ such that $CA = I_n$.
- There exists an $n\times n$ matrix $D$ such that $AD = I_n$.
- $A^T$ is an invertible matrix.
- The columns of $A$ form a basis of $\R^n $.
- $\Col (A) = \R^n $.
- $\Rank (A) = n$.
- $\mathrm{Nullity}(A) = 0$.
- $\Null (A) = \cbr{\vec{0}}$.
Basis for $\Row(A) = \Col (A^T)$: $\cbr{\mat{c} 1 \\ 1 \rix}$
Basis for $\Col (A)$: $\cbr{\mat{c} 1 \\ 2 \rix} $
Basis for $\Null (A)$: $\cbr{\mat{r} 1 \\ -1 \rix} $
Basis for $\Null (A^T)$: $\cbr{\mat{r} 2 \\ -1 \rix}$