MTH 215 Chapter 1

MTH 215 — Intro to Linear Algebra

Kyle Monette
Spring 2026

Section 1.1: Systems of Linear Equations

Our goal: Solve a collection of equations, such as:

\begin{align*} x_1 - 2x_2 & = -1 && (L_1) \\ -x_1 + 3x_2 & = 3 && (L_2) \end{align*}

Two lines that intersect at the ordered pair (3,2)

\[ \begin{array}{crcc} & x_1 - 2x_2 & = & -1 \\ + & \\ & -x_1 + 3x_2 & = & 3 \\[1em]\hline & \\ \end{array} \]

The solution is a pair of numbers $(x_1,x_2)$ satisfying both equations.

A linear equation in the variables $x_1, \dots , x_n$ is an equation of the form \[ a_1 x_1 + a_2 x_2 + \dots + a_n x_n \eq b, \qquad a_i \in \R ,\; b\in \R , \; n\ge 1 .\] A system of linear equations (linear system) is a collection of linear equations in the same variables.
The solution set of a linear system is the set of all its solutions.

Nonlinear equations like $4x_1-6x_2=x_1x_2$ or $x_2 = 2 \sqrt{x_1} + 5$ are not our focus!

For example, the solution set of the system above contains only the pair $x_1=3, x_2 = 2$. Can solution sets be larger? Can no solutions exist? Infinitely many?

From our knowledge of linear equations, what are other possibilities for the solution set?

Consider slight variations of the linear system above:

\begin{align*} x_1 - 2x_2 & = -1 && (L_1) \\ -x_1+2x_2 & = 3 && (L_2) \end{align*}

\begin{align*} x_1 - 2x_2 & = -1 && (L_1) \\ -x_1+2x_2 & = 1 && (L_2) \end{align*}

Two parallel lines
Two lines that coincide through (3,2)
A linear system can have either:
  1. One solution.
  2. Infinitely many solutions.
  3. No solutions.
The system is called consistent if one or infinitely many solutions exist; inconsistent if no solutions exist.

A major goal of this class is to classify a linear system as consistent or inconsistent and, when it is consistent, to say how many solutions it has.

The two-variable systems above are easy—what about more variables?

\[ \begin{array}{rcrcrcr} x_1 &-& 2x_2 &+& x_3 &=& 0 \\ && x_2 &-& 4x_3 &=& 4 \\ && && x_3 &=& -1 \end{array} \]
Working from the bottom upwards: \[ \begin{array}{ccclcl} x_3 &= -1 & \qquad\qquad\qquad x_2 & = & \qquad\qquad\qquad\qquad\qquad x_1 & = \\[1em] & & &= \quad & & = \\[1em] & & &= \quad & & = \end{array} \]
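For reference, a worked back substitution to check against what you fill in above:
\[ x_3 = -1, \qquad x_2 = 4 + 4x_3 = 4 + 4(-1) = 0, \qquad x_1 = 2x_2 - x_3 = 0 - (-1) = 1 ,\]
so the solution is $(x_1, x_2, x_3) = (1, 0, -1)$.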

Two linear systems are called equivalent if their solution sets are equal.

Therefore, we'll convert “hard to solve” systems into equivalent but “easy to solve” systems.

To do so, it's helpful to store only the essential information of a system. The coefficients of the variables are stored in a coefficient matrix, and the known values (often appearing on the right-hand side) form the last column in the augmented matrix.

For example, in this linear system, we have the following matrices: \[ \begin{array}{rcrcrcr} x_1 &-& 2x_2 &+& x_3 &=& 0 \\ && 2x_2 &-& 8x_3 &=& 8 \\ 5x_1 && &-& 5x_3 &=& 10 \end{array} \]

The size of a matrix is: (number of rows) $\times $ (number of columns). For example:

$\mat{crr} 1 & -2 & 1 \\ 0 & 2 & -8 \\ 5 & 0 & -5 \rix$ $ \mat{crr|c} 1 & -2 & 1 & 0 \\ 0 & 2 & -8 & 8 \\ 5 & 0 & -5 & 10 \rix$ $\mat{rr} 1 & 0 \\ -5 & 8 \rix$ $\mat{rrr|r} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \rix$

Therefore, a system of $m$ equations with $n$ unknowns is associated to:

  1. a coefficient matrix of size:
  2. an augmented matrix of size:

For example, this coefficient matrix is $3\times 3$, the augmented matrix is $3\times 4$. In general, an $m \times n$ matrix has $m$ rows and $n$ columns. Order matters!

Solving a linear system: To solve a linear system, create the augmented matrix. Our goal is to convert this matrix into one representing an equivalent system (so, not changing the solution set) that is easier to solve.

Elementary Row Operations:
  1. (Replacement) Replace one row by the sum of that row and a multiple of another.
    That is, add to one row a multiple of another row.
  2. (Interchange) Interchange two rows.
  3. (Scaling) Multiply a row by a nonzero constant.
Any combination of elementary row operations applied to any augmented matrix creates a new augmented matrix that is row equivalent to the original, thus preserving the solution set.
Solve the system of linear equations by forming the augmented matrix and performing row operations. \[ \begin{array}{rcrcr} 4 x_1 &+& 6x_2 &=& -12 \\ -2 x_1 &+& x_2 &=& -10 \end{array} \] \begin{align*} \mat{rr|r} 4 & 6 & -12 \\ -2 & 1 & -10 \rix & \qquad R_2 \leftarrow & \qquad & \hspace{24em} \\[4em] & \qquad R_2 \leftarrow & \qquad & \\[4em] & \qquad R_1 \leftarrow & \qquad & \\[4em] & \qquad R_1 \leftarrow & \qquad & \\[3em] \end{align*} The solution is then $(x_1, x_2) = ?$.
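For reference, here is one possible sequence of row operations (yours may differ, but the solution should agree):
\begin{align*} \mat{rr|r} 4 & 6 & -12 \\ -2 & 1 & -10 \rix & \qquad R_2 \leftarrow R_2 + \frac{1}{2} R_1 & \qquad & \mat{rr|r} 4 & 6 & -12 \\ 0 & 4 & -16 \rix \\ & \qquad R_2 \leftarrow \frac{1}{4} R_2 & \qquad & \mat{rr|r} 4 & 6 & -12 \\ 0 & 1 & -4 \rix \\ & \qquad R_1 \leftarrow R_1 - 6 R_2 & \qquad & \mat{rr|r} 4 & 0 & 12 \\ 0 & 1 & -4 \rix \\ & \qquad R_1 \leftarrow \frac{1}{4} R_1 & \qquad & \mat{rr|r} 1 & 0 & 3 \\ 0 & 1 & -4 \rix \end{align*}
giving $(x_1, x_2) = (3, -4)$.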
Solve the system of linear equations. \[ \begin{array}{ccccccc} & & x_2 &-& 4x_3 &=& 8 \\ 2x_1 &-& 3x_2 &+& 2x_3 &=& 1 \\ 4x_1 &-& 8x_2 &+& 12x_3 &=& 1 \end{array} \] Start by forming the augmented matrix, and performing row operations: \begin{align*} \mat{rrr|r} 0 & 1 & -4 & 8 \\ 2 & -3 & 2 & 1 \\ 4 & -8 & 12 & 1 \rix & \qquad R_1 \leftrightarrow R_2 & \qquad & \hspace{20em} \\[6em] & \qquad R_3 \leftarrow R_3 - 2\cdot R_1 & \qquad & \\[6em] & \qquad R_3 \leftarrow R_3 + 2\cdot R_2 & \qquad & \\[4em] \end{align*} Is the system consistent?
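For reference, the matrices produced by the row operations listed above:
\begin{align*} \mat{rrr|r} 0 & 1 & -4 & 8 \\ 2 & -3 & 2 & 1 \\ 4 & -8 & 12 & 1 \rix & \qquad R_1 \leftrightarrow R_2 & \qquad & \mat{rrr|r} 2 & -3 & 2 & 1 \\ 0 & 1 & -4 & 8 \\ 4 & -8 & 12 & 1 \rix \\ & \qquad R_3 \leftarrow R_3 - 2\cdot R_1 & \qquad & \mat{rrr|r} 2 & -3 & 2 & 1 \\ 0 & 1 & -4 & 8 \\ 0 & -2 & 8 & -1 \rix \\ & \qquad R_3 \leftarrow R_3 + 2\cdot R_2 & \qquad & \mat{rrr|r} 2 & -3 & 2 & 1 \\ 0 & 1 & -4 & 8 \\ 0 & 0 & 0 & 15 \rix \end{align*}
The last row corresponds to the equation $0 = 15$, which has no solution, so the system is inconsistent.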
Solve the linear system whose augmented matrix is $\mat{crr|c} 1 & -2 & 1 & 0 \\ 0 & 2 & -8 & 8 \\ 5 & 0 & -5 & 10 \rix$. \begin{align*} \mat{crr|c} 1 & -2 & 1 & 0 \\ 0 & 2 & -8 & 8 \\ 5 & 0 & -5 & 10 \rix & \qquad R_3 \leftarrow R_3 - 5\cdot R_1 & \qquad & \mat{crr|c} 1 & -2 & 1 & 0 \\ 0 & 2 & -8 & 8 \\ 0 & 10 & -10 & 10 \rix \\ & \qquad R_2 \leftarrow \frac{1}{2} \cdot R_2 & \qquad & \mat{crr|c} 1 & -2 & 1 & 0 \\ 0 & 1 & -4 & 4 \\ 0 & 10 & -10 & 10 \rix \\ & \qquad R_3 \leftarrow R_3 - 10\cdot R_2 & \qquad & \mat{crr|r} 1 & -2 & 1 & 0 \\ 0 & 1 & -4 & 4 \\ 0 & 0 & 30 & -30 \rix \\ & \qquad R_3 \leftarrow \frac{1}{30} \cdot R_3 & \qquad & \mat{crr|r} 1 & -2 & 1 & 0 \\ 0 & 1 & -4 & 4 \\ 0 & 0 & 1 & -1 \rix \\ & \qquad R_2 \leftarrow R_2 + 4\cdot R_3 & \qquad & \mat{crr|r} 1 & -2 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -1 \rix \\ & \qquad R_1 \leftarrow R_1 -R_3 & \qquad & \mat{crr|r} 1 & -2 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -1 \rix \\ & \qquad R_1 \leftarrow R_1 +2\cdot R_2 & \qquad & \mat{crr|r} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -1 \rix \end{align*} The solution is then $(x_1, x_2, x_3) = ?$.
For what values of $h$ is the system consistent? For those values of $h$, how many solutions are there? \[ \begin{array}{rcccc} 3x_1 &-& 9x_2 &=& 4 \\ -2x_1 &+& 6x_2 &=& h \end{array} .\] Start by row reducing the augmented matrix to a triangular form:
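For reference, a worked check: applying $R_2 \leftarrow R_2 + \frac{2}{3} R_1$ gives
\[ \mat{rr|c} 3 & -9 & 4 \\ -2 & 6 & h \rix \quad \longrightarrow \quad \mat{rr|c} 3 & -9 & 4 \\ 0 & 0 & h + \frac{8}{3} \rix ,\]
so the system is consistent if and only if $h = -\frac{8}{3}$, in which case $x_2$ is a free variable and there are infinitely many solutions.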

Section 1.2: Row Reduction and Echelon Forms

In past examples, we performed row operations and obtained “special” matrices \[ \mat{rrrr} 2 & -3 & 2 & 1 \\ 0 & 1 & -4 & 8 \\ 0 & 0 & 0 & 15 \rix \quad \text{and} \quad \mat{crr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \rix .\] (They were augmented matrices, but the following applies to any matrix.)
In this section, we describe a procedure for “reducing” matrices to this desired form.

A matrix is in row echelon form ($\REF$) if:
  1. All nonzero rows are above any rows of all zeros.
  2. Each leading entry of a row (the leftmost nonzero entry) is in a column to the right of the leading entry of the row above it.
  3. All entries in a column below a leading entry are zero.

For example: these are in $\REF$. Leading entries are the $\blacksquare$, and $\ast$ denotes anything. \[ \mat{cccc} \blacksquare & \ast & \ast & \ast \\ 0 & \blacksquare & \ast & \ast \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \rix \qquad \mat{ccc} \blacksquare & \ast & \ast \\ 0 & \blacksquare & \ast \\ 0 & 0 & \blacksquare \\ 0 & 0 & 0 \rix \qquad \mat{ccccccccccc} 0 & \blacksquare & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast \\ 0 & 0 & 0 & \blacksquare & \ast & \ast & \ast & \ast & \ast & \ast & \ast \\ 0 & 0 & 0 & 0 & \blacksquare & \ast & \ast & \ast & \ast & \ast & \ast \\ 0 & 0 & 0 & 0 & 0 & 0 & \blacksquare & \ast & \ast & \ast & \ast \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \blacksquare & \ast & \ast & \ast \rix .\]

A matrix is in reduced row echelon form ($\RREF$) if
  1. It is in $\REF$.
  2. The leading entry in each nonzero row is $1$.
  3. Each leading $1$ is the only nonzero entry in its column.

For example: the above matrices in $\RREF$ are:

$\mat{cccc} \blacksquare & \ast & \ast & \ast \\ 0 & \blacksquare & \ast & \ast \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \rix$ $\mat{ccc} \blacksquare & \ast & \ast \\ 0 & \blacksquare & \ast \\ 0 & 0 & \blacksquare \\ 0 & 0 & 0 \rix$ $\mat{ccccccccccc} 0 & \blacksquare & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast \\ 0 & 0 & 0 & \blacksquare & \ast & \ast & \ast & \ast & \ast & \ast & \ast \\ 0 & 0 & 0 & 0 & \blacksquare & \ast & \ast & \ast & \ast & \ast & \ast \\ 0 & 0 & 0 & 0 & 0 & 0 & \blacksquare & \ast & \ast & \ast & \ast \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \blacksquare & \ast & \ast & \ast \rix$
$\mat{cccc} 1 & 0 & \ast & \ast \\ 0 & 1 & \ast & \ast \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \rix$ $\mat{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \rix$ $\mat{ccccccccccc} 0 & 1 & \ast & 0 & 0 & \ast & 0 & 0 & \ast & \ast & \ast \\ 0 & 0 & 0 & 1 & 0 & \ast & 0 & 0 & \ast & \ast & \ast \\ 0 & 0 & 0 & 0 & 1 & \ast & 0 & 0 & \ast & \ast & \ast \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \ast & \ast & \ast \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \ast & \ast & \ast \rix$
Every matrix is row equivalent to exactly one matrix in reduced row echelon form.
  • A pivot position is a location of a leading entry in a $\REF$ of a matrix.
  • A pivot is a nonzero number in a pivot position, used to create zeros via row operations.
  • A pivot column is a column that contains a pivot position.
Row Reduction Steps: ($\REF$ and $\RREF$)
  1. Start with the leftmost nonzero column. The pivot position is at the top of this column.
  2. If needed, do a row swap to put a (nonzero) pivot in the pivot position.
  3. Do row operations to zero out all positions below the pivot.
  4. Ignore this row (and everything above). Apply steps 1--3 to the submatrix that remains. Repeat until there are no more rows to modify.
For $\RREF$:
  1. Start with rightmost pivot. Work upward and to the left to create zeros above each pivot. Make each pivot $1$ by scaling.
Convert $A$ to a row echelon form: \begin{align*} A=\mat{rrrrr} 0 & -3 & -6 & 4 & 9 \\ -1 & -2 & -1 & 3 & 1 \\ -2 & -3 & 0 & 3 & -1 \\ 1 & 4 & 5 & -9 & -7 \rix & \qquad \hspace{10em} & \quad & \mat{rrrrr} 1 & 4 & 5 & -9 & -7 \\ -1 & -2 & -1 & 3 & 1 \\ -2 & -3 & 0 & 3 & -1 \\ 0 & -3 & -6 & 4 & 9 \rix \\ & \qquad & \quad & \mat{rrrrr} 1 & 4 & 5 & -9 & -7 \\ 0 & 2 & 4 & -6 & -6 \\ -2 & -3 & 0 & 3 & -1 \\ 0 & -3 & -6 & 4 & 9 \rix \\ & \qquad & \quad & \mat{rrrrr} 1 & 4 & 5 & -9 & -7 \\ 0 & 2 & 4 & -6 & -6 \\ 0 & 5 & 10 & -15 & -15 \\ 0 & -3 & -6 & 4 & 9 \rix \\ & \qquad & \quad & \mat{rrrrr} 1 & 4 & 5 & -9 & -7 \\ 0 & 2 & 4 & -6 & -6 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & -3 & -6 & 4 & 9 \rix \\ & \qquad & \quad & \mat{rrrrr} 1 & 4 & 5 & -9 & -7 \\ 0 & 2 & 4 & -6 & -6 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -5 & 0 \rix \\ & \qquad & \quad & \mat{rrrrr} 1 & 4 & 5 & -9 & -7 \\ 0 & 2 & 4 & -6 & -6 \\ 0 & 0 & 0 & -5 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \rix \end{align*}
Reduce the matrix to its $\RREF$: $\mat{rrrrrr} 0 & 3 & -6 & 6 & 4 & -5 \\ 3 & -7 & 8 & -5 & 8 & 9 \\ 3 & -9 & 12 & -9 & 6 & 15 \rix$ \begin{align*} & \text{Step 2: Pivot at } (1,1) & \quad R_1 \leftrightarrow R_3 & \qquad \mat{rrrrrr} 3 & -9 & 12 & -9 & 6 & 15 \\ 3 & -7 & 8 & -5 & 8 & 9 \\ 0 & 3 & -6 & 6 & 4 & -5 \rix \\ & \text{Step 3: Row ops.} & \quad R_2 \leftarrow R_2 - R_1 & \qquad \mat{rrrrrr} 3 & -9 & 12 & -9 & 6 & 15 \\ 0 & 2 & -4 & 4 & 2 & -6 \\ 0 & 3 & -6 & 6 & 4 & -5 \rix \\ & \text{Step 4: Ignore Row 1, pivot at } (2,2) & \quad & \qquad \mat{rrrrrr} 3 & -9 & 12 & -9 & 6 & 15 \\ 0 & 2 & -4 & 4 & 2 & -6 \\ 0 & 3 & -6 & 6 & 4 & -5 \rix \\ & \text{Step 3: Row ops. } & \quad R_3 \leftarrow R_3 - \frac{3}{2} R_2 & \qquad \mat{rrrrrr} 3 & -9 & 12 & -9 & 6 & 15 \\ 0 & 2 & -4 & 4 & 2 & -6 \\ 0 & 0 & 0 & 0 & 1 & 4 \rix \end{align*} Continue on your own, finding the $\RREF$.
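For reference, one way to continue to the $\RREF$ (a worked check; your steps may differ):
\begin{align*} \mat{rrrrrr} 3 & -9 & 12 & -9 & 6 & 15 \\ 0 & 2 & -4 & 4 & 2 & -6 \\ 0 & 0 & 0 & 0 & 1 & 4 \rix & \qquad R_1 \leftarrow R_1 - 6 R_3,\; R_2 \leftarrow R_2 - 2 R_3 & \qquad & \mat{rrrrrr} 3 & -9 & 12 & -9 & 0 & -9 \\ 0 & 2 & -4 & 4 & 0 & -14 \\ 0 & 0 & 0 & 0 & 1 & 4 \rix \\ & \qquad R_1 \leftarrow \frac{1}{3} R_1,\; R_2 \leftarrow \frac{1}{2} R_2 & \qquad & \mat{rrrrrr} 1 & -3 & 4 & -3 & 0 & -3 \\ 0 & 1 & -2 & 2 & 0 & -7 \\ 0 & 0 & 0 & 0 & 1 & 4 \rix \\ & \qquad R_1 \leftarrow R_1 + 3 R_2 & \qquad & \mat{rrrrrr} 1 & 0 & -2 & 3 & 0 & -24 \\ 0 & 1 & -2 & 2 & 0 & -7 \\ 0 & 0 & 0 & 0 & 1 & 4 \rix \end{align*}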

Applying these row reduction steps to the augmented matrix of a linear system allows us to explicitly describe the solution set.

  • Variables that correspond to pivot columns are called basic variables.
  • Variables that correspond to columns without a pivot are called free variables or nonbasic variables.
Consider this linear system and its corresponding augmented matrix in $\RREF$: \[ \begin{array}{rcrcrcrc} x_1 & & &-& 5x_3 &=& 1 \\ & &x_2 &+& x_3 &=& 4 \\ && && 0 &=& 0 \end{array} \qquad \qquad \mat{ccr|c} 1 & 0 & -5 & 1 \\ 0 & 1 & 1 & 4 \\ 0 & 0 & 0 & 0 \rix .\] Because columns 1 and 2 have pivots, and column 3 does not, then \[ x_1 \text{ and } x_2: \hspace{8em} \qquad x_3: \hspace{9em} .\] Now, we can solve for the basic variables in terms of the free variables: \[ \begin{array}{rcrcrcrc} x_1 & & &-& 5x_3 &=& 1 \\ & &x_2 &+& x_3 &=& 4 \\ && && 0 &=& 0 \end{array} \qquad \longrightarrow \qquad \hspace{10em} \] So, the system is:
Warning! Only use the reduced row echelon form ($\RREF$) to solve a system!
(Just the $\REF$ is insufficient!)
Find the solution to the system whose augmented matrix has a $\REF$ of: \[ \mat{rrrrr|r} 1 & 6 & 2 & -5 & -2 & -4 \\ 0 & 0 & 2 & -8 & -1 & 3 \\ 0 & 0 & 0 & 0 & 1 & 7 \rix .\] We begin by row reducing further to get the $\RREF$:
Basic columns are:
Nonbasic columns are:
The $\RREF$ gives the following system of equations:
The basic variables in terms of the free variables are: \begin{align*} x_1 & = \hspace{30em} \\[0.5em] x_2 & = \\[0.5em] x_3 & = \\[0.5em] x_4 & = \\[0.5em] x_5 &= \end{align*}
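For reference, a worked version to check your answers against:
\begin{align*} \mat{rrrrr|r} 1 & 6 & 2 & -5 & -2 & -4 \\ 0 & 0 & 2 & -8 & -1 & 3 \\ 0 & 0 & 0 & 0 & 1 & 7 \rix & \qquad R_1 \leftarrow R_1 + 2 R_3,\; R_2 \leftarrow R_2 + R_3 & \qquad & \mat{rrrrr|r} 1 & 6 & 2 & -5 & 0 & 10 \\ 0 & 0 & 2 & -8 & 0 & 10 \\ 0 & 0 & 0 & 0 & 1 & 7 \rix \\ & \qquad R_2 \leftarrow \frac{1}{2} R_2,\; R_1 \leftarrow R_1 - 2 R_2 & \qquad & \mat{rrrrr|r} 1 & 6 & 0 & 3 & 0 & 0 \\ 0 & 0 & 1 & -4 & 0 & 5 \\ 0 & 0 & 0 & 0 & 1 & 7 \rix \end{align*}
The pivot columns are 1, 3, and 5, so $x_1, x_3, x_5$ are basic and $x_2, x_4$ are free, with
\[ x_1 = -6x_2 - 3x_4, \qquad x_3 = 5 + 4x_4, \qquad x_5 = 7 .\]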

Even though a $\REF$ is insufficient for writing down the solution set, it does tell us whether the system is consistent (and, if consistent, how many solutions there are).

Determine the existence and uniqueness of solutions to the systems with augmented matrices: \[ \mat{rrrrr|r} 3 & -9 & 12 & -9 & 6 & 15 \\ 0 & 2 & -4 & 4 & 2 & -6 \\ 0 & 0 & 0 & 0 & 1 & 4 \rix \qquad\qquad \mat{cc|r} 3 & 4 & -3 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \rix \]
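For reference, a worked check: in the first matrix, the pivot positions lie in columns 1, 2, and 5, so the rightmost column is not a pivot column and the system is consistent; columns 3 and 4 have no pivots, so there are free variables and hence infinitely many solutions. In the second matrix, there is no row of the form $\mat{cc|c} 0 & 0 & c \rix$ with $c \neq 0$, so the system is consistent, and both variables are basic, so the solution is unique:
\[ x_2 = 3, \qquad 3x_1 + 4(3) = -3 \implies x_1 = -5, \qquad (x_1, x_2) = (-5, 3) .\]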
A linear system is consistent (i.e., has solutions) if and only if the rightmost column of the augmented matrix is not a pivot column, i.e., if and only if any echelon form of its augmented matrix does not contain a row of the form \[ \mat{ccc|c} 0 & \dots & 0 & c \rix, \qquad \text{where } c \neq 0 .\] If a linear system is consistent, then the solution set contains either:
  1. a unique solution, if there are no free variables;
  2. infinitely many solutions, if there is at least one free variable.

Suppose a linear system is consistent. To determine if there is a unique solution, what must be true about the pivot locations? Does every row need a pivot? Does every column need a pivot?
Think about it!

In summary: To solve a linear system, use the following procedure.

Solving a Linear System via Row Reduction:
  1. Write the augmented matrix of the system.
  2. Obtain the $\REF$ of the augmented matrix.
    • If a row of $\mat{ccc|c} 0 & \dots & 0 & c \rix$ where $c\neq 0$ is encountered, stop. There are no solutions.
    • Otherwise, continue.
  3. Continue row reducing to obtain the $\RREF$.
  4. Write the system of equations obtained from the $\RREF$.
  5. Rewrite each equation so that its basic variable is in terms of any free variables.
Answer the following. Be sure that you can fully explain your work!
  1. What is the largest possible number of pivots a $4\times 6$ matrix can have?
  2. What is the largest possible number of pivots a $6\times 4$ matrix can have?
  3. A consistent linear system has $3$ equations and $4$ unknowns. How many solutions does it have?
  4. Suppose the coefficient matrix corresponding to a linear system is $4\times 6$ with $3$ pivot columns. How many pivot columns does the augmented matrix have, if the linear system is inconsistent?
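For checking your answers (worked reasoning, using only the pivot facts above):
  1. Each row contains at most one pivot, so at most $4$.
  2. Each column contains at most one pivot, so at most $4$.
  3. At most $3$ pivots but $4$ unknowns, so at least one free variable: infinitely many solutions.
  4. Inconsistency forces the rightmost (augmented) column to be a pivot column, so $3 + 1 = 4$ pivot columns.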

Section 1.3: Vector Equations

Vectors are just ordered lists of numbers. Their relevance to linear systems is immediate.
A matrix with one column is called a column vector, typically denoted by $\mathbf{x}$ or $\vec{x}$.

For example, the following are two-dimensional vectors: \[ \vec{u} = \mat{r} 3 \\ -1 \rix, \qquad \vec{v} = \mat{c} \pi \\ 0.1 \rix, \qquad \vec{w} = \mat{r} -0.5 \\ 45 \rix , \qquad \vec{x} = \mat{r} 2 \\ 2 \rix \]

Each point $(x_1,x_2)$ in the plane corresponds to a vector $\mat{c} x_1 \\ x_2 \rix$ in $\R^2$:

Points and vectors in the space R-squared

\[ \R^2 \eq \Bigg\{ \hspace{12em} \Bigg\} .\]

Vector Addition: We can add two vectors by adding corresponding components: \[ \vec{u} = \mat{c} 1 \\ 3 \rix \qquad \vec{v} = \mat{c} 2 \\ 1 \rix \qquad \text{yields} \qquad \vec{u} + \vec{v} = \hspace{5em} \] Geometrically, this is the so-called parallelogram rule.

If $\vec{u}$ and $\vec{v}$ are vectors in $\R^2$, then $\vec{u} + \vec{v}$ is the fourth vertex of the parallelogram whose other three vertices are $\vec{u}$, $\vec{v}$, and $\vec{0}$.
The parallelogram rule

Note that $\vec{0} = \mat{c} 0 \\ 0 \rix$ in $\R^2$.

Vector Scaling: We can multiply a vector $\vec{x}$ by a scalar $c \in \R $. \[ \vec{x} = \mat{r} 1 \\ -2 \rix \qquad 3 \vec{x} = \mat{c} 3\cdot 1 \\ 3\cdot (-2) \rix = \mat{r} 3 \\ -6 \rix, \qquad -2 \vec{x} = \mat{r} -2 \\ 4 \rix, \qquad \frac{1}{2}\vec{x} = \mat{c} 1/2 \\ -1 \rix .\]

Illustrations of scaling vectors.

Vectors in $\R^n $: Everything we have done above extends naturally to higher-dimensional spaces. Namely, to $\R^n $ for an arbitrary integer $n \ge 1$.

For $n\ge 1$, the collection of all ordered $n$-tuples of real numbers is given by \[ \R^n \eq \cbr{\mat{c} u_1 \\ \vdots \\ u_n \rix \:\mid\: u_1, \dots , u_n \in \R } \] which is read “r-n”, not “r to the n”.
For all $\vec{u}, \vec{v}, \vec{w}$ in $\R^n $ and all scalars $c,d \in \R $:
  1. $\vec{u} + \vec{v} = \vec{v} + \vec{u}$
  2. $(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$
  3. $\vec{u} + \vec{0} = \vec{u}$
  4. $\vec{u} + (-1)\vec{u} = \vec{u} - \vec{u} = \vec{0}$
  5. $c (\vec{u} + \vec{v}) = c \vec{u} + c \vec{v}$
  6. $(c+d) \vec{u} = c \vec{u} + d \vec{u}$
  7. $c (d \vec{u}) = (cd) \vec{u}$
  8. $1 \vec{u} = \vec{u}$

Adding and scaling vectors gives another vector (still in $\R^n $, say) which we call a linear combination.

Given vectors $\vec{v}_1, \vec{v}_2, \dots , \vec{v}_p$ in $\R^n $, and scalars $c_1, c_2, \dots , c_p$ in $\R $ (sometimes called the weights), then the vector \[ \vec{y} \eq c_1 \vec{v}_1 + \dots + c_p \vec{v}_p \] is a linear combination of $\vec{v}_1, \dots , \vec{v}_p$.
Given vectors $\vec{v}_1$ and $\vec{v}_2$ in $\R^n $, which of the following are linear combinations of $\vec{v}_1$ and $\vec{v}_2$?
  1. $2 \vec{v}_1 + \frac{1}{2} \vec{v}_2$
  2. $-\vec{v}_1 + \sqrt{2} \, \vec{v}_2$
  3. $4 \vec{v}_2$
  4. $\vec{0}$
  5. $\vec{v}_1 - \vec{v}_2$
Let $\vec{v}_1 = \mat{c} 2 \\ 1 \rix$ and $\vec{v}_2 = \mat{r} -2 \\ 2 \rix$. Graph each of the following, then express them as a linear combination of $\vec{v}_1$ and $\vec{v}_2$. \[ \vec{a} = \mat{c} 0 \\ 3\rix, \quad \vec{b} = \mat{r} -4 \\ 1 \rix, \quad \vec{c} = \mat{c} 4 \\ 8 \rix, \quad \vec{d} = \mat{r} 7 \\ -4 \rix .\]
A grid with a plot of v1 and v2 to show the plots of vectors a, b, c, and d above.
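For reference, the weights (a worked check; solve $c_1 \vec{v}_1 + c_2 \vec{v}_2 = \vec{a}$, and so on, for each vector):
\[ \vec{a} = \vec{v}_1 + \vec{v}_2, \qquad \vec{b} = -\vec{v}_1 + \vec{v}_2, \qquad \vec{c} = 4\vec{v}_1 + 2\vec{v}_2, \qquad \vec{d} = \vec{v}_1 - \frac{5}{2}\vec{v}_2 .\]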

In general, problems like the last example are hard to solve! In fact, solving them is one of the major goals of linear algebra!

For example, take the vectors \[ \vec{a}_1 = \mat{c} 1 \\ 0 \\ 3 \rix \quad \vec{a}_2 = \mat{c} 4 \\ 2 \\ 14 \rix \quad \vec{a}_3 = \mat{c} 3 \\ 6 \\ 10 \rix \quad \vec{b} = \mat{r} -1 \\ 8 \\ -5 \rix .\] Is $\vec{b}$ a linear combination of $\vec{a}_1, \vec{a}_2, \vec{a}_3$? That is, do there exist weights $c_1, c_2, c_3 \in \R $ such that

This vector equation can be written as

which can, in turn, be written as the following augmented matrix and then row reduced: \[ \hspace{10em} \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad \]

Therefore, the weights in the linear combination are found to be \[ c_1 = \qquad\quad c_2 = \qquad \quad c_3 = \]
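For reference, a worked check of the blanks above (the row reduction shown is one of several possible):
\[ \mat{ccc|r} 1 & 4 & 3 & -1 \\ 0 & 2 & 6 & 8 \\ 3 & 14 & 10 & -5 \rix \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad \mat{ccc|r} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & -2 \\ 0 & 0 & 1 & 2 \rix \]
giving $c_1 = 1$, $c_2 = -2$, $c_3 = 2$; indeed $\vec{a}_1 - 2\vec{a}_2 + 2\vec{a}_3 = \vec{b}$.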

The vector $\vec{b}$ is a linear combination of $\vec{a}_1, \dots , \vec{a}_n$ if and only if there exist weights $c_1, \dots , c_n$ such that \[ c_1 \vec{a}_1 + c_2 \vec{a}_2 + \dots + c_n \vec{a}_n \eq \vec{b} \] which in turn holds if and only if the linear system with augmented matrix \[ \mat{cccc|c} \vec{a}_1 & \vec{a}_2 & \dots & \vec{a}_n & \vec{b} \rix \] has a solution.
The set of all linear combinations of $\vec{v}_1, \dots , \vec{v}_p \in \R^n $ is called the span of the vectors $\vec{v}_1, \dots , \vec{v}_p$, denoted by \[ \Span \cbr{\vec{v}_1, \dots , \vec{v}_p} \eq \cbr{c_1 \vec{v}_1 + \dots + c_p \vec{v}_p \:\mid\: c_i \in \R} .\]
Let $\vec{u}$ and $\vec{v}$ be nonzero vectors in $\R^3$.
  • Describe $\Span \cbr{\vec{u}}$.
  • Describe $\Span \cbr{\vec{u}, \vec{v}}$ when $\vec{u}$ is not a scalar multiple of $\vec{v}$.
  • Describe $\Span \cbr{\vec{u}, \vec{v}}$ when $\vec{u}$ is a scalar multiple of $\vec{v}$.
  • When is $\vec{0}$ in $\Span \cbr{\vec{u}}$? In $\Span \cbr{\vec{v}}$? In $\Span \cbr{\vec{u}, \vec{v}}$?
Let \[A = \mat{cc} 1 & 2 \\ 3 & 1 \\ 0 & 5 \rix, \qquad \vec{b} = \mat{c} 8 \\ 3 \\ 17 \rix.\] Is $\vec{b}$ in the span of the columns of $A$?
By the result above, we can check if the following augmented matrix corresponds to a consistent system: \[ \mat{cc|c} 1 & 2 & 8 \\ 3 & 1 & 3 \\ 0 & 5 & 17 \rix .\] A $\REF$ of this augmented matrix is \[ \mat{cc|r} 1 & 2 & 8 \\ 0 & -5 & -21 \\ 0 & 0 & -4 \rix .\] Therefore $\dots$

Section 1.4: The Matrix Equation Ax=b

In this section we define multiplication of a matrix by a vector.
Let $A$ be an $m\times n$ matrix, considered as a collection of $m$-vectors: \[ A \eq \mat{cccc} \\ \vec{a}_1 & \vec{a}_2 & \dots & \vec{a}_n \\ \\ \rix .\] If $\vec{x}$ is in $\R^n $, then the product of $A$ and $\vec{x}$ is the linear combination of the columns of $A$ using the corresponding entries in $\vec{x}$ as weights. That is, \[ A \vec{x} \eq \mat{cccc} \\ \vec{a}_1 & \vec{a}_2 & \dots & \vec{a}_n \\ \\ \rix \mat{c} x_1 \\ x_2 \\ \vdots \\ x_n \rix \eq x_1 \vec{a}_1 + x_2 \vec{a}_2 + \dots + x_n \vec{a}_n .\]

Note: This requires that \[ \text{Number of columns of } A \eq \text{Number of entries in } \vec{x} .\]

Compute the matrix-vector products: \begin{align*} \mat{rrr} 1 & 2 & -1 \\ 0 & -5 & 3 \rix \mat{c} 4 \\ 3 \\ 7 \rix & \; = \; \hspace{30em} \\[2em] \mat{rr} 2 & -3 \\ 8 & 0 \\ -5 & 2 \rix \mat{c} 4 \\ 7 \rix & \eq \end{align*}

Now, we have three equivalent ways to view a linear system:

  1. As a system of linear equations: \[ \begin{array}{rcrcrcr} 1\cdot x_1 &+& 4\cdot x_2 &+& 3\cdot x_3 &=& -1 \\ 0\cdot x_1 &+& 2\cdot x_2 &+& 6\cdot x_3 &=& 8 \\ 3\cdot x_1 &+& 14\cdot x_2 &+& 10\cdot x_3 &=& -5 \end{array} .\]
  2. As a vector equation: \[ x_1 \mat{c} 1 \\ 0 \\ 3 \rix + x_2 \mat{c} 4 \\ 2 \\ 14 \rix + x_3 \mat{c} 3 \\ 6 \\ 10 \rix \eq \mat{r} -1 \\ 8 \\ -5 \rix .\]
  3. As a matrix equation: \[ \mat{ccc} 1 & 4 & 3 \\ 0 & 2 & 6 \\ 3 & 14 & 10 \rix \mat{c} x_1 \\ x_2 \\ x_3 \rix \eq \mat{r} -1 \\ 8 \\ -5 \rix .\]
Let $A$ be an $m\times n$ matrix $A = \mat{cccc} \vec{a}_1 & \vec{a}_2 & \dots & \vec{a}_n \\ \rix$ and let $\vec{b}$ be in $\R^m $.
Then:
  • The matrix equation $A \vec{x}=\vec{b}$;
  • The vector equation $x_1 \vec{a}_1 + \dots + x_n \vec{a}_n = \vec{b}$;
  • The augmented matrix $\mat{ccc|c} \vec{a}_1 & \dots & \vec{a}_n & \vec{b} \rix$
all have the same solution set.

Furthermore, from this equivalence it follows that:

$A \vec{x}=\vec{b}$ has a solution if and only if $\vec{b}$ is a linear combination of the columns of $A$.

In Section 1.3, we asked if a given vector $\vec{b}$ was in $\Span \cbr{\vec{a}_1, \dots , \vec{a}_n}$. By the result above, the answer can be determined by asking if $A \vec{x} = \vec{b}$ has a solution (i.e., is consistent).

Now—is any vector $\vec{b}$ in $\Span \cbr{\vec{a}_1, \dots , \vec{a}_n}$? That is, does the linear system $A \vec{x} = \vec{b}$ have a solution for all $\vec{b}$?

Is the following system $A \vec{x} = \vec{b}$ consistent for any choice of $b_1, b_2, b_3$? \[ \mat{rrr|c} 1 & 3 & 4 & b_1\\ -4 & 2 & -6 & b_2\\ -3 & -2 & -7 & b_3 \rix \] Row reduce the augmented matrix: \[ \begin{aligned} R_2 &\leftarrow R_2 + 4 R_1 \\[0.5em] R_3 & \leftarrow R_3 + 3 R_1 \\[0.5em] R_3 & \leftarrow R_3 - \frac{1}{2} R_2 \end{aligned} \quad \longrightarrow \quad \hspace{15em} \]
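For reference, the resulting matrix (a worked check):
\[ \mat{rrr|l} 1 & 3 & 4 & b_1 \\ 0 & 14 & 10 & b_2 + 4b_1 \\ 0 & 0 & 0 & b_1 - \frac{1}{2} b_2 + b_3 \rix \]
So the system is consistent only when $b_1 - \frac{1}{2} b_2 + b_3 = 0$; it is not consistent for every choice of $b_1, b_2, b_3$.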
If $A = \mat{cccc} \vec{a}_1 & \vec{a}_2 & \dots & \vec{a}_n \rix$ is an $m\times n$ matrix and $\vec{b}$ is in $\R^m $, then we say that the columns of $A$ span $\R^m $ if every vector $\vec{b} \in \R^m $ is a linear combination of the columns of $A$. When this occurs, we write $\Span \cbr{\vec{a}_1, \dots , \vec{a}_n} = \R^m$.

In the last example, do the columns of $A$ span $\R ^3$?

The next theorem is critically important!

Let $A$ be an $m\times n$ matrix. Then the following are logically equivalent:
  1. The equation $A \vec{x} = \vec{b}$ has a solution for every $\vec{b} \in \R^m $.
  2. Every $\vec{b} \in \R^m $ is a linear combination of the columns of $A$.
  3. The columns of $A$ span $\R^m $.
  4. $A$ has a pivot position in every row.
That (1), (2), and (3) are equivalent follows from the definitions and results above. Now, suppose (4) is true. We'll show (1) is true. For any $\vec{b}$, consider the augmented matrix $\mat{c|c} A & \vec{b} \rix$ and row reduce to obtain $\mat{c|c} U & \vec{d} \rix$. Since every row of $U$ has a pivot position, then $\vec{d}$ cannot be a pivot column. Remember—the assumption is that $A$, not $\mat{c|c} A & \vec{b} \rix$, has a pivot in every row. Therefore, by the Existence and Uniqueness Theorem of Section 1.2, the system $A \vec{x} = \vec{b}$ is consistent, which is exactly statement (1) since $\vec{b}$ was arbitrarily chosen.
On the other hand, suppose (1) is true. We will assume (4) is false, and then obtain a contradiction. Since (4) is false, $A$ does not have a pivot in (at least) one row, so in any row echelon form $\mat{c|c} U & \vec{d} \rix$ of $\mat{c|c} A & \vec{b} \rix$, the last row of $U$ consists of zeros. We can choose $\vec{b}$ so that the last entry of $\vec{d}$ is nonzero. This contradicts that $A \vec{x} = \vec{b}$ is consistent, since the last row of $\mat{c|c} U & \vec{d} \rix$ is of the form $\mat{ccc|c} 0 & \dots & 0 & d \rix$ where $d\neq 0$. Therefore, if (1) is true, then (4) must be true.
Let $A$ be a $3\times 2$ matrix. Is the equation $A \vec{x} = \vec{b}$ consistent for all $\vec{b}$?
Let $A$ be a $3\times 6$ matrix. Is the equation $A \vec{x} = \vec{b}$ consistent for all $\vec{b}$?
Suppose $\mat{c|c} A & \vec{b} \rix$ has a pivot in every row. Is $A \vec{x} = \vec{b}$ consistent?

There is another way to compute $A \vec{x}$, other than strict use of the definition (as a linear combination of the columns of $A$).

The idea is to notice that each entry in $A \vec{x}$ is a sum of products (actually, a “dot product”) from the corresponding row of $A$ with the vector $\vec{x}$. For example, the first entry is computed as \[ \mat{ccc} 2 & 3 & 4 \\ \\ \\ \rix \mat{c} x_1 \\ x_2 \\ x_3 \rix \eq \] \[ \mat{ccc} \\ -1 & 5 & -3 \\ \\ \rix \mat{c} x_1 \\ x_2 \\ x_3 \rix \eq \] \[ \mat{ccc} \\ \\ 6 & -2 & 8 \rix \mat{c} x_1 \\ x_2 \\ x_3 \rix \eq \] All together, the product is \[ A \vec{x} \eq \]

\[ \mat{rrr} 1 & 2 & -1 \\ 0 & -5 & 3 \rix \mat{c} 4 \\ 3 \\ 7 \rix \eq \hspace{25em} \] \[ \mat{rr} 2 & -3 \\ 8 & 0 \\ -5 & 2 \rix \mat{c} 4 \\ 7 \rix \eq \hspace{25em} \]
If $A$ is an $m\times n$ matrix, $\vec{u}, \vec{v} \in \R^n $, and $c$ is a scalar, then: \[ A(\vec{u} + \vec{v}) = A \vec{u} + A \vec{v}, \qquad A (c \vec{u}) = c (A \vec{u}) \]

Section 1.5: Solution Sets of Linear Systems

Here, we look into homogeneous systems of linear equations—where the right-hand side is $\vec{0}$.
A system of linear equations is said to be homogeneous if it is of the form \[ A \vec{x} \; = \; \vec{0} \] where $A$ is $m\times n$ and $\vec{0}$ is the $m\times 1$ vector of $0$'s.

The equation $A \vec{x} = \vec{0}$ ALWAYS has a solution. Namely, $\vec{x} = \vec{0}$. We call this the trivial solution.
The question is: are there nontrivial solutions? That is, is there a vector $\vec{x}$ such that $A \vec{x} = \vec{0}$ but $\vec{x} \neq \vec{0}$?

Quick Aside—Notice how this question would never apply to, for example, the real numbers. If $a$ and $x$ are real numbers, then $ax=0$ means either $a=0$ or $x=0$. For matrix-vector products, one can (often!) have $A \vec{x} = \vec{0}$ without $A$ or $\vec{x}$ being entirely zero.

Recall the Existence and Uniqueness Theorem from Section 1.2—a linear system is consistent if and only if a row of the following form is never encountered while doing row operations: \[ \mat{ccc|c} 0 & \dots & 0 & c \rix, \qquad c\neq 0 .\] A homogeneous system always has zeros in the last column of its augmented matrix: \[ \mat{c|c} A & \vec{0} \rix .\] Row operations preserve this zero column, so it is impossible for a zero row with nonzero right-hand side to appear. Not only does this confirm that homogeneous systems are always consistent, but it also tells us the following.

The homogeneous equation $A \vec{x} = \vec{0}$ has a nontrivial solution if and only if there is at least one free variable.
Determine if the following homogeneous system has nontrivial solutions: \[ \begin{array}{rcrcrcc} 2x_1 &+& 4x_2 &-& 6x_3 &=& 0 \\ 4x_1 &+& 8x_2 &-& 10x_3 &=& 0 \end{array} .\]

The augmented matrix is \[ \hspace{10em} \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad \hspace{10em} \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad \]

\[ \vec{x} \eq \mat{c} x_1 \\ x_2 \\ x_3 \rix \eq \] In these examples, the solutions $\vec{x}$ are in parametric vector form.
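For reference, a worked version to check your blanks against:
\[ \mat{rrr|r} 2 & 4 & -6 & 0 \\ 4 & 8 & -10 & 0 \rix \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad \mat{rrr|r} 1 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \rix \]
so $x_3 = 0$ and $x_1 = -2x_2$ with $x_2$ free, and
\[ \vec{x} \eq x_2 \mat{r} -2 \\ 1 \\ 0 \rix \]
is a nontrivial solution for every $x_2 \neq 0$.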
Consider the nonhomogeneous system \[ \mat{rrr|r} 2 & 4 & -6 & 0 \\ 4 & 8 & -10 & 4 \rix \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad \hspace{6em} \]
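For reference, a worked check of the nonhomogeneous case:
\[ \mat{rrr|r} 2 & 4 & -6 & 0 \\ 4 & 8 & -10 & 4 \rix \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad \mat{rrr|r} 1 & 2 & 0 & 6 \\ 0 & 0 & 1 & 2 \rix \]
so $x_3 = 2$ and $x_1 = 6 - 2x_2$, i.e.,
\[ \vec{x} \eq \mat{c} 6 \\ 0 \\ 2 \rix + x_2 \mat{r} -2 \\ 1 \\ 0 \rix ,\]
which is the homogeneous solution set translated by the particular solution $\vec{p} = (6, 0, 2)$.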
Suppose $A \vec{x} = \vec{b}$ is consistent, and $\vec{p}$ is any solution. Then the solution set of $A \vec{x} = \vec{b}$ is the set of all vectors of the form $\vec{p} + \vec{v}$, where $\vec{v}$ is any solution to the homogeneous equation $A \vec{x} = \vec{0}$.

This result means that the solution set of $A \vec{x} = \vec{b}$ is obtained by translating the solution set of $A \vec{x} = \vec{0}$ by any particular solution of $A \vec{x} = \vec{b}$.

The translation of the solution set of Ax=0 by any solution of Ax=b
Compare the solution set of $2x_1 - 4x_2 - 4x_3=0$ to that of $2x_1 - 4x_2 - 4x_3 = 6$. The augmented matrix for the homogeneous system is \[ \mat{rrr|r} 2 & -4 & -4 & 0 \rix \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad \mat{rrr|r} 1 & -2 & -2 & 0 \rix \] and therefore the vector form of the solution is \[ \vec{v} \eq \hspace{30em} \]
Meanwhile, the nonhomogeneous system has \[ \mat{rrr|r} 2 & -4 & -4 & 6 \rix \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad \mat{rrr|r} 1 & -2 & -2 & 3 \rix \] and therefore has solution \[ \vec{w} \eq \hspace{30em} \]

Section 1.7: Linear Independence

A homogeneous system like \[ \mat{rrr} 1 & 2 & -3 \\ 3 & 5 & 9 \\ 5 & 9 & 3 \rix \mat{c} x_1 \\ x_2 \\ x_3 \rix \eq \mat{c} 0 \\ 0 \\ 0 \rix \] can be viewed as a vector equation \[ x_1 \mat{c} 1 \\ 3 \\ 5 \rix + x_2 \mat{c} 2 \\ 5 \\ 9 \rix + x_3 \mat{r} -3 \\ 9 \\ 3 \rix \eq \mat{c} 0 \\ 0 \\ 0 \rix .\] Are there nontrivial solutions? Or, is $\vec{0}$ the only solution?

A set of vectors $\cbr{\vec{v}_1, \vec{v}_2, \dots , \vec{v}_p}$ in $\R^n $ is said to be linearly independent if the vector equation \[ c_1 \vec{v}_1 + c_2 \vec{v}_2 + \dots + c_p \vec{v}_p \eq \vec{0} \] has only the trivial solution $c_1 = c_2 = \dots = c_p = 0$. Otherwise, $\cbr{\vec{v}_1, \vec{v}_2, \dots , \vec{v}_p}$ is linearly dependent, meaning there exist weights $c_1, c_2, \dots , c_p$, not all of which are $0$, such that \[ c_1 \vec{v}_1 + c_2 \vec{v}_2 + \dots + c_p \vec{v}_p \eq \vec{0} \] (which we call a linear dependence relation).
Consider the system above: \[ x_1 \vec{v}_1 + x_2 \vec{v}_2 + x_3 \vec{v}_3 \eq x_1 \mat{c} 1 \\ 3 \\ 5 \rix + x_2 \mat{c} 2 \\ 5 \\ 9 \rix + x_3 \mat{r} -3 \\ 9 \\ 3 \rix \eq \mat{c} 0 \\ 0 \\ 0 \rix .\] Is $\cbr{\vec{v}_1, \vec{v}_2, \vec{v}_3}$ linearly independent? If not, find a linear dependence relation.
Row reduce the augmented matrix: \[ \mat{rrr|r} 1 & 2 & -3 &0 \\ 3 & 5 & 9 & 0 \\ 5 & 9 & 3 & 0 \rix \qquad\quad \begin{aligned} R_2 &\leftarrow R_2 - 3 R_1 \\[0.5em] R_3 & \leftarrow R_3 -5 R_1 \\[0.5em] R_3 & \leftarrow R_3 - R_2 \\[0.5em] R_1 & \leftarrow R_1 + 2 R_2 \end{aligned} \qquad\quad \hspace{15em} \] This yields the solution set: \[ \mat{c} x_1 \\ x_2 \\ x_3 \rix \eq \hspace{5em} \eq \hspace{5em} \]
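For reference, a worked check: the operations listed above leave
\[ \mat{rrr|r} 1 & 0 & 33 & 0 \\ 0 & -1 & 18 & 0 \\ 0 & 0 & 0 & 0 \rix ,\]
so $x_3$ is free with $x_1 = -33x_3$ and $x_2 = 18x_3$. Taking $x_3 = 1$ gives the linear dependence relation
\[ -33\vec{v}_1 + 18\vec{v}_2 + \vec{v}_3 \eq \vec{0} ,\]
so the set is linearly dependent.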
A matrix $A$ and its $\RREF$ have the same linear dependencies (if any) between the columns.

This result is very helpful in determining any linear dependencies in the columns of $A$ when given the $\RREF$.

From , $c_1 \vec{v}_1 + c_2 \vec{v}_2 + \dots + c_p \vec{v}_p =\vec{0}$ has the same solution set as the homogeneous system with augmented matrix \[ \mat{cccc|c} \vec{v}_1 & \vec{v}_2 & \dots & \vec{v}_p & \vec{0} \rix ,\] which has a nontrivial solution if and only if there is at least one free variable.

The columns of a matrix $A$ are linearly independent if and only if $A \vec{x} = \vec{0}$ has only the trivial solution $\vec{x} = \vec{0}$.
Are the columns of $A = \mat{rrrrr} 5 & 6 & -1 & -11 & 6 \\ -4 & 0 & 2 & 22 & -6 \\ 2 & 3 & 4 & 10 & -2 \rix$ linearly independent?
Observe that \[ A \quad \xrightarrow{\text{\normalsize Row Ops.}} \quad \mat{rrrrr} 1 & 0 & 0 & -4 & 1 \\ 0 & 1 & 0 & 2 & 0 \\ 0 & 0 & 1 & 3 & -1 \rix .\]
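Only the first three columns of the RREF contain pivots, so columns 4 and 5 are free and the columns of $A$ are linearly dependent. The claim that $A$ and its RREF share the same column dependencies can be verified numerically (a NumPy sketch; the weights $-4, 2, 3$ and $1, 0, -1$ are read off the free columns of the RREF above):

```python
import numpy as np

A = np.array([[ 5, 6, -1, -11,  6],
              [-4, 0,  2,  22, -6],
              [ 2, 3,  4,  10, -2]], dtype=float)

# Three pivots, five columns: columns 4 and 5 are free.
assert np.linalg.matrix_rank(A) == 3

# The dependencies visible in the RREF hold for the columns of A itself:
v1, v2, v3, v4, v5 = A.T
assert np.allclose(v4, -4*v1 + 2*v2 + 3*v3)   # column 4 of RREF: (-4, 2, 3)
assert np.allclose(v5,      v1        -  v3)  # column 5 of RREF: (1, 0, -1)
```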

(A Set of One Vector) Consider $\cbr{\vec{v}_1}$. Is the set linearly independent?

(A Set with Two Vectors) Consider $\cbr{\vec{v}_1, \vec{v}_2}$. Is the set linearly independent?
For the sake of argument, suppose $\vec{v}_1\neq \vec{0} \neq \vec{v}_2$. We determine if there are nontrivial solutions to \[ c_1 \vec{v}_1 + c_2 \vec{v}_2 \eq \vec{0} .\] If $c_1\neq 0$, then $\vec{v}_1 = -\dfrac{c_2}{c_1} \, \vec{v}_2$; similarly, if $c_2 \neq 0$, then $\vec{v}_2$ is a scalar multiple of $\vec{v}_1$. Therefore, a set of two vectors is linearly independent if and only if the vectors are not scalar multiples of each other.

Geometrically, two vectors are linearly independent if and only if they do not lie on the same line through the origin.

vectors v1 and v2 are linearly dependent because they are scalar multiples of each other
vectors v1 and v2 are linearly independent because they are not scalar multiples of each other

Generalizing further, $\cbr{\vec{u}, \vec{v}, \vec{w}}$ in $\R^3$, say, is linearly dependent if and only if:

(A Set with $\vec{0}$) Any set with the zero vector, e.g., $\cbr{\vec{0}, \vec{v}_1, \dots , \vec{v}_p}$, must be linearly dependent because of the nontrivial equation \[ \hspace{12em} \eq \vec{0} .\]

An ordered set $\cS = \cbr{\vec{v}_1, \dots , \vec{v}_p}$, $p\ge 2$, is linearly dependent if and only if at least one vector in $\cS$ is a linear combination of the others. Moreover, if $\cS$ is linearly dependent and $\vec{v}_1 \neq \vec{0}$, then some vector $\vec{v}_j$ for $j\ge 2$ is a linear combination of $\vec{v}_1, \dots , \vec{v}_{j-1} $.

Warning—If $\cS$ is linearly dependent, this theorem does not say that every vector is a linear combination of the others. For example, $\vec{v}_1$ and $\vec{v}_2$ are not scalar multiples of each other in (hence they are linearly independent) even though $\cbr{\vec{v}_1, \vec{v}_2, \vec{v}_3}$ is linearly dependent!

Any set of vectors $\cbr{\vec{v}_1, \dots , \vec{v}_p}$ in $\R^n $ is linearly dependent if $p>n$. That is, if there are more vectors in the set than entries in each vector.

If $p>n$ and we place the vectors into a matrix as its columns, it appears as \[ A \eq \mat{ccccc} \ast & \ast & \ast & \ast & \ast \\ \ast & \ast & \ast & \ast & \ast \\ \ast & \ast & \ast & \ast & \ast \rix \qquad \text{ size: } n\times p .\]

Let $A$ be the $n\times p$ matrix described above with columns $\vec{v}_1, \dots , \vec{v}_p$. Then $A \vec{x} = \vec{0}$ has more variables ($p$) than equations ($n$), so there is a free variable and hence nontrivial solutions. By , the columns of $A$ are linearly dependent.
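A numerical illustration of this argument (a NumPy sketch; the random matrix and the SVD-based null-vector extraction are incidental choices, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 3, 5
A = rng.integers(-9, 10, size=(n, p)).astype(float)   # p > n columns

# rank(A) <= n < p, so A x = 0 always has a free variable.
assert np.linalg.matrix_rank(A) <= n < p

# Rows of Vt beyond the rank span the null space of A, so the last one
# is a nontrivial solution of A x = 0.
_, _, Vt = np.linalg.svd(A)
x = Vt[-1]                                 # a unit null vector
assert np.allclose(A @ x, 0, atol=1e-10)   # nontrivial dependence relation
```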

Warning—If $p < n$, the theorem does not apply and the set may or may not be linearly dependent. As an example, when $n=3$ and $p=2$, \[ \cbr{\vphantom{\mat{c} 1 \\ 1 \\ 1 \\ 1 \\ \rix} \quad, \quad} \quad \text{is linearly independent}, \quad \cbr{\vphantom{\mat{c} 1 \\ 1 \\ 1 \\ 1 \\ \rix} \quad , \quad } \quad \text{is linearly dependent} .\]

If possible, classify the following sets as linearly independent or linearly dependent. Provide justification.
  1. $\cbr{\mat{c} 3 \\ 2 \\ 1 \rix, \mat{c} 9 \\ 6 \\ 4 \rix }$
  2. $\cbr{\mat{c} 1 \\ 7 \\ 6 \rix, \mat{c} 2 \\ 0 \\ 9 \rix, \mat{c} 3 \\ 1 \\ 5 \rix, \mat{c} 4 \\ 1 \\ 8 \rix} $
  3. $\cbr{\mat{c} 2 \\ 3 \\ 5 \rix, \mat{c} 0 \\ 0 \\ 0 \rix, \mat{c} 1 \\ 1 \\ 8 \rix} $
  4. $\cbr{\mat{c} 1 \\ 2 \\ 0 \rix} $
  5. $\cbr{\vec{u}, \vec{v}, \vec{w}}$, assuming $\cbr{\vec{u}, \vec{v}}$, $\cbr{\vec{u}, \vec{w}}$, and $\cbr{\vec{v}, \vec{w}}$ are each linearly independent.

Section 1.8: Introduction to Linear Transformations

In this section we'll view $A$ as an operation on a vector $\vec{x}$ to produce $A \vec{x} = \vec{b}$. For example, \[ A = \mat{rrrr} 4 & -3 & 1 & 3 \\ 2 & 0 & 5 & 1 \rix \quad \text{transforms}\quad \vec{x} = \mat{r} 1 \\ 1 \\ 1 \\ 1 \rix \quad \text{into}\quad \vec{b} = A \vec{x} = \mat{c} 5 \\ 8 \rix .\]
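This matrix-vector product is easy to reproduce numerically (a minimal NumPy sketch):

```python
import numpy as np

A = np.array([[4, -3, 1, 3],
              [2,  0, 5, 1]])
x = np.array([1, 1, 1, 1])
b = A @ x                 # the image of x under multiplication by A
assert np.array_equal(b, [5, 8])
```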

Suppose $A$ is $m\times n$. Then solving $A \vec{x} = \vec{b}$ amounts to finding all $\vec{x} \in \R^n $ which are transformed into $\vec{b} \in \R^m $ via multiplication by $A$.

For instance, the above matrix $A$ also transforms $\vec{x} = \qquad \qquad $ into $\vec{b}$.

A transformation (also called a function or mapping) $T$ from $\R^n $ to $\R^m $ is a rule that assigns to each vector $\vec{x} \in \R^n $ a vector $T(\vec{x}) \in \R^m$.
  • The domain of $T$ is $\R^n $.
  • The codomain of $T$ is $\R^m $.
  • The image of $\vec{x}$ under $T$ is denoted $T(\vec{x})$ (which is in $\R^m $).
  • The range of $T$ is the set of all images: $\Range (T) = \cbr{T(\vec{x}) \:\mid\: \vec{x} \in {\rm dom}(T)} $

We'll use the following notation: $T : \R^n \to \R^m \qquad \text{and} \qquad \vec{x} \mapsto T(\vec{x})$

illustrating the domain, codomain, and range of a transformation T
Let $A = \mat{rr} 1 & -3 \\ 3 & 5 \\ -1 & 7 \rix$, and define $T: \R^2 \to \R^3$ by $T(\vec{x}) = A \vec{x}$.
  1. Find the image of $\vec{x} = \mat{c} x_1 \\ x_2 \rix$ under $T$.
  2. Find the image of $\vec{u} = \mat{r} 2 \\ -1 \rix$ under $T$.
  3. Find $\vec{x} \in \R ^2$ whose image is $\vec{b} = \mat{r} 3 \\ 2 \\ -5 \rix$.

    We solve $A \vec{x} = \vec{b}$. That is, \[ \mat{rr|r} 1 & -3 & 3 \\ 3 & 5 & 2 \\ -1 & 7 & -5 \rix \qquad\qquad \begin{aligned} R_2 &\leftarrow R_2 - 3 R_1 \\[0.5em] R_3 & \leftarrow R_3 + R_1 \\[0.5em] R_2 & \leftarrow 1 / 14\cdot R_2 \\[0.5em] R_3 & \leftarrow R_3 -4R_2 \\[0.5em] R_1 & \leftarrow R_1 + 3 R_2 \end{aligned} \qquad\qquad \mat{rr|r} 1 & 0 & 1.5 \\ 0 & 1 & -0.5 \\ 0 & 0 & 0 \rix .\] Therefore, $\vec{x} = \mat{c} x_1 \\ x_2 \rix = $

  4. Is there more than one $\vec{x}$ such that $T (\vec{x}) = \vec{b}$?
  5. Is the vector $\vec{c} = \mat{c} 3 \\ 2 \\ 5 \rix$ in the range of $T$?
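Parts 3 and 5 can be sanity-checked numerically (a NumPy sketch, assuming the solution $\vec{x} = (1.5, -0.5)$ produced by the row reduction in part 3; the least-squares call is only used to measure how far $\vec{c}$ is from the range):

```python
import numpy as np

A = np.array([[ 1, -3],
              [ 3,  5],
              [-1,  7]], dtype=float)
b = np.array([3, 2, -5], dtype=float)

x = np.array([1.5, -0.5])
assert np.allclose(A @ x, b)      # x is mapped onto b, as found above

# c differs from b only in the last entry, yet no input maps onto it:
c = np.array([3, 2, 5], dtype=float)
x_ls, residual, *_ = np.linalg.lstsq(A, c, rcond=None)
assert residual[0] > 1e-8         # the best approximation still misses c
```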

Recall : $A( \vec{u} + \vec{v}) = A \vec{u} + A \vec{v}$. Important to our discussion are transformations with this same property.

A transformation $T$ is said to be linear if:
  1. $T (\vec{u} + \vec{v}) = T(\vec{u}) + T (\vec{v})$ for all $\vec{u}, \vec{v} \in {\rm dom}(T)$.
  2. $T(c \cdot \vec{u}) = c \cdot T(\vec{u})$ for all scalars $c$ and all $\vec{u} \in {\rm dom}(T)$.

Because of , every matrix transformation $T(\vec{x}) = A \vec{x}$ is a linear transformation.

If $T$ is a linear transformation, then:
  1. $T (\vec{0}) = \vec{0}$
  2. $T(c \cdot \vec{u} + d \cdot \vec{v}) = c \cdot T(\vec{u}) + d\cdot T(\vec{v})$ for all scalars $c,d$ and $\vec{u}, \vec{v} \in {\rm dom}(T)$.
More generally, $T(c_1 \vec{u}_1 + \dots + c_p \vec{u}_p) = c_1 T(\vec{u}_1) + \dots + c_p T(\vec{u}_p)$.
Both properties follow immediately from the definition of a linear transformation. First, we have \[ T(\vec{0}) \eq \hspace{5em} \eq \hspace{5em} \eq \vec{0} \] and then \[ T(c \cdot \vec{u} + d\cdot \vec{v}) \eq \hspace{8em} \eq \hspace{8em} .\]
Moreover, a transformation is linear if and only if properties 1 and 2 in hold.
Is the following transformation linear? \[ T: \R^4 \to \R ^3 \qquad \text{by} \qquad T\pbr{\mat{c} x_1 \\ x_2 \\ x_3 \\ x_4 \rix} = \mat{c} x_1 + x_2 \\ x_2 + x_3 \\ x_3 + x_4 \rix \] Let $\vec{x}, \vec{y}\in \R ^4$ and $c\in \R $. Then \begin{align*} T( \vec{x}+\vec{y}) & \; =\; T\pbr{ \mat{c} x_1+y_1 \\ x_2+y_2 \\ x_3+y_3 \\ x_4+y_4 \\ \rix } \; = \; \hspace{10em} \\[2em] & \; = \; \hspace{6em} + \hspace{6em} \\[3em] & \; = \; T(\vec{x})+T(\vec{y}) \\[1em] T(c \vec{x}) &\; = \; T \pbr{\mat{c} c x_1 \\ c x_2 \\ cx_3 \\ cx_4 \rix} \; = \; \hspace{7em} \; = \; \hspace{8em} \; = \; c T(\vec{x}) \end{align*} Therefore, $T$ is linear. Another solution is to recognize that $T$ can be represented as a matrix transformation: \[ T(\vec{x}) \eq \hspace{15em} \eq \mat{c} x_1 + x_2 \\ x_2 + x_3 \\ x_3 + x_4 \rix .\] Therefore, by the properties of matrix-vector products, $T$ is a linear transformation.
Is the following transformation linear? \[ T : \R^3 \to \R ^3 \qquad \text{by} \qquad T \pbr{\mat{c} x_1 \\ x_2 \\ x_3 \rix} = \mat{c} x_1 \\ 0 \\ x_3 \rix .\] Let $\vec{x}, \vec{y}\in \R ^3$ and $c\in \R $. Then \begin{align*} T(\vec{x} + \vec{y}) & \; = \; T\pbr{\mat{c} x_1 + y_1 \\ x_2 + y_2 \\ x_3 + y_3 \rix} \; = \; \hspace{7em} \\[3em] & \; = \; \hspace{8em} + \hspace{8em} \\[3em] & \; = \; T(\vec{x}) + T(\vec{y}) \\[2em] T(c \vec{x}) & \; = \; T\pbr{\mat{c} c x_1 \\ cx_2 \\ c x_3 \rix} \; = \; \hspace{7em} \; = \; \hspace{8em} \; = \; c \cdot T(\vec{x}) \end{align*} Another solution is to recognize that $T$ can be represented via \[ T(\vec{x}) \eq A \vec{x} \eq \hspace{14em} \eq \mat{c} x_1 \\ 0 \\ x_3 \rix .\] Therefore, $T$ is a matrix transformation and hence linear.
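A numerical check of this example (a NumPy sketch; the matrix $A$ below is one way to fill in the representation asked for above):

```python
import numpy as np

# A matrix representation of T(x1, x2, x3) = (x1, 0, x3)
A = np.array([[1, 0, 0],
              [0, 0, 0],
              [0, 0, 1]])
T = lambda v: A @ v

rng = np.random.default_rng(1)
x, y = rng.standard_normal(3), rng.standard_normal(3)
c = 2.5

assert np.allclose(T(x + y), T(x) + T(y))   # additivity
assert np.allclose(T(c * x), c * T(x))      # homogeneity
assert np.allclose(T(x), [x[0], 0, x[2]])   # agrees with the formula
```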
Are the following transformations linear? \[ T : \R ^2 \to \R ^3 \qquad \text{by} \qquad T\pbr{\mat{c} x_1 \\ x_2 \rix} = \mat{c} 2x_1 + 3x_2 \\ x_1 + 5 \\ x_2 - 2x_1 \rix .\]
\[ S: \R ^2 \to \R ^2 \qquad \text{by} \qquad S \pbr{\mat{c} x_1 \\ x_2 \rix} = \mat{c} 4x_1 - 2x_2 \\ 3 \cdot \abs{x_2} \rix .\]
\[ P : \R^3 \to \R ^3 \qquad \text{by} \qquad P \pbr{\mat{c} x_1 \\ x_2 \\ x_3 \rix} = \mat{r} x_1 \\ x_2 \\ -x_3\rix .\]

Section 1.9: Matrix of a Linear Transformation

Note: We are not covering all the material in this section.

It turns out that every linear transformation from $\R^n $ to $\R^m $ can be described by a matrix transformation; the analogous statement for more general vector spaces requires material beyond this course.

But, it poses the question: how do we find the matrix?
The $n\times n$ identity matrix, denoted $I_n$ or just $I$ if the size is clear, has $1$ on the main diagonal and $0$ elsewhere: \[ I_n \eq \mat{cccc} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 1 \rix .\]

The $i$-th column of $I_n$ is denoted by the vector $\vec{e}_i \in \R^n$. For example, \[ I_3 \eq \mat{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \rix \eq \mat{ccc} \vec{e}_1 & \vec{e}_2 & \vec{e}_3 \rix .\] The identity matrix plays the role of $1$ in multiplication of real numbers: \[ I_3 \vec{x} \eq \mat{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \rix \mat{c} x_1 \\ x_2 \\ x_3 \rix \eq \mat{c} x_1 \\ x_2 \\ x_3 \rix \eq \vec{x} .\]
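Both facts are immediate to confirm numerically (a minimal NumPy sketch):

```python
import numpy as np

I3 = np.eye(3)                     # the 3x3 identity matrix
x = np.array([7.0, -2.0, 5.0])

assert np.allclose(I3 @ x, x)            # I x = x, like multiplying by 1
assert np.allclose(I3[:, 0], [1, 0, 0])  # e_1 is the first column of I_3
```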

The columns $\vec{e}_i$ of the identity matrix allow us to find the matrix transformation that represents a linear transformation $T$, assuming we know $T(\vec{e}_i)$.

Suppose $T: \R ^2 \to \R ^3$ is a linear transformation and \[ T(\vec{e}_1) = \mat{r} 2 \\ -3 \\ 4 \rix, \qquad T(\vec{e}_2) = \mat{r} 5 \\ 0 \\ 1 \rix \qquad \text{where}\quad \vec{e}_1 = \mat{c} 1 \\ 0 \rix, \quad \vec{e}_2 = \mat{c} 0 \\ 1 \rix .\] What is $T(\vec{x})$ for any vector $\vec{x} = \mat{c} x_1 \\ x_2 \rix$?

Notice that any vector $\vec{x}\in \R ^2$ can be written as \[ \mat{c} x_1 \\ x_2 \rix \eq \hspace{8em} \eq \hspace{8em} .\] Because $T$ is a linear transformation, \begin{align*} T(\vec{x}) & \eq \hspace{7em} \eq \hspace{7em} \\[4em] & \eq \hspace{7em} + \hspace{7em} \\[4em] & \eq \hspace{8em} \\[4em] & \eq \hspace{8em} \\[4em] & \eq \end{align*}

Let $T: \R^n \to \R^m $ be a linear transformation. Then there exists a unique $m\times n$ matrix $A$ such that \[ T(\vec{x}) \eq A \vec{x} \qquad \text{for all } \vec{x} \in \R^n .\] In fact, the columns of $A$ are \[ A \eq \mat{ccc} T(\vec{e}_1) & \dots & T(\vec{e}_n) \rix \] where $\vec{e}_i \in \R^n$ is the $i$-th column of $I_n$.

(See Exercise 41 for a proof of the uniqueness of $A$.)

We call $A$ the standard matrix for the linear transformation $T$.

Find the $3\times 2$ matrix $A$ such that $A \mat{c} x_1 \\ x_2 \rix = \mat{c} x_1 - 2x_2 \\ 4x_1 \\ 3x_1 + 2x_2 \rix = T(\vec{x})$. From our result, we know that \begin{align*} A & \eq \mat{cc} T(\vec{e}_1) & T(\vec{e}_2) \rix \\ & \eq \mat{cc} T\pbr{\mat{c} 1 \\ 0 \rix} & T \pbr{\mat{c} 0 \\ 1 \rix} \rix \\ & \eq \mat{cc} \qquad & \qquad \\ & \\ & \rix \end{align*} Check: \[ A \mat{c} x_1 \\ x_2 \rix \eq \hspace{5em} \eq \mat{c} x_1 - 2x_2 \\ 4x_1 \\ 3x_1 + 2x_2 \rix .\]
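The construction in this example translates directly into code (a NumPy sketch; column-stacking $T(\vec{e}_1)$ and $T(\vec{e}_2)$ is exactly the recipe from the theorem):

```python
import numpy as np

def T(x):
    x1, x2 = x
    return np.array([x1 - 2*x2, 4*x1, 3*x1 + 2*x2])

e1, e2 = np.eye(2)                     # rows of I_2 are e_1, e_2
A = np.column_stack([T(e1), T(e2)])    # standard matrix: columns T(e_i)

# A reproduces T on arbitrary inputs, as the theorem guarantees.
rng = np.random.default_rng(2)
for _ in range(3):
    x = rng.standard_normal(2)
    assert np.allclose(A @ x, T(x))
```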
Find the standard matrix representation of the linear transformation $T: \R^2 \to \R ^2$ which rotates a point about the origin through an angle of $\pi /4$ radians counterclockwise.
illustrating the rotation of the standard basis vectors e1 and e2 through an angle of positive pi/4 radians.
\[ T (\textcolor{green}{\vec{e}_1}) \eq \hspace{8em} \qquad T(\textcolor{red}{\vec{e}_2}) = \hspace{6em} .\] Therefore, the standard matrix representation is \[ A \eq \hspace{8em} \eq \hspace{8em} .\] In general, the matrix \[ G \eq \hspace{7em} \] rotates a vector in $\R ^2$ counterclockwise by an angle $\theta $. For an interesting application, see Exercise 47 in the Chapter 1 Supplementary Exercises.
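The rotation matrix can be checked numerically (a NumPy sketch; `rotation` is a hypothetical helper built from the standard counterclockwise rotation formula with $\cos\theta$ and $\sin\theta$ entries):

```python
import numpy as np

def rotation(theta):
    """Standard matrix for counterclockwise rotation of R^2 by theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

A = rotation(np.pi / 4)
e1 = np.array([1.0, 0.0])
s = np.sqrt(2) / 2

assert np.allclose(A @ e1, [s, s])        # e1 rotates to (sqrt2/2, sqrt2/2)
assert np.allclose(A @ [0, 1], [-s, s])   # e2 rotates to (-sqrt2/2, sqrt2/2)
assert np.allclose(rotation(np.pi / 2) @ e1, [0, 1])  # quarter turn
```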