MTH 215 Chapter 3

MTH 215 — Intro to Linear Algebra

Kyle Monette
Spring 2026

Section 3.1: Intro to Determinants

The motivation for studying determinants is rooted in linear transformations and geometry.

To start, find the images of the unit square in $\R^2$ under the transformations $T (\vec{x}) = A \vec{x}$.

$\displaystyle \xrightarrow{\displaystyle A = \mat{rr} -2 & 0 \\ 0 & 4 \rix}$

$\displaystyle \xrightarrow{\displaystyle A = \mat{rr} 1 & -2 \\ 4 & 2 \rix}$

$\displaystyle \xrightarrow{\displaystyle A = \mat{rr} -3 & 1 \\ 3 & -1 \rix}$

What is the area of the parallelogram with vertices at $(0,0)$, $(a,c)$, and $(b,d)$?

Notice that the blue triangles are congruent, and the green triangles are congruent. The other vertex is therefore $(a+b, c+d)$. The areas are: \begin{align*} R & \eq \text{Area of rectangle} & \eq & \\ T_1 & \eq \text{Area of blue triangle} & \eq & \\ T_2 & \eq \text{Area of green triangle} & \eq & \\ P & \eq \text{Area of parallelogram} & \eq & R - 2 T_1 - 2 T_2 \\ & & \eq & \\ & & \eq & \\ \end{align*}

Therefore, the linear transformation $T(\vec{x}) = A \vec{x}$ where $A = \mat{cc} a & b \\ c & d\rix$ will map the unit square (defined by $\vec{e}_1$ and $\vec{e}_2$) into the parallelogram spanned by the vectors $\mat{c} a \\ c \rix$ and $\mat{c} b \\ d \rix$.

The quantity $ad-bc$ for $2\times 2$ matrices already appeared in Chapter 2. We call it the determinant of the matrix.

The determinant of a $2\times 2$ matrix $A = \mat{cc} a & b \\ c & d \rix$ is given by \[ \Det (A) \eq \abs{A} \eq \vmat{cc} a & b \\ c & d \vrix \eq ad - bc .\]

Compute the determinants of the following matrices:

  1. $A = \mat{rr} -2 & 0 \\ 0 & 4 \rix$
  2. $A = \mat{rr} 1 & -2 \\ 4 & 2 \rix$
  3. $A = \mat{rr} -3 & 1 \\ 3 & -1 \rix$
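These are the same three matrices used for the unit-square pictures above. As a quick sanity check (not part of the notes), the $2\times 2$ formula can be coded in a few lines of Python; the helper name `det2` is made up for this sketch:

```python
# Sketch: the 2x2 determinant formula det(A) = ad - bc.
def det2(A):
    """Determinant of a 2x2 matrix, given as a list of two rows."""
    (a, b), (c, d) = A
    return a * d - b * c

# The three matrices from the exercise above.
for A in ([[-2, 0], [0, 4]],
          [[1, -2], [4, 2]],
          [[-3, 1], [3, -1]]):
    print(det2(A))
```

Note that the third matrix has determinant $0$: its columns are parallel, so it collapses the unit square onto a line segment, foreshadowing the connection between a zero determinant and singularity.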

What about $3\times 3$ matrices and larger? First, we need some notation. If $A$ is an $n\times n$ square matrix, let $A_{ij}$ denote the submatrix formed by removing row $i$ and column $j$. For example,

\[ A = \mat{rrrr} 1 & -2 & 5 & 0 \\ 2 & 0 & 4 & -1 \\ 3 & 1 & 0 & 7 \\ 0 & 4 & -2 & 0 \rix \qquad A_{32} = \mat{rrr} 1 & 5 & 0\\ 2 & 4 & -1 \\ 0 & -2 & 0 \rix .\]

For $n\ge 2$, the determinant of an $n\times n$ matrix $A$ is the sum \begin{align*} \Det (A) & \eq a_{11}\Det (A_{11}) - a_{12} \Det (A_{12}) + \dots + (-1)^{1+n} a_{1n} \Det (A_{1n} )\\ & \eq \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \Det (A_{1j} ) \end{align*}

Compute the determinants of $A = \mat{rrr} 1 & 5 & -1 \\ 2 & 4 & -1 \\ 0 & -2 & 0 \rix$ and $B = \mat{rrr} -2 & -4 & 5 \\ 0 & 1 & -6 \\ -8 & 2 & 1 \rix$.
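The recursive definition above translates directly into a program. A minimal Python sketch (not from the notes; the names `minor` and `det` are made up), using 0-based indices so the alternating sign becomes $(-1)^j$:

```python
# Determinant by cofactor expansion across the first row, as in the definition.
def minor(A, i, j):
    """Submatrix A_ij: delete row i and column j (0-indexed)."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    # sum_j (-1)^j * a_{0j} * det(A_{0j}) in 0-based indexing
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

print(det([[1, 5, -1], [2, 4, -1], [0, -2, 0]]))   # the matrix A above
print(det([[-2, -4, 5], [0, 1, -6], [-8, 2, 1]]))  # the matrix B above
```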

It turns out you can use any row (or column) of $A$ to compute the determinant. Some more notation is needed.

The $(i,j)$-cofactor of $A$, denoted $C_{ij}$, is \[ C_{ij} \eq (-1)^{i+j} \cdot \Det (A_{ij}) .\]

Recall that $A_{ij}$ is the submatrix obtained by removing row $i$ and column $j$. Then, \begin{align*} \Det (A) & \eq \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \Det (A_{1j} ) \\ & \eq a_{11}\Det (A_{11}) - a_{12} \Det (A_{12}) + \dots + (-1)^{1+n} a_{1n} \Det (A_{1n} ) \\ & \eq \sum_{j=1}^{n} a_{1j} C_{1j} \\ & \eq a_{11}C_{11} + a_{12}C_{12} + \dots + a_{1n} C_{1n} \end{align*} The latter is a cofactor expansion across the first row of $A$.

The determinant of an $n\times n$ matrix can be computed with cofactor expansion across any row or down any column.
  • Across $i$-th row: $\Det (A) = a_{i 1} C_{i 1} + a_{i 2} C_{i 2} + \dots + a_{i n} C_{i n} $.
  • Down $j$-th column: $\Det (A) = a_{1j} C_{1j} + a_{2j} C_{2j} + \dots + a_{nj} C_{nj} $.

The plus or minus sign in the $(i,j)$-cofactor $C_{ij}$ depends on the position of $a_{ij} $ in the matrix, in particular $(-1)^{i+j}$, not on the sign of $a_{ij}$ itself.

The following pattern is useful in determining the sign of each cofactor: \[ \mat{ccccc} + & - & + & - & \dots \\ - & + & - & + & \dots \\ + & - & + &- & \dots \\ \vdots & \vdots & \vdots & \vdots & \ddots \rix .\]
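The theorem can be spot-checked numerically. A Python sketch (helper names are assumptions for this illustration) that expands across an arbitrary row $i$, with the sign $(-1)^{i+j}$ coming from the checkerboard pattern above:

```python
# Cofactor expansion gives the same value no matter which row is used.
def minor(A, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):  # expansion across the first row, as in the definition
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def expand_row(A, i):
    """Cofactor expansion across row i; (-1)**(i+j) is the sign pattern."""
    return sum((-1) ** (i + j) * A[i][j] * det(minor(A, i, j)) for j in range(len(A)))

A = [[1, 5, -1], [2, 4, -1], [0, -2, 0]]
print([expand_row(A, i) for i in range(3)])  # one value per choice of row
```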

Find the determinant of $A = \mat{rrr} 1 & 5 & -1 \\ 2 & 4 & -1 \\ 0 & -2 & 0 \rix$ using a cofactor expansion about $\dots$

  1. $\dots$ the first column:
  2. $\dots$ the second row:
  3. $\dots$ the third row:

Carefully choosing a row or column on which to use cofactor expansion can be very advantageous.

Compute the determinant of $A = \mat{rrrrr} 3 & -7 & 8 & 9 & -6 \\ 0 & 2 & -5 & 7 & 3 \\ 0 & 0 & 6 & 2 & 0 \\ 0 & 0 & -1 & 2 & -1 \\ 0 & 0 & 0 & 1 & 0 \rix$.
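A hedged check of this computation in Python (not part of the notes): expanding down the first column means only the nonzero entries contribute, which is exactly why choosing that column pays off here.

```python
# Determinant by cofactor expansion down the first column; zero entries
# in that column contribute nothing, so this matrix reduces quickly.
def minor(A, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0] * det(minor(A, i, 0)) for i in range(len(A)))

A = [[3, -7, 8, 9, -6],
     [0, 2, -5, 7, 3],
     [0, 0, 6, 2, 0],
     [0, 0, -1, 2, -1],
     [0, 0, 0, 1, 0]]
print(det(A))
```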

By exploiting the structure of a matrix, the determinant can be computed easily.

An $n\times n$ matrix is said to be upper triangular if all entries below the main diagonal are zero, and lower triangular if all entries above the main diagonal are zero. \[ \mat{ccccc} a_{11} & a_{12} & a_{13} & \dots & a_{1n}\\ 0 & a_{22} & a_{23} & \dots & a_{2n}\\ 0 & 0 & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & a_{nn} \rix \qquad \qquad \mat{ccccc} a_{11} & 0 & 0 & \dots & 0 \\ a_{21} & a_{22} & 0 & \dots & 0 \\ a_{31} & a_{32} & a_{33} & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n 1} & a_{n 2} & a_{n 3} & \dots & a_{nn} \rix .\] When the distinction is not needed, a matrix is simply called triangular if it is either upper or lower triangular.

Notice that a diagonal matrix is both upper and lower triangular.

The following theorem will be very important later, and is of great computational advantage.

The determinant of any $n\times n$ triangular matrix is the product of the entries on the main diagonal.

Compute the determinants of the following matrices.
  1. $\mat{rrr} -2 & 4 & 5 \\ 0 & 4 & 10 \\ 0 & 0 & 3 \rix$
  2. $\mat{rrr} 7 & 0 & 0 \\ -2 & -1 & 0 \\ 9 & 1 & 3\rix$
  3. $\mat{rrr} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & -1\rix$
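A minimal sketch of the theorem in Python (not part of the notes; `diag_product` is a made-up name): for a triangular matrix, no cofactor expansion is needed at all.

```python
# For a triangular matrix, the determinant is the product of the diagonal.
def diag_product(A):
    p = 1
    for i in range(len(A)):
        p *= A[i][i]
    return p

# The three triangular (and diagonal) matrices from the exercise above.
for A in ([[-2, 4, 5], [0, 4, 10], [0, 0, 3]],
          [[7, 0, 0], [-2, -1, 0], [9, 1, 3]],
          [[2, 0, 0], [0, 3, 0], [0, 0, -1]]):
    print(diag_product(A))
```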

Section 3.2: Properties of Determinants

There are several important properties of the determinant. First, we will inspect how the determinant changes when row operations are performed on the matrix.

Let $A$ be a square matrix.
  1. (Row Replacement) If a row operation of the form $R_i \leftarrow R_i + \alpha R_j$ is performed on $A$ to produce $B$, then $\Det (B) = \Det (A)$.
  2. (Row Swap) If a row operation of the form $R_i \leftrightarrow R_j$ is performed on $A$ to produce $B$, then $\Det (B) = - \Det (A)$.
  3. (Row Scaling) If a row operation of the form $R_i \leftarrow k \cdot R_i$ is performed on $A$ to produce $B$, then $\Det (B) = k \cdot \Det (A)$.
Let $A = \mat{rrr} 1 & -4 & 2 \\ -2 & 8 & -9 \\ -1 & 7 & 0 \rix$. Instead of computing the determinant by definition, we can perform the following row operations: \begin{align*} R_2 &\leftarrow R_2 + 2 R_1 & \quad & \mat{rrr} 1 & -4 & 2 \\ 0 & 0 & -5 \\ -1 & 7 & 0 \rix & \\ R_3 &\leftarrow R_3 + R_1 & \quad & \mat{rrr} 1 & -4 & 2 \\ 0 & 0 & -5 \\ 0 & 3 & 2 \rix & \\ R_2 & \leftrightarrow R_3 & \quad & \mat{rrr} 1 & -4 & 2 \\ 0 & 3 & 2 \\ 0 & 0 & -5 \rix & \ \end{align*} This new matrix $B$ is upper triangular, and so:
Let $A = \mat{rrrr} 2 & -8 & 6 & 8 \\ 3 & -9 & 5 & 10 \\ -3 & 0 & 1 & -2 \\ 1 & -4 & 0 & 6 \rix $. \begin{align*} R_1 & \leftarrow \frac{1}{2}\cdot R_1 & \quad & \mat{rrrr} 1 & -4 & 3 & 4 \\ 3 & -9 & 5 & 10 \\ -3 & 0 & 1 & -2 \\ 1 & -4 & 0 & 6 \rix & \\ R_2 & \leftarrow R_2 - 3 R_1 ,\ R_3 \leftarrow R_3 + 3 R_1 ,\ R_4 \leftarrow R_4 - R_1 & \quad & \mat{rrrr} 1 & -4 & 3 & 4 \\ 0 & 3 & -4 & -2 \\ 0 & -12 & 10 & 10 \\ 0 & 0 & -3 & 2 \rix & \\ R_3 & \leftarrow R_3 + 4 R_2 ,\ R_4 \leftarrow R_4 -\frac{1}{2}R_3 & \quad & \mat{rrrr} 1 & -4 & 3 & 4 \\ 0 & 3 & -4 & -2 \\ 0 & 0 & -6 & 2 \\ 0 & 0 & 0 & 1 \rix & \ \end{align*} Call this new matrix $B$. Then:
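The row-reduction strategy in these two examples can be automated. A Python sketch (assumptions: exact arithmetic via the standard-library `fractions` module, and `det_by_elimination` is a made-up name) that uses only row replacements, tracking one sign flip per row swap:

```python
from fractions import Fraction

# Determinant via row reduction: replacements leave it unchanged, each swap
# flips the sign, and the final triangular matrix contributes the product
# of its diagonal entries.
def det_by_elimination(A):
    A = [[Fraction(x) for x in row] for row in A]
    n, sign = len(A), 1
    for c in range(n):
        # Find a pivot row at or below row c, swapping if necessary.
        p = next((r for r in range(c, n) if A[r][c] != 0), None)
        if p is None:
            return Fraction(0)          # no pivot in this column: singular
        if p != c:
            A[c], A[p] = A[p], A[c]
            sign = -sign                # a row swap negates the determinant
        for r in range(c + 1, n):       # row replacements: determinant unchanged
            m = A[r][c] / A[c][c]
            A[r] = [x - m * y for x, y in zip(A[r], A[c])]
    prod = Fraction(sign)
    for i in range(n):
        prod *= A[i][i]
    return prod

print(det_by_elimination([[1, -4, 2], [-2, 8, -9], [-1, 7, 0]]))
print(det_by_elimination([[2, -8, 6, 8], [3, -9, 5, 10],
                          [-3, 0, 1, -2], [1, -4, 0, 6]]))
```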

Suppose that $A$ can be reduced to an $\REF$ $B$ using only row replacements and row swaps (no scaling). If $r$ row swaps are used, then \[ \Det (A) \eq (-1)^r \cdot \Det (B) .\]

Because $B$ is in $\REF$, it is upper triangular, so its determinant is the product of its diagonal entries.

\[ \Det (A) \eq \begin{cases} (-1)^r \cdot \pbr{\text{product of pivots in } B} & A \text{ is nonsingular}\\ 0 & A \text{ is singular} \end{cases} \]

A square matrix $A$ is invertible if and only if $\Det (A)\neq 0$.

Next, we will investigate how matrix operations change (or do not change) the determinant.

If $A$ is a square matrix, then $\Det (A^T) = \Det (A)$.
See page 182 in the textbook for details. A short summary is the following---cofactor expansion along the first row of $A$ is the same as expansion along the first column of $A^T$. In the $2\times 2$ case, we can easily prove the theorem by noticing that: \begin{align*} \Det (A) & \eq \vmat{cc} a & b \\ c & d \vrix \eq ad-bc \\ \Det(A^T) & \eq \vmat{cc} a & c \\ b & d \vrix \eq ad-cb \end{align*} (Try this for the $3\times 3$ case!)

An important implication of this result is that the theorem on row operations still holds when “row” is replaced with “column”.

Let $A$ be a square matrix.
  1. If a multiple of a row (column) of $A$ is added to another row (column) of $A$ to produce $B$, then $\Det (B) = \Det (A)$.
  2. If two rows (columns) of $A$ are swapped to produce $B$, then $\Det (B) = - \Det (A)$.
  3. If one row (column) of $A$ is multiplied by $k$ to produce $B$, then $\Det (B) = k \cdot \Det (A)$.
For any two $n\times n$ matrices $A$ and $B$, \[ \Det (AB) \eq \Det (A) \cdot \Det (B) .\]
A proof on page 184 of the textbook uses elementary matrices; it is involved and will not be discussed here.
Warning! $A$ and $B$ must be square matrices! A common misuse of this theorem is claiming that, for example, \[ A = \mat{cc} 1 & 2 \\ 3 & 4 \\ 5 & 6 \rix , \quad B = \mat{ccc} 1 & 1 & 1 \\ 2 & 2 & 2 \rix \quad \implies \quad \Det (AB) = \vmat{ccc} 5 & 5 & 5 \\ 11 & 11 & 11 \\ 17 & 17 & 17 \vrix = \underbrace{\Det (A) \cdot \Det (B)}_{\text{\normalsize \color{red} not even defined!!}} .\]

Assuming $A$ is nonsingular, what is the determinant of $A^{-1}$? Observe that \[ A \cdot A^{-1} = I_n .\]

By the previous theorem, \[ \Det (A A^{-1}) \eq \Det (I_n) \qquad \implies \qquad \]

It immediately follows that \[ \Det (A^{-1}) \eq \]

Also from the row-scaling property, for any scalar $c \in \R $, \[ \Det (c\cdot A) \eq \]

What about sums? Is $\Det (A+B) = \Det (A) + \Det (B)$?

Review of Determinant Properties

Let $A$ and $B$ be $n\times n$ matrices.
  1. Row (column) replacements do not change the determinant, row (column) swaps negate the determinant, and row (column) scaling also scales the determinant.
  2. $\Det (AB) = \Det (A) \cdot \Det (B)$.
  3. $\Det (A) =0$ if and only if $A$ is singular.
  4. If $A$ is nonsingular, then $\Det (A^{-1}) = 1 / \Det (A)$.
  5. For any scalar $c$, $\Det (c\cdot A) = c^n \cdot \Det (A)$.
  6. In general, $\Det (A+B) \neq \Det (A) + \Det (B)$.
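Properties 2 and 5, along with the transpose rule $\Det(A^T)=\Det(A)$, can be spot-checked on random integer matrices. A Python sketch (helper names are made up for this illustration):

```python
import random

# Spot-check the determinant properties on a random 3x3 integer example.
def minor(A, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):  # cofactor expansion across the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n = 3
random.seed(0)
A = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
B = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]

print(det(matmul(A, B)) == det(A) * det(B))                          # product rule
print(det([[3 * x for x in row] for row in A]) == 3 ** n * det(A))   # det(cA) = c^n det(A)
print(det([list(col) for col in zip(*A)]) == det(A))                 # det(A^T) = det(A)
```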
Suppose $A$ and $B$ are $5\times 5$ matrices and $\Det (A)=2$, $\Det (B) = -3$. If possible, compute:
  1. $\Det (A^2 B^3)$
  2. $\Det (A^{-1} B^2 A^2 B^{-1})$
  3. $\Det (4 A^{-1} B)$

Even though the determinant is a handy theoretical tool, it is almost never used in applications. There are far more efficient ways to, for example, determine whether a matrix is singular. Furthermore, calculating a determinant is a dead end, whereas row reducing a matrix at least gives you very important information about its fundamental subspaces.

TL;DR: You need to know properties of the determinant and be skilled in computing them—especially for Chapter 5, where this ability is crucial!—but in applications (the “real world”) they are seldom used.

Recall the following two examples from Section 2.3. They can be proven easily with determinants.

  1. Suppose $A$ and $B$ are $n\times n$ matrices, and the columns of $B$ are linearly dependent. Show that the columns of $AB$ are linearly dependent.

    Solution: The columns of $AB$ are linearly dependent if and only if $\Det (AB)=0$. Notice that $\Det (B)=0$ by the same reasoning. Therefore, \[ \Det (AB) \eq \Det (A) \cdot \Det (B) \eq 0 .\] The conclusion follows.

  2. Let $A$ and $B$ be $n\times n$ matrices. If $AB$ is invertible, show that $A$ is invertible.

    Solution: If $AB$ is invertible, then $\Det (AB)\neq 0$. Therefore, \[ \Det (A) \cdot \Det (B) \eq \Det (AB) \neq 0 .\] Neither factor can be zero, so neither $A$ nor $B$ is singular; in particular, $A$ is invertible.