MTH 215 — Intro to Linear Algebra
Section 3.1: Intro to Determinants
The motivation for studying determinants is rooted in linear transformations and geometry.
To start, find the image of the unit square in $\R^2$ under the transformation $T (\vec{x}) = A \vec{x}$ for each of the matrices $A$ below.
[Figure: the unit square and its image under each transformation.]
- $\displaystyle \xrightarrow{\displaystyle A = \mat{rr} -2 & 0 \\ 0 & 4 \rix}$
- $\displaystyle \xrightarrow{\displaystyle A = \mat{rr} 1 & -2 \\ 4 & 2 \rix}$
- $\displaystyle \xrightarrow{\displaystyle A = \mat{rr} -3 & 1 \\ 3 & -1 \rix}$
What is the area of the parallelogram with vertices $(0,0)$, $(a,c)$, $(b,d)$?
Notice that the blue triangles are congruent, and the green triangles are congruent. The other vertex is therefore $(a+b, c+d)$. The areas are: \begin{align*} R & \eq \text{Area of rectangle} & \eq & \\ T_1 & \eq \text{Area of blue triangle} & \eq & \\ T_2 & \eq \text{Area of green triangle} & \eq & \\ P & \eq \text{Area of parallelogram} & \eq & R - 2 T_1 - 2 T_2 \\ & & \eq & \\ & & \eq & \\ \end{align*}
Therefore, the linear transformation $T(\vec{x}) = A \vec{x}$ where $A = \mat{cc} a & b \\ c & d\rix$ will map the unit square (defined by $\vec{e}_1$ and $\vec{e}_2$) into the parallelogram spanned by the vectors $\mat{c} a \\ c \rix$ and $\mat{c} b \\ d \rix$.
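For reference, here is the standard bounding-box computation (a sketch assuming $a, b, c, d > 0$ and $ad > bc$; the figure may group the leftover regions slightly differently, e.g. folding the two $b \times c$ corner rectangles into the triangles): \begin{align*} R & \eq (a+b)(c+d) \eq ac + ad + bc + bd \\ P & \eq R - 2 \cdot \tfrac{1}{2} ac - 2 \cdot \tfrac{1}{2} bd - 2\, bc \\ & \eq ad + bc - 2bc \\ & \eq ad - bc \end{align*}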
The quantity $ad-bc$ for $2\times 2$ matrices already appeared in Chapter 2. We call it the determinant of the matrix.
Compute the determinants of the following matrices:
- $A = \mat{rr} -2 & 0 \\ 0 & 4 \rix$
- $A = \mat{rr} 1 & -2 \\ 4 & 2 \rix$
- $A = \mat{rr} -3 & 1 \\ 3 & -1 \rix$
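For checking your work, applying $ad - bc$ to each matrix gives: \begin{align*} \Det \mat{rr} -2 & 0 \\ 0 & 4 \rix & \eq (-2)(4) - (0)(0) \eq -8 \\ \Det \mat{rr} 1 & -2 \\ 4 & 2 \rix & \eq (1)(2) - (-2)(4) \eq 10 \\ \Det \mat{rr} -3 & 1 \\ 3 & -1 \rix & \eq (-3)(-1) - (1)(3) \eq 0 \end{align*} Note that the last determinant is $0$, matching the geometry: that transformation collapses the unit square onto a line, a parallelogram of zero area.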
What about $3\times 3$ matrices and larger? First, we need some notation. If $A$ is an $n\times n$ square matrix, let $A_{ij}$ denote the submatrix formed by removing row $i$ and column $j$. For example,
\[ A = \mat{rrrr} 1 & -2 & 5 & 0 \\ 2 & 0 & 4 & -1 \\ 3 & 1 & 0 & 7 \\ 0 & 4 & -2 & 0 \rix \qquad A_{32} = \mat{rrr} 1 & 5 & 0\\ 2 & 4 & -1 \\ 0 & -2 & 0 \rix .\]
Compute the determinants of $A = \mat{rrr} 1 & 5 & -1 \\ 2 & 4 & -1 \\ 0 & -2 & 0 \rix$ and $B = \mat{rrr} -2 & -4 & 5 \\ 0 & 1 & -6 \\ -8 & 2 & 1 \rix$.
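As a check on your work, expanding across the first row (the expansion defined shortly) gives: \begin{align*} \Det (A) & \eq 1 \cdot \Det \mat{rr} 4 & -1 \\ -2 & 0 \rix - 5 \cdot \Det \mat{rr} 2 & -1 \\ 0 & 0 \rix + (-1) \cdot \Det \mat{rr} 2 & 4 \\ 0 & -2 \rix \\ & \eq 1 \cdot (-2) - 5 \cdot (0) + (-1)(-4) \eq 2 \\ \Det (B) & \eq -2 \cdot \Det \mat{rr} 1 & -6 \\ 2 & 1 \rix - (-4) \cdot \Det \mat{rr} 0 & -6 \\ -8 & 1 \rix + 5 \cdot \Det \mat{rr} 0 & 1 \\ -8 & 2 \rix \\ & \eq -2 (13) + 4 (-48) + 5 (8) \eq -178 \end{align*}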
It turns out you can use any row (or column) of $A$ to compute the determinant. Some more notation is needed.
The $(i,j)$-cofactor of $A$, denoted $C_{ij}$, is \[ C_{ij} \eq (-1)^{i+j} \cdot \Det (A_{ij}) .\]
Recall that $A_{ij}$ is the submatrix obtained by removing row $i$ and column $j$. Then, \begin{align*} \Det (A) & \eq \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \Det (A_{1j} ) \\ & \eq a_{11}\Det (A_{11}) - a_{12} \Det (A_{12}) + \dots + (-1)^{1+n} a_{1n} \Det (A_{1n} ) \\ & \eq \sum_{j=1}^{n} a_{1j} C_{1j} \\ & \eq a_{11}C_{11} + a_{12}C_{12} + \dots + a_{1n} C_{1n} \end{align*} The latter is a cofactor expansion across the first row of $A$.
- Across $i$-th row: $\Det (A) = a_{i 1} C_{i 1} + a_{i 2} C_{i 2} + \dots + a_{i n} C_{i n} $.
- Down $j$-th column: $\Det (A) = a_{1j} C_{1j} + a_{2j} C_{2j} + \dots + a_{nj} C_{nj} $.
The plus or minus sign in the $(i,j)$-cofactor $C_{ij}$ depends on the position of $a_{ij} $ in the matrix, in particular $(-1)^{i+j}$, not on the sign of $a_{ij}$ itself.
The following pattern is useful in determining the sign of each cofactor: \[ \mat{ccccc} + & - & + & - & \dots \\ - & + & - & + & \dots \\ + & - & + &- & \dots \\ \vdots & \vdots & \vdots & \vdots & \ddots \rix .\]
Find the determinant of $A = \mat{rrr} 1 & 5 & -1 \\ 2 & 4 & -1 \\ 0 & -2 & 0 \rix$ using a cofactor expansion about $\dots$
- $\dots$ the first column:
- $\dots$ the second row:
- $\dots$ the third row:
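As a sanity check: the third row is the fastest choice, since two of its entries are zero and the expansion collapses to a single term, \[ \Det (A) \eq (-2) \cdot (-1)^{3+2} \cdot \Det \mat{rr} 1 & -1 \\ 2 & -1 \rix \eq 2 \cdot \pbr{-1 + 2} \eq 2 ,\] and every choice of row or column must yield this same value.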
Carefully choosing a row or column on which to use cofactor expansion can be very advantageous.
Compute the determinant of $A = \mat{rrrrr} 3 & -7 & 8 & 9 & -6 \\ 0 & 2 & -5 & 7 & 3 \\ 0 & 0 & 6 & 2 & 0 \\ 0 & 0 & -1 & 2 & -1 \\ 0 & 0 & 0 & 1 & 0 \rix$.
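A sketch of one efficient route, exploiting the zeros: expand down the first column twice, then across the third row of the remaining $3\times 3$ matrix: \begin{align*} \Det (A) & \eq 3 \cdot \Det \mat{rrrr} 2 & -5 & 7 & 3 \\ 0 & 6 & 2 & 0 \\ 0 & -1 & 2 & -1 \\ 0 & 0 & 1 & 0 \rix \eq 3 \cdot 2 \cdot \Det \mat{rrr} 6 & 2 & 0 \\ -1 & 2 & -1 \\ 0 & 1 & 0 \rix \\ & \eq 6 \cdot \pbr{ 1 \cdot (-1)^{3+2} \cdot \Det \mat{rr} 6 & 0 \\ -1 & -1 \rix } \eq 6 \cdot \pbr{-(-6)} \eq 36 \end{align*}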
By exploiting the structure of a matrix, the determinant can often be computed with very little work.
Notice that a diagonal matrix is both upper and lower triangular.
The following theorem will be very important later, and is of great computational advantage.
- $\mat{rrr} -2 & 4 & 5 \\ 0 & 4 & 10 \\ 0 & 0 & 3 \rix$
- $\mat{rrr} 7 & 0 & 0 \\ -2 & -1 & 0 \\ 9 & 1 & 3\rix$
- $\mat{rrr} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & -1\rix$
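Assuming the theorem above is the usual one (the determinant of a triangular matrix is the product of its diagonal entries), the three answers are immediate: \[ (-2)(4)(3) \eq -24, \qquad (7)(-1)(3) \eq -21, \qquad (2)(3)(-1) \eq -6 .\]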
Section 3.2: Properties of Determinants
There are several important properties of the determinant. First, we will inspect how the determinant changes when row operations are performed on the matrix.
- (Row Replacement) If a row operation of the form $R_i \leftarrow R_i + \alpha R_j$ is performed on $A$ to produce $B$, then $\Det (B) = \Det (A)$.
- (Row Swap) If a row operation of the form $R_i \leftrightarrow R_j$ is performed on $A$ to produce $B$, then $\Det (B) = - \Det (A)$.
- (Row Scaling) If a row operation of the form $R_i \leftarrow k \cdot R_i$ is performed on $A$ to produce $B$, then $\Det (B) = k \cdot \Det (A)$.
Suppose that an $\REF$ $B$ of $A$ is obtained using only row replacements and row swaps (no scaling). If $r$ row swaps are used, then \[ \Det (A) \eq (-1)^r \cdot \Det (B) .\]
Because $B$ is in $\REF$, it is upper triangular, so its determinant is the product of its diagonal entries.
\[ \Det (A) \eq \begin{cases} (-1)^r \cdot \pbr{\text{product of pivots in } B} & A \text{ is nonsingular}\\ 0 & A \text{ is singular} \end{cases} \]
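A small sketch of this method in action (a hypothetical example, not one from the text): for $A = \mat{rr} 0 & 1 \\ 2 & 4 \rix$, one row swap produces the $\REF$ $B = \mat{rr} 2 & 4 \\ 0 & 1 \rix$, so with $r = 1$, \[ \Det (A) \eq (-1)^1 \cdot (2)(1) \eq -2 ,\] which agrees with the direct computation $(0)(4) - (1)(2) = -2$.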
Next, we will investigate how matrix operations change (or do not change) the determinant.
An important implication of this result is that each of these properties still holds when “row” is replaced with “column”.
- If a multiple of a row (column) of $A$ is added to another row (column) of $A$ to produce $B$, then $\Det (B) = \Det (A)$.
- If two rows (columns) of $A$ are swapped to produce $B$, then $\Det (B) = - \Det (A)$.
- If one row (column) of $A$ is multiplied by $k$ to produce $B$, then $\Det (B) = k \cdot \Det (A)$.
Assuming $A$ is nonsingular, what is the determinant of $A^{-1}$? Observe that \[ A \cdot A^{-1} = I_n .\]
By , then \[ \Det (A A^{-1}) \eq \Det (I_n) \qquad \implies \qquad \]
It immediately follows that \[ \Det (A^{-1}) \eq \]
Also from , for any scalar $c \in \R $ then \[ \Det (c\cdot A) \eq \]
What about sums? Is $\Det (A+B) = \Det (A) + \Det (B)$?
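In general, no. A quick counterexample: take $A = B = I_2$. Then \[ \Det (A + B) \eq \Det (2 I_2) \eq 4 \qquad \text{but} \qquad \Det (A) + \Det (B) \eq 1 + 1 \eq 2 .\]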
- Row (column) replacements do not change the determinant, row (column) swaps negate the determinant, and row (column) scaling also scales the determinant.
- $\Det (AB) = \Det (A) \cdot \Det (B)$.
- $\Det (A) =0$ if and only if $A$ is singular.
- If $A$ is nonsingular, then $\Det (A^{-1}) = 1 / \Det (A)$.
- For any scalar $c$, $\Det (c\cdot A) = c^n \cdot \Det (A)$.
- In general, $\Det (A+B) \neq \Det (A) + \Det (B)$.
- $\Det (A^2 B^3)$
- $\Det (A^{-1} B^2 A^2 B^{-1})$
- $\Det (4 A^{-1} B)$
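The problem statement presumably supplies $\Det (A)$ and $\Det (B)$; as an illustration only, suppose $A$ and $B$ are $3 \times 3$ with $\Det (A) = 2$ and $\Det (B) = 3$ (hypothetical values). Then: \begin{align*} \Det (A^2 B^3) & \eq \pbr{\Det A}^2 \pbr{\Det B}^3 \eq 4 \cdot 27 \eq 108 \\ \Det (A^{-1} B^2 A^2 B^{-1}) & \eq \frac{1}{\Det A} \cdot \pbr{\Det B}^2 \cdot \pbr{\Det A}^2 \cdot \frac{1}{\Det B} \eq \Det (A) \cdot \Det (B) \eq 6 \\ \Det (4 A^{-1} B) & \eq 4^3 \cdot \frac{\Det B}{\Det A} \eq 64 \cdot \frac{3}{2} \eq 96 \end{align*}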
Even though the determinant is a handy theoretical tool, it is rarely used in applications. There are much more efficient ways to, for example, determine whether a matrix is singular. Furthermore, calculating a determinant is a dead-end street, whereas row reducing a matrix at least gives you very important knowledge about its fundamental subspaces.
TL;DR: You need to know properties of the determinant and be skilled in computing them—especially for Chapter 5, where this ability is crucial!—but in applications (the “real world”) they are seldom used.
Recall the following two examples from Section 2.3. These can be proven easily with determinants.
- Suppose $A$ and $B$ are $n\times n$ matrices, and the columns of $B$ are linearly dependent. Show that the columns of $AB$ are linearly dependent.
Solution: Since the columns of $B$ are linearly dependent, $B$ is singular, so $\Det (B)=0$. Therefore, \[ \Det (AB) \eq \Det (A) \cdot \Det (B) \eq 0 ,\] so $AB$ is singular and its columns are linearly dependent.
- Let $A$ and $B$ be $n\times n$ matrices. If $AB$ is invertible, show that $A$ is invertible.
Solution: If $AB$ is invertible, then $\Det (AB)\neq 0$. Therefore, \[ \Det (A) \cdot \Det (B) \eq \Det (AB) \neq 0 ,\] so $\Det (A) \neq 0$ and $\Det (B) \neq 0$. Hence neither $A$ nor $B$ is singular, and in particular $A$ is invertible.