MTH 215 Chapter 5

MTH 215 — Intro to Linear Algebra

Kyle Monette
Spring 2026

Section 5.1: Eigenvalues and Eigenvectors

The transformation $\vec{x} \mapsto A \vec{x}$ can move vectors in a variety of directions. However, there are special vectors on which the action of $A$ is simple.

For example, consider $A = \mat{rr} 0 & -2 \\ -4 & 2 \rix$. Graph $A \vec{x}$ for each $\vec{x}$ shown.

(Figure: four vectors drawn from the origin in the $xy$-plane, to $(0,1)$, $(-1,1)$, $(1,1)$, and $(-1,2)$.)

An eigenvector of an $n\times n$ matrix $A$ is a nonzero vector $\vec{x}$ such that \[ A \vec{x} \eq \lambda \vec{x} \] for some scalar $\lambda$ (which may be zero).

An eigenvalue $\lambda $ of $A$ is a scalar for which there is a nontrivial solution to $A \vec{x} = \lambda \vec{x}$.
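With the matrix $A = \mat{rr} 0 & -2 \\ -4 & 2 \rix$ from the opening example, direct multiplication exhibits two such special vectors: \[ A \mat{r} 1 \\ 1 \rix \eq \mat{r} -2 \\ -2 \rix \eq -2 \mat{r} 1 \\ 1 \rix, \qquad A \mat{r} -1 \\ 2 \rix \eq \mat{r} -4 \\ 8 \rix \eq 4 \mat{r} -1 \\ 2 \rix .\] So $(1,1)$ is an eigenvector of $A$ with eigenvalue $-2$, and $(-1,2)$ is an eigenvector with eigenvalue $4$.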

Show that $A = \mat{cc} 1 & 6 \\ 5 & 2 \rix$ has an eigenvalue of $\lambda =7$, and find corresponding eigenvectors.

$\lambda =7$ is an eigenvalue of $A$ if and only if the equation

has a nontrivial solution. That is, if and only if the homogeneous equation

has a nontrivial solution. This augmented matrix is

and is row equivalent to

So, eigenvectors corresponding to $\lambda =7$ are

The eigenspace of $A$ corresponding to eigenvalue $\lambda $ is the set of all solutions to \[ (A-\lambda I) \vec{x} \eq \vec{0} .\] That is, the nullspace of $A-\lambda I$.

Note: The eigenspace of $A$ corresponding to $\lambda $ is a subspace of $\R^n $.

In the last example, another eigenvalue for $A = \mat{cc} 1 & 6 \\ 5 & 2 \rix$ is $\lambda =-4$. Eigenvectors corresponding to $\lambda =-4$ are \[ \vec{x} \eq \mat{r} x_1 \\ x_2 \rix \eq \]

(Figure: the eigenspace for $\lambda = 7$ is the line through the origin and $(1,1)$, in blue; the eigenspace for $\lambda = -4$ is the line through the origin and $(6,-5)$, in green.)
Let $A = \mat{rrr} 4 & -1 & 6 \\ 2 & 1 & 6 \\ 2 & -1 & 8 \rix$. An eigenvalue of $A$ is $\lambda =2$. Find a basis for the corresponding eigenspace.

We need to find a basis for $\Null (A-2 I)$. Form the matrix \[ A - 2 I \eq \mat{rrr} 4 & -1 & 6 \\ 2 & 1 & 6 \\ 2 & -1 & 8 \rix - \mat{rrr} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \rix \eq \mat{rrr} 2 & -1 & 6 \\ 2 & -1 & 6 \\ 2 & -1 & 6 \rix \] and then row reduce the augmented matrix for $(A-2I) \vec{x} = \vec{0}$: \[ \mat{rrr|r} 2 & -1 & 6 & 0 \\ 2 & -1 & 6 & 0 \\ 2 & -1 & 6 & 0 \rix \qquad \xrightarrow{\text{\normalsize Row Ops.}} \qquad \mat{rrr|r} 1 & -1/2 & 3 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \rix .\]

From here, it is clear that $\lambda =2$ is an eigenvalue since there are free variables. Furthermore, \[ \mat{c} x_1 \\ x_2 \\ x_3 \rix \eq \] and so a basis for the eigenspace is:

Suppose $\lambda $ is an eigenvalue of $A$ with corresponding eigenvector $\vec{x}$. Determine an eigenvalue-eigenvector pair of $A^2$ and of $A^3$.

Since $\lambda $ and $\vec{x}$ are an eigenvalue-eigenvector pair, $A \vec{x} = \lambda \vec{x}$. Multiplying on the left by $A$ yields \[ A^2 \vec{x} \eq \lambda A \vec{x} .\]
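Iterating this argument gives $A^k \vec{x} = \lambda^k \vec{x}$, which is easy to sanity-check numerically. A minimal sketch in plain Python (the helper `matvec` is ours), using the eigenpair $\lambda = 7$, $\vec{x} = (1,1)$ of the matrix $A = \mat{cc} 1 & 6 \\ 5 & 2 \rix$ from the earlier example:

```python
def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, 6], [5, 2]]
x = [1, 1]              # eigenvector of A for eigenvalue 7

Ax = matvec(A, x)       # A x    = 7 x
AAx = matvec(A, Ax)     # A^2 x  = 7 (A x) = 49 x
AAAx = matvec(A, AAx)   # A^3 x  = 343 x

print(Ax, AAx, AAAx)    # -> [7, 7] [49, 49] [343, 343]
```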

Lots of applications in differential or difference equations and signal processing are based on this concept! See problems 41 and 42 in the textbook if you are interested.

The eigenvalues of a triangular matrix are the entries on the main diagonal.
Suppose $A$ is upper triangular (proof holds similarly for lower triangular). Then \[ A - \lambda I \eq \mat{ccccc} a_{11}-\lambda & a_{12} & a_{13} & \dots & a_{1n} \\ 0 & a_{22}-\lambda & a_{23} & \dots & a_{2n} \\ 0 & 0 & a_{33}-\lambda & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & a_{nn} -\lambda \rix .\] Recall that $\lambda $ is an eigenvalue if and only if $(A-\lambda I) \vec{x} = \vec{0}$ has a nontrivial solution, i.e., has a free variable. And, $(A-\lambda I) \vec{x} = \vec{0}$ has a free variable if and only if at least one of the diagonal entries is $0$. This happens if and only if $\lambda $ equals $a_{11}, a_{22}, \dots , a_{nn}$.
For example, what are the eigenvalues of $A$ and $B$ where \[ A = \mat{rrr} 3 & 6 & -8 \\ 0 & 0 & 6 \\ 0 & 0 & 2 \rix \hspace{10em} B = \mat{rrr} 4 & 0 & 0 \\ -2 & 1 & 0 \\ 5 & 3 & 4 \rix .\]

What happens when $A$ has a zero eigenvalue?

$A$ has a zero eigenvalue if and only if $A \vec{x} = 0 \vec{x}$ has a nontrivial solution
if and only if $A \vec{x} = \vec{0}$ has a nontrivial solution
if and only if $A \vec{x} = \vec{0}$ has a free variable
if and only if $A$ has a column without a pivot
if and only if $A$ is singular
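For a concrete instance (our own small example), $A = \mat{rr} 1 & 2 \\ 2 & 4 \rix$ is singular since its second row is twice its first, and indeed \[ A \mat{r} 2 \\ -1 \rix \eq \mat{r} 0 \\ 0 \rix \eq 0 \mat{r} 2 \\ -1 \rix ,\] so $\lambda = 0$ is an eigenvalue of $A$ with eigenvector $(2,-1)$.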
If $\vec{v}_1, \dots , \vec{v}_p$ are eigenvectors corresponding to distinct eigenvalues $\lambda_1, \dots , \lambda_p$ of an $n\times n$ matrix $A$, then the set $\cbr{\vec{v}_1, \dots , \vec{v}_p}$ is linearly independent.
See the textbook (page 278). This result will be crucial for us later.
  1. Suppose $A$ has an eigenvalue $\lambda$ corresponding to eigenvector $\vec{x}$. Is $-\vec{x}$ also an eigenvector of $A$?
  2. Now suppose $A$ also has an eigenvalue $-\lambda$. What is a corresponding eigenvector?
A mysterious matrix $A$ is such that \[ A \mat{r} 2 \\ -1 \\ 3 \rix \;=\; \mat{r} -4 \\ 2 \\ -6\rix, \qquad A \mat{r} 1 \\ 0 \\ 0 \rix \;=\; \mat{r} 0 \\ 0 \\ 0 \rix, \qquad A \mat{r} 1 \\ 1 \\ 2 \rix \;=\; \mat{r} -1 \\ -1 \\ -2 \rix .\] What can be said about the matrix?

Section 5.2: The Characteristic Equation

Recall that $\vec{x}$ is an eigenvector associated with eigenvalue $\lambda$ if $A \vec{x} = \lambda \vec{x}$. Given an eigenvalue $\lambda $, we can find eigenvectors by solving \[ (A - \lambda I) \vec{x} \eq \vec{0} \] for $\vec{x}$. But, how do we find the eigenvalues?

\begin{align*} (A-\lambda I) \vec{x} = \vec{0} & \quad \text{ with } \vec{x} \neq \vec{0} \\ & \quad \text{implies} \\ (A-\lambda I) \vec{x} = \vec{0} & \quad \\ & \quad \text{implies}\\ (A-\lambda I) & \quad \\ & \quad \text{implies} \\ \end{align*}

Find the eigenvalues of $A = \mat{rr} 0 & 1 \\ -6 & 5 \rix$.
A scalar $\lambda $ is an eigenvalue of $A$ if and only if $\lambda $ satisfies the characteristic equation of $A$: \[ \Det (A-\lambda I) \eq 0 .\] (We call $\Det (A-\lambda I)$ the characteristic polynomial of $A$.)
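For a $2\times 2$ matrix $\mat{cc} a & b \\ c & d \rix$, the characteristic equation is the quadratic $\lambda^2 - (a+d)\lambda + (ad-bc) = 0$, so the eigenvalues follow from the quadratic formula. A minimal sketch in plain Python (the function name `eig2` is ours; real-eigenvalue case only), checked against the example above:

```python
import math

def eig2(a, b, c, d):
    """Real eigenvalues of [[a, b], [c, d]] via the characteristic
    quadratic  lambda^2 - (a + d) lambda + (ad - bc) = 0."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc < 0:
        raise ValueError("complex eigenvalues")
    r = math.sqrt(disc)
    return sorted([(tr - r) / 2, (tr + r) / 2])

# A = [[0, 1], [-6, 5]] from the example above
print(eig2(0, 1, -6, 5))   # -> [2.0, 3.0]
```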
Find the eigenvalues of $A = \mat{rrr} 1 & 2 & 1 \\ 0 & -5 & 0 \\ 1 & 8 & 1 \rix$. Since $A - \lambda I = \mat{ccc} 1-\lambda & 2 & 1 \\ 0 & -5-\lambda & 0 \\ 1 & 8 & 1-\lambda \rix$ then:
Find the characteristic polynomial of $A = \mat{rrrr} 5 & -2 & 6 & -1 \\ 0 & 3 & -8 & 0 \\ 0 & 0 & 5 & 4 \\ 0 & 0 & 0 & 1 \rix$. Since $A$, and hence $A-\lambda I$, is upper triangular, the determinant is the product of the diagonal entries. So, the characteristic polynomial of $A$ is \[ \Det (A-\lambda I) \eq \vmat{cccc} 5-\lambda & -2 & 6 & -1 \\ 0 & 3-\lambda & -8 & 0 \\ 0 & 0 & 5-\lambda & 4 \\ 0 & 0 & 0 & 1-\lambda \vrix \eq (5-\lambda )^2 (3-\lambda )(1-\lambda ) .\] Therefore, the eigenvalues of $A$ are $5$ (with algebraic multiplicity 2), $3$, and $1$.

The algebraic multiplicity of an eigenvalue $\lambda$ is the multiplicity of $\lambda $ as a root in the characteristic polynomial: \[ \algmult(\lambda ) \eq \text{ multiplicity of } \lambda \text{ as a root in } p(\lambda ) .\]

For example, in the last example \begin{align*} \algmult(5) & \eq \\ \algmult(3) & \eq \\ \algmult(1) & \eq \end{align*}

For any $n\times n$ matrix $A$:
  • The characteristic polynomial of $A$, $\Det (A-\lambda I)$, is of degree $n$.
  • $A$ has $n$ eigenvalues when counted with algebraic multiplicity. Note: complex eigenvalues are possible, even for real matrices.
  • $A$ is invertible if and only if $0$ is not an eigenvalue of $A$.

Computing the eigenvalues of a matrix directly can be very difficult. Instead, we opt for finding the eigenvalues of a different matrix whose eigenvalues are the same as the original's but for which the calculation is easier (what matrices come to mind?).

For $n\times n$ matrices $A$ and $B$, we say that $A$ is similar to $B$ if there exists an invertible matrix $P$ for which \[ P^{-1} A P = B \qquad \text{or equivalently,} \qquad A = PBP^{-1} .\] The act of transforming $A$ to $P^{-1}AP$ is called a similarity transformation.

Are the eigenvalues preserved with a similarity transformation? Let $B = P^{-1}AP$. Then \begin{align*} \Det (B-\lambda I) & \eq \Det \pbr{P^{-1}AP - \lambda I} \\ & \eq \Det \pbr{P^{-1} AP - P^{-1} \lambda I P} \\ & \eq \Det \pbr{P^{-1} (A-\lambda I) P} \\ & \eq \Det (P^{-1}) \cdot \Det (A-\lambda I) \cdot \Det (P) \\ & \eq \frac{1}{\Det (P)} \cdot \Det (A-\lambda I) \cdot \Det (P) \\ & \eq \Det (A-\lambda I) \end{align*}
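This invariance is easy to check numerically for $2\times 2$ matrices: the characteristic polynomial $\lambda^2 - (a+d)\lambda + (ad-bc)$ is determined by the trace and determinant alone, so similar matrices must share both. A plain-Python sketch (helper names are ours; $P$ is an invertible matrix of our choosing):

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    """Inverse of an invertible 2x2 matrix."""
    (a, b), (c, d) = P
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 6], [5, 2]]                  # matrix from the earlier examples
P = [[1, 1], [1, 2]]                  # any invertible P will do
B = matmul(inv2(P), matmul(A, P))     # B = P^{-1} A P

trace = lambda M: M[0][0] + M[1][1]
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]

print(trace(A), det(A))   # -> 3 -28
print(trace(B), det(B))   # same trace and determinant as A
```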

Let $A = \mat{rrr} 2 & 0 & 0 \\ 1 & 2 & 1 \\ -1 & 0 & 1 \rix$, where $A = PBP^{-1}$ for $P = \mat{rrr} 0 & 0 & -1 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \rix, B = \mat{rrr} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \rix$.

Find eigenvalues and eigenvectors of $A$.

If $n\times n$ matrices $A$ and $B$ are similar, then they have the same characteristic polynomial and hence the same eigenvalues.
Warning! The converse does not hold. For example, \[ \mat{cc} 2 & 1 \\ 0 & 2 \rix \quad \text{and} \quad \mat{rr} 2 & 0 \\ 0 & 2 \rix \] have the same eigenvalues but are not similar!
If $A = \mat{cc} 2 & 1 \\ 0 & 2 \rix$ and $B = \mat{rr} 2 & 0 \\ 0 & 2 \rix$ were similar, then there exists an invertible $P$ for which \[ AP \eq P B \quad \implies \quad \mat{cc} 2 & 1 \\ 0 & 2 \rix \mat{cc} p_{11} & p_{12} \\ p_{21} & p_{22} \rix = \mat{cc} p_{11} & p_{12} \\ p_{21} & p_{22} \rix \mat{cc} 2 & 0 \\ 0 & 2 \rix .\] Multiplying these matrices: \[ \mat{cc} 2p_{11} + p_{21} & 2p_{12} + p_{22} \\ 2p_{21} & 2 p_{22} \rix \eq \mat{cc} 2p_{11} & 2p_{12} \\ 2 p_{21} & 2p_{22} \rix .\] This implies that $p_{21} = 0 = p_{22}$ and contradicts the assumption of $P$ being invertible.

Also, similarity is not the same as row equivalence. Recall that $A$ and $B$ are row equivalent if there exists an invertible $E$ such that $B = E A$. This is not a similarity transformation!

Section 5.3: Diagonalization

Recall that $n\times n$ matrices $A$ and $B$ are similar if there exists an invertible $n\times n$ matrix $P$ for which \[ A = PBP^{-1} .\] An example in Chapter 2 revealed that \[ A^2 = PB^2P^{-1}, \qquad A^3 = P B^3 P^{-1}, \quad \dots, \quad A^k = P B^k P^{-1} .\]

Perhaps computing $A^k$ is easier with the factorization $A = PBP^{-1}$, in which case $B^k$ needs to be easy to compute. One option is to suppose $B = D$ is a diagonal matrix.

Compute $D^3$, where $D = \mat{rr} 5 & 0 \\ 0 & 4 \rix$.
Compute $A^k$ for any $k\ge 1$, where $A = PDP^{-1}$ with $P = \mat{cc} 1 & 1 \\ 1 & 2 \rix$, $D = \mat{rr} 5 & 0 \\ 0 & 4 \rix$, $P^{-1} = \mat{rr} 2 & -1 \\ - 1& 1 \rix$. Because $A^k = P D^k P^{-1}$, then \begin{align*} A^k & \eq \mat{cc} 1 & 1 \\ 1 & 2 \rix \mat{rr} 5^k & 0 \\ 0 & 4^k \rix \mat{rr} 2 & -1 \\ - 1& 1 \rix \eq \\ & \eq \end{align*}
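The payoff of the factorization is that $A^k = P D^k P^{-1}$ requires only the scalar powers $5^k$ and $4^k$. A plain-Python sketch (helper names are ours) comparing this shortcut with brute-force repeated multiplication, for the matrices $P$, $D$, $P^{-1}$ above:

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[1, 1], [1, 2]]
Pinv = [[2, -1], [-1, 1]]

def A_pow(k):
    """A^k = P D^k P^{-1}: only the scalar powers 5^k and 4^k are needed."""
    Dk = [[5 ** k, 0], [0, 4 ** k]]
    return matmul(P, matmul(Dk, Pinv))

A = A_pow(1)            # recovers A itself
Ak = A
for _ in range(4):      # brute force: A^5 by repeated multiplication
    Ak = matmul(Ak, A)

print(A)                # -> [[6, -1], [2, 3]]
print(A_pow(5) == Ak)   # -> True
```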
An $n\times n$ matrix $A$ is diagonalizable if $A$ is similar to a diagonal matrix. That is, if \[ A \eq P D P^{-1} \] for some $n\times n$ invertible $P$ and $n\times n$ diagonal $D$.

The important questions are: When is $A$ diagonalizable? And how do we find $P$ and $D$?

Let $A = \mat{rr} 6 & -1 \\ 2 & 3 \rix$. Eigenvalues of $A$ are $\lambda_1 = 5$ and $\lambda_2 = 4$ with associated eigenvectors $\vec{v}_1 = \mat{c} 1 \\ 1 \rix$ and $\vec{v}_2 = \mat{c} 1 \\ 2 \rix$. That is, \[ \begin{aligned} A \mat{c} 1 \\ 1 \rix & \eq 5 \mat{c} 1 \\ 1 \rix \\ A \mat{c} 1 \\ 2 \rix & \eq 4 \mat{c} 1 \\ 2 \rix \end{aligned} \qquad \longrightarrow \qquad \]

An $n\times n$ matrix $A$ is diagonalizable if and only if $A$ has $n$ linearly independent eigenvectors.

In such a case, let $\lambda_1, \dots , \lambda_n$ be eigenvalues associated to eigenvectors $\vec{v}_1, \dots , \vec{v}_n$. Then $A = PDP^{-1}$ is given by \[ A \eq \mat{cccc} \vec{v}_1 & \vec{v}_2 & \dots & \vec{v}_n \rix \mat{cccc} \lambda_1 & \\ & \lambda_2 \\ & & \ddots \\ & & & \lambda_n \rix \mat{cccc} \vec{v}_1 & \vec{v}_2 & \dots & \vec{v}_n \rix^{-1} .\]

That is, $A$ is diagonalizable if and only if there is a basis of $\R^n$ consisting of eigenvectors of $A$.

Diagonalize $A = \mat{rrr} 2 & 0 & 0 \\ 1 & 2 & 1 \\ -1 & 0 & 1 \rix$, if possible.

Step 1: Find the eigenvalues of $A$:

Step 2: Find linearly independent eigenvectors of $A$ (if possible): \begin{align*} (A-1I) \vec{x} = \vec{0} & \quad \mat{rrr|r} 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 \\ -1 & 0 & 0 & 0 \rix & \longrightarrow \quad \underbrace{\mat{rrr|r} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \rix}_{x_3 \text{ free}} & \qquad \vec{x} = \\ (A-2I) \vec{x} = \vec{0} & \quad \mat{rrr|r} 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ -1 & 0 & -1 & 0 \rix & \longrightarrow \quad \underbrace{\mat{rrr|r} 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \rix}_{x_2 , x_3 \text{ free} } & \qquad \vec{x} = \end{align*}

Corresponding eigenvectors are then \[ \vec{v}_1 = \qquad \vec{v}_2 = \qquad \vec{v}_3 = \]

Step 3: If you found $n$ linearly independent eigenvectors, form $P$ and $D$: \[ P \eq \qquad D \eq \]

Note: It is always possible to find the eigenvectors and eigenvalues of $A$ and write $A P = P D$. However, $A$ must be diagonalizable (i.e., $P$ must be invertible) to write \[ A \eq P D P^{-1} .\]

If possible, diagonalize $A = \mat{rrr} 2 & 4 & 6 \\ 0 & 2 & 2 \\ 0 & 0 & 4 \rix $.

Step 1: Find the eigenvalues of $A$:

Step 2: Find linearly independent eigenvectors of $A$ (if possible):

\begin{align*} (A-2I) \vec{x} = \vec{0} & \quad \mat{rrr|r} 0 & 4 & 6 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 2 & 0 \rix & \longrightarrow \quad & \underbrace{\mat{rrr|r} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \rix}_{x_1 \text{ free}} & \qquad \vec{x} = \\ (A-4I) \vec{x} = \vec{0} & \quad \mat{rrr|r} -2 & 4 & 6 & 0 \\ 0 & -2 & 2 & 0 \\ 0 & 0 & 0 & 0 \rix & \longrightarrow \quad & \underbrace{\mat{rrr|r} 1 & 0 & -5 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \rix}_{x_3 \text{ free} } & \qquad \vec{x} = \end{align*}

The following theorem provides sufficient conditions for a matrix to be diagonalizable.

If an $n\times n$ matrix has $n$ distinct eigenvalues, then it is diagonalizable.

Why? Because eigenvectors corresponding to distinct eigenvalues are linearly independent (See ).

Warning! The following statements are wrong, but are easy to make!
  • “$A$ has a repeating eigenvalue, therefore $A$ is not diagonalizable”. See .
  • “$A$ has a repeating eigenvalue, therefore $A$ is diagonalizable.” See .
  • “If $A$ is diagonalizable, then $A$ has distinct eigenvalues”. See .
  1. Is $\mat{rrr} 5 & -8 & 1 \\ 0 & 0 & 7 \\ 0 & 0 & -2\rix$ diagonalizable?
  2. Is $\mat{rrr} 2 & 4 & 3 \\ -4 & -6 & -3 \\ 3 & 3 & 1 \rix$ diagonalizable? Hint: $p(\lambda ) = -(\lambda -1)(\lambda +2)^2$

If $A$ has some repeated eigenvalues, it may or may not be diagonalizable. The following result determines this. First, we need a definition.

Let $A$ be an $n\times n$ matrix with eigenvalue $\lambda$. The geometric multiplicity of $\lambda$ is the dimension of its eigenspace. That is, \[ \geomult(\lambda ) \eq \Dim \pbr{\Null (A-\lambda I)} \] is the number of linearly independent eigenvectors associated to $\lambda $.

Recall that the algebraic multiplicity, $\algmult(\lambda)$, is the multiplicity of $\lambda $ as a root in the characteristic polynomial.

Let $A$ be an $n\times n$ matrix.
  1. For every eigenvalue $\lambda_k$ of $A$, $\geomult(\lambda_k) \le \algmult(\lambda_k)$.
  2. $A$ is diagonalizable if and only if $\geomult(\lambda_k) = \algmult(\lambda_k)$ for every eigenvalue $\lambda_k$ of $A$.
  3. Suppose $A$ is diagonalizable and has $p$ distinct eigenvalues $\lambda_1, \dots , \lambda_p$. Let $\cB_1, \dots , \cB_p$ be bases for the corresponding eigenspaces. Then the total collection of vectors in $\cB_1, \dots , \cB_p$ forms a basis for $\R^n $. That is, these vectors form the columns of $P$.
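In small examples, the geometric multiplicity can be computed mechanically as $n$ minus the rank of $A - \lambda I$. A plain-Python sketch (helper names are ours; floating-point elimination with a crude tolerance, which is fine for small integer matrices), applied to the earlier non-diagonalizable example:

```python
def rank(M, tol=1e-9):
    """Rank of a matrix (list of rows) via Gaussian elimination."""
    M = [row[:] for row in M]            # work on a copy
    m, n = len(M), len(M[0])
    r = 0                                # number of pivots found so far
    for col in range(n):
        piv = max(range(r, m), key=lambda i: abs(M[i][col]), default=None)
        if piv is None or abs(M[piv][col]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]      # move pivot row up
        for i in range(r + 1, m):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def geo_mult(A, lam):
    """Geometric multiplicity of lam: dim Null(A - lam I) = n - rank(A - lam I)."""
    n = len(A)
    shifted = [[A[i][j] - (lam if i == j else 0) for j in range(n)]
               for i in range(n)]
    return n - rank(shifted)

A = [[2, 4, 6], [0, 2, 2], [0, 0, 4]]    # the non-diagonalizable example above
print(geo_mult(A, 2))   # -> 1, while algmult(2) = 2: not diagonalizable
print(geo_mult(A, 4))   # -> 1 = algmult(4)
```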
Let $A = \mat{rrrr} 5 & 0 & 0 & 0 \\ 0 & 5 & 0 & 0 \\ 1 & 4 & -3 & 0 \\ -1 & -2 & 0 & -3 \rix$.

Its characteristic polynomial is $p(\lambda ) = (\lambda -5)^2 (\lambda +3)^2$, so the eigenvalues are $5$ and $-3$, each with algebraic multiplicity $2$.

One can show that a basis for the eigenspace associated to $\lambda =5$ is \[ \cB_1 = \cbr{\vec{v}_1, \vec{v}_2}, \qquad \text{where} \quad \vec{v}_1 = \mat{r} -8 \\ 4 \\ 1 \\ 0 \rix, \quad \vec{v}_2 = \mat{r} -16 \\ 4 \\ 0 \\ 1 \rix \] and that a basis for the eigenspace of $\lambda =-3$ is \[ \cB_2 = \cbr{\vec{v}_3, \vec{v}_4}, \qquad \text{where} \quad \vec{v}_3 = \mat{r} 0 \\ 0 \\ 1 \\ 0 \rix, \quad \vec{v}_4 = \mat{r} 0 \\ 0\\ 0 \\ 1 \rix .\]

Therefore, $A$ is diagonalizable because $\geomult(5) = 2 = \algmult(5)$ and $\geomult(-3) = 2 = \algmult(-3)$. Namely, \[ A \eq PDP^{-1} \eq \mat{rrrr} -8 & -16 & 0 & 0 \\ 4 & 4 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \rix \mat{rrrr} 5 & \\ & 5 \\ & & -3 \\ & & & -3 \rix \mat{rrrr} -8 & -16 & 0 & 0 \\ 4 & 4 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \rix^{-1} .\]
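These four eigenpairs are quick to verify by direct multiplication; a throwaway plain-Python check (the helper `matvec` is ours):

```python
def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[5, 0, 0, 0],
     [0, 5, 0, 0],
     [1, 4, -3, 0],
     [-1, -2, 0, -3]]

pairs = [(5, [-8, 4, 1, 0]), (5, [-16, 4, 0, 1]),
         (-3, [0, 0, 1, 0]), (-3, [0, 0, 0, 1])]

for lam, v in pairs:
    assert matvec(A, v) == [lam * vi for vi in v]
print("all four eigenpairs verified")
```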

Using , revisit and to explain why those matrices cannot be diagonalized.