Diagonalization is the process of finding a corresponding diagonal matrix (a matrix in which the only non-zero components are on the diagonal line from $A_{1,1}$ to $A_{n,n}$ for an $n \times n$ matrix) for a given diagonalizable matrix. A matrix is diagonalizable if and only if its matrix of eigenvectors is invertible (that is, its determinant does not equal zero). If a matrix is not diagonalizable, it is called a defective matrix.
The diagonal matrix $D$ corresponding to a matrix $A$ is equal to $P^{-1}AP$, where $P$ is the matrix of eigenvectors ($P=[v_1,\ldots,v_n]$). Diagonal matrices are very useful, as computing determinants, products and sums of matrices, and powers becomes much simpler. For example, given the matrix $A$, $A^n=PD^nP^{-1}$.
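As a numerical check of the identity $A=PDP^{-1}$, the sketch below uses NumPy (an assumption — the article itself names no library) to compute an eigendecomposition and reconstruct the original matrix from it.

```python
import numpy as np

# The 2x2 matrix used as the worked example in this article
A = np.array([[1.0, 2.0],
              [-1.0, 4.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose
# columns are the corresponding eigenvectors
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# A is recovered as P D P^{-1}
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```

Note that `eig` may scale or reorder the eigenvectors differently from a hand computation; the product $PDP^{-1}$ is the same either way.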
Computation of the diagonal matrix
Given $AP=PD$, $D$ can be found by making a diagonal matrix of the eigenvalues of $A$, and $P$ will be equal to the matrix of corresponding eigenvectors. For example, say we have the matrix

$$A=\begin{bmatrix}1&2\\-1&4\end{bmatrix}$$
To find the eigenvalues, we must first find the characteristic polynomial, which will be equal to

$$\left|\lambda I-\begin{bmatrix}1&2\\-1&4\end{bmatrix}\right|=\left|\lambda\begin{bmatrix}1&0\\0&1\end{bmatrix}-\begin{bmatrix}1&2\\-1&4\end{bmatrix}\right|=0$$
$$\begin{vmatrix}\lambda-1&-2\\1&\lambda-4\end{vmatrix}=(\lambda^2-5\lambda+4)+2=\lambda^2-5\lambda+6=(\lambda-2)(\lambda-3)=0$$

$$\lambda=2,3$$
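For a $2\times 2$ matrix, the characteristic polynomial is $\lambda^2-\operatorname{tr}(A)\lambda+\det(A)$, so the eigenvalues can be double-checked by finding its roots numerically. A small sketch, again assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [-1.0, 4.0]])

# For a 2x2 matrix, det(lambda*I - A) = lambda^2 - trace(A)*lambda + det(A),
# which here is lambda^2 - 5*lambda + 6
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
eigenvalues = np.roots(coeffs)

print(sorted(eigenvalues.real))  # the eigenvalues 2 and 3
```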
Therefore $D$ will be equal to

$$D=\begin{bmatrix}2&0\\0&3\end{bmatrix}$$
$P$ will be the matrix of eigenvectors corresponding to the above diagonal matrix. Each eigenvector is a non-trivial solution of $(\lambda I-A)\vec{v}=\vec{0}$. For $\lambda=2$:

$$\left(2I-\begin{bmatrix}1&2\\-1&4\end{bmatrix}\right)\vec{v}=\begin{bmatrix}1&-2\\1&-2\end{bmatrix}\vec{v}=0_{2,1}$$

$$\begin{bmatrix}x\\y\end{bmatrix}=t\begin{bmatrix}2\\1\end{bmatrix},\qquad\vec{v}_1=\begin{bmatrix}2\\1\end{bmatrix}$$
For $\lambda=3$:

$$\left(3I-\begin{bmatrix}1&2\\-1&4\end{bmatrix}\right)\vec{v}=\begin{bmatrix}2&-2\\1&-1\end{bmatrix}\vec{v}=0_{2,1}$$

$$\begin{bmatrix}x\\y\end{bmatrix}=t\begin{bmatrix}1\\1\end{bmatrix},\qquad\vec{v}_2=\begin{bmatrix}1\\1\end{bmatrix}$$
$$P=\begin{bmatrix}\vec{v}_1&\vec{v}_2\end{bmatrix}=\begin{bmatrix}2&1\\1&1\end{bmatrix}$$
$$P^{-1}=\frac{1}{|P|}\operatorname{adj}\begin{bmatrix}2&1\\1&1\end{bmatrix}=\begin{bmatrix}1&-1\\-1&2\end{bmatrix}$$
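The eigenvectors and the adjugate-based inverse above can both be verified mechanically; the sketch below (assuming NumPy) checks $A\vec{v}=\lambda\vec{v}$ for each eigenvector and rebuilds $P^{-1}$ from the $2\times 2$ adjugate formula.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [-1.0, 4.0]])
v1 = np.array([2.0, 1.0])  # eigenvector for lambda = 2
v2 = np.array([1.0, 1.0])  # eigenvector for lambda = 3

# A v = lambda v confirms each eigenvector
assert np.allclose(A @ v1, 2 * v1)
assert np.allclose(A @ v2, 3 * v2)

# P^{-1} = adj(P) / det(P); for a 2x2 matrix the adjugate swaps the
# diagonal entries and negates the off-diagonal ones
P = np.column_stack([v1, v2])
det = P[0, 0] * P[1, 1] - P[0, 1] * P[1, 0]
P_inv = np.array([[P[1, 1], -P[0, 1]],
                  [-P[1, 0], P[0, 0]]]) / det
assert np.allclose(P_inv, np.linalg.inv(P))
```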
Therefore,

$$A=\begin{bmatrix}1&2\\-1&4\end{bmatrix}=PDP^{-1}=\begin{bmatrix}2&1\\1&1\end{bmatrix}\begin{bmatrix}2&0\\0&3\end{bmatrix}\begin{bmatrix}1&-1\\-1&2\end{bmatrix}$$
This is useful to us because, among other things, we can use this to find large powers of $A$.

$$\begin{align}A^5&=\begin{bmatrix}1&2\\-1&4\end{bmatrix}^5=PD^5P^{-1}=\begin{bmatrix}2&1\\1&1\end{bmatrix}\begin{bmatrix}2^5&0\\0&3^5\end{bmatrix}\begin{bmatrix}1&-1\\-1&2\end{bmatrix}\\&=\begin{bmatrix}2&1\\1&1\end{bmatrix}\begin{bmatrix}32&0\\0&243\end{bmatrix}\begin{bmatrix}1&-1\\-1&2\end{bmatrix}=\begin{bmatrix}-179&422\\-211&454\end{bmatrix}\end{align}$$
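The $A^5$ computation above can be reproduced in a few lines; the sketch below (assuming NumPy) raises only the diagonal entries to the 5th power and cross-checks the result against repeated multiplication of $A$ itself.

```python
import numpy as np

P = np.array([[2.0, 1.0],
              [1.0, 1.0]])
D = np.diag([2.0, 3.0])
P_inv = np.array([[1.0, -1.0],
                  [-1.0, 2.0]])

# A^5 via the diagonalization: D^5 only requires raising
# each diagonal entry to the 5th power
A5 = P @ np.diag([2.0**5, 3.0**5]) @ P_inv
print(A5)  # [[-179. 422.], [-211. 454.]]

# Cross-check against multiplying A by itself five times
A = P @ D @ P_inv
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```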