Differential equations – Systems of linear differential equations – Defective matrices 1

Diagonalizability

A real matrix \(A\) is not diagonalizable (over \(\mathbb{R}\)) if

  1. \(A\) has nonreal eigenvalues or

  2. \(A\) has a repeated real eigenvalue for which the geometric multiplicity is smaller than the algebraic multiplicity.

Here the algebraic multiplicity is the number of times that the eigenvalue occurs as a zero of the characteristic polynomial, and the geometric multiplicity is the dimension of the corresponding eigenspace, or equivalently the number of linearly independent eigenvectors.

So the mere existence of a repeated eigenvalue need not be a problem for diagonalizability; as long as there are enough linearly independent eigenvectors, there is no problem.
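These two multiplicities are easy to compare numerically. The following is a minimal Python/numpy sketch (the matrix is a hypothetical example with a double eigenvalue, chosen only for illustration): the algebraic multiplicity is counted from the repeated eigenvalues, the geometric multiplicity as \(n-\text{rank}(A-\lambda I)\).

```python
import numpy as np

# Hypothetical example: the eigenvalue 2 occurs twice in the characteristic polynomial
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
n = A.shape[0]

eigenvalues = np.linalg.eigvals(A)
alg_mult = np.sum(np.isclose(eigenvalues, lam))            # zeros of the characteristic polynomial
geo_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n))  # dimension of the eigenspace

print(alg_mult, geo_mult)  # 2 1: too few independent eigenvectors, so not diagonalizable
```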

Example: Consider \(\mathbf{x}'(t)=A\mathbf{x}(t)\) with \(A=\begin{pmatrix}1&2&1\\0&1&0\\0&2&2\end{pmatrix}\). Then we have:

\[|A-rI|=\begin{vmatrix}1-r&2&1\\0&1-r&0\\0&2&2-r\end{vmatrix}=(1-r)\begin{vmatrix}1-r&0\\2&2-r\end{vmatrix}=(1-r)^2(2-r).\]

So the eigenvalues are: \(r=2\) and \(r=1\) (twice). Then we have:

\[r=2:\quad\begin{pmatrix}-1&2&1\\0&-1&0\\0&2&0\end{pmatrix}\sim\begin{pmatrix}-1&0&1\\0&1&0\\0&0&0\end{pmatrix} \quad\Longrightarrow\quad\text{E}_2=\text{Span}\left\{\begin{pmatrix}1\\0\\1\end{pmatrix}\right\}\]

and

\[r=1:\quad\begin{pmatrix}0&2&1\\0&0&0\\0&2&1\end{pmatrix}\sim\begin{pmatrix}0&2&1\\0&0&0\\0&0&0\end{pmatrix} \quad\Longrightarrow\quad\text{E}_1=\text{Span}\left\{\begin{pmatrix}1\\0\\0\end{pmatrix},\begin{pmatrix}0\\1\\-2\end{pmatrix}\right\}.\]

Hence the matrix \(A\) is diagonalizable: \(A=PDP^{-1}\) with \(P=\begin{pmatrix}1&1&0\\0&0&1\\1&0&-2\end{pmatrix}\) and \(D=\text{diag}(2,1,1)\). Then we have: \(P^{-1}=\begin{pmatrix}0&2&1\\1&-2&-1\\0&1&0\end{pmatrix}\).
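This decomposition can be double-checked numerically, for instance with numpy (a minimal sketch that simply re-enters the matrices \(P\), \(D\) and \(P^{-1}\) found above):

```python
import numpy as np

A = np.array([[1, 2, 1],
              [0, 1, 0],
              [0, 2, 2]], dtype=float)
P = np.array([[1, 1, 0],
              [0, 0, 1],
              [1, 0, -2]], dtype=float)
D = np.diag([2.0, 1.0, 1.0])
P_inv = np.array([[0, 2, 1],
                  [1, -2, -1],
                  [0, 1, 0]], dtype=float)

print(np.allclose(np.linalg.inv(P), P_inv))  # True: P^{-1} as computed above
print(np.allclose(P @ D @ P_inv, A))         # True: A = P D P^{-1}
```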

Hence: \(\Psi(t)=\begin{pmatrix}e^{2t}&e^t&0\\0&0&e^t\\e^{2t}&0&-2e^t\end{pmatrix}\) is a fundamental matrix. Then we have:

\[\Psi(0)=\begin{pmatrix}1&1&0\\0&0&1\\1&0&-2\end{pmatrix}=P\quad\Longrightarrow\quad \Psi^{-1}(0)=P^{-1}=\begin{pmatrix}0&2&1\\1&-2&-1\\0&1&0\end{pmatrix}.\]

Hence:

\[e^{At}=\Psi(t)\Psi^{-1}(0)=\begin{pmatrix}e^{2t}&e^t&0\\0&0&e^t\\e^{2t}&0&-2e^t\end{pmatrix} \begin{pmatrix}0&2&1\\1&-2&-1\\0&1&0\end{pmatrix}=\begin{pmatrix}e^t&2e^{2t}-2e^t&e^{2t}-e^t\\0&e^t&0\\0&2e^{2t}-2e^t&e^{2t}\end{pmatrix}.\]

We also have:

\[e^{At}=Pe^{Dt}P^{-1}=\begin{pmatrix}1&1&0\\0&0&1\\1&0&-2\end{pmatrix}\begin{pmatrix}e^{2t}&0&0\\0&e^t&0\\0&0&e^t\end{pmatrix} \begin{pmatrix}0&2&1\\1&-2&-1\\0&1&0\end{pmatrix}=\begin{pmatrix}e^t&2e^{2t}-2e^t&e^{2t}-e^t\\0&e^t&0\\0&2e^{2t}-2e^t&e^{2t}\end{pmatrix}.\]
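The closed form for \(e^{At}\) can also be compared with a numerical matrix exponential such as scipy.linalg.expm at an arbitrary value of \(t\) (a minimal sketch; the function name eAt is ours):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1, 2, 1],
              [0, 1, 0],
              [0, 2, 2]], dtype=float)

def eAt(t):
    """The closed form for e^{At} derived above."""
    e1, e2 = np.exp(t), np.exp(2 * t)
    return np.array([[e1, 2 * e2 - 2 * e1, e2 - e1],
                     [0.0, e1, 0.0],
                     [0.0, 2 * e2 - 2 * e1, e2]])

t = 0.7  # arbitrary sample point
print(np.allclose(eAt(t), expm(A * t)))  # True
```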

Definition: A matrix is called defective if it has a repeated real eigenvalue and is not diagonalizable.

This occurs if \(A\) has a repeated eigenvalue for which the algebraic multiplicity is strictly larger than the geometric multiplicity.

Example: Consider \(\mathbf{x}'(t)=A\mathbf{x}(t)\) with \(A=\begin{pmatrix}1&-1\\1&3\end{pmatrix}\). Then we have:

\[|A-rI|=\begin{vmatrix}1-r&-1\\1&3-r\end{vmatrix}=r^2-4r+4=(r-2)^2.\]

Hence: \(A\) has a double eigenvalue \(r=2\). Now we have:

\[r=2:\quad\begin{pmatrix}-1&-1\\1&1\end{pmatrix}\sim\begin{pmatrix}1&1\\0&0\end{pmatrix}\quad\Longrightarrow\quad \text{E}_2=\text{Span}\left\{\begin{pmatrix}-1\\1\end{pmatrix}\right\}.\]
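A quick numerical check (a small numpy sketch) confirms that the eigenvalue \(2\) has algebraic multiplicity \(2\) but geometric multiplicity \(1\):

```python
import numpy as np

A = np.array([[1, -1],
              [1, 3]], dtype=float)

print(np.linalg.eigvals(A))  # both approximately 2: algebraic multiplicity 2

geo_mult = 2 - np.linalg.matrix_rank(A - 2 * np.eye(2))
print(geo_mult)              # 1: only one linearly independent eigenvector
```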

Hence: \(A\) is not diagonalizable (\(A\) is defective). We know that \(\mathbf{x}(t)=\begin{pmatrix}-1\\1\end{pmatrix}e^{2t}\) is a solution. For a second (linearly independent) solution we now try \(\mathbf{x}(t)=(\mathbf{u}t+\mathbf{v})e^{2t}\). Then we have: \(\mathbf{x}'(t)=\mathbf{u}(2t+1)e^{2t}+2\mathbf{v}e^{2t}\). Substitution then gives

\[\mathbf{u}(2t+1)e^{2t}+2\mathbf{v}e^{2t}=A(\mathbf{u}t+\mathbf{v})e^{2t}\quad\Longleftrightarrow\quad 2\mathbf{u}te^{2t}+(\mathbf{u}+2\mathbf{v})e^{2t}=A\mathbf{u}te^{2t}+A\mathbf{v}e^{2t}.\]

This implies that

\[A\mathbf{u}=2\mathbf{u}\quad\text{and}\quad A\mathbf{v}=\mathbf{u}+2\mathbf{v}\quad\Longleftrightarrow\quad (A-2I)\mathbf{u}=\mathbf{0}\quad\text{and}\quad(A-2I)\mathbf{v}=\mathbf{u}.\]

So the vector \(\mathbf{u}\) is an eigenvector of \(A\) corresponding to the eigenvalue \(2\). The vector \(\mathbf{v}\) is then called a generalized eigenvector of \(A\) corresponding to the eigenvalue \(2\). We have: \((A-2I)^2\mathbf{v}=(A-2I)\mathbf{u}=\mathbf{0}\). If we now choose \(\mathbf{u}=\begin{pmatrix}-1\\1\end{pmatrix}\), then we have:

\[(A-2I)\mathbf{v}=\mathbf{u}:\quad\left(\left.\begin{matrix}-1&-1\\1&1\end{matrix}\,\right|\,\begin{matrix}-1\\1\end{matrix}\right) \sim\left(\left.\begin{matrix}1&1\\0&0\end{matrix}\,\right|\,\begin{matrix}1\\0\end{matrix}\right).\]

This has infinitely many solutions; choose for instance \(\mathbf{v}=\begin{pmatrix}1\\0\end{pmatrix}\). Then we have:

\[\mathbf{x}(t)=(\mathbf{u}t+\mathbf{v})e^{2t}=\begin{pmatrix}-1\\1\end{pmatrix}te^{2t}+\begin{pmatrix}1\\0\end{pmatrix}e^{2t} =\begin{pmatrix}-t+1\\t\end{pmatrix}e^{2t}\]

is also a solution of \(\mathbf{x}'(t)=A\mathbf{x}(t)\). The Wronskian of the two solutions is

\[\begin{vmatrix}-e^{2t}&(-t+1)e^{2t}\\e^{2t}&te^{2t}\end{vmatrix}=-te^{4t}+te^{4t}-e^{4t}=-e^{4t}\neq0.\]

Hence the general solution is: \(\mathbf{x}(t)=c_1\begin{pmatrix}-1\\1\end{pmatrix}e^{2t}+ c_2\left[\begin{pmatrix}-1\\1\end{pmatrix}t+\begin{pmatrix}1\\0\end{pmatrix}\right]e^{2t}\) with \(c_1,c_2\in\mathbb{R}\).
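That the second solution really satisfies the system can also be verified symbolically, for instance with sympy (a small sketch using the vectors \(\mathbf{u}\) and \(\mathbf{v}\) chosen above):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, -1],
               [1, 3]])
u = sp.Matrix([-1, 1])  # eigenvector for r = 2
v = sp.Matrix([1, 0])   # generalized eigenvector: (A - 2I)v = u

x = (u * t + v) * sp.exp(2 * t)

# x'(t) - A x(t) should vanish identically
print((x.diff(t) - A * x).expand())  # Matrix([[0], [0]])
```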

Hence: \(\Psi(t)=\begin{pmatrix}-e^{2t}&(-t+1)e^{2t}\\e^{2t}&te^{2t}\end{pmatrix}=\begin{pmatrix}-1&1-t\\1&t\end{pmatrix}e^{2t}\) is a fundamental matrix. Then we have:

\[\Psi(0)=\begin{pmatrix}-1&1\\1&0\end{pmatrix}\quad\Longrightarrow\quad\Psi^{-1}(0)=\begin{pmatrix}0&1\\1&1\end{pmatrix}.\]

Hence:

\[e^{At}=\Psi(t)\Psi^{-1}(0)=\begin{pmatrix}-1&1-t\\1&t\end{pmatrix}\begin{pmatrix}0&1\\1&1\end{pmatrix}e^{2t} =\begin{pmatrix}1-t&-t\\t&1+t\end{pmatrix}e^{2t}.\]
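Again the closed form for \(e^{At}\) can be checked against scipy.linalg.expm (a minimal sketch; the function name eAt is ours):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1, -1],
              [1, 3]], dtype=float)

def eAt(t):
    """The closed form for e^{At} derived above."""
    return np.array([[1 - t, -t],
                     [t, 1 + t]]) * np.exp(2 * t)

t = 0.3  # arbitrary sample point
print(np.allclose(eAt(t), expm(A * t)))  # True
```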

Jordan normal form

If the matrix \(A\) is diagonalizable, then we have: \(A=PDP^{-1}\) for some invertible matrix \(P\) and diagonal matrix \(D\). If \(D=\text{diag}(\lambda_1,\ldots,\lambda_n)\), then \(e^{At}=Pe^{Dt}P^{-1}\) with \(e^{Dt}=\text{diag}(e^{\lambda_1t},\ldots,e^{\lambda_nt})\).

The matrix \(A=\begin{pmatrix}1&-1\\1&3\end{pmatrix}\) is not diagonalizable.

However, we have: \(A=PJP^{-1}\) with \(P=\begin{pmatrix}-1&1\\1&0\end{pmatrix}\) and \(J=\begin{pmatrix}2&1\\0&2\end{pmatrix}\). The matrix \(J\) is called the Jordan normal form of the matrix \(A\). On the main diagonal are the eigenvalues (\(r_1=r_2=2\)) of \(A\). The first column of \(P\) is an eigenvector of \(A\) corresponding to the eigenvalue \(2\) and the second column is a generalized eigenvector.
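The Jordan decomposition can also be obtained with a computer algebra system; sympy's jordan_form, for instance, returns a pair \(P\), \(J\) with \(A=PJP^{-1}\) (a small sketch; the \(P\) it returns may differ from the one above, since the generalized eigenvector is not unique, but \(J\) is the same):

```python
import sympy as sp

A = sp.Matrix([[1, -1],
               [1, 3]])

P, J = A.jordan_form()      # A = P J P^{-1}
print(J)                    # Matrix([[2, 1], [0, 2]])
print(P * J * P.inv() - A)  # the zero matrix
```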

Now let \(\mathbf{x}(t)=P\mathbf{y}(t)\), then we have:

\[\mathbf{x}'(t)=A\mathbf{x}(t)\quad\Longleftrightarrow\quad P\mathbf{y}'(t)=PJP^{-1}P\mathbf{y}(t)\quad\Longleftrightarrow\quad \mathbf{y}'(t)=J\mathbf{y}(t).\]

Although this system is not (completely) uncoupled, it is much easier than the original system:

\[\mathbf{y}'(t)=J\mathbf{y}(t):\quad\left\{\begin{array}{l}y_1'(t)=2y_1(t)+y_2(t)\\[2.5mm]y_2'(t)=2y_2(t).\end{array}\right.\]

This implies that \(y_2(t)=c_2e^{2t}\) and \(y_1(t)=c_1e^{2t}+c_2te^{2t}\). Hence:

\[\mathbf{y}(t)=\begin{pmatrix}y_1(t)\\y_2(t)\end{pmatrix}=\begin{pmatrix}c_1e^{2t}+c_2te^{2t}\\c_2e^{2t}\end{pmatrix} =c_1\begin{pmatrix}1\\0\end{pmatrix}e^{2t}+c_2\begin{pmatrix}t\\1\end{pmatrix}e^{2t}.\]

Hence: \(e^{Jt}=\begin{pmatrix}1&t\\0&1\end{pmatrix}e^{2t}\). Then we have:

\[e^{At}=Pe^{Jt}P^{-1}=\begin{pmatrix}-1&1\\1&0\end{pmatrix}\begin{pmatrix}1&t\\0&1\end{pmatrix}\begin{pmatrix}0&1\\1&1\end{pmatrix}e^{2t} =\begin{pmatrix}1-t&-t\\t&1+t\end{pmatrix}e^{2t}.\]
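Finally, a minimal numerical sketch (again with scipy.linalg.expm) confirming both the formula for \(e^{Jt}\) and the identity \(e^{At}=Pe^{Jt}P^{-1}\) at an arbitrary value of \(t\):

```python
import numpy as np
from scipy.linalg import expm

P = np.array([[-1, 1],
              [1, 0]], dtype=float)
J = np.array([[2, 1],
              [0, 2]], dtype=float)
A = P @ J @ np.linalg.inv(P)  # reproduces [[1, -1], [1, 3]]

def eJt(t):
    """The closed form for e^{Jt} of the 2x2 Jordan block."""
    return np.array([[1, t],
                     [0, 1]]) * np.exp(2 * t)

t = 0.5  # arbitrary sample point
print(np.allclose(eJt(t), expm(J * t)))                         # True
print(np.allclose(P @ eJt(t) @ np.linalg.inv(P), expm(A * t)))  # True: e^{At} = P e^{Jt} P^{-1}
```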
Last modified on July 1, 2021
© Roelof Koekoek
