Linear Algebra – Eigenvalues and eigenvectors – Systems of linear differential equations
We consider systems of linear differential equations of the form
\[\left\{\begin{array}{lcl}x_1'(t)&=&a_{11}x_1(t)+\cdots+a_{1n}x_n(t)\\ x_2'(t)&=&a_{21}x_1(t)+\cdots+a_{2n}x_n(t)\\ &\vdots&\\ x_n'(t)&=&a_{n1}x_1(t)+\cdots+a_{nn}x_n(t).\end{array}\right.\]Here \(x_1(t),\ldots, x_n(t)\) are functions of one variable \(t\) and the coefficients \(a_{11},\ldots,a_{nn}\) are real constants. Such a system of first-order linear differential equations can be written in the form
\[\mathbf{x}'(t)=A\mathbf{x}(t)\quad\text{with}\quad\mathbf{x}(t)=\begin{pmatrix}x_1(t)\\\vdots\\x_n(t)\end{pmatrix}\quad\text{and}\quad A=\begin{pmatrix}a_{11}&\ldots&a_{1n}\\\vdots&&\vdots\\a_{n1}&\ldots&a_{nn}\end{pmatrix}.\]Such a system \(\mathbf{x}'(t)=A\mathbf{x}(t)\) is also called a continuous dynamical system. One can possibly add an initial condition \(\mathbf{x}(0)=\mathbf{x}_0\in\mathbb{R}^n\) to this. In that case it is called an initial-value problem:
\[\mathbf{x}'(t)=A\mathbf{x}(t)\quad\text{with}\quad\mathbf{x}(0)=\mathbf{x}_0\in\mathbb{R}^n.\]If \(A\) is a diagonal matrix, then we have an uncoupled system of differential equations (or a system of uncoupled differential equations):
\[\mathbf{x}'(t)=\begin{pmatrix}\lambda_1&0&\ldots&0\\0&\lambda_2&&0\\\vdots&&\ddots&\\0&0&\ldots&\lambda_n\end{pmatrix}\mathbf{x}(t) \quad\Longleftrightarrow\quad\left\{\begin{array}{lcl}x_1'(t)&=&\lambda_1x_1(t)\\x_2'(t)&=&\lambda_2x_2(t)\\&\vdots&\\x_n'(t)&=&\lambda_nx_n(t).\end{array}\right.\]The solution is easy: \(x_k(t)=c_ke^{\lambda_kt}\) with \(k=1,2,\ldots,n\). Here the constants \(c_k\in\mathbb{R}\) with \(k=1,2,\ldots,n\) are arbitrary. In vector form this can be written as:
\[\mathbf{x}(t)=c_1\begin{pmatrix}1\\0\\0\\\vdots\\0\end{pmatrix}e^{\lambda_1t}+c_2\begin{pmatrix}0\\1\\0\\\vdots\\0\end{pmatrix}e^{\lambda_2t} +\cdots+c_n\begin{pmatrix}0\\0\\\vdots\\0\\1\end{pmatrix}e^{\lambda_nt}.\]If \(A\) is not a diagonal matrix, then we have a coupled system of differential equations (or a system of coupled differential equations). Now we look for solutions of the form \(\mathbf{x}(t)=\mathbf{v}e^{\lambda t}\) of such a coupled system \(\mathbf{x}'(t)=A\mathbf{x}(t)\). Then we obtain that \(\mathbf{x}'(t)=\lambda\mathbf{v}e^{\lambda t}\) and therefore:
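The solution of an uncoupled system is easy to verify numerically. The following sketch (added here for illustration, with an assumed diagonal matrix and arbitrary constants) checks that \(x_k(t)=c_ke^{\lambda_kt}\) satisfies \(x_k'(t)=\lambda_kx_k(t)\) using a finite-difference derivative:

```python
import numpy as np

# Hypothetical uncoupled system: x_k'(t) = lambda_k * x_k(t)
lam = np.array([2.0, -1.0, 0.5])   # assumed diagonal entries lambda_1..lambda_3
c = np.array([1.0, 3.0, -2.0])     # arbitrary constants c_1..c_3

def x(t):
    # each component evolves independently: x_k(t) = c_k * exp(lambda_k * t)
    return c * np.exp(lam * t)

# check x'(t) = D x(t) at a sample time via a central finite difference
t0, h = 0.7, 1e-6
deriv = (x(t0 + h) - x(t0 - h)) / (2 * h)
assert np.allclose(deriv, lam * x(t0), atol=1e-4)
```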
\[\mathbf{x}'(t)=A\mathbf{x}(t)\quad\Longleftrightarrow\quad\lambda\mathbf{v}e^{\lambda t}=A\mathbf{v}e^{\lambda t}.\]Since \(e^{\lambda t}\neq0\) this implies that \(A\mathbf{v}=\lambda\mathbf{v}\). Hence: if \(\mathbf{x}(t)=\mathbf{v}e^{\lambda t}\) is a nontrivial solution of \(\mathbf{x}'(t)=A\mathbf{x}(t)\), then \(\mathbf{v}\neq\mathbf{0}\) is an eigenvector of \(A\) corresponding to the eigenvalue \(\lambda\).
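This observation can be checked numerically. The sketch below (using a hypothetical \(2\times2\) matrix, not taken from the text) computes an eigenpair with NumPy and verifies that \(\mathbf{x}(t)=\mathbf{v}e^{\lambda t}\) indeed satisfies \(\mathbf{x}'(t)=A\mathbf{x}(t)\):

```python
import numpy as np

# Assumed example matrix: if A v = lambda v, then x(t) = v e^{lambda t}
# should solve x'(t) = A x(t); we check this numerically.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)
lam = eigvals[0]
v = eigvecs[:, 0]

def x(t):
    return v * np.exp(lam * t)      # candidate solution v e^{lambda t}

t0, h = 0.3, 1e-6
deriv = (x(t0 + h) - x(t0 - h)) / (2 * h)   # finite-difference x'(t0)
assert np.allclose(deriv, A @ x(t0))
```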
It can be shown (in the theory of differential equations) that each system of differential equations of the form \(\mathbf{x}'(t)=A\mathbf{x}(t)\) with \(A\) an \(n\times n\) matrix has \(n\) linearly independent solutions \(\mathbf{x}_1(t),\ldots,\mathbf{x}_n(t)\). Then the general solution can be written in the form \(\mathbf{x}(t)=c_1\mathbf{x}_1(t)+\cdots+c_n\mathbf{x}_n(t)\) with \(c_1,\ldots,c_n\in\mathbb{R}\) arbitrary. These coefficients can be determined by an initial condition of the form \(\mathbf{x}(0)=\mathbf{x}_0\in\mathbb{R}^n\), such that the solution of such an initial-value problem \(\mathbf{x}'(t)=A\mathbf{x}(t)\) with \(\mathbf{x}(0)=\mathbf{x}_0\in\mathbb{R}^n\) is unique.
So for a system of the form \(\mathbf{x}'(t)=A\mathbf{x}(t)\) with \(A\) an \(n\times n\) matrix one needs to find \(n\) linearly independent solutions. Since there exist solutions of the form \(\mathbf{x}(t)=\mathbf{v}e^{\lambda t}\) with \(\mathbf{v}\) an eigenvector of \(A\), this certainly succeeds if \(A\) is diagonalizable, because then there exists a basis of \(\mathbb{R}^n\) that consists entirely of eigenvectors of \(A\). So then we have \(n\) linearly independent solutions of the form \(\mathbf{x}(t)=\mathbf{v}e^{\lambda t}\). If \(A\) is not diagonalizable, this might not succeed, and we have to find other solutions. These cases will not be considered here; this problem will be solved later in the course on Differential equations.
Example: Consider \(\mathbf{x}'(t)=A\mathbf{x}(t)\) with \(A=\begin{pmatrix}-2&-5\\1&4\end{pmatrix}\). Then we have:
\[\det(A-\lambda I)=\begin{vmatrix}-2-\lambda&-5\\1&4-\lambda\end{vmatrix}=\lambda^2-2\lambda-3=(\lambda-3)(\lambda+1).\]Further we have:
\[\lambda_1=3:\quad\begin{pmatrix}-5&-5\\1&1\end{pmatrix}\sim\begin{pmatrix}1&1\\0&0\end{pmatrix}\quad\Longrightarrow\quad \mathbf{v}_1=\begin{pmatrix}1\\-1\end{pmatrix}\]and
\[\lambda_2=-1:\quad\begin{pmatrix}-1&-5\\1&5\end{pmatrix}\sim\begin{pmatrix}1&5\\0&0\end{pmatrix}\quad\Longrightarrow\quad \mathbf{v}_2=\begin{pmatrix}5\\-1\end{pmatrix}.\]Hence the general solution is
\[\mathbf{x}(t)=c_1\mathbf{v}_1e^{\lambda_1t}+c_2\mathbf{v}_2e^{\lambda_2t}=c_1\begin{pmatrix}1\\-1\end{pmatrix}e^{3t} +c_2\begin{pmatrix}5\\-1\end{pmatrix}e^{-t},\quad c_1,c_2\in\mathbb{R}.\]Explicitly written this means:
\[\left\{\begin{array}{rcrr}x_1'(t)&=&-2x_1(t)&-5x_2(t)\\[2.5mm]x_2'(t)&=&x_1(t)&+4x_2(t)\end{array}\right.\quad\Longrightarrow\quad \left\{\begin{array}{rcrr}x_1(t)&=&c_1e^{3t}&+5c_2e^{-t}\\[2.5mm]x_2(t)&=&-c_1e^{3t}&-c_2e^{-t}.\end{array}\right.\]With for instance the initial condition \(\mathbf{x}(0)=\begin{pmatrix}7\\1\end{pmatrix}\) we then obtain:
\[c_1\begin{pmatrix}1\\-1\end{pmatrix}+c_2\begin{pmatrix}5\\-1\end{pmatrix}=\begin{pmatrix}7\\1\end{pmatrix}:\quad \left(\left.\begin{matrix}1&5\\-1&-1\end{matrix}\,\right|\,\begin{matrix}7\\1\end{matrix}\right)\sim \left(\left.\begin{matrix}1&5\\0&4\end{matrix}\,\right|\,\begin{matrix}7\\8\end{matrix}\right)\sim \left(\left.\begin{matrix}1&0\\0&1\end{matrix}\,\right|\,\begin{matrix}-3\\2\end{matrix}\right)\quad\Longrightarrow\quad c_1=-3\quad\text{and}\quad c_2=2.\]Hence the unique solution of the initial-value problem \(\mathbf{x}'(t)=\begin{pmatrix}-2&-5\\1&4\end{pmatrix}\mathbf{x}(t)\) and \(\mathbf{x}(0)=\begin{pmatrix}7\\1\end{pmatrix}\) is
\[\mathbf{x}(t)=-3\begin{pmatrix}1\\-1\end{pmatrix}e^{3t}+2\begin{pmatrix}5\\-1\end{pmatrix}e^{-t} =\begin{pmatrix}-3e^{3t}+10e^{-t}\\3e^{3t}-2e^{-t}\end{pmatrix}.\]For a diagonalizable matrix \(A\) we have: \(A=PDP^{-1}\) for some invertible matrix \(P\) and a diagonal matrix \(D\). Moreover we have:
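As a numerical sanity check of this worked example (added for illustration), the following sketch confirms that the solution found above satisfies both the differential equation and the initial condition:

```python
import numpy as np

# The worked example: A = [[-2,-5],[1,4]] with initial condition x(0) = (7,1).
A = np.array([[-2.0, -5.0],
              [1.0, 4.0]])

def x(t):
    # closed-form solution found above
    return np.array([-3*np.exp(3*t) + 10*np.exp(-t),
                      3*np.exp(3*t) - 2*np.exp(-t)])

assert np.allclose(x(0.0), [7.0, 1.0])          # initial condition
t0, h = 0.5, 1e-6
deriv = (x(t0 + h) - x(t0 - h)) / (2 * h)       # finite-difference x'(t0)
assert np.allclose(deriv, A @ x(t0), rtol=1e-5) # x'(t) = A x(t)
```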
\[P=\Bigg(\mathbf{v}_1\;\ldots\;\mathbf{v}_n\Bigg)\quad\text{and}\quad D=\text{diag}(\lambda_1,\ldots,\lambda_n)\quad\text{with}\quad A\mathbf{v}_k=\lambda_k\mathbf{v}_k,\quad k=1,2,\ldots,n.\]Now let \(\mathbf{x}(t)=P\mathbf{y}(t)\), then we have:
\[\mathbf{x}'(t)=A\mathbf{x}(t)\quad\Longleftrightarrow\quad P\mathbf{y}'(t)=AP\mathbf{y}(t)=PDP^{-1}P\mathbf{y}(t)=PD\mathbf{y}(t).\]Since \(P\) is invertible, we may (left) multiply by \(P^{-1}\) which implies that: \(\mathbf{x}'(t)=A\mathbf{x}(t)\quad\Longleftrightarrow\quad\mathbf{y}'(t)=D\mathbf{y}(t)\). The latter is an uncoupled system. This process is therefore called a decoupling of the system of differential equations. Then we have:
\[\mathbf{y}(t)=\begin{pmatrix}c_1e^{\lambda_1t}\\\vdots\\c_ne^{\lambda_nt}\end{pmatrix}\quad\Longrightarrow\quad \mathbf{x}(t)=P\mathbf{y}(t)=\Bigg(\mathbf{v}_1\;\ldots\;\mathbf{v}_n\Bigg)\begin{pmatrix}c_1e^{\lambda_1t}\\\vdots\\c_ne^{\lambda_nt}\end{pmatrix} =c_1\mathbf{v}_1e^{\lambda_1t}+\cdots+c_n\mathbf{v}_ne^{\lambda_nt}.\]If \(A\) has nonreal eigenvalues \(\lambda=\alpha\pm i\beta\) with \(\alpha,\beta\in\mathbb{R}\) and \(\beta\neq0\), then we have:
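The decoupling step can be illustrated in code. The sketch below (reusing the example matrix from the first example; any diagonalizable matrix works the same way) computes \(P\) and \(D\), checks \(A=PDP^{-1}\), and verifies that \(\mathbf{x}(t)=P\mathbf{y}(t)\) solves the original system:

```python
import numpy as np

# Decoupling sketch for the diagonalizable case A = P D P^{-1}
A = np.array([[-2.0, -5.0],
              [1.0, 4.0]])
eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors
D = np.diag(eigvals)

# A = P D P^{-1} up to floating-point error
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# substituting x = P y turns x' = A x into the uncoupled system y' = D y,
# whose solution is y_k(t) = c_k e^{lambda_k t}
c = np.array([1.0, 2.0])             # arbitrary constants

def x(t):
    return P @ (c * np.exp(eigvals * t))

t0, h = 0.2, 1e-6
deriv = (x(t0 + h) - x(t0 - h)) / (2 * h)
assert np.allclose(deriv, A @ x(t0), rtol=1e-5)
```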
\[e^{\lambda t}=e^{(\alpha\pm i\beta)t}=e^{\alpha t}e^{\pm i\beta t}=e^{\alpha t}\left(\cos(\beta t)\pm i\sin(\beta t)\right).\]The corresponding eigenvectors are complex conjugates. Hence: \(A\mathbf{v}=\lambda\mathbf{v}\) and \(A\overline{\mathbf{v}}=\overline{\lambda}\,\overline{\mathbf{v}}\). Now we can choose linear combinations of the form \(c_1\mathbf{v}e^{\lambda t}+c_2\overline{\mathbf{v}}e^{\overline{\lambda}t}\) with \(c_1,c_2\in\mathbb{C}\) that are real, such that we obtain two linearly independent (real) solutions. For these we can take the real and the imaginary part of \(\mathbf{v}e^{\lambda t}\).
Example: Consider \(\mathbf{x}'(t)=A\mathbf{x}(t)\) with \(A=\begin{pmatrix}5&-2\\1&3\end{pmatrix}\). Then we have: \(|A-\lambda I|=\begin{vmatrix}5-\lambda&-2\\1&3-\lambda\end{vmatrix}=\lambda^2-8\lambda+17=(\lambda-4)^2+1\). So the eigenvalues are \(\lambda=4\pm i\). For \(\lambda=4+i\) we now find:
\[\lambda=4+i:\quad\begin{pmatrix}1-i&-2\\1&-1-i\end{pmatrix}\quad\Longrightarrow\quad\mathbf{v}=\begin{pmatrix}1+i\\1\end{pmatrix}\;\text{(for example)}.\]Then we have:
\[\mathbf{v}e^{\lambda t}=\begin{pmatrix}1+i\\1\end{pmatrix}e^{(4+i)t}=\begin{pmatrix}1+i\\1\end{pmatrix}e^{4t}\left(\cos(t)+i\sin(t)\right) =\begin{pmatrix}\cos(t)-\sin(t)\\\cos(t)\end{pmatrix}e^{4t}+i\begin{pmatrix}\cos(t)+\sin(t)\\\sin(t)\end{pmatrix}e^{4t}.\]Then the general solution is:
\[\mathbf{x}(t)=c_1\begin{pmatrix}\cos(t)-\sin(t)\\\cos(t)\end{pmatrix}e^{4t}+c_2\begin{pmatrix}\cos(t)+\sin(t)\\\sin(t)\end{pmatrix}e^{4t}, \quad c_1,c_2\in\mathbb{R}.\]If \(A\) is a \(2\times2\) matrix, then the solutions of \(\mathbf{x}'(t)=A\mathbf{x}(t)\) can be drawn in the plane \(\mathbb{R}^2\). The graph of such a solution \(\mathbf{x}(t)\) with \(t\geq0\) is called a trajectory of the continuous dynamical system \(\mathbf{x}'(t)=A\mathbf{x}(t)\). Such a trajectory depends on the starting point \(\mathbf{x}(0)=\mathbf{x}_0\in\mathbb{R}^2\).
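A quick numerical check (added for illustration) confirms that the real and imaginary parts of \(\mathbf{v}e^{\lambda t}\) found in this example each solve the system:

```python
import numpy as np

# The example A = [[5,-2],[1,3]] with lambda = 4 + i: the real and imaginary
# parts of v e^{lambda t} should each satisfy x'(t) = A x(t).
A = np.array([[5.0, -2.0],
              [1.0, 3.0]])

def x1(t):  # real part of v e^{lambda t}
    return np.exp(4*t) * np.array([np.cos(t) - np.sin(t), np.cos(t)])

def x2(t):  # imaginary part of v e^{lambda t}
    return np.exp(4*t) * np.array([np.cos(t) + np.sin(t), np.sin(t)])

t0, h = 0.4, 1e-6
for sol in (x1, x2):
    deriv = (sol(t0 + h) - sol(t0 - h)) / (2 * h)   # finite-difference x'(t0)
    assert np.allclose(deriv, A @ sol(t0), rtol=1e-5)
```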
Let \(A\) be diagonalizable with eigenvectors \(\mathbf{v}_1\) and \(\mathbf{v}_2\) corresponding to the eigenvalues \(\lambda_1\) and \(\lambda_2\) respectively. Then the behaviour of the solutions \(\mathbf{x}(t)=c_1\mathbf{v}_1e^{\lambda_1t}+c_2\mathbf{v}_2e^{\lambda_2t}\) with \(t\geq0\) is determined by the eigenvalues.
If \(\lambda_1 < 0\) and \(\lambda_2 < 0\), then all solutions tend toward the origin for \(t\to\infty\). In that case the origin is called an attractor or sink of the dynamical system.
If \(\lambda_1 > 0\) and \(\lambda_2 > 0\), then all solutions tend to infinity (away from the origin) for \(t\to\infty\). In that case the origin is called a repeller or source of the dynamical system.
If \(\lambda_1 > 0\) and \(\lambda_2 < 0\), then the origin is called a saddle point of the dynamical system. Some trajectories move toward the origin, while other trajectories tend to infinity (away from the origin) depending on the initial condition \(\mathbf{x}(0)=\mathbf{x}_0\).
In the case of nonreal eigenvalues \(\lambda=\alpha\pm i\beta\) with \(\alpha,\beta\in\mathbb{R}\) and \(\beta\neq0\), the trajectories have the form of a spiral and the origin is called a spiral point of the dynamical system. If the real part \(\alpha\) of both complex conjugate eigenvalues is negative, then these trajectories move in the direction of the origin; if the real part \(\alpha\) is positive, then they tend to infinity (away from the origin).
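The classification above can be summarized in a small helper function (a hypothetical sketch written for illustration; the function name is not from the text):

```python
import numpy as np

def classify_origin(A):
    """Classify the origin of x'(t) = A x(t) for a 2x2 matrix A,
    following the eigenvalue cases described above."""
    eigvals = np.linalg.eigvals(np.asarray(A, dtype=float))
    if np.max(np.abs(eigvals.imag)) > 1e-12:
        return "spiral point"        # nonreal eigenvalues alpha +/- i beta
    l1, l2 = eigvals.real
    if l1 < 0 and l2 < 0:
        return "attractor"           # all solutions tend to the origin
    if l1 > 0 and l2 > 0:
        return "repeller"            # all solutions tend away from the origin
    if l1 * l2 < 0:
        return "saddle point"        # behaviour depends on the initial condition
    return "degenerate"              # a zero eigenvalue: not covered above

# the two example matrices from this text
assert classify_origin([[-2.0, -5.0], [1.0, 4.0]]) == "saddle point"
assert classify_origin([[5.0, -2.0], [1.0, 3.0]]) == "spiral point"
```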
Example: Consider \(\mathbf{x}'(t)=A\mathbf{x}(t)\) with \(A=\begin{pmatrix}1&2&2\\-1&3&3\\0&-2&-1\end{pmatrix}\). Earlier we have seen that the eigenvalues are: \(\lambda=1\) and \(\lambda=1\pm2i\). The corresponding eigenspaces are:
\[\text{E}_1=\text{Span}\left\{\begin{pmatrix}1\\-1\\1\end{pmatrix}\right\},\quad\text{E}_{1+2i}=\text{Span}\left\{\begin{pmatrix}1\\1+i\\-1\end{pmatrix}\right\} \quad\text{and}\quad\text{E}_{1-2i}=\text{Span}\left\{\begin{pmatrix}1\\1-i\\-1\end{pmatrix}\right\}.\]Now we have using \(e^{(1+2i)t}=e^te^{2it}=e^t\left(\cos(2t)+i\sin(2t)\right)\):
\[\begin{pmatrix}1\\1+i\\-1\end{pmatrix}e^t\left(\cos(2t)+i\sin(2t)\right)=\begin{pmatrix}\cos(2t)\\\cos(2t)-\sin(2t)\\-\cos(2t)\end{pmatrix}e^t +i\begin{pmatrix}\sin(2t)\\\cos(2t)+\sin(2t)\\-\sin(2t)\end{pmatrix}e^t.\]Then the general solution is:
\[\mathbf{x}(t)=c_1\begin{pmatrix}1\\-1\\1\end{pmatrix}e^t+c_2\begin{pmatrix}\cos(2t)\\\cos(2t)-\sin(2t)\\-\cos(2t)\end{pmatrix}e^t +c_3\begin{pmatrix}\sin(2t)\\\cos(2t)+\sin(2t)\\-\sin(2t)\end{pmatrix}e^t,\quad c_1,c_2,c_3\in\mathbb{R}.\]

Last modified on April 5, 2021
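As a final check (added for illustration), the sketch below verifies numerically that each of the three real solutions of the \(3\times3\) example satisfies \(\mathbf{x}'(t)=A\mathbf{x}(t)\):

```python
import numpy as np

# The 3x3 example matrix with eigenvalues 1 and 1 +/- 2i.
A = np.array([[1.0, 2.0, 2.0],
              [-1.0, 3.0, 3.0],
              [0.0, -2.0, -1.0]])

def x1(t):  # solution from the real eigenvalue lambda = 1
    return np.exp(t) * np.array([1.0, -1.0, 1.0])

def x2(t):  # real part of v e^{(1+2i)t}
    return np.exp(t) * np.array([np.cos(2*t),
                                 np.cos(2*t) - np.sin(2*t),
                                 -np.cos(2*t)])

def x3(t):  # imaginary part of v e^{(1+2i)t}
    return np.exp(t) * np.array([np.sin(2*t),
                                 np.cos(2*t) + np.sin(2*t),
                                 -np.sin(2*t)])

t0, h = 0.3, 1e-6
for sol in (x1, x2, x3):
    deriv = (sol(t0 + h) - sol(t0 - h)) / (2 * h)   # finite-difference x'(t0)
    assert np.allclose(deriv, A @ sol(t0), rtol=1e-5, atol=1e-6)
```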