Homogeneous systems of linear differential equations

Superposition principle

Theorem: If \(\mathbf{x}_1(t)\) and \(\mathbf{x}_2(t)\) are solutions of the homogeneous system

\[\mathbf{x}'(t)=A(t)\mathbf{x}(t),\tag1\]

then the linear combination \(\mathbf{x}(t)=c_1\mathbf{x}_1(t)+c_2\mathbf{x}_2(t)\) is also a solution for each \(c_1,c_2\in\mathbb{R}\).

Proof: This follows easily by substitution of \(\mathbf{x}(t)=c_1\mathbf{x}_1(t)+c_2\mathbf{x}_2(t)\):

\[\mathbf{x}'(t)=c_1\mathbf{x}_1'(t)+c_2\mathbf{x}_2'(t)=c_1A(t)\mathbf{x}_1(t)+c_2A(t)\mathbf{x}_2(t) =A(t)\left(c_1\mathbf{x}_1(t)+c_2\mathbf{x}_2(t)\right)=A(t)\mathbf{x}(t).\]
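As a quick sanity check of the superposition principle, here is a minimal numerical sketch; the matrix \(A\) and the two explicit solutions are illustrative assumptions, not taken from the text:

    import numpy as np

    # Illustrative matrix: A = [[0, 1], [1, 0]] has eigenpairs
    # (1, (1, 1)^T) and (-1, (1, -1)^T), so x1(t) = (1, 1)^T e^t and
    # x2(t) = (1, -1)^T e^{-t} both solve x' = A x.
    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    def x1(t):
        return np.array([1.0, 1.0]) * np.exp(t)

    def x2(t):
        return np.array([1.0, -1.0]) * np.exp(-t)

    c1, c2 = 2.0, -3.0
    x = lambda t: c1 * x1(t) + c2 * x2(t)

    t, h = 0.7, 1e-6
    deriv = (x(t + h) - x(t - h)) / (2.0 * h)   # central difference for x'(t)
    print(np.allclose(deriv, A @ x(t)))         # True: the combination solves x' = A x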

Suppose that \(A(t)\) is an \(n\times n\) matrix and that \(\mathbf{x}_1(t),\ldots,\mathbf{x}_n(t)\) are solutions of (1). Then \(\{\mathbf{x}_1(t),\ldots,\mathbf{x}_n(t)\}\) is linearly independent if

\[c_1\mathbf{x}_1(t)+\cdots+c_n\mathbf{x}_n(t)=\mathbf{0}\quad\Longrightarrow\quad c_1=0,\;\ldots,\;c_n=0.\]

Hence we have:

\[\Bigg(\mathbf{x}_1(t)\;\ldots\;\mathbf{x}_n(t)\Bigg)\begin{pmatrix}c_1\\\vdots\\c_n\end{pmatrix}=\begin{pmatrix}0\\\vdots\\0\end{pmatrix} \quad\Longrightarrow\quad\begin{pmatrix}c_1\\\vdots\\c_n\end{pmatrix}=\begin{pmatrix}0\\\vdots\\0\end{pmatrix}.\]

This holds if and only if the determinant of this matrix is nonzero: \(W(\mathbf{x}_1,\ldots,\mathbf{x}_n)(t):=\Bigg|\mathbf{x}_1(t)\;\ldots\;\mathbf{x}_n(t)\Bigg|\neq0\).

This is called the Wronskian determinant or the Wronskian of the solutions \(\mathbf{x}_1(t),\ldots,\mathbf{x}_n(t)\).
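For instance, reusing the illustrative solutions from the sketch above, the Wronskian can be computed symbolically (sympy is assumed to be available):

    import sympy as sp

    t = sp.symbols('t')
    # Columns are the illustrative solutions x1(t) = (1, 1)^T e^t
    # and x2(t) = (1, -1)^T e^{-t} of x' = A x with A = [[0, 1], [1, 0]].
    X = sp.Matrix([[sp.exp(t),  sp.exp(-t)],
                   [sp.exp(t), -sp.exp(-t)]])
    W = sp.simplify(X.det())
    print(W)   # -2: nonzero for every t, so the solutions are linearly independent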

Abel's theorem

Theorem: If \(A(t)\) is an \(n\times n\) matrix and \(\mathbf{x}_1(t),\ldots,\mathbf{x}_n(t)\) are solutions of (1) on an open interval \(I\), then we have for all \(t\in I\):

\[W(t):=W(\mathbf{x}_1,\ldots,\mathbf{x}_n)(t)=c\cdot e^{\int(a_{11}(t)+\cdots+a_{nn}(t))\,dt}.\]

Here \(a_{11}(t)+\cdots+a_{nn}(t)\) is the trace of the matrix \(A(t)\).

Hence either \(W(t)=0\) for all \(t\in I\) (if \(c=0\)) or \(W(t)\neq0\) for all \(t\in I\) (if \(c\neq0\)).

The proof is skipped; see exercise 6 of §7.4.
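A symbolic check of the theorem for an illustrative constant matrix (chosen here, not from the text) with \(\operatorname{tr}A=3\), so the prediction is \(W(t)=c\,e^{3t}\):

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[1, 0],
                   [0, 2]])                         # illustrative; tr A = 3
    # Two solutions of x' = A x: (e^t, 0)^T and (0, e^{2t})^T
    X = sp.Matrix([[sp.exp(t), 0],
                   [0, sp.exp(2 * t)]])
    W = X.det()                                     # e^{3t}
    predicted = sp.exp(sp.integrate(A.trace(), t))  # e^{3t}, i.e. c = 1
    print(sp.simplify(W - predicted) == 0)          # True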

Constant coefficients

Consider the homogeneous system with constant coefficients

\[\mathbf{x}'(t)=A\mathbf{x}(t).\]

Now let \(\mathbf{x}(t)=\mathbf{v}e^{\lambda t}\); then \(\mathbf{x}'(t)=\lambda\mathbf{v}e^{\lambda t}\). Hence:

\[\mathbf{x}'(t)=A\mathbf{x}(t)\quad\Longleftrightarrow\quad\lambda\mathbf{v}e^{\lambda t}=A\mathbf{v}e^{\lambda t}\]

and since \(e^{\lambda t}\neq0\) for all \(t\) this implies that \(A\mathbf{v}=\lambda\mathbf{v}\), or equivalently: \(\lambda\) is an eigenvalue of \(A\) and \(\mathbf{v}\neq\mathbf{0}\) is a corresponding eigenvector.
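A minimal numerical sketch of this eigenvalue method (the matrix below is an illustrative assumption): numpy.linalg.eig returns the eigenpairs, from which candidate solutions \(\mathbf{v}e^{\lambda t}\) are assembled and checked against the system.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])                    # illustrative; eigenvalues 5 and 2
    eigvals, eigvecs = np.linalg.eig(A)           # columns of eigvecs are eigenvectors

    t, h = 0.5, 1e-6
    for lam, v in zip(eigvals, eigvecs.T):
        x = lambda s, lam=lam, v=v: v * np.exp(lam * s)   # x(t) = v e^{λt}
        deriv = (x(t + h) - x(t - h)) / (2.0 * h)         # central difference
        print(np.allclose(deriv, A @ x(t)))               # True for each eigenpair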

If \(A\) is a \(2\times2\) matrix, then there are three possibilities:

  1. \(\lambda_1,\lambda_2\in\mathbb{R}\) with \(\lambda_1\neq\lambda_2\);

  2. \(\lambda_1,\lambda_2\in\mathbb{R}\) with \(\lambda_1=\lambda_2\);

  3. \(\lambda_1,\lambda_2\notin\mathbb{R}\): \(\lambda_{1,2}=\alpha\pm i\beta\) with \(\beta\neq0\).

The first and the last possibility have already been treated extensively in Linear Algebra. Here we will mostly consider the second possibility, where the matrix \(A\) is not diagonalizable; the matrix \(A\) is then called defective.
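As a brief illustration of the defective case (the matrix is chosen here for illustration), consider

\[A=\begin{pmatrix}1&1\\0&1\end{pmatrix}.\]

It has the double eigenvalue \(\lambda=1\), but only one independent eigenvector \(\mathbf{v}=\begin{pmatrix}1\\0\end{pmatrix}\). Besides \(\mathbf{x}_1(t)=\mathbf{v}e^{t}\), a second independent solution exists of the form \(\mathbf{x}_2(t)=(\mathbf{v}t+\mathbf{w})e^{t}\), where \(\mathbf{w}\) is a generalized eigenvector satisfying \((A-I)\mathbf{w}=\mathbf{v}\), for instance \(\mathbf{w}=\begin{pmatrix}0\\1\end{pmatrix}\). Substitution confirms this: \(\mathbf{x}_2'(t)=(\mathbf{v}t+\mathbf{v}+\mathbf{w})e^{t}=A\mathbf{x}_2(t)\).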

Phase plane

In the case of a \(2\times2\) matrix \(A\) for \(\mathbf{x}'(t)=A\mathbf{x}(t)\) one can draw the trajectories of the solutions in the \(x_1,x_2\)-plane. This is called the phase plane. See also: Linear Algebra.

If both eigenvalues \(\lambda_1\) and \(\lambda_2\) of the \(2\times2\) matrix \(A\) are real, then the origin is called a node of the system if both eigenvalues have the same sign. Otherwise, the origin is called a saddle point of the system. In fact we distinguish the following possibilities for the origin (illustrated in the sketch after this list):

  1. \(\lambda_2 < \lambda_1 < 0\): the origin is called an attractor or a sink;

  2. \(\lambda_2 < 0 < \lambda_1\): the origin is called a saddle point;

  3. \(0 < \lambda_2 < \lambda_1\): the origin is called a repeller or a source.
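The following plotting sketch (the matrix and initial points are illustrative assumptions, with real, distinct, negative eigenvalues as in case 1) draws a few trajectories of \(\mathbf{x}'(t)=A\mathbf{x}(t)\) in the phase plane:

    import numpy as np
    import matplotlib.pyplot as plt

    A = np.array([[-3.0, 0.0],
                  [0.0, -1.0]])                   # λ2 = -3 < λ1 = -1 < 0: a sink
    eigvals, eigvecs = np.linalg.eig(A)

    t = np.linspace(0.0, 3.0, 200)
    for x0 in [(1, 1), (1, -1), (-1, 1), (-1, -1), (0.5, 1), (-0.5, -1)]:
        # Expand x0 in the eigenvector basis: x(t) = c1 v1 e^{λ1 t} + c2 v2 e^{λ2 t}
        c = np.linalg.solve(eigvecs, np.array(x0, dtype=float))
        x = (eigvecs * c) @ np.exp(np.outer(eigvals, t))
        plt.plot(x[0], x[1])

    plt.xlabel('$x_1$')
    plt.ylabel('$x_2$')
    plt.title('Trajectories approaching the origin (attractor/sink)')
    plt.show()

The same sketch, with the matrix replaced by one having eigenvalues of mixed sign or both positive, produces a saddle point or a source, respectively.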


