Differential Polynomials

We are officially asking harder questions. How do we solve differential equations that look like the following?

\displaystyle a \frac{\mathrm d^2 y}{\mathrm dx^2} + b \frac{\mathrm d y}{\mathrm dx} + cy = Q(x)

We have a straightforward result if Q = 0. The fun arises when Q \neq 0. For now we will assume Q is differentiable, but eventually we will encounter more bizarre types of Q that require more exotic techniques.

How can we approach this systematically? What we will do is present a more compact version of what is known as the method of undetermined coefficients, but using a new tool called the inverse-\mathcal D operator. What motivates such a definition? Well, recall that differentiation is a linear transformation in the following sense:

\displaystyle \frac{\mathrm d}{\mathrm dx}(f + g) = \frac{\mathrm d}{\mathrm dx}(f) + \frac{\mathrm d}{\mathrm dx}(g), \quad \frac{\mathrm d}{\mathrm dx}(kf) = k \frac{\mathrm d}{\mathrm dx}(f).

We can then interpret \mathcal D as an operator whose input is a function f and whose output is another function \mathcal D(f). We define

\displaystyle \mathcal D(f):= \frac{\mathrm d}{\mathrm dx}(f)

so that the linearity property of \displaystyle \mathcal D \equiv \frac{\mathrm d}{\mathrm dx} can be written as

\mathcal D(f + g) = \mathcal D(f) + \mathcal D(g), \quad \mathcal D(kf) = k \mathcal D(f).

Definition 1. For any n \in \mathbb N_0,

\displaystyle \mathcal D^n(f) = \begin{cases} f, & n = 0, \\ \mathcal D(\mathcal D^{n-1}(f)), & n > 0. \end{cases}

Example 1. For any k \in \mathbb C \supseteq \mathbb R, the following hold:

\displaystyle \begin{aligned} \mathcal D (e^{kx}) &= k e^{kx},\\ \mathcal D^2 (\sin kx) &= -k^2 \sin kx,\\ \mathcal D^2 (\cos kx) &= -k^2 \cos kx.\end{aligned}

Furthermore, for any n \in \mathbb N_0, \mathcal D^n(e^{kx}) = k^n e^{kx}.
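These identities are easy to sanity-check with a computer algebra system. Here is a minimal sketch using Python's sympy library (my choice of tool, not part of the original discussion):

```python
import sympy as sp

x, k = sp.symbols('x k')
n = 4  # try any non-negative integer

# D^n(e^{kx}) = k^n e^{kx}
assert sp.simplify(sp.diff(sp.exp(k*x), x, n) - k**n * sp.exp(k*x)) == 0

# D^2(sin kx) = -k^2 sin kx and D^2(cos kx) = -k^2 cos kx
assert sp.simplify(sp.diff(sp.sin(k*x), x, 2) + k**2 * sp.sin(k*x)) == 0
assert sp.simplify(sp.diff(sp.cos(k*x), x, 2) + k**2 * sp.cos(k*x)) == 0
```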

Using \mathcal D-notation, the original differential equation can be simplified as follows

\begin{aligned} Q(x) &= a \frac{\mathrm d^2 y}{\mathrm dx^2} + b \frac{\mathrm d y}{\mathrm dx} + cy \\ &= a \mathcal D^2 (y) + b \mathcal D (y) + cy \\ &= (a \mathcal D^2 + b \mathcal D + c)y, \end{aligned}

where the last line follows from adding operators according to Definition 2 below.

Definition 2. Let p(x) = a_0 + a_1x + \cdots + a_nx^n, a_n \neq 0 be a polynomial with real coefficients a_i. Define the differential polynomial by

p(\mathcal D) y \equiv (p(\mathcal D))y := a_0 y + a_1 \mathcal D(y) + \cdots + a_n \mathcal D^n(y).

By the definition of adding functions,

p(\mathcal D) \equiv a_0 + a_1 \mathcal D + \cdots + a_n \mathcal D^n.
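To make Definition 2 concrete on a computer, here is a minimal sketch in Python with sympy (the helper name apply_diff_poly is my own, hypothetical) that applies a differential polynomial, specified by its coefficient list, to a function:

```python
import sympy as sp

x = sp.symbols('x')

def apply_diff_poly(coeffs, f):
    """Apply p(D) = a_0 + a_1 D + ... + a_n D^n to the expression f,
    where coeffs = [a_0, a_1, ..., a_n]."""
    return sum(a * sp.diff(f, x, i) for i, a in enumerate(coeffs))

# For instance, (D^2 - 5D + 6)(sin x) = -sin x - 5 cos x + 6 sin x = 5 sin x - 5 cos x
result = apply_diff_poly([6, -5, 1], sp.sin(x))
assert sp.simplify(result - (5*sp.sin(x) - 5*sp.cos(x))) == 0
```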

Theorem 1. For any polynomial p(x), p(\mathcal D) is a linear transformation in the following sense:

p(\mathcal D)(f+g) = p(\mathcal D)(f) + p(\mathcal D)(g),\quad p(\mathcal D)(kf) = k p(\mathcal D)(f).

Proof. The properties hold for operators of the form {\mathcal D}^k. Let \mathcal D_1, \mathcal D_2 denote operators that satisfy the linearity properties. Checking that scaling and adding such operators preserves linearity then yields the result:

\begin{aligned} (c \mathcal D_1)(f+g) &= c(\mathcal D_1(f)+\mathcal D_1(g)) \\ &= (c \mathcal D_1) (f) + (c \mathcal D_1) (g),\\ (\mathcal D_1 + \mathcal D_2)(f+g) &= (\mathcal D_1)(f+g) + (\mathcal D_2)(f+g) \\ &= \mathcal D_1(f) + \mathcal D_1(g) + \mathcal D_2(f) + \mathcal D_2(g) \\ &= \mathcal D_1(f) + \mathcal D_2(f) + \mathcal D_1(g) + \mathcal D_2(g) \\ &= (\mathcal D_1 + \mathcal D_2)(f) + (\mathcal D_1 + \mathcal D_2)(g), \end{aligned}

and the corresponding checks for (c \mathcal D_1)(kf) and (\mathcal D_1 + \mathcal D_2)(kf) are analogous. Since \displaystyle p(\mathcal D) = a_0 + a_1\mathcal D + \cdots + a_n\mathcal D^n is obtained from the operators \mathcal D^k by scaling and adding, the result follows.
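As a quick sanity check of Theorem 1 for one specific p and one specific pair of functions, the following sketch reuses the hypothetical apply_diff_poly helper from above (sympy again):

```python
import sympy as sp

x, c = sp.symbols('x c')
f = sp.sin(3*x)
g = x**2 * sp.exp(x)
p = [6, -5, 1]  # p(D) = D^2 - 5D + 6

# Additivity: p(D)(f + g) = p(D)(f) + p(D)(g)
assert sp.simplify(apply_diff_poly(p, f + g)
                   - apply_diff_poly(p, f) - apply_diff_poly(p, g)) == 0

# Homogeneity: p(D)(c f) = c p(D)(f), with c constant in x
assert sp.simplify(apply_diff_poly(p, c*f) - c*apply_diff_poly(p, f)) == 0
```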

Remark 1. These properties can be generalised and studied in more detail in an undergraduate linear algebra course.

Thus, we are interested in solving differential equations of the form

p(\mathcal D)(y) = Q(x),

where p(x) = ax^2 + bx + c for real coefficients a, b, c. We understand the situation when Q = 0, and call the general solution in that case the complementary function, denoted y_{\mathrm C}, so that

p(\mathcal D)(y_{\mathrm C}) = 0.

It turns out that if we can find just one function y_{\mathrm P} such that p(\mathcal D)(y_{\mathrm P}) = Q(x), then we can find all functions y such that p(\mathcal D)(y) = Q(x). This approach isn’t unique to differential polynomials, but applies to any linear transformation.

Nevertheless, we will state this result in the context of differential polynomials, since any broader result belongs to linear algebra, rather than this topic.

Theorem 2. Suppose we have found the general solution to the equation p(\mathcal D)(y_{\mathrm C}) = 0, called the complementary function y_{\mathrm C}, and one particular integral y_{\mathrm P} such that p(\mathcal D)(y_{\mathrm P}) = Q(x). Then the general solution to the equation p(\mathcal D)(y) = Q(x) is given by y = y_{\mathrm P} + y_{\mathrm C}.

Proof. Suppose p(\mathcal D)(y) = Q(x). Since p(\mathcal D)(y_{\mathrm P}) = Q(x) and p(\mathcal D) is a linear transformation,

\displaystyle p(\mathcal D)(y-y_{\mathrm P}) = p(\mathcal D)(y) - p(\mathcal D)(y_{\mathrm P}) = Q(x) - Q(x) = 0.

Since every solution u of p(\mathcal D)(u) = 0 is of the form y_{\mathrm C}, we have

y - y_{\mathrm P} = y_{\mathrm C} \quad \Rightarrow \quad y = y_{\mathrm P} + y_{\mathrm C}.

Example 2. Find the general solution to the differential equation \mathcal D(y) = x^2.

Solution. For the complementary function, we want to find all y_{\mathrm C} such that \displaystyle \mathcal D(y_{\mathrm C}) = \frac{\mathrm d}{\mathrm dx}(y_{\mathrm C}) = 0. This means y_{\mathrm C} = C for some real constant C.

Recalling that integration and differentiation are inverses of each other,

\displaystyle \mathcal D\left( \frac 13 x^3 \right) = \frac 13 \cdot 3x^2 = x^2.

Thus, we have found one particular integral y_{\mathrm P} = x^3/3. Note that this choice is by no means unique.

The general solution, therefore, is y = y_{\mathrm P} + y_{\mathrm C} = \frac 13 x^3 + C, which is what we would have obtained by solving

\displaystyle \frac{\mathrm d}{\mathrm dx}(y) = \mathcal D(y) = x^2 \quad \iff \quad y = \int x^2\, \mathrm dx

in the traditional manner.
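For what it is worth, a computer algebra system reaches the same conclusion directly. A sketch with sympy's dsolve (assuming its standard interface):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve D(y) = x^2
print(sp.dsolve(sp.Eq(y(x).diff(x), x**2), y(x)))  # y(x) = C1 + x**3/3
```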

Example 3. Find the general solution to the differential equation

\displaystyle (\mathcal D^2 - 5 \mathcal D + 6)(y) = e^x.

Solution. We first solve the equation (\mathcal D^2 - 5 \mathcal D + 6)(y) = 0. The characteristic equation m^2 - 5m + 6 = 0 yields the real and distinct roots m = 2, 3. Hence,

y_{\mathrm C} = C_1 e^{2x} + C_2 e^{3x}.

For y_{\mathrm P}, we just need one function that isn’t a linear combination of e^{2x} and e^{3x}. Our educated guess (the technical term in the business is an Ansatz) here is y_{\mathrm P} = Ae^{kx}, where we will determine the presently-undetermined constants A and k. We would like

(\mathcal D^2 - 5\mathcal D + 6)(y_{\mathrm P}) = e^x.

With y_{\mathrm P} = Ae^{kx}, the left-hand side simplifies to

\begin{aligned} (\mathcal D^2 - 5\mathcal D + 6)(y_{\mathrm P}) &= \mathcal D^2 (y_{\mathrm P}) - 5 \cdot \mathcal D (y_{\mathrm P}) + 6 \cdot y_{\mathrm P} \\ &= A \cdot k^2 e^{kx} - 5 \cdot Ake^{kx} + 6 \cdot Ae^{kx} \\ &= A(k^2 - 5k + 6)e^{kx}.\end{aligned}

This yields the equation

A(k^2 - 5k + 6)e^{kx} = e^x.

Hence, what would a good choice of A and k be? Well, we want the exponential terms to match, so k = 1. Furthermore, we want the coefficients of the terms on both sides to match, so we require

A(k^2 - 5k + 6) = 1 \quad \Rightarrow \quad A = 1/2.

Hence, y_{\mathrm P} = \frac 12 e^x will be a good choice. Combining our results, the general solution will then be

\displaystyle y =y_{\mathrm P} + y_{\mathrm C} = \frac 12 e^x + C_1 e^{2x} + C_2 e^{3x}.
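A short symbolic check of both the particular integral and the general solution (a sympy sketch, not part of the original working):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Verify that y_P = e^x / 2 satisfies (D^2 - 5D + 6)(y_P) = e^x
yP = sp.exp(x) / 2
assert sp.simplify(yP.diff(x, 2) - 5*yP.diff(x) + 6*yP - sp.exp(x)) == 0

# dsolve recovers y = e^x/2 + C1 e^{2x} + C2 e^{3x}
print(sp.dsolve(sp.Eq(y(x).diff(x, 2) - 5*y(x).diff(x) + 6*y(x), sp.exp(x)), y(x)))
```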

Example 4. Find the general solution to the differential equation

\displaystyle (\mathcal D^2 - 5\mathcal D + 6)(y) = e^{2x}.

Solution. By Example 3, we still have the complementary function

y_{\mathrm C} = C_1 e^{2x} + C_2 e^{3x}.

If we use the same Ansatz y_{\mathrm P} = Ae^{kx}, then the required equation

(\mathcal D^2 - 5\mathcal D + 6)(y_{\mathrm P}) = e^{2x}

simplifies to

A(k^2 - 5k + 6)e^{kx} = e^{2x}.

The same line of reasoning will yield k = 2, but the coefficients won’t match, since A(k^2 - 5k + 6) = 0 \neq 1. Clearly, the Ansatz y_{\mathrm P} = Ae^{kx} won’t work here.
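You can see the obstruction symbolically: applying the operator to Ae^{2x} annihilates it entirely, so no choice of A can produce e^{2x}. A one-line check (sympy, with A a symbolic constant):

```python
import sympy as sp

x, A = sp.symbols('x A')
yP = A * sp.exp(2*x)
print(sp.simplify(yP.diff(x, 2) - 5*yP.diff(x) + 6*yP))  # prints 0
```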

As if it were some magical trick handed down from on high, let’s try the Ansatz y_{\mathrm P} = Axe^{kx}. Differentiating using the product rule,

\begin{aligned} \mathcal D(y_{\mathrm P}) &= Ae^{kx} + Akxe^{kx} = Ae^{kx} + k \cdot y_{\mathrm P}, \\ \mathcal D^2(y_{\mathrm P}) &= A\cdot ke^{kx} + k \cdot \mathcal D(y_{\mathrm P}) = 2Ake^{kx} + k^2 y_{\mathrm P}. \end{aligned}

So the required equation simplifies to

\begin{aligned} e^{2x} &=  (\mathcal D^2 - 5\mathcal D + 6)(y_{\mathrm P}) \\ &= (2Ake^{kx} + k^2 y_{\mathrm P}) - 5(Ae^{kx} + k \cdot y_{\mathrm P}) + 6 \cdot y_{\mathrm P} \\ &= (2k - 5)Ae^{kx} + (k^2 - 5k + 6) y_{\mathrm P}. \end{aligned}

Now things look much nicer. Setting k = 2 will still yield k^2 - 5k + 6 = 0. But now there is a nonzero term that will solve the problem, since

e^{2x} = (2 \cdot 2 - 5)Ae^{2x} = (-A)e^{2x}.

The appropriate choice for A is then -A = 1 \iff A = -1. Hence, we will choose y_{\mathrm P} = -xe^{2x} to satisfy our requirements, yielding the general solution

y = y_{\mathrm P} + y_{\mathrm C} = -xe^{2x} + C_1 e^{2x} + C_2 e^{3x}.
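As in Example 3, a quick symbolic check (sympy sketch) confirms the particular integral; dsolve returns an equivalent form of the general solution:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Verify that y_P = -x e^{2x} satisfies (D^2 - 5D + 6)(y_P) = e^{2x}
yP = -x * sp.exp(2*x)
assert sp.simplify(yP.diff(x, 2) - 5*yP.diff(x) + 6*yP - sp.exp(2*x)) == 0

# The general solution, up to relabelling of the arbitrary constants
print(sp.dsolve(sp.Eq(y(x).diff(x, 2) - 5*y(x).diff(x) + 6*y(x), sp.exp(2*x)), y(x)))
```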

Theorem 2 therefore provides us with an underlying strategy for solving second-order differential equations with constant coefficients. In fact, we have demonstrated the simplest form of the method of undetermined coefficients by determining the coefficients in Examples 3 and 4. However, we would like to accomplish this goal in a more systematic manner; the working in Example 4, especially, is exceedingly tedious. The challenge arises from selecting a y_{\mathrm P} that works: can we streamline this process?

The trick is to identify and tabulate the common functions for y_{\mathrm P} and ensure that they satisfy our requirements. That is where inverse-\mathcal D operators come in clutch.

—Joel Kindiak, 30 Jan 25, 1915H
