We are officially asking harder questions. How do we solve differential equations that look like the following?

$$a\frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy = f(x)$$
We have a straightforward result if $f(x) = 0$. The fun arises when $f(x) \neq 0$. The topics we handle now will assume that $f$ is differentiable, but eventually we will encounter bizarre types of $f$ that require more exotic techniques.
How can we approach this systematically? What we will do is present a more compact version of what is known as the method of undetermined coefficients, but using a new tool called the inverse-$D$ operator. What motivates such a definition? Well, recall that differentiation is a linear transformation in the following sense: for any constants $a, b$ and differentiable functions $f, g$,

$$\frac{d}{dx}\bigl(af(x) + bg(x)\bigr) = a\frac{df}{dx} + b\frac{dg}{dx}.$$
We can then interpret $\frac{d}{dx}$ as an operator whose input is a function $f$ and whose output is another function $\frac{df}{dx}$. We define $Df = \frac{df}{dx}$, so that the linearity property of $D$ can be written as

$$D(af + bg) = a\,Df + b\,Dg.$$
Definition 1. For any positive integer $n$ and any $n$-times differentiable function $f$, define $D^n f = \dfrac{d^n f}{dx^n}$.
Example 1. For any constants $a, b$ and differentiable functions $f, g$, the following hold:

$$D(af + bg) = a\,Df + b\,Dg.$$

Furthermore, for any positive integer $n$, $D^n(af + bg) = a\,D^n f + b\,D^n g$.
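For instance (a computation added here purely for concreteness), taking $f(x) = x^3$, $g(x) = \sin x$, $a = 2$ and $b = 5$:

$$D(2x^3 + 5\sin x) = 2\,D(x^3) + 5\,D(\sin x) = 6x^2 + 5\cos x, \qquad D^2(2x^3 + 5\sin x) = 12x - 5\sin x.$$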
Using $D$-notation, the original differential equation can be simplified as follows:

$$\begin{aligned} a\frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy &= f(x) \\ aD^2y + bDy + cy &= f(x) \\ (aD^2 + bD + c)y &= f(x), \end{aligned}$$

where the last line follows from adding operators according to Definition 2 below.
Definition 2. Let

$$p(t) = a_n t^n + a_{n-1}t^{n-1} + \cdots + a_1 t + a_0$$

be a polynomial with real coefficients $a_0, a_1, \ldots, a_n$. Define the differential polynomial $p(D)$ by

$$p(D)y = a_n D^n y + a_{n-1}D^{n-1}y + \cdots + a_1 Dy + a_0 y.$$

By the definition of adding functions,

$$p(D)y = a_n \frac{d^n y}{dx^n} + a_{n-1}\frac{d^{n-1} y}{dx^{n-1}} + \cdots + a_1 \frac{dy}{dx} + a_0 y.$$
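As a concrete instance (the polynomial here is an arbitrary choice for illustration), if $p(t) = t^2 - 3t + 2$, then

$$p(D)y = D^2y - 3Dy + 2y = \frac{d^2y}{dx^2} - 3\frac{dy}{dx} + 2y.$$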
Theorem 1. For any polynomial $p$, $p(D)$ is a linear transformation in the following sense: for any constants $a, b$ and sufficiently differentiable functions $f, g$,

$$p(D)(af + bg) = a\,p(D)f + b\,p(D)g.$$
Proof. The properties hold for operators of the form $D^n$. Let $S$ and $T$ denote operators that satisfy the linearity properties. Checking that scaling and adding such operators preserves linearity then yields the result:

$$\begin{aligned}(cS + T)(af + bg) &= cS(af + bg) + T(af + bg)\\ &= c(aSf + bSg) + (aTf + bTg)\\ &= a(cSf + Tf) + b(cSg + Tg)\\ &= a(cS + T)f + b(cS + T)g,\end{aligned}$$

since $S(af + bg) = aSf + bSg$ and $T(af + bg) = aTf + bTg$.
Remark 1. These properties can be generalised and studied in more detail in an undergraduate course on linear algebra.
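To see Theorem 1 in action on a small case (an illustrative choice, not drawn from the examples below), take $p(t) = t^2 + 1$, $f(x) = \sin x$, $g(x) = x$, $a = 2$ and $b = 3$:

$$(D^2 + 1)(2\sin x + 3x) = 2(D^2 + 1)\sin x + 3(D^2 + 1)x = 2(-\sin x + \sin x) + 3(0 + x) = 3x.$$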
Thus, we are interested in solving differential equations of the form

$$p(D)y = f(x),$$

where $p(t) = a_n t^n + \cdots + a_1 t + a_0$ for real coefficients $a_0, a_1, \ldots, a_n$. We understand the situation when $f(x) = 0$, and call such solutions the complementary functions, denoted $y_c$, so that

$$p(D)y_c = 0.$$
It turns out that if we can find just one function $y_p$ such that $p(D)y_p = f(x)$, then we can find all functions $y$ such that $p(D)y = f(x)$. This approach isn't unique to differential polynomials, but applies to any linear transformation.
Nevertheless, we will state this result in the context of differential polynomials, since any broader result belongs to linear algebra, rather than this topic.
Theorem 2. Suppose we have found the general solution to the equation $p(D)y = 0$, called the complementary function $y_c$, and one particular integral $y_p$ such that $p(D)y_p = f(x)$. Then the general solution to the equation

$$p(D)y = f(x)$$

is given by

$$y = y_c + y_p.$$
Proof. Suppose $p(D)y = f(x)$. Since $p(D)y_p = f(x)$ and $p(D)$ is a linear transformation,

$$p(D)(y - y_p) = p(D)y - p(D)y_p = f(x) - f(x) = 0.$$

Since $p(D)(y - y_p) = 0$, the function $y - y_p$ is a complementary function, so $y - y_p = y_c$ and hence $y = y_c + y_p$.
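As a quick sanity check of Theorem 2 (with an equation chosen here purely for illustration), consider $(D - 1)y = 2$. The complementary function is $y_c = Ae^{x}$, and $y_p = -2$ is a particular integral since $(D - 1)(-2) = 0 + 2 = 2$. Theorem 2 then gives the general solution

$$y = Ae^{x} - 2.$$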
Example 2. Find the general solution to the differential equation $Dy = f(x)$.

Solution. We want to find all $y$ such that $Dy = f(x)$. The complementary function satisfies $Dy_c = 0$, which means $y_c = C$ for some real constant $C$. Recalling integration and differentiation as reverses of each other, $y_p = \int f(x)\,dx$ satisfies $Dy_p = f(x)$. Thus, we have found one $y_p$. Note that this choice is by no means unique. The general solution, therefore, is

$$y = \int f(x)\,dx + C,$$

which is what we would have obtained when solving $\frac{dy}{dx} = f(x)$ in the traditional manner.
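For a concrete instance of this computation (the right-hand side here is an illustrative choice), take $Dy = \cos x$. Integrating gives the particular integral $y_p = \sin x$, so the general solution is

$$y = \sin x + C,$$

exactly as direct integration would produce.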
Example 3. Find the general solution to the differential equation

$$(aD^2 + bD + c)y = Re^{rx},$$

where the characteristic roots are real and distinct and $r$ is not one of them.

Solution. We first solve the equation $(aD^2 + bD + c)y = 0$. The characteristic equation $a\lambda^2 + b\lambda + c = 0$ yields the real and distinct roots $\alpha$ and $\beta$. Hence,

$$y_c = Ae^{\alpha x} + Be^{\beta x}.$$

For $y_p$, we just need one example that isn't a combination of $e^{\alpha x}$ or $e^{\beta x}$. Our educated guess (the technical term in the business is Ansatz) here is then $y_p = Ce^{kx}$, where we will determine the (presently undetermined) coefficients $C$ and $k$. We would like

$$(aD^2 + bD + c)Ce^{kx} = Re^{rx}.$$

If we can find $C$ and $k$ such that this holds, we are done. The left-hand side simplifies to

$$(ak^2 + bk + c)Ce^{kx}.$$

This yields the equation

$$(ak^2 + bk + c)Ce^{kx} = Re^{rx}.$$

Hence, what would a good choice of $C$ and $k$ be? Well, we want the exponential terms to match, so $k = r$. Furthermore, we want the coefficients of the exponential terms on both sides to match, so we require

$$(ar^2 + br + c)C = R.$$

Hence, $C = \dfrac{R}{ar^2 + br + c}$ will be a good choice; the denominator is nonzero precisely because $r$ is neither $\alpha$ nor $\beta$. Combining our results, the general solution will then be

$$y = Ae^{\alpha x} + Be^{\beta x} + \frac{R}{ar^2 + br + c}\,e^{rx}.$$
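Here is a concrete instance of this Ansatz at work (the equation is chosen purely for illustration). Consider $(D^2 - 3D + 2)y = e^{3x}$. The characteristic equation $\lambda^2 - 3\lambda + 2 = 0$ has the real and distinct roots $1$ and $2$, so $y_c = Ae^{x} + Be^{2x}$. Trying $y_p = Ce^{kx}$ gives $(k^2 - 3k + 2)Ce^{kx} = e^{3x}$; matching exponentials forces $k = 3$, and matching coefficients forces $(9 - 9 + 2)C = 1$, i.e. $C = \tfrac{1}{2}$. The general solution is

$$y = Ae^{x} + Be^{2x} + \tfrac{1}{2}e^{3x}.$$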
Example 4. Find the general solution to the differential equation

$$(aD^2 + bD + c)y = Re^{\alpha x},$$

where the exponent on the right-hand side now coincides with one of the characteristic roots from Example 3.

Solution. By Example 3, we still have the complementary function

$$y_c = Ae^{\alpha x} + Be^{\beta x}.$$

If we used the same Ansatz $y_p = Ce^{kx}$, then the default equation $(aD^2 + bD + c)Ce^{kx} = Re^{\alpha x}$ simplifies to

$$(ak^2 + bk + c)Ce^{kx} = Re^{\alpha x}.$$

The same line of reasoning will yield $k = \alpha$, but the coefficients won't match, since $a\alpha^2 + b\alpha + c = 0$. Clearly, the Ansatz $Ce^{kx}$ won't work here.

As if it's some magical trick spawned from on high, let's try the Ansatz $y_p = Cxe^{kx}$. Differentiating by using the product rule,

$$D(Cxe^{kx}) = C(1 + kx)e^{kx}, \qquad D^2(Cxe^{kx}) = C(2k + k^2x)e^{kx}.$$

So the left-hand side of the default equation simplifies to

$$C\left[(ak^2 + bk + c)x + (2ak + b)\right]e^{kx}.$$

Now things look much nicer. Setting $k = \alpha$ will still make the $x$ term vanish, since $a\alpha^2 + b\alpha + c = 0$. But now there is a nonzero $(2a\alpha + b)$ term that will solve the problem, since $2a\alpha + b = a(\alpha - \beta) \neq 0$ for distinct roots. The appropriate choice for $C$ is then $C = \dfrac{R}{2a\alpha + b}$. Hence, we will choose

$$y_p = \frac{R}{2a\alpha + b}\,xe^{\alpha x}$$

to satisfy our requirements, yielding the general solution

$$y = Ae^{\alpha x} + Be^{\beta x} + \frac{R}{2a\alpha + b}\,xe^{\alpha x}.$$
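And a concrete instance of the $xe^{kx}$ trick (again, the equation is an illustrative choice): consider $(D^2 - 3D + 2)y = e^{x}$, whose right-hand side already appears in the complementary function $Ae^{x} + Be^{2x}$. The Ansatz $Ce^{x}$ fails because $(1 - 3 + 2)C = 0$, so try $y_p = Cxe^{x}$. By the product rule, $D(Cxe^{x}) = C(1 + x)e^{x}$ and $D^2(Cxe^{x}) = C(2 + x)e^{x}$, so

$$(D^2 - 3D + 2)(Cxe^{x}) = C\bigl[(2 + x) - 3(1 + x) + 2x\bigr]e^{x} = -Ce^{x}.$$

Matching with $e^{x}$ gives $C = -1$, hence $y_p = -xe^{x}$ and the general solution is

$$y = Ae^{x} + Be^{2x} - xe^{x}.$$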
Theorem 2 therefore provides us with an underlying strategy for solving second-order differential equations with constant coefficients. In fact, we have demonstrated the simplest form of the method of undetermined coefficients: we determined the coefficients in Examples 3 and 4. However, we would like to accomplish this goal in a more systematic manner. The working, especially in Example 4, is exceedingly tedious. The challenge arises from selecting a $y_p$ that works; can we streamline this process?

The trick is to identify and tabulate the common functions for $f(x)$ and ensure that they satisfy our requirements. That is where inverse-$D$ operators come in clutch.
—Joel Kindiak, 30 Jan 25, 1915H