Solving Differential Equations - General Theory and Numerical Integration

This article is all about general methods for solving differential equations.
It introduces a general method for dealing with explicit numerical integration schemes given any Butcher tableau. This is the first part of a series on differential equations, and the introduction will be quite theory heavy. Here, I will describe a general procedure that you can follow to solve ordinary differential equations, and also group the two main acoustical differential equations together under a unified method of solution.

Background

Everything changes with time. Think about the objects around you, how the development of new technology changes your interaction with the world, or simply the movements of the heavenly bodies. The flow of a river, the air that travels around an object, how blood flows through your veins: all of these can be described and plotted on a graph, and we can simply make a function that fits that graph. Or better yet, find a differential equation, which in most cases describes the average movements of the atoms around us. Or I can simply ask ChatGPT to give me a Shakespearean poem about the power of differential equations:

Oh, wondrous mystery of nature's ways,
That even the most learned men doth amaze,
Behold the art of differential equations,
That describe the motion of stars and vibrations.

In truth, these equations are of great power,
For they can reveal the growth of a flower,
And how the wind doth move the ocean's wave,
And how the pendulum doth pendulate and behave.

With symbols and signs they represent,
The changing rate of a quantity's ascent,
Or how a system responds to a force,
Or the flow of heat, which follows its course.

But though these equations may seem obscure,
And their solutions often hard to procure,
They hold the key to understanding the universe,
And in their secrets lies knowledge most diverse.

How something changes can be described mathematically with differential equations, which simply express the changes of a function with respect to time and space. Most differential equations are found by some logical thought process involving a conservation law. Simple conservation of mass, for example, could be exemplified by the number of items in your shopping cart, which depends on what you put in and what you take out.

On these assumptions, a differential equation is set up. This is why they are so interesting both to solve and to study. Broadly speaking, there are two kinds of differential equations: linear and non-linear. Linear differential equations are simple to understand, and we can easily solve them for all future times without many problems. Non-linear differential equations are often chaotic and difficult to predict; they do not behave nicely.

Linear differential equations are also simple to solve, and we have developed many tools for them, for instance the superposition principle, while non-linear ones require much more work. However, in the last hundred years or so, we have found a wonderful shortcut to these problems: numerical integration methods, which are always an option for solving any differential equation.

This article will take you on quite a ride through the alien universe of differential equations, one in which most people assume they will never really live. I will seemingly roam through many major historical developments in mathematics, which you most likely know by heart and use frequently without knowing the theoretical background behind them. However, don't lose sight of the main objective here, which is to explain how some differential equations can be solved rather easily without much hassle.

The modern literature describing how to solve differential equations is very extensive, and the main reason for this is that solving them is hard work; there is a wealth of different techniques, transformations, and tricks in use. For an overview of the literature, see modern mathematical textbooks such as Ibragimov, 2009 or Evans, 2010, which go through the most frequently used techniques. Techniques involving Lie groups have since appeared and have generally reduced the number of methods one needs to implement for solving these equations.

In the Beginning...

It is impossible to write any lengthy article or book in mathematics without mentioning the number \( e \) in some form or another. Whole books have been written about that number (not a joke!). The first one to derive a method for finding the number was Euler, who also named it. I had assumed he found it in the simplest possible way, much like I stumbled upon it when I was younger, but as it turns out, Euler found the number in quite a complicated manner by using binomial expansions. This goes to show how important polynomial curve-fitting theory was at the beginning of calculus. This was, after all, Newton's insight into interpreting integral and differential equations: he simply made use of interpolation methods. This is indeed still how most proofs are developed in mathematics, but with Euler being the greatest algebraic manipulator there ever was or has been, he easily outwitted his peers and predecessors. To put it bluntly: Newton seems an amateur mathematician compared to Euler, but fortunately for Newton, so does everybody else.

What Euler did was to assume that there exists a formula for the exponential (and its inverse) of the form:

$ a^\omega = 1 + k \omega $

In the above equation, \( k \) is a constant dependent on \( a \), and we assume \( \omega \) is a very small number. Euler then substitutes \( \omega = \frac{x}{i} \), where \( i \) is an infinitely large number and \( x \) is a given number:

$a^x = (a^\omega)^i = (1 + k \omega)^i$

Euler now expands the expression using the binomial theorem, does some tidying up, and gets:

$ a^x = 1 + \frac{kx}{1} + \frac{k^2 x^2}{2!}+ \frac{k^3 x^3}{3!}+ \frac{k^4 x^4 }{4!} + \cdots $

Now it is only a matter of inserting \( k = 1 \) and you will get \( e^x \).
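To make this concrete, here is a minimal C# sketch (my illustration, not Euler's construction) that sums the series term by term:

C#
// Sum the exponential series 1 + x + x^2/2! + x^3/3! + ...
// The variable term is built up incrementally so that term = x^n / n! at step n.
public double Exp(double x, int terms = 20)
{
    double sum = 1.0, term = 1.0;
    for (int n = 1; n < terms; n++)
    {
        term *= x / n;
        sum += term;
    }
    return sum;
}

Calling Exp(1.0) reproduces the number \( e \) to double precision.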

In comparison, I had assumed that he started with Fermat's integration formula. I simply practiced the integration formulas, starting by integrating 1, then integrating the result \( x \), then \( x^2/2 \), and so on. Then I simply asked:

"What do I get if I sum all these items?"

-" You get the number e"

was the answer my teacher gave me. Expanding the number e with the derivatives also generates a formula that makes it crystal clear why the number stays the same under differentiation and integration.

But since I want to use this result later on, the most convenient way is to reformulate the number as:

$e^x = \frac{d^\infty \, 1}{dx^\infty} + \cdots + \frac{d \, 1}{dx} + 1 + \int_0^x{1 \, dx} + \cdots + \int_0^x \cdots \int_0^x{1 \, dx^\infty} $

With this definition, the proof that the function is its own derivative and integral follows naturally, without looking up some obscure derivative or integration technique. Using the extended definition, the integration constant is also hidden within the multiple differentials of a constant. In fact, Euler did not write \( + C \) on any of his integration results, as he found it to be obvious.

As a digression, in order to cover most of the methods for finding the number \( e \), one can also use Picard iteration. It starts from a very simple differential equation:

$\frac{dx}{dt} = f(t, x) = x \qquad x(0) = x_0 = 1$

With this initial condition, the same integral is applied recursively, again and again:

$x_{n+1}(t) = x_0 + \int_{0}^t x_n(s) \, ds$

The iterates converge to the exponential function \( e^t \).
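A minimal sketch of one Picard step in C#, representing each iterate \( x_n(t) \) by its polynomial coefficients (the representation and names are my own):

C#
// One Picard step: x_{n+1}(t) = x0 + Int_0^t x_n(s) ds, with x_n stored as
// coefficients a[n] of t^n. Integration shifts and divides each coefficient.
public double[] PicardStep(double[] a)
{
    var next = new double[a.Length + 1];
    next[0] = 1.0;                     // the initial condition x0 = 1
    for (int n = 0; n < a.Length; n++)
        next[n + 1] = a[n] / (n + 1);  // integrate a[n] * t^n term by term
    return next;
}

Starting from { 1.0 } and iterating yields the coefficients \( 1, 1, \frac{1}{2!}, \frac{1}{3!}, \ldots \) of the exponential series.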

Groups

Strictly speaking, you could skip this chapter, but it will help you understand how one can greatly simplify proofs and explanations for different theories. In mathematics, groups are a much more central theme than they often seem, since they unify and connect previously unconnected areas of mathematics, and they are also heavily used in day-to-day programming.

A group has elements and a designated operator that acts on the elements and returns an element in the group. It's easy to show this with some simple code (a minimal sketch follows the list below). Assume that all members or elements implement a common interface. The group must contain an identity element, like 1 is in mathematics, since the operator applied to it returns the element it operates on; an example would be the operator * acting on the identity element and 2, like so: 1 * 2 = 2. In programming terms, this is easy to show since types like double, int, and single behave this way. The difference is that these types have multiple operators, like plus and minus, multiplication and division, so they are called rings. Any operation performed on two similar objects returns an object that is still within the group. Being a skilled programmer who knows your trade makes you more proficient in algebra than most mathematicians are. To summarize, a group is formally defined as:

  1. Closure - Given two elements a and b in I, or as mathematicians write it, \( a, b \in \mathbb{I} \): the result of a multiplied by b is also in I: \( a \cdot b \in \mathbb{I} \)
  2. Associativity - For all a, b, and c in I, \( \forall a, b, c \in \mathbb{I} \), the two expressions are equal: \( (a \cdot b ) \cdot c = a \cdot (b \cdot c) \)
  3. Identity element - There is an identity element, \( e = \mathbb{I}^0 \) in this case, such that for all a in I, \( \forall a \in \mathbb{I} \), the identity element e multiplied with a is still a: \( e \cdot a = a \)
  4. Inverse element - For every \( a \in \mathbb{I} \) there exists \( b \in \mathbb{I} \) such that \( a \cdot b = b \cdot a = e \).
  5. Commutativity - \( \forall a,b \in \mathbb{I} \), the relation \( a \cdot b = b \cdot a \) always holds. (This last property makes the group Abelian.)
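As mentioned above, this is easy to show in code. A minimal sketch, with illustrative names of my own choosing:

C#
// A group as a C# interface: an operator that stays within the set,
// an identity element, and an inverse for every element.
public interface IGroupElement<T> where T : IGroupElement<T>
{
    T Operate(T other); // closure: the result is again an element of the group
    T Identity();       // the identity element e, with e * a = a
    T Inverse();        // the inverse element b, with a * b = e
}

// Example: the non-zero rational numbers under multiplication.
public readonly struct Rational : IGroupElement<Rational>
{
    public readonly long Num, Den;
    public Rational(long num, long den) { Num = num; Den = den; }

    public Rational Operate(Rational other) => new Rational(Num * other.Num, Den * other.Den);
    public Rational Identity() => new Rational(1, 1);
    public Rational Inverse() => new Rational(Den, Num);
}

Closure is encoded in the return type; associativity and commutativity are properties the implementation must honor, since the compiler cannot check them.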

This implies that the functions operated on by the differential or integral operator form an infinite Abelian algebraic group. The requirement is, of course, that the function the operator works on must be differentiable and integrable, which might not be the case in general. Here, it is simply assumed to be the case, since the primary interest of this article is solutions to what can be described as harmonic functions, which are naturally smooth. Applying these operators to describe the exponential function \( e^x \) can then easily be done: in contrast to more elaborate ways of finding it, as for example Euler did with the binomial theorem, this gives a different way of writing the exponential function \( e^x \) using only the fundamental theorem of calculus.

The natural number \( e \) can thus be defined via an Abelian group, which means that, under the given conditions, it follows arithmetic much like that of the natural numbers and is well behaved. It also highlights the fact that a power series can be used to solve differential equations, since these can be formed as a dot product of constants and a subset of the elements in the exponential set.

Solving the Cyclotomic Differential Equations

The solutions to cyclotomic differential equations take the exact same form as the algebraic solutions. The general cyclotomic differential equation can be solved using the infinitesimal group operation on the constant 1, written as \( E^x \) for simplicity, by simply noting that modulus \( n \) scalars with opposite signs will produce the power series solution directly.

The well-known 1801 book Disquisitiones Arithmeticae by Carl Friedrich Gauss expanded the idea, using modular arithmetic to solve many difficult problems. This book has been so influential that its traces are seen today in abstract algebra and cyclotomic fields, which are natural extensions of Gauss's original work. These things are important, as they lay the groundwork for generalizing problems in mathematics.

In the end...

I'm setting myself a secondary goal for this article: to unify the method by which you can find the solutions to the differential equations in the life of an acoustician. This means bringing the solutions of the general wave equation and of the flexural or bending wave equation into the same family of solutions, together with the broader family of similar differential equations. Or, as a mathematician would say, showing that the differential equations are in the same group.

Stated formally, the goal is to show this for the solutions to the general ordinary differential equation \(\frac{d^n \, f(x)}{dx^n} \pm \omega^n f(x) = 0 \), where \(\omega \in \mathbb{C} \) is a complex number and \(n \in \mathbb{Z}_+\) is a positive integer. Hidden within these equations is the second-degree classical fundamental equation of mechanics, \( m \cdot \frac{d^2 \, y}{dt^2} + k^2 \cdot y = 0 \), whose spatial counterpart is called the Helmholtz equation, and a certain 4th-degree version of the Euler-Bernoulli beam equation in space; both are acoustic examples of this equation type.

As is often the case, Euler found the techniques needed to solve all of these equations: early versions of the Fourier transform, Laplace transforms, the integrating factor, and much more were presented by him in one form or another. This means that the solutions presented here could, in hindsight, just as well have been found by an 18th-century mathematician or physicist, which makes the absence of this unifying proposition somewhat surprising.

Additionally, it has generally been noted that lower-order ordinary differential equations are typically simpler to solve than higher-order ones. For differential equations of the form \( \frac{d^n \, f(x)}{dx^n} \pm \omega^n f(x) = 0 \), things are different: no matter the order, the equations covered here are simple to solve once they are grouped according to their solutions.

Start off with the classical wave equation:

$\frac{d^2 \, f(x,t)}{dx^2} = \frac{1}{c^2} \frac{d^2 \, f(x,t)}{dt^2}$

Understanding how energy transport happens in the case of the wave equation can be seen by factorizing the differential operator of the wave equation into two more fundamental differential equations:

$\left( \frac{d \, }{dx} - \frac{1}{c} \frac{d \, }{dt} \right) \left( \frac{d \, }{dx} + \frac{1}{c} \frac{d \, }{dt} \right)f(x,t)= 0$

This factorization of the wave equation appears in Boole's book (Boole, 1859), and the two similar results, differing only in sign, have several names according to their use: the one-way wave equation, the advection equation, or the transport equation. This factorization technique for solving differential equations is usually called Heaviside's operator factorization of a differential equation; see for instance Cooper, 1952 for some historical references on the technique.

There is, however, a slight general problem with the factorization presented: the factorization is not unique, or worse, it might not exist for some differential equations; see for instance Simmons, 2016. A further example is given in Bender, 1999, where the following factorization is presented:

$y'' - y = \left(\frac{d}{dx} -1\right)\left(\frac{d}{dx} +1\right) y= \left(\frac{d}{dx} +\tanh{(x)}\right)\left(\frac{d}{dx} -\tanh{(x)}\right) y$

The factorization of the Helmholtz wave equation in the form given below is also valid (Berkovich, 2007):

$\left[ D^2 + k^2 \right] f(x) =\left[ \left( D + k \frac{-c_1 \sin(kx) + c_2 \cos(kx)}{c_1 \cos(kx) + c_2 \sin(kx)}\right) \left( D - k \frac{-c_1 \sin(kx) + c_2 \cos(kx)}{c_1 \cos(kx) + c_2 \sin(kx)}\right)\right] f(x)$

To make things easier for us, we can limit the solutions to what is found in the physical realm with small wave amplitudes; the only interpretation of the wave equation possible is then the one-way wave equation, since the wave speed is assumed to be a real constant (or a complex one, if one wants to include losses in a medium). Interestingly, one can also use the factorization into the two one-way equations to argue that the steady-state solutions of the wave equation and the Euler-Bernoulli beam equation in an enclosed finite space are wholly dependent on the waves reflected at the boundaries.

In order to transform the general wave equation into an ODE, the Fourier (or Laplace) transform is applied to the wave equation. This reduces the partial differential equation (PDE) to an ordinary differential equation (ODE) called the homogeneous Helmholtz equation, as mentioned earlier:

$\frac{d^2\, f(x)}{dx^2} + k^2 \, \cdot f(x) = 0 $

A different version of this equation is often used as an introduction to ordinary differential equations in mathematics or physics: the simple pendulum equation. The wide applications of this differential equation in physics have given it the name fundamental equation of mechanics. Another example of a cyclotomic differential equation in acoustics is the Euler-Bernoulli beam equation, which describes the vibrations of bars and which, with some exchange of variables (Kinsler et al., 2000), is of the same form:

$\frac{d^4\, f(x)}{dx^4} - \left( \frac{\omega}{c} \right)^4 \, \cdot f(x) = 0 $

All of these examples justify treating the broader nth-order differential equation that contains all solutions to the preceding equations: the more general ordinary differential equation with the restrictions $n \in \mathbb{Z_+}$ and $\omega \in \mathbb{C}$:

$\frac{d^n\, f(x)}{dx^n} + \omega^n \, \cdot f(x) = 0 $

The focus here is on the special case of this ordinary differential equation (ODE) exemplified above, namely the many 1D solutions of differential equations similar to the ODE form of the wave equation.

The Infinitesimal Group

Marius Sophus Lie had the idea of unifying the theory of solutions to differential equations, similar to the methods given by Galois groups. Lie's idea was to introduce a continuous transformation. Let's say we want to get a formula for the solutions of the second-degree polynomial equation; exchange \(x = \hat{x} + \epsilon \):

$a \cdot x^2 + b \cdot x + c = 0$

The idea is now to choose the transformation so that selected terms are eliminated; this procedure can be applied generally and is called Tschirnhaus' method, from 1683. One simply substitutes the new expression for x and multiplies out:

$ a \hat{x}^2 + 2 a \hat{x} \epsilon + a \epsilon^2 + b \hat{x} + b \epsilon + c = 0$

Introducing new variables in order to recover an equation of the same form:

$ \hat{a} \hat{x}^2 + \hat{b} \hat{x} + \hat{c} = 0$

Now one chooses \( \hat{b} = b + 2 a \epsilon = 0 \), that is, \( \epsilon = -\frac{b}{2a} \). Similar transformations can be performed on differential equations, and this is exactly what Lie did. I will not go into further detail about Lie transformations, but simply point out that transformations are an essential tool that can help solve many mathematical problems.
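Completing the example: with \( \epsilon = -\frac{b}{2a} \), the remaining constant term is \( \hat{c} = c - \frac{b^2}{4a} \), so the equation reduces to \( a \hat{x}^2 + c - \frac{b^2}{4a} = 0 \). Solving for \( \hat{x} \) and transforming back with \( x = \hat{x} + \epsilon \) recovers the familiar quadratic formula \( x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \).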

We start with a different transformation due to Leonhard Euler, who was the first person to use the letter \( D = \frac{d}{dx} \) to describe the differential operator in operational calculus in order to simplify expressions; see for instance Simmons, 2016. However, using the letter \( D \) is not actually optimal or even convenient, as it is also used for the total derivative and for fractional derivatives and integrals. So, in generalizing the infinitesimal to include both integration and differentiation, exchanging \( D \) with \( \mathbb{I} \), where positive powers give the integral and negative powers give differentiation, seems preferable and more natural, as differentials are described by finite differences (subtraction) and Cauchy integration (Kurtz, 2004) as a summation. The alternative notation, with examples, is given by:

$\begin{aligned} \mathbb{I}_{x} \left[ y \right] &= \int_0^x{y \, dx} \\ \mathbb{I}^{0}_x \left[ y \right] &= 1 \, y \\ \mathbb{I}^{-1}_x \left[ y \right] &= \frac{dy}{dx} = D \, y\\ \mathbb{I}^{-2}_x \left[ y \right] &= \frac{d^2 \, y}{dx^2} = D^2 \, y\\ \mathbb{I}^{-n}_x \left[ y \right] &= \frac{d^n \, y}{dx^n} = D^{n} \, y\\ \end{aligned}$

In addition, one can observe that the $n$th-order linear differential operator $L$ is a subgroup of the infinitesimal group, which again is a special case of the more general theory presented in Kolchin, 1973.

$L \cdot y = \left( p_0 (x) + p_1(x) \frac{d}{dx} + \cdots + p_n(x) \frac{d^n}{dx^n}\right) \cdot y = \left< p_n(x) , \mathbb{I}^{-n}_x \left[ y \right] \right> $

Now return to the cyclotomic differential equation with the positive sign:

$\frac{d^n}{dx^n} f(x) + \omega^n f(x) = 0 $

Solving this equation with $\omega$ as a variable is rather straightforward; the coefficients form the alternating set:

$A_n = \{ \left(a_1 , \cdots , a_n \right) \times \left(1 , -1 \right) \}_{-\infty}^{\infty} $

One can also show this with first-order differential equations in matrix form. The general solution can be expressed as a dot product of \( A_n \) and the series for \( e^{\omega x} \):

$f(x) = \left< A_n , e^{\omega \cdot x} \right> $

Similarly, for subtraction:

$\frac{d^n}{dx^n} f(x) - \omega^n f(x) = 0$

This makes the power series solution simpler, and the need for the alternating sub-series vanishes:

$A_n = \{ \left(a_1 , \cdots , a_n \right) \}_{-\infty}^{\infty}$

The solutions for both positive and negative signs are thus given; the solution is the same, only exchanging the coefficients in the dot product. In reality, using the infinitesimal group to write up the solutions directly is just a convenient version of the Taylor series method (or the Frobenius method with $x$ shifted by a constant; see Bender, 1999 or Paulsen, 2014) for solving a differential equation where the value of $y$ is replaced. The only difference is that the power series is replaced by the sum of the integrals times a constant in the infinitesimal group, similar to the Borel sum.

$\begin{aligned} y = f(X) & = \sum_{n=0}^{\infty}{a_n \cdot \frac{x^n}{n!}} \\ & = \sum_{n=-\infty}^{\infty}{a_n \cdot \mathbb{I}_x^{n} \left[ 1 \right]} \\ & = \left< a_n, e^x \right> _{-\infty}^{\infty} \end{aligned}$

An example is writing the general solution to the Helmholtz equation, \(A \cdot \cos(\omega \, x) + B \cdot \sin(\omega \, x) \), as a power series:

$f(x) = A \cdot 1 + B \cdot (\omega \, x) - A \cdot \frac{(\omega \, x)^2}{2!} - B \cdot \frac{(\omega \, x)^3}{3!} + A \cdot \frac{(\omega \, x)^4}{4!} + \cdots$
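As a sanity check, here is a minimal C# sketch (my illustration) that evaluates this series numerically by cycling the coefficients \( (A, B, -A, -B) \):

C#
// Evaluate f(x) = A cos(wx) + B sin(wx) from its power series
// by cycling the coefficient pattern (A, B, -A, -B).
public double HelmholtzSeries(double A, double B, double omega, double x, int terms = 30)
{
    double[] cycle = { A, B, -A, -B };
    double sum = 0.0, power = 1.0;     // power holds (wx)^n / n!
    for (int n = 0; n < terms; n++)
    {
        sum += cycle[n % 4] * power;
        power *= omega * x / (n + 1);
    }
    return sum;
}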

This can be rewritten as a dot product that also includes the derivative terms:

$D_n = \{ \left( A , B \right) \times \left(1,-1 \right) \}_{k = -\infty}^{k=\infty}$

The solution can be written using this form

$f(x) = \left< D_{n}, E^{\omega x} \right>$

While the amplitudes of the 'trigonometric' function are given by the constants and are cyclic, the number $E^{x}$ handles the differentiation and integration while remaining unchanged by the operators. The strength of the method comes into play once you change the order in the Helmholtz equation from 2 to 3: all that is needed is to add a third general element, and the general solution follows.

$D_n = \{ \left( A , B, C \right) \times \left(1,-1 \right) \}_{k = -\infty}^{k=\infty}$

Using the infinitesimal group together with the cyclotomic differential equation where $n$ is not a prime shows that $n$ can be factorized into its primes, and here modern abstract algebra can be a useful tool. For example, when solving a 6th-order cyclotomic differential equation, the general solutions might be of cyclic order 2, 3, or 6 depending on the boundary conditions. Strictly speaking, a cyclic group of order $1$ is also admissible as a solution. In general, this cyclic order factorization might go some way toward explaining why higher-order cyclotomic differential equations seldom describe natural phenomena.

Solutions to Factorized Cyclotomic Differential Equations

A standard method for factorizing differential equations can be shown with Heaviside's operational calculus; a general, rigorous description of the factorization technique, called operational calculus, is given by Mikusinski, 1959, and later Krabbe, 1970. The idea is very simple indeed: in this case, it is enough to suppose that the solution can be written in the form \( y = e^{s \cdot x} \). Inserting this into a differential equation gives you an algebraic equation. Utilizing the well-known cyclotomic algebraic equation together with operational calculus, and expanding to differential cyclotomic equations, gives rise to solutions of even more differential equations. Take inspiration from the cyclotomic algebraic equation:

$\frac{x^{n+1}-1}{x-1} = x^{n} + x^{n-1} + \cdots + x + 1$

A quick note: one can get a similar algebraic expression with \( x+1 \) in the denominator as well, but the expression above is enough to grasp why the method works.

Utilizing operator calculus on differential equations, the solution of the equation \( (\frac{d}{dx} - 1) \cdot f(x) = 0 \) connects to the solutions of the family of differential equations of the form:

$\left(\frac{d^{n} \, }{dx^{n}} + \cdots + \frac{d}{dx} + 1 \right) \cdot f(x) = 0$

Transforming the equation by multiplying with $(D - 1)$ simplifies the general homogeneous differential equation:

$\left( \frac{D^{n+1} - 1}{D - 1} \right) \cdot f(x) = \left(D^{n} + \cdots + D + 1 \right) \cdot f(x) = 0$

This transformation really amounts to assuming that the solution can be written as \( e^{s \cdot x} \), so that differentiation generates a polynomial in \( s \) which we can solve for.
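As a concrete check (my example), take \( n = 2 \):

$\left(D^2 + D + 1\right) f(x) = 0 \quad \Rightarrow \quad s^2 + s + 1 = \frac{s^3 - 1}{s - 1} = 0 \quad \Rightarrow \quad s = e^{\pm 2 \pi i / 3}$

The characteristic roots are exactly the primitive cube roots of unity, the non-trivial solutions of \( s^3 = 1 \), so the general solution \( f(x) = C_1 e^{s_1 x} + C_2 e^{s_2 x} \) inherits its structure directly from the cyclotomic polynomial.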

Numerical Integration

In applied mathematics, everything starts with the Taylor or Maclaurin expansion series. Originally, this seems to stem from Gregory, who found the binomial theorem at about the same time Newton did. The difference was that Gregory seems to have taken a much more general approach: he found the formulation for the Taylor series and used it for some trigonometric series. I guess it is no coincidence that Maclaurin found his series, as he took up the professorship after Gregory passed away, with Newton recommending Maclaurin for the job. Gregory seems to have written the series on the back of a letter sent to him around 1671, predating Taylor's publication by several decades:

$ f(x+h) = f(x) + h \cdot f'(x) + \frac{h^2}{2!} \cdot f''(x) + \cdots $

We can use this expansion to derive integration rules, for instance the trapezoidal rule:

$F(x) = \int_x^{x+h} f(x) \, dx \approx h \cdot \frac{f(x) + f(x+h)}{2}$
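Truncating the Taylor expansion after the first-order term instead gives Euler's method. A minimal sketch (the method name and signature are my own, chosen to mirror the integrator shown later):

C#
// One Euler integration step: x(t + dt) ~ x(t) + dt * f(x, t).
public double EulerStep(Func<double, double, double> df, double x, double t, double dt)
{
    return x + dt * df(x, t);
}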

But Euler's method is not really optimal, since it is only a first-order approximation, so is there any better way of integrating? As it turns out, there is. We know from quadrature integration that well-behaved functions can be approximated much better with far fewer points. The same ideas also apply to Runge-Kutta integration techniques, but there are many different tableaus that can be used. These were systematized by Butcher, hence such a tableau is often called a Butcher tableau.

$\begin{array}{c|c} c & A \\ \hline & b \end{array} = \begin{array}{c|cccc} 0 & 0 & 0 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 \\ \hline \, & 1/6 & 1/3 & 1/3 & 1/6 \end{array}$

The classical Runge-Kutta 4th-order integration tableau is represented in code as:

C#
// Load the Runge-Kutta 4th-order method as the default integrator
c = new double[] { 0, 1d / 2d, 1d / 2d, 1d };
A = new double[4, 4] {
    { 0,     0,     0,  0 },
    { 1d/2d, 0,     0,  0 },
    { 0,     1d/2d, 0,  0 },
    { 0,     0,     1d, 0 }
};
b = new double[] { 1d / 6d, 1d / 3d, 1d / 3d, 1d / 6d };

The implementation is as follows:

C#
/// <summary>
/// General Runge-Kutta integration method from a Butcher tableau.
/// By default, the classical RK4 tableau is loaded.
/// </summary>
/// <param name="df">The derivative of the function; it
/// takes x and t as parameters and returns a double value</param>
/// <param name="x">Starting point</param>
/// <param name="t">Current time</param>
/// <param name="dt">Time step used for numerical integration</param>
/// <returns>The estimated value one time step dt ahead</returns>
public double CalculateStep(Func<double, double, double> df,
                            double x, double t, double dt)
{
    // Stores the step results
    double[] Increments = new double[c.Length];

    // Go through each stage of the Butcher tableau
    for (int i = 0; i < c.Length; i++)
    {
        // Initial value
        double increment = x;

        // Go through all previous stage results in the current Butcher tableau row
        for (int j = 0; j < i; j++)
            increment += dt*Increments[j] * A[i, j];

        //Save the current stage
        Increments[i] = df(increment, t + c[i]*dt);
    }

    // Starting point for integration (same as the + C in integration)
    double Result = x;

    // Add the integration weights (b) used for each stage result
    for (int i = 0; i < b.Length; i++)
        Result += dt * b[i] * Increments[i];

    return Result;
}

There are several Butcher tableaus to choose from, but in general, you are advised to avoid the methods by Fehlberg, as they have been criticized for their lack of computational robustness. The methods by Verner are better suited, and Matlab by default uses a 4th/5th-order pair developed by Dormand and Prince.
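To illustrate how easily a different tableau can be loaded into the same integrator, here is Heun's 2nd-order method (my example, reusing the c, A, and b fields from above):

C#
// Heun's 2nd-order method as a Butcher tableau.
c = new double[] { 0, 1d };
A = new double[2, 2] {
    { 0,  0 },
    { 1d, 0 }
};
b = new double[] { 1d / 2d, 1d / 2d };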

The usage together with a lambda expression is pretty straightforward. Here, it is exemplified by the pendulum equation (this assumes a vector overload of CalculateStep that accepts and returns double[] for systems of equations):

C#
for (int i = 0; i < NumberOfSteps; i++)
{
    // Due to floating point arithmetic, multiplication is better than repeated summation,
    // since a small value (dt) would otherwise be added many times:
    // a large value + a value less than epsilon is equal to
    // the large value in floating point.
    t = dt * i;
    
    // Run the RK integrator as an asynchronous process.
    var CurrentValuesFromAnotherThread = await Task.Run(() => Integrator.CalculateStep(
                                         // The integration step
                                         // needs a pointer to all the derivatives
                                         // at the current x and t
                                         (xs, ts) => ydot(xs, ts),
                                         // the initial condition for the integration step
                                         CurrentValues,
                                         // the current time
                                         t,
                                         // the step size in time
                                         dt
                                       ));
    // Store the calculated values from the worker task
    CurrentValues = CurrentValuesFromAnotherThread;
    
    // Store the values in the line series
    IntegrationSeries.Points.Add(new DataPoint(t, CurrentValues[0]));
    ExactSolutionSeries.Points.Add(new DataPoint(t, CurrentValues[1]));
    
    // Can't update too often since it will overload the UI
    if (i % 10 == 0)                
        PlotModel.InvalidatePlot(true);
}

// Just to make sure everything is plotted.
PlotModel.InvalidatePlot(true);

Here, the system of equations being solved comes from the pendulum equation:

$m \cdot l \cdot y'' = - m \cdot g \cdot \sin(y)$

Rewritten as a system of first-order equations with \( y_1 = y \) and \( y_2 = y' \), this becomes:
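$\begin{aligned} y_1' &= y_2 \\ y_2' &= -\frac{g}{l} \, \sin(y_1) \end{aligned}$

The code below also includes a damping term and a sinusoidal forcing term, which extend the plain pendulum to a damped, driven one: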

C#
/// <summary>
/// The derivatives f'(t,y) of the pendulum system
/// </summary>
/// <param name="y">State vector: y[0] is the angle, y[1] the angular velocity</param>
/// <param name="t">Time dependent variable in f'(t,y)</param>
/// <returns>f'(t,y)</returns>
public double[] ydot(double[] y, double t)
{
    double[] z = new double[2]
    {
        y[1],
        -(g / PendulumStringLength) * Math.Sin(y[0]) - 
        Damping*y[1] + Amplitude*Math.Sin(t)
    };
    return z;
}

This procedure is quite general, and in the program attached to this article, I give three simple examples of different types of differential equations being solved: a normal polynomial equation, the pendulum equation, and a wave traveling on a string. I plan on writing a follow-up article describing more implementations of the wave equation, focusing less on the theory for solving them presented here.

Conclusion

For the programmer, the numerical integration part is probably the most interesting, since it is a general tool that will help you solve any explicit integration scheme with a Butcher tableau of your choice.

A method for finding solutions to many of the homogeneous differential wave-type equations that frequently come up in acoustics has been given, grouping them together under a unified method of solution. Additionally, transforming an equation by multiplication with a differential operator extends the solutions of the cyclotomic differential equation to many previously unrelated differential equations.

References

  • d'Alembert, 1747, Recherches sur la courbe que forme une corde tenduë mise en vibration by Jean-Baptiste le Rond d'Alembert
  • Euler, 1748, Introductio in Analysin Infinitorum by Leonhard Euler
  • Gauss, 1801, Disquisitiones Arithmeticae by Carl Friedrich Gauss
  • Boole, 1859, A treatise on differential equations by George Boole
  • Cooper, 1952, Heaviside and the Operational Calculus, http://www.jstor.org/stable/3610762
  • Mikusinski, 1959, Operational Calculus by Jan Mikusinski
  • Krabbe, 1970, Operational Calculus by Gregers Krabbe
  • Lindsay, 1973, Acoustics: historical and philosophical development by Robert Bruce Lindsay
  • Kolchin, 1973, Differential algebra and algebraic groups by E.R. Kolchin
  • Lang, 1989, Cyclotomic Fields I and II - combined second edition by Serge Lang
  • Bender, 1999, Advanced mathematical methods for scientists and engineers I - Asymptotic methods and perturbation theory by Bender, Carl M. and Orszag, Steven A.
  • Kinsler, 2000, Fundamentals of Acoustics, 4th edition by Kinsler, Laurence E. and Frey, Austin R. and Coppens, Alan B. and Sanders, James V.
  • Kurtz, 2004, Theories of Integration: The Integrals of Riemann, Lebesgue, Henstock-Kurzweil and McShane by Kurtz, Douglas S. and Swartz, Charles W.
  • Petersson, 2005, Structure-Borne Sound: Structural Vibrations and Sound Radiation at Audio Frequencies, 3rd edition by Cremer, L. and Heckl, M. and Petersson, B.A.T.
  • Berkovich, 2007, Applicable Analysis and Discrete Mathematics: Method of factorization of ordinary differential operators and some of its applications by Berkovich, Lev M.
  • Ibragimov, 2009, A practical course in differential equations and mathematical modeling by Ibragimov, Nail H.
  • Evans, 2010, Partial Differential Equations, 2nd edition by Evans, Lawrence C.
  • Hungerford, 2012, Abstract Algebra: An Introduction, 3rd edition by Hungerford, Thomas W.
  • Paulsen, 2014, Asymptotic analysis and perturbation theory by Paulsen, William
  • Simmons, 2016, Differential equations, with applications and historical notes, Third edition by Simmons, George F.

History

  • 31st March, 2023: Initial version

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)