In studying ordinary differential equations, we were motivated by the fact that many of the basic equations encountered in nature are cast in a form where the rate of change in the function depends on the function itself. An example of this was Newton’s laws of motion, where, in general, the acceleration can depend on the position. There are other fundamental relationships in physics; relationships that do not have as simple a structure.
Recall the first time you encountered the idea of potential energy. It was introduced for a number of reasons, one being that the potential energy is easier to calculate, since it is a scalar function and not a vector, allowing the force to be determined as

F = -\frac{dU}{dx}.
(11.1) 
Unfortunately, while this was sufficient for one-dimensional problems, we live in a three-dimensional world. Thus, equation (11.1) was quickly replaced by

\mathbf{F} = -\nabla U,
(11.2) 
where \nabla is the gradient operator. In Cartesian coordinates, it is defined as

\nabla = \hat{x}\frac{\partial}{\partial x} + \hat{y}\frac{\partial}{\partial y} + \hat{z}\frac{\partial}{\partial z}.
(11.3) 
Notice that the derivatives in equation (11.3) are partial derivatives. This is a general trend. Moving from one to two, three or more dimensions inherently involves converting from ordinary derivatives to partial derivatives, and thus our equations transform from ordinary differential equations to partial differential equations.
Moving from ordinary differential equations to partial differential equations also requires a very different approach to their solution. In particular, whereas it was enough to specify the values of the function and its lower-order derivatives at a point to obtain an exact solution of an ODE, the equivalent statement for a partial differential equation requires that the initial conditions be known throughout all of space. In addition, we usually want to constrain the solution of the PDE in some region of space. Thus, in order to solve a partial differential equation, we must specify an initial condition function, a boundary condition, or both.
Which type of conditions must be specified depends partially on the type of equation encountered. For simplicity, we will consider only linear partial differential equations in two dimensions. Extension of these results to higher dimensions is straightforward.
Let’s start with first order equations. In general these are of the form

a(q_1,q_2)\frac{\partial u}{\partial q_1} + b(q_1,q_2)\frac{\partial u}{\partial q_2} = g(q_1,q_2),
(11.4) 
where q_{1}, q_{2} are generalized coordinates, and a, b and g are known functions of these coordinates. Solving equation (11.4) is similar to solving a first order ODE, and since these equations are rarely encountered in physics, we’ll not spend any more time on them.
The most general second order partial differential equation can be written as

A\frac{\partial^2 u}{\partial q_1^2} + B\frac{\partial^2 u}{\partial q_1\,\partial q_2} + C\frac{\partial^2 u}{\partial q_2^2} = g,
(11.5) 
where A, B and C are again known functions of the coordinates. The function g can also depend on u and its first derivatives. Notice that equation (11.5) is similar to the quadratic equation,
ax^{2} +bxy + cy^{2} = d.
Solutions of this equation fall into three classes, depending on the relationships between a, b, and c. When b^{2} < 4ac, the equation describes an ellipse; b^{2} > 4ac describes a hyperbola. A special case arises when b^{2} = 4ac. In this case, the resulting shape is a parabola.
Using the classification of the quadratic equation as a guide, we can break partial differential equations into three groups: elliptical, parabolic and hyperbolic. Specifically, a PDE is elliptical if B^{2}(q_{1},q_{2}) < 4A(q_{1},q_{2})C(q_{1},q_{2}), parabolic if B^{2}(q_{1},q_{2}) = 4A(q_{1},q_{2})C(q_{1},q_{2}), and hyperbolic if B^{2}(q_{1},q_{2}) > 4A(q_{1},q_{2})C(q_{1},q_{2}).
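This classification rule can be coded directly as a test on the discriminant. The following C sketch is illustrative only (the function name `classify_pde` is not from the text):

```c
#include <string.h>

/* Classify a linear second-order PDE at a point from the coefficients
   A, B, C of eq. (11.5), using the discriminant B^2 - 4AC of the
   analogous quadratic form. */
const char *classify_pde(double A, double B, double C)
{
    double d = B*B - 4.0*A*C;
    if (d < 0.0) return "elliptical";
    if (d > 0.0) return "hyperbolic";
    return "parabolic";
}
```

For example, Poisson's equation (A = C = 1, B = 0) comes out elliptical, the wave equation (A = 1, B = 0, C = -v^2) hyperbolic, and the diffusion equation (only one second derivative, so C = 0) parabolic.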
We have already encountered examples of each class of equation. Start with elliptical equations. In this case B^{2} < 4AC. Consider the specific case of A = C = 1 and B = 0. Then, if q_{1} and q_{2} represent the Cartesian coordinates x and y, equation (11.5) can be written as

\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = g(x,y).
(11.6) 
This can immediately be recognized as Poisson’s equation from electrodynamics.
For parabolic equations, B^{2} = 4AC, so the left hand side of equation (11.5) can be recast as

A\frac{\partial^2 u}{\partial q_1^2} + B\frac{\partial^2 u}{\partial q_1\,\partial q_2} + C\frac{\partial^2 u}{\partial q_2^2} = \left(\sqrt{A}\,\frac{\partial}{\partial q_1} + \sqrt{C}\,\frac{\partial}{\partial q_2}\right)^2 u.
(11.7) 
This allows equation (11.5) to become

\left(\sqrt{A}\,\frac{\partial}{\partial q_1} + \sqrt{C}\,\frac{\partial}{\partial q_2}\right)^2 u = g.
(11.8) 
As an example of this, consider the case where q_{1} = x, A = -\hbar^2/2m, and C = 0, with g = V\psi - i\hbar\,\partial\psi/\partial t. Then equation (11.8) becomes

i\hbar\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2} + V\psi.
(11.9) 
Equation (11.9) can be recognized as Schrödinger’s equation.
The last type of PDE is the hyperbolic equation. In this case B^{2} > 4AC. If we set B = 0, A = 1, and C = -v^{2}, then equation (11.5) becomes

\frac{\partial^2 u}{\partial t^2} - v^2\frac{\partial^2 u}{\partial x^2} = 0,
(11.10) 
where g = 0. This is recognized as the wave equation.
While it is important to be able to classify the type of equation, we shouldn’t spend too much time concentrating on this. One reason is that there are courses specifically devoted to understanding partial differential equations; courses which provide an invaluable set of analytical tools for a scientist. The other reason is that most equations encountered in research do not fall neatly into one of the three categories. Instead, they are often hybrids. Still, the methods developed here will be applicable to these equations as well.
Let’s start by looking at parabolic equations. These equations are the closest to ordinary differential equations, and thus are one of the easiest equations to understand. Parabolic equations usually use a mixed set of conditions, namely an initial configuration combined with a boundary condition. As a concrete example, consider the diffusion equation,

\frac{\partial T}{\partial t} = k\,\frac{\partial^2 T}{\partial x^2},
(11.11) 
where T(x,t) is the temperature at location x and time t, and k is the thermal diffusion coefficient. Assume that the temperature is known at time t = 0 to satisfy
T(x, t=0) = T_0\,\delta(x),
as well as the Dirichlet boundary conditions,
T(x = -L/2, t) = T(x = L/2, t) = 0.
To solve this problem numerically, we replace the continuous analytical variables x and t with a discrete grid covering space and time. In order to simplify the equations, let’s introduce the shorthand notation,

T_{i,n} = T(x_{i}, t_{n}),
(11.12) 
where x_{i} = -L/2 + (i - 1)h, h is the spatial step size, t_{n} = n\tau, and \tau is the temporal step size. Notice that the spatial points are both positive and negative, while time only moves forward from t_{0} = 0. Also note that the boundary points correspond to i = 1 and i = N, thus forcing the spatial step size h to satisfy h = L/(N – 1).
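A minimal C helper for this spatial grid (the function name `grid_x` is my own, not from the text) makes the indexing convention explicit:

```c
#include <math.h>

/* Spatial grid of eq. (11.12): N points spanning [-L/2, L/2] with
   step h = L/(N-1), indexed i = 1..N as in the text, so that the
   boundary points are i = 1 and i = N. */
double grid_x(int i, int N, double L)
{
    double h = L / (N - 1);
    return -0.5*L + (i - 1)*h;
}
```

With, say, N = 101 and L = 1, the grid runs from x = -0.5 at i = 1 to x = +0.5 at i = 101, with x = 0 at the middle point.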
We still need to discretize the derivatives in equation (11.11). This can be done using the finite difference approximations developed earlier for derivatives. In particular, the time derivative can be written as

\frac{\partial T}{\partial t} \approx \frac{T_{i,n+1} - T_{i,n}}{\tau}.
(11.13) 
You should recognize this as the definition of the forward derivative. We can use this form since we are interested in solutions that only travel forward in time.
The spatial derivative is a second order derivative, and so needs to be treated more carefully. Since the spatial shifts can go in both a positive and negative direction, we must choose a centrally located scheme. Again borrowing from our previous work with derivatives, we see that this derivative can be written as

\frac{\partial^2 T}{\partial x^2} \approx \frac{T_{i+1,n} - 2T_{i,n} + T_{i-1,n}}{h^2}.
(11.14) 
Substituting equations (11.13) and (11.14) into equation (11.11) yields

\frac{T_{i,n+1} - T_{i,n}}{\tau} = k\,\frac{T_{i+1,n} - 2T_{i,n} + T_{i-1,n}}{h^2}.
(11.15) 
Because the discrete version of the time derivative is a forward derivative while the discrete version of the spatial derivative is a central derivative, this method is known as the forward time centered space (FTCS) scheme.
Rearranging equation (11.15) to find the future value of the temperature yields

T_{i,n+1} = T_{i,n} + \frac{k\tau}{h^2}\left(T_{i+1,n} - 2T_{i,n} + T_{i-1,n}\right).
(11.16) 
Notice that everything that depends on the time step n is on the right hand side, while only the future value of the temperature is on the left. Because of this, the FTCS scheme is an example of an explicit method for solving partial differential equations.
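A single sweep of the explicit update in equation (11.16) can be sketched in C as follows (the function name `ftcs_step` and parameter `a` are illustrative):

```c
/* One FTCS sweep of eq. (11.16):
   Tnew[i] = T[i] + a*(T[i+1] - 2T[i] + T[i-1]),  a = k*tau/h^2.
   Arrays are indexed 0..N-1 here, with T[0] and T[N-1] on the
   boundary and held at zero, as in eq. (11.17). */
void ftcs_step(const double *T, double *Tnew, int N, double a)
{
    Tnew[0] = 0.0;
    Tnew[N-1] = 0.0;
    for (int i = 1; i < N-1; i++)
        Tnew[i] = T[i] + a*(T[i+1] - 2.0*T[i] + T[i-1]);
}
```

Because the right hand side uses only values at step n, each sweep produces the entire temperature profile at step n+1 in one pass.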
Returning to our specific problem, we need to cast the initial and boundary conditions into a discrete form. From the boundary conditions, we immediately see that they correspond to

T_{1,n} = T_{N,n} = 0
(11.17) 
for all n. The initial condition, T(x,0) = T_{0}\delta(x), cannot be put into a program directly. Fortunately, the Dirac delta function can be defined as the limit of a sequence of functions. For our purposes it is sufficient to choose a function with similar features, namely one whose value goes to infinity as the distance scale goes to zero while maintaining an enclosed area of unity. With this in mind, we can approximate the delta function as

\delta(x) \approx \frac{1}{h}\left(1 - \frac{|x|}{h}\right) \quad \text{for } |x| \le h, \qquad 0 \text{ otherwise}.
(11.18) 
This is just a triangular spike with unit area. As h approaches zero, its value goes to infinity while the area remains constant. Thus, we see that
\lim_{h \to 0}\, \frac{1}{h}\left(1 - \frac{|x|}{h}\right) = \delta(x).
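This triangular approximation is easy to code; the C sketch below (the name `delta_approx` is my own) can be used to fill in the initial temperature profile:

```c
#include <math.h>

/* Triangular approximation to the Dirac delta, eq. (11.18):
   height 1/h, base 2h, so the enclosed area is exactly one. */
double delta_approx(double x, double h)
{
    double a = 1.0 - fabs(x)/h;
    return (a > 0.0) ? a/h : 0.0;
}
```

On the grid, only the point at x = 0 and its immediate neighbors receive nonzero values, and halving h doubles the peak while keeping the area fixed.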
The last problem that needs to be solved before we can program the solution is determining the size of the time step. This is important: too small a step uses up significant computer resources before finding a solution, if indeed one is reached, while too large a step loses information relevant to the problem and can make the scheme unstable. So how do we determine it?
As we have done in the past, we again let the analytical solution guide us. Assuming that the solution can be separated into a product of a function that depends only on time and a function that depends only on position, we can use the separation of variables method to find a solution as

T(x,t) = A\,e^{-k q^2 t}\,\sin(qx).
(11.19) 
In order to determine the step size, assume that the spread in T over a single step is to be minimized. The spread in T can be calculated by taking partial derivatives,

dT = \frac{\partial T}{\partial t}\,dt + \frac{\partial T}{\partial x}\,dx = A\,e^{-kq^2 t}\left[-kq^2\sin(qx)\,dt + q\cos(qx)\,dx\right].
(11.20) 
This is minimized when dt satisfies

dt = -\frac{h^2}{2k},
(11.21) 
where a spatial step size of h has been used for both x and dx. Since we are only interested in the size of the step, the negative sign can be ignored. Notice that if this step size is substituted back into equation (11.16) it becomes

T_{i,n+1} = \frac{1}{2}\left(T_{i+1,n} + T_{i-1,n}\right).
(11.22) 
Thus, each component of the difference equation is approximately of the same weight, as we would expect.
Hyperbolic equations take the form
\frac{\partial^2 u}{\partial t^2} - v^2\frac{\partial^2 u}{\partial x^2} = 0,
as shown in equation (11.10). The canonical example is the wave equation for a string. In particular, consider an ideal string, i.e., one that is perfectly elastic, offers no resistance to bending, and is stretched between two immovable supports. Let the mass density of the string be uniform, and the tension in the string be much greater than the weight of the string. This will allow the effects of gravity to be ignored.
Freshman physics shows that, in this case, the wave equation can be written as

\frac{\partial^2 u}{\partial t^2} = \frac{T}{\mu}\,\frac{\partial^2 u}{\partial x^2},
(11.23) 
where T is the tension in the string and \mu is the mass per unit length of the string. If we assume that the length of the string is L, then at the boundaries we have
u(0,t) = u(L,t) = 0.
As a first attempt at converting this into a finite difference equation, use the second order central difference equation for the derivatives. This yields
\frac{\partial^2 u}{\partial t^2} \approx \frac{u_{i,n+1} - 2u_{i,n} + u_{i,n-1}}{\tau^2}
and
\frac{\partial^2 u}{\partial x^2} \approx \frac{u_{i+1,n} - 2u_{i,n} + u_{i-1,n}}{h^2}.
Thus, the wave equation transforms into

u_{i,n+1} = 2u_{i,n} - u_{i,n-1} + \left(\frac{v\tau}{h}\right)^2\left(u_{i+1,n} - 2u_{i,n} + u_{i-1,n}\right), \qquad v^2 = \frac{T}{\mu}.
(11.24) 
Notice that we again have to use a two dimensional grid to solve this equation, only now we have to allow for the solution to move both forwards and backwards in time.
In order to solve the hyperbolic equation, two sets of initial conditions must be specified. While they can take on many forms, for a traveling wave, the initial position and velocity of the wave pulse are the most frequently stated conditions.
Example:
Consider a 10.0 g string 5.00 m long. The left end, which is taken to be at the origin, is fixed, while the right end passes over a pulley and is attached to a 100 g weight. If the string near the left end is pulled upward a distance of 0.25 m and then released from rest, determine the subsequent motion of the string.
Start with equation (11.23) and assume that the solution can be written as
u(x,t) = \chi(x)\,\theta(t).
Using separation of variables, equation (11.23) becomes
\frac{1}{\theta}\frac{d^2\theta}{dt^2} = \frac{T}{\mu}\,\frac{1}{\chi}\frac{d^2\chi}{dx^2} = -\omega^2,
where \omega is a constant. The last equality yields two ordinary differential equations,

\frac{d^2\theta}{dt^2} + \omega^2\theta = 0,
(11.25) 
and

\frac{d^2\chi}{dx^2} + \frac{\mu\omega^2}{T}\,\chi = 0.
(11.26) 
We can solve each one in turn.
It is easy to see that the solution of equation (11.26) is

\chi_n(x) = \sin(k_n x),
(11.27) 
where the sine function was picked to satisfy the boundary conditions \chi(0) = \chi(L) = 0, which require k_n = n\pi/L, and

\omega_n = k_n\sqrt{\frac{T}{\mu}} = \frac{n\pi}{L}\sqrt{\frac{T}{\mu}}.
(11.28) 
Similarly, the solution of (11.25) can be written as

\theta_n(t) = B_n\cos(\omega_n t) + C_n\sin(\omega_n t),
(11.29) 
where \omega_n is given by equation (11.28). The requirement that the string be released from rest forms the initial condition
\left.\frac{\partial u}{\partial t}\right|_{t=0} = 0.
It can be seen from equation (11.28) that \omega_n is nonzero, which means that C_n = 0. Thus, the general solution can be written as

u(x,t) = \sum_{n=1}^{\infty} B_n\,\sin(k_n x)\cos(\omega_n t).
(11.30) 
In order to determine the last constant, B_n, we need to describe the form of the plucked string mathematically. We could assume that u(x,0) = u_0\,\delta(x), but this is not a very physical description. The next simplest choice is a triangular pulse. Assuming that the pulse is symmetric about its peak and has a base twice the overall height, this initial condition becomes

u(x,0) = u_0\left(1 - \frac{|x - x_0|}{x_0}\right) \quad \text{for } 0 \le x \le 2x_0, \qquad 0 \text{ otherwise}.
(11.31) 
To find B_n, we multiply both sides of equation (11.30), evaluated at t = 0, by \sin(k_m x) and integrate over the string. The right hand side reduces to (L/2)\,B_m due to the orthogonality of the sine functions, while the left hand side becomes
\int_0^L u(x,0)\sin(k_n x)\,dx = \frac{u_0}{k_n^2\,x_0}\left[2\sin(k_n x_0) - \sin(2k_n x_0)\right],
where x_0 = 0.25 m and L = 5.00 m. Thus, the final analytical solution is

u(x,t) = \frac{2u_0}{L\,x_0}\sum_{n=1}^{\infty}\frac{2\sin(k_n x_0) - \sin(2k_n x_0)}{k_n^2}\,\sin(k_n x)\cos(\omega_n t).
(11.32) 
To solve this problem numerically, we must first specify a grid on which the solution will be found. The size of the position step can be estimated from equation (11.31). Since the initial pulse is 0.5 m wide, a step size of 1.00 cm will give a reasonable approximation to the triangular pulse. Since the overall length of the string is 5.00 m, this translates to a total of 501 points in position space.
The temporal step is bounded by the stability criterion associated with our numerical solution. In the case of a hyperbolic PDE, this criterion (the Courant condition) is that
\frac{v\,\Delta t}{\Delta x} \le 1,
where \Delta t is the time step and \Delta x is the spatial step. For the parameters of this problem,
v = \sqrt{\frac{T}{\mu}} = \sqrt{\frac{0.980\ \mathrm{N}}{0.00200\ \mathrm{kg/m}}} \approx 22.1\ \mathrm{m/s}, \qquad \Delta t \le \frac{\Delta x}{v} \approx 4.5\times 10^{-4}\ \mathrm{s},
so a time step of 1.0 x 10^{-4} s will satisfy this condition.
Once the grid has been specified, the initial conditions are encoded. The initial position can be determined directly from equation (11.31). The initial velocity cannot be coded directly since it involves differentials. Translating this into a difference equation, the initial condition becomes

\frac{u_{i,1} - u_{i,-1}}{2\tau} = 0 \quad\Longrightarrow\quad u_{i,-1} = u_{i,1}.
(11.33) 
Substituting this into (11.24) yields

u_{i,1} = u_{i,0} + \frac{1}{2}\left(\frac{v\tau}{h}\right)^2\left(u_{i+1,0} - 2u_{i,0} + u_{i-1,0}\right)
(11.34) 
for the first time step. Subsequent steps are determined from equation (11.24). A program to determine the evolution of the wave is shown in hyperbolic.c.
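The two update rules such a program needs, equations (11.24) and (11.34), can be sketched in C as follows. This is a minimal illustration under my own naming (`wave_step`, `wave_first_step`, `r2` = (v\tau/h)^2), not the actual contents of hyperbolic.c:

```c
/* General leapfrog update, eq. (11.24):
   u_new[i] = 2u[i] - u_old[i] + r2*(u[i+1] - 2u[i] + u[i-1]),
   with r2 = (v*tau/h)^2 and fixed ends u_new[0] = u_new[N-1] = 0. */
void wave_step(const double *u_old, const double *u, double *u_new,
               int N, double r2)
{
    u_new[0] = 0.0;
    u_new[N-1] = 0.0;
    for (int i = 1; i < N-1; i++)
        u_new[i] = 2.0*u[i] - u_old[i] + r2*(u[i+1] - 2.0*u[i] + u[i-1]);
}

/* Special first step, eq. (11.34), for a string released from rest. */
void wave_first_step(const double *u0, double *u1, int N, double r2)
{
    u1[0] = 0.0;
    u1[N-1] = 0.0;
    for (int i = 1; i < N-1; i++)
        u1[i] = u0[i] + 0.5*r2*(u0[i+1] - 2.0*u0[i] + u0[i-1]);
}
```

After the special first step, the program simply alternates buffers and calls `wave_step` once per time step.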
Elliptical equations differ from the other two types of PDEs in that elliptical equations typically involve only spatial coordinates. Because of this, the initial conditions become boundary conditions, i.e., the value of the function along the boundaries is completely specified.
The most common example of elliptical PDEs comes from electricity and magnetism. Recall that, in differential form, Maxwell’s equations are

\nabla\cdot\mathbf{E} = \frac{\rho}{\epsilon_0}, \qquad \nabla\cdot\mathbf{B} = 0, \qquad \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \qquad \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\epsilon_0\frac{\partial\mathbf{E}}{\partial t}.
(11.35) 
If we substitute the definition \mathbf{E} = -\nabla\varphi, where \varphi is the electric potential, into the first of these equations, we get Poisson's equation,

\nabla^2\varphi = -\frac{\rho}{\epsilon_0}.
(11.36) 
Depending upon the charge distribution, there are many ways to solve this analytically. In particular, when the charge distribution is zero, equation (11.36) reduces to Laplace’s equation, which can then be solved via separation of variables.
In order to solve (11.36) numerically, we take advantage of the fact that the values on the boundaries are already specified. Since we are only concerned with the interior, we can immediately see that there are grid points on either side of any point being evaluated. Thus, a central difference approximation can be used, and the second derivative in the x direction is approximated as

\frac{\partial^2\varphi}{\partial x^2} \approx \frac{\varphi_{i+1,j} - 2\varphi_{i,j} + \varphi_{i-1,j}}{h^2}.
(11.37) 
Substituting this back into equation (11.36) converts Poisson’s equation into a difference equation on a grid. This can then be solved to find the value of the potential at a particular grid point. For example, in two dimensions, the solution at (x_{i},y_{j}) is:

\varphi_{i,j} = \frac{1}{4}\left(\varphi_{i+1,j} + \varphi_{i-1,j} + \varphi_{i,j+1} + \varphi_{i,j-1} + \frac{h^2\rho_{i,j}}{\epsilon_0}\right).
(11.38) 
It should be noted that this algorithm is not as sophisticated as we frequently want. This shows up in the large number of iterations needed to obtain convergence. If the amount of computer time needed to solve the problem becomes excessive, a more accurate and robust algorithm should be substituted.
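One iteration of the relaxation scheme in equation (11.38) can be sketched in C as below. The names are illustrative, and the source array is assumed pre-scaled: `rho_h2[i][j]` holds h^2 \rho_{i,j}/\epsilon_0:

```c
#include <math.h>

/* One Jacobi sweep of eq. (11.38) on an nx-by-ny grid stored
   row-major in phi.  Boundary values are copied through unchanged;
   every interior point is replaced by the average of its four
   neighbors plus the scaled source term.  Returns the largest change
   at any point, which serves as a convergence measure. */
double jacobi_sweep(const double *phi, double *phi_new,
                    const double *rho_h2, int nx, int ny)
{
    double maxdiff = 0.0;
    for (int m = 0; m < nx*ny; m++)
        phi_new[m] = phi[m];               /* keep boundary values */
    for (int i = 1; i < nx-1; i++) {
        for (int j = 1; j < ny-1; j++) {
            double v = 0.25*(phi[(i+1)*ny + j] + phi[(i-1)*ny + j]
                           + phi[i*ny + (j+1)] + phi[i*ny + (j-1)]
                           + rho_h2[i*ny + j]);
            double d = fabs(v - phi[i*ny + j]);
            if (d > maxdiff) maxdiff = d;
            phi_new[i*ny + j] = v;
        }
    }
    return maxdiff;
}
```

The sweep is repeated, swapping the two arrays, until the returned maximum change falls below a chosen tolerance; the slow decay of that number is exactly the convergence problem noted above.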