The derivative of E with respect to $u_m$

(Note: this may differ from the last lecture; this version is the one coded and thus tested.)

 

Eqn 1

The above was tested with the code relax1 (for\relax1.for).

The essence of this code is that I start with the approximate answer, then set $u_{64}$ ($r = 6.3$) to zero, and then iterate to the solution.  The purpose of this is merely to test the above equation.  A somewhat more sophisticated version of the code is for\relax.for.


The action for the two-dimensional Laplacian is

$$E = \frac{1}{2}\int\!\!\int \left[\left(\frac{\partial \Phi}{\partial x}\right)^{2} + \left(\frac{\partial \Phi}{\partial y}\right)^{2}\right] dx\,dy \qquad \text{Eqn 2}$$

With evenly spaced points in x and y, the procedure above (minimizing E with respect to each point value) yields

$$\Phi_{i,j} = \tfrac{1}{4}\left(\Phi_{i+1,j} + \Phi_{i-1,j} + \Phi_{i,j+1} + \Phi_{i,j-1}\right) \qquad \text{Eqn 3}$$

for a rather simple interpolation of the points nearest $(i,j)$.  The entire problem then involves placing the boundary and doing partial interpolations only for those points near the boundaries.

 

In particular, $\Phi_{9,6}$ is interpolated between $\Phi_{8,6}$, the boundary, $\Phi_{9,5}$, and $\Phi_{9,7}$.

Since any disturbance in this method propagates in each step only to the nearest-neighbor points, the method is fastest with the fewest points, but of course with the most discretization error.  In the past I have found it worthwhile to solve with N points in each dimension ($N^2$ total points), then to double, starting with interpolations between the first group of points, and finally to double once more.  The procedure of producing the new u from the old u is completely vectorizable, since there are no ifs and no function calls (fast).  I have in the past formed a logical variable to tell which points are near boundaries, so that I could then give them special treatment with stored interpolation terms.
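The sweep is short enough to show in full.  Below is a minimal Python sketch of the vectorized update of Eqn 3 (the actual relax codes are Fortran); it assumes a square grid whose outer edge carries the boundary values, and it skips the grid-doubling and the special near-boundary interpolation described above:

```python
import numpy as np

def relax(phi, sweeps=1000, tol=1e-8):
    """Jacobi relaxation for the 2-D Laplace equation (Eqn 3).

    phi: 2-D array whose edge values hold the boundary condition;
    the interior is the initial guess.  The update is fully
    vectorized: no ifs and no function calls in the inner loop.
    """
    for _ in range(sweeps):
        new = phi.copy()
        new[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2])
        if np.max(np.abs(new - phi)) < tol:
            return new
        phi = new
    return phi

# Example: phi = 1 on the top edge, 0 on the others, 33x33 grid.
phi = np.zeros((33, 33))
phi[0, :] = 1.0
phi = relax(phi)
```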

The fact that x, y, z appear in the same way in each dimension means that the multidimensional problem is “essentially” the same as the one-dimensional problem, except of course for the need for a lot more points.  In essence, the relaxation method in multiple dimensions is simply an extension of the one-dimensional schemes to many more points.  The local methods have a few additional complications in multiple dimensions, as shown below.

 

Partial Differential Equations (Press 17.1-17.2 old edition; 19.1-19.2 new edition).  Note that I am returning to the local problem of starting with the solution at one point and finding it at nearby points.

The object is to start with the known boundary condition at $t=0$ (the bottom of the picture) and to proceed upwards in time while satisfying the boundary conditions at $x=0$, $t=1,2,\ldots$ and at $x=x_N$, $t=1,2,\ldots$

 

First, the equation with first derivatives on both sides.

 

The flux-conservative equation is

$$\frac{\partial u}{\partial t} = -\frac{\partial F(u)}{\partial x} \qquad \text{Eqn 4}$$

which we simplify to a scalar u and a linear scalar F so that the typical equation is

$$\frac{\partial u}{\partial t} = -v\,\frac{\partial u}{\partial x} \qquad \text{Eqn 5}$$

with v a constant.  This one is so simple that we know the solution is

$$u = f(x - vt)$$

where f is any arbitrary function; i.e., with $\mathrm{arg} \equiv x - vt$,

$$\frac{\partial f}{\partial x} = \frac{df}{d\,\mathrm{arg}}\,\frac{\partial\,\mathrm{arg}}{\partial x} = \frac{df}{d\,\mathrm{arg}}, \qquad \frac{\partial f}{\partial t} = \frac{df}{d\,\mathrm{arg}}\,\frac{\partial\,\mathrm{arg}}{\partial t} = -v\,\frac{df}{d\,\mathrm{arg}},$$

so Eqn 5 is satisfied.

[Note that in real life this would require finding an f which satisfies the boundary conditions (frequently far from trivial).]
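As a quick numerical check of this chain-rule argument (a sketch, with a Gaussian chosen arbitrarily for f):

```python
import numpy as np

v, h = 2.0, 1e-5
f = lambda arg: np.exp(-arg ** 2)        # any smooth f will do
u = lambda x, t: f(x - v * t)            # the claimed solution of Eqn 5

x, t = 0.7, 0.3                          # an arbitrary test point
du_dt = (u(x, t + h) - u(x, t - h)) / (2 * h)   # centered differences
du_dx = (u(x + h, t) - u(x - h, t)) / (2 * h)
print(du_dt + v * du_dx)                 # ~0, so du/dt = -v du/dx holds
```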

The next step is to do a simple finite difference.  In t we have at the beginning only the present point, while in x we have both the point at x and the ones at $x+\Delta x$ and $x-\Delta x$, so that we quite simply form

$$\frac{u_{n+1,j} - u_{n,j}}{\Delta t} = -v\,\frac{u_{n,j+1} - u_{n,j-1}}{2\Delta x}$$

where n is time, j is x, which yields $u_{n+1,j}$ in terms of the already known quantities at the time n.

Simple, elegant, obvious, and unconditionally unstable.  Note that this equation is not numbered: it should never be used.
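To see the instability, here is a small Python sketch of the unnumbered scheme; the pulse should merely advect, but the high-k noise grows step by step until it swamps the answer:

```python
import numpy as np

N, v, dx = 100, 1.0, 1.0
dt = 0.5 * dx / v                        # even a "safe-looking" step fails
x = np.arange(N) * dx
u = np.exp(-0.01 * (x - 50.0) ** 2)      # smooth initial pulse

for n in range(1000):
    # numpy evaluates the right side fully before assigning,
    # so the old time-level values are used throughout
    u[1:-1] = u[1:-1] - v * dt / (2 * dx) * (u[2:] - u[:-2])
print(np.max(np.abs(u)))                 # enormous: the method never works
```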

 



The points $(n,j+1)$ and $(n,j-1)$ determine the derivative with respect to x, while the new point $(n+1,j)$ and $(n,j)$ determine the derivative in t.

 

Imagine that at time $t=0$ there is a fluctuation in the input data given by $c_k e^{ikx}$ as shown (this is the von Neumann stability analysis), so that the spurious part of u is a constant times $\xi^n e^{ikj\Delta x}$, where we note that at $n=0$, $\xi^0 = 1$ for all values of k.  The purpose of the stability analysis is to find the value of $\xi$ required by the iteration scheme.  If this value has norm < 1, the fluctuation will die out as n increases.  If the norm is > 1, the fluctuation will grow and the correct answer will be lost among a host of ± truncation errors.  Substituting the assumed fluctuation into the equation gives

$$\xi = 1 - \frac{v\,\Delta t}{2\Delta x}\left(e^{ik\Delta x} - e^{-ik\Delta x}\right) = 1 - i\,\frac{v\,\Delta t}{\Delta x}\sin(k\Delta x)$$

$$|\xi|^{2} - 1 = \left(\frac{v\,\Delta t}{\Delta x}\right)^{2}\sin^{2}(k\Delta x)$$

which is always greater than zero.  No problem here with a method that suddenly goes wrong; this one never works.

Lax method (the usable method)

The problem in the above method comes from the fact that the points used in the x derivative ($n,j+1$ and $n,j-1$) and the points used in the time derivative ($n+1,j$ and $n,j$) are different.  This allows a fluctuation to build up in which the points determining one derivative go wild with respect to the points determining the other.  Instead of using $u_{n,j}$, let's average $u_{n,j+1}$ and $u_{n,j-1}$ so that the same points are used in both derivatives.

$$u_{n+1,j} = \tfrac{1}{2}\left(u_{n,j+1} + u_{n,j-1}\right) - \frac{v\,\Delta t}{2\Delta x}\left(u_{n,j+1} - u_{n,j-1}\right) \qquad \text{Eqn 6}$$

Note that the above equation is numbered, so it is safe to use.

Then the analysis gives

$$\xi = \cos(k\Delta x) - i\,\frac{v\,\Delta t}{\Delta x}\sin(k\Delta x)$$

and $|\xi| \le 1$ for $|v|\,\Delta t/\Delta x \le 1$, which makes a certain amount of sense in that $v = dx/dt$ is the speed of propagation through a cell (the Courant condition).  Thus there is a usable explicit method.
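A minimal sketch of the Lax step (Eqn 6), identical to the unstable demo above except for the averaging, with Δt chosen to respect the Courant condition:

```python
import numpy as np

N, v, dx = 100, 1.0, 1.0
dt = 0.9 * dx / abs(v)                   # |v| dt / dx <= 1
x = np.arange(N) * dx
u = np.exp(-0.01 * (x - 50.0) ** 2)

for n in range(1000):
    u[1:-1] = (0.5 * (u[2:] + u[:-2])
               - v * dt / (2 * dx) * (u[2:] - u[:-2]))
print(np.max(np.abs(u)))                 # bounded: the pulse advects
                                         # (and diffuses a little) stably
```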

The main lesson here is that strange results can occur with perfectly logical inputs.  Instabilities grow, and some do not go away as Δx and Δt become small; but minor differences in approach can make major differences in results, so don't be afraid to experiment.

 

Diffusive initial value problems. 

 

In non-relativistic physics problems (most importantly the time dependent Schroedinger equation) the time derivative is determined by $\nabla^2$ of the spatial variation.  As a simplest example, the diffusion equation is

$$\frac{\partial u}{\partial t} = D\,\frac{\partial^{2} u}{\partial x^{2}} \qquad \text{Eqn 7}$$

Numerically differentiate this by first noting that the first derivative at the midpoint $m_1$ (between points 1 and 2) is $\left.\frac{du}{dx}\right|_{m_1} = (u_2 - u_1)/\Delta x$, and the first derivative at the midpoint $m_2$ is $\left.\frac{du}{dx}\right|_{m_2} = (u_3 - u_2)/\Delta x$.  Then the derivative of the first derivatives at point 2 is

$$\left.\frac{d}{dx}\frac{du}{dx}\right|_{2} = \frac{\left.\frac{du}{dx}\right|_{m_2} - \left.\frac{du}{dx}\right|_{m_1}}{\Delta x} = \frac{u_3 - 2u_2 + u_1}{(\Delta x)^{2}}.$$
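A one-line check of this three-point second derivative, sketched with $u = \sin x$ chosen arbitrarily:

```python
import numpy as np

dx = 1e-3
u1, u2, u3 = np.sin(1.0 - dx), np.sin(1.0), np.sin(1.0 + dx)
# (u3 - 2*u2 + u1)/dx^2 should match u'' = -sin(x) at x = 1
print((u3 - 2 * u2 + u1) / dx ** 2, -np.sin(1.0))
```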

 

to find

$$\frac{u_{n+1,j} - u_{n,j}}{\Delta t} = D\,\frac{u_{n,j+1} - 2u_{n,j} + u_{n,j-1}}{(\Delta x)^{2}} \qquad \text{Eqn 8}$$

Noting that both derivatives depend on the middle term $(n,j)$, we might expect that, like the Lax method above, there will be a region of stability, and indeed the same analysis yields

$$\xi = 1 - \frac{4D\,\Delta t}{(\Delta x)^{2}}\sin^{2}\!\left(\frac{k\Delta x}{2}\right) \qquad \text{Eqn 9}$$

which is stable for

$$\frac{2D\,\Delta t}{(\Delta x)^{2}} \le 1 \qquad \text{Eqn 10}$$

and the problem is that Δt must be smaller than $(\Delta x)^{2}/2D$, which is related to the time required to diffuse across a cell.  In addition, if we want an accurate representation of the second derivative of u, we will need to make Δx small.
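A sketch of the explicit scheme (Eqn 8) with Δt safely inside the Eqn 10 limit; pushing dt above 0.5*dx**2/D lets the checkerboard mode take over:

```python
import numpy as np

N, D, dx = 100, 1.0, 1.0
dt = 0.4 * dx ** 2 / D                   # 2*D*dt/dx^2 = 0.8 < 1: stable
u = np.zeros(N)
u[N // 2] = 1.0                          # point source

for n in range(500):
    u[1:-1] += D * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
print(u.max())                           # smooth spreading, no blow-up
```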

In real problems D is D(x), and it is probably large only in certain special areas; there the stability condition is suddenly violated, those k values with $k\Delta x$ on the order of $n\pi$ with n odd suddenly grow to dominate the solution, and we stop on an overflow.  Business managers wanting wing designs for tomorrow do not appreciate this excuse.  So we change the approximation slightly to

$$\frac{u_{n+1,j} - u_{n,j}}{\Delta t} = D\,\frac{u_{n+1,j+1} - 2u_{n+1,j} + u_{n+1,j-1}}{(\Delta x)^{2}} \qquad \text{Eqn 11}$$

so that $(n,j)$ is the only value at the current level and all others are from the $n+1$ level (which we do not know yet).

 

Now the analysis gives

$$\xi = \frac{1}{1 + \frac{4D\,\Delta t}{(\Delta x)^{2}}\sin^{2}(k\Delta x/2)} \qquad \text{Eqn 12}$$

which is stable for all values of Δt and Δx.  This is the fully implicit method for solving a two-dimensional (t and x) partial differential equation combining first derivatives in t with second derivatives in x.  My preferred grouping, for predictor-corrector, is

$$u_{n+1,j} = u_{n,j} + \frac{D\,\Delta t}{(\Delta x)^{2}}\left(u_{n+1,j+1} - 2u_{n+1,j} + u_{n+1,j-1}\right) \qquad \text{Eqn 13}$$

in which case the first guess on the right side is a simple projection of the left. 

That is, use UELAG to extrapolate from the last M times for each j to arrive at the first guess for $u_{n+1}$.  Then calculate all the $u_{n+1,j}$'s.  If we stopped there, the method would not be stable, so now use the calculated $u_{n+1,j}$'s and recalculate until the output u's equal the input, as you have already seen in the one-dimensional predictor-corrector method.  The iterations should use Aitken's extrapolation procedure at every third step.  If more than 5 iterations are required, halve the time step size.
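A minimal sketch of one such time step.  For brevity the first guess is just $u_n$ itself (standing in for the UELAG extrapolation), and the Aitken acceleration and step-halving logic are omitted; each pass solves Eqn 11 for $u_{n+1,j}$ using the neighbors from the previous pass:

```python
import numpy as np

def implicit_step(u, alpha, max_iter=100, tol=1e-12):
    """One fully implicit step (Eqn 11) by repeated relaxation.

    alpha = D*dt/dx**2; boundary values of u are held fixed.
    Iterates until the output u's equal the input u's.
    """
    new = u.copy()                       # first guess for u_{n+1}
    for _ in range(max_iter):
        prev = new.copy()
        new[1:-1] = (u[1:-1] + alpha * (prev[2:] + prev[:-2])) / (1 + 2 * alpha)
        if np.max(np.abs(new - prev)) < tol:
            break
    return new
```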

Your authors, Press et al., suggest a different scheme.  Define

$$\alpha \equiv \frac{D\,\Delta t}{(\Delta x)^{2}}$$

and rewrite the equations with the known $u_{n,j}$'s on the right:

$$-\alpha\,u_{n+1,j-1} + (1 + 2\alpha)\,u_{n+1,j} - \alpha\,u_{n+1,j+1} = u_{n,j}$$

This set of equations involves the $j-1$, $j$, $j+1$ elements of the matrix only and is known as tridiagonal.  The routine for solving it is given in detail in Press section 2.4.  It involves some factor times 3N operations rather than the order $N^3$ that would be required to brute-force the matrix.  It does not pivot.  The predictor-corrector which I suggested above could be considered a Gauss-Seidel method for solving these equations.
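A sketch of the tridiagonal solve (the Thomas algorithm: forward elimination, back substitution, no pivoting), applied to the implicit diffusion equations above with $u = 0$ assumed outside the grid:

```python
import numpy as np

def tridiag_solve(a, b, c, d):
    """Solve a tridiagonal system in O(N) operations.

    a: sub-diagonal (a[0] unused), b: diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    """
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for j in range(1, n):                      # forward elimination
        m = b[j] - a[j] * cp[j - 1]
        cp[j] = c[j] / m
        dp[j] = (d[j] - a[j] * dp[j - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for j in range(n - 2, -1, -1):             # back substitution
        x[j] = dp[j] - cp[j] * x[j + 1]
    return x

# One implicit diffusion step:
# -alpha*u[j-1] + (1+2*alpha)*u[j] - alpha*u[j+1] = u_old[j]
alpha, N = 2.0, 50
u_old = np.zeros(N); u_old[N // 2] = 1.0
u_new = tridiag_solve(np.full(N, -alpha), np.full(N, 1 + 2 * alpha),
                      np.full(N, -alpha), u_old)
```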

The recommended method

Both the explicit and the implicit methods above make an error of order Δt in the time step by using the difference equation as an estimate of the time derivative at one end or the other.  This is a bit of a shame, since the Δx second derivative is properly centered to gain an extra power of Δx in its error.  Centering says

$$\frac{\partial u(t + \Delta t/2)}{\partial t} = \frac{D}{2}\left(\frac{\partial^{2} u(t+\Delta t)}{\partial x^{2}} + \frac{\partial^{2} u(t)}{\partial x^{2}}\right)$$

so that the integral becomes $u(t+\Delta t) = u(t) + \Delta t\,\frac{\partial u(t+\Delta t/2)}{\partial t} + O(\Delta t)^{3}$ for the higher accuracy desired.  But this says to simply average the explicit and implicit methods (the Crank-Nicolson scheme).  Since it has a name attached (this is the recommended method)....

 

$$\frac{u_{n+1,j} - u_{n,j}}{\Delta t} = \frac{D}{2}\,\frac{\left(u_{n+1,j+1} - 2u_{n+1,j} + u_{n+1,j-1}\right) + \left(u_{n,j+1} - 2u_{n,j} + u_{n,j-1}\right)}{(\Delta x)^{2}} \qquad \text{Eqn 14}$$

and we recognize the predictor-corrector from our one-dimensional method.  The stability factor is

$$\xi = \frac{1 - 2\alpha\sin^{2}(k\Delta x/2)}{1 + 2\alpha\sin^{2}(k\Delta x/2)}$$

so that the method is stable for any size Δt.
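A sketch of the Crank-Nicolson step (Eqn 14), written with a dense solve for clarity (for real work use the tridiagonal routine above); note that α here can be as large as you like:

```python
import numpy as np

N, alpha = 50, 2.0                       # alpha = D*dt/dx^2, any size
L = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
     + np.diag(np.ones(N - 1), -1))      # second-difference matrix
A = np.eye(N) - 0.5 * alpha * L          # the unknown (n+1) side
B = np.eye(N) + 0.5 * alpha * L          # the known (n) side
u = np.zeros(N); u[N // 2] = 1.0

for n in range(100):
    u = np.linalg.solve(A, B @ u)
print(u.max())                           # smooth, stable decay
```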

For the time dependent radial Schroedinger equation (with $u = r\psi$)

$$i\hbar\,\frac{\partial u}{\partial t} = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2} u}{\partial r^{2}} + V(r)\,u$$

this method has the advantage of Hermiticity, so that the normalization of u is preserved. 
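A sketch of that norm preservation for the free radial equation, using the Cayley form $(I + i\,\Delta t\,H/2)\,u_{n+1} = (I - i\,\Delta t\,H/2)\,u_n$ of the Crank-Nicolson step, assuming $\hbar = m = 1$ and $V = 0$ for simplicity:

```python
import numpy as np

N, dr, dt = 200, 0.1, 0.05
# H = -(1/2) d^2/dr^2 on a grid: real symmetric, hence Hermitian
H = (-0.5 / dr ** 2) * (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
                        + np.diag(np.ones(N - 1), -1))
r = np.arange(1, N + 1) * dr
u = r * np.exp(-(r - 10.0) ** 2 + 2j * r)      # u = r*psi, a moving packet

A = np.eye(N) + 0.5j * dt * H
B = np.eye(N) - 0.5j * dt * H
print(np.sum(np.abs(u) ** 2) * dr)             # norm before
for n in range(100):
    u = np.linalg.solve(A, B @ u)
print(np.sum(np.abs(u) ** 2) * dr)             # same norm after
```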

A general problem is the cylindrically symmetric case, where two nuclei approach each other at a distance $R(t)$.  The potential needs $R_{1e}$ and $R_{2e}$ (the distances from the electron to each nucleus), which can be found from ρ, z, and $R(t)$, to yield a potential $V(\rho,z,t)$.  The equation to be solved is then

$$i\hbar\,\frac{\partial u}{\partial t} = -\frac{\hbar^{2}}{2m}\left(\frac{\partial^{2} u}{\partial \rho^{2}} + \frac{1}{\rho}\frac{\partial u}{\partial \rho} + \frac{\partial^{2} u}{\partial z^{2}}\right) + V(\rho,z,t)\,u \qquad \text{Eqn 15}$$

The interest occurs when $Z_1 + Z_2 > 137$ and the ground state energy is $< -1022$ keV, i.e. particles can be created from the vacuum.  Note that in this case we really need to solve the more complex, but first-order, Dirac equation.