# The two body correlation function

The classical expectation value of the potential, assumed to be a sum of pair terms $V=\sum_{i<j} v(\mathbf r_i,\mathbf r_j)$, is defined to be

$$\langle V\rangle = \frac{\int d^3r_1\cdots d^3r_N\; V\, e^{-\beta V}}{\int d^3r_1\cdots d^3r_N\; e^{-\beta V}} \tag{0.1}$$

All coordinates are equivalent in this definition, leading to

$$\langle V\rangle = \frac{N(N-1)}{2}\,\frac{\int d^3r_1\cdots d^3r_N\; v(\mathbf r_1,\mathbf r_2)\, e^{-\beta V}}{\int d^3r_1\cdots d^3r_N\; e^{-\beta V}} \tag{0.2}$$

Define the two-body correlation function to be

$$\rho^2\, g(\mathbf r_1,\mathbf r_2) = N(N-1)\,\frac{\int d^3r_3\cdots d^3r_N\; e^{-\beta V}}{\int d^3r_1\cdots d^3r_N\; e^{-\beta V}} \tag{0.3}$$

with $\rho = N/\Omega$ the number density in volume $\Omega$, so that

$$\langle V\rangle = \frac{\rho^2}{2}\int d^3r_1\, d^3r_2\; v(\mathbf r_1,\mathbf r_2)\, g(\mathbf r_1,\mathbf r_2) \tag{0.4}$$

Let $\mathbf r = \mathbf r_1 - \mathbf r_2$:

$$\langle V\rangle = \frac{\rho^2}{2}\int d^3r_2\, d^3r\; v(\mathbf r + \mathbf r_2,\mathbf r_2)\, g(\mathbf r + \mathbf r_2,\mathbf r_2) \tag{0.5}$$

Define the single-variable $g$ to be the average of (0.3) over $\mathbf r_2$ for $\mathbf r_1 = \mathbf r + \mathbf r_2$:

$$g(\mathbf r) = \frac{1}{\Omega}\int d^3r_2\; g(\mathbf r + \mathbf r_2,\mathbf r_2) \tag{0.6}$$

Most inter-particle potentials depend only on the magnitude of $\mathbf r$; thus define

$$g(r) = g(|\mathbf r|) \tag{0.7}$$

This is crudely related to the scattering function in ..\definitions\gofr\IbachLuth.htm.  A prescription for finding g(r) using Monte Carlo methods is described in ..\definitions\gofr\gofr2.htm, which contains HW 22, an interesting exercise of medium difficulty.
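The histogram part of that prescription can be sketched as follows. This is a minimal Python/NumPy illustration (not the code in gofr2.htm): pair distances are binned with the minimum-image convention and normalized by the ideal-gas expectation, so that for non-interacting random points g(r) should come out close to 1.

```python
import numpy as np

def pair_correlation(pos, box, nbins=50):
    """Histogram of pair distances, normalized so that g(r)=1 for an ideal gas."""
    n = len(pos)
    rmax = box / 2.0                       # minimum-image convention: r < R/2 only
    # all pair separation vectors, wrapped into the nearest periodic image
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r[r < rmax], bins=nbins, range=(0.0, rmax))
    rho = n / box ** 3
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = 0.5 * n * rho * shell          # expected pair counts for an ideal gas
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal

rng = np.random.default_rng(0)
box = 10.0
pos = rng.uniform(0.0, box, size=(400, 3))  # non-interacting "ideal gas"
r, g = pair_correlation(pos, box)
```

For an interacting system one would accumulate the histogram over many Monte Carlo configurations instead of a single random one.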

Finally

$$\langle V\rangle = \frac{N\rho}{2}\int_0^\infty 4\pi r^2\, v(r)\, g(r)\, dr \tag{0.8}$$

# The Ornstein-Zernike equation

(Proc. Acad. Sci. Amsterdam, 17, 793 (1914))

Define the function h(r) to be

$$h(r) = g(r) - 1 \tag{1.1}$$

The function g(r) is the two-body correlation function: the probability of finding a particle at a distance r from another particle at the origin.  In a liquid this probability becomes one for large enough r.  The Ornstein-Zernike equation defines the direct correlation function c(r):

$$h(r) = c(r) + \rho\int d^3r'\; c(r')\, h(|\mathbf r - \mathbf r'|) \tag{1.2}$$

This says that the probability of finding a second particle at a distance r from one at the origin differs from one by the direct correlation due to the particle at the origin plus the integral of the direct correlation from all of the other particles.

This would be just a defining relation for c(r), were it not for the fact that there are many reasonable definitions of c(r).  My favorite way of defining c(r) is to calculate g(|r|) inside the usual periodically repeated box of side R (..\definitions\gofr\gofr2.htm) using Monte Carlo techniques.

[Figure: the simulation box of side R together with its periodic repeats.]

Obviously there is an enormous premium on making small N give the large-N result.  The close-in parts, r<R/2, rather rapidly reach the same result that one would find in an infinite system, while the parts for r>R/2 are obviously very biased by the fact that they are averages over periodic repeats.  A technique due, in part at least, to Hansen and Levesque is to define c(r) by

$$h(r) = h_{MC}(r) \qquad r < R/2$$

$$c(r) = 0 \qquad\qquad\; r > R/2$$

This is only one of many possible ways to define c(r); others have names such as Percus-Yevick, hypernetted-chain (HNC), or BBGKY, depending on the names of their originators or the shapes of the graphs used to arrive at the approximations.

The 5 values of h computed by Monte Carlo are the result of 5 integrals over all r' of the function c(r')h(r-r'), with r taken as each of the 5 values.  This gives 5 equations for the 5 desired values of c(r), which in principle could be solved by minimizing

$$\chi^2 = \sum_{i=1}^{5}\left[h_{MC}(r_i) - c(r_i) - \rho\int d^3r'\; c(r')\, h(r_i - r')\right]^2 \tag{1.3}$$

where the constant vector **c** can be taken simply as the values of c at the 5 points where it is assumed to be non-zero.  The integral uses h values between R/2 and R, which must be calculated from

$$h(r) = c(r) + \rho\int d^3r'\; c(r')\, h(|\mathbf r - \mathbf r'|) \qquad R/2 < r < R \tag{1.4}$$

for each assumed value of **c**.  Note that we have introduced a "fictitious" short-ranged function c(r) which produces the long-ranged function h(r), and in addition given a scheme for calculating this function.  In essence the Ornstein-Zernike equation is fundamental in liquid theory, and many people, without adequate thought about Fourier transforms, have tried to find efficient ways to do the 3-dimensional integral for all 3d values of r, amounting to a 6-dimensional set of points to calculate.

# Convolution

$$d(\mathbf r) = \int d^3r'\; c(\mathbf r')\, h(\mathbf r - \mathbf r') \tag{2.1}$$

Begin with the one-dimensional definition of the Fourier transforms:

$$H(f) = \int_{-\infty}^{\infty} h(r)\, e^{-2\pi i f r}\, dr, \qquad h(r) = \int_{-\infty}^{\infty} H(f)\, e^{2\pi i f r}\, df \tag{2.2}$$

Extended to 3 dimensions these become

$$H(\mathbf f) = \int d^3r\; h(\mathbf r)\, e^{-2\pi i\,\mathbf f\cdot\mathbf r}, \qquad h(\mathbf r) = \int d^3f\; H(\mathbf f)\, e^{2\pi i\,\mathbf f\cdot\mathbf r} \tag{2.3}$$

The expression for a delta function is easily recovered by substituting the equation for h(r) into the equation for H(f):

$$H(\mathbf f) = \int d^3f'\; H(\mathbf f')\int d^3r\; e^{2\pi i\,(\mathbf f'-\mathbf f)\cdot\mathbf r} \tag{2.4}$$

from which we see that

$$\int d^3r\; e^{2\pi i\,(\mathbf f'-\mathbf f)\cdot\mathbf r} = \delta^3(\mathbf f' - \mathbf f) \tag{2.5}$$

Substituting the transforms into the convolution (2.1),

$$d(\mathbf r) = \int d^3r' \int d^3f\; C(\mathbf f)\, e^{2\pi i\,\mathbf f\cdot\mathbf r'} \int d^3f'\; H(\mathbf f')\, e^{2\pi i\,\mathbf f'\cdot(\mathbf r-\mathbf r')} \tag{2.6}$$

on allowing for the evaluation of the r' integral first,

$$d(\mathbf r) = \int d^3f \int d^3f'\; C(\mathbf f)\, H(\mathbf f')\, e^{2\pi i\,\mathbf f'\cdot\mathbf r}\; \delta^3(\mathbf f - \mathbf f') \tag{2.7}$$

The last integral is zero unless f' = f, which reduces the set of integrals to

$$d(\mathbf r) = \int d^3f\; C(\mathbf f)\, H(\mathbf f)\, e^{2\pi i\,\mathbf f\cdot\mathbf r} \tag{2.8}$$

which implies that

$$D(\mathbf f) = C(\mathbf f)\, H(\mathbf f) \tag{2.9}$$

So while the original evaluation of the 3d integral for d(r) in terms of c(r') and h(r-r') — since it is needed for all r — involved 6 dimensions of work, or N⁶ function evaluations, the evaluation is now reduced to two 3d transforms to find H and C, each requiring about N³ log₂(N³) operations, and a back transform to find d(r) costing the same.  To put numbers to it: a calculation with only 100 points per dimension requires 10¹² operations to do the straightforward integrals for d(r).  The transform route involves roughly 10⁶ × 20 × 3, or fewer than 10⁸ operations (quite feasible on today's computers).
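The operation-count argument is easy to check on a small one-dimensional example. This Python/NumPy sketch computes a periodic convolution both by the direct O(N²) sum and by the transform route; the two agree to machine precision.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
c = rng.standard_normal(N)
h = rng.standard_normal(N)

# direct route: d[i] = sum_j c[j] * h[i - j], indices taken mod N (O(N^2) work)
d_direct = np.array([sum(c[j] * h[(i - j) % N] for j in range(N))
                     for i in range(N)])

# transform route: D = C * H, then back-transform (O(N log N) work)
d_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(h)).real

max_err = np.abs(d_direct - d_fft).max()
```

The same comparison in 3d is what turns 10¹² operations into fewer than 10⁸.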

However, the date on the OZ equation is 1914.  Obviously, the use of symmetries can reduce this still further.  For those cases where h(r)=h(|r|) the transform H is given by

$$H(f) = \frac{2}{f}\int_0^\infty r\, h(r)\, \sin(2\pi f r)\, dr \tag{2.10}$$

where we notice that H(f) is also a function of |f| only.  Explicitly making the back transform, we notice that the minus sign in the exponent cancels out of the integration and that the back transform is the same as the forward transform:

$$h(r) = \frac{2}{r}\int_0^\infty f\, H(f)\, \sin(2\pi f r)\, df \tag{2.11}$$

So now the only problem is how to make a sine transform.  Note that this is similar to the symmetric-region problem, except that the transform now relates f H(f) to r h(r).  See Sinetran.htm.  This transform occurs so often that Press gives us his version.

Of course it uses the FFT as its base, but the integral runs from 0 to infinity, not from -∞ to +∞, and involves the sine rather than the exponential, so a factor-of-two savings in the number of data points needed is in principle possible.  Press (section 12.3, both editions) gives the necessary code.

for\SINFT.for
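SINFT.for is Fortran; the idea behind it can be sketched in Python/NumPy by building the odd extension of the data and taking one full FFT (Press's routine achieves the same result with half the work). The decaying sample data below is an arbitrary choice for the check against the defining sum.

```python
import numpy as np

def sinft(a):
    """S[k] = sum_{j=1}^{N-1} a[j] sin(pi*j*k/N), via the FFT of the odd extension."""
    N = len(a)
    y = np.zeros(2 * N)
    y[1:N] = a[1:]                 # y[j]      =  a[j]
    y[N + 1:] = -a[1:][::-1]       # y[2N - j] = -a[j]  (odd extension)
    # Im FFT(y)[k] = -2 * S[k] for this real odd sequence
    return -np.fft.fft(y).imag[:N] / 2.0

N = 64
a = np.zeros(N)
a[1:] = np.exp(-0.1 * np.arange(1, N))   # sample data; a[0] must be 0

s_fft = sinft(a)

# direct evaluation of the defining sum, for comparison
j = np.arange(N)
s_direct = np.array([np.sum(a * np.sin(np.pi * j * k / N)) for k in range(N)])
```

With r h(r) stored in `a`, `sinft(a)` (times the grid spacing and 2/f) discretizes (2.10).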

# Assignment HW 26

Calculate v(r) by convolution methods for the following test problem.

Consider a charged sphere of radius R = 10 and charge density 1.  Gauss's law tells us that

$$4\pi r^2 E(r) = -\tfrac{4}{3}\pi r^3 \qquad r < R$$

$$4\pi r^2 E(r) = -\tfrac{4}{3}\pi R^3 \qquad r > R$$

so that

$$v(r) = \frac{1000}{3r} \qquad\quad\; r > R$$

$$v(r) = 50 - \frac{r^2}{6} \qquad r < R$$

We also have

$$v(\mathbf r) = \int d^3r'\; \frac{\rho(\mathbf r')}{4\pi\,|\mathbf r - \mathbf r'|}$$

which is a convolution integral.  for\CONVOL.for
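As a check that the two analytic forms of v(r) really follow from the convolution integral, note that for a spherical density it reduces to the radial form $v(r) = \frac{1}{r}\int_0^r r'^2 \rho(r')\,dr' + \int_r^\infty r' \rho(r')\,dr'$. The Python sketch below (simple trapezoid rule, grid size an arbitrary choice) evaluates this numerically and can be compared with $50 - r^2/6$ inside and $1000/(3r)$ outside.

```python
import numpy as np

def trap(y, x):
    """Trapezoid rule; returns 0.0 for empty input."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

R = 10.0
rp = np.linspace(0.0, R, 100001)      # density is 1 inside R, 0 outside

def v_numeric(r):
    # v(r) = (1/r) * int_0^r r'^2 dr'  +  int_r^R r' dr'
    inner = rp[rp <= r]
    outer = rp[rp >= r]
    return trap(inner**2, inner) / r + trap(outer, outer)

v_in = v_numeric(5.0)     # compare with 50 - 25/6
v_out = v_numeric(20.0)   # compare with 1000/(3*20)
```

The assignment asks for the same answer via the transform route, which is where the transform of 1/r below comes in.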

Interestingly, the transform of 1/r is not trivial:

$$F(\mathbf f) = \int d^3r\; \frac{e^{-2\pi i\,\mathbf f\cdot\mathbf r}}{r}$$

Noting that the z axis of the r coordinate system can be located along f,

$$F(f) = 2\pi\int_0^\infty r\, dr \int_{-1}^{1} d\mu\; e^{-2\pi i f r \mu}$$

The integral over μ is easy:

$$F(f) = \frac{2}{f}\int_0^\infty \sin(2\pi f r)\, dr$$

This nice integral does not converge.  Introduce an ε as a convergence factor:

$$\frac{2}{f}\int_0^\infty e^{-\varepsilon r}\,\sin(2\pi f r)\, dr = \frac{4\pi}{\varepsilon^2 + 4\pi^2 f^2}$$

So that, in the limit ε → 0,

$$F(f) = \frac{1}{\pi f^2}$$

For more detail than desired, see test_problem
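The ε-regularized integral can be checked numerically. This Python/NumPy sketch (the test frequency, cutoff, and grid size are arbitrary choices) evaluates the regularized integral by the trapezoid rule and compares it with the closed form, which tends to 1/(πf²) as ε → 0.

```python
import numpy as np

f = 0.5                                    # arbitrary test frequency

def regularized(eps, rmax=2000.0, n=2_000_001):
    """(2/f) * int_0^rmax exp(-eps*r) sin(2*pi*f*r) dr, trapezoid rule."""
    r = np.linspace(0.0, rmax, n)
    y = np.exp(-eps * r) * np.sin(2.0 * np.pi * f * r)
    return (2.0 / f) * float(np.sum(y[:-1] + y[1:]) * (r[1] - r[0]) / 2.0)

def closed_form(eps):
    return 4.0 * np.pi / (eps**2 + 4.0 * np.pi**2 * f**2)

val = regularized(0.1)                     # should match closed_form(0.1)
```

Shrinking ε drives both toward 1/(π f²), confirming the limit quoted above.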