3.15. Delta Function

The delta function \delta(x) is defined such that the following relation holds:

(3.15.1)\int f(x)\delta(x-t)\d x=f(t)

No such function exists, but one can find many sequences “converging” to a delta function:

(3.15.2)\lim_{\alpha\to\infty}\delta_\alpha(x) = \delta(x)

more precisely:

(3.15.3)\lim_{\alpha\to\infty}\int f(x)\delta_\alpha(x)\d x = \int f(x)\lim_{\alpha\to\infty}\delta_\alpha(x)\d x = f(0)

one example of such a sequence is:

\delta_\alpha(x) = {1\over\pi x}\sin(\alpha x)

It’s clear that (3.15.3) holds for any well-behaved function f(x). Some mathematicians like to say that it’s incorrect to use such a notation when in fact the integral (3.15.1) doesn’t “exist”, but we will not follow their approach, because what is important is not whether something “exists” but whether it is clear what we mean by our notation: (3.15.1) is a shorthand for (3.15.3), and (3.15.2) gets a mathematically rigorous meaning once you integrate both sides and use (3.15.1) to arrive at (3.15.3). One then uses the relations (3.15.1), (3.15.2) and (3.15.3) to derive all properties of the delta function.

Let’s give an example. Let {\bf\hat r} be a unit vector in 3D, which we can label using spherical coordinates, {\bf\hat r}={\bf\hat r}(\theta,\phi). We can also express it in cartesian coordinates as {\bf\hat r}(\theta,\phi)=(\cos\phi\sin\theta,\sin\phi\sin\theta,\cos\theta). The delta function \delta({\bf\hat r}-{\bf\hat r'}) on the unit sphere is then defined by the relation

(3.15.4)f({\bf\hat r'})=\int\delta({\bf\hat r}-{\bf\hat r'})f({\bf\hat r})\,\d {\bf\hat r}

Expressing f({\bf\hat r})=f(\theta,\phi) as a function of \theta and \phi we have

(3.15.5)f(\theta',\phi')=\int\delta(\theta-\theta')\delta(\phi-\phi') f(\theta,\phi)\,\d\theta\d\phi

Expressing (3.15.4) in spherical coordinates we get

f(\theta',\phi')=\int\delta({\bf\hat r}-{\bf\hat r'}) f(\theta,\phi)\sin\theta\,\d\theta\d\phi

and comparing to (3.15.5) we finally get

\delta({\bf\hat r}-{\bf\hat r'})={1\over\sin\theta} \delta(\theta-\theta')\delta(\phi-\phi')

In exactly the same manner we get

\delta({\bf r}-{\bf r'})=\delta({\bf\hat r}-{\bf\hat r'}) {\delta(\rho-\rho')\over\rho^2}

See also (3.17.4.1) for an example of how to deal with more complex expressions involving the delta function like \delta^2(x).

When integrating over a finite interval, the following formula is very useful:

\int_a^b f(x)\delta(x-t)\d x = f(t) \theta(b-t) \theta(t-a)

in other words, the integral vanishes unless a < t < b. In the limit a\to
-\infty and b\to\infty we get:

\theta(b-t) \to \theta(\infty - t) = 1

\theta(t-a) \to \theta(t - (-\infty)) = \theta(t+\infty) = 1
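
This behavior is easy to spot-check with SymPy, whose DiracDelta and Heaviside implement exactly these rules (a minimal sketch with concrete numbers, t=1 inside (0,2) and t=3 outside):

>>> from sympy import DiracDelta, integrate, exp, symbols
>>> x = symbols('x')
>>> integrate(exp(x)*DiracDelta(x - 1), (x, 0, 2))  # t inside (a, b): f(1)
E
>>> integrate(exp(x)*DiracDelta(x - 3), (x, 0, 2))  # t outside (a, b)
0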

Another integral that converges to a delta function is:

{1\over 2\pi} \int_{-\infty}^\infty e^{i\omega x} \d x
    = \lim_{L\to\infty} {1\over 2\pi} \int_{-L}^L e^{i\omega x} \d x
    = \lim_{L\to\infty} {\sin \omega L \over \pi \omega} = \delta(\omega)
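
Both facts can be checked with SymPy (a sketch, assuming a recent SymPy; the first line verifies that each \delta_L integrates to one, the second the finite-L integral above):

>>> from sympy import symbols, integrate, exp, sin, pi, I, oo, simplify
>>> x = symbols('x')
>>> w, L = symbols('omega L', positive=True)
>>> integrate(sin(L*x)/(pi*x), (x, -oo, oo))  # normalization of delta_L
1
>>> expr = integrate(exp(I*w*x), (x, -L, L))/(2*pi)
>>> simplify(expr.rewrite(sin) - sin(w*L)/(pi*w))
0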

3.16. Distributions

Some mathematicians like to use distributions and a special mathematical notation for them, which I think makes things less clear. Nevertheless it’s important to understand that notation too, so it is explained in this section, but I discourage its use – I suggest using only the physical notation explained below. The math notation below is put into quotation marks, so that it’s not confused with the physical notation.

A distribution is a functional, and each function f(x) can be identified with the distribution \mathnot{T_f} that it generates, using this definition (\varphi(x) is a test function):

\mathnot{T_f(\varphi(x))} \equiv \int f(x)\varphi(x) \d x \equiv
\mathnot{f(\varphi(x))} \equiv \mathnot{(f(x), \varphi(x))}

besides that, one can also define distributions that can’t be identified with regular functions; one example is the delta distribution (the Dirac delta function):

\mathnot{\delta(\varphi(x))} \equiv \varphi(0) \equiv \int \delta(x) \varphi(x) \d x

The last integral is not used in mathematics; in physics, on the other hand, the first expression (\mathnot{\delta(\varphi(x))}) is not used, so \delta(x) always means that you have to integrate it, as explained in the previous section, and it behaves like a regular function (except that no such function exists; the precise mathematical meaning comes only after you integrate it, or through the identification above with distributions).

One then defines common operations via their action on the generating function, observes the pattern, and takes it as the definition for all distributions. For example, differentiation:

\mathnot{{\d\over\d x}T_f(\varphi)} =
\mathnot{T_{f'}(\varphi)} =
\int f'\varphi \d x =
-\int f\varphi' \d x =
\mathnot{-T_f(\varphi')}

so:

\mathnot{{\d\over\d x}T(\varphi)} =
\mathnot{-T(\varphi')}

Multiplication:

\mathnot{gT_f(\varphi)} =
\mathnot{T_{gf}(\varphi)} =
\int gf\varphi \d x =
\mathnot{T_f(g\varphi)}

so:

\mathnot{gT(\varphi)} =
\mathnot{T(g\varphi)}

Fourier transform:

\mathnot{FT_f(\varphi)} =
\mathnot{T_{Ff}(\varphi)} =
\int F(f)\varphi \d x =

=\int\left[\int e^{-ikx} f(k) \d k\right] \varphi(x) \d x
=\int f(k)\left[\int e^{-ikx} \varphi(x) \d x\right] \d k
=\int f(x)\left[\int e^{-ikx} \varphi(k) \d k\right] \d x
=

=
\int f F(\varphi) \d x =
\mathnot{T_f(F\varphi)}

so:

\mathnot{FT(\varphi)} =
\mathnot{T(F\varphi)}

But as you can see, the notation is just making things more complex, since it’s enough to just work with the integrals and forget about the rest. One can then even omit the integrals, with the understanding that they are implicit.

Some more examples:

\int \delta(x-x_0)\varphi(x) \d x = \int \delta(x)\varphi(x+x_0) \d x
= \varphi(x_0) \equiv \mathnot{\delta(\varphi(x+x_0))}

Proof of \delta(-x) = \delta(x):

\int_{-\infty}^\infty \delta(-x)\varphi(x)\d x =
-\int_{\infty}^{-\infty} \delta(y)\varphi(-y)\d y =
\int_{-\infty}^\infty \delta(x)\varphi(-x)\d x
\equiv \mathnot{\delta(\varphi(-x))} = \varphi(0)
=\mathnot{\delta(\varphi(x))}
\equiv
\int_{-\infty}^\infty \delta(x)\varphi(x)\d x

Proof of x\delta(x) = 0:

\int x\delta(x)\varphi(x)\d x = \mathnot{\delta(x\varphi(x))}
= 0 \cdot \varphi(0) = 0

Proof of \delta(cx) = {\delta(x)\over |c|}:

\int \delta(cx)\varphi(x)\d x = {1\over|c|}\int\delta(x)\varphi\left({x\over
c}\right)\d x
=\mathnot{\delta\left(\varphi\left({x\over c}\right)\over|c|\right)}
={\varphi(0)\over|c|}
=\mathnot{{\delta(\varphi(x))\over|c|}}
=
\int {\delta(x)\over |c|}\varphi(x)\d x
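
SymPy’s DiracDelta has these rules built in, so the identities can be spot-checked directly (a minimal sketch):

>>> from sympy import DiracDelta, integrate, symbols, oo
>>> x = symbols('x')
>>> DiracDelta(-x)  # delta(-x) = delta(x) is applied automatically
DiracDelta(x)
>>> integrate(DiracDelta(3*x), (x, -oo, oo))  # delta(c x) = delta(x)/|c|
1/3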

To prove that \lim_{L\to\infty} {1\over\pi x}\sin(L x)=\delta(x) we do the following calculation:

\left[\int_{-\infty}^\infty \lim_{L\to\infty} {1\over\pi x}\sin(L x)
    \varphi(x) \d x\right] - \varphi(0) =

= \lim_{L\to\infty} \int_{-\infty}^\infty {1\over\pi x}\sin(L x)
    (\varphi(x)-\varphi(0)) \d x =

= \lim_{L\to\infty} {1\over\pi} \int_{-\infty}^\infty
    {\varphi(x)-\varphi(0)\over x} \sin(L x) \d x =

= \lim_{L\to\infty} {1\over\pi} \int_{-\infty}^\infty
    h(x) \sin(L x) \d x = 0

where the function h(x) = {\varphi(x)-\varphi(0)\over x} is bounded and h(0)
=\lim_{x\to0}{\varphi(x)-\varphi(0)\over x}=\varphi'(0) is finite, since the test function \varphi(x) is infinitely differentiable. (In the first step we used \int_{-\infty}^\infty{1\over\pi x}\sin(Lx)\d x = 1, valid for every L>0, to bring \varphi(0) under the integral sign.) By the Riemann–Lebesgue lemma, the integral then converges to zero as L\to\infty.
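
For a concrete test function, say \varphi(x)=e^{-x^2}, SymPy can carry out the whole check in closed form (a sketch; assuming SymPy manages the integral, which comes out as an error function):

>>> from sympy import symbols, integrate, limit, sin, exp, pi, oo
>>> x = symbols('x')
>>> L = symbols('L', positive=True)
>>> I1 = integrate(sin(L*x)/(pi*x)*exp(-x**2), (x, -oo, oo))
>>> I1
erf(L/2)
>>> limit(I1, L, oo)  # recovers phi(0) = 1
1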

3.17. Variations and Functional Derivatives

Variations and functional derivatives are generalizations of differentials and partial derivatives to functionals. It is important to master this subject just like regular differentials/derivatives in calculus.

3.17.1. Functions of One Variable

Let’s first review differentials and derivatives of functions of one variable. We will use an approach that directly generalizes to multivariable functions and functionals. The differential \d f is defined as:

\d f
    \equiv \lim_{\varepsilon\to0} {f(x+\varepsilon h)
        -f(x)\over\varepsilon}
    = a h

The last equality follows from the fact that the limit is a linear function of h:

\lim_{\varepsilon\to0} {f(x+\varepsilon h) -f(x)\over\varepsilon}
    = \lim_{\eta\to0} {f(x+\eta) -f(x)\over\left(\eta\over h\right)}
    = \left(\lim_{\eta\to0} {f(x+\eta) -f(x)\over\eta}\right) h

Where we used the substitution \eta = \varepsilon h. We define the derivative {\d f\over \d x} as:

{\d f\over \d x} = a

To get a formula for {\d f\over \d x}, we set h=1 and get:

{\d f\over \d x} = a = a \cdot 1
    = \lim_{\varepsilon\to0} {f(x+\varepsilon \cdot 1)-f(x)\over\varepsilon}
    = \lim_{\varepsilon\to0} {f(x+\varepsilon)-f(x)\over\varepsilon}

Using the formulas above we get an equivalent expression for the differential:

\left.{\d\over\d\varepsilon}f(x+\varepsilon h) \right|_{\varepsilon=0}
    = \left.\lim_{\eta\to0} {f(x+(\varepsilon+\eta) h)
        -f(x+\varepsilon h)\over\eta}\right|_{\varepsilon=0} =

    = \lim_{\eta\to0} {f(x+\eta h) -f(x)\over\eta} =

    = \lim_{\varepsilon\to0} {f(x+\varepsilon h) -f(x)\over\varepsilon}

So we get a general formula (the analogy of which we will use later):

\d f
    \equiv\left.{\d\over\d\varepsilon}f(x+\varepsilon h)
        \right|_{\varepsilon=0}
    = \lim_{\varepsilon\to0} {f(x+\varepsilon h)
        -f(x)\over\varepsilon}
    = a h

The variable x can be treated as a function (a very simple one):

x = g(x)

So we define \d x as:

\d x \equiv \d g = {\d g\over\d x} h = h

As such, \d x can have two meanings: either \d x = h = x-x_0 (a finite change in the variable x) or a differential (if x depends on another variable, thanks to the chain rule everything will work). With this understanding, for all calculations, we only need the following two formulas — the definition of the differential (using a limit):

\d f = \lim_{\varepsilon\to0} {f(x+\varepsilon \d x) -f(x)\over\varepsilon}

and the definition of the derivative (using the differential):

\d f = {\d f\over \d x} \d x

where \d x is either a differential or a finite change in the variable x.

If for example x=p(y) is a function of y then in the above \d x is a differential and we get:

\d f = {\d f\over \d x} \d x = {\d f\over \d x}{\d x\over \d y} \d y

Thanks to the chain rule, this can also be written as:

\d f = {\d f\over \d y} \d y

and so the notation is consistent.
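
SymPy reproduces this definition verbatim for an arbitrary (undefined) function f, showing \d f = {\d f\over\d x}\,h (a small sketch):

>>> from sympy import symbols, Function, diff
>>> x, h, eps = symbols('x h epsilon')
>>> f = Function('f')
>>> diff(f(x + eps*h), eps).subs(eps, 0).doit()
h*Derivative(f(x), x)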

3.17.2. Functions of Several Variables

Let’s have \x=(x_1,x_2,\dots,x_N). The function f(\x) assigns a number to each \x. We define a differential of f in the direction of {\bf h} as:

\d f
    \equiv\left.{\d\over\d\varepsilon}f({\bf x}+\varepsilon{\bf h})
        \right|_{\varepsilon=0}
    = \lim_{\varepsilon\to0} {f({\bf x}+\varepsilon{\bf h})
        -f(\x)\over\varepsilon}
    = {\bf a} \cdot {\bf h}

The last equality follows from the fact that \left.{\d\over\d\varepsilon}f({\bf x}+\varepsilon{\bf h}) \right|_{\varepsilon=0} is a linear function of {\bf h}. We define the partial derivative {\partial
f\over\partial x_i} of f with respect to x_i as the i-th component of the vector {\bf a}:

{\bf a}
    \equiv \left({\partial f\over\partial x_1},
        {\partial f\over\partial x_2}, \dots,
        {\partial f\over\partial x_N}\right)
    \equiv \nabla f

This also gives a formula for computing {\partial f\over\partial x_i}: we set h_j=\delta_{ij} (i.e. {\bf h} is the unit vector in the i-th direction) and

{\partial f\over\partial x_i}
    = a_i
    = {\bf a}\cdot{\bf h}
    = \left.{\d\over\d\varepsilon} f(\x+\varepsilon(0,0,\dots,1,\dots,0))
        \right|_{\varepsilon=0} =

    = \lim_{\varepsilon\to0} {f(x_1,x_2,\dots,x_i+\varepsilon,\dots,x_N)
        -f(x_1,x_2,\dots,x_i,\dots,x_N) \over\varepsilon}

The usual way to define partial derivatives is to use the last formula as the definition; here this formula is a consequence of our definition in terms of the components of {\bf a}. Every variable can be treated as a function (a very simple one):

x_i=g(x_1,\dots,x_N)=\delta_{ij}x_j

and so we define

\d x_i\equiv\d g=\d(\delta_{ij}x_j)=h_i

and thus we write h_i=\d x_i and {\bf h}=\d\x and

\d f={\partial f\over\partial x_i}\d x_i = (\nabla f) \cdot \d {\bf x}

So \d{\bf x} has two meanings — it’s either {\bf h}={\bf x}-{\bf x}_0 (a finite change in the independent variable {\bf x}) or a differential, depending on the context. The above is a detailed explanation of why things are defined the way they are and what the exact meaning is. With this understanding, the only things that are actually needed for any calculations are the following – the definition of a differential:

\d f = \left.{\d\over\d\varepsilon}f({\bf x}+\varepsilon\d {\bf x})
        \right|_{\varepsilon=0}

Only a regular derivative (defined in the previous section) is needed for this definition. The definition of a partial derivative (and a gradient):

\d f={\partial f\over\partial x_i}\d x_i = (\nabla f) \cdot \d {\bf x}

And finally the understanding that \d{\bf x} means either {\bf h}={\bf x}-{\bf x}_0 or a differential depending on the context. That’s all there is to it.
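
The same computation in SymPy for two variables shows the gradient appearing automatically (a sketch analogous to the previous section):

>>> from sympy import symbols, Function, diff
>>> x, y, h1, h2, eps = symbols('x y h1 h2 epsilon')
>>> f = Function('f')
>>> diff(f(x + eps*h1, y + eps*h2), eps).subs(eps, 0).doit()
h1*Derivative(f(x, y), x) + h2*Derivative(f(x, y), y)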

3.17.3. Functionals

Let’s now define functional derivatives and variations. A functional F[f] assigns a number to each function f(x). The variation is defined as

\delta F[f]\equiv\left.{\d\over\d\varepsilon}F[f+\varepsilon h] \right|_{\varepsilon=0}=\lim_{\varepsilon\to0}{F[f+\varepsilon h]-F[f]\over\varepsilon}= \int a(x)h(x)\d x

We define {\delta F\over\delta f(x)} as

a(x)\equiv{\delta F\over\delta f(x)}

This also gives a formula for computing {\delta F\over\delta f(x)}: we set h(y)=\delta(x-y) and

(3.17.3.1){\delta F\over\delta f(x)}=a(x)=\int a(y)\delta(x-y)\d y= \left.{\d\over\d\varepsilon}F[f(y)+\varepsilon\delta(x-y)] \right|_{\varepsilon=0}=

   =\lim_{\varepsilon\to0} {F[f(y)+\varepsilon\delta(x-y)]-F[f(y)]\over\varepsilon}

Sometimes the functional derivative is defined using the last formula; here it just follows from our definition. Every function can be treated as a functional (although a very simple one):

f(x)=G[f]=\int f(y)\delta(x-y)\d y

and so we define

\delta f\equiv\delta G[f]= \left.{\d\over\d\varepsilon}G[f(x)+\varepsilon h(x)] \right|_{\varepsilon=0}= \left.{\d\over\d\varepsilon}(f(x)+\varepsilon h(x)) \right|_{\varepsilon=0}= h(x)

thus we write h=\delta f and

\delta F[f]=\int {\delta F\over\delta f(x)}\delta f(x)\d x

so \delta f has two meanings — it’s either h(x)=f(x) - f_0(x) (a finite change in the function f) or a variation of a functional, depending on the context. It is completely analogous to \d\x. Let’s summarize the only formulas needed in actual calculations – the definition of a variation (using a regular derivative):

(3.17.3.2)\delta F[f] = \left.{\d\over\d\varepsilon}F[f+\varepsilon \delta f]
    \right|_{\varepsilon=0}

the definition of the functional derivative:

\delta F[f]=\int {\delta F\over\delta f(x)} \delta f(x) \d x

and the understanding that \delta f means either h(x)=f(x) - f_0(x) or a variation. The last equation is the best way to calculate a functional derivative — apply the \delta variation until you get the integral into the form \int \big(\ \ \big) \delta f(x) \d x and then read off the functional derivative from the expression in the parentheses.

The correspondence between the finite and infinite dimensional case can be summarized using a functional F[n], function n({\bf x}) of continuous parameter {\bf x} (which can be a scalar or a vector) and its discretized version n_i \equiv n({\bf x}_i), together with a function F(n):

i \quad & \Longleftrightarrow \quad {\bf x} \\
n_i \quad & \Longleftrightarrow \quad n({\bf x}) \\
F(n_i) \quad & \Longleftrightarrow \quad F[n] \\
\d F = 0 \quad & \Longleftrightarrow \quad \delta F = 0 \\
{\partial F\over\partial n_i} = 0 \quad & \Longleftrightarrow \quad
    {\delta F \over \delta n({\bf x})} = 0 \\

In other words, the basic difference is that the continuous parameter {\bf x} has been replaced with a discrete parameter i. Then the function n({\bf x}) becomes a vector of values n_i, variation becomes a differential and functional derivative becomes a partial derivative. To minimize a functional, one must search for zero functional derivative, while in the discrete case one searches for zero partial derivatives (gradient).

We now extend the \delta-variation notation to any function g that contains the function f(x) being varied: just replace f by f+\epsilon \delta f and apply {\d\over\d\epsilon} to the whole g, for example (here g=\partial_\mu\phi and f=\phi):

\delta\partial_\mu\phi
    = \left.{\d\over\d\varepsilon}\partial_\mu(\phi+\varepsilon \delta\phi) \right|_{\varepsilon=0}
    = \partial_\mu\left.{\d\over\d\varepsilon}(\phi+\varepsilon \delta\phi) \right|_{\varepsilon=0}
    =\partial_\mu \delta\phi

As such, the F in (3.17.3.2) can be either a functional or any expression that contains the function f. This notation allows very convenient computations, as shown in the following examples.

First, when computing a variation of some integral, we can interchange \delta and \int:

F[f]=\int K(x) f(x) \d x

\delta F=\delta \int K(x) f(x) \d x = \left.{\d\over\d\varepsilon}\int K(x) (f+\varepsilon h)\d x\right|_{\varepsilon=0}= \left.\int{\d\over\d\varepsilon} (K(x) (f+\varepsilon h))\d x\right|_{\varepsilon=0}=

=\int\delta(K(x) f(x))\d x

In the expression \delta(K(x) f(x)) we must understand from the context if we are treating it as a functional of f or K. In our case it’s a functional of f, so we have \delta(K f)=K\delta f.

The second very important point arises when taking the variation of an expression like:

\delta \int f(t_1)f(t_2) \d t_1 \d t_2 =

= \int \delta (f(t_1)f(t_2)) \d t_1 \d t_2 =

= \int \left.{\d\over\d\varepsilon}(f(t_1)+\varepsilon\delta f(t_1))
    (f(t_2)+\varepsilon\delta f(t_2)) \right|_{\varepsilon=0}
    \d t_1 \d t_2 =

= \int (\delta f(t_1))f(t_2)+f(t_1)(\delta f(t_2)) \d t_1 \d t_2 =

= \int (\delta f(t_1))f(t_2)+f(t_2)(\delta f(t_1)) \d t_1 \d t_2 =

= 2 \int f(t_2) \delta f(t_1) \d t_1 \d t_2

then when f is replaced by f+\epsilon \delta f, one has to keep track of the independent variable, so f(t_1) gets replaced by f(t_1)+\epsilon \delta f(t_1) and f(t_2) gets replaced by f(t_2)+\epsilon \delta f(t_2). Thus the two variations \delta f(t_1) and \delta f(t_2) are different (independent). (In the penultimate step above we relabeled t_1\leftrightarrow t_2 in the second term.) If there is only one independent variable, one can simply write \delta f, as it is clear what the independent variable is. This is analogous to using differentials, e.g. \d (f(x) f(y)) = (\d f(x)) f(y) + f(x) \d f(y)
=f'(x) \d x\, f(y) + f(x) f'(y) \d y, where one has to keep track of the independent variable for each \d f as well.

Another useful formula is differentiation of a functional F[\psi(\theta)] where the function \psi(\theta) depends on a parameter \theta:

{\d F[\psi(\theta)] \over \d \theta}
=\left.{\d\over\d\epsilon} F[\psi(\theta+\epsilon)] \right|_{\epsilon=0}
=\left.{\d\over\d\epsilon} F\left[\psi(\theta)+\epsilon {\d \psi(\theta)
    \over \d \theta} + O(\epsilon^2)\right] \right|_{\epsilon=0}
=\left.{\d\over\d\epsilon} F\left[\psi(\theta)+\epsilon {\d \psi(\theta)
    \over \d \theta}\right] \right|_{\epsilon=0}
= \int {\delta F[\psi] \over \delta \psi} {\d\psi(\theta)\over\d\theta}\d x

where we used the definition of a variation and a functional derivative with \delta\psi={\d\psi(\theta) \over\d\theta}:

\delta F = \left.{\d\over\d\epsilon}
    F[\psi+\epsilon\delta\psi]\right|_{\epsilon=0}
= \int{\delta F\over\delta\psi}\delta\psi\d x

3.17.4. Examples

Some of these examples show how to use the delta function definition of the functional derivative in equation (3.17.3.1). However, the simplest way is to calculate the variation first and then read off the functional derivative from the result, as explained above.

{\delta\over\delta f(t)}\int\d t'f(t')g(t')= \left.{\d\over\d\varepsilon}\int\d t'(f(t')+\varepsilon\delta(t-t'))g(t') \right|_{\varepsilon=0}=g(t)

\delta \int \d t f(t)g(t) = \left.{\d\over\d\varepsilon}\int\d
    t(f(t)+\varepsilon \delta f(t))g(t) \right|_{\varepsilon=0}
 = \int g(t) (\delta f(t)) \d t

{\delta f(t')\over\delta f(t)}= \left.{\d\over\d\varepsilon}(f(t')+\varepsilon\delta(t-t')) \right|_{\varepsilon=0}=\delta(t-t')

{\delta f(t_1)f(t_2)\over\delta f(t)}= \left.{\d\over\d\varepsilon}(f(t_1)+\varepsilon\delta(t-t_1)) (f(t_2)+\varepsilon\delta(t-t_2)) \right|_{\varepsilon=0}=\delta(t-t_1)f(t_2)+f(t_1)\delta(t-t_2)

The next example shows that when taking variation of an expression containing the function f of different independent variables, one has to keep track of these variables in the variations:

\delta (f(t_1)f(t_2))
    = \left.{\d\over\d\varepsilon}(f(t_1)+\varepsilon\delta f(t_1))
        (f(t_2)+\varepsilon\delta f(t_2)) \right|_{\varepsilon=0}
    =(\delta f(t_1))f(t_2)+f(t_1)(\delta f(t_2))

{\delta\over\delta f(t)}\half\int\d t_1\d t_2K(t_1,t_2)f(t_1)f(t_2)= \half\int\d t_1\d t_2K(t_1,t_2){\delta f(t_1)f(t_2)\over\delta f(t)}=

=\half\left(\int\d t_1 K(t_1,t)f(t_1)+\int\d t_2 K(t,t_2)f(t_2)\right) =\int\d t_2 K(t,t_2)f(t_2)

\delta \half\int\d t_1\d t_2K(t_1,t_2)f(t_1)f(t_2)= \half\int\d t_1\d
t_2K(t_1,t_2)\,\delta(f(t_1)f(t_2))=

= \half\int\d t_1\d t_2K(t_1,t_2) \left((\delta f(t_1))f(t_2)
    +f(t_1)(\delta f(t_2))\right)=

= \int\d t_1\d t_2 K(t_1,t_2) f(t_2) (\delta f(t_1))

The last equality follows from K(t_1,t_2)=K(t_2,t_1) (any antisymmetric part of K would not contribute to the symmetric integration).

Another example is the derivation of Euler-Lagrange equations for the Lagrangian density \L=\L(\eta_\rho, \partial_\nu \eta_\rho, x^\nu):

0 = \delta S = \delta \int \L \,\d^4x^\mu
= \int \delta \L \,\d^4x^\mu
= \int { \partial \L\over\partial \eta_\rho}\delta\eta_\rho
    +
    { \partial \L\over\partial (\partial_\nu \eta_\rho)}
    \delta(\partial_\nu\eta_\rho)
    \,\d^4x^\mu
=

= \int { \partial \L\over\partial \eta_\rho}\delta\eta_\rho
    +
    { \partial \L\over\partial (\partial_\nu \eta_\rho)}
    \partial_\nu(\delta\eta_\rho)
    \,\d^4x^\mu
=

= \int { \partial \L\over\partial \eta_\rho}\delta\eta_\rho
    -
    \partial_\nu\left(
    { \partial \L\over\partial (\partial_\nu \eta_\rho)}
    \right)
    \delta\eta_\rho
    \,\d^4x^\mu
    +\int \partial_\nu \left(
    { \partial \L\over\partial (\partial_\nu \eta_\rho)}
    \delta\eta_\rho
    \right)
    \,\d^4x^\mu
=

= \int \left[{ \partial \L\over\partial \eta_\rho}
    -
    \partial_\nu\left(
    { \partial \L\over\partial (\partial_\nu \eta_\rho)}
    \right)
    \right]
    \delta\eta_\rho
    \,\d^4x^\mu

We can also write it using a functional derivative {\delta
S\over\delta\eta_\rho} as:

{\delta S\over\delta\eta_\rho}
    = { \partial \L\over\partial \eta_\rho}
    -
    \partial_\nu\left(
    { \partial \L\over\partial (\partial_\nu \eta_\rho)}
    \right)

Another example:

{\delta\over\delta f(t)}\int f^3(x)\d x= \left.{\d\over\d\varepsilon}\int(f(x)+\varepsilon\delta(x-t))^3\d x \right|_{\varepsilon=0}=

=\left.\int3(f(x)+\varepsilon\delta(x-t))^2\delta(x-t)\d x \right|_{\varepsilon=0}=\int3f^2(x)\delta(x-t)\d x=3f^2(t)

One might think that the above calculation is incorrect, because \delta^2(x-t) is undefined. In case of such problems the above notation automatically implies working with some sequence \delta_\alpha(x) \to \delta(x) (for example \delta_\alpha(x) =
{1\over\pi x}\sin(\alpha x)) and taking the limit \alpha\to\infty:

{\delta\over\delta f(t)}\int f^3(x)\d x= \left.\lim_{\alpha\to\infty}{\d\over\d\varepsilon}\int(f(x)+\varepsilon\delta_\alpha(x-t))^3\d x \right|_{\varepsilon=0}=

=\left.\lim_{\alpha\to\infty}\int3(f(x)+\varepsilon\delta_\alpha(x-t))^2\delta_\alpha(x-t)\d x \right|_{\varepsilon=0}=\lim_{\alpha\to\infty}\int3f^2(x)\delta_\alpha(x-t)\d x=

(3.17.4.1)=\int3f^2(x)\lim_{\alpha\to\infty}\delta_\alpha(x-t)\d x= \int3f^2(x)\delta(x-t)\d x=3f^2(t)

As you can see, we got the same result, with the same rigor, but using an obfuscating notation. That’s why such obvious manipulations with \delta_\alpha are tacitly implied. However, the best method is to first calculate the variation:

\delta \int f^3(x) \d x = \int \delta f^3(x) \d x
=\int 3 f^2(x) \delta f(x) \d x

and immediately read off the functional derivative:

{\delta\over\delta f(t)} \int f^3(x) \d x = 3 f^2(t)
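
This “variation first” recipe is easy to check with SymPy by working on the integrand directly (a minimal sketch; h(x) plays the role of \delta f(x)):

>>> from sympy import symbols, Function, diff
>>> x, eps = symbols('x epsilon')
>>> f, h = Function('f'), Function('h')
>>> diff((f(x) + eps*h(x))**3, eps).subs(eps, 0)  # integrand of delta F
3*f(x)**2*h(x)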

Another example with a metric as a function of coordinates g_{\mu\nu} = g_{\mu\nu}(x^\mu):

\delta g_{\mu\nu}
    = \delta g_{\mu\nu}(x^\mu)
    = \left.{\d\over\d\varepsilon} g_{\mu\nu}(x^\mu+\varepsilon
            (\delta x^\mu)) \right|_{\varepsilon=0}
    = \left.{\d\over\d\varepsilon}
            (x^\sigma+\varepsilon(\delta x^\sigma)) \right|_{\varepsilon=0}
        \partial_\sigma g_{\mu\nu}
    = (\delta x^\sigma) \partial_\sigma g_{\mu\nu}

And an example of varying with respect to a metric:

\delta \sqrt{ |\det g_{\mu\nu}| }
    =\sqrt{ |\det g_{\mu\nu}| }\, \delta \log \sqrt{ |\det g_{\mu\nu}| }
    =\half \sqrt{ |\det g_{\mu\nu}| }\, \delta \log |\det g_{\mu\nu}| =

    =\half \sqrt{ |\det g_{\mu\nu}| }\, \delta \Tr \log g_{\mu\nu}
    =\half \sqrt{ |\det g_{\mu\nu}| }\, \Tr \delta \log g_{\mu\nu} =

    =\half \sqrt{ |\det g_{\mu\nu}| }\, \Tr g^{\mu\nu} \delta g_{\mu\nu}
    =\half \sqrt{ |\det g_{\mu\nu}| }\, g^{\mu\nu} \delta g_{\mu\nu} =

    =-\half \sqrt{ |\det g_{\mu\nu}| }\, g_{\mu\nu} \delta g^{\mu\nu}

Another example (varying energy functional):

E[\rho] = 4\pi\int {a \rho(r)\over b + r_s(r)} r^2 \d r

r_s(r) = \left(3\over 4\pi (-\rho)\right)^{1\over 3}

{\d r_s\over\d \rho} =
        {1\over 3}\left(3\over 4\pi (-\rho)\right)^{-{2\over 3}}
        {3\over 4\pi \rho^2}
    =
        -{1\over 3\rho}\left(3\over 4\pi (-\rho)\right)^{1\over 3}
    =
        -{r_s\over 3\rho}

\delta E[\rho] = 4\pi \delta \int {a \rho\over b + r_s} r^2 \d r =

    = 4\pi \int\left({a \delta \rho\over b + r_s}
        - {a \rho\over (b + r_s)^2 }\delta r_s\right) r^2 \d r =

    = 4\pi \int\left({a \delta \rho\over b + r_s}
        - {a \rho\over (b + r_s)^2 }\left(-{r_s\over 3\rho}\right)
          \delta\rho\right)
          r^2 \d r =

    = 4\pi \int\left({a \over b + r_s}
        +{1\over3} {a r_s\over (b + r_s)^2 }\right) (\delta\rho) r^2 \d r

{\delta E[\rho]\over\delta\rho}
    = 4\pi r^2 \left({a \over b + r_s}
        +{1\over3} {a r_s\over (b + r_s)^2 }\right)

Another example (Hartree energy):

E[n] = \half \int {n({\bf r}') n({\bf r}'')\over
    | {\bf r}' - {\bf r}''| } \d^3 r' \d^3 r''

we calculate the variation first:

\delta E[n] = \half \delta \int {n({\bf r}') n({\bf r}'')\over
    | {\bf r}' - {\bf r}''| } \d^3 r' \d^3 r'' =

    = \half \int { (\delta n({\bf r}')) n({\bf r}'')
        + n({\bf r}') (\delta n({\bf r}''))\over
    | {\bf r}' - {\bf r}''| } \d^3 r' \d^3 r'' =

    = \int { n({\bf r}') \over | {\bf r}' - {\bf r}''| }
        (\delta n({\bf r}'')) \d^3 r' \d^3 r'' =

    = \int { n({\bf r}') \over | {\bf r} - {\bf r}'| }
        (\delta n({\bf r})) \d^3 r' \d^3 r

so the functional derivative is:

{\delta E[n]\over \delta n({\bf r})}
    = \int { n({\bf r}') \over | {\bf r} - {\bf r}'| } \d^3 r'

Another example (functional with gradients):

F[n] = \int h(n) {| \nabla n|^2 \over n} \d^3 r

the variation is:

\delta F[n]
    = \int \delta \left( h(n) {| \nabla n|^2 \over n} \right) \d^3 r =

    = \int {\d h \over \d n} {| \nabla n|^2 \over n} \delta n
        + h(n)\left(2n\nabla n \cdot \nabla \delta n - | \nabla n|^2
          \delta n
          \over n^2\right) \d^3 r =

    = \int \left({\d h \over \d n} - {h(n)\over n}\right)
        {| \nabla n|^2 \over n}
        \delta n
        + 2 h(n){\nabla n \cdot \nabla \delta n \over n} \d^3 r =

    = \int \left({\d h \over \d n} - {h(n)\over n} \right)
        {| \nabla n|^2 \over n}
        \delta n
        - 2 \nabla\cdot\left(h(n){\nabla n \over n}\right)\delta n\ \d^3 r =

    = \int \left({\d h \over \d n} - {h(n)\over n} \right)
        {| \nabla n|^2 \over n}
        \delta n
        - 2 \left({\d h \over \d n}{| \nabla n|^2 \over n}
          +h(n){\nabla^2 n \over n}
          -h(n){| \nabla n|^2 \over n^2}
          \right)\delta n\ \d^3 r =

    = \int \left[ \left({h(n)\over n} - {\d h \over \d n}\right)
        {| \nabla n|^2 \over n}
        - 2 h(n){\nabla^2 n \over n}\right]
          \delta n\ \d^3 r

from which we read off the functional derivative:

{\delta F[n] \over \delta n({\bf r})}
    = \left({h(n)\over n} - {\d h \over \d n}\right)
        {| \nabla n|^2 \over n}
        - 2 h(n){\nabla^2 n \over n}

3.18. Dirac Notation

The Dirac notation allows a very compact and powerful way of writing equations that describe a function expansion into a basis, both discrete (e.g. a Fourier series expansion) and continuous (e.g. a Fourier transform) and related things. The notation is designed so that it is very easy to remember and it just guides you to write the correct equation.

Let’s have a function f(x). We define

\begin{eqnarray*} \braket{x|f}&\equiv& f(x) \\ \braket{x'|f}&\equiv& f(x') \\ \braket{x'|x}&\equiv&\delta(x'-x) \\ \int\ket{x}\bra{x}\d x&\equiv&\one \\ \end{eqnarray*}

The following equation

f(x')=\int\delta(x'-x)f(x)\d x

then becomes

\braket{x'|f}=\int\braket{x'|x}\braket{x|f}\d x

and thus we can interpret \ket{f} as a vector, \ket{x} as a basis and \braket{x|f} as the coefficients in the basis expansion:

\ket{f}=\one\ket{f}=\int\ket{x}\bra{x}\d x\ket{f}= \int\ket{x}\braket{x|f}\d x

That’s all there is to it. Take the above rules as the operational definition of the Dirac notation. It’s like with the delta function – written alone it doesn’t have any meaning, but there are clear and unambiguous rules to convert any expression with \delta to an expression which even mathematicians understand (i.e. integrating, applying test functions and using other relations to get rid of all \delta symbols in the expression – but the result is usually much more complicated than the original formula). It’s the same with the ket \ket{f}: written alone it doesn’t have any meaning, but you can always use the above rules to get an expression that makes sense to everyone (i.e. attaching any bra to the left and rewriting all brackets \braket{a|b} with their equivalent expressions) – but it will be more complex, harder to remember and – that is important – less general.

Now, let’s look at the spherical harmonics:

Y_{lm}({\bf\hat r})\equiv\braket{{\bf\hat r}|lm}

on the unit sphere, we have

\int\ket{\bf\hat r}\bra{\bf\hat r}\d{\bf\hat r}= \int\ket{\bf\hat r}\bra{\bf\hat r}\d\Omega=\one

\delta({\bf\hat r}-{\bf\hat r'})=\braket{{\bf\hat r}|{\bf\hat r'}}

thus

\int_0^{2\pi}\int_0^{\pi} Y_{lm}(\theta,\phi)\,Y^*_{l'm'}(\theta,\phi)\sin\theta\,\d\theta\,\d\phi = \int\braket{l'm'|{\bf\hat r}}\braket{{\bf\hat r}|lm}\d\Omega= \braket{l'm'|lm}

and from (3.30.1) we get

\braket{l'm'|lm}=\delta_{mm'}\delta_{ll'}

now

\sum_{lm}Y_{lm}(\theta,\phi)Y_{lm}^*(\theta',\phi')= \sum_{lm}\braket{{\bf\hat r}|lm}\braket{lm|{\bf\hat r'}}

from (3.30.3) we get

\sum_{lm}\braket{{\bf\hat r}|lm}\braket{lm|{\bf\hat r'}}= \braket{{\bf\hat r}|{\bf\hat r'}}

so we have

\sum_{lm}\ket{lm}\bra{lm}=\one

so \ket{lm} forms an orthonormal basis. Any function defined on the sphere f({\bf\hat r}) can be written using this basis:

f({\bf\hat r}) =\braket{{\bf\hat r}|f} =\sum_{lm}\braket{{\bf\hat r}|lm}\braket{lm|f} =\sum_{lm}Y_{lm}({\bf\hat r})f_{lm}

where

f_{lm}=\braket{lm|f}=\int\braket{lm|{\bf\hat r}}\braket{{\bf\hat r}|f}\d\Omega =\int Y_{lm}^*({\bf\hat r}) f({\bf\hat r})\d\Omega

If we have a function f({\bf r}) in 3D, we can write it as a function of \rho and {\bf\hat r} and expand only with respect to the variable {\bf\hat r}:

f({\bf r})=f(\rho{\bf\hat r})\equiv g(\rho,{\bf\hat r})= \sum_{lm}Y_{lm}({\bf\hat r})g_{lm}(\rho)

In Dirac notation we are doing the following: we decompose the space into the angular and radial part

\ket{{\bf r}}=\ket{{\bf\hat r}}\otimes\ket{\rho} \equiv\ket{{\bf\hat r}}\ket{\rho}

and write

f({\bf r})=\braket{{\bf r}|f}=\bra{{\bf\hat r}}\braket{\rho|f}= \sum_{lm}Y_{lm}({\bf\hat r})\bra{lm}\braket{\rho|f}

where

\bra{lm}\braket{\rho|f}= \int\braket{lm|{\bf\hat r}}\bra{{\bf\hat r}}\braket{\rho|f}\d\Omega =\int Y_{lm}^*({\bf\hat r}) f({\bf r})\d\Omega

Let’s calculate \braket{\rho|\rho'}

\braket{{\bf r}|{\bf r'}}=\bra{\bf\hat r}\braket{\rho|\rho'}\ket{{\bf\hat r'}} =\braket{{\bf\hat r}|{\bf\hat r'}}\braket{\rho|\rho'}

so

\braket{\rho|\rho'} ={\braket{{\bf r}|{\bf r'}}\over\braket{{\bf\hat r}|{\bf\hat r'}}} ={\delta(\rho-\rho')\over\rho^2}

We must stress that \ket{lm} only acts in the \ket{{\bf\hat r}} space (not the \ket\rho space) which means that

\braket{{\bf r}|lm} =\bra{\bf\hat r}\braket{\rho|lm} =\braket{{\bf\hat r}|lm}\bra{\rho} =Y_{lm}({\bf\hat r})\bra{\rho}

and an operator V acting on \ket{lm} leaves the \ket\rho part intact. Similarly,

\sum_{lm} \ket{lm}\bra{lm}=\one

is a unity in the \ket{\bf\hat r} space only (i.e. on the unit sphere).

Let’s rewrite the equation (3.30.4):

\sum_m\braket{{\bf\hat r}|lm}\braket{lm|{\bf\hat r'}}= {2l+1\over 4\pi} \braket{{\bf\hat r}\cdot{\bf\hat r'}|P_l}

Using the completeness relation (3.29.1):

\sum_l {2l+1\over2}\braket{x'|P_l}\braket{P_l|x}=\braket{x'|x}

\sum_l \ket{P_l}{2l+1\over2}\bra{P_l}=\one

we can now derive a very important formula true for every function f({\bf\hat r}\cdot{\bf\hat r'}):

f({\bf\hat r}\cdot{\bf\hat r'})=\braket{{\bf\hat r}\cdot{\bf\hat r'}|f}= \sum_l \braket{{\bf\hat r}\cdot{\bf\hat r'}|P_l}{2l+1\over2}\braket{P_l|f}= \sum_{lm}\braket{{\bf\hat r}|lm}\braket{lm|{\bf\hat r'}}{4\pi\over2l+1}\,{2l+1\over2} \braket{P_l|f}=

=\sum_{lm}\braket{{\bf\hat r}|lm}f_l \braket{lm|{\bf\hat r'}}

where

f_l=2\pi\braket{P_l|f} =2\pi\int_{-1}^1 \braket{P_l|x}\braket{x|f}\d x =2\pi\int_{-1}^1 P_l(x)f(x)\d x

or written explicitly

(3.18.1)f({\bf\hat r}\cdot{\bf\hat r'})= \sum_{l=0}^\infty\sum_{m=-l}^l Y_{lm}({\bf\hat r}) f_l Y_{lm}^*({\bf\hat r'})

3.19. Homogeneous Functions (Euler’s Theorem)

A function of several variables f(x_1, x_2, \dots) \equiv f(x_i) is homogeneous of degree k if

f(\lambda x_i) = \lambda^k f(x_i)

By differentiating with respect to \lambda:

x_i\, {\partial f(\lambda x_i)\over\partial (\lambda x_i)} = k\lambda^{k-1} f(x_i)

and setting \lambda=1 we get the so-called Euler equation:

x_i {\partial f(x_i)\over\partial x_i} = k f(x_i)

in 3D this can also be written as:

{\bf x} \cdot \nabla f({\bf x}) = k f({\bf x})

3.19.1. Example 1

The function f(x, y, z) = {xy\over z} is homogeneous of degree 1, because:

f(\lambda x, \lambda y, \lambda z) = {\lambda x\lambda y\over \lambda z}
= \lambda {xy\over z}
= \lambda f(x, y, z)

and the Euler equation is:

x{\partial f\over\partial x} +
y{\partial f\over\partial y} +
z{\partial f\over\partial z}
=
f

or

x{y\over z} +
y{x\over z} +
z\left(-{xy\over z^2}\right)
={xy\over z}

Which is true.

3.19.2. Example 2

The function V(r) = -{Z \over r} is homogeneous of degree -1, because:

V(\lambda r) = -{Z\over \lambda r} = \lambda^{-1} V(r)

and the Euler equation is:

r{\d V\over\d r} = -V

or

r{Z\over r^2} = -\left(-{Z\over r}\right)

Which is true.
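
Both examples can be verified mechanically with SymPy (a quick sketch):

>>> from sympy import symbols, diff, simplify
>>> x, y, z = symbols('x y z')
>>> f = x*y/z
>>> simplify(x*diff(f, x) + y*diff(f, y) + z*diff(f, z) - f)  # k = 1
0
>>> r, Z = symbols('r Z', positive=True)
>>> V = -Z/r
>>> simplify(r*diff(V, r) + V)  # k = -1
0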

3.20. Green Functions

Green functions are an excellent tool for working with solutions to any ODE or PDE. In this text we explain how they work and then show how one can calculate them using the finite element method (FEM).

3.20.1. Introduction

Let’s put any ODE or PDE in the form:

(3.20.1.1)Lu(x)=f(x)

Here L is a differential operator and x can have any dimension, e.g. 1D (ODE), 2D, 3D or more (PDE). Then we can express the solution as

(3.20.1.2)u(x) = L^{-1}f(x) = \int G(x, x')f(x')\d x'

where G(x, x') is a Green function, that needs to satisfy the equation:

(3.20.1.3)LG(x, x')=\delta(x-x')

Remember that L acts on x only, so we can check that (3.20.1.2) indeed solves the PDE (3.20.1.1):

Lu(x)
= L\int G(x, x')f(x')\d x'
= \int LG(x, x')f(x')\d x'
= \int \delta(x-x')f(x')\d x'
= f(x)

3.20.2. Boundary Conditions

The equation (3.20.1.3) doesn’t determine the Green function uniquely, because one can add to it any solution of the homogeneous equation Lu(x)=0. We can use this freedom to solve (3.20.1.3) for any boundary condition. So we prescribe a boundary condition and find the Green function (by solving (3.20.1.3)) that satisfies the boundary condition. It can be shown that u(x) determined from (3.20.1.2) then also satisfies the same boundary condition.

3.20.3. Symmetry

We write the equation for Green functions at two different points x_1 and x_2:

L G(x, x_1) = \delta(x-x_1)

L G(x, x_2) = \delta(x-x_2)

and multiply the first equation by G(x, x_2), second by G(x, x_1):

G(x, x_2) L G(x, x_1) = \delta(x-x_1) G(x, x_2)

G(x, x_1) L G(x, x_2) = \delta(x-x_2) G(x, x_1)

subtract them and integrate over x:

G(x, x_2) L G(x, x_1) - G(x, x_1) L G(x, x_2)
    = \delta(x-x_1) G(x, x_2) - \delta(x-x_2) G(x, x_1)

\int \left(G(x, x_2) L G(x, x_1) - G(x, x_1) L G(x, x_2) \right) \d x
    = \int \left(\delta(x-x_1) G(x, x_2) - \delta(x-x_2) G(x, x_1)
        \right)\d x

\int \left(G(x, x_2) L G(x, x_1) - G(x, x_1) L G(x, x_2) \right) \d x
    = G(x_1, x_2) - G(x_2, x_1)

Assuming that the operator L is Hermitian, we get:

\int \left((L G(x, x_2)) G(x, x_1) - G(x, x_1) L G(x, x_2) \right) \d x
    = G(x_1, x_2) - G(x_2, x_1)

0 = G(x_1, x_2) - G(x_2, x_1)

So the Green function is symmetric for Hermitian operators L.

3.20.4. Examples

Poisson Equation in 1D

Poisson equation:

-{\d^2\over\d x^2} u(x) = f(x)

We calculate the Green function using the Fourier transform:

-{\partial^2\over\partial x^2} G(x, x') = \delta(x-x')

-(ik)^2 \tilde G(k, x') = {e^{ikx'}\over\sqrt{2\pi}}

\tilde G(k, x') = {e^{ikx'}\over\sqrt{2\pi}\,k^2}

G(x, x') = -\half (x-x') \sign(x-x')
    = -\half (x-x') (2H(x-x')-1)
    = H(x-x') (x'-x) +\half(x-x')

Check:

{\partial\over\partial x} G(x, x')
    = \delta(x-x') (x'-x) + H(x-x') (-1) + \half
    = - H(x-x') + \half

{\partial^2\over\partial x^2} G(x, x')
    = - \delta(x-x')

Then:

u(x) = \int G(x, x') f(x') \d x'
    = \int \left(H(x-x') (x'-x) +\half(x-x')\right) f(x') \d x'

The Green function can also be written using x_<=\min(x, x') and x_> = \max(x, x'):

G(x, x') = H(x-x') (x'-x) +\half(x-x')
    = \half(x_< - x_>)

Radial Poisson Equation

Let’s write r_> and r_< using the Heaviside step function:

r_> = \max(r, r')
    =
\begin{cases}
r&\quad\mbox{for $r>r'$}\cr
r'&\quad\mbox{for $r<r'$}\cr
\end{cases}
= H(r-r') r + H(r'-r)r' =

= H(r-r') r + (1-H(r-r'))r'
= H(r-r') (r-r') + r'

and:

r_< = \min(r, r')
    =
\begin{cases}
r'&\quad\mbox{for $r>r'$}\cr
r&\quad\mbox{for $r<r'$}\cr
\end{cases}
= H(r-r') r' + H(r'-r)r =

= H(r-r') r' + (1-H(r-r'))r
= H(r-r') (r'-r) + r

Then we can differentiate:

{\partial\over\partial r} r_>
    = \delta(r-r')(r-r') + H(r-r')
    = H(r-r')

{\partial^2\over\partial r^2} r_>
    = \delta(r-r')

{\partial\over\partial r} r_<
    = \delta(r-r')(r'-r) - H(r-r') + 1
    = 1-H(r-r') = H(r'-r)

{\partial^2\over\partial r^2} r_<
    = -\delta(r-r')

Given:

(3.20.4.1)u(r) = \int_0^\infty {f(r') r'^2 \over r_>} \d r'

The Green function is

G(r, r') = {r'^2\over r_>}

Let’s differentiate:

{\partial\over\partial r} G(r, r') = -{r'^2\over r_>^2}
        {\partial\over\partial r} r_>
= -{r'^2\over r_>^2} H(r-r')
= -{r'^2\over r^2} H(r-r')

and

{\partial^2\over\partial r^2} G(r, r') = +{2\over r}{r'^2\over r^2}
    H(r-r') -{r'^2\over r^2} \delta(r-r')

So we get:

-{\partial^2\over\partial r^2} G(r, r')
    -{2\over r}{\partial\over\partial r} G(r, r')
    =-{2\over r}{r'^2\over r^2} H(r-r') +{r'^2\over r^2} \delta(r-r')
        +{2\over r}{r'^2\over r^2} H(r-r')
    ={r'^2\over r^2} \delta(r-r') = \delta(r-r')

So u(r) from (3.20.4.1) is a solution to the radial Poisson equation:

-{\d^2\over\d r^2} u(r) - {2\over r}{\d\over\d r}u(r) = f(r)

Helmholtz Equation in 1D

\left({\d^2\over\d x^2}+1\right)u(x)=f(x)

\left({\d^2\over\d x^2}+1\right)G(x, x')=\delta(x-x')

with boundary conditions u(0)=u({\pi\over2})=0. We use the Fourier transform:

\left((ik)^2+1\right)\tilde G(k, x')={e^{ikx'}\over\sqrt{2\pi}}

\tilde G(k, x')={e^{ikx'}\over \sqrt{2\pi} (1-k^2)}

G(x, x') = \half\sign(x-x')\sin(x-x')

Check:

{\partial\over\partial x} G(x, x') = \delta(x-x')\sin(x-x')
    +\half\sign(x-x')\cos(x-x') =

    = \half\sign(x-x')\cos(x-x')

{\partial^2\over\partial x^2} G(x, x') = \delta(x-x')\cos(x-x')
    -\half\sign(x-x')\sin(x-x') =

    = -\half\sign(x-x')\sin(x-x') + \delta(x-x')

{\partial^2\over\partial x^2} G(x, x')
+ G(x, x') = \delta(x-x')

The general solution of the homogeneous equation is:

u(x) = C_1 \sin x + C_2 \cos x

so the general Green function is:

G(x, x') = \half\sign(x-x')\sin(x-x') + C_1 \sin (x+x')
    + C_2 \cos (x+x')

Satisfying the boundary conditions (for all x' \ne 0):

G(0, x') = G({\pi\over 2}, x') = 0

we get:

C_1 & = -\half \\
C_2 & = 0

and:

G(x, x') = \half\sign(x-x')\sin(x-x') -\half \sin (x+x') =

=-H(x'-x)\sin x\cos x' - H(x - x') \cos x \sin x' =

=
\begin{cases}
-\sin x\cos x'&\quad x<x'\cr
-\cos x\sin x'&\quad x>x'\cr
\end{cases}
=-\sin x_< \cos x_>

and

u(x) = \int G(x, x')f(x')\d x'
=-\cos x\int_0^xf(x')\sin x'\d x'
-\sin x\int_x^{\pi\over2}f(x')\cos x'\d x'

To show that this really works, let’s take for example f(x)=3\sin2x. Then

u(x)
=-\cos x\int_0^x3\sin 2x'\sin x'\d x'
-\sin x\int_x^{\pi\over2}3\sin 2x'\cos x'\d x'

We can use SymPy to evaluate the integrals:

In [1]: from sympy import sin, cos, integrate, simplify, trigsimp, pi

In [2]: from sympy.abc import x, y

In [3]: u = -cos(x)*integrate(3*sin(2*y)*sin(y), (y, 0, x)) - \
    sin(x)*integrate(3*sin(2*y)*cos(y), (y, x, pi/2))

In [4]: u
Out[4]:
-(cos(x)*sin(2*x) - 2*cos(2*x)*sin(x))*cos(x) - (sin(x)*sin(2*x)
    + 2*cos(x)*cos(2*x))*sin(x)

In [5]: simplify(u)
Out[5]: -cos(x)**2*sin(2*x) - sin(x)**2*sin(2*x)

In [6]: trigsimp(_)
Out[6]: -sin(2*x)

And we get

u(x)=-\sin 2x

We can easily check that u''+u=3\sin2x:

>>> from sympy import sin, symbols
>>> x = symbols('x')
>>> u = -sin(2*x)
>>> u.diff(x, 2) + u
3*sin(2*x)

and since f(x) = 3\sin2x, we have verified that u(x)=-\sin 2x is the correct solution.

Poisson Equation in 2D

Let {\bf x}=(x, y) and we want to solve:

\nabla^2u({\bf x})=f({\bf x})

So we have:

\nabla^2G({\bf x}, {\bf x'})=\delta({\bf x}-{\bf x'})

The solution is:

G({\bf x}, {\bf x'})
={1\over2\pi}\log|{\bf x}-{\bf x'}|
={1\over4\pi}\log|{\bf x}-{\bf x'}|^2
={1\over4\pi}\log((x-x')^2+(y-y')^2)

Poisson Equation in 3D

\nabla^2u(x)=f(x)

\nabla^2G(x, x')=\delta(x-x')

with boundary condition G(x) = 0 at infinity. Then:

G(x, x')=-{1\over4\pi}{1\over|x-x'| }

and

u(x) = -{1\over4\pi}\int {f(x')\over|x-x'| }\d x'
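
We can at least verify with SymPy that this G is harmonic away from the source point, where the delta function vanishes (a minimal sketch with x' at the origin):

>>> from sympy import symbols, sqrt, diff, simplify, pi
>>> x, y, z = symbols('x y z')
>>> G = -1/(4*pi*sqrt(x**2 + y**2 + z**2))
>>> simplify(diff(G, x, 2) + diff(G, y, 2) + diff(G, z, 2))
0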

Helmholtz Equation in 3D

(\nabla^2+k^2)u(x)=f(x)

(\nabla^2+k^2)G(x, x')=\delta(x-x')

with boundary condition G(x) = 0 at infinity. Then:

G(x, x')=-{1\over4\pi}{e^{ik|x-x'| }\over|x-x'| }

u(x) = -{1\over4\pi}\int {f(x')e^{ik|x-x'| }\over|x-x'| }\d x'

Finite Element Method

Let’s show it on the Laplace equation. We want to solve:

\nabla^2G(x, x')=\delta(x-x')

We will treat x' as a parameter, so we define g_{x'}(x)\equiv G(x, x'):

\nabla^2g_{x'}(x)=\delta(x-x')

We set g_{x'}(x)=0 on the boundary and we get:

-\int\nabla g_{x'}(x) \cdot \nabla v(x) \d x = \int v(x)\delta(x-x')\d x

-\int\nabla g_{x'}(x) \cdot \nabla v(x) \d x = v(x')

So we choose x' and then solve for g_{x'}(x) using FEM, and we get the Green function G(x, x') for all x and one particular x'. We can then evaluate the integral (3.20.1.2) numerically – one would have to solve with FEM for all x' that are needed in the integral, so this is not efficient, but it should work. One will then be able to play with Green functions and calculate them numerically for any boundary condition (which is in general not possible analytically).

3.21. Binomial Coefficients

For n and k integers, the binomial coefficients are defined by:

\binom{n}{k} = {n!\over k! (n-k)!} =
    {n (n-1) \cdots (n-k+1)\over k!}

For r real, one just uses the second formula as a definition:

\binom{r}{k} = {r (r-1) \cdots (r-k+1)\over k!}

Example I:

\binom{-n}{k}
    = {(-n) (-n-1) \cdots (-n-k+1)\over k!}
    = (-1)^{k} {n (n+1) \cdots (n+k-1)\over k!}
    = (-1)^{k}\binom{n+k-1}{k}

Example II:

\binom{k-\half}{k}
    = {(k-\half) (k-\half-1) \cdots (k-\half-k+1)\over k!}
    = {(k-\half) (k-\half-1) \cdots \half\over k!} =

    = {(2k-1) (2k-3) \cdots 1\over 2^k k!}
    = {(2k-1)!!\over 2^k k!}
    = {(2k)!\over (2^k k!)^2}
    = {1\over 4^k}\binom{2k}{k}

The binomial formula for n integer is:

(x+y)^n = \sum_{k=0}^n \binom{n}{k} x^k y^{n-k}

and for r real and |x| < |y| this can be generalized to:

(x+y)^r = \sum_{k=0}^\infty \binom{r}{k} x^k y^{r-k}

Example: (for |x| < 1)

(1 + x)^{-n} = \sum_{k=0}^\infty \binom{-n}{k} x^k
    = \sum_{k=0}^\infty (-1)^k\binom{n+k-1}{k} x^k
    = \sum_{k=0}^\infty \binom{n+k-1}{k} (-x)^k

so:

(1 - x)^{-n} = \sum_{k=0}^\infty \binom{n+k-1}{k} x^k

(1 - x)^{-1} = \sum_{k=0}^\infty \binom{1+k-1}{k} x^k
    = \sum_{k=0}^\infty \binom{k}{k} x^k
    = \sum_{k=0}^\infty x^k

(1 - x)^{-\half} = \sum_{k=0}^\infty \binom{\half+k-1}{k} x^k
= \sum_{k=0}^\infty \binom{k-\half}{k} x^k
= \sum_{k=0}^\infty {1\over 4^k}\binom{2k}{k} x^k
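
The coefficients of the last series are easy to confirm with SymPy (a quick sketch):

>>> from sympy import symbols, sqrt, series, binomial
>>> x = symbols('x')
>>> series(1/sqrt(1 - x), x, 0, 5)
1 + x/2 + 3*x**2/8 + 5*x**3/16 + 35*x**4/128 + O(x**5)
>>> [binomial(2*k, k)/4**k for k in range(5)]
[1, 1/2, 3/8, 5/16, 35/128]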

Another example:

{1\over\sqrt{1-2xt+t^2}} = \left(1-(2xt-t^2)\right)^{-\half}
    = \sum_{n=0}^\infty {1\over 4^n}\binom{2n}{n} (2xt-t^2)^n =

    = \sum_{n=0}^\infty {1\over 4^n}\binom{2n}{n} \sum_{k=0}^n
        \binom{n}{k} (-t^2)^k (2xt)^{n-k} =

    = \sum_{n=0}^\infty {1\over 4^n}\binom{2n}{n} \sum_{k=0}^n
        \binom{n}{k} (-1)^k t^{2k} (2x)^{n-k} t^{n-k} =

    = \sum_{n=0}^\infty {1\over 4^n}\binom{2n}{n} \sum_{k=0}^n
        \binom{n}{k} (-1)^k t^{n+k} (2x)^{n-k} =

    = \sum_{n=0}^\infty \sum_{k=0}^{\left\lfloor n\over 2 \right\rfloor}
        {1\over 4^{n-k}}\binom{2n-2k}{n-k}
        \binom{n-k}{k} (-1)^k t^{n} (2x)^{n-2k} =

    = \sum_{n=0}^\infty \left(
        {1\over 2^n}
        \sum_{k=0}^{\left\lfloor n\over 2 \right\rfloor}
        (-1)^k
        \binom{n}{k}
        \binom{2n-2k}{n}
        x^{n-2k} \right) t^{n} =

    = \sum_{n=0}^\infty \left(
        {1\over 2^n}
        \sum_{k=0}^{\left\lfloor n\over 2 \right\rfloor}
        (-1)^k
        \binom{n}{k}
        {1\over n!}{\d^n \over \d x^n} x^{2n-2k}
        \right) t^{n} =

    = \sum_{n=0}^\infty \left(
        {1\over 2^n}
        \sum_{k=0}^n
        (-1)^k
        \binom{n}{k}
        {1\over n!}{\d^n \over \d x^n} x^{2n-2k}
        \right) t^{n} =

    = \sum_{n=0}^\infty \left(
        {1\over 2^n n!}
        {\d^n \over \d x^n} (x^2 - 1)^n
        \right) t^{n} =

    = \sum_{n=0}^\infty P_n(x) t^{n}

where we used (3.22.2), the fact that the terms with k>\left\lfloor n\over 2 \right\rfloor vanish (for such k we have 2n-2k<n, so the n-th derivative of x^{2n-2k} is zero, which allowed extending the sum up to k=n), and

\binom{2n-2k}{n-k} \binom{n-k}{k} = \binom{2n-2k}{n} \binom{n}{k}

The P_n(x) are the Legendre polynomials.
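
SymPy confirms the generating function term by term, using its built-in Legendre polynomials (a sketch):

>>> from sympy import symbols, sqrt, series, legendre, expand, simplify
>>> x, t = symbols('x t')
>>> g = expand(series(1/sqrt(1 - 2*x*t + t**2), t, 0, 4).removeO())
>>> [simplify(g.coeff(t, n) - legendre(n, x)) for n in range(4)]
[0, 0, 0, 0]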

3.22. Double Sums

When evaluating double sums, one can use triangular summation to reorder them:

(3.22.1)\sum_{n=0}^\infty \sum_{k=0}^\infty a_{n k} =
\sum_{n=0}^\infty \sum_{k=0}^n a_{n-k, k}

Also it holds

(3.22.2)\sum_{n=0}^\infty \sum_{k=0}^n a_{n k} =
    \sum_{n=0}^\infty \sum_{k=0}^{\left\lfloor n\over 2 \right\rfloor}
        a_{n-k, k}
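
Both reorderings just enumerate the same set of index pairs. For (3.22.1) this is easy to see with a small Python check over a truncated index set (all pairs with n+k \le 5):

>>> lhs = {(n, k) for n in range(6) for k in range(6 - n)}
>>> rhs = {(n - k, k) for n in range(6) for k in range(n + 1)}
>>> lhs == rhs
True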

3.23. Triangle Inequality

The triangle inequality (condition) means that none of the three quantities a, b, c is greater than the sum of the other two:

(3.23.1)a + b \ge c

b + c \ge a

c + a \ge b

This is equivalent to just one equation:

(3.23.2)|a-b| \le c \le a+b

we can do any permutation of the symbols, i.e. the above equation is equivalent to any of these:

|b-c| \le a \le b+c

|c-a| \le b \le c+a

So instead of stating the three inequalities (3.23.1) it is more convenient to just write (3.23.2), using any permutation that we like.

To show, that (3.23.1) implies (3.23.2) we rewrite (3.23.1):

a + b \ge c

c \ge a-b

c \ge b-a

so

a + b \ge c

c \ge |a-b|

and we get (3.23.2). To show, that (3.23.2) implies (3.23.1) we rewrite (3.23.2) for a\ge b first:

a \ge b

|a-b| \le c \le a+b

so:

a \ge b

a-b \le c \le a+b

rearranging:

a + b \ge c

b + c \ge a

a \ge b

since c is positive, if a\ge b then also c+a\ge b and we get (3.23.1). Finally, for a < b:

a < b

|a-b| \le c \le a+b

so:

a < b

-(a-b) \le c \le a+b

rearranging:

a + b \ge c

b > a

c + a \ge b

since c is positive, if b > a then also b+c\ge a and we get (3.23.1).

3.24. Gamma Function

The Gamma function \Gamma(x) is defined by the following properties for x > 0:

(3.24.1)\Gamma(1) = 1

(3.24.2)\Gamma(x+1) = x \Gamma(x)

(3.24.3)\log\Gamma(x) \quad\mbox{is convex}

It can be shown that this determines the function uniquely for x > 0 (this is called the Bohr-Mollerup theorem) and then it can be extended analytically to the whole complex plane.

The most common formula for \Gamma(z) that satisfies (3.24.1), (3.24.2) and (3.24.3) is:

(3.24.4)\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \d t

It satisfies (3.24.1) because:

\Gamma(1)
    = \int_0^\infty t^{1-1} e^{-t} \d t
    = \int_0^\infty e^{-t} \d t
    = [-e^{-t}]_0^\infty
    = 1

It satisfies (3.24.2) by integrating by parts:

\Gamma(z)
    = \int_0^\infty t^{z-1} e^{-t} \d t
    = (z-1)\int_0^\infty t^{z-2} e^{-t} \d t-[t^{z-1}e^{-t}]_0^\infty
    = (z-1)\Gamma(z-1)

Finally, it satisfies (3.24.3), as follows by verifying the convexity condition directly using the Hölder inequality (x, y > 0 and 0\le \lambda \le 1):

\log\Gamma(\lambda x + (1-\lambda) y)
    = \log\int_0^\infty t^{\lambda x + (1-\lambda) y - 1} e^{-t} \d t =

    = \log \int_0^\infty (t^{x-1} e^{-t})^\lambda
        (t^{y-1} e^{-t})^{1-\lambda} \d t \le

    \le \log \left(
        \left(\int_0^\infty t^{x-1} e^{-t} \d t\right)^\lambda
        \left(\int_0^\infty t^{y-1} e^{-t} \d t\right)^{1-\lambda}\right) =

    = \lambda \log\Gamma(x) + (1-\lambda) \log \Gamma(y)

And thus, by the Bohr-Mollerup theorem, (3.24.4) is indeed the Gamma function. We can use (3.24.4) to calculate \Gamma(\half):

\Gamma(\half)
    = \int_0^\infty t^{{1\over2}-1} e^{-t} \d t
    = \int_0^\infty {e^{-t}\over\sqrt t} \d t
    = \int_0^\infty {e^{-x^2}\over x} 2x\, \d x
    = 2\int_0^\infty e^{-x^2} \d x =

    = \int_{-\infty}^\infty e^{-x^2} \d x
    = \sqrt{
        \int_{-\infty}^\infty e^{-x^2} \d x
        \int_{-\infty}^\infty e^{-y^2} \d y
        }
    = \sqrt{2\pi \int_0^\infty e^{-r^2} r\d r} =

    = \sqrt{2\pi \int_0^\infty e^{-u} \half\d u}
    = \sqrt\pi

From this and the definition of the Gamma function we get for integer n:

(3.24.5)\Gamma(n+1) = n\Gamma(n) = n(n-1)\Gamma(n-1)
    = n(n-1)(n-2)\cdots 2\cdot1\cdot\Gamma(1) =

    = n(n-1)(n-2)\cdots 1
    = n!

and

(3.24.6)\Gamma(n+\half) = (n-\half)\Gamma(n-\half)
    = (n-\half)(n-1-\half)\Gamma(n-1-\half)
    = (n-\half)(n-1-\half)\cdots\half\Gamma(\half) =

    = {2n-1\over2}{2n-3\over2}{2n-5\over2}\cdots{1\over2}\Gamma(\half)
    = {(2n-1)!!\over 2^n}\Gamma(\half)
    = {(2n-1)!!\over 2^n}\sqrt\pi
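
SymPy’s gamma knows all of these values, so (3.24.5) and (3.24.6) can be spot-checked, e.g. for n=3, where (2n-1)!!/2^n = 15/8 (a quick sketch):

>>> from sympy import gamma, Rational
>>> gamma(4)  # Gamma(n+1) = n! with n = 3
6
>>> gamma(Rational(1, 2))
sqrt(pi)
>>> gamma(Rational(7, 2))  # (2n-1)!!/2^n * sqrt(pi) with n = 3
15*sqrt(pi)/8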

3.25. Incomplete Gamma Function

The upper incomplete gamma function is defined by:

\Gamma(z, x) = \int_x^\infty t^{z-1} e^{-t} \d t

Integrating by parts we get:

\Gamma(z+1, x)
    = \int_x^\infty t^z e^{-t} \d t
    = z\int_x^\infty t^{z-1} e^{-t} \d t-[t^ze^{-t}]_x^\infty
    = z\Gamma(z, x) + x^z e^{-x}

Some special values are:

\Gamma(z, 0) = \int_0^\infty t^{z-1} e^{-t} \d t = \Gamma(z)

\Gamma(1, x) = \int_x^\infty e^{-t} \d t = -[e^{-t}]_x^\infty = e^{-x}

\Gamma(\half, x)
    = \int_x^\infty t^{-\half} e^{-t} \d t
    = 2\int_{\sqrt{x}}^\infty e^{-s^2} \d s
    = \sqrt{\pi} \mbox{erfc}(\sqrt{x})

For integer n we get:

\Gamma(n+1, x)
    = n\Gamma(n, x) + x^n e^{-x}
    = n(n-1)\Gamma(n-1, x) + (n x^{n-1} + x^n) e^{-x} =

    = n(n-1)(n-2)\Gamma(n-2, x) + (n (n-1) x^{n-2}
        + n x^{n-1} + x^n) e^{-x} =

    = n(n-1)(n-2)\cdots 2\cdot 1\cdot \Gamma(1, x)
        + (n(n-1)\cdots 2 x^1 + \cdots
          + n (n-1) x^{n-2} + n x^{n-1} + x^n) e^{-x} =

    = n! e^{-x} + (n(n-1)\cdots 2 x^1 + \cdots
          + n (n-1) x^{n-2} + n x^{n-1} + x^n) e^{-x} =

    = n! e^{-x}\sum_{\nu=0}^n {x^\nu\over\nu!}

and

\Gamma(n+\half, x)
    = (n-\half)\Gamma(n-\half, x) + x^{n-\half} e^{-x}
    = (n-\half)(n-1-\half)\Gamma(n-1-\half, x)
        + \left((n-\half) x^{n-1-\half} + x^{n-\half}\right) e^{-x} =

    = (n-\half)(n-1-\half)\cdots\half\Gamma(\half, x)
        + \left((n-\half)(n-1-\half)\cdots(1+\half) x^\half + \cdots +
            (n-\half) x^{n-1-\half} + x^{n-\half}\right) e^{-x} =

    = {(2n-1)!!\over 2^n}\Gamma(\half, x)
        +
        {(2n-1)!!\over 2^n} e^{-x}\sum_{\nu=1}^n
            {2^\nu x^{\nu-\half}\over(2\nu-1)!!} =

    = {(2n-1)!!\over 2^n}\left(\sqrt\pi \mbox{erfc}(\sqrt x)
        +
        e^{-x}\sum_{\nu=1}^n
            {2^\nu x^{\nu-\half}\over(2\nu-1)!!} \right)

The lower incomplete gamma function is defined by:

\gamma(z, x) = \int_0^x t^{z-1} e^{-t} \d t = \Gamma(z) - \Gamma(z, x)

and as such all expressions can be easily derived using the gamma and upper incomplete gamma functions. The recursion relation is then:

\gamma(z+1, x) = \Gamma(z+1) - \Gamma(z+1, x)
    = z \Gamma(z) - (z\Gamma(z, x) + x^z e^{-x})
    = z \gamma(z, x) - x^z e^{-x}

Some special values are:

\gamma(z, 0) = \Gamma(z) - \Gamma(z, 0) = \Gamma(z) - \Gamma(z) = 0

\gamma(1, x) = \Gamma(1) - \Gamma(1, x) = 1 - e^{-x}

\gamma(\half, x) = \Gamma(\half) - \Gamma(\half, x)
    = \sqrt\pi - \sqrt\pi \mbox{erfc}(\sqrt x)
    = \sqrt\pi \mbox{erf}(\sqrt x)
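
SymPy implements both functions as uppergamma and lowergamma and evaluates these special values automatically (a minimal sketch, assuming a reasonably recent SymPy):

>>> from sympy import uppergamma, lowergamma, Rational, symbols
>>> x = symbols('x', positive=True)
>>> uppergamma(1, x)
exp(-x)
>>> lowergamma(1, x)
1 - exp(-x)
>>> lowergamma(Rational(1, 2), x)
sqrt(pi)*erf(sqrt(x))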

By repeated application of the recursion formula we get:

\gamma(z, x)
    = {1\over z}\gamma(z+1, x) + e^{-x} {x^z\over z}
    = {1\over z(z+1)}\gamma(z+2, x) + e^{-x} \left({x^z\over z}
        + {x^{z+1}\over z(z+1)}\right) =

    = {1\over z(z+1)\cdots(z+n)}\gamma(z+n+1, x) + e^{-x}
        \sum_{k=0}^n {x^{z+k}\over z(z+1)\cdots(z+k)} =

    = {\Gamma(z)\over \Gamma(z+n+1)}\gamma(z+n+1, x) + x^z
        \Gamma(z) e^{-x} \sum_{k=0}^n {x^k\over \Gamma(z+k+1)} =

(3.25.1)= x^z \Gamma(z) e^{-x} \sum_{k=0}^\infty {x^k\over \Gamma(z+k+1)}

where we used:

\lim_{z\to\infty} {\gamma(z, x)\over\Gamma(z)} = 0

which can be proven using the following estimate: the function f(t) = t^{z-1} e^{-t} is increasing for t < z-1, so as long as x < z-1 we get:

\gamma(z, x)
    = \int_0^x t^{z-1} e^{-t} \d t
    = \int_0^x f(t) \d t <

    < \int_0^x f(x) \d t = x f(x) =

    = {x \over z - 1 - x} \int_x^{z-1} f(x) \d t <

    < {x \over z - 1 - x} \int_x^{z-1} f(t) \d t <

    < {x \over z - 1 - x} \int_0^\infty f(t) \d t =

    = {x \over z - 1 - x} \Gamma(z)

Using (3.25.1) we can now write \gamma(z, x) using the Kummer confluent hypergeometric function {}_1F_1(a, b, z) as follows:

\gamma(z, x)
    = x^z \Gamma(z) e^{-x} \sum_{k=0}^\infty {x^k\over \Gamma(z+k+1)}
    = x^z z^{-1} e^{-x}\, {}_1F_1(1, z+1, x)
    = x^z z^{-1}\, {}_1F_1(z, z+1, -x)

3.25.1. Example

Consider the class of integrals:

F_m(t) = \int_0^1 u^{2m} e^{-tu^2} \d u, \quad\quad
    \mbox{($t>0$; $m=0,1,2,\dots$)}

We write them using the lower incomplete gamma function as:

F_m(t) = \int_0^t \left(v\over t\right)^m e^{-v}
        \left(v\over t\right)^{-\half}{\d v\over 2 t}
    = {1\over 2 t^{m+\half}}\int_0^t v^{m-\half} e^{-v} \d v
    = {\gamma(m+\half, t)\over 2 t^{m+\half}}

We can also write it using the confluent hypergeometric function as follows:

F_m(t) = {\gamma(m+\half, t)\over 2 t^{m+\half}}
    = {t^{m+\half}(m+\half)^{-1} \over 2 t^{m+\half}}
        \,{}_1F_1(m+\half, m+{\textstyle{3\over2}}, -t)
    = {{}_1F_1(m+\half, m+{\textstyle{3\over2}}, -t)\over 2m+1}

For m=0 we get:

F_0(t) = {\gamma(\half, t)\over 2 t^{\half}}
    = \half \sqrt{\pi\over t} \mbox{erf}(\sqrt t)

Using the recursion relation we get:

F_{m+1}(t)
    = {\gamma(m+\half+1, t)\over 2 t^{m+\half+1}}
    = {(m+\half)\gamma(m+\half, t)-t^{m+\half}e^{-t}\over 2 t^{m+\half} t}
    = {(m+\half)\over t}F_m(t)-{e^{-t}\over 2 t} =

    = {(2m+1) F_m(t)-e^{-t}\over 2 t}

By expressing F_m(t) from the equation we obtain the inverse relation:

F_m(t) = {2t F_{m+1}(t) + e^{-t}\over 2m+1}

From (3.25.1) we get:

F_m(t)
    = {\gamma(m+\half, t)\over 2 t^{m+\half}}
    = {1\over 2 t^{m+\half}}
    t^{m+\half} \Gamma(m+\half) e^{-t} \sum_{k=0}^\infty {t^k\over
        \Gamma(m+\half+k+1)} =

    = \half \Gamma(m+\half) e^{-t} \sum_{k=0}^\infty {t^k\over
        \Gamma(m+k+{3\over2})} =

    = \half {(2m-1)!! \sqrt\pi\over 2^m} e^{-t} \sum_{k=0}^\infty {t^k\over
        {(2m+2k+1)!!\sqrt\pi\over 2^{m+k+1}}} =

    = e^{-t} \sum_{k=0}^\infty {(2m-1)!! (2t)^k\over (2m+2k+1)!!} =

    = e^{-t} \sum_{k=0}^\infty {(2t)^k\over (2m+1)(2m+3)\cdots(2m+2k+1)}
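
These relations are convenient to spot-check numerically with mpmath (a sketch: F_0 from the closed form, F_1 from the recursion, both compared against direct quadrature at t=2):

>>> from mpmath import mpf, quad, exp, sqrt, erf, pi
>>> t = mpf(2)
>>> F0 = sqrt(pi/t)*erf(sqrt(t))/2
>>> abs(F0 - quad(lambda u: exp(-t*u**2), [0, 1])) < 1e-12
True
>>> F1 = (F0 - exp(-t))/(2*t)  # recursion with m = 0
>>> abs(F1 - quad(lambda u: u**2*exp(-t*u**2), [0, 1])) < 1e-12
True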

3.26. Factorial

The factorial n! is defined as

n! = n(n-1)(n-2)\cdots 3\cdot2\cdot 1

By (3.24.5) it can be written using the Gamma function as:

n! = \Gamma(n+1)

3.27. Double Factorial

The double factorial n!! is defined as:

n!! = \begin{cases}
    n(n-2)(n-4)(n-6)\cdots 5\cdot3\cdot1\quad\quad\mbox{for odd $n=2k+1$} \\
    n(n-2)(n-4)(n-6)\cdots 6\cdot4\cdot2\quad\quad\mbox{for even $n=2k$}
\end{cases}

One can rewrite the double factorial using the ordinary factorial as:

(2k)!! = 2\cdot4\cdot6\cdots (2k)
    = 2^k (1\cdot2\cdot3\cdots k)
    = 2^k k!

(2k-1)!!
    = 1\cdot3\cdot5\cdots(2k-1)
    = {1\cdot2\cdot3\cdot4\cdot5\cdots(2k) \over 2\cdot4\cdot6\cdots (2k)}
    = {(2k)!\over (2k)!!} = {(2k)!\over 2^k k!}

For odd n it can be written using the Gamma function, see (3.24.6):

(2k-1)!! = {1\over\sqrt\pi} 2^k \Gamma\left(k+\half\right)

3.27.1. Example

A(n) = {1\cdot3\cdot5 \cdot \dots \cdot (2n-1) \over
    1\cdot 2\cdot 3\cdot \dots \cdot n}
    = {(2n-1)!!\over n!}
    = {(2n)!\over 2^n (n!)^2}
    = {1\over 2^n}\binom{2n}{n}

B(n) = {1\cdot3\cdot5 \cdot \dots \cdot (2n-1) \over
    2\cdot 4\cdot 6 \cdot \dots \cdot 2n}
    = {(2n-1)!!\over (2n)!!}
    = {(2n)!\over (2^n n!)^2}
    = {1\over 4^n}\binom{2n}{n}

3.28. Fermi-Dirac Integral

The Fermi-Dirac integral (sometimes just called a Fermi integral) is defined as:

I_\alpha(\mu) = \int_0^\infty {\epsilon^\alpha \d\epsilon \over
    e^{\epsilon-\mu} + 1}

Examples:

I_{1\over 2}(\mu) = \int_0^\infty {\sqrt \epsilon \d\epsilon \over
    e^{\epsilon-\mu} + 1}

I_{3\over 2}(\mu) = \int_0^\infty {\epsilon^{3\over2} \d\epsilon \over
    e^{\epsilon-\mu} + 1}

The Fermi-Dirac integral can also be written using the polylogarithm, see The Series pFq for details.
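
The polylogarithm form referred to is I_\alpha(\mu) = -\Gamma(\alpha+1)\,\mathrm{Li}_{\alpha+1}(-e^\mu); as a sketch, it can be checked numerically with mpmath (here \alpha=\half, \mu=-1):

>>> from mpmath import mpf, quad, exp, sqrt, gamma, polylog, inf
>>> mu = mpf(-1)
>>> lhs = quad(lambda e: sqrt(e)/(exp(e - mu) + 1), [0, inf])
>>> rhs = -gamma(mpf(3)/2)*polylog(mpf(3)/2, -exp(mu))
>>> abs(lhs - rhs) < 1e-12
True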