# 3.40. Differential Geometry¶

## 3.40.1. Manifolds¶

### Scalars, Vectors, Tensors¶

A differentiable manifold is a space covered by an atlas of maps; each map covers part of the manifold and is a one-to-one mapping to a Euclidean space:

Let’s have a one-to-one transformation between the $x^\mu$ and $x'^\mu$ coordinates (we simply write $x$, $x'$, etc.):

A scalar is a field that transforms as follows (the primed symbol being its value in the primed coordinates):

A one-form is a field that transforms in the same way as the gradient of a scalar, i.e. (the primed symbol being its value in the primed coordinates):

so

A vector is a field that produces a scalar when contracted with a one-form; this fact is used to deduce how it transforms:

so we have

multiplying by and using the fact that we get

Higher tensors are built up, and their transformation properties derived, from the fact that by contracting with either a vector or a one-form we get a lower-rank tensor whose transformation we already know.

Having now defined scalar, vector and tensor fields, one may then choose a basis at each point for each field, the only requirement being that the basis is not singular. For example for vectors, each point in has a basis , so a vector (field) has components with respect to this basis:

### Covariant differentiation¶

The derivative of the basis vector is a vector, thus it can be written as a linear combination of the basis vectors:

Differentiating a vector is then easy:

So we define a covariant derivative:

and write

I.e. we have:

We also define:

A scalar doesn’t depend on the basis vectors, so its covariant derivative is just its partial derivative:

Differentiating a one-form is done using the fact that its contraction with a vector is a scalar, thus

where we have defined

This is obviously a tensor, because the above equation has a tensor on the left-hand side and tensors on the right-hand side. Similarly, for the derivative of a higher-rank tensor we use the fact that its contraction with a one-form is a vector:

where we define

and so on for other tensors, for example:

One can now easily prove some common relations simply by rewriting them in components and back:

Change of variable:
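
The change-of-variable formula can be checked concretely with SymPy. Transforming from Cartesian coordinates (where all Christoffel symbols vanish) to polar coordinates, only the inhomogeneous term of the transformation survives. This is a minimal sketch; the coordinate names are illustrative:

```python
import sympy as sp

r, t = sp.symbols("r theta", positive=True)
Xp = [r, t]
x_cart = sp.Matrix([r*sp.cos(t), r*sp.sin(t)])  # Cartesian x(x') in terms of polar x'
J = x_cart.jacobian(Xp)                         # dx^m / dx'^i
Jinv = J.inv().applyfunc(sp.simplify)           # dx'^l / dx^m
# In Cartesian coordinates all Christoffel symbols vanish, so only the
# inhomogeneous term of the transformation survives:
#   Gamma'^l_{ij} = (dx'^l/dx^m) (d^2 x^m / dx'^i dx'^j)
Gamma = [[[sp.simplify(sum(Jinv[l, m]*sp.diff(x_cart[m], Xp[i], Xp[j])
                           for m in range(2)))
           for j in range(2)] for i in range(2)] for l in range(2)]
print(Gamma[0][1][1], Gamma[1][0][1])  # -r and 1/r
```

These are exactly the nonzero polar-coordinate symbols one also obtains from the metric, which confirms the transformation formula.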

### Parallel transport¶

If the vectors at infinitesimally close points of the curve are parallel and of equal length, then the vector is said to be parallel transported along the curve, i.e.:

So

In components (using the tangent vector ):
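
As a concrete check (a sketch, assuming the standard sphere Christoffel symbols Γ^θ_φφ = −sin θ cos θ and Γ^φ_θφ = cot θ), SymPy can verify that a vector parallel transported around the circle θ = θ₀ on a sphere rotates with frequency cos θ₀:

```python
import sympy as sp

lam, theta0 = sp.symbols("lambda theta0", positive=True)
w = sp.cos(theta0)                   # rotation frequency of the transported vector
V_theta = sp.cos(w*lam)              # candidate solution
V_phi = -sp.sin(w*lam)/sp.sin(theta0)
# parallel transport along the curve theta = theta0, phi = lambda:
#   dV^theta/dlam + Gamma^theta_{phi phi} V^phi = 0
#   dV^phi/dlam   + Gamma^phi_{theta phi} V^theta = 0
eq1 = sp.diff(V_theta, lam) - sp.sin(theta0)*sp.cos(theta0)*V_phi
eq2 = sp.diff(V_phi, lam) + sp.cos(theta0)/sp.sin(theta0)*V_theta
print(sp.simplify(eq1), sp.simplify(eq2))  # 0 0
```

After one loop (λ = 2π) the vector has rotated by the angle 2π cos θ₀ relative to its starting value, the classic holonomy of the sphere.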

### Fermi-Walker transport¶

In local inertial frame:

We require orthogonality , in a general frame:

where the correction term was calculated by differentiating the orthogonality condition. This is called Thomas precession.

For any vector, we define: the vector is Fermi-Walker transported along the curve if:

If the vector is perpendicular to the four-velocity, the second term is zero and the result is called Fermi transport.

Why: the four-velocity itself is Fermi-Walker transported, and this is also the equation for gyroscopes, so the natural, nonrotating tetrad is the one carried along by Fermi-Walker transport, which is then correctly transported along any curve (not just geodesics).

### Geodesics¶

A geodesic is a curve that locally looks like a line, i.e. it parallel transports its own tangent vector:

so

or equivalently (using the fact ):

(3.40.1.1)¶

Let’s determine all possible reparametrizations that leave the geodesic equation invariant:

Substituting into the geodesic equation, we get:

So we can see that the equation is invariant as long as , which gives:

This is called an affine reparametrization.
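
In standard notation, the invariance condition and its solution can be sketched as:

```latex
\frac{\mathrm{d}^2\lambda'}{\mathrm{d}\lambda^2} = 0
\quad\Longrightarrow\quad
\lambda' = a\lambda + b, \qquad a,\ b \ \text{const.}
```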

Another way to derive the geodesic equation is by finding a curve that extremizes the proper time:

Here the parameter can be *any* parametrization. We have introduced a shorthand to make the
formulas shorter:

We vary this action with respect to :

By setting the variation to zero we obtain the geodesic equation:

(3.40.1.2)¶

We have the freedom of choosing the parametrization, so we choose one such that , which makes and we recover (3.40.1.1):

Note that the equation (3.40.1.2) is parametrization invariant, but (3.40.1.1) is not (only affine reparametrization leaves (3.40.1.1) invariant).

### Riemann Curvature Tensor¶

Curvature means that we take a vector, parallel transport it around a small closed loop (which is just applying the commutator of the covariant derivatives) and see how it changes. We express the result in terms of the original vector:

The coefficients form a tensor called the Riemann curvature tensor. Expanding the left-hand side:

where we have used the fact that all terms symmetric in the two derivative indices get canceled by the corresponding terms in the commutator. We get

In order to see all the symmetries that the Riemann tensor has, we lower the first index

and use local inertial frame coordinates, where all Christoffel symbols vanish (not their derivatives though):

We will also need:

Using these expressions for the curvature tensor in a local inertial frame, we derive the following 5 symmetries of the curvature tensor by simply substituting for the left-hand side and verifying that it is equal to the right-hand side:

These are tensor expressions and so even though we derived them in a local inertial frame, they hold in all coordinates. The last identity is called a Bianchi identity.

The Ricci tensor is defined as:

From the last equality we can see that it is symmetric in its two indices. The Ricci scalar is defined as:

The Einstein tensor is defined as:

It is symmetric due to the symmetry of the metric and Ricci tensors. By contracting the Bianchi identity twice, we can show that the Einstein tensor has zero divergence:
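
These definitions can be exercised on a simple example. The following SymPy sketch (symbols and index conventions as usual, chosen for illustration) computes the Ricci scalar of a 2-sphere of radius a, which comes out as 2/a²:

```python
import sympy as sp

theta, phi, a = sp.symbols("theta phi a", positive=True)
X = [theta, phi]
n = 2
g = sp.diag(a**2, a**2*sp.sin(theta)**2)    # metric of a sphere of radius a
ginv = g.inv()

# Christoffel symbols Gamma^l_{ij} from the metric
Gamma = [[[sum(ginv[l, s]*(g[s, i].diff(X[j]) + g[s, j].diff(X[i])
               - g[i, j].diff(X[s]))/2 for s in range(n))
           for j in range(n)] for i in range(n)] for l in range(n)]

def riemann(l, i, j, k):
    # R^l_{ijk} = d_j Gamma^l_{ik} - d_k Gamma^l_{ij} + Gamma Gamma terms
    return (sp.diff(Gamma[l][i][k], X[j]) - sp.diff(Gamma[l][i][j], X[k])
            + sum(Gamma[l][j][s]*Gamma[s][i][k]
                  - Gamma[l][k][s]*Gamma[s][i][j] for s in range(n)))

Ricci = sp.Matrix(n, n, lambda i, k: sp.simplify(sum(riemann(l, i, l, k)
                                                     for l in range(n))))
R = sp.simplify(sum(ginv[i, k]*Ricci[i, k] for i in range(n) for k in range(n)))
print(R)  # 2/a**2
```

In two dimensions the Einstein tensor vanishes identically, so this example also illustrates why the zero-divergence identity is only interesting in higher dimensions.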

### Lie derivative¶

Definition of the Lie derivative of any tensor is:

It can be shown directly from this definition that the Lie derivative of a vector is the same as the Lie bracket:

and in components

The Lie derivative of a scalar is

and that of a one-form is derived using the observation that its contraction with a vector is a scalar:

and so on for other tensors, for example:

### Metric¶

In general, the Christoffel symbols are not symmetric and there is no metric that generates them. However, if the manifold is equipped with a metric, then the fundamental theorem of Riemannian geometry states that there is a unique symmetric Levi-Civita connection for which the metric tensor is preserved by parallel transport:

We define the commutation coefficients of the basis by

In general these coefficients are not zero (as an example, take the unit vectors in spherical or cylindrical coordinates), but for coordinate bases they vanish. It can be proven that

and for coordinate bases , so

As a special case:

All three of the last expressions are used (the last one is probably the most common); $G$ is the matrix of the coefficients $g_{\mu\nu}$. At the beginning we used the usual trick that one factor is symmetric while the other is antisymmetric. Later we used the identity $\operatorname{Tr}\ln G = \ln\det G$, which follows from the well-known identity $\det\exp A = \exp\operatorname{Tr} A$ by substituting $A = \ln G$ and taking the logarithm of both sides.
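
The resulting formula for the contracted Christoffel symbol can be sanity-checked in polar coordinates, where the metric determinant gives √g = r (a small sketch, names illustrative):

```python
import sympy as sp

r, theta = sp.symbols("r theta", positive=True)
g = sp.diag(1, r**2)          # flat 2D metric in polar coordinates
sqrtg = sp.sqrt(g.det())      # sqrt(det g) = r
# contracted symbol: Gamma^l_{l mu} = d_mu ln sqrt(g)
lhs = sp.diff(sp.log(sqrtg), r)
# the only contributing symbol is Gamma^theta_{theta r} = 1/r
print(lhs)  # 1/r
```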

#### Diagonal Metric¶

Many times the metric is diagonal, e.g. in 3D:

(in general ), then the Christoffel symbols can be calculated very easily (below we do not sum over , and ):

If or then

(3.40.1.3)¶

otherwise (i.e. and ) then either :

(3.40.1.4)¶

or (i.e. ):

In other words, the symbols can only be nonzero if at least two of the three indices are the same, and one can use the two formulas (3.40.1.3) and (3.40.1.4) to quickly evaluate them. A systematic way to do it is to write (3.40.1.3) and (3.40.1.4) in the following form:

(3.40.1.5)¶

Then find all and for which is nonzero and then immediately write all nonzero Christoffel symbols using the equations (3.40.1.5).

For example for cylindrical coordinates we have and , so is only nonzero for and and we get:

all other Christoffel symbols are zero. For spherical coordinates we have , and , so is only nonzero for , or , or , and we get:

All other symbols are zero.
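
The diagonal-metric shortcut can be cross-checked against a direct computation from the metric diag(1, ρ², ρ² sin² θ). This SymPy sketch reproduces the nonzero symbols listed above:

```python
import sympy as sp

rho, theta, phi = sp.symbols("rho theta phi", positive=True)
X = [rho, theta, phi]
g = sp.diag(1, rho**2, rho**2*sp.sin(theta)**2)
ginv = g.inv()
n = 3
# Christoffel symbols Gamma^l_{ij} from the metric
Gamma = [[[sp.simplify(sum(ginv[l, s]*(g[s, i].diff(X[j]) + g[s, j].diff(X[i])
                           - g[i, j].diff(X[s]))/2 for s in range(n)))
           for j in range(n)] for i in range(n)] for l in range(n)]
# e.g. Gamma^rho_{theta theta}, Gamma^rho_{phi phi},
#      Gamma^theta_{phi phi}, Gamma^phi_{theta phi}
print(Gamma[0][1][1], Gamma[0][2][2], Gamma[1][2][2], Gamma[2][1][2])
```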

### Symmetries, Killing vectors¶

We say that a diffeomorphism is a symmetry of some tensor T if the tensor is invariant after being pulled back under :

Let the one-parameter family of symmetries be generated by a vector field , then the above equation is equivalent to:

If the tensor is the metric, then the symmetry is called an isometry, and the generating vector field is called a Killing vector field; it can be calculated from:

The last equality is Killing’s equation. If a curve is a geodesic with a tangent vector and we have a Killing vector, then their contraction is conserved along the geodesic, because:

where the first term is a contraction of a symmetric and an antisymmetric factor, thus zero, and the second term vanishes by the geodesic equation.
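
As a concrete check (the vector field is chosen for illustration), the generator of rotations in the Euclidean plane satisfies Killing’s equation; equivalently, the Lie derivative of the metric along it vanishes:

```python
import sympy as sp

x, y = sp.symbols("x y")
X = [x, y]
g = sp.eye(2)     # flat Euclidean metric in Cartesian coordinates
xi = [-y, x]      # generator of rotations about the origin (illustrative)

# Lie derivative of the metric:
# (L_xi g)_{ab} = xi^c d_c g_{ab} + g_{cb} d_a xi^c + g_{ac} d_b xi^c
L = sp.zeros(2, 2)
for a in range(2):
    for b in range(2):
        L[a, b] = sum(xi[c]*sp.diff(g[a, b], X[c])
                      + g[c, b]*sp.diff(xi[c], X[a])
                      + g[a, c]*sp.diff(xi[c], X[b]) for c in range(2))
print(L)  # zero matrix, so xi is a Killing vector
```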

### Symmetry and Antisymmetry¶

Every tensor can be decomposed into symmetric and antisymmetric parts:

In particular, for a symmetric tensor we get:

and for antisymmetric tensor we get:

When contracting a symmetric tensor with an antisymmetric tensor we get zero:

When contracting a general tensor with a symmetric tensor , only the symmetric part of contributes:

When contracting a general tensor with an antisymmetric tensor , only the antisymmetric part of contributes:

#### Example I¶

We want to rewrite:

So we write the left part as a sum of symmetric and antisymmetric parts:

Here is antisymmetric and is symmetric in , so the contraction is zero. The final result is:

#### Example II¶

Let . Then we can simplify:

Here is the antisymmetric part (the only one that contributes, because is antisymmetric) of .
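
The statements above are easy to verify symbolically. This sketch decomposes a generic 3×3 tensor into its symmetric and antisymmetric parts and checks that their contraction vanishes:

```python
import sympy as sp

A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"a{i}{j}"))
S = (A + A.T)/2   # symmetric part
T = (A - A.T)/2   # antisymmetric part
print(A - (S + T) == sp.zeros(3, 3))  # True: A = S + T
# contraction of a symmetric with an antisymmetric tensor vanishes
contraction = sp.expand(sum(S[i, j]*T[i, j] for i in range(3) for j in range(3)))
print(contraction)  # 0
```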

### Covariant integration¶

If the integrand is a scalar, the naive integral still depends on the coordinates. The correct way to integrate in any coordinates is:

where . The Gauss theorem in curvilinear coordinates is:

where is the boundary (surface) of and is the normal vector to this surface.

## 3.40.2. Examples¶

### Weak Formulation of Laplace Equation¶

As an example, we write the weak formulation of the Laplace equation in arbitrary coordinates:

Now we integrate by parts (assuming the boundary integral vanishes):

For diagonal metric this evaluates to:

### Cylindrical Coordinates¶

The transformation matrix is

The metric tensor of the Cartesian coordinate system is the identity, so by this transformation we get the metric tensor in cylindrical coordinates:
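
The transformation can be carried out with SymPy by pulling the identity metric back through the Jacobian (a sketch; the coordinate names are illustrative):

```python
import sympy as sp

r, phi, z = sp.symbols("r phi z", positive=True)
# Cartesian coordinates as functions of cylindrical ones
x_hat = sp.Matrix([r*sp.cos(phi), r*sp.sin(phi), z])
J = x_hat.jacobian([r, phi, z])
# pull back the identity metric: g = J^T J, giving diag(1, r**2, 1)
g = (J.T * J).applyfunc(sp.simplify)
print(g)
```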

As a particular example, let’s write the Laplace equation with nonconstant conductivity for axially symmetric field. The Laplace equation is:

so we use the formulas above to get:

but we know that , so and the final equation is:

To write the weak formulation for it, we need to integrate covariantly and rewrite the result using integration by parts. We did exactly this in the previous example in a coordinate-free manner, so we just use the final formula we got there for a diagonal metric:

and for , we get:

### Spherical Coordinates¶

The relation between Cartesian coordinates and spherical coordinates is:

(3.40.2.1)¶

The transformation matrix (Jacobian) is calculated by differentiating (3.40.2.1):

(3.40.2.2)¶

The inverse Jacobian is calculated by inverting the matrix (3.40.2.2):

We expressed the above Jacobians using $\rho$, $\theta$, $\phi$, and we can use (3.40.2.1) to express them using $x$, $y$, $z$. Code:

```
from sympy import var, sin, cos, zeros, Matrix, simplify, latex
var("rho theta phi")
x_hat = Matrix([
    rho * sin(theta) * cos(phi),
    rho * sin(theta) * sin(phi),
    rho * cos(theta)])
x = Matrix([rho, theta, phi])
M = zeros(3, 3)
for i in range(3):
    for j in range(3):
        M[i, j] = x_hat[i].diff(x[j])
N = M.inv(method="ADJ")
one = sin(phi)**2*sin(theta)**2 - cos(phi)**2*cos(theta)**2 + \
    cos(phi)**2 + cos(theta)**2
one_simple = one.subs(sin(phi)**2, 1-cos(phi)**2).expand().simplify()
N.simplify()
# one_simple is equal to 1, but simplify() can't do this automatically yet:
N = N.subs(one, one_simple)
print("J =", latex(M))
print()
print("J^{-1} =", latex(N))
```

Output:

The transformation matrices (Jacobians) are then used to convert vectors

and tensors

between spherical and Cartesian coordinates. For example, the partial derivatives transform from Cartesian to spherical coordinates as:

and from spherical to cartesian as:

Care must be taken when rewriting the index expression into matrices – the top index of the Jacobian is the row index, the bottom index is the column index.

The metric tensor of the Cartesian coordinate system is the identity, so by this transformation we get the metric tensor in spherical coordinates:

Once we have the metric tensor expressed in spherical coordinates, we don’t need the Cartesian coordinates anymore. All formulas only contain the spherical coordinates and the metric tensor.
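
The metric transformation itself can also be done in a few lines of SymPy, by pulling the identity metric back through the Jacobian (a sketch mirroring the code above):

```python
import sympy as sp

rho, theta, phi = sp.symbols("rho theta phi", positive=True)
# Cartesian coordinates as functions of spherical ones
x_hat = sp.Matrix([rho*sp.sin(theta)*sp.cos(phi),
                   rho*sp.sin(theta)*sp.sin(phi),
                   rho*sp.cos(theta)])
J = x_hat.jacobian([rho, theta, phi])
# pull back the identity metric: g = J^T J,
# giving diag(1, rho**2, rho**2*sin(theta)**2)
g = (J.T * J).applyfunc(sp.simplify)
print(g)
```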

### Rotating Disk¶

Let’s have a laboratory Euclidean system and a rotating disk system . The relation between the frames is

The inverse transformation can be calculated by simply inverting the matrix:

so the transformation matrices are:

The problem now is that Newtonian mechanics has a degenerate spacetime metric (see later). Let’s pretend we have the following metric in this system:

and

However, if we calculate with the correct special-relativistic metric:

and

We get the same Christoffel symbols as with the previous metric, because only the derivatives of the metric matter. Then the only nonzero Christoffel symbols are

If we want to avoid dealing with the metric, it is possible to start with the Christoffel symbols in the laboratory system:

and then transforming them to the system using the change of variable formula:

As an example, let’s calculate the coefficients above:

So we got the same results.

Now let’s see what we have got. Later we’ll show that these coefficients are just the derivatives of the Newtonian potential. E.g. in our case we have:

from which:

and the force acting on a test particle is then:

where we have defined the potential. This is just the centrifugal force. Also observe that we could have read it directly from the metric itself; just compare with the Lorentzian metric (with gravitation) in the next chapter.

The other two terms (and the symmetric ones) don’t behave as a gravitational force, but rather act only when we are differentiating with respect to time (i.e. they act only on moving bodies). Below we show this is just the term responsible for the Coriolis acceleration.

Let’s write the full equations of geodesics:

This becomes:

we can define and . Then the above equations can be rewritten as:

So we get two fictitious forces: the centrifugal force and the Coriolis force.
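
In the usual vector notation (with the angular velocity along the rotation axis), these equations take the familiar form:

```latex
\ddot{\mathbf{x}} =
\underbrace{-\,\boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{x})}_{\text{centrifugal}}
\;\underbrace{-\,2\,\boldsymbol{\omega}\times\dot{\mathbf{x}}}_{\text{Coriolis}}
```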

Now imagine a static vector in the system along the axis, i.e.

then

In the last equality we transformed from to using the relation between frames.

Differentiating any vector in the laboratory coordinates is easy: it’s just a partial derivative (due to the Euclidean metric). Let’s differentiate any vector in the rotating coordinates with respect to time (since the time coordinate is unchanged, the time is the same in both coordinate systems):

(3.40.2.3)¶

For our particular (static) vector this yields:

as expected, because it was at rest in the system. Let’s imagine a static vector in the system along the axis, i.e.

then

Similarly

How can one prove the relation:

(3.40.2.4)¶

that is used, for example, to derive the Coriolis acceleration? We need to write it in components to understand what it really means:

Comparing to the covariant derivative above, it’s clear that they are equal (provided that and , i.e. we are at the center of rotation).

Let’s show the derivation by Goldstein. The change during a time interval of a general vector as seen by an observer in the body system of axes will differ from the corresponding change as seen by an observer in the space system:

Now consider a vector fixed in the rigid body. Then and

For an arbitrary vector, the change relative to the space axes is the sum of the two effects:

A more rigorous derivation of the last equation follows from:

Let’s make the space and body instantaneously coincident at time t, then and , so we get the same equation as earlier:

Anyhow, introducing by:

we get

### Linear Elasticity Equations in Cylindrical Coordinates¶

Authors: Pavel Solin & Lenka Dubcova

In this paper we derive the weak formulation of linear elasticity equations suitable for the finite element discretization of axisymmetric 3D problems.

#### Original equations in Cartesian coordinates¶

Let’s start with some notation: we denote the displacement vector in 3D Cartesian coordinates, and the tensor of small deformations,

The stress tensor has the form

(3.40.2.5)¶

where

The symbols and are the Lamé constants, and is the Kronecker symbol ( if and otherwise). The equilibrium equations have the form

(3.40.2.6)¶

where is the vector of internal forces (such as gravity).

The boundary conditions for linear elasticity are given by

where are surface forces.

#### Weak formulation¶

Multiplying by test functions and integrating over the domain we obtain

(3.40.2.7)¶

Using Green’s theorem and the boundary conditions

Thus

(3.40.2.8)¶

Let us write the equations (3.40.2.8) in detail using relation (3.40.2.5)

(3.40.2.9)¶

#### Elementary transformation relations¶

First let us show how the partial derivatives of a scalar function are transformed from Cartesian coordinates to cylindrical coordinates . Note that

Since

it is

From here we obtain

(3.40.2.10)¶
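
Relation (3.40.2.10) can be sanity-checked on a concrete scalar; assuming the standard form ∂/∂x = cos φ ∂/∂r − (sin φ / r) ∂/∂φ, the function u = xy (chosen purely for illustration) gives back ∂u/∂x = y:

```python
import sympy as sp

r, phi = sp.symbols("r phi", positive=True)
# test scalar u = x*y written in cylindrical coordinates
u = (r*sp.cos(phi))*(r*sp.sin(phi))
# du/dx via the chain rule: cos(phi) du/dr - sin(phi)/r du/dphi
du_dx = sp.cos(phi)*sp.diff(u, r) - sp.sin(phi)/r*sp.diff(u, phi)
# direct result: d(x*y)/dx = y = r*sin(phi)
print(sp.simplify(du_dx - r*sp.sin(phi)))  # 0
```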

The relations between displacement components in Cartesian and cylindrical coordinates are

(3.40.2.11)¶

The same relations hold for surface forces and volume forces .

Applying (3.40.2.10) to , we obtain

Using (3.40.2.11) and the fact that does not depend on , this yields

Analogously, for we calculate

For , using that it does not depend on , we have

For further reference, transform also into cylindrical coordinates

#### Axisymmetric formulation¶

Assuming that the domain is axisymmetric, we can begin to transform the integrals in (3.40.2.9) to cylindrical coordinates. Recall that the Jacobian of the transformation is . The first equation in (3.40.2.9) has the form:

The second equation in (3.40.2.9) has the form:

Adding these two equations together we get

This can be simplified to

Finally, the third equation in (3.40.2.9) has the form

This gives us

Since the integrands do not depend on , we can simplify this to an integral over , where is the intersection of the domain with the half-plane. Dividing both equations by we get

#### Coordinate Independent Way¶

Let’s write the elasticity equations in Cartesian coordinates again:

These only hold in Cartesian coordinates, so we first write them in a coordinate-independent way:

so:

The weak formulation is then (do not sum over ):

We apply integration by parts:

This is the weak formulation valid in any coordinates. Using the cylindrical coordinates (see above) we get:

for we get: