Useful code background - GRTLCollaboration/engrenage GitHub Wiki
Data structure
The data for the variable values at any timestep is stored in a state. The state contains all the data for the first variable (phi) from the inner ghost cell to the outer ghost cell, then all the data for the second variable (hrr), and so on.
The ordering of the variables is specified in statevariables.py; by default the BSSN vars go first, followed by any matter ones. Note that spherical symmetry implies that the only non-zero components of the rank-2 tensors are the diagonal elements, and for vectors only the $r$ component.
As a simple example, if our grid had only N=4 points at [-dr/2, dr/2, 3dr/2, 5dr/2] (where dr is the grid spacing), then the state with scalar matter would be a long vector like:
[phi(-dr/2), phi(dr/2), phi(3dr/2), phi(5dr/2), hrr(-dr/2), hrr(dr/2), hrr(3dr/2), ... v(3dr/2), v(5dr/2)]
We will pack and unpack this state vector into structs that contain all the BSSN variables in tensor form, to make the code more readable, but we can always index directly into the variable we want at some position if we have the full state vector.
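To illustrate the layout, here is a minimal sketch of indexing into such a flat state vector. This is not engrenage's actual API; the variable count, names and ordering (`NUM_VARS`, `idx_phi`, etc.) are purely illustrative.

```python
import numpy as np

NUM_VARS = 3          # illustrative: e.g. phi, hrr, v (engrenage has more)
N = 4                 # number of grid points, including ghost cells
idx_phi, idx_hrr, idx_v = 0, 1, 2  # hypothetical variable ordering

# A flat state vector: all phi values, then all hrr values, then all v values
state = np.arange(NUM_VARS * N, dtype=float)

# "Unpack": view the same data as a (variable, grid point) array
vars_2d = state.reshape(NUM_VARS, N)

phi = vars_2d[idx_phi]                # all phi values on the grid
hrr_at_point_2 = vars_2d[idx_hrr, 2]  # hrr at the third grid point

# Direct indexing into the flat vector gives the same value
assert hrr_at_point_2 == state[idx_hrr * N + 2]
```

Since `reshape` returns a view of the same memory, the 2D form is just a convenient way of reading the flat vector, not a copy.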
When we obtain a solution, it contains a series of states, one for each timestep that we requested when setting up the example. The diagnostic for the Hamiltonian constraint here is a good example of how to index into this solution, and how to unpack the state into the BSSN vars and find their derivatives.
Grid structure
Grid points and ghost cells
To set up a grid object we need to provide the state vector (so the grid knows how many variables will be stored on it) and the spacing:

```python
grid = Grid(spacing, my_state_vector)
```
A useful helper function is provided to work out the spacing needed, depending on the grid you want. For example, for cubic spacing you can do:

```python
params = CubicSpacing.get_parameters(r_max, min_dr, max_dr)
spacing = CubicSpacing(**params)
```

to get a grid that goes out to r_max with the specified minimum and maximum spacings. A very nice notebook in which you can visualise the spacing is provided here.
There are 3 ghost cells at each end of the grid, which are not evolved but are filled according to some user-specified rules. For the points at the inner boundary, the values can be copied directly from the corresponding evolved points in the positive $r>0$ part of the grid (taking into account their parity under $r \to -r$, as specified in statevariables.py).
The outer boundary cells are filled by assuming some predefined fall-off of the values with $r$, again specified in statevariables.py.
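The inner-boundary parity fill can be sketched as follows. This is a simplified stand-in, not engrenage's actual implementation; the function name and the hard-coded 3 ghost cells are just for illustration.

```python
import numpy as np

NUM_GHOSTS = 3

def fill_inner_ghosts(var, parity):
    """Fill the 3 inner ghost cells of one variable's values by reflecting
    the first evolved points across r = 0. parity is +1 for variables that
    are even under r -> -r (e.g. phi) and -1 for odd ones (e.g. the
    r-component of a vector). A simplified sketch only."""
    for i in range(NUM_GHOSTS):
        # ghost cell i sits at the mirror image of evolved point
        # 2*NUM_GHOSTS - 1 - i (the grid is staggered about r = 0)
        var[i] = parity * var[2 * NUM_GHOSTS - 1 - i]
    return var

# Example: an odd-parity variable flips sign across r = 0
v = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 3.0])
fill_inner_ghosts(v, parity=-1)   # ghosts become [-3, -2, -1]
```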
Spatial derivatives - finite differencing
We calculate derivatives on the grid using finite difference methods. This simply means that for the first derivative with respect to r, we do something like:
$$\frac{\partial \phi}{\partial r} \approx \frac{\phi(r+dr) - \phi(r-dr)}{2\,dr}$$
For this the "stencil" is $[-1/(2dr),\ 0,\ 1/(2dr)]$, where the central value is the weight applied to the current grid point, and the other two are the weights applied to the values of the variable on either side.
By including more surrounding points (increasing the size of the "stencil") we can get higher accuracy, at the cost of greater complexity and a need for more ghost cells. The coefficients of the stencil can be obtained by fitting a Lagrange polynomial to the points and evaluating its derivatives in terms of the function values and their spacing.
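As a sketch of where the coefficients come from, the weights can be found by solving the Taylor moment conditions, which is equivalent to differentiating the Lagrange interpolating polynomial. The function below is illustrative, not engrenage code.

```python
import numpy as np
from math import factorial

def fd_coefficients(offsets, deriv_order):
    """Finite-difference weights for the deriv_order-th derivative at
    offset 0, given integer point offsets in units of dr. Row k of the
    linear system enforces sum_j w_j * offsets_j**k = k! * delta_{k,m},
    i.e. the stencil is exact for polynomials up to the stencil size."""
    offsets = np.asarray(offsets, dtype=float)
    n = len(offsets)
    A = np.vander(offsets, n, increasing=True).T  # A[k, j] = offsets_j**k
    b = np.zeros(n)
    b[deriv_order] = factorial(deriv_order)
    return np.linalg.solve(A, b)

# Centred 3-point first derivative: recovers the [-1/2, 0, 1/2] weights (per dr)
w3 = fd_coefficients([-1, 0, 1], 1)
# Centred 5-point first derivative (4th-order accurate): [1/12, -2/3, 0, 2/3, -1/12]
w5 = fd_coefficients([-2, -1, 0, 1, 2], 1)
```

The same routine gives one-sided stencils (e.g. `offsets=[0, 1, 2]`) of the kind needed near boundaries.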
These stencils can be packaged into a matrix that is applied to the whole vector of values for a given variable at a certain time, to return the values of the derivatives at each point.
See the Derivatives class to see how this works.
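The matrix form described above can be sketched like this on a uniform grid (engrenage's Derivatives class handles the non-uniform spacing and boundaries properly; this is only a minimal illustration).

```python
import numpy as np

N, dr = 64, 0.1
r = (np.arange(N) + 0.5) * dr   # simple uniform grid for illustration

# Centred first-derivative matrix with stencil [-1/(2dr), 0, 1/(2dr)]
D = np.zeros((N, N))
idx = np.arange(1, N - 1)
D[idx, idx - 1] = -0.5 / dr
D[idx, idx + 1] = 0.5 / dr
# (in practice the boundary rows would use ghost cells or one-sided stencils)

phi = np.sin(r)
dphi = D @ phi                   # derivative at every interior point at once

# the interior values approximate cos(r) to O(dr^2)
max_err = np.max(np.abs(dphi[1:-1] - np.cos(r[1:-1])))
```

Because the matrix is banded, real implementations store only the diagonals rather than the full dense array.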
Time integration of "RHS" calculation
We make use of SciPy's highly optimised solve_ivp routine to perform the time integration of our solution. This essentially takes the initial state and, given a function which computes the right-hand side (the "rhs", i.e. $\partial_t \phi$), does something like:
$$\phi(t + dt) = \phi(t) + \partial_t \phi ~ dt$$
Again, in practice it uses a higher-order adaptive Runge-Kutta method (RK45, which takes steps with a 5th-order formula and controls the error with an embedded 4th-order estimate), but you probably don't need to worry too much about the detail of that. The time stepping is adaptive, so sometimes you may see the timestep increase and then decrease again, as the method aims to maintain a certain error. Once the solution is obtained it is interpolated onto the points that you asked for when setting up the time vector t (note therefore that the vector of times that you ask for does not determine the time-stepping frequency, only the sampling of the outputs).
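As a minimal illustration of this pattern, here is solve_ivp applied to a toy rhs (a harmonic oscillator standing in for the BSSN evolution equations); note that `t_eval` only controls where the output is sampled, not the internal step size.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, state):
    """Toy right-hand side standing in for the evolution equations:
    a simple harmonic oscillator, d/dt [phi, pi] = [pi, -phi]."""
    phi, pi = state
    return [pi, -phi]

t_eval = np.linspace(0.0, 10.0, 101)   # output sampling only, not the step size
solution = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0],
                     method="RK45", t_eval=t_eval, rtol=1e-8, atol=1e-10)

# solution.y has shape (num_vars, num_times); each column is one state,
# so in engrenage each column would be a full flat state vector
phi_of_t = solution.y[0]               # exact answer here is cos(t)
```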
If the time stepping breaks, it usually means that something is unstable - try running it for a shorter amount of time and see what is blowing up.