Overview - sharpc42/CGM-theory-sims GitHub Wiki
Athena++ is a mesh-based simulation code, which means it is oriented around the environmental space itself rather than the particular matter within it. Instead of following particles representing massive points of interest, we define units of space — 1D, 2D, or 3D as desired — as cells populating a larger mesh, and then track physical quantities of interest, treated as uniform within each cell. In other words, where physical reality is continuous, and along with it any spatially dependent quantities (e.g., temperature), codes like Athena++ discretize all of that according to the resolution the user defines in the input parameter file. Higher resolutions correspond to longer compute times, but often yield more physically accurate results.
To start, these cells do not correspond to physical space in any meaningful or consistent way. They exist entirely for computational purposes, since the finiteness of computer memory conflicts with the idea of a continuous space. You decide on a desired resolution and input it into the parameter text file via the parameters "nx1," "nx2," and "nx3."
An important caveat: Athena++ requires that the minimum value for each be 1. In other words, since all three are always present, if you want a 2D instead of a 3D environment, put your desired values in for "nx1" and "nx2" while leaving "nx3" at its minimum value of 1. The same applies for 1D; set "nx2" to 1 as well, so that "nx1" is the only one with a value greater than 1.
The code units for physical space are defined by passing values for "x1min" and "x1max," and similarly for x2 and x3, as if centered on an origin. This is reconciled with the cell counts above by simple arithmetic: the physical extent along a given axis (e.g., x1max − x1min) is divided by the number of cells along that axis, which gives the physical size of each cell (in code units, which we will illustrate shortly).
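As a sketch, a 2D mesh of 64 × 32 cells spanning −1 to 1 code units in x1 and 0 to 1 in x2 might look like this in the input file (the block and parameter names follow standard Athena++ input-file conventions; the specific values here are purely illustrative):

```
<mesh>
nx1    = 64       # cells along x1
x1min  = -1.0
x1max  = 1.0

nx2    = 32       # cells along x2
x2min  = 0.0
x2max  = 1.0

nx3    = 1        # 2D run: x3 left at its minimum of 1
x3min  = -0.5
x3max  = 0.5
```

Each cell here spans (1 − (−1)) / 64 ≈ 0.031 code units along x1.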
If you are running in serial, that covers it. If instead you are running in parallel, you'll need to divide the mesh into "meshblocks." The primary rule is that the number of cells per meshblock along each dimension must evenly divide the total number of cells in that dimension for the whole mesh. For example, if the mesh is 32 cells across one dimension, then meshblocks of 16, 8, 4, or 2 cells in that dimension are acceptable, but 3 cells per block is not. You'll find these quantities in the input parameter file under the meshblock section, as opposed to the mesh section covered above. How many times your meshblock size divides into that of the total mesh determines how many meshblocks you have, and in turn how many can be distributed across your parallel processes.
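The bookkeeping can be sketched as follows (a hypothetical helper of ours, not part of Athena++ itself): given mesh and meshblock dimensions, validate divisibility and count the resulting blocks.

```python
def count_meshblocks(mesh, block):
    """Validate meshblock sizes against the mesh and count the blocks.

    mesh, block: tuples of cells per dimension, e.g. (64, 32, 1).
    """
    n_blocks = 1
    for n_mesh, n_block in zip(mesh, block):
        if n_mesh % n_block != 0:
            # e.g. 3-cell blocks cannot tile a 32-cell dimension
            raise ValueError(
                f"meshblock size {n_block} does not divide mesh size {n_mesh}")
        n_blocks *= n_mesh // n_block
    return n_blocks
```

For instance, a 32 × 32 × 1 mesh split into 16 × 8 × 1 meshblocks yields 2 × 4 = 8 blocks to hand to your parallel processes.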
Simulations run in a virtual environment over a virtual timeline, using arbitrary variables for both. This allows the same simulation, using scale-invariant physics, to be interpreted as, say, our circumgalactic medium, or as something small enough to fit within the solar system.
The essential question for interpreting the output, then, is interpreting those variables consistently. We thus define "code units": what each unit of these variables — spatial, temporal, and mass — represents in terms of SI units. The idea is that these code units are characteristic of the corresponding physical quantities in the environment of interest. From there, any other physical quantity of interest can have its dimensions expressed in terms of these code units.
What we chose:
- A code unit of space [L] is 1 kpc = 3.086 x 10^19 m
- A code unit of time [T] is 1 Myr = 3.154 x 10^13 s
- A code unit of mass [M] is 1 solar mass = 1.988 x 10^30 kg
Any physical quantity can be dimensionally reconstructed from these.
Let's run through some examples.

- Velocity: v = x / t = [L] / [T] = kpc Myr^-1
- Acceleration: a = dv/dt = [L] / [T]^2 = kpc Myr^-2
- Mass density: rho = M / V = [M] [L]^-3 = M_solar kpc^-3
- Pressure: P = F / A = ( [M] [L] [T]^-2 ) / [L]^2 = [M] [L]^-1 [T]^-2 = M_solar kpc^-1 Myr^-2
- Energy density: u = E / V = ( [M] [L]^2 [T]^-2 ) / [L]^3 = [M] [L]^-1 [T]^-2 = M_solar kpc^-1 Myr^-2
  - Note that these are the same dimensions as for pressure. This is often a physically useful fact; just because one is "thought of" as pressure and the other as energy density doesn't mean they aren't dimensionally equivalent.
- Magnetic field strength: B = sqrt( 2 P_B ) = [P]^(1/2) = ( [M] [L]^-1 [T]^-2 )^(1/2) = [M]^(1/2) [L]^(-1/2) [T]^-1
  - Here we have used the fact that magnetic pressure in a magnetohydrodynamic environment scales as the square of the magnetic field strength. This conveniently lets us sidestep the question of how to otherwise express teslas or gauss in terms of SI units, at least for our computational purposes.
To be clear: the idea with code units is that you type just the number of the quantity in question into the code, parameter file, etc., with the guarantee that it will be accurate so long as you have converted it to these physical dimensions first.
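Such conversions can be sketched as below (the constants follow the code-unit definitions above; the function names are ours for illustration, not part of Athena++):

```python
# Code-unit definitions from above, in SI
KPC = 3.086e19    # m, one code unit of length
MYR = 3.154e13    # s, one code unit of time
MSUN = 1.988e30   # kg, one code unit of mass

def velocity_to_code(v_si):
    """Convert a velocity in m/s to code units (kpc/Myr)."""
    return v_si / (KPC / MYR)

def density_to_code(rho_si):
    """Convert a mass density in kg/m^3 to code units (M_solar/kpc^3)."""
    return rho_si / (MSUN / KPC**3)
```

For example, a 200 km/s flow is `velocity_to_code(2.0e5)` ≈ 0.204 code units of velocity, and that is the number you would type into the parameter file.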
Athena++ uses x1, x2, and x3 as its coordinate notation in the code. For our purposes, these correspond to the x, y, and z axes in Cartesian coordinates. The underlying simulation code supports spherical and cylindrical coordinates as well, but these are not supported in the CGM Virtual Environment at this time. Work in Cartesian.
We must establish boundary conditions appropriate to our setup in the Athena++ input parameter file, since the mesh must eventually be roped off to accommodate the finiteness of real-world computation.
What we do for the CGM Virtual Environment in 2D is to use:

- "Periodic" boundary conditions at both sides of the x1 axis (it must be periodic at both boundaries for logical consistency)
- "Reflecting" at the bottom ("inner") boundary of the x2 axis and "outflow"/vacuum at the top ("outer") boundary
If you run in 3D instead, use periodic conditions for the sides of both the x1 and the x2 axes, and apply the aforementioned boundary conditions for the x2 axis to the x3 axis instead.
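In the input file this corresponds to something like the following for the 2D case (the boundary-flag parameter names are the standard Athena++ ones):

```
<mesh>
ix1_bc = periodic    # left side of x1
ox1_bc = periodic    # right side of x1
ix2_bc = reflecting  # bottom of x2 (galactic disk side)
ox2_bc = outflow     # top of x2 (interface to the IGM)
```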
In the CGM Virtual Environment, the top of the mesh represents the interface to the intergalactic medium, which to good approximation is a vacuum. The sides of the mesh are periodic since we are primarily interested in a vertical chunk of the CGM; we assume this chunk looks the same anywhere along the galactic disk, hence the periodicity. The bottom of the mesh is reflecting for computational simplicity regarding galactic gravity. Gas pulled down from above the disk (what we simulate) would partially fly through it, and gas pulled up from beneath the disk would do the same symmetrically; implementing a reflection at the bottom effectively captures that reality. Otherwise, we would have to simulate the space both above and below the galactic disk, with the disk itself in the middle of the mesh and outflow conditions at both the top and bottom boundaries.
We implement an exponential atmospheric density profile taken from McCourt et al. ([year]) and then superimpose a white-noise perturbation to seed dynamics in the ensuing simulation.
The exponential profile is rho(y) = rho_0 exp( -y / H ), where H is the atmospheric scale height. This is the hydrostatic-equilibrium profile for an isothermal atmosphere under a constant gravitational acceleration, the treatment of gravity we describe below.
The perturbation has two parts, corresponding to the amplitude and wavelength of its underlying oscillatory nature. As we iterate through each cell along x (x1) at a given height y (x2) in the mesh, we compute a superposition of density modes for that cell. The procedure is to step through the integer range n = [2, 40] and, for each n, compute a random amplitude A between 10^-4 and 1; the given mode is then A cos( 2 pi n x / L ), where L is the spatial length of the mesh and x is in physical code units, not cell count in the mesh. This implementation is based on the work done by McCourt et al. ([year]) on simulating the IGM immediately outside galaxies.
This computed perturbation is then superimposed onto the exponential density profile at each cell, so that the final result is an effectively atmospheric setup mixed with white-noise fluctuations, seeding the instabilities that make the subsequent dynamics interesting.
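A minimal NumPy sketch of this superposition (our own helper, assuming the mode and amplitude ranges above):

```python
import numpy as np

def density_perturbation(x, length, n_min=2, n_max=40,
                         amp_min=1e-4, amp_max=1.0, seed=None):
    """Sum of cosine modes with random amplitudes, evaluated at position x.

    x may be a scalar or an array of cell-center positions in code units;
    length is the spatial extent L of the mesh along x.
    """
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(np.asarray(x, dtype=float))
    for n in range(n_min, n_max + 1):
        amplitude = rng.uniform(amp_min, amp_max)  # A in [10^-4, 1)
        delta += amplitude * np.cos(2.0 * np.pi * n * x / length)
    return delta
```

The returned array would then be superimposed on the exponential profile at each height; the exact normalization of the superposition is a choice of the implementation.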
Gravity in Athena++ is handled as a linear potential gradient (U = mgh, i.e., a constant acceleration g, where h is either x2 or x3 depending on 2D or 3D), the magnitude of which is determined by the acceleration parameter g passed by the user in the text input file. We've already defined this to be g = -2.1 x 10^-4 in code units. We obtain this approximation from centripetal force balance in the disk, using a common galactic rotational velocity of 200 km/s and a virial radius of 200 kpc.
Given the spatial scales at the CGM level, we think that treating gravity as a linear gradient is a safe approximation.
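The quoted number can be checked directly from the values above (200 km/s and 200 kpc, with the conversion constants being the code units defined earlier):

```python
KPC = 3.086e19   # m per code unit of length
MYR = 3.154e13   # s per code unit of time

v_rot = 2.0e5 * MYR / KPC   # 200 km/s converted to kpc/Myr
r_vir = 200.0               # virial radius in kpc (code units)

g = v_rot**2 / r_vir        # centripetal balance: g = v^2 / r
print(g)                    # ~2.1e-4 in code units
```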
We implement a cooling function based on that expected for the CGM in the temperature regime between 10^5 and 10^7 Kelvin, as per McQuinn and Werk (2017). We treat it in terms of a ratio of the cooling time to the gravitational dynamical time:
Assuming the dynamical time, and so the gravitational potential, is fixed, then the lower the time ratio is, the lower the cooling time is, implying that cooling is stronger. The dynamical time can be calculated explicitly by

t_dyn = sqrt( 2h / g ),
where the physical height h = (y + H) in our code, y being the spatial distance from the origin and H the atmospheric scale height at which we've centered the origin. We can thus calculate h, and g is immediately available.
The cooling contribution to the thermal energy update L is calculated from
which couches the update in terms of our available variables: the dynamical time and the ratio of the cooling time to it. The idea is that the user sets the strength of the cooling implicitly by passing a particular value in the input parameter file, as mentioned, and the rest is handled from there.
The total energy update, however, is calculated from heating as well as cooling. Inspired by McCourt et al. ([year]), we average the cooling magnitudes at each height; this spatial average is the heating H applied to each of that height's cells.
Then the total energy update for each cell is simply the heating minus the cooling, H − L.
We decided that this is the simplest option computationally while still being physically representative.
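The averaging step can be sketched as follows (the array shape is our assumption: rows are heights, columns are cells along x):

```python
import numpy as np

def net_heating(cooling):
    """Given per-cell cooling magnitudes L (shape: heights x cells along x),
    return the net update H - L, where H is the mean cooling at each height."""
    heating = cooling.mean(axis=1, keepdims=True)  # spatial average per height
    return heating - cooling
```

By construction the net update sums to zero across each height, so this heating prescription conserves the horizontally averaged thermal energy while still allowing cell-to-cell contrast to grow.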
Magnetic fields likely constitute a key component of the circumgalactic environment. They have been estimated to exist at upwards of a microgauss scale ([source]). At this level, they may impede otherwise ordinary hydrostatic processes. This potential plays an important role in ongoing investigations, so for now we include a way to implement a linear, homogeneous field. The ultimate goal is to allow the user to iterate through a variety of B-field topologies. The main requirement is that they be physically realistic: that is, that they satisfy div(B) = 0.
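A homogeneous field trivially satisfies this constraint, since all of its spatial derivatives vanish. As a sanity-check sketch for candidate topologies (our own helper, using centered differences on a cell-centered field):

```python
import numpy as np

def divergence_2d(bx, by, dx, dy):
    """Centered-difference divergence of a 2D field on the interior cells."""
    dbx_dx = (bx[1:-1, 2:] - bx[1:-1, :-2]) / (2.0 * dx)
    dby_dy = (by[2:, 1:-1] - by[:-2, 1:-1]) / (2.0 * dy)
    return dbx_dx + dby_dy

# A homogeneous field: div(B) = 0 everywhere by construction
bx = np.full((16, 16), 1.0e-6)   # uniform Bx in code units
by = np.zeros((16, 16))
div_b = divergence_2d(bx, by, dx=0.1, dy=0.1)
```

Note that in the actual MHD solver, divergence-free evolution is maintained by the scheme itself; a check like this is only useful for vetting user-supplied initial topologies.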