Code structure

todo: rewrite this page into readable form

As has been said in the problem description, the typical components of each simulation task are particles, particle sources, device geometry, and external static or quasistatic electromagnetic fields. Besides these, we have to specify the simulation volume and the time interval. Apart from that, there are various parameters specific to the particle-in-cell method, such as the number of nodes in the spatial mesh used to compute the particles' fields, the algorithm for solving the Poisson equation, and so on.

The code structure tries to follow the problem description: for each typical object found in the problem statement there is usually a corresponding data structure that represents it in the code. Thus there are classes that model individual particles (Particle), particle sources (ParticleSourceBox, ParticleSourceCylinder, etc.), the time interval and time step (Time_grid), and so on.

From a high-level perspective, the code reads a config file, uses it to initialize all necessary objects, and then starts the time-evolution algorithm. Periodically it saves the state of the simulation to disk.

During the run, the instances of the particle sources, the time grid, and the other classes are stored in an object of the Domain class, which is supposed to contain all the information necessary for the simulation. The time-evolution algorithm is defined as a method of this class.

The Domain Class

The central class in the program is Domain: it is supposed to hold all the information necessary for the simulation.

Inside a Domain object there are several objects of other classes:

class Domain {
private:
  //Domain() {};
public:
  Time_grid time_grid;
  Spatial_mesh spat_mesh;
  Inner_regions_manager inner_regions;
  Particle_to_mesh_map particle_to_mesh_map;
  Field_solver field_solver;    
  Particle_sources_manager particle_sources;
  External_magnetic_field external_magnetic_field;
  Particle_interaction_model particle_interaction_model;
...

• Time_grid - contains information about the time duration of the simulation and the current time moment.
• Spatial_mesh - the size of the simulation volume and its discretization; holds the data on the nodes of the mesh.
• Inner_regions_manager - inner regions are regions inside the simulation volume that are treated as obstacles for particles.
• Particle_to_mesh_map - responsible for mapping the particles' charge to the mesh nodes and the electric field from the mesh nodes to the particles.
• Field_solver - solves the field equations to obtain the fields generated by the particles.
• Particle_sources_manager - particle sources generate new particles each time step.
• External_magnetic_field - allows one to specify a uniform magnetic field in the simulation region.
• Particle_interaction_model - determines whether the particles' self-interaction is taken into account or omitted.

The Main Function

Execution of the program begins in the main function. It parses the command line and reads the config file, then starts the simulation:

int main( int argc, char *argv[] )
{ 
...
    //// Parse command line
    parse_cmd_line( argc, argv, config_file );
    //// Read config
    Config conf( config_file );
    if ( mpi_process_rank == 0 )
        conf.print();
    // run simulation
    pic_simulation( conf );
...
}

Before the time evolution is started, all necessary data structures are initialized with values read from the config file (Domain dom( conf );). Then the fields in the domain produced only by the inner regions and the boundary potentials (without any particles) are evaluated and written to disk. After that, the particle simulation is started.

void pic_simulation( Config &conf )
{
    Domain dom( conf );          
    // fields in domain without any particles
    dom.eval_and_write_fields_without_particles( conf );
    // run simulation
    dom.run_pic( conf );
    
    return;
}
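
The Domain constructor itself is not reproduced on this page. A minimal sketch of what Domain dom( conf ); amounts to, assuming each member class provides a constructor that extracts its own parameters from Config, could look like this (the member constructor signatures are assumptions, not the actual code):

// Sketch only: forward the parsed config to every member in the initializer list.
Domain::Domain( Config &conf ) :
    time_grid( conf ),
    spat_mesh( conf ),
    inner_regions( conf, spat_mesh ),
    particle_to_mesh_map(),
    field_solver( spat_mesh, inner_regions ),
    particle_sources( conf ),
    external_magnetic_field( conf ),
    particle_interaction_model( conf )
{
    return;
}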

Time Evolution Algorithm

The central part of the program is the time-evolution procedure. It is defined in the Domain class in the run_pic method. The time loop starts after some preparations; each iteration advances the simulation one step forward. Besides that, it checks whether it is necessary to save the current results to disk and prints some information on the current progress to stdout:

void Domain::run_pic( Config &conf )
{
    ...
    for ( int i = current_node; i < total_time_iterations; i++ ){
        if ( mpi_process_rank == 0 ){
            std::cout << "Time step from " << i << " to " << i+1
                << " of " << total_time_iterations << std::endl;
        }
        advance_one_time_step();
        write_step_to_save( conf );
    }

    return;
}

The advance_one_time_step procedure evaluates the charge density at the spatial grid nodes, solves the field equation to obtain the potential, computes the fields from this potential, applies these fields to the particles (pushes the particles) and, finally, removes particles that flew out of the computational region and places new particles generated by the sources. In the actual code the order of these steps is somewhat different due to the time integration scheme employed (see sec. []), but the general idea is the same.

_If particles do not interact, only external fields act on them. It is sufficient to compute these fields once; there is no need to recalculate the charge density and the potential each time step._

void Domain::advance_one_time_step()
{    
    if ( particle_interaction_model.noninteracting ){
        push_particles();
        apply_domain_constrains();
        update_time_grid();
    } else if ( particle_interaction_model.pic ){
        push_particles();
        apply_domain_constrains();
        eval_charge_density();
        eval_potential_and_fields();
        update_time_grid();
    }
    return;
}

todo: write minimal explanation about rest of the code

Spatial_mesh and Particle_to_mesh_map

To evaluate the forces acting on the particles it is necessary to solve the Poisson equation. It is solved numerically on a rectangular grid. The parameters of the grid are defined in the Spatial_mesh class:

class Spatial_mesh {
  public:
    double x_volume_size, y_volume_size, z_volume_size;
    double x_cell_size, y_cell_size, z_cell_size;
    int x_n_nodes, y_n_nodes, z_n_nodes;
    boost::multi_array<Vec3d, 3> node_coordinates;
    boost::multi_array<double, 3> charge_density;
    boost::multi_array<double, 3> potential;
    boost::multi_array<Vec3d, 3> electric_field;
    ...

{x,y,z}_n_nodes is the number of nodes in the grid along the {X,Y,Z} axis, {x,y,z}_cell_size is the step between nodes. The values of the charge density, potential, and electric field at the nodes are stored in the corresponding 3d arrays.
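
The exact mesh initialization is not shown here. Assuming nodes are placed on both boundaries of the volume along each axis, the cell sizes would follow from the volume sizes and node counts roughly as in this sketch:

// Sketch only: relation between volume size, node count and cell size,
// assuming the mesh includes nodes on both boundaries (n_nodes >= 2).
x_cell_size = x_volume_size / ( x_n_nodes - 1 );
y_cell_size = y_volume_size / ( y_n_nodes - 1 );
z_cell_size = z_volume_size / ( z_n_nodes - 1 );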

Interpolation of the particles' charge to the nodes is done in the weight_particles_charge_to_mesh function, defined in the Particle_to_mesh_map class. After the interpolation is performed, it is possible to solve the Poisson equation on the mesh. This is done by the eval_potential function from the Field_solver class (see below). The result of the solution is the 'potential' array in Spatial_mesh, filled with the numerically obtained potential values in the domain. The electric field is the (negative) gradient of the potential. It is calculated by the eval_fields_from_potential function, also located in the Field_solver class.
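
The exact weighting scheme used by weight_particles_charge_to_mesh is not reproduced here. A typical first-order (cloud-in-cell) weighting of a single particle's charge onto the eight surrounding nodes, written as a self-contained sketch with illustrative names, looks like this:

#include <boost/multi_array.hpp>

// Sketch of cloud-in-cell charge weighting for one particle. Boundary handling
// and the conversion of deposited charge to charge density (division by the
// node volume) are omitted; the actual code may use a different scheme.
void weight_single_particle_cic( double charge,
                                 double x, double y, double z,
                                 double dx, double dy, double dz,
                                 boost::multi_array<double, 3> &charge_density )
{
    // Cell containing the particle and normalized offsets inside that cell.
    int i = static_cast<int>( x / dx );
    int j = static_cast<int>( y / dy );
    int k = static_cast<int>( z / dz );
    double fx = x / dx - i;
    double fy = y / dy - j;
    double fz = z / dz - k;
    // Distribute the charge over the 8 surrounding nodes with trilinear weights.
    for ( int a = 0; a <= 1; a++ ) {
        for ( int b = 0; b <= 1; b++ ) {
            for ( int c = 0; c <= 1; c++ ) {
                double w = ( a ? fx : 1.0 - fx )
                         * ( b ? fy : 1.0 - fy )
                         * ( c ? fz : 1.0 - fz );
                charge_density[i + a][j + b][k + c] += w * charge;
            }
        }
    }
}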

Field_solver

The field solver is responsible for the solution of the Poisson equation. In the time-evolution loop in the Domain class, the Poisson equation solution is invoked in the form

void Domain::eval_potential_and_fields()
{
    field_solver.eval_potential( spat_mesh, inner_regions );
    field_solver.eval_fields_from_potential( spat_mesh );
    return;
}

The eval_potential function has access to all the information about the spatial mesh: the number of nodes, the charge on each node, the node coordinates, etc. Given that information, its purpose is to obtain a numerical solution of the Poisson equation and fill the potential array in the spat_mesh object.

The solution of the Poisson equation is done with the PETSc library. A 7-diagonal matrix is built in the construct_equation_matrix_in_full_domain function:

void Field_solver::construct_equation_matrix_in_full_domain( Mat *A, ... )
{
    ...
    PetscInt nonzero_per_row = 7; // approx
    // d2/dx2 term, scaled by dy^2 * dz^2
    construct_d2dx2_in_3d( A, nx, ny, nz, rstart, rend );
    ierr = MatScale( *A, dy * dy * dz * dz ); CHKERRXX( ierr );
    // d2/dy2 term, scaled by dx^2 * dz^2 and added to A
    alloc_petsc_matrix( &d2dy2, nlocal, nlocal, nrow, ncol, nonzero_per_row );
    construct_d2dy2_in_3d( &d2dy2, nx, ny, nz, rstart, rend );
    ierr = MatAXPY( *A, dx * dx * dz * dz, d2dy2, DIFFERENT_NONZERO_PATTERN ); CHKERRXX( ierr );
    ierr = MatDestroy( &d2dy2 ); CHKERRXX( ierr );
    // d2/dz2 term, scaled by dx^2 * dy^2 and added to A
    alloc_petsc_matrix( &d2dz2, nlocal, nlocal, nrow, ncol, nonzero_per_row );
    construct_d2dz2_in_3d( &d2dz2, nx, ny, nz, rstart, rend );
    ierr = MatAXPY( *A, dx * dx * dy * dy, d2dz2, DIFFERENT_NONZERO_PATTERN ); CHKERRXX( ierr );
    ierr = MatDestroy( &d2dz2 ); CHKERRXX( ierr );
    //
    return;
}
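
The scale factors above can be read as follows. Assuming construct_d2dx2_in_3d and its y/z counterparts assemble the plain 1, -2, 1 second-difference stencils, the resulting matrix corresponds to the 7-point discretization of the Poisson equation multiplied through by dx*dx*dy*dy*dz*dz (which avoids divisions in the matrix entries):

dy*dy*dz*dz * ( phi[i-1,j,k] - 2*phi[i,j,k] + phi[i+1,j,k] )
  + dx*dx*dz*dz * ( phi[i,j-1,k] - 2*phi[i,j,k] + phi[i,j+1,k] )
  + dx*dx*dy*dy * ( phi[i,j,k-1] - 2*phi[i,j,k] + phi[i,j,k+1] ) = dx*dx*dy*dy*dz*dz * f[i,j,k]

where f[i,j,k] is the right-hand side proportional to the charge density at the node (the exact factor depends on the unit system and is assembled together with the right-hand side vector, not in this matrix).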

Actually, the inner objects act not only as obstacles for particles, but also as conducting objects under a fixed potential. That is, they fix the potential values on the nodes they occupy, and the boundary conditions near them have to be changed accordingly. This results in changes to the 7-diagonal matrix (see the construct_equation_matrix function):

void Field_solver::construct_equation_matrix( Mat *A, ... )
{
    ...
    construct_equation_matrix_in_full_domain( A, nx, ny, nz, dx, dy, dz, nlocal, rstart, rend );
    cross_out_nodes_occupied_by_objects( A, nx, ny, nz, inner_regions );
    modify_equation_near_object_boundaries( A, nx, ny, nz, dx, dy, dz, inner_regions );
}
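
cross_out_nodes_occupied_by_objects and modify_equation_near_object_boundaries are not reproduced here. A common way to impose a fixed potential on a set of nodes in PETSc (a sketch of the general technique, not necessarily what these functions do) is to zero the corresponding matrix rows, put 1 on the diagonal, and write the fixed potential into the right-hand side; nodes_occupied_by_inner_regions and phi_fixed below are hypothetical:

// Sketch only: turn each occupied row into the trivial equation 1 * phi = phi_fixed.
std::vector<PetscInt> rows = nodes_occupied_by_inner_regions(); // hypothetical helper
ierr = MatZeroRows( *A, rows.size(), rows.data(), 1.0, NULL, NULL ); CHKERRXX( ierr );
for ( PetscInt row : rows ) {
    ierr = VecSetValue( rhs, row, phi_fixed, INSERT_VALUES ); CHKERRXX( ierr );
}
ierr = VecAssemblyBegin( rhs ); CHKERRXX( ierr );
ierr = VecAssemblyEnd( rhs ); CHKERRXX( ierr );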

Such an approach complicates the solution algorithm and probably should be changed somehow.

The solution is done by PETSc: the ierr = KSPSolve( ksp, rhs, phi_vec ); CHKERRXX( ierr ); line in the solve_poisson_eqn function. Then the results are placed into the appropriate array in the spat_mesh object (the transfer_solution_to_spat_mesh( spat_mesh ) call).
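
The surrounding PETSc setup is not shown on this page. The usual KSP workflow (a generic PETSc pattern for recent versions, not necessarily the exact calls made in solve_poisson_eqn) is:

// Generic PETSc KSP usage: create the solver, attach the matrix, let
// command-line options select the method and preconditioner, then solve A * phi = rhs.
KSP ksp;
ierr = KSPCreate( PETSC_COMM_WORLD, &ksp ); CHKERRXX( ierr );
ierr = KSPSetOperators( ksp, A, A ); CHKERRXX( ierr );
ierr = KSPSetFromOptions( ksp ); CHKERRXX( ierr );
ierr = KSPSolve( ksp, rhs, phi_vec ); CHKERRXX( ierr );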

After the potential is obtained, the fields are calculated. This is a simple 2-point difference (see eval_fields_from_potential in Field_solver). The fields are also placed into the appropriate array in spat_mesh.
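
For an interior node, such a difference of the potential along X would look roughly like the line below (a sketch; boundary nodes need a one-sided difference, and the sign comes from E = -grad(phi)):

// Sketch: x-component of the electric field at an interior node (i, j, k).
double ex = -( potential[i+1][j][k] - potential[i-1][j][k] ) / ( 2.0 * x_cell_size );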

Particle sources

Particle sources place new particles into the domain. Since a particle is characterized by its charge, mass, position, and momentum, a particle source has to contain such information for the particles it generates. Charge and mass are fixed in the config. The momentum distribution is Maxwellian with the same temperature for each of the x, y, z momentum components.
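
For reference, sampling a Maxwellian momentum distribution amounts to drawing each component from a Gaussian with zero mean and standard deviation sqrt( mass * kT ). A self-contained sketch (the actual source classes and RNG usage may differ):

#include <cmath>
#include <random>

// Sketch: Maxwellian momentum sampling. Each component is an independent
// Gaussian with standard deviation sqrt( mass * kT ), where kT is the source
// temperature in energy units. A drift momentum, if any, would be added on top.
void sample_maxwell_momentum( double mass, double kT, std::mt19937 &rng,
                              double &px, double &py, double &pz )
{
    std::normal_distribution<double> gauss( 0.0, std::sqrt( mass * kT ) );
    px = gauss( rng );
    py = gauss( rng );
    pz = gauss( rng );
}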

Position distribution: a particle source has a certain shape, and particles are distributed uniformly inside it. Various shapes are supported via inheritance. todo: write about dynamic_cast in the constructor; the geometric shape is not known in advance.
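
The idea behind the inheritance can be sketched as follows: a common base class declares the method that returns a uniformly distributed position inside the source shape, and each concrete shape overrides it. The class and member names below are illustrative, not the actual ones:

#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

// Base class: a source shape only has to know how to generate a uniform point inside itself.
class SourceShape {
public:
    virtual Vec3 uniform_position_inside( std::mt19937 &rng ) const = 0;
    virtual ~SourceShape() {}
};

// Box: independent uniform coordinates along each axis.
class BoxShape : public SourceShape {
    double x0, x1, y0, y1, z0, z1;
public:
    BoxShape( double x0, double x1, double y0, double y1, double z0, double z1 )
        : x0(x0), x1(x1), y0(y0), y1(y1), z0(z0), z1(z1) {}
    Vec3 uniform_position_inside( std::mt19937 &rng ) const override {
        std::uniform_real_distribution<double> ux( x0, x1 ), uy( y0, y1 ), uz( z0, z1 );
        return { ux( rng ), uy( rng ), uz( rng ) };
    }
};

// Cylinder with axis along Z: the radius is sampled as R * sqrt(u) so that points
// are uniform over the disk, not clustered near the axis.
class CylinderShape : public SourceShape {
    double axis_x, axis_y, z0, z1, radius;
public:
    CylinderShape( double ax, double ay, double z0, double z1, double r )
        : axis_x(ax), axis_y(ay), z0(z0), z1(z1), radius(r) {}
    Vec3 uniform_position_inside( std::mt19937 &rng ) const override {
        const double pi = 3.14159265358979323846;
        std::uniform_real_distribution<double> u01( 0.0, 1.0 ), uz( z0, z1 );
        double r = radius * std::sqrt( u01( rng ) );
        double phi = 2.0 * pi * u01( rng );
        return { axis_x + r * std::cos( phi ), axis_y + r * std::sin( phi ), uz( rng ) };
    }
};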

There can be more than one source in the region. The container class for them is Particle_sources_manager, defined in the particle_source.h header file.

Inner regions

The code organization is mostly similar to that of the particle sources. Inner regions can be of different shapes (handled by inheritance). The only method required is a test of whether a certain point is inside the object. It is used to identify the nodes occupied by the object and the particles that hit it.
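
A minimal sketch of that interface (illustrative names, not the actual classes):

// Base class: the only geometry-specific operation is the point-inside test.
class RegionShape {
public:
    virtual bool check_if_point_inside( double x, double y, double z ) const = 0;
    virtual ~RegionShape() {}
};

// Box-shaped inner region: the test is a plain coordinate comparison.
class BoxRegion : public RegionShape {
    double x0, x1, y0, y1, z0, z1;
public:
    BoxRegion( double x0, double x1, double y0, double y1, double z0, double z1 )
        : x0(x0), x1(x1), y0(y0), y1(y1), z0(z0), z1(z1) {}
    bool check_if_point_inside( double x, double y, double z ) const override {
        return x0 <= x && x <= x1 && y0 <= y && y <= y1 && z0 <= z && z <= z1;
    }
};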

Parallelization

Parallelization is done with MPI. Each process has a full copy of the Domain object. The copies are identical except for the particles.

Particle generation is parallelized. The number of particles to generate each step is distributed evenly between the processes. A particle belongs to the process where it was generated and never changes its process during the simulation. Such an approach results in ... . For example, when the charge on the nodes is calculated, it has to be summed over the processes. When it is necessary to advance the particles' positions and momenta, each process simply works with the particles that belong to it.
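
For example, summing the per-process charge contributions on every mesh node can be expressed as a single collective reduction (a sketch; the actual code may organize this differently):

#include <mpi.h>

// Sketch: after each process has deposited the charge of its own particles,
// sum the contributions node by node so every process holds the total charge.
// charge_density is the boost::multi_array from Spatial_mesh; its underlying
// storage is contiguous, so it can be reduced in place in one call.
MPI_Allreduce( MPI_IN_PLACE,
               charge_density.data(),
               charge_density.num_elements(),
               MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD );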

The solution of the Poisson equation is parallelized by PETSc. Each process is responsible for its own part of the solution.

Writing to file is parallelized by HDF5 library.
