ParticleStructure General Usage Guide - SCOREC/pumi-pic GitHub Wiki

The following is a detailed breakdown of PUMI-PiC particle structure functionality.

Data Access

get

get<N> provides direct access to the Nth member type of particle data via a Slice object.

Template Parameters

  • N: int representing the Nth member type of particle data.

Note: When used with the SellCSigma or CSR particle structures, the Slice object holds a form of smart pointer to the particle data. Thus, when adding large numbers of particles to a structure, care must be taken to ensure that these objects go out of scope so that the previously allocated particle data is released from memory.
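For example, scoping the slices in a block ensures they release their hold on the old allocation before a large rebuild. This is a sketch; csr and the rebuild arguments are assumed to be set up as in the examples below:

```cpp
{
  // Slices hold a reference to the current particle data allocation.
  auto intSlice = csr->get<0>();
  auto doublesSlice = csr->get<1>();
  // ... read or write particle data inside a pumipic::parallel_for ...
} // slices go out of scope here, releasing their hold on the old data
// Now safe to add many new particles without keeping two copies alive.
csr->rebuild(new_element, new_particle_elements, new_particle_info);
```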

parallel_for

pumipic::parallel_for interfaces with each ParticleStructure implementation to provide a way to loop through particle data. It is best used in conjunction with one or more get calls to particle data.

Parameters

  • particle structure: ParticleStructure* pointing to the particle structure to loop through
  • function: PS_LAMBDA function object that takes 3 parameters:
    • element id: int index of the mesh element the current particle resides in
    • particle position: int index of the current particle's position in the overall structure
    • active identifier: bool indicating whether the current particle is active (1) or padding (0)
  • string identifier: string used to name the parallel loop (useful when profiling GPU kernels)

Examples

namespace ps = particle_structs;
using T0 = int;
using T1 = double[3];
using Types = ps::MemberTypes<T0, T1>;
using ExeSpace = Kokkos::DefaultExecutionSpace;
using MemSpace = ExeSpace::memory_space;
typedef ps::ParticleStructure<Types, MemSpace> PS;
using pumipic::lid_t;
using kkLidView = PS::kkLidView;
using kkGidView = PS::kkGidView;

// setup for ParticleStructure
lid_t num_elems = 5;
lid_t num_ptcls = 20;
Kokkos::TeamPolicy<ExeSpace> policy(num_elems,4);
kkLidView particles_per_element("particles_per_element", num_elems);
Kokkos::parallel_for(num_elems,
  KOKKOS_LAMBDA(const int& i) {
    particles_per_element(i) = 4; // 4 particles per element
  });
// initialize ParticleStructure (CSR)
PS* csr = new ps::CSR<Types, MemSpace>(policy, num_elems, num_ptcls, kkGidView(), particles_per_element);

// get access to T0 and T1
auto intSlice = csr->get<0>();
auto doublesSlice = csr->get<1>();

// loop through structure, setting values
auto setValues = PS_LAMBDA(const int& elm, const int& ptcl, const bool& active) {
  if (active) {
    intSlice(ptcl) = elm;
    for (int i = 0; i < 3; i++)
      doublesSlice(ptcl,i) = elm*ptcl*0.01;
  }
};
ps::parallel_for(csr, setValues, "setValues");

Data Manipulation

Moving particles between mesh elements, as well as adding and deleting particles, is performed through the rebuild member function.

Parameters

  • new_element: Kokkos::View<int> where new_element(i) is the element to move the particle at index i to. The particle can also be removed by setting this value to -1.
  • new_particle_elements: [OPTIONAL] Kokkos::View<int> where new_particle_elements(i) is the element to place particle i of new_particle_info in.
  • new_particle_info: [OPTIONAL] pumipic::ParticleStructure::MTVs (MemberTypeViews) containing the data for the new particles

Examples

Moving/Deleting particles with rebuild:

namespace ps = particle_structs;
using T0 = int;
using T1 = double[3];
using Types = ps::MemberTypes<T0, T1>;
using ExeSpace = Kokkos::DefaultExecutionSpace;
using MemSpace = ExeSpace::memory_space;
typedef ps::ParticleStructure<Types, MemSpace> PS;
using pumipic::lid_t;
using kkLidView = PS::kkLidView;
using kkGidView = PS::kkGidView;

// setup for ParticleStructure
lid_t num_elems = 5;
lid_t num_ptcls = 20;
Kokkos::TeamPolicy<ExeSpace> policy(num_elems,4);
kkLidView particles_per_element("particles_per_element", num_elems);
Kokkos::parallel_for(num_elems,
  KOKKOS_LAMBDA(const int& i) {
    particles_per_element(i) = 4; // 4 particles per element
  });
// initialize ParticleStructure (CSR)
PS* csr = new ps::CSR<Types, MemSpace>(policy, num_elems, num_ptcls, kkGidView(), particles_per_element);

// assign new elements
// note: we use capacity() because, depending on the ParticleStructure implementation, the total size of the structure may be larger than the number of particles
kkLidView new_element("new_element",csr->capacity());
auto setNewElement = PS_LAMBDA(const int& elm, const int& ptcl, const bool& active) {
  if (active) { // technically unneeded here since inactive particles are ignored
    if (elm == 4)
      new_element(ptcl) = -1; // remove all particles in element 4
    else
      new_element(ptcl) = (elm+1); // set all other particles to move to next right element
  }
};
ps::parallel_for(csr, setNewElement, "setNewElement");
csr->rebuild(new_element);

Adding particles with rebuild:

namespace ps = particle_structs;
using T0 = int;
using T1 = double[3];
using Types = ps::MemberTypes<T0, T1>;
using ExeSpace = Kokkos::DefaultExecutionSpace;
using MemSpace = ExeSpace::memory_space;
typedef ps::ParticleStructure<Types, MemSpace> PS;
using pumipic::lid_t;
using kkLidView = PS::kkLidView;
using kkGidView = PS::kkGidView;
using MTVs = PS::MTVs;

// setup for ParticleStructure
lid_t num_elems = 5;
lid_t num_ptcls = 20;
Kokkos::TeamPolicy<ExeSpace> policy(num_elems,4);
kkLidView particles_per_element("particles_per_element", num_elems);
Kokkos::parallel_for(num_elems,
  KOKKOS_LAMBDA(const int& i) {
    particles_per_element(i) = 4; // 4 particles per element
  });
// initialize ParticleStructure (CSR)
PS* csr = new ps::CSR<Types, MemSpace>(policy, num_elems, num_ptcls, kkGidView(), particles_per_element);

// assign same elements
// note: we use capacity() because, depending on the ParticleStructure implementation, the total size of the structure may be larger than the number of particles
kkLidView new_element("new_element",csr->capacity());
auto setSameElement = PS_LAMBDA(const int& elm, const int& ptcl, const bool& active) {
  if (active) // technically unneeded here since inactive particles are ignored
    new_element(ptcl) = elm;
};
ps::parallel_for(csr, setSameElement, "setSameElement");

// new particle setup
lid_t num_new_ptcls = 5;
kkLidView new_particle_elements("new_particle_elements", num_new_ptcls);
Kokkos::parallel_for(num_new_ptcls,
  KOKKOS_LAMBDA(const int& i) {
    new_particle_elements(i) = i%num_elems; // 1 new particle per element
  });
MTVs new_particle_info = ps::createMemberViews<Types>(num_new_ptcls); // new particle data
int* new_ints = new int[num_new_ptcls];
double (*new_doubles)[3] = new double[num_new_ptcls][3];
for (int i = 0; i < num_new_ptcls; i++) { // assign new particle values
  new_ints[i] = num_ptcls+i;
  for (int j = 0; j < 3; j++)
    new_doubles[i][j] = (num_ptcls+i)*j;
}
// copy values into MTVs structure
ps::hostToDevice(ps::getMemberView<Types, 0>(new_particle_info), new_ints);
ps::hostToDevice(ps::getMemberView<Types, 1>(new_particle_info), new_doubles);

delete [] new_ints;
delete [] new_doubles;

csr->rebuild(new_element, new_particle_elements, new_particle_info);

Migration

Particle migration allows one or more processes to share the storage (and associated load) of the particles in a given element. It is implemented through the migrate member function using MPI. Note that for migration to be possible, a set of global identification numbers for the elements must be provided at ParticleStructure construction. For convenience, migrate also calls rebuild, since migrating particles to other processes changes the contents of the local ParticleStructure.

Parameters

  • new_element: Kokkos::View<int> where new_element(i) is the element to move the particle at index i to. The particle can also be removed by setting this value to -1.
  • new_process: Kokkos::View<int> where new_process(i) is the rank (MPI process id) to move the particle at index i to. The value is ignored for particles marked for removal.
  • distributor: [OPTIONAL] pumipic::Distributor describing the set of processes this rank may communicate with during migration (see the neighbor-list example below); restricting communication to known neighbor ranks can reduce messaging overhead.
  • new_particle_elements: [OPTIONAL] Kokkos::View<int> where new_particle_elements(i) is the element to place particle i of new_particle_info in.
  • new_particle_info: [OPTIONAL] pumipic::ParticleStructure::MTVs (MemberTypeViews) containing data for new particles

Examples

Migration to share particles across all processes in shared elements:

namespace ps = particle_structs;
using T0 = int;
using T1 = double[3];
using Types = ps::MemberTypes<T0, T1>;
using ExeSpace = Kokkos::DefaultExecutionSpace;
using MemSpace = ExeSpace::memory_space;
typedef ps::ParticleStructure<Types, MemSpace> PS;
using pumipic::lid_t;
using kkLidView = PS::kkLidView;
using kkGidView = PS::kkGidView;
using MTVs = PS::MTVs;

int comm_rank, comm_size;
MPI_Comm_rank(MPI_COMM_WORLD, &comm_rank);
MPI_Comm_size(MPI_COMM_WORLD, &comm_size);

// setup for ParticleStructure
PS* csr;
lid_t num_elems = 5;
lid_t num_ptcls;
Kokkos::TeamPolicy<ExeSpace> policy(num_elems,4);
kkLidView particles_per_element("particles_per_element", num_elems);
kkGidView element_gids("element_gids", num_elems);
Kokkos::parallel_for(num_elems,
  KOKKOS_LAMBDA(const int& i) {
    particles_per_element(i) = 4; // 4 particles per element
    element_gids(i) = i;
  });
if (comm_rank == 0) { // initialize ParticleStructure (CSR) (20 particles on rank 0)
  num_ptcls = 20;
  csr = new ps::CSR<Types, MemSpace>(policy, num_elems, num_ptcls, element_gids, particles_per_element);
}
else { // initialize ParticleStructure (CSR) (0 particles on other ranks)
  num_ptcls = 0;
  csr = new ps::CSR<Types, MemSpace>(policy, num_elems, num_ptcls, element_gids, kkLidView());
}

// migration setup
kkLidView new_element("new_element", csr->capacity());
kkLidView new_process("new_process", csr->capacity());
auto setSameElementSpreadProcess = PS_LAMBDA(const int& elm, const int& ptcl, const bool& active) {
  if (active) {
    new_element(ptcl) = elm; // set same elements
    new_process(ptcl) = ptcl%comm_size; // spread particles across all processes
  }
};
ps::parallel_for(csr, setSameElementSpreadProcess, "setSameElementSpreadProcess");

csr->migrate(new_element, new_process);

Migration to share particles using Distributor

namespace ps = particle_structs;
using T0 = int;
using T1 = double[3];
using Types = ps::MemberTypes<T0, T1>;
using ExeSpace = Kokkos::DefaultExecutionSpace;
using MemSpace = ExeSpace::memory_space;
typedef ps::ParticleStructure<Types, MemSpace> PS;
using pumipic::lid_t;
using kkLidView = PS::kkLidView;
using kkGidView = PS::kkGidView;
using MTVs = PS::MTVs;

int comm_rank, comm_size;
MPI_Comm_rank(MPI_COMM_WORLD, &comm_rank);
MPI_Comm_size(MPI_COMM_WORLD, &comm_size);

// setup for ParticleStructure
lid_t num_elems = 5;
lid_t num_ptcls = 20;
Kokkos::TeamPolicy<ExeSpace> policy(num_elems,4);
kkLidView particles_per_element("particles_per_element", num_elems);
kkGidView element_gids("element_gids", num_elems);
Kokkos::parallel_for(num_elems,
  KOKKOS_LAMBDA(const int& i) {
    particles_per_element(i) = 4; // 4 particles per element
    element_gids(i) = i;
  });
PS* csr = new ps::CSR<Types, MemSpace>(policy, num_elems, num_ptcls, element_gids, particles_per_element);

// migration setup
// make distributor
int neighbors[3]; // this rank plus the processes to its "left" and "right"
neighbors[0] = comm_rank;
neighbors[1] = (comm_rank - 1 + comm_size) % comm_size;
neighbors[2] = (comm_rank + 1) % comm_size;
ps::Distributor<MemSpace> dist(std::min(comm_size,3), neighbors);
kkLidView new_element("new_element", csr->capacity());
kkLidView new_process("new_process", csr->capacity());
auto setSameElementRightProcess = PS_LAMBDA(const int& elm, const int& ptcl, const bool& active) {
  if (active) {
    new_element(ptcl) = elm; // set same elements
    new_process(ptcl) = (comm_rank + 1) % comm_size; // move to next "right" process
  }
};
ps::parallel_for(csr, setSameElementRightProcess, "setSameElementRightProcess");

csr->migrate(new_element, new_process, dist);

Copying Across Memory Spaces

Copying the ParticleStructure to a new memory space is implemented through the use of the copy function. This function copies all data from a ParticleStructure in a different memory space to this one.

Note that if both the current structure and the structure passed in are in the same memory space, the data is not duplicated; only a shallow copy takes place. Also note that this is not currently implemented for the CabM ParticleStructure.

Parameters

  • old ParticleStructure - a pointer to a ParticleStructure of the same type as this one, in a different memory space.
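A hedged sketch of how copy might be used to mirror a device-resident structure on the host; the type aliases follow the examples above, but the host-side constructor arguments and the exact copy call are assumptions, not a verified API:

```cpp
// Sketch (assumed usage): mirror a device-resident CSR on the host.
// device_csr, policy, num_elems, num_ptcls, and particles_per_element are
// assumed to exist as in the earlier examples.
using HostMemSpace = Kokkos::HostSpace;
typedef ps::ParticleStructure<Types, HostMemSpace> HostPS;
HostPS* host_csr = new ps::CSR<Types, HostMemSpace>(policy, num_elems, num_ptcls,
                                                    kkGidView(), particles_per_element);
// copy all particle data from the device structure into the host structure
host_csr->copy(device_csr);
```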