Particle Swarm Optimization - nasa/gunns GitHub Wiki

PSO is a good method for multi-variate optimization, i.e. tuning multiple parameters of your model simultaneously. It is implemented in gunns/core/optimization/GunnsOptimParticleSwarm.

Our PSO is demonstrated in the gunns/sims/SIM_mc sim with the RUN_mc/input.py input file. We have a simple fluid network defined in the gunns/sims/SIM_mc/model/GunnsMcModelFluid.xml drawing, shown below for reference. We have annotated the 4 'truth' tuning targets (in red) that we are optimizing, and the 4 model parameters (in blue) for which we have trend data that we are trying to match.

[Figure: GunnsMcModelFluid network drawing]

The RUN_mc/input.py file shows how we configure the PSO. The swarm has 30 particles and runs 100 iterations, for 3000 model runs total. The target trajectories of the 4 blue parameters are given in the input_driver_data.csv file. The PSO searches the state space (the 4 red parameters) for the best set of values that causes the model to reproduce those target trajectories. We define the state space as the ranges of the conductivity tuning values for the 2 valves and 2 conductors we are trying to optimize, giving each a range from zero to 10 times the actual 'truth' tuning. We choose 10x because tuning a GUNNS fluid conductor with its real hydraulic cross-sectional area usually lands within 1 order of magnitude of the value needed for accurate performance.
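
For reference, the sketch below shows roughly what this configuration might look like in a Trick-style input file, using the terms described under Configuration Data below. The member names come from GunnsOptimParticleSwarmConfigData; the way the config object is created, the `trick` module access, and the enum spelling are assumptions for illustration, so see the actual RUN_mc/input.py for the real calls.

```python
# Hypothetical sketch of the PSO configuration in a Trick-style input file.
# The 'trick' module is provided by the Trick sim environment; the instantiation
# and enum access below are assumed for illustration.
psoConfig = trick.GunnsOptimParticleSwarmConfigData()   # assumed instantiation
psoConfig.mNumParticles     = 30      # swarm size
psoConfig.mMaxEpoch         = 100     # 30 particles * 100 epochs = 3000 model runs
psoConfig.mInertiaWeight    = 0.5     # inertia weight on the first epoch
psoConfig.mInertiaWeightEnd = 0.5     # inertia weight on the final epoch
psoConfig.mCognitiveCoeff   = 2.0     # pull towards each particle's personal best
psoConfig.mSocialCoeff      = 2.0     # pull towards the swarm's global best
psoConfig.mMaxVelocity      = 0.5     # at most 50% of the state space per epoch
psoConfig.mRandomSeed       = 42      # any seed will do
psoConfig.mInitDistribution = trick.GunnsOptimParticleSwarm.MIN_MAX_CORNERS  # assumed enum access
# The state space itself (each conductivity ranging from 0 to 10x the truth
# tuning) is registered separately in input.py.
```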

The PSO converges to within 1% error for all 4 parameters after 100 iterations, as shown below:

[Figure: tuning_error]

The cost function improves towards zero over the 100 iterations. We show the best cost of the swarm, and the costs of 2 individual particles for comparison:

[Figure: cost_history]

The plot below shows the tuning improving towards the 'truth' values. We plot the state with the sum of the 2 valve conductances on the X axis and the sum of the 2 conductor conductances on the Y axis:

[Figure: swarm_state_history]
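
As a rough illustration of how such a plot can be made from the PSO outputs (described under Outputs below), here is a minimal sketch that reads pso_swarm_history.csv and sums the conductance columns. The column indices are placeholders rather than the actual file layout, so check the header row of your own output before using it.

```python
import csv
import matplotlib.pyplot as plt

# Minimal sketch: plot a state trajectory from pso_swarm_history.csv.
# Column indices are assumed for illustration; the real order depends on how the
# state variables were registered in input.py, so check the file's header row.
xs, ys = [], []
with open('pso_swarm_history.csv') as f:
    reader = csv.reader(f)            # this file is comma-delimited
    next(reader)                      # skip the header row
    for row in reader:
        valve1, valve2 = float(row[1]), float(row[2])   # assumed: the 2 valve conductances
        cond1,  cond2  = float(row[3]), float(row[4])   # assumed: the 2 conductor conductances
        xs.append(valve1 + valve2)    # X axis: sum of the 2 valve conductances
        ys.append(cond1 + cond2)      # Y axis: sum of the 2 conductor conductances

plt.plot(xs, ys, marker='.')
plt.xlabel('sum of valve conductances')
plt.ylabel('sum of conductor conductances')
plt.title('swarm state history')
plt.show()
```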

Configuration Data

This is the configuration data for the PSO, defined in the GunnsOptimParticleSwarmConfigData class in gunns/core/optimization/GunnsOptimParticleSwarm.hh.

  • mNumParticles: (must be > 0) this is the size of the swarm, i.e. the number of particles. We recommend 30 particles for general use.
  • mMaxEpoch: (must be > 0) this is the number of epochs, or iterations of the swarm, that will be run. We recommend an initial value of 100 epochs. The total number of monte carlo runs is mNumParticles * mMaxEpoch, i.e. 30 particles by 100 epochs = 3000 total monte carlo runs.
  • mInertiaWeight: (must be > 0) this is the inertia weight of the particles on the first epoch. We recommend a value of 0.5.
  • mInertiaWeightEnd: (must be > 0) this is the inertia weight of the particles on the final epoch. We recommend a value of 0.5. The inertia weight ramps linearly from the initial value to this ending value over the epochs. This can be used to create an optional 'annealing' effect similar to annealing optimizers, whereby the particles lose energy in later epochs and tend to settle towards their personal best results.
  • mCognitiveCoeff: (must be > 0) this weights the particles towards their personal best state instead of the global best. We recommend a value of 2.0.
  • mSocialCoeff: (must be > 0) this weights the particles towards the global best state instead of their personal best. We recommend a value of 2.0.
  • mMaxVelocity: (must be between 0 and 1) this limits how fast the particles can traverse the state space per epoch. It is the fraction of the state space that can be traversed per epoch, so a value of 0.5 limits the particles to travel no more than 50% across the state space in any dimension per epoch. We recommend starting with a value of 0.5 and lowering it in subsequent runs as you fine-tune the optimization. The sketch after this list shows how this limit and the inertia, cognitive, and social weights above enter the particle update.
  • mRandomSeed: this is the seed number for the random number generator. Any value will do, but trying runs with different seeds can sometimes improve your results.
  • mInitDistribution: this controls how we initialize the particle positions in the state space. The options are:
    • RANDOM: particle positions are given a uniform random distribution.
    • MIN_MAX_CORNERS: this initializes half the particles at the corner of minimum values for all the state space dimensions, and the other half at the maximum values corner. This can be a good method to use if the optimized target is likely to be near the minimum or maximum range of the state space. We recommend using this option for tuning GUNNS fluid conductors.
    • FILE: this initializes the particle states by reading them from the pso_state.csv file. This file is output by the PSO, but it can also be created by hand to supply your own custom initial state.
    • FILE_CONTINUOUS: this causes each epoch to initialize its swarm state to the output of the previous epoch state, via the pso_state.csv file. This is used in our current implementation of the parallel processing mode.
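
For readers new to PSO, the sketch below shows how the inertia, cognitive, social, and velocity-limit terms above combine in the textbook particle update. This is the standard form of the algorithm written for illustration only; variable names and the exact update inside GunnsOptimParticleSwarm may differ.

```python
import random

def pso_update(position, velocity, personal_best, global_best,
               state_min, state_max, inertia, cognitive, social, max_velocity):
    """Textbook PSO update for one particle, illustrating the roles of the
    configuration terms above.  Not a copy of GunnsOptimParticleSwarm's code."""
    new_position = []
    new_velocity = []
    for i in range(len(position)):
        span   = state_max[i] - state_min[i]
        v_max  = max_velocity * span                      # mMaxVelocity limits travel per epoch
        r1, r2 = random.random(), random.random()
        v = (inertia   * velocity[i]                                # mInertiaWeight term
             + cognitive * r1 * (personal_best[i] - position[i])    # mCognitiveCoeff term
             + social    * r2 * (global_best[i]   - position[i]))   # mSocialCoeff term
        v = max(-v_max, min(v_max, v))                    # clamp the velocity
        x = position[i] + v
        x = max(state_min[i], min(state_max[i], x))       # keep inside the state space
        new_velocity.append(v)
        new_position.append(x)
    return new_position, new_velocity
```

In the actual optimizer the inertia weight also ramps linearly from mInertiaWeight to mInertiaWeightEnd over the epochs, which is omitted here for brevity.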

Outputs

The PSO outputs 3 files to the sim folder:

  • pso_state.csv: this is the swarm state from the most recent epoch, including position, velocity, acceleration and cost for all of the particles, as well as the global best state. The global best state position is the optimized result for your model. This file is space-delimited (a minimal parsing sketch follows this list).
  • pso_swarm_history.csv: this lists the position state and cost of all the particles, plus the global best, for all epochs. This can be used to see trajectories of the particles and how they trend over the epochs. This file is comma-delimited.
  • pso_cost_history.csv: this lists the global best cost for each epoch. This can be used to quickly verify how well the swarm is converging towards zero cost.
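
As a rough example of pulling the optimized result out of pso_state.csv, here is a minimal parsing sketch. The label of the global best row and the column layout are assumptions, so check the header of your own file before relying on it.

```python
# Minimal sketch: find the global best state in pso_state.csv.
# The file is space-delimited; the 'global best' row label is assumed for
# illustration -- check your own file for its actual row labels and columns.
with open('pso_state.csv') as f:
    header = f.readline().split()
    print('columns:', header)
    for line in f:
        fields = line.split()
        if fields and 'best' in fields[0].lower():   # assumed row label
            print('global best row:', fields)
```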

Recommendations

  • For general use, we recommend using a swarm size of 30 particles, an initial run of 100 epochs, a max velocity limit of 0.5, and the MIN_MAX_CORNERS initial swarm state option.
  • For further optimization after the initial 100 epochs, lower the velocity limit and configure the next batch of 100 epochs to start from the output of the previous set by using the FILE_CONTINUOUS initial condition option (a hedged sketch follows).
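
A hedged sketch of what that follow-on batch might look like, reusing the hypothetical configuration handle from the sketch near the top of this page; the lowered velocity limit is an example value, not a value taken from the sim.

```python
# Follow-on batch: start from the previous swarm state and slow the particles down.
# The handle, enum access, and the 0.2 velocity limit are assumptions for illustration.
psoConfig.mMaxEpoch         = 100                                             # another 100 epochs
psoConfig.mMaxVelocity      = 0.2                                             # lower than the initial 0.5
psoConfig.mInitDistribution = trick.GunnsOptimParticleSwarm.FILE_CONTINUOUS   # start from pso_state.csv output
```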