Memory Use


This topic opens with a general discussion of memory use in models and follows with a compendium of techniques to optimize memory use in time-based models.

Related topics

Topic contents

Introduction and Background

This topic describes techniques to simulate large populations of entities in time-based models by reducing per-entity memory requirements. It is not applicable to case-based models, which by design can simulate populations of unlimited size in a fixed amount of memory.

A computer's physical memory is a limited fixed resource, whether the computer is a desktop, a server, or a component of a cloud service. If memory requested by an application exceeds the computer's available physical memory, the request may be denied, causing the application to fail. More typically the computer will attempt to ration physical memory by swapping less-used portions of application memory to disk and bringing them back on demand, a process called paging. Paging is very slow relative to the speed of physical memory, but can work well if memory demands over time are concentrated in some regions of memory while being infrequent in others.

Models can request large amounts of memory and use that memory frequently and intensively. A time-based model with a large interacting population of entities accesses and updates those entities frequently during a run, over large regions of memory. Because those accesses are scattered rather than concentrated, models respond poorly to paging. When a model is starved for memory and starts paging, it may slow down by orders of magnitude and become unusable. So, in practice, all the entities in the population need to fit into the physical memory of the target computer.

Reducing per-entity memory use increases the maximum number of entities which can be in memory simultaneously. The techniques in this topic can help reduce per-entity memory use without changing model logic.

Time-based models have an inherent tradeoff between population size and model complexity, because the size of an entity increases with model complexity. Case-based models have no such tradeoff between population size and complexity, but they can't represent large interacting populations. When the modelling problem to be solved requires both large population size and high complexity, it may be possible to factor it into two models which are run sequentially: an upstream time-based model which simulates an interacting population with limited complexity, paired with a downstream case-based model with no population interactions but unlimited complexity. An example is the pairing of HPVMM and OncoSim, where the upstream time-based HPVMM model simulates infectious disease dynamics and the downstream case-based OncoSim model simulates disease screening, diagnosis, treatment, health consequences, and health system resources and costs. In the HPVMM-OncoSim pairing, HPVMM results on disease incidence (rates of infection) are communicated as inputs to the downstream OncoSim model.

[back to topic contents]

Bag of Tricks

This subtopic contains links to sections which describe techniques to manage memory use:

  • Exploit the resource use report
  • Suppress table groups
  • Change time type to float
  • Use value_out with flash tables
  • Enable entity packing
  • Use mutable real type
  • Prefer range and classification to int
  • Use bitset instead of bool array
  • Purge available entity list

It is not always appropriate to apply one of these techniques: it may not be worth the effort or the additional model complexity, or there may be a functionality trade-off which is not worth making. The Entity Member Packing report can help identify which techniques will be most fruitful. The list of links is followed by a further list of BoT candidates, techniques which are not yet written up in sections of their own.

BoT candidates:

  • compute rather than store
  • use smaller C types
  • hook to self-scheduling events, e.g. self_scheduling_int(age) (see the sketch after this list)
  • be economical with events
  • be economical with tables
  • avoid ordinal statistics
  • use a unitary Ticker entity to push common characteristics, e.g. year, to the population
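
As an illustration of the "hook to self-scheduling events" candidate, model code can hook a function to a self-scheduling attribute rather than writing an additional event, avoiding the per-entity memory that an additional event would require. The sketch below assumes a Person entity with an annual processing function named CelebrateBirthday; both names are illustrative:

entity Person
{
    //EN Annual processing performed on each birthday
    void CelebrateBirthday();
    hook CelebrateBirthday, self_scheduling_int(age);
};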

[back to topic contents]

Exploit the resource use report

Software developers often guess wrong about the causes of high resource use. It is best to gather data before embarking on efforts to improve efficiency. The Model Resource Use report was designed for that purpose.
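
As described in the Model Resource Use topic, the report is produced by the openM++ run time when it is enabled by a single options statement in model code, e.g.

options resource_use = on;

Because gathering these statistics adds some run-time cost, the option is typically turned off again for production runs.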

[back to BoT]
[back to topic contents]

Suppress table groups

Organize tables into functional groups using table_group statements. Then, use retain_tables to keep only the tables needed for the current purpose when the model is built. When a table is suppressed at build time, memory savings accrue both from the table cells and from the entity members associated with the table.

Organizing tables into groups and retaining only those required for immediate needs allows a model to have many tables without paying a high memory cost. If a diagnostic table is required after exploring run results, a variant of the model can be built with that table retained and the simulation re-run.
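
A sketch of what this might look like in model code follows; the table and group names are illustrative only:

table_group CoreTables //EN Tables required in every run
{
    PopulationByYear, AverageIncomeByAge
};

table_group DiagnosticTables //EN Diagnostic tables needed only occasionally
{
    EventCounts, StartingPopulationChecks
};

// Build a variant of the model with only the core tables;
// all tables not listed (directly or via a group) are suppressed.
retain_tables CoreTables;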

[back to BoT]
[back to topic contents]

Change time type to float

The Time type of a model can be changed from the default double to float by inserting the following statement in model code:

time_type float;

The Time type is ubiquitous in models. It is used in attributes, events, and internal entity members. By default, Time wraps the C++ type double, which is a double-precision floating point number stored in 8 bytes. The time_type statement allows changing the wrapped type to float, which is stored in 4 bytes. This can reduce memory use. For example, here is the summary report for the 1 million GMM run used to illustrate the Model Resource Use topic, with time_type at its default value of double:

+---------------------------+
| Resource Use Summary      |
+-------------------+-------+
| Category          |    MB |
+-------------------+-------+
| Entities          |  1924 |
|   Doppelganger    |   552 |
|   Person          |  1372 |
|   Ticker          |     0 |
| Multilinks        |    10 |
| Events            |    80 |
| Sets              |   196 |
| Tables            |     0 |
+-------------------+-------+
| All               |  2213 |
+-------------------+-------+

Here is the report for the same 1 million run with time_type set to float:

+---------------------------+
| Resource Use Summary      |
+-------------------+-------+
| Category          |    MB |
+-------------------+-------+
| Entities          |  1566 |
|   Doppelganger    |   515 |
|   Person          |  1050 |
|   Ticker          |     0 |
| Multilinks        |    10 |
| Events            |    80 |
| Sets              |   196 |
| Tables            |     0 |
+-------------------+-------+
| All               |  1854 |
+-------------------+-------+

In this example, memory usage of the Person entity was 23% less with time_type set to float compared to double.

A float has a precision of about 7 decimal digits. Roughly, that means that the float time value 2025.123 is distinguishable from 2025.124, a precision of about 8 hours. A model with a time origin of 0 and a maximum time of 100 years would have higher float precision than that. The run-time function GetMinimumTimeIncrement() gives the actual precision of Time values in a model. The actual precision depends on time_type and on the maximum Time value required by the model, as specified in a previous call to SetMaxTime() in model code.
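
For example, model code could report the achieved precision to the run log once the maximum time has been established. This is a sketch only; it assumes SetMaxTime() has already been called and that the model uses the standard openM++ log interface:

// log the smallest distinguishable increment of Time for this model
double dMinIncrement = GetMinimumTimeIncrement();
theLog->logFormatted("Minimum time increment = %g", dMinIncrement);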

Changing time_type to float may affect model results due to the reduced precision of Time values. If model logic is well represented by float precision, such differences are likely to be statistical rather than systematic. That can be verified by comparing model results with time_type float versus time_type double, a change of a single line of model code.

[back to BoT]
[back to topic contents]

Use value_out with flash tables

Flash tables are entity tables which tabulate at instantaneous points in time. They do that using an attribute like trigger_changes(year) in the table filter, which becomes instantaneously true and then immediately false in a subsequent synthetic event. Because an increment to a flash table is instantaneous, it has identical 'in' and 'out' values, so a flash table using value_in will produce the same values as one using value_out. However, value_in in a table causes the compiler to create an additional member in the entity to hold the 'in' value of an increment. For flash tables, this memory cost can be avoided by using value_out instead of value_in.
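
For illustration, a flash table along the following lines tabulates cumulative income at each year end using value_out. The attribute and table names (year, year_end_flash, cumulative_income, IncomeAtYearEnd) are hypothetical, and the filter attribute is declared separately for clarity:

entity Person
{
    //EN Instantaneously true when year changes
    bool year_end_flash = trigger_changes(year);
};

table Person IncomeAtYearEnd //EN Income at each year end
[ year_end_flash ]
{
    year
    * {
        value_out(cumulative_income) //EN Cumulative income
    }
};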

[back to BoT]
[back to topic contents]

Enable entity packing

Members of entities can be packed more efficiently by turning on the entity_member_packing option, but there is a trade-off. For more information, see Entity Member Packing.
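
Packing is enabled with an options statement in model code, e.g.

options entity_member_packing = on;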

[back to BoT]
[back to topic contents]

Use mutable real type

Floating point values can be declared in model code using the real type. By default, real is the same as the C++ type double, but it can be changed to the C++ type float by inserting the following statement in model code:

real_type float;

This single statement changes all uses of real from double to float, which halves the storage required for real values.

A float has a precision of about 7 decimal digits, so it can represent a dollar amount like 12345.67 to an accuracy of 1 cent.

Because a single real_type statement changes the meaning of real throughout, it is easy to assess to what extent changing real from double to float affects results. This provides more flexibility than hard-coding double or float directly in model code.
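
For example, monetary attributes could be declared with real so that a single statement controls their precision model-wide. The attribute names below are illustrative:

real_type float; //EN store real values in 4 bytes rather than 8

entity Person
{
    real cumulative_earnings; //EN Lifetime earnings to date
    real annual_benefit;      //EN Benefit amount in the current year
};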

[back to BoT]
[back to topic contents]

Prefer range and classification to int

Values of type Range or Classification are automatically stored in the smallest C type which can represent all valid values. This can reduce memory use. For example, if YEAR is declared as

range YEAR  //EN Year
{
    0, 200
};

a member year

entity Person {
    YEAR year;
};

declared with type YEAR will be stored efficiently in a single byte. In contrast, if year were declared as int it would require 4 bytes.
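
The same applies to a Classification, whose values are also stored in the smallest sufficient C type. For example, a classification with a handful of levels occupies a single byte per entity; the names below are illustrative:

classification SMOKING_STATUS //EN Smoking status
{
    SS_NEVER,   //EN Never smoked
    SS_FORMER,  //EN Former smoker
    SS_CURRENT  //EN Current smoker
};

entity Person {
    SMOKING_STATUS smoking_status; //EN Smoking status, stored in a single byte
};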

[back to BoT]
[back to topic contents]

Use bitset instead of bool array

The bool type takes one byte of storage, even though it holds only a single bit of information. Some models use large arrays of bool in entity members, e.g.

entity Person {
    bool was_primary_caregiver[56][12];
};

which records whether a Person was a primary caregiver in each month of 56 possible working years during the lifetime. The Model Resource Use report would show that the was_primary_caregiver array member of Person consumes 672 bytes of memory in each Person, a significant amount for a time-based model with a large population.

The same information could be stored in a foreign member of Person using a C++ std::bitset. A code sketch follows:

#include <bitset>  // required for std::bitset, e.g. in a model include file
...
typedef std::bitset<56*12> ym_bools; // flattened bit array of 56 years and 12 months
...
entity Person {
    ym_bools was_primary_caregiver;
};

// map a (year, month) pair to an index in the flattened bit array
std::size_t ym_index(std::size_t year, std::size_t month) {
    return 12 * year + month;
}

Then model code like

ptCareGiver->was_primary_caregiver[nEarningIndex][nM] = true;

could be replaced by similar functionally equivalent code

ptCareGiver->was_primary_caregiver[ym_index(nEarningIndex,nM)] = true;

In this example, replacing the array of bool by a std::bitset reduces storage requirements from 672 bytes to 84 bytes, a significant saving for each Person entity.

If the bool array were 1-dimensional rather than 2-dimensional, the code would be simpler.

Possibly, a general wrapper class bitset2D could be added to OpenM++ runtime support to avoid changing model code at all, e.g.

#include "bitset2D.h"
...
typedef bitset2D<56,12> ym_bools; // 2-D bit array of 56 years and 12 months
...
entity Person {
    ym_bools was_primary_caregiver;
};

Then existing model code like

ptCareGiver->was_primary_caregiver[nEarningIndex][nM] = true;

would require no changes.
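
A minimal sketch of such a bitset2D wrapper follows. It is hypothetical, not part of the OpenM++ runtime; a nested proxy class makes the two-index syntax b[year][month] map onto a flattened std::bitset:

#include <bitset>
#include <cstddef>

// 2-dimensional bitset wrapper (sketch)
template<std::size_t ROWS, std::size_t COLS>
class bitset2D
{
public:
    // proxy for a single row, so that b[row][col] reads and writes one bit
    class row_proxy
    {
    public:
        row_proxy(std::bitset<ROWS * COLS>& bits, std::size_t row) : bits_(bits), row_(row) {}
        typename std::bitset<ROWS * COLS>::reference operator[](std::size_t col)
        {
            return bits_[row_ * COLS + col];
        }
    private:
        std::bitset<ROWS * COLS>& bits_;
        std::size_t row_;
    };

    row_proxy operator[](std::size_t row) { return row_proxy(bits_, row); }

private:
    std::bitset<ROWS * COLS> bits_; // flattened storage, one bit per (row, col) cell
};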

[back to BoT]
[back to topic contents]

Purge available entity list

Depending on the model design, an entity type might be used only during a particular phase of the simulation. For example, an Observation entity might only be used during the creation of the starting population. OpenM++ maintains, for each entity type, a pool of entities which have exited the simulation and are available for potential reuse. If an entity type will not be reused, the corresponding pool can be emptied and its memory reclaimed by a function call like

Observation::free_available();

[back to BoT]
[back to topic contents]
