Scenario Setup
General Idea
Most FleetPy use cases involve sensitivity analyses over different input parameters and multiple MOD demand files to analyze the stability of results.
FleetPy allows defining multiple scenarios in a single file, denoted by `scenario_config`. The file is usually defined in the `csv` format (`yaml` is also supported). As is typical for a `csv` file, the first line is the header, describing the names of the parameters used in the experiment. Each subsequent row represents a different simulation scenario and contains the changed parameter values for that scenario.
Each single scenario is defined by many parameters, but usually only a few of them vary within a specific study (e.g., a sensitivity analysis). To keep a clear structure, FleetPy uses two files: the `scenario_config` and the `constant_config`. The `constant_config` file contains all input parameters that are constant throughout the study, whereas the `scenario_config` contains only those parameters that change among the scenarios of the study.
It is important to note that the settings in the `scenario_config` always overwrite the settings in the `constant_config`, as shown in the figure below.
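As an illustration, the override behavior can be sketched in a few lines of Python. The file contents and parameter names below are hypothetical examples, and FleetPy's actual loading code may differ in detail:

```python
import csv
import io

# Hypothetical file contents for illustration only; real studies link
# networks, demand files, and module parameters (see Input_Parameters).
constant_config_csv = "parameter,value\nsim_end_time,3600\nop_fleet_size,10\n"
scenario_config_csv = "scenario_name,op_fleet_size\nsc_small,5\nsc_large,50\n"

# the constant_config holds one parameter/value pair per row
constant_params = {row["parameter"]: row["value"]
                   for row in csv.DictReader(io.StringIO(constant_config_csv))}

# each scenario_config row overwrites the constant settings
scenarios = []
for row in csv.DictReader(io.StringIO(scenario_config_csv)):
    params = dict(constant_params)   # start from the constant settings
    params.update(row)               # scenario values take precedence
    scenarios.append(params)

print(scenarios[0]["op_fleet_size"])  # -> '5' (overwritten by the scenario)
print(scenarios[0]["sim_end_time"])   # -> '3600' (taken from constant_config)
```

The key point is the order of the two dictionary operations: the scenario row is applied last, so its values win over the constant settings.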
The following figure shows the simulation process on a very high level. The scenario parameters are used to initialize the desired modules. The whole simulation is set up based on the linked data input files before calling the run functionality of FleetPy.
Hence, the scenario definition contains three different kinds of parameters:
- Specification of Modules
- Specification of Data Input Files
- Specification of Plain Parameters
The specification of modules determines the algorithms that are executed during the simulation. These can range from different orders of steps to different control strategies. You can find more information here.
Data input files are used for structured input parameters. For example, writing every MOD request with all its information in the scenario definition file does not make sense. Instead, the scenario definition links to a file containing all information on MOD requests.
There are parameters, e.g., the time it should take customers to board and alight, which can be constant for all boarding processes. Hence, defining this parameter in the constant_config or scenario_config is meaningful.
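To make the three kinds of parameters concrete, a single scenario row could be thought of as a mapping like the following. The parameter names here are hypothetical placeholders, not FleetPy's exact identifiers:

```python
# Hypothetical scenario definition illustrating the three kinds of
# parameters; the keys are illustrative examples only.
scenario = {
    # 1) specification of modules: which algorithm implementations run
    "op_repo_method": "ExampleRepositioningStrategy",
    # 2) specification of data input files: structured inputs linked by path
    "rq_file": "demand/example_city/requests_monday.csv",
    # 3) plain parameters: simple constant values
    "op_const_boarding_time": 30,  # seconds per boarding/alighting process
}
```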
In general, the used modules determine which data input files and plain parameters are required. An example is given below for three repositioning algorithms under `./src/fleetctrl/repositioning`. For more advanced methods that inherit from base methods, the required modules and parameters are specified in both scripts.
Examples
Examples of this structure are provided in the directory `FleetPy/studies/example_study/scenarios`. For testing purposes, most scenario files contain just a single scenario. However, the file `example_ir_heuristics_repositioning.csv` includes 2 scenarios, which differ in some parameters. The remaining parameters to run this simulation are defined in the `constant_config_ir_repo.csv` file.
Benchmark Data Sets
Input data and corresponding example scenario files are available for large-scale case studies of Manhattan, NY, Chicago, IL, and Munich, Germany. This data can be used as benchmark data sets to test and compare new algorithms or to set up large-scale simulations quickly. The FleetPy input data can be downloaded here and has to be copied into the FleetPy/data folder:
- Manhattan: https://doi.org/10.5281/zenodo.15187906
- Chicago: https://doi.org/10.5281/zenodo.15189440
- Munich: https://doi.org/10.5281/zenodo.15195726
Creating your own first scenario (for beginner users)
You can follow these steps to build your own first scenario:
- Go to `./studies` and create your own study directory.
- In this directory, create another directory called `scenarios`.
- Then, copy a `constant_config` and a `scenario_config` from the examples.
- If you have already set up your own network and demand data, replace the respective entries and save. If not, you can create your own network and MOD demand; however, keep the example network for now and play around with the fleet size, vehicle types, or number of operators. For more information on how to modify these attributes, check the description of the parameter `op_fleet_composition` in Input_Parameters.
Note: the `./.gitignore` is set to ignore changes to the data in the scenarios directory. This choice was made to keep the repository size manageable; some request files with millions of trips are larger than the whole code base. Feel free to share input and output data on public servers.
Creating your own scenarios (for advanced users)
Generally, it is recommended to create the `constant_config` first and then write a Python script to generate the `scenario_config`, as the latter usually involves one or several loops for a sensitivity analysis.
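Such a generator script could look like the following sketch. The parameter names (`op_fleet_size`, `rq_file`) and file names are illustrative assumptions, not FleetPy's exact identifiers:

```python
import csv

# Hypothetical sensitivity analysis: vary fleet size and demand file.
fleet_sizes = [50, 100, 150]
demand_files = ["requests_seed1.csv", "requests_seed2.csv"]

rows = []
for fleet_size in fleet_sizes:
    for rq_file in demand_files:
        rows.append({
            "scenario_name": f"fs{fleet_size}_{rq_file.split('.')[0]}",
            "op_fleet_size": fleet_size,
            "rq_file": rq_file,
        })

# one row per scenario: header + 3 * 2 = 6 scenario rows
with open("scenario_config.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["scenario_name", "op_fleet_size", "rq_file"])
    writer.writeheader()
    writer.writerows(rows)
```

Adding another sensitivity dimension is then just one more nested loop and one more column.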
There are several ways to build the `constant_config`:
- Copy it from a similar study (using the same or very similar modules) and modify the parameters.
- (Deprecated) Execute `./ScenarioCreatorGUI.py`. This GUI is designed to set up the `constant_config` for you; you can select modules and link data files. However, note that this was a quick-fix solution and has not been thoroughly tested.
- If `./ScenarioCreatorGUI.py` crashes, you can also build the `constant_config` from scratch. For further information, see here. This link is also relevant for including your newly created module in the GUI.
Creating your own non-public run-simulation-file
There are two recommended ways to create your own Python run file from where you can start the FleetPy simulation.
- In the root directory, you can create a file named `run_private_XYZ.py`, where you can replace `XYZ` with anything, for example, your study name. The `./.gitignore` file is set to ignore run files matching `run_private*py` in the root directory.
- You can also create your run file in the study directory, as the `./.gitignore` file is also set to ignore contents of the studies directory.
Either way, your run file can look like this:
```python
import os
import traceback
import multiprocessing as mp

# the following two lines are only necessary if you have your run file within your study directory
import sys
sys.path.append("../..")

from run_examples import run_scenarios

if __name__ == "__main__":
    mp.freeze_support()

    # ------- #
    # study 1 #
    # ------- #
    const_fn = "constant_config.csv"
    sc_fn = "30_min_test.csv"
    # the following lines are necessary if you run from the main directory:
    # study_name = "..."
    # constant_config_file = f"studies/{study_name}/scenarios/{const_fn}"
    # scenario_file = f"studies/{study_name}/scenarios/{sc_fn}"
    # otherwise, you can use these lines
    constant_config_file = f"scenarios/{const_fn}"
    scenario_file = f"scenarios/{sc_fn}"
    if not os.path.isfile(constant_config_file) or not os.path.isfile(scenario_file):
        raise IOError("Invalid paths to config files! Script has to be placed in main directory!")

    # -------------- #
    # other settings #
    # -------------- #
    evaluate = 1
    log_level = "info"
    n_parallel_sim = 1
    n_cpu_per_sim = 1
    try:
        run_scenarios(constant_config_file, scenario_file, n_parallel_sim, n_cpu_per_sim,
                      evaluate, log_level, continue_next_after_error=True)
    except Exception:
        traceback.print_exc()
```
You can run multiple simulations in parallel by setting `n_parallel_sim` > 1. This uses the Python multiprocessing module. If you want to run many FleetPy simulations simultaneously on a cluster, it is usually recommended to use the cluster's own parallelization capabilities rather than the Python multiprocessing module.
The `evaluate` parameter controls whether the standard evaluation is performed at the end of the simulation. The standard evaluation script can be found in `./src/evaluation/standard.py`; it processes the user and operator stats to compute aggregated quantities (e.g., service rate, fleet empty mileage, Pkm/Vkm, and many more).
We recommend using `log_level = "info"` for all simulations. If you are developing new code, we advise logging your new messages on the `info` level while debugging. When the code is done, you can change them to (mostly) `debug` messages. This way, the messages remain available to other users in `debug` simulations, but do not produce intractably large log files in your production runs.