05 Configuration Files - Smart-Edge-Lab/SeQaM GitHub Wiki

Managing Configuration Files

After installing the platform components, a hidden folder named .seqam_fh_dortmund_project_emulate is created in the user's home directory /home/<user_name>. This folder contains important configuration files, including environment variables, that can be modified to suit your specific setup. These files are used to build the images that you later distribute among the central, distributed, and network components.

Accessing the Configuration Folder

After a successful installation, navigate to the hidden configuration folder:

cd ~/.seqam_fh_dortmund_project_emulate

Here you'll find various configuration files, such as environment variables and JSON configurations, which control different aspects of the platform.

Updating Configuration

To change or update any configuration (such as environment variables), open the relevant configuration file (e.g., env) in a text editor of your choice:

nano env

Make the necessary changes, then save the file and exit the editor. Once you have updated the configuration files, you need to build the platform components to apply the changes. The first time you run the platform, the images are built from the configuration you have set. If you change the configuration afterwards, you can either rebuild and replace the images or manually update the affected parts of the central, distributed, and network components.

Types of Configuration files

1. ExperimentConfig.json

Experiments

Experiments consist of a set of commands triggered at specific points in time.

📝 Note: More on commands here

⚠️ Warning: Currently it is not possible to trigger two network events on the same device one after the other, due to a recovery limitation in iperf. The experiment dispatcher creates a thread per event request, so events do not block the queue; however, it is recommended to add a small delay between concurrent events.

Example:

{
  "eventList": [
    {
      "command": "cpu_load src_device_type:server src_device_name:svr101 cores:1 mode:inc load:80 time:4s",
      "executionTime": 100
    },
    {
      "command": "network_load src_device_type:router src_device_name:router1 interface:eth0 load:80 time:5s",
      "executionTime": 105
    }
  ]
}


Creating an experiment file

Before running an experiment, there is a set of conditions your setup needs to satisfy. These are configured in the ExperimentConfig.json. The configuration is simple and must be sequential.

The following file shows the syntax:

{
  "experiment_name": "experiment_name",
  "eventList": [
    {
      "command": "command to run",
      "executionTime": time(in ms)
    },
    ...
    {
      "command": "exit",
      "executionTime": time(in ms)
    }
  ]
}

The experiment_name parameter is used to generate a file that contains the set of events performed in the experiment. Each event (or combination of events) is recorded as spans and stored in the database. The information about an experiment is accessible via the API. The data includes the command and the UTC time, in UNIX format (microseconds), at which it was executed.

experiment_name and timestamps are currently the only ways to retrieve the data collected during an experiment. Remember to change the experiment name each time you run a new experiment. If you forget to do so, you can still retrieve the data based on timestamps, but this implies writing your own query against the database.
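As a quick illustration of the timestamp format mentioned above, the following sketch (not part of SeQaM; us_to_utc is an invented helper) converts a UNIX timestamp in microseconds to a readable UTC time:

```python
from datetime import datetime, timezone

# Illustrative only: convert a UNIX timestamp in microseconds (the format the
# experiment API uses, per the description above) to a readable UTC string.
def us_to_utc(ts_us: int) -> str:
    return datetime.fromtimestamp(ts_us / 1_000_000, tz=timezone.utc).isoformat()

print(us_to_utc(1_700_000_000_000_000))  # 2023-11-14T22:13:20+00:00
```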

The exit command terminates the experiment dispatcher once all events have been dispatched. Make sure you add it at the end of the experiment so that the trace completes and the data becomes accessible through the API.
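The rules above can be checked mechanically before running an experiment. The following is a hypothetical helper (not part of the platform; check_experiment is an invented name) that validates an ExperimentConfig.json dictionary against them:

```python
import json

# Hypothetical helper (not SeQaM source code): sanity-check an
# ExperimentConfig.json against the rules described above -- events must be
# sequential (non-decreasing executionTime) and end with the "exit" command.
def check_experiment(config: dict) -> list[str]:
    problems = []
    events = config.get("eventList", [])
    times = [e["executionTime"] for e in events]
    if times != sorted(times):
        problems.append("eventList is not ordered by executionTime")
    if not events or events[-1]["command"] != "exit":
        problems.append("last event is not the 'exit' command")
    return problems

cfg = json.loads("""
{
  "experiment_name": "test_case_1",
  "eventList": [
    {"command": "cpu_load src_device_type:ue src_device_name:ue001 cores:5 load:20 time:60s",
     "executionTime": 8000},
    {"command": "exit", "executionTime": 21000}
  ]
}
""")
print(check_experiment(cfg))  # [] -> no problems found
```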

Example ExperimentConfig.json file

{
  "experiment_name": "test_case_1",
  "eventList": [
    {
      "command": "cpu_load src_device_type:ue src_device_name:ue001 cores:5 load:20 time:60s",
      "executionTime": 8000
    },
    {
      "command": "memory_load src_device_type:ue src_device_name:ue001 workers:5 load:20 time:10s",
      "executionTime": 10500
    },
    {
      "command": "exit",
      "executionTime": 21000
    }
  ]
}

Sequential Execution

The executionTime is set in milliseconds and counts from the moment the trigger signal is sent to the experiment dispatcher module. Be aware that if you want to trigger one event after another finishes (for example, a CPU load of 80% starting at t0 and finishing at t1, followed by a network load of 80% starting at t1), you have to add the duration of the first event to the executionTime of the second.

Example:

{
  "eventList": [
    {
      "command": "cpu_load src_device_type:server src_device_name:svr101 cores:1 mode:inc load:80 time:40s",
      "executionTime": 100
    },
    {
      "command": "network_load src_device_type:router src_device_name:router1 interface:eth0 load:80 time:50s",
      "executionTime": 40100
    }
  ]
}

Sequential execution

Here,

  • t0 = 100
  • the duration of the first event is 40 s (40000 ms), so the first event finishes at t1 = 100 + 40000 = 40100
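The arithmetic above can be written out as:

```python
# Minimal arithmetic behind the sequential example above: the second event's
# executionTime is the first event's start time plus its full duration.
first_start_ms = 100        # t0 of cpu_load
first_duration_ms = 40_000  # time:40s
second_start_ms = first_start_ms + first_duration_ms
print(second_start_ms)  # 40100 -> matches the executionTime of network_load
```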

Concurrent Execution

To generate concurrent execution (two or more events running together for a period of time), trigger the next command before the duration of the prior event elapses. Each command runs independently, so the time frames of the commands can overlap according to their respective durations.

Example:

{
  "eventList": [
    {
      "command": "cpu_load src_device_type:server src_device_name:svr101 cores:1 mode:inc load:80 time:80s",
      "executionTime": 10000
    },
    {
      "command": "network_load src_device_type:router src_device_name:router1 interface:eth0 load:80 time:60s",
      "executionTime": 20000
    }
  ]
}

Concurrent execution

Here,

  • The first event, cpu_load, starts at t0 = 10000 ms and runs for 80 seconds (80000 ms), finishing at 90000 ms.
  • The second event, network_load, starts at t1 = 20000 ms, while the first event is still running, and runs for 60 seconds (60000 ms), finishing at 80000 ms.

As both events run concurrently, their execution overlaps over a time frame. When retrieving data from the API, the spans recorded during this overlap are returned in a separate portion of the response.
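The overlap window in this example can be computed from the two start times and durations; a minimal sketch:

```python
# Sketch of the overlap in the concurrent example above: each event occupies
# the window [start, start + duration); the overlap is the intersection of
# the two windows.
cpu = (10_000, 10_000 + 80_000)  # cpu_load: starts at 10 s, runs for 80 s
net = (20_000, 20_000 + 60_000)  # network_load: starts at 20 s, runs for 60 s
overlap_ms = max(0, min(cpu[1], net[1]) - max(cpu[0], net[0]))
print(overlap_ms)  # 60000 -> the events overlap for the full 60 s of network_load
```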

Start the experiment

📝 Note: Here are some points you need to consider before running an experiment:

  1. In the laboratory deployment scenario, all connections have been made and tested, and the IP addresses have been updated in the ScenarioConfig.json file.
  2. You have properly configured the .env file.
  3. The edge application shares the same machine with the distributed components.
  4. The distributed components are up and running (refer to the installation guide).
  5. The network traffic generators have been attached to your scenario and are up and running (refer to the installation guide).
  6. The device types and names are properly set in the ScenarioConfig.json file and in the commands described in the ExperimentConfig.json.
  7. You have created or modified the ExperimentConfig.json accordingly.

The experiment dispatcher waits for a trigger event from the web console to start:

start_module module:experiment_dispatcher

📝 Note: When you run multiple experiments, make sure you change the name of each of them.

2. ModuleConfig.json

⚠️ Danger: Modifying the ModuleConfig.json file is not recommended unless you specifically need to change the port or other deployment settings of the distributed source code (e.g., to change or add endpoints in the core components).

The Module Configuration file must have the following format:

{
  "modules": [
    {
      "*module_name*": {
        "name": "module_display_name",
        "description": "module_description",
        "port": "module_port",
        "host": "module_host",
        "paths": [
          {
            "*action_name*": {
              "endpoint": "/path_endpoint/"
            }
          },
          ...
        ]
      }
    },
    ...
  ]
}

The core components of the platform can be customized by modifying the ModuleConfig.json file. When using the agent deployment method (running each component directly from a terminal after installing all of its dependencies; not recommended), IP addresses and ports are configured in this file. When using the containerized version, configure them in the .env file instead.

Each module is represented as a JSON with the following parameters:

  • name: The display name of the module. This is a hardcoded parameter, so it is not worth changing: other modules use this name to generate the endpoint to which they send their requests.
  • description: A short description of the module.
  • port: The port the module listens on when deployed. It can be configured according to your needs: it is obtained from the environment variables, and the system retrieves it automatically, so changing it is unproblematic.
  • host: If you prefer to distribute the platform core components across different physical devices, add their IP addresses here so that all the other modules can communicate with them when required. If all components run on the same host, configure it as 127.0.0.1.

⚠️ Warning: Do not use localhost, as DNS resolution takes unnecessary time. When configuring the .env file, the host IPs and ports are obtained from the environment variables.

  • paths: A path is characterized by an action and an endpoint. Actions are hardcoded elements used by the other components of the platform to call a certain action within the component. These values should not be changed, but you are free to add new ones if you need them. The endpoint can be configured according to your criteria, as the components read this value before making the REST request. However, if not required, just leave them as they are.
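To illustrate how these fields fit together, here is a hypothetical sketch (not SeQaM source code; resolve_endpoint is an invented name) of how a caller could resolve a module's URL from a ModuleConfig.json-style structure:

```python
# Illustrative only: look up a module by name, find the requested action in its
# paths list, and join host, port, and endpoint into a full URL.
def resolve_endpoint(config: dict, module: str, action: str) -> str:
    for entry in config["modules"]:
        if module in entry:
            mod = entry[module]
            for path in mod["paths"]:
                if action in path:
                    return f"http://{mod['host']}:{mod['port']}{path[action]['endpoint']}"
    raise KeyError(f"{module}/{action} not found")

config = {
    "modules": [
        {"experiment_dispatcher": {
            "name": "Experiment Dispatcher",
            "port": 8004,
            "host": "172.22.174.157",
            "paths": [{"start": {"endpoint": "/experiment/init/"}}],
        }}
    ]
}
print(resolve_endpoint(config, "experiment_dispatcher", "start"))
# http://172.22.174.157:8004/experiment/init/
```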

Example ModuleConfig.json file

{
  "modules": [
    {
      "console": {
        "name": "Console",
        "description": "Console module to input commands",
        "port": 0,
        "host": "0.0.0.0",
        "paths": []
      }
    },
    {
      "command_translator": {
        "name": "Command Translator",
        "description": "Get raw commands and forward them to orchestrators in json format",
        "port": 8001,
        "host": "172.22.174.157",
        "paths": [
          {
            "translate": {
              "endpoint": "/translate/"
            }
          }
        ]
      }
    },
    {
      "event_orchestrator": {
        "name": "Event Orchestrator",
        "description": "Get event requests",
        "port": 8002,
        "host": "172.22.174.157",
        "paths": [
          {
            "event": {
              "endpoint": "/event/"
            }
          }
        ]
      }
    },
    {
      "experiment_dispatcher": {
        "name": "Experiment Dispatcher",
        "description": "Executes the configured experiment",
        "port": 8004,
        "host": "172.22.174.157",
        "paths": [
          {
            "start": {
              "endpoint": "/experiment/init/"
            }
          }
        ]
      }
    }
  ]
}

3. ScenarioConfig.json

The Scenario Configuration file must have the following format:

{
  "distributed": {
    "*device_type*": [
      {
        "name": "*device_name*",
        "description": "*device description*",
        "port": *agent-port*,
        "host": "*host*",
        "ssh_port": *ssh-daemon-port*,
        "ssh_user": "*ssh-username*",
        "paths": [
          {
            "*action*": {
              "endpoint": "/*path*/"
            },
            ...
          }
        ]
      },
      ...
    ]
  }
}

where:

  • device_type is the name of the device category; it should represent either src_device_type or dst_device_type. Possible device types are ue (user equipment), server, and router.
  • device_name is the computer-friendly identifier of the device, similar to a hostname; example device names are ue001, svr101, svr102, and ntw_agent. The device name must be unique within every device_type set. This name is used when triggering commands.
  • agent-port is an optional integer port on which either the Distributed Event Manager Agent or the Network Event Manager Agent listens. It is optional because the host can be controlled either through the agent or through an agentless SSH approach.
  • host can be specified as an IPv4 or IPv6 literal.
  • ssh-daemon-port is an optional integer port on which an SSH daemon listens; it is required, together with ssh_user, if you plan to run the ssh command.
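As an illustration of the uniqueness rule above, a hypothetical check (duplicate_names is an invented helper, not part of SeQaM) over a ScenarioConfig.json-style dictionary:

```python
# Illustrative only: device names must be unique within each device_type set
# of ScenarioConfig.json; report any repeated names per device type.
def duplicate_names(scenario: dict) -> dict[str, list[str]]:
    dupes = {}
    for device_type, devices in scenario.get("distributed", {}).items():
        names = [d["name"] for d in devices]
        repeated = sorted({n for n in names if names.count(n) > 1})
        if repeated:
            dupes[device_type] = repeated
    return dupes

scenario = {"distributed": {"ue": [{"name": "ue001"}, {"name": "ue001"}],
                            "server": [{"name": "svr101"}]}}
print(duplicate_names(scenario))  # {'ue': ['ue001']}
```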

Example ScenarioConfig.json file

This file configures your scenario for deploying SeQaM.

{
  "distributed": {
    "ue": [
      {
        "name": "ue001",
        "description": "User Equipment 1",
        "port": 9001,
        "host": "192.168.1.16",
        "ssh_port": 22,
        "ssh_user": "ubuntu",
        "paths": [
          {
            "event": {
              "endpoint": "/event/"
            },
            "cpu_load": {
              "endpoint": "/event/stress/cpu_load"
            },
            "memory_load": {
              "endpoint": "/event/stress/memory_load"
            }
          }
        ]
      }
    ],
    "server": [
      {
        "name": "svr001",
        "description": "Network Load Server",
        "port": 9001,
        "host": "172.22.174.175",
        "paths": [
          {
            "event": {
              "endpoint": "/event/"
            },
            "cpu_load": {
              "endpoint": "/event/stress/cpu_load"
            },
            "memory_load": {
              "endpoint": "/event/stress/memory_load"
            }
          }
        ]
      }
    ],
    "router": [
      {
        "name": "ntw_agent",
        "description": "Endpoint of the agent that runs in the same network emulator VM. The name is hardcoded and this section is to be deprecated",
        "port": 8887,
        "host": "172.22.174.175",
        "paths": [
          {
            "network_bandwidth": {
              "endpoint": "/event/network/bandwidth"
            },
            "network_load": {
              "endpoint": "/event/network/load"
            }
          }
        ]
      }
    ]
  }
}

For a detailed explanation of how to configure the ScenarioConfig.json file for network topology, check out the demo setup guide.

📝 Note:

  • The ntw_agent and the LoadClient are the same component if you still use the deprecated commands for network load. This agent runs the network event manager and is responsible for managing network-related events. In the new version of the command, where you define a source and a destination, it is not needed.

4. .env

The following table explains the environment variables used in the platform configuration:

| Variable | Description | Default Value |
| --- | --- | --- |
| SEQAM_CENTRAL_HOST | IP address of the Core Components | |
| DATABASE_ENDPOINT | IP address of the Database running as part of SigNoz | |
| OTLP_URL | SigNoz OTLP Collector URL for tracing and telemetry data collection | "$DATABASE_ENDPOINT":4317 |
| API_HOST | IP address of the Central API service | "$SEQAM_CENTRAL_HOST" |
| API_PORT | Port for the Central API service and web interface | 8000 |
| COMMAND_TRANSLATOR_HOST | IP address of the Central Command Translator service | "$SEQAM_CENTRAL_HOST" |
| COMMAND_TRANSLATOR_PORT | Port for the Central Command Translator service | 8001 |
| EVENT_ORCHESTRATOR_HOST | IP address of the Central Event Orchestrator service | "$SEQAM_CENTRAL_HOST" |
| EVENT_ORCHESTRATOR_PORT | Port for the Central Event Orchestrator service | 8002 |
| EXPERIMENT_DISPATCHER_HOST | IP address of the Central Experiment Dispatcher service | "$SEQAM_CENTRAL_HOST" |
| EXPERIMENT_DISPATCHER_PORT | Port for the Central Experiment Dispatcher service | 8004 |

Example env file

Depending on your setup, you may need to configure one or more Distributed Event Managers and Network Event Managers. In the following example, there are two Distributed Event Managers and one Network Event Manager.

SEQAM_CENTRAL_HOST=172.22.229.149
DATABASE_ENDPOINT="$SEQAM_CENTRAL_HOST"
OTLP_URL="$DATABASE_ENDPOINT":4317
API_HOST="$SEQAM_CENTRAL_HOST"
API_PORT=8000
COMMAND_TRANSLATOR_HOST="$SEQAM_CENTRAL_HOST"
COMMAND_TRANSLATOR_PORT=8001
EVENT_ORCHESTRATOR_HOST="$SEQAM_CENTRAL_HOST"
EVENT_ORCHESTRATOR_PORT=8002
EXPERIMENT_DISPATCHER_HOST="$SEQAM_CENTRAL_HOST"
EXPERIMENT_DISPATCHER_PORT=8004
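For illustration, the following sketch mimics how the "$VAR" references in the example above resolve when the file is sourced by a POSIX-style shell (load_env is an invented helper with deliberately simplified expansion semantics, not part of SeQaM):

```python
# Illustrative only: expand earlier "$VAR" references line by line, roughly the
# way a shell would when sourcing the env file. Real shell expansion has more
# rules (e.g. ${VAR} braces, escaping) that are ignored here.
def load_env(text: str) -> dict[str, str]:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Substitute earlier definitions: quoted form ("$VAR") first, then bare $VAR.
        for name, resolved in env.items():
            value = value.replace(f'"${name}"', resolved).replace(f"${name}", resolved)
        env[key.strip()] = value.strip().strip('"')
    return env

env = load_env("""\
SEQAM_CENTRAL_HOST=172.22.229.149
DATABASE_ENDPOINT="$SEQAM_CENTRAL_HOST"
OTLP_URL="$DATABASE_ENDPOINT":4317
""")
print(env["OTLP_URL"])  # 172.22.229.149:4317
```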