03 Installation - Smart-Edge-Lab/SeQaM GitHub Wiki

System Requirements

The platform comprises three main components: Central, Network, and Distributed. A fourth component, the Builder, is used to build the other three. The components can all be installed on a single computer, or each can run on a dedicated machine.

The components have the following requirements:

  • Builder
    • Ubuntu 20.04 LTS or Alpine Linux 3.20
    • Git: 2.43.0 or newer
    • GNU bash: 5.2.21 or newer
    • Your favorite text editor
    • Docker: v20.10.22 or newer
    • envsubst: 0.22.5 or newer
    • Memory: 768 MB RAM or more
    • Disk: 10 GB or more
    • CPU: at least one core
  • Central
    • Ubuntu 20.04 LTS or Alpine Linux 3.20
    • Git: 2.43.0 or newer
    • SigNoz: v0.49.0 from https://github.com/SigNoz/signoz/archive/refs/tags/v0.49.0.tar.gz. If you plan to deploy SigNoz elsewhere or use its cloud version, you can skip this dependency
    • Your favorite text editor to edit configuration files
    • Docker: v20.10.22 or newer
    • Docker Compose: v2.29.7 or newer
    • Memory: 768 MB RAM or more
    • Disk: 10 GB or more
    • CPU: at least one core

๐Ÿ“ Note: Refer to the SigNoz documentation for further requirements

  • Network event manager
    • Ubuntu 20.04 LTS or Alpine Linux 3.20
    • Docker: v20.10.22 or newer
    • Docker Compose: v2.29.7 or newer
    • Memory: 4 GB RAM or more
    • Disk: 4 GB or more
    • CPU: at least 4 cores
  • Distributed event manager
    • Ubuntu 20.04 LTS or Alpine Linux 3.20
    • Docker: v20.10.22 or newer
    • Docker Compose: v2.29.7 or newer
    • Memory: 768 MB RAM or more
    • Disk: 4 GB or more
    • CPU: at least 4 cores
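The minimum versions above can be checked with a small shell helper. The sketch below is illustrative (the `version_ge` function is not part of the platform) and relies on GNU `sort -V`, which Ubuntu provides:

```shell
# A sketch: compare an installed version against a required minimum.
# Relies on GNU `sort -V` (natural version sort), available on Ubuntu.
version_ge() {
  # succeeds when $1 >= $2, e.g. version_ge 2.45.1 2.43.0
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: check the locally installed git against the 2.43.0 minimum
if command -v git >/dev/null 2>&1; then
  installed=$(git --version | awk '{print $3}')
  if version_ge "$installed" 2.43.0; then
    echo "git $installed: OK"
  else
    echo "git $installed: too old, need >= 2.43.0"
  fi
fi
```

The same check works for Docker, Docker Compose, and the other tools listed above.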

Deploying the platform

Follow the sections below to deploy each component.

Deploying the Builder

The Builder is used to build docker images of all other components and to configure them.

Deploying the Builder involves the following steps:

1. Clone the repo

Navigate to some directory where you have writing access and would like to store the platform source files.

Clone the Repository and navigate to the project directory

git clone https://github.com/Smart-Edge-Lab/SeQaM.git

cd SeQaM

2. Configure

🪲 Common Issues:

1. Permission to run Docker: If you encounter issues running Docker (e.g., "permission denied"), your user may lack permission to run Docker. To fix this, add your user to the docker group:

sudo groupadd docker
sudo gpasswd -a $USER docker
newgrp docker

2. Do not run any installation or deployment scripts with sudo or as the root user (unless this guide states otherwise). Running scripts such as deploy.sh or install.sh with elevated privileges can cause permission problems that prevent configuration files from being read.

Run

./api/bin/install.sh

This script creates a hidden folder named .seqam_fh_dortmund_project_emulate in your home directory (/home/<your_username>):

Install config

This folder contains an ssh key pair and important configuration files that need to be modified to suit your specific setup.

Please carefully read the explanation of the configuration files here before continuing. You should also read the demo setup page to familiarize yourself with how these files are configured for an example network before proceeding with the installation.

Modify Configuration Files

After running the install.sh script, navigate to the hidden folder in your home directory:

cd ~/.seqam_fh_dortmund_project_emulate

  • env: Modify the environment variables to match your specific network configuration.
  • ScenarioConfig.json: Adjust this file to suit your deployment scenario and network topology.

Open the ~/.seqam_fh_dortmund_project_emulate/env file in your favorite text editor and set the SEQAM_CENTRAL_HOST variable to the IP address of the computer where you are going to install the Central components:

๐Ÿ“ *Note: The IP address should be the IP of the machine. Do not use localhost as this will be unreachable by the containers.

Open the ~/.seqam_fh_dortmund_project_emulate/ScenarioConfig.json and replace the sample IPs with the IP addresses of the computers where you are going to install a Distributed component. If you plan to have more distributed components in your setup (e.g., three servers), add each of them individually to the file following the same format. Make sure you configure each IP address and assign a different name to each.
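As an illustrative sketch only (the entry shape follows the `{"name", "host"}` sample shown later in this guide; the names and IPs here are placeholders), two server entries might look like:

```json
"server": [
  { "name": "server-one", "host": "192.168.10.11" },
  { "name": "server-two", "host": "192.168.10.12" }
]
```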

Add a load-client entry under the UE section of the same file, setting the IP address of the computer where you are going to install a Network component in the host field:
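Assuming the same `{"name", "host"}` entry shape, a hypothetical load-client entry could look like this (the IP is a placeholder for your Network machine):

```json
"UE": [
  { "name": "load-client", "host": "192.168.10.20" }
]
```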

Again, you can refer to the configuration files here, which are provided as an example of the network setup used in the demo.

3. Build images and generate docker compose files for all the components

Execute

# from the platform folder
./api/bin/fast-build.sh

This may take a while as it will build docker images for all the components:

Building docker images

Run

./bare-composes/generate-docker-composes.sh y

This step can also take some time. The trailing "y" argument tells the script to generate tarballs ready to be deployed on computers without an internet connection.

The above scripts generate three folders:

  • bare-composes/seqam-central/
  • bare-composes/seqam-network-event-manager/
  • bare-composes/seqam-distributed-event-manager/

The content of these directories should be copied to the computers that will run the Central, Network, and Distributed components, respectively.
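One way to transfer a folder is scp, mirroring the procedure used later for the distributed components (user_name and central_IP are placeholders; SSH access to the target machine is assumed):

```
# Copy the generated Central bundle to the Central machine (placeholders)
scp -r bare-composes/seqam-central/ user_name@central_IP:~/
```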

Deploying the Central components

1. Deploying the Central Collector

Go to the computer where you would like to deploy the Central components and set up the Central Collector. It is recommended to follow the SigNoz installation guide. SigNoz can be installed in various ways, including SigNoz Cloud; however, SeQaM uses the standalone version with Docker. You can run the Central Collector on a separate machine to improve performance.

In a directory of your choosing, clone the SigNoz repository and cd into the signoz/deploy directory by entering the following commands:

git clone -b main https://github.com/SigNoz/signoz.git && cd signoz/deploy/

๐Ÿ› ๏ธ Configuration Recommendations:

  • Update the OpenTelemetry Collector configuration: The OpenTelemetry Collector exports in batches, meaning that data is collected but not exported until certain conditions are met. There are two parameters to configure: the batch size and the timeout. When either condition is fulfilled, the batch is forwarded to the exporter. For faster data insertion into the database, it is crucial to configure these parameters in both the central and distributed collectors. For the central collector, locate the otel-collector-config.yaml file in signoz/deploy/docker/clickhouse-setup and adjust it as follows:
processors:
  batch:
    send_batch_size: 100
    send_batch_max_size: 110
    timeout: 1ms
  • Avoid unwanted SigNoz components: It is advisable not to install SigNoz components that this platform does not need, such as hotrod and load-hotrod, which are just example services. These can be removed by deploying the minimal version of SigNoz or by editing the docker-compose.yaml file located in signoz/deploy/docker/clickhouse-setup and removing the corresponding entries.
include:
  # - test-app-docker-compose.yaml
  - docker-compose-minimal.yaml

To run SigNoz, enter the docker compose up command, specifying the following:

  • -f and the path to your configuration file
  • -d to run containers in the background
docker compose -f docker/clickhouse-setup/docker-compose.yaml up -d

The Central Collector is a completely isolated unit, so it can still run on a different machine than the core components. However, to simplify the setup it is better to keep them together.

2. Deploying the Core Components

Copy the content of the bare-composes/seqam-central/ folder from the Builder computer (the machine where you have built the docker images) to the computer where you are going to deploy the Central components:

Copy central files

Go to the Central computer, navigate to the seqam-central folder and run

./load-image.sh seqam.central.tar.gz

It should load the docker images of the Central components:

Load central docker images

You can see the loaded docker images:

Docker image list

Once this step is completed, proceed to deploy the platform components:

docker compose up -d

It should start all the Central components:

Start central components

โš ๏ธ Warning:
The API component requires SigNoz (ClickHouse) to be running, as it applies database migrations when starting. If SigNoz + ClickHouse run on a different machine, make sure you change the environment variables in the docker-compose.yaml file to match its IP address. By default, the system is built assuming all Central components are on the same machine.

Now point your web browser at http://<IP-of-the-Central-computer>:8000. A "chat" window should open. Type "hello" in the input field and press <ENTER>. You should see a response like the one below:

SeQaM Chat

To stop the platform

docker compose down

โš ๏ธ Warning:

After deploying the platform, any changes or updates to the configuration must be made in the seqam-central/config folder on the computer running the Central components. Also, if you change, for example, the port of a distributed component, you will have to manually adjust the docker-compose file of that distributed component to expose the same port. Please refer to the Managing Configuration Files section for detailed instructions on how to modify the settings.

Deploying Distributed Components

Distributed UEs/Servers

Deploy your instrumented application client/server together with the distributed components. To get started, you can also test out the demo application. Check out the demo installation guide here.

Copy the content of the bare-composes/seqam-distributed-event-manager/ folder from the Builder computer to each device (UE/server) where you are going to deploy the Distributed component. Use scp to securely transfer these files:

scp -r bare-composes/seqam-distributed-event-manager/ user_name@distributed_device_IP:optional_directory

Make sure to replace user_name and distributed_device_IP with the appropriate username and IP address of a distributed machine:

Copy distributed files

Once the files are in place, go to the Distributed device, navigate to the seqam-distributed-event-manager directory and run

./load-image.sh seqam-distributed-event-manager.tar.gz

It should load the docker image:

Load distributed docker image

Deploy the distributed components by running the following commands:

# From the seqam-distributed-event-manager directory
docker compose up

To stop the distributed components just press <CTRL>+<C>.

Enable GPU Metrics

โš ๏ธ Warning: Applicable For NVIDIA GPUs Only.

  1. Install NVIDIA Container Toolkit following the official guide.

  2. Modify docker-compose.yml by uncommenting the following lines:

   deploy:
     resources:
       reservations:
         devices:
           - driver: nvidia
             count: all
             capabilities: [gpu]

Deploying Network Components

Load Client

Copy the content of the bare-composes/seqam-network-event-manager/ directory from the Builder computer to the machine/VM that will represent the Load Client:

Copy network files

Go to the computer where you are going to run the Network component, navigate to the seqam-network-event-manager directory, and load the docker image with

./load-image.sh seqam-network-event-manager.tar.gz

Deploy the Network component by running the following command:

# From the seqam-network-event-manager directory
docker compose up

Run network component

To stop the component just press <CTRL>+<C>.

Load Server

To simulate traffic and handle performance measurements, start one more instance of the Network component on some computer and add its IP address to the seqam-central/config/ScenarioConfig.json file under the server section on the Central computer like the following:

{
  "name": "load-server",
  "host": "load_server_IP"
}

Configure load server

This load-server will run iperf3 in the background, listening for incoming client connections for bandwidth testing.

Run experiment

The seqam-central/config/ExperimentConfig.json file on the Central computer contains a sample experiment. Please open it and update the experiment_name:

Run the experiment by typing in the SeQaM "chat" window in your browser:

start_module module:experiment_dispatcher

Run experiment

Now, using the /experiments API on the Central component, you can see the experiment in the list of conducted experiments:

Get experiments

Using the /experiments/{exp_name}/apps/{app_name} API, you can get detailed information about the spans produced during every step of the experiment:

Experiment details

๐Ÿ“ *Note: {exp_name} is the name you have just changed in the seqam-central/config/ExperimentConfig.json file {app_name} is the service name of your instrumented application with OpenTelemety.


โš ๏ธ **GitHub.com Fallback** โš ๏ธ