02 Architecture - Smart-Edge-Lab/SeQaM GitHub Wiki
SeQaM implements three different types of components that are to be integrated with the edge computing scenario.
- Central: These are the components that represent the core of SeQaM, implementing all its functionalities.
- Distributed: These components must be deployed alongside the user equipment (UE) devices and the servers that host the edge application (client and server segments). They communicate with the central components to send collected data or execute events (e.g. triggering a cpu-load event in a distributed component).
- Network: These components are used to collect data and generate events in the network devices.
📝 Note: The Service Planner and the Edge & Network Infrastructure Manager are components that do not belong to SeQaM but interact with it. Both should be part of edge computing scenarios.
- Service Planner: SeQaM abstracts this component, as each edge application may have distinct criteria for ensuring service quality. For example, key service quality metrics can be reducing the overall processing time (E2E latency), or the time measured from the occurrence of a service quality degradation event (e.g., overloaded server, congested network) until the initiation of corrective action (e.g. task offloading, service migration), or even reducing energy consumption. This component can retrieve data from SeQaM using the API, and the user is free to create their own logic for service planning.
- Edge & Network Infrastructure Manager: An entity is required that oversees resource allocation and orchestration, workload distribution, network awareness, and matchmaking. In the architecture, the Edge and Network Infrastructure Manager is an abstract representation of an entity that manages the edge infrastructure within the network of a particular provider. This component can retrieve data from SeQaM using the API and implement some logic, or receive actions determined by the Service Planner that need to be implemented at the network or infrastructure level (e.g. assign more network bandwidth, increase the computing resources assigned to an application in the edge server, etc.).
Components
1. Central Components
The central components represent the core of SeQaM and are to be deployed in isolation from the distributed and network components. Normally, you will install the central components on a separate machine connected to the edge network.
Architecture
1. Central Collector
SeQaM uses SigNoz as the general collection tool. Data is gathered with OpenTelemetry from the distributed components and sent to the distributed collectors. Each distributed collector then forwards it to the central collector, which in turn stores the spans in a ClickHouse database.
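As an illustration of this pipeline, a distributed collector forwarding spans and host metrics to the central collector could be configured roughly as below. This is a sketch, assuming the contrib distribution of the OpenTelemetry Collector; the endpoint address is a placeholder, not a SeQaM default:

```yaml
receivers:
  otlp:               # spans from the instrumented edge application
    protocols:
      grpc:
  hostmetrics:        # host metrics (requires the contrib distribution)
    collection_interval: 10s
    scrapers:
      cpu:
      memory:
      network:

exporters:
  otlp:
    endpoint: central-collector.example:4317   # placeholder address
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
    metrics:
      receivers: [otlp, hostmetrics]
      exporters: [otlp]
```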
2. Web Frontend Console
This component provides a graphical user interface (GUI) that allows users to type plain-text commands to interact with SeQaM. It also facilitates reading status and error messages from various components. The Web Frontend Console operates on port 8000 of the Central Component. For more details on the available commands, refer to the Commands section.
3. Command Translator
This component receives commands from both the console and the experiment dispatcher modules and translates them into a machine-readable JSON format. The data is then forwarded to the event orchestrator module. Command validation also takes place here.
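The translation step can be pictured as turning a plain-text console command into a JSON event. The command syntax and field names below are hypothetical, chosen only to illustrate the idea; consult the Commands section for the real grammar:

```python
import json

def translate_command(command: str) -> str:
    """Parse a console command of the (hypothetical) form
    'cpu_load device:ue1 duration:30' into a JSON event string."""
    parts = command.split()
    if not parts:
        raise ValueError("empty command")
    event = {"action": parts[0]}
    for token in parts[1:]:
        key, _, value = token.partition(":")
        if not value:
            raise ValueError(f"malformed parameter: {token!r}")
        event[key] = value
    return json.dumps(event)

print(translate_command("cpu_load device:ue1 duration:30"))
# → {"action": "cpu_load", "device": "ue1", "duration": "30"}
```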
4. Event Orchestrator
This component is responsible for generating all events within SeQaM. It manages internal events between central components and distributed events from central components to distributed components. Requests are processed via REST API calls.
5. Experiment Dispatcher
This component helps with the creation of experiments. An experiment is a defined set of actions that occur in a determined order and at specific times. The module reads the events described in ExperimentConfig.json and executes them sequentially.
More details on how to create experiments are described here.
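Purely as an illustration of the concept (the field names below are invented and do not reflect the real schema, which is described in the experiments documentation), an experiment file pairs events with the times at which they should fire:

```json
{
  "experiment_name": "illustrative-example",
  "events": [
    { "time": 0,  "command": "cpu_load device:ue1 duration:30" },
    { "time": 60, "command": "network_load duration:30" }
  ]
}
```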
6. API
This component provides integration capabilities with SeQaM, allowing for programmatic interactions through well-defined endpoints to access the raw collected data and statistics. This information allows for the implementation of Service Planners that generate actuating mechanisms to ensure service quality.
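The access pattern for such a Service Planner could look like the sketch below. The endpoint path, port, and query parameters are hypothetical placeholders; refer to the API documentation for the real ones:

```python
from urllib.parse import urlencode
from urllib.request import Request

BASE_URL = "http://central-host:8080"  # placeholder address and port

def build_metrics_request(device: str, metric: str, window_s: int) -> Request:
    """Build (but do not send) a GET request for collected metrics.
    Sending it would be done with urllib.request.urlopen(req)."""
    query = urlencode({"device": device, "metric": metric, "window": window_s})
    return Request(f"{BASE_URL}/api/metrics?{query}")

req = build_metrics_request("ue1", "cpu", 60)
print(req.full_url)
# → http://central-host:8080/api/metrics?device=ue1&metric=cpu&window=60
```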
Installation Guide of Central Components
For instructions on installing and setting up the necessary components, refer to the Installation Guide.
2. Distributed Components
These components are to be run on both UEs
and Servers
in your edge environment. They represent the elements required by SeQaM to handle collection and events.
The collection includes traces and metrics.
- Traces are sent by your edge application once you have instrumented it using OpenTelemetry.
- Metrics are host metrics collected by the distributed collector, such as CPU, Memory, Network, etc. These are sent at regular intervals to the Central Collector. The resolution of these intervals can be dynamically configured in SeQaM.
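To make the host-metrics side concrete, the snippet below shows the kind of reading a collector takes at each interval. This is an illustration only, not SeQaM code, and it is Linux-specific (it parses /proc/meminfo):

```python
def read_meminfo() -> dict:
    """Parse /proc/meminfo into a {field: kilobytes} dictionary."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])  # value in kB
    return info

def memory_used_fraction() -> float:
    """Fraction of physical memory currently in use."""
    info = read_meminfo()
    return 1.0 - info["MemAvailable"] / info["MemTotal"]

print(f"memory in use: {memory_used_fraction():.1%}")
```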
If you don't care about these metrics, you can simplify the SeQaM architecture by pointing the OTLP endpoint of your client and server segments directly to the Central Collector IP address. In that case, you don't need to run the Distributed Components; your instrumented edge application alone is enough to get the generated spans.
Architecture
1. Distributed Metrics Collector
The Distributed Collector is used to gather host metrics from the UEs and/or Servers where your application runs. Consider installing a Distributed Collector in each system where you want to collect host metrics.
2. Distributed Event Manager
This component is used to receive all the calls from the central components in SeQaM. It handles infrastructure events, such as CPU and memory load. It also configures the resolution of metrics collection in the Distributed Collector.
3. Distributed Load Generator
stress-ng is used to generate controlled CPU and memory load on the distributed devices. It runs automatically inside the provided container and is used to simulate various load conditions (ramp, random, etc.).
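A bounded CPU-load event of this kind maps onto a stress-ng invocation along these lines. The wrapper function is hypothetical, but the stress-ng flags shown are standard:

```python
def build_cpu_load_cmd(workers: int, load_percent: int, timeout_s: int) -> list:
    """Build a stress-ng command line for a bounded CPU load.
    Run it e.g. with subprocess.run(cmd, check=True) if stress-ng
    is installed."""
    return [
        "stress-ng",
        "--cpu", str(workers),            # number of CPU stressor workers
        "--cpu-load", str(load_percent),  # target load per worker (%)
        "--timeout", f"{timeout_s}s",     # stop after this many seconds
    ]

cmd = build_cpu_load_cmd(workers=2, load_percent=75, timeout_s=30)
print(" ".join(cmd))
# → stress-ng --cpu 2 --cpu-load 75 --timeout 30s
```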
4. Edge Application
The Edge Application represents your custom server or client segment of the edge application you want to monitor. You must ensure that your custom application is configured to communicate with the Central Collector while instrumenting it to generate spans.
Installation Guide of Distributed Components
For instructions on installing and setting up the necessary components, refer to the Installation Guide.
3. Network Components
The Network Components are used to collect data from network devices and simulate network conditions and traffic using SeQaM.
There are two types of network components:
- Network Loader: It is used to generate network load in the core network. As of now, this is supported using iperf3 to generate network traffic, working in conjunction with the Network Event Manager to handle network-related events.
- Network Spy: This component is used to collect detailed information about the network traffic passing through a link. The component collects and filters data at regular intervals, which is then sent to the Central Collector.
These components are mainly intended for emulated and laboratory environments. In real-world setups, the network conditions are already given, and traffic mirroring is difficult to set up. For emulating a network topology, we recommend containerlab, but any other tool could be adapted.
Architecture
1. Network Loader
The network loader can behave as load-client or load-server, depending on the direction of traffic (assuming traffic is sent from client to server). In load client mode, it generates network traffic by transmitting data over TCP to stress the connection between distributed nodes. The load server receives this traffic, measuring network performance in real-time. This setup effectively simulates network conditions such as congestion and bandwidth constraints, allowing SeQaM to evaluate the performance under real-world network loads.
Network Event Manager
This component orchestrates the interaction between the elements required to generate network load: the load client and load server. We recommend that these components be two separate entities (machines or VMs) connected to the network topology, so that they can generate the traffic conditions.
Network Traffic Simulator
In the Network Loader, iperf3 serves two primary roles: as a load client and as a load server. The load client uses iperf3 in client mode to generate network traffic. Conversely, the load server runs iperf3 in server mode to receive the traffic produced by the load client.
Make sure you place the load client and load server according to the desired direction of the network traffic. If they are swapped, the direction of traffic generation will be reversed, leading to potential inaccuracies in the simulation due to queuing or scheduling issues. The network loader included in the repository can be used as either a load client or a load server.
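The two roles correspond to iperf3 invocations along the lines sketched below. The helper functions are illustrative (not SeQaM's actual wrappers), but the iperf3 flags and the default port 5201 are standard:

```python
def load_server_cmd(port: int = 5201) -> list:
    """iperf3 in server mode, waiting for traffic from the load client."""
    return ["iperf3", "-s", "-p", str(port)]

def load_client_cmd(server_host: str, duration_s: int, port: int = 5201) -> list:
    """iperf3 in client mode, sending TCP traffic toward the server
    for duration_s seconds."""
    return ["iperf3", "-c", server_host, "-p", str(port), "-t", str(duration_s)]

print(" ".join(load_server_cmd()))
print(" ".join(load_client_cmd("load-server.example", duration_s=60)))
```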
2. Network Spy
The network spy uses tcpdump to collect and filter data from the traffic flowing through the monitored interface. This requires a network switch configured for port mirroring. The component needs to be deployed on a separate machine, whose required capabilities will depend on the amount of traffic flowing through the mirrored link.
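A capture on the mirrored interface boils down to a tcpdump command like the one built below. The wrapper function and interface name are placeholders for illustration; the tcpdump flags themselves are standard:

```python
def build_capture_cmd(interface: str, capture_file: str) -> list:
    """tcpdump command capturing mirrored traffic to a pcap file
    for later filtering and analysis."""
    return [
        "tcpdump",
        "-i", interface,     # mirrored interface to listen on
        "-n",                # do not resolve hostnames
        "-w", capture_file,  # write raw packets to a pcap file
    ]

print(" ".join(build_capture_cmd("eth0", "mirror.pcap")))
# → tcpdump -i eth0 -n -w mirror.pcap
```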