Architecture
The dataflow of vPerfGenerator is shown in the following picture:
The vPerfGenerator architecture has three major parts:
- Server, which manages experiments and provides various user interfaces (Web-UI, CLI)
- Loader agent, which runs on the testing system or on a client system (for experiments involving the network, the client system should be separate from the testing system)
- Monitor agent, which runs on the testing system and collects monitoring reports
The agents are written in C, while the server is written in Scala. The agents communicate with the server using the JSON-TS protocol. To support multiple types of workloads and monitoring capabilities, vPerfGenerator agents are modular: when an agent starts, it loads all available modules (which are shared libraries) from MODPATH and registers them.
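As a rough illustration of that loading step, the sketch below scans a directory for shared libraries and calls a registration entry point in each of them. It is only a sketch under assumptions: the entry-point name `mod_register` and the surrounding details are hypothetical and do not reflect the actual tsload module API.

```c
/* Minimal sketch of MODPATH scanning; "mod_register" is a hypothetical
 * entry-point name, not the actual tsload module API. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <dlfcn.h>

typedef int (*mod_register_func)(void);

static void load_modules(const char* modpath) {
    DIR* dir = opendir(modpath);
    struct dirent* entry;
    char path[1024];

    if (dir == NULL)
        return;

    while ((entry = readdir(dir)) != NULL) {
        /* modules are shared libraries, e.g. dummy.so */
        if (strstr(entry->d_name, ".so") == NULL)
            continue;

        snprintf(path, sizeof(path), "%s/%s", modpath, entry->d_name);

        void* handle = dlopen(path, RTLD_NOW);
        if (handle == NULL) {
            fprintf(stderr, "Cannot load %s: %s\n", path, dlerror());
            continue;
        }

        /* register the module via its (assumed) entry point */
        mod_register_func reg = (mod_register_func) dlsym(handle, "mod_register");
        if (reg != NULL)
            reg();
    }

    closedir(dir);
}
```

Loading plugins with dlopen()/dlsym() is the standard POSIX mechanism for this kind of modularity and matches the description of modules as shared libraries.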
The proposed architecture allows vPerfGenerator to be used as a long-term experimental environment. Compare this with MOSBENCH, for example, which is also modular and multi-agent but is dedicated to the MIT experiment and requires a lot of Python coding to be reused for other experiments.
Unlike most benchmarks, which require manual configuration of test environments, vPerfGenerator has a central point of management for the entire experimental environment (although it still cannot manage hypervisors or physical hosts the way enterprise monitoring/management software such as Oracle EM does): it is responsible for managing the multiple agents running in the experimental environment. It uses a central repository to store monitoring data, configuration, and the history of experiments.
The key part of vPerfGenerator is the tsload loader, which generates workload on the testing system. It may be run as a daemon, in which case it connects to a vPerfGenerator server instance and receives tasks from it, or as a standalone application, in which case all configuration is read from files (in JSON format).
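For the standalone case, reading the configuration could look roughly like the sketch below. It uses the jansson JSON library purely for illustration, and the file layout (a `module` field and a `threadpool` object with `quantum_ms`) is invented; the actual tsload configuration schema and parsing code are not specified here.

```c
/* Hypothetical sketch of standalone-mode configuration loading.
 * Uses the jansson library; the file layout shown here is invented. */
#include <stdio.h>
#include <jansson.h>

int load_config(const char* filename) {
    json_error_t error;
    json_t* root = json_load_file(filename, 0, &error);

    if (root == NULL) {
        fprintf(stderr, "Cannot parse %s: %s (line %d)\n",
                filename, error.text, error.line);
        return -1;
    }

    /* hypothetical layout: which module implements the requests */
    json_t* module = json_object_get(root, "module");
    if (json_is_string(module))
        printf("workload module: %s\n", json_string_value(module));

    /* hypothetical layout: thread pool settings */
    json_t* tp = json_object_get(root, "threadpool");
    if (json_is_object(tp))
        printf("quantum: %lld ms\n",
               (long long) json_integer_value(json_object_get(tp, "quantum_ms")));

    json_decref(root);
    return 0;
}
```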
After the workload configuration is received (from the server or from files), the agent prepares the environment (e.g. allocates system resources), then notifies the server that it is ready to start workloads. Once confirmation is received, the agent runs requests until the time series associated with the workload ends or an explicit end of the experiment is requested.
The workload is run in a thread pool, which means that requests execute concurrently according to the multithread coefficient (provided with the rest of the workload configuration). Time is split into small intervals called quanta, and the time series of a workload is a vector that specifies how many requests should be executed during each quantum.
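A very simplified sketch of this scheduling follows. It is single-threaded for brevity (the real loader dispatches requests to a thread pool according to the multithread coefficient), the sleep does not account for time already spent on requests, and all names are illustrative.

```c
/* Simplified sketch of executing a workload's time series:
 * time_series[q] requests are issued during quantum q.
 * Single-threaded for brevity; names are illustrative. */
#include <stddef.h>
#include <time.h>

/* In tsload, running a request would be delegated to a loaded module. */
extern void run_request(void);

static void run_workload(const unsigned* time_series, size_t num_quanta,
                         long quantum_ms)
{
    struct timespec quantum = {
        .tv_sec  = quantum_ms / 1000,
        .tv_nsec = (quantum_ms % 1000) * 1000000L
    };

    for (size_t q = 0; q < num_quanta; ++q) {
        /* execute the number of requests planned for this quantum */
        for (unsigned r = 0; r < time_series[q]; ++r)
            run_request();

        /* wait for the next quantum boundary (crude: ignores the time
         * the requests themselves took) */
        nanosleep(&quantum, NULL);
    }
}
```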
Considering all of the above, a workload is identified by the following (a sketch of a corresponding data structure is given after this list):
- Parameters of the workload:
  - Request parameters describe all requests that will be executed. They do not vary during the experiment, but you can run two workloads simultaneously with different request parameters.
  - Technical parameters describe which system resources are involved in the experiment, e.g. names of network interfaces.
- Parameters of the thread pool, including the multithread coefficient and the quantum length
- The time series of the workload
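That sketch is shown below: the pieces above captured in a single structure. Field and type names are hypothetical and do not reflect the actual tsload data structures.

```c
/* Hypothetical sketch of what identifies a workload; the actual
 * tsload structures differ. */
#include <stddef.h>

typedef struct {
    const char* name;           /* parameter name, e.g. "filesize"  */
    const char* value;          /* value serialized as a string     */
} wl_param_t;

typedef struct {
    double multithread_coef;    /* degree of request concurrency    */
    long   quantum_ms;          /* quantum length in milliseconds   */
} threadpool_params_t;

typedef struct {
    const char*   module;            /* module implementing the requests  */
    wl_param_t*   request_params;    /* fixed for the whole experiment    */
    size_t        num_request_params;
    wl_param_t*   tech_params;       /* e.g. interface names, file paths  */
    size_t        num_tech_params;
    threadpool_params_t threadpool;
    unsigned*     time_series;       /* requests to issue per quantum     */
    size_t        num_quanta;
} workload_t;
```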
Sets of parameters and their possible values are provided by modules. For example, the table below lists the parameters of the dummy module (used for testing); a sketch of how a module might declare such a schema follows the table.
| type | name | data type | range | description |
|---|---|---|---|---|
| Request | filesize | size | 1 byte - 1 TB | Size of the file to be benchmarked |
| Request | blocksize | size | 1 byte - 16 MB | Size of the block to be used |
| Technical | path | string | len < 512 | Path to the file |
| Request | test | stringset | read, write | Benchmark name |
| Technical | sparse | bool | | Allow sparse creation of the file |
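A module could declare such a schema as a table of parameter descriptors, roughly as sketched below. The descriptor type and field names are hypothetical; the real tsload declarations are not shown in this document and may differ.

```c
/* Hypothetical parameter descriptors for the dummy module;
 * types and names are illustrative, not the actual tsload API. */
typedef enum { WLP_SIZE, WLP_STRING, WLP_STRINGSET, WLP_BOOL } wlp_type_t;

typedef struct {
    wlp_type_t  type;
    int         is_request;    /* 1 = request parameter, 0 = technical */
    const char* name;
    const char* range;         /* human-readable constraint            */
    const char* description;
} wlp_descr_t;

static const wlp_descr_t dummy_params[] = {
    { WLP_SIZE,      1, "filesize",  "1 byte - 1 TB",  "Size of the file to be benchmarked" },
    { WLP_SIZE,      1, "blocksize", "1 byte - 16 MB", "Size of the block to be used" },
    { WLP_STRING,    0, "path",      "len < 512",      "Path to the file" },
    { WLP_STRINGSET, 1, "test",      "read, write",    "Benchmark name" },
    { WLP_BOOL,      0, "sparse",    "",               "Allow sparse creation of the file" },
};
```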
See also Workload parameters
The monitor is completely independent from the loader, but obviously it should run on the testing system. It collects various statistics from the system and sends them to the server on demand. It is modular like tsload (moreover, it will use the same routines for loading modules).
Each module provides a monitoring schema describing what information can be collected and how. The user then chooses the monitors required for an experiment, and they run until the experiment finishes. The server stores monitor data in its repository and may export this information to various tools such as Rweb.
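To give an idea of what such a schema might look like on the agent side, here is a hypothetical sketch of a monitor module descriptor: a list of statistics it can collect plus an on-demand collection callback. The actual monitor interfaces are not shown in this document and may differ.

```c
/* Hypothetical sketch of a monitor module: a schema describing what
 * it can collect, plus an on-demand collection callback.
 * Names are illustrative, not the actual monitor API. */
typedef struct {
    const char* name;           /* e.g. "cpu_usage"   */
    const char* unit;           /* e.g. "percent"     */
    const char* description;
} mon_stat_descr_t;

typedef struct {
    const char*             module_name;
    const mon_stat_descr_t* schema;      /* monitoring schema            */
    int                     num_stats;
    /* invoked when the server requests data; fills values[] in the
     * same order as schema[] */
    int (*collect)(double* values);
} mon_module_t;
```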