# Integration with SDK
An overview of the interaction between the Profiler and tng-sdk-benchmark is given in the sections below.
## Metrics Types
Data generated by tng-sdk-benchmark can be of one of the following types:
- container-specific, e.g. `cpu_usage_total_usage` (common in all experiments)
- system-under-test-specific, e.g. `cpu_cores`, `mem_max` (is this common in all experiments?)
- VNF-specific, e.g. TO BE COMPLETED (can we have an example here?)
## Metrics Naming
Taking into consideration the Prometheus best practices for naming (https://prometheus.io/docs/practices/naming/), metric names are structured as follows:
container name | monitoring parameter | dimensions |
---|---|---|
cname | monitoring parameter | ns_id & experiment_id |
e.g. `mn_mp_output_vdu01_cpu_stats__online_cpus_int{ns_id:"ns-1vnf-ids-suricata",experiment_id:"suricata_performance"}`
Note1: If Prometheus is used, putting `ns_id` and `experiment_id` as dimensions (labels) makes it easy to query all time-series data for a specific network service and/or experiment. Otherwise they can be made part of the metric name, for example `mn_mp_output_vdu01_cpu_stats__online_cpus_int_ns-1vnf-ids-suricata_suricata_performance`, or skipped altogether: `mn_mp_output_vdu01_cpu_stats__online_cpus_int`.
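As a rough illustration of why the label-based variant is convenient, the sketch below queries Prometheus' HTTP API for exactly one service/experiment. The Prometheus address is an assumption; the metric and label names are the ones from the example above.

```python
import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumed Prometheus address

# PromQL selector: all samples of the metric for one network service & experiment
query = ('mn_mp_output_vdu01_cpu_stats__online_cpus_int'
         '{ns_id="ns-1vnf-ids-suricata",experiment_id="suricata_performance"}')

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": query})
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])
```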
Note2: Some of the data can also come in CSV format. Observations and tips on preparing it (applied in the sketch below):
- Question: What are the `preread` and `read` fields?
- Tip: Columns that contain arrays should be split.
- Tip: Timestamp values should be unique (not repeated within the column).
- Tip: The `id` column can be removed, since the timestamp can be used as the primary key.
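The following pandas sketch applies those tips to a raw CSV export. The file name and the array column (`cpu_stats__percpu_usage`) are hypothetical placeholders and are not guaranteed to match the actual tng-sdk-benchmark output.

```python
import ast
import pandas as pd

# Hypothetical input file and column names, used only to illustrate the tips.
df = pd.read_csv("experiment_metrics.csv")

# Tip: split columns that contain arrays into one column per element.
if "cpu_stats__percpu_usage" in df.columns:
    percpu = df["cpu_stats__percpu_usage"].apply(ast.literal_eval).apply(pd.Series)
    percpu.columns = [f"cpu_stats__percpu_usage_{i}" for i in percpu.columns]
    df = pd.concat([df.drop(columns=["cpu_stats__percpu_usage"]), percpu], axis=1)

# Tip: keep timestamp values unique.
df = df.drop_duplicates(subset="timestamp", keep="last")

# Tip: drop the id column and use the timestamp as the primary key.
df = df.drop(columns=["id"], errors="ignore").set_index("timestamp")
```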
## Metrics Structure
If metrics come from a specific network service and a single experiment, the tabular format will be like this:
timestamp | m1 | m2 | m3 |
---|---|---|---|
t1 | value11 | value12 | value13 |
t2 | value21 | value22 | value23 |
... | ... | ... | ... |
tn | valuen1 | valuen2 | valuen3 |
If metrics come from a specific network service and more than one experiment, the tabular format will be like this:
(Note4: If a profiling analysis has to run on data coming from different experiments, only the metrics that are common to all experiments can be used.)
Experiment 1:
timestamp | m1 | m2 | m3 |
---|---|---|---|
t1 | value11 | value12 | value13 |
... | ... | ... | ... |
tn | valuen1 | valuen2 | valuen3 |
Experiment 2:
timestamp | m1 | m3 | m4 |
---|---|---|---|
tz | valuez1 | valuez3 | valuez4 |
... | ... | ... | ... |
tk | valuek1 | valuek3 | valuek4 |
Result Dataset to be analyzed:
(Note5: m1' and m3' do not include the dimension info, so that they can be matched across experiments.)
timestamp | m1' | m3' |
---|---|---|
t1 | value11 | value13 |
... | ... | ... |
tk | valuek1 | valuek3 |
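A minimal sketch of how such a combined dataset could be assembled with pandas, assuming each experiment has been exported to its own CSV file (the file names below are placeholders):

```python
import pandas as pd

# Hypothetical per-experiment CSV files, indexed by timestamp as in the tables above.
exp1 = pd.read_csv("experiment1.csv", index_col="timestamp")  # columns: m1, m2, m3
exp2 = pd.read_csv("experiment2.csv", index_col="timestamp")  # columns: m1, m3, m4

# Keep only the metrics common to all experiments (m1 and m3 in the example).
common = sorted(set(exp1.columns) & set(exp2.columns))

# Stack the experiments into one dataset to be analyzed; the dimension info
# (ns_id / experiment_id) is assumed to have been dropped from the column names
# beforehand so that identical metrics line up.
result = pd.concat([exp1[common], exp2[common]]).sort_index()
```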
## Profiling
The Profiler could support both ways of interaction (Prometheus and CSV). Analyzing CSV files is simpler, while fetching data from Prometheus supports a more sophisticated way of fetching and combining metric values.
For more details see the APIs:
- Invoke the profiling analysis process, fetching data via Prometheus
- Invoke the profiling analysis process, fetching data via CSV file(s)
(Note6: The Profiler currently operates as a web server, so its execution is done via curl requests, but we also foresee supporting a CLI operation mode.)
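For illustration, an HTTP invocation could look like the Python snippet below. The base URL, route and payload fields are hypothetical placeholders, not the Profiler's actual API; see the API pages linked above for the real routes and parameters.

```python
import requests

PROFILER_URL = "http://localhost:8080"  # hypothetical Profiler address

# Hypothetical payload and route, only to illustrate the HTTP mode of operation;
# consult the API pages linked above for the actual endpoints and schema.
payload = {
    "ns_id": "ns-1vnf-ids-suricata",
    "experiment_id": "suricata_performance",
    "source": "prometheus",  # or "csv"
}

resp = requests.post(f"{PROFILER_URL}/profiling", json=payload)
resp.raise_for_status()
print(resp.json())
```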