# PQC Computational Performance Metrics
## Collected Performance Metrics
The computational performance tests collect detailed CPU and memory usage metrics for PQC digital signature and KEM algorithms. Using the Liboqs library, the automated testing tool performs each cryptographic operation and outputs the results, which are separated into two categories: CPU benchmarking and memory benchmarking.
### CPU Benchmarking
The CPU benchmarking results measure the execution time and efficiency of various cryptographic operations for each PQC algorithm. Using the Liboqs `speed_kem` and `speed_sig` benchmarking tools, each operation is run repeatedly within a fixed time window (3 seconds by default). The tool performs as many iterations as possible in that time frame and records detailed performance metrics.
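As an illustration of this measurement approach (not the Liboqs tooling itself), the following minimal Python sketch shows how a fixed-time-window benchmark produces the metrics listed below; the 3-second window default matches the description above, while the placeholder `operation()` callable is an assumption for the example.

```python
import time
import statistics

def fixed_window_benchmark(operation, window_seconds=3.0):
    """Run `operation` repeatedly for a fixed time window and report
    iteration count, total time, mean time per operation, and the
    population standard deviation of the per-operation timings."""
    timings_us = []
    start = time.perf_counter()
    while time.perf_counter() - start < window_seconds:
        op_start = time.perf_counter()
        operation()
        timings_us.append((time.perf_counter() - op_start) * 1e6)
    total_s = time.perf_counter() - start

    return {
        "Iterations": len(timings_us),
        "Total Time (s)": round(total_s, 3),
        "Time (us): mean": statistics.fmean(timings_us),
        "pop. stdev": statistics.pstdev(timings_us),
    }

# Example usage with a dummy workload standing in for a KEM or signature operation.
if __name__ == "__main__":
    print(fixed_window_benchmark(lambda: sum(range(1000))))
```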
The table below describes the metrics included in the CPU benchmarking results:
| Metric | Description |
|---|---|
| Iterations | Number of times the operation was executed during the test window. |
| Total Time (s) | Total duration of the test run (typically fixed at 3 seconds). |
| Time (us): mean | Average time per operation in microseconds. |
| pop. stdev | Population standard deviation of the operation time, indicating variance. |
| CPU cycles: mean | Average number of CPU cycles required per operation. |
| pop. stdev (cycles) | Standard deviation of CPU cycles per operation, indicating consistency. |
### Memory Benchmarking
The memory benchmarking tool evaluates how much memory individual PQC cryptographic operations consume when executed on the system. This is accomplished by running the `test-kem-mem` and `test-sig-mem` Liboqs tools for each PQC algorithm and its respective operations under the Valgrind Massif profiler. Each operation is profiled once per run to gather peak memory usage, and testing can be repeated across multiple runs to ensure consistency.
The following table describes the memory-related metrics captured after the result parsing process has been completed:
| Metric | Description |
|---|---|
| inits | Number of memory snapshots (or samples) collected by Valgrind during profiling. |
| maxBytes | Peak total memory usage across all memory segments (heap + stack + others). |
| maxHeap | Maximum memory allocated on the heap during the execution of the operation. |
| extHeap | Heap memory allocated externally (e.g., through system libraries). |
| maxStack | Maximum stack memory usage recorded during the test. |
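For context, a Valgrind Massif output file (produced by an invocation such as `valgrind --tool=massif --stacks=yes <tool>`) records one block per snapshot containing `mem_heap_B`, `mem_heap_extra_B`, and `mem_stacks_B` fields. The sketch below is a stand-alone parser written for this page, not the project's own parsing script; the mapping of those fields onto the metrics in the table above, and the file name in the usage comment, are assumptions.

```python
import re
from pathlib import Path

def parse_massif(path):
    """Parse a Valgrind Massif output file and derive peak-memory figures
    similar to the metrics in the table above (assumed mapping)."""
    text = Path(path).read_text()
    heap = [int(v) for v in re.findall(r"^mem_heap_B=(\d+)", text, re.M)]
    extra = [int(v) for v in re.findall(r"^mem_heap_extra_B=(\d+)", text, re.M)]
    stacks = [int(v) for v in re.findall(r"^mem_stacks_B=(\d+)", text, re.M)]

    totals = [h + e + s for h, e, s in zip(heap, extra, stacks)]
    return {
        "inits": len(heap),                  # number of snapshots recorded
        "maxBytes": max(totals, default=0),  # peak of heap + extra + stack (assumed)
        "maxHeap": max(heap, default=0),
        "extHeap": max(extra, default=0),
        "maxStack": max(stacks, default=0),
    }

# Example usage (file name is hypothetical):
# print(parse_massif("massif.out.12345"))
```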
## Computational Performance Result Data Storage Structure
All performance data is initially stored as unparsed output when using the computational performance benchmarking script (`pqc_performance_test.sh`). This raw data is then automatically processed using the Python parsing script to generate structured CSV files for analysis, including averages across test runs.
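As a rough illustration of the averaging step (not the project's actual parsing script), the sketch below combines several per-run CSV files and averages their numeric columns with pandas; the directory argument, file-name pattern, and `Algorithm`/`Operation` column names are assumptions about the parsed layout.

```python
from pathlib import Path
import pandas as pd

def average_runs(results_dir, group_cols=("Algorithm", "Operation")):
    """Average the numeric columns of several per-run CSV files,
    grouped by the identifying columns (names are assumptions)."""
    frames = [pd.read_csv(p) for p in sorted(Path(results_dir).glob("*.csv"))]
    combined = pd.concat(frames, ignore_index=True)
    keys = [c for c in group_cols if c in combined.columns]
    if not keys:  # fall back to a single overall average if no ID columns are present
        return combined.mean(numeric_only=True).to_frame().T
    return combined.groupby(keys, as_index=False).mean(numeric_only=True)

# Example usage (paths are hypothetical):
# averages = average_runs("test_data/results/computational_performance/machine_1/speed_results")
# averages.to_csv("speed_averages.csv", index=False)
```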
The table below outlines where this data is stored and how it's organised in the project's directory structure:
| Data Type | State | Description | Location (relative to `test_data/`) |
|---|---|---|---|
| CPU Speed | Un-parsed | Raw `.csv` outputs from `speed_kem` and `speed_sig`. | `up_results/computational_performance/machine_x/raw_speed_results/` |
| CPU Speed | Parsed | Cleaned CSV files with metrics and averages. | `results/computational_performance/machine_x/speed_results/` |
| Memory Usage | Un-parsed | Valgrind Massif `.txt` outputs from signature/KEM profiling. | `up_results/computational_performance/machine_x/mem_results/` |
| Memory Usage | Parsed | CSV summaries of peak memory usage. | `results/computational_performance/machine_x/mem_results/` |
| Performance Averages | Parsed | Averaged metrics across test runs. | `results/computational_performance/machine_x/` |
Here, `machine_x` is the Machine-ID number assigned to the results when executing the testing scripts. If no custom Machine-ID is assigned, the default ID of 1 is used for the results.
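As a quick illustration of how a Machine-ID maps onto the directory layout above, the helper below builds the parsed-results paths for a given ID. This is a sketch written for this page rather than part of the toolset, and the exact `machine_<ID>` directory naming is an assumption.

```python
from pathlib import Path

def parsed_result_dirs(machine_id=1, base="test_data"):
    """Return the parsed-results directories for a given Machine-ID,
    following the layout described in the table above (naming assumed)."""
    machine = Path(base) / "results" / "computational_performance" / f"machine_{machine_id}"
    return {
        "speed": machine / "speed_results",
        "memory": machine / "mem_results",
        "averages": machine,
    }

# Example usage with the default Machine-ID of 1:
# print(parsed_result_dirs())
```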