# PQC Computational Performance Testing
## Benchmarking Tool Overview
This page provides detailed instructions for using the automated Post-Quantum Cryptographic (PQC) computational performance testing tool. It allows users to gather benchmarking data for PQC algorithms using the Open Quantum Safe (OQS) Liboqs library. It automatically collects raw performance data in CSV and text formats, which can then be parsed into structured, analysis-ready results using the included Python scripts.
## Supported PQC Algorithms
This tool supports all PQC algorithms available through the Liboqs library. However, some limitations apply and should be considered when using the computational performance testing tool.
For a full list of algorithms currently supported in this project’s performance testing suite, see:
## Conducting PQC Computational Performance Testing
### Starting the Automated Tests
The automated test script is located in the `scripts/testing_scripts` directory and can be launched using the following command:

```sh
./pqc_performance_test.sh
```
When executed, the testing script will prompt you to configure the benchmarking parameters.
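For example, assuming the repository root as your starting directory, the tool can be launched as follows (a minimal sketch; adjust the path to match your checkout):

```sh
# Move into the testing-scripts directory (path taken from above) and
# start the automated computational performance tests.
cd scripts/testing_scripts
./pqc_performance_test.sh
```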
### Configuring Testing Parameters
Before testing begins, the script will prompt you to configure a few testing parameters, which include:
- Whether a custom Machine-ID should be assigned to the results.
- How many times each test should be run, allowing for a more accurate average calculation.
**Machine-ID Assignment:**
The first testing option is:

```
Do you wish to assign a custom Machine-ID to the performance results? [y/n]?
```

Selecting `y` (yes) allows you to assign a Machine-ID, which the parsing scripts use to organise and distinguish results from different systems; this is useful for cross-device or cross-architecture comparisons. If you select `n` (no), the default Machine-ID of 1 is applied.
**Assigning the Number of Test Runs:**

The second testing parameter is the number of test runs to perform. The script will present the following prompt:

```
Enter the number of test runs required:
```

Enter a valid integer to specify how many times each test should run. A higher number of runs increases the total testing time, especially on resource-constrained devices, but it also improves the accuracy of the resulting performance averages.
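As an illustration, a run that keeps the default Machine-ID and performs five test runs would answer the prompts along these lines (prompt text taken from above; the exact interactive flow may differ slightly between versions):

```
Do you wish to assign a custom Machine-ID to the performance results? [y/n]? n
Enter the number of test runs required: 5
```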
## Outputted Performance Results
After testing completes, raw performance results are saved to the following directory:

```
test_data/up_results/computational_performance/machine_x
```

where `machine_x` refers to the assigned Machine-ID. If no ID was specified, the default ID of 1 is used.
By default, the testing script will automatically trigger the parsing system upon completion. It passes the Machine-ID and total number of test runs to the parsing tool, which then processes the raw output into structured CSV files.
These parsed results are saved in:

```
test_data/results/computational_performance/machine_x
```
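For example, with the default Machine-ID of 1, the raw and parsed output can be inspected from the repository root as follows (a sketch; the directory contents depend on which tests were run):

```sh
# Raw Liboqs output (CSV and text files) for Machine-ID 1
ls test_data/up_results/computational_performance/machine_1

# Parsed, analysis-ready CSV results for Machine-ID 1
ls test_data/results/computational_performance/machine_1
```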
To skip automatic parsing and only output the raw test results, pass the `--disable-result-parsing` flag when launching the test script:

```sh
./pqc_performance_test.sh --disable-result-parsing
```
For complete details on parsing functionality and a breakdown of the collected computational performance metrics, refer to the following documentation: