PQC Computational Performance Testing
Benchmarking Tool Overview
This page provides instructions for running the automated computational benchmarking tool included with this project. It collects CPU and memory performance metrics for PQC algorithms using the Open Quantum Safe (OQS) Liboqs library.
The tool outputs raw performance metrics in CSV and text formats, which are later parsed using Python scripts for easier interpretation and analysis.
Notice: The HQC KEM algorithms are disabled by default in recent Liboqs versions due to a disclosed IND-CCA2 vulnerability. For benchmarking purposes, the setup process includes an optional flag to enable HQC, accompanied by a user confirmation prompt and warning. For instructions on enabling HQC, see the Advanced Setup Configuration Guide, and refer to the Disclaimer Page for more information on this issue.
Conducting PQC Computational Performance Testing
Starting the Automated Tests
The automated Liboqs test script is located in the scripts/testing-scripts directory and can be launched from within that directory using the following command:

```
./full-liboqs-test.sh
```
When executed, the testing tool will prompt you to set various testing parameters before the benchmarking process begins.
Configuring Testing Parameters
Before testing begins, the script will prompt you to configure a few testing parameters, which include:
- Whether the results should be compared with those from other machines and, if so, which machine ID to assign.
- The number of times each test should be run, allowing more accurate averages to be calculated.
Machine Comparison Option:
The first testing option is:

```
Do you intend to compare the results against other machines [y/n]?
```

Selecting `y` (yes) enables multi-machine result comparison. The script will prompt you to assign a machine ID to the results, which the Python parsing scripts use to organise and differentiate data from different systems. This is useful when comparing performance across devices or architectures. Responding `n` (no) assigns a default machine ID of `1` to the outputted results upon test completion.
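To illustrate how machine IDs keep result sets distinguishable, the sketch below combines per-machine CSV files with pandas and tags each row with its machine ID. The file path and file name used here are hypothetical stand-ins for whatever the project's parsing scripts actually produce; only the machine-ID concept comes from this page.

```python
# Hypothetical sketch: comparing parsed results across two machines.
# The CSV path and name below are illustrative, not the tool's real layout.
import pandas as pd

frames = []
for machine_id in (1, 2):
    df = pd.read_csv(f"test-data/results/liboqs/machine-{machine_id}/kem-speed.csv")
    df["machine_id"] = machine_id  # tag rows so each machine's data stays separable
    frames.append(df)

# Stack both machines into one table for side-by-side comparison.
combined = pd.concat(frames, ignore_index=True)
print(combined.groupby("machine_id").size())
```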
Assigning Number of Test Runs:
The second testing parameter is the number of test runs to perform. The script will present the following prompt:

```
Enter the number of test runs required:
```

Enter a valid integer to specify the total number of test runs. Note that a higher number of runs will significantly increase testing time, especially on more constrained devices. Running each test multiple times gathers enough data to calculate meaningful averages, which is vital when researching the performance of PQC algorithms.
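As a rough illustration of the kind of average calculation that multiple runs enable, the following sketch stacks one CSV per run and computes a per-algorithm mean. The file names and the `algorithm`/`keygen_us` columns are assumptions made for this example, not the tool's actual output schema.

```python
# Hypothetical sketch: averaging a timing metric over several test runs.
# File names and column names are illustrative only.
import pandas as pd

num_runs = 5
runs = [
    pd.read_csv(f"machine-1/kem-speed-run-{run}.csv")
    for run in range(1, num_runs + 1)
]

# Stack all runs and take the per-algorithm mean; this is the kind of
# average calculation that running each test multiple times supports.
mean_results = (
    pd.concat(runs)
    .groupby("algorithm", as_index=False)["keygen_us"]
    .mean()
)
print(mean_results)
```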
Outputted Performance Results
After testing has completed, performance results are stored in the newly created `test-data/up-results/liboqs/machine-x` directory, where x is the assigned machine ID. This directory stores all of the unparsed results from the automated testing tools.
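A minimal sketch for inspecting what a test run produced, assuming only the directory path given above; the files found inside depend on which tests were run and on the Liboqs version.

```python
# List the raw (un-parsed) result files for a given machine ID.
# Only the directory path comes from this page.
from pathlib import Path

machine_id = 1
results_dir = Path(f"test-data/up-results/liboqs/machine-{machine_id}")

for path in sorted(results_dir.rglob("*")):
    if path.is_file():
        print(path.relative_to(results_dir))
```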
These results are not yet ready for interpretation or graphing. To convert them into structured CSV files suitable for analysis, refer to the Parsing Performance Results page.
For a detailed description of the Liboqs performance metrics this project can gather, what they mean, and how the project's scripts structure the unparsed and parsed data, please refer to the Performance Metrics Guide.