Automated Performance Testing Scripts

Performance Testing Scripts Overview

The computational performance testing suite benchmarks standalone PQC cryptographic operations, measuring both CPU and memory usage. It currently uses the Liboqs library and its testing tools to evaluate all supported KEM and digital signature algorithms. Testing is provided through a single automation script which handles both CPU and memory performance testing for PQC schemes. The script is designed to be run interactively, prompting the user for test parameters such as the machine ID to assign to the results and the number of test iterations.
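For example, an interactive session might look like the following. The prompt wording here is illustrative only; the actual prompts emitted by the script may differ.

```bash
# Run from the directory containing the script (its location within the
# repository may vary). The prompt text below is hypothetical.
./pqc_performance_test.sh
# Enter the machine ID to assign to the results: 1
# Enter the number of test runs: 5
```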

pqc_performance_test.sh

This script performs fully automated CPU and memory performance benchmarking of the algorithms included in the Liboqs library. It runs speed tests using Liboqs' built-in benchmarking binaries and uses Valgrind's Massif tool to capture detailed memory usage metrics for each cryptographic operation. The results are stored in dedicated directories, organised by machine ID.

The script handles the following steps (a simplified sketch of this flow is shown after the list):

  • Setting up environment and directory paths

  • Prompting the user for test parameters (machine ID and number of runs)

  • Performing speed and memory tests for each algorithm for the specified number of runs

  • Organising raw result files for easy parsing

  • Automatically calling the Python parsing scripts to process the raw performance results
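The sketch below outlines this flow in shell form. All function names, prompts, and directory variables are hypothetical placeholders rather than the script's actual identifiers; it is meant only to show the overall structure.

```bash
#!/bin/bash
# Structural sketch only: function names and prompts are hypothetical
# placeholders, not the identifiers used by pqc_performance_test.sh.

RESULTS_ROOT="test_data/up_results/computational_performance"

run_speed_tests() {   # placeholder for the Liboqs speed benchmarking step
    echo "speed test run $2 -> $1/raw_speed_results"
}

run_memory_tests() {  # placeholder for the Valgrind Massif profiling step
    echo "memory test run $2 -> $1/mem_results"
}

# Prompt the user for test parameters
read -rp "Enter machine ID: " machine_id
read -rp "Enter number of test runs: " num_runs

# Set up the per-machine result directories
machine_dir="$RESULTS_ROOT/machine_$machine_id"
mkdir -p "$machine_dir/raw_speed_results" "$machine_dir/mem_results"

# Perform speed and memory tests for the requested number of runs
for run in $(seq 1 "$num_runs"); do
    run_speed_tests "$machine_dir" "$run"
    run_memory_tests "$machine_dir" "$run"
done

# The real script finishes by calling the project's Python parsing
# scripts on the raw results (exact entry point omitted here).
```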

Speed Test Functionality:

The speed test functionality benchmarks the execution time of KEM and digital signature algorithms using the Liboqs speed-kem and speed-sig tools. Raw performance results are saved to the test_data/up_results/computational_performance/machine_x/raw_speed_results directory.
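For reference, the compiled Liboqs benchmarking binaries (typically named speed_kem and speed_sig in the build's tests directory) can also be invoked directly. The paths, flags, and algorithm name below are assumptions based on a standard Liboqs build; the automation script resolves these details itself.

```bash
# Assumes a standard CMake build of Liboqs; adjust paths to your build tree.
cd liboqs/build/tests

# Benchmark a single KEM (algorithm name is an example) for a fixed
# duration per operation
./speed_kem --duration 3 ML-KEM-768

# Benchmark all enabled signature schemes with default settings
./speed_sig
```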

Memory Test Functionality:

Memory usage is profiled using the Liboqs test-kem-mem and test-sig-mem tools in combination with Valgrind’s Massif profiler. This setup captures detailed memory statistics for each cryptographic operation. Raw profiling data is initially stored in a temporary directory, then moved to test_data/up_results/computational_performance/machine_x/mem_results.
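A minimal sketch of this style of Massif profiling is shown below. The binary names follow the Liboqs test suite, but the exact argument format expected by the memory test binaries is an assumption here, as is the algorithm name.

```bash
# Illustrative only: the argument format for test_kem_mem is an assumption.
cd liboqs/build/tests

# Profile one KEM operation (here, key generation) under Massif
valgrind --tool=massif --massif-out-file=massif.out.kem \
    ./test_kem_mem ML-KEM-768 0

# Summarise the recorded heap profile
ms_print massif.out.kem | head -n 40
```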

All results are saved in the test_data/up_results/computational_performance/machine_x directory, where x corresponds to the assigned machine ID. By default, the script then parses these raw performance results using the Python parsing scripts included in this project.
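Putting the paths above together, the per-machine results directory has roughly the following layout (shown for an example machine ID of 1):

```
test_data/up_results/computational_performance/machine_1/
├── raw_speed_results/   # raw output from the Liboqs speed benchmarking runs
└── mem_results/         # Valgrind Massif profiling data
```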