Parsing Performance Results - crt26/pqc-evaluation-tools GitHub Wiki
## Parsing Test Results Overview
This page explains how to use the project's automated result-parsing script to convert raw test outputs into structured CSV files suitable for analysis. After running any of the Liboqs or OQS-Provider performance testing tools, the results are saved as unparsed logs. These need to be processed using the `parse_results.py` script to generate cleaned, organised performance data with averaged results.
The script provides three options for parsing results:
- Only Liboqs testing data
- Only OQS-Provider TLS testing data
- Both Liboqs and OQS-Provider testing data
If parsing results for multiple machine IDs, please ensure that all relevant test results are located in the `test-data/up-results` directory before running the script. When executing the script, you will be prompted to enter the testing parameters, such as the number of machines tested and the number of testing runs conducted in each testing category †.
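As an illustration only, a layout such as the following would place results for two machines under that directory (the machine-ID folder names and file contents here are hypothetical; check the structure your own test runs produce for the exact naming):

```
test-data/up-results/
├── machine-1/
│   └── ... raw Liboqs / OQS-Provider result files ...
└── machine-2/
    └── ... raw Liboqs / OQS-Provider result files ...
```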
If you run the parsing script on a different system or environment from the one where the `setup.sh` script was executed, ensure the `pandas` Python package is installed. This is the only external dependency required for parsing. You can install it using:

```
pip install pandas
```
† Note: The script currently requires that all machines used for testing ran the same number of test runs in a given testing category (Liboqs/OQS-Provider). If there’s a mismatch, parse each machine’s results separately, then rename and organise the output manually if needed.
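To illustrate what the averaging step does conceptually, here is a small pandas sketch. This is not the project's actual code, and the column, algorithm, and variable names are invented for the example; it simply shows how per-run measurements can be stacked and averaged per algorithm, which is the kind of aggregation `parse_results.py` performs:

```python
# Illustrative sketch only: averaging measurements across multiple test
# runs with pandas. Column names ("algorithm", "keygen_ms") and the
# sample values are hypothetical, not the script's real schema.
import pandas as pd

# Pretend each DataFrame holds one run's timings for two algorithms.
run1 = pd.DataFrame({"algorithm": ["Kyber512", "Dilithium2"],
                     "keygen_ms": [0.10, 0.30]})
run2 = pd.DataFrame({"algorithm": ["Kyber512", "Dilithium2"],
                     "keygen_ms": [0.12, 0.28]})

# Stack the runs, then average each algorithm's measurements.
combined = pd.concat([run1, run2], ignore_index=True)
averages = combined.groupby("algorithm", as_index=False)["keygen_ms"].mean()
print(averages)
```

This is also why all machines must have the same number of runs per category: a straightforward concatenate-and-average like the above would silently weight machines unevenly if their run counts differed.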
## Parsing Script Usage
The parsing script can be executed on both Linux and Windows systems. To run it, use the following command (depending on your system's Python alias):
```
python parse_results.py
```
## Parsed Results Output
Once parsing is complete, the parsed results will be stored in the newly created `test-data/results` directory. This includes CSV files containing the detailed test results and automatically calculated averages for each test category. These files are ready for further analysis or can be imported into graphing tools for visualisation.
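As a starting point for such analysis, the parsed CSVs can be loaded back into pandas. The snippet below is a hypothetical example: the column names and values are made up (the real output schema is described in the Performance Metrics Guide), and an in-memory CSV stands in for a file under `test-data/results/`:

```python
# Hypothetical example of analysing a parsed CSV. The columns
# ("algorithm", "sign_ms", "verify_ms") and values are assumptions,
# not the actual schema produced by parse_results.py.
import io
import pandas as pd

# Simulate a small parsed CSV in memory; in practice, pass the path of
# a file from the test-data/results directory to pd.read_csv instead.
csv_text = (
    "algorithm,sign_ms,verify_ms\n"
    "Dilithium2,0.45,0.15\n"
    "Falcon-512,0.80,0.09\n"
)
df = pd.read_csv(io.StringIO(csv_text))

# Example analysis: find the algorithm with the fastest signing time.
fastest = df.loc[df["sign_ms"].idxmin(), "algorithm"]
print(fastest)
```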
Please refer to the Performance Metrics Guide for a detailed description of the performance metrics this project can gather, what they mean, and how these scripts structure the unparsed and parsed data.