Advanced TLS Testing Customisation
The TLS performance testing script supports several configuration options that allow the benchmarking process to be tailored to specific environments. These options are useful when operating in restricted networks or virtualised environments, or when precise control over test behaviour is required.
Supported customisation features include:
- TCP port configuration
- Control signal behaviour
- Disabling automatic result parsing
If the default TCP ports are incompatible with the testing environment, custom ports can be specified at runtime using the following flags. Port values must fall within the range 1024–65535.
| Flag | Description |
| --- | --- |
| `--server-control-port=<PORT>` | Set the server control port (1024-65535) |
| `--client-control-port=<PORT>` | Set the client control port (1024-65535) |
| `--s-server-port=<PORT>` | Set the OpenSSL s_server port (1024-65535) |
When using custom TCP ports, ensure the same values are provided to both the server and client instances; otherwise, testing will fail.
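For example, both instances could be launched with matching port values as in the sketch below. This is only an illustrative sketch: the port numbers are arbitrary, and the assumption that the same script is invoked on both machines may not match your setup, so substitute the actual server-side and client-side commands used in your environment.

```sh
# On the server machine (base invocation assumed; substitute your actual
# server-side test command):
./pqc_tls_performance_test.sh \
    --server-control-port=12001 \
    --client-control-port=12002 \
    --s-server-port=4433

# On the client machine, pass identical port values so both instances agree:
./pqc_tls_performance_test.sh \
    --server-control-port=12001 \
    --client-control-port=12002 \
    --s-server-port=4433
```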
By default, the tool inserts a 0.25-second delay when sending control signals between the server and client instances. This avoids timing issues during the control-signal exchange that could otherwise cause testing to fail.
If the default delay is unsuitable for your environment, you can either set a custom delay or disable it entirely:
| Flag | Description |
| --- | --- |
| `--control-sleep-time=<TIME>` | Set the control sleep time in seconds (integer or float) |
| `--disable-control-sleep` | Disable the control signal sleep time |
Please note that the `--control-sleep-time` flag cannot be used together with the `--disable-control-sleep` flag.
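As a rough illustration, the delay could be tuned or removed with one of the following invocations (the 0.5-second value is arbitrary, and the base command is assumed to be the same test script shown later on this page):

```sh
# Increase the control-signal delay to 0.5 seconds (example value only):
./pqc_tls_performance_test.sh --control-sleep-time=0.5

# OR remove the delay entirely (must not be combined with --control-sleep-time):
./pqc_tls_performance_test.sh --disable-control-sleep
```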
The performance testing script triggers automatic result parsing upon test completion. This behaviour can be disabled by passing the following flag at runtime:
```sh
./pqc_tls_performance_test.sh --disable-result-parsing
```
Disabling automatic parsing may be appropriate in scenarios such as:
- Collecting raw outputs for batch processing at a later stage
- Running tests in low-resource environments
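As a minimal sketch of the first scenario, raw outputs could be collected and archived for later, offline parsing. The results path used below is an assumption and should be replaced with wherever your installation writes its raw TLS test output:

```sh
# Run the tests without automatic parsing, then archive the raw output files
# for later batch processing. The results directory is a hypothetical path;
# check where your installation stores its unparsed TLS results.
./pqc_tls_performance_test.sh --disable-result-parsing
tar -czf tls-raw-results.tar.gz ./test-data/up-results
```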