Performance measurement of a sequential write workload - ascar-io/pilot-bench GitHub Wiki

In this tutorial we generate a sequential write workload to benchmark the throughput a device or file system can sustain.

libpilot handles the following tasks for us:

  1. removing the warm-up phase so that we can measure the sustainable throughput
  2. calculating the confidence interval (at the default 95% confidence level)
  3. determining optimal benchmark duration

Because we want to get the result in a few minutes, not hours, our benchmark will have the following drawbacks:

  1. different locations on the device or file system might sustain different throughputs
  2. aging of the file system might cause degraded throughput

We will provide the following input to libpilot:

  1. device name or file name to write to
  2. the size upper limit: the write benchmark will not write beyond this limit
  3. the start size: the write benchmark will start at this size, and libpilot will gradually extend the test duration until a satisfactory confidence level is reached
  4. a function that does the actual write

The first three inputs are easy to provide. The workload function needs to run the benchmark for the length libpilot specifies and return the measured throughput to libpilot. If the workload function can report per-I/O throughput, it greatly accelerates the benchmark process, because libpilot needs hundreds of samples to estimate the confidence interval of the mean result. If each round of the workload function provided only one overall throughput, libpilot would need to run it for hundreds of rounds and would have a hard time detecting and removing the warm-up phase.

The ready-to-use program itself can be found here: https://github.com/mlogic/pilot-bench/blob/master/lib/test/func_test_seq_write.cc. The program saves the test results in XML files, and you can use the pilot command line tool to display or compare results (not implemented yet).