Verify overview - vincentkfu/fio-blog GitHub Wiki
I am grateful to Adam Manzanares for feedback on an earlier version of this post.
Earlier this year Ankit Kumar and I gave fio's verify feature a tune-up. Ankit resolved a swathe of verify-related issues reported on GitHub while I led the creation of a verify test script. Together we improved our understanding of the inner workings of fio's verify feature, and it seems reasonable to cap off this work by sharing some of what we learned. Fio's documentation has an entire section devoted to verification, but it can be useful to have more details about how fio carries out verification. This is the first in a series of blog posts on fio's verify feature. It begins by walking through a simple example verify workload. Then I will cover fio's verify options and outline the steps that fio undertakes to carry out a verification workload. Finally, I will go through a series of additional verify examples.
The discussion here is based on fio 3.40.
Basic Verify Example: Sequential Write
Let us begin with a basic sequential write workload. To make the discussion manageable, the fio job writes four 4K blocks to a 16K file, and to show what fio is doing the job is run with --debug=verify,io. I arbitrarily set --verify=md5 to have fio use md5 checksums to assess data integrity. The output is below.
1 root@localhost:~/fio-dev/fio-canonical# ./fio-3.40 --name=test --filesize=16k --verify=md5 --rw=write --debug=verify,io
2 fio: set debug option verify
3 fio: set debug option io
4 verify 4635 td->trim_verify=0
5 test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
6 fio-3.40
7 Starting 1 process
8 test: Laying out IO file (1 file / 0MiB)
9 io 4635 declare unneeded cache test.0.0: 0/16384
10 io 4637 declare unneeded cache test.0.0: 0/16384
11 io 4637 fill: io_u 0x55be95e39340: off=0x0,len=0x1000,ddir=1,file=test.0.0
12 io 4637 prep: io_u 0x55be95e39340: off=0x0,len=0x1000,ddir=1,file=test.0.0
13 verify 4637 fill random bytes len=4096
14 verify 4637 fill md5 io_u 0x55be95e39340, len 4096
15 io 4637 queue: io_u 0x55be95e39340: off=0x0,len=0x1000,ddir=1,file=test.0.0
16 io 4637 complete: io_u 0x55be95e39340: off=0x0,len=0x1000,ddir=1,file=test.0.0
17 io 4637 fill: io_u 0x55be95e39340: off=0x1000,len=0x1000,ddir=1,file=test.0.0
18 io 4637 prep: io_u 0x55be95e39340: off=0x1000,len=0x1000,ddir=1,file=test.0.0
19 verify 4637 fill random bytes len=4096
20 verify 4637 fill md5 io_u 0x55be95e39340, len 4096
21 io 4637 queue: io_u 0x55be95e39340: off=0x1000,len=0x1000,ddir=1,file=test.0.0
22 io 4637 complete: io_u 0x55be95e39340: off=0x1000,len=0x1000,ddir=1,file=test.0.0
23 io 4637 fill: io_u 0x55be95e39340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
24 io 4637 prep: io_u 0x55be95e39340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
25 verify 4637 fill random bytes len=4096
26 verify 4637 fill md5 io_u 0x55be95e39340, len 4096
27 io 4637 queue: io_u 0x55be95e39340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
28 io 4637 complete: io_u 0x55be95e39340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
29 io 4637 fill: io_u 0x55be95e39340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
30 io 4637 prep: io_u 0x55be95e39340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
31 verify 4637 fill random bytes len=4096
32 verify 4637 fill md5 io_u 0x55be95e39340, len 4096
33 io 4637 queue: io_u 0x55be95e39340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
34 io 4637 complete: io_u 0x55be95e39340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
35 verify 4637 starting loop
36 io 4637 declare unneeded cache test.0.0: 0/16384
37 verify 4637 get_next_verify: ret io_u 0x55be95e39340
38 io 4637 prep: io_u 0x55be95e39340: off=0x0,len=0x1000,ddir=0,file=test.0.0
39 io 4637 queue: io_u 0x55be95e39340: off=0x0,len=0x1000,ddir=0,file=test.0.0
40 io 4637 complete: io_u 0x55be95e39340: off=0x0,len=0x1000,ddir=0,file=test.0.0
41 verify 4637 md5 verify io_u 0x55be95e39340, len 4096
42 verify 4637 get_next_verify: ret io_u 0x55be95e39340
43 io 4637 prep: io_u 0x55be95e39340: off=0x1000,len=0x1000,ddir=0,file=test.0.0
44 io 4637 queue: io_u 0x55be95e39340: off=0x1000,len=0x1000,ddir=0,file=test.0.0
45 io 4637 complete: io_u 0x55be95e39340: off=0x1000,len=0x1000,ddir=0,file=test.0.0
46 verify 4637 md5 verify io_u 0x55be95e39340, len 4096
47 verify 4637 get_next_verify: ret io_u 0x55be95e39340
48 io 4637 prep: io_u 0x55be95e39340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
49 io 4637 queue: io_u 0x55be95e39340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
50 io 4637 complete: io_u 0x55be95e39340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
51 verify 4637 md5 verify io_u 0x55be95e39340, len 4096
52 verify 4637 get_next_verify: ret io_u 0x55be95e39340
53 io 4637 prep: io_u 0x55be95e39340: off=0x3000,len=0x1000,ddir=0,file=test.0.0
54 io 4637 queue: io_u 0x55be95e39340: off=0x3000,len=0x1000,ddir=0,file=test.0.0
55 io 4637 complete: io_u 0x55be95e39340: off=0x3000,len=0x1000,ddir=0,file=test.0.0
56 verify 4637 md5 verify io_u 0x55be95e39340, len 4096
57 verify 4637 get_next_verify: empty
58 verify 4637 exiting loop
59 io 4637 close ioengine psync
60 io 4637 free ioengine psync
61
62 test: (groupid=0, jobs=1): err= 0: pid=4637: Wed Jun 11 23:12:44 2025
63 read: IOPS=1000, BW=4000KiB/s (4096kB/s)(16.0KiB/4msec)
64 clat (nsec): min=1323, max=31213, avg=9847.25, stdev=14289.63
65 lat (nsec): min=3242, max=37673, avg=14463.75, stdev=15682.99
66 clat percentiles (nsec):
67 | 1.00th=[ 1320], 5.00th=[ 1320], 10.00th=[ 1320], 20.00th=[ 1320],
68 | 30.00th=[ 2736], 40.00th=[ 2736], 50.00th=[ 2736], 60.00th=[ 4128],
69 | 70.00th=[ 4128], 80.00th=[31104], 90.00th=[31104], 95.00th=[31104],
70 | 99.00th=[31104], 99.50th=[31104], 99.90th=[31104], 99.95th=[31104],
71 | 99.99th=[31104]
72 write: IOPS=500, BW=2000KiB/s (2048kB/s)(16.0KiB/8msec); 0 zone resets
73 clat (nsec): min=11310, max=86510, avg=34285.25, stdev=35571.75
74 lat (usec): min=48, max=2583, avg=1057.11, stdev=1238.92
75 clat percentiles (nsec):
76 | 1.00th=[11328], 5.00th=[11328], 10.00th=[11328], 20.00th=[11328],
77 | 30.00th=[12096], 40.00th=[12096], 50.00th=[12096], 60.00th=[27264],
78 | 70.00th=[27264], 80.00th=[86528], 90.00th=[86528], 95.00th=[86528],
79 | 99.00th=[86528], 99.50th=[86528], 99.90th=[86528], 99.95th=[86528],
80 | 99.99th=[86528]
81 lat (usec) : 2=12.50%, 4=12.50%, 10=12.50%, 20=25.00%, 50=25.00%
82 lat (usec) : 100=12.50%
83 cpu : usr=60.00%, sys=30.00%, ctx=1, majf=0, minf=23
84 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
85 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
86 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
87 issued rwts: total=4,4,0,0 short=0,0,0,0 dropped=0,0,0,0
88 latency : target=0, window=0, percentile=100.00%, depth=1
89
90 Run status group 0 (all jobs):
91 READ: bw=4000KiB/s (4096kB/s), 4000KiB/s-4000KiB/s (4096kB/s-4096kB/s), io=16.0KiB (16.4kB), run=4-4msec
92 WRITE: bw=2000KiB/s (2048kB/s), 2000KiB/s-2000KiB/s (2048kB/s-2048kB/s), io=16.0KiB (16.4kB), run=8-8msec
93
94 Disk stats (read/write):
95 sda: ios=0/0, sectors=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
96 ...
The io debug messages show four steps that fio undertakes for each IO operation: fill (line 11), prep (line 12), queue (line 15), and complete (line 16). These IOs are write operations, denoted by ddir=1, and we can see that these four steps are undertaken for each of the offsets 0x0, 0x1000, 0x2000, and 0x3000. The verify debug messages indicate that fio has filled the data buffer with random bytes (line 13) and calculated an md5 checksum (line 14) based on the contents of each buffer. For most verify workloads, the checksum is stored in a verify header that is written at the start of each block.
After all the write operations complete, fio begins the read/verify phase of the verify workload, denoted by the starting loop debug message in line 35. The write operations were stored in a verify list, and the read/verify phase commences by dequeuing offsets from the verify list to be read back (line 37). The io debug messages indicate that these operations are read operations (ddir=0) and occur at the expected offsets 0x0, 0x1000, 0x2000, and 0x3000. For reads, fio only carries out the prep, queue, and complete steps since there is no need to fill a data buffer. In the midst of these io debug messages fio displays verify debug messages indicating that the md5 checksum in the verify header matched the checksum calculated from the data that was read back (line 41). As expected, all of these operations complete successfully and fio exits with no error messages.
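The write-then-verify flow in the debug output can be modeled with a short sketch. This is plain Python standing in for fio's internals, with a 16-byte md5 digest standing in for fio's larger verify header; the function names and in-memory "file" are illustrative, not fio code.

```python
import hashlib
import io
import os

BLOCK = 4096
HDR = 16  # md5 digest length; a stand-in for fio's larger verify header

def write_phase(f, nblocks):
    """Write phase: fill each block with random bytes, store an md5
    "header" at the start of the block, and log each offset in a
    verify list for the read-back phase."""
    verify_list = []
    for i in range(nblocks):
        payload = os.urandom(BLOCK - HDR)       # "fill random bytes"
        digest = hashlib.md5(payload).digest()  # "fill md5"
        f.seek(i * BLOCK)
        f.write(digest + payload)               # queue + complete
        verify_list.append(i * BLOCK)
    return verify_list

def verify_phase(f, verify_list):
    """Read/verify phase: dequeue each offset, read the block back, and
    compare the stored digest against one computed from the data."""
    for off in verify_list:
        f.seek(off)
        block = f.read(BLOCK)
        stored, payload = block[:HDR], block[HDR:]
        assert hashlib.md5(payload).digest() == stored, f"verify failed at offset {off}"

f = io.BytesIO()          # stands in for the 16K test file
vlist = write_phase(f, 4)
verify_phase(f, vlist)    # passes silently, like the error-free run above
```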
The above verification workload was triggered by simply adding --verify=md5 to the list of options. There are many ways to control how fio carries this out beyond the simple example above. Let us now turn to these different options.
Verify options
Fio's verify-related options can be divided into seven categories:
- Enablement
- Data
- Failure handling
- IOs
- State files
- Experimental verify
- Trim verification
Let us discuss each of the categories in turn.
Enablement
Five options control how fio carries out verification. At its heart is the verify option, which specifies the method fio uses for data verification. It can be one of a multitude of checksums (md5, sha512, etc.), a specific data pattern, or null (only used for testing). By default no verification method is specified and verify is disabled. When a data pattern is used for verification, the verify_pattern option supplies the pattern.
The do_verify option also controls verification. It is a simple Boolean option that fio checks to determine whether it should carry out a verification pass as part of the workload. This must be set to True (the default) and a verification method must be specified via verify in order for fio to carry out a verification phase. In most cases there is no need to ever specify this option; users can simply choose to specify a verification method (or not) using verify to call for verification to be part of the workload.
The third verification trigger option is verify_only. This is intended to be added to a write workload with a verification phase. If a user has already successfully written the data from this workload and wishes to verify the data, simply add the verify_only option and fio will skip the write phase of the workload and carry out the verification phase only.
The final two options are verify_write_sequence and verify_header_seed. For most verify workloads fio will write a verify header to each block. The header will be described in more detail below, but for now we note that it includes a sequence number and the random seed used to generate the data that was written. The verify_write_sequence and verify_header_seed options respectively control whether fio verifies these components of the header. By default both are set to True. However, when these options are not explicitly specified, fio will automatically disable the checks in cases where the components cannot be verified reliably (for example, header seed verification will be automatically disabled when blocks may be overwritten with norandommap=1).
Data
A second set of options specifies the contents of the data buffers used for verification. The first of these options is verify_offset. By default fio places the verification header at the start of each data buffer used for verification. The verify_offset option directs fio to place this verification header elsewhere in the data buffer. When the data buffer is prepared, fio swaps the contents of the verification header with the data at the location specified by verify_offset. When the data is read back for verification, the header is swapped back before verifying.
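The swap can be sketched in a few lines. This is a toy model, not fio's buffer code; hdr_len and verify_offset are illustrative names, and it assumes the two regions do not overlap.

```python
def swap_header(buf: bytearray, hdr_len: int, verify_offset: int) -> None:
    """Exchange the header at the start of buf with the bytes at
    verify_offset. Running the swap a second time restores the buffer,
    which is how the header gets moved back before verification."""
    tmp = bytes(buf[verify_offset:verify_offset + hdr_len])
    buf[verify_offset:verify_offset + hdr_len] = buf[:hdr_len]
    buf[:hdr_len] = tmp

buf = bytearray(b"HDRabcdefghi")
swap_header(buf, 3, 6)        # header now lives at offset 6
assert bytes(buf) == b"defabcHDRghi"
swap_header(buf, 3, 6)        # swap back, as done before verifying
assert bytes(buf) == b"HDRabcdefghi"
```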
Another option controlling the data buffer is verify_interval. By default fio creates a single verification header covering the entire data buffer of each write operation. verify_interval can be used to create verification headers for smaller slices of the buffer. If each write operation is 16384 bytes, a user might set verify_interval=4096 to create four verification headers, each protecting 4096 bytes of the 16384-byte write.
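The arithmetic above can be spelled out with a tiny helper (an illustration, not fio code):

```python
def verify_regions(write_len: int, verify_interval: int):
    """Return the (offset, length) of each header-protected region
    within a single write."""
    return [(off, verify_interval)
            for off in range(0, write_len, verify_interval)]

# A 16384-byte write with verify_interval=4096 gets four headers:
assert verify_regions(16384, 4096) == [
    (0, 4096), (4096, 4096), (8192, 4096), (12288, 4096)]
```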
Finally, the verify_pattern option specifies the desired data pattern when verify=pattern is set. For pattern verification fio will not calculate a verification header; it merely writes the specified pattern (repeating as necessary if the block size is larger than the pattern size) and confirms that any data read back matches the specified pattern.
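The repeat-and-compare logic can be sketched as follows (a simplified model of pattern verification, not fio's implementation):

```python
import itertools

def fill_pattern(pattern: bytes, length: int) -> bytes:
    """Repeat the pattern as needed to fill a buffer of the given length."""
    return bytes(itertools.islice(itertools.cycle(pattern), length))

def check_pattern(buf: bytes, pattern: bytes) -> bool:
    """Confirm that data read back matches the repeated pattern."""
    return buf == fill_pattern(pattern, len(buf))

block = fill_pattern(b"\xde\xad\xbe\xef", 10)
assert block == b"\xde\xad\xbe\xef\xde\xad\xbe\xef\xde\xad"
assert check_pattern(block, b"\xde\xad\xbe\xef")
assert not check_pattern(block[:-1] + b"\x00", b"\xde\xad\xbe\xef")
```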
Failure handling
Two options control fio's behavior when it encounters a verification error. The verify_dump option directs fio to create files containing the data read from the file and the data fio expected to read from the file when it encounters a verification error. By default this is disabled.
The verify_fatal option controls whether fio continues to verify later blocks or segments after encountering an error. If verify_fatal is specified, fio will stop verifying the block after the first failing verify_interval increment. Without verify_fatal, fio will continue to verify later increments after any failures. In verify_async mode (described below), fio will stop the job when verify_fatal is set and a failure is encountered. Without verify_fatal fio will continue verifying later blocks.
IOs
By default a verify workload consists of two phases, a write phase followed by a verify (read) phase. Fio has a set of options that can modify this behavior. The verify_async option directs fio to spawn the specified number of verification worker threads. Each of these threads polls a list of completed read IOs to verify. Instead of having the same thread issue, complete, and verify IOs, read IOs that are completed are added to the verification list. These verify_async threads carry out the actual data verification operations, leaving the main fio worker thread to submit and complete read operations. It is also possible to pin the verify_async threads to run on specific CPUs using the verify_async_cpus option.
Two other options control a verification workload's sequencing of phases. If a user wishes to split the write and read/verify phases into smaller components, this can be done with the verify_backlog option. Instead of completing the entire write phase before starting the read/verify phase, fio can draw from the backlog of pending verification IOs and issue these read/verify IOs when the backlog is full. The verify_backlog option controls the size of this backlog. The verify_backlog_batch option controls how much of this backlog is consumed during each mini-verification phase. If the value set for the batch size is smaller than the size of the backlog, then the backlog is not fully drained during each verification phase. If the batch size is set to exceed the backlog size, no further verification IOs are issued after the backlog is drained. This differs from the documentation's assertion that some blocks will be verified more than once in this situation.
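The interleaving described above can be illustrated with a toy scheduler. This models only the behavior described in the text, not fio's actual backlog code; the function and event names are invented for the illustration.

```python
from collections import deque

def run_with_backlog(nwrites: int, backlog: int, batch: int):
    """Toy scheduler: writes accumulate in a backlog, and once `backlog`
    of them are pending, up to `batch` are verified before writing
    resumes. Returns the interleaved event sequence."""
    pending, events = deque(), []
    for i in range(nwrites):
        pending.append(i)
        events.append(("write", i))
        if len(pending) >= backlog:
            for _ in range(min(batch, len(pending))):
                events.append(("verify", pending.popleft()))
    return events

# verify_backlog=2, verify_backlog_batch=2: verify after every two writes
assert run_with_backlog(4, backlog=2, batch=2) == [
    ("write", 0), ("write", 1), ("verify", 0), ("verify", 1),
    ("write", 2), ("write", 3), ("verify", 2), ("verify", 3)]
```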
State files
Fio can save and load verify state files for use when verify jobs are interrupted. The verify_state_save option is a Boolean option that instructs fio to save a verify state file if the write phase of a verify workload is interrupted. The verify_state_load option is a Boolean option that instructs fio to load a previously saved verify state file. This state file tells fio the extent to which write IOs were successfully completed, which can be used to determine the appropriate stopping point for the verify phase. See this blog post for more details about the verify state file.
Experimental verify
The default verification strategy in fio is to save IOs in a list or tree for later use during the read/verification phase. If the experimental_verify option is set to True, fio will instead replay IO by resetting the file and random number generators in order to generate the same data patterns and sequence of offsets for the read/verification phase. With experimental verify enabled, fio goes through the motions of issuing write operations, but right before submission these IOs are changed to read operations and the contents are checked after the read operation completes.
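The replay idea rests on a familiar property of seeded random number generators, sketched here in plain Python (the function is an illustration of the principle, not fio's generator):

```python
import random

def generate_ios(seed: int, nblocks: int):
    """With a fixed seed, the generator reproduces the same offsets and
    data, so a second pass can regenerate what the write pass produced."""
    rng = random.Random(seed)
    offsets = rng.sample(range(nblocks), nblocks)      # IO sequence
    data = [rng.randbytes(8) for _ in range(nblocks)]  # per-block contents
    return offsets, data

# Resetting the generator with the same seed replays the identical workload:
assert generate_ios(42, 4) == generate_ios(42, 4)
```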
Trim verification
Fio can also be used to verify trims by checking that the contents of trimmed blocks are all zeroes when they are read back. The trim_percentage option specifies the percentage of written blocks to trim. For example, to trim a randomly selected half of the written blocks, set trim_percentage=50. By default this percentage is zero and trim verification is disabled. The trim_verify_zero option directs fio to confirm that trimmed blocks are all zeroes when they are read back. By default this is True and fio will check the contents of trimmed blocks. The trim_backlog option describes how often fio should initiate trims. This must be set in order to enable trim verification. When it is set to one, fio will send a trim command after each write operation. Finally, the trim_backlog_batch option sets how many trim operations to issue in each trim phase. By default, fio will drain the backlog of trim operations.
Verification: write phase
The options above control the different ways that fio carries out a verify workload. Let us now explore the nuts and bolts of how fio actually accomplishes this.
The starting point for verification is fio's do_io() function, a loop where fio composes an IO unit (io_u) and then submits it to the ioengine. This loop carries out several tasks related to the write phase of a verify job.
The first relevant step in do_io() is a call to populate_verify_io_u() when fio is running a job that requires verification. This calls fill_verify_pattern() which either:
- uses a random seed generated from verify_state to fill the data buffer with random data, or
- fills the data buffer with the contents specified by the verify_pattern option.
Below is how the verification header is defined. It begins with a magic number and has fields for verify type, length of the block (including the header), random seed used to generate the block's data, offset of the header, start time for the IO, the thread number of the fio job issuing the IO, and the sequence number for the IO. Finally the contents of the header are also protected by a checksum. This header structure is followed by a checksum type-specific value containing the value of the checksum. The size of this additional data varies from a single byte for a crc7 checksum to 128 bytes for a sha3-512 checksum.
1 /*
2 * A header structure associated with each checksummed data block. It is
3 * followed by a checksum specific header that contains the verification
4 * data.
5 */
6 struct verify_header {
7 uint16_t magic;
8 uint16_t verify_type;
9 uint32_t len;
10 uint64_t rand_seed;
11 uint64_t offset;
12 uint32_t time_sec;
13 uint32_t time_nsec;
14 uint16_t thread;
15 uint16_t numberio;
16 uint32_t crc32;
17 };
18
19 ...
20 struct vhdr_sha512 {
21 uint8_t sha512[128];
22 };
23 ...
24 struct vhdr_crc7 {
25 uint8_t crc7;
26 };
27 ...
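For illustration, the base header can be unpacked in Python with the struct module. The "<" format assumes a packed little-endian layout, which matches this struct's natural alignment on common 64-bit platforms; the example values are arbitrary, not fio's actual constants.

```python
import struct

# Field order mirrors struct verify_header above:
# magic, verify_type, len, rand_seed, offset, time_sec, time_nsec,
# thread, numberio, crc32
VHDR_FMT = "<HHIQQIIHHI"
assert struct.calcsize(VHDR_FMT) == 40  # the base header is 40 bytes

def parse_verify_header(buf: bytes) -> dict:
    names = ("magic", "verify_type", "len", "rand_seed", "offset",
             "time_sec", "time_nsec", "thread", "numberio", "crc32")
    return dict(zip(names, struct.unpack_from(VHDR_FMT, buf)))

# Round-trip a header with arbitrary example values:
raw = struct.pack(VHDR_FMT, 0xACCA, 9, 4096, 123, 0x3000, 0, 0, 0, 7, 0)
hdr = parse_verify_header(raw)
assert hdr["len"] == 4096 and hdr["offset"] == 0x3000
```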
By default each write IO will have a single verification header with a checksum for the remainder of the buffer. The verify_interval option can be activated to use multiple verification headers, each covering a portion of the data buffer immediately following the header.
Fio will also call log_io_piece() unless experimental verify is selected. There are two cases here:
- If the file has a random map enabled (and there is no possibility of overlapping offsets), the io_u is just added to a simple list.
- Otherwise the io_u is added to an RB tree, and if an overlapping entry is found, the old io_u is dropped.
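The two cases can be sketched as follows, with a Python dict standing in for the RB tree (same replace-on-overlap behavior, different data structure; the function name is invented for the illustration):

```python
def log_io_pieces(writes, can_overlap: bool):
    """Sketch of the two cases: with no overlap possible, keep a simple
    list; otherwise keep one entry per offset, dropping the older io_u
    when a later write lands on the same offset."""
    if not can_overlap:
        return list(writes)
    latest = {}
    for off, data in writes:
        latest[off] = data  # an overlapping write replaces the old entry
    return list(latest.items())

writes = [(0x0, "A"), (0x1000, "B"), (0x0, "C")]  # offset 0x0 rewritten
assert log_io_pieces(writes, can_overlap=False) == writes
assert log_io_pieces(writes, can_overlap=True) == [(0x0, "C"), (0x1000, "B")]
```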
Verification: read phase
When fio issues verify reads it sets the io_u's end_io completion hook to a verify function. This is either verify_io_u() for standard verify workloads or verify_io_u_async() when the verify_async option is set. In either case program execution ultimately arrives at verify_io_u(). In this function the verify header is first verified. Header verification involves checks of the following components:
- magic number
- length of the buffer that the header protects
- header random seed when it can be verified
- offset
- write sequence number when it can be verified
- CRC32C checksum of the header data
If the header passes all of the above checks, a checksum of the selected type is calculated and compared against the checksum value from the header. If the expected and received checksums match, execution continues. Otherwise fio reports an error.
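The final comparison can be sketched as below, with md5 chosen to match the running example; hdr_len and the function name are illustrative, not fio's code. The error mirrors the "Expected CRC / Received CRC" output shown later.

```python
import hashlib

def verify_block(block: bytes, hdr_len: int, stored_md5: bytes) -> None:
    """After the header checks pass, compute the selected checksum over
    the protected payload and compare it to the value saved in the
    header. Raise on mismatch, as fio reports an error."""
    received = hashlib.md5(block[hdr_len:]).digest()
    if received != stored_md5:
        raise ValueError(f"verify failed: expected {stored_md5.hex()}, "
                         f"received {received.hex()}")

payload = bytes(range(16)) * 64                  # 1024 bytes of known data
good = hashlib.md5(payload).digest()
verify_block(b"\x00" * 16 + payload, 16, good)   # matching checksums: no error
```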
Fio also has a pattern verification mode that omits the verify header. For this mode the header check is skipped and the entire buffer is compared against the specified pattern.
More Examples
With a deeper understanding of how fio carries out verification, let us walk through more advanced examples of verify workloads.
Triggering a failure
The initial example above showed a workload where all of the blocks were successfully verified. Let us now try to trigger a verify failure. First, we write a single byte to the file used in the first example. This is accomplished by setting --bs=1 and --number_ios=1 in the first job below (line 1). Then, we run the same job as we did in the first example with the addition of the --verify_only=1 option (line 39). This instructs fio to skip the write phase of the job and immediately start the read/verify phase. Debug output for the two fio invocations is below.
1 root@localhost:~/fio-dev/fio-canonical# ./fio-3.40 --name=test --filesize=16k --rw=randwrite --debug=io --bs=1 --number_ios=1
2 fio: set debug option io
3 test: (g=0): rw=randwrite, bs=(R) 1B-1B, (W) 1B-1B, (T) 1B-1B, ioengine=psync, iodepth=1
4 fio-3.40
5 Starting 1 process
6 io 4643 declare unneeded cache test.0.0: 0/16384
7 io 4643 fill: io_u 0x5591d4991340: off=0x3e87,len=0x1,ddir=1,file=test.0.0
8 io 4643 prep: io_u 0x5591d4991340: off=0x3e87,len=0x1,ddir=1,file=test.0.0
9 io 4643 queue: io_u 0x5591d4991340: off=0x3e87,len=0x1,ddir=1,file=test.0.0
10 io 4643 complete: io_u 0x5591d4991340: off=0x3e87,len=0x1,ddir=1,file=test.0.0
11 io 4643 close ioengine psync
12 io 4643 free ioengine psync
13
14 test: (groupid=0, jobs=1): err= 0: pid=4643: Wed Jun 11 23:14:06 2025
15 write: IOPS=500, BW=500B/s (500B/s)(1B/2msec); 0 zone resets
16 clat (nsec): min=844053, max=844053, avg=844053.00, stdev= 0.00
17 lat (nsec): min=850724, max=850724, avg=850724.00, stdev= 0.00
18 clat percentiles (usec):
19 | 1.00th=[ 848], 5.00th=[ 848], 10.00th=[ 848], 20.00th=[ 848],
20 | 30.00th=[ 848], 40.00th=[ 848], 50.00th=[ 848], 60.00th=[ 848],
21 | 70.00th=[ 848], 80.00th=[ 848], 90.00th=[ 848], 95.00th=[ 848],
22 | 99.00th=[ 848], 99.50th=[ 848], 99.90th=[ 848], 99.95th=[ 848],
23 | 99.99th=[ 848]
24 lat (usec) : 1000=100.00%
25 cpu : usr=100.00%, sys=0.00%, ctx=1, majf=0, minf=15
26 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
27 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
28 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
29 issued rwts: total=0,1,0,0 short=0,0,0,0 dropped=0,0,0,0
30 latency : target=0, window=0, percentile=100.00%, depth=1
31
32 Run status group 0 (all jobs):
33 WRITE: bw=500B/s (500B/s), 500B/s-500B/s (500B/s-500B/s), io=1B (1B), run=2-2msec
34
35 Disk stats (read/write):
36 sda: ios=0/0, sectors=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
37 ...
38
39 root@localhost:~/fio-dev/fio-canonical# ./fio-3.40 --name=test --filesize=16k --verify=md5 --rw=write --debug=verify,io --verify_only
40 fio: set debug option verify
41 fio: set debug option io
42 verify 4644 td->trim_verify=0
43 test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
44 fio-3.40
45 Starting 1 process
46 io 4646 declare unneeded cache test.0.0: 0/16384
47 io 4646 fill: io_u 0x560caff17340: off=0x0,len=0x1000,ddir=1,file=test.0.0
48 io 4646 prep: io_u 0x560caff17340: off=0x0,len=0x1000,ddir=1,file=test.0.0
49 io 4646 complete: io_u 0x560caff17340: off=0x0,len=0x1000,ddir=1,file=test.0.0
50 io 4646 fill: io_u 0x560caff17340: off=0x1000,len=0x1000,ddir=1,file=test.0.0
51 io 4646 prep: io_u 0x560caff17340: off=0x1000,len=0x1000,ddir=1,file=test.0.0
52 io 4646 complete: io_u 0x560caff17340: off=0x1000,len=0x1000,ddir=1,file=test.0.0
53 io 4646 fill: io_u 0x560caff17340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
54 io 4646 prep: io_u 0x560caff17340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
55 io 4646 complete: io_u 0x560caff17340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
56 io 4646 fill: io_u 0x560caff17340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
57 io 4646 prep: io_u 0x560caff17340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
58 io 4646 complete: io_u 0x560caff17340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
59 verify 4646 starting loop
60 io 4646 declare unneeded cache test.0.0: 0/16384
61 verify 4646 get_next_verify: ret io_u 0x560caff17340
62 io 4646 prep: io_u 0x560caff17340: off=0x0,len=0x1000,ddir=0,file=test.0.0
63 io 4646 queue: io_u 0x560caff17340: off=0x0,len=0x1000,ddir=0,file=test.0.0
64 io 4646 complete: io_u 0x560caff17340: off=0x0,len=0x1000,ddir=0,file=test.0.0
65 verify 4646 md5 verify io_u 0x560caff17340, len 4096
66 verify 4646 get_next_verify: ret io_u 0x560caff17340
67 io 4646 prep: io_u 0x560caff17340: off=0x1000,len=0x1000,ddir=0,file=test.0.0
68 io 4646 queue: io_u 0x560caff17340: off=0x1000,len=0x1000,ddir=0,file=test.0.0
69 io 4646 complete: io_u 0x560caff17340: off=0x1000,len=0x1000,ddir=0,file=test.0.0
70 verify 4646 md5 verify io_u 0x560caff17340, len 4096
71 verify 4646 get_next_verify: ret io_u 0x560caff17340
72 io 4646 prep: io_u 0x560caff17340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
73 io 4646 queue: io_u 0x560caff17340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
74 io 4646 complete: io_u 0x560caff17340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
75 verify 4646 md5 verify io_u 0x560caff17340, len 4096
76 verify 4646 get_next_verify: ret io_u 0x560caff17340
77 io 4646 prep: io_u 0x560caff17340: off=0x3000,len=0x1000,ddir=0,file=test.0.0
78 io 4646 queue: io_u 0x560caff17340: off=0x3000,len=0x1000,ddir=0,file=test.0.0
79 io 4646 complete: io_u 0x560caff17340: off=0x3000,len=0x1000,ddir=0,file=test.0.0
80 verify 4646 md5 verify io_u 0x560caff17340, len 4096
81 md5: verify failed at file test.0.0 offset 12288, length 4096 (requested block: offset=12288, length=4096, flags=88)
82 Expected CRC: f76ea441085c83534566b62ebb773c91
83 Received CRC: 48ddde524e8a38e968fd7ef60c557dd4
84 io 4646 io_u_queued_complete: min=0
85 io 4646 getevents: 0
86 verify 4646 exiting loop
87 fio: pid=4646, err=84/file:io_u.c:2263, func=io_u_sync_complete, error=Invalid or incomplete multibyte or wide character
88 io 4646 close ioengine psync
89 io 4646 free ioengine psync
90
91 test: (groupid=0, jobs=1): err=84 (file:io_u.c:2263, func=io_u_sync_complete, error=Invalid or incomplete multibyte or wide character): pid=4646: Wed Jun 11 23:15:12 2025
92 read: IOPS=2000, BW=8000KiB/s (8192kB/s)(16.0KiB/2msec)
93 clat (usec): min=2, max=931, avg=235.12, stdev=463.98
94 lat (usec): min=7, max=940, avg=242.75, stdev=464.86
95 clat percentiles (usec):
96 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3],
97 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 4],
98 | 70.00th=[ 4], 80.00th=[ 930], 90.00th=[ 930], 95.00th=[ 930],
99 | 99.00th=[ 930], 99.50th=[ 930], 99.90th=[ 930], 99.95th=[ 930],
100 | 99.99th=[ 930]
101 lat (usec) : 4=37.50%, 1000=12.50%
102 lat (msec) : >=2000=50.00%
103 cpu : usr=0.00%, sys=100.00%, ctx=1, majf=0, minf=24
104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
106 complete : 0=20.0%, 4=80.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
107 issued rwts: total=4,4,0,0 short=0,0,0,0 dropped=0,0,0,0
108
109 Run status group 0 (all jobs):
110 READ: bw=8000KiB/s (8192kB/s), 8000KiB/s-8000KiB/s (8192kB/s-8192kB/s), io=16.0KiB (16.4kB), run=2-2msec
111
112 Disk stats (read/write):
113 sda: ios=0/0, sectors=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
114 ...
With --verify_only=1 fio goes through the motions of issuing write commands and simulates issuing and completing these operations (lines 47-58) in order to populate the list of verify io_u's. Once the simulated write phase has completed, the read/verify phase begins. The first three 4K blocks at offsets 0x0, 0x1000, and 0x2000 are verified successfully, but the 4K block at offset 0x3000 fails verification (line 81) since our one-byte write was at offset 0x3e87. Fio prints out the expected md5 checksum that was stored in the block's verify header as well as the md5 checksum that was calculated from the data read from the file (lines 82-83).
Basic random write
Let us now examine a random write verify job. We use the same 16K file as before but with rw=randwrite (line 1). The io debug messages indicate that fio writes to blocks at offsets 0x3000, 0x0, 0x2000, and 0x1000 (lines 9-32), filling each block with random data and a verify header containing an md5 checksum (e.g., lines 11-12). Since fio's randommap is enabled, there is no possibility that blocks will be overwritten, so for the read/verify phase the blocks are read back in the same order they were written and the md5 checksum is validated for each block. Fio found no data corruption and issued no error messages.
1 root@localhost:~/fio-dev/fio-canonical# ./fio-3.40 --name=test --filesize=16k --verify=md5 --rw=randwrite --debug=verify,io
2 fio: set debug option verify
3 fio: set debug option io
4 verify 4656 td->trim_verify=0
5 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
6 fio-3.40
7 Starting 1 process
8 io 4658 declare unneeded cache test.0.0: 0/16384
9 io 4658 fill: io_u 0x55854d649340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
10 io 4658 prep: io_u 0x55854d649340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
11 verify 4658 fill random bytes len=4096
12 verify 4658 fill md5 io_u 0x55854d649340, len 4096
13 io 4658 queue: io_u 0x55854d649340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
14 io 4658 complete: io_u 0x55854d649340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
15 io 4658 fill: io_u 0x55854d649340: off=0x0,len=0x1000,ddir=1,file=test.0.0
16 io 4658 prep: io_u 0x55854d649340: off=0x0,len=0x1000,ddir=1,file=test.0.0
17 verify 4658 fill random bytes len=4096
18 verify 4658 fill md5 io_u 0x55854d649340, len 4096
19 io 4658 queue: io_u 0x55854d649340: off=0x0,len=0x1000,ddir=1,file=test.0.0
20 io 4658 complete: io_u 0x55854d649340: off=0x0,len=0x1000,ddir=1,file=test.0.0
21 io 4658 fill: io_u 0x55854d649340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
22 io 4658 prep: io_u 0x55854d649340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
23 verify 4658 fill random bytes len=4096
24 verify 4658 fill md5 io_u 0x55854d649340, len 4096
25 io 4658 queue: io_u 0x55854d649340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
26 io 4658 complete: io_u 0x55854d649340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
27 io 4658 fill: io_u 0x55854d649340: off=0x1000,len=0x1000,ddir=1,file=test.0.0
28 io 4658 prep: io_u 0x55854d649340: off=0x1000,len=0x1000,ddir=1,file=test.0.0
29 verify 4658 fill random bytes len=4096
30 verify 4658 fill md5 io_u 0x55854d649340, len 4096
31 io 4658 queue: io_u 0x55854d649340: off=0x1000,len=0x1000,ddir=1,file=test.0.0
32 io 4658 complete: io_u 0x55854d649340: off=0x1000,len=0x1000,ddir=1,file=test.0.0
33 verify 4658 starting loop
34 io 4658 declare unneeded cache test.0.0: 0/16384
35 verify 4658 get_next_verify: ret io_u 0x55854d649340
36 io 4658 prep: io_u 0x55854d649340: off=0x3000,len=0x1000,ddir=0,file=test.0.0
37 io 4658 queue: io_u 0x55854d649340: off=0x3000,len=0x1000,ddir=0,file=test.0.0
38 io 4658 complete: io_u 0x55854d649340: off=0x3000,len=0x1000,ddir=0,file=test.0.0
39 verify 4658 md5 verify io_u 0x55854d649340, len 4096
40 verify 4658 get_next_verify: ret io_u 0x55854d649340
41 io 4658 prep: io_u 0x55854d649340: off=0x0,len=0x1000,ddir=0,file=test.0.0
42 io 4658 queue: io_u 0x55854d649340: off=0x0,len=0x1000,ddir=0,file=test.0.0
43 io 4658 complete: io_u 0x55854d649340: off=0x0,len=0x1000,ddir=0,file=test.0.0
44 verify 4658 md5 verify io_u 0x55854d649340, len 4096
45 verify 4658 get_next_verify: ret io_u 0x55854d649340
46 io 4658 prep: io_u 0x55854d649340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
47 io 4658 queue: io_u 0x55854d649340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
48 io 4658 complete: io_u 0x55854d649340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
49 verify 4658 md5 verify io_u 0x55854d649340, len 4096
50 verify 4658 get_next_verify: ret io_u 0x55854d649340
51 io 4658 prep: io_u 0x55854d649340: off=0x1000,len=0x1000,ddir=0,file=test.0.0
52 io 4658 queue: io_u 0x55854d649340: off=0x1000,len=0x1000,ddir=0,file=test.0.0
53 io 4658 complete: io_u 0x55854d649340: off=0x1000,len=0x1000,ddir=0,file=test.0.0
54 verify 4658 md5 verify io_u 0x55854d649340, len 4096
55 verify 4658 get_next_verify: empty
56 verify 4658 exiting loop
57 io 4658 close ioengine psync
58 io 4658 free ioengine psync
59
60 test: (groupid=0, jobs=1): err= 0: pid=4658: Wed Jun 11 23:17:10 2025
61 read: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(16.0KiB/1msec)
62 clat (nsec): min=2167, max=16157, avg=5925.75, stdev=6836.08
63 lat (nsec): min=6817, max=21570, avg=13176.75, stdev=7307.53
64 clat percentiles (nsec):
65 | 1.00th=[ 2160], 5.00th=[ 2160], 10.00th=[ 2160], 20.00th=[ 2160],
66 | 30.00th=[ 2224], 40.00th=[ 2224], 50.00th=[ 2224], 60.00th=[ 3152],
67 | 70.00th=[ 3152], 80.00th=[16192], 90.00th=[16192], 95.00th=[16192],
68 | 99.00th=[16192], 99.50th=[16192], 99.90th=[16192], 99.95th=[16192],
69 | 99.99th=[16192]
70 write: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(16.0KiB/1msec); 0 zone resets
71 clat (nsec): min=6545, max=74176, avg=26033.75, stdev=32195.17
72 lat (usec): min=60, max=192, avg=99.65, stdev=62.45
73 clat percentiles (nsec):
74 | 1.00th=[ 6560], 5.00th=[ 6560], 10.00th=[ 6560], 20.00th=[ 6560],
75 | 30.00th=[10816], 40.00th=[10816], 50.00th=[10816], 60.00th=[12608],
76 | 70.00th=[12608], 80.00th=[74240], 90.00th=[74240], 95.00th=[74240],
77 | 99.00th=[74240], 99.50th=[74240], 99.90th=[74240], 99.95th=[74240],
78 | 99.99th=[74240]
79 lat (usec) : 4=37.50%, 10=12.50%, 20=37.50%, 100=12.50%
80 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=24
81 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
82 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
83 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
84 issued rwts: total=4,4,0,0 short=0,0,0,0 dropped=0,0,0,0
85 latency : target=0, window=0, percentile=100.00%, depth=1
86
87 Run status group 0 (all jobs):
88 READ: bw=15.6MiB/s (16.4MB/s), 15.6MiB/s-15.6MiB/s (16.4MB/s-16.4MB/s), io=16.0KiB (16.4kB), run=1-1msec
89 WRITE: bw=15.6MiB/s (16.4MB/s), 15.6MiB/s-15.6MiB/s (16.4MB/s-16.4MB/s), io=16.0KiB (16.4kB), run=1-1msec
90
91 Disk stats (read/write):
92 sda: ios=0/0, sectors=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
93 ...
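The flow traced above (fill each block with random bytes, record an md5 checksum, write, then read the blocks back and recompute the checksum) can be sketched in Python. This is a simplified illustration, not fio's actual code: fio embeds the checksum in a verify header inside each block, whereas this sketch keeps (offset, checksum) pairs in a side list, mirroring the list of verify `io_u`'s that fio queues per write.

```python
import hashlib
import random

BS = 4096                      # block size from the job above
device = bytearray(4 * BS)     # stand-in for the 16K test file
verify_list = []               # one entry queued per completed write
rng = random.Random(42)        # stand-in for fio's per-job data seed

# Write phase: fill each block with random bytes and record its md5.
# The offsets follow the order seen in the debug output above.
for off in (0x3000, 0x0, 0x2000, 0x1000):
    buf = bytes(rng.randrange(256) for _ in range(BS))
    device[off:off + BS] = buf
    verify_list.append((off, hashlib.md5(buf).hexdigest()))

# Verify phase: read each block back in write order and compare checksums.
for off, want in verify_list:
    got = hashlib.md5(bytes(device[off:off + BS])).hexdigest()
    assert got == want, f"verify failed at offset {off:#x}"
print("all blocks verified")
```

Because the sequential job never overwrites a block, a simple list walked in write order is sufficient; the next example shows why overwrites force a different data structure.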
Basic Random Write with norandommap
Let us now examine a random write workload with norandommap
enabled. This means that some blocks may be written multiple times and others may not be written at
all. The debug output below shows that fio issues writes to offsets 0x3000,
0x3000, 0x2000, and 0x2000 (lines 9-34). The two written blocks are each
written twice, and offsets 0x0 and 0x1000 are untouched. Since overwrites are
possible, fio stores the verify io_u
's in a tree instead of a list and
drops the older io_u
when it finds an overlap. Thus, there are only two read
operations (lines 38, 43) in the read/verify phase. Fio successfully
verifies the data read from offsets 0x2000 and 0x3000 (lines 41, 46).
Note that fio disables header seed verification in this situation: if it tried
to replay the sequence of random seeds, it would also generate seeds for writes
that were later overwritten, so the replayed sequence would no longer match the
seeds actually stored on the device.
1 root@localhost:~/fio-dev/fio-canonical# ./fio-3.40 --name=test --filesize=16k --verify=md5 --rw=randwrite --norandommap=1 --debug=verify,io
2 fio: set debug option verify
3 fio: set debug option io
4 verify 4663 td->trim_verify=0
5 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
6 fio-3.40
7 Starting 1 process
8 io 4665 declare unneeded cache test.0.0: 0/16384
9 io 4665 fill: io_u 0x565182dbd340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
10 io 4665 prep: io_u 0x565182dbd340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
11 verify 4665 fill random bytes len=4096
12 verify 4665 fill md5 io_u 0x565182dbd340, len 4096
13 io 4665 queue: io_u 0x565182dbd340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
14 io 4665 complete: io_u 0x565182dbd340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
15 io 4665 fill: io_u 0x565182dbd340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
16 io 4665 prep: io_u 0x565182dbd340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
17 verify 4665 fill random bytes len=4096
18 verify 4665 fill md5 io_u 0x565182dbd340, len 4096
19 io 4665 iolog: overlap 12288/4096, 12288/4096
20 io 4665 queue: io_u 0x565182dbd340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
21 io 4665 complete: io_u 0x565182dbd340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
22 io 4665 fill: io_u 0x565182dbd340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
23 io 4665 prep: io_u 0x565182dbd340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
24 verify 4665 fill random bytes len=4096
25 verify 4665 fill md5 io_u 0x565182dbd340, len 4096
26 io 4665 queue: io_u 0x565182dbd340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
27 io 4665 complete: io_u 0x565182dbd340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
28 io 4665 fill: io_u 0x565182dbd340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
29 io 4665 prep: io_u 0x565182dbd340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
30 verify 4665 fill random bytes len=4096
31 verify 4665 fill md5 io_u 0x565182dbd340, len 4096
32 io 4665 iolog: overlap 8192/4096, 8192/4096
33 io 4665 queue: io_u 0x565182dbd340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
34 io 4665 complete: io_u 0x565182dbd340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
35 verify 4665 starting loop
36 io 4665 declare unneeded cache test.0.0: 0/16384
37 verify 4665 get_next_verify: ret io_u 0x565182dbd340
38 io 4665 prep: io_u 0x565182dbd340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
39 io 4665 queue: io_u 0x565182dbd340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
40 io 4665 complete: io_u 0x565182dbd340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
41 verify 4665 md5 verify io_u 0x565182dbd340, len 4096
42 verify 4665 get_next_verify: ret io_u 0x565182dbd340
43 io 4665 prep: io_u 0x565182dbd340: off=0x3000,len=0x1000,ddir=0,file=test.0.0
44 io 4665 queue: io_u 0x565182dbd340: off=0x3000,len=0x1000,ddir=0,file=test.0.0
45 io 4665 complete: io_u 0x565182dbd340: off=0x3000,len=0x1000,ddir=0,file=test.0.0
46 verify 4665 md5 verify io_u 0x565182dbd340, len 4096
47 verify 4665 get_next_verify: empty
48 verify 4665 exiting loop
49 io 4665 close ioengine psync
50 io 4665 free ioengine psync
51
52 test: (groupid=0, jobs=1): err= 0: pid=4665: Wed Jun 11 23:17:56 2025
53 read: IOPS=2000, BW=8000KiB/s (8192kB/s)(8192B/1msec)
54 clat (nsec): min=2740, max=18091, avg=10415.50, stdev=10854.80
55 lat (nsec): min=19212, max=23651, avg=21431.50, stdev=3138.85
56 clat percentiles (nsec):
57 | 1.00th=[ 2736], 5.00th=[ 2736], 10.00th=[ 2736], 20.00th=[ 2736],
58 | 30.00th=[ 2736], 40.00th=[ 2736], 50.00th=[ 2736], 60.00th=[18048],
59 | 70.00th=[18048], 80.00th=[18048], 90.00th=[18048], 95.00th=[18048],
60 | 99.00th=[18048], 99.50th=[18048], 99.90th=[18048], 99.95th=[18048],
61 | 99.99th=[18048]
62 write: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(16.0KiB/1msec); 0 zone resets
63 clat (nsec): min=2969, max=69924, avg=21848.00, stdev=32216.07
64 lat (usec): min=58, max=143, avg=88.60, stdev=37.62
65 clat percentiles (nsec):
66 | 1.00th=[ 2960], 5.00th=[ 2960], 10.00th=[ 2960], 20.00th=[ 2960],
67 | 30.00th=[ 4128], 40.00th=[ 4128], 50.00th=[ 4128], 60.00th=[10432],
68 | 70.00th=[10432], 80.00th=[70144], 90.00th=[70144], 95.00th=[70144],
69 | 99.00th=[70144], 99.50th=[70144], 99.90th=[70144], 99.95th=[70144],
70 | 99.99th=[70144]
71 lat (usec) : 4=33.33%, 10=16.67%, 20=33.33%, 100=16.67%
72 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=22
73 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
74 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
75 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
76 issued rwts: total=2,4,0,0 short=0,0,0,0 dropped=0,0,0,0
77 latency : target=0, window=0, percentile=100.00%, depth=1
78
79 Run status group 0 (all jobs):
80 READ: bw=8000KiB/s (8192kB/s), 8000KiB/s-8000KiB/s (8192kB/s-8192kB/s), io=8192B (8192B), run=1-1msec
81 WRITE: bw=15.6MiB/s (16.4MB/s), 15.6MiB/s-15.6MiB/s (16.4MB/s-16.4MB/s), io=16.0KiB (16.4kB), run=1-1msec
82
83 Disk stats (read/write):
84 sda: ios=0/0, sectors=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
85 ...
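The tree-based bookkeeping described above can be sketched as follows. The dict stands in for fio's tree of verify `io_u`'s, and the `seedN` payloads are illustrative labels, not fio's actual data; the point is that an overlapping write replaces the older entry, and the verify phase then walks the surviving entries in offset order.

```python
# The four writes from the debug output: two blocks, each written twice.
writes = [(0x3000, "seed1"), (0x3000, "seed2"),
          (0x2000, "seed3"), (0x2000, "seed4")]

verify_tree = {}
for off, payload in writes:
    if off in verify_tree:
        print(f"overlap at {off:#x}: dropping older entry")
    verify_tree[off] = payload  # the newest write wins

# The verify phase walks the tree in offset order: 0x2000 first, then 0x3000.
for off in sorted(verify_tree):
    print(f"verify {off:#x} against {verify_tree[off]}")
```

Only two entries survive, matching the two reads in the verify loop above, and the walk order (0x2000 before 0x3000) matches the read offsets in lines 38 and 43.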
We can demonstrate that the job above succeeds only because header seed
checking has been disabled by running the exact same job with header seed
checking explicitly enabled via --verify_header_seed=1
(line 1 in the example
below). Explicitly specifying this option overrides any defaults that fio
silently changes. With header seed checking enabled, the job now fails because
of a header seed mismatch (lines 41 and 45). Fio replayed the sequence of
header seeds, but the first seed it generated did not match the seed read back
from the device: the header seeds were generated in the order the writes were
issued, while the verify phase dequeues blocks from the tree in order of
increasing offset. The first block verified, at offset 0x2000, holds the seed
from the last write to that block, not the first seed of the replayed sequence.
1 root@localhost:~/fio-dev/fio-canonical# ./fio-3.40 --name=test --filesize=16k --verify=md5 --rw=randwrite --norandommap=1 --debug=verify,io --verify_header_seed=1
2 fio: set debug option verify
3 fio: set debug option io
4 verify 4667 td->trim_verify=0
5 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
6 fio-3.40
7 Starting 1 process
8 io 4669 declare unneeded cache test.0.0: 0/16384
9 io 4669 fill: io_u 0x562c76fed340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
10 io 4669 prep: io_u 0x562c76fed340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
11 verify 4669 fill random bytes len=4096
12 verify 4669 fill md5 io_u 0x562c76fed340, len 4096
13 io 4669 queue: io_u 0x562c76fed340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
14 io 4669 complete: io_u 0x562c76fed340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
15 io 4669 fill: io_u 0x562c76fed340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
16 io 4669 prep: io_u 0x562c76fed340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
17 verify 4669 fill random bytes len=4096
18 verify 4669 fill md5 io_u 0x562c76fed340, len 4096
19 io 4669 iolog: overlap 12288/4096, 12288/4096
20 io 4669 queue: io_u 0x562c76fed340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
21 io 4669 complete: io_u 0x562c76fed340: off=0x3000,len=0x1000,ddir=1,file=test.0.0
22 io 4669 fill: io_u 0x562c76fed340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
23 io 4669 prep: io_u 0x562c76fed340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
24 verify 4669 fill random bytes len=4096
25 verify 4669 fill md5 io_u 0x562c76fed340, len 4096
26 io 4669 queue: io_u 0x562c76fed340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
27 io 4669 complete: io_u 0x562c76fed340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
28 io 4669 fill: io_u 0x562c76fed340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
29 io 4669 prep: io_u 0x562c76fed340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
30 verify 4669 fill random bytes len=4096
31 verify 4669 fill md5 io_u 0x562c76fed340, len 4096
32 io 4669 iolog: overlap 8192/4096, 8192/4096
33 io 4669 queue: io_u 0x562c76fed340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
34 io 4669 complete: io_u 0x562c76fed340: off=0x2000,len=0x1000,ddir=1,file=test.0.0
35 verify 4669 starting loop
36 io 4669 declare unneeded cache test.0.0: 0/16384
37 verify 4669 get_next_verify: ret io_u 0x562c76fed340
38 io 4669 prep: io_u 0x562c76fed340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
39 io 4669 queue: io_u 0x562c76fed340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
40 io 4669 complete: io_u 0x562c76fed340: off=0x2000,len=0x1000,ddir=0,file=test.0.0
41 verify: bad header rand_seed 1763778943062938676, wanted 46386204153304124 at file test.0.0 offset 8192, length 4096 (requested block: offset=8192, length=4096)
42 io 4669 io_u_queued_complete: min=0
43 io 4669 getevents: 0
44 verify 4669 exiting loop
45 fio: pid=4669, err=84/file:io_u.c:2263, func=io_u_sync_complete, error=Invalid or incomplete multibyte or wide character
46 io 4669 close ioengine psync
47 io 4669 free ioengine psync
48
49 test: (groupid=0, jobs=1): err=84 (file:io_u.c:2263, func=io_u_sync_complete, error=Invalid or incomplete multibyte or wide character): pid=4669: Wed Jun 11 23:18:44 2025
50 read: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096B/1msec)
51 clat (nsec): min=18103, max=18103, avg=18103.00, stdev= 0.00
52 lat (nsec): min=23627, max=23627, avg=23627.00, stdev= 0.00
53 clat percentiles (nsec):
54 | 1.00th=[18048], 5.00th=[18048], 10.00th=[18048], 20.00th=[18048],
55 | 30.00th=[18048], 40.00th=[18048], 50.00th=[18048], 60.00th=[18048],
56 | 70.00th=[18048], 80.00th=[18048], 90.00th=[18048], 95.00th=[18048],
57 | 99.00th=[18048], 99.50th=[18048], 99.90th=[18048], 99.95th=[18048],
58 | 99.99th=[18048]
59 write: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(16.0KiB/1msec); 0 zone resets
60 clat (nsec): min=4047, max=73520, avg=25095.25, stdev=32893.00
61 lat (usec): min=49, max=160, avg=94.60, stdev=50.65
62 clat percentiles (nsec):
63 | 1.00th=[ 4048], 5.00th=[ 4048], 10.00th=[ 4048], 20.00th=[ 4048],
64 | 30.00th=[ 4960], 40.00th=[ 4960], 50.00th=[ 4960], 60.00th=[17792],
65 | 70.00th=[17792], 80.00th=[73216], 90.00th=[73216], 95.00th=[73216],
66 | 99.00th=[73216], 99.50th=[73216], 99.90th=[73216], 99.95th=[73216],
67 | 99.99th=[73216]
68 lat (usec) : 10=40.00%, 20=40.00%, 100=20.00%
69 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=25
70 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
71 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
72 complete : 0=16.7%, 4=83.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
73 issued rwts: total=1,4,0,0 short=0,0,0,0 dropped=0,0,0,0
74 latency : target=0, window=0, percentile=100.00%, depth=1
75
76 Run status group 0 (all jobs):
77 READ: bw=4000KiB/s (4096kB/s), 4000KiB/s-4000KiB/s (4096kB/s-4096kB/s), io=4096B (4096B), run=1-1msec
78 WRITE: bw=15.6MiB/s (16.4MB/s), 15.6MiB/s-15.6MiB/s (16.4MB/s-16.4MB/s), io=16.0KiB (16.4kB), run=1-1msec
79
80 Disk stats (read/write):
81 sda: ios=0/0, sectors=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
82 ...
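A small sketch illustrates why the replay fails. This is a toy model, not fio's actual seed machinery: each write draws a fresh header seed from a deterministic generator, overwrites leave only the latest seed on the simulated disk, and the verify pass replays the same generator while walking the surviving blocks in offset order.

```python
import random

write_offsets = [0x3000, 0x3000, 0x2000, 0x2000]  # sequence from the log

rng = random.Random(1234)      # illustrative deterministic seed generator
on_disk = {}                   # offset -> seed left by the *last* write
for off in write_offsets:
    on_disk[off] = rng.getrandbits(64)

# Replay: the first regenerated seed belongs to the first write (to 0x3000),
# but the first block verified is 0x2000, which holds the fourth write's seed.
replay = random.Random(1234)
for off in sorted(on_disk):
    expected = replay.getrandbits(64)
    if expected != on_disk[off]:
        print(f"bad header seed at offset {off}")  # fails like line 41 above
        break
```

The mismatch lands at offset 8192 (0x2000), just as in the debug output, because the replayed sequence positions no longer line up once overwritten entries are dropped and the walk order differs from the write order.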
Conclusion
Fio's verify capabilities are useful for assessing the integrity of a storage device. This post has sought to provide an overview of fio's verify capabilities and shed some light on what goes on behind the scenes as fio carries out a verify workload. This additional insight should be useful in troubleshooting any issues that arise while using this feature.