# Benchmarking a Custom Decoder
OpenQStack makes it easy to test how well your decoder performs under specific quantum error channels.
This page outlines how to:
- Define and inject your own decoder logic
- Run it under a noisy QEC simulation
- Track logical failure rates across trials
- Compare performance to the built-in decoder
## What Is a Decoder?
In quantum error correction, a decoder is the classical algorithm that:
- Interprets a measured syndrome
- Chooses the best recovery operation
Decoders vary in complexity, from hardcoded rules (e.g. majority vote) to minimum-weight perfect matching (MWPM) and machine-learning strategies.
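For small codes, the simplest decoder is a syndrome lookup table. The sketch below uses the same three-bit syndrome convention as the example in Step 1 (the minority bit, if any, is the one to flip); the names `SYNDROME_TABLE` and `lookup_decoder` are illustrative, not part of OpenQStack:

```python
# Illustrative lookup-table decoder: map each three-bit syndrome
# directly to the qubit index to flip, or None for no correction.
SYNDROME_TABLE = {
    "000": None, "111": None,   # unanimous: no correction
    "011": 0, "100": 0,         # qubit 0 disagrees with the majority
    "101": 1, "010": 1,         # qubit 1 disagrees
    "110": 2, "001": 2,         # qubit 2 disagrees
}

def lookup_decoder(syndrome):
    """Return the qubit index to flip, or None for a trivial syndrome."""
    return SYNDROME_TABLE[syndrome]
```

A table trades memory for speed: for larger codes the table grows exponentially in the number of checks, which is where MWPM and learned decoders come in.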
## Step 1: Define Your Decoder Function
Your decoder takes a syndrome bitstring and returns an index or Pauli operator to apply.
Example: a simple parity-based decoder

```python
def my_decoder(syndrome):
    # Majority vote: flip the minority bit
    bits = [int(b) for b in syndrome]
    if bits.count(1) == 2:
        return bits.index(0)
    elif bits.count(0) == 2:
        return bits.index(1)
    else:
        return None  # no correction needed
```
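A quick sanity check on each syndrome class (the function is repeated here so the snippet runs standalone):

```python
def my_decoder(syndrome):
    # Majority vote: flip the minority bit
    bits = [int(b) for b in syndrome]
    if bits.count(1) == 2:
        return bits.index(0)   # the lone 0 is the minority bit
    elif bits.count(0) == 2:
        return bits.index(1)   # the lone 1 is the minority bit
    else:
        return None            # unanimous: no correction needed

print(my_decoder("010"))  # 1: middle qubit disagrees
print(my_decoder("100"))  # 0: first qubit disagrees
print(my_decoder("111"))  # None: no correction needed
```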
## Step 2: Use the Decoder in the Simulation
Replace the built-in recovery logic with your own:
```python
import numpy as np
from openqstack.qec import BitFlipCode, tensor

code = BitFlipCode()
psi = [1, 0]

encoded = code.encode(psi)
noisy = code.apply_random_X_error(encoded)
syndrome = code.measure_syndrome(noisy)

flip_index = my_decoder(syndrome)

if flip_index is not None:
    # Apply an X correction to the flagged qubit
    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    ops = [I, I, I]
    ops[flip_index] = X
    recovery = tensor(*ops)
    recovered = recovery @ noisy
else:
    recovered = noisy

decoded = code.decode(recovered)
```
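OpenQStack's `tensor` helper builds the multi-qubit operator; as a cross-check, the same correction can be built with plain `np.kron` (a standalone sketch, assuming three qubits and a flip on index 1):

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])

# Kronecker product in qubit order: equivalent to tensor(I, X, I).
ops = [I, X, I]
recovery = reduce(np.kron, ops)

# An X on qubit 1 maps |000> to |010>, i.e. basis index 0 -> 2.
state = np.zeros(8)
state[0] = 1.0
flipped = recovery @ state
assert flipped[int("010", 2)] == 1.0
```

Since `X` is self-inverse, applying `recovery` twice returns the original state, which is a useful invariant to assert in tests.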
## Step 3: Benchmark Over Many Trials
Repeat the above process and record whether decoding was successful:
```python
import numpy as np
from openqstack.qec import BitFlipCode, tensor

def benchmark_decoder(psi, decoder_fn, n_trials=1000):
    code = BitFlipCode()
    success = 0
    for _ in range(n_trials):
        encoded = code.encode(psi)
        corrupted = code.apply_random_X_error(encoded)
        syndrome = code.measure_syndrome(corrupted)
        idx = decoder_fn(syndrome)
        if idx is not None:
            I = np.eye(2)
            X = np.array([[0, 1], [1, 0]])
            ops = [I, I, I]
            ops[idx] = X
            recovery = tensor(*ops)
            recovered = recovery @ corrupted
        else:
            recovered = corrupted
        decoded = code.decode(recovered)
        if np.allclose(np.abs(decoded), np.abs(psi), atol=1e-6):
            success += 1
    return success / n_trials
```
Then run:

```python
rate = benchmark_decoder([1/np.sqrt(2), 1/np.sqrt(2)], my_decoder)
print(f"Logical success rate: {rate:.3f}")
```
## Step 4: Compare to Baseline
You can now test your decoder against:
- The built-in `code.recover()` method
- A "do nothing" decoder (as a control)
- A decoder under a different noise model
Change `apply_random_X_error()` to use:
- A `BitFlipChannel`
- A `DepolarizingChannel`
- A custom `ErrorChannel` with Kraus operators
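The control case is a one-liner; a sketch (the name `null_decoder` is illustrative):

```python
def null_decoder(syndrome):
    # Control baseline: never apply a correction, regardless of syndrome.
    return None
```

It matches the `decoder_fn` signature used by `benchmark_decoder`, so it drops in directly, e.g. `baseline = benchmark_decoder(psi, null_decoder)`.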
## Visualization (Optional)
Track logical success rate over different noise probabilities:
```python
import matplotlib.pyplot as plt

ps = np.linspace(0, 0.5, 20)
rates = []

for p in ps:
    # Inject noise with probability p by overriding the error channel.
    # Patch the class (not a local instance) so the BitFlipCode created
    # inside benchmark_decoder picks it up; bind p via a default argument.
    BitFlipCode.apply_random_X_error = (
        lambda self, state, p=p: BitFlipChannel(p, 3).apply(state)
    )
    r = benchmark_decoder([1, 0], my_decoder, n_trials=500)
    rates.append(r)

plt.plot(ps, rates)
plt.xlabel("Physical error rate")
plt.ylabel("Logical success rate")
plt.title("Decoder Benchmark")
plt.grid(True)
plt.show()
```
## Summary
Benchmarking a decoder in OpenQStack involves:
- Swapping in a custom decoder function
- Running many trials under controlled noise
- Tracking logical success/failure
- Visualizing or comparing performance
This process can scale from basic codes to surface code simulations.
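The same workflow can be illustrated end to end with a purely classical 3-bit repetition code in plain NumPy, with majority vote playing the role of the decoder (no OpenQStack required; `classical_benchmark` is an illustrative name):

```python
import numpy as np

def classical_benchmark(p, n_trials=5000, seed=0):
    """Logical success rate of majority-vote decoding on a 3-bit
    repetition code with independent bit-flip probability p."""
    rng = np.random.default_rng(seed)
    # Encode logical 0 as 000; sample independent flips per trial;
    # majority vote fails exactly when two or more bits flip.
    flips = rng.random((n_trials, 3)) < p
    logical_error = flips.sum(axis=1) >= 2
    return 1.0 - logical_error.mean()

# Theory: success = 1 - 3p^2(1-p) - p^3, roughly 0.972 at p = 0.1.
print(classical_benchmark(0.1))
```

The Monte Carlo estimate should hover near the closed-form value, which is a handy cross-check before moving to the full quantum simulation.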
## Contribute

If you've developed a decoder you'd like to share, whether classical, neural, or heuristic, we welcome contributions. Open a PR or get in touch.