Knut's QEMU patchwork

**Note: The SR/IOV patch set has been an integral part of QEMU since v7.1.0**

See git://git.qemu.org/qemu.git for the official upstream project.

SR/IOV emulation patches (now upstream)

I have implemented a set of patches to emulate SR/IOV in a virtual machine. The patch set is fully functional, but it does not include a fully implemented example device in QEMU. I have tested and used it extensively with my own device model, which is in a format that does not fit well into QEMU (yet), and others are also known to have taken the patch set and made use of it.

The latest version of this patch set is available in the sriov_patches_v14 branch.

Examples of use

Thanks to the work of Łukasz Gieryk and others, there is now a working example of an NVM Express (NVMe) device using the SR/IOV patches in upstream QEMU. You can get a simple one with a maximum of 3 VFs to play with in your VM using something along the lines of these parameters:

   qemu-kvm -M q35 \
         ...
         -device nvme-subsys,id=subsys0 \
         -device nvme,id=nvme0,bus=pcie.0,addr=4.0,serial=deadbead,subsys=subsys0,sriov_max_vfs=3,sriov_vq_flexible=6,sriov_vi_flexible=3
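
Inside the guest, VFs for this controller can then be enabled through sysfs like for any other SR/IOV-capable device. Below is a minimal sketch, assuming the controller ends up at 0000:00:04.0 (matching addr=4.0 on pcie.0 above); note that the flexible queue/interrupt resources reserved with sriov_vq_flexible and sriov_vi_flexible still have to be assigned to the secondary controllers (e.g. with nvme-cli's virt-mgmt command) before the VFs are usable as NVMe controllers:

    # Check that the SR-IOV capability is exposed (the BDF is an assumption, adjust to your guest)
    lspci -s 00:04.0 -vvv | grep -A 5 "SR-IOV"

    # Enable two of the three possible VFs and list the resulting NVMe functions
    echo 2 > /sys/bus/pci/devices/0000:00:04.0/sriov_numvfs
    lspci | grep -i "non-volatile"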

In addition to the actual SR/IOV emulation support patches, I early on wrote some very incomplete example code just to illustrate how to use the patches. This example code "emulates" an igb Ethernet device based on the e1000e implementation in QEMU. The code is the final commit in the sriov_patches_v14 branch.

As of v6, these patches work well enough to demonstrate the patch set, but they do not yet amount to a working device.

Trying out the SR/IOV code using an igb device

On any Linux kernel with the igb driver, the driver will detect the emulated device and get as far as enabling interrupts, which does not work because the implementation is only partial.

I have been using a generic QEMU root port to test the device with; e.g., start with something like:

  qemu-kvm -M q35 \
      ...
      -device pcie-root-port,slot=2,id=pcie_port.2 \
      -device igb,bus=pcie_port.2
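
If you also want the device to have a network backend, it should be possible to attach a netdev in the same way as for the e1000e device it is derived from. The following is only a sketch, assuming the example igb device inherits the netdev property from e1000e; a simple user-mode backend is enough for playing with it:

  qemu-kvm -M q35 \
      ...
      -netdev user,id=net0 \
      -device pcie-root-port,slot=2,id=pcie_port.2 \
      -device igb,bus=pcie_port.2,netdev=net0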

When the machine boots, you can log in and verify that you have an igb device:

 lspci | grep 82576
 03:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
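
You can also verify that the SR/IOV extended capability is exposed on the physical function, e.g. (assuming the device landed at 03:00.0 as above, and running as root so lspci can read the extended capability list):

 lspci -s 03:00.0 -vvv | grep -A 5 "SR-IOV"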

It is then possible to enable and disable VFs by either:

  1. loading/unloading the igb driver and using a nonzero value for the max_vfs module parameter, e.g.

    rmmod igb
    modprobe igb max_vfs=2

Now you should observe 3 devices, the PF and 2 VFs:

 lspci | grep 82576
 03:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
 03:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
 03:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  2. or by using PCIe config space commands via /sys, e.g.

    echo 7 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

Now you should observe 8 devices, 7 VFs and 1 PF:

 lspci | grep 82576
 03:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
 03:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
 03:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
 03:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
 03:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
 03:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
 03:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
 03:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)

Then you can disable the VFs again with:

 echo 0 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

This is still far from a working network device, but it should be useful enough to demonstrate and play with SR/IOV in a VM.
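
If you want to script the enable/disable cycle, the kernel also exposes the maximum number of VFs the PF advertises next to sriov_numvfs. A small sketch along those lines (device path as in the examples above):

 PF=/sys/bus/pci/devices/0000:03:00.0
 # Maximum number of VFs the (emulated) PF advertises
 cat $PF/sriov_totalvfs
 # Enable all of them, list them, then disable them again
 cat $PF/sriov_totalvfs > $PF/sriov_numvfs
 lspci | grep "Virtual Function"
 echo 0 > $PF/sriov_numvfs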