Pci Passthrough Vm
This page outlines how to run EDGE in a virtual machine with hardware acceleration by passing a second PCI graphics card through to the VM. This demo was performed on a Linux Mint host with both Windows 7 and Scientific Linux guests.
== PART I: Configure host for PCI passthrough ==
The process for configuring the host for pass-through of PCI devices is outlined by Puget Systems in [https://www.pugetsystems.com/labs/articles/Multiheaded-NVIDIA-Gaming-using-Ubuntu-14-04-KVM-585/ their guide here.]
The virtual machine requires a dedicated graphics card, monitor, keyboard, and mouse. The keyboard and mouse are identified by make/model, so they should not be identical to the hardware used by the host. The graphics card is likewise identified by make/model when it is isolated from the host; while this seems not to be strictly necessary, it is highly recommended that the two graphics cards be different models.
Demo system config:
host PC: HP Z840, 2x 12-core Intel Xeon, 32 GB RAM
host OS: clean Linux Mint 18 "Sarah" install
guest OS: tested with Windows 7 and Scientific Linux 6.8
host graphics: NVIDIA Quadro K6000 to monitor 1
guest graphics: NVIDIA GTX 1080 to monitor 2
host control: Dell keyboard, Dell mouse
guest control: HP keyboard, HP mouse
BIOS: enable virtualization options (VT-x and VT-d)
Install packages for virtual machine:
sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils virt-manager vagrant
Verify user is member of required VM groups:
sudo adduser $(id -un) libvirtd
sudo adduser $(id -un) kvm
Add the following lines to /etc/modules:
pci_stub
vfio
vfio_iommu_type1
vfio_pci
kvm
kvm_intel
Edit /etc/default/grub so that the kernel command line enables the IOMMU:
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
Update grub
sudo update-grub
Find the identifiers for the guest graphics card (and its built-in audio device) using the following command. In this case, the GTX 1080 was identified as 10de:1b80 and its audio device as 10de:10f0.
mghart@HP-Z841 ~/vms $ lspci -vnn | grep NVIDIA
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1b80] (rev a1) (prog-if 00 [VGA controller])
03:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10f0] (rev a1)
04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK110GL [Quadro K6000] [10de:103a] (rev a1) (prog-if 00 [VGA controller])
04:00.1 Audio device [0403]: NVIDIA Corporation GK110 HDMI Audio [10de:0e1a] (rev a1)
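The bracketed vendor:device IDs can also be pulled out mechanically. A minimal sketch, using sample lines from the demo system in place of a live `lspci -vnn | grep NVIDIA` (adjust the `10de` NVIDIA vendor prefix if your cards differ):

```shell
# Extract the [vendor:device] IDs from lspci output.
# Sample output stands in for: lspci -vnn | grep NVIDIA
sample='03:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1b80] (rev a1)
03:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10f0] (rev a1)'

# Keep only bracketed IDs with the NVIDIA vendor prefix, then drop the brackets.
echo "$sample" | grep -o '\[10de:[0-9a-f]*\]' | tr -d '[]'
```

This prints one ID per line (here 10de:1b80 and 10de:10f0), ready to paste into the pci_stub line below.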
Add pci_stub to /etc/initramfs-tools/modules with the identifier for the guest graphics card. This prevents the host machine from taking control of the card.
pci_stub ids=10de:1b80,10de:10f0
Update initramfs
sudo update-initramfs -u
Identify the PCI bus enumeration for the guest graphics card (and audio) by running the same command again:
lspci -vnn | grep NVIDIA
Create /etc/vfio-pci.cfg containing the PCI enumeration of your guest PCI devices:
0000:03:00.0
0000:03:00.1
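The start script below reads this file line by line, so each non-comment entry must be a full domain:bus:slot.function address. A quick format check, sketched here against a temporary file (point `cfg` at /etc/vfio-pci.cfg on the real host):

```shell
# Sanity-check vfio-pci.cfg entries: each non-comment line should be a full
# domain:bus:slot.function address like 0000:03:00.0.
# A temporary file is used here so the snippet runs anywhere.
cfg=$(mktemp)
printf '0000:03:00.0\n0000:03:00.1\n' > "$cfg"

if grep -v '^#' "$cfg" | grep -Evq '^[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$'; then
    echo "malformed entry found"
else
    echo "config OK"
fi
rm -f "$cfg"
```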
Create the following script to start the virtual machine. The first half binds the guest PCI devices to the vfio-pci driver using the /etc/vfio-pci.cfg file we just generated; the second half configures and starts the virtual machine using QEMU. The VM configuration may require significant adjustment for your own disk and device setup. We recommend reading the [http://wiki.qemu.org/download/qemu-doc.html QEMU documentation] and learning what these options do.
#!/bin/bash
configfile=/etc/vfio-pci.cfg
vfiobind() {
dev="$1"
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
fi
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}
modprobe vfio-pci
cat $configfile | while read line;do
echo $line | grep ^# > /dev/null 2>&1 && continue
vfiobind $line
done
sudo qemu-system-x86_64 -enable-kvm -M q35 -m 8192 -cpu host,kvm=off \
 -drive format=vdi,file=/home/mghart/vms/RPL_SHUTTLE_XPC/RPL_SHUTTLE_XPC.vdi \
 -smp 4,sockets=1,cores=4,threads=1 \
 -bios /usr/share/seabios/bios.bin -vga none \
 -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
 -device vfio-pci,host=03:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
 -soundhw hda \
 -usb -usbdevice host:03f0:104a -usbdevice host:03f0:134a -usbdevice host:0bda:0111 \
 -boot menu=on
exit 0
In our case, passing the graphics card's onboard audio device through to the guest caused major problems. Instead, the guest was given an emulated sound device routed to the host audio hardware via this line in the script above:
-soundhw hda \
You can find the IDs of your USB devices using the command lsusb. USB passthrough of the guest keyboard and mouse was accomplished in the above script with the line:
-usb -usbdevice host:03f0:104a -usbdevice host:03f0:134a -usbdevice host:0bda:0111
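Each lsusb line contains an "ID vendor:product" field, and the vendor:product pair is exactly what `-usbdevice host:` expects. A sketch that extracts those pairs, with sample lines standing in for real lsusb output (device names here are illustrative):

```shell
# Extract vendor:product IDs from lsusb-style output.
# Sample lines stand in for running plain "lsusb" on the host.
sample='Bus 003 Device 002: ID 03f0:104a Hewlett-Packard
Bus 003 Device 004: ID 0bda:0111 Realtek Semiconductor Corp. Card Reader'

# Print only the hex vendor:product pair that follows "ID".
echo "$sample" | sed -n 's/.* ID \([0-9a-f]*:[0-9a-f]*\) .*/\1/p'
```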
You will also need to change the -drive line to match your own virtual media, the -device line to match your PCI enumeration, possibly add a -cdrom line for install media, and change the 8192 in the first line to match your desired guest memory.
The option kvm=off attempts to hide the hypervisor from the NVIDIA driver installer: if the driver detects a hypervisor, it will refuse to install. In Windows, this manifests as a "Code 43: device has been stopped because it reported problems" error.
At this point, if you intend to run a Windows guest, you may proceed to install Windows, install the NVIDIA drivers, and reboot. The rest of this guide is specific to a Scientific Linux guest provisioned with vagrant and having Soc Lite preinstalled.
== Part II: Configure socbox for EDGE ==
provision the SL virtual machine (aka "socbox") using vagrant as usual; allow it to boot in VirtualBox
download NVIDIA binary driver
change runlevel to 3 in /etc/inittab
shut down
run socbox in QEMU with the above script (note: the next steps cannot be done out of order… for some reason)
install NVIDIA binary drivers (367.57 used here), say yes to everything
/boot/grub/grub.conf:
add rdblacklist=nouveau to kernel lines
/etc/modprobe.d/disable-nouveau.conf:
blacklist nouveau
options nouveau modeset=0
change runlevel to 5 in /etc/inittab
QEMU console: system_reset
socbox should now boot with hardware accelerated graphics.
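The grub.conf edit above can be scripted. A sketch with sed, demonstrated here on a throwaway sample file so it runs anywhere; on the guest, substitute /boot/grub/grub.conf and keep a backup first:

```shell
# Append rdblacklist=nouveau to every kernel line of a grub.conf-style file.
# A sample file stands in for the guest's /boot/grub/grub.conf.
grubconf=$(mktemp)
printf '\tkernel /vmlinuz-2.6.32 ro root=/dev/sda1 rhgb quiet\n' > "$grubconf"

# Match lines whose first field is "kernel" and append the blacklist option.
sed -i '/^[[:space:]]*kernel /s/$/ rdblacklist=nouveau/' "$grubconf"
cat "$grubconf"
rm -f "$grubconf"
```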
download EDGE <Download.Download>
symbolic links missing from EDGE:
./lib_Linux_FC3/libjpeg.so → libjpeg.so.62.0.0
./lib_Linux_FC3/libjpeg.so.62 → libjpeg.so.62.0.0
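Those links can be recreated with ln -s. The two ln commands are what matter; a scratch directory stands in for EDGE's lib_Linux_FC3 directory here so the snippet is self-contained:

```shell
# Recreate the missing libjpeg symlinks. Run the two ln commands from inside
# the real lib_Linux_FC3 directory of your EDGE install.
dir=$(mktemp -d)
touch "$dir/libjpeg.so.62.0.0"   # stand-in for the real library file
cd "$dir"
ln -sf libjpeg.so.62.0.0 libjpeg.so
ln -sf libjpeg.so.62.0.0 libjpeg.so.62
ls -l libjpeg.so libjpeg.so.62
```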
install the packages listed in the EDGE release notes for Scientific Linux:
yum install avahi-devel.i686 mesa-libGLU.i686 libXmu.i686 libXScrnSaver.i686 libXft.i686 libjpeg.i686 freetype-devel.i686 cairo.i686 libstdc++.i686
now fix a bug caused by a libGL/NVIDIA driver conflict:
rpm -e mesa-libGL.i686 --nodeps
rpm -e mesa-libGL.x86_64 --nodeps
then start EDGE with run_standalone