Windows Gaming VM
Setup for a Windows 11 VM running in Proxmox that can be used for self-hosted "cloud" gaming.
This relies on using PCIe passthrough to provide the VM with access to a dedicated GPU installed in the host. When set up in this manner, the GPU can't be used for other functions, including host console output. The gaming VM will also need a sufficient amount of RAM and storage, which may put constraints on the host. With that out of the way, here are the necessary steps. This is based on the Gaming VM Guide on the Proxmox forums and the Craft Computing instructions for running a headless GPU in a VM.
First, enable IOMMU on the server per the instructions from the Proxmox wiki. Pay special attention when updating the kernel command line to whether you are using GRUB or systemd-boot as your boot loader, as the instructions are different. As a tip, ZFS installations use systemd-boot.
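As a rough sketch (assuming an Intel CPU; the existing options in your kernel command line will differ, so treat the values below as placeholders), the change looks something like this:

```sh
# GRUB: edit /etc/default/grub and append intel_iommu=on to the default
# command line, then apply the change with: update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# systemd-boot: append to the single line in /etc/kernel/cmdline, then apply
# the change with: proxmox-boot-tool refresh
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on

# After rebooting, verify that IOMMU is enabled:
dmesg | grep -e DMAR -e IOMMU
```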
With IOMMU enabled (and verified), use the following shell command to list all PCIe devices and their IOMMU group:
```sh
for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done
```
In order for this to work for GPU passthrough, the GPU must be in its own IOMMU group (it can be in the same group as its audio device, which is often split out separately, and potentially a PCI controller).
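For reference, a good passthrough candidate shows up along these lines in that command's output (the group number, PCI addresses, and IDs below are purely illustrative):

```
IOMMU group 14 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81] (rev a1)
IOMMU group 14 01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
```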
With IOMMU set up, a few more steps will likely need to be completed before PCIe passthrough can be used. These steps are generally outlined on the Proxmox wiki here.
In order to make sure that the GPU can be reset properly by the guest VM, and that the host doesn't depend on it for some reason, we must make sure that it isn't used by the host at all. First, run `lspci -nnk` and identify the GPU that you'd like to use. Check the `Kernel driver in use` field and see if that driver is used by any other devices. If it isn't, the entire driver can be blacklisted to prevent the host OS (Proxmox) from interacting with it. That can be done by adding `blacklist <driver_name>` to a new file in `/etc/modprobe.d/`, where `<driver_name>` is the name of the driver you just found. This is the most common case, and a new file (for example, `/etc/modprobe.d/blacklist-gpu.conf`; any filename ending in `.conf` works) can likely be created with the following contents:
```
blacklist nvidiafb
blacklist nouveau
blacklist nvidia*
```
If, on the other hand, the driver is in use by some other device (like another GPU that is used by the host), you will need to force the device to use the `vfio-pci` driver instead to effectively blacklist the device itself. These instructions can be found on the wiki page for setting up ARM, as it is a much more likely scenario for SATA controllers.
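A minimal sketch of what that looks like, assuming the GPU sits at PCI address `01:00` (the address and the vendor:device IDs below are placeholders; substitute the ones reported for your card and its audio function):

```sh
# Get the [vendor:device] IDs for the GPU and its audio function
lspci -nn -s 01:00

# Tell vfio-pci to claim those IDs instead of the normal driver
echo "options vfio-pci ids=10de:1b81,10de:10f0" > /etc/modprobe.d/vfio.conf

# If the normal driver still grabs the card first, make vfio-pci load before it
echo "softdep nvidia pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
```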
With the drivers blacklisted, update the initramfs by running `update-initramfs -u -k all` and reboot (it may be prudent to run just `update-initramfs -u` in order to avoid updating older/fallback kernel versions until you are confident in the changes you have made).
After the system reboots, check that the driver in use is now missing (if all drivers were blacklisted) or that it is `vfio-pci`. This can be done with `lspci -nnk`, or with `lspci -nnk -s <device_address>` to see only the output for the device we care about.
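The output should look roughly like the following (the device, address, and IDs are illustrative); the important part is that the `Kernel driver in use` line is either absent or shows `vfio-pci`:

```
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81] (rev a1)
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau
```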
With the above steps completed to prepare the GPU for passthrough, the VM can be configured and the Guest OS installed.
A VM can be created in Proxmox using the normal process with the following specific details:
Before starting, upload a Windows 11 ISO to Proxmox.
On the `OS` screen, select the Windows 11 ISO that was uploaded and select `Microsoft Windows` as the guest OS type (and the correct version). Do not add any VirtIO drivers.
On the `System` page, use the following settings:
- Graphics card: Default
- Machine: q35
- BIOS: OVMF (UEFI)
- Add EFI Disk: yes
- SCSI Controller: LSI 53C895A
- Qemu Agent: no
- Add TPM: yes
On the `Disks` page, make sure to set the `Bus/Device` type to `SATA` and the `Cache` to `Write back`. It is also important to note that the gaming VM will need either a sizeable disk or external storage in order to store games. I kept the install simple and created a new LVM Volume Group in Proxmox on an NVMe SSD installed in the host specifically for this purpose. I then used this for the boot disk and gave it most of the NVMe drive's capacity. Other options include mounting an iSCSI network drive in the guest for game storage, passing disks through directly, and more.
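A minimal sketch of creating such a volume group from the host shell, assuming the SSD shows up as `/dev/nvme0n1` and using `vm-games` as the storage name (both are placeholders; this can also be done from the Proxmox GUI under Disks > LVM):

```sh
# Initialize the disk and create a volume group on it (destroys existing data)
pvcreate /dev/nvme0n1
vgcreate vm-games /dev/nvme0n1

# Register the volume group as VM disk storage in Proxmox
pvesm add lvm vm-games --vgname vm-games --content images,rootdir
```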
On the `CPU` page, assign the appropriate number of cores and set the CPU type to `host`.
On the `Memory` page, disable the `Ballooning Device` (under the advanced settings).
On the `Network` page, set the model to `Intel E1000`, in addition to specifying the appropriate bridge and VLAN, if necessary.
Finally, confirm your settings and create the VM but don't start it yet.
We will next navigate to the `Hardware` tab of the VM and click `Add > PCI Device`. From here, we will select the GPU under `Raw Device` on the resulting pop-up screen and check `ROM-Bar` and `PCI-Express` under the advanced settings. Repeat this process for the GPU's audio controller as well.
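The same thing can be done from the host shell with `qm set` if you prefer; a sketch, assuming VM ID 100 and a GPU at `01:00.0` with its audio function at `01:00.1` (all placeholders):

```sh
# Pass through the GPU and its audio function with PCI-Express and ROM-Bar enabled
qm set 100 --hostpci0 0000:01:00.0,pcie=1,rombar=1
qm set 100 --hostpci1 0000:01:00.1,pcie=1,rombar=1
```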
Finally, open the console for the host and execute the following commands:
```sh
dmidecode -t 0
dmidecode -t 1
```
Copy the output to some scratch space (such as Notepad) as we will need it momentarily.
Go back to the VM in Proxmox and open its `Options` page. Edit the `SMBIOS settings (type1)` entry on this page and enter as much information as possible from the above commands (such as the UUID and manufacturer).
Next, go back to the host's console and, using the editor of your choice, open the VM config at `/etc/pve/qemu-server/<vm-id>.conf`. Add the following line to the top of the file, being sure to replace the information in `<>` with the information from the commands above. The `vendor` and `date` should be wrapped in quotes (`"`), as should the `version` if it contains any spaces.
```
args: -cpu host,-hypervisor,kvm=off -smbios type=0,vendor=<vendor>,version=<version>,date=<date>
```
In this file, the line that reads `cpu: host` should be changed to `cpu: host,hidden=1` as well.
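For reference, the passthrough-related portion of the config might end up looking something like the following (the PCI addresses and SMBIOS values here are illustrative, not values you should copy):

```
args: -cpu host,-hypervisor,kvm=off -smbios type=0,vendor="American Megatrends Inc.",version=F.20,date="04/15/2021"
bios: ovmf
cpu: host,hidden=1
hostpci0: 0000:01:00.0,pcie=1
hostpci1: 0000:01:00.1,pcie=1
machine: q35
```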
Save the file - the VM is now ready to be started.
After starting the VM, use the `Console` in Proxmox and follow the installer as you normally would. The one caveat here is that special instructions may be needed if you'd like to complete the installation without a Microsoft account (using only a local account).
With the Windows installation completed, log in to the system and install the latest drivers for the GPU you passed through. Afterwards, the Device Manager can be opened to make sure the GPU is being detected properly and that it is working as expected.
From here, we will configure the software necessary for remote access. I should note that the instructions that follow are for setting up a headless system using a virtual display adapter; however, a dummy display dongle plugged into the GPU can also be used in lieu of these steps. This will also cause the system to think that a display is attached and provide video output. That being said...
Download and install VB-Audio Virtual Audio Cable (which acts as a virtual audio device) and TightVNC Server (used for remote access during setup). Make sure to install TightVNC as a service so that it works at the login screen.
Now, download Parsec (which will be used for remote access, in addition to Sunshine/Moonlight) and the USBMMIDD Virtual Display adapter.
Reboot the VM and connect via VNC.
Unzip the USBMMIDD zip archive and place the `usbmmidd` directory at `c:\usbmmidd`. Within this folder, run `usbmmidd.bat` as an administrator. This should create a new virtual display.
In this same directory, create a `start.bat` file (make sure that the extension is `.bat`; Windows hides file extensions by default, so you may end up with `start.bat.txt`). Edit it with Notepad and add the following line:
```
c:\usbmmidd\deviceinstaller64 enableidd 1
```
Next, open the group policy editor (`gpedit.msc`), go to `Computer Configuration > Windows Settings > Scripts > Startup`, and add the `start.bat` file that was just created. This will cause the script to run at boot and recreate the virtual display.
With that done, open the Windows display settings and change the resolution of the new adapter to 1080p. The original display should not allow resizing (it shouldn't support other resolutions). Note which display is the new virtual adapter (usually display 2) and, under `Multiple Displays`, select `Show only on 2` (where 2 is the display number you identified). This should get rid of the extra display and leave you with only a single 1080p display. Note that this will also prevent the VM from displaying output in the `Console` in Proxmox (the console was the other virtual display).
With the display configured, install and log in to Parsec (make sure it is installed as `shared` so that the login screen can be accessed). Under `Settings > Host`, set the resolution to 1080p and the bandwidth limit to something more reasonable (such as 25 Mbps for LAN connections).
Next, shut down the Windows VM and, in Proxmox, under the `Hardware` tab, edit the CD drive to `Do not use any media` in order to remove the install disk. In addition, set the `Display` to `none` on this same page. This will prevent the extra display adapter from showing up in Windows.
Finally, start the VM and connect via Parsec. Install Sunshine/Moonlight if desired, following their specific instructions, in order to provide another remote access option. At this point, games can be installed and it should be possible to use the gaming VM.