Virt manager Arch Linux - ryzendew/Linux-Tips-and-Tricks GitHub Wiki

Virt-manager Installation Guide

Table of Contents

  1. Introduction
  2. Installation by Distribution
  3. Verification
  4. Advanced Configuration: GPU Acceleration (NVIDIA)
  5. Advanced Configuration: NVMe Passthrough
  6. Troubleshooting
  7. Quick Reference: All Commands in Order
  8. Additional Resources
  9. Summary

Introduction

This guide will help you install and configure virt-manager on Fedora (and Fedora-based distributions like Nobara), Arch Linux (and Arch-based distributions like CachyOS), Debian, and PikaOS. Virt-manager (Virtual Machine Manager) is a graphical application for managing virtual machines using libvirt. It provides an easy-to-use interface for creating, configuring, and managing virtual machines.

Distribution Links:

  • Fedora - The base distribution
  • Nobara - Fedora-based distribution optimized for gaming and content creation
  • Arch Linux - The base distribution
  • CachyOS - Arch-based distribution with performance optimizations
  • Debian - Stable, universal operating system
  • PikaOS - Debian-based gaming/optimization-focused distribution

What is virt-manager?

  • virt-manager is a desktop application for managing virtual machines
  • It provides a graphical user interface (GUI) for virtualization tasks
  • It uses libvirt as the backend, which is a toolkit for managing virtualization platforms
  • QEMU is the virtualization technology that actually runs the virtual machines
  • Together, these tools allow you to run multiple operating systems on your system

What you'll need:

  • Fedora (or Fedora-based distributions like Nobara), Arch Linux (or Arch-based distributions like CachyOS), Debian, or PikaOS 4 installed and running
  • Administrator (root) access via sudo
  • An internet connection to download packages
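
Before installing anything, you can confirm that your CPU supports hardware virtualization. This is a quick sketch; the flag name depends on your CPU vendor:

```shell
# Count CPU virtualization flags: vmx = Intel VT-x, svm = AMD-V.
# A result of 0 means hardware virtualization is unavailable or
# disabled in your BIOS/UEFI firmware settings.
grep -cE 'vmx|svm' /proc/cpuinfo
```

Any number greater than 0 (one match per CPU thread) means your processor can run KVM-accelerated virtual machines.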

Installation by Distribution

Fedora Installation (Also works for Fedora-based distributions like Nobara)

Note: These instructions work for Fedora and Fedora-based distributions such as Nobara. They all use the same dnf package manager and package names.

Fedora makes installing virt-manager very simple with its virtualization package group. This group includes all the necessary components in one installation.

Step 1: Install the Virtualization Package Group

Open your terminal and run the following command:

sudo dnf install @virtualization -y

What each part means:

  • sudo: Runs the command with administrator privileges (needed to install software)
  • dnf: The package manager for Fedora (Dandified YUM)
  • install: Tells dnf to install the specified packages
  • @virtualization: A package group (indicated by the @ symbol)
      • Package groups are collections of related packages that work together
      • The @virtualization group includes:
          • virt-manager (the graphical interface)
          • libvirt (the virtualization toolkit)
          • QEMU (the virtualization engine)
          • all other necessary virtualization components
  • -y: Automatically answers "yes" to all prompts
      • This saves you from having to confirm the installation
      • Without this flag, dnf would ask you to confirm before installing

What this does: This single command installs everything you need for virtualization on Fedora, including virt-manager, libvirt, QEMU, and all their dependencies. It's much simpler than installing packages individually!

Step 1.1: Enter your password when prompted

The sudo command requires administrator privileges. You'll be asked to enter your user password. Note that when you type your password, nothing will appear on screen (this is normal for security reasons).

Example output:

[sudo] password for yourusername: 

Type your password and press Enter.

Step 1.2: Wait for installation to complete

The installation will proceed automatically (thanks to the -y flag). You'll see a list of packages being installed. Wait for it to complete - this may take a few minutes depending on your internet connection.

Step 2: Enable and Start libvirtd Service

After the packages are installed, you need to start the libvirtd service. This service manages virtualization on your system.

What is libvirtd?

  • libvirtd is the libvirt daemon (background service)
  • A daemon is a program that runs in the background and performs system tasks
  • libvirtd manages your virtual machines and provides the connection between virt-manager and QEMU
  • It must be running for virt-manager to work

Method 1: Combined Command (Recommended - Quick and Easy)

The easiest way is to use a single command that both enables and starts the service:

sudo systemctl enable --now libvirtd

What each part means:

  • sudo: Runs the command with administrator privileges (needed to manage system services)
  • systemctl: System control command - used to manage systemd services
  • enable: Enables the service to start automatically at boot
      • This means libvirtd will start every time you boot your computer
      • You won't have to manually start it each time
  • --now: Starts the service immediately (in addition to enabling it)
      • Without --now, the service would only be enabled but not started until the next reboot
      • This flag ensures the service starts right away

What this does: This single command both enables libvirtd to start automatically on boot AND starts it immediately so you can use virt-manager right away.

Expected output: You should see a message like:

Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /usr/lib/systemd/system/libvirtd.service.

This confirms that the service has been enabled and started.

Method 2: Separate Commands (For Learning)

If you want to understand what each step does, you can run the commands separately:

Step 2.1: Start the libvirtd service

sudo systemctl start libvirtd

What this does:

  • systemctl: System control command - used to manage systemd services
  • start: Starts the libvirtd service immediately
  • This makes libvirtd active right now so you can use virt-manager

Step 2.2: Enable the libvirtd service

sudo systemctl enable libvirtd

What this does:

  • enable: Configures the service to start automatically at boot
  • This ensures libvirtd will start every time you boot your computer
  • Without this, you'd have to manually start libvirtd after each reboot

Why both commands?

  • start makes it run now
  • enable makes it run automatically in the future
  • You need both to have libvirtd working immediately AND after reboots

Expected output: After running enable, you should see:

Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /usr/lib/systemd/system/libvirtd.service.

This confirms that the service has been enabled.

Note: Both methods achieve the same result. Use Method 1 for convenience, or Method 2 if you want to understand each step individually.

Step 3: Verify Installation

You can verify that everything is working:

Check libvirtd status:

sudo systemctl status libvirtd

What to look for: You should see Active: active (running) in green. Press q to exit the status view.

Launch virt-manager: You can now launch virt-manager from:

  • Your application menu (look for "Virtual Machine Manager" or "virt-manager")
  • Or from the terminal:
virt-manager

What you should see:

  • The virt-manager window should open
  • You should see "QEMU/KVM" listed as a connection (usually localhost (system))
  • If you see the connection, everything is set up correctly!
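
You can also confirm the connection from the terminal with virsh, libvirt's command-line client, which is installed as part of the virtualization group. A sketch:

```shell
# Confirm virsh can reach the libvirtd daemon.  An empty VM table is
# normal on a fresh install - it just proves the connection works.
if command -v virsh >/dev/null 2>&1; then
    sudo virsh -c qemu:///system list --all
else
    echo "virsh not found - check that the installation completed"
fi
```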

That's it! On Fedora (or Fedora-based distributions like Nobara), virt-manager should now be fully working. You can proceed to the Advanced Configuration section if you want to set up GPU acceleration, or start creating virtual machines right away.


Arch Linux Installation (Also works for Arch-based distributions like CachyOS)

Note: These instructions work for Arch Linux and Arch-based distributions such as CachyOS, EndeavourOS, Manjaro, and others. They all use the same pacman package manager and package names.

Arch Linux requires installing packages individually and configuring additional services.

Step 1: Install virt-manager and QEMU

Open your terminal and run the following command to install virt-manager and QEMU:

sudo pacman -S virt-manager qemu-full

What each part means:

  • sudo: Runs the command with administrator privileges (needed to install software)
  • pacman: The package manager for Arch Linux
  • -S: Synchronize/install packages (tells pacman to install the specified packages)
  • virt-manager: The Virtual Machine Manager graphical application
  • qemu-full: The complete QEMU virtualization package
      • QEMU is a generic and open source machine emulator and virtualizer
      • The qemu-full package is a meta-package that installs all QEMU components and features
      • This includes all emulators, audio backends, block device support, and tools
      • This ensures you have all the virtualization capabilities you might need
      • Reference: Arch Linux qemu-full package

What this does: This command installs both the virt-manager application (the GUI you'll use) and QEMU (the underlying virtualization engine that actually runs your virtual machines). Without QEMU, virt-manager wouldn't be able to create or run virtual machines.

Step 1.1: Enter your password when prompted

The sudo command requires administrator privileges. You'll be asked to enter your user password. Note that when you type your password, nothing will appear on screen (this is normal for security reasons).

Example output:

[sudo] password for yourusername: 

Type your password and press Enter.

Step 1.2: Confirm installation

Pacman will show you a list of packages to be installed and ask for confirmation. You'll see something like:

Packages (X) to install:
  virt-manager
  qemu-full
  ... (dependencies will be listed here)

Proceed with installation? [Y/n]

Type Y and press Enter to proceed with the installation.
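
Once the packages are in place, you can check that the KVM kernel modules are loaded. This is a hedged sketch; the vendor-specific module depends on your CPU:

```shell
# KVM support comes from kernel modules: kvm plus kvm_intel (Intel)
# or kvm_amd (AMD).  If nothing is loaded, check that virtualization
# is enabled in your BIOS/UEFI firmware.
if lsmod 2>/dev/null | grep -q '^kvm'; then
    lsmod | grep '^kvm'
else
    echo "KVM modules not loaded"
fi
```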


Step 2: Enable and Start libvirtd Service

After installing the packages, you need to enable and start the libvirtd service. This service manages virtualization on your system.

What is libvirtd?

  • libvirtd is the libvirt daemon (background service)
  • A daemon is a program that runs in the background and performs system tasks
  • libvirtd manages your virtual machines and provides the connection between virt-manager and QEMU
  • It must be running for virt-manager to work

Method 1: Combined Command (Recommended - Quick and Easy)

The easiest way is to use a single command that both enables and starts the service:

sudo systemctl enable --now libvirtd

What each part means:

  • sudo: Runs the command with administrator privileges (needed to manage system services)
  • systemctl: System control command - used to manage systemd services
  • enable: Enables the service to start automatically at boot
      • This means libvirtd will start every time you boot your computer
      • You won't have to manually start it each time
  • --now: Starts the service immediately (in addition to enabling it)
      • Without --now, the service would only be enabled but not started until the next reboot
      • This flag ensures the service starts right away

What this does: This single command both enables libvirtd to start automatically on boot AND starts it immediately so you can use virt-manager right away.

Expected output: You should see a message like:

Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /usr/lib/systemd/system/libvirtd.service.

This confirms that the service has been enabled and started.

Method 2: Separate Commands (For Learning)

If you want to understand what each step does, you can run the commands separately:

Step 2.1: Start the libvirtd service

sudo systemctl start libvirtd

What this does:

  • systemctl: System control command - used to manage systemd services
  • start: Starts the libvirtd service immediately
  • This makes libvirtd active right now so you can use virt-manager

Step 2.2: Enable the libvirtd service

sudo systemctl enable libvirtd

What this does:

  • enable: Configures the service to start automatically at boot
  • This ensures libvirtd will start every time you boot your computer
  • Without this, you'd have to manually start libvirtd after each reboot

Why both commands?

  • start makes it run now
  • enable makes it run automatically in the future
  • You need both to have libvirtd working immediately AND after reboots

Expected output: After running enable, you should see:

Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /usr/lib/systemd/system/libvirtd.service.

This confirms that the service has been enabled.

Note: Both methods achieve the same result. Use Method 1 for convenience, or Method 2 if you want to understand each step individually.


Step 3: Start and Enable iptables Service

Note: This step may not be necessary on all Arch Linux installations. Modern libvirt typically manages networking automatically. However, if you experience network connectivity issues with your VMs, these steps can help.

iptables is a firewall management tool that libvirt can use to manage networking for virtual machines. Some configurations require the iptables service to be running.

What is iptables?

  • iptables is a user-space utility program that allows system administrators to configure the IP packet filter rules of the Linux kernel firewall
  • libvirtd can use iptables for managing network connectivity for your virtual machines
  • Some network configurations may require iptables to be active

Important: If the iptables.service doesn't exist on your system (which is common on modern Arch Linux installations), you can skip this step. Libvirt will use its own network management. If you get an error that the service doesn't exist, proceed to Step 4.
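
You can check whether the unit exists before trying to start it. A sketch:

```shell
# If this prints the fallback message, iptables.service is not
# installed on your system and you can safely skip to Step 4.
systemctl list-unit-files 'iptables.service' 2>/dev/null | grep iptables \
    || echo "iptables.service not found - skip to Step 4"
```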

Step 3.1: Start the iptables service

sudo systemctl start iptables.service

What this does:

  • start: Starts the iptables service immediately
  • iptables.service: The systemd service name for iptables
  • This makes iptables active right now

Step 3.2: Enable the iptables service

sudo systemctl enable iptables.service

What this does:

  • enable: Configures the service to start automatically at boot
  • This ensures iptables will be running every time you start your computer
  • Without this, you'd have to manually start iptables after each reboot

Why both commands?

  • start makes it run now
  • enable makes it run automatically in the future
  • You need both to have iptables working immediately AND after reboots

Step 4: Restart libvirtd Service

After starting iptables, you should restart libvirtd to ensure it properly recognizes and uses the iptables service:

sudo systemctl restart libvirtd.service

What this does:

  • restart: Stops the service and then starts it again
  • This ensures libvirtd picks up the newly started iptables service
  • Restarting is necessary because libvirtd needs to detect that iptables is now available for network management

Why restart libvirtd? When you start iptables after libvirtd is already running, libvirtd may not automatically detect it. Restarting libvirtd ensures it properly connects to iptables for managing virtual machine networks.
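
If your VMs still have no connectivity after the restart, the libvirt default NAT network may simply be inactive. A hedged sketch for checking and (via the commented commands) activating it:

```shell
# "default" should be listed as active.  If it is inactive, the
# commented commands below bring it up and mark it to start on boot.
if command -v virsh >/dev/null 2>&1; then
    sudo virsh net-list --all
    # sudo virsh net-start default
    # sudo virsh net-autostart default
else
    echo "virsh not found"
fi
```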


PikaOS 4 Installation

PikaOS 4 is a Debian-based gaming and optimization-focused distribution. It uses the apt package manager and standard Debian package names. The installation process is similar to Ubuntu and Debian.

About PikaOS:

  • Built on Debian base with custom compiled packages for stability and up-to-date software
  • Gaming-focused setup with excellent performance optimizations
  • High compatibility with broad software and hardware support
  • Performance-tuned with custom-tweaked kernel and optimized packages

Step 1: Install virt-manager and QEMU

Open your terminal and run the following command to install virt-manager and QEMU:

sudo apt install virt-manager qemu-kvm libvirt-daemon-system libvirt-clients

What each part means:

  • sudo: Runs the command with administrator privileges (needed to install software)
  • apt: The Advanced Package Tool, the package manager for Debian-based systems
  • install: Tells apt to install the specified packages
  • virt-manager: The Virtual Machine Manager graphical application
  • qemu-kvm: QEMU with KVM acceleration support
      • QEMU is a generic and open source machine emulator and virtualizer
      • KVM (Kernel-based Virtual Machine) provides hardware acceleration
      • This package includes the core QEMU functionality needed for virtualization
  • libvirt-daemon-system: The libvirt system daemon and systemd service files
      • This provides the libvirtd service that manages virtualization
  • libvirt-clients: Client libraries and utilities for libvirt
      • Includes command-line tools for managing virtual machines

What this does: This command installs virt-manager (the GUI), QEMU with KVM support (the virtualization engine), and all the libvirt components needed to manage virtual machines.

Step 1.1: Enter your password when prompted

The sudo command requires administrator privileges. You'll be asked to enter your user password. Note that when you type your password, nothing will appear on screen (this is normal for security reasons).

Example output:

[sudo] password for yourusername: 

Type your password and press Enter.

Step 1.2: Confirm installation

Apt will show you a list of packages to be installed and ask for confirmation. You'll see something like:

The following NEW packages will be installed:
  virt-manager
  qemu-kvm
  libvirt-daemon-system
  libvirt-clients
  ... (dependencies will be listed here)

Do you want to continue? [Y/n]

Type Y and press Enter to proceed with the installation.

Step 2: Enable and Start libvirtd Service

After installing the packages, you need to enable and start the libvirtd service. This service manages virtualization on your system.

What is libvirtd?

  • libvirtd is the libvirt daemon (background service)
  • A daemon is a program that runs in the background and performs system tasks
  • libvirtd manages your virtual machines and provides the connection between virt-manager and QEMU
  • It must be running for virt-manager to work

Method 1: Combined Command (Recommended - Quick and Easy)

The easiest way is to use a single command that both enables and starts the service:

sudo systemctl enable --now libvirtd

What each part means:

  • sudo: Runs the command with administrator privileges (needed to manage system services)
  • systemctl: System control command - used to manage systemd services
  • enable: Enables the service to start automatically at boot
      • This means libvirtd will start every time you boot your computer
      • You won't have to manually start it each time
  • --now: Starts the service immediately (in addition to enabling it)
      • Without --now, the service would only be enabled but not started until the next reboot
      • This flag ensures the service starts right away

What this does: This single command both enables libvirtd to start automatically on boot AND starts it immediately so you can use virt-manager right away.

Expected output: You should see a message like:

Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /lib/systemd/system/libvirtd.service.

This confirms that the service has been enabled and started.

Method 2: Separate Commands (For Learning)

If you want to understand what each step does, you can run the commands separately:

Step 2.1: Start the libvirtd service

sudo systemctl start libvirtd

What this does:

  • systemctl: System control command - used to manage systemd services
  • start: Starts the libvirtd service immediately
  • This makes libvirtd active right now so you can use virt-manager

Step 2.2: Enable the libvirtd service

sudo systemctl enable libvirtd

What this does:

  • enable: Configures the service to start automatically at boot
  • This ensures libvirtd will start every time you boot your computer
  • Without this, you'd have to manually start libvirtd after each reboot

Why both commands?

  • start makes it run now
  • enable makes it run automatically in the future
  • You need both to have libvirtd working immediately AND after reboots

Expected output: After running enable, you should see:

Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /lib/systemd/system/libvirtd.service.

This confirms that the service has been enabled.

Note: Both methods achieve the same result. Use Method 1 for convenience, or Method 2 if you want to understand each step individually.

Step 3: Verify Installation

You can verify that everything is working:

Check libvirtd status:

sudo systemctl status libvirtd

What to look for: You should see Active: active (running) in green. Press q to exit the status view.

Launch virt-manager: You can now launch virt-manager from:

  • Your application menu (look for "Virtual Machine Manager" or "virt-manager")
  • Or from the terminal:
virt-manager

What you should see:

  • The virt-manager window should open
  • You should see "QEMU/KVM" listed as a connection (usually localhost (system))
  • If you see the connection, everything is set up correctly!

That's it! On PikaOS 4, virt-manager should now be fully working. You can proceed to the Advanced Configuration section if you want to set up GPU acceleration or NVMe passthrough, or start creating virtual machines right away.

Note: PikaOS 4, being Debian-based, typically doesn't require iptables configuration like Arch Linux. The libvirt networking should work automatically with the default configuration.


Debian Installation

Debian uses the apt package manager and standard Debian package names. The installation process is straightforward and similar to other Debian-based distributions.

Step 1: Install virt-manager and QEMU

Open your terminal and run the following command to install virt-manager and QEMU:

sudo apt install virt-manager qemu-kvm libvirt-daemon-system libvirt-clients

What each part means:

  • sudo: Runs the command with administrator privileges (needed to install software)
  • apt: The Advanced Package Tool, the package manager for Debian-based systems
  • install: Tells apt to install the specified packages
  • virt-manager: The Virtual Machine Manager graphical application
  • qemu-kvm: QEMU with KVM acceleration support
      • QEMU is a generic and open source machine emulator and virtualizer
      • KVM (Kernel-based Virtual Machine) provides hardware acceleration
      • This package includes the core QEMU functionality needed for virtualization
  • libvirt-daemon-system: The libvirt system daemon and systemd service files
      • This provides the libvirtd service that manages virtualization
  • libvirt-clients: Client libraries and utilities for libvirt
      • Includes command-line tools for managing virtual machines

What this does: This command installs virt-manager (the GUI), QEMU with KVM support (the virtualization engine), and all the libvirt components needed to manage virtual machines.

Step 1.1: Enter your password when prompted

The sudo command requires administrator privileges. You'll be asked to enter your user password. Note that when you type your password, nothing will appear on screen (this is normal for security reasons).

Example output:

[sudo] password for yourusername: 

Type your password and press Enter.

Step 1.2: Confirm installation

Apt will show you a list of packages to be installed and ask for confirmation. You'll see something like:

The following NEW packages will be installed:
  virt-manager
  qemu-kvm
  libvirt-daemon-system
  libvirt-clients
  ... (dependencies will be listed here)

Do you want to continue? [Y/n]

Type Y and press Enter to proceed with the installation.
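
On Debian, your user account typically also needs to be a member of the libvirt group to connect to the system libvirtd instance without running virt-manager as root. A hedged sketch (the change takes effect after you log out and back in):

```shell
# Add the current user to the libvirt group so virt-manager can reach
# qemu:///system without root.  Takes effect at your next login.
sudo usermod -aG libvirt "$USER"
```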

Step 2: Enable and Start libvirtd Service

After installing the packages, you need to enable and start the libvirtd service. This service manages virtualization on your system.

What is libvirtd?

  • libvirtd is the libvirt daemon (background service)
  • A daemon is a program that runs in the background and performs system tasks
  • libvirtd manages your virtual machines and provides the connection between virt-manager and QEMU
  • It must be running for virt-manager to work

Method 1: Combined Command (Recommended - Quick and Easy)

The easiest way is to use a single command that both enables and starts the service:

sudo systemctl enable --now libvirtd

What each part means:

  • sudo: Runs the command with administrator privileges (needed to manage system services)
  • systemctl: System control command - used to manage systemd services
  • enable: Enables the service to start automatically at boot
      • This means libvirtd will start every time you boot your computer
      • You won't have to manually start it each time
  • --now: Starts the service immediately (in addition to enabling it)
      • Without --now, the service would only be enabled but not started until the next reboot
      • This flag ensures the service starts right away

What this does: This single command both enables libvirtd to start automatically on boot AND starts it immediately so you can use virt-manager right away.

Expected output: You should see a message like:

Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /lib/systemd/system/libvirtd.service.

This confirms that the service has been enabled and started.

Method 2: Separate Commands (For Learning)

If you want to understand what each step does, you can run the commands separately:

Step 2.1: Start the libvirtd service

sudo systemctl start libvirtd

What this does:

  • systemctl: System control command - used to manage systemd services
  • start: Starts the libvirtd service immediately
  • This makes libvirtd active right now so you can use virt-manager

Step 2.2: Enable the libvirtd service

sudo systemctl enable libvirtd

What this does:

  • enable: Configures the service to start automatically at boot
  • This ensures libvirtd will start every time you boot your computer
  • Without this, you'd have to manually start libvirtd after each reboot

Why both commands?

  • start makes it run now
  • enable makes it run automatically in the future
  • You need both to have libvirtd working immediately AND after reboots

Expected output: After running enable, you should see:

Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /lib/systemd/system/libvirtd.service.

This confirms that the service has been enabled.

Note: Both methods achieve the same result. Use Method 1 for convenience, or Method 2 if you want to understand each step individually.

Step 3: Verify Installation

You can verify that everything is working:

Check libvirtd status:

sudo systemctl status libvirtd

What to look for: You should see Active: active (running) in green. Press q to exit the status view.

Launch virt-manager: You can now launch virt-manager from:

  • Your application menu (look for "Virtual Machine Manager" or "virt-manager")
  • Or from the terminal:
virt-manager

What you should see:

  • The virt-manager window should open
  • You should see "QEMU/KVM" listed as a connection (usually localhost (system))
  • If you see the connection, everything is set up correctly!

That's it! On Debian, virt-manager should now be fully working. You can proceed to the Advanced Configuration section if you want to set up GPU acceleration or NVMe passthrough, or start creating virtual machines right away.

Note: Debian typically doesn't require the extra iptables configuration described in the Arch Linux instructions. The libvirt networking should work automatically with the default configuration.


Verification

Step 1: Check Service Status

You can verify that all services are running correctly:

Check libvirtd status:

sudo systemctl status libvirtd

What to look for: You should see Active: active (running) in green. Press q to exit the status view.

Check iptables status (only relevant if you enabled iptables following the Arch Linux instructions):

sudo systemctl status iptables

What to look for: You should see Active: active (exited) or Active: active (running). Press q to exit. If the service doesn't exist on your system, skip this check.

Step 2: Launch virt-manager

You can now launch virt-manager from:

  • Your application menu (look for "Virtual Machine Manager" or "virt-manager")
  • Or from the terminal:
virt-manager

What you should see:

  • The virt-manager window should open
  • You should see "QEMU/KVM" listed as a connection (usually localhost (system))
  • If you see the connection, everything is set up correctly!
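
As a final terminal-side check, you can ask virsh for version information on the system connection. This is a sketch; the exact output lines vary by distribution:

```shell
# Print client and daemon version details.  A "Running hypervisor:
# QEMU" line confirms the whole stack (virsh -> libvirtd -> QEMU)
# is wired up correctly.
if command -v virsh >/dev/null 2>&1; then
    virsh -c qemu:///system version
else
    echo "virsh not found"
fi
```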

Advanced Configuration: GPU Acceleration (NVIDIA)

This section covers configuring GPU acceleration for virtual machines using NVIDIA GPUs. This allows your virtual machines to use your host system's GPU for better graphics performance.

Important Prerequisites:

  • You must have an NVIDIA GPU installed
  • NVIDIA drivers must be installed on your host system
  • You must have a virtual machine already created in virt-manager
  • This configuration is for advanced users who want GPU acceleration in their VMs

Step 1: Enable XML Edit Support

Before you can edit the XML configuration directly, you need to enable XML editing in virt-manager:

  1. Open virt-manager (if not already open)

  2. Go to Edit menu:
      • Click on "Edit" in the menu bar at the top
      • Select "Preferences" (or "Settings" depending on your version)

  3. Enable XML editing:
      • In the Preferences/Settings window, look for an option like "Enable XML Edit Support" or "Enable XML editing"
      • Check the box to enable it
      • Click "OK" or "Close" to save the setting

What this does:

  • Enables the XML tab in device configuration windows
  • Allows you to directly edit the XML configuration of virtual machine devices
  • This is necessary because some advanced GPU settings can only be configured via XML

Step 2: Configure the Video Device

Now you'll modify the video device settings for your virtual machine:

  1. Open your virtual machine settings:
      • In virt-manager, select your virtual machine from the list
      • Click the "Open" button (or double-click the VM) to view its details
      • Make sure the VM is shut down (not running) - you cannot edit hardware settings while the VM is running

  2. Access the Video settings:
      • Click on "Video" in the left sidebar (under the hardware list)
      • You should see a video device listed (usually "Video - virtio" or similar)

  3. Open the XML tab:
      • At the bottom of the Video settings window, click on the "XML" tab
      • You'll see the current XML configuration for the video device

  4. Replace the XML content:
      • Select all the existing XML content in the XML tab (you can use Ctrl+A)
      • Delete it
      • Paste the following XML configuration:
<video>
  <driver iommu="on" ats="on" packed="on"/>
  <model type="virtio" heads="1" primary="yes">
    <acceleration accel3d="yes"/>
  </model>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
</video>

What each part means:

  • <video>: The video device element
  • <driver iommu="on" ats="on" packed="on"/>: GPU driver settings
      • iommu="on": Enables Input/Output Memory Management Unit (allows direct GPU access)
      • ats="on": Enables Address Translation Services (improves performance)
      • packed="on": Enables packed virtqueues (improves virtio performance)
  • <model type="virtio" heads="1" primary="yes">: Video model configuration
      • type="virtio": Uses the virtio video device (paravirtualized, better performance)
      • heads="1": Number of display heads (monitors)
      • primary="yes": Makes this the primary display
  • <acceleration accel3d="yes"/>: Enables 3D acceleration
  • <address type="pci" ...>: PCI bus address for the device

  5. Apply the changes:
      • Click "Apply" or "OK" to save the changes

Step 3: Remove the Spice Graphics Server

The default Spice graphics server needs to be removed and replaced:

  1. Find the Graphics device:
  • In your VM settings, look for "Graphics" or "Display" in the hardware list
  • Select it
  2. Remove the device:
  • Click the "Remove" button (usually at the bottom left of the settings window)
  • Confirm the removal if prompted

What this does:

  • Removes the default Spice display server
  • Spice is the default graphics protocol, but we need to replace it with EGL headless for GPU acceleration

Step 4: Add EGL Headless Graphics Device

Now you'll add a new graphics device configured for GPU acceleration:

  1. Add a new Graphics device:
  • At the bottom left of the VM settings window, click "Add Hardware" or the "+" button
  • In the "Add New Virtual Hardware" window, select "Graphics" from the list
  • Click "Add"
  2. Configure the new Graphics device:
  • The new Graphics device should appear in your hardware list
  • Select it
  • Click on the "XML" tab at the bottom
  3. Replace the XML content:
  • Select all the existing XML content
  • Delete it
  • Paste the following XML configuration:
<graphics type="egl-headless">
  <gl rendernode="/dev/nvidia0"/>
</graphics>

What each part means:

  • <graphics type="egl-headless">: Graphics device configuration
  • type="egl-headless": Uses EGL (Embedded-System Graphics Library) in headless mode
  • Headless means it doesn't create a display window, but allows GPU access
  • <gl rendernode="/dev/nvidia0"/>: OpenGL configuration
  • rendernode="/dev/nvidia0": Points to your NVIDIA GPU device
  • This allows the VM to use your NVIDIA GPU for rendering
  4. Apply the changes:
  • Click "Apply" or "OK" to save

Note: If you have multiple NVIDIA GPUs, you might need to use /dev/nvidia1, /dev/nvidia2, etc. You can check available devices with:

ls -la /dev/nvidia*

Step 5: Add Default Spice Display

Finally, add a standard Spice graphics device for the display:

  1. Add another Graphics device:
  • Click "Add Hardware" or the "+" button again
  • Select "Graphics" from the list
  • Click "Add"
  2. Keep default settings:
  • Select the new Graphics device
  • Do NOT modify anything - leave all settings at their defaults
  • The default type should be "Spice server"
  • Just click "Apply" or "OK"

What this does:

  • Adds back a Spice display server for viewing the VM's screen
  • This provides the graphical interface you'll see in virt-manager
  • The EGL headless device handles GPU acceleration, while Spice handles the display

Step 6: Verify Configuration

After making all these changes:

  1. Review your hardware list:
  • You should see:
  • One Video device (virtio with acceleration)
  • Two Graphics devices (one EGL headless, one Spice)
  2. Start your virtual machine:
  • Close the VM settings window
  • Start your virtual machine
  • The VM should now have GPU acceleration enabled
  3. Test GPU acceleration:
  • Install GPU drivers inside your guest OS (the OS running in the VM)
  • Run GPU-intensive applications to verify acceleration is working
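One quick way to confirm acceleration inside a Linux guest is to check which OpenGL renderer is active. This assumes the guest has the `glxinfo` tool installed (packaged as mesa-utils on most distributions):

```shell
# Inside the Linux guest - show the active OpenGL renderer
glxinfo | grep "OpenGL renderer"
# A result mentioning "llvmpipe" means software rendering (acceleration is NOT working);
# a virgl or NVIDIA renderer string means the GPU path is in use.
```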

Important Notes:

  • Make sure your VM is shut down before making hardware changes
  • You may need to install NVIDIA drivers inside the guest OS as well
  • Some guest operating systems may require additional configuration
  • If you encounter issues, check that your NVIDIA drivers are properly installed on the host system

Advanced Configuration: NVMe Passthrough

This section covers configuring NVMe passthrough, which allows a virtual machine to have direct access to a physical NVMe drive. This provides native NVMe performance and is useful for performance-critical applications or testing OS installations directly on real hardware.

What is NVMe Passthrough?

Passthrough means giving a virtual machine direct access to a physical device. For NVMe drives, this allows the VM to use the disk as if it were physically installed inside it, rather than using a virtual disk file.

Benefits:

  • Native NVMe speeds: The VM gets full access to the drive's performance
  • Direct hardware access: Useful for testing OS installs on real hardware
  • Better performance: No virtualization overhead for disk operations

Important Warning:

  • Passthrough gives the VM full control of the hardware
  • Do NOT share the NVMe drive with the host system at the same time
  • The drive will be unavailable to the host while passed through to the VM

Prerequisites

Before you begin, you need:

Hardware Requirements:

  • CPU with IOMMU support:
  • Intel processors: Must support VT-d (Virtualization Technology for Directed I/O)
  • AMD processors: Must support AMD-Vi (AMD's IOMMU implementation)
  • NVMe drive: The physical NVMe drive you want to pass through to the VM

BIOS/UEFI Configuration:

  • Enable IOMMU in BIOS/UEFI:
  • Intel: Enable VT-d (Virtualization Technology for Directed I/O)
  • AMD: Enable SVM and IOMMU (sometimes labeled AMD-Vi)
  • Access your BIOS/UEFI settings during boot (usually F2, F10, F12, or Delete key)
  • Look for virtualization settings in Advanced or CPU Configuration menus

Software Requirements:

  • Linux host system (Fedora, Arch Linux, Ubuntu, etc.)
  • Virt-manager and QEMU/KVM installed (see installation sections above)
  • VFIO kernel modules (usually built into modern Linux kernels)
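You can confirm the VFIO modules are present before going further (on most modern kernels vfio-pci ships as a loadable module or is built in):

```shell
# Check whether the vfio-pci module is known to the kernel
modinfo vfio-pci | head -n 3

# See whether any VFIO modules are currently loaded (may be empty if built-in)
lsmod | grep vfio || echo "no vfio modules loaded yet (they may be built into the kernel)"
```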

Step 1: Identify Your NVMe Device

First, you need to identify the PCI address and vendor ID of your NVMe drive.

Run the identification command:

lspci -nn | grep -i nvme

What this command does:

  • lspci: Lists all PCI devices connected to your system
  • -nn: Shows both device names and numeric IDs
  • grep -i nvme: Filters the output to show only NVMe devices (case-insensitive)

Example output:

04:00.0 Non-Volatile memory controller [0108]: Samsung Electronics NVMe SSD Controller [144d:a808]

What you need from this output:

  • PCI Bus Address: 04:00.0 (the first part before the device name)
  • This tells the system where the device is located on the PCI bus
  • Vendor and Device ID: 144d:a808 (the numbers in brackets at the end)
  • 144d is the vendor ID (Samsung in this case)
  • a808 is the device ID (specific model)
  • You'll need this in the format 144d:a808 for configuration

If you have multiple NVMe drives: The command will show all of them. Make sure you identify the correct one by:

  • Checking the drive model name
  • Verifying the size matches your target drive
  • Using lsblk to see which /dev/nvme* device corresponds to which PCI address
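The vendor:device ID can also be extracted programmatically. This is a small sketch run against the sample line above; on your system, pipe real `lspci -nn` output in instead:

```shell
# Sample lspci -nn line (from the example above); replace with: lspci -nn | grep -i nvme
line='04:00.0 Non-Volatile memory controller [0108]: Samsung Electronics NVMe SSD Controller [144d:a808]'

# The vendor:device pair is the bracketed xxxx:yyyy token at the end of the line
id=$(echo "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]')
echo "$id"   # -> 144d:a808, the value you plug into vfio-pci.ids=
```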

Step 2: Configure Kernel Parameters

You need to add kernel parameters to enable IOMMU and bind your NVMe drive to the VFIO driver. These parameters tell the Linux kernel how to handle the device at boot time.

What are kernel parameters?

  • Kernel parameters are options passed to the Linux kernel when it starts
  • They control how the kernel behaves and which drivers are used
  • They're configured in your bootloader configuration

Required Parameters:

  • Enable IOMMU:
  • Intel: intel_iommu=on
  • AMD: amd_iommu=on
  • Passthrough mode: iommu=pt
  • pt stands for "passthrough" mode
  • This optimizes IOMMU for device passthrough
  • Bind NVMe to VFIO: vfio-pci.ids=144d:a808
  • Replace 144d:a808 with your actual vendor:device ID from Step 1
  • This tells the kernel to use the VFIO driver for your NVMe controller
  • Optional ACS override (for grouping issues): pcie_acs_override=multifunction
  • Only needed if you have IOMMU grouping issues
  • ACS (Access Control Services) override relaxes IOMMU device grouping
  • Note: this parameter only works on kernels built with the ACS override patch (not in mainline kernels; e.g., linux-zen on Arch) - omit it otherwise

Complete parameter string example:

intel_iommu=on iommu=pt vfio-pci.ids=144d:a808 pcie_acs_override=multifunction

Step 3: Configure Your Bootloader

The method depends on which bootloader you're using. Choose the one that matches your system.

GRUB (Most Common on Fedora and Many Distributions)

Step 3.1: Edit GRUB configuration

Open the GRUB configuration file:

sudo nano /etc/default/grub

What this does:

  • Opens the GRUB bootloader configuration file for editing
  • GRUB is the most common bootloader on Linux systems

Step 3.2: Find and modify the kernel parameters line

Look for a line that starts with GRUB_CMDLINE_LINUX_DEFAULT=. It might look like:

GRUB_CMDLINE_LINUX_DEFAULT="quiet"

Modify it to include the IOMMU parameters:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio-pci.ids=144d:a808 pcie_acs_override=multifunction"

Important:

  • Replace 144d:a808 with your actual vendor:device ID from Step 1
  • If you have AMD, use amd_iommu=on instead of intel_iommu=on
  • Keep any existing parameters (like quiet) and add the new ones

Step 3.3: Save and exit

  • Press Ctrl + X to exit
  • Press Y to confirm saving
  • Press Enter to confirm the filename

Step 3.4: Update GRUB configuration

After editing the file, you must regenerate the GRUB configuration:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg

What this does:

  • grub2-mkconfig: Generates a new GRUB configuration file
  • -o /boot/grub2/grub.cfg: Specifies the output file location
  • This applies your changes to the actual boot configuration

Note: On some systems, the command might be grub-mkconfig instead of grub2-mkconfig. If the above doesn't work, try:

sudo grub-mkconfig -o /boot/grub/grub.cfg
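On Fedora and other systems that ship grubby, you can skip hand-editing /etc/default/grub and append the parameters in one command instead (replace the ID with yours from Step 1):

```shell
# Append the passthrough parameters to every installed kernel's command line
sudo grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt vfio-pci.ids=144d:a808"

# Verify what the default kernel will boot with
sudo grubby --info=DEFAULT
```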

systemd-boot (Common on Some Arch Linux Installations)

Step 3.1: Find your boot entry file

systemd-boot stores configuration in /boot/loader/entries/. List the files:

ls /boot/loader/entries/

You'll see files like fedora.conf, arch.conf, etc. Note the name of your boot entry.

Step 3.2: Edit the boot entry

Open your boot entry file (replace fedora.conf with your actual file):

sudo nano /boot/loader/entries/fedora.conf

Step 3.3: Add parameters to the options line

Find the options line and add the kernel parameters. It should look something like:

title   Fedora
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=UUID=xxxx-xxxx intel_iommu=on iommu=pt vfio-pci.ids=144d:a808 pcie_acs_override=multifunction

Important:

  • Keep the existing root=UUID=xxxx-xxxx part (your root filesystem)
  • Add the IOMMU parameters after it
  • Replace 144d:a808 with your actual vendor:device ID

Step 3.4: Save and exit

  • Press Ctrl + X, then Y, then Enter

rEFInd (Alternative Boot Manager)

Step 3.1: Edit rEFInd configuration

sudo nano /boot/EFI/refind/refind.conf

Step 3.2: Add kernel parameters

Find or add the extra_kernel_params line:

extra_kernel_params "intel_iommu=on iommu=pt vfio-pci.ids=144d:a808 pcie_acs_override=multifunction"

Step 3.3: Save and exit

  • Press Ctrl + X, then Y, then Enter

Limine (Modern Bootloader)

Step 3.1: Edit Limine configuration

sudo nano /boot/limine.cfg

Note: The file may also be named limine.conf or live on your EFI system partition; adjust the path to match your setup.

Step 3.2: Modify the CMDLINE in your boot entry

Find your boot entry and modify the CMDLINE parameter:

TIMEOUT=5
DEFAULT_ENTRY=Fedora

:Fedora
    PROTOCOL=linux
    KERNEL_PATH=boot:///vmlinuz-linux
    CMDLINE=quiet root=UUID=xxxx-xxxx intel_iommu=on iommu=pt vfio-pci.ids=144d:a808 pcie_acs_override=multifunction
    MODULE_PATH=boot:///initramfs-linux.img

Important:

  • Keep your existing root=UUID=xxxx-xxxx parameter
  • Add the IOMMU parameters after it

Step 3.3: Save and exit

  • Press Ctrl + X, then Y, then Enter

Step 4: Reboot and Verify

After configuring your bootloader, you need to reboot for the changes to take effect.

Step 4.1: Reboot your system

sudo reboot

What this does:

  • Reboots your computer
  • The new kernel parameters will be applied during boot
  • Your NVMe drive should be bound to the VFIO driver

Step 4.2: Verify VFIO binding

After rebooting, check if your NVMe drive is bound to the VFIO driver. Use the PCI bus address you identified in Step 1 (replace 04:00.0 with your actual address):

lspci -nnk -s 04:00.0

What this command does:

  • lspci: Lists PCI devices
  • -nnk: Shows numeric IDs and kernel drivers
  • -s 04:00.0: Shows only the device at this specific PCI address

Expected output:

04:00.0 Non-Volatile memory controller [0108]: Samsung Electronics NVMe SSD Controller [144d:a808]
        Subsystem: Samsung Electronics Device [144d:a801]
        Kernel driver in use: vfio-pci
        Kernel modules: nvme

What to look for:

  • "Kernel driver in use: vfio-pci" - This confirms the device is bound to VFIO
  • If you see "Kernel driver in use: nvme" instead, the binding didn't work

If VFIO binding failed:

  • Double-check that you used the correct vendor:device ID
  • Verify that IOMMU is enabled in BIOS
  • Check that your kernel parameters were saved correctly
  • Review boot messages for IOMMU-related errors
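A quick sanity check after rebooting is to confirm the parameters actually reached the running kernel:

```shell
# The running kernel's boot parameters live in /proc/cmdline
cmdline=$(cat /proc/cmdline)

case "$cmdline" in
  *intel_iommu=on*|*amd_iommu=on*) echo "IOMMU parameter present" ;;
  *) echo "IOMMU parameter missing - recheck your bootloader configuration" ;;
esac
```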

Step 5: Attach NVMe in Virt-Manager

Now that the NVMe drive is bound to VFIO, you can attach it to your virtual machine.

Step 5.1: Open Virt-Manager

Launch virt-manager (make sure your VM is shut down - you cannot add hardware while it's running).

Step 5.2: Open VM settings

  • Select your virtual machine from the list
  • Click the "Open" button (or double-click the VM) to view its details
  • Make sure the VM is shut down (not running)

Step 5.3: Add PCI Host Device

  1. Click "Add Hardware" or the "+" button (usually at the bottom left)
  2. In the "Add New Virtual Hardware" window, select "PCI Host Device" from the list on the left
  3. You should see your NVMe controller listed (it will show the PCI address like 04:00.0 and the device name)
  4. Select your NVMe controller from the list
  5. Click "Finish" or "Add"

What this does:

  • Adds the physical NVMe controller as a PCI device to your VM
  • The VM will now have direct access to the NVMe drive
  • The drive will appear in the VM as if it were physically installed
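Under the hood, virt-manager writes a `<hostdev>` entry into the VM's XML. For reference, the entry for the example device at 04:00.0 looks roughly like this (you can see your own via the XML tab or `virsh dumpxml`):

```xml
<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <address domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
  </source>
</hostdev>
```

Here managed="yes" tells libvirt to detach the device from the host driver when the VM starts and reattach it on shutdown.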

Step 5.4: Start the VM

  • Close the VM settings window
  • Start your virtual machine
  • The NVMe drive should now be available inside the VM

Important Notes:

  • Boot from NVMe: If you want the VM to boot directly from the NVMe drive, ensure your VM is configured to use OVMF (UEFI) firmware, not BIOS. You can check this in the VM's firmware settings.
  • Drive availability: Once passed through, the NVMe drive will be unavailable to the host system. Make sure you're not using it for anything important on the host.
  • Clean shutdown: Some NVMe controllers don't reset cleanly. You may need to fully power-off the VM (not just restart) before the host can use the drive again.

Step 6: Verify Inside the VM

After starting the VM, verify that the NVMe drive is detected:

Inside the VM (Linux guest):

lsblk

or

lspci | grep -i nvme

Inside the VM (Windows guest):

  • Open Device Manager
  • Look for the NVMe controller under "Storage controllers"
  • Check Disk Management to see the drive

What you should see:

  • The NVMe drive should appear as a storage device
  • It should be available for partitioning and formatting
  • Performance should be at native NVMe speeds

NVMe Passthrough Troubleshooting

Issue: Device Still Bound to nvme Driver

Symptoms:

  • After reboot, lspci -nnk -s XX:XX.X shows "Kernel driver in use: nvme" instead of "vfio-pci"
  • The device is not available in virt-manager's PCI Host Device list

Solutions:

  1. Verify kernel parameters were applied:
    cat /proc/cmdline

Check if your IOMMU parameters are present. If not, your bootloader configuration wasn't applied correctly.

  2. Check for typos in vendor:device ID:
  • Make sure you used the correct format: vfio-pci.ids=XXXX:YYYY
  • Verify the ID with: lspci -nn | grep -i nvme
  • The format should be exactly vendor:device (e.g., 144d:a808)
  3. Verify IOMMU is enabled:
    dmesg | grep -i iommu

You should see messages indicating IOMMU is enabled. If not, check BIOS settings.

  4. Manually unbind and bind to VFIO (temporary fix):
    # Replace 04:00.0 with your PCI address and 144d a808 with your vendor/device ID
    sudo modprobe vfio-pci
    echo "0000:04:00.0" | sudo tee /sys/bus/pci/drivers/nvme/unbind
    echo "144d a808" | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id

Note: This is temporary and will be lost on reboot. Fix the kernel parameters for a permanent solution.

Issue: VM Won't Boot with NVMe Passthrough

Symptoms:

  • VM fails to start
  • Error messages about device not being available
  • VM hangs during boot

Solutions:

  1. Verify the device is bound to VFIO:
    lspci -nnk -s 04:00.0

Must show "Kernel driver in use: vfio-pci"

  2. Check IOMMU groups:
    find /sys/kernel/iommu_groups/ -type l | sort -V

Your NVMe device should be in its own IOMMU group or with devices that can be safely passed through together.
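To see the names of everything sharing a group, this short loop maps each group to its devices. It's a sketch that assumes pciutils (for lspci) is installed:

```shell
# Print every IOMMU group together with the devices it contains
for d in /sys/kernel/iommu_groups/*/devices/*; do
  [ -e "$d" ] || continue   # skip cleanly if there are no IOMMU groups
  group=${d#/sys/kernel/iommu_groups/}; group=${group%%/*}
  printf 'IOMMU group %s: ' "$group"
  lspci -nns "${d##*/}" 2>/dev/null || echo "${d##*/}"
done
```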

  3. Verify VM firmware:
  • Make sure your VM is using OVMF (UEFI) firmware, not BIOS
  • Check in virt-manager: VM Settings → Overview → Firmware
  4. Check VM logs:
    sudo journalctl -u libvirtd -n 50

Look for error messages related to the PCI device.

Issue: NVMe Not Detected Inside VM

Symptoms:

  • VM boots successfully but NVMe drive doesn't appear
  • lsblk or lspci doesn't show the NVMe device inside the VM

Solutions:

  1. Verify the device is attached in virt-manager:
  • Open VM settings
  • Check that the PCI Host Device appears in the hardware list
  • Make sure it's enabled (checkbox is checked)
  2. Check guest OS support:
  • Some older operating systems may not have NVMe drivers
  • Windows may need NVMe drivers installed
  • Linux should detect it automatically
  3. Verify the device is actually passed through:
  • Inside the VM, run: lspci | grep -i nvme
  • If nothing appears, the passthrough didn't work

Issue: Host Can't Access NVMe After VM Shutdown

Symptoms:

  • After shutting down the VM, the host system can't access the NVMe drive
  • Drive doesn't appear in lsblk or fdisk -l

Solutions:

  1. Fully power off the VM:
  • Don't just restart - do a complete shutdown
  • Some NVMe controllers need a full power cycle to reset
  2. Manually unbind from VFIO:

    # Replace 04:00.0 with your PCI address
    echo "0000:04:00.0" | sudo tee /sys/bus/pci/drivers/vfio-pci/unbind
    echo "0000:04:00.0" | sudo tee /sys/bus/pci/drivers/nvme/bind
  3. Reboot the host system:

  • This will reset all PCI devices
  • The drive should be available again after reboot

Prevention:

  • Consider using a dedicated NVMe drive for passthrough
  • Don't use your system's boot drive for passthrough
  • Always fully power off VMs before trying to access the drive on the host

Issue: IOMMU Grouping Problems

Symptoms:

  • Can't pass through just the NVMe device
  • Other devices in the same IOMMU group prevent passthrough
  • Error messages about IOMMU groups

Solutions:

  1. Use ACS override (already included in our configuration):
  • The pcie_acs_override=multifunction parameter should help
  • If still having issues, try: pcie_acs_override=downstream,multifunction
  • Remember that this parameter requires a kernel built with the ACS override patch
  2. Check IOMMU group:

    # Find your device's IOMMU group (sysfs uses the full address, e.g. 0000:04:00.0)
    find /sys/kernel/iommu_groups/ -name "0000:04:00.0"
    # List all devices in that group
    ls -la /sys/kernel/iommu_groups/*/devices/
  3. Pass through the entire group:

  • If necessary, you may need to pass through all devices in the IOMMU group
  • This is usually safe if they're all related (like a PCIe switch)

Issue: Performance Not as Expected

Symptoms:

  • NVMe drive works but performance is slower than expected
  • Not getting native NVMe speeds

Solutions:

  1. Verify direct passthrough:
  • Make sure you're using PCI Host Device, not a virtual disk
  • Check that the device shows "vfio-pci" driver on the host
  2. Check CPU pinning:
  • Consider pinning VM vCPUs to specific physical cores
  • This can improve performance by reducing CPU migration
  3. Verify NUMA topology (if applicable):
  • On multi-socket systems, ensure the VM and NVMe are on the same NUMA node
  • Check with: numactl --hardware
  4. Check for host system load:
  • High CPU usage on the host can affect VM performance
  • Monitor with: htop or top

Troubleshooting

Issue: "Unable to connect to libvirt"

Symptoms:

  • virt-manager opens but shows an error about not being able to connect
  • You see "Connection failed" messages

Solutions:

  1. Check if libvirtd is running:
    sudo systemctl status libvirtd

If it's not running, start it:

sudo systemctl start libvirtd
  2. Check if you're in the libvirt group:
    groups

You should see libvirt in the list. If not, add yourself:

sudo usermod -aG libvirt $USER

Then log out and log back in (or reboot) for the change to take effect.
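After re-logging in, you can confirm the connection works from the shell before opening the GUI:

```shell
# List all VMs over the system libvirt connection; this succeeds only if
# the daemon is running and your user has permission to talk to it
virsh -c qemu:///system list --all
```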

Issue: Virtual machines can't connect to the network

Symptoms:

  • Virtual machines start but have no internet connection
  • Network settings show no connectivity

Solutions:

  1. Verify iptables is running:
    sudo systemctl status iptables

If it's not running, start it:

sudo systemctl start iptables
sudo systemctl enable iptables
  2. Restart libvirtd:

    sudo systemctl restart libvirtd
  3. Check the default network in virt-manager:

  • Open virt-manager
  • Right-click on "QEMU/KVM" connection → "Details"
  • Go to the "Virtual Networks" tab
  • Make sure the "default" network is active (should show as "Active")

Issue: "Permission denied" errors

Symptoms:

  • Cannot create or manage virtual machines
  • Permission errors when trying to use virt-manager

Solutions:

  1. Add your user to the libvirt group:
    sudo usermod -aG libvirt $USER

Then log out and log back in.

  2. Verify group membership:
    groups

You should see libvirt in the output.

Issue: Services won't start

Symptoms:

  • systemctl start commands fail
  • Services show as "failed" in status

Solutions:

  1. Check for error messages:
    sudo systemctl status libvirtd
    sudo journalctl -xe

Look for specific error messages that can help identify the problem.

  2. Verify packages are installed:
    pacman -Q virt-manager qemu-full

If packages are missing, reinstall them.


Quick Reference: All Commands in Order

Here's a quick reference of all commands you need to run, in order:

# Step 1: Install packages
sudo pacman -S virt-manager qemu-full

# Step 2: Enable and start libvirtd
sudo systemctl enable --now libvirtd

# Step 3: Start and enable iptables
sudo systemctl start iptables.service
sudo systemctl enable iptables.service

# Step 4: Restart libvirtd
sudo systemctl restart libvirtd.service

# Verification (optional)
sudo systemctl status libvirtd
sudo systemctl status iptables

Additional Resources


Summary

This guide covered:

  1. Installing virt-manager on Fedora and Fedora-based distributions like Nobara (using the @virtualization package group), Arch Linux and Arch-based distributions like CachyOS (qemu-full package), Debian, and PikaOS 4 (Debian-based packages)
  2. Enabling and starting the libvirtd service
  3. Starting and enabling the iptables service (Arch Linux and Arch-based distributions only)
  4. Restarting libvirtd to recognize iptables (Arch Linux and Arch-based distributions only)
  5. Verifying the installation
  6. Advanced GPU acceleration configuration (NVIDIA)
  7. Advanced NVMe passthrough configuration (including kernel parameters, bootloader configuration, and VFIO binding)
  8. Troubleshooting common issues (including NVMe passthrough-specific problems)

After completing these steps, you should have a fully functional virt-manager installation on Fedora (or Fedora-based distributions like Nobara), Arch Linux (or Arch-based distributions like CachyOS), Debian, or PikaOS 4, ready to create and manage virtual machines with advanced features like GPU acceleration and NVMe passthrough!
