
KVM (Kernel-based Virtual Machine) Explained in Detail


1. What is KVM?

KVM (Kernel-based Virtual Machine) is an open-source virtualization technology built into the Linux kernel. It allows the Linux kernel to function as a hypervisor, enabling you to run multiple virtual machines (VMs) with their own isolated operating systems (guests) on a single physical host. KVM is widely used for server virtualization, cloud computing, and desktop virtualization due to its performance, scalability, and integration with Linux.


2. How KVM Works

KVM turns the Linux kernel itself into a Type-1 (bare-metal) hypervisor: the kernel runs directly on the hardware and schedules VMs as ordinary processes, so no separate hypervisor layer is needed on top of the host OS. Here’s how it works:

2.1. Core Components

  1. KVM Kernel Module:

    • A loadable kernel module (kvm.ko) that provides the core virtualization infrastructure.
    • Manages CPU and memory virtualization by leveraging hardware extensions (e.g., Intel VT-x or AMD-V).
  2. QEMU (Quick Emulator):

    • Provides device emulation (e.g., network cards, storage, GPU) for VMs.
    • Works with KVM to create fully functional virtual machines.
    • Example: qemu-system-x86_64 is the emulator used with KVM.
  3. libvirt:

    • A virtualization API and management tool that simplifies VM management.
    • Provides tools like virsh and virt-manager for creating, starting, and managing VMs.
    • Example: virsh list --all lists all VMs.
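A quick way to confirm all three layers are present on an installed host (package names vary slightly by distribution, see section 4 for installation):

lsmod | grep kvm               # kvm plus kvm_intel or kvm_amd kernel modules
qemu-system-x86_64 --version   # QEMU emulator
virsh --version                # libvirt client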

2.2. Hardware Requirements

  • CPU Virtualization Extensions:

    • Intel: VT-x (Intel Virtualization Technology).
    • AMD: AMD-V (AMD Virtualization).
    • Check if your CPU supports virtualization:
      grep -E --color "vmx|svm" /proc/cpuinfo
      • vmx = Intel VT-x, svm = AMD-V.
  • Memory: Enough RAM to allocate to VMs (e.g., 4GB+ for the host + additional for each VM).

  • Storage: Disk space for VM images (e.g., QCOW2 or raw disk files).


2.3. KVM Architecture

+---------------------+
|   Guest OS (VM)     |
|   (Linux/Windows)   |
+----------+----------+
           |
           | (Virtual CPU, Memory, Devices)
           |
+----------v----------+
|   KVM (Kernel       |
|   Module)           |
+----------+----------+
           |
           | (Hardware Virtualization)
           |
+----------v----------+
|   Host OS (Linux)   |
+----------+----------+
           |
           | (Physical Hardware: CPU, RAM, Storage, NIC)
           |
+----------v----------+
|  Physical Hardware  |
+---------------------+

3. KVM Features

3.1. Performance

  • Near-native performance: KVM leverages hardware virtualization extensions (Intel VT-x/AMD-V) to run VMs at near-native speed.
  • Low overhead: Minimal performance loss compared to bare-metal systems.

3.2. Security

  • Isolation: VMs are isolated from each other and the host OS.
  • SELinux/AppArmor: Integration with Linux security modules for enhanced protection.
  • Secure Boot: Supports UEFI Secure Boot for VMs.

3.3. Scalability

  • Supports hundreds of VMs on a single host (depending on hardware).
  • Live Migration: Move running VMs between physical hosts with minimal downtime (using virsh migrate).

3.4. Device Emulation

  • Network: Virtual NICs (e.g., virtio-net for high performance).
  • Storage: Virtual disks (QCOW2, raw, LVM).
  • GPU: GPU passthrough for graphics-intensive workloads (e.g., gaming, CAD).

3.5. Management Tools

  • libvirt: Manage VMs with virsh, virt-manager, or cockpit.
  • OpenStack: KVM is the default hypervisor for OpenStack clouds.
  • oVirt: Enterprise-grade virtualization management platform.

4. Setting Up KVM

4.1. Install KVM on Linux

Ubuntu/Debian:

sudo apt update
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager
  • qemu-kvm: QEMU built with KVM acceleration support.
  • libvirt-daemon-system: Libvirt daemon for managing VMs.
  • virt-manager: GUI for managing VMs.

RHEL/CentOS/Fedora:

sudo dnf install qemu-kvm libvirt virt-install virt-manager bridge-utils

Arch Linux:

sudo pacman -S qemu libvirt virt-manager dnsmasq ebtables

4.2. Verify KVM Installation

lsmod | grep kvm
  • Output should include kvm_intel or kvm_amd.
virsh list --all
  • Lists all VMs (initially empty).
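libvirt also ships a host validation tool that checks CPU virtualization, IOMMU, and cgroup support in one pass:

virt-host-validate qemu
  • Each check is reported as PASS, WARN, or FAIL.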

4.3. Start and Enable libvirt

sudo systemctl enable --now libvirtd

4.4. Add Your User to the libvirt Group

sudo usermod -aG libvirt $(whoami)
sudo usermod -aG kvm $(whoami)
  • Log out and back in for changes to take effect.
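After logging back in, you can confirm the new group memberships:

id -nG | grep -E "libvirt|kvm"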

5. Creating and Managing VMs

5.1. Create a VM with virt-manager (GUI)

  1. Open virt-manager:
    virt-manager
  2. Click Create a New Virtual Machine.
  3. Choose:
    • Local install media (ISO) for OS installation.
    • Import existing disk image for pre-built VMs.
  4. Allocate CPU, RAM, and storage.
  5. Configure networking (e.g., NAT or bridged).
  6. Start the VM.

5.2. Create a VM with virsh (CLI)

Step 1: Create a Virtual Disk

qemu-img create -f qcow2 /var/lib/libvirt/images/myvm.qcow2 20G
  • Creates a 20GB QCOW2 disk image.

Step 2: Install a VM from an ISO

virt-install \
  --name myvm \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/myvm.qcow2,size=20 \
  --os-type linux \
  --os-variant ubuntu20.04 \
  --network bridge=virbr0 \
  --graphics spice \
  --cdrom /path/to/ubuntu.iso
  • Installs Ubuntu 20.04 with 2GB RAM, 2 vCPUs, and a 20GB disk.

Step 3: Manage VMs with virsh

Common virsh commands:

  • virsh list --all: List all VMs.
  • virsh start myvm: Start a VM.
  • virsh shutdown myvm: Gracefully shut down a VM.
  • virsh destroy myvm: Forcefully stop a VM.
  • virsh undefine myvm: Remove a VM (does not delete disk images).
  • virsh console myvm: Access the VM’s console.
  • virsh edit myvm: Edit the VM’s XML configuration.
  • virsh domstats myvm: Show VM performance statistics.

5.3. Create a VM with virt-builder (Prebuilt Images)

sudo virt-builder ubuntu-20.04 --output /var/lib/libvirt/images/ubuntu.qcow2 --size 20G
  • Downloads a prebuilt Ubuntu 20.04 image and resizes it to 20GB.
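virt-builder can also customize the image at build time; a minimal sketch (the hostname and password here are placeholder values):

sudo virt-builder ubuntu-20.04 \
  --output /var/lib/libvirt/images/ubuntu.qcow2 \
  --size 20G \
  --hostname myvm \
  --root-password password:changeme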

6. Networking in KVM

KVM supports multiple networking modes for VMs:

6.1. NAT (Default)

  • VMs share the host’s IP address via Network Address Translation (NAT).
  • VMs can access the internet but are not directly accessible from outside the host.
  • Configured via virbr0 (default virtual bridge).
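The default NAT network is managed by libvirt and can be inspected with virsh:

virsh net-list --all          # shows the "default" network
virsh net-dumpxml default     # NAT configuration, including the virbr0 bridge and DHCP range
virsh net-start default       # start the network if it is inactive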

6.2. Bridged Networking

  • VMs appear as independent devices on the physical network.
  • Requires a bridge interface (e.g., br0).
  • Example setup:
    sudo nmcli con add type bridge ifname br0
    sudo nmcli con add type bridge-slave ifname eth0 master br0   # replace eth0 with your physical NIC
    sudo nmcli con modify br0 ipv4.method auto
    sudo nmcli con up br0
    • Edit the VM’s network interface to use br0.

6.3. Isolated Network

  • VMs communicate only with each other and the host.
  • Useful for testing or development environments.
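As a sketch, an isolated network can be defined with a small libvirt XML file (the name and address range below are arbitrary examples) and loaded with virsh:

<network>
  <name>isolated0</name>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.100'/>
    </dhcp>
  </ip>
</network>

virsh net-define isolated0.xml
virsh net-start isolated0
  • Omitting the <forward> element is what makes the network isolated (VMs and host only).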

6.4. Direct Attachment (MacVTAP)

  • VMs connect directly to a physical NIC for high performance.
  • Example:
    <interface type='direct'>
      <source dev='eth0' mode='bridge'/>
      <model type='virtio'/>
    </interface>

7. Storage in KVM

KVM supports multiple storage backends for VMs:

7.1. File-Based Storage

  • QCOW2: Dynamic disk image (grows as needed).
    qemu-img create -f qcow2 myvm.qcow2 20G
  • RAW: Fixed-size disk image (better performance).
    qemu-img create -f raw myvm.raw 20G
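qemu-img can also inspect images and convert between formats, which helps when importing existing disks (file names are examples):

qemu-img info myvm.qcow2                              # show format, virtual size, and allocation
qemu-img convert -f raw -O qcow2 myvm.raw myvm.qcow2  # convert a raw image to QCOW2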

7.2. Block Storage (LVM)

  • Use Logical Volume Manager (LVM) for better performance and snapshots.
    lvcreate -L 20G -n myvm_lvm vg0
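The logical volume can then be attached to a VM as a block device; a minimal disk XML sketch (volume group and device names follow the example above):

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg0/myvm_lvm'/>
  <target dev='vdb' bus='virtio'/>
</disk>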

7.3. Storage Pools

  • Manage storage centrally with virsh:
    virsh pool-list --all
    virsh pool-create-as mypool dir --target /var/lib/libvirt/mypool
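Once a pool exists, volumes can be created and listed inside it:

    virsh vol-create-as mypool myvm-data.qcow2 20G --format qcow2
    virsh vol-list mypool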

8. Advanced KVM Features

8.1. Live Migration

Move a running VM to another host with minimal downtime:

virsh migrate --live myvm qemu+ssh://other-host/system
  • Requires shared storage (e.g., NFS, iSCSI) between hosts.

8.2. Snapshots

Create and manage VM snapshots:

virsh snapshot-create myvm
virsh snapshot-list myvm
virsh snapshot-revert myvm snapshot-name
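  • snapshot-create uses an auto-generated name; snapshot-create-as lets you name and describe the snapshot:
    virsh snapshot-create-as myvm pre-upgrade --description "Before OS upgrade"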

8.3. PCI Passthrough

Assign physical PCI devices (e.g., GPUs, NICs) directly to a VM for near-native performance:

  1. Enable IOMMU in the host BIOS/UEFI and on the kernel command line (intel_iommu=on for Intel or amd_iommu=on for AMD).
  2. Edit the VM’s XML configuration to include the PCI device:
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
  3. Start the VM.
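The domain/bus/slot/function values come from lspci, and the host must expose IOMMU groups; a quick way to check (the GPU here is an example device):

lspci -nn | grep -i vga                 # find the device address, e.g. 01:00.0
dmesg | grep -e DMAR -e IOMMU           # confirm the kernel enabled the IOMMU
ls /sys/kernel/iommu_groups/            # IOMMU groups appear when isolation is active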

8.4. Nested Virtualization

Run VMs inside other VMs (useful for testing):

echo "options kvm-intel nested=1" | sudo tee /etc/modprobe.d/kvm-intel.conf
sudo modprobe -r kvm-intel
sudo modprobe kvm-intel
  • Verify with:
    cat /sys/module/kvm_intel/parameters/nested
    • Output should be Y.
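On AMD hosts the module is kvm-amd and the same approach applies:

echo "options kvm-amd nested=1" | sudo tee /etc/modprobe.d/kvm-amd.conf
sudo modprobe -r kvm-amd
sudo modprobe kvm-amd
cat /sys/module/kvm_amd/parameters/nested
  • Output should be 1 or Y.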

8.5. SR-IOV (Single Root I/O Virtualization)

Improve network performance by allowing a physical NIC to be shared among VMs:

  1. Enable SR-IOV in the host BIOS.
  2. Load the igb_uio or vfio-pci driver.
  3. Configure the VM’s XML to use a Virtual Function (VF).
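As a rough sketch, Virtual Functions are usually created through sysfs before being assigned to VMs (eth0 and the VF count are placeholders; the NIC and its driver must support SR-IOV):

cat /sys/class/net/eth0/device/sriov_totalvfs               # maximum VFs the NIC supports
echo 4 | sudo tee /sys/class/net/eth0/device/sriov_numvfs   # create 4 VFs
lspci | grep -i "virtual function"                          # VFs appear as new PCI devices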

9. KVM Performance Tuning

9.1. CPU Pinning

Assign specific CPU cores to VMs for deterministic performance:

<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
</cputune>

9.2. HugePages

Improve memory performance by using HugePages:

  1. Reserve HugePages (this sets them at runtime; add vm.nr_hugepages=1024 to /etc/sysctl.conf or the kernel command line to persist across reboots):
    sudo sysctl vm.nr_hugepages=1024
  2. Allocate HugePages to a VM:
    <memoryBacking>
      <hugepages/>
    </memoryBacking>

9.3. Virtio Drivers

Use virtio for high-performance I/O (network and storage):

<interface type='bridge'>
  <model type='virtio'/>
</interface>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <target dev='vda' bus='virtio'/>
</disk>

9.4. CPU Models

Specify a CPU model for compatibility or performance:

<cpu mode='host-passthrough' check='none'/>

or

<cpu mode='custom' match='exact'>
  <model fallback='allow'>Skylake-Client</model>
</cpu>

10. KVM Management Tools

  • virt-manager: GUI for managing VMs (create, start, stop, edit).
  • virsh: CLI for managing VMs (e.g., virsh list, virsh start myvm).
  • cockpit: Web-based management interface for VMs and host resources.
  • OpenStack: Cloud platform that uses KVM as its hypervisor.
  • oVirt: Enterprise-grade virtualization management platform.
  • Terraform: Infrastructure-as-code tool for provisioning KVM VMs.
  • Ansible: Automation tool for managing KVM hosts and VMs.

11. Real-World Use Cases for KVM

11.1. Cloud Computing

  • OpenStack and oVirt use KVM as the default hypervisor for cloud environments.
  • Example: the AWS Nitro hypervisor, built on KVM core technology, powers modern AWS EC2 instances.

11.2. Enterprise Virtualization

  • Replace VMware ESXi or Microsoft Hyper-V with KVM for cost savings and open-source flexibility.
  • Example: Red Hat Virtualization (RHV) is built on KVM.

11.3. Development and Testing

  • Run multiple OS environments (e.g., Linux, Windows) on a single machine for software testing.
  • Example: A developer uses KVM to test an application on Ubuntu, CentOS, and Windows VMs.

11.4. High-Performance Computing (HPC)

  • Use KVM with PCI passthrough or SR-IOV for GPU-intensive workloads (e.g., machine learning, rendering).
  • Example: A research lab uses KVM to virtualize GPU-enabled VMs for AI training.

11.5. Homelab and Self-Hosting

  • Host multiple services (e.g., web servers, databases, media servers) on a single machine.
  • Example: A homelab uses KVM to run Nextcloud, Plex, and a web server on separate VMs.

11.6. Disaster Recovery

  • Use KVM’s live migration and snapshots to create backups and failover systems.
  • Example: A business replicates critical VMs to a backup host for disaster recovery.

12. KVM vs. Other Virtualization Technologies

  • Type: KVM, VMware ESXi, Microsoft Hyper-V, and Xen are all Type-1 (bare-metal) hypervisors.
  • License: KVM is open-source (GPL); ESXi is proprietary (paid); Hyper-V is proprietary (free with Windows); Xen is open-source (GPL).
  • Performance: near-native for KVM and Xen; high for ESXi and Hyper-V.
  • Hardware support: all four rely on Intel VT-x or AMD-V.
  • Management tools: KVM uses libvirt, virt-manager, and OpenStack; ESXi uses vSphere Client and the ESXi CLI; Hyper-V uses Hyper-V Manager and PowerShell; Xen uses Xen Orchestra and xl.
  • Live migration: supported by all four (KVM with shared storage).
  • GPU passthrough: supported by all four.
  • Container support: KVM via LXC/Kata Containers; ESXi limited; Hyper-V via Hyper-V Containers; Xen via Xen containers.
  • Cloud integration: KVM with OpenStack and oVirt; ESXi with VMware Cloud; Hyper-V with Azure; Xen with AWS (Xen-based instances).

13. Troubleshooting KVM

13.1. VM Fails to Start

  • Cause: Insufficient RAM, misconfigured XML, or missing disk images.
  • Fix:
    • Check logs: journalctl -u libvirtd.
    • Verify VM configuration: virsh dumpxml myvm.
    • Ensure disk images exist: ls -lh /var/lib/libvirt/images/.

13.2. Networking Issues

  • Cause: Misconfigured bridge, NAT, or firewall rules.
  • Fix:
    • Check bridge status: brctl show.
    • Verify NAT rules: iptables -t nat -L.
    • Restart networking: sudo systemctl restart NetworkManager.

13.3. Poor Performance

  • Cause: Insufficient CPU/RAM, non-virtio drivers, or misconfigured storage.
  • Fix:
    • Use virtio for disks and NICs.
    • Allocate more resources to the VM.
    • Enable HugePages and CPU pinning.

13.4. PCI Passthrough Failures

  • Cause: IOMMU not enabled, incorrect PCI device assignment.
  • Fix:
    • Enable IOMMU in BIOS.
    • Verify PCI device isolation: lspci -nnk.
    • Edit VM XML to include the PCI device.

13.5. Storage Issues

  • Cause: Corrupted disk images or insufficient storage.
  • Fix:
    • Check disk images: qemu-img check myvm.qcow2.
    • Resize disks: qemu-img resize myvm.qcow2 +10G.

14. Learning Resources for KVM


15. Summary

  • KVM is a Linux-based, open-source virtualization solution that turns the kernel into a Type-1 hypervisor.
  • Components: KVM kernel module, QEMU (emulation), and libvirt (management).
  • Features: Near-native performance, live migration, PCI passthrough, and snapshots.
  • Use Cases: Cloud computing, enterprise virtualization, development, HPC, and homelabs.
  • Management Tools: virt-manager, virsh, cockpit, and OpenStack.
  • Performance Tuning: CPU pinning, HugePages, virtio drivers, and nested virtualization.
  • Troubleshooting: Check logs, verify configurations, and allocate sufficient resources.