kvm - dwilson2547/wiki_demo GitHub Wiki
- 1. What is KVM?
- 2. How KVM Works
- 3. KVM Features
- 4. Setting Up KVM
- 5. Creating and Managing VMs
- 6. Networking in KVM
- 7. Storage in KVM
- 8. Advanced KVM Features
- 9. KVM Performance Tuning
- 10. KVM Management Tools
- 11. Real-World Use Cases for KVM
- 12. KVM vs. Other Virtualization Technologies
- 13. Troubleshooting KVM
- 14. Learning Resources for KVM
- 15. Summary
KVM (Kernel-based Virtual Machine) is an open-source virtualization technology built into the Linux kernel. It allows the Linux kernel to function as a hypervisor, enabling you to run multiple virtual machines (VMs) with their own isolated operating systems (guests) on a single physical host. KVM is widely used for server virtualization, cloud computing, and desktop virtualization due to its performance, scalability, and integration with Linux.
KVM turns the Linux kernel into a Type-1 (bare-metal) hypervisor, meaning the hypervisor runs directly on the hardware rather than as an application on top of a separate host OS; the kernel schedules VMs as ordinary processes. Here’s how it works:
- **KVM Kernel Module:**
  - A loadable kernel module (`kvm.ko`) that provides the core virtualization infrastructure.
  - Manages CPU and memory virtualization by leveraging hardware extensions (e.g., Intel VT-x or AMD-V).
- **QEMU (Quick Emulator):**
  - Provides device emulation (e.g., network cards, storage, GPU) for VMs.
  - Works with KVM to create fully functional virtual machines.
  - Example: `qemu-system-x86_64` is the emulator used with KVM.
- **libvirt:**
  - A virtualization API and management toolkit that simplifies VM management.
  - Provides tools like `virsh` and `virt-manager` for creating, starting, and managing VMs.
  - Example: `virsh list --all` lists all VMs.
- **CPU Virtualization Extensions:**
  - Intel: VT-x (Intel Virtualization Technology).
  - AMD: AMD-V (AMD Virtualization).
  - Check if your CPU supports virtualization: `grep -E --color "vmx|svm" /proc/cpuinfo`
  - `vmx` = Intel VT-x, `svm` = AMD-V.
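As a sketch, the flag check can be wrapped in a small helper; `detect_virt` is a hypothetical function name, and the sample flags line below stands in for real `/proc/cpuinfo` output:

```shell
# Hypothetical helper: report which virtualization extension a CPU-flags
# string advertises. Reads /proc/cpuinfo-style text on stdin.
detect_virt() {
  local flags
  flags=$(cat)
  if printf '%s\n' "$flags" | grep -qw vmx; then
    echo "Intel VT-x"
  elif printf '%s\n' "$flags" | grep -qw svm; then
    echo "AMD-V"
  else
    echo "no virtualization extensions found"
  fi
}

# Sample AMD flags line; on a real host run: detect_virt < /proc/cpuinfo
echo "flags : fpu vme de svm sse2" | detect_virt   # → AMD-V
```

Note that missing flags can also mean virtualization is simply disabled in the BIOS/UEFI firmware.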
- **Memory:** Enough RAM for the host (e.g., 4GB+) plus additional RAM for each VM.
- **Storage:** Disk space for VM images (e.g., QCOW2 or raw disk files).
```
+---------------------+
|   Guest OS (VM)     |
|  (Linux/Windows)    |
+----------+----------+
           |
           | (Virtual CPU, Memory, Devices)
           |
+----------v----------+
|    KVM (Kernel      |
|      Module)        |
+----------+----------+
           |
           | (Hardware Virtualization)
           |
+----------v----------+
|   Host OS (Linux)   |
+----------+----------+
           |
+----------v----------+
|  Physical Hardware  |
| (CPU, RAM, Storage, |
|        NIC)         |
+---------------------+
```
- Near-native performance: KVM leverages hardware virtualization extensions (Intel VT-x/AMD-V) to run VMs at near-native speed.
- Low overhead: Minimal performance loss compared to bare-metal systems.
- Isolation: VMs are isolated from each other and the host OS.
- SELinux/AppArmor: Integration with Linux security modules for enhanced protection.
- Secure Boot: Supports UEFI Secure Boot for VMs.
- Supports hundreds of VMs on a single host (depending on hardware).
- **Live Migration:** Move running VMs between physical hosts with minimal downtime (using `virsh migrate`).
- **Network:** Virtual NICs (e.g., `virtio-net` for high performance).
- **Storage:** Virtual disks (QCOW2, raw, LVM).
- **GPU:** GPU passthrough for graphics-intensive workloads (e.g., gaming, CAD).
- **libvirt:** Manage VMs with `virsh`, `virt-manager`, or `cockpit`.
- **OpenStack:** KVM is the default hypervisor for OpenStack clouds.
- **oVirt:** Enterprise-grade virtualization management platform.
Debian/Ubuntu:

```shell
sudo apt update
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager
```

- `qemu-kvm`: KVM-enabled QEMU emulator.
- `libvirt-daemon-system`: libvirt daemon for managing VMs.
- `virt-manager`: GUI for managing VMs.

Fedora/RHEL:

```shell
sudo dnf install qemu-kvm libvirt virt-install virt-manager bridge-utils
```

Arch Linux:

```shell
sudo pacman -S qemu libvirt virt-manager dnsmasq ebtables
```
```shell
lsmod | grep kvm
```

- Output should include `kvm_intel` or `kvm_amd`.

```shell
virsh list --all
```

- Lists all VMs (initially empty).
```shell
sudo systemctl enable --now libvirtd
sudo usermod -aG libvirt $(whoami)
sudo usermod -aG kvm $(whoami)
```
- Log out and back in for changes to take effect.
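Whether the new group membership is active in the current session can be checked with `id`; a minimal sketch using standard coreutils:

```shell
# Sketch: report whether this session already has libvirt group membership
# (usermod changes only apply to sessions started after the change).
if id -nG | tr ' ' '\n' | grep -qx libvirt; then
  echo "libvirt group active in this session"
else
  echo "libvirt group not active yet; log out and back in"
fi
```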
- Open `virt-manager` by running the `virt-manager` command.
- Click Create a New Virtual Machine.
- Choose:
- Local install media (ISO) for OS installation.
- Import existing disk image for pre-built VMs.
- Allocate CPU, RAM, and storage.
- Configure networking (e.g., NAT or bridged).
- Start the VM.
```shell
qemu-img create -f qcow2 /var/lib/libvirt/images/myvm.qcow2 20G
```

- Creates a 20GB QCOW2 disk image.
```shell
virt-install \
  --name myvm \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/myvm.qcow2,size=20 \
  --os-type linux \
  --os-variant ubuntu20.04 \
  --network bridge=virbr0 \
  --graphics spice \
  --cdrom /path/to/ubuntu.iso
```

- Installs Ubuntu 20.04 with 2GB RAM, 2 vCPUs, and a 20GB disk. (Newer virt-install releases infer the OS type from `--os-variant`, so `--os-type` may be rejected and can be dropped.)
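Long `virt-install` invocations are easier to review if the options are collected in a shell array first; this dry-run sketch only prints the assembled command (names and paths are the hypothetical ones from above):

```shell
# Sketch: assemble virt-install options in a bash array, then print the
# command for review before actually running it.
args=(
  --name myvm
  --ram 2048
  --vcpus 2
  --disk path=/var/lib/libvirt/images/myvm.qcow2,size=20
  --os-variant ubuntu20.04
  --network bridge=virbr0
)
echo "virt-install ${args[*]}"
```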
| Command | Description |
|---|---|
| `virsh list --all` | List all VMs. |
| `virsh start myvm` | Start a VM. |
| `virsh shutdown myvm` | Gracefully shut down a VM. |
| `virsh destroy myvm` | Forcefully stop a VM. |
| `virsh undefine myvm` | Remove a VM (does not delete disk images). |
| `virsh console myvm` | Access the VM’s console. |
| `virsh edit myvm` | Edit the VM’s XML configuration. |
| `virsh domstats myvm` | Show VM performance statistics. |
```shell
sudo virt-builder ubuntu-20.04 --output /var/lib/libvirt/images/ubuntu.qcow2 --size 20G
```

- Downloads a prebuilt Ubuntu 20.04 image and resizes it to 20GB.
KVM supports multiple networking modes for VMs:
- **NAT (default):**
  - VMs share the host’s IP address via Network Address Translation (NAT).
  - VMs can access the internet but are not directly accessible from outside the host.
  - Configured via `virbr0` (the default virtual bridge).
- **Bridged:**
  - VMs appear as independent devices on the physical network.
  - Requires a bridge interface (e.g., `br0`).
  - Example setup (the physical NIC must also be attached to the bridge; replace `eth0` with your interface name):

    ```shell
    sudo nmcli con add type bridge ifname br0
    sudo nmcli con add type bridge-slave ifname eth0 master br0
    sudo nmcli con modify br0 ipv4.method auto
    sudo nmcli con up br0
    ```

  - Edit the VM’s network interface to use `br0`.
- **Isolated (host-only):**
  - VMs communicate only with each other and the host.
  - Useful for testing or development environments.
- **Direct (macvtap):**
  - VMs connect directly to a physical NIC for high performance.
  - Example:

    ```xml
    <interface type='direct'>
      <source dev='eth0' mode='bridge'/>
      <model type='virtio'/>
    </interface>
    ```
KVM supports multiple storage backends for VMs:
- **QCOW2:** Dynamic disk image (grows as needed).

  ```shell
  qemu-img create -f qcow2 myvm.qcow2 20G
  ```

- **RAW:** Fixed-size disk image (better performance).

  ```shell
  qemu-img create -f raw myvm.raw 20G
  ```
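Raw images have a fixed virtual size, but without full preallocation they are typically created sparse; the gap between apparent size and on-disk allocation can be seen with plain filesystem tools (the file name here is hypothetical):

```shell
# Sketch: create a sparse 1 GiB file and compare apparent size vs. on-disk
# allocation. truncate/stat/du are GNU coreutils.
img=/tmp/demo.raw
truncate -s 1G "$img"        # apparent (virtual) size: 1 GiB
stat -c '%s' "$img"          # prints 1073741824 (bytes)
du -k "$img" | cut -f1       # allocated KiB: near 0 until data is written
rm -f "$img"
```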
- Use Logical Volume Manager (LVM) for better performance and snapshots.

  ```shell
  lvcreate -L 20G -n myvm_lvm vg0
  ```
- Manage storage pools centrally with `virsh`:

  ```shell
  virsh pool-list --all
  virsh pool-create-as mypool dir --target /var/lib/libvirt/mypool
  ```
Move a running VM to another host with minimal downtime:

```shell
virsh migrate --live myvm qemu+ssh://other-host/system
```

- Requires shared storage (e.g., NFS, iSCSI) between hosts.
Create and manage VM snapshots:

```shell
virsh snapshot-create-as myvm snapshot-name
virsh snapshot-list myvm
virsh snapshot-revert myvm snapshot-name
```
Assign physical PCI devices (e.g., GPUs, NICs) directly to a VM for near-native performance:
- Enable IOMMU in the host BIOS/UEFI and add `intel_iommu=on` (or `amd_iommu=on`) to the kernel command line.
- Edit the VM’s XML configuration to include the PCI device:

  ```xml
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </source>
  </hostdev>
  ```

- Start the VM.
Run VMs inside other VMs (useful for testing). On Intel hosts (use `kvm-amd` and `kvm_amd` on AMD):

```shell
echo "options kvm-intel nested=1" | sudo tee /etc/modprobe.d/kvm-intel.conf
sudo modprobe -r kvm-intel
sudo modprobe kvm-intel
```

- Verify with:

  ```shell
  cat /sys/module/kvm_intel/parameters/nested
  ```

- Output should be `Y` (or `1` on older kernels).
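Because the reported value varies between module versions (`Y`/`N` on newer kernels, `1`/`0` on older ones), a small sketch can normalize it; the sample value below stands in for the real sysfs read:

```shell
# Sketch: interpret the nested parameter. On a real host, read it with:
#   nested=$(cat /sys/module/kvm_intel/parameters/nested)
nested='Y'
case "$nested" in
  Y|y|1) echo "nested virtualization: enabled" ;;
  *)     echo "nested virtualization: disabled" ;;
esac
```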
Improve network performance by allowing a physical NIC to be shared among VMs:
- Enable SR-IOV in the host BIOS.
- Load the `igb_uio` or `vfio-pci` driver.
- Configure the VM’s XML to use a Virtual Function (VF).
Assign specific CPU cores to VMs for deterministic performance:
```xml
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
</cputune>
```
Improve memory performance by backing guest RAM with HugePages:
- Reserve HugePages on the host (persist the setting via `/etc/sysctl.conf` so it survives reboots):

  ```shell
  sudo sysctl vm.nr_hugepages=1024
  ```

- Back the VM’s memory with HugePages:

  ```xml
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  ```
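The `vm.nr_hugepages` value follows directly from the guest RAM and the HugePage size; a quick sketch for a hypothetical 2 GiB guest with the default 2 MiB pages on x86_64:

```shell
# Sketch: number of 2 MiB HugePages needed to back a guest's RAM.
vm_ram_mib=2048      # hypothetical guest RAM (2 GiB)
hugepage_mib=2       # default HugePage size on x86_64
echo $(( vm_ram_mib / hugepage_mib ))   # → 1024
```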
Use virtio for high-performance I/O (network and storage):

```xml
<interface type='bridge'>
  <model type='virtio'/>
</interface>

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <target dev='vda' bus='virtio'/>
</disk>
```
Specify a CPU model for compatibility or performance:

```xml
<cpu mode='host-passthrough' check='none'/>
```

or

```xml
<cpu mode='custom' match='exact'>
  <model fallback='allow'>Skylake-Client</model>
</cpu>
```
| Tool | Description |
|---|---|
| virt-manager | GUI for managing VMs (create, start, stop, edit). |
| virsh | CLI for managing VMs (e.g., `virsh list`, `virsh start myvm`). |
| cockpit | Web-based management interface for VMs and host resources. |
| OpenStack | Cloud platform that uses KVM as its default hypervisor. |
| oVirt | Enterprise-grade virtualization management platform. |
| Terraform | Infrastructure-as-code tool for provisioning KVM VMs. |
| Ansible | Automation tool for managing KVM hosts and VMs. |
- OpenStack and oVirt use KVM as the default hypervisor for cloud environments.
- Example: AWS’s Nitro hypervisor, which powers modern EC2 instances, is built on core KVM technology.
- Replace VMware ESXi or Microsoft Hyper-V with KVM for cost savings and open-source flexibility.
- Example: Red Hat Virtualization (RHV) is built on KVM.
- Run multiple OS environments (e.g., Linux, Windows) on a single machine for software testing.
- Example: A developer uses KVM to test an application on Ubuntu, CentOS, and Windows VMs.
- Use KVM with PCI passthrough or SR-IOV for GPU-intensive workloads (e.g., machine learning, rendering).
- Example: A research lab uses KVM to virtualize GPU-enabled VMs for AI training.
- Host multiple services (e.g., web servers, databases, media servers) on a single machine.
- Example: A homelab uses KVM to run Nextcloud, Plex, and a web server on separate VMs.
- Use KVM’s live migration and snapshots to create backups and failover systems.
- Example: A business replicates critical VMs to a backup host for disaster recovery.
| Feature | KVM | VMware ESXi | Microsoft Hyper-V | Xen |
|---|---|---|---|---|
| Type | Type-1 (bare-metal) | Type-1 (bare-metal) | Type-1 (bare-metal) | Type-1 (bare-metal) |
| License | Open-source (GPL) | Proprietary (paid) | Proprietary (free with Windows) | Open-source (GPL) |
| Performance | Near-native | High | High | Near-native |
| Hardware Support | Intel VT-x/AMD-V | Intel VT-x/AMD-V | Intel VT-x/AMD-V | Intel VT-x/AMD-V |
| Management Tools | libvirt, virt-manager, OpenStack | vSphere Client, ESXi CLI | Hyper-V Manager, PowerShell | Xen Orchestra, xl |
| Live Migration | Yes (with shared storage) | Yes | Yes | Yes |
| GPU Passthrough | Yes | Yes | Yes | Yes |
| Container Support | Yes (via LXC/Kata Containers) | Limited | Yes (via Hyper-V Containers) | Yes (via Xen Containers) |
| Cloud Integration | OpenStack, oVirt | VMware Cloud | Azure | AWS (older Xen-based instances) |
- Cause: Insufficient RAM, misconfigured XML, or missing disk images.
- Fix:
  - Check logs: `journalctl -u libvirtd`.
  - Verify VM configuration: `virsh dumpxml myvm`.
  - Ensure disk images exist: `ls -lh /var/lib/libvirt/images/`.
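When a saved log is at hand, error lines can be filtered with plain `grep`; the log text here is a hypothetical sample (on a live host, pipe `journalctl -u libvirtd` instead):

```shell
# Sketch: count and inspect error lines in a captured libvirtd log sample.
log='Oct 01 10:00:01 host libvirtd[811]: internal error: qemu unexpectedly closed the monitor
Oct 01 10:00:02 host libvirtd[811]: info: client connected'
printf '%s\n' "$log" | grep -ic 'error'   # → 1
printf '%s\n' "$log" | grep -i 'error'    # show the matching line
```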
- Cause: Misconfigured bridge, NAT, or firewall rules.
- Fix:
  - Check bridge status: `brctl show` (or `ip link show type bridge` on systems without bridge-utils).
  - Verify NAT rules: `iptables -t nat -L`.
  - Restart networking: `sudo systemctl restart NetworkManager`.
- Cause: Insufficient CPU/RAM, non-virtio drivers, or misconfigured storage.
- Fix:
  - Use `virtio` for disks and NICs.
  - Allocate more resources to the VM.
  - Enable HugePages and CPU pinning.
- Cause: IOMMU not enabled, or incorrect PCI device assignment.
- Fix:
  - Enable IOMMU in the BIOS.
  - Verify PCI device isolation: `lspci -nnk`.
  - Edit the VM XML to include the PCI device.
- Cause: Corrupted disk images or insufficient storage.
- Fix:
  - Check disk images: `qemu-img check myvm.qcow2`.
  - Resize disks: `qemu-img resize myvm.qcow2 +10G`.
- Documentation:
-
Books:
- KVM Virtualization Cookbook by Konstantin Ivanov.
- Mastering KVM Virtualization by Humble Devassy Chirammal.
- Courses:
- Communities:
- KVM is a Linux-based, open-source virtualization solution that turns the kernel into a Type-1 hypervisor.
- Components: KVM kernel module, QEMU (emulation), and libvirt (management).
- Features: Near-native performance, live migration, PCI passthrough, and snapshots.
- Use Cases: Cloud computing, enterprise virtualization, development, HPC, and homelabs.
- **Management Tools:** `virt-manager`, `virsh`, `cockpit`, and OpenStack.
- **Performance Tuning:** CPU pinning, HugePages, virtio drivers, and nested virtualization.
- Troubleshooting: Check logs, verify configurations, and allocate sufficient resources.