KVMQuickNotes

KVM hypervisor

Introduction

Vocabulary

  • domain - in libvirt terminology, a VM is called a domain.

References

  • OpenStack nested virtual machines

  • Install and use CentOS 7 or RHEL 7 as KVM virtualisation host

  • Virtual routers

    • Quagga
  • Libvirt API

  • virt-clone

  • virt-convert

    • converts e.g. an .ovf into a running VM; the image conversion itself is done by qemu-img.
  • Virtual nesting

  • vSwitch

  • QEMU

  • LXC

  • Console

  • Network

  • Cloud

  • Puppet

  • cloud-init

    • good overview of how cloud-init works, with a web server on the 169.254 link-local net.

    • example of how to use an ISO for KVM virtual machine configuration.

    • example of cloud-config data.

    • very nice overview.

    • cloud-init with Vagrant

  • AWS

  • Node configuration

  • Hyperthreading

  • Networking

    • Tunnelling:
  • Performance

  • LVM

  • Quick provisioning

  • OVF

  • Performance

  • Workflow

  • ACPI

  • Kernel configuration

  • Performance

  • VirtualBox

    • VBoxManage clonehd /path/to/image.vmdk /path/to/newimage.raw --format RAW

Open issues

  • How do I inject commands into a VM guest that I'm starting up?

    • Possibly have a local vNIC with a known MAC address, configure the DHCP server to serve a specific IP address for that MAC, and then, based on the IP address, provide the proper install script or puppet /etc/puppet/data/default file.
    • Possibly do a wget based on the primary MAC address of the guest VM, fetching a script which is then run. For security reasons the network has to be a local one that cannot reach outside the box.
  • How do I create vNICs?

  • How do I create VLANs on the vNICs?

  • How do I change the network connections for a guest VM?

  • How to use kubernetes/atomic?

Concepts

  • Storage: Use LVM

    • primary partition
    • IBE, to switch to during upgrades.
    • /swap
    • The rest in an unused VG.
  • Base images

    • Soft router: Fedora 20
    • m1: RHEL 6.3 - 64bit, with 32bit libs and apps, common for all other images.

Commands

Virsh

  • virsh destroy - force-stop (hard power-off) a domain
  • virsh console - connect to the serial port on the machine
    • Requires that the kernel has been given the extra args: console=tty0 console=ttyS0,115200
  • virsh undefine - remove the vm
  • virsh dominfo
  • virsh shutdown
  • virsh autostart
    • virsh autostart --disable
  • virsh attach-disk vm1 /dev/sdb vdb --driver qemu --mode shareable
  • virsh detach-disk vm1 vdb
  • virsh attach-disk vm1 /some.iso hdc --type cdrom
  • virsh pool-dumpxml default
    • shows the path where the pool is stored.
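    • e.g. extracting just the target path (a sketch; assumes the pool XML contains a single <path> element):
      • virsh pool-dumpxml default | grep -oP '(?<=<path>).*(?=</path>)'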

virt- commands

  • virt-clone
    • virt-clone --connect qemu:///system --original baseks --name clone1 --file /var/lib/libvirt/images/clone1.qcow2
    • time virt-clone --connect qemu:///system --original basecow --name vroute1 --file /virt_images/vroute1.qcow2
      • ~1m10s
    • time virt-clone --connect qemu:///system --original base_ubuntu_2204 --name alphaworker2 --file /hdd1/virt_images/alphaworker2.qcow2
      • real 0m6.890s
    • With LVM:
      • virsh vol-clone baseks-1 clone1 --pool vg_images
        • Took ~ 8 min to clone 10GB
      • virt-clone --connect qemu:///system --original baseks --name clone1 --file /dev/vg_images/clone1
        • Not this; there must be a different way.
        • virsh create clone.xml --paused
    • lvm method
      • time lvcreate --name clone1 --size 20G vg_images
      • time virt-resize --expand vda2 /dev/vg_images/baseks /dev/vg_images/clone1
        • real 8m10.571s
      • time virt-resize --resize-force vda2=4G /dev/vg_images/baseks /dev/vg_images/clone1b
        • ~4m, there was a block writing error.
      • virsh dumpxml baseks > clone1.xml
      • vi clone1.xml
      • virsh create clone1.xml
  • virt-install
  • virt-manager - GUI admin app
  • virt-sysprep

brctl

  • brctl show

  • deleting a bridge

    • ifconfig virbr18 down
    • brctl delbr virbr18
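  • with iproute2 (brctl is deprecated on modern systems; same virbr18 example):

    • sudo ip link set virbr18 down
    • sudo ip link delete virbr18 type bridge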

Installation

Installing kvm on Ubuntu

See: KVM hypervisor: a beginners’ guide

  1. sudo apt -y install bridge-utils cpu-checker libvirt-clients libvirt-daemon qemu-kvm
  2. kvm-ok
  3. sudo apt install virt-manager
  4. sudo usermod -a -G libvirt-qemu vmadm

set-up host network for VM guests on Ubuntu

  • ls /etc/netplan
  • cp /etc/netplan/00-installer-config.yaml ~/. (backup copy)
  • sudo vi /etc/netplan/00-installer-config.yaml
    • See below
    • the MAC address is just a random one for the bridge; eno1 keeps its own MAC address.
    • I am not sure if 'renderer: networkd' is needed
  • sudo chmod 600 /etc/netplan/00-installer-config.yaml
  • sudo netplan apply
network:
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
  bridges:
    br0:
      dhcp4: true
      interfaces:
        - eno1
      macaddress: 52:54:00:12:34:56
  version: 2

Original

# This is the network config written by 'subiquity'
network:
  ethernets:
    eno1:
      dhcp4: true
  version: 2

set-up host network for VM guests on Debian

How to Install KVM and Configure Bridge Network in Debian 12

Testing

  • virt-install --name baseks --memory 768 --disk "pool=vg_images,bus=virtio,size=10" --vcpus 1 --location http://10.1.233.3/images/linux/releases/20/Fedora/x86_64/os --graphics none --extra-args="console=tty0 console=ttyS0,115200 ks=http://10.1.233.3/configs/ks_fedora-20-x86_64_http_kvm_guest.cfg" --network bridge:virbr0

    • QCOW2 version: virt-install --name baseks --memory 768 --disk "pool=qcows,bus=virtio,size=10" --vcpus 1 --location http://10.1.233.3/images/linux/releases/20/Fedora/x86_64/os --graphics none --extra-args="console=tty0 console=ttyS0,115200 ks=http://10.1.233.3/configs/ks_fedora-20-x86_64_http_kvm_guest.cfg" --network bridge:virbr0
  • virt-install --name baseks --memory 768 --disk size=10 --vcpus 1 --location http://10.1.233.3/images/linux/releases/20/Fedora/x86_64/os --graphics none --extra-args="console=tty0 console=ttyS0,115200 ks=http://10.1.233.3/configs/ks_fedora-20-x86_64_http_kvm_guest.cfg" --network bridge:br0

  • virt-install --name baseks --memory 768 --disk size=10 --vcpus 1 --location http://10.1.233.3/images/linux/releases/20/Fedora/x86_64/os --graphics none --extra-args="console=tty0 console=ttyS0,115200 ks=http://10.1.233.3/configs/ks_fedora-20-x86_64_http_kvm_host.cfg" --network bridge:br0

  • virt-install --name base --memory 768 --disk size=10 --vcpus 1 --location http://10.1.233.3/images/linux/releases/18/Fedora/x86_64/os --graphics none --extra-args="console=tty0 console=ttyS0,115200" --network bridge:br0

    • --extra-args="ks=http://my.server.com/pub/ks.cfg console=tty0 console=ttyS0,115200"
  • virt-install --name=base --memory=768 --vcpus=1 --location=http://10.1.2.3/linux/releases/18/Fedora/x86_64/os --pxe --os-variant=fedora20 --disk size=10 --network=default --paravirt --virt-type=kvm

  • virt-install --name k8s_worker --memory 2048 --vcpus 4 --disk size=1024 --cdrom $HOME/Downloads/ubuntu-22.04.2-live-server-amd64.iso --os-variant ubuntu-lts-latest --network default --virt-type=kvm

  • virt-install --name k8s_worker --memory 2048 --vcpus 4 --disk size=1024 --cdrom $HOME/Downloads/ubuntu-22.04.2-live-server-amd64.iso --os-variant ubuntu-lts-latest --network bridge:virbr0

    • paravirt

libvirt

  • virsh -c lxc:/// list

    • Will only list containers that were started with libvirt, not containers started with lxc-start -n somethingelse
  • sudo apt-get install libvirt-bin

  • virsh -c lxc+ssh://root@HOSTNAME/ list

Cookbook

VM Cookbook

Creating VMs

Create an image for use with terraform

  • virt-clone --connect qemu:///system --original baseks --name base_ubuntu_2204 --file /var/lib/libvirt/images/base_ubuntu_2204.qcow2
  • virsh list --all | grep base_ubuntu_2204
  • virt-sysprep -d base_ubuntu_2204
    • virt-sysprep -d base_ubuntu_2204 --run-command "sed -i 's/enp1s0/ens3/' /etc/netplan/00-installer-config.yaml"
    • virt-sysprep -a ~/base_ubuntu_2204.qcow2 --run-command 'sed -i s/enp1s0/ens3/ /etc/netplan/00-installer-config.yaml'

Phone home with netcat (nc)

KS Install Fedora 23 KVM guest on qcow2

  • virt-install --name base_f23_x86_64 --memory 1024 --disk "pool=qcows,bus=virtio,size=6" --vcpus 1 --location http://10.1.233.3/mirrors/fedora23/linux/releases/23/Server/x86_64/os --graphics none --extra-args="console=tty0 console=ttyS0,115200 ks=http://10.1.233.3/configs/ks_fedora-23-x86_64_http_kvm_guest.cfg" --network bridge:virbr0

KS Install Fedora 20 KVM guest on qcow2

  • virt-install --name base_f20_x86_64 --memory 768 --disk "pool=qcows,bus=virtio,size=6" --vcpus 1 --location http://dm/mirrors/f20/linux/releases/20/Fedora/x86_64/os --graphics none --extra-args="console=tty0 console=ttyS0,115200 ks=http://dm/configs/ks_fedora-20-x86_64_http_kvm_guest_no_cloud_init.cfg" --network bridge:virbr0

KS Install RedHat 6.6 KVM guest on qcow2

  • virt-install --name base_rhel6_x86_64 --memory 768 --disk "pool=qcows,bus=virtio,size=6" --vcpus 1 --location http://10.1.233.3/images/rhel-6Server-x86_64 --graphics none --extra-args="console=tty0 console=ttyS0,115200 ks=http://10.1.233.3/configs/ks_rhel66-x86_64_http_kvm_guest_no_cloud_init.cfg" --network bridge:virbr0

Creating an instance based on a base image using qcow2 backing images


  • qemu-img create -f qcow2 -o backing_file=/virt_images/baseks-1.qcow2 /virt_images/vrouter.qcow2
    • note: newer qemu-img versions also require the backing file's format, i.e. an added -F qcow2
  • virsh define --file /opt/gas/vrouter.xml

Domain creation

define a CDROM ISO image in the domain XML file

Cloning VMs

VM Image manipulation

Accessing the VM qcow2 image from the host


  • login to the host
  • sudo modprobe nbd max_part=8
  • sudo qemu-nbd -c /dev/nbd0 /virt_images/vm.qcow2
  • sudo fdisk -l /dev/nbd0
  • sudo mount /dev/nbd0p2 /mnt
  • do stuff to the content.
  • sudo umount /mnt
  • sudo qemu-nbd -d /dev/nbd0
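
Alternatively, the libguestfs tools can mount the image without the nbd module (a sketch; assumes the guest filesystem is on /dev/sda2 — adjust to the actual partition):

  • sudo guestmount -a /virt_images/vm.qcow2 -m /dev/sda2 /mnt
  • do stuff to the content.
  • sudo guestunmount /mnt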

Exporting to OVA

  • Stop the machine
  • create a vmdk
    • qemu-img convert -O vmdk base.qcow2 base.vmdk
    • or: qemu-img convert -f qcow2 -O vmdk -o adapter_type=lsilogic,subformat=streamOptimized,compat6 SC-1.qcow2 SC-1.vmdk
  • write the .ovf
  • write the manifest (.mf) file
  • generate the ova
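
The packing itself can be done with plain tar (a sketch; the file names are examples). Per the OVF spec quoted below, the OVF descriptor must be the first file in the archive:

  • tar -cf base.ova base.ovf base.vmdk base.mf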


the ova format

Generated using 'tar'. From the OVF specification, section 5.3 'Distribution as a Single File': "An OVF package may be stored as a single file using the TAR format. The extension of that file shall be .ova (open virtual appliance or application)."

Order:

    1. OVF descriptor
    2. The remaining files shall be in the same order as listed in the References section (see 7.1).
    3. OVF manifest
    4. OVF certificate

the minimal OVF 1.1

It seems that VMware only supports OVF 1.0.0 and 1.1.0

  • DSP0004:

Convert from other Hypervisors to KVM

Convert from OVA to KVM

  • export LIBGUESTFS_BACKEND=direct

  • virt-v2v -v -x -i ova MY.ova -o libvirt -of qcow2 -os qcowpool

Troubleshooting OVA import on other hypervisors

Error uploading file disk1.vmdk to server. Not a supported disk format(sparse VMDK version too old)

It now imports, but it can't boot; nothing happens. Tried:

  • Set VMDK header version 3 for foo.vmdk (the byte at offset 0x4 is the version field): printf '\x03' | dd conv=notrunc of=foo.vmdk bs=1 seek=$((0x4))

Use qemu-img info *.vmdk to see 'create type: monolithicSparse'


On ESXi 5.1: the VMDK had been created with qemu-img version 2.4.1

VD: error VERR_VD_VMDK_INVALID_HEADER opening image file 'disk1.vmdk' (VERR_VD_VMDK_INVALID_HEADER)

On VirtualBox 4.3.10

Could not open the medium storage unit 'disk1.vmdk'. VMDK: inconsistent references to grain directory in 'disk1.vmdk' (VERR_VD_VMDK_INVALID_HEADER). VD: error VERR_VD_VMDK_INVALID_HEADER opening image file 'disk1.vmdk' (VERR_VD_VMDK_INVALID_HEADER).

capacity of uploaded disk is larger than requested

The 'ovf:capacityAllocationUnits' was wrong (it was 2^10 instead of 2^30): <Disk ovf:capacity="10" ovf:capacityAllocationUnits="byte * 2^30" ...

Network cookbook

Creating a private network

I finally tracked down a solution. The following sysctl variables have to be set:

net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0


  • Create a virtual bridge.
  • Define the private network and set it to forward to the relevant virtual bridge (see the sketch below).
  • Configure the domain to use the private network.
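
A minimal sketch of the first two steps ('privbr0' and 'privnet' are example names; virsh net-define reads the XML from a file, here fed via /dev/stdin):

sudo ip link add privbr0 type bridge
sudo ip link set privbr0 up
virsh net-define /dev/stdin <<'EOF'
<network>
  <name>privnet</name>
  <forward mode='bridge'/>
  <bridge name='privbr0'/>
</network>
EOF
virsh net-start privnet
virsh net-autostart privnet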

Creating a host network

A network with "direct" access to the host's interface

# 'interfaces' is a 'facter' fact that can be accessed via the '$' prefix
$arInterfaceList = split($interfaces, ',')
$szSecondInterface = $arInterfaceList[1]
#notify{ "Second NIC: $szSecondInterface": }

$szPublicBridgeName = 'publicbr0'

network::bridge { "$szPublicBridgeName":
  ensure        => 'up',
  stp           => true,
  delay         => '0',
}
#  bridging_opts => 'priority=65535',


network::if::bridge { "$szSecondInterface":
  ensure => 'up',
  bridge => "$szPublicBridgeName",
  require => Network::Bridge[ "$szPublicBridgeName" ],
}

Getting the IP address of a Guest VM


  • virsh domiflist VM_NAME
  • arp -n

e.g.

# virsh domiflist MYVM | grep virbr0 | awk '{print $5}'
52:54:00:68:0c:1e
# arp -n | grep 52:54:00:68:0c:1e
192.168.122.44           ether   52:54:00:68:0c:1e   C                     virbr0
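
On newer libvirt versions there is also a direct query (by default it reads the DHCP lease for the domain's interface):

# virsh domifaddr MYVM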

Assigning static IP addresses to guests

Static IP addresses in a KVM network

  • virsh net-update default delete ip-dhcp-range "<range start='192.168.122.2' end='192.168.122.254'/>" --live --config
  • virsh net-update default add ip-dhcp-range "<range start='192.168.122.100' end='192.168.122.254'/>" --live --config
  • virsh net-update default add-last ip-dhcp-host "<host mac='52:54:00:a0:c8:11' name='alphamaster' ip='192.168.122.72'/>" --live --config
  • In guest
    • sudo dhclient -r && sudo dhclient
<network>
  <name>default</name>
  <uuid>1750ec86-c43d-4cb5-bd36-01a8e7abc9e6</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:a6:cd:a6'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.100' end='192.168.122.254'/>
      <host mac='52:54:00:a0:c8:11' name='alphamaster' ip='192.168.122.72'/>
      <host mac='52:54:00:d1:a0:57' name='alphaworker1' ip='192.168.122.73'/>
      <host mac='52:54:00:67:fb:f8' name='alphaworker2' ip='192.168.122.74'/>
    </dhcp>
  </ip>
</network>
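
To verify that the updates landed in the live network definition:

  • virsh net-dumpxml default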

Pools

Directory type pools

Create a directory based storage pool

  • mkdir /virt_images
  • virsh pool-define-as --name qcows --type dir --target /virt_images
  • virsh pool-autostart qcows
  • virsh pool-start qcows
  • virsh pool-list
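
Once the pool is running, volumes can be created in it (a sketch; the volume name is an example):

  • virsh vol-create-as qcows test.qcow2 10G --format qcow2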

LVM Pools

Create lvm pool


  • vg_images: the name of the volume group; inspect with vgdisplay

    • Create using: vgcreate vg_images <physical-volume> (vgcreate needs at least one physical volume, e.g. /dev/sdb1)
  • virsh pool-define-as --name vg_images --type logical --source-format lvm2 --target /dev/vg_images

  • virsh pool-autostart vg_images

  • virsh pool-start vg_images

  • virsh pool-list

Possible errors:

  • Pool vg_images defined
    • 'vg_images' already exists.

LVM volumes

  • virsh vol-info --vol baseks-1 --pool vg_images
  • virsh vol-clone baseks-1 clone1 --pool vg_images

VM config

Run a script at startup


[Unit]
Description=Startup
After=syslog.target
After=network.target

[Service]
Type=forking
User=root
Group=root
ExecStart=/root/startup.sh
TimeoutSec=300

[Install]
WantedBy=multi-user.target
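
To install the unit (a sketch; assumes it was saved as /etc/systemd/system/startup.service):

  • sudo cp startup.service /etc/systemd/system/
  • sudo systemctl daemon-reload
  • sudo systemctl enable startup.service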

Enabling hyperthreading


acpi=ht is needed.

This should be simple enough with Grub2. You don't want to ever modify the generated /boot/grub2/grub.cfg, and you never want Puppet trying to install a configuration that the OS may overwrite, leading to inconsistent states.

You should be able to simply add your modifications to /etc/default/grub on the GRUB_CMDLINE_LINUX line.

Then execute /sbin/grub2-mkconfig -o /boot/grub2/grub.cfg

If you want to get fancy, add an exec resource that is only refreshed when your file resource updates /etc/default/grub to run grub2-mkconfig. Otherwise, your cmdline changes won't take effect until the next kernel update, which runs grub2-mkconfig.
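
A sketch of those manual steps (the sed expression assumes acpi=ht is not already present and simply prepends it to GRUB_CMDLINE_LINUX):

sudo sed -i 's/^GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="acpi=ht /' /etc/default/grub
sudo /sbin/grub2-mkconfig -o /boot/grub2/grub.cfg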

Post boot configuration

  • wget or curl
    • If using curl, have a manifest file that describes what to get?
    • Probably just use wget, since it supports recursive downloads (wget -r).

cloud-init

cloud-init information provisioning

user-data

See also:

#cloud-config
users:
  - name: vagrant
    gecos: generic administration.
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: adm,wheel,systemd-journal
    lock-passwd: true
    chpasswd: { expire: False }
    ssh_pwauth: True
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC1YDp28djnVCnpULIepAlHhlfiwO94V5Kaxe3yfjp+2vZhVHNDddnZlRDrn+dc9BIzYrddCnc59JhVDo/SY76Ba8g+/BTr7bKxN0Ak6HxlhH4t1Mfh7bFaUjqZIBcYqkc28sle5rZ4qXIzjvh4R4NLWaKtuFbFtXte7Gp9PfWkGnZZ3bZsNl/3XMPyYB97BAfF9DknoR6D500zhaE16bHqKOt72NRGm2xyPj2mfbA3K6my3IocZeUNFyjBFPWIotdogtgqovKtncBnbcA6+ivaEPu4YZF8yB7pYuiufQIPl79Wdi8QYweMwg3hf3NNa75usilTA7l53cBW4xuj3vVN [email protected]

write_files:
  - path: /etc/puppet/data/global.yaml
    permissions: 0644
    owner: root
    encoding: gzip+base64
    content: |
      H4sICC1L8lQAA2dsb2JhbC55YW1sAI3MTQqAIBQE4L2neBdQ9JkbLyNiVhK+oqTz978TajEwDMPHOWcUS5ioS71lABulQPlsAEd15HO0EMugrinNzrftYkFJgQK1Fmia+xxL9utoAY0Rb+QjHn5RFRSrqPqPYgXVVRS/ULYDOj115wwBAAA=
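
One way to hand this user-data to a KVM guest is the NoCloud datasource, with a seed ISO built by cloud-localds (from the cloud-image-utils package). A sketch; the file and VM names are examples, and cloudtest.qcow2 is assumed to be a copy of a cloud image:

cloud-localds seed.iso user-data.yaml
virt-install --name cloudtest --memory 1024 --vcpus 1 \
  --disk path=/virt_images/cloudtest.qcow2 \
  --disk path=seed.iso,device=cdrom \
  --import --network bridge:virbr0 --graphics none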

Testing

storage testing

qcow test

  • lvcreate --name storage --size 30G vg_images
  • mkfs.ext4 /dev/vg_images/storage
  • mkdir /virt_images
  • mount /dev/vg_images/storage /virt_images
  • virsh pool-define-as --name qcows --type dir --target /virt_images
  • virsh pool-autostart qcows
  • virsh pool-start qcows
  • virsh pool-list
  • virt-install --name basecow --memory 768 --disk "pool=qcows,size=10" --vcpus 1 --location http://10.1.233.3/images/linux/releases/20/Fedora/x86_64/os --graphics none --extra-args="acpi=on console=tty0 console=ttyS0,115200 ks=http://10.1.233.3/configs/ks_fedora-20-x86_64_http_kvm_guest.cfg" --network bridge:virbr0

Converting images

Convert from VMDK to QCOW2

  • time qemu-img convert -O qcow2 junos-vsrx-12.1X47-D20.7-domestic-disk1.vmdk junos-vsrx-12.1X47-D20.7-domestic-disk1.qcow2.3
  • time qemu-img convert -f qcow2 -O qcow2 -o compat=0.10 junos-vsrx-12.1X47-D20.7-domestic-disk1.qcow2.3 junos-vsrx-12.1X47-D20.7-domestic-disk1.qcow2
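
To confirm the rewrite produced a v2 image that older qemu-kvm can open, check the 'compat' field under 'Format specific information':

  • qemu-img info junos-vsrx-12.1X47-D20.7-domestic-disk1.qcow2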

Error starting domain: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/opt/zones/disks/junos-vsrx-12.1X47-D20.7-domestic-disk1.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none: 'drive-virtio-disk0' uses a qcow2 feature which is not supported by this qemu version: QCOW version 3
qemu-kvm: -drive file=/opt/zones/disks/junos-vsrx-12.1X47-D20.7-domestic-disk1.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none: could not open disk image /opt/zones/disks/junos-vsrx-12.1X47-D20.7-domestic-disk1.qcow2: Operation not supported


Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 44, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 65, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1063, in startup
    self._backend.create()
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 620, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/opt/zones/disks/junos-vsrx-12.1X47-D20.7-domestic-disk1.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none: 'drive-virtio-disk0' uses a qcow2 feature which is not supported by this qemu version: QCOW version 3
qemu-kvm: -drive file=/opt/zones/disks/junos-vsrx-12.1X47-D20.7-domestic-disk1.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none: could not open disk image /opt/zones/disks/junos-vsrx-12.1X47-D20.7-domestic-disk1.qcow2: Operation not supported

Troubleshooting

Troubleshooting cloud-init

cloud-init takes 500 seconds to start

Removing a file that was 19 MB before Base64 encoding fixed the issue.

Starting cloud-init: /usr/lib/python2.6/site-packages/cloudinit/url_helper.py:45: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site-packages/backports/__init__.pyc, but /usr/lib/python2.6/site-packages is being added to sys.path
  import pkg_resources
Cloud-init v. 0.7.5 running 'init' at Mon, 06 Apr 2015 22:26:08 +0000. Up 255.98 seconds.
...
Starting cloud-init: /usr/lib/python2.6/site-packages/cloudinit/url_helper.py:45: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site-packages/backports/__init__.pyc, but /usr/lib/python2.6/site-packages is being added to sys.path
  import pkg_resources
Cloud-init v. 0.7.5 running 'modules:config' at Mon, 06 Apr 2015 22:26:52 +0000. Up 299.88 seconds.
Starting cloud-init: /usr/lib/python2.6/site-packages/cloudinit/url_helper.py:45: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site-packages/backports/__init__.pyc, but /usr/lib/python2.6/site-packages is being added to sys.path
  import pkg_resources

Starting cloud-init: /usr/lib/python2.6/site-packages/cloudinit/url_helper.py:45: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site-packages/backports/__init__.pyc, but /usr/lib/python2.6/site-packages is being added to sys.path
  import pkg_resources
Cloud-init v. 0.7.5 running 'modules:final' at Mon, 06 Apr 2015 22:29:03 +0000. Up 431.07 seconds.

Virtual devices

first NIC is eth1, not eth0

Fix (udev's persistent-net rules pin the base image's MAC address to eth0, so the clone's new MAC gets eth1):

  • rm /etc/udev/rules.d/70-persistent-net.rules

Virtual networks troubleshooting

Error starting domain: internal error: network 'alpha' uses a direct mode, but has no forward dev and no interface pool

A forwarding bridge must be specified as well.


VM can't ping external addresses via the virbr0

  • A: ip forwarding has to be enabled on the host.
    • Temporary: sysctl -w net.ipv4.ip_forward=1
    • Permanent: vi /etc/sysctl.conf:
      • net.ipv4.ip_forward = 1

VM creation

ERROR Error validating install location: Opening URL

The HTTP server had not enabled directory listing. E.g. for lighttpd:

  • in /etc/lighttpd/conf.d/dirlisting.conf: dir-listing.activate = "enable"
# virt-install --name rhel63_x86_64 --memory 768 --disk "pool=qcows,bus=virtio,size=10" --vcpus 1 --location http://169.254.0.3/images/rhel_63_x86_64 --graphics none --extra-args="acpi=on console=tty0 console=ttyS0,115200 ks=http://169.254.0.3/configs/ks_rhel-63-x86_64_http_kvm_guest.cfg" --network bridge:virbrconf
ERROR    Error validating install location: Opening URL http://169.254.0.3/images/rhel_63_x86_64 failed.

guest VM doesn't power down after finished KS

A: acpid — the guest must have the acpid service installed and running so it handles the ACPI power-off event.

ERROR Host does not support virtualization type 'xen'

--paravirt is the offending parameter.

This got a bit further:

virt-install --name base --memory 768 --disk size=10  --vcpus 1 --location http://10.1.2.3/images/linux/releases/18/Fedora/x86_64/os
WARNING  No 'console' seen in --extra-args, a 'console=ttyS0' kernel argument is likely required to see text install output from the guest.

Starting install...
Retrieving file .treeinfo...                                                                                                                                                                                                           | 2.2 kB  00:00:00 !!! 
Retrieving file vmlinuz...                                                                                                                                                                                                             | 9.3 MB  00:00:00 !!! 
Retrieving file initrd.img...                                                                                                                                                                                                          |  53 MB  00:00:02 !!! 
Allocating 'base-2.qcow2'                                                                                                                                                                                                              |  10 GB  00:00:00     
ERROR    internal error: Network 'default' is not active.
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
  virsh --connect qemu:///system start base
otherwise, please restart your installation.

See also:

# virt-install --name base --memory 768 --vcpus 1 --location http://10.1.2.3/linux/releases/18/Fedora/x86_64/os --pxe --os-variant fedora20 --disk size=10 --network default --paravirt --virt-type kvm 
ERROR    Host does not support virtualization type 'xen' 
[root@localhost kvm-host]# virt-install --name base --memory 768 --vcpus 1 --location http://10.1.2.3/linux/releases/18/Fedora/x86_64/os  --os-variant fedora20 --disk size=10 --network default --paravirt --virt-type kvm 
ERROR    Host does not support virtualization type 'xen' 
[root@localhost kvm-host]# virt-install --prompt
WARNING  --prompt mode is no longer supported.
ERROR    
--name is required
--memory amount in MB is required
--disk storage must be specified (override with --nodisks)
An install method must be specified
(--location URL, --cdrom CD/ISO, --pxe, --import, --boot hd|cdrom|...)
[root@localhost kvm-host]# virt-install --name base --memory 768 --disk size=10 --pxe
WARNING  The guest's network configuration does not support PXE

Starting install...
Allocating 'base.qcow2'                                                                                                                                                                                                                |  10 GB  00:00:00     
ERROR    internal error: Network 'default' is not active.
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
  virsh --connect qemu:///system start base
otherwise, please restart your installation.

Warning: failed to fetch kickstart from http//dm:/configs/ks_fedora-20-x86_64_http_kvm_guest_no_cloud_init.cfg

  • A1: The hostname 'dm' is not resolved inside the virtual machine.
  • A2: In one instance I was missing the ':' after the ip address.

Unable to retrieve http://10.1.233.3:/images/rhel-6Server-x86_64/images/install.img

  • A: Remove the ':' after the '.3'.

can't use '--run-command'

The command works with a cloud image downloaded from the internet.

sudo virt-sysprep -d base_ubuntu_2204 --run-command "sed -i 's/enp1s0/ens3/' /etc/netplan/00-installer-config.yaml"
[sudo] password for hck: 
[   0.0] Examining the guest ...
virt-sysprep: warning: mount_options: mount_options_stub: 
/dev/disk/by-id/dm-uuid-LVM-mKPAMvrjMnz1ZtDk79zsc3UUDsiI59SEGwHkFWHpYjw7E9TiNsj7dpEtM5RnsiKr: 
No such file or directory (ignored)
virt-sysprep: warning: mount_options: mount: /boot: mount point is not a 
directory (ignored)
[   2.6] Performing "abrt-data" ...
virt-sysprep: error: libguestfs error: glob_expand: glob_expand_stub: you 
must call 'mount' first to mount the root filesystem

If reporting bugs, run virt-sysprep with debugging enabled and include the 
complete output:

  virt-sysprep -v -x [...]

Could not open '/hdd2/vmstoragepool/tf_cloudstack.qcow2': Permission denied

The domain's AppArmor profile denied read access to the image (see the dmesg output below). Fix: sudo vi /etc/apparmor.d/libvirt/TEMPLATE.qemu and add the 'file,' rule:

#
# This profile is for the domain whose UUID matches this file.
#

#include <tunables/global>

profile LIBVIRT_TEMPLATE flags=(attach_disconnected) {
  #include <abstractions/libvirt-qemu>
  file,
}
root@kvm1:~# virsh start test
error: Failed to start domain 'test'
error: internal error: process exited while connecting to monitor: 2023-09-12T15:30:34.419358Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/hdd2/vmstoragepool/tf_cloudstack.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}: Could not open '/hdd2/vmstoragepool/tf_cloudstack.qcow2': Permission denied
root@kvm1:~# dmesg | tail
[  531.772025] virbr0: port 1(vnet2) entered disabled state
[  531.772096] device vnet2 entered promiscuous mode
[  531.772284] virbr0: port 1(vnet2) entered blocking state
[  531.772288] virbr0: port 1(vnet2) entered listening state
[  531.881076] audit: type=1400 audit(1694531960.505:33): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-fda15f28-2602-4bcd-ba56-91b10200681d" pid=1921 comm="apparmor_parser"
[  531.930961] audit: type=1400 audit(1694531960.557:34): apparmor="DENIED" operation="open" profile="libvirt-fda15f28-2602-4bcd-ba56-91b10200681d" name="/hdd2/vmstoragepool/tf_cloudstack.qcow2" pid=1923 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=64055 ouid=64055
[  531.962908] virbr0: port 1(vnet2) entered disabled state
[  531.963348] device vnet2 left promiscuous mode
[  531.963351] virbr0: port 1(vnet2) entered disabled state
[  532.167875] audit: type=1400 audit(1694531960.793:35): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="libvirt-fda15f28-2602-4bcd-ba56-91b10200681d" pid=1932 comm="apparmor_parser"

Two KVM VMs both get the same IP address

A: the clones shared the same /etc/machine-id, from which the DHCP client derives its client identifier. Run this command during the virt-customize: truncate -s 0 /etc/machine-id (see the sketch below)
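
A sketch of doing this offline against the clone's disk image (the image path is an example):

  • sudo virt-customize -a /virt_images/clone1.qcow2 --run-command 'truncate -s 0 /etc/machine-id'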
