Installing lxd
(Previously: installing lxc)
- Don't want to install? Try lxd online here!
- Want a quick and less detailed guide? I recommend cyberciti.biz's guide.
LXD 4.0 can be installed via Snap (not via apt; Debian packages are only available up to LXD 3.0)
Snap can be installed on Linux Mint as detailed here
To install snap from the Software Manager application, search for snapd and click Install.
Alternatively, snapd can be installed from the command line:
sudo apt install snapd
Then:
snap install lxd
- Note that it appears to freeze at the moment it says "Setup snap "snapd" security profiles", but then it will continue; this message does not mean you have to do anything.
- It tells you to restart (so that /snap/bin will be in your PATH), but if you want to do this later, run the following (you really should restart, though):
export PATH="$PATH:/snap/bin"
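To confirm the client is reachable without restarting, a quick check (a sketch; the exact path may differ):
command -v lxc   # should print something like /snap/bin/lxc once /snap/bin is on the PATH
lxc --version    # and this should print the installed LXD client version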
Running lxc now gives its help page:
Description:
Command line client for LXD
All of LXD's features can be driven through the various commands below.
For help with any of those, simply call them with --help.
Usage:
lxc [command]
Available Commands:
alias Manage command aliases
cluster Manage cluster members
config Manage instance and server configuration options
console Attach to instance consoles
copy Copy instances within or in between LXD servers
delete Delete instances and snapshots
exec Execute commands in instances
export Export instance backups
file Manage files in instances
help Help about any command
image Manage images
import Import instance backups
info Show instance or server information
launch Create and start instances from images
list List instances
move Move instances within or in between LXD servers
network Manage and attach instances to networks
operation List, show and delete background operations
profile Manage profiles
project Manage projects
publish Publish instances as images
remote Manage the list of remote servers
rename Rename instances and snapshots
restart Restart instances
restore Restore instances from snapshots
snapshot Create instance snapshots
start Start instances
stop Stop instances
storage Manage storage pools and volumes
version Show local and remote versions
Flags:
--all Show less common commands
--debug Show all debug messages
--force-local Force using the local unix socket
-h, --help Print help
--project string Override the source project
-q, --quiet Don't show progress information
-v, --verbose Show all information messages
--version Print version number
Use "lxc [command] --help" for more information about a command.
(Note that you really should restart though!)
Upon first running, you are advised to run sudo lxd init, which will set up a config (options explained here):
- clustering
- sounds cool but maybe not for a first attempt
- MAAS
- network bridge
- "provides network access for the instances", LXD will create it by default
- storage pools
- network access
- automatic image update
- "YAML lxd init preseed"
- "Will display a summary of your chosen configuration options in the terminal"
If you try running lxd init without sudo you will get an error:
error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: connection refused
To give yourself permission, add your account to the lxd group:
sudo adduser louis lxd
then log out and log back in again.
Alternatively, once you've added yourself to the lxd group (check with getent group lxd, which should now list your username at the end), running newgrp lxd logs you into a new group session (files/folders you create will belong to the lxd group).
I also came across sudo gpasswd -a $USER lxd (while looking for adduser), but it didn't work.
The alternative is to go with sudo -i and then lxc list without sudo, or to run newgrp lxd using this bashrc function.
I originally thought I'd have to run newgrp lxd on every login, as I read that:
On package installation, all members of the admin and sudoers group are automatically added to the lxd group. However groups are only effective after a new session is open.
...but adduser is effective indefinitely after logging out and back in.
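In summary, the group setup looks like this (a sketch; louis is the username from the example above, so substitute your own):
sudo adduser louis lxd   # add your account to the lxd group
getent group lxd         # confirm your username now appears at the end
newgrp lxd               # or log out and back in for the group change to take effect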
OK, so now that's set up, lxc commands will work:
lxc image list
⇣
+-------+-------------+--------+-------------+--------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+--------------+------+------+-------------+
As can be seen, I don't have any images, which can be solved by retrieving a remote image list.
- Edit: this file includes an exploration of the various storage types, but eventually I settled on BTRFS (read on for details on the dir and zfs storage types). If you're on Ubuntu 19 or 20 then you should be able to use ZFS (I encountered error messages and have read it's still experimental on 20.04, but am going to try it).
- Additionally, if you're using SSH to access the machine you're installing LXD on, then select "yes" for "would you like the LXD server to be available over the network?". The default address to bind to will be "all" and the default port will be 8443, and a password must be set (cannot be empty).
But first, I'll do the lxd init, and here is the printout:
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (lvm, ceph, btrfs, dir) [default=btrfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
ipv4.address: auto
ipv6.address: auto
description: ""
name: lxdbr0
type: ""
storage_pools:
- config: {}
description: ""
name: default
driver: dir
profiles:
- config: {}
description: ""
devices:
eth0:
name: eth0
network: lxdbr0
type: nic
root:
path: /
pool: default
type: disk
name: default
cluster: null
I chose "dir" as the storage method, which was "bad" in the sense that it doesn't use optimised image storage/container creation/snapshot creation/transfer etc. etc. This is the fallback storage method used when nothing else is configured, and will be much slower than others
To remove the default storage pool and try again I run
lxc storage delete default
But this isn't possible as it's in use in the default profile, so first you have to blank out the default profile
printf 'config: {}\ndevices: {}' | lxc profile edit default
Now that the default storage pool isn't used in any profile, delete it with the line above, and re-run the lxd init. I found that the default storage suggestion changed from btrfs to zfs.
It should default to btrfs on a BTRFS-based file system, as it did at first (weird bug).
After doing some reading I opted to stay with the default suggestion of ZFS. Fedora is an
example of an OS which is on BTRFS, and I read Ubuntu 18.04 "ships" with ZFS.
- Update: later I got an error which suggested I should in fact not have chosen ZFS (see below); I ended up switching to BTRFS.
See these docs for storage comparison
- ZFS is better if quotas are desired; BTRFS can have trouble enforcing quotas
- it's also recommended to dedicate a partition to the storage pool, but not straight away, I don't think...
ZFS
- LXD can use any zpool or part of a zpool. storage.zfs_pool_name must be set to the path to be used.
- ZFS doesn't have to be (and shouldn't be) mounted on /var/lib/lxd.
- Uses ZFS filesystems for images, then snapshots and clones to create containers and snapshots.
- Due to the way copy-on-write works in ZFS, parent filesystems can't be removed until all children are gone. As a result, LXD will automatically rename any removed but still referenced object to a random deleted/ path and keep it until such time the references are gone and it can safely be removed.
- ZFS as it is today doesn't support delegating part of a pool to a container user. Upstream is actively working on this.
- ZFS doesn't support restoring from snapshots other than the latest one. You can however create new containers from older snapshots, which makes it possible to confirm the snapshot is indeed what you want to restore before you remove the newer snapshots.
Also note that container copies use ZFS snapshots, so you also cannot restore a container to a snapshot taken before the last copy without having to also delete container copies.
Trying to set the network bridge raises another error, again as it already exists
The requested network bridge "lxdbr0" already exists. Please choose another name.
To show it, run lxc network list; then lxc network delete lxdbr0 removes it, allowing the name to be reassigned. (After thinking about it, I could probably have just skipped recreating it, but at least I learnt how this all works...)
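Before deleting, you can also inspect the bridge's configuration with lxc network show (a quick sketch):
lxc network show lxdbr0    # print the bridge's config (addresses, whether it's managed)
lxc network delete lxdbr0  # then delete it, if you do want lxd init to recreate it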
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (lvm, zfs, ceph, btrfs, dir) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty disk or partition? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=30GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
The requested network bridge "lxdbr0" already exists. Please choose another name.
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
ipv4.address: auto
ipv6.address: auto
description: ""
name: lxdbr0
type: ""
storage_pools:
- config:
size: 30GB
description: ""
name: default
driver: zfs
profiles:
- config: {}
description: ""
devices:
eth0:
name: eth0
network: lxdbr0
type: nic
root:
path: /
pool: default
type: disk
name: default
cluster: null
and we're done.
So back to the images: by running lxc remote list you can see the pre-installed remotes:
+-----------------+------------------------------------------+---------------+-------------+--------+--------+
| NAME | URL | PROTOCOL | AUTH TYPE | PUBLIC | STATIC |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+
| images | https://images.linuxcontainers.org | simplestreams | none | YES | NO |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+
| local (default) | unix:// | lxd | file access | NO | YES |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+
| ubuntu | https://cloud-images.ubuntu.com/releases | simplestreams | none | YES | YES |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+
| ubuntu-daily | https://cloud-images.ubuntu.com/daily | simplestreams | none | YES | YES |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+
which shows that there's a remote called images which we can pull a remote image from, so we list them with lxc image list images: | less
+--------------------------------------+--------------+--------+----------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+--------------------------------------+--------------+--------+----------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| alpine/3.9 (3 more) | 010267d95c38 | yes | Alpine 3.9 amd64 (20200919_13:00) | x86_64 | VIRTUAL-MACHINE | 77.06MB | Sep 19, 2020 at 12:00am (UTC) |
+--------------------------------------+--------------+--------+----------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| alpine/3.9 (3 more) | e711488fafd0 | yes | Alpine 3.9 amd64 (20200919_13:00) | x86_64 | CONTAINER | 2.38MB | Sep 19, 2020 at 12:00am (UTC) |
+--------------------------------------+--------------+--------+----------------------------------------------+--------------+-----------------+-----------+-------------------------------+
...
From the file sizes of these first two, it looks like some are "inflated" and some are "inflatable" (VMs are much larger, as I presume they are stored with all state, whereas containers will download executables etc. and build themselves, but I'm not sure yet).
VMs are a new feature in version 4.0 (announced in April)
LXD 4.0 natively supports virtual machines and thanks to a built-in agent, can have them behave almost like containers.
As explained here,
This effectively lets you mix and match containers and virtual machines on the same system based on the workloads you want to run. Those virtual machines use the same profiles, networks and storage pools as the containers.
The VMs are run through qemu using separate VM images. To attempt to get feature parity with LXD containers, an agent is available which when run in the VM makes it possible to use lxc exec, lxc file, … the same way you would with a container.
Some features that VMs are incompatible with include GPU passthrough from the host.
Now that we've seen the image list, we can launch one from the remote.
Some other examples in the table from lxc image list were:
- ubuntu/18.04 (AKA Ubuntu 18.04 Bionic Beaver)
  - container: 95MB
  - VM: 221MB
- ubuntu/focal (AKA Ubuntu 20.04 Focal Fossa)
  - container: 98MB
  - VM: 232MB
These are both x86_64, and there's one each of container/VM.
- Note that the Alpine Linux distribution is 2MB/95MB for container/VM respectively, i.e. its VM is the same size as the container for Ubuntu! This is why it's considered a "lightweight" distro (but others still complain about its size and in Docker land they prefer a "slim" Python distro).
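If you only care about one distribution, the listing can be narrowed by passing a filter term after the remote (a sketch; the filter should match against the alias):
lxc image list images: ubuntu/18.04   # only the Ubuntu 18.04 images
lxc image list images: alpine         # likewise for Alpine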
So let's launch one of these: the Ubuntu 18.04 one, and let's call it first since it's the first container I've made in lxc.
lxc launch images:ubuntu/18.04 first
⇣
Creating first
Starting first
Notice that there was no mention of VMs so this will be launching a container.
To run a virtual machine you need to pass --vm (but there are extra steps for official Ubuntu images, outlined here).
lxc list now shows that the container first is RUNNING:
+-------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| first | RUNNING | 10.208.246.224 (eth0) | fd42:9c64:8636:9b1f:216:3eff:fea3:821f (eth0) | CONTAINER | 0 |
+-------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
Note that IPv4 addresses beginning 10. and IPv6 addresses beginning fd are private addresses, meaning they're like 127. or localhost addresses (not shared outside your network), so if you show someone this they can't use it to access your network over the internet.
- See Wikipedia: Private network:
IP packets originating from or addressed to a private IP address cannot be routed through the public Internet.
The command to launch bash is a bit awkward and is conveniently aliased as lxc shell. This is not listed when you run lxc alias list because it's part of the client.
lxc shell mycontainer will run lxc exec mycontainer -- su -l, giving a login shell (as root) by running su -l inside the container.
This blog suggests you replace this alias with a non-root login shell:
lxc alias add shell "exec @ARGS@ -- su -l ubuntu"
lxc shell mycontainer
Notice they've got a user called ubuntu and a container called mycontainer (I think).
To keep it simpler, I'm just going to launch bash, which is the more standard approach.
lxc exec first bash
This enters a root shell, at /root/
root@first:~# cat /etc/os-release
⇣
NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.5 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
So far so good, we can even access the internet
root@first:~# apt show firefox
⇣
Package: firefox
Version: 80.0.1+build1-0ubuntu0.18.04.1
Priority: optional
Section: web
Origin: Ubuntu
Maintainer: Ubuntu Mozilla Team <[email protected]>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 214 MB
Provides: gnome-www-browser, iceweasel, www-browser
Depends: lsb-release, libatk1.0-0 (>= 1.12.4), libc6 (>= 2.27), libcairo-gobject2 (>= 1.10.0),
libcairo2 (>= 1.10.0), libdbus-1-3 (>= 1.9.14), libdbus-glib-1-2 (>= 0.78), libfontconfig1 (>=
2.12), libfreetype6 (>= 2.3.5), libgcc1 (>= 1:3.3), libgdk-pixbuf2.0-0 (>= 2.22.0), libglib2.0-0 (>=
2.37.3), libgtk-3-0 (>= 3.4), libpango-1.0-0 (>= 1.22.0), libpangocairo-1.0-0 (>= 1.14.0),
libpangoft2-1.0-0 (>= 1.14.0), libstdc++6 (>= 6), libx11-6, libx11-xcb1, libxcb-shm0, libxcb1,
libxcomposite1 (>= 1:0.3-1), libxcursor1 (>> 1.1.2), libxdamage1 (>= 1:1.1), libxext6, libxfixes3,
libxi6, libxrender1, libxt6
Recommends: xul-ext-ubufox, libcanberra0, libdbusmenu-glib4, libdbusmenu-gtk3-4
Suggests: fonts-lyx
Replaces: kubuntu-firefox-installer
Task: ubuntu-desktop, kubuntu-desktop, kubuntu-full, xubuntu-desktop, lubuntu-gtk-desktop,
lubuntu-desktop, ubuntustudio-desktop, ubuntukylin-desktop, ubuntu-mate-core, ubuntu-mate-desktop
Xul-Appid: {ec8030f7-c20a-464f-9b0e-13a3a9e97384}
Supported: 5y
Download-Size: 55.4 MB
Let's go ahead and install firefox
apt install firefox
Note that you can also do this from the host by running
lxc exec first apt install firefox
- Try uninstalling firefox with apt remove, then reinstall from the host to confirm this.
- This command also works as lxc exec first -- apt install firefox, but the -- doesn't seem to be necessary.
lxc exec first which firefox
⇣
/usr/bin/firefox
So now that we've updated the first container we can snapshot it to save this state, as "ffirst" ("first" + "firefox")
lxc snapshot first ffirst
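To confirm the snapshot was taken, the container's info output lists its snapshots (a quick check):
lxc info first   # the Snapshots section should now include ffirst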
Now let's imagine something terrible happens and the container is damaged
lxc exec first -- rm -Rf /etc
Now the container is broken:
lxc exec first bash
⇣
I have no name!@first:~#
Oops, oh well at least we can restore from our snapshot:
lxc restore first ffirst
lxc exec first bash
⇣
root@first:~#
Say I didn't want to go through the whole process of installing Firefox on top of Ubuntu again: I could save this hassle by publishing the ffirst snapshot as an image for internal distribution with lxc publish
lxc publish first/ffirst --alias firefox-ubuntu
⇣
Instance published with fingerprint: 5aeae506b8c73281748fa942fb8051e3c289ac675daa9b68149838cfe824aa34
Since I passed an alias name I get the much more readable name firefox-ubuntu, which I can use instead of the hash 5aeae506..., and which now shows in lxc image list.
I'll now delete that first container, as it's not useful to keep now that I have the new image:
lxc stop first
lxc delete first
and now lxc list shows no running containers, and I can launch a new container using that new image
lxc launch firefox-ubuntu second
⇣
Creating second
Starting second
lxc list
⇣
+--------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| second | RUNNING | 10.208.246.38 (eth0) | fd42:9c64:8636:9b1f:216:3eff:fe6e:d734 (eth0) | CONTAINER | 0 |
+--------+---------+----------------------+-----------------------------------------------+-----------+-----------+
and this time we can see it comes with Firefox ready to go
lxc exec second which firefox
⇣
/usr/bin/firefox
I can also easily pull any file from the container using lxc file pull
lxc file pull second/var/log/syslog .
which copies the file syslog over to the working directory on the host:
ls -l
⇣
-rw-r----- 1 louis lxd 19912 Sep 20 12:58 syslog
Note that the group ownership is lxd but the user ownership is louis (me), since I used newgrp lxd at the beginning. If I try to read this file from a normal shell (user and group both louis) then it's not a problem.
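The reverse direction should work the same way with lxc file push (a sketch; notes.txt is a hypothetical host file):
lxc file push ./notes.txt second/root/notes.txt   # copy a host file into the container's /root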
Also note that the system logs go all the way back to the time I created the first container, i.e. its history doesn't start at the time it was created from the published image as lxc launch firefox-ubuntu second, but from the time it was created as lxc launch images:ubuntu/18.04 first.
Again we can delete that with
lxc stop second
lxc delete second
and lastly let's add a GUI viewing interface by putting spicy in there too
lxc launch firefox-ubuntu third
lxc exec third apt install spice-client-gtk
lxc snapshot third spicythird
lxc publish third/spicythird --alias spicy-firefox-ubuntu
and now we have an image with both Firefox and spicy. SPICE is a protocol for passing a GUI over the VGA virtual graphics port (as I understand it):
You can now pass --console=vga to lxc launch or lxc start, or --type=vga to lxc console. This relies on either remote-viewer or spicy being available on the client system. If neither are, the SPICE socket is mapped on the client and the path is provided.
i.e. we should be able to see a GUI window in our host X server by launching a container from that image like so:
lxc launch spicy-firefox-ubuntu firstgui --console=vga
But nope: as we see, only a virtual machine can be made to run like this
Creating firstgui
Starting firstgui
Error: VGA console is only supported by virtual machines
lxc list shows it was created, however:
+----------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+----------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| firstgui | RUNNING | 10.208.246.254 (eth0) | fd42:9c64:8636:9b1f:216:3eff:fe43:2a98 (eth0) | CONTAINER | 0 |
+----------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| third | STOPPED | | | CONTAINER | 1 |
+----------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
No luck there, let's delete that last container.
lxc stop firstgui
lxc delete firstgui
Delete the one we snapshotted as spicy-firefox-ubuntu too
lxc delete third
Let's now retry getting a GUI as a VM this time. To do so we need to run a few extra steps as mentioned above, and described here
For official Ubuntu images, cloud-init must be used along with a config drive to seed a default user into the VM and allow console access.
Upon more careful inspection, and reading the full thread since April, there's no need to do this.
VM images are being built daily on the community images: remote, so that can be used. Apparently, since the thread was made, Ubuntu 20.04 is now being distributed from the official Ubuntu remote with lxd-agent installed, so it no longer has trouble setting up its network config upon VM initialisation. However, I think it should be as simple as running with --vm after all.
Now that we've prepared, we can try again.
Recall that we tried to run this:
lxc launch spicy-firefox-ubuntu firstgui --console=vga
and now I'm going to again review the available images with
lxc image list images: | less
and select ubuntu/18.04 to launch
- Note there's also a ubuntu/18.04/cloud one which might be plan B, and ubuntu/focal or ubuntu/focal/cloud as plans C and D!
lxc launch images:ubuntu/18.04 ub18vm --vm
This one's 200 MB so it takes a bit of downloading... (This is known as a 'cold start')
lxc list now shows it's active:
+--------+---------+------------------------+-------------------------------------------------+-----------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------+---------+------------------------+-------------------------------------------------+-----------------+-----------+
| ub18vm | RUNNING | 10.208.246.36 (enp5s0) | fd42:9c64:8636:9b1f:216:3eff:feb9:5bbb (enp5s0) | VIRTUAL-MACHINE | 0 |
+--------+---------+------------------------+-------------------------------------------------+-----------------+-----------+
Oops, forgot to launch with VGA! Tear it down (lxc stop ub18vm and lxc delete ub18vm) and relaunch:
lxc launch images:ubuntu/18.04 ub18gui --vm --console=vga
This didn't take any time to set up (a 'warm start', i.e. the image was cached this time), but it also didn't work:
Creating ub18gui
Starting ub18gui
LXD automatically uses either spicy or remote-viewer when present.
As neither could be found, the raw SPICE socket can be found at:
spice+unix:///home/louis/snap/lxd/17299/.config/lxc/sockets/173345718.spice
This is because SPICE needs to be installed; we knew this!
(I had to use Ctrl + \ to get the process to stop; Ctrl + C wasn't enough to terminate it)
So to get that, I'll launch without the VGA console, install the required packages, snapshot the image, and then that will launch with VGA console.
lxc stop ub18gui
lxc delete ub18gui
lxc launch images:ubuntu/18.04 ubuvm --vm
lxc exec ubuvm apt install firefox spice-client-gtk
- lxc exec ubuvm which firefox ⇒ /usr/bin/firefox
- lxc exec ubuvm which spicy ⇒ /usr/bin/spicy
Perfect: now we can snapshot the VM and publish it, as we did for the container. Obviously we could do this for the new daily releases of these images, but there's no strict need to (and we'd avoid the waiting around to 'cold start' if we didn't).
lxc snapshot ubuvm spicyubu
lxc publish ubuvm/spicyubu --alias spicy-firefox-ubuntu-vm
So the snapshot after the apt install call is named spicyubu, then I publish that image [locally] as "spicy-firefox-ubuntu-vm" (matching "spicy-firefox-ubuntu", which was the same but as a container, which I then discovered can't be used with the GUI console).
However it doesn't work!
Error: Failed to run: zfs set volmode=dev default/virtual-machines/ubuvm.block@snapshot-spicyubu: cannot set property for 'default/virtual-machines/ubuvm.block@snapshot-spicyubu': this property can not be modified for snapshots
It's not clear if ZFS is supported on my operating system... I read that it was, but it appears that actually it's only available since Ubuntu 19.10 and still experimental in 20.04. BTRFS is definitely available on my machine (the btrfs shell command is present).
At this point I want to delete everything in the storage pool (I don't want to end up with untracked storage somehow, or prevent it from being freed up), by running lxc list and then lxc stop/lxc delete containername to remove any running containers/VMs, then the same with lxc image list and lxc image delete imagename, or using the FINGERPRINT rather than the ALIAS if they didn't have a name (like "spicy-firefox-ubuntu").
- It doesn't seem like I can delete anything in lxc network list, so I'll just avoid making a new bridge when I re-init.
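For reference, this cleanup can be scripted (a sketch, assuming the csv output format and -c column flags behave as in LXD 4.x; already-stopped instances will just report an error on stop):
for c in $(lxc list -c n --format csv); do lxc stop "$c" --force; lxc delete "$c"; done   # stop and delete every instance
for f in $(lxc image list -c f --format csv); do lxc image delete "$f"; done              # delete every image by fingerprint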
After that's done, once again I clear the default storage and re-run the lxd init, this time choosing BTRFS storage.
printf 'config: {}\ndevices: {}' | lxc profile edit default
lxc storage delete default
⇣
Storage pool default deleted
and then lxd init:
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm, zfs, ceph, btrfs) [default=zfs]: btrfs
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty disk or partition? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=30GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: lxdbr0
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Everything now seems to be working, so let's try again!
lxc launch images:ubuntu/18.04 ubuvm --vm
lxc exec ubuvm apt install firefox spice-client-gtk
It seems to be a good sign that this image takes a while to download, as it means the cache is gone, which (I believe) means that the storage pool was properly deleted and replaced by the BTRFS storage pool (which hopefully will work!)
and once again publish it, and this time it works:
lxc snapshot ubuvm spicyubu
lxc publish ubuvm/spicyubu --alias spicy-firefox-ubuntu-vm
⇣
Instance published with fingerprint: 4bad808...
So that means I can delete the VM and create a new one from this VM image in VGA console!
lxc stop ubuvm
lxc delete ubuvm
lxc launch spicy-firefox-ubuntu-vm ub18gui --vm --console=vga
This gives an error but seems to work
(spicy:2118): dbind-WARNING **: 18:32:10.783: Couldn't register with accessibility bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
** Message: 18:32:12.173: main channel: opened
Then I can't log in (password is not given?) but I do see the GUI.
I retry with Ubuntu 20.04 next
lxc launch images:ubuntu/focal ubu20vm --vm
lxc exec ubu20vm apt install firefox spice-client-gtk
lxc snapshot ubu20vm spicyubu20
lxc publish ubu20vm/spicyubu20 --alias spicy-firefox-ubuntu20-vm
lxc stop ubu20vm
lxc delete ubu20vm
lxc launch spicy-firefox-ubuntu20-vm ub20ui --vm --console=vga
...at this point I get informed the BTRFS filesystem is out of space!
You can probably inspect this by mounting the image at /var/snap/lxd/common/lxd/disks, but I'm not sure, and am just going to use the one I set up already.
In the meantime, Stéphane Graber replied to my query about passwords:
Our images never have accounts setup with passwords.
You have two ways to set passwords up:
- Use a cloud variant of the image and pass it some cloud-init user data
- With any image, use lxc exec NAME -- passwd ubuntu to set a password on the ubuntu user (or any other user you may want to get login access to)
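The first (cloud-init) option would look something like this (a hedged sketch: "changeme" is a placeholder password, and the exact cloud-config keys are my assumption, not from the reply):
lxc launch images:ubuntu/18.04/cloud cloudvm --vm --config user.user-data='#cloud-config
chpasswd:
  list: |
    ubuntu:changeme
  expire: false'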
lxc delete ub18gui
lxc launch spicy-firefox-ubuntu-vm ub18gui --vm
lxc exec ub18gui -- passwd ubuntu
I don't get as far as setting the password however, as there's an error: lxd-agent is not installed. The Arch wiki mentions this happening and suggests you sudo -i in, then:
root@v1:~# mount -t 9p config /mnt/
root@v1:~# cd /mnt/
root@v1:/mnt# ./install.sh
Created symlink /etc/systemd/system/multi-user.target.wants/lxd-agent.service →
/lib/systemd/system/lxd-agent.service.
Created symlink /etc/systemd/system/multi-user.target.wants/lxd-agent-9p.service →
/lib/systemd/system/lxd-agent-9p.service.
LXD agent has been installed, reboot to confirm setup.
To start it now, unmount this filesystem and run: systemctl start lxd-agent-9p lxd-agent
I'm going to start afresh and see if I can exec inside the original VM to change the password, before I modify it with apt this time.
lxc delete ub18gui
lxc image delete spicy-firefox-ubuntu-vm
printf 'config: {}\ndevices: {}' | lxc profile edit default
lxc storage delete default
lxd init
This time I changed a few settings:
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (zfs, ceph, btrfs, dir, lvm) [default=zfs]: btrfs
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty disk or partition? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=30GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: lxdbr0
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] no
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
i.e. all default except, again, a BTRFS storage pool set up (I was notified that the 30GB of the last one had been reached, so wanted to clear it all away), again using the existing lxdbr0 bridge, and this time also not updating stale cached images automatically. I suspect this was the cause of the 'phantom' rsync processes (but I didn't check their /proc/PID/fd to see what they were touching).
So this time I checked lxc image list images: | less and chose Alpine Linux to try:
lxc launch images:alpine/3.12 alpvm --vm --console=vga
This booted to an error screen (saying please boot over IPv4)... Shame as this is a smaller image!
The Alpine Linux VM would then not respond to lxc stop, so I had to use lxc stop -f (force stop).
Trying again with lxc launch images:ubuntu/focal ubu20vm --vm --console=vga, I get a screen with the LXD logo and no password auth screen. If I try lxc console ubu20vm I get no response, but this seems to be because the prompt is hidden: I am actually sitting at the login page, and if I hit enter, it asks for a password.
If I run lxc exec ubu20vm bash I can then get a command line with root access, indicating I can now set the password:
lxc exec ubu20vm passwd ubuntu
⇣
New password:
Retype new password:
passwd: password updated successfully
and now we're in!
lxc console ubu20vm
⇣
To detach from the console, press: <ctrl>+a q
ubuntu
Password:
Login incorrect
ubu20vm login: ubuntu
Password:
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-47-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@ubu20vm:~$
Trying this a few times, the first login always seems to fail, so don't try to enter the username and password; let it fail, then when asked again with the proper prompt, use the username ubuntu and the password you set up.
Interestingly,
To run a command as administrator (user "root"), use "sudo <command>". See "man sudo_root" for details.
but no password is requested to do so (root is unpassworded but the ubuntu user is!)
ubuntu@ubu20vm:~$ ls -l /var/log/syslog
-rw-r----- 1 syslog adm 103377 Sep 20 20:51 /var/log/syslog
ubuntu@ubu20vm:~$ ls
ubuntu@ubu20vm:~$ cp /var/log/syslog .
cp: cannot open '/var/log/syslog' for reading: Permission denied
ubuntu@ubu20vm:~$ sudo cp /var/log/syslog .
ubuntu@ubu20vm:~$ ls
syslog
ubuntu@ubu20vm:~$ ls -l syslog
-rw-r----- 1 root root 103377 Sep 20 20:53 syslog
When I exit the console with Ctrl + a then q, I can still see this syslog file in the user directory
lxc exec ubu20vm ls /home/ubuntu
⇣
syslog
...and let's try starting with VGA again
lxc stop ubu20vm
lxc start ubu20vm --console=vga
lxc console ubu20vm --type=vga
Again I just get the logo on screen, and not the login interface provided over console.
Once again deleting the ubu20vm and the image it was generated from, I'm going to try running an Ubuntu 18.04 image as a VM again.
lxc launch images:ubuntu/18.04 ubu18vm --vm
This time I won't step into it with VGA straight away, as I know I need to set the password.
I quickly checked I could get the root shell by entering lxc exec ubu18vm bash, then proceeded:
lxc exec ubu18vm passwd ubuntu
⇣
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
That worked easily, so I next get the VGA console (the machine is still running):
lxc console ubu18vm --type=vga
and get an error that "no DISPLAY environment variable specified".
Searching around, I find that this has been documented here, and that first of all I should be on v4.6 of LXD (as I suspected earlier, but avoided since I'm not familiar with the snap package manager).
- In turn, that blog linked is associated with this forum thread
Following these instructions (which note that it'll be necessary to change back from the candidate channel to stable, tomorrow in fact):
snap refresh lxd --candidate
⇣
lxd (candidate) 4.6 from Canonical✓ refreshed
lxc --version
⇣
4.6
Next, you check the $DISPLAY env variable on your system (mine is :0) and change the colon to X (so mine is X0). You then set that as the end part of the connect entry in the following file, x11.profile:
config:
environment.DISPLAY: :0
environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
nvidia.driver.capabilities: all
nvidia.runtime: "true"
user.user-data: |
#cloud-config
runcmd:
- 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
packages:
- x11-apps
- mesa-utils
- pulseaudio
description: GUI LXD profile
devices:
PASocket1:
bind: container
connect: unix:/run/user/1000/pulse/native
listen: unix:/home/ubuntu/pulse-native
security.gid: "1000"
security.uid: "1000"
uid: "1000"
gid: "1000"
mode: "0777"
type: proxy
X0:
bind: container
connect: unix:@/tmp/.X11-unix/X0
listen: unix:@/tmp/.X11-unix/X0
security.gid: "1000"
security.uid: "1000"
type: proxy
mygpu:
type: gpu
name: x11
used_by: []
Then, create the profile with the following commands. This creates a profile called x11.
lxc profile create x11
⇣
Profile x11 created
cat x11.profile | lxc profile edit x11
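To double-check the suffix you need, shell parameter expansion can strip the leading colon from $DISPLAY (a small sketch):
echo "$DISPLAY"        # e.g. :0
echo "X${DISPLAY#:}"   # e.g. X0, the suffix used in the profile's X11 socket paths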
Next, to launch an Ubuntu instance, add --profile x11 before the name of the container, like so:
lxc launch ubuntu:18.04 --profile default --profile x11 mycontainer
except I want to use the community images not the official one, so I'll go with
lxc stop ubu18vm
lxc delete ubu18vm
lxc launch images:ubuntu/18.04 --vm --profile default --profile x11 ubu18vm
Error: Failed instance creation: Create instance: Create instance: Invalid devices: Device validation failed for "PASocket1": Unsupported device type
Well, that makes sense: the bind value of "container" doesn't match the VM I just told it to make... OK, let's try a container instead?
lxc launch images:ubuntu/18.04 ubu18 --profile default --profile x11
⇣
Creating ubu18
Starting ubu18
But then an error
Error: Failed to run: /snap/lxd/current/bin/lxd forkstart ubu18 /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/ubu18/lxc.conf: Try `lxc info --show-log local:ubu18` for more info
I run that command and get
Name: ubu18
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/09/20 22:02 UTC
Status: Stopped
Type: container
Profiles: default, x11
Followed by a log:
lxc ubu18 20200920220256.128 WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.ubu18"
lxc ubu18 20200920220256.129 WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.ubu18"
lxc ubu18 20200920220256.130 ERROR utils - utils.c:lxc_can_use_pidfd:1834 - Kernel does not support pidfds
lxc ubu18 20200920220256.545 ERROR conf - conf.c:run_buffer:324 - Script exited with status 1
lxc ubu18 20200920220256.545 ERROR conf - conf.c:lxc_setup:3292 - Failed to run mount hooks
lxc ubu18 20200920220256.545 ERROR start - start.c:do_start:1224 - Failed to setup container "ubu18"
lxc ubu18 20200920220256.545 ERROR sync - sync.c:__sync_wait:41 - An error occurred in another process (expected sequence number 5)
lxc ubu18 20200920220256.548 WARN network - network.c:lxc_delete_network_priv:3185 - Failed to rename interface with index 0 from "eth0" to its initial name "veth7c3decdc"
lxc ubu18 20200920220256.548 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:850 - Received container state "ABORTING" instead of "RUNNING"
lxc ubu18 20200920220256.548 ERROR start - start.c:__lxc_start:1999 - Failed to spawn container "ubu18"
lxc ubu18 20200920220256.548 WARN start - start.c:lxc_abort:1024 - No such process - Failed to send SIGKILL to 12371
lxc ubu18 20200920220256.709 WARN cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1109 - Success - Failed to initialize cpuset /sys/fs/cgroup/cpuset//lxc.pivot/lxc.pivot
lxc 20200920220256.710 WARN commands - commands.c:lxc_cmd_rsp_recv:124 - Connection reset by peer - Failed to receive response for command "get_state"
It does sound like the first warning was due to a pre-existing file or directory, but I never made any container called ubu18 before, so this shouldn't be the case.
This appears to arise due to "stricter AppArmor policy", which was the whole point of upgrading to 4.6 (see discussion)
OK, I'll try it the author's way:
lxc launch ubuntu:18.04 --profile default --profile x11 mycontainer
Again it's the same error. At this point I suspect I may need to re-initialise everything.
- Delete all the containers and images
printf 'config: {}\ndevices: {}' | lxc profile edit default
lxc storage delete default
lxc profile delete x11
lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: btrfs
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty disk or partition? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=30GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: lxdbr0
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] no
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no
lxc profile create x11
cat x11.profile | lxc profile edit x11
lxc launch ubuntu:18.04 --profile default --profile x11 x11ubu18
Once again I get the same error
Creating x11ubu18
Starting x11ubu18
Error: Failed to run: /snap/lxd/current/bin/lxd forkstart x11ubu18 /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/x11ubu18/lxc.conf:
Try `lxc info --show-log local:x11ubu18` for more info
and when I run lxc info --show-log local:x11ubu18 I get the same log:
Name: x11ubu18
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/09/20 22:38 UTC
Status: Stopped
Type: container
Profiles: default, x11
Log:
lxc x11ubu18 20200920223830.243 WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.x11ubu18"
lxc x11ubu18 20200920223830.244 WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.x11ubu18"
lxc x11ubu18 20200920223830.245 ERROR utils - utils.c:lxc_can_use_pidfd:1834 - Kernel does not support pidfds
lxc x11ubu18 20200920223830.487 ERROR conf - conf.c:run_buffer:324 - Script exited with status 1
lxc x11ubu18 20200920223830.487 ERROR conf - conf.c:lxc_setup:3292 - Failed to run mount hooks
lxc x11ubu18 20200920223830.487 ERROR start - start.c:do_start:1224 - Failed to setup container "x11ubu18"
lxc x11ubu18 20200920223830.487 ERROR sync - sync.c:__sync_wait:41 - An error occurred in another process (expected sequence number 5)
lxc x11ubu18 20200920223830.491 WARN network - network.c:lxc_delete_network_priv:3185 - Failed to rename interface with index 0 from "eth0" to its initial name "veth23adf325"
lxc x11ubu18 20200920223830.491 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:850 - Received container state "ABORTING" instead of "RUNNING"
lxc x11ubu18 20200920223830.492 ERROR start - start.c:__lxc_start:1999 - Failed to spawn container "x11ubu18"
lxc x11ubu18 20200920223830.492 WARN start - start.c:lxc_abort:1024 - No such process - Failed to send SIGKILL to 14669
lxc 20200920223830.658 WARN commands - commands.c:lxc_cmd_rsp_recv:124 - Connection reset by peer - Failed to receive response for command "get_state"
The WARN messages can be ignored, and the error arises because "Kernel does not support pidfds", but as seen here this too is a red herring: the real error is from the next line, about the config not working, and this is likely due to the Nvidia lines.
lxc delete x11ubu18
lxc profile delete x11
Then edit the file x11.profile again, removing the lines for nvidia.driver.capabilities and nvidia.runtime (even if you have an NVIDIA driver, if it's causing a problem here). (Source)
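One way to make that edit non-interactively (a sketch, assuming the two nvidia keys are the only lines containing "nvidia."):
sed -i '/nvidia\./d' x11.profile   # drop the nvidia.driver.capabilities and nvidia.runtime lines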
Once again
lxc profile create x11
cat x11.profile | lxc profile edit x11
and now re-running lxc launch ubuntu:18.04 --profile default --profile x11 x11ubu18
...
Creating x11ubu18
Starting x11ubu18
It worked! lxc exec x11ubu18 bash brings up a root shell, so I know I can now...
lxc exec x11ubu18 -- sudo --user ubuntu --login
Rather than resetting the password (but I'll do it anyway)
lxc exec x11ubu18 passwd ubuntu
After this, lxc console x11ubu18 brings up the login screen, which brings you to the same place that the previous command (sudo --user ubuntu --login) took you to immediately.
We can then run diagnostic commands
- glxinfo -B all comes back OK
- pactl info says "connection refused", which apparently indicates an error in the pulseaudio installation
Reading more about the source and how to debug pactl info problems in LXD, I came across the suggestion to place the following lines after the pulseaudio line in x11.profile:
write_files:
- owner: root:root
permissions: '0644'
append: true
content: |
PULSE_SERVER=unix:/home/ubuntu/pulse-native
path: /etc/environment
(There's another recommendation in that thread if I need a plan B, involving splitting the profile up into two parts and making the pulseaudio proxy a device instead)
lxc stop x11ubu18
lxc delete x11ubu18
lxc profile delete x11
lxc profile create x11
cat x11.profile | lxc profile edit x11
lxc launch ubuntu:18.04 --profile default --profile x11 x11ubu18
lxc exec x11ubu18 passwd ubuntu
lxc exec x11ubu18 -- sudo --user ubuntu --login
glxinfo -B
pactl info
⇣
Connection failure: Connection terminated
Still no pulseaudio!
The other option was to split x11.profile into two parts:
config:
environment.DISPLAY: :0
user.user-data: |
#cloud-config
packages:
- x11-apps
- mesa-utils
description: GUI LXD profile
devices:
X0:
bind: container
connect: unix:@/tmp/.X11-unix/X0
listen: unix:@/tmp/.X11-unix/X0
security.gid: "1000"
security.uid: "1000"
type: proxy
mygpu:
type: gpu
name: x11
used_by: []
pa component:
config:
raw.idmap: "both 1000 1000"
user.user-data: |
#cloud-config
runcmd:
- 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
- 'echo export PULSE_SERVER=unix:/tmp/.pulse-native | tee --append /home/ubuntu/.profile'
packages:
- pulseaudio
description: pulseaudio LXD profile
devices:
PASocket:
path: /tmp/.pulse-native
source: /run/user/1000/pulse/native
type: disk
name: pa
used_by:
I'll save this as x11v2.profile and run
lxc profile create x11v2
cat x11v2.profile | lxc profile edit x11v2
lxc launch ubuntu:18.04 --profile default --profile x11v2 x11ubu18v2
lxc exec x11ubu18v2 passwd ubuntu
lxc exec x11ubu18v2 -- sudo --user ubuntu --login
glxinfo -B
pactl info
...and now we have a working pactl info response but a broken glxinfo -B response!
This indicates that the right answer is probably a combination of both of these profiles.
Taking a more careful look, I noticed the answer actually said
Containers are launched by combining both profiles along with default. Both methods get the job done, but I prefer quater’s as being more elegant.
In other words, you supply all 3 of these profiles!
This should work:
lxc stop --all
lxc delete x11ubu18 x11ubu18v2
lxc launch ubuntu:18.04 --profile default --profile x11 --profile x11v2 x11ubu18
lxc exec x11ubu18 passwd ubuntu
lxc exec x11ubu18 -- sudo --user ubuntu --login
glxinfo -B
pactl info
It still doesn't work... Again only the pulseaudio works. I now realise these are supposed to be two files; it's just hard to read the forum separations at my screen contrast... oops
One more try
lxc profile delete x11
lxc profile delete x11v2
lxc profile create pa
lxc profile create x11
mv x11v2.profile x11_partial.profile
cp x11_partial.profile pulseaudio_partial.profile
vim *_partial.profile # split the files, save, advance/exit with `:n`/`:x`
cat x11_partial.profile | lxc profile edit x11
cat pulseaudio_partial.profile | lxc profile edit pa
lxc launch ubuntu:18.04 --profile default --profile x11 --profile pa x11ubu18
lxc exec x11ubu18 passwd ubuntu
lxc exec x11ubu18 -- sudo --user ubuntu --login
glxinfo -B
pactl info
Again only the pulseaudio worked... I edited the last line of the pulseaudio profile, since it looked like it needed a [] for the value of the used_by key, then remade:
lxc stop --all
lxc delete x11ubu18
lxc profile delete pa
lxc profile delete x11
lxc profile create pa
lxc profile create x11
cat x11_partial.profile | lxc profile edit x11
cat pulseaudio_partial.profile | lxc profile edit pa
lxc launch ubuntu:18.04 --profile default --profile x11 --profile pa x11ubu18
#lxc exec x11ubu18 passwd ubuntu
lxc exec x11ubu18 -- sudo --user ubuntu --login
glxinfo -B
pactl info
...now when I reset it to how it was before I edited the pulseaudio profile, pactl info errors (connection refused)... which shouldn't happen given the same input for which it worked last time! Something is amiss...
- All 4 together give a working glxinfo -B and a "terminated" pactl info
Now I'm going to try the ubuntu/18.04/cloud from the community images: remote instead...
lxc stop --all
lxc delete x11ubu18
lxc launch images:ubuntu/18.04/cloud --profile default --profile x11 --profile pa x11ubu18
which at least has the pulseaudio working!
ubuntu@x11ubu18:~$ glxinfo -B
-bash: glxinfo: command not found
ubuntu@x11ubu18:~$ pactl info
Server String: unix:/tmp/.pulse-native
Library Protocol Version: 32
Server Protocol Version: 32
Is Local: yes
Client Index: 18
Tile Size: 65472
User Name: louis
Host Name: louis-Aspire-E1-571
Server Name: pulseaudio
Server Version: 11.1
Default Sample Specification: s16le 2ch 44100Hz
Default Channel Map: front-left,front-right
Default Sink: alsa_output.pci-0000_00_1b.0.analog-stereo
Default Source: alsa_input.pci-0000_00_1b.0.analog-stereo
Cookie: 4a4f:942f
I then ran sudo apt install x11-apps mesa-utils and now the output matches that on the host! (But it's annoying that I couldn't get it to work from the profile YAML.)
Now that it's been set up, I can run a GUI program (xclock) and view it as a window on the host machine!!! :-)
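In command form, that looks something like this (a sketch; x11ubu18 is the container from above):
lxc exec x11ubu18 -- sudo --user ubuntu --login   # log in as the ubuntu user
xclock &                                          # the clock window should appear on the host's X display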
Then I can install Firefox... and it works!
lxc snapshot x11ubu18 ffubu
lxc publish x11ubu18/ffubu --alias firefox-ubuntu
Now this image can be used to create new containerised Firefox sessions!
lxc launch firefox-ubuntu --profile default --profile pa --profile x11 ff2
lxc exec ff2 -- sudo --user ubuntu --login
firefox
One thing to note is the error about 'sandbox', which doesn't seem to be as bad as it sounds: firstly, the AppArmor feature of LXD handles the security carefully, and secondly, it seems to mean (from what I read in this bug issue) that it's failing to read the host's parent process running the container due to sandboxing, not that it's not doing sandboxing. This in turn leads to this bug issue.
- I again recommend Simos's blog for discussion of the concepts here, as well as this specific post on this implementation in particular, which I used (including the debugging in the comments)
- To get started, I watched Stéphane Graber's 2015 talk "An introduction to LXD, the container lighter-visor" (a lot of this is adapted from that)
- The name "lighter-visor" is a pun on hypervisor, which is another word for VMM [Virtual Machine Manager]; the "lighter" part is about containers being lightweight.
- Also read his blog category LXD