Kata Containers User Guide
:warning: :warning: :warning: :warning: :warning: :warning: :warning: :warning: :warning: :warning: :warning: :warning: :warning:
THIS DOCUMENT IS A WORK IN PROGRESS
:warning: :warning: :warning: :warning: :warning: :warning: :warning: :warning: :warning: :warning: :warning: :warning: :warning:
- Kata Containers User Guide
- Installation
- Configuration
- Workloads
- Appendix
This Kata Containers User Guide aims to be a comprehensive guide to understanding, installing, configuring, and using Kata Containers.
The Kata Containers source repositories contain significant amounts of other documentation, covering some subjects in more detail.
What is Kata Containers?
Kata Containers is, as defined in the community repo:
Kata Containers is an open source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs.
It is an additional, drop-in, OCI-compatible container runtime, and can therefore be used with Docker and Kubernetes.
Installation
Supported platforms
Kata Containers is primarily a Linux-based application. It can be installed on the most common Linux distributions using their native packaging tools.
Details on installation can be found in the documentation repository.
For the curious, the adventurous, developers, or those using a distribution not presently supported with pre-built packages, Kata Containers can also be installed from source. If you are on a distribution that is not presently supported, please feel free to reach out to the community to discuss adding support. Of course, contributions are most welcome.
docker
Kata Containers can be installed into Docker as an additional container runtime. This does
not remove any functionality from Docker. You can choose which container runtime Docker
uses by default when none is specified, and you can run Kata Containers runtime containers
in parallel with other containers using a different container runtime (such as the default
Docker runc runtime).
Instructions on how to configure Docker to add Kata Containers as a runtime can be found in the documentation repository.
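As a minimal sketch only (the documentation repository is authoritative), registering the runtime typically amounts to adding an entry to the Docker daemon configuration and restarting the daemon. The binary path /usr/bin/kata-runtime below is an assumption; adjust it to wherever your installation placed the runtime, and merge the snippet with any existing daemon.json rather than overwriting it.

```sh
$ # Assumed install path for the runtime binary: /usr/bin/kata-runtime
$ # Note: this overwrites /etc/docker/daemon.json - merge with existing settings if present
$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "runtimes": {
    "kata-runtime": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
EOF
$ sudo systemctl restart docker

$ # Launch a container with the Kata runtime explicitly selected
$ sudo docker run --runtime kata-runtime -it busybox sh
```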
compose
Note that Kata Containers presently may not function fully in all docker compose
situations. In particular, Docker Compose makes use of network links to
supply its own internal DNS service, which is difficult for Kata Containers to replicate.
Work is ongoing, and the Kata Containers limitations
document can be checked for the present state.
Kubernetes
Kata Containers can be integrated as a runtime into Kubernetes, via either CRI-containerd or CRI-O.
For details on configuring Kata Containers with CRI-containerd, see this document.
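As an illustrative sketch only, once the CRI runtime on the nodes has a Kata handler configured, pods can be steered onto Kata Containers. The example below assumes a cluster recent enough to support the RuntimeClass API and a handler named kata configured in the CRI runtime; both are assumptions, so check the linked document for the approach that matches your setup.

```sh
$ # Define a RuntimeClass pointing at the (assumed) "kata" handler
$ cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
EOF

$ # Run a pod using that RuntimeClass
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: kata-test
spec:
  runtimeClassName: kata
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF
```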
pods
Note that pods behave somewhat differently from plain Docker containers (for example in memory and CPU scaling); these differences are noted throughout this document.
Zun
Kata Containers can be used as a runtime for OpenStack by integrating with Zun. Details on how to set up this integration can be found in this document.
Configuration
Kata Containers has a comprehensive TOML-based configuration file. Much of the information on the available configuration options is contained directly in that file. This section expands on some of the details and finer points of the configuration.
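To see which configuration file is in use and which settings are currently in effect, the kata-runtime command itself can be queried. A small sketch, assuming the runtime binary is on your PATH:

```sh
$ # List the paths searched for configuration.toml
$ kata-runtime --kata-show-default-config-paths

$ # Dump the settings the runtime is actually using
$ sudo kata-runtime kata-env
```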
Images
Kata Containers supports rootfs images and initrd images. It also supports running with either the kata-agent
as the init process, or systemd.
The kata-containers-image
package includes both a rootfs-based image and an initrd-based image. Currently, the default configuration.toml
configuration file specifies a rootfs image using systemd as the init daemon.
To help decide which combination of image and init daemon is appropriate for your uses, consider the following table:
Image type | init | Boot speed | Image Size | Supports Factory? | Supports agent tracing? | Supports debug console? | Notes |
---|---|---|---|---|---|---|---|
rootfs | systemd | good | small | no | yes | yes | Flexible as easy to customise |
rootfs | agent | fastest | smaller | no | yes | no | |
initrd | agent | faster | smallest | yes | no | no | Not as flexible as systemd-based image |
initrd | systemd | n/a | n/a | n/a | n/a | no | Not supported |
Note:
To determine what type of image your system is configured for, run the following command and look at the "Image details" information:
$ sudo kata-collect-data.sh
Or, to just see the details, run:
$ sudo kata-collect-data.sh | sed -ne "/osbuilder:/, /\`\`\`/ p" | egrep "description:|agent-is-init-daemon:"
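Alternatively, the selection can be read directly from the configuration file: only one of the image or initrd entries should be active at a time. The path below assumes a default packaged install; adjust it for your system.

```sh
$ # Assumed default configuration path; your installation may differ
$ grep -E '^(image|initrd) *=' /usr/share/defaults/kata-containers/configuration.toml
```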
Memory
As Kata Containers runs containers inside VMs, it differs from software containers in how memory is allocated and restricted to the container. VMs are allocated an amount of memory, whereas software containers can run either unconstrained (they can access, and share with other containers, all of the host memory), or they can have some constraints imposed upon them via hard or soft limits.
memory allocation
If no constraints are set, Kata Containers sets the VM memory size to the value configured in the runtime configuration file (2048 MiB by default). If a memory constraint is requested, that amount is added on top of the configured default.
Kata Containers gets the memory constraint information from the OCI JSON file passed to it by the orchestration layer. In the case of Docker, constraints can be set on the command line; for Kubernetes, you can set memory limits and requests in the pod specification.
Note: we should detail how limits and requests map into Kata VMs.
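For example, with Docker a memory constraint can be requested on the command line; under Kata Containers the VM is then sized to the configured default plus the requested amount:

```sh
$ # Request a 512 MiB limit; the VM gets the configured default
$ # (2048 MiB by default) plus the requested 512 MiB
$ sudo docker run --runtime kata-runtime -m 512m -it busybox sh
```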
CPUs
If the container orchestrator provides CPU constraints, then Kata Containers configures the VM per those constraints (rounded up to the nearest whole CPU), plus one extra CPU (to cater for any VM overheads). More details can be found in the cpu constraints document.
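For example, requesting a fractional CPU count from Docker results in the value being rounded up inside the VM:

```sh
$ # Request 1.5 CPUs; the VM is sized for 2 vCPUs (rounded up)
$ # plus one extra vCPU to cater for VM overheads
$ sudo docker run --runtime kata-runtime --cpus 1.5 -it busybox sh
```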
filesystems and storage
host graph drivers
Kata Containers maps the container rootfs into the VM differently depending on the host-side graph driver: it is exposed either as a block device or via 9p. A quick check of which driver is in use is sketched below.
non-block exposed (overlay et al.)
block exposed (devicemapper et al.)
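As a quick check (assuming Docker is the container engine in use), the host graph driver can be inspected as follows:

```sh
$ # e.g. "overlay2" (exposed via 9p) or "devicemapper" (exposed as a block device)
$ sudo docker info | grep "Storage Driver"
```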
rootfs mounts
volume mounts
Types of filesystems
SPDK
Ceph et al.
VM features
Hypervisors
Kernels
tracking stable
required features
kernel-per-pod
See this issue for more details on how to configure per-pod kernels.
rootfs
images
See this issue for more details on how to configure per-pod images.
initrd
PC types
QEMU versions
NEMU
NEMU is a version of the QEMU hypervisor specifically tailored for lightweight cloud use cases. NEMU can be integrated and used with Kata Containers; a guide can be found in this document.
Random and entropy
direct hardware mapping
SR-IOV
Kata Containers supports passthrough of SR-IOV devices to the container workloads. A guide on configuration can be found in this document.
docker device arguments
GPU
Kata Containers supports direct GPU assignment to containers. Documentation can be found here.
notes on scaling
ptys, file handles, network size.
migration
KSM
DAX
ballooning
pinning on the host
legacy workloads
Some legacy workloads, such as centos:6 and debian:7, require the kernel CONFIG_LEGACY_VSYSCALL_EMULATE
option to be enabled in order to work, for instance because of their older (pre-2.15) versions of glibc. By default
the Kata Containers kernel does not enable this feature, which may result in such workloads failing (such as bash
creating a core dump).
The vsyscall feature can be enabled in the Kata kernel without a recompile, by adding vsyscall=emulate to the
kernel parameters in the Kata Containers configuration file.
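As a sketch only, the parameter can be appended to the kernel_params entry of the configuration file. The path below assumes a default packaged install; adjust it for your system.

```sh
$ # Assumed configuration path; your installation may differ
$ CONF=/usr/share/defaults/kata-containers/configuration.toml

$ # Append vsyscall=emulate to the existing kernel_params value
$ sudo sed -i 's/^kernel_params = "\(.*\)"/kernel_params = "\1 vsyscall=emulate"/' "$CONF"

$ # Verify the change
$ grep '^kernel_params' "$CONF"
```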
Note that this change will affect all Kata Containers launched, and may reduce the security of your containers.
Networking
This section covers the configurations and variations of networking supported by Kata Containers.
It covers the default networking cases, as well as advanced use cases and acceleration techniques.
veth
macvtap
CNI
Kata Containers supports the CNI networking plugins by default. This is the preferred networking model for use with Kata Containers and Kubernetes.
CNM
Kata Containers does support the CNM networking model, but CNI is the preferred model.
DPDK
Kata Containers can be used with DPDK. An example can be found in the Kata Containers VPP guide.
VPP
Kata Containers can use VPP. Instructions can be found in this document.
Security layers
There are a number of different security tools and layers that can be applied at a number of different levels (such as on the host or inside the container) in Kata Containers. This section details which layers are supported and where, and if they are enabled by default or not.
SELinux
There are plans to construct a host side SELinux profile for Kata Containers.
There has also been discussion and valid use cases proposed for enabling SELinux inside the containers, in particular in the case of multi-container pods, where SELinux isolation between the containers may be desirable.
seccomp
seccomp is supported by Kata Containers inside the guest container, but is not enabled by default in the shipped rootfs (as it adds overheads to the system that not all users may want).
seccomp can be enabled by building a new rootfs image using osbuilder, whilst setting SECCOMP=true
in your environment.
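A minimal sketch of such a build, assuming the osbuilder tooling from the kata-containers/osbuilder repository and Clear Linux as an (arbitrary) guest distribution; the distribution choice and the USE_DOCKER flag are assumptions, so see the osbuilder documentation for the full image-building steps:

```sh
$ git clone https://github.com/kata-containers/osbuilder.git
$ cd osbuilder/rootfs-builder

$ # Build the guest rootfs with seccomp support enabled
$ sudo -E SECCOMP=true USE_DOCKER=true ./rootfs.sh clearlinux

$ # The resulting rootfs then needs to be turned into an image (or initrd)
$ # and referenced from configuration.toml - see the osbuilder documentation
```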
AppArmor
AppArmor support is not currently present in Kata Containers.
Workloads
Some containers (workloads) may require special treatment to run under Kata Containers, in particular
workloads that interact closely with the host, such as those using --privileged
mode or passing in host-side items such as sockets.
This section will detail known 'special' use cases, and where possible, additions, tweaks and workarounds that can be used to enable such workloads to function under Kata Containers.
X11 containers
The x11docker project is known to be able to run at least a subset of X11 applications under Kata Containers. See the project documentation for more details.
Appendix
Things that are missing...
entropy