Fedora k8s kind
"Kind" is tool to quickly create k8s clusters using nested containers. So you can use use anything (laptop or PC with single VM) to manage several clusters on it.
Caution
I originally tested it with podman, but podman support in Kind is "experimental" - for example the Cilium network layer has issues - so I recommend using Docker on your Host system.
Sidenote: You may know that k8s deprecated Docker, but this deprecation concerns only Docker as a CRI (the container runtime used by k8s to run containers). Kind uses Docker only for the top-level containers, not inside k8s. When you look inside a Kind Docker container, containerd is used as the CRI, which is officially supported.
Homepage: https://kind.sigs.k8s.io/
First install docker:
sudo dnf install docker-cli
# add yourself to 'docker' group:
sudo /usr/sbin/usermod -G docker -a $USER
# ensure that docker containers will autostart on boot:
sudo systemctl enable docker.service iscsi
# reboot
sudo reboot
# verify that docker is running
docker ps # should provide empty output *without* error
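If you want to double-check that your user really landed in the docker group and that the daemon answers, you can use something like this (a small sketch, not part of the original steps):
id -nG | tr ' ' '\n' | grep -x docker        # should print: docker
docker info --format '{{.ServerVersion}}'    # should print the daemon version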
How I started it under Fedora 41:
cd
curl -fsSLo ./kind https://kind.sigs.k8s.io/dl/v0.26.0/kind-linux-amd64
chmod +x kind
sudo mv kind /usr/local/bin/
sudo chown root:root /usr/local/bin/kind
# This creates a cluster:
kind create cluster
# You need the 'kubectl' client - its version should match the one printed by Kind, for example:
sudo dnf install kubernetes1.32-client
# now try:
kubectl get nodes
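To confirm that the client and server versions really match, you can compare them like this (a quick check, not in the original notes):
kubectl version     # prints both Client Version and Server Version
kind version        # prints the version of the kind binary itself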
Optional: colored output from the kubectl command:
cd
curl -fLO https://github.com/kubecolor/kubecolor/releases/download/v0.4.0/kubecolor_0.4.0_linux_amd64.tar.gz
tar xvfz kubecolor_0.4.0_linux_amd64.tar.gz
sudo cp kubecolor /usr/local/bin/
- WARNING! There are 2 similar GitHub projects offering kubecolor - the latter is archived, so I hope the first one is the right version.
Finally append to your ~/.bashrc:
alias k='kubecolor'
And reload shell:
source ~/.bashrc
And use the k command instead of kubectl, for example to get all pods in all namespaces:
k get pod -A
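Optionally you can also wire bash completion to the k alias - a sketch assuming bash and that kubectl's completion script is available (kubecolor simply forwards its arguments to kubectl):
# append to ~/.bashrc after the alias:
source <(kubectl completion bash)
complete -o default -F __start_kubectl k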
To quickly test your k8s cluster we will follow this resource:
- https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/

- First create a Deployment (a Pod with containers) with:
kubectl create deployment hello-deploy \
  --image=registry.k8s.io/e2e-test-images/agnhost:2.39 \
  -- /agnhost netexec --http-port=8080
- We can query the status with:
$ kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hello-deploy   1/1     1            1           24s
$ kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
hello-deploy-594d4494b5-kdf2c   1/1     Running   0          89s
- The status must be Running.
- Now we have to expose our Pod with containers using a NodePort Service:
kubectl expose deployment hello-deploy --type=NodePort --name=hello-svc --port 8080
What is NodePort? It exposes the Pod so that it is accessible on every Node at the same port.
In other words, if your cluster has 3 nodes you can use the IP address of any node to
access your Pod (even when it runs on another node) - see the text below for details.
To access our Pod we need 2 pieces:
- the IP address of any node (but we should use only worker nodes, i.e. without the control-plane role)
- the exposed port
For the IP address of any node we can use this command:
$ kubectl get nodes -o wide
NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION           CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   63m   v1.32.0   172.18.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.12.6-200.fc41.x86_64   containerd://1.7.24
kind-worker          Ready    <none>          63m   v1.32.0   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.12.6-200.fc41.x86_64   containerd://1.7.24
kind-worker2         Ready    <none>          63m   v1.32.0   172.18.0.3    <none>        Debian GNU/Linux 12 (bookworm)   6.12.6-200.fc41.x86_64   containerd://1.7.24
kind-worker3         Ready    <none>          63m   v1.32.0   172.18.0.5    <none>        Debian GNU/Linux 12 (bookworm)   6.12.6-200.fc41.x86_64   containerd://1.7.24
- We can use any address from the INTERNAL-IP column, but preferably avoid the one with the control-plane role (in the ROLES column).
- So I will use 172.18.0.4.
Now we need to get the exposed port where our Deployment is available:
$ kubectl get svc hello-svc
NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-svc   NodePort   10.96.222.242   <none>        8080:31156/TCP   6m17s
- So our web server should be available on port 31156 (the 2nd number in the PORT(S) column); the 1st number (8080) is the port the Service itself uses.
now you can try from your Host:
$ curl -f 172.18.0.4:31156 && echo NOW: 2024-12-31 07:07:51.618304913 +0000 UTC m=+616.201216172
- Please note that you can use any node IP address, so this will also work:
$ curl -f 172.18.0.5:31156 && echo
NOW: 2024-12-31 07:08:39.131221358 +0000 UTC m=+663.714132636
There also exist other ways to expose Pods (Deployments) to external access:
- Ingress - similar to Virtual Hosting in Apache
- LoadBalancer - typical in a Public cloud - requires an external load balancer integrated with k8s
Kind uses nested containers, because normally each OS instance runs only a single Node:
- Top-level containers (which define the Cluster's Nodes) are managed by Kind using Docker.
- Nested containers are managed by k8s using the CRI interface with the containerd implementation.
Top-level containers are available via docker:
- To list top-level containers use:
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED             STATUS             PORTS                       NAMES
1c0545743471   kindest/node:v1.32.0   "/usr/local/bin/entr…"   About an hour ago   Up About an hour                               kind-worker2
4e16ae293bba   kindest/node:v1.32.0   "/usr/local/bin/entr…"   About an hour ago   Up About an hour                               kind-worker
7ff71a1fedcb   kindest/node:v1.32.0   "/usr/local/bin/entr…"   About an hour ago   Up About an hour                               kind-worker3
63fd204f7ac1   kindest/node:v1.32.0   "/usr/local/bin/entr…"   About an hour ago   Up About an hour   127.0.0.1:46287->6443/tcp   kind-control-plane
- So at the Docker level you see k8s Nodes.
- To see k8s Pods (collocated containers on the same Node) we have to exec a shell in the specific Docker container, for example:
$ docker exec -it kind-worker bash
root@kind-worker:/# crictl ps
CONTAINER       IMAGE           CREATED             STATE     NAME              ATTEMPT   POD ID          POD                               NAMESPACE
056c02a38b7be   f876bfc1cc63d   About an hour ago   Running   nginx             0         87aa1bab583f4   nginx-deployment-96b9d695-7nx45   default
9b08955ae0534   cd11a6130b7ec   About an hour ago   Running   cilium-envoy      0         78894744ffb0e   cilium-envoy-qp8rn                kube-system
8945037eb1623   808119d0de26e   About an hour ago   Running   cilium-agent      0         0cf833898d90e   cilium-kzrcl                      kube-system
ae1392df332ff   afb5c96afff65   About an hour ago   Running   cilium-operator   0         f3646974c1ad3   cilium-operator-799f498c8-vbx95   kube-system
a65c79e72fc58   aa194712e698a   About an hour ago   Running   kube-proxy        0         dc2c3c52639e1   kube-proxy-7xp9l                  kube-system
root@kind-worker:/# crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.7.24
RuntimeApiVersion:  v1
root@kind-worker:/# exit
- Note that crictl is an abstract CLI on top of any container runtime compatible with the standard CRI. In the above example it uses the containerd runtime to run containers.
To have working kubectl top nodes and kubectl top pods commands we have to install
metrics-server - tested in my guide Fedora k8s - helm variant:
sudo dnf install helm
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm install metrics-server metrics-server/metrics-server --set args="{--kubelet-insecure-tls}" -n kube-system
Poll kubectl get pod -n kube-system until the Pod metrics-server-xxxxxx has status Running and READY 1/1.
After around 15 seconds these commands should work:
kubectl top nodes
kubectl top pods
In the case of pods you can append -A for all namespaces or -n NAME for the specific namespace NAME.
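For example (the same commands with namespace flags):
kubectl top pods -A               # all namespaces
kubectl top pods -n kube-system   # only the kube-system namespace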
Cilium is an eBPF-based network layer with the "hubble" introspection tool (for example, to sniff DNS requests, etc.).
There is a nice video on using Cilium at: https://www.youtube.com/watch?v=7qUDyQQ52Uk
Caution
Cilium (as of Dec 2024) has issues with podman when accessing the BPF mount -
see https://github.com/kubernetes-sigs/kind/issues/3545 - the recommendation is to use Docker.
First ensure that there are enough inotify descriptors (used for watching changes in the system):
- following: https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files
- I created the file /etc/sysctl.d/98-cilium.conf with content:
# see https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
- and reloaded it with sudo systemctl restart systemd-sysctl
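To verify that the new limits are active you can query them back (a quick check, not in the original notes):
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances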
Next we can basically follow: https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/
# added -f : means "fail on error with proper status code"
curl -fLO https://raw.githubusercontent.com/cilium/cilium/1.16.5/Documentation/installation/kind-config.yaml
kind create cluster --config=kind-config.yaml
Now install the cilium binary as suggested on
https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#install-the-cilium-cli.
Run the script install-client.sh with the following contents:
#!/bin/bash
set -xeuo pipefail
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
# added
sudo chown root:root /usr/local/bin/cilium
exit 0
Now we will trigger Cilium install and wait for its completion.
cilium install --version 1.16.5
cilium status --wait
You can also poll Pods with kubectl get po -A - all should be Running.
For diagnostics we should also install Hubble, following: https://docs.cilium.io/en/stable/observability/hubble/setup/#hubble-setup
Run the script install-hubble-client.sh with the following contents:
#!/bin/bash
set -xeuo pipefail
HUBBLE_VERSION=$(curl -fsS https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
HUBBLE_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then HUBBLE_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-${HUBBLE_ARCH}.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin
rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
sudo chown root:root /usr/local/bin/hubble
exit 0
Now we have to enable it with:
cilium hubble enable
cilium status
The last command should report Hubble Relay: OK
You can now run Cilium tests with:
cilium connectivity test
# optional: delete test pods when test finishes:
kubectl get ns
kubectl delete ns cilium-test-1
Now we can use the recommended commands for observation:
tmux # recommended - to run several terminals
cilium hubble port-forward
Now in tmux press Ctrl-b followed by c (without Ctrl) to create a new window and run:
# -f is follow
hubble observe -f
# and wait a while
You will get output similar to tcpdump
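hubble observe accepts several filters; a few examples using the same flags that appear later in this guide (a sketch):
hubble observe -f --namespace kube-system   # only flows in the kube-system namespace
hubble observe -f --pod curl                # only flows touching the Pod named "curl"
hubble observe -f -t l7                     # only Layer 7 (e.g. DNS/HTTP) flows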
To test Pod-to-Pod communication we can create a Pod with a shell and curl - a modified example from https://stackoverflow.com/a/74940082 (because plain Alpine does not contain curl):
$ kubectl run curl --image curlimages/curl --command sleep -- 999d
Disclaimer: It is very hard to find an authentic and trusted image on Docker Hub! I believe that this one is from the curl authors but I have no proof: https://hub.docker.com/r/curlimages/curl
Now you can run any command inside the Pod curl at any time, for example cat /etc/os-release:
$ kubectl exec -it curl -- cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.21.0
PRETTY_NAME="Alpine Linux v3.21"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
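You can also get an interactive shell in the same Pod (a small extra, not in the original text):
kubectl exec -it curl -- sh
# ...run commands interactively, then leave with:
exit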
And now important stuff - Service Discovery:
# remember that we have already deployed service:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-svc NodePort 10.96.222.242 <none> 8080:31156/TCP 175m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h54m
nginx-service NodePort 10.96.166.253 <none> 80:32000/TCP 3h52m
# so we can directly nslookup "hello-svc":
$ kubectl exec -it curl -- nslookup hello-svc
Server: 10.96.0.10
Address: 10.96.0.10:53
** server can't find hello-svc.svc.cluster.local: NXDOMAIN
Name: hello-svc.default.svc.cluster.local
Address: 10.96.222.242
... more errors follow
Please note that K8s always searches several suffixes, as can be seen with:
$ kubectl exec -it curl -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local example.com
nameserver 10.96.0.10
options ndots:5
That's the reason for the several NXDOMAIN errors (nothing was found for the other suffixes).
Also ndots:5 is a well-known option in the K8s world - it means that for any name with fewer than
5 dots, the resolver will first try appending the suffixes from the search list (!).
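You can see the effect of the search list by querying the fully qualified name with a trailing dot, which marks the name as absolute so no suffixes are appended (a small sketch, not in the original guide):
kubectl exec -it curl -- nslookup hello-svc.default.svc.cluster.local.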
So to access our simple web server we just need the Service name and the 1st port:
$ k get svc hello-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-svc NodePort 10.96.222.242 <none> 8080:31156/TCP 177m
So from any Pod (but NOT from Host) we can just invoke:
$ kubectl exec -it curl -- curl -fsS hello-svc:8080 ; echo
NOW: 2024-12-31 09:58:18.565873894 +0000 UTC m=+10843.148785182
So now you understand service discovery in K8s - basically a Pod can simply look up a Service IP address (accessible from Pods only!) with a DNS query for the Service name...
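The same request also works with the fully qualified Service name, which resolves to the same ClusterIP (a sketch):
kubectl exec -it curl -- curl -fsS hello-svc.default.svc.cluster.local:8080 ; echo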
IMPORTANT! To use hubble observe with --protocol xxx you first have to define a policy (otherwise the protocol details will not be visible in the flows).
Details are at: https://docs.cilium.io/en/latest/observability/visibility/
- To enable L4 and L7 monitoring we have to create the file example-policy.yaml with contents:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l7-visibility"
spec:
  endpointSelector:
    matchLabels:
      "k8s:io.kubernetes.pod.namespace": default
  egress:
    - toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": default
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
            - port: "8080"
              protocol: TCP
          rules:
            http: [{}]
- and apply it with kubectl apply -f example-policy.yaml
- Only then run hubble observe --pod curl -t l7 -f (l7 is the Level 7, i.e. application-level, protocol - a DNS query or HTTP). Now in another terminal run kubectl exec -it curl -- curl -fsS hello-svc:8080 ; echo. In the Hubble terminal you should see both DNS and HTTP requests:
# terminal running: hubble observe --pod curl -t l7 -f
# in other terminal invoke: kubectl exec -it curl -- curl -fsS hello-svc:8080
Dec 31 11:00:41.546: default/curl:38457 (ID:22469) -> kube-system/coredns-668d6bf9bc-vrqjx:53 (ID:9052) dns-request proxy FORWARDED (DNS Query hello-svc.default.svc.cluster.local. AAAA)
Dec 31 11:00:41.546: default/curl:38457 (ID:22469) -> kube-system/coredns-668d6bf9bc-vrqjx:53 (ID:9052) dns-request proxy FORWARDED (DNS Query hello-svc.default.svc.cluster.local. A)
Dec 31 11:00:41.546: default/curl:38457 (ID:22469) <- kube-system/coredns-668d6bf9bc-vrqjx:53 (ID:9052) dns-response proxy FORWARDED (DNS Answer "10.96.222.242" TTL: 30 (Proxy hello-svc.default.svc.cluster.local. A))
Dec 31 11:00:41.546: default/curl:38457 (ID:22469) <- kube-system/coredns-668d6bf9bc-vrqjx:53 (ID:9052) dns-response proxy FORWARDED (DNS Answer TTL: 4294967295 (Proxy hello-svc.default.svc.cluster.local. AAAA))
Dec 31 11:00:41.548: default/curl:33454 (ID:22469) -> default/hello-deploy-594d4494b5-kdf2c:8080 (ID:37378) http-request FORWARDED (HTTP/1.1 GET http://hello-svc:8080/)
Dec 31 11:00:41.549: default/curl:33454 (ID:22469) <- default/hello-deploy-594d4494b5-kdf2c:8080 (ID:37378) http-response FORWARDED (HTTP/1.1 200 1ms (GET http://hello-svc:8080/))
Notice that there are two Level 7 protocols:
- dns-request and dns-response
- http-request and http-response
That's cool :-)
Because Kind supports more than 1 "Node" per host, it will not export NodePort Services to the Host (there could be collisions among many nodes and ports). You can access them only using a Node's IP address - see the output of k get nodes -o wide.
If you want to export a NodePort to the Host, you will need to pass a custom YAML config file at cluster creation time (!). That will allow you to access such a Service from outside your Host - see the sketch after the links below.
See
- https://kind.sigs.k8s.io/docs/user/configuration/#nodeport-with-port-mappings
- https://stackoverflow.com/questions/62432961/how-to-use-nodeport-with-kind
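A minimal sketch of such a config, based on the kind documentation linked above (the file name kind-nodeport.yaml and port 30080 are just illustrative):
# kind-nodeport.yaml - hypothetical example, adjust ports to your needs
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080   # must match the nodePort of your Service
    hostPort: 30080        # port exposed on the Host
    protocol: TCP
Then create the cluster with kind create cluster --config=kind-nodeport.yaml and give your Service an explicit nodePort: 30080 so it matches the mapping.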
For study:
- https://isovalent.com/blog/post/tutorial-getting-started-with-the-cilium-gateway-api/
- https://isovalent.com/blog/post/its-dns/
- (video): https://isovalent.com/videos/video-getting-started-with-cilium-monitoring-with-grafana/
- https://docs.cilium.io/en/stable/gettingstarted/demo/
- https://docs.cilium.io/en/latest/observability/visibility/