Build K3S for Dev purpose

#K3s #StepByStep #POC

Create a new network switch

Why should we do this? The K3s worker nodes need static IPs to talk to the master, and every restart of the k3s VMs results in new DHCP leases and therefore new IPs. So we create a network that allows static IPs instead of relying on DHCP.

In Hyper-V Manager, create a network switch.


Verify the IP address of the network switch

ipconfig
Ethernet adapter vEthernet (multipass):

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::b7b1:ec02:3ec4:1a7c%40
   IPv4 Address. . . . . . . . . . . : 192.168.100.8
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.100.1

Create K3s Infra

Create a VM for the k3s master attached to the network defined above.

multipass launch -n k3s-master -c 4 -m 16G -d 40G --network name=Multipass,mode=manual,mac="52:74:63:8B:A6:1F"

multipass exec -n k3s-master -- sudo bash -c 'cat << EOF > /etc/netplan/10-custom.yaml
network:
    version: 2
    ethernets:
        eth1:
            dhcp4: no
            match:
                macaddress: "52:74:63:8B:A6:1F"
            addresses: [192.168.100.101/24]
EOF'

multipass exec -n k3s-master -- sudo netplan apply
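To confirm the static address was applied, you can check the interface (the same check works later on each worker and the client; eth1 matches the netplan config above):

multipass exec -n k3s-master -- ip -4 addr show eth1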

Install k3s master

multipass exec k3s-master -- bash -c "curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION='v1.27.8+k3s2' sh -s - --disable=traefik"

Here, notice:

  • We pin INSTALL_K3S_VERSION to v1.27.8+k3s2 so the master and workers run the same Kubernetes version.
  • We pass --disable=traefik to skip the bundled Traefik ingress controller, since we will deploy our own ingress and load balancer later.
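Before joining any workers, you can verify the server came up using the kubectl embedded in k3s (sudo is needed to read the server's kubeconfig):

multipass exec k3s-master -- sudo k3s kubectl get nodes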

Retrieve the token

multipass exec k3s-master -- bash -c "sudo cat /var/lib/rancher/k3s/server/node-token"
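If you are scripting the setup from the host, one way to capture the token into a shell variable (a sketch, assuming a bash-like host shell):

TOKEN=$(multipass exec k3s-master -- sudo cat /var/lib/rancher/k3s/server/node-token)
echo "$TOKEN"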

Update the master's /etc/hosts file with the contents below. If you add more worker nodes, extend the list accordingly.

multipass exec -n k3s-master -- sudo bash -c 'cat << EOF >> /etc/hosts
192.168.100.101 k3s-master
192.168.100.102 worker1
192.168.100.103 worker2
192.168.100.104 worker3
192.168.100.105 worker4
192.168.100.120 k3s-client
EOF'

Install k3s worker nodes

Launch a VM for each worker. The example below uses worker6; use a unique name, MAC address, and static IP for each worker you add.

multipass launch -n worker6 -c 4 -m 8G -d 10G --network name=Multipass,mode=manual,mac="3E:F7:4A:62:9D:9F"

multipass exec -n worker6 -- sudo bash -c 'cat << EOF > /etc/netplan/10-custom.yaml
network:
    version: 2
    ethernets:
        eth1:
            dhcp4: no
            match:
                macaddress: "3E:F7:4A:62:9D:9F"
            addresses: [192.168.100.107/24]
EOF'

multipass exec -n worker6 -- sudo netplan apply

Update the worker's /etc/hosts file with the contents below so it can resolve the master by name. If you add more worker nodes, extend the list accordingly.

multipass exec -n worker6 -- sudo bash -c 'cat << EOF >> /etc/hosts
192.168.100.101 k3s-master
192.168.100.102 worker1
192.168.100.103 worker2
192.168.100.104 worker3
192.168.100.105 worker4
192.168.100.106 worker5
192.168.100.107 worker6
192.168.100.120 k3s-client
EOF'

Verify that the master is pingable from the worker

multipass exec -n worker6 -- ping -c 3 k3s-master

Shell into the k3s worker and run the command below, replacing K3S_TOKEN with the token retrieved from the master earlier

multipass shell worker6
curl -sfL https://get.k3s.io | K3S_URL=https://k3s-master:6443 K3S_TOKEN=K10362b41ddb973d93299aea13c44e9ed55f4e17f3a0d4b1b6a59ae448f318e2e1d::server:98ca57e0157c97d6b5dba835ec403239 INSTALL_K3S_VERSION='v1.27.8+k3s2' sh -

Here, notice:

  • We use k3s-master as the URL to the master, which resolves via the /etc/hosts entry added above.
  • We pin the version to v1.27.8+k3s2, matching the master.

Repeat the same steps for the other worker nodes. Make sure you use a different IP and MAC address for each.
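After each worker joins, you can confirm it registered with the master:

multipass exec k3s-master -- sudo k3s kubectl get nodes -o wide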

Launch a client

multipass launch -n k3s-client -c 1 -m 2G -d 10G --network name=Multipass,mode=manual,mac="B8:27:9A:D5:7E:40"

multipass exec -n k3s-client -- sudo bash -c 'cat << EOF > /etc/netplan/10-custom.yaml
network:
    version: 2
    ethernets:
        eth1:
            dhcp4: no
            match:
                macaddress: "B8:27:9A:D5:7E:40"
            addresses: [192.168.100.120/24]
EOF'

multipass exec -n k3s-client -- sudo netplan apply

Update the client's /etc/hosts file with the contents below. If you add more worker nodes, extend the list accordingly.

multipass exec -n k3s-client -- sudo bash -c 'cat << EOF >> /etc/hosts
192.168.100.101 k3s-master
192.168.100.102 worker1
192.168.100.103 worker2
192.168.100.104 worker3
192.168.100.120 k3s-client
EOF'

Installing kubectl

Update the package listings so the latest versions are available:

sudo apt-get update

Install kubectl:

sudo apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

Add the kubeconfig to your client. Detailed steps: get the content of /etc/rancher/k3s/k3s.yaml from your k3s master and copy it to ~/.kube/k3s.yaml on your client. ⚠ Be careful to preserve the indentation.
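One way to do the copy in a single step, run from the host where multipass is installed (a sketch; multipass exec forwards stdin):

multipass exec k3s-master -- sudo cat /etc/rancher/k3s/k3s.yaml | multipass exec k3s-client -- bash -c 'mkdir -p ~/.kube && cat > ~/.kube/k3s.yaml'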

Change the server address

server: https://k3s-master:6443
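If you prefer to do this non-interactively, a sed one-liner works, assuming the file still contains the default 127.0.0.1 server entry:

sed -i 's|https://127.0.0.1:6443|https://k3s-master:6443|' ~/.kube/k3s.yaml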

Change the permissions so only your user can read it

chmod og-rw ~/.kube/k3s.yaml

Edit your .bashrc and add this line:

export KUBECONFIG=~/.kube/k3s.yaml

Reload your shell and test kubectl
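For example:

source ~/.bashrc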

kubectl get nodes

Installing Helm

First, download Helm's installation script from the official source:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3

Next, make the script executable:

chmod 700 get_helm.sh

Finally, run the script to install Helm:

./get_helm.sh

Set up the k and h aliases with shell autocompletion for kubectl and Helm by adding the lines below to your .bashrc:

source <(kubectl completion bash)
alias k='kubectl'
complete -o default -F __start_kubectl k
alias h='helm'
source <(helm completion bash)
complete -o default -F __start_helm h

Get kubectx and kubens for your convenience (optional)

Install Krew first:

(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)

Next, install the plugins (the snap package below is an alternative way to get kubectx and kubens):

kubectl krew install ctx
kubectl krew install ns
sudo snap install kubectx --classic

Add the aliases by updating your .bashrc file:

alias ks=kubens
alias kx=kubectx
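Typical usage, once the shell has been reloaded (kubectx lists contexts when run without arguments; kubens switches the default namespace):

kx
ks kube-system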

Create Load Balancer

Configure MetalLB by running the command below

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.11/config/manifests/metallb-native.yaml
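The MetalLB pods take a moment to come up; you can wait for them before applying the configuration below (this selector comes from the MetalLB install docs):

k wait --namespace metallb-system --for=condition=ready pod --selector=app=metallb --timeout=90s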

Create the Layer 2 LB configuration (the simplest mode) in l2lb.yaml

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.100.240-192.168.100.250

💡 The IP range must consist of unused addresses within the subnet created earlier.

Create the L2 advertisement in advertise.yaml

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
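Apply both manifests:

k apply -f l2lb.yaml
k apply -f advertise.yaml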

Test Deploy nginx ingress

(This step is not required if you want to use the ingress controller from SSP.)

h repo add ingress-nginx https://kubernetes.github.io/ingress-nginx/
k create ns ingress
h install nginx-ingress ingress-nginx/ingress-nginx -n ingress

Verify that an LB external IP has been assigned

ubuntu@k3s-client:~/dev/mysql$ k get svc -n ingress 
NAME                                               TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.43.165.228   <none>          443/TCP                      61m
nginx-ingress-ingress-nginx-controller             LoadBalancer   10.43.94.176    192.168.100.241   80:31330/TCP,443:31747/TCP   61m
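As a quick smoke test, curl the EXTERNAL-IP shown above from the client; with no Ingress rules defined yet, the nginx controller is expected to answer with a 404:

curl http://192.168.100.241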

Deploying helm chart for testing

For Bitnami Helm charts you might need to do the following to get them working.

export HELM_EXPERIMENTAL_OCI=1
helm registry login registry-1.docker.io

Run a sample test with Elasticsearch and Kibana as an example.

k create ns logging
h repo add elastic https://helm.elastic.co
h install elasticsearch --version 7.17.3 elastic/elasticsearch -n logging 
h install k2 --version 7.17.3 elastic/kibana -n logging -f ../logging/kibana-values.yaml

Example content of kibana-values.yaml:

ingress:
  enabled: true
  className: nginx
  hosts:
  - host: k2.k3s.demo
    paths:
    - path: /
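To reach the test hostname through the ingress, point it at the LB's external IP on the machine you browse or curl from (the IP here is the EXTERNAL-IP from the earlier step):

echo '192.168.100.241 k2.k3s.demo' | sudo tee -a /etc/hosts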

Troubleshooting

Why is my VM network not working?

This may happen after you reboot your PC. You might need to re-select the proper network in the Hyper-V Virtual Switch Manager and make sure the network adapter has the correct IP address.


Stop all the workloads

h uninstall -n logging elastic-operator
h uninstall -n monitoring grafana-operator 
h uninstall -n monitoring prometheus-operator 
h uninstall -n ingress ingress-nginx 
h uninstall -n ssp infra-ssp ssp ssp-data 

Stop K3S worker and Master

On worker node

sudo systemctl stop k3s-agent  

On master Node

sudo systemctl stop k3s

Delete K3S

First delete the agent. Run the command below on a worker, and repeat for each worker.

/usr/local/bin/k3s-agent-uninstall.sh

Run below command on the master

/usr/local/bin/k3s-uninstall.sh

Remove the cluster after POC

multipass stop k3s-master worker1 worker2 k3s-client
multipass delete k3s-master worker1 worker2 k3s-client
multipass purge

Resizing a VM

To increase RAM or CPU in Multipass, you must first stop the workloads, the k3s services, and the VM (see ../Knowledge/Build K3S for Dev purpose#Stop all the workloads).

Run the following to set the RAM

multipass stop worker1
multipass set local.worker1.memory=8G
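CPU count and disk size can be adjusted the same way via the corresponding local.<instance> keys (note that disk can only be increased):

multipass set local.worker1.cpus=4
multipass set local.worker1.disk=20G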

You can verify the usage by running the commands below.

CPU

ubuntu@k3s-client:~/development/sm-container/build$ k describe nodes | grep cpu | grep '%'
  cpu                1300m (65%)   2600m (130%)
  cpu                1400m (70%)   2300m (114%)
  cpu                1150m (57%)   2200m (110%)
  cpu                3400m (85%)    8200m (204%)
  cpu                0 (0%)    0 (0%)

Memory

ubuntu@k3s-client:~/development/sm-container/build$ k describe nodes | grep memory | grep '%'
  memory             2266Mi (28%)  2560Mi (32%)
  memory             11404Mi (71%)  12458Mi (77%)
  memory             0 (0%)    0 (0%)
  memory             2754Mi (34%)  4Gi (51%)
  memory             3322Mi (41%)  5170Mi (65%)

What's next

Review the ../POC/NFS for SM Container page to prepare for the deployment of the SM container.
