Californium as k8s service - eclipse-californium/californium GitHub Wiki

In recent years, k8s has become a very common way to run applications in the cloud. Although UDP was rarely supported in the beginning, this has changed, and many cloud providers, though not all, now also support UDP.

A k8s application is provided as containers, which are maintained and orchestrated by k8s using REST APIs or kubectl.

This page is not intended to be a "step by step" tutorial. It rather provides an overview of the steps to follow, usually complemented by the tutorial of the cloud provider whose managed k8s you use. Some cloud providers also offer complete CLI tool chains, so the approach may differ a lot.

Container Images

Using Californium as a container requires some tools to build a container image. One common tool is docker; installation instructions are available in the docker documentation.

Each container image is described by a Dockerfile. Californium requires a Java runtime to be executed, so the easiest way is to start with that in the Dockerfile:

FROM docker.io/openjdk:11-jre-slim

After that, a folder is prepared, and the Californium jar is copied into it together with an adjusted Californium3???.properties (the file with configuration values for Californium) and a file with the build number "./service/build" (only used by the cf-extplugtest-server to check the current version of the container).

RUN mkdir /opt/app
COPY ./service/build ./CaliforniumReceivetest3.properties /opt/app/
COPY ./target/cf-extplugtest-server-3.11.0.jar /opt/app/cf-extplugtest-server.jar

Then define the exposed ports:

#EXPOSE 5683/udp
#EXPOSE 5683/tcp
#EXPOSE 5684/udp
#EXPOSE 5684/tcp
EXPOSE 5783/udp
EXPOSE 5784/udp
EXPOSE 5884/udp
EXPOSE 5884/tcp
EXPOSE 8080/tcp

CoAP uses 5683 and 5684 as default ports. The main transport is UDP; TCP is only implemented experimentally. The example only exposes 5783/udp and 5784/udp, because the cf-extplugtest-server listens on these ports. 5884/udp is used for Californium's internal DTLS CID cluster support, 5884/tcp to download credentials for the DTLS graceful restart, and 8080/tcp by k8s for liveness and readiness checks.

Finally, the java process is started with

WORKDIR /opt/app
CMD ["java", "-XX:+UseContainerSupport", "-XX:MaxRAMPercentage=75", "-jar", "./cf-extplugtest-server.jar", "--no-plugtest", "--no-tcp", "--benchmark", "--k8s-dtls-cluster", ":5784;:5884;5884", "--k8s-monitor", ":8080", "--k8s-restore", ":5884"]

(See cf-extplugtest-server for further details.)
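Putting the fragments above together, the complete Dockerfile could look like this sketch (file names and the jar version follow the snippets above and have to be adjusted to your build):

FROM docker.io/openjdk:11-jre-slim

RUN mkdir /opt/app
COPY ./service/build ./CaliforniumReceivetest3.properties /opt/app/
COPY ./target/cf-extplugtest-server-3.11.0.jar /opt/app/cf-extplugtest-server.jar

EXPOSE 5783/udp
EXPOSE 5784/udp
EXPOSE 5884/udp
EXPOSE 5884/tcp
EXPOSE 8080/tcp

WORKDIR /opt/app
CMD ["java", "-XX:+UseContainerSupport", "-XX:MaxRAMPercentage=75", "-jar", "./cf-extplugtest-server.jar", "--no-plugtest", "--no-tcp", "--benchmark", "--k8s-dtls-cluster", ":5784;:5884;5884", "--k8s-monitor", ":8080", "--k8s-restore", ":5884"]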

To create the container image from that Dockerfile, execute

> docker build . -t cf-extserver-jdk11-slim -f service/Dockerfile

assuming the current directory contains the CaliforniumReceivetest3.properties, a service folder with the Dockerfile, and a target folder with the Californium jar. If all works well, you now have a container image in your local docker installation.

> docker images

REPOSITORY                TAG           IMAGE ID       CREATED              SIZE
cf-extserver-jdk11-slim   3.11.0.3      c0d6a1759bb1   About a minute ago   234MB
openjdk                   11-jre-slim   8f0967480f22   13 days ago          228MB
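Before pushing the image anywhere, it can be checked locally. A minimal smoke test, assuming the image tag from above (the k8s specific startup options of the cf-extplugtest-server need a k8s environment, so only the Java runtime and the copied files are verified here):

> docker run --rm cf-extserver-jdk11-slim java -version
> docker run --rm cf-extserver-jdk11-slim ls -l /opt/app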

k8s

Depending on your usage, you may install a local k8s implementation, e.g. minikube or microk8s, or use a managed k8s of your cloud provider.
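For a local setup with microk8s, a minimal sketch (assuming a snap based Linux system; for minikube or other distributions only the installation commands differ):

> sudo snap install microk8s --classic
> microk8s enable dns
> microk8s kubectl get nodes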

To deploy your application on k8s, you use kubectl to apply the descriptions of several components, starting with the service itself:

apiVersion: v1
kind: Service
metadata:
  name: cf-extserver
spec:
  selector:
    app: cf-extserver
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports: 
  - name: coap2
    port: 5783
    targetPort: cf-coap2
    nodePort: 30783
    protocol: UDP
  - name: coaps2
    port: 5784
    targetPort: cf-coaps2
    nodePort: 30784
    protocol: UDP

(See k8s.)

This describes the external view: it creates a load-balancer and exposes the ports of the container. The description is given in YAML and is "applied" to k8s using

> kubectl apply -f k8s.yaml
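Once applied, the assigned external IP of the load-balancer can be checked (with a local k8s the EXTERNAL-IP may stay "pending" until a load-balancer implementation is enabled):

> kubectl get service cf-extserver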

To define the application, the example uses a StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cf-extserver-a
  labels:
    app: cf-extserver
spec:
  replicas: 2
  podManagementPolicy: "Parallel"
  selector:
    matchLabels:
      app: cf-extserver
  serviceName: "cf-extserver"
  template:
    metadata:
      labels:
        app: cf-extserver
        initialDtlsClusterNodes: "2"
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: cf-extserver
      containers:
      - name: cf-extserver
        volumeMounts:
        - name: cf-extserver-config-files
          mountPath: "/etc/certs"
          readOnly: true
        env:
        - name: KUBECTL_TOKEN
          valueFrom:
            secretKeyRef:
              name: cf-extserver-config
              key:  kubectl_token
              optional: true
              ...
        ports:
        - name: cf-coap2
          containerPort: 5783
          protocol: UDP
        - name: cf-coaps2
          containerPort: 5784
          protocol: UDP
        - name: cf-coaps-mgmt
          containerPort: 5884
          protocol: UDP
        - name: cf-http-monitor
          containerPort: 8080
          protocol: TCP
        - name: cf-https-mgmt
          containerPort: 5884
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /ready
            port: cf-http-monitor
          initialDelaySeconds: 3
          periodSeconds: 1
      volumes:
      - name: cf-extserver-config-files
        secret:
          secretName: cf-extserver-config

(See k8sa.)

This describes the application with some configuration (env), the internally exposed ports, and how to check the container for readiness. Usually, the container image to load is also specified here, but in the example service the image is applied separately from the deployment. The description of the deployment is also applied:

> kubectl apply -f k8sa.yaml
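After applying it, the created pods can be checked using the label from the template above:

> kubectl get pods -l app=cf-extserver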

Container Registry

Before we apply the container image to be used for the application, one important step is to push it from the local docker installation to an accessible container registry. A k8s cloud service is not able to load such an image from your local machine.

Usually your cloud provider offers a container registry as well. Alternatively, you can use other container registry offerings such as Docker Hub or offers in your provider's marketplace.

The first step to use such a cloud container registry is to log in:

> docker login <cloud-registry>

The second step is to tag the container image for the cloud registry.

> docker image tag cf-extserver-jdk11-slim   <cloud-registry>/cf-extserver-jdk11-slim

That creates a tag that includes the destination registry as well. Now push that image to the cloud registry:

> docker push <cloud-registry>/cf-extserver-jdk11-slim

The push refers to repository [<cloud-registry>/cf-extserver-jdk11-slim]
a4755a33520c: Pushing [==========================================>        ]  4.982MB/5.814MB
91fcaa0388e6: Pushed 
0ad9879de3af: Layer already exists 
...

It's also possible to build and tag in one step:

> docker build . -t <cloud-registry>/cf-extserver-jdk11-slim -f service/Dockerfile

k8s - Accessing The Container Registry

Now that the container image is pushed to the cloud registry, the k8s deployment needs to know which image to pull. Therefore:

> kubectl patch statefulset cf-extserver-a --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"<cloud-registry>/cf-extserver-jdk11-slim"}]'

The downside of this approach shows up during development: you may push a new image for an already used tag, but other components may have cached a different image with that tag. Even if the tag includes a version, e.g. "/cf-extserver-jdk11-slim:3.11.0.", it may get accidentally reused and end up in hard-to-find errors. Therefore it's more reliable to use the digest instead of the tag.

> docker pull <cloud-registry>/cf-extserver-jdk11-slim:3.11.0.3

3.11.0.3: Pulling from cf-extserver-jdk11-slim
Digest: sha256:735f6b401ab66cc74cd351508e72bd4b06029850dfdd8574483be55660918a9a
Status: Image is up to date for <cloud-registry>/cf-extserver-jdk11-slim:3.11.0.3
<cloud-registry>/cf-extserver-jdk11-slim:3.11.0.3
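If the image has already been pushed from the local docker installation, the digest can also be read locally without pulling it again (a sketch; the RepoDigests entry is only available after a push or pull):

> docker image inspect --format='{{index .RepoDigests 0}}' <cloud-registry>/cf-extserver-jdk11-slim:3.11.0.3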

and so

> kubectl patch statefulset cf-extserver-a --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"<cloud-registry>/cf-extserver-jdk11-slim@sha256:735f6b401ab66cc74cd351508e72bd4b06029850dfdd8574483be55660918a9a"}]'

That results in

> kubectl describe statefulset/cf-extserver-a

Name:               cf-extserver-a
CreationTimestamp:  Thu, 04 Nov 2021 16:14:27 +0100
Selector:           app=cf-extserver
Labels:             app=cf-extserver
Annotations:        <none>
Replicas:           2 desired | 2 total
Update Strategy:    RollingUpdate
  Partition:        0
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=cf-extserver
           initialDtlsClusterNodes=2
  Containers:
   cf-extserver:
    Image:       <cloud-registry>/cf-extserver-jdk11-slim@sha256:735f6b401ab66cc74cd351508e72bd4b06029850dfdd8574483be55660918a9a
    Ports:       5783/UDP, 5784/UDP, 5884/UDP, 8080/TCP, 5884/TCP
    Host Ports:  0/UDP, 0/UDP, 0/UDP, 0/TCP, 0/TCP
    Readiness:   http-get http://:cf-http-monitor/ready delay=3s timeout=1s period=1s #success=1 #failure=3
    Environment:
      KUBECTL_TOKEN:                <set to the key 'kubectl_token' in secret 'cf-extserver-config'>                Optional: true
      KUBECTL_HOST:                 <set to the key 'kubectl_host' in secret 'cf-extserver-config'>                 Optional: true
      KUBECTL_NAMESPACE:            <set to the key 'kubectl_namespace' in secret 'cf-extserver-config'>            Optional: true
      KUBECTL_SELECTOR:             <set to the key 'kubectl_selector' in secret 'cf-extserver-config'>             Optional: true
      KUBECTL_SELECTOR_LABEL:       <set to the key 'kubectl_selector_label' in secret 'cf-extserver-config'>       Optional: true
      KUBECTL_NODE_ID:              <set to the key 'kubectl_node_id' in secret 'cf-extserver-config'>              Optional: true
      DTLS_CID_MGMT_IDENTITY:       <set to the key 'dtls_cid_mgmt_identity' in secret 'cf-extserver-config'>       Optional: true
      DTLS_CID_MGMT_SECRET_BASE64:  <set to the key 'dtls_cid_mgmt_secret_base64' in secret 'cf-extserver-config'>  Optional: true
    Mounts:
      /etc/certs from cf-extserver-config-files (ro)
  Volumes:
   cf-extserver-config-files:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cf-extserver-config
    Optional:    false
Volume Claims:   <none>
Events:          <none>

Access Rights

If you try the above procedure, you may face several "authorization" issues. One is allowing k8s to access the cloud container registry. In some clouds that may be the default, some use a simple UI to grant a k8s running in the same cloud access to the container registry, and for others the setup is a little more complex, so the cloud providers offer tutorials for it.

If the container registry is separate, you may need to set up k8s with the proper credentials to access that registry.
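The usual k8s mechanism for that is an image pull secret. A sketch, using a hypothetical secret name "cf-registry-cred" and placeholders for the registry credentials:

> kubectl create secret docker-registry cf-registry-cred --docker-server=<cloud-registry> --docker-username=<user> --docker-password=<password>
> kubectl patch statefulset cf-extserver-a --type='json' -p='[{"op": "add", "path": "/spec/template/spec/imagePullSecrets", "value":[{"name": "cf-registry-cred"}]}]'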

The other topic is accessing the k8s REST API from the pods themselves. Californium lists all sibling pods using

GET https://kubernetes.default.svc/api/v1/namespaces/default/pods

To grant permission to do so,

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: list-pods
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: list-pods-rb
subjects:
- kind: ServiceAccount
  name: default
roleRef:
  kind: Role
  name: list-pods
  apiGroup: rbac.authorization.k8s.io

(See k8s-rbac.)

a "list-pods" role must be applied.

> kubectl apply -f k8s-rbac.yaml
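Whether the default service account is now allowed to list the pods can be verified with kubectl (a sketch for the default namespace):

> kubectl auth can-i list pods --as=system:serviceaccount:default:default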

Boring Details

Above you have seen something like this:


    Environment:
      KUBECTL_TOKEN:                <set to the key 'kubectl_token' in secret 'cf-extserver-config'>                Optional: true
      KUBECTL_HOST:                 <set to the key 'kubectl_host' in secret 'cf-extserver-config'>                 Optional: true
      KUBECTL_NAMESPACE:            <set to the key 'kubectl_namespace' in secret 'cf-extserver-config'>            Optional: true
      KUBECTL_SELECTOR:             <set to the key 'kubectl_selector' in secret 'cf-extserver-config'>             Optional: true
      KUBECTL_SELECTOR_LABEL:       <set to the key 'kubectl_selector_label' in secret 'cf-extserver-config'>       Optional: true
      KUBECTL_NODE_ID:              <set to the key 'kubectl_node_id' in secret 'cf-extserver-config'>              Optional: true
      DTLS_CID_MGMT_IDENTITY:       <set to the key 'dtls_cid_mgmt_identity' in secret 'cf-extserver-config'>       Optional: true
      DTLS_CID_MGMT_SECRET_BASE64:  <set to the key 'dtls_cid_mgmt_secret_base64' in secret 'cf-extserver-config'>  Optional: true
    Mounts:
      /etc/certs from cf-extserver-config-files (ro)

These values are set via a k8s secret.

> kubectl create secret generic cf-extserver-config \
	  --from-file=https_client_cert.pem="service/client.pem" \
	  --from-file=https_client_trust.pem="service/caTrustStore.pem" \
	  --from-file=https_server_cert.pem="service/server.pem" \
	  --from-file=https_server_trust.pem="service/caTrustStore.pem" \
	  --from-literal=kubectl_token="" \
	  --from-literal=kubectl_selector_label="controller-revision-hash" \
	  --from-literal=dtls_cid_mgmt_identity="cid-cluster-manager" \
	  --from-literal=dtls_cid_mgmt_secret_base64="${secret}"
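The ${secret} in the last line is a shell variable that has to be set before calling the command. One possible way to generate a random base64 value for it, assuming openssl is available:

> secret=$(openssl rand -base64 32)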

All Together

For people new to k8s, that's a lot of stuff. Therefore, all files needed to run the extended plugtest server as a k8s service are contained in service. The idea is to call it with

service/deploy_k8s.sh (install|update0)

for local deployment (using microk8s).

For a cloud deployment, adapt deploy_k8s_cloud.sh with your values and use that.

Additionally, the deploy_k8s.sh script uses a namespace in order to address all components of that service by adding -n <namespace>. The default is "cali".
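If the components are applied manually instead of using the script, the namespace has to be created first and added to the kubectl commands (a sketch):

> kubectl create namespace cali
> kubectl apply -f k8s.yaml -n cali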

> kubectl --context=<cloud> -n cali get all
NAME                   READY   STATUS    RESTARTS   AGE
pod/cf-extserver-a-0   1/1     Running   0          24h
pod/cf-extserver-a-1   1/1     Running   0          24h

NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                         AGE
service/cf-extserver   LoadBalancer   ???.??.??.???  ???.??.??.??   5783:30783/UDP,5784:30784/UDP   24h

NAME                              READY   AGE
statefulset.apps/cf-extserver-a   2/2     24h

Remove It

Using a namespace also comes with the benefit that everything can be deleted just by deleting the namespace:

> kubectl --context=<cloud> delete namespace cali 

No UDP Load-Balancer, No IPv6 Support

A few clouds still do not support a UDP load-balancer, and some do not support IPv6 inbound traffic for k8s. Some time ago, Californium started a small utility, the cf-nat, which was recently extended to help overcome that for the short term and for test purposes.

It can be downloaded from the Eclipse Repository - cf-nat. The utility can solve both, the missing UDP load-balancer and the translation from IPv6 to IPv4. It uses source NATing and is therefore limited in the number of concurrent clients ("concurrent" is then defined by the NAT timeout for entries, default 30s).

java -jar cf-nat-3.11.0.jar [<local-ipv6-address>]:5784 <local-ipv4-address>:5784 -- node1.coap.cluster:5784 node2.coap.cluster:5784 node3.coap.cluster:5784 -x

Running it as a unix systemd service then provides the "availability".
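A minimal unit sketch for that (the installation path, local addresses, and node names are placeholders and have to be adapted; this is not part of cf-nat itself):

[Unit]
Description=cf-nat UDP load-balancer and IPv6 fallback
After=network-online.target

[Service]
ExecStart=/usr/bin/java -jar /opt/cf-nat/cf-nat-3.11.0.jar [<local-ipv6-address>]:5784 <local-ipv4-address>:5784 -- node1.coap.cluster:5784 node2.coap.cluster:5784 node3.coap.cluster:5784 -x
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Saved e.g. as /etc/systemd/system/cf-nat.service, it can be enabled with "systemctl enable --now cf-nat".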

Note: this fallback solution is not intended to be used mid-term. It doesn't support scaling the replicas or changing the pods' nodes. But to overcome the gap in UDP/IPv6 support for a short time, especially for testing, it is not too bad.

k8s - Standardized

Not only UDP or IPv6 support may be missing. When testing several managed k8s cloud providers, especially the security systems in use sometimes seem to require very deep knowledge. In some managed k8s offerings, accessing the internal DNS (e.g. "kubernetes.default.svc") is considered advanced and requires some extra steps, or traffic between the pods doesn't work "out of the box".

Let's see how that evolves.
