Scraping Sonoff S31 ESPHome power sensor metrics into Prometheus and Grafana on K8S

These instructions explain how to export Prometheus-compatible metrics from a Sonoff S31 wifi-enabled smart switch (https://www.amazon.com/dp/B08TNF4835), flashed with ESPHome (https://esphome.io/index.html) using a USB to UART adapter (https://www.amazon.com/dp/B07V556Q82). By powering your Kubernetes cluster nodes through such smart switches, and dashboarding the resulting power metrics in the cluster itself, you can run workloads in a controlled environment to see their impact on node power consumption and thus their carbon footprint.

Sonoff S31 Smart Switch

First of all, flash the Sonoff switch with ESPHome firmware, as described in this blog - https://alfter.us/2021/12/12/using-the-sonoff-s31-with-esphome-first-time-flash/.

I tried and failed a couple of times with different USB-to-serial adapters that I had lying around, but ultimately the one specified in the blog (and for which the Amazon link is given above) worked perfectly on the first attempt. After the first instance of ESPHome is running on the switch and the switch is connected to your WiFi network, you can recompile the code in your local ESPHome Docker container instance (described in the blog post) and perform subsequent reflashes over-the-air.

Edit the blog's example code to add an update_interval of 1 second and supply a unique sensor name for each of your Sonoff switches - this lets Prometheus attribute the metrics coming from each switch to its own series.
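For reference, here is a minimal sketch of the relevant blocks, assuming the S31's CSE7766 measurement chip as in the blog's config; the sensor name is illustrative (it becomes the URL path used in the curl test later), and newer ESPHome releases may express the update interval differently:

```yaml
# Sketch only - merge into the blog's full S31 configuration.
uart:
  rx_pin: RX
  baud_rate: 4800
  parity: EVEN

sensor:
  - platform: cse7766
    update_interval: 1s      # push a fresh reading every second
    power:
      name: "powerplug13"    # unique per switch; served at /sensor/powerplug13
```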

[Screenshot: esphome-code]

This section's measure of success is that you can connect to the ESPHome web UI running on the Sonoff switch itself and see the power updates refreshing in the log window every second:

[Screenshot: sonoff-webui]

Single-node Kubernetes cluster with Microk8s

In the unlikely event that you're reading these instructions but have no Kubernetes cluster to play with, you can set up a suitable cluster on a single Ubuntu host using the microk8s snap that ships with Ubuntu 22.04 (Jammy Jellyfish). You also need to be able to run kubectl commands against that cluster from the same PC on which you run your web browser, because we'll use kubectl port forwarding to let your browser reach the web UIs of the Prometheus and Grafana services running on the cluster. Of course, if you run Ubuntu on your desktop you can run microk8s locally to simplify matters, although you'll still need kubectl port forwarding to reach Prometheus and Grafana because they run on the cluster's internal ClusterIP network.

  • On an Ubuntu host, run sudo snap install microk8s && sudo microk8s start
  • Check the install with sudo microk8s kubectl get nodes and/or sudo microk8s kubectl get po -A
  • Install an up-to-date version of kubectl on your PC and put it on your path. At the time of writing, microk8s installs Kubernetes 1.27, so the kubectl version should be within one minor version of that, higher or lower. https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
  • From your Ubuntu host, collect the kubeconfig of your microk8s cluster with sudo microk8s config and paste all of those contents into ~/.kube/config on your local PC (by default, kubectl looks for a kubeconfig in a file named config under ~/.kube). If you already have other cluster configs stored on your PC, add the microk8s config as a new context, as in the sketch after this list - see the kubectl config documentation for details on creating multiple contexts and switching between them.
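For illustration, a merged ~/.kube/config with the microk8s cluster as one of several contexts looks roughly like this; every name and address below is a placeholder, and the real values come from the microk8s config output:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: microk8s-cluster
    cluster:
      certificate-authority-data: <from microk8s config>
      server: https://192.168.0.10:16443    # microk8s API server address
users:
  - name: microk8s-admin
    user:
      token: <from microk8s config>
contexts:
  - name: microk8s
    context:
      cluster: microk8s-cluster
      user: microk8s-admin
current-context: microk8s    # or switch later with kubectl config use-context
```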

Install Prometheus and Grafana

From your local PC:

git clone https://github.com/prometheus-operator/kube-prometheus.git

cd kube-prometheus

kubectl apply --server-side -f manifests/setup

kubectl wait --for condition=Established --all CustomResourceDefinition --namespace=monitoring

kubectl apply -f manifests/

watch kubectl -n monitoring get po

After all pods are in Ready state:

kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090 &

kubectl --namespace monitoring port-forward svc/grafana 3000 &

In your web browser, check that you can now reach http://localhost:9090/ (Prometheus UI) and http://localhost:3000/ (Grafana UI). Because there is no persistent storage on your microk8s cluster, any pod restart means that all of your stored metrics and customized dashboards will be lost. That's not really a problem for this exercise, but if your Kubernetes cluster does have a persistent storage class, configuring Prometheus and Grafana to use persistent volumes is straightforward and well covered by the Prometheus Operator documentation.
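For Prometheus, a hedged sketch of what that looks like on the Prometheus custom resource (the storage class name is a placeholder for whatever your cluster provides):

```yaml
# Added under the Prometheus CR's spec; requires a working StorageClass.
spec:
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: my-storage-class   # placeholder
        resources:
          requests:
            storage: 10Gi
```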

Check that your Kubernetes pods have network connectivity to your Sonoff switches

Assuming that you were able to access the web UI of your ESPHome-flashed Sonoff switches, you'll know their IP addresses. Verify that your Kubernetes pods have network connectivity to them:

kubectl run nginx --image=nginx

kubectl exec -it nginx -- bash

curl http://192.168.0.28/sensor/powerplug13
{"id":"sensor-powerplug13","value":18.37487,"state":"18.4 W"}

[Screenshot: sonoff-network-test-from-pod]

kubectl delete pod nginx

Deploy the JSON exporter

As you saw in the curl test above, the Sonoff switches return their current power readings in JSON format, which Prometheus cannot consume directly. We need an intermediary exporter to convert the power readings into something Prometheus can use. For this purpose, I have packaged a preconfigured version of the JSON exporter (https://github.com/prometheus-community/json_exporter) as a Docker image which you can run as a deployment on Kubernetes. This JSON exporter image scrapes the power plug sensor on demand and returns the results in a Prometheus-compatible format.
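For context, the baked-in exporter configuration is along these lines - a sketch in recent json_exporter config syntax, not necessarily the exact file in the image - mapping the value field of the switch's JSON response to the esphome_power_sensor gauge queried later:

```yaml
modules:
  default:
    metrics:
      - name: esphome_power_sensor
        help: Instantaneous power reading reported by the ESPHome sensor
        path: "{ .value }"    # extracts 18.37487 from the JSON shown above
```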

Save the contents of esphome-power-exporter-deploy.yaml (found in this repo) locally, then apply them:

kubectl apply -f esphome-power-exporter-deploy.yaml

This should tell you that the deployment and service have been created. Run watch kubectl -n monitoring get po to check that the esphome-power-exporter pod has reached the Running state.

At this point, your K8S cluster has a service which will scrape specified sensors on demand and return their metrics in Prometheus-compatible format.
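If you want to spot-check the exporter before wiring it into Prometheus, you can curl it from a test pod just as you curled the switch itself, e.g. curl 'http://esphome-power-exporter.monitoring:7979/probe?target=http://192.168.0.28/sensor/powerplug13' - this assumes the service is named esphome-power-exporter in the monitoring namespace and listens on json_exporter's default port 7979; check kubectl -n monitoring get svc for the actual name and port.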

Configure Prometheus to scrape the exporter

We need to configure Prometheus to scrape the exporter service at regular intervals. There are various ways of doing that, but we will hard-wire the config into the ‘additional scrape config’ secret, then configure Prometheus to use that secret.

Create a local copy of prometheus-esphome-scrape-config.yaml from this repo. It follows the standard json_exporter probe pattern, roughly as in the sketch below, where the target URL and exporter service address are placeholders for your own setup (note that the file is a bare YAML list of scrape configs, which is what additionalScrapeConfigs expects):
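```yaml
- job_name: json
  metrics_path: /probe
  params:
    module: [default]
  static_configs:
    - targets:
        - http://192.168.0.28/sensor/powerplug13   # one entry per switch
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: esphome-power-exporter:7979     # the exporter service
```

Convert the yaml file into a Kubernetes secret, then apply it: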

kubectl create secret generic additional-scrape-configs --from-file=prometheus-esphome-scrape-config.yaml --dry-run=client -oyaml > additional-scrape-configs.yaml

kubectl apply -f additional-scrape-configs.yaml -n monitoring

Then configure the Prometheus custom resource to use the contents of the secret as its additionalScrapeConfig:

kubectl -n monitoring edit prometheus k8s

Edit the spec to add these three lines at the position indicated in the screenshot:

additionalScrapeConfigs:
  key: prometheus-esphome-scrape-config.yaml
  name: additional-scrape-configs

[Screenshot: additional-scrape-config]

In theory, the Prometheus operator should spot your updated config automatically and trigger the prometheus-k8s pod(s) to reload the new config, but if it doesn’t or if you are impatient, force the reload:

kubectl -n monitoring rollout restart sts prometheus-k8s

watch kubectl -n monitoring get po

After the prometheus-k8s pods have restarted, you’ll probably need to restart the kubectl port-forwarding process so you can hit the UI again:

kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090 &

Now access the Prometheus UI in your browser and check Status -> Targets. Search for targets named json to see that you are getting data from the json exporter service.

[Screenshot: prometheus-json-targets]

See charts of your Sonoff power usage data

In the Prometheus UI, enter esphome_power_sensor in the expression field, select Graph and hit the Execute button. Prometheus will plot graphs of the metrics that it has scraped from your switches, labelled according to the unique name that you programmed for each sensor. You can use these metrics in Grafana dashboards too, just as you would use any gauge-type metric.
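Since esphome_power_sensor is an instantaneous reading in watts, you can also derive energy figures from it: for example, the expression avg_over_time(esphome_power_sensor[1h]) gives each switch's mean power draw over the past hour, which is numerically the watt-hours consumed in that hour.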

[Screenshot: prometheus-esphome-metrics-graph]