elk configuration filebeat - juancamilocc/virtual_resources GitHub Wiki

ELK configuration to get Kubernetes logs using Filebeat

In this guide, you will learn how to configure an ELK (Elasticsearch, Logstash, Kibana) stack to receive Kubernetes logs using Filebeat and create personalized dashboards.

This is the general architecture diagram:

General Diagram

Note: This guide assumes that network traffic between the ELK instance and EKS is already allowed.

ELK Stack Configuration

We will use the following directory structure:

.
├── docker-compose.yaml
├── .env
├── certs/
│   ├── privkey.pem
│   ├── fullchain.pem
│   └── ca.pem
└── logstash/
    └── pipeline/logstash.conf
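
The file names above match the layout produced by Let's Encrypt (`privkey.pem`, `fullchain.pem`), but any CA works. If you just want to test the stack without a public certificate, a self-signed set can be generated locally. This is only a sketch, and `elk.example.com` is a placeholder you should replace with your real hostname:

```shell
mkdir -p certs

# 1. Create a private CA (self-signed, for testing only).
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout certs/ca-key.pem -out certs/ca.pem -subj "/CN=test-ca"

# 2. Create the server key and a certificate signing request.
openssl req -newkey rsa:2048 -nodes \
  -keyout certs/privkey.pem -out certs/server.csr -subj "/CN=elk.example.com"

# 3. Sign the server certificate with the CA, adding a SAN entry.
printf "subjectAltName=DNS:elk.example.com\n" > certs/san.ext
openssl x509 -req -in certs/server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
  -CAcreateserial -sha256 -days 365 -out certs/cert.pem -extfile certs/san.ext

# 4. Assemble the chain the compose file expects.
cat certs/cert.pem certs/ca.pem > certs/fullchain.pem
```

Keep in mind that with self-signed certificates, clients such as Filebeat must also be configured to trust `ca.pem`; with a publicly trusted certificate that extra step is unnecessary.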

.env file.

ELASTIC_HOSTNAME=<hostname>
ELASTIC_PASSWORD=<elastic_password>
KIBANA_USER_PASSWORD=<kibana_user_password>
LOGSTASH_USER_PASSWORD=<logstash_user_password>
KIBANA_ENCRYPTION_KEY=<kibana_encryption_key> # Must be a random string of 32 characters
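
The encryption key must be exactly 32 characters long. Any random 32-character string works; one simple way to generate one:

```shell
# 16 random bytes, hex-encoded: a 32-character string.
openssl rand -hex 16
```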

logstash.conf file.

input {
  beats {
    port => 5044
    ssl_enabled => true
    ssl_certificate => "/usr/share/logstash/certs/fullchain.pem"
    ssl_key => "/usr/share/logstash/certs/privkey.pem"
  }
}

output {
  elasticsearch {
    hosts => ["https://${ELASTIC_HOSTNAME}:9200"]
    ssl_enabled => true
    cacert => "/usr/share/logstash/certs/ca.pem"
    user => "logstash_user"
    password => "${LOGSTASH_USER_PASSWORD}"
    data_stream => true
    data_stream_type => "logs"
    data_stream_dataset => "from-logstash"
    data_stream_namespace => "default"
  }
}
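
If you later need to prune or enrich events, a `filter` block can sit between `input` and `output`. A minimal sketch, assuming the field layout produced by Filebeat's `add_kubernetes_metadata` processor (the container name is only illustrative):

```
filter {
  # Discard events from a hypothetical noisy sidecar container.
  if [kubernetes][container][name] == "istio-proxy" {
    drop { }
  }
}
```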

docker-compose.yaml file.

version: '3.8'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.17.4
    container_name: elasticsearch
    hostname: ${ELASTIC_HOSTNAME}
    environment:
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - discovery.type=single-node
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/privkey.pem
      - xpack.security.http.ssl.certificate=certs/fullchain.pem
      - xpack.security.http.ssl.certificate_authorities=certs/ca.pem
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/privkey.pem
      - xpack.security.transport.ssl.certificate=certs/fullchain.pem
      - xpack.security.transport.ssl.certificate_authorities=certs/ca.pem
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
      - ./certs:/usr/share/elasticsearch/config/certs
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elastic_network
    healthcheck:
      test: ["CMD-SHELL", "curl -k -u elastic:${ELASTIC_PASSWORD} https://localhost:9200"]
      interval: 30s
      timeout: 10s
      retries: 5

  elasticsearch-init:
    image: curlimages/curl:latest
    depends_on:
      elasticsearch:
        condition: service_healthy
    networks:
      - elastic_network
    restart: "no"
    command: |
      /bin/sh -c '
      echo "Waiting for Elasticsearch to be ready...";
      until curl -k -s "https://elasticsearch:9200" -u elastic:${ELASTIC_PASSWORD} > /dev/null; do
        sleep 5;
      done;

      echo "Creating Kibana user...";
      curl -k -X POST "https://elasticsearch:9200/_security/user/kibana_user" \
        -u elastic:${ELASTIC_PASSWORD} \
        -H "Content-Type: application/json" \
        -d "{
          \"password\":\"${KIBANA_USER_PASSWORD}\",
          \"roles\":[\"kibana_system\"],
          \"full_name\":\"kibana-access\",
          \"email\":\"[email protected]\",
          \"enabled\":true
        }";

      echo "Creating Logstash role...";
      curl -k -X PUT "https://elasticsearch:9200/_security/role/logstash_writer" \
        -u elastic:${ELASTIC_PASSWORD} \
        -H "Content-Type: application/json" \
        -d "{
          \"cluster\":[\"monitor\",\"manage_index_templates\",\"manage_ilm\"],
          \"indices\":[
            {
              \"names\":[\"logs-from-logstash-*\"],
              \"privileges\":[\"write\",\"create\",\"create_index\",\"read\",\"delete\"]
            }
          ]
        }";

      echo "Creating Logstash user...";
      curl -k -X POST "https://elasticsearch:9200/_security/user/logstash_user" \
        -u elastic:${ELASTIC_PASSWORD} \
        -H "Content-Type: application/json" \
        -d "{
          \"password\":\"${LOGSTASH_USER_PASSWORD}\",
          \"roles\":[\"logstash_writer\"],
          \"full_name\":\"Internal Logstash User\"
        }";

      echo "Initialization completed!"
      '

  kibana:
    image: docker.elastic.co/kibana/kibana:8.17.4
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=https://${ELASTIC_HOSTNAME}:9200
      - ELASTICSEARCH_USERNAME=kibana_user
      - ELASTICSEARCH_PASSWORD=${KIBANA_USER_PASSWORD}
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${KIBANA_ENCRYPTION_KEY}
      - SERVER_SSL_ENABLED=true
      - SERVER_SSL_CERTIFICATE=/usr/share/kibana/config/certs/fullchain.pem
      - SERVER_SSL_KEY=/usr/share/kibana/config/certs/privkey.pem
      - SERVER_PUBLICBASEURL=https://${ELASTIC_HOSTNAME}:5601
    volumes:
      - kibana_data:/usr/share/kibana/data
      - ./certs:/usr/share/kibana/config/certs:ro
    ports:
      - "5601:5601"
    networks:
      - elastic_network
    depends_on:
      elasticsearch-init:
        condition: service_completed_successfully

  logstash:
    image: docker.elastic.co/logstash/logstash:8.17.4
    container_name: logstash
    environment:
      - xpack.monitoring.enabled=false
      - ELASTIC_HOSTNAME=${ELASTIC_HOSTNAME}
      - LOGSTASH_USER_PASSWORD=${LOGSTASH_USER_PASSWORD}
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      - ./certs:/usr/share/logstash/certs
    ports:
      - "5044:5044"
    networks:
      - elastic_network
    depends_on:
      elasticsearch-init:
        condition: service_completed_successfully

networks:
  elastic_network:
    driver: bridge

volumes:
  elasticsearch_data:
  kibana_data:

Once the files are configured, start the ELK stack; Docker Compose creates the network and volumes automatically:

docker compose up -d

Check that all containers are running.

docker ps
# CONTAINER ID   IMAGE                                                  COMMAND                  CREATED        STATUS      PORTS                                                                                  NAMES
# e094a5a5fa67   docker.elastic.co/logstash/logstash:8.17.4             "/usr/local/bin/dock…"   6 minutes ago     Up 6 minutes   0.0.0.0:5044->5044/tcp, :::5044->5044/tcp, 9600/tcp                                    logstash
# 200b7dc51e3d   docker.elastic.co/kibana/kibana:8.17.4                 "/bin/tini -- /usr/l…"   6 minutes ago     Up 6 minutes   0.0.0.0:5601->5601/tcp, :::5601->5601/tcp                                              kibana
# 01d81943bc9d   docker.elastic.co/elasticsearch/elasticsearch:8.17.4   "/bin/tini -- /usr/l…"   6 minutes ago     Up 6 minutes   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp   elasticsearch
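
As an optional smoke test, you can query the cluster health API directly. This assumes the stack is running on the local host and that `ELASTIC_PASSWORD` is exported in your shell:

```shell
# Expect a JSON body with "status": "green" or "yellow" (yellow is
# normal for a single-node cluster); -k skips CA verification.
curl -k -s -u "elastic:${ELASTIC_PASSWORD}" \
  "https://localhost:9200/_cluster/health?pretty" \
  || echo "Elasticsearch not reachable on https://localhost:9200"
```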

Filebeat configuration on Kubernetes

To collect logs from your Kubernetes cluster, you need to deploy Filebeat as a DaemonSet, along with a ConfigMap and the necessary RBAC permissions.

filebeat.yaml file.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.17.4
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: LOGSTASH_HOST
          value: "<your_domain>"
        - name: BEATS_PORT
          value: "5044"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate

configmap.yaml file.

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
 
    processors:
      - add_host_metadata:
 
    output.logstash:
      hosts: ['${LOGSTASH_HOST}:${BEATS_PORT}']
      loadbalance: true
      ssl.enabled: true
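
If some workloads are too noisy, Filebeat can discard their events before shipping them. A sketch of how the `processors` section of the ConfigMap above could be extended with a `drop_event` processor (dropping `kube-system` is only an example):

```yaml
processors:
  - add_host_metadata:
  - drop_event:
      when:
        equals:
          kubernetes.namespace: "kube-system"
```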

rbac.yaml file.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] 
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
    - replicasets
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat

Deploy the resources.

kubectl apply -f filebeat.yaml
kubectl apply -f configmap.yaml
kubectl apply -f rbac.yaml

Check that they are deployed correctly.

kubectl -n kube-system get cm
# NAME                                                   DATA   AGE
# filebeat-config                                        1      17s

kubectl -n kube-system get daemonset
# NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR              AGE
# filebeat                 10        10        10      10           10          <none>                     36s

To verify that Filebeat is shipping logs correctly, look for entries like this in its logs.

kubectl -n kube-system logs -l k8s-app=filebeat
# {"log.level":"info","@timestamp":"2025-06-15T22:56:53.177Z","log.logger":"input.harvester","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/input/log.(*Harvester).Run","file.name":"log/harvester.go","file.line":311},"message":"Harvester started for paths: [/var/log/containers/*.log]","service.name":"filebeat","input_id":"fe5549f8-5f3e-4e0d-878b-929225d4e6ed","source_file":"/var/log/containers/pod-name-7d45f67bbc-6gxng_pod-name-f31d48f280d5fdf445663f7ea67acea86572d55f1e4297c205a6cec0d59342bc.log","state_id":"native::29427666-66306","finished":false,"os_id":"29427666-66306","harvester_id":"b64a9f56-664a-4ef0-a154-b297fd321e71","ecs.version":"1.6.0"}

Viewing Kubernetes Logs in Kibana

Open a browser and go to https://<your_domain>:5601. There, navigate to Analytics > Discover.

Discover in Kibana Discover in Kibana

There, you can filter by typing, for example, kubernetes.node.name : "node-name", as follows.

Filtering logs in Kibana
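
KQL supports boolean operators and wildcards, so filters can be combined. A few more illustrative queries (the field values are hypothetical):

```
kubernetes.namespace : "production" and kubernetes.container.name : "api"
message : *timeout* and not kubernetes.namespace : "kube-system"
kubernetes.pod.name : my-app-*
```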

Create a custom dashboard

To build a custom dashboard, we first need to create a visualization in the Visualize Library with a defined filter. Navigate to Analytics > Visualize Library > Create visualization > Lens.

Create Visualization

There, let's create a pie chart, filter by kubernetes.namespace, and click +. This shows the top 5 namespaces with the most records.

Top 5 namespaces by records

So, let's save this visualization as namespaces by records.

Save visualization

Return to Analytics > Discover and save a search that filters by kubernetes.container.name: *, as follows.

Logs all containers

Save this search as Kubernetes Logs Table.

Logs all containers

Now, let's use the previous visualization and search to integrate them into a unified dashboard. Go to Analytics > Dashboards > Create Dashboard > Add from Library and select Kubernetes Logs Table and namespaces by records.

Logs all containers

You can move them around as you like, changing their size and location.

Personalized dashboard

Finally, let's add controls to manage specific filters. Go to Controls > Add control and create one control for namespace and another for container.name.

Personalized dashboard Personalized dashboard

This makes the dashboard more interactive: you can filter by namespace and container name and immediately see the matching logs. The dashboard should look like this.

Personalized dashboard

Finally, save the dashboard.

Download logs report

To get a logs report in .csv format, first filter the data using the controls above. Then, click the three-dots icon in the table and select Generate CSV report.

Personalized dashboard

You will get a message like this.

Personalized dashboard

Go there and download the report.

Personalized dashboard

Conclusions

By following this guide, you have set up a robust ELK stack capable of securely collecting, storing, and visualizing Kubernetes logs using Filebeat. This setup not only centralizes your log management but also empowers you to create interactive dashboards and generate reports, greatly enhancing your ability to monitor and troubleshoot your Kubernetes workloads. With the flexibility of Kibana, you can tailor visualizations and dashboards to your team's needs, ensuring efficient and insightful observability for your infrastructure.
