ElkNotes - henk52/knowledgesharing GitHub Wiki

ELK - Elasticsearch Logstash Kibana

References

Vocabulary

  • Replicas: how many different nodes each index shard is copied to.
  • Shards: how many parts an index is split into across the cluster.
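To make the two knobs concrete, here is a sketch of creating an index with explicit shard and replica counts (the index name is made up; the PUT-style snippet assumes the Kibana Dev Tools console):

```
PUT my-example-index
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}
```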

Trying it out

  1. local logs
  2. filebeat, with fixed time stamps
  3. syslog plugin
  4. snmptrap
  5. topbeat

Overview

  • An index can be thought of as an optimized collection of documents, and each document is a collection of fields, which are the key-value pairs that contain your data.

    • By default, Elasticsearch indexes all data in every field and each indexed field has a dedicated, optimized data structure.
      • For example, text fields are stored in inverted indices, and numeric and geo fields are stored in BKD trees.
  • When dynamic mapping is enabled, Elasticsearch automatically detects and adds new fields to the index.

    • This default behavior makes it easy to index and explore your data: just start indexing documents and Elasticsearch will detect and map them to the appropriate Elasticsearch data types:
      • booleans
      • floating-point values
      • integers
      • dates
      • strings
  • You can define rules to control dynamic mapping and explicitly define mappings to take full control of how fields are stored and indexed.

    • Defining your own mappings enables you to:
      • Distinguish between full-text string fields and exact value string fields
      • Perform language-specific text analysis
      • Optimize fields for partial matching
      • Use custom date formats
      • Use data types such as geo_point and geo_shape that cannot be automatically detected
  • Bulk

    • The optimal batch size depends on a number of factors:
      • the document size and complexity,
      • the indexing and search load,
      • and the resources available to your cluster.
    • A good place to start is with batches of 1,000 to 5,000 documents and a total payload between 5MB and 15MB.
      • From there, you can experiment to find the sweet spot.
  • Index template

    • curl -XGET http://localhost:9200/_template/logstash-MINE-index | json_pp | less
      • After you have created it
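The Bulk notes above can be sketched; a minimal example of building an NDJSON payload for the _bulk API (index name and fields are made up; the final curl assumes Elasticsearch on localhost:9200 and is left commented out):

```shell
# Build an NDJSON payload for the _bulk API: one action line plus one
# source line per document; the payload must end with a newline.
out=/tmp/bulk_payload.ndjson
: > "$out"
for i in 1 2 3; do
  printf '{"index":{"_index":"logstash.mine.bulktest"}}\n' >> "$out"
  printf '{"doc_id":%s,"message":"hello %s"}\n' "$i" "$i" >> "$out"
done
wc -l < "$out"    # two lines per document
# Then POST it:
# curl -XPOST -H 'Content-Type: application/x-ndjson' \
#      'localhost:9200/_bulk' --data-binary @/tmp/bulk_payload.ndjson
```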

Quick commands

  • Nodes
    • curl -XGET http://localhost:9200/_nodes | json_pp | less
      • Get information on nodes

elasticsearch

Installing Elasticsearch

  • sudo apt update
  • sudo apt --with-new-pkgs upgrade
  • See Install Elasticsearch with Debian Package
  • sudo apt update
  • sudo apt install elasticsearch
  • sudo nano /etc/elasticsearch/elasticsearch.yml
    • see below for content
  • sudo nano /etc/default/elasticsearch
    • see below for content
  • sudo systemctl start elasticsearch
  • sudo systemctl status elasticsearch
  • sudo systemctl enable elasticsearch
  • curl -X GET "localhost:9200"
  • curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_settings' -d '{"index" : {"number_of_replicas" : 0, "number_of_shards" : 1}}'
    • This won't work until there is data in Elasticsearch, e.g. after installing Kibana
    • number_of_shards seems to fail here

Installing Elasticsearch on a single node

/etc/elasticsearch/elasticsearch.yml

# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0

# --------------------------------- Discovery ----------------------------------
#
discovery.type: single-node
# ---------------------------------- Security -----------------------------------
# Sec settings: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html
# Enable security features
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
xpack.security.transport.ssl.enabled: false
xpack.security.http.ssl.enabled: false

/etc/default/elasticsearch

################################
# Elasticsearch
################################
# Target: /etc/default/elasticsearch

# Elasticsearch home directory
#ES_HOME=/usr/share/elasticsearch

# Elasticsearch Java path
#ES_JAVA_HOME=

# Elasticsearch configuration directory
# Note: this setting will be shared with command-line tools
ES_PATH_CONF=/etc/elasticsearch

# Elasticsearch PID directory
#PID_DIR=/var/run/elasticsearch

# Additional Java OPTS
ES_JAVA_OPTS="-Xms750m -Xmx750m"

# Configure restart on package upgrade (true, every other setting will lead to not restarting)
#RESTART_ON_UPGRADE=true

This is probably only needed on the older 6.x installation.

Set the replicas to 0

curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_settings' -d '
{
    "index" : {
        "number_of_replicas" : 0
    }
}'

Index Lifecycle Management

  • You can manually apply a lifecycle policy when you create an index.

    • For time series indices, you need to associate the lifecycle policy with the index template used to create new indices in the series.
      • When an index rolls over, a manually-applied policy isn’t automatically applied to the new index.

Index templates

  1. curl -XPUT http://localhost:9200/_ilm/policy/MyLogstash -H 'Content-Type: application/json' -d'{"policy" : {"phases" : {"delete" : {"actions" : {"delete" : {"delete_searchable_snapshot" : true}},"min_age" : "2d"}}}}'
  2. curl -XPUT http://localhost:9200/_template/logstash-index -H 'Content-Type: application/json' -d'{ "index_patterns" : ["logstash.mine.*"], "settings": { "number_of_replicas": 0, "index" : { "lifecycle" : {"name" : "MyLogstash"} } } }'
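The ILM policy body from step 1, pretty-printed (the same JSON as the one-liner above):

```json
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "2d",
        "actions": {
          "delete": { "delete_searchable_snapshot": true }
        }
      }
    }
  }
}
```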

Get full list of index templates:

  • curl -s 'localhost:9200/_template' | json_pp | less

  1. metricbeat export template > metricbeat.template.json
  2. curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_template/metricbeat-7.9.2 -d@metricbeat.template.json

Template syntax

  • setup.template.enabled: true
  • setup.template.name: metricbeat
    • so the final name is metricbeat-%{[agent.version]}
  • setup.template.pattern: metricbeat-*
    • metricbeat-%{[agent.version]}-*
  • setup.template.overwrite: false

Does this require a license?

  • setup.ilm.enabled: true
  • setup.ilm.policy_name: metricbeat
  • setup.ilm.overwrite: false

Template setting via the .yml

setup.template.overwrite: false
setup.template.name: "template-name"
setup.template.pattern: "template-pattern*"
setup.template.index_patterns: [ "packetbeat-*" ]
setup.template.settings:
        index.number_of_shards: "1"
        index.number_of_replicas: "0"
        index.lifecycle.name: "packetbeat-ilm"
        index.lifecycle.rollover_alias: "packetbeat-ilm"
        index.analysis.analyzer.default.type: "whitespace"

mapping

  • Mappings define how fields are interpreted by Elasticsearch.
  • Defining a mapping means telling Elasticsearch explicitly how the data in each field should be treated.

Analyzer

term vs query

  • term lookups are exact matches against the value as indexed,
  • full-text queries (e.g. match) analyze the input first, so they match more loosely
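A sketch of the difference (the field names are made up): `term` compares against the stored value exactly, while `match` runs the query text through the analyzer first.

```
GET packetbeat*/_search
{ "query": { "term": { "hostname": "web-01" } } }

GET packetbeat*/_search
{ "query": { "match": { "message": "connection refused" } } }
```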

Choosing the analyzer

PUT packetbeat.bolt-8.0.0-2021.01.06
{
  "settings": {
    "analysis": {
      "analyzer": {
        "default": {
          "type": "whitespace"
        }
      }
    }
  }
}

elasticsearch api

Elasticsearch Perl API

elasticsearch cookbook

  • Get list of templates: curl -XGET localhost:9200/_template/?pretty
  • Get list of indices: curl 'localhost:9200/_cat/indices?v'
  • Put new template: curl -XPUT 'http://localhost:9200/_template/topbeat' -d@/opt/filebeat-1.2.2-x86_64/filebeat.template.json

List size of indexes

curl -X GET "localhost:9200/_cat/indices?v&s=store.size:desc"

Delete index

curl -X DELETE "localhost:9200/packetbeat-8.0.0-2020.01.04-000001"

Automated index rollover with Index Lifecycle Management (ILM)

example

  • curl '172.21.0.3:9200/_cat/indices?v'
    • yellow open csgo _rkwBUoJSOGE6Q0vVfNRGA 1 1 26946 0 4.4mb 4.4mb
  • curl -XGET 172.21.0.3:9200/csgo | json_pp
  • curl -XGET 172.21.0.3:9200/csgo/_mapping/field/action | json_pp

Elasticsearch query

You can design the query in Kibana Discover, then choose Inspect (in the top left). The 'query' part of the JSON can then be used as a starting point.

A field must have been analyzed before it can be filtered. To inspect the mappings:

GET packetbeat*/_mapping

Simple elasticsearch query

GET packetbeat*/_search
{
  "query": {
    "query_string": {
      "query": "(system:QUAKE AND type:BANDITS AND PlayerInfo.PlayerAlive:true AND @timestamp:[now-1m TO now])"
    }
  }
}

elasticsearch match query

GET packetbeat*/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "system": "QUAKE"
          }
        },
        {
          "match": {
            "type": "BANDITS"
          }
        }
      ]
    }
  }
}

Kibana

Installing kibana

Prerequisite: Elasticsearch has been installed (so the repo etc. is set up).

  • sudo apt install kibana
  • sudo systemctl start kibana
  • sudo systemctl status kibana
  • sudo systemctl enable kibana
  • sudo apt install nginx
    • This is to allow access to kibana from outside the box
  • sudo rm /etc/nginx/sites-enabled/default
  • sudo nano /etc/nginx/sites-available/example.com
  • sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
  • echo "admin:$(openssl passwd -apr1 YourPassword)" | sudo tee -a /etc/nginx/htpasswd.users
    • NOTE: Actually replace this with a real password and username
  • sudo nginx -t
  • sudo systemctl restart nginx
  • sudo systemctl status nginx
  • sudo systemctl enable nginx
  • sudo ufw allow 'Nginx Full'
  • curl http://localhost:5601/status
  • Test it from outside

file: example.com

server {
    listen 80;

    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Kibana UI

Kibana UI - Tables

You create multiple columns by adding sub-buckets. (See also 'Mastering Kibana 6.x', p. 91.)

Kibana UI - Vertical bars

  • Bucket: X-axis: time
  • Sub-bucket: Split time series - terms - testResult

Kibana UI - plotting data

  • Select Lines
  • Y-axis
    • Aggregation: Max
    • Field: find name of field
      • If you can't find your field
        • Make sure it is identified as a number in the discovery section
          • Make sure you send it as a number.
          • If you have to change it from e.g. string to number in ELK, you probably have to delete the index? maybe more
        • Maybe wait some minutes for it to show up.
  • Bucket
    • X-axis: @timestamp
    • Split series: e.g. testname or whatever it is you want it to group on.

Logstash

Installing logstash

  1. sudo apt install logstash
  2. sudo /usr/share/logstash/bin/logstash -f logstash.conf --config.test_and_exit
  3. sudo cp logstash.conf /etc/logstash/conf.d

Patterns included with logstash: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns

install logstash plugins

  1. /opt/logstash/bin/logstash-plugin install /root/logstash-filter-range-master/logstash-filter-range-2.0.4.gem
  2. /opt/logstash/bin/logstash-plugin list | grep range
  3. /opt/logstash/bin/logstash -e 'filter {range {}}'

Developing the logstash conf

Put this in the output section of the logstash.conf

    file {
        path => "/tmp/logstash_debug_output.txt"
        codec => rubydebug
    }

    stdout { codec => rubydebug }
  • sudo /usr/share/logstash/bin/logstash -f logstash.conf --config.reload.automatic
  • sudo /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"

logstash filters

Filter for how long something took
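This section is a stub; one option is the logstash elapsed filter plugin, which pairs a start event with an end event via a shared ID and records the elapsed time between them. A sketch, where the tag names and the ID field are assumptions about the event layout:

```
filter {
  elapsed {
    start_tag       => "task_started"
    end_tag         => "task_finished"
    unique_id_field => "task_id"
    timeout         => 600
  }
}
```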

Filtering Json data
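This section is a stub; the logstash json filter can parse a field that contains a JSON string. A sketch, where the source and target values are assumptions about the event layout:

```
filter {
  json {
    source => "message"
    target => "parsed"
  }
}
```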

logstash configuration

 #syslog auth.log kern.log
 # See: http://logstash.net/docs/1.4.2//configuration
 #path => [ "/var/log/messages", "/var/log/*.log" ]
 #path => "/data/mysql/mysql.log"

input {
  file {
    path => "/var/log/syslog"
    type => "syslog"
  }
  beats {
    port => 5044
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  } else {
    # For the filebeat input:
    # use the original timestamp from the log line, not the time the log was read.
    # https://www.elastic.co/webinars/beats-lightweight-shippers-for-all-kinds-of-data?baymax=rtp&elektra=downloads&iesrc=ctr
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:time} %{GREEDYDATA:log_message}" }
    }
    date {
      match => ["time", "YYYY-MM-dd HH:mm:ss"]
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}

Logstash configuration

Get timestamp from date 'Sep 14, 2017 00:00:46 -0500'

 # https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
    date {
      match => [ "DateTime", "MMM d, yyyy HH:mm:ss Z" ]
    }

Logstash ports

  • 5044 is the default port for the beats input plugin
  • 9600 - 9700 are used for the Logstash web API calls (monitoring etc.)

beats

Filebeat

Installing filebeat

  • sudo apt install filebeat
    • or: curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.1-amd64.deb
    • sudo dpkg -i filebeat-7.7.1-amd64.deb
  • sudo vi /etc/filebeat/filebeat.yml
  • sudo /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"

Configuring filebeat

Configuring filebeat for reading the logs from the running docker containers

/etc/filebeat/filebeat.yml

filebeat.inputs:
- type: container
  id: my-filestream-id
  enabled: true
  paths:
    - /var/lib/docker/containers/*/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.console:
  pretty: true
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

filebeat configuration

./filebeat -e -c filebeat.yml -d "publish"

filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      # Paths that should be crawled and fetched. Glob based paths.
      # For each file found under this path, a harvester is started.
      paths:
        - "/var/log/*.log"
      input_type: log
      fields:
        environment: docker
        service: test
        zone: zone1

  # Setting up the use of logstash
  # https://www.elastic.co/guide/en/beats/libbeat/1.2/logstash-installation.html#logstash-setup
output:
  logstash:
    hosts: ["10.0.3.199:5044"]

    index: "filebeat"

logging:
  level: info

Installing metricbeat

  • sudo apt install metricbeat
  • sudo vi /etc/metricbeat/metricbeat.yml
  • activate the localhost Kibana host in the setup.kibana section
  • sudo metricbeat test config -e
  • sudo metricbeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
  • sudo metricbeat setup --dashboards
  • sudo service metricbeat start
  • curl -XGET 'http://localhost:9200/metricbeat-*/_search?pretty'

Other log handling tools

Docker run

version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.2
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:6.8.2
    environment:
      SERVER_HOST: 0.0.0.0
      ELASTICSEARCH_URL: http://elasticsearch:9200
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
    networks:
      - esnet
  logstash:
    image: docker.elastic.co/logstash/logstash:6.8.2
    volumes:
      - ./pipeline/:/usr/share/logstash/pipeline/
    ports:
      - 5044:5044
      - 9600:9600
    networks:
      - esnet
  filebeat:
    image: docker.elastic.co/beats/filebeat:6.8.2
    environment:
      - setup.kibana.host=kibana:5601
      - output.elasticsearch.hosts=["elasticsearch:9200"]
    networks:
      - esnet

volumes:
  esdata1:
    driver: local
networks:
  esnet:

  • docker-compose up

  • Check elasticsearch

    • curl -XGET 'http://localhost:9200/_cat/health?v'
  • Check logstash

    • curl -XGET 'localhost:9600/?pretty'

Monitor your containers using the ELK stack

docker/start_filebeat_with_owned_yaml.sh

#!/bin/bash

cp /usr/share/filebeat/filebeat.yml /usr/share/filebeat/filebeat_local.yml
chown root:root /usr/share/filebeat/filebeat_local.yml
./filebeat -e -c /usr/share/filebeat/filebeat_local.yml

docker/filebeat.yml

filebeat.inputs:
- type: container
  paths: 
    - '/var/lib/docker/containers/*/*.log'

processors:
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"

- decode_json_fields:
    fields: ["message"]
    target: "json"
    overwrite_keys: true

#output.elasticsearch:
#  hosts: ["elasticsearch:9200"]
#  indices:
#    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

output.console:
  pretty: true

logging.json: true
logging.metrics.enabled: false

docker/docker-compose.dev.yml

version: "3"
services:
  filebeat:
        image: "docker.elastic.co/beats/filebeat:7.2.0"
        user: root
        command: /usr/share/filebeat/start_filebeat_with_owned_yaml.sh
        volumes:
            - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
            - ./start_filebeat_with_owned_yaml.sh:/usr/share/filebeat/start_filebeat_with_owned_yaml.sh
            - /var/lib/docker:/var/lib/docker:ro
            - /var/run/docker.sock:/var/run/docker.sock


Troubleshooting

  • curl -XGET 'http://localhost:9200/_cat/health?v'
  • curl 'localhost:9200/_cat/indices?v'
  • curl -XGET 'http://localhost:9200/_cluster/allocation/explain' | json_pp
  • curl -s 'localhost:9200/_cat/shards'
  • curl -XGET 'http://localhost:9200/_cluster/health?level=shards' | json_pp | less

Troubleshooting kibana

No cached mapping for this field

  • Select "Index Patterns", NOT 'Index management'
  • Select refresh fields

The error text:

No cached mapping for this field.
Refresh field list from the Management > Index Patterns page

503 gateway issue

Turns out Kibana wasn't running on the machine.

  • systemctl status kibana
  • netstat -tulpn
    • look for port
      • 5601: kibana
      • 80: the nginx that connects the external world to kibana

kibana incompatible with elasticsearch

For some reason the solution was rebooting.

Restarting Elasticsearch alone was not enough.

No spaces match search criteria

Workaround: rebooted and ran the shard fix again

cannot allocate because allocation is not permitted to any of the nodes

Seems to happen because number_of_replicas is >= 1.

It seems like the replica count was back to 1 instead of 0.

Looking into it:

  • curl -XGET http://localhost:9200/.kibana/_settings | json_pp

Fix: changing the replicas to 0 seemed to work:

curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_settings' -d '
{
    "index" : {
        "number_of_replicas" : 0
    }
}'

Exiting: Initializing protocol analyzers failed: missing field accessing 'packetbeat.protocols.0.default_system_ID' (source:'packetbeat.yaml')

'-E SYSTEM_ID=10' was missing from the command line.

The yaml file has this content:

...
packetbeat.protocols:
- type: quake
  ports: [10218]
  default_system_ID: '${SYSTEM_ID}'
...

Troubleshooting logstash

Parsing failures

To look for parsing failures:

  1. Go to kibana
  2. In discovery click on the Tag entry
  3. Click filter on '_grokparsefailure'
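The same check can be done directly against Elasticsearch; a sketch (the index pattern is an assumption) querying for events carrying the failure tag:

```
GET logstash-*/_search
{
  "query": { "term": { "tags": "_grokparsefailure" } }
}
```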

No data coming into elasticsearch

Solution?:

The date on the logstash server was behind the date on the servers providing the logs.

No data shown in logstash-web

Fix:

  date {
    match => [ "date_time", "MM/dd/yy HH:mm:ss.SSS", "EEE MMM dd HH:mm:ss YYYY", "EEE MMM  d HH:mm:ss YYYY" ]
    timezone => "UTC"
  }

Tried changing /etc/sysconfig/clock to ZONE="UTC"; that didn't fix it.

It turned out that logstash converted the time so that the timestamp was 5h in the future. Changing the time range to include future time showed the data.

Troubleshooting elasticsearch

master_not_discovered_exception

On a single node solution the elasticsearch.yml must contain: 'discovery.type: single-node'

index_not_found_exception

curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_settings' -d '{"index" : {"number_of_replicas" : 0, "number_of_shards" : 1}}'
{
   "status" : 404,
   "error" : {
      "type" : "index_not_found_exception",
      "resource.type" : "index_expression",
      "reason" : "no such index [null] and no indices exist",
      "resource.id" : "_all",
      "root_cause" : [
         {
            "type" : "index_not_found_exception",
            "resource.type" : "index_expression",
            "reason" : "no such index [null] and no indices exist",
            "resource.id" : "_all"
         }
      ]
   }
}

the shard cannot be allocated to the same node on which a copy of the shard already exists

Fix: Change replicas to 0:

curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_settings' -d '
{
    "index" : {
        "number_of_replicas" : 0
    }
}'

cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster

Output from: curl -XGET 'http://localhost:9200/_cluster/allocation/explain' | json_pp

#!/bin/bash

nodeName="MY_NODE_NAME"

range=2
IFS=$'\n'

echo "DDD beginning loop"
for line in $(curl -s 'localhost:9200/_cat/shards' | fgrep UNASSIGNED); do
  echo "DDD line: ${line}"
  INDEX=$(echo $line | (awk '{print $1}'))
  SHARD=$(echo $line | (awk '{print $2}'))
  # pick a random number in [0, range); only needed when spreading shards over several nodes
  number=$RANDOM
  let "number %= ${range}"

  echo "DDD     Index: ${INDEX} Shard: ${SHARD} number: ${number}"

  curl -XPOST -H 'Content-Type: application/json' http://localhost:9200/_cluster/reroute? -d '{
    "commands" : [ {
    "allocate_empty_primary" :
      {
        "index" : '\"${INDEX}\"',
        "shard" : '\"${SHARD}\"',
        "node" : '\"${nodeName}\"',
        "accept_data_loss" : true
      }
  }
  ]
  }'
  echo "---"
done

{"error":"Content-Type header [application/x-www-form-urlencoded] is not supported","status":406}

Add: -H 'Content-Type: application/json'

a copy of this shard is already allocated to this node

filter returns nothing

Attempted to resurrect connection to dead ES instance, but got an error

logstash       | [2024-02-23T10:32:52,252][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
logstash       | [2024-02-23T10:32:53,183][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connection pool shut down", :exception=>Manticore::ClientStoppedException, :cause=>#<Java::JavaLang::IllegalStateException: Connection pool shut down>}
logstash       | [2024-02-23T10:32:53,184][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ClientStoppedException] Connection pool shut down"}
[2024-02-23T12:31:50,648][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.22.0.3] failed: Connection refused {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.22.0.3] failed: Connection refused", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2024-02-23T12:31:50,649][WARN ][logstash.licensechecker.licensereader] Attempt to validate Elasticsearch license failed. Sleeping for 0.02 {:fail_count=>1, :exception=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.22.0.3] failed: Connection refused"}
[2024-02-23T12:31:50,670][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2024-02-23T12:31:50,693][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
[2024-02-23T12:31:50,818][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-02-23T12:31:51,726][INFO ][org.reflections.Reflections] Reflections took 156 ms to scan 1 urls, producing 132 keys and 464 values
[2024-02-23T12:31:52,549][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-02-23T12:31:52,562][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2024-02-23T12:31:52,566][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2024-02-23T12:31:52,578][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused>}
[2024-02-23T12:31:52,578][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused"}
[2024-02-23T12:31:52,589][INFO ][logstash.outputs.elasticsearch][main] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"your_index_name"}
[2024-02-23T12:31:52,589][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `false`
[2024-02-23T12:31:52,602][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf", "/usr/share/logstash/pipeline/test.conf"], :thread=>"#<Thread:0x53c77787 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-02-23T12:31:53,602][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.0}
[2024-02-23T12:31:53,611][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-02-23T12:31:53,675][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-02-23T12:31:53,677][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-02-23T12:31:53,728][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-02-23T12:31:53,789][INFO ][org.logstash.beats.Server][main][d37f7bea3b35601bcb9ae494fa222934aa490dbeb1e13311e8f9ffd7038d7039] Starting server on port: 5044
[2024-02-23T12:31:53,789][INFO ][org.logstash.beats.Server][main][b2f6a29d1182d9fe7f7007cc3b4e91014867b91cb069881c0709a956fcadf861] Starting server on port: 5044
[2024-02-23T12:31:57,585][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused>}
[2024-02-23T12:31:57,587][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused"}
warning: thread "[main]<beats" terminated with exception (report_on_exception is true):
ArgumentError: wrong number of arguments (given 2, expected 0..1)
    translate at /usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/i18n-1.14.1/lib/i18n.rb:210
  inputworker at /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:427
  start_input at /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:405
[2024-02-23T12:32:00,112][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: wrong number of arguments (given 2, expected 0..1)>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/i18n-1.14.1/lib/i18n.rb:210:in `translate'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:427:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:405:in `block in start_input'"]}
[2024-02-23T12:32:02,594][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connection pool shut down", :exception=>Manticore::ClientStoppedException, :cause=>#<Java::JavaLang::IllegalStateException: Connection pool shut down>}
[2024-02-23T12:32:02,594][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientStoppedException] Connection pool shut down"}
[2024-02-23T12:32:06,324][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2024-02-23T12:32:06,326][ERROR][logstash.javapipeline    ][main] Pipeline worker error, the pipeline will be stopped {:pipeline_id=>"main", :error=>"", :exception=>Java::OrgJrubyExceptions::ThreadKill, :backtrace=>[], :thread=>"#<Thread:0x53c77787 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 dead>"}

Troubleshooting logstash

An unexpected error occurred! {:error=>#<ArgumentError: wrong number of arguments (given 2, expected 0..1)

[2024-02-23T13:04:05,486][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.78}
[2024-02-23T13:04:05,493][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-02-23T13:04:05,502][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-02-23T13:04:05,503][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-02-23T13:04:05,519][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-02-23T13:04:05,581][INFO ][org.logstash.beats.Server][main][b3252fceecf24b73e9c7d9b099ea8836aebfdc4717a00064aa10d8a18eca722c] Starting server on port: 5044
[2024-02-23T13:04:05,582][INFO ][org.logstash.beats.Server][main][7b13892edad71fd5e4692db1f3a8584d5fc3ffd21e6fa8c4ec5b802cecf2cda3] Starting server on port: 5044
warning: thread "[main]<beats" terminated with exception (report_on_exception is true):
ArgumentError: wrong number of arguments (given 2, expected 0..1)
    translate at /usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/i18n-1.14.1/lib/i18n.rb:210
  inputworker at /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:427
  start_input at /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:405
[2024-02-23T13:04:11,812][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: wrong number of arguments (given 2, expected 0..1)>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/i18n-1.14.1/lib/i18n.rb:210:in `translate'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:427:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:405:in `block in start_input'"]}
[2024-02-23T13:04:18,079][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2024-02-23T13:04:18,080][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
  at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:795) ~[jruby.jar:?]
  at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:758) ~[jruby.jar:?]
  at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:90) ~[?:?]

Error: Address already in use

The underlying cause is the bind failure; the `ArgumentError` above is raised by the i18n gem while Logstash formats the error message. Make sure there is only one `.conf` file in `/usr/share/logstash/pipeline/`: Logstash concatenates every file in that directory, so a duplicated `beats` input tries to bind port 5044 twice (note the two `Starting input listener` lines in the log above).
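A quick way to spot the duplicate-config condition is to count the files Logstash will load. This is a minimal sketch; the `PIPELINE_DIR` variable and its default path are illustrative (inside a container you would prefix the commands with `docker exec logstash`):

```shell
#!/bin/sh
# Warn if Logstash's pipeline directory holds more than one config file.
# Logstash concatenates all files in this directory into one pipeline,
# so duplicate input blocks (e.g. two beats inputs) collide on the port.
PIPELINE_DIR="${PIPELINE_DIR:-/usr/share/logstash/pipeline}"
count=$(ls "$PIPELINE_DIR"/*.conf 2>/dev/null | wc -l | tr -d ' ')
if [ "$count" -gt 1 ]; then
  echo "WARNING: $count config files in $PIPELINE_DIR; they will all be merged."
  ls "$PIPELINE_DIR"/*.conf
fi
```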

Logstash Error: Address already in use

logstash       | [2024-02-23T14:03:56,564][ERROR][logstash.javapipeline    ][main][7b13892edad71fd5e4692db1f3a8584d5fc3ffd21e6fa8c4ec5b802cecf2cda3] A plugin had an unrecoverable error. Will restart this plugin.
logstash       |   Pipeline_id:main
logstash       |   Plugin: <LogStash::Inputs::Beats port=>5044, id=>"7b13892edad71fd5e4692db1f3a8584d5fc3ffd21e6fa8c4ec5b802cecf2cda3", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_d28a56b7-5e14-4d5d-830e-a06a60a974ae", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, ssl_enabled=>false, ssl_client_authentication=>"none", ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, ssl_cipher_suites=>["TLS_AES_256_GCM_SHA384", "TLS_AES_128_GCM_SHA256", "TLS_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], ssl_supported_protocols=>["TLSv1.2", "TLSv1.3"], client_inactivity_timeout=>60, executor_threads=>12, add_hostname=>false, tls_min_version=>1, tls_max_version=>1.3>
logstash       |   Error: Address already in use
logstash       |   Exception: Java::JavaNet::BindException
logstash       |   Stack: sun.nio.ch.Net.listen(Native Method)
logstash       | sun.nio.ch.ServerSocketChannelImpl.netBind(sun/nio/ch/ServerSocketChannelImpl.java:338)
logstash       | sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:294)
logstash       | io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:141)
logstash       | io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:562)
logstash       | io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1334)
logstash       | io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:600)
logstash       | io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:579)
logstash       | io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:973)
logstash       | io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:260)
logstash       | io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:356)
logstash       | io.netty.util.concurrent.AbstractEventExecutor.runTask(io/netty/util/concurrent/AbstractEventExecutor.java:173)
logstash       | io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:166)
logstash       | io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:470)
logstash       | io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:569)
logstash       | io.netty.util.concurrent.SingleThreadEventExecutor$4.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:997)
logstash       | io.netty.util.internal.ThreadExecutorMap$2.run(io/netty/util/internal/ThreadExecutorMap.java:74)
logstash       | io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
logstash       | java.lang.Thread.run(java/lang/Thread.java:840)
elasticsearch  | {"@timestamp":"2024-02-23T14:03:56.777Z", "log.level": "INFO", "message":"license [dc7ce918-ca2a-4210-aa25-2b0c75568add] mode [basic] - valid", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[656595ea3897][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.license.ClusterStateLicenseService","elasticsearch.cluster.uuid":"2wswu2CoSluxoEHecXibIg","elasticsearch.node.id":"35bKcCN3RQePobt1G-HiKw","elasticsearch.node.name":"656595ea3897","elasticsearch.cluster.name":"docker-cluster"}
logstash       | [2024-02-23T14:03:57,566][INFO ][org.logstash.beats.Server][main][7b13892edad71fd5e4692db1f3a8584d5fc3ffd21e6fa8c4ec5b802cecf2cda3] Starting server on port: 5044
^CGracefully stopping... (press Ctrl+C again to force)