ElkNotes - henk52/knowledgesharing GitHub Wiki
-
Videos
- Kibana
- Time series
- Elasticsearch
-
Replicas: how many different nodes each index shard is copied to.
-
Shards: how many pieces an index is split into.
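As a quick sanity check on how the two settings multiply (the numbers below are illustrative, not recommendations), the total number of shard copies a cluster has to host is primaries times one plus the replica count:

```shell
# Total shard copies = number_of_shards * (1 + number_of_replicas).
# Example values only; pick sizes to match your data and node count.
number_of_shards=5
number_of_replicas=1
total_copies=$(( number_of_shards * (1 + number_of_replicas) ))
echo "total shard copies: $total_copies"   # 5 primaries + 5 replicas
```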
- local logs
- filebeat, with fixed time stamps
- syslog plugin
- snmptrap
- topbeat
-
An index can be thought of as an optimized collection of documents and each document is a collection of fields, which are the key-value pairs that contain your data.
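For example (the index name `my-index` and the field names are assumptions for illustration), indexing one such document from the Kibana Dev Tools console looks like:

```
PUT my-index/_doc/1
{
  "user": "alice",
  "message": "logged in",
  "@timestamp": "2021-01-06T10:00:00Z"
}
```

Each key-value pair (`user`, `message`, `@timestamp`) becomes a field of the document.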
-
When dynamic mapping is enabled, Elasticsearch automatically detects and adds new fields to the index.
- This default behavior makes it easy to index and explore your data: just start indexing documents and Elasticsearch will detect and map them to the appropriate Elasticsearch datatypes:
- booleans,
- floating-point values,
- integer values,
- dates,
- strings
-
You can define rules to control dynamic mapping and explicitly define mappings to take full control of how fields are stored and indexed.
- Defining your own mappings enables you to:
- Distinguish between full-text string fields and exact value string fields
- Perform language-specific text analysis
- Optimize fields for partial matching
- Use custom date formats
- Use data types such as geo_point and geo_shape that cannot be automatically detected
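A sketch of an explicit mapping covering a few of the cases above (index and field names are made up): `keyword` vs `text` distinguishes exact-value from full-text strings, `format` sets a custom date format, and `geo_point` is a type that cannot be detected automatically:

```
PUT my-index
{
  "mappings": {
    "properties": {
      "status":   { "type": "keyword" },
      "message":  { "type": "text" },
      "created":  { "type": "date", "format": "dd-MM-yyyy" },
      "location": { "type": "geo_point" }
    }
  }
}
```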
-
Bulk
- The optimal batch size depends on a number of factors:
- the document size and complexity,
- the indexing and search load,
- and the resources available to your cluster.
- A good place to start is with batches of 1,000 to 5,000 documents and a total payload between 5MB and 15MB.
- From there, you can experiment to find the sweet spot.
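A minimal sketch of how a `_bulk` body is batched (the index name `my-index`, the document contents, and the small batch size are assumptions for a quick run; real batches would use the 1,000 to 5,000 document sizes above). Each document contributes an action line plus a source line of NDJSON:

```shell
#!/bin/sh
# Build an NDJSON _bulk body and work out how many batches it needs.
batch_size=10
total_docs=25
: > bulk_body.ndjson
i=1
while [ "$i" -le "$total_docs" ]; do
  # One action line and one source line per document.
  printf '{"index":{"_index":"my-index"}}\n{"value":%d}\n' "$i" >> bulk_body.ndjson
  i=$((i + 1))
done
# Round up: 25 documents at 10 per batch -> 3 batches.
batches=$(( (total_docs + batch_size - 1) / batch_size ))
echo "documents: $total_docs, batches of $batch_size: $batches"
# Each batch file would then be posted with something like:
#   curl -XPOST -H 'Content-Type: application/x-ndjson' \
#        'localhost:9200/_bulk' --data-binary @batch.ndjson
```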
-
Installation
- sudo apt update
- sudo apt --with-new-pkgs upgrade
- See Install Elasticsearch with Debian Package
- sudo apt update
- sudo apt install elasticsearch
-
sudo nano /etc/elasticsearch/elasticsearch.yml
- see below for content
-
sudo nano /etc/default/elasticsearch
- see below for content
- sudo systemctl start elasticsearch
- sudo systemctl status elasticsearch
- sudo systemctl enable elasticsearch
- curl -X GET "localhost:9200"
- curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_settings' -d '{"index" : {"number_of_replicas" : 0, "number_of_shards" : 1}}'
- This won't work until there is at least one index in Elasticsearch, e.g. after installing Kibana
- The number_of_shards part fails here: number_of_shards is fixed at index creation and cannot be changed via _settings
/etc/elasticsearch/elasticsearch.yml
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
# --------------------------------- Discovery ----------------------------------
#
discovery.type: single-node
# ---------------------------------- Security -----------------------------------
# Sec settings: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html
# Enable security features
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
xpack.security.transport.ssl.enabled: false
xpack.security.http.ssl.enabled: false
/etc/default/elasticsearch
################################
# Elasticsearch
################################
# Target: /etc/default/elasticsearch
# Elasticsearch home directory
#ES_HOME=/usr/share/elasticsearch
# Elasticsearch Java path
#ES_JAVA_HOME=
# Elasticsearch configuration directory
# Note: this setting will be shared with command-line tools
ES_PATH_CONF=/etc/elasticsearch
# Elasticsearch PID directory
#PID_DIR=/var/run/elasticsearch
# Additional Java OPTS
ES_JAVA_OPTS="-Xms750m -Xmx750m"
# Configure restart on package upgrade (true, every other setting will lead to not restarting)
#RESTART_ON_UPGRADE=true
This is probably only needed on the older 6.x installation.
Set the replicas to 0
curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_settings' -d '
{
"index" : {
"number_of_replicas" : 0
}
}'
-
You can manually apply a lifecycle policy when you create an index.
- For time series indices, you need to associate the lifecycle policy with the index template used to create new indices in the series.
- When an index rolls over, a manually-applied policy isn’t automatically applied to the new index.
curl -XPUT http://localhost:9200/_ilm/policy/MyLogstash -H 'Content-Type: application/json' -d'
{
  "policy" : {
    "phases" : {
      "delete" : {
        "min_age" : "2d",
        "actions" : {
          "delete" : { "delete_searchable_snapshot" : true }
        }
      }
    }
  }
}'
curl -XPUT http://localhost:9200/_template/logstash-index -H 'Content-Type: application/json' -d'
{
  "index_patterns" : ["logstash.mine.*"],
  "settings" : {
    "number_of_replicas" : 0,
    "index" : { "lifecycle" : { "name" : "MyLogstash" } }
  }
}'
Get full list of index templates:
- metricbeat export template > metricbeat.template.json
curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_template/metricbeat-7.9.2 -d@metricbeat.template.json
- setup.template.enabled: true
- setup.template.name: metricbeat
- so the final name is metricbeat-%{[agent.version]}
- setup.template.pattern: metricbeat-*
- metricbeat-%{[agent.version]}-*
- setup.template.overwrite: false
Does this require a license?
- setup.ilm.enabled: true
- setup.ilm.policy_name: metricbeat
- setup.ilm.overwrite: false
setup.template.overwrite: false
setup.template.name: "template-name"
setup.template.pattern: "template-pattern*"
setup.template.index_patterns: [ "packetbeat-*" ]
setup.template.settings:
index.number_of_shards: "1"
index.number_of_replicas: "0"
index.lifecycle.name: "packetbeat-ilm"
index.lifecycle.rollover_alias: "packetbeat-ilm"
index.analysis.analyzer.default.type: "whitespace"
- Mappings define how fields are interpreted by Elasticsearch.
- "Mapping" means telling Elasticsearch exactly how that data should be treated.
- term-level queries match exact values,
- full-text queries analyze the input first, so they match more loosely
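A sketch of the difference (index and field names are made up): a `term` query looks for the exact stored value, while a `match` query runs the input through the field's analyzer first:

```
GET my-index/_search
{ "query": { "term":  { "status": "error" } } }

GET my-index/_search
{ "query": { "match": { "message": "Error While Logging In" } } }
```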
See:
PUT packetbeat.bolt-8.0.0-2021.01.06
{
"settings": {
"analysis": {
"analyzer": {
"default": {
"type": "whitespace"
}
}
}
}
}
- Get list of templates: curl -XGET localhost:9200/_template/?pretty
- Get list of indices: curl 'localhost:9200/_cat/indices?v'
- Put new template:
curl -XPUT 'http://localhost:9200/_template/topbeat' -d@/opt/filebeat-1.2.2-x86_64/filebeat.template.json
curl -X GET "localhost:9200/_cat/indices?v&s=store.size:desc"
curl -X DELETE "localhost:9200/packetbeat-8.0.0-2020.01.04-000001"
- curl '172.21.0.3:9200/_cat/indices?v'
yellow open csgo _rkwBUoJSOGE6Q0vVfNRGA 1 1 26946 0 4.4mb 4.4mb
- curl -XGET 172.21.0.3:9200/csgo | json_pp
- curl -XGET 172.21.0.3:9200/csgo/_mapping/field/action | json_pp
You can design the query in Kibana Discover, then choose Inspect (in the top left). The 'query' part of the JSON can then be used as a starting point.
GET packetbeat*/_mapping
GET packetbeat*/_search
{
"query": {
"query_string": {
"query": "(system:QUAKE AND type:BANDITS AND PlayerInfo.PlayerAlive:true AND @timestamp:[now-1m TO now])"
}
}
}
GET packetbeat*/_search
{
"query": {
"bool": {
"must": [
{
"match": {
"system": "QUAKE"
}
},
{
"match": {
"type": "BANDITS"
}
}
]
}
}
}
Prerequisite: Elasticsearch has been installed (so the apt repo etc. is set up)
- sudo apt install kibana
- sudo systemctl start kibana
- sudo systemctl status kibana
- sudo systemctl enable kibana
- sudo apt install nginx
- This is to allow access to kibana from outside the box
- sudo rm /etc/nginx/sites-enabled/default
- sudo nano /etc/nginx/sites-available/example.com
- sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
- echo "admin:$(openssl passwd -apr1 YourPassword)" | sudo tee -a /etc/nginx/htpasswd.users
- NOTE: Actually replace these with a real username and password
- sudo nginx -t
- sudo systemctl restart nginx
- sudo systemctl status nginx
- sudo systemctl enable nginx
- sudo ufw allow 'Nginx Full'
curl http://localhost:5601/status
- Test it from outside
file: example.com
server {
listen 80;
server_name example.com;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
You create multiple columns by adding sub-buckets. (See also 'Mastering Kibana 6.x' p91)
- Bucket: X-axis: time
- Sub-bucket: Split time series - terms - testResult
- Select Lines
- Y-axis
- Aggregation: Max
- Field: find name of field
- If you can't find your field
- Make sure it is identified as a number in the discovery section
- Make sure you send it as a number.
- If you have to change it from e.g. string to number in ELK, you probably have to delete the index? maybe more
- Maybe wait some minutes for it to show up.
- Bucket
- X-axis: @timestamp
- Split series: e.g. testname or whatever it is you want it to group on.
- sudo apt install logstash
- sudo /usr/share/logstash/bin/logstash -f logstash.conf --config.test_and_exit
- sudo cp logstash.conf /etc/logstash/conf.d
Patterns included with logstash: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns
- /opt/logstash/bin/logstash-plugin install /root/logstash-filter-range-master/logstash-filter-range-2.0.4.gem
- /opt/logstash/bin/logstash-plugin list | grep range
- /opt/logstash/bin/logstash -e 'filter {range {}}'
Put this in the output section of the logstash.conf
file {
path => "/tmp/logstash_debug_output.txt"
codec => rubydebug
}
stdout { codec => rubydebug }
- sudo /usr/share/logstash/bin/logstash -f logstash.conf --config.reload.automatic
- sudo /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"
#syslog auth.log kern.log
# See: http://logstash.net/docs/1.4.2//configuration
#path => [ "/var/log/messages", "/var/log/*.log" ]
#path => "/data/mysql/mysql.log"
input {
file {
path => "/var/log/syslog"
type => "syslog"
}
beats {
port => 5044
}
}
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
} else {
# for the filebeat input.
# To convert to the original time stamp, not the timestamp, of when the log was read.
# https://www.elastic.co/webinars/beats-lightweight-shippers-for-all-kinds-of-data?baymax=rtp&elektra=downloads&iesrc=ctr
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:time} %{GREEDYDATA:log_message}" }
}
date {
match => ["time", "YYYY-MM-dd HH:mm:ss"]
}
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
# https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
date {
match => [ "DateTime", "MMM d, yyyy HH:mm:ss Z" ]
}
- 5044 is the default port for the beats input plugin
- 9600-9700 is used for web API calls (the Logstash monitoring API)
- sudo apt install filebeat
- or:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.1-amd64.deb
- sudo dpkg -i filebeat-7.7.1-amd64.deb
- or:
- sudo vi /etc/filebeat/filebeat.yml
- sudo /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"
/etc/filebeat/filebeat.yml
filebeat.inputs:
- type: container
id: my-filestream-id
enabled: true
paths:
- /var/lib/docker/containers/*/*.log
filebeat.config.modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false
setup.template.settings:
index.number_of_shards: 1
setup.kibana:
output.console:
pretty: true
processors:
- add_host_metadata:
when.not.contains.tags: forwarded
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~
./filebeat -e -c filebeat.yml -d "publish"
filebeat:
# List of prospectors to fetch data.
prospectors:
# Each - is a prospector. Below are the prospector specific configurations
-
# Paths that should be crawled and fetched. Glob based paths.
# For each file found under this path, a harvester is started.
paths:
- "/var/log/*.log"
input_type: log
fields:
environment: docker
service: test
zone: zone1
# Setting up the use of logstash
# https://www.elastic.co/guide/en/beats/libbeat/1.2/logstash-installation.html#logstash-setup
output:
logstash:
hosts: ["10.0.3.199:5044"]
index: "filebeat"
logging:
level: info
- sudo apt install metricbeat
- sudo vi /etc/metricbeat/metricbeat.yml
- activate localhost for kibana
- sudo metricbeat test config -e
- sudo metricbeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
- sudo metricbeat setup --dashboards
- sudo service metricbeat start
curl -XGET 'http://localhost:9200/metricbeat-*/_search?pretty'
version: '2.2'
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.8.2
container_name: elasticsearch
environment:
- discovery.type=single-node
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- 9200:9200
- 9300:9300
networks:
- esnet
kibana:
image: docker.elastic.co/kibana/kibana:6.8.2
environment:
SERVER_HOST: 0.0.0.0
ELASTICSEARCH_URL: http://elasticsearch:9200
volumes:
- ./kibana.yml:/usr/share/kibana/config/kibana.yml
ports:
- 5601:5601
networks:
- esnet
logstash:
image: docker.elastic.co/logstash/logstash:6.8.2
volumes:
- ./pipeline/:/usr/share/logstash/pipeline/
ports:
- 5044:5044
- 9600:9600
networks:
- esnet
filebeat:
image: docker.elastic.co/beats/filebeat:6.8.2
environment:
- setup.kibana.host=kibana:5601
- output.elasticsearch.hosts=["elasticsearch:9200"]
networks:
- esnet
volumes:
esdata1:
driver: local
networks:
esnet:
-
docker-compose up
-
Check elasticsearch
curl -XGET 'http://localhost:9200/_cat/health?v'
-
Check logstash
curl -XGET 'localhost:9600/?pretty'
docker/start_filebeat_with_owned_yaml.sh
#!/bin/bash
cp /usr/share/filebeat/filebeat.yml /usr/share/filebeat/filebeat_local.yml
chown root:root /usr/share/filebeat/filebeat_local.yml
./filebeat -e -c /usr/share/filebeat/filebeat_local.yml
docker/filebeat.yml
filebeat.inputs:
- type: container
paths:
- '/var/lib/docker/containers/*/*.log'
processors:
- add_docker_metadata:
host: "unix:///var/run/docker.sock"
- decode_json_fields:
fields: ["message"]
target: "json"
overwrite_keys: true
#output.elasticsearch:
# hosts: ["elasticsearch:9200"]
# indices:
# - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
output.console:
pretty: true
logging.json: true
logging.metrics.enabled: false
docker/docker-compose.dev.yml
version: "3"
services:
filebeat:
image: "docker.elastic.co/beats/filebeat:7.2.0"
user: root
command: /usr/share/filebeat/start_filebeat_with_owned_yaml.sh
volumes:
- ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
- ./start_filebeat_with_owned_yaml.sh:/usr/share/filebeat/start_filebeat_with_owned_yaml.sh
- /var/lib/docker:/var/lib/docker:ro
- /var/run/docker.sock:/var/run/docker.sock
curl -XGET 'http://localhost:9200/_cat/health?v'
curl 'localhost:9200/_cat/indices?v'
curl -XGET 'http://localhost:9200/_cluster/allocation/explain' | json_pp
curl -s 'localhost:9200/_cat/shards'
curl -XGET 'http://localhost:9200/_cluster/health?level=shards' | json_pp | less
- Select the "Index Patterns" NOT the 'Index management'
- Select refresh fields
"No cached mapping for this field. Refresh field list from the Management > Index Patterns page"
It turned out Kibana wasn't running on the machine:
- systemctl status kibana
- netstat -tulpn
- look for port
- 5601: kibana
- 80: the nginx that connects the external world to kibana
- look for port
For some reason the solution was rebooting; restarting Elasticsearch was not enough.
Workaround: rebooted and ran the shard fix again
Seems to be because number_of_replicas was >= 1; the replica count was back to 1 instead of 0.
Fix: changing the replicas to 0 seemed to resolve it:
curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_settings' -d '
{
"index" : {
"number_of_replicas" : 0
}
}'
Exiting: Initializing protocol analyzers failed: missing field accessing 'packetbeat.protocols.0.default_system_ID' (source:'packetbeat.yaml')
'-E SYSTEM_ID=10' was missing from the command line.
The yaml file has this content:
...
packetbeat.protocols:
- type: quake
ports: [10218]
default_system_ID: '${SYSTEM_ID}'
...
To look for parsing failures:
- Go to kibana
- In discovery click on the Tag entry
- Click filter on '_grokparsefailure'
Solution?:
Date on logstash server was behind the date on the servers providing the logs.
Fix:
date {
match => [ "date_time", "MM/dd/yy HH:mm:ss.SSS", "EEE MMM dd HH:mm:ss YYYY", "EEE MMM d HH:mm:ss YYYY" ]
timezone => "UTC"
}
Tried changing /etc/sysconfig/clock to ZONE="UTC"; it didn't fix it.
It turned out that logstash converted the time so that the timestamp was 5h in the future. Changing the time range to include future time showed the data.
On a single node solution the elasticsearch.yml must contain: 'discovery.type: single-node'
curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_settings' -d '{"index" : {"number_of_replicas" : 0, "number_of_shards" : 1}}'
{
"status" : 404,
"error" : {
"type" : "index_not_found_exception",
"resource.type" : "index_expression",
"reason" : "no such index [null] and no indices exist",
"resource.id" : "_all",
"root_cause" : [
{
"type" : "index_not_found_exception",
"resource.type" : "index_expression",
"reason" : "no such index [null] and no indices exist",
"resource.id" : "_all"
}
]
}
}
Fix: Change replicas to 0:
curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_settings' -d '
{
"index" : {
"number_of_replicas" : 0
}
}'
cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster
Output from: curl -XGET 'http://localhost:9200/_cluster/allocation/explain' | json_pp
#!/bin/bash
nodeName="MY_NODE_NAME"
range=2
IFS=$'\n'
echo "DDD beginning loop"
for line in $(curl -s 'localhost:9200/_cat/shards' | fgrep UNASSIGNED); do
echo "DDD line: ${line}"
INDEX=$(echo $line | (awk '{print $1}'))
SHARD=$(echo $line | (awk '{print $2}'))
number=$RANDOM
let "number %= ${range}"
echo "DDD Index: ${INDEX} Shard: ${SHARD} number: ${number}"
curl -XPOST -H 'Content-Type: application/json' http://localhost:9200/_cluster/reroute? -d '{
"commands" : [ {
"allocate_empty_primary" :
{
"index" : '\"${INDEX}\"',
"shard" : '\"${SHARD}\"',
"node" : '\"${nodeName}\"',
"accept_data_loss" : true
}
}
]
}'
echo "---"
done
Add: -H 'Content-Type: application/json'
logstash | [2024-02-23T10:32:52,252][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
logstash | [2024-02-23T10:32:53,183][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connection pool shut down", :exception=>Manticore::ClientStoppedException, :cause=>#<Java::JavaLang::IllegalStateException: Connection pool shut down>}
logstash | [2024-02-23T10:32:53,184][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ClientStoppedException] Connection pool shut down"}
[2024-02-23T12:31:50,648][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.22.0.3] failed: Connection refused {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.22.0.3] failed: Connection refused", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2024-02-23T12:31:50,649][WARN ][logstash.licensechecker.licensereader] Attempt to validate Elasticsearch license failed. Sleeping for 0.02 {:fail_count=>1, :exception=>"Elasticsearch Unreachable: [http://elasticsearch:9200/_xpack][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.22.0.3] failed: Connection refused"}
[2024-02-23T12:31:50,670][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2024-02-23T12:31:50,693][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
[2024-02-23T12:31:50,818][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-02-23T12:31:51,726][INFO ][org.reflections.Reflections] Reflections took 156 ms to scan 1 urls, producing 132 keys and 464 values
[2024-02-23T12:31:52,549][INFO ][logstash.javapipeline ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-02-23T12:31:52,562][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2024-02-23T12:31:52,566][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2024-02-23T12:31:52,578][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused>}
[2024-02-23T12:31:52,578][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused"}
[2024-02-23T12:31:52,589][INFO ][logstash.outputs.elasticsearch][main] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"your_index_name"}
[2024-02-23T12:31:52,589][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `false`
[2024-02-23T12:31:52,602][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf", "/usr/share/logstash/pipeline/test.conf"], :thread=>"#<Thread:0x53c77787 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-02-23T12:31:53,602][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.0}
[2024-02-23T12:31:53,611][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-02-23T12:31:53,675][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-02-23T12:31:53,677][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-02-23T12:31:53,728][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-02-23T12:31:53,789][INFO ][org.logstash.beats.Server][main][d37f7bea3b35601bcb9ae494fa222934aa490dbeb1e13311e8f9ffd7038d7039] Starting server on port: 5044
[2024-02-23T12:31:53,789][INFO ][org.logstash.beats.Server][main][b2f6a29d1182d9fe7f7007cc3b4e91014867b91cb069881c0709a956fcadf861] Starting server on port: 5044
[2024-02-23T12:31:57,585][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused>}
[2024-02-23T12:31:57,587][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused"}
warning: thread "[main]<beats" terminated with exception (report_on_exception is true):
ArgumentError: wrong number of arguments (given 2, expected 0..1)
translate at /usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/i18n-1.14.1/lib/i18n.rb:210
inputworker at /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:427
start_input at /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:405
[2024-02-23T12:32:00,112][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<ArgumentError: wrong number of arguments (given 2, expected 0..1)>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/i18n-1.14.1/lib/i18n.rb:210:in `translate'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:427:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:405:in `block in start_input'"]}
[2024-02-23T12:32:02,594][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connection pool shut down", :exception=>Manticore::ClientStoppedException, :cause=>#<Java::JavaLang::IllegalStateException: Connection pool shut down>}
[2024-02-23T12:32:02,594][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientStoppedException] Connection pool shut down"}
[2024-02-23T12:32:06,324][INFO ][logstash.javapipeline ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2024-02-23T12:32:06,326][ERROR][logstash.javapipeline ][main] Pipeline worker error, the pipeline will be stopped {:pipeline_id=>"main", :error=>"", :exception=>Java::OrgJrubyExceptions::ThreadKill, :backtrace=>[], :thread=>"#<Thread:0x53c77787 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 dead>"}
An unexpected error occurred! {:error=>#<ArgumentError: wrong number of arguments (given 2, expected 0..1)
[2024-02-23T13:04:05,486][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.78}
[2024-02-23T13:04:05,493][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-02-23T13:04:05,502][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-02-23T13:04:05,503][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-02-23T13:04:05,519][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-02-23T13:04:05,581][INFO ][org.logstash.beats.Server][main][b3252fceecf24b73e9c7d9b099ea8836aebfdc4717a00064aa10d8a18eca722c] Starting server on port: 5044
[2024-02-23T13:04:05,582][INFO ][org.logstash.beats.Server][main][7b13892edad71fd5e4692db1f3a8584d5fc3ffd21e6fa8c4ec5b802cecf2cda3] Starting server on port: 5044
warning: thread "[main]<beats" terminated with exception (report_on_exception is true):
ArgumentError: wrong number of arguments (given 2, expected 0..1)
translate at /usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/i18n-1.14.1/lib/i18n.rb:210
inputworker at /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:427
start_input at /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:405
[2024-02-23T13:04:11,812][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<ArgumentError: wrong number of arguments (given 2, expected 0..1)>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/i18n-1.14.1/lib/i18n.rb:210:in `translate'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:427:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:405:in `block in start_input'"]}
[2024-02-23T13:04:18,079][INFO ][logstash.javapipeline ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2024-02-23T13:04:18,080][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:795) ~[jruby.jar:?]
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:758) ~[jruby.jar:?]
at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:90) ~[?:?]
make sure there is only one conf file in /usr/share/logstash/pipeline/
Logstash Error: Address already in use
logstash | [2024-02-23T14:03:56,564][ERROR][logstash.javapipeline ][main][7b13892edad71fd5e4692db1f3a8584d5fc3ffd21e6fa8c4ec5b802cecf2cda3] A plugin had an unrecoverable error. Will restart this plugin.
logstash | Pipeline_id:main
logstash | Plugin: <LogStash::Inputs::Beats port=>5044, id=>"7b13892edad71fd5e4692db1f3a8584d5fc3ffd21e6fa8c4ec5b802cecf2cda3", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_d28a56b7-5e14-4d5d-830e-a06a60a974ae", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, ssl_enabled=>false, ssl_client_authentication=>"none", ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, ssl_cipher_suites=>["TLS_AES_256_GCM_SHA384", "TLS_AES_128_GCM_SHA256", "TLS_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], ssl_supported_protocols=>["TLSv1.2", "TLSv1.3"], client_inactivity_timeout=>60, executor_threads=>12, add_hostname=>false, tls_min_version=>1, tls_max_version=>1.3>
logstash | Error: Address already in use
logstash | Exception: Java::JavaNet::BindException
logstash | Stack: sun.nio.ch.Net.listen(Native Method)
logstash | sun.nio.ch.ServerSocketChannelImpl.netBind(sun/nio/ch/ServerSocketChannelImpl.java:338)
logstash | sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:294)
logstash | io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:141)
logstash | io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:562)
logstash | io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1334)
logstash | io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:600)
logstash | io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:579)
logstash | io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:973)
logstash | io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:260)
logstash | io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:356)
logstash | io.netty.util.concurrent.AbstractEventExecutor.runTask(io/netty/util/concurrent/AbstractEventExecutor.java:173)
logstash | io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:166)
logstash | io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:470)
logstash | io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:569)
logstash | io.netty.util.concurrent.SingleThreadEventExecutor$4.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:997)
logstash | io.netty.util.internal.ThreadExecutorMap$2.run(io/netty/util/internal/ThreadExecutorMap.java:74)
logstash | io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
logstash | java.lang.Thread.run(java/lang/Thread.java:840)
elasticsearch | {"@timestamp":"2024-02-23T14:03:56.777Z", "log.level": "INFO", "message":"license [dc7ce918-ca2a-4210-aa25-2b0c75568add] mode [basic] - valid", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[656595ea3897][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.license.ClusterStateLicenseService","elasticsearch.cluster.uuid":"2wswu2CoSluxoEHecXibIg","elasticsearch.node.id":"35bKcCN3RQePobt1G-HiKw","elasticsearch.node.name":"656595ea3897","elasticsearch.cluster.name":"docker-cluster"}
logstash | [2024-02-23T14:03:57,566][INFO ][org.logstash.beats.Server][main][7b13892edad71fd5e4692db1f3a8584d5fc3ffd21e6fa8c4ec5b802cecf2cda3] Starting server on port: 5044
^CGracefully stopping... (press Ctrl+C again to force)