Elastic Stack - jonatello/lab-musing GitHub Wiki

The Elastic Stack is composed of Elasticsearch, Logstash, and Kibana. This guide uses version 6.4.2.

I did not originally create these Logstash filters; they came from the pfSense forums - https://forum.pfsense.org/index.php?topic=120937.30

In order to run them within a jail, I had to combine them all into a single logstash.conf. For whatever reason, the Logstash package in a FreeNAS jail isn't set up out of the box to look in the conf.d directory; I'm not sure exactly why, but this is what I've observed.

Configuration:

Update the jail

pkg update && pkg upgrade

Configure Elasticsearch

Install the Elasticsearch package

pkg install elasticsearch6-6.4.2_1

Install cURL, which will be useful for interfacing with Elasticsearch

pkg install curl

Configure fstab to mount fdescfs and procfs

echo "fdesc /dev/fd fdescfs rw 0 0" >> /etc/fstab

echo "proc /proc procfs rw 0 0" >> /etc/fstab

If this is a jail with iocage, you will need to mount procfs from the host (replace "elastic" with the jail name)

iocage set mount_procfs=1 elastic

You may need to configure memory locking (again, if this is a jail, you would do this on the host)

sysctl security.bsd.unprivileged_mlock=1

Enable Elasticsearch via rc.conf

sysrc elasticsearch_enable=yes

Configure Elasticsearch to use network host address 0.0.0.0 and network port 9200 (this will be non-clustered). Enable cross-origin resource sharing and set it to allow from any origin.

sed -i '' 's/#network.host: 192.168.0.1/network.host: 0.0.0.0/g' /usr/local/etc/elasticsearch/elasticsearch.yml

sed -i '' 's/#http.port: 9200/http.port: 9200/g' /usr/local/etc/elasticsearch/elasticsearch.yml

echo 'http.cors.allow-origin: "/.*/"' >> /usr/local/etc/elasticsearch/elasticsearch.yml

echo 'http.cors.enabled: true' >> /usr/local/etc/elasticsearch/elasticsearch.yml

Single node tweaks

These steps configure Elasticsearch as a single node rather than a cluster. First, configure Elasticsearch to run as a standalone master node within /usr/local/etc/elasticsearch/elasticsearch.yml:

echo "node.master: true" >> /usr/local/etc/elasticsearch/elasticsearch.yml

Within the same config, set the node to lock its memory rather than letting it swap:

sed -i '' 's/#bootstrap.memory_lock: true/bootstrap.memory_lock: true/g' /usr/local/etc/elasticsearch/elasticsearch.yml

Configure the heap size to be ~50% of available memory via /usr/local/etc/elasticsearch/jvm.options (4GB in this example)

sed -i '' 's/-Xms1g/-Xms4g/g' /usr/local/etc/elasticsearch/jvm.options

sed -i '' 's/-Xmx1g/-Xmx4g/g' /usr/local/etc/elasticsearch/jvm.options

Start Elasticsearch

service elasticsearch start

Memory locking can be verified with the following command, making sure it returns "true"

curl "localhost:9200/_nodes?filter_path=**.mlockall"

Heap size can be verified with the following command, making sure it returns the expected value (4gb in this example)

curl "localhost:9200/_cat/nodes?h=heap.max"

Configure Logstash

Install the Logstash package

pkg install logstash6-6.4.2

Enable Logstash and enable logging to /var/log/logstash.log via rc.conf

sysrc logstash_enable=yes

sysrc logstash_log=yes

sysrc logstash_log_file="/var/log/logstash.log"

Configure the Logstash filters within logstash.conf; at minimum we will want an input, filter, and output. In this case we will use "elastic.logstash.conf"

vi /usr/local/etc/logstash/logstash.conf
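As a rough sketch, a combined single-file logstash.conf would follow this shape (the ports mirror the ones used elsewhere in this guide — 5140 for pfSense syslog, 5044 for Beats — but the actual filters should come from the pfSense forum thread):

```
# Sketch of a minimal single-file logstash.conf for this setup
input {
  udp {
    port => 5140        # remote syslog from pfSense
    type => "syslog"
  }
  beats {
    port => 5044        # Filebeat/Metricbeat shippers
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```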

Create a directory to hold custom pattern definitions and then add nginx_pattern.conf using "elastic.logstash.nginx.conf"

mkdir -p /usr/local/etc/logstash/patterns.d

vi /usr/local/etc/logstash/patterns.d/nginx_pattern.conf
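For illustration, a custom pattern file for nginx access logs might contain something like the following (the pattern name and captured fields here are hypothetical — use whatever "elastic.logstash.nginx.conf" actually defines):

```
# Hypothetical grok pattern matching the default nginx combined log format
NGINX_ACCESS %{IPORHOST:remote_addr} - %{DATA:remote_user} \[%{HTTPDATE:time_local}\] "%{WORD:verb} %{DATA:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:status} %{NUMBER:body_bytes_sent} "%{DATA:referrer}" "%{DATA:agent}"
```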

Start Logstash

service logstash start

Configure Kibana

Install the Kibana package

pkg install kibana6-6.4.2

Enable Kibana

sysrc kibana_enable="yes"

Default configurations should be fine

Start Kibana

service kibana start

Configure Beats dashboards

To get all the Beats dashboards into Kibana, it's easiest to install the Beats package locally

pkg install beats-6.4.2

To install a specific dashboard, such as for Metricbeat

metricbeat setup --dashboards

Configure Nginx

Install the Nginx package

pkg install nginx

Configure Nginx to proxy to Kibana on port 5601 with SSL termination as well as basic authentication

vi /usr/local/etc/nginx/nginx.conf

Or use "elastic.nginx.conf"
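A minimal server block along these lines would handle the proxying (a sketch — the server_name and listen port are assumptions, while the key, certificate, and htpasswd paths match the steps in this guide):

```
# Sketch: SSL termination + basic auth in front of Kibana on 5601
server {
    listen 443 ssl;
    server_name kibana.example.com;

    ssl_certificate     /usr/local/etc/pki/tls/certs/ssl.crt;
    ssl_certificate_key /usr/local/etc/pki/tls/private/ssl.key;

    auth_basic           "Restricted";
    auth_basic_user_file /usr/local/etc/nginx/kibana.example.com.htpasswd;

    location / {
        proxy_pass http://localhost:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```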

Create private key and certificate directories

mkdir -p /usr/local/etc/pki/tls/certs /usr/local/etc/pki/tls/private

Generate an RSA private key and certificate

openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /usr/local/etc/pki/tls/private/ssl.key -out /usr/local/etc/pki/tls/certs/ssl.crt

Enable Nginx via rc.conf

sysrc nginx_enable=yes

Install the htpasswd Python package

pkg install py27-htpasswd-2008.03.05_2

Run htpasswd to create the file to store the hashed credentials and add our "jonatello" user (replace %Password% appropriately)

python2.7 /usr/local/bin/htpasswd.py -c -b /usr/local/etc/nginx/kibana.example.com.htpasswd jonatello %Password%

Start Nginx

service nginx start

Configure IPFW

Enable IPFW

sysrc firewall_enable=yes

Enable firewall logging

sysrc firewall_logging=yes

Use custom firewall script rules within "/etc/ipfw.rules"

sysrc firewall_script="/etc/ipfw.rules"

Create the /etc/ipfw.rules file with the appropriate rules
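As a rough sketch, an /etc/ipfw.rules script for this stack might look like the following (which ports you open depends on your environment; these mirror the listeners used in this guide):

```
#!/bin/sh
# Sketch of /etc/ipfw.rules - adjust ports/addresses to your environment
fwcmd="/sbin/ipfw"
${fwcmd} -q -f flush                                # start from a clean slate
${fwcmd} add allow ip from any to any via lo0       # loopback
${fwcmd} add allow tcp from any to me 443 in        # Kibana via Nginx
${fwcmd} add allow udp from any to me 5140 in       # syslog from pfSense
${fwcmd} add allow tcp from any to me 5044 in       # Beats shippers
${fwcmd} add allow ip from me to any out keep-state # outbound + replies
${fwcmd} add deny log ip from any to any            # log everything else
```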

Start IPFW

service ipfw start

Usage

All endpoint devices will need to be configured in order to send their logs to the Elastic Stack. Most network devices have a syslog feature built in. Some endpoints can take advantage of Filebeat. There's a lot of other methods to get this done as well, mostly using Logstash plugins - https://www.elastic.co/guide/en/logstash/6.4/input-plugins.html.
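For endpoints that can run Filebeat, a minimal filebeat.yml pointed at the Logstash Beats input might look like this (a sketch — the log path and the Logstash address are assumptions):

```
# Sketch of a minimal filebeat.yml shipping logs to Logstash on 5044
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log      # assumed log location on the endpoint

output.logstash:
  hosts: ["10.0.0.1:5044"]  # assumed address of the Elastic Stack jail
```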

Elasticsearch

Upload a template from a JSON file to be used with an index (filebeat; the template can be copied from ~/farmalytics/farmalytics.template.json). In this case the template is modified to set the number of shards to 1 and the number of replicas to 0, since we are using a single-node Elasticsearch setup

curl -XPUT 'http://localhost:9200/_template/filebeat' -H 'Content-Type: application/json' -d @"$HOME/farmalytics/farmalytics.template.json"
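For reference, the shard/replica portion of such a template might look like this in 6.x (a sketch — the index pattern is an assumption, and the real template's mappings are omitted):

```
{
  "index_patterns": ["filebeat-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}
```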

Verify the template has been uploaded

curl localhost:9200/_cat/templates

To reindex, delete the existing indices

curl -XDELETE 'localhost:9200/farmalytics-*'

Give it some time for documents to be written, then verify the number of shards/replicas is 1/0

curl localhost:9200/_cat/indices

Kibana

Create index pattern

  • Log into Kibana
  • Configure an index pattern, "logstash-*" > Next
  • Configure a timestamp, "@timestamp" > Create

Configure pfSense to send logs (replace IP address appropriately)

  • Log into pfSense, Status > System Logs > Settings
  • Enable 'Send log messages to remote syslog server'
  • Remote log servers: '10.0.0.1:5140'

Troubleshooting

Networking

Verify all programs are listening: Elasticsearch (9200, 9300), Logstash (5140, 5044, 9600), Kibana (5601), and Nginx (80)

sockstat -4 -l

Use tcpdump to watch for traffic on port 5140 or IP, depending on what is being verified

tcpdump port 5140

tcpdump host 10.0.0.50

Logstash

Run Logstash in the foreground with debug logging to make troubleshooting easier (the "agent" subcommand and "--debug" flag from older releases are gone in 6.x)

/usr/local/logstash/bin/logstash -f /usr/local/etc/logstash/logstash.conf --log.level=debug

Validate the configuration file (in 6.x, "--config.test_and_exit" replaces the old "--configtest" flag)

/usr/local/logstash/bin/logstash --config.test_and_exit -f /usr/local/etc/logstash/logstash.conf

Elasticsearch

Verify Elasticsearch index status and health with cURL

curl -XGET "http://localhost:9200/_cat/indices"

Get an index to verify details like activity and health (filebeat-*)

curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'