# Lab 4.2 Auditbeat with Logstash
## Helpful Info
The public IPv4 address changes with every new session.

Current public IPv4 address in use: `44.203.77.137`

To access your instance: `ssh -i hannelore-elk-key.pem ubuntu@<public-ip>`

Private IPv4 address: `172.31.87.23`
On your next login, make sure to start everything again:

- Start in this order: Elasticsearch, Logstash, Kibana (see the startup sketch below)

To get to the Kibana dashboard, enter `http://<public-ip>:5601` in your browser.
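Since everything on the lab box runs as a systemd service, a minimal startup sketch might look like this (assuming the default service names installed by the Elastic .deb packages):

```
# Start the stack in dependency order after each new session
sudo systemctl start elasticsearch
sudo systemctl start logstash
sudo systemctl start kibana

# Confirm all three came up before continuing
sudo systemctl status elasticsearch logstash kibana --no-pager
```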
## Helpful Vocab!
Auditbeat is another Elastic Beat: a lightweight shipper that you can install on your servers to audit the activities of users and processes on your Linux systems. You can also use Auditbeat to detect changes to critical files, like binaries and configuration files, and identify potential security policy violations.

lightweight shipper
- a component used to collect and ship logs or audit data to Elasticsearch for analysis and visualization
For this lab, instead of having Auditbeat output directly to Elasticsearch, you will configure it to output to Logstash, which will then forward the events to Elasticsearch.
## Pre-Lab: Add first-pipeline.conf to the Logstash service startup
In Lab 4-1, you were running Logstash in a terminal window (and it may still be running if you just moved on from Lab 4-1).

To have Logstash read your config file at startup, it needs to be copied to the /etc/logstash/conf.d directory:

- `cp` your `first-pipeline.conf` file to `/etc/logstash/conf.d`
- Logstash will include any files matching `*.conf` in that folder (and therefore start the tcp/5044 beats listener)
- If Logstash is still running in a terminal, press CTRL-C to stop it
- Use `sudo systemctl start logstash` to run it as a service (a sketch of the pipeline file follows below)
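For reference, a minimal sketch of what the first-pipeline.conf from Lab 4-1 should contain once copied into place (the exact IP and any filter section are assumptions; the important parts are the beats listener on 5044 and, per the troubleshooting in Step 7 below, an elasticsearch hosts entry that points at your server):

```
# /etc/logstash/conf.d/first-pipeline.conf (sketch, not the exact Lab 4-1 file)
input {
  beats {
    port => 5044                      # the listener Auditbeat will ship to
  }
}

output {
  elasticsearch {
    hosts => ["172.31.87.23:9200"]    # your private IPv4 address; adjust to your instance
  }
}
```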
## Lab

### Step 1: Installing Auditbeat

- `sudo apt-get install auditbeat`
### Step 2: Configuring Auditbeat for Logstash

- Edit the /etc/auditbeat/auditbeat.yml file to:
  - Comment out the Elasticsearch output (the output.elasticsearch and hosts lines)
  - Uncomment the Logstash output (output.logstash, and update its hosts line with your server IP) - see the sketch below
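A sketch of roughly how the two output sections should look after the edit (the surrounding comments in the shipped auditbeat.yml will differ, and the IP shown is just this lab's private address):

```yaml
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.31.87.23:5044"]   # your server IP, port matching first-pipeline.conf
```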
### Step 3: Build the Auditbeat index in Elasticsearch

A connection to Elasticsearch is required to build the index. Since Logstash is the enabled output, you need to temporarily disable that output and enable Elasticsearch by using the -E option on the command line:

    auditbeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["your_ip_address:9200"]'
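To confirm the setup worked, one option (a sketch, assuming Elasticsearch is reachable on port 9200 without authentication) is to list the Auditbeat index templates directly:

```
curl 'http://your_ip_address:9200/_cat/templates/auditbeat*?v'
```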
### Step 4: Load the Auditbeat dashboards in Kibana

To load dashboards when the Logstash output is enabled, you need to temporarily disable the Logstash output and enable Elasticsearch:

    auditbeat setup -e \
      -E output.logstash.enabled=false \
      -E output.elasticsearch.hosts=['your_ip_address:9200'] \
      -E output.elasticsearch.username=auditbeat_internal \
      -E setup.kibana.host=your_ip_address:5601
### Step 5: Start the Auditbeat service

`sudo systemctl start auditbeat`
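Optionally (a sketch using standard systemd commands, not something the lab requires), you can also enable the service so it survives reboots and verify that it started cleanly:

```
sudo systemctl enable auditbeat
sudo systemctl status auditbeat --no-pager
```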
TROUBLESHOOTING: My Auditbeat was failing to start upon `sudo systemctl start auditbeat`.

I ran `sudo auditbeat test config` to check whether the configuration was correct, and it said there were no outputs defined in the config file (which was strange, since I had defined the Logstash output the day before while working on this). I realized I had only left the hosts line uncommented, so I uncommented the two lines above it as well.

I ran the config command again, and it said the hosts I entered could not be mapped to. I commented the middle line back out, and after that change Auditbeat started successfully!
### Step 6: Create some events

- `cd` to `/etc`
- Create a new file using nano (see the sketch below)
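For example (the filename here is just the one used later in this lab; any new file under /etc should generate an event):

```
cd /etc
sudo nano feet.txt    # add a line or two of text, then save and exit
```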
### Step 7: View the process and file info in Kibana
TROUBLESHOOTING: The Auditbeat data was still not showing up in Kibana.

I ran `auditbeat test output` to check that I could actually reach the output defined in the config file, and I got an error.

I went into the Logstash sample config file and entered the host. The problem ended up being that I hadn't defined my private IP host in my first-pipeline.conf file, because I had copy-pasted it from the lab and forgotten to add that back in.
I made a file called feet.txt, and it shows up in the logs!
Submit Screenshot of the file creation in the logs
### Step 8: Find the info directly in Elasticsearch

Submit Screenshot of searching in Elasticsearch
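One way to query the index from the command line (a sketch; auditbeat-* is the default Beats index pattern, and the exact fields returned will vary by Auditbeat version):

```
curl 'http://your_ip_address:9200/auditbeat-*/_search?q=feet.txt&pretty'
```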
### Step 9: Generate some other actions and submit screenshots

- Add user `greg`
- `cat /etc/passwd`
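A sketch of the commands for these two actions (adduser is the interactive Debian/Ubuntu wrapper; useradd would work as well):

```
sudo adduser greg     # creates the user, which also modifies files under /etc
cat /etc/passwd       # view the passwd file, per the lab instructions
```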