Lab 1.0: Setting Up Elastic in AWS - squatchulator/Tech-Journal GitHub Wiki

Lab 1 - Setting up Elastic in AWS

This lab walks through setting up the ELK stack on an AWS instance. It is relatively simple despite the number of steps. It uses Elastic version 7.17.4, which is a bit outdated but is still built on Elastic's original open-source license, so it is a little easier to play with, and it still lines up well with Elastic's current documentation.

Setup

  • Navigate to the AWS Canvas page, and click Start Lab on the Learner Lab page. Wait for the AWS indicator to turn green, and click it when it does. You will be redirected to the AWS dashboard.
  • In the services search bar, search for "EC2". Click it to get to the EC2 dashboard.
  • On the left panel, click Instances. Click the orange "Launch Instance" button.
    • On this screen:
      • Name the server your_name ELK Server
      • Select Quick Start Ubuntu as the OS image
      • Select t2.medium as the Instance Type
      • Create a new RSA key pair and name it your_name-elk-key. Make sure it's a .pem.
        • Download this to your computer and keep it somewhere safe!
      • Now in Network Settings:
        • Create a Security Group
        • Check Allow SSH Traffic from Anywhere (0.0.0.0/0)
        • Check Allow HTTPS Traffic from Internet
        • Check Allow HTTP Traffic from Internet
        • Leave the defaults for Advanced Details
      • In Storage Settings:
        • Set the storage size to 30 GiB.
  • Now we can start the instance with the Launch Instance button. It may take a minute or two before it's accessible, just monitor the instance state to verify it's running.

SSH to AWS Server

  • Open a command prompt and navigate to the directory where you downloaded your key.
  • From this directory, run the command ssh -i your_name-elk-key.pem ubuntu@<public ip>
    • NOTE: If on Windows and getting an error regarding file permissions being "too open", do the following:
    1. Right-click on the private key file (your_name-elk-key.pem).
    2. Select "Properties" from the context menu.
    3. Go to the "Security" tab.
    4. Click on the "Advanced" button.
    5. In the "Advanced Security Settings" window, click on the "Disable inheritance" button at the bottom.
    6. When prompted, choose "Remove all inherited permissions from this object" and then click "Apply."
    7. After removing inherited permissions, click on the "Add" button to add a new user.
    8. In the "Permission Entry" window, click on "Select a principal".
    9. In the "Enter the object name to select" field, type your Windows username and click "Check Names" to verify. Click "OK" to close the window.
    10. In the "Permission Entry" window, check the "Full Control" box under "Basic Permissions."
    11. Click "OK" to close the "Permission Entry" window.
    12. Click "Apply" and then "OK" to close the "Advanced Security Settings" window.
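On macOS or Linux the same "permissions too open" error is fixed with a single chmod (using the key file name from the steps above):

```shell
# Restrict the key to owner read/write; ssh rejects private keys
# that are readable by group or other users.
chmod 600 your_name-elk-key.pem
```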
  • In AWS, click on the Instance ID and scroll down to the Details panel. Next to it there should be a tab called Security; click it, then click the link under Security groups.
  • Edit the Inbound Rules:
    • Add custom TCP port 5601 with source Any IPv4 for Kibana
    • Add custom TCP port 9200 with source Any IPv4 for Elasticsearch

Building the Stack

Installing Elasticsearch

  • First, add Elastic's signing key to the keyring
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
  • Then install apt-transport-https
sudo apt-get update
sudo apt-get install apt-transport-https
  • Now add the repo to your apt list
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
  • Now we can install Elasticsearch
sudo apt-get update
sudo apt-get install elasticsearch
  • The package installs a configuration file at /etc/elasticsearch/elasticsearch.yml that we need to edit. Configure the following:
network.host: "<private IP>"
http.port: 9200
cluster.initial_master_nodes: ["<private IP>"]
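As an aside, for a one-node lab like this, Elasticsearch also supports a single-node shortcut that can be used instead of cluster.initial_master_nodes (either approach works; don't set both):

```yaml
discovery.type: single-node
```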
  • Now starting elasticsearch should work! Try sudo systemctl start elasticsearch.
  • To verify that it works, run curl http://<private IP>:9200

Installing Logstash

  • Logstash requires a suitable JRE to run properly, so we need to install that first. Do this with sudo apt-get install default-jre. Verify it with java -version.
  • Since Elastic's GPG key and repository were already added earlier, Logstash can be installed from the same repo with a simple sudo apt-get install logstash.

Creating a Data Pipeline

  • Logstash needs some data to play with, so we are going to pull it from a repo containing some Apache access logs. These logs are massive (around 17,000 lines), so copy them from this URL and paste them into a file called apache-daily-access.log inside a new /logstash directory. NOTE: The paste will take a few minutes since it's so large. Run wc -l apache-daily-access.log afterward to verify that all the lines pasted correctly.
  • Now we need to change the owner/group of the logstash folder to the logstash user with
sudo chown -R logstash /logstash
sudo chgrp -R logstash /logstash
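The two commands above can also be combined into one, since chown accepts an owner:group pair:

```shell
# Set both owner and group to logstash in a single pass
sudo chown -R logstash:logstash /logstash
```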

Creating a Logstash Config

  • We need a config that ingests the sample apache logs and outputs them to Elasticsearch. Make a new one with sudo nano /etc/logstash/conf.d/apache-01.conf and enter the following:
input {
  file {
    path => "/logstash/apache-daily-access.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["<private IP>:9200"]
  }
  # stdout {}  # uncomment to also print events to the console for debugging
}
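For reference, here is a hypothetical Apache combined-format line like the ones %{COMBINEDAPACHELOG} parses (the field values are made up). The first whitespace-separated field is what grok maps to clientip, which the geoip filter then uses as its source:

```shell
# A made-up combined-format access log line:
line='83.149.9.216 - - [17/May/2015:10:05:03 +0000] "GET /index.html HTTP/1.1" 200 1234 "-" "Mozilla/5.0"'
# The first field is the client IP that grok extracts as "clientip":
echo "$line" | awk '{print $1}'   # -> 83.149.9.216
```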

  • To make sure the config has no errors, run the following command. It might take a minute. It should give you an OK message if everything is set correctly.
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/apache-01.conf
  • Now we can start it up with sudo service logstash start! Next, query Elasticsearch to see if the new Logstash index was created. The status should be yellow, but the index should be open. The doc count shouldn't be zero; if it is, wait a few minutes to let the data index before trying again. If it still doesn't change, that may indicate an issue with the apache log file, such as its path or the permissions on the file.
curl http://<private IP>:9200/_cat/indices
  • Assuming this works, we can see the data with the following command:
curl http://<private IP>:9200/<your index name>/_search?pretty=true

Installing Kibana

  • Kibana is easy to install, just like Logstash: a simple sudo apt-get install kibana.
  • Once installed, edit the config file with sudo nano /etc/kibana/kibana.yml and add the following:
server.port: 5601
server.host: '<private IP>'
elasticsearch.hosts: ["http://<private IP>:9200"]
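If you would rather not hard-code the private IP in kibana.yml, Kibana can also be told to listen on all interfaces (a common lab shortcut; keep in mind the security group rules make this reachable from the internet):

```yaml
server.host: "0.0.0.0"
```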
  • Now we can start it up with sudo systemctl start kibana. The command returns quickly, but it might take a few minutes before Kibana is actually up and serving.
  • In any browser, navigate to http://<public IP>:5601 to get to the Kibana home page. Yay!

Adding Logstash Index Pattern

  • Click the burger menu to open the navigation pane, and head to Stack Management -> Kibana -> Index Patterns, and select Create Index Pattern. Call the pattern logstash-*, and select @timestamp as the timestamp field. Now we can hit Create Index Pattern to make it.
  • Go back to the burger menu, and go to Analytics -> Discover. Using the time filter in the upper right, change the date range the logs go back to, to about 2 years ago -> now. This is important because the logs are older and will not show up if you don't change this!