# Guide: Install
## Table of Contents
- Getting Started
- Glossary
- Setting up the Repo & Configuration
- Whole System Deployment
- Honeypot Initial Setup
- Troubleshooting
## Getting Started

This guide details how to install and set up this project. Honeypots need to be run on Debian-based Linux machines. If you are running the project without Docker, each honeypot needs its own machine, and the machines need network access to the management server. This document refers to these installs as "bare metal". Some of the images we depend on are different for ARM architectures (such as Raspberry Pi devices); search the codebase for "ARM_DEP" where these have been noted.

Note that the bare metal guide needs reverification.
## Glossary

**Management Server** - the system that holds a copy of the repo, builds the docker images for the honeypots and management, usually runs the frontend aspect of the system, and acts as a docker swarm manager node.
**DB_URL (connection string)** - a string that defines how to connect to the CouchDB instance. It is of the form `user:password@host:port`. When in a normal docker container/service, using the name of the service will resolve to the database. However, when services like the honeypots run in host networking mode, they must use IP addresses or 0.0.0.0 to use docker swarm ingress routing.
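For example, with the default `admin:couchdb` credentials used elsewhere in this guide (substitute your own values; the IP below is hypothetical):

```
# inside a container attached to the overlay network, the service name resolves
DB_URL=admin:couchdb@couchdb:5984
# from a host-networked honeypot, use the manager machine's IP instead
DB_URL=admin:couchdb@10.0.0.5:5984
```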
## Setting up the Repo & Configuration

Steps to deploy a honeypot from scratch:
- The first step in installing is to get a copy of the repository and its dependencies. Make sure you have Git and NodeJS installed on your manager system via your package manager.
```
# git
$ sudo apt install git -y
# docker-compose
$ sudo curl -L --fail https://github.com/docker/compose/releases/download/1.26.2/run.sh -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
```
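You can sanity-check the install afterwards; `docker-compose version` is a standard subcommand that prints the installed version:

```
$ docker-compose version
```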
- On the machine you wish to deploy the honeypot on, clone the repo
```
$ git clone https://github.com/ReplayProject/ReplayHoneypots.git
# or if using ssh
$ git clone git@github.com:ReplayProject/ReplayHoneypots.git
$ cd ReplayHoneypots/
```
- Next, configure the credentials for the following aspects of the system. Optional locations are marked with the DEV tag. (The locations of the account configs are also marked with TODO statements.)
  - Couchdb Admin Account - `docker-compose.yml`, lines 13 and 14
  - Couchdb Additional Admin Accounts - `config/couch_defaults.ini`, lines 13 to 15
- Update the DB connection string in the following files (note, this uses the credentials from the previous 2 steps):
  - `docker-compose.yml`, line 50 - docker config to deploy honeypots
  - `management/frontend/.env`, line 2 - config for the frontend deployment
  - `honeypots/honeypot/tests/test_all.py`, line 63 (DEV) - file for running honeypot testing
- Default Accounts for Replay Manager UI - `config/frontend_users_seed.json`
  - Use the `management/frontend/passwordhash.sh` script to generate password hashes for new users, and insert these into the JSON document if you would like them accessible at first run. By default, a single admin account is added. See the sketch below.
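For example (a sketch only; the script's exact prompts and output format may differ):

```
# generate a hash, then paste it into a new user entry in
# config/frontend_users_seed.json before first run
$ ./management/frontend/passwordhash.sh
```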
- Configure the public IP that the frontend will be accessed from
  - `management/frontend/package.json`, lines 7 and 8 - public URL config for parcel
- Management CLI Accounts - uses regular UNIX accounts for SSH access
  - Optionally, you can choose to give certain users permission to manage the docker engine instance (makes handling docker much easier). Note that the group membership takes effect on the user's next login.

```
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
```
- Next, you should configure the files in the "config" folder to reflect how you wish to set up the system. Please see the Developers Guide for details on these config files. The most important file is `default_hp_config.json`; this is the base configuration with which all honeypots start. Read the Honeypot Initial Setup section for suggestions on how to get the file ready for your deployment.
## Whole System Deployment

- Follow basic setup for a docker swarm
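If you have not created a swarm before, a minimal sketch looks like this (`docker swarm init` and `docker swarm join` are standard docker commands; the token and manager IP are printed by the init step):

```
# on the manager machine
$ docker swarm init
# on each worker/honeypot machine, paste the join command that init printed
$ docker swarm join --token [token] [manager-ip]:2377
```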
- Check that swarm mode works with this command (you will see output about the swarm manager/worker nodes)

```
$ docker node ls
```
- Ensure the github repo is cloned onto the manager
- Now we need to deploy the "meta" stack. This includes:
  - a "throwaway" docker registry to hold our custom images, or you can use your own (in which case, change the image URIs in the compose .yml files)
  - a visualizer service that is the recommended way to see the swarm's state
  - a docker network for our inter-container communications

```
$ docker network create --driver=overlay --attachable honey-infra
$ ./meta_deploy.sh # (roughly 1 min)
```
After running this, wait a few moments, and then check that the visualizer is available on port 8080. Using `docker service ls` you can see the status of the deployment. (If problems arise, see Troubleshooting, then Google.)
- Build the `replay-manager` and `replay-honeypot` docker images and push them to the registry. This will take much longer on the first build, due to not having a cache or the local dependent docker images. Use this time to start reading the Users Guide!

```
$ docker-compose build --parallel # (roughly 8 mins)
$ docker-compose push # (roughly 2 mins)
```
- Deploy the `docker-compose.yml` file for the couchdb, fauxton, replay-manager, and replay-honeypot services:

```
$ ./stack_deploy.sh
```

- After running this script, wait until the `docker service ls` command shows all services as being replicated.
- You can check the logs for a given service by using `docker service logs honey_replay-manager`.
- At this point, you should be able to see the replay manager service on port 8080 and the services on 8082 through viz.
At this point, you will have honeypots on all hosts, reporting to CouchDB, and a frontend to gather info from the database.

If you want to pause/stop log collection, you do not have to remove the whole stack. Just run this command to take down the honeypot services:

```
$ docker service rm honey_replay-honeypot
```
## Honeypot Initial Setup

This section sets up the CLI; if you are not interested in using it, you can safely skip this part of the install process.
- Add all hosts to `~/.ssh/config` (optional, but makes life easier)

```
# .ssh/config
Host [host]
    HostName [ip]
    User [user]
...
```
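A filled-in entry might look like this (hostname, IP, and user are hypothetical; `IdentityFile` points SSH at the key generated in the next step):

```
Host honeypot1
    HostName 192.168.1.50
    User pi
    IdentityFile ~/.ssh/id_rsa
```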
- Create an ssh key

```
$ ssh-keygen
Enter file in which to save the key ($HOME/.ssh/id_rsa): [OPTIONAL]
Enter passphrase (empty for no passphrase): [OPTIONAL]
Enter same passphrase again: [OPTIONAL]
```
- For each host, copy over your ssh key

```
$ ssh-copy-id [host]
[user]@[ip]'s password: [password for user]
```
- Move to the CLI folder

```
$ cd management/cli
```

- Install requirements

```
$ pip3 install -r requirements.txt
```

- Move to the deployment folder

```
$ cd management/cli/deployment
```

- Copy the honeypot code into a tar file

```
$ ./tar_generator.sh
```
Management system setup is required for the initial deployment. If all the config files already exist, skip this section.
- Install requirements from the "honeypots" folder

```
$ pip3 install -r requirements.txt
```
- Locate the IP address of the machine you would like to mimic.
- Scan the machine with Nmap (or use existing scan output)

```
$ nmap -n -A -p0-65535 -oA honeypot_mimic $IP_ADDR
```
- Move the honeypot_mimic.nmap file onto the honeypot machine you wish to deploy.
- Parse the ports needed to configure a honeypot from the Nmap scan. This command parses the Nmap file and creates a custom config file that will be used when deploying the honeypot.

```
$ python3 NmapParser.py honeypot_mimic.nmap
```
- Capture a PCAP file of how each port responds when it is interacted with on the machine you want to mimic.
  - Wireshark is a great tool for creating a PCAP file. It can be found at https://www.wireshark.org/#download
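If you prefer the command line, tcpdump can capture an equivalent PCAP (the interface name and output filename here are placeholders):

```
# record all traffic to/from the mimicked machine while you interact with its ports
$ sudo tcpdump -i eth0 -w honeypot_mimic.pcap host $IP_ADDR
```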
- Search through the PCAP file by hand and find where each port is being interacted with.
- Extract the raw bytes of the response and copy them into a JSON file. Refer to `config/senddata.json` as an example.
- Put the path of the file you just created into the properties file under the attribute `pcap_data_file`. The properties file is located at `config/properties.cfg`.
  - Pay attention to the path of this file, as it is different for a docker or bare metal install. See the sketch below.
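The edit is a single line in `config/properties.cfg` (a sketch only; match the layout of the existing file, and remember the path differs between docker and bare metal installs):

```
pcap_data_file = [path to the json file you created]
```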
- Run the honeypot:
  - This command uses the connection string that was set up in "Setting up the Repo & Configuration" step 3

```
$ sudo DB_URL="[connection string]" python3 PortThreadManager.py
# or
$ sudo DB_URL="[connection string]" python3 PortThreadManager.py -n [location of nmap file]
```

This command will create your honeypot with the custom ports from the Nmap file and custom responses from the PCAP file.
- After setting up the management system and a virtual machine, go to "management/cli"
- Add a host by running the following command:

```
$ python3 replay_cli.py addhost
```
- Fill out the following information when prompted:
  - Hostname - the id of this host on the CLI
  - Username - the username you use on the host machine
  - IP Address - the IP address of the host machine
  - Port - the port of the ssh server on the host machine
  - SSH Key - the path to the ssh key created for this machine
- Submit the information, then install the honeypot by running:

```
$ python3 replay_cli.py installhoneypot
```
- Select the host you just added and fill out the following information when prompted:
  - Tar File - the path to the tar file containing the honeypot repository
- Submit the information, then start the honeypot by running:

```
$ python3 replay_cli.py starthoneypot
```
- Select the host you just installed a honeypot on and fill out the following information when prompted:
  - Database URL - the URL of the management database
  - Password for [Username]@[IP Address] - the password set up for the virtual machine
- The honeypot on the host will then start and connect to the management server.
## Troubleshooting

If you need to reconfigure a service from a stack, change the related config or compose files, then remove the service and redeploy the stack. Docker will do the heavy lifting of pushing the configuration changes out to the updated service. Here is an example:

```
# I needed to change the DB_URL for the honeypots, so I edited it, then
$ docker service rm honey_replay-honeypot
# now redeploy the stack and watch the output like normal
$ ./stack_deploy.sh
```
If you get errors during a docker image build for replay-manager, ensure that the file `package-lock.json` was not created in the `management/frontend` folder. This file causes dependency issues when copied into the docker image.
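A quick way to clear it before rebuilding (safe to delete, since npm regenerates it on install):

```
$ rm -f management/frontend/package-lock.json
```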
If you get errors about make failing, there is a package of requirements for building all sorts of software. Installing it fixes these errors most of the time.

```
# this will be different in an alpine docker container (use apk and Google the syntax)
$ sudo apt-get install -y build-essential
```
If deployment seems to be having issues starting up honeypots, check that all your config files have the right values. The most common problem here is that the honeypots could not connect to the database. In that case, SSH to the machine you are having trouble with and check the accessibility of the database. There are two ways to approach this:

```
$ curl admin:couchdb@localhost:5984 # from manager machine
# Deploy a temp container and start a shell inside to debug
$ docker run --name temp --rm -it --network honey-infra alpine:3 ash
> apk add curl
# continue debugging
```
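From inside that temp container, the database should be reachable by service name over the `honey-infra` network (assuming the default `admin:couchdb` credentials; the exact DNS name depends on the service name in your compose/stack files):

```
> curl admin:couchdb@couchdb:5984
```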
If the viz service never comes up, try running this command to troubleshoot:

```
$ docker stack ps meta --no-trunc
# if no errors are showing, run this until you see the service is "replicated"
$ watch -n0 docker service ls
# CTRL-C to exit the watch command
```
If it's an architecture issue, simply swap out the image for something more fitting. See the file `meta.yml` for an example.
If code changes are not being reflected in your docker builds, there are a few things that can be done to address it. Docker images can be built with the `--no-cache` option, or you can remove docker caches, images, and volumes from the whole machine with the following command:

```
$ docker system prune --all --volumes --force
```