Deployment - ReplayProject/ReplayHoneypots GitHub Wiki
Summary
Deployment has been one of the more difficult parts of this project. As of Iteration 2, we have shifted to using Docker to handle the creation, distribution, and management of images of our code. For more details on how Docker works, see the official Docker documentation.
Along with the Docker functionality, we also provide bash scripts through our CLI so honeypots can be deployed without Docker. This lets users choose whether or not they want to use Docker on their systems. Note that this route tends to complicate deployments and is less reliable, since it depends on the conditions and environment of each individual system.
Docker Implementation
Architecture
Docker allows us to split the different responsibilities of the system into separate containers. Right now we use a number of services, most of which are defined in detail in docker-compose.yml. Only the services prefixed with "replay-" are managed by this codebase/team.
Replay-Manager
This service serves the frontend of our management interface, along with the authentication server.
Replay-Honeypot
This service runs the honeypot code and reports logs to a configurable CouchDB endpoint.
CouchDB (external)
This service runs the database(s) for the project. Honeypots can be configured either to write logs directly to the management system's CouchDB, or to run a local CouchDB instance that replicates logs to the management system.
Documentation for this feature coming soon.
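While that documentation is pending, the local-replication option can be sketched with CouchDB's standard `_replicate` API. The database name, hosts, and credentials below are assumptions for illustration, not values from this project:

```shell
# Hypothetical: configure a honeypot's local CouchDB to continuously replicate
# its log database to the management system's CouchDB.
# "logs", "manager-host", and the admin credentials are made-up placeholders.
BODY='{"source": "logs", "target": "http://admin:pass@manager-host:5984/logs", "continuous": true}'
echo "$BODY"

# The request itself (not executed here) would POST to CouchDB's _replicate endpoint:
# curl -X POST http://admin:pass@localhost:5984/_replicate \
#      -H 'Content-Type: application/json' -d "$BODY"
```

With `"continuous": true`, CouchDB keeps the replication running in the background, so logs written locally while the honeypot is disconnected catch up once the manager is reachable again.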
Fauxton (external)
This service runs a web GUI for the CouchDB instance it is configured to connect to. This would most likely be disabled in a production environment.
Viz (external)
This service runs a website that allows us to check in on the state of the docker swarm at a glance. It would probably be disabled in a production environment.
(Screenshot: the interface produced by the visualizer service while a sample deployment is up and running.)
Architecture Summary
- replay-cli & docker manage nodes in a docker swarm.
- replay-honeypots scan configured ports/interfaces and report logs to the CouchDB database.
- replay-manager provides a GUI to browse and export collected logs.
Or, in a slightly more visual way:

```
            +------------------+
            |                  |
            |  Replay Manager  |
            |                  |
            +-------+-----^----+
                    |     |
+---------------+   |     |   +--------------------+
|               |   |     |   |                    |
|  CouchDB      <---+     +---+ Replay Honeypot 1  |
|  (and Fauxton)|         |   |                    |
|               |         |   +--------------------+
+---------------+         |
                          |   +--------------------+
                          |   |                    |
                          +---+ Replay Honeypot 2  |
                              |                    |
                              +--------------------+
```
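The lifecycle above can be sketched as a short command sequence. The stack name `replay` is an assumption, and the real steps live in the project's stack_deploy.sh/service_deploy.sh; here `docker` is stubbed with an echo so the sequence can be shown as a dry run without a running daemon (drop the stub to execute for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the swarm deployment lifecycle described above.
# The stack name "replay" is an assumption, not confirmed by this wiki.
docker() { echo "docker $*"; }   # stub: remove to run against a real daemon

docker swarm init                                  # make this node a swarm manager
docker stack deploy -c docker-compose.yml replay   # start the manager + db services
docker service ls --filter name=replay             # check that replay-* services are up
```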
Scripts / Files
- docker-compose.yml - production environment definitions
- docker-compose.override.yml - development environment definitions
- stack_deploy.sh - deploys the docker-compose definition
- service_deploy.sh - deploys the replay-honeypot service
- teardown.sh - tears down the production definition & the replay-honeypot service, and removes db data
- viz.yml - definition of the visualizer service
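Based only on the description above, teardown.sh might look something like the following. This is a hypothetical sketch, not the project's actual script; the stack, service, and volume names are assumptions, and `docker` is stubbed with an echo so the sketch is runnable without a daemon:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of teardown.sh. Names below are placeholders.
docker() { echo "docker $*"; }   # stub: remove to run against a real daemon

docker stack rm replay                # tear down the production definition
docker service rm replay-honeypot     # remove the standalone honeypot service
docker volume rm replay_couchdb_data  # remove db data
```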
Bash Implementation (WIP)
Design
The management server contains a bash script called deploy.sh. This script is called through the CLI, which prompts the user for the following parameters:
- Keypath: The path to the SSH key for the target server
- Remote IP: The IP address for the target server
- Remote Name: The username for the target server
- Remote Password: The password for the target server
- Repository Path: The path to the tar file containing the code for the honeypot
The script then uses those parameters to copy the tar file over to the remote machine, where it extracts the code and runs PortThreadManager.py, which starts the honeypot. Another script, tar_generator.sh, creates the tar file containing the code needed to run a honeypot.
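The tar_generator.sh/deploy.sh flow can be sketched as below. The paths, hostnames, and honeypot directory layout are assumptions for illustration; the remote steps are shown but commented out, since they need a reachable target server:

```shell
#!/usr/bin/env bash
# Sketch of the bash deployment flow. /tmp paths and the single-file "code"
# are stand-ins for the real repository contents.
set -eu

# tar_generator.sh step: bundle the honeypot code into a tar file
mkdir -p /tmp/honeypot_src
echo 'print("honeypot")' > /tmp/honeypot_src/PortThreadManager.py
tar -czf /tmp/honeypot.tar.gz -C /tmp honeypot_src

# deploy.sh step: copy the bundle and start the honeypot on the target
# (not executed here; KEYPATH/REMOTE_NAME/REMOTE_IP match the CLI prompts above)
# scp -i "$KEYPATH" /tmp/honeypot.tar.gz "$REMOTE_NAME@$REMOTE_IP:~/"
# ssh -i "$KEYPATH" "$REMOTE_NAME@$REMOTE_IP" \
#   'tar -xzf honeypot.tar.gz && cd honeypot_src && python3 PortThreadManager.py'

tar -tzf /tmp/honeypot.tar.gz   # list the bundle's contents as a sanity check
```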