Setup Infrastructure - froyo75/SpREaD GitHub Wiki
Install Ansible (Ubuntu/Debian)
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common curl
sudo apt-add-repository "deb [arch=amd64] http://ppa.launchpad.net/ansible/ansible/ubuntu focal main"
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367
sudo apt-get update && sudo apt-get install ansible
Install Terraform (Ubuntu/Debian)
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common curl
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install terraform
Install Azure CLI to authenticate with the Azure Backend with Terraform (Ubuntu/Debian)
The Terraform Azure provider (aka azurerm) requires the "az" CLI tool to obtain an authentication token!
sudo apt-get install ca-certificates curl apt-transport-https lsb-release gnupg
curl -sL https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/microsoft.gpg > /dev/null
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | sudo tee /etc/apt/sources.list.d/azure-cli.list
sudo apt-get update && sudo apt-get install azure-cli
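Once the CLI is installed, run az login so Terraform can obtain a token. As an alternative sketch (not covered by this wiki), the azurerm provider can also authenticate from service-principal environment variables; every value below is a placeholder to replace with your own:

```shell
# Interactive login (device-code/browser flow): az login
# Alternative: service-principal credentials read by the azurerm provider.
# All values below are placeholders.
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="<service principal secret>"
```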
Note
Ansible is agentless, using SSH to push changes from a single source to multiple remote resources.
An SSH key pair needs to be defined to allow Ansible to connect to the target hosts and configure the new VPS instance. For simplicity, this key pair can be stored in the ./Ansible/.ssh folder, but it must be referenced in the Terraform/infra.env file using the TF_VAR_do_ssh_public_key and TF_VAR_do_ssh_private_key variables.
Create a new SSH key pair with ed25519 algorithm
ssh-keygen -t ed25519 -C "[email protected]" -f ./Ansible/.ssh/id_rtops
All API tokens and global variables need to be stored in the Terraform/infra.env file.
TF_VAR_ansible_path=../../../Ansible
TF_VAR_aws_access_key_id=<AWS Access Key>
TF_VAR_aws_secret_access_key=<AWS Secret Access Key>
TF_VAR_aws_ssh_user=root
TF_VAR_aws_ssh_key_name=rtops
TF_VAR_aws_ssh_public_key=$TF_VAR_ansible_path/.ssh/id_rtops.pub
TF_VAR_aws_ssh_private_key=$TF_VAR_ansible_path/.ssh/id_rtops
TF_VAR_do_token=<DigitalOcean API key>
TF_VAR_do_ssh_user=root
TF_VAR_do_ssh_key_name=rtops
TF_VAR_do_ssh_public_key=$TF_VAR_ansible_path/.ssh/id_rtops.pub
TF_VAR_do_ssh_private_key=$TF_VAR_ansible_path/.ssh/id_rtops
TF_VAR_dns_token=<Registrar API key (e.g. Gandi.net)>
TF_VAR_mailgun_token=<Mailgun API key>
TF_VAR_op_name=rtops
#TF_LOG=debug
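For instance, the TF_VAR_* values can be exported into the current shell before invoking Terraform. This is only a minimal sketch (init-infra.sh may already load the file itself), using a stand-in env file:

```shell
# Create a minimal stand-in for Terraform/infra.env (illustrative values):
mkdir -p Terraform
cat > Terraform/infra.env <<'EOF'
TF_VAR_op_name=rtops
TF_VAR_do_ssh_user=root
EOF
# Auto-export everything defined while sourcing, so that Terraform
# commands (plan/apply) see the TF_VAR_* variables:
set -a
. ./Terraform/infra.env
set +a
echo "$TF_VAR_op_name"   # → rtops
```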
When deploying a new instance, the SSH server configuration is applied using the Ansible role harden_sshd with the default template sshd_config.j2. All SSH public keys (for public key authentication) defined in the Ansible/ssh/<op_name> folder will be automatically added to the authorized_keys file for the user specified by ansible_user.
To grant new users access to the instances, a new subfolder needs to be created in the Ansible/ssh/<op_name> folder, and its path needs to be defined in the provided Terraform template (.tfvars file) as vps_ssh_authorized_keys_folder.
Example
Ansible/ssh/rtX:
id_rt_toto.pub
id_rt_tata.pub
Note
Specific authorized key options can be defined for all users using the vps_authorized_key_options variable within the Terraform template.
axiom.tfvars
[...]
vps_ssh_authorized_keys_folder = "./ssh/rtops"
vps_authorized_key_options = "no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-user-rc,from=\"0.0.0.0/0\""
[...]
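Conceptually, the role builds the authorized_keys file by prefixing each public key with the vps_authorized_key_options string. A pure-shell sketch with placeholder keys (the actual Ansible role logic may differ):

```shell
# Assemble an authorized_keys file from a folder of *.pub files,
# prefixing each entry with a shared options string (sketch only):
OPTS='no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-user-rc'
mkdir -p ssh/rtops
echo 'ssh-ed25519 AAAAC3Placeholder1 rt_toto' > ssh/rtops/id_rt_toto.pub
echo 'ssh-ed25519 AAAAC3Placeholder2 rt_tata' > ssh/rtops/id_rt_tata.pub
: > authorized_keys
for key in ssh/rtops/*.pub; do
  printf '%s %s\n' "$OPTS" "$(cat "$key")" >> authorized_keys
done
cat authorized_keys
```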
Access to a specific or multiple instances can be revoked using the Ansible playbook revoke-ssh-access.yml by providing a list of SSH public keys stored in the Ansible/ssh/quarantine folder, as follows:
$ ls -1 Ansible/ssh/quarantine/
id_rto1.pub
id_rto2.pub
id_rto3.pub
The createNewTerraInfra.sh script is designed to simplify the creation of a new infrastructure project folder.
Usage: ./createNewTerraInfra.sh <Operation Name (example:rtX)> <Admin Email Address (example:[email protected])> <Authorized Keys Folder (example:./ssh/rtX)> <Infra Name (example:mycompany)> <Provider Name (aws|digitalocean|azure)> <Service Type 1,Service Type 2,... (example: recon,axiom,simple-smtp,simple-cdn,evilginx,evilginx-cdn,evilginx-cdn-adfs,nextcloud,mailu,gophish,c2proxy,clonesite,brc4,cs,havoc)> [global infra env variables file path (default: Terraform/infra.env)]
Once the new project folder is created successfully, simply modify the template variables (in the <vps_service_type>.tfvars file) according to your needs and run the init-infra.sh script to deploy your new infrastructure.
Run the "init-infra.sh" bash script to deploy or destroy the infrastructure
./init-infra.sh recon.tfvars <deploy|destroy>
Warning
Several services need to be configured before deploying. Refer to the table below.
Service name | Configuration required | Ansible Playbook |
---|---|---|
brc4 | yes | init-c2server.yml |
cs (Cobalt Strike) | yes | init-c2server.yml |
havoc | yes | init-c2server.yml |
evilginx | yes | init-evilginx.yml |
evilginx-cdn | yes | init-evilginx-cdn.yml |
mailu | yes | init-mailu.yml |
gophish | no | init-gophish.yml |
clonesite | yes | init-clonesite.yml |
c2proxy | yes | init-c2proxy.yml |
axiom | yes | init-axiom.yml |
recon | no | n/a |
Note
By default, a recon service acts as a SOCKS proxy and can be useful for conducting OSINT recon (footprinting, Google dorking, deep searching for file metadata, etc.).
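For example, the SOCKS tunnel can be opened with OpenSSH dynamic port forwarding; the host alias, IP address, and key path in this ~/.ssh/config fragment are placeholders:

```
# Hypothetical ~/.ssh/config entry for a recon instance
Host recon
    HostName 203.0.113.10
    User root
    IdentityFile ~/.ssh/id_rtops
    # Expose a local SOCKS5 proxy on 127.0.0.1:1080 through the instance
    DynamicForward 1080
```

Then run ssh -N -f recon and point your tooling at socks5://127.0.0.1:1080 (e.g. curl --socks5-hostname 127.0.0.1:1080 https://example.com).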
When deploying a new instance, several rules are automatically applied using the Ansible role harden_iptables. The Ansible/iptables/<vps_service_type> folder contains an iptables rules configuration (rules.v4) for each specific service type (e.g. brc4, clonesite, evilginx2, gophish, or website).
To define a new custom iptables rules configuration for a specific host, simply create a new subfolder in Ansible/iptables/<vps_service_type>/ with the instance name given in the <vps_service_type>.tfvars template file. Then, create/copy your own rules.v4 iptables rule configuration into it.
Example
$ mkdir Ansible/iptables/axiom/rtX-axiom
$ cp rules.v4 Ansible/iptables/axiom/rtX-axiom
Tip
Various samples of rules.v4 iptables rule configurations are available in Ansible/iptables/<vps_service_type> folder.
Note
If no iptables rules are provided, the default rules located in Ansible/iptables/<vps_service_type>/rules.v4 will be used instead for the corresponding service type (e.g. c2proxy, c2server, clonesite, evilginx2, gophish, mailu, axiom, recon or website).
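For reference, a minimal rules.v4 in iptables-restore format could look like the following; this is an illustrative sketch only, not the defaults actually shipped per service type:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow loopback and established/related traffic
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow inbound SSH only
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
COMMIT
```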
To set up a new Axiom instance, you need to create a new Axiom profile and add it to the Ansible folder Ansible/axiom/<ansible_inventory_hostname>. If no profile is found, the configuration file Ansible/axiom/default.json will be used instead.
Important
When deploying a new Axiom instance, a new base image/snapshot will be created according to the provided Axiom configuration (Ansible/axiom/default.json). Don't forget to remove the associated Axiom image/snapshot before destroying the Axiom instance, using the axiom_aws_cleanup.sh script, which is available in the home folder on the Axiom instance or at Ansible/roles/install_axiom/files/axiom_aws_cleanup.sh.
Tip
A recon_axiom.sh script is available in the home folder on the Axiom instance to automate the process of reconnaissance and information gathering. It provides various techniques, including passive and active subdomain enumeration, port scanning, taking screenshots, nuclei scans, fetching known URLs, etc.
To deploy a new Evilginx instance, the glue record option needs to be set to true as follows:
evilginx-cdn.tfvars
vps_glue_record = true
Glue records will be automatically configured for the given domain (using the vps_domain variable) within the provided registrar (Gandi.net) with the Ansible role configure_gandi_glue_records.
Note
A glue record provides the IP address of a nameserver so that the domain can be resolved in the case where the name server is hosted at the same domain as the domain name itself. Setting up custom nameservers involves configuring glue records with the domain registrar.
Important
A new base domain needs to be set using the vps_domain variable for Evilginx
When deploying a new Evilginx instance with CDN support, the CDN endpoint names need to be declared in the <vps_service_type>.tfvars template file as follows:
evilginx-cdn.tfvars
vps_cdn_endpoints = "app-login.azureedge.net app-www.azureedge.net app-aadcdn.azureedge.net"
- A new DNS A record will be automatically configured for the given domain (using the vps_domain variable) via Gandi's APIs with the Ansible role configure_gandi_dns_records.
- A new Let's Encrypt SSL/TLS certificate will also be deployed using the certbot utility with the Ansible role configure_letsencrypt.
Important
The phish_sub parameters in the provided phishlets Docker/evilginx-cdn/app/phishlets/adfs.yaml or Docker/evilginx-cdn/app/phishlets/o365.yaml are set with default endpoint names; don't forget to modify them according to your configuration.
Important
The glue record option needs to be set to false: vps_glue_record = false
Note
When no vps_dns_template is defined, the default DNS template default-a.j2 will be automatically used by the Ansible role configure_gandi_dns_records.
A clonesite can be useful for hosting a legitimate website or a clonesite to maintain a good domain reputation. Later, this website can be switched to a c2proxy using the init-c2proxy.yml Ansible playbook.
When deploying a new Clonesite instance, a new website will be set up with the Ansible role install_clonesite. The Ansible/clonesite/<server_domain> folder contains all HTML files and folders for a specific domain.
To define a new Clonesite for a specific domain, simply create a new subfolder in Ansible/clonesite/ with the domain name (vps_domain) given in the <vps_service_type>.tfvars template file. Then, copy your web root folder into it.
Example
$ mkdir Ansible/clonesite/toto.com
$ cp -r <toto.com_html_folder> Ansible/clonesite/toto.com
- A new DNS A record will be automatically configured for the given domain (using the vps_domain variable) via Gandi's APIs with the Ansible role configure_gandi_dns_records.
- A new Let's Encrypt SSL/TLS certificate will also be deployed using the certbot utility with the Ansible role configure_letsencrypt.
Important
The glue record option needs to be set to false: vps_glue_record = false
Note
index.html or index.htm is always the main file. All html files and folders will be copied into the default /var/www/html/ folder.
Note
When no vps_dns_template is defined, the default DNS template default-a.j2 will be automatically used by the Ansible role configure_gandi_dns_records.
Redirect rules are automatically deployed when invoking the Ansible role install_redirect_rules. The Ansible/redirect_rules/<c2proxy|clonesite|gophish> folders contain a redirect rules configuration (redirect.rules) for each specific service type (e.g. c2proxy, clonesite, or gophish).
To define a new custom redirect rules configuration for a specific domain, simply create a new subfolder in Ansible/redirect_rules/<c2proxy|clonesite|gophish>/ with the domain name (vps_domain) given in the <vps_service_type>.tfvars template file. Then, create/copy your own redirect.rules configuration into it.
Example
$ mkdir Ansible/redirect_rules/c2proxy/toto.com
$ cp redirect.rules Ansible/redirect_rules/c2proxy/toto.com
Note
Redirect rules can be used to set up a c2proxy server using the init-c2proxy.yml Ansible playbook.
Important
Dynamic redirect rules work with the Apache2 service only.
To deploy a new C2 server, a new subfolder needs to be created in Ansible/<havoc|cobaltstrike|brc4>/ with the instance name given in the <C2_Name>.tfvars template file.
Example
$ mkdir Ansible/havoc/rtX-c2server-havoc
$ cp havoc.sh Ansible/havoc/rtX-c2server-havoc
Each new C2 folder should contain a specific service script to run the C2 server as a service:
C2 Name | C2 Service Script |
---|---|
Havoc | havoc.sh |
BRC4 | brc4-boomerang.sh or brc4-ratel.sh |
Cobalt Strike | cs.sh |
Note
Samples of C2 profiles and service scripts are available in the Ansible/<havoc|cobaltstrike|brc4>/vps_test folder.
The C2 framework type and mode (if available) need to be declared in the <C2_Name>.tfvars template file as follows:
brc4.tfvars
vps_c2_framework = "brc4"
vps_c2_mode = "ratel"
#OR
vps_c2_framework = "brc4"
vps_c2_mode = "boomerang"
cs.tfvars
vps_c2_framework = "cobaltstrike"
vps_c2_mode = ""
havoc.tfvars
vps_c2_framework = "havoc"
vps_c2_mode = ""
When deploying a new BRC4 server, the Ansible role install_brc4 will automatically copy the BRC4 server files (the extracted version of bruteratel.tar.gz) from the Ansible/install_brc4/files/ folder. It will also copy the C2 profiles and service scripts from the Ansible/brc4/<brc4 c2 name>/ folder.
Example
$ ls Ansible/brc4/<brc4 c2 name>/
.brauth
brc4-boomerang.sh
brc4-ratel.sh
c4profile.conf
cert.pem
key.pem
Note
Use the updateBRC4Role.sh script to automatically update the Ansible install_brc4 role when provisioning a new version of the BRC4 framework.
When deploying a new Cobalt Strike team server, the Ansible role install_cobaltstrike will automatically unzip the Ansible/install_cobaltstrike/files/cs.zip archive and install Ansible/cobaltstrike/<cobaltstrike c2 name>/cs.sh as a service.
Important
The Ansible/install_cobaltstrike/files/cs.zip archive contains the cobaltstrike.zip file, including the team server, certificates, license file, and configuration files.
When deploying a new Havoc team server, the Ansible role install_havoc will automatically install the latest available version of the Havoc framework and copy the C2 profiles and service scripts from the Ansible/havoc/<havoc c2 name>/ folder.
Example
$ ls Ansible/havoc/rtX-c2server-havoc
havoc.sh
havoc.yaotl
http_smb.yaotl
webhook_example.yaotl
When deploying a new Docker service, the Ansible role setup_docker_container will automatically set up and configure a new service container. By default, the Docker/<vps_service_type> folder will be used for the corresponding service type.
To define a custom Docker service, simply create a new subfolder in Docker/ with the instance name given in the <vps_service_type>.tfvars template file.
Example
$ mkdir Docker/rtX-axiom
$ cp -r Docker/evilginx/* Docker/rtX-axiom
A new NextCloud server can be set up using the Ansible playbook init-nextcloud.yml.
Note
The nextcloud.env file stored in the Docker/nextcloud folder can be used to customize the NextCloud container.
Important
By default, the NextCloud server is configured to only listen on 127.0.0.1 on port 8080 for security reasons. To access the server, it is required to set up SSH Local Port Forwarding using the following command: ssh -p2222 -i <SSH_KEY> -q -C -n -N -T -f -L 127.0.0.1:<MY_CUSTOM_PORT>:127.0.0.1:8080 <USER>@<NextCloud IP Address>
A new Mailu server can be set up using the Ansible playbook init-mailu.yml. To properly configure the SMTP server, a DKIM key is required. The Ansible/dkim/genDKIMKeys.sh script can be used to generate a new DKIM key pair for a specific domain and a given key size. Once generated, the DKIM key must be copied to the Ansible/dkim/<vps_domain> folder and renamed {{ vps_domain }}.dkim.key.
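genDKIMKeys.sh is the supported way to generate the key pair. As a rough sketch of what such a generation amounts to (the script's exact behavior and file naming may differ), an RSA DKIM key can be produced with openssl:

```shell
# Generate a DKIM signing key for a domain (illustrative values):
DOMAIN="toto.com"
KEYSIZE=2048
mkdir -p "Ansible/dkim/${DOMAIN}"
# Private key used by the mail server to sign outgoing messages:
openssl genrsa -out "Ansible/dkim/${DOMAIN}/${DOMAIN}.dkim.key" "${KEYSIZE}" 2>/dev/null
# Public key material to publish as the p=... value of the DKIM TXT record:
openssl rsa -in "Ansible/dkim/${DOMAIN}/${DOMAIN}.dkim.key" -pubout 2>/dev/null \
  | grep -v -- '-----' | tr -d '\n'
echo
```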
The following variables can be used to set up a Gandi.net SMTP configuration:
mailu.tfvars
vps_domain = "toto.com"
vps_dns_provider = "gandi"
vps_glue_record = false
vps_dns_template = "default-smtp"
vps_smtp_dkim_domain_key = "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3IHsiMSxb9EDgNlYDUlH"
vps_smtp_dkim_selector = "dkim"
A new Gophish server can be set up using the Ansible playbook init-gophish.yml.
Note
The gophish.env file stored in the Docker/gophish folder contains the environment variables that override the Gophish settings in the config.json file.
The following variables can be used to set up a Gandi.net SMTP configuration:
gophish.tfvars
vps_domain = "toto.com"
vps_dns_provider = "gandi"
vps_glue_record = false
vps_dns_template = "default-smtp"
vps_smtp_dkim_domain_key = "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3IHsiMSxb9EDgNlYDUlH"
vps_smtp_dkim_selector = "dkim"
To set up a Mailgun SMTP configuration:
gophish.tfvars
vps_domain = "toto.com"
vps_dns_provider = "gandi"
vps_glue_record = false
vps_dns_template = "mailgun-eu"
vps_smtp_dkim_domain_key = "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3IHsiMSxb9EDgNlYDUlH"
vps_smtp_dkim_selector = "mx"
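With the values above, the DKIM record published for the domain ties the selector (vps_smtp_dkim_selector) to the public key (vps_smtp_dkim_domain_key) and would take roughly this shape; the key material shown is the truncated illustrative value from the example:

```
; DKIM TXT record for selector "mx" on toto.com (illustrative key value)
mx._domainkey.toto.com. 3600 IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3IHsiMSxb9EDgNlYDUlH"
```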
Important
To set up a new Gophish service, dynamic redirect rules are required (refer to Setup redirect rules).
Important
To properly set up the SMTP configuration, a DKIM key is required. The Ansible/dkim/genDKIMKeys.sh script can be used to generate a new DKIM key pair for a specific domain and a given key size.
A new Gophish server with Evilginx support can be set up using the Ansible playbook init-gophish-evilginx.yml.
Note
Since this version is just a fork of the original Gophish with Evilginx integration, it can be set up by following the Gophish procedure.
Important
Dynamic redirect rules are not required in this configuration.
The integration has been designed in such a way that Evilginx will notify Gophish on the following events:
- A hidden image tracker is triggered when the email is opened. The tracker image is just a lure URL with specific parameters to let Evilginx know it should be used as a tracker.
- A phishing link is clicked within the email message. The phishing link within the email message sent through Gophish is just the lure URL with embedded parameters.
- The session is successfully captured with Evilginx. Once Evilginx gathers the credentials and logs the cookies, it will notify Gophish that the data has been submitted.
To enable Evilginx to communicate with Gophish, the following steps can be followed:
Create a new OpenSSH key pair with the Ed25519 algorithm on the Evilginx instance
evilginx$ ssh-keygen -t ed25519 -f ~/.ssh/id_gophish -C "rtops@evilginx"
Add the SSH public key to the authorized_keys file on the Gophish server
gophish$ cat id_gophish.pub >> ~/.ssh/authorized_keys
Forward traffic from port 3333 on the docker0 interface of the Evilginx instance to port 3333 on the remote Gophish server
evilginx$ CurrentDocker0IPV4Address=$(ip addr show docker0 | grep -w inet | awk '{print $2}' | cut -d '/' -f 1) && echo $CurrentDocker0IPV4Address
evilginx$ ssh -i ~/.ssh/id_gophish -p 2222 -q -C -n -N -T -f -L $CurrentDocker0IPV4Address:3333:127.0.0.1:3333 username@remote_gophish_server_ip
Add an iptables rule allowing traffic on port 3333 from the docker bridge interface
sudo iptables -I INPUT 3 -i $(ip link show type bridge | grep -v docker0 | awk -F': ' '{print $2}' | tr -d '\n') -p tcp -m tcp --dport 3333 -j ACCEPT
Set up Gophish configuration on Evilginx instance
> config gophish admin_url https://<CurrentDocker0IPV4Address>:3333
> config gophish api_key <Gophish API KEY>
#If you do not use a valid TLS certificate for the exposed Gophish instance
> config gophish insecure true
Test if the communication with Gophish works properly
> config gophish test
Run a specific playbook init-vps.yml against a specific target "vps_test"
ansible-playbook -i inventory/ --limit vps_test init-vps.yml
Run a specific role configure_timezone against a specific target "vps_test"
# Run "setup" module to display facts from a single host "vps_test" and store them indexed at /tmp/facts
ansible -m setup -i inventory --tree /tmp/facts vps_test
# Run "service_facts" module to return service state information as fact data against a single host "vps_test"
ansible -m service_facts -i inventory vps_test
ansible -i inventory -m import_role -a name=configure_timezone vps_test