# Milestone 7: Deploying and Post Provisioning of BlueX Linux Servers
Updated scripts can be found here.
- First step is to download the minimal version of the Rocky Linux 9.1 ISO from their archive. Go to ESXi (NOT vCenter; the datastore upload fails with untrusted certs), head over to the datastore where your ISO folder is located, and upload the Rocky 9.1 ISO.
- Create a new base VM called `rocky.base` in your `BASEVM` folder. It should have specs similar to this (NOTE: Make sure it's thin provisioned!):
- Boot the VM up. Run through the installer, making a new admin user named `deployer` and setting a root password.
- Once you are in the VM, log in as `deployer` and run the following:
```shell
curl https://raw.githubusercontent.com/gmcyber/RangeControl/main/src/scripts/base-vms/rhel-sealer.sh > rhel-sealer.sh
```
- Cat the file to make sure that it resolved correctly. Now run `sudo bash rhel-sealer.sh` to execute the sysprep script.
- Once this is finished, make sure the VM is off and take a snapshot named `Base`.
- Head into 480-fw and enter the following commands to create a static route that forwards traffic destined for the blue LAN (10.0.5.0/24) to fw-blue1's WAN interface:
```
configure
set protocols static route 10.0.5.0/24 next-hop 10.0.17.200
commit
save
```
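You can confirm the route landed before moving on. From the same session on 480-fw (standard VyOS op-mode command, runnable from configure mode via the `run` prefix):

```
run show ip route 10.0.5.0/24
```

A static route via 10.0.17.200 should appear in the output.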
- After doing so, we will now need to create two new files: one named `blue-config.yml` and one called `fw-blue1-vars.yml`. After making them, your directory structure will look like this:
- The `blue-config.yml` will be our actual playbook that we run to create the DHCP server on our firewall, and the other will be where all our variables are stored. `blue-config.yml` should contain something similar to the following:
```yaml
---
- name: Configure DHCP Server on Blue FW
  hosts: vyos
  tasks:
    - name: Get VyOS Version Info
      vyos_command:
        commands: show version
      register: version

    - debug:
        var: version.stdout_lines

    - name: DHCP Server for blue-lan
      vyos_config:
        save: yes
        lines:
          - set service dhcp-server global-parameters "local-address {{ lan_ip }};"
          - set service dhcp-server shared-network-name {{ shared_network }} authoritative
          - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} default-router '{{ lan_ip }}'
          - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} domain-name '{{ dhcp_domain }}'
          - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} lease '86400'
          - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} name-server '{{ dhcp_name_server }}'
          - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} range {{ shared_network }} start '10.0.5.75'
          - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} range {{ shared_network }} stop '10.0.5.125'
```
- And `fw-blue1-vars.yml` should look like this:
```yaml
vyos:
  hosts:
    10.0.17.200:
      hostname: blue1-fw
      lan_ip: 10.0.5.2
      lan: 10.0.5.0/24
      dhcp_name_server: 10.0.5.5
      shared_network: blue-lan
      dhcp_domain: blue1.local
      mac: 00:50:56:8a:27:17
  vars:
    ansible_python_interpreter: /usr/bin/python3
    ansible_connection: network_cli
    ansible_network_os: vyos
    ansible_user: vyos
```
- Now run the playbook with:

```shell
ansible-playbook -i ansible/inventory/fw-blue1-vars.yml --user vyos --ask-pass ansible/blue-config.yml
```

- To verify this worked, run `show config` on blue1-fw and you should see a DHCP server configuration that matches your file!
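If clients on the blue LAN aren't pulling addresses, the lease table is the first place to look. From op mode on blue1-fw (standard VyOS op-mode command):

```
show dhcp server leases
```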
- Create 2 new files, one in the Inventory folder (`linux.yml`) and one in the Ansible folder (`rocky-playbook.yml`).
- In a terminal, run `ssh-keygen -t rsa -b 2048`. Make note of the public key. Add the following to `linux.yml`, adding in the public key found in the `id_rsa.pub` file you just generated in your xUbuntu's `.ssh` folder:
```yaml
linux:
  children:
    rocky:
      hosts:
        10.0.5.75:
          hostname: RockyClone-1
          lan_ip: 10.0.5.10
        10.0.5.76:
          hostname: RockyClone-2
          lan_ip: 10.0.5.11
        10.0.5.77:
          hostname: RockyClone-3
          lan_ip: 10.0.5.12
      vars:
        device: ens33
  vars:
    public_key: "<your public key>"
    ansible_user: deployer
    prefix: 24
    gateway: 10.0.5.2
    name_server: 10.0.5.5
    domain: blue1.local
```
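Before running anything against the hosts, it's worth confirming the inventory parses the way you expect. `ansible-inventory` (part of the standard Ansible install) can print the group tree without contacting any machines:

```shell
ansible-inventory -i ansible/inventory/linux.yml --graph
```

You should see the three RockyClone IPs nested under `rocky`, which itself sits under `linux`.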
- After doing this, throw the following in `rocky-playbook.yml`:
```yaml
- name: rocky config
  hosts: rocky
  tasks:
    - name: create the .ssh directory if not present
      file:
        path: "/home/{{ ansible_user }}/.ssh"
        state: directory
        mode: '0700'
    - name: create the authorized_keys file
      file:
        path: "/home/{{ ansible_user }}/.ssh/authorized_keys"
        state: touch
        mode: '0600'
    - name: copy over key block and append to authorized_keys
      blockinfile:
        dest: "/home/{{ ansible_user }}/.ssh/authorized_keys"
        block: "{{ public_key }}"
    - name: create sudoers dropin file for 480
      file:
        path: /etc/sudoers.d/480
        state: touch
        mode: '0440'
      become: yes
    - name: create a drop in entry in /etc/sudoers.d/480
      blockinfile:
        dest: /etc/sudoers.d/480
        block: "{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL"
      become: yes
    - name: set the hostname
      hostname:
        name: "{{ hostname }}"
      become: yes
    - name: add host to hosts file
      lineinfile:
        path: /etc/hosts
        line: '127.0.1.1 {{ hostname }}'
      become: yes
    - name: run nmcli
      nmcli:
        conn_name: "{{ device }}"
        ip4: "{{ lan_ip }}/24"
        gw4: "{{ gateway }}"
        state: present
        type: ethernet
        dns4:
          - "{{ name_server }}"
          - "{{ gateway }}"
        method4: manual
      become: yes
    - name: bounce the box
      shell: "sleep 5 && shutdown -r"
      become: yes
      async: 1
      poll: 0
```
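Before the full playbook run, a quick ad-hoc check with Ansible's built-in `ping` module confirms the DHCP addresses, credentials, and Python interpreter are all working (same inventory and password prompt as the playbook run):

```shell
ansible -i ansible/inventory/linux.yml rocky -m ping --user deployer --ask-pass
```

Each host should come back with `"ping": "pong"`.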
- Run your `Get-IP` command using `Get-IP -vmName "RockyClone-*" -vcenter_server "vcenter.miles.local"`, check the IPs, and then execute your playbook with:

```shell
ansible-playbook -i ansible/inventory/linux.yml --ask-pass ansible/rocky-playbook.yml -K
```

  Once finished, it will take a second for your VMs to reboot and apply their new IPs. Run the `Get-IP` command above again after a minute or two to verify they've changed.
- Once you have new addresses, try SSHing into one of the machines as `deployer` (e.g. `ssh deployer@<new-ip>`) - you should not be prompted for a password!
- Create 2 new linked clones of the Ubuntu Server base image via your 480utils script. Rename the two to `UbuntuClone-1` and `UbuntuClone-2` to keep the naming scheme consistent, and put them on the Blue network so that they get DHCP addresses (NOTE: You'll have to manually create `deployer` users on both. Set hostnames as well.)
- Create 3 new files. One will be in your Inventory folder named `ubuntu-linux.yml`, and two will be in your Ansible directory named `ubuntu-playbook.yml` and `ubuntu-netplan.j2`.
- Populate `ubuntu-linux.yml` with the following:
```yaml
linux:
  children:
    ubuntu:
      hosts:
        10.0.5.78:
          hostname: UbuntuClone-1
          lan_ip: 10.0.5.13
        10.0.5.79:
          hostname: UbuntuClone-2
          lan_ip: 10.0.5.14
      vars:
        device: ens160
  vars:
    ansible_user: deployer
    public_key: "<your public key>"
    prefix: 24
    gateway: 10.0.5.2
    name_server: 10.0.5.5
    domain: blue1.local
```
- Put the following into `ubuntu-playbook.yml`:
```yaml
- name: ubuntu config
  hosts: ubuntu
  tasks:
    - name: create the .ssh directory if not present
      file:
        path: "/home/{{ ansible_user }}/.ssh"
        state: directory
        mode: '0700'
    - name: create the authorized_keys file
      file:
        path: "/home/{{ ansible_user }}/.ssh/authorized_keys"
        state: touch
        mode: '0600'
    - name: copy over key block and append to authorized_keys
      blockinfile:
        dest: "/home/{{ ansible_user }}/.ssh/authorized_keys"
        block: "{{ public_key }}"
    - name: create sudoers dropin file for 480
      file:
        path: /etc/sudoers.d/480
        state: touch
        mode: '0440'
      become: yes
    - name: create a drop in entry in /etc/sudoers.d/480
      blockinfile:
        dest: /etc/sudoers.d/480
        block: "{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL"
      become: yes
    - name: set the hostname
      hostname:
        name: "{{ hostname }}"
      become: yes
    - name: add host to hosts file
      lineinfile:
        path: /etc/hosts
        line: '127.0.1.1 {{ hostname }}'
      become: yes
    - name: configure netplan
      when: ansible_distribution == 'Ubuntu'
      template:
        src: ubuntu-netplan.j2
        dest: /etc/netplan/00-installer-config.yaml
      become: yes
    - name: reboot ubuntu servers and apply netplan
      when: ansible_distribution == 'Ubuntu'
      shell: netplan apply && sleep 5 && shutdown -r now
      become: yes
      async: 1
      poll: 0
```
- And finally, put the following Netplan configuration into `ubuntu-netplan.j2`:
```
network:
  version: 2
  ethernets:
    {{ device }}:
      addresses: [{{ lan_ip }}/{{ prefix }}]
      gateway4: {{ gateway }}
      nameservers:
        addresses: [{{ name_server }}, {{ gateway }}]
```
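One caveat: newer Netplan releases deprecate `gateway4` and print a warning on `netplan apply`, preferring a `routes` entry instead. If you hit that warning, an equivalent template (same variables as above) would be:

```
network:
  version: 2
  ethernets:
    {{ device }}:
      addresses: [{{ lan_ip }}/{{ prefix }}]
      routes:
        - to: default
          via: {{ gateway }}
      nameservers:
        addresses: [{{ name_server }}, {{ gateway }}]
```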
- Execute your playbook with:

```shell
ansible-playbook -i ansible/inventory/ubuntu-linux.yml --ask-pass ansible/ubuntu-playbook.yml -K
```

  You should be able to passwordlessly SSH into these boxes too.
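To check every clone in one pass, a small loop with SSH's `BatchMode` option (which errors out instead of prompting when key auth fails) covers the `lan_ip` values from both inventories:

```shell
# Static IPs assigned by the rocky and ubuntu playbooks
for ip in 10.0.5.10 10.0.5.11 10.0.5.12 10.0.5.13 10.0.5.14; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 "deployer@$ip" hostname \
    || echo "key auth failed for $ip"
done
```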