Milestone 7 Part 1 ‐ Deploying and Post Provisioning of BlueX Linux Servers

💡 In this module, we will deploy 5 Linux servers (3 Rocky Linux and 2 Ubuntu) directly onto a DHCP-enabled BlueX LAN. The network configuration on fw-blue1 will be adjusted similarly to the way we provisioned VyOS, and we will then adjust the networking configuration on the 5 Linux hosts.

For Part 1, make sure to complete:

Milestone 7.1 - Create a Rocky Linux Base VM

  1. Retrieve the ISO from Rocky Linux (the fourth one down!):

image

  2. Add the ISO to our ISO folder in the ESXi datastore1:

image

^ 'Tis uploading!

  3. Once the ISO is uploaded to datastore1, we can create a new VM:

image

  • Name: Rocky.base

image

image

image

  4. Create the VM and go through the install wizard:

image

image

  5. Configure the new rocky.base VM (the full command sequence is recapped below):
  • Log in as root and add the named admin user to the wheel group: usermod -aG wheel Jacob
  • Switch to the named user and install wget: sudo yum install wget
  • Retrieve the provided script with wget: sudo wget https://raw.githubusercontent.com/gmcyber/RangeControl/main/src/scripts/base-vms/rhel-sealer.sh
  • Now switch to root: sudo su
  • Execute /bin/bash rhel-sealer.sh to run the script we just retrieved.
  • The machine reboots; create a new snapshot named Base!
  • Put rocky.base in the base vms folder.
  • Move on to Milestone 7.2.
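
Put together, the console session for this step looks roughly like this (the admin username Jacob is mine; substitute your own):

```bash
# As root: add the named admin user to the wheel group
usermod -aG wheel Jacob
# As the named user: install wget, fetch the sealer script, and run it as root
sudo yum install wget
sudo wget https://raw.githubusercontent.com/gmcyber/RangeControl/main/src/scripts/base-vms/rhel-sealer.sh
sudo su
/bin/bash rhel-sealer.sh   # the machine reboots when the script finishes
```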

Milestone 7.2

Set a static route on 480-fw to route traffic to the blue LAN (10.0.5.0/24).

  1. Create a new gateway named BLUE1 on pfSense 480-fw. I did this via the GUI:

image

  2. Then I created a static route and set it to use the BLUE1 gateway:

image

  3. Create and run an Ansible playbook to set up a DHCP server on fw-blue1 to provide DHCP addresses for the Blue LAN (10.0.5.0/24). I named it vyos-blue.yml:

```yaml
- name: vyos network config
  hosts: vyos
  tasks:
  - name: Retrieve VyOS version info
    vyos_command:
      commands: show version
    register: version
  - debug:
      var: version.stdout_lines
    
  - name: configure vyos dhcp
    vyos_config:
      save: yes
      lines:
        - set service dhcp-server global-parameters 'local-address {{ lan_ip }};'
        - set service dhcp-server shared-network-name {{ shared_network }} authoritative
        - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} default-router '{{ lan_ip }}'
        - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} name-server '{{ dhcp_name_server }}'
        - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} domain-name '{{ dhcp_domain }}'
        - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} lease '86400'
        - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} range {{ shared_network }}-POOL start '10.0.5.75'
        - set service dhcp-server shared-network-name {{ shared_network }} subnet {{ lan }} range {{ shared_network }}-POOL stop '10.0.5.125'

```

Command to run:

```bash
ansible-playbook -i ./ansible/inventories/fw-blue1-vars.yaml --ask-pass ./ansible/vyos-blue.yml
```
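
To make the templating concrete: with the fw-blue1 values from the next step (shared_network BLUE1, lan 10.0.5.0/24, lan_ip 10.0.5.2), the templated lines expand to commands like:

```
set service dhcp-server shared-network-name BLUE1 subnet 10.0.5.0/24 default-router '10.0.5.2'
set service dhcp-server shared-network-name BLUE1 subnet 10.0.5.0/24 range BLUE1-POOL start '10.0.5.75'
set service dhcp-server shared-network-name BLUE1 subnet 10.0.5.0/24 range BLUE1-POOL stop '10.0.5.125'
```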
  4. Configure the inventory file, fw-blue1-vars.yaml. This is essentially the .txt host file from the last milestone, but more scalable:

```yaml
vyos:
  hosts:
    10.0.17.200:
      hostname: blue1-fw
      lan_ip: 10.0.5.2
      lan: 10.0.5.0/24
      dhcp_name_server: 10.0.5.5
      shared_network: BLUE1
      dhcp_domain: blue1.local
      mac: 00:50:56:bf:38:d2
    
  vars:
    ansible_python_interpreter: /usr/bin/python3
    ansible_connection: network_cli
    ansible_network_os: vyos
    ansible_user: vyos
```

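Once the playbook has run, you can sanity-check the DHCP server from the fw-blue1 console (op-mode commands; exact syntax may vary by VyOS version):

```
show configuration commands | grep dhcp-server
show dhcp server leases
```
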
Use your 480-utils to launch 3 Rocky VM linked clones from your base VM that acquire DHCP addresses from your fw-blue1.

Demo:

  1. Walk through GitHub.
  2. Show pfSense gateways and the static route.
  3. Show the rocky.base VM in the GUI.
  4. Walk through the Ansible playbook.
  5. Clone Rocky 3 times with ./linkedcloner.ps1, then set each clone's network to the new firewall.

7.3 Post Provisioning Rocky-1-3 with Ansible

  1. To start I shut off all of my rocky.linked[x] clones. I then took a 'before-ansible' snapshot on each VM.

  2. Devin was EXTREMELY nice this week and gave us the files/process needed to do the Ansible configs on the Rocky machines.

  • First, I went to my xubuntu-wan VM and generated a passphrase-less RSA key (a sketch of the command follows the screenshot below):

image
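
A minimal sketch of the key generation, assuming default paths (the comment string matches the public key used later; adjust to taste):

```bash
# Generate a 4096-bit RSA keypair with an empty passphrase (-N '')
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa -C "jacob@xubuntu-mgmt"
# Print the public key so it can be pasted into the inventory's public_key var
cat ~/.ssh/id_rsa.pub
```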

  • Then I created the linux.yaml inventory file. Devin provided this:

```yaml
linux:
  hosts:
  children:
    rocky:
      hosts:
        10.0.5.77:  # current DHCP-assigned IPs of the machines
          hostname: rocky-1
          lan_ip: 10.0.5.10  # IP to be set
        10.0.5.75:
          hostname: rocky-2
          lan_ip: 10.0.5.11
        10.0.5.76:
          hostname: rocky-3
          lan_ip: 10.0.5.12
      vars:
        device: "{{ ansible_default_ipv4.interface }}"


  vars:
    public_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC8OcD8SINWsdVvOz9SqbTHt2ZzqEK+l4HVfYT2i/EnkxaWmJX3e6XG7CbanuC78mN0wqPfQk1aEXnAlPbyok4FxehdDv38XSlgVOAIURuuRp22DVx8j+jYbjR0sXDLRjCAczvbsIedYLVuIuEuPYY/1fZSGEtW/SQMBt7x6SDMo4ZTsajAwCEYwAKkrHr7eYUDi+q+6AfX9ZKn+hcO6Afazn7xLy6/D/0cKsruZySZ/6t7EhXz8gPn118vVYO1H8R/EFxnIrJx50oep/lHsEqyit3Drdfp3BZq0hfxHgNqqglE3lT6K7zTSuufH8ycZsL4YmpXR73xV0es0xUtV2wFLoGoWmPzKlSfwvpOw0jTFtlhd9BxUXhEbRiRFHpsQBpf4vKL0+HLv6mR4yoGlY2GhxHXObynIEfZNg31GsNOAOH09qdF6tCFZGBTRI8Kt37EqNQiyYpGttPwQ3FPJbJj1bdqkp53GiyCPob0eIfxY/7I7qNbtHjdBqcpxUXkzw8LtbsnY9Zio/xK+iUwOuSwnuKSgamZS1KrEPTmOtfQZojhUJQFGlIw4r0lwxhHfA/fFbgV1Gh/5S9mdJRB/GLPqYu+aM2OD6x4uA2by8Hrp7KdpU1qbczYO7PYiNjopIXaN2Gb9bsxO4tlP7PzbKDKBNC3UDEw7n4OWLVblaGObw== jacob@xubuntu-mgmt"
    ansible_user: jacob  # the user that currently exists ON the Rocky Linux machines; make sure this is right!
    prefix: 24
    gateway: 10.0.5.2
    name_server: 10.0.5.5
    domain: blue1.local  # the DHCP domain name served by fw-blue1
```
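
Before running the playbook, a quick hedged connectivity test with Ansible's ping module (an SSH/login check, not ICMP) can confirm the inventory and password auth are right:

```bash
ansible -i inventories/linux.yaml rocky -m ping --ask-pass
```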
  • Then create the rocky-playbook.yml:

```yaml
#this playbook performs post provisioning configuration of rocky
- name: rocky config
  hosts: rocky
  tasks:
  - name: create the .ssh directory if it is not there
    file:
      path: "/home/{{ ansible_user }}/.ssh"
      state: directory
      mode: 0700
  - name: create authorized_keys file
    file:
      path: "/home/{{ ansible_user }}/.ssh/authorized_keys"
      state: touch
      mode: 0644
  - name: copy over key block and append to authorized_keys
    blockinfile:
      dest: "/home/{{ ansible_user }}/.ssh/authorized_keys"
      block: "{{ public_key }}"
  
  - name: create sudoers dropin file for 480
    file:
      path: /etc/sudoers.d/480
      state: touch
      mode: 0400
    become: yes
  
  - name: create a drop in entry in /etc/sudoers.d/480
    blockinfile:
      dest: /etc/sudoers.d/480
      block: "{{ ansible_user }}  ALL=(ALL) NOPASSWD: ALL"
    become: yes
  
  - name: set hostname
    hostname:
      name: "{{ hostname }}"
    become: yes

  - name: add host to hosts file
    lineinfile:
      path: /etc/hosts
      line: "127.0.1.1  {{ hostname }}"
    become: yes

  - name: run nmcli
#nmcli connection modify ens192 ipv4.address 10.0.5.10/24 ipv4.gateway 10.0.5.2 ipv4.dns '10.0.5.5 10.0.5.2' ipv4.method manual
    nmcli:
      conn_name: "{{ device }}"
      ip4: "{{ lan_ip }}/24"
      gw4: "{{ gateway }}"
      state: present
      type: ethernet
      dns4:
        - "{{ name_server }}"
        - "{{ gateway }}"
      method4: manual
    become: yes

  - name: bounce the box
    shell: "sleep 5 && shutdown -r"
    become: yes
    async: 1
    poll: 0

```

Command to run:

```bash
ansible-playbook -i inventories/linux.yaml --ask-pass ./rocky-playbook.yml
```
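
As an optional extra (my suggestion, not part of the provided steps), you can preview most of the changes with Ansible's check mode; note that tasks like the nmcli change and the reboot won't fully reflect in a dry run:

```bash
ansible-playbook -i inventories/linux.yaml --ask-pass ./rocky-playbook.yml --check
```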

BEFORE EXECUTING THE COMMAND, run Get-IP on each of the rocky VMs to show the change that takes place; the IPs will be set statically.

  • Run the command, then try to SSH into any of the VMs with: ssh [email protected] (see the sketch below).
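
For example, using the inventory values above (rocky-1's new static IP is 10.0.5.10 and jacob is the ansible_user), a hedged sanity check:

```bash
# Key auth should now work without a password prompt
ssh jacob@10.0.5.10
```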

  • Show the Get-IP results; the IPs should be changed:

Before Playbook:

image

Expected output from playbook:

image

After Playbook:

image

IPs are now set!

7.4 Post Provisioning Ubuntu-1-2 with Ansible:

  1. Create two new linked clones of the Ubuntu VM; for me, that meant running linkedcloner.ps1 against ubuntu.22.0.1:

image

Do this twice to create two new VMs: ubuntu.linked[x], where x is 1 or 2.

  2. Now move those VMs to the BLUENET VM folder in vSphere, and use the Set-Network function to place each new Ubuntu VM on the blue1-LAN adapter:

image

Do this for both of the new Ubuntu machines.

  3. Create a new snapshot of each VM, named Before-ansible:

image

  4. Now we can start tackling the Ansible playbooks!
  • Copy the rocky-playbook.yml file and rename the copy to ubuntu-playbook.yml:

```yaml
#this playbook performs post provisioning configuration of ubuntu
- name: ubuntu config
  hosts: ubuntu
  tasks:
  - name: create the .ssh directory if it is not there
    file:
      path: "/home/{{ ansible_user }}/.ssh"
      state: directory
      mode: 0700
  - name: create authorized_keys file
    file:
      path: "/home/{{ ansible_user }}/.ssh/authorized_keys"
      state: touch
      mode: 0644
  - name: copy over key block and append to authorized_keys
    blockinfile:
      dest: "/home/{{ ansible_user }}/.ssh/authorized_keys"
      block: "{{ public_key }}"
  
  - name: create sudoers dropin file for 480
    file:
      path: /etc/sudoers.d/480
      state: touch
      mode: 0400
    become: yes
  
  - name: create a drop in entry in /etc/sudoers.d/480
    blockinfile:
      dest: /etc/sudoers.d/480
      block: "{{ ansible_user }}  ALL=(ALL) NOPASSWD: ALL"
    become: yes
  
  - name: set hostname
    hostname:
      name: "{{ hostname }}"
    become: yes

  - name: add host to hosts file
    lineinfile:
      path: /etc/hosts
      line: "127.0.1.1  {{ hostname }}"
    become: yes

  - name: Netplan
    template:
      src: /home/jacob/SYS480/ansible/files/vyos/00-installer-config.yaml.j2
      dest: /etc/netplan/01-netcfg.yaml
      mode: "0644"
      owner: root
      group: root
    become: yes

  - name: Netplan Application
    shell: "sudo netplan apply"
    become: yes
    async: 1
    poll: 0

```

Command to run:

```bash
ansible-playbook -i inventories/linux.yaml --ask-pass ubuntu-playbook.yml -K
```
  • Construct a netplan template file as recommended in the lab. I named mine 00-installer-config.yaml.j2, making it a Jinja template:

```yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens34:
      addresses:
        - {{ lan_ip }}/24
      nameservers:
        addresses:
        - {{ name_server }}
        - {{ gateway }}
      routes:
        - to: default
          via: {{ gateway }}
  version: 2
```
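
For reference, here is roughly what the template should render to for ubuntu-1 (lan_ip 10.0.5.30, name_server 10.0.5.5, gateway 10.0.5.2, per the inventory below); the ens34 interface name is hard-coded in the template:

```yaml
# Rendered /etc/netplan/01-netcfg.yaml for ubuntu-1 (illustrative)
network:
  ethernets:
    ens34:
      addresses:
        - 10.0.5.30/24
      nameservers:
        addresses:
        - 10.0.5.5
        - 10.0.5.2
      routes:
        - to: default
          via: 10.0.5.2
  version: 2
```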
  • Now go back to the linux.yaml file and update it with the ubuntu.linked[x] information:

```yaml
linux:
  hosts:
  children:
    rocky:
      hosts:
        10.0.5.77:
          hostname: rocky-1
          lan_ip: 10.0.5.10
        10.0.5.75:
          hostname: rocky-2
          lan_ip: 10.0.5.11
        10.0.5.76:
          hostname: rocky-3
          lan_ip: 10.0.5.12
      vars:
        device: "{{ ansible_default_ipv4.interface }}"
    # BELOW LINES WERE ADDED FOR THIS STEP!
    ubuntu:
      hosts:
        10.0.5.79:
          hostname: ubuntu-1
          lan_ip: 10.0.5.30
        10.0.5.80:
          hostname: ubuntu-2
          lan_ip: 10.0.5.31

  vars:
    public_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC8OcD8SINWsdVvOz9SqbTHt2ZzqEK+l4HVfYT2i/EnkxaWmJX3e6XG7CbanuC78mN0wqPfQk1aEXnAlPbyok4FxehdDv38XSlgVOAIURuuRp22DVx8j+jYbjR0sXDLRjCAczvbsIedYLVuIuEuPYY/1fZSGEtW/SQMBt7x6SDMo4ZTsajAwCEYwAKkrHr7eYUDi+q+6AfX9ZKn+hcO6Afazn7xLy6/D/0cKsruZySZ/6t7EhXz8gPn118vVYO1H8R/EFxnIrJx50oep/lHsEqyit3Drdfp3BZq0hfxHgNqqglE3lT6K7zTSuufH8ycZsL4YmpXR73xV0es0xUtV2wFLoGoWmPzKlSfwvpOw0jTFtlhd9BxUXhEbRiRFHpsQBpf4vKL0+HLv6mR4yoGlY2GhxHXObynIEfZNg31GsNOAOH09qdF6tCFZGBTRI8Kt37EqNQiyYpGttPwQ3FPJbJj1bdqkp53GiyCPob0eIfxY/7I7qNbtHjdBqcpxUXkzw8LtbsnY9Zio/xK+iUwOuSwnuKSgamZS1KrEPTmOtfQZojhUJQFGlIw4r0lwxhHfA/fFbgV1Gh/5S9mdJRB/GLPqYu+aM2OD6x4uA2by8Hrp7KdpU1qbczYO7PYiNjopIXaN2Gb9bsxO4tlP7PzbKDKBNC3UDEw7n4OWLVblaGObw== jacob@xubuntu-mgmt"
    ansible_user: rangeuser
    prefix: 24
    gateway: 10.0.5.2
    name_server: 10.0.5.5
    domain: blue1.local
```
  • Okay, now that our files are constructed (the Netplan Jinja template is pulled in by the Ansible playbook, and linux.yaml now includes the Ubuntu machines), we should be able to run the playbook successfully:

```bash
ansible-playbook -i inventories/linux.yaml --ask-pass ubuntu-playbook.yml -K
```
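
Assuming everything applied cleanly, a quick hedged check using the inventory values above (user rangeuser, new static IPs 10.0.5.30/.31):

```bash
# The clones should now answer at their static addresses with key auth
ssh rangeuser@10.0.5.30 hostname   # expect: ubuntu-1
ssh rangeuser@10.0.5.31 hostname   # expect: ubuntu-2
```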