Automation Lab

image

Initial Setup

You have 3 new Linux hosts in your environment, so configure them the same way you have configured previous new Linux hosts. On each system, create your named sudo user with the exact same username and password. These are stand-alone hosts, so they do not need to be joined to your AD domain.

  • 10.0.5.70 (clone01-hannelore)
  • 10.0.5.71 (clone02-hannelore)
  • 10.0.5.72 (clone03-hannelore)

Steps for Initial Configuration:

  1. Power the clones on
  2. Change adapter to LAN
  3. Login with user: root and pw: Ch@mpl@1n!22
  4. Do these steps to add a user:
    • adduser hannelore
    • passwd hannelore
    • usermod -aG wheel hannelore
  5. Change the system hostname in nmtui to clone01-hannelore (adjust the name for each clone)
  6. Configure network settings in nmtui --->

clone01-hannelore

  • Ipv4 address: 10.0.5.70/24
  • Default gateway: 10.0.5.2
  • DNS Server: 10.0.5.6

clone02-hannelore

  • Ipv4 address: 10.0.5.71/24
  • Default gateway: 10.0.5.2
  • DNS Server: 10.0.5.6

clone03-hannelore

  • Ipv4 address: 10.0.5.72/24
  • Default gateway: 10.0.5.2
  • DNS Server: 10.0.5.6
  7. Reboot the systems
  8. Make an A/PTR record for all 3 clones on ad02-hannelore in the DNS Manager
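
Steps 5 and 6 can also be done non-interactively with hostnamectl and nmcli instead of nmtui. A sketch for clone01 (the connection name ens192 is an assumption; check yours with nmcli con show):

sudo hostnamectl set-hostname clone01-hannelore
sudo nmcli con mod ens192 ipv4.method manual ipv4.addresses 10.0.5.70/24 ipv4.gateway 10.0.5.2 ipv4.dns 10.0.5.6
sudo nmcli con up ens192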

Deliverable 1. Perform multiple routine tests for connectivity and name resolution via 1-liners similar to the screen below:

clone01-hannelore

whoami; hostname; hostname -i; nslookup ad02-hannelore | grep -i name; ping -c1 ad02-hannelore | grep "packets transmitted"

hostname -i gets the IP address associated with the hostname

image

clone02-hannelore

image

clone03-hannelore

image

PSSH

Install both of the following yum packages on clone01, assuming 'yes' to any question prompted during installation:

  • epel-release
    • sudo yum -y install epel-release
  • pssh
    • sudo yum -y install pssh

We are going to use a different authentication technique for SSH. We will create an RSA public and private key-pair, with the private key protected by a passphrase.

Make sure to use the default key names (id_rsa.pub and id_rsa).

passphrase is souphello
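
A minimal sketch of generating the keypair on clone01 (press Enter to accept the default file names ~/.ssh/id_rsa and id_rsa.pub, then type the passphrase twice when prompted):

ssh-keygen -t rsa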

image

We are going to push the public component of this keypair (id_rsa.pub) to our accounts on clone2 and clone3.
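
ssh-copy-id appends the public key to the remote account's authorized_keys; assuming the same hannelore account on each clone:

ssh-copy-id hannelore@clone02-hannelore
ssh-copy-id hannelore@clone03-hannelore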

Copying the key to clone02-hannelore

image

Copying the key to clone03-hannelore

image

When a public key is copied via ssh-copy-id, it shows up on the remote system in ~/.ssh/authorized_keys:

image

image

Deliverable 2. ssh into either clone2 or clone3 using your ssh key. The passphrase you enter is only for unlocking your local private key on clone1, as opposed to logging into the remote system itself. Provide a screenshot that shows the prompt for your passphrase as well as the login into clone2 or 3 that does not ask for a password.

Logging in to clone02-hannelore with the passphrase prompt (and no password prompt from the remote system):

image

ssh-agent

Far too many administrators create ssh keys that are not protected by a passphrase. This is analogous to leaving the keys to your Porsche lying around. They do it because a protected key requires typing the passphrase to unlock it. We will balance the security provided by a passphrase against the convenience of a totally passwordless solution by "caching" the passphrase in memory for an hour using the ssh-agent program.

Deliverable 3. Provide a screenshot showing passwordless login to clone2 or 3 after having loaded the ssh-agent and private key.

eval `ssh-agent`     # start the agent and export its environment variables into this shell
ssh-add -t 1h        # cache the private key for one hour (prompts for the passphrase)
ssh clone02-hannelore
exit
ssh clone03-hannelore

image

/etc/sudoers

On both clone2 and clone3, adjust /etc/sudoers so that the highlighted line is uncommented:

image

This is a common configuration: it allows a user in the wheel group to elevate to root without retyping a password.
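
On a stock CentOS 7 sudoers file, the line in question is the NOPASSWD entry for the wheel group. Edit it safely with visudo:

sudo visudo
# uncomment this line:
%wheel        ALL=(ALL)       NOPASSWD: ALL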

Deliverable 4. Provide a screenshot similar to the one below that shows passwordless access to clone2 or clone3 and elevation to root without retyping a password.

image

Deliverable 5. Review the man page for pssh and construct a pssh hosts file containing your clone2 and clone3. Then execute the following non-privileged and privileged commands displaying inline standard output & errors as each host completes. Provide screenshots showing the command and [SUCCESS OUTPUT] for all four commands:

  • uptime

  • uname -a

  • sudo yum -y install tree

  • tree /etc/yum.repos.d/

  • create a hosts file called hostfile.txt and open it in nano

  • put the hostnames in the file
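
With the hosts file in place, the four commands above can each be run through pssh using -h (hosts file) and -i (inline output). A sketch, assuming the NOPASSWD change from earlier is in place for the sudo command:

pssh -i -h hostfile.txt uptime
pssh -i -h hostfile.txt "uname -a"
pssh -i -h hostfile.txt "sudo yum -y install tree"
pssh -i -h hostfile.txt "tree /etc/yum.repos.d/"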

image

TROUBLESHOOTING: When I tried to pssh I kept getting the error [Failure] exited with error code 255

image

I found this entry in the man pages for pssh and changed the configuration file to incorporate this line, but to no avail.

image

I thought maybe it was a problem with my hostnames, but when I pinged them and did nslookup, they were resolved perfectly fine. Instead of putting the hostnames in hostfile.txt, I tried it with the IP addresses as well and it still did not work.

I double checked my DNS records and the IP addresses for my clones and they were all configured correctly.

Eventually Matt Compton helped me figure out that I needed to set up my ssh-agent from Deliverable 3, since every time I used ssh, a new session was started. The ssh-agent is essentially what handles the passphrase-protected key when I am speaking to my clones. pssh does not have the capacity to ask me for a passphrase, so the ssh-agent needs to be running in order to use pssh, since ssh-agent handles authenticating me and proving I am who I say I am with my passphrase.

  • eval `ssh-agent`
  • ssh-add -t 1h

image

Something else to note is that running pssh directly from my clone01 console and sshing into clone01 from my ad02-hannelore machine are two different sessions. I ran the ssh-agent commands directly on my clone01-hannelore machine and the pssh command worked fine, but when I tried over ssh from my ad02-hannelore machine it failed again, since that is a different session and I needed to run the ssh-agent commands again.

Success of commands

image

Ansible

Deliverable 6. Install the ansible package using yum on just clone1. Once installed, conduct the following test that walks through all hosts in your hosts file, and runs a module called ping against them. Take a screenshot similar to the one below that shows a ping and pong response from clone2 and clone3.

sudo yum install ansible on clone01-hannelore

image

Successful execution of ansible command

ansible all -i hostfile.txt -m ping

note: hostfile.txt is whatever file you have your hostnames in.

image

Ansible and sudo

Consider the following screenshot. As you should recall, the /etc/passwd file is readable by everyone and the /etc/shadow file (which contains hashed passwords) is only readable by root. Notice the success on tailing the /etc/passwd file and subsequent failure on /etc/shadow. This is resolved by telling Ansible that the user associated with the ssh public key at the other end of the connection is a sudoer user (-b).
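
A sketch of what those two ad-hoc commands could look like (the exact tail arguments are an assumption based on the description above):

ansible all -i hostfile.txt -a "tail /etc/passwd"        # readable by everyone, succeeds
ansible all -b -i hostfile.txt -a "tail /etc/shadow"     # root-only, needs -b (become/sudo)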

Deliverable 7. Provide a screenshot similar to the one below.

image

My screenshot

image

Deliverable 8. Figure out how to add an arbitrary port to the firewall using Ansible. Show the commands used, their success, and then issue another command to list those ports as shown in the following screenshot (8080/tcp in the example).

An arbitrary port here simply means a port whose number has no predefined or standardized use (unlike, say, HTTP on 80).

Doing the command manually:

I changed the command to ansible all -b -i hostfile.txt -a "firewall-cmd --add-port=8080/tcp"

image

I was able to see that the port was successfully added with `ansible all -b -i hostfile.txt -a "firewall-cmd --list-all"`

image
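
Note that firewall-cmd --add-port without --permanent only changes the runtime firewall configuration. As an alternative, Ansible also ships a firewalld module that can make the change persistent and idempotent; a sketch, assuming the module is available in the installed Ansible version:

ansible all -b -i hostfile.txt -m firewalld -a "port=8080/tcp state=enabled permanent=yes immediate=yes"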

The Ansible Playbook

  • On clone01, create a directory called “Nginx”.

    • mkdir Nginx
  • Within that directory, download the following file:

    • wget https://gist.githubusercontent.com/icasimpan/803955c7d43e847ce12ff9422c1cbbc4/raw/c1753594e638590ac4d54e685dd3ae1ee1d9f40a/nginx-centos7.yml

image

  • Make the SYS-255 modifications shown below to that file:

image

  • Within the Nginx directory, make a file called index.html

image
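
The contents of index.html are up to you; something minimal is enough for the curl check later. A sketch (the text here is just an example, the screenshot shows the actual file used):

cat > index.html <<'EOF'
Hello from the nginx playbook on clone01-hannelore
EOF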

Running the Playbook

Consider the following screenshot. Your account's 1 hour of ssh-agent time has expired, so the private key needed to be reloaded. After executing the playbook, the curl command was used to validate that nginx was serving the index.html that was copied from the local directory.

Deliverable 9. Provide a screenshot that shows a successful playbook run followed by curls to the index page for clone2 and clone3.

  • ansible-playbook nginx-centos7.yml -i ../hostfile.txt
  • curl clone02-hannelore
  • curl clone03-hannelore

image