Automation Lab - Hsanokklis/2023-2024-Tech-journal GitHub Wiki
Initial Setup
You have 3 new Linux hosts in your environment, so configure them the same way as previous new Linux hosts. On each system, create your named sudo user with the exact same name and password. These are stand-alone hosts, so they do not need to be joined to your AD domain.
- 10.0.5.70 (clone01-hannelore)
- 10.0.5.71 (clone02-hannelore)
- 10.0.5.72 (clone03-hannelore)
Steps for Initial Configuration:

- Power the clones on
- Change the network adapter to LAN
- Log in as user `root` with password `Ch@mpl@1n!22`
- Add a user:

```
adduser hannelore
passwd hannelore
usermod -aG wheel hannelore
```

- Change the system hostname in `nmtui` to `clone01-hannelore` (it will change based on the clone)
- Configure the network settings in `nmtui`:
clone01-hannelore
- IPv4 address: `10.0.5.70/24`
- Default gateway: `10.0.5.2`
- DNS server: `10.0.5.6`

clone02-hannelore
- IPv4 address: `10.0.5.71/24`
- Default gateway: `10.0.5.2`
- DNS server: `10.0.5.6`

clone03-hannelore
- IPv4 address: `10.0.5.72/24`
- Default gateway: `10.0.5.2`
- DNS server: `10.0.5.6`
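The same addressing can also be applied non-interactively with `nmcli` instead of `nmtui` (a sketch, not the lab's method; the connection name `ens192` is an assumption, so substitute whatever `nmcli con show` reports for your VM):

```shell
# Assumed connection name "ens192" -- check yours with: nmcli con show
# Values shown are for clone01; adjust the address per clone
nmcli con mod ens192 ipv4.method manual \
  ipv4.addresses 10.0.5.70/24 \
  ipv4.gateway 10.0.5.2 \
  ipv4.dns 10.0.5.6
nmcli con up ens192   # re-activate the connection so the settings take effect
```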
- Reboot the systems
- Make an A/PTR record for all 3 clones in the DNS Manager on `ad02-hannelore`
Deliverable 1. Perform routine connectivity and name-resolution tests via one-liners similar to the screen below:

clone01-hannelore

`whoami; hostname; hostname -i; nslookup ad02-hannelore | grep -i name; ping -c1 ad02-hannelore | grep "packets transmitted"`

`hostname -i` gets the IP address associated with the hostname.
clone02-hannelore
clone03-hannelore
PSSH
Install both of the following yum packages on clone01, assuming 'yes' (`-y`) to any question prompted during installation:

- epel-release: `sudo yum -y install epel-release`
- pssh: `sudo yum -y install pssh`
We are going to use a different authentication technique for SSH: an RSA public/private key pair, with the private key protected by a passphrase.
Make sure to use the default key names (`id_rsa` and `id_rsa.pub`).
The passphrase is `souphello`.
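The key generation can be sketched as follows. The lab run used the interactive `ssh-keygen -t rsa` with the default names; below, the output path and passphrase are passed on the command line purely so the example runs unattended (a demo-only shortcut, since a passphrase on the command line lands in shell history):

```shell
# Demo only: writes to /tmp/keydemo instead of the default ~/.ssh/id_rsa
mkdir -p /tmp/keydemo
ssh-keygen -q -t rsa -b 2048 -N 'souphello' -f /tmp/keydemo/id_rsa
ls /tmp/keydemo   # id_rsa (private, passphrase-protected) and id_rsa.pub (public)
```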
We are going to push the public component of this key pair (`id_rsa.pub`) to our accounts on clone2 and clone3 using `ssh-copy-id`.

Copying the key to clone02-hannelore: `ssh-copy-id hannelore@clone02-hannelore`

Copying the key to clone03-hannelore: `ssh-copy-id hannelore@clone03-hannelore`

When a public key is copied via `ssh-copy-id`, it shows up on the remote system in `~/.ssh/authorized_keys`:
Deliverable 2. ssh into either clone2 or clone3 using your ssh key. The passphrase you enter is only for unlocking your local private key on clone1, as opposed to logging into the remote system itself. Provide a screenshot that shows the prompt for your passphrase as well as the login into clone2 or 3 that does not ask for a password.
Logging in to clone02-hannelore with the passphrase prompt (no remote password is requested):
ssh-agent
Far too many administrators create ssh keys that are not protected by a passphrase. This is analogous to leaving the keys to your Porsche lying around. They do this because a protected key requires typing the passphrase to unlock it. We will balance the security provided by a passphrase against the convenience of a totally passwordless solution by "caching" the passphrase in memory for an hour using the ssh-agent program.
Deliverable 3. Provide a screenshot showing passwordless login to clone2 or 3 after having loaded the ssh-agent and private key.
```
eval `ssh-agent`        # start the agent and export its environment variables
ssh-add -t 1h           # cache the private key for one hour (prompts for the passphrase once)
ssh clone02-hannelore   # no passphrase prompt now
exit
ssh clone03-hannelore
```
/etc/sudoers
On both clone2 and clone3, adjust /etc/sudoers so that the highlighted line is uncommented:
This allows elevation to root without retyping a password if the current user is in the wheel group and is a common configuration.
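For reference, the relevant stock entry in the CentOS 7 `/etc/sudoers` (edit it with `visudo`) is the commented-out NOPASSWD wheel rule; uncommented, it reads:

```
## Same thing without a password
%wheel  ALL=(ALL)       NOPASSWD: ALL
```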
Deliverable 4. Provide a screenshot similar to the one below that shows passwordless access to clone2 or clone3 and elevation to root without retyping a password.
Deliverable 5. Review the man page for pssh and construct a pssh hosts file containing your clone2 and clone3. Then execute the following non-privileged and privileged commands, displaying inline standard output & errors as each host completes. Provide screenshots showing the command and its [SUCCESS] output for all four commands:

- `uptime`
- `uname -a`
- `sudo yum -y install tree`
- `tree /etc/yum.repos.d/`

Create a hosts file called `hosts.txt`, open it in nano, and put the hostnames in the file (one per line).
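The hosts file step can also be done without nano; the contents are what matter (one target hostname per line):

```shell
# Build the pssh hosts file, one target hostname per line
printf '%s\n' clone02-hannelore clone03-hannelore > hosts.txt
cat hosts.txt
```

With the agent loaded, the four commands then take the form `pssh -i -h hosts.txt uptime`, `pssh -i -h hosts.txt uname -a`, `pssh -i -h hosts.txt "sudo yum -y install tree"`, and `pssh -i -h hosts.txt "tree /etc/yum.repos.d/"`, where `-h` names the hosts file and `-i` prints each host's stdout/stderr inline as it completes (per the pssh man page).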
TROUBLESHOOTING: When I tried to run `pssh`, I kept getting the error `[FAILURE] Exited with error code 255`.

I found this entry in the man pages for `pssh` and changed the configuration file to incorporate this line, but to no avail.

I thought maybe it was a problem with my hostnames, but when I pinged them and did `nslookup`, they resolved perfectly fine. Instead of putting the hostnames in `hostfile.txt`, I tried it with the IP addresses as well, and it still did not work. I double-checked my DNS records and the IP addresses for my clones, and they were all configured correctly.

Eventually Matt Compton helped me figure out that I needed to set up my `ssh-agent` from Deliverable 3, since every time I use ssh, a new session is started. The `ssh-agent` is essentially what handles the passphrase when I am speaking to my clones. `pssh` has no way to ask me for a passphrase, so the `ssh-agent` needs to be running to use pssh; `ssh-agent` handles authenticating me and making sure I am who I say I am with my passphrase.

```
eval `ssh-agent`
ssh-add -t 1h
```

Something else to note: running `pssh` directly from my clone01 and ssh-ing to it from my `ad02-hannelore` are two different sessions. So I ran the `ssh-agent` commands directly on my `clone01-hannelore` machine and the pssh command worked fine. But when I tried over ssh from my `ad02-hannelore` machine, it failed again, since that is a different session and I needed to run the `ssh-agent` commands again.
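A quick way to check whether the *current* session can see an agent with a key loaded (which is what this troubleshooting came down to) is `ssh-add -l`; a small sketch:

```shell
# ssh-add -l exit status: 0 = agent reachable with keys loaded,
# 1 = agent reachable but no keys, 2 = no agent reachable from this session
ssh-add -l || echo "no usable agent/key in this session"
```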
Success of commands
Ansible
Deliverable 6. Install the ansible package using yum on just clone1. Once installed, conduct the following test that walks through all hosts in your hosts file, and runs a module called ping against them. Take a screenshot similar to the one below that shows a ping and pong response from clone2 and clone3.
On clone01-hannelore: `sudo yum install ansible`
Successful execution of ansible command
`ansible all -i hostfile.txt -m ping`

Note: `hostfile.txt` is whatever file you have your hostnames in.
Ansible and sudo
Consider the following screenshot. As you should recall, the /etc/passwd file is readable by everyone, while the /etc/shadow file (which contains hashed passwords) is readable only by root. Notice the success on tailing /etc/passwd (e.g. `ansible all -i hostfile.txt -a "tail /etc/passwd"`) and the subsequent failure on /etc/shadow. This is resolved by telling Ansible that the user associated with the ssh public key at the other end of the connection is a sudoer, via the "become" flag (`-b`).
Deliverable 7. Provide a screenshot similar to the one below.
My screenshot
Deliverable 8. Figure out how to add an arbitrary port to the firewall using Ansible. Show the commands used, their success, and then issue another command to list those ports as shown in the following screenshot (8080/tcp in the example).
An arbitrary port simply means one whose number has no predefined or standardized use (unlike, say, HTTP's port 80).

Doing the command manually: I changed the command to `ansible all -b -i hostfile.txt -a "firewall-cmd --add-port=8080/tcp"`.

I was able to see that the port was successfully added with `ansible all -b -i hostfile.txt -a "firewall-cmd --list-all"`. (Note that `--add-port` without `--permanent` only changes the runtime firewall configuration.)
The Ansible Playbook
- On clone01, create a directory called "Nginx": `mkdir Nginx`
- Within that directory, wget the following file:

  `wget https://gist.githubusercontent.com/icasimpan/803955c7d43e847ce12ff9422c1cbbc4/raw/c1753594e638590ac4d54e685dd3ae1ee1d9f40a/nginx-centos7.yml`

- Make the SYS-255 modifications shown below to that file
- Within the Nginx directory, make a file `index.html`
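Creating the page can be as simple as the following (the marker text is hypothetical; any content you can recognize in the curl output works):

```shell
# Hypothetical page content -- the playbook copies this file out to the clones
echo '<h1>Served by nginx via Ansible</h1>' > index.html
cat index.html
```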
Running the Playbook
Consider the following screenshot. Your account’s 1-hour of ssh-agent time has expired, so the private key needed to be reloaded. After executing the script, the curl command was used to validate that nginx was serving the index.html that was copied from the local directory.
Deliverable 9. Provide a screenshot that shows a successful playbook run followed by curls to the index page for clone2 and clone3.
ansible-playbook nginx-centos7.yml -i ../hostfile.txt
curl clone02-hannelore
curl clone03-hannelore