Tech Journal - TechDragon9/SEC440-01-Tech-Journal GitHub Wiki

Week 1: 8/30/2021

Established vSphere as the working environment for current and future class projects. Working on Project 1, which is essentially setting up the vyos1 and vyos2 gateway VMs and the three VMs connected to them (web01, xubuntu-lan, and xubuntu-wan). For vyos1, before turning it on, make sure the network adapters are set: adapter 1 should be on the WAN network and adapter 2 on the LAN network. This can be done through the edit VM settings button. Once set, power on and enter the vyos1 VM. Log into vyos with the default credentials. To give vyos the appropriate IP address information, use the command "configure" to enter configuration mode. This mode in vyos lets the user edit things such as the firewall, DNS settings, DHCP settings, and other options. Configuration mode can be exited with the "exit" command. To edit the IP addresses, use the following commands:

set interfaces ethernet eth0 address 10.0.17.x/24
set interfaces ethernet eth0 description "WAN Address"

Repeat this process for ethernet eth1 with the appropriate LAN address that vyos1 should have and add a description for it (a sketch is shown below). You may notice that both eth0 and eth1 have DHCP addresses. Remove these with the delete command (delete interfaces ethernet eth0 address dhcp). You can do the same with the hw-id value if you wish to start with a clean slate. Next, use the commit command to apply any changes you have made, then the save command so those changes persist in the startup configuration (when the VM is booted back up after closing). This concludes the IP address changes on vyos1, but we aren't done with it yet. Before we can do the rest of the vyos1 setup, we need to set up the xubuntu-wan, xubuntu-lan, and web01 machines, because the IP addresses of these machines will be required later for some of the rules that are going to be set on vyos1 (and later vyos2).
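
For reference, a sketch of the eth1 configuration plus the cleanup and commit sequence might look like this (the LAN address shown, 10.0.5.2/24, is based on this network's addressing; adjust to yours):

set interfaces ethernet eth1 address 10.0.5.2/24
set interfaces ethernet eth1 description "LAN Address"
delete interfaces ethernet eth0 address dhcp
delete interfaces ethernet eth1 address dhcp
commit
save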

First, make sure that xubuntu-lan and xubuntu-wan are connected to the correct networks (xubuntu-lan -> LAN, xubuntu-wan -> WAN). Web01 should also be checked to confirm it's on the LAN connection and not WAN. Now that they are connected to their proper networks, turn the machines on and log in with default credentials. There are two ways one could configure xubuntu. The first is the more complex way, editing with netplan, which I cover in another tech journal linked here. A much simpler way is to edit the ethernet connection directly. In the upper right corner of the xubuntu screen there should be an icon labelled network when you hover the mouse over it. Click it and select "Edit Connections". Doing so brings up a list of network connections; for now there should only be one, a wired connection. Select it and click the little gear in the lower left corner of the menu. This brings up a new menu where you can edit the connection. Go to the IPv4 Settings tab, change the method to Manual, and add the address you want the machine to have. Make sure it is pointing to the correct IP address for its gateway (10.0.17.2 for xubuntu-wan and 10.0.5.2 for xubuntu-lan). Xubuntu-wan's default gateway is 10.0.17.2 because xubuntu-wan is strictly connected to the WAN network, with vyos1 and vyos2 acting as bridges between the WAN and LAN networks. The netmask should be 24, as these networks are currently /24 subnets. The DNS servers, for now, can be set to the default gateway IP address. In the future, when a DNS server is established in the network, this will be changed to point to that server, but for now use the IP address of the default gateway for the related network. To make sure the machines can see and communicate with each other (particularly the xubuntu machines to vyos), use the ping command to ping vyos1, which should be the default gateway.
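
As a rough sketch, the manual IPv4 settings for xubuntu-lan might look like this (the host address is an assumption; pick any free address in the LAN range), and a quick ping confirms the gateway is reachable:

Address: 10.0.5.50    Netmask: 24    Gateway: 10.0.5.2
DNS servers: 10.0.5.2

ping 10.0.5.2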

Next, let's configure web01 before going back to vyos1. Web01 is connected to the LAN network, so make sure before turning the virtual machine on that it is connected to LAN. Then turn on the machine and log in as the default user, which should be root. Web01 is a CentOS 7 virtual machine, which I previously configured in another tech journal located here. Essentially we are going to alter the ifcfg-ens192 file found in the /etc/sysconfig/network-scripts/ directory to add our IP address, the subnet prefix, the default gateway, and the DNS server (which for now should be 8.8.8.8, as it will be changed when we get a DNS server). You can also add a domain, but for now I'm not going to. Keep in mind that editing and saving this file is only possible as root, so make sure you used the sudo command prior to opening the file, or else you won't be able to save your changes (a sketch of this file is shown after the user-creation steps below). Once that is done, we are adding a new user. Use the commands useradd and passwd to create a user and a password for that user. Follow those with the usermod command to make this new user a sudo user. The commands should look something like this:

useradd user
passwd user
usermod -aG wheel user

Note that "wheel" is the name of the sudo group in CentOS 7. Once completed, change the hostname of the box using the hostnamectl set-hostname command and then restart the network using the systemctl restart network command. After the system is restarted and you log out of the default user, you should be able to log in as the new user you created. Again, check to see if it can ping vyos1 to make sure you configured the machine right.

Now we move back to vyos1 because there was a step I forgot initially: setting vyos1's default gateway and DNS server so it has internet access. This is important because vyos1 needs to be able to forward traffic from the LAN out through the WAN so the LAN can reach the internet. I go over how to do this in the following tech journal. Set the gateway to 10.0.17.2 and the DNS server to the same address. Then use the ping command to check that it can reach the other machines and the default gateway. If it can, congratulations, everything is configured for now! Now we get to the fun part: masquerading the LAN addresses so devices on LAN can reach the internet. This is done by creating a nat source rule on vyos. This is shown in my other tech journal, but essentially you use the set nat source rule command to create a rule that takes the source addresses of LAN (10.0.5.0/24) and translates them on the outbound WAN interface, so internet services (say google.com) become reachable from LAN. The full commands are shown in my Tech Journal for SEC 350. While doing so, this would also be a good time to add DNS forwarding (shown in the same tech journal).
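
Since the full commands live in that other journal, here is only a hedged sketch of what they might look like on vyos1 (the rule number is arbitrary, and the default-gateway and DNS-forwarding syntax varies between VyOS versions; treat this as a reminder, not the authoritative commands):

set protocols static route 0.0.0.0/0 next-hop 10.0.17.2
set system name-server 10.0.17.2
set nat source rule 10 outbound-interface eth0
set nat source rule 10 source address 10.0.5.0/24
set nat source rule 10 translation address masquerade
set service dns forwarding listen-address 10.0.5.2
set service dns forwarding allow-from 10.0.5.0/24
set service dns forwarding name-server 10.0.17.2
commit
save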

Once DNS and routing are done on vyos1 to give web01 and xubuntu-lan internet access, go back to web01 and install httpd with the sudo yum install httpd command. Then start the httpd service using the systemctl start httpd.service command, followed by systemctl enable httpd.service so it comes back up on boot. Next, web01's firewall needs to allow either the http service or the http ports (80 and 443 tcp) and be reloaded before httpd is reachable. The command is firewall-cmd --add-service=http --permanent to add http as a service, then firewall-cmd --reload to force the firewall to reload and apply the change. To make sure the http service was added to the firewall, use the firewall-cmd --list-all command.
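
Pulled together, the whole sequence on web01 looks something like this (run as root or with sudo):

yum install httpd
systemctl start httpd.service
systemctl enable httpd.service
firewall-cmd --add-service=http --permanent
firewall-cmd --reload
firewall-cmd --list-all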

This concludes my first week on Project 1 for this class.

Week 2: 9/6/2021

Picking up where I left off from week 1, I will now enable ssh hardening and two-factor authentication. First, I had to disable ssh root login on web01 before I could add authentication. The guide I used to help with disabling ssh root login is located here, but the general idea is to go into the ssh configuration file, set the PermitRootLogin line to no, and add an AllowUsers line with the user(s) you wish to have ssh permission. Make sure the ssh service is restarted after the ssh config file is altered. Next I moved on to two-factor authentication for ssh. I used the following guide to help with this step. Before attempting to download google-authenticator, I suggest installing the EPEL repo from Fedora (for CentOS) using the following command:

 sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

This allows the google-authenticator package to actually be found by yum on web01. Once google-authenticator is installed, run its setup on the machine and register the generated codes with a mobile authenticator app, then open the /etc/pam.d/sshd file and add the line:

auth     required    pam_google_authenticator.so nullok

Nullok is temporary; it allows users that don't have two-factor authentication enabled to still use ssh. Given we have only one authorized user for ssh, this shouldn't pose any issues. Next, modify the sshd_config file so the ChallengeResponseAuthentication line is set to yes instead of no. That concludes setting up two-factor authentication. Test it by sshing into web01 from another machine to check that two-factor authentication is working properly.
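
Pulling the sshd_config pieces from this week together, the relevant lines in /etc/ssh/sshd_config should end up looking something like this (the username is a placeholder for whatever user you created); restart sshd afterwards:

PermitRootLogin no
AllowUsers youruser
ChallengeResponseAuthentication yes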

Now we have to make it so WAN can access http and ssh. This would be relatively easy if we were doing firewall rules. However, we aren't separating the zones of our network yet and thus have no real firewall rules in vyos1 to add or change. So instead we will be adding nat destination rules. These are very similar to the nat source rules used earlier in Project 1, but with a slight difference: we specify a translation address (10.0.5.100, web01's LAN address) and use inbound-interface rather than outbound-interface. From the WAN side, web01 is reached through the router's WAN-facing address, which effectively acts as a fake WAN identity for 10.0.5.100. Use the following commands:

set nat destination rule 10 description "Port Forwarding HTTP to 10.0.5.100"
set nat destination rule 10 destination port 80
set nat destination rule 10 inbound-interface eth0
set nat destination rule 10 protocol tcp
set nat destination rule 10 translation address 10.0.5.100
set nat destination rule 10 translation port 80

This should allow xubuntu-wan to reach the http service on 10.0.5.100 while disguising 10.0.5.100 behind the WAN-facing address. The same approach works for SSH.
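
A hedged sketch of the matching SSH rule (the rule number 20 is arbitrary; any unused number works):

set nat destination rule 20 description "Port Forwarding SSH to 10.0.5.100"
set nat destination rule 20 destination port 22
set nat destination rule 20 inbound-interface eth0
set nat destination rule 20 protocol tcp
set nat destination rule 20 translation address 10.0.5.100
set nat destination rule 20 translation port 22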

Now we must set up vyos2. This should have the exact same setup as vyos1, so refer to the setup phase of vyos1. The only difference is that the IP addresses for vyos2 are 10.0.17.72 for the WAN-facing interface and 10.0.5.3 for the LAN-facing interface. Everything else should be a near carbon copy of vyos1's setup. Once done, we have to reconfigure the other machines on WAN and LAN to have a backup gateway/DNS they can fall back on if one router goes down. This is really only needed on the LAN side since WAN already knows its DNS, so let's focus on the LAN machines for now. The catch is that we can't just hand out a second gateway, because each machine can have only one default gateway. How does that get fixed? With a virtual gateway using VRRP. To set it up, we use the following commands:

set high-availability vrrp group group-name vrid x
set high-availability vrrp group group-name interface eth1
set high-availability vrrp group group-name virtual-address 10.0.5.1

This sets up the basic VRRP group. Repeat this for the WAN connection and on vyos2 (assuming you started on vyos1), as sketched below. Both machines will then advertise the same virtual gateway, which should then be set as the default gateway for all LAN machines. Since both vyos1 and vyos2 back the same virtual address, nothing should conflict other than which router is actively answering at any given time.
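
As a fuller sketch of the vyos1 side (the group names, vrid values, and priorities are assumptions to adapt; vyos2 would use the same groups with a lower priority, and the WAN virtual address depends on your WAN allocation):

set high-availability vrrp group LAN-HA vrid 10
set high-availability vrrp group LAN-HA interface eth1
set high-availability vrrp group LAN-HA virtual-address 10.0.5.1/24
set high-availability vrrp group LAN-HA priority 200
set high-availability vrrp group WAN-HA vrid 20
set high-availability vrrp group WAN-HA interface eth0
set high-availability vrrp group WAN-HA virtual-address 10.0.17.x/24
set high-availability vrrp group WAN-HA priority 200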

Now that that is done, just run some tests and whatnot to make sure everything is working. I'll be honest, I had trouble getting http and ssh to work for some odd reason. Despite being sure that I had the configuration files correct, I can't seem to get the WAN to acknowledge LAN's, or more importantly, web01's existence. Maybe I need to point the WAN machines' default gateway not at 10.0.17.2 but at the vyos machines and the virtual gateway, but that is for me to test and see. If so, a new rule will be needed to point WAN machines to the correct DNS via port forwarding.

Addendum to Project 1

So, it turns out ssh and http do work from the WAN to LAN. You just need to use the IP address of the vyos machines instead, since that is how web01's LAN IP address is being masked. My bad. Also make sure that all machines such as web01 (maybe not xubuntu-wan) are pointing to the shared virtual IP address of vyos1 and vyos2, since not doing so messed with my redundancy as well. Images of how the configurations should look on the vyos machines are located here.

Week 3: 9/13/2021

This week marks the start of Project 2, in which I attempt to make web01 and the http service redundant. To this end we have another box called web02 and two additional boxes known as ha1 and ha2. Those last two boxes are going to be on another interface called OPT, so we will have to edit both vyos1 and vyos2 to make this third interface active. For now let's focus on web02 and its configuration. It needs to be set up nearly identically to web01, including http. Connect web02 to LAN and log in as the default user. From there, follow the steps from the previous week and tech journals to set up web02. The IP for web02 is 10.0.5.101, so aside from that address, the hostname, and the user's password, it should match web01 (assuming you didn't make a different user than on web01). Now let's do something different. When I configured web01 and web02's http, there was something I forgot to do: make a unique page to serve as the default webpage. This is done simply by going to the /var/www/html/ directory and creating a file called index.html. Write anything in that file you wish; I just went with a simple "Welcome to hostname's Webpage!". Do this for both web01 and web02 (a one-liner for this is shown below). This will make identifying which web server we are using much easier down the line.
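
A quick way to create that page (a sketch; swap in each machine's own hostname text):

cd /var/www/html/
echo "Welcome to web01's Webpage!" | sudo tee index.html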

Now that web01 and web02 are completed, we are going back to vyos1 and vyos2 for a second. Since we need OPT to be visible, we are going to have to add a new interface to both machines. To do this, go to vCenter and edit settings for vyos1 and vyos2. Then click "Add New Device" and select "Network Adapter". This will be our OPT network interface. Repeat what was done previously for the vyos interface setup and VRRP setup back in Weeks 1 and 2. The OPT network is 10.0.6.0/24; vyos1 should get 10.0.6.2, vyos2 should get 10.0.6.3, and the VRRP virtual address should be 10.0.6.1. There is one more thing that should also be done: web02 should have http forwarding as well. This will allow WAN networks to access http from either web01 or web02, so use the same commands used in Week 2 to configure the http forwarding.
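
On vyos1, the OPT side might look something like this (a sketch; the eth2 interface name, group name, and vrid are assumptions, and vyos2 mirrors it with 10.0.6.3 and a lower priority):

set interfaces ethernet eth2 address 10.0.6.2/24
set interfaces ethernet eth2 description "OPT Address"
set high-availability vrrp group OPT-HA vrid 30
set high-availability vrrp group OPT-HA interface eth2
set high-availability vrrp group OPT-HA virtual-address 10.0.6.1/24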

Once vyos1 and vyos2 are set up with OPT, we move on to ha1 and ha2 configuration. Both machines are Ubuntu systems that should be set to the OPT network before starting them up. Then log in with the default credentials you have access to, starting with ha1. Now, instead of changing a config file like I did previously for web01 and web02, we have to use netplan to configure ha1 and ha2. I go over that in my SEC-265 Tech Journal. Thankfully, netplan is already installed on Ubuntu by default, so we don't need to install it. I used this handy guide to help with the use of netplan. Be warned that netplan is finicky: you have to use spaces (not tabs) and consistent indentation when writing or rewriting the yaml file (the file netplan uses for its configuration). If you don't, the file will fail to parse when you try to apply it, so be very careful when writing the netplan configuration. When it is configured correctly, use the command netplan apply, which applies the new changes in the yaml file you altered, assuming there is no syntax error. To make sure the changes were in fact made, simply use the ip addr command to see the new IP address. Do this on both ha1 and ha2, giving each its own unique IP address (I went with 10.0.6.10 and 10.0.6.15).
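
A minimal sketch of what the netplan yaml might look like on ha1 (the interface name ens160 is an assumption, so check ip addr first, and the gateway and DNS point at the OPT virtual address):

network:
  version: 2
  renderer: networkd
  ethernets:
    ens160:
      addresses:
        - 10.0.6.10/24
      gateway4: 10.0.6.1
      nameservers:
        addresses:
          - 10.0.6.1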

Now we move on to the part that I feared doing: proxies. Since we need to be able to switch seamlessly from web01 to web02 if something were to fail, we will need proxy servers to help with this. This is the entire purpose of the ha1 and ha2 boxes. This is where things can vary based on what kind of software you are most comfortable with. For me, I'll be using Squid. It's free, open-source software that I have used in the past. However, I haven't recorded my previous experience with this tool. Thankfully, there are helpful tutorials on how to install and set up Squid on a Linux system. The tutorial I used is located here. Before we can even do that, though, we need to make sure forwarding is enabled so that both ha1 and ha2 can reach the internet to download Squid. So we go back to vyos1 and vyos2 to add another NAT source rule that lets OPT connect to the internet, plus DNS forwarding for OPT so those boxes can resolve names. Basically, repeat the steps from Project 1 so OPT gets internet access. It's important to have this access, otherwise it will not be possible to install Squid on the ha1 and ha2 servers. Once internet is established for OPT (test by pinging google.com from ha1), use the command apt install squid. This installs the Squid proxy server software.
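
The OPT versions of those vyos rules might look roughly like this (the rule number and the DNS-forwarding syntax are assumptions that vary by VyOS version):

set nat source rule 20 outbound-interface eth0
set nat source rule 20 source address 10.0.6.0/24
set nat source rule 20 translation address masquerade
set service dns forwarding listen-address 10.0.6.2
set service dns forwarding allow-from 10.0.6.0/24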

Now that Squid is installed, it's time to configure it. First, use the cd /etc/squid/ command to move into the squid directory. We will make a backup of squid.conf just in case we need to go back to "factory" settings; use the cp squid.conf squid.conf.factory command to make this backup file. Then open squid.conf with your editor of preference (mine is nano). Scroll down the file until you find the line http_port 3128. This line is located in the Network section of the config file, but you will be scrolling for a while. Once that line is found, change it to bind to ha1's own address, something akin to http_port 10.0.6.10:3128. Next, scroll up to the ACL rules. Here you want to define our network. Since the proxy is in OPT, we will use the OPT network address as our "mylan" value, so define the OPT acl as acl myopt src 10.0.6.0/24 (and allow it with a matching http_access allow myopt line). You may also have to add dns_nameservers to squid.conf so it knows which DNS server to use (i.e. the OPT virtual address or the vyos1 and vyos2 OPT IPs). Once done it should be possible to save the file and start Squid. But first, make sure port 3128 is open on the machine or else the proxy won't be reachable.
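
Put together, the edited lines in squid.conf might look something like this (a sketch; the addresses are assumptions from this lab's networking):

http_port 10.0.6.10:3128
acl myopt src 10.0.6.0/24
http_access allow myopt
dns_nameservers 10.0.6.1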

Speaking of firewalls, Ubuntu 20.04 (that is, ha1 and ha2) doesn't use firewall-cmd; it uses ufw by default. To allow port 3128/tcp, use the following command: ufw allow 3128/tcp. It's really that simple. However, you need to enable ufw using the ufw enable command, otherwise ufw is inactive. Keep this in mind for the future because there is a good chance you will have to repeat this again later on. Now that the port is allowed, enable Squid the same way you would any service, systemctl enable squid.service, then start it with systemctl start squid.service. Once done, repeat the whole Squid setup on ha2. We will need it for redundancy's sake.

Now the final steps should be simple: going over to xubuntu-wan and setting the proxy server settings there. But there is a problem. It can't be done in the network connection settings like I initially intended. Instead, it has to be configured in the web browser itself (which is its own issue). Go to the browser of choice and open its settings page. Toward the bottom of that page there should be a Network Settings section. Click the settings button in that section to access the proxy information. Now, as far as I know, you can only input one hostname (for the proxy server) in the manual proxy settings. I may be wrong about this. So redundancy will have to work/look a bit different than what I hoped.

Once the web browser settings are configured, it should be possible to use the proxy server to access web01 and web02. I could very well be wrong, as I have never set up redundancy with Squid before and have no idea if redundancy is even possible with Squid (although it really should be). But we can't test it until a destination rule is made so that the proxy servers are actually reachable. On vyos1 and vyos2, create a nat destination rule 30 to allow the proxy servers to be seen by the WAN network. In fact, one possible way to get redundancy is to create two near-identical rules, one for ha1 and one for ha2, on both vyos1 and vyos2. This way, one or the other server can be the proxy server in use based on which is active. Of course, it might not succeed unless you are using the virtual gateway as the point of access for the proxies (since vyos1 and vyos2 have different IPs).
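
A hedged sketch of what rule 30 for ha1 could look like (the addresses and port are assumptions; a second rule for ha2 would point at 10.0.6.15):

set nat destination rule 30 description "Port Forwarding proxy to ha1"
set nat destination rule 30 destination port 3128
set nat destination rule 30 inbound-interface eth0
set nat destination rule 30 protocol tcp
set nat destination rule 30 translation address 10.0.6.10
set nat destination rule 30 translation port 3128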

Coming to the end of Project 2, I have come to a few conclusions. One, apparently I'm not as good at redundancy as I thought, as a quick test on 9/19/2021 proved that I have vyos1 working, but once it's down I can't get access to web01 through vyos2 despite having all the same configurations and access to the same single virtual IP address. Two, Squid is probably not a good proxy server to use for redundancy, as it only really works with web browsers (or so it seems to me) and doesn't even do its job that well, given I don't have access to web01 through the web browser but do have access with the curl command in the terminal when vyos1 is active. Third, I really need to stop procrastinating on these projects, because I currently have very little of the project actually meeting the requirements (I have two proxy servers and two web servers with their own index pages that introduce them, but that's it). It will take some time before I manage to have my network working as intended. When that happens, I'll add an addendum to the project regarding corrections and fixes to the issues I've had now.

Addendum to Project 2

Once again I am correcting mistakes I made in Project 2. The first main mistake was the use of Squid for the project. Do not use Squid. Squid is a web proxy, but it does not act as a load balancer, which is what Project 2 requires. Instead, use software like HAProxy. I'm using this helpful guide to install and configure my HAProxy, which acts as both a proxy and a load balancer. This gives me the redundancy I couldn't get with Squid. Installing HAProxy is easy: simply use the apt install haproxy command to install the service/software. Before doing so, however, I suggest running apt update on your ha1 machine. Next, go to the /etc/haproxy/ directory and open the haproxy.cfg file to configure the load balancing function. At the bottom of the configuration file, write the following:

frontend http_front
     bind *:80
     stats uri /haproxy?stats
     default_backend http_back
backend http_back
     balance roundrobin
     server web01 10.0.5.100:80 check
     server web02 10.0.5.101:80 check

This defines that when a client, such as our xubuntu-wan, hits port 80 on the proxy, the request is automatically sent to one of our own webservers (web01 and web02) in round-robin order. Now, all this does is set up HAProxy. It doesn't yet give us the high availability that we need.

In any case, we will need to edit our vyos1 and vyos2 VMs. Nat destination rule 10 (the rule that forwards http to web01) is now out of date and needs to be updated to point at our proxy server. Simply alter the rule so that the translation address now points to 10.0.6.10 instead of 10.0.5.100. It would also be appropriate to change the description of nat destination rule 10 to note that http now goes through the proxy, just for future reference. If you test this now, you might be a bit disappointed to find that xubuntu-wan can no longer see web01 and can't curl or access the webpages. What you might notice if you attempt to reach web01 or web02 from a web browser is that it doesn't necessarily hit an error; instead it appears to take forever to load, as if it's trying to connect but simply can't. In fact, you might actually get a connection timed out error instead of a 404 not-found error. Don't worry, it just means you might have forgotten to change your ha1 VM's firewall rules to allow port 80 tcp. Do that now and it should be accessible to xubuntu-wan.

Now comes the redundant part. Repeat the previous steps on ha2: install haproxy, configure it, allow port 80 through the firewall, disable and stop Squid from running, and so on. You should also go to vyos2 and change its rules to match vyos1 so it can successfully be used as a replacement if vyos1 goes down. Again, this does not yet make the haproxies themselves redundant; all that has happened so far is configuring the rules and the proxies/load balancers to function. So it's time to go back to ha1 and begin installing a method of high availability. The method varies slightly based on which guide you use and which version of Ubuntu you are running. Since I'm using Ubuntu 20.04, I will use the software keepalived as my high-availability method. Install keepalived on both ha1 and ha2 with the apt install keepalived command. Next, you will need to enable IP forwarding (and non-local binding) on ha1 and ha2. This can be done by running the following commands:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf

Once done, reload the sysctl settings with the command sysctl -p, which should print the values you just set. Next we'll go to the keepalived directory inside /etc/. This is so we can create a configuration file for keepalived, since it does not come with one by default. Create a file called keepalived.conf, which is the file keepalived looks for when it is started. The configuration file for ha1, which is the primary proxy server, should look something like this:
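
(As a hedged reconstruction of what that file might contain; the interface name, virtual_router_id, password, and virtual IP are assumptions to adapt to your OPT network.)

vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass SomePass
    }
    virtual_ipaddress {
        10.0.6.20/24
    }
}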

The keepalived.conf file in ha2, which is the secondary proxy server, should look like this:
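
(Again a hedged reconstruction; the only differences from ha1 are the state and the lower priority.)

vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass SomePass
    }
    virtual_ipaddress {
        10.0.6.20/24
    }
}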

Now you can enable keepalived with the following command on both ha1 and ha2: systemctl enable --now keepalived. Assuming everything is done right, including making sure the interface is correct in the conf files, keepalived should now be fully functioning with a virtual IP address that both haproxy servers share. Once that is done, we need to go back to our HAProxy configuration files and alter them slightly. Simply open the haproxy.cfg file in the /etc/haproxy/ directories and change the bind line in the frontend http_front section to point to the new virtual IP address. Do this for both ha1 and ha2 and then restart haproxy.service. This would also be a good time to update vyos1's and vyos2's nat destination rule 10 to use this new virtual IP address instead of 10.0.6.10 or 10.0.6.15 for the proxy servers. This should allow for redundant http website access on web01 and web02! This concludes the addendum to Project 2.
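
For reference, assuming the keepalived virtual IP sketched above, the frontend on both ha boxes would end up bound like this (the nonlocal-bind sysctl set earlier is what lets the BACKUP node bind an address it doesn't currently hold):

frontend http_front
     bind 10.0.6.20:80
     stats uri /haproxy?stats
     default_backend http_back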

Weeks 4+5 (Project 3)

Project 3 is a group project in which we create a ransomware script that targets one of our machines (we chose our new mgmt machine) and then we create some sort of mitigation that would slow or stop the malicious code from running on our machine.

Mitigation

Mitigation Proof Video

In order to mitigate the threat of a Windows PowerShell script ransomware attack, some things had to be done to the mgmt VM. Mgmt is a Windows machine with access to Server Manager. To help create a mitigation, the mgmt server had to become an Active Directory domain controller. This was done by going into the "Manage" tab and selecting the "Add Roles and Features" option. From there, select the current machine (mgmt, which will be known by its hostname) and in the roles section select "Active Directory Domain Services". This should add everything that is needed for the system. Once done, select install and let it install everything. It will finish, but not be complete, because one more step is needed: promoting the mgmt server to a domain controller. During this promotion you will be required to create a "forest", which is just a form of domain name/group. Name it whatever you want; traditionally it's named yourname.local. Follow the instructions of the promotion wizard, and all should be good at this point. Once done, restart the mgmt computer. This is important.

After the computer is restarted and acting as the domain controller, log back in as the admin user and go to the "Tools" tab. From there, select "Active Directory Users and Computers". This allows you to add a user or two to the domain. Keep in mind that for now, these users will need to be added to the admin group to allow for login. In the Users folder, just right-click and select New > User. From there just fill in the name, password, and username of the new user as you see fit. Once done, we are going to add an OU, or Organizational Unit. These containers allow unique rules and policies to be applied to them. In the yourname.local domain, right-click and select New > Organizational Unit (OU). Name this OU something like "Protected Users". When done, move your newly made user into this OU.

Then we go back to Server Manager. Select "Tools" and then "Group Policy Management". This is where the mitigation rule is applied. In the window that opens, right-click the OU "Protected Users". One of the options should be to create a GPO in this domain and link it here. Select this option to create a new Group Policy on the OU and name it "Restricted Powershell Scripts". Then right-click this policy and edit it. In the editor that opens, navigate to User Configuration -> Administrative Templates -> Windows Components -> Windows PowerShell. From there, select "Turn on Script Execution". Setting this option to Disabled automatically sets PowerShell's execution policy to Restricted, which prevents any PowerShell scripts from running as the protected users. Apply this change and exit out of the windows. To test whether this worked, log out, log in as the protected user, and attempt to run a simple PowerShell script. Assuming nothing goes wrong, the script will fail to run for that user but will successfully run under another user. This is how we were able to mitigate a PowerShell script ransomware attack.

Weeks 6-7 (Project 4)

Project 4 deals with the redundancy of web applications and databases. For the first week, I'll be focusing on getting a database redundant, as that can be a bit more tricky than a web application. I chose to use MySQL as the database. My first step was to configure and turn on the new Ubuntu machines added to my network (henceforth u1, u2, and u3). Since u1, u2, and u3 will hold my database, I need them on a secured or protected network connection, so when editing the connection for these VMs, I placed them in the LAN network as it's the most "secured" of all my networks. Next I turned the machines on and logged in. This may take some time due to the VMs needing to configure themselves after being turned on for the first time. Once you are logged in, you'll need to use netplan to configure their IP addresses and network connections; refer to Project 2 earlier in this tech journal if you need a refresher. The IPs I used for the machines are 10.0.5.10 (u1), 10.0.5.20 (u2), and 10.0.5.30 (u3). Once the IPs of the Ubuntu machines are configured and their default gateway is pointing to the vyos virtual LAN IP address (10.0.5.1), you should be able to access the internet. Test this with a simple ping to google.com just to see if your boxes can communicate with the internet. Now we need to install the database on the machines. I started with the u1 machine and will treat u1 as the primary database server from this point forward.

I used the following guides for installing MySQL, one from Digital Ocean and the other located here. First things first, I need to activate ufw. Use the ufw enable command to enable and activate ufw on the machine. For now we won't open any ports because we don't know for sure which ports are required, so we are done with ufw for the moment. Next run the sudo apt update command to make sure everything is up to date on the VM. Finally, use the command sudo apt install mysql-server to install MySQL. Make sure to say "yes" or "y" when prompted to continue. You can check if MySQL was properly installed by running the mysql --version command.

Once MySQL is installed, we will need to configure it, since it is insecure at this point. The next command I ran was mysql_secure_installation as the root user. You will be prompted a few times: say "y" (yes) to the first yes/no prompt, choose 1 for the password strength policy, enter your preferred password, and type "y" if you are satisfied with the password. From this point on you will be asked if you wish to remove anonymous users, disallow remote root login, remove the test database and access to it, and reload the privilege tables. Type "y" for yes to all of these options. If you accidentally allowed root to be accessed remotely, you should disable it later for security purposes, since you don't need root to access the database remotely. It can be disabled later from inside the database with the following mysql commands: DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost', '127.0.0.1', '::1'); FLUSH PRIVILEGES;. Once finished, use the systemctl status mysql.service command to check that the database is active.

You can log into the database with the command mysql -u root to log in as the database root user. However, you will need to do so as the root user on the VM, so preface the command with sudo or use sudo -i to become root. Next, we can create a user for the database that isn't root. The MySQL command has a general format that looks like this: CREATE USER 'username'@'host' IDENTIFIED BY 'password'; Specify which host(s) you want this user to connect from, such as '10.0.5.%' (MySQL's wildcard form for the LAN network). Next, grant this new user privileges on the database. The command will look something like this: GRANT CREATE, ALTER, DROP, INSERT, UPDATE, DELETE, SELECT, REFERENCES, RELOAD ON *.* TO 'username'@'host' WITH GRANT OPTION;. Once done, run FLUSH PRIVILEGES; to apply these changes. Now exit MySQL and log back in as your new user with mysql -u username -p, which will prompt for that user's password. Once logged in as your own user, congratulations, you have a MySQL database. Repeat the create-user steps to also allow a user from the 10.0.6.0/24 network to remotely access the database (a consolidated sketch is shown below). Once you have both users on u1, repeat the above steps on the u2 and u3 machines and make sure the database usernames, hosts, and passwords are exactly the same as u1's.
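
Consolidated, the user-creation part might look like this inside mysql (the username, password, and host patterns are placeholders to adjust):

CREATE USER 'dbuser'@'10.0.5.%' IDENTIFIED BY 'StrongPassword1!';
GRANT CREATE, ALTER, DROP, INSERT, UPDATE, DELETE, SELECT, REFERENCES, RELOAD ON *.* TO 'dbuser'@'10.0.5.%' WITH GRANT OPTION;
CREATE USER 'dbuser'@'10.0.6.%' IDENTIFIED BY 'StrongPassword1!';
GRANT CREATE, ALTER, DROP, INSERT, UPDATE, DELETE, SELECT, REFERENCES, RELOAD ON *.* TO 'dbuser'@'10.0.6.%' WITH GRANT OPTION;
FLUSH PRIVILEGES;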

Now that we have three MySQL databases, we need to make them redundant. There are multiple ways to do this, but ultimately we need the replication to go both ways (so if the "master" database updates, it also updates the "slave" nodes and vice versa). There were two guides I found helpful for this. One is located here and the other can be found here. Both go over the same things. Next we have to edit the mysqld.cnf file located in the /etc/mysql/mysql.conf.d/ directory, which will require restarting our databases afterwards. Going into the mysqld.cnf file on u1 with the editor of your choice (mine is nano), add the following under the mysqld section:

server_id           = 1
log_bin             = /var/log/mysql/mysql-bin.log
log_bin_index       = /var/log/mysql/mysql-bin.log.index
relay_log           = /var/log/mysql/mysql-relay-bin
relay_log_index     = /var/log/mysql/mysql-relay-bin.index
expire_logs_days    = 10
max_binlog_size     = 100M
log_slave_updates   = 1
auto-increment-offset = 3
bind-address            = 10.0.5.10

Keep in mind that some of these variables such as server_id, log_bin, max_binlog_size, and bind-address are already in the mysqld.cnf file but either commented out or need slight changing. Assuming you don't make the mistake of spelling a variable wrong (like I did at first), you should not have any issues using the systemctl restart mysql.service command to restart MySQL. Then we will enter the database on U1 and create a replication user. This user's specific function is to allow connectivity to the other databases and send updates to them. Create the user using the CREATE USER 'username'@'host' IDENTIFIED BY 'password'; command and then use the command GRANT REPLICATION SLAVE ON *.* TO 'username'@'host';. I suggest using the actual IP address of U2 and U3, but this will require making multiple users with the same name and password. Do NOT forget to use FLUSH PRIVILEGES; once the users are made. Now we move on to U2 as one of the "slave" or "backup" databases. The mysqld.cnf file will look similar to the one in U1, but with a different bind address to match U2's IP and the server ID will be 2 instead of 1. It'll look like this:

server_id           = 2
log_bin             = /var/log/mysql/mysql-bin.log
log_bin_index       = /var/log/mysql/mysql-bin.log.index
relay_log           = /var/log/mysql/mysql-relay-bin
relay_log_index     = /var/log/mysql/mysql-relay-bin.index
expire_logs_days    = 10
max_binlog_size     = 100M
log_slave_updates   = 1
auto-increment-offset = 3
bind-address            = 10.0.5.20

Again, restart mysql.service to make sure it picks up the new configuration file. We repeat this with u3, making the server ID 3 and the bind-address 10.0.5.30. Now, the tutorials I used vary a bit in their approach. I will be following the solution shown in the techsupport guide, as that seems easier than fighting with root permissions to send data to one or more nodes. In order for this to work, each database must have replication users that point at the other machines. In u1's database, the replication users will look at the IP addresses of u2 and u3, on u2's database the users will look at u1's and u3's IPs, and u3's database replication users will look at u1's and u2's IP addresses. Create these replication users with the permissions shown earlier for this section (sketched below) and test them with a command like mysql -u repslave -p -h 10.0.5.20 -P 3306. I'm assuming the test is being done on u1, so if that isn't the case, feel free to change the IP address to whichever machine's database you are trying to access.
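
On u1, for example, the replication users might be created like this (the password is a placeholder; u2 and u3 get the analogous pairs pointing at their own peers):

CREATE USER 'repslave'@'10.0.5.20' IDENTIFIED BY 'ReplPassword1!';
CREATE USER 'repslave'@'10.0.5.30' IDENTIFIED BY 'ReplPassword1!';
GRANT REPLICATION SLAVE ON *.* TO 'repslave'@'10.0.5.20';
GRANT REPLICATION SLAVE ON *.* TO 'repslave'@'10.0.5.30';
FLUSH PRIVILEGES;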

Once you have made sure you can log in as the repslave user on the different databases, it's time to make the databases redundant. First, log in as the root user of each MySQL database and run show master status;. This command shows the master log file and position that the other databases will replicate from. The reason I do this for each database is because I installed u2's and u3's databases at a different time than u1's, which has led to a different log file name on u1. The file name is important because we will now run the following command on each database: CHANGE MASTER TO master_host='10.0.5.x', master_port=3306, master_user='repslave', master_password='Password', master_log_file='mysql-bin.y', master_log_pos=z; Make sure the password you enter is the repslave user's password, x is replaced so the address points at that node's "master" machine, y is the number from the master's log file name (such as 000007 or 000249), and z is the position within that master log file (156 in my case).

Next, start the replication on each node with the command START SLAVE;. This makes the databases "masters" and "slaves" to one another: in this case, u1 is u2's master, u2 is u3's master, and u3 is u1's master. This should give redundancy as data replication goes around each database, assuming everything is set up correctly. To check that replication is active, after about 5 seconds, use the SHOW SLAVE STATUS\G command, or alternatively SHOW SLAVE STATUS;. Assuming everything is working, you should be able to see that each database can reach its master database and replicate data. Now, I've gotten a weird error on u3 about needing a secure connection, but when I create a test database on u1, it propagates to u3 without issue, so I've elected to ignore it for now as it may be more of a security warning than a problem with the database's functionality. To test if the databases are able to replicate data, use the following commands on u1, u2, or u3 (it shouldn't matter which one you start with):

create database reptest;
create table reptest.`table` (`id` varchar(10));
show tables in reptest;

This creates a test database. Now go into the other databases and use the commands show databases; and show tables in reptest;. If replication is working properly, each database should have the reptest database with the same table inside it. As such, we now have a form of redundancy for the databases. It isn't perfect, but if one or two of the databases go down, the last one should be able to send changes/updates to the others once they come back online through the circular, round-robin style replication. This does mean, though, that u2's database will not update unless u1's database is active/connected.

But we aren't done yet. We now need a virtual IP address that each database client can use, so that when we make a redundant web application it can use the virtual IP instead of sticking to one specific database (since if that one database goes down, the web application loses access and the redundancy is pointless). This can be achieved in a few different ways. The easy way is to put the databases behind our HAProxy boxes by opening up a new port on those boxes that is aware of our database machines. So, let's go into our ha1 and ha2 boxes and add a new section to the /etc/haproxy/haproxy.cfg file. The new lines should look something vaguely like this:

frontend mysql_database
    bind 10.0.6.20:3306
    default_backend mysql_cluster
backend mysql_cluster
    balance roundrobin
    mode tcp
    server u1-aaron 10.0.5.10:3306 check
    server u2-aaron 10.0.5.20:3306 check
    server u3-aaron 10.0.5.30:3306 check

Once these lines are added at the end of the haproxy.cfg file, there is one last thing to change before you restart the haproxy service: alter the default mode from http to tcp, otherwise haproxy will refuse to start because it treats the mismatch as a fatal error (a sketch of the defaults section is shown below). We won't worry about the sudden loss of access to web01 or web02, since these will be updated for the web application work, and the database frontend needs tcp as its protocol. Once the mode is changed to tcp under the defaults section of haproxy.cfg, save the changes and restart haproxy. Now we should have access to the databases regardless of whether one goes down or not.

Now to test this redundancy before we move on to adding a web application. I did this test by going onto my xubuntu-lan machine, installing a mysql client with sudo apt install mysql-client-core-8.0, and then logging into the different mysql databases using the command mysql -u aaron -p -h x.x.x.x -P 3306, where x.x.x.x is the IP address of the database being accessed. To more accurately test redundancy, I suggest using the virtual address of the HAProxy boxes, because that can change which database you are accessing on each login. Once logged on, I simply tested whether I could create a database, add a table, and then see those appear on the other databases. For me, this worked. To see which database machine I was bound to for the session, I used the command SHOW VARIABLES WHERE Variable_name = 'bind_address';. I tried a similar query with hostname instead, but I consistently got my xubuntu-lan's hostname back instead of the database hostnames. Then simply turn one of the databases off and try inserting information into your new database. Be warned, you need to be using a database in order to insert and select information from it; this can be done with the USE name_of_database; command in mysql.
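
The defaults change is small; in /etc/haproxy/haproxy.cfg it would end up looking roughly like this (only the mode line matters here; the other values are typical defaults and are assumptions):

defaults
    log     global
    mode    tcp
    option  tcplog
    timeout connect 5s
    timeout client  50s
    timeout server  50s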

Now we are finally moving on to the web application. This SHOULD be a little more straightforward and fast. To make my life easier (hopefully) I will be using Apache (httpd) plus PHP as my web application platform. This guide helped with the installation process. First, update web01 and web02 using the yum update httpd command, or just use yum update; both have the same end result of updating httpd. In fact, it's a good idea to check that you already have httpd on web01 (you should from a previous lab, but it still helps to check). The firewall shouldn't need to be updated since http was already added back in the first lab, but since Apache can also serve https, it might be a good idea to add that just in case: use the command firewall-cmd --add-service=https --permanent followed by firewall-cmd --reload to save the addition of https. That is pretty much all there is to installing Apache, as the httpd service on CentOS is Apache. But now we need PHP, because Apache on its own can't talk to databases; PHP running under Apache can. So let's install PHP. Simply use the yum install php command, then restart the httpd service. That's it, PHP should now be on the machine, and it should be possible to make a .php file that can interact with the databases.
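
A short sketch of that sequence, plus a quick way to confirm PHP is actually being executed (the info.php test page is my own addition, and you may also need PHP's MySQL driver package, php-mysql or php-mysqlnd on CentOS 7, for the database steps later):

sudo yum install php
sudo systemctl restart httpd
echo '<?php phpinfo(); ?>' | sudo tee /var/www/html/info.php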

This guide can help with the basic creation and testing of PHP pages against the databases. It's a long process since you are essentially making several new web pages. Go to the /var/www/html/ directory; this is where you put the new php files. Next, create 7 files: config.php, index.php, create.php, read.php, update.php, delete.php, and error.php. Make sure you have a database and table you want to use for these files (the example given by the guide I used was a table called workers with multiple entries). Now open each file and drop in the code; the files should look generally like the code shown in the web guide I used. Once the files are complete, it's time to test whether php works. Log on to xubuntu-wan and test if you can access web01's php web page (for the sake of all sanity, we aren't going to add the php files to web02).

One of the first issues you may run into is the inability to connect to the database. When I discovered this issue, I went into my web01 box to see if it had access to the databases at all, since they were on the same network. As it turned out, they couldn't communicate, so I had to install a mysql client on web01 to let it connect to the databases. Testing access from web01 again revealed another issue, this time an authentication plugin error about caching_sha2_password not being loaded for the user. To fix this particular issue, I went back into my database and used the command ALTER USER 'user'@'host' IDENTIFIED WITH mysql_native_password BY 'password';. This finally allowed my web01 machine to access the database on the u1 machine. I followed this up by altering the config.php file to use this particular user and password for the scripts. Unfortunately this didn't fix the issue of the pages not displaying the data they should. Given the progress made, though, it's enough to prove that I have redundancy working.

Week 8-9 Project 5/Lab

For this part of the class, I've been given access to AWS to make a new box completely separated from my networks on vSphere. This is so we can try something different and install Wireguard to connect securely to my vSphere LAN environment. So first things first, log into AWS and navigate to the AWS Console while on xubuntu-lan. We don't have a machine here yet, so we are going to launch one by choosing EC2 among the options. Then we scroll down until we hit the Ubuntu Server 20.04 LTS (HVM) option for our machine. Select the Ubuntu 20.04 image and make the instance type t2.micro (it should be selected by default). Everything else should be fine, just make sure that ssh is allowed. Next, create a new key pair that you can use from xubuntu-lan to access the new VM, and download the resulting .pem file. Once it's downloaded, you should be able to access the instance with ssh. First, use chmod 400 on your .pem file so ssh will accept it. Then run the following ssh command: ssh -i keypair.pem ubuntu@x.x.x.x, where x.x.x.x is your new VM's public IP address. After running the command, congrats, you should be logged into your new VM.

Now we need to install Wireguard, which will give a secure connection between the AWS VM and xubuntu-lan. Since we are already logged in on the AWS virtual machine, let's install it there first. sudo apt update and sudo apt-get install wireguard are the commands you'll use on both boxes. Once Wireguard is on both machines, you will need to create another keypair specific to Wireguard. Again, this must be done on both boxes. To do this, first become the root user with sudo -i, then switch into the /etc/wireguard/ directory, and finally run the command umask 077; wg genkey | tee privatekey | wg pubkey > publickey to create both a public and a private key on each box. The public keys will eventually be shared between the boxes. DO NOT SHARE PRIVATE KEYS! Next, we will configure our wireguard interfaces and connections. First, on AWS, use the following commands:

ip link add wg0 type wireguard
ip addr add 10.0.101.1/24 dev wg0
wg set wg0 private-key privatekey
wg set wg0 listen-port 51900
ip link set wg0 up

Then on xubuntu-lan, repeat these commands but make sure the address is 10.0.101.2. Then we will configure the peer-to-peer portion of the connection. On xubuntu-lan, copy the public key and then paste it into the following command on aws: wg set wg0 peer <xubuntu-lan public key> allowed-ips 10.0.101.2/32 endpoint 10.0.5.6:51900. We then repeat this the other way, copying aws's public key and pasting it on xubuntu-lan in the following command: wg set wg0 peer <aws public key> allowed-ips 10.0.101.1/32 endpoint ec2-x-x-x-x.compute-1.amazonaws.com:51900. Next we need to create an inbound rule on AWS to allow this connection. To do this, go to the AWS console where the aws virtual machine is kept. Select the machine and go to the Security tab. Then click on the current security group (launch-wizard-1), and in the menu that pops up there should be a section for inbound rules with a button to edit them. This lets you enter a new rule specifically for the connection between aws and xubuntu. Make sure the type of protocol used is Custom UDP with the port range set to 51900. The source IP/subnet doesn't really matter, so you can leave it as 0.0.0.0/0 for the time being. Save this new rule and perhaps name it Wireguard Peer so that it can be distinguished from other inbound rules in the future.

Next, we go back to xubuntu. This is where we actually save the configuration applied to wg0. We do this with the wg showconf wg0 | tee /etc/wireguard/wg0.conf command on both aws and xubuntu (you can do this while ssh'd into aws from xubuntu so both are done from the same machine). Keep in mind you can only access /etc/wireguard/ as the root user. Next, check the new wg0.conf file. You'll notice that the addresses (and maybe even some of the settings you applied earlier) are not in the file, so it's time to fix it. Edit the file to look something like this for aws:

[Interface]
ListenPort = 51900
PrivateKey = X (or hidden)
Address = 10.0.101.1
[Peer]
PublicKey = y
AllowedIPs = 10.0.101.2/32
Endpoint = 10.0.5.6:51900

And look something like this for xubuntu-lan:

[Interface]
ListenPort = 51900
PrivateKey = X (or hidden)
Address = 10.0.101.2
[Peer]
PublicKey = y
AllowedIPs = 10.0.101.1/32
Endpoint = ec2-x-x-x-x.compute-1.amazonaws.com:51900
PersistentKeepalive = 25

Once these files are complete, it's time to test them. But first, you may want to restart wireguard on both machines. Simply use the commands wg-quick down wg0 and wg-quick up wg0. This is because you altered the configuration files for wireguard, and the connection only picks up the changes when the link is taken down and brought back up. Once that is done on both aws and xubuntu, you should be able to ping between the two machines and ssh to the other machine using their wireguard IP addresses.

Aside from being able to ssh to xubuntu from aws, there is another test you can do. On aws, as the normal user, create a directory called test and enter it. Then use the command echo "test for port 8888" > index.html followed by python3 -m http.server 8888, which makes the aws box act like an http server with port 8888 open on it. Back on xubuntu-lan, run the curl http://10.0.101.1:8888 command to see the results. If everything up to now was done right, your xubuntu-lan should be able to show the index.html file on aws despite using a port that is not enabled in our aws security group (the traffic rides inside the tunnel).

Now we can move on to something interesting: being able to access web01 from aws. The guide I'm following, written by my professor, used a machine called rocky, which is essentially the replacement for web01 and web02. However, as I'm still using web01 for a previous lab (the database redundancy lab), I will be using web01 instead of rocky. In truth the process should still be the same. On aws, edit the wg0.conf file again to add a new range to the peer's AllowedIPs, this time targeting 10.0.5.0/24 (the LAN network). Then on xubuntu-lan, edit the wg0.conf file to have PostUp and PostDown rules. These rules will look like this:

PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o ens160 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o ens160 -j MASQUERADE

Once finished, make sure you restart the wg0 connection on both aws and xubuntu-lan. Now enable IP forwarding on xubuntu-lan with the commands sysctl -w net.ipv4.ip_forward=1 and sysctl -p. You should now be able to curl web01's webpage from aws. Just be aware that it may take a little bit to start working.
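
For reference, the [Peer] section on the aws side would end up looking roughly like this (keys elided as before), and a test from aws would be curling web01's LAN address:

[Peer]
PublicKey = y
AllowedIPs = 10.0.101.2/32, 10.0.5.0/24
Endpoint = 10.0.5.6:51900

curl http://10.0.5.100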

Now onto Project 5 proper, where I get to choose my own subject. Wireguard was just an example, so now it's time to branch out. I chose to tackle CryFS, a free, open-source encryption tool designed to encrypt files and directories stored in cloud services such as iCloud, Dropbox, and others. The problem this tackles is two-fold. One, I have a device I want to allow access to files, but said device can't be linked to my virtual environment (such as my iPhone, for example). Two, it secures my files so that I can send them to other devices without fear of the files being seen by malicious internet users. For the sake of simplicity, I will be using my xubuntu-lan device as the one getting its files encrypted, while leaving either my aws box or my phone as the receivers of the files on xubuntu-lan. I will be using two tutorials: one from GitHub to install CryFS on xubuntu-lan (since xubuntu-lan is an Ubuntu 20.04 machine) and another from the CryFS website itself.

First, run the command apt install cryfs on xubuntu-lan. Make sure you are either the root user or have sudo at the start of the install command. This should install cryfs onto xubuntu-lan with no issues. You can check that it's on the machine with the command which cryfs. Now that CryFS is on xubuntu-lan, we need to create an encrypted directory pair. We do this with the cryfs basedir mountdir command, which creates the directories if they don't exist. You will also be asked a few questions, such as whether you wish to use the default settings (for the sake of the demonstration, just say yes), and for a password to use for the encryption. For this demo the directories are called basedir and mountdir, though you can name them however you wish (a short test sequence is sketched below).

Once the directories are made, it's time to test how/if they work. Simply enter the mountdir directory and make a simple file. The file can have anything in it and be named anything; this is simply to see if the encryption is working. You should also create a directory inside mountdir to test that directories are encrypted too. Once you have those set up, look into the basedir directory to see what is there. You won't see the actual names, but you will see three encrypted files/directories, plus a cryfs.config file, though for the moment that is not important. Now, going back to where we started (I was doing all of this as the root user), run the command cryfs-unmount mountdir/. This unmounts mountdir, which effectively turns CryFS off. That means when you check mountdir with the ls mountdir/ command it will appear empty; however, ls basedir/ will still show the encrypted files. When you remount mountdir you get access to the files again. To remount it, use the cryfs basedir mountdir command once more and enter the password you used for the encryption.
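
The whole create/test/unmount cycle looks something like this (a sketch; the directory names and test contents are arbitrary):

sudo apt install cryfs
cryfs basedir mountdir                 # creates and mounts the encrypted pair; set a password
echo "secret test" > mountdir/test.txt
mkdir mountdir/testdir
ls basedir/                            # encrypted blobs plus cryfs.config, no readable names
cryfs-unmount mountdir/
ls mountdir/                           # empty until remounted
cryfs basedir mountdir                 # remount with the same password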

Now we get to the tricky part: adding cloud accessibility. This effectively means making a cloud client sync to a local folder and uploading basedir into that synced folder. The tutorial I'm following uses Dropbox, but I want to use Google Drive so I can access these files from my phone (which has access to my Google Drive) and my own home computer, neither of which is connected to the network my xubuntu-lan is on. In any case, regardless of which cloud service you wish to use, you will need a cloud client to sync a cloud folder to a local one, and unfortunately these aren't always free. Fortunately, there are some free clients for Google Drive. The one I'm using is called google-drive-ocamlfuse, which lets me mount my Google Drive on my Linux system. To install and use this client, I used the tutorial on GitHub. First use the sudo add-apt-repository ppa:alessandro-strada/ppa command, followed by sudo apt-get update to refresh the package lists. Then run sudo apt-get install google-drive-ocamlfuse to install the client. That is pretty much it for the installation process, but there are more steps to do.

The next step is to make sure you are logged out of the Google account you wish to use! I had issues getting mine to work until I logged off my account and tried again while logged out. Run the command google-drive-ocamlfuse as a normal user (it will give an error if you try to use it as root). This command automatically creates the default application directory along with the config files, and it also starts a web browser session to obtain authorized access to your Google Drive. Select your account, sign in with your password (since you're logged out of it), and select "Allow" when asked about access to the Google Drive. Assuming no errors are run into (typically 500 or other general web-server errors), you will be able to confirm your choices; once that is done, wait a moment or two as gdfuse gets the access token for your Google Drive. Once this completes, congrats! You have a Google Drive linked to google-drive-ocamlfuse. This brings us to the next step: creating a mountpoint for the drive. Create a directory for this mountpoint; I call mine GoogleDrive, but you can name yours however you want. Now run the command google-drive-ocamlfuse [mountpoint], replacing [mountpoint] with your directory name, to mount your Google Drive on xubuntu-lan. This concludes mounting your Google Drive!
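Condensed, the client setup looks like this (run the last three commands as your normal user, not root; the GoogleDrive directory name is just what I chose):

sudo add-apt-repository ppa:alessandro-strada/ppa
sudo apt-get update
sudo apt-get install google-drive-ocamlfuse

google-drive-ocamlfuse              # first run: opens a browser to authorize Drive access
mkdir ~/GoogleDrive                 # the mountpoint
google-drive-ocamlfuse ~/GoogleDrive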

Back to CryFS. We will need to create a new mount directory and base directory for CryFS that are NOT created as the root user (I made that mistake), because root will be denied access to the normal user's mounted GoogleDrive directory. Run the command cryfs GoogleDrive/basedir mountdir to create the new base and mount directories and automatically place the base directory (the one that holds the encrypted files) inside the GoogleDrive directory. Once this is done and the encryption key is made, the encrypted files are now linked to your Google Drive; you can prove this with a quick ls GoogleDrive/basedir/ command. Let's test this out by writing a file to mountdir. Create and write a file in mountdir; what you write or name the file does not matter. Once done, check that the encrypted version was written to the basedir in GoogleDrive. Then unmount (in other words, stop CryFS) and go to another system. Now, CryFS has a download for Windows machines, but it is currently marked as highly experimental, so I won't simply be able to access it from my phone or computer (as my computer is a Windows system) without some risk. But you know what machine we could use? Another, unrelated Ubuntu virtual machine! Unfortunately this does mean repeating the steps to install both CryFS and google-drive-ocamlfuse, but that should be easy.
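Roughly, the cloud-backed round trip looks like the sketch below; the second machine is assumed to be another Ubuntu VM with cryfs and google-drive-ocamlfuse already installed and authorized as described above:

# on xubuntu-lan, as the normal user
cryfs GoogleDrive/basedir mountdir     # encrypted side lives inside the mounted Drive
echo "synced secret" > mountdir/note.txt
ls GoogleDrive/basedir/                # encrypted blobs, now syncing to Google Drive
cryfs-unmount mountdir/

# on the second Ubuntu VM
google-drive-ocamlfuse ~/GoogleDrive
cryfs GoogleDrive/basedir mountdir     # same password decrypts the synced files
cat mountdir/note.txt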

Clean up

It's at this point I realized I had too many VMs hogging resources in my environment, so this is the part where I turn most of them off. I turned off my databases (u1, u2, and u3), my proxy servers (ha1 and ha2), my vyos2 machine, and my web servers (web01 and web02). Then I went into vyos1 and removed its redundancy, removing the virtual IP addresses and changing its LAN and OPT addresses to 10.0.5.1 and 10.0.6.1 respectively. Moving on to xubuntu-lan and xubuntu-wan, I altered the DNS to point only toward vyos1 since redundancy was removed (the default gateways are fine).

I then added a few new machines: mgmt01, which is not going to be used just yet, and rocky01, which replaces web01. Rocky01 was placed in the OPT network and mgmt01 in the LAN network. Configuring rocky01 was no different from my other Linux machines at this point, though I chose to use the nmtui command this time around. Doing so brings up a menu that lets you edit the network connection's IP address, gateway, DNS, and so on. I gave rocky01 an IP of 10.0.6.100/24 (nmtui doesn't have a separate option for prefixes, so the prefix has to be specified in the address field), then deactivated and reactivated the connection. I changed the hostname, and just like on web01, I made a new user to use on rocky01.

Next came mgmt01, which only needs an IP address, gateway, and DNS for now. Mgmt01 is a Windows machine, so setup is a bit different than normal. When you open mgmt01 for the first time it asks a few things, such as your country, language, and keyboard layout. I went with the options it gave me, which were US and English configurations, and then it asks for a password for the built-in Administrator. Enter your preferred password and continue. Mgmt01 will finish customizing and place you on the starting screen. Since this is a machine in a virtual environment, we can't actually press the Ctrl+Alt+Delete keys to reach the login screen, but thankfully there is a button in the top right of the console that sends a Ctrl+Alt+Delete. Press that button and log in as the admin with the newly made password. Once logged in, you'll notice the first thing that pops up is a Server Manager window, because mgmt01 is a Windows Server machine. We aren't going to use it just yet, though; we just need to set up a working connection to vyos1. Go to the Network and Sharing Center in the network settings, click on the connection Ethernet0, open its properties (we can, since we are the admin user), and double-click the IPv4 property. Here you can enter the IP address and DNS server to use. Give mgmt01 the appropriate information and save the changes: the IP should be 10.0.5.10 with a subnet mask of 255.255.255.0, and the default gateway and DNS should both be 10.0.5.1. This should result in internet access; if you are unsure, open Windows PowerShell and ping google.com or one of your other boxes.

At this point all the boxes are set back up and should be working just fine. There is just one last thing: port forwarding to a new web server. Since rocky01 is replacing web01, we should have the port for its web application forwarded through vyos1. The assignment says nginx, so we need to install nginx on rocky01. On rocky01, use the command yum install nginx, making sure you are either the root user or preface the command with sudo. Once it's installed, start nginx with systemctl start nginx.service. This starts nginx, but traffic still won't get through because rocky01 has a firewall, so use the commands firewall-cmd --add-service=http --permanent and firewall-cmd --reload to allow the http service through rocky01's firewall. Now enable nginx, and maybe restart it too just in case. At this point rocky01's web server should be working. Before we test, though, let's go into vyos1 and change the NAT destination rule so that it points to rocky01 on port 80 for http (since nginx also uses port 80 by default). Once this is done, we can test whether xubuntu-wan can reach nginx on rocky01 through vyos1. This concludes our clean up.
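A condensed sketch of the rocky01/vyos1 changes follows; the vyos rule number here is an example, so use whichever number your existing web-forwarding rule has:

# on rocky01
sudo yum install nginx
sudo systemctl start nginx.service
sudo systemctl enable nginx.service
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload

# on vyos1 (in configure mode), repoint the existing destination NAT rule at rocky01
set nat destination rule 10 translation address 10.0.6.100
set nat destination rule 10 destination port 80
commit
save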

Federated Identity Lab

This lab we get to play around with OAuth, a framework that, more or less, allows an HTTP service limited access to applications like my GitHub account. Similar to our Wireguard lab, I had to make a brand new AWS ec2 machine. No problem, since the new machine is more or less the same as the Wireguard lab machine; this new AWS machine just gets a new security group that allows tcp connections on port 5000. Once this new machine is made (you can reuse the keys from your last ec2 machine if you want), we go to our xubuntu-lan machine to do everything locally first. Since we want our GitHub used for this application, we should log into our GitHub account. From there I went to Settings and then into the Developer settings of my GitHub account, where there are options for different kinds of apps; I selected OAuth Apps. Since I hadn't used or registered any OAuth applications, the page was empty, so I clicked the "Register a new application" button. Then I filled in the information asked for: Application name, Homepage URL, and Authorization callback URL. The application name can be anything, though I went with SEC440-oauth. The Homepage URL should point to 127.0.0.1 on port 5000 (we are testing locally on xubuntu-lan first; the application moves to the ec2 machine later), and the callback URL is the same address plus /callback, so https://127.0.0.1:5000 and https://127.0.0.1:5000/callback are the URLs used. Once you create this application, you are brought to a new page that shows the number of users, the Client ID, client secrets, and more for the application. At this point, setting up the OAuth side is pretty much done. At least, it would be if it weren't for code; we need source code for the OAuth application to work in the first place. Thankfully we are on xubuntu-lan and have access to a terminal.

The first step toward the code is to install Flask on our machine, because the source code I'm using (and will be altering) uses the Flask framework. To install Flask, first run sudo apt update to refresh the package lists. Then run sudo apt install -y build-essential python3-pip libffi-dev python3-dev python3-setuptools libssl-dev followed by sudo apt install python3.8-venv; this gives us what we need to build Python packages and create a virtual environment. Next, make a new directory called oauth-lab using mkdir oauth-lab and change into it. Once in the new directory, run python3 -m venv venv followed by source venv/bin/activate, which creates and activates the Python virtual environment for this directory. Next use the command pip3 install wheel. Then make a file called "requirements.txt". This file should have the following:

requests==2.21.0
Flask==1.0.2
oauthlib==3.0.1
requests_oauthlib
pyOpenSSL==19.0.0
Flask-Login==0.4.1

Once the file has this information and is saved, create an empty source directory and run the command pip3 install -r requirements.txt. If everything runs smoothly, it should build and install Flask. Now we can move on to the source code for our OAuth!
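Put together, the environment setup looks roughly like this:

sudo apt update
sudo apt install -y build-essential python3-pip libffi-dev python3-dev python3-setuptools libssl-dev
sudo apt install python3.8-venv

mkdir oauth-lab && cd oauth-lab
python3 -m venv venv
source venv/bin/activate
pip3 install wheel

mkdir source                        # empty directory that will hold app.py and creds.json
pip3 install -r requirements.txt    # requirements.txt contains the list above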

The source code goes into a file called app.py in the source folder inside the oauth-lab directory (the one holding the venv). The code I used looks something like the images below:
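In case the screenshots don't come through, here is a minimal sketch of an app.py along the same lines, assuming the Flask + requests_oauthlib flow this lab uses and the creds.json format shown further down. The route names and the ssl_context="adhoc" option (which relies on pyOpenSSL from requirements.txt, to match the https URLs registered on GitHub) are my assumptions, not necessarily the exact code from the images.

# app.py - minimal sketch; run it from the source directory so creds.json is found
import json

from flask import Flask, request, redirect, session, jsonify
from requests_oauthlib import OAuth2Session

with open("creds.json") as f:
    creds = json.load(f)

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder; use a real random secret

@app.route("/")
def index():
    # Step 1: send the user to GitHub to authorize the app
    github = OAuth2Session(creds["client_id"])
    authorization_url, state = github.authorization_url(creds["authorization_base_url"])
    session["oauth_state"] = state
    return redirect(authorization_url)

@app.route("/callback")
def callback():
    # Step 2: GitHub redirects back here; trade the authorization code for an access token
    github = OAuth2Session(creds["client_id"], state=session["oauth_state"])
    token = github.fetch_token(
        creds["token_url"],
        client_secret=creds["client_secret"],
        authorization_response=request.url,
    )
    session["oauth_token"] = token
    return redirect("/profile")

@app.route("/profile")
def profile():
    # Step 3: use the token to call the GitHub API on the user's behalf
    github = OAuth2Session(creds["client_id"], token=session["oauth_token"])
    return jsonify(github.get("https://api.github.com/user").json())

if __name__ == "__main__":
    # app.run(host="127.0.0.1", port=5000, ssl_context="adhoc")  # local-only testing on xubuntu-lan
    app.run(host="0.0.0.0", port=5000, ssl_context="adhoc")      # listen on all interfaces for ec2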

To get this code to work, another file has to be made to hold the credentials used for the login. The credentials are placed in a file called creds.json, which is also stored in the source directory, and it looks like this:

{
    "client_id": "<your client key>",
    "client_secret": "<your client secret>",
    "authorization_base_url": "https://github.com/login/oauth/authorize",
    "token_url": "https://github.com/login/oauth/access_token"
}

This is just an example; you will want your own client ID and client secret in place of the sample values. Once you have the code and have placed your own client ID and client secret in the creds.json file, it is time to test the application before trying to push it to ec2 on AWS. You can run the application with the command python app.py, which runs the app.py script. If you are in the oauth-lab directory rather than the source directory, the command still works if you use source/app.py, since you are giving a path to the script, but to be safe I suggest running the command from inside the source directory. Also make sure to double-check that you spelled the variables correctly in the code, or you'll get an error (like I did the first time I tried the code). Once the code runs successfully, you should be able to go to the URL (https://127.0.0.1:5000/) and access your own GitHub from your machine even if you aren't logged into GitHub.

Since this works on the local machine, we can send our code to GitHub and pull it on the AWS machine (we could also send the files directly to AWS over Wireguard, but we need the files on GitHub anyway). Before that, we need to make a .gitignore file. Its purpose is to make sure files we don't want on our GitHub don't accidentally get pushed there; such files include our virtual environment (venv), the Python cache (__pycache__/), and of course our credentials (creds.json). A minimal example is shown after this paragraph. Once this file is made and placed in the oauth-lab directory, we can send the files to GitHub. To do this, clone your repository onto the machine using git clone <GitHub repo url> (with your own GitHub repository URL), copy your directory with cp -R oauth-lab/ SEC440-01-Tech-Journal/, and finally add, commit, and push the changes to the repo with the git add oauth-lab/, git commit -m "Added Files for Federated ID Lab", and git push origin master commands. Make sure you have also set your name and email for git commit using git config --global user.name "your name" and git config --global user.email "your email". In the past, GitHub would let you push to the repo with just your account password, but recently that is no longer the case on the command line. So you need to go to the developer settings of your GitHub account and create a personal access token for pushing files to the repository; make sure this token has permissions for the repo, otherwise it will be useless. This generated token is used in place of a password when git push asks you to log in. Once done, you should be able to go to the code section of your GitHub and see the files you uploaded.
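The .gitignore itself can be as small as this, matching the items listed above:

venv/
__pycache__/
creds.json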

Since these files are on GitHub, you can go to the ec2 machine on AWS and clone the repository to get the code there. Once logged into ec2, use git clone to pull the entire repo. The downside is that you won't have the creds.json file (it was ignored on purpose), but the good news is that you can easily recreate it on ec2 with exactly the same values as the version on xubuntu-lan. In fact, I suggest opening another terminal so you can cat creds.json and copy and paste it into a creds.json file on ec2; make sure this creds.json file ends up in the source directory. Once that is created, run a few commands to make sure you have everything needed to run the code: essentially, re-run the Flask installation commands on ec2 and set up the venv so the code can run properly.

Once everything is done, you can run the code. Of course, you'll likely run into an issue first. The app.py file is still set to listen only on localhost, and ec2 is NOT your localhost, so uncomment the line of code that listens on all interfaces and comment out the localhost-only line. That might not fix everything though. The error I hit at this point was a JSONDecodeError, which at first glance doesn't look related to the creds.json or app.py files at all; in my case it was because I forgot the commas in my creds.json file, so please make sure you didn't forget those. After that, you should be able to successfully run app.py on your ec2 machine.

Now comes the tricky part: testing the connection and all related elements. The IP that ec2 reports internally can't be used on xubuntu-lan to open the web URL in Firefox. It turns out it is still possible; you just have to use the instance's public address from the AWS console (the long string) instead of the IP that ec2 says it's using. To see all the little details regarding your access to GitHub from another device through OAuth, go to the web browser options/settings, choose the More tools option, then select Web Developer Tools. This lets you view everything from the method used to reach the web page to the interactions of the web page. Just make sure the tools are set to keep persistent logs (found under the little gear-like symbol in the developer tools).

Kerberos Lab

In this lab, we get to play around with Kerberos. Before that, we must set up several machines. Remember my mgmt01 machine? Well, it needs to be updated again for this lab. Log into mgmt01 and in PowerShell use the command sconfig. This lets you configure several elements of the machine, including updates, the time, and the computer hostname. First, type the number 5 to change the update setting to Manual. Then type the number 9 to change the time and date of the computer; by default the computer is set to Pacific Time (USA and Canada), but I happen to live on the East Coast, so I changed the timezone to Eastern Time (USA and Canada) to be more accurate. Next, you can change the hostname of the computer, though I suggest saving this for last since it requires a restart to apply. If you choose to do it now, type the number 2 and give it the hostname MGMT1. Next, we need to update the IP address: after the VM cleanup I had set mgmt01's IP to 10.0.5.10, but this time it needs to be 10.0.5.5. Once done, restart the computer so it picks up its new hostname, assuming you didn't do that earlier.

Now we need to add Active Directory. This is doable from Server Manager: click the Manage button in the top right of the Server Manager screen and select Add Roles and Features. Following the wizard that pops up, select MGMT1 in Server Selection, then choose Active Directory Domain Services under Server Roles. You may also want to add DHCP as a role on MGMT1 so it can automatically assign IP addresses on the LAN network (not strictly required, but a good idea). Then install the roles you selected (they will pull in the features they need by default). This takes some time, so sit back and relax while the installation progresses. When it is complete, there are a few more things to do, primarily promoting MGMT1 to a domain controller, since installing AD DS on MGMT1 does not do that automatically. Click the flag with the yellow triangle and select the Active Directory part of the post-deployment configuration. Then select the new forest option and create your domain, calling it yourname.local. Next you will be required to make a DSRM password; I suggest picking a good one and writing it down somewhere so you don't forget it. Ignore the DNS delegation warning, since you did not make MGMT1 a DNS server. Continue forward with the default options given and install. If all goes well, you have just promoted MGMT1 to a domain controller. This will cause your system to restart, since anything that changes or updates Active Directory needs a restart to apply. Log back in and finish the DHCP post-install configuration (that role has post-configuration steps that need to be completed too).

Now that that's done, we can add an Active Directory admin user and a normal user; these users will be used from this point forward. Right-click mgmt01 in the servers list and choose Active Directory Users and Computers, or alternatively click the Tools menu and then Active Directory Users and Computers. Select your domain and enter the Users container. Once there, just right-click inside it, choose to create a new user, and follow the steps provided. When the first one is done, make a second user following the same steps. One user will become the domain admin while the other will be your normal user. To change the membership of one of your users, right-click the user you want to be admin and select Properties, then select the Member Of tab and add the Administrators group; you can add Domain Admins as well. Click Apply and the users are all set. Now, if you wanted, you could log out and log back in as one of the two new users, but we still have stuff to do first.

Next up is DHCP. Select the DHCP part of your Server Manager, right-click on mgmt01, and select the DHCP Manager option. From there, expand MGMT1.yourname.local and then IPv4. Right-click and create a new scope. When making a new scope, you give it a name and then the range of IPs that can be assigned to computers on the network; in this case we are making a range of 10.0.5.150 to 10.0.5.200. Keep in mind the subnet mask for the LAN network is /24, or 255.255.255.0. When moving on, skip the excluded IPs, because that is a list of IPs the scope should not hand out (for example, if you really need 10.0.5.175 for another machine). Next, configure the options for the domain name, DNS servers, and the default gateway router. This way, machines that get a lease from DHCP already know where to send traffic for Internet access. Currently the only DNS-like server on the LAN is the default gateway, so replace the IP pre-filled for DNS (which is your MGMT1 server) with the default gateway. When asked about WINS, just put in your MGMT01 server IP, since it is the domain controller (this is strictly to resolve NetBIOS, which may or may not be needed). Activate the scope once it's done, and we can finally move on to another machine.

Our next machine is fs01, which will become our file-sharing server. Log in and configure it just like MGMT1, by using sconfig in PowerShell. You may notice that the FS1 machine is already at a command line; this is because FS1 is Windows Server Core, so it does not come with a GUI like MGMT1. The tricky part may come when you try to join FS1 to the domain: when you give the domain name and are asked for a valid user, you need to include the domain as well as the user (for example, yourname.local\adminuser).

Finally, we get to add the last new machine, called WKS1. This is simply a workstation for the Windows systems. Once you've set its connection to the LAN network, log in and configure it as normal. It should already have an IP address and know where the default gateway and DNS are, since MGMT01's DHCP should be feeding WKS1 that information, but you still have to join the domain, which is easier said than done. Go to the Control Panel, then System and Security, and finally System. There should be an option for renaming the PC and changing the domain. Change the information so the domain is yourname.local, and you should be done.

Now we have one last thing to do: add FS1 to the list of servers in Server Manager and add the file server role to FS1. Adding a server to Server Manager is easy; just select the Manage button in the top right, choose Add Servers, type in the name of the server (FS1), and press OK. It takes some time, but the server will be added. Adding the file server role to FS1 is similar to how roles like DHCP were added to MGMT1; just make sure FS1 is the selected server. I've covered this in my SYS-255 Tech Journal, so check that out to see how to add a file share to FS1 and how to create a new share. More or less, you add File Server and File Server Resource Manager to FS1, and make sure you also add the Remote Server Administration Tools since FS1 is a separate machine (you'll get an error if you try to access the File Server Resource Manager without the remote tools). You may also need to restart FS1 to get it to work. When making the share, call it kerberos-share. Once it's made, go to the shared folder and make a text file in it that simply says something; what's in the file doesn't matter.

Capturing Kerberos activity is relatively easy. Boot up Wireshark on xubuntu-lan (you can install it with sudo apt install wireshark, since xubuntu is an Ubuntu system) and start capturing traffic on your connection to the LAN (ens160). Then move to your WKS1 machine and log in as a normal domain user. After that, simply access the file share and open the text file. Then move back to xubuntu-lan, stop the capture, and save the file. Congrats, you have a capture file with Kerberos activity in it. Finding most of the Kerberos details requires digging into the data packets, but this pretty much concludes the lab. The only new thing beyond this is the klist command, used in the WKS1 command prompt to see the ticket issued for access to the kerberos-share folder.
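If you prefer the command line, a capture like this (an assumed alternative using the tshark package rather than the Wireshark GUI) grabs just the Kerberos traffic, which uses port 88 by default:

sudo apt install tshark
sudo tshark -i ens160 -f "port 88" -w kerberos-capture.pcap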

Windows Admin Center Lab

For this lab, we check out what Windows Admin Center (WAC) actually is. First, we go to FS01 and install the Windows Admin Center. Yes, we are NOT adding this to MGMT01, but rather to our Server Core box. There are a few ways this can be done, but the way I did it was to log into FS01 as the domain admin user and run the following commands:

Powershell
Invoke-WebRequest 'https://aka.ms/WACDownload' -OutFile "$pwd\WAC.msi"
msiexec /i WAC.msi /qn /L*v log.txt SME_PORT=6516 SSL_CERTIFICATE_OPTION=generate

The reason you type in Powershell as the first command on FS01 is that FS01 does not start in PowerShell. The web request command takes some time, as it is downloading the WAC .msi installer for you. Running the .msi also takes some time and may force a restart of FS01. You'll probably want to make sure port 6516 is allowed as well, since that is Windows Admin Center's default port, but you shouldn't need to: installing WAC also opens its port on the local network (i.e. it will only interact with devices on the LAN network). Now open a web browser on the LAN network and go to https://10.0.5.7:6516 to access the Windows Admin Center. Be aware that only certain web browsers actually work with WAC; I mention this because Internet Explorer (the default web browser on MGMT01) does not support WAC, so I went to xubuntu-lan since Firefox does. Make sure you use the domain admin user and password when asked to log in.

When you first log in, you'll notice only fs1 is listed under all connections. This is because xubuntu-lan isn't part of the domain (makes sense, since xubuntu-lan is a Linux system and not a Windows system), so you'll have to add each system to the connections. When adding MGMT1.name.local, select Add and then choose Server; search for MGMT1.name.local and make sure you use the domain user and password for credentials, otherwise it will fail to add the system. WKS1 is similar, but use Windows PC. Once they are added, it's time to figure a few things out, such as how to check and access AD and DNS from WAC. To access such information, we need to install extensions that WAC makes available in its settings. Under Settings, go to Extensions and install Active Directory and DNS. This causes WAC to reload, but once the extensions are added you can access the AD and DNS of MGMT01. Now when you connect to MGMT1 in WAC, you can preview Active Directory using the Active Directory tool and preview the DNS with the DNS tool. In fact, one of the tools you can use is PowerShell, which invokes a remote PowerShell session to MGMT1.

But what about WKS1? If you try connecting to it, it gives an error about WinRM not being able to complete the operation (connecting to WKS1). Let's try pinging WKS1 from xubuntu-lan's terminal to see if xubuntu-lan can even see WKS1; the answer seems to be no. We need to enable WinRM on WKS1, so let's go to WKS1 and open PowerShell as an admin. Run the command winrm qc; this quickly enables WinRM on WKS1 and even turns on the firewall rules it needs to work. Then run Enable-PSRemoting to allow PowerShell remoting to WKS1. Going back to WAC on xubuntu-lan, try connecting to WKS1. You might run into an issue where your credentials aren't working despite working for MGMT01; you get around this by including your domain name before the username so that it reads domainName\adminUserName (apparently WAC is particular about this distinction). Now you can attempt to Remote Desktop into WKS1, but you'll encounter another error, this time because the firewall on WKS1 won't simply allow Remote Desktop. However, you don't need to go directly to WKS1 to change this: since you are connected to WKS1 in WAC, go to the Settings tab, choose Remote Desktop, and allow remote connections to WKS1 from WAC. Now try again and you should successfully Remote Desktop into WKS1.
This concludes the lab.

SELinux Lab

This is the final lab for SEC440, and in it we will be using our rocky machine. First we need to get everything prepped: install httpd on rocky. Since we already have nginx listening on port 80, the two will conflict. To fix this, open the file /etc/httpd/conf/httpd.conf and edit the line that reads "Listen 80" so it reads "Listen 8080". This resolves future port conflicts. Next we need to remove the httpd test page. This can be done a few ways, but the easiest is to delete index.html from the /usr/share/httpd/noindex/ directory. Once it is deleted with the rm index.html command, create a new index/test page inside the /var/www/html/ directory; I just put in a sentence stating that it was the test page for httpd. Once done, start and enable httpd. We aren't done yet, though: on rocky you need to allow tcp port 8080 through the firewall, and on vyos1 you need to create a rule that lets port 8080 be forwarded. When completed, test whether you can access the test page from a web browser on your network (I suggest using xubuntu-wan).
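As a sketch of those last two steps (the vyos rule number and interface name here are examples; adjust them to your own config):

# on rocky01: open tcp/8080 in firewalld
sudo firewall-cmd --add-port=8080/tcp --permanent
sudo firewall-cmd --reload

# on vyos1 (in configure mode): forward port 8080 from the WAN to rocky01
set nat destination rule 30 inbound-interface eth0
set nat destination rule 30 protocol tcp
set nat destination rule 30 destination port 8080
set nat destination rule 30 translation address 10.0.6.100
set nat destination rule 30 translation port 8080
commit
save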

Now we get into the "fun" stuff. Install the SELinux packages with the command sudo yum install policycoreutils policycoreutils-python-utils selinux-policy selinux-policy-targeted libselinux-utils setroubleshoot-server setools setools-console mcstrans. This may take some time. When the install is complete, add four users to the system: regularuser, switcheduser, guestuser, and restricteduser. Use the adduser command to create each user and then passwd user to give each one a password (replacing user with the name of the actual user). Next, run the command ls -l /etc/s*.conf so we can compare its output with the output of the next command, ls -lZ /etc/s*.conf. The output differs because the -Z flag shows the SELinux security context of each file. To see process contexts, use ps -efZ, though I suggest piping it into grep to limit the output, otherwise you end up with a lot; for example, ps -efZ | grep apache. This is honestly where it gets interesting once you have checked out a few files and processes.

Now copy the /etc/redhat-release file into the /var/www/html/ directory and test whether your web browser can access the new page. Once you have confirmed it can, check the file context of both /etc/redhat-release and the new /var/www/html/redhat-release. You'll notice some differences between the two, including the SELinux user and the SELinux type. But what if we move a file instead of copying it? Make a file as your sudo user (you may want to exit root if you used sudo -i at some point), and then use the command sudo mv filename.txt /var/www/html/ to move it. Now attempt to access that file in the web browser: you will get a forbidden (permission denied) error. Check the context of the moved file and you'll notice its SELinux type still points to the user's home directory instead of httpd_sys_content. Moving a file, unlike copying it, does not change the file's context in SELinux's eyes. In fact, we can use the command cat /etc/httpd/logs/error_log | grep filename.txt to see the particular error that happened.

But again, things aren't all they appear. httpd logs an error whenever we try to access filename.txt, yet the regular permissions on the file let anyone read it. So what's going on? Checking /var/log/audit/audit.log for filename.txt helps answer that: the file's context is wrong for the location it is now kept in, which prevents even read access from being used. As far as SELinux is concerned, the file doesn't have the context of web content, so it won't let it be opened at all. We can fix this with the restorecon command. Simply put, running restorecon on the moved file resets it to the default context for the directory it now sits in (since it was moved there rather than copied); in this case the type changes from the user home directory type to httpd system content. Now if you try accessing the file from a web browser, it should work.
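A condensed version of the copy-versus-move experiment, using the same placeholder file name as above:

sudo cp /etc/redhat-release /var/www/html/
ls -lZ /etc/redhat-release /var/www/html/redhat-release   # the copy picked up the web-content type

echo "moved file" > filename.txt
sudo mv filename.txt /var/www/html/
ls -lZ /var/www/html/filename.txt                         # still labeled with the home-directory type
sudo grep filename.txt /var/log/audit/audit.log           # the SELinux denial shows up here
sudo restorecon /var/www/html/filename.txt                # relabel to the default for /var/www/html/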

Now we want to try something new. In particular, we want to enable user directories for Apache (httpd) so that users like regularuser have their own index.html file. I found this guide to be helpful. Essentially, what we want to do is go into the httpd conf.d directory and edit the userdir.conf file to enable user directories. Change the UserDir line to read UserDir enabled regularuser and then uncomment the line "UserDir public_html". Restart httpd and then create a public_html directory in regularuser's home directory. Then we need to edit the permissions on regularuser's home directory and public_html directory using the following commands:

chmod 711 /home/regularuser/
chown regularuser:regularuser /home/regularuser/public_html/
chmod 755 /home/regularuser/public_html/

This adds the correct permissions, but now we have the elephant in the room: SELinux also needs the context for these directories updated. We can do this with two commands:

setsebool -P httpd_enable_homedirs true
chcon -R -t httpd_sys_content_t /home/regularuser/public_html/

Now we can test this by making an index.html or test.html file in regularuser's public_html directory. This can simply be a line of text, or you could be fancy and actually write an HTML-style file. Now check the permissions and SELinux values: you can switch into regularuser with su regularuser to run this test. Go into regularuser's home directory and run the command ls -lRZ, then run getenforce to see if SELinux is enforcing its rules. Once this is confirmed, return to the normal root user by typing the exit command.

Now we move on to another fun part of SELinux. Use the command semanage port -l | grep ssh to see the list of ports SELinux allows for ssh. The value should be 22, as tcp port 22 is the default ssh port. We will now change this. Open the sshd_config file in the /etc/ssh/ directory, find the line "Port 22", uncomment it, and change 22 to 2222, then save the change. This makes sshd listen on port 2222, which will also require updating your firewall rules, including the firewall on vyos (as it still only allows tcp port 22). But this isn't all: if you restart sshd now, you will encounter an error, because SELinux still has ssh tied to port 22, which means we need to change SELinux once again. The command semanage port -a -t ssh_port_t -p tcp 2222 adds port 2222 as a viable port for ssh to use, which you can confirm with semanage port -l | grep ssh. Test this change by going to xubuntu-lan and sshing to rocky1 on port 2222; the command looks like ssh user@10.0.6.100 -p 2222, with user replaced by your rocky01 user.

Now do this again, this time sshing in as the regular user on rocky. We do this because, in this state, someone who has sshed in from a completely different machine can use the su command to switch from regularuser into another user, which is not very secure! You can test this with an su - switcheduser command (exit back to regularuser afterward if you did). Run the command id -Z to see regularuser's current SELinux context: it is unconfined, and that will not do. Return to the rocky1 console, where we are already root, and use the command semanage login -a -s user_u regularuser to change the constraints on regularuser. Now, back on xubuntu-lan, ssh in as regularuser again and try the su command once more. If SELinux is doing its job right, you will encounter an error. You can find this error by using the cat command on the audit.log file as root, though make sure you grep for switcheduser or failed, otherwise you will see a bunch of other errors and log info.
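The text above mentions updating the firewall for the new port without spelling it out; a plausible set of commands for the rocky01 side (the firewall-cmd lines are my assumption, the semanage line is from the lab) would be:

sudo semanage port -a -t ssh_port_t -p tcp 2222   # let SELinux allow sshd on 2222
sudo firewall-cmd --add-port=2222/tcp --permanent # open the new port in firewalld
sudo firewall-cmd --reload
sudo systemctl restart sshd                       # now the restart succeeds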

We will now use the command semanage login -a -s guest_u guestuser to give guestuser the constraints of a guest. SSH as guestuser into rocky from xubuntu-lan and create a simple, friendly script: just an echo command that outputs a simple sentence. Give the file execute permission and run it. Guestuser can run the script, but we don't want guestuser to have execution privileges. So, back as the root user, we need to change things so guests can't run scripts. Use the command getsebool -a | grep guest to see the current SELinux boolean values for the guest policy. We want guest_exec_content set to off, permanently, which we do with the command setsebool -P guest_exec_content off. This sets it so that guests can't execute their own scripts. Go back to xubuntu-lan, where you should still be logged in as guestuser, and try running the friendly script again. You will get an error stating you were denied permission to run the command. Success at preventing another possible security issue! This error can be found in the audit.log as well.
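For example (the script name and contents here are just placeholders):

# as guestuser on rocky, over ssh
printf '#!/bin/bash\necho "hello from guestuser"\n' > friendly.sh
chmod +x friendly.sh
./friendly.sh                       # works before the boolean is flipped

# as root on rocky
getsebool -a | grep guest
setsebool -P guest_exec_content off # after this, ./friendly.sh is denied for guest_u users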

The last part of this lab is to prevent sudo for a member of the sudoers (wheel) DAC group. To do this, we add restricteduser to the wheel group and stop our httpd service. To stop httpd, use the command service httpd stop or systemctl stop httpd.service. Then use the command usermod -aG wheel restricteduser to add restricteduser to the wheel group. Now switch into restricteduser and start httpd back up as that user; since restricteduser is in wheel and still unconfined, sudo works here. Once httpd is started, swap back to your normal root user and use the semanage login command to map restricteduser to user_u in SELinux. Then log off, log back on as restricteduser, and try restarting httpd. You will get an error. Congrats, you have successfully made a member of the wheel group unable to use sudo.
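Condensed, the sequence looks something like this (the semanage line mirrors the one used for regularuser earlier):

# as root
systemctl stop httpd.service
usermod -aG wheel restricteduser

# as restricteduser (still unconfined, so sudo from the wheel group works)
sudo systemctl start httpd.service

# as root again: confine restricteduser
semanage login -a -s user_u restricteduser

# after restricteduser logs back in, sudo systemctl restart httpd.service is denied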