DS‐ Docker Lab - 229300/SYS265-System-Admin.-Network-Services-II---Spring-2026 GitHub Wiki

AI link help: https://www.perplexity.ai/search/this-is-my-ubuntu-vm-how-do-i-VtkTdTcBS12boqCzuhQu2Q

Notes:
  • 53531/tcp
  • scan for all possible ports
  • -d is necessary


To begin, enter bash:


Create a new user with my own password

  • sudo useradd -m daniel && sudo passwd daniel

Add daniel to the sudo group (the admin group on Debian/Ubuntu systems)

  • sudo usermod -aG sudo daniel

Change my root account password from champlain password

  • sudo passwd root

Change the system hostname to whatever is appropriate

  • sudo hostnamectl set-hostname docker01-daniel
  • exit

Configure Network settings with Netplan

  • sudo nano /etc/netplan/00-installer-config.yaml
  • Looks like:
  • sudo netplan apply
  • ip addr to ensure changes:
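Since the screenshot is not reproduced here, the following is a minimal sketch of what a static-IP netplan file can look like (the interface name, addresses, and domain are placeholders, not the lab's actual values):

```yaml
# Hypothetical /etc/netplan/00-installer-config.yaml;
# the interface name (ens33) and all addresses are placeholders.
network:
  version: 2
  ethernets:
    ens33:
      addresses:
        - 10.0.5.12/24
      routes:
        - to: default
          via: 10.0.5.2
      nameservers:
        search: [daniel.local]
        addresses: [10.0.5.5]
```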

Error from original sudo netplan apply

  • Looks like:
  • sudo nano /etc/hosts
  • Looks like:
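A typical /etc/hosts mapping for this kind of setup looks roughly like the following (the IP address is a placeholder):

```
# Hypothetical /etc/hosts entries; 10.0.5.12 is a placeholder address
127.0.0.1   localhost
10.0.5.12   docker01-daniel.daniel.local docker01-daniel
```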

Update cloud.cfg to save the new hostname

  • Looks like:
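The edit in question is the preserve_hostname key in /etc/cloud/cloud.cfg, which tells cloud-init to stop overwriting the hostname on reboot:

```yaml
# /etc/cloud/cloud.cfg -- change this line from false to true
preserve_hostname: true
```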

Alternatively, to make the hostname persistent you can use the command:

  • sudo hostnamectl set-hostname my.persistent.hostname

But I updated the cloud.cfg

Disable remote root SSH

  • sudo nano /etc/ssh/sshd_config
  • Uncomment PermitRootLogin and set it to no
  • Looks like:
  • sudo systemctl restart ssh
  • sudo systemctl enable ssh
  • sudo systemctl status ssh
  • Looks like:
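The relevant sshd_config line after the edit:

```
# /etc/ssh/sshd_config -- uncommented and set to refuse root logins
PermitRootLogin no
```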

Create DNS records for docker01-daniel

  • Go to mgmt01-daniel --> Server Manager --> DNS Manager --> Forward Lookup Zone
  • New 'A' Host:
  • Reload the Forward and Reverse Lookup zones and confirm the record shows as static

Join docker01-daniel to the domain

  • Back on docker01-daniel VM
  • sudo apt-get update
  • sudo apt install realmd
  • realm discover daniel.local
  • Looks like:
  • sudo apt install sssd sssd-tools libnss-sss libpam-sss adcli samba-common-bin oddjob oddjob-mkhomedir packagekit
  • sudo realm join daniel.local
  • realm list
  • Looks like:

Deliverable 1. Screenshot showing a PuTTY or PowerShell SSH session from mgmt01 (use the hostname, not the IP address). Elevate to root using sudo -i and, within the session, ping champlain.edu.

  • Back on mgmt01-daniel --> PowerShell --> ssh daniel@docker01-daniel
  • Looks like:

Install Docker

Link Steps (1-3): https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-20-04
  • Back on docker01-daniel
First, update your existing list of packages:
  • sudo apt update
Next, install a few prerequisite packages which let apt use packages over HTTPS:
  • sudo apt install ca-certificates curl gnupg
Then add the GPG key for the official Docker repository to your system using the modern keyring method:
  • sudo install -m 0755 -d /etc/apt/keyrings
  • curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  • sudo chmod a+r /etc/apt/keyrings/docker.gpg
Add the Docker repository to APT sources. This command automatically detects your Ubuntu version:
  • echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Update the package database with the Docker packages from the newly added repo:
  • sudo apt update
Verify that you are about to install from the Docker repository instead of the default Ubuntu repository:
  • apt-cache policy docker-ce
You’ll see output indicating that Docker is available from the Docker repository. The version number will vary, but you should see the Docker repository URL in the output.
Finally, install Docker:
  • sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:
  • sudo systemctl status docker

Deliverable 2. Confirm the Docker Service is running and provide a screenshot similar to the one below:

  • On docker01-daniel it Looks like:
  • On mgmt01-daniel (using ssh) it Looks like:

Deliverable 3. Confirm that your sudo user can access and print out version information

  • sudo docker version
  • Looks like:

Deliverable 4. After running the docker hello world application as your named user & providing a screenshot similar to the one below, explain what has happened?

  • sudo docker run hello-world
  • Looks like:

Installing Docker Compose

Link to steps: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-20-04

The following command downloads the release and saves the executable to /usr/local/bin/docker-compose, making it globally accessible (1.29.2 is the version used in the linked tutorial; check the Compose releases page for the current one):
  • sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Set executable permissions for the docker-compose command:
  • sudo chmod +x /usr/local/bin/docker-compose
Verify the installation:
  • docker-compose --version
  • Outputs:

Deliverable 5. Provide a screenshot similar to the one below that shows the docker-compose version.

  • Looks like:

Hello SYS265

The following command pulls down an Arch Linux based docker image, invokes it in a container, and runs /bin/echo "HELLO SYS265 SNOWY DAYS" before deleting the container.

  • docker run --rm archlinux:latest /bin/echo "HELLO SYS265 SNOWY DAYS"

Deliverable 6. Provide a screenshot similar to the one below showing your "Hello Message"

  • Looks like:
  • Listing docker images:

Docker Arch Linux Container - do the following commands:

Print out the current version of Ubuntu on docker01

  • cat /etc/lsb-release

Print out the current version of docker01's linux kernel

  • echo "Current Kernel is: $(uname -a)"

Invoke a container of the stored Arch Linux image and print out the kernel being used inside it.

  • sudo docker run -it archlinux /bin/uname -a

Deliverable 7. Provide a screenshot similar to the one below and an answer to the question: Based upon the version of kernels you see displayed within and outside of the container, what do you think is going on?

  • All of this looks like:
What's going on is that containers share the host's kernel. Unlike virtual machines, Docker containers do not have their own separate kernel; instead, they run as isolated processes on the host system and use the host's kernel directly. In this case, I'm running an Arch Linux container (archlinux image) on an Ubuntu 24.04 host system. Even though the container provides an Arch Linux userspace (different package manager, libraries, utilities, etc.), it's still using the Ubuntu host's kernel (6.8.0-35-generic #35-Ubuntu).

Docker Web Application

The following command will pull down the image, application and dependencies associated with a simple python web application.

  • sudo docker run -d -P docker/getting-started
  • Looks like:

Deliverable 8. Research the docker run command. What does the -d and -P mean?

  • The docker run command runs a command in a new container which will pull and start the container. -d stands for --detach and it runs the container in the background (detached mode) instead of attaching my terminal to it. -P stands for “publish all exposed ports” and it automatically maps all exposed ports from the container to random available ports on the host machine.
  • Research:

Docker Networking

  • sudo docker ps
  • We will call this PortX and it Looks like:

Docker has configured packet forwarding on your base OS. Traffic destined for host PortX/tcp will be sent to the containerized application listening on 80/tcp. You will need to allow the port (32768/tcp in this case) that shows up in docker ps through your firewall (this lab uses ufw rather than firewalld).
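Under the hood, this forwarding is implemented with netfilter NAT rules. The shape of the rule Docker adds is roughly the following (illustrative only, in iptables-save syntax; the host port and container IP are placeholders):

```
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 32768 -j DNAT --to-destination 172.17.0.2:80
```

You can inspect the real rules on the host with sudo iptables -t nat -L DOCKER -n.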

Allow Port 32768

So back on docker01 VM execute the following commands:

  • sudo ufw allow 32768/tcp

Check UFW Status

  • sudo ufw status

Enable UFW (if not already enabled)

  • sudo ufw enable

Check if Port is Allowed

  • sudo ufw status | grep 32768
  • Looks like:
Differences between firewalld and ufw: firewalld (the default on RHEL-family distributions) manages rules dynamically through zones and services, while ufw (Uncomplicated Firewall, the default on Ubuntu) offers a simpler allow/deny syntax. Both are front ends to the kernel's netfilter framework.

Deliverable 9. Screenshot showing a browsing session between mgmt01 and docker01 on the port shown in docker ps (you may have another port)

  • Back on mgmt01-daniel
  • Looks like:

Stop the test app and SSH back to docker01 via mgmt01:

  • sudo docker ps
  • sudo docker stop interesting_hofstadter
  • sudo docker ps
  • Looks like:

Dockerized Wordpress

In this example, we will use a Docker Compose file (docker-compose.yml) to identify the attributes of a WordPress installation, including the operating system, software, and database dependencies. We will use docker-compose (as opposed to docker run) to bring up the containers.

Docker Compose vs Docker
A Dockerfile is a text document with a series of commands used to build a docker image. Docker Compose is a tool for defining and running multi-container applications.
docker run is entirely command-line based, while docker-compose reads its configuration from a YAML file; docker run starts only one container at a time, while docker compose will configure and run multiple.
Parse the instructions on Quickstart: Compose and WordPress to create and configure a new WordPress image. Tip: there are plenty of related sites to achieve this.
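To make the Dockerfile side concrete, a minimal hypothetical Dockerfile reusing the archlinux base from earlier in this lab could look like:

```dockerfile
# Hypothetical minimal Dockerfile; builds on the archlinux image
# pulled earlier and bakes the hello command into the image.
FROM archlinux:latest
CMD ["/bin/echo", "HELLO SYS265 SNOWY DAYS"]
```

Such an image would be built with docker build -t hello-sys265 . and run with docker run --rm hello-sys265.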

Make a project directory back on mgmt01 using ssh:

  • mkdir WordPressProject
  • ls
  • cd WordPressProject
  • Looks like:
  • sudo nano docker-compose.yml
  • Won't work without copy/paste yet so don't save the file
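The contents of docker-compose.yml are not reproduced here; the following is a sketch following the official Quickstart: Compose and WordPress pattern (every password below is a placeholder that should be changed):

```yaml
# Hypothetical docker-compose.yml based on the WordPress quickstart;
# all passwords are placeholders.
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: changeme_root
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: changeme
  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: changeme
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
```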

Before creating and editing the file I need to enable copy/paste:

  • Edit the terminal properties --> Right click:
  • Check this box:

Now:

  • Ensure the contents were saved: cat docker-compose.yml
  • Looks like:
  • sudo docker-compose up -d
  • Output:
  • sudo docker ps
  • Output:

Successful Connection to WordPress Via browser!

  • Looks like:
  • Installing Wordpress:

Deliverable 10. Provide a screenshot showing a completed Wordpress installation that contains reference to the course and your name. You should be accessing it by hostname and not IP address.

Post uploaded to Wordpress: