# Challenge 5: Building and Shipping Containers
There are many different ways of deploying code. TAA has requested an investigation into whether containers can make their solution more portable and easier to manage.
This challenge introduces containers, which enable easily reproducible environments and are a natural fit for deploying microservices. GitHub Actions has extensive support for building and releasing containers.
## Exercise 1

The first step in containerising your application is to build your code into a deployable image that is then used to instantiate a container.
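As a sketch of where this exercise ends up, a minimal single-stage Dockerfile for the portal might look like the following. The project name (`AnimalAdoption.Web.Portal`), the .NET base image tag, and the setting values are assumptions; your project and framework version may differ.

```dockerfile
# Minimal sketch; project name, image tag, and setting values are assumptions.
FROM mcr.microsoft.com/dotnet/sdk:6.0
WORKDIR /app
COPY . .

# App settings baked into the image via ENV, before the build step
ENV GlobalPassword=devpassword \
    SimulatedFailureChance=0

RUN dotnet publish AnimalAdoption.Web.Portal -c Release -o out
EXPOSE 80
ENTRYPOINT ["dotnet", "out/AnimalAdoption.Web.Portal.dll"]
```

You can then test it locally with `docker build -t taa-portal .` followed by `docker run -p 8080:80 taa-portal` and browse to `http://localhost:8080`.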
- Update the web portal project to run inside a Docker container.
- Update your Dockerfile to set environment variables for the app settings used by the web portal, using the `ENV` keyword before the build step.
- Test the TAA web portal container on your development machine.

### Hints

- Windows and Mac users will need to install Docker Desktop; Linux users must install the Docker Engine.
- Refer to the helpful resources for example Dockerfiles.
- Pass settings using environment variables.
- If you are having trouble building a Dockerfile, try adding Docker support using Visual Studio, as it can generate the Dockerfile automatically.
- Make sure that you target the Linux operating system when building your Dockerfile.

### Helpful resources

- Adding Docker support
- Dockerfile reference
- Introduction to .NET and Docker
- Dockerize an ASP.NET Core Application
- View environment variables

## Exercise 2

Now that the containers are successfully running locally, update the GitHub Actions workflow to build container images of the TAA web portal whenever you make code changes. For this step, instead of deploying a release artifact, push these images into an Azure Container Registry (ACR).
- Create an Azure Container Registry inside your production Resource Group; the Standard SKU is sufficient.
- Enable the Admin account for your ACR and add the credentials as secrets to your GitHub repository.
- Configure a new job in your workflow that authenticates with your ACR, then runs the Docker build and push functionality using one of the two approaches outlined in the hints section.
  - It should run after the initial build job, because we don't want to publish an image if the code fails to build or the unit tests fail.
  - You may want to create environment variables for values that are used several times, such as the container login server address.
  - The "repository" can be any value, such as "taa-portal"; it will be created automatically inside your ACR.
  - The "image" can also be any value; images are distinguished from one another by tags (similar to versions).
- Ensure that the workflow runs.
- Check that the images were successfully published to your ACR, under the Repositories section.
- Leave the artifact steps for the TAA web portal in place for now.

### Hints

- Create a single Azure Container Registry resource; this can make it easier to ensure the image is the same across environments.
- When you install/configure Docker, ensure that it is running in Linux container mode.
- Try out the docker build and push commands on your local machine in the root of the repository to ensure that they work before adding them to your workflow.
- Since the Docker image targets a Linux environment, it needs to be built on a Linux runner. This means it cannot be part of the same job that publishes your NuGet package, because that has a dependency on a Windows runner. You can resolve this by having a separate job before or after the existing build job that deals specifically with Docker images.
- You will need to use the checkout action as a step in order to get a copy of the code.
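One possible shape for such a job, using the ACR login action followed by the Docker CLI, is sketched below. The secret names, the `taa-portal` repository name, the `needs` target, and the action versions are all assumptions; adapt them to your workflow.

```yaml
# Sketch only; secret names, job names, and versions are assumptions.
build-docker-image:
  runs-on: ubuntu-latest   # Linux runner required for a Linux image
  needs: build             # only publish an image if build/tests succeeded
  steps:
    - uses: actions/checkout@v4
    - name: Log in to ACR
      uses: azure/docker-login@v1
      with:
        login-server: ${{ secrets.ACR_LOGIN_SERVER }}
        username: ${{ secrets.ACR_USERNAME }}
        password: ${{ secrets.ACR_PASSWORD }}
    - name: Build and push image
      run: |
        docker build \
          -t ${{ secrets.ACR_LOGIN_SERVER }}/taa-portal:${{ github.sha }} \
          -t ${{ secrets.ACR_LOGIN_SERVER }}/taa-portal:latest .
        docker push --all-tags ${{ secrets.ACR_LOGIN_SERVER }}/taa-portal
```

Tagging with both `github.sha` and `latest` gives each image a unique identifier while keeping a generic tag for deployments, as recommended below.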
There are multiple options available to publish the TAA web portal image to ACR:

- Using the Azure Container Registry Login action and the Docker CLI: authenticate first, then use a run step to call the Docker CLI.
- Using the Build and push Docker images action: authenticate and build + push in a single step.

Ensure you tag your containers when you release them to ACR. GitHub Actions exposes variables (such as the commit SHA) when building that may make good candidates. It is recommended to tag your image with a unique identifier and also a generic tag such as `latest`.

### Helpful resources

- Azure Container Registry
- Authenticate with an Azure container registry
- Azure Container Registry Login action
- Docker build command reference
- Docker push command reference
- An introduction to Docker tags
- Checkout action
- Build and push Docker images action
- CI/CD for Containers Architecture

## Exercise 3

Now the container image is stored in a container registry and is ready to be deployed. Update the release to test environment job to deploy the container image into an Azure Web App for Containers instance in your test environment. Test your new deployment, then extend the workflow to deploy to the production environment.
- Create a new Web App resource in the test Resource Group. It should:
  - Publish a Docker container
  - Run on Linux
  - Be located in the same region as the existing resources
  - Reuse the existing App Service Plan
  - Use the image with the `latest` tag from your Azure Container Registry as the Image Source
  - Have an empty Startup Command
  - You may have to enable the admin user to allow your Web App to communicate with your Azure Container Registry.
- Once the Web App has been created:
  - Go to Overview > App Service Plan > Link, then Settings > Apps, and check that you can see your new containerized Web App resource along with the existing standard Web App resources as part of the App Service Plan.
  - Go to Settings > Container settings and view the logs to ensure that the container has started successfully. These may take a few minutes to load.
  - Check that the site works by visiting the URL in the Overview section.
- Update the deploy to test environment job with a step that deploys your image to the Web App for Containers instance instead of a Web App instance.
- Remove any leftover TAA web portal artifact steps from the build and deploy to test jobs in your workflow.
- Update the ARM template to deploy a Web App for Containers resource, rather than a standard Web App. Try using the export template functionality from the Azure portal to see what needs to change in your ARM template.
- Run your workflow with the updated deploy to test job. Once the deployment has completed, check that the site is still running successfully.
- Repeat the above steps to deploy the container to a Web App in the production Resource Group. Ensure that you have removed all of the TAA web portal artifact steps.

### Hints

- You will have to update your environment variable values as well to reflect the new resource names.
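The deploy step might be sketched as below, swapping the artifact `package` input for the container `images` input of the Azure WebApp action. The app name and secret name are assumptions; use your own resource names.

```yaml
# Sketch only; app-name and secret names are assumptions.
- name: Deploy container image to test Web App
  uses: azure/webapps-deploy@v2
  with:
    app-name: taa-portal-test
    images: ${{ secrets.ACR_LOGIN_SERVER }}/taa-portal:latest
```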
The environment variables for GlobalPassword and SimulatedFailureChance are baked into the image that is published to your Azure Container Registry, which is why they don't need to be set in the Azure Portal or in the release jobs of the workflow. You can override these baked-in values by using the `app-settings-json` option on the deploy action.

Don't forget to recreate the staging slot against your Web App for Containers resource. You should also update the value for ANIMAL_ADOPTION_FUNCTIONAL_TEST_PATH to point to your new test web app.

### Helpful resources

- Azure WebApp action
- Microsoft.Web sites template reference
- Deploy to Azure using Docker
- ARM templates for Web App Containers
- Reference a custom Docker image in an ARM template

### Extra for experts

Deployment slots also work with Web Apps for Containers. In order to continue utilizing the slot and rollback functionality you will have to:

- Create a staging slot in the production Web App.
- Add an Availability Test for the new Web App resource using the existing Application Insights instance.
- Create an Alert Rule that uses the above Availability Test.
- Update the check for Azure Monitor alerts step to point to your new Alert Rule.
- Refer to the instructions in the earlier exercise for how to test the Alert Rule and ensure that the slots are switched (you can ignore the app settings parts).

## Exercise 4

Setting up an entire environment can be challenging, especially when using a microservices architecture. Docker Compose enables developers to design multi-container applications while ensuring that the application can be tested easily.
Although the current TAA code base only has a single service, there are plans to add more in the future. Setting up Docker Compose now will mean that less work has to be done to add new services in the future as the development and CI/CD processes will already be updated.
Make a Docker Compose file that starts all the services and runs them locally.
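A sketch of the resulting Compose file is shown below. The Dockerfile path, ports, and setting values are assumptions based on the steps in this exercise.

```yaml
# docker-compose.yml sketch; paths, ports, and values are assumptions.
version: "3.8"
services:
  webapp:
    build:
      context: .
      dockerfile: AnimalAdoption.Web.Portal/Dockerfile
    ports:
      - "80:80"
    environment:
      # Overrides for the values baked into the Dockerfile via ENV
      GlobalPassword: localdevpassword
      SimulatedFailureChance: 0
```

Running `docker compose up` from the repository root should then make the portal available at http://localhost:80.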
- Add an empty Docker Compose file in the root of your repository.
- Update the Docker Compose file to target version 3.8 and build a single service called webapp.
- Add configuration for HTTP and HTTPS ports.
- Add overrides for the environment variables specified in your Dockerfile.
- Run your app with Docker Compose. You should now be able to visit http://localhost:80 to access the TAA web portal.
- Test that the environment overrides and access via HTTP are working as expected. HTTPS requires self-signed certificates and a complex setup, so we won't add HTTPS support in the development environment.

### Hints

- Make sure you set the context and dockerfile properties in the build section.
- Pass settings using environment variables from the host to reduce the number of container changes.

### Helpful resources

- Docker Compose
- Docker Compose service configuration reference

## Exercise 5

There is another benefit to defining how containers work together using an orchestrator such as Docker Compose. Some containers exist only to support the primary application, and it makes sense to treat them as a single unit within the same lifecycle.
These containers commonly proxy traffic, log requests or provide authentication to a specific service. It is possible to expose only the services you need to the public and manage the lifecycle of those containers together while still keeping your microservices decoupled and isolated from each other.
Bundle the frontend web app into a Docker Compose file and release it to a multi-container Web App using GitHub Actions. Only expose the frontend website to public internet traffic.
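As a sketch of the file split this exercise builds towards, the base file can hold shared configuration while the prod file swaps `build` for `image` and uses variable placeholders. The registry address, service name, and placeholder names are assumptions.

```yaml
# Sketch only; registry, service name, and placeholders are assumptions.

# docker-compose.base.yml -- configuration shared by all environments
version: "3.8"
services:
  webapp:
    ports:
      - "80:80"

# docker-compose.prod.yml -- production overrides: image instead of build
version: "3.8"
services:
  webapp:
    image: myregistry.azurecr.io/taa-portal:latest
    environment:
      # Placeholders populated from the environment the container runs in
      GlobalPassword: ${GlobalPassword}
      SimulatedFailureChance: ${SimulatedFailureChance}
```

The two files are then run together, e.g. `docker compose -f docker-compose.base.yml -f docker-compose.prod.yml up`, with later `-f` files overriding earlier ones.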
- Rename the existing Docker Compose file to have the .dev.yml suffix.
- Create a new Docker Compose file called docker-compose.prod.yml; it should use the image property instead of the build one and reference the latest image from your ACR.
- Test that both Docker Compose files work.
- Create a third Docker Compose file called docker-compose.base.yml and move the common config into it. Update docker-compose.dev.yml and docker-compose.prod.yml to only configure/override the necessary properties for each environment.
- Test that each environment-specific file works in conjunction with the base Docker Compose file.
- Create another file called docker-compose.prod-full.yml which contains everything the production environment needs (i.e. a merge of the base and prod files). This is required because we need to generate a base64 value from the complete Docker Compose config.
- In your ARM template, update the linuxFxVersion to the correct value for your configuration type of multi-container application. See the --multicontainer-config-type flag documentation for allowed values.
- Update your workflow to:
  - Pass docker-compose.prod-full.yml through to the deploy to test and production jobs.
  - Utilize the configuration-file property of the webapps-deploy action for both the test and production steps.
- Delete your Web App for Containers resource in the test Resource Group, so that it will be recreated by your updated ARM template.
- Trigger your workflow and ensure that it runs successfully.
- Inspect the workflow logs for the deploy to test and deploy to production steps and ensure there are lines similar to `Multi-container deployment with the transformation of Docker-Compose file.` and `"linuxFxVersion":"COMPOSE|"` present.
- For the test and production multi-container web apps, in the Azure Portal:
  - Load the app via the URL in the Overview tab and check that it works.
  - Go to Settings > Container settings and ensure that your Docker Compose configuration is present.
- Under Settings > Export template, check that the linuxFxVersion is in the form of COMPOSE|.
- Delete the Web App for Containers resource in the test Resource Group again.
- Replace the hardcoded values for environment variables in your docker-compose.prod-full.yml file with variable placeholders that will be populated from the environment that the container runs in.
- Run your workflow again and ensure that everything works as expected, including the variable override.

### Hints

- Version 3 of the Docker Compose syntax does not support the extends keyword, so you will have to keep the definition structure the same (including service names) and run both the base and environment-specific compose files at the same time.
- Try setting the host ports for the dev and prod files to different values during debugging to ensure that no caching is occurring on the container ports. Reset the ports to the same value before committing the changes.
- The names of the variable placeholders should match those of your variables in the Web App's App Settings section. Once deployed, you can try changing the value of the App Settings variable then testing again in the TAA portal to see if the changes are reflected. Don't forget to hit Save on the App Settings page!

### Helpful resources

- Docker Compose
- Share Compose configurations between files and projects
- Parameters for az webapp create command
- Variable substitution
- Substitute environment variables in Compose files
- CI/CD for Containers Architecture
- The Sidecar Pattern
- Create a multi-container app

### Spoilers

Read through the steps above again and see if they make more sense reading a second time around. Otherwise, talk to your coach before looking at these; all the information you require is provided in the instructions and links above.
- Running two docker compose files at once
- Providing an environment file as well as a compose file
- Example base and environment compose files

## Exercise 6

Using the knowledge and skills you have gained over the previous challenges and exercises, you will build a workflow for an identical portal that has been split up into microservices.
- Clone or fork the original code that lives here.
- Use Azure DevOps to plan how you and your team will complete the work.
- Check out the code locally.
- Containerize the various parts.
- Create the required Azure resources (you can re-use the existing Resource Groups).
- Build a workflow inside your new repository that will build, test, package, and deploy to a multi-container Web App instance in Azure.
- Set up alerts and monitoring for your microservice components.
- Create an ARM template for your environment so that it can be easily re-deployed in the future.
- Show your coach.

### Hints

- This is a test of the skills you and your team have learned; feel free to refer to instructions from previous exercises and search the internet.
- Since there is no single, correct way to complete this exercise, your coach will only be able to guide you rather than provide a specific answer for a particular issue. There is no exemplar for this challenge!