Assignments - HenrikWM/NNUG_GAB2018 GitHub Wiki
You are a developer tasked with creating an image processing pipeline using Azure services. The customer wants you to create an ASP.NET web application for the users to upload images. The web app should show processing status immediately once the image is received by the pipeline, and get progress feedback as each step completes processing. It is OK that the user has to wait until images are processed and shown, but the user should see the finished processed image as soon as possible.
The customer wants the overall solution to be modular by design, so removing and adding modules should be quick and easy, without breaking or impacting other modules.
You choose Azure Functions and Azure Storage blob containers for the pipeline, as the functions are modular, trigger-based and can be chained together by having the output from one function be an input trigger for other functions. You find that polling for the expected output from each processing step is a good approach, and that this "eventual consistency" design decision meets the customer's requirements.
You start off with two services, `GAB.ImageProcessingPipeline.Web` and `GAB.ImageProcessingPipeline.Functions`, plus a shared utility library.

`GAB.ImageProcessingPipeline.Web` is an ASP.NET Web Application that handles the user interaction and uploads an image to the pipeline. You have created an `UploadController` that uploads the image to an `IngressFunction`. This function's job is to save the image as a blob in the blob container `imagepipeline-originals`, so that it triggers other functions.
In the `Index` view for the `Index` action of your controller you have created JavaScript that triggers an Ajax upload of the image to the `UploadFileAsync` action of the `UploadController`. As per the modular design requirement, you have created a separate Partial View for each processing module, along with a separate module JavaScript file.

To serve the JavaScript with configuration from the back-end, you have decided to JavaScript-serialize the C# class `JsConfig` and create a config object `imagePipelineApp` at page load in `_Layout.cshtml`.
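For reference, the serialized config might look roughly like this on the client (a sketch: `loaderGifUrl` is used by the module scripts later on this page, while the container-URL property name and storage account below are hypothetical and depend on how `JsConfig` is serialized):

```javascript
// Sketch of the config object _Layout.cshtml might emit at page load.
// In the app it is exposed as window.imagePipelineApp; the property names
// below are illustrative, not the exact serialized names.
var imagePipelineApp = {
    loaderGifUrl: "/Content/ImageProcessingPipeline/ajax-loader.gif",
    largeColorMatrixContainerUrl:
        "https://examplestorage.blob.core.windows.net/imagepipeline-colormatrix-lg/"
};

// A module can then build the URL of an expected output blob from the config:
var imageUrl = imagePipelineApp.largeColorMatrixContainerUrl + "grayscale-photo.jpg";
```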
Client-side logging is available on the Upload page in your browser's dev console. Each module polls for the expected resource until it's available, and logs the progress to the console.

An Application Insights service is available in Visual Studio when debugging, and when deployed in Azure. Look for the resource `nnuggab2018<your initials>appinsights` in the Azure portal.
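The polling pattern each module follows can be sketched like this (a hypothetical helper for illustration; the real implementation lives in the shared pipeline scripts):

```javascript
// Poll until a resource is available, or give up after maxAttempts.
// tryFetch(cb) must call cb(true) when the resource exists, cb(false) otherwise.
function pollForResource(tryFetch, intervalMs, maxAttempts, onSuccess, onError) {
    var attempt = 0;
    function tick() {
        attempt++;
        tryFetch(function (available) {
            if (available) {
                onSuccess(attempt);
            } else if (attempt >= maxAttempts) {
                onError(attempt);
            } else if (intervalMs > 0) {
                setTimeout(tick, intervalMs); // wait, then try again
            } else {
                tick(); // interval 0: retry immediately (handy for testing)
            }
        });
    }
    tick();
}
```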
`GAB.ImageProcessingPipeline.Functions` is an Azure Functions project with the functions used in the pipeline.
`IngressFunction` receives the HTTP POST from the ASP.NET Web Application with the user's image, and saves it as a blob to the blob container `imagepipeline-originals`.

`ExifFunction` extracts image EXIF data. It receives a `BlobTrigger` when the image blob appears in `imagepipeline-originals`, and saves the extracted EXIF data to a JSON file in the `imagepipeline-exif` blob container.
`ScalerFunction` scales the image into 3 different dimensions. It is also triggered by a blob appearing in `imagepipeline-originals`, and outputs an image blob for each dimension into the blob containers `imagepipeline-scaled-lg`, `imagepipeline-scaled-md` and `imagepipeline-scaled-sm`.
Tracing is output in the Functions console window when debugging locally. When deployed to Azure you can enable logging by finding the functions service `nnuggab2018<your initials>functions` and:
- Click the tab Platform features.
- Enable logging by clicking on Diagnostic logs and setting up logging.
- Click on the Log streaming link to watch the traces from all of the functions.
`GAB.ImageProcessingPipeline.Common` is a utility library with helper classes for configuration, function processing, Azure Storage etc. It also has a set of helper classes for image manipulation, such as image resizing, applying color matrices and more.
Overview of how a pipeline might look using Azure Functions, App Services and blob containers:
Tip: Make sure you are logged-in with your Azure-account in Visual Studio before you begin.
Clone the repository and check out the `start-here` branch. The `master` branch contains a final solution reference (overview) if you need inspiration or get stuck with code. It contains example modules and functions for image scaling, EXIF extraction, image matrices, Cognitive Services metadata extraction and more.
Take-away: You will have published the ASP.NET MVC web application and Azure Functions to Azure using Web Deploy from Visual Studio.
Goal: You can access the web app at https://nnuggab2018???web.azurewebsites.net (replacing '???' with your initials), and uploading images returns the image's EXIF-data along with a large, medium and small version of the processed image.
Follow the guide for creating the required Azure resources using PowerShell and ARM templates.
Follow the guide for configuring the app settings for local and Azure environments.
Follow the guide for deploying to Azure.
When complete, try accessing your deployed web app and test it by uploading an image.
Take-away: You will have added an image processing module to transform image color into greyscale using color matrices.
Goal: After uploading an image, the function will produce a grayscale image for the large, medium and small scaled images created by the `ScalerFunction`.
- In the `GAB.ImageProcessingPipeline.Web` project, add a new Partial View `ColorMatrix.cshtml` in the `Views\Shared` folder:
<section class="colormatrix-section">
<div class="panel panel-default">
<div class="panel-heading">
<h3 class="panel-title">Color matrix</h3>
</div>
<div class="panel-body">
<h4>Grey</h4>
<ul class="list-group container-flex">
<li class="list-group-item">
<div>
<b class="list-group-item-heading">Large</b>
<br /><br />
<p class="list-group-item-text"><img id="image-colormatrix-grey-lg" src="../../../Content/ImageProcessingPipeline/ajax-loader.gif" alt="Grey large image" /></p>
</div>
</li>
<li class="list-group-item">
<div>
<b class="list-group-item-heading">Medium</b>
<br /><br />
<p class="list-group-item-text"><img id="image-colormatrix-grey-md" src="../../../Content/ImageProcessingPipeline/ajax-loader.gif" alt="Grey medium image" /></p>
</div>
</li>
<li class="list-group-item">
<div>
<b class="list-group-item-heading">Small</b>
<br /><br />
<p class="list-group-item-text"><img id="image-colormatrix-grey-sm" src="../../../Content/ImageProcessingPipeline/ajax-loader.gif" alt="Grey small image" /></p>
</div>
</li>
</ul>
</div>
</div>
</section>
- Add a new file `colorMatrix-module.js` in the `Scripts\ImageProcessingPipeline\modules` folder:
var colorMatrixModule = (function () {
var sectionClassName = "colormatrix-section";
var dataContainerId1 = "image-colormatrix-grey-lg";
var dataContainerId2 = "image-colormatrix-grey-md";
var dataContainerId3 = "image-colormatrix-grey-sm";
var onPollError = function (id, url) {
console.log("ColorMatrix-module: error");
baseModule.onPollError(sectionClassName, id, url);
};
var onPollSuccess = function (id, url) {
console.log("ColorMatrix-module: success");
baseModule.onPollSuccess(sectionClassName);
setImage(id, url);
};
var startPollingForResource = function (fileName) {
console.log("ColorMatrix-module: start polling for " + fileName);
baseModule.startPollingForResource(sectionClassName);
// TODO Assignment 2: replace '???' with the container url from config
tryLoadHttpResource(
dataContainerId1,
??? + "grayscale-" + fileName,
onPollSuccess,
onPollError);
// TODO Assignment 2: replace '???' with the container url from config
tryLoadHttpResource(
dataContainerId2,
??? + "grayscale-" + fileName,
onPollSuccess,
onPollError);
// TODO Assignment 2: replace '???' with the container url from config
tryLoadHttpResource(
dataContainerId3,
??? + "grayscale-" + fileName,
onPollSuccess,
onPollError);
};
var reset = function () {
console.log("ColorMatrix-module: reset");
baseModule.reset(sectionClassName);
setImage(dataContainerId1, window.imagePipelineApp.loaderGifUrl);
setImage(dataContainerId2, window.imagePipelineApp.loaderGifUrl);
setImage(dataContainerId3, window.imagePipelineApp.loaderGifUrl);
};
return {
startPollingForResource: startPollingForResource,
reset: reset
};
})();
- In `Views\Upload\Index.cshtml` add the following above `@section scripts`:
<br />
@Html.Partial("modules/ColorMatrix")
Add `reset()` and `startPollingForResource()` module calls to the `resetUpload()` and `processingInProgress()` JavaScript methods in the same file.
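The wiring might look roughly like this (an illustrative sketch; your existing `resetUpload()` and `processingInProgress()` will already contain calls for the other modules, and the stub below only stands in for the real module object):

```javascript
// Stub standing in for the module object from colorMatrix-module.js,
// so this sketch is self-contained:
var colorMatrixModule = {
    resetCalls: 0,
    polledFor: null,
    reset: function () { this.resetCalls++; },
    startPollingForResource: function (fileName) { this.polledFor = fileName; }
};

// Page-level handlers in Index.cshtml delegate to each module:
function resetUpload() {
    colorMatrixModule.reset();
    // ...reset the other modules here...
}

function processingInProgress(fileName) {
    colorMatrixModule.startPollingForResource(fileName);
    // ...start polling in the other modules here...
}
```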
- In `BlobContainers.cs` add constants for the blob containers for each of the output images from the colormatrix function:
public const string ColorMatrixLarge = "imagepipeline-colormatrix-lg";
public const string ColorMatrixMedium = "imagepipeline-colormatrix-md";
public const string ColorMatrixSmall = "imagepipeline-colormatrix-sm";
Add each new constant to the `All` string array at the top of the class.
- In `JsConfig.cs` add a new property for each constant:
public string LargeColorMatrixContainerUrl => StorageBaseUrl + BlobContainers.ColorMatrixLarge + "/";
public string MediumColorMatrixContainerUrl => StorageBaseUrl + BlobContainers.ColorMatrixMedium + "/";
public string SmallColorMatrixContainerUrl => StorageBaseUrl + BlobContainers.ColorMatrixSmall + "/";
- In the `GAB.ImageProcessingPipeline.Functions` project, add 3 new functions for processing the large, medium and small images: `ColorMatrixLargeFunction.cs`, `ColorMatrixMediumFunction.cs` and `ColorMatrixSmallFunction.cs`.
Each function should be triggered by the corresponding size's blob container. In `ColorMatrixLargeFunction`, add a `BlobTrigger` that listens to `BlobContainers.ScaledLarge`. The blob output should go into `BlobContainers.ColorMatrixLarge`. Repeat for the other two functions. Use the following template (replace the '???'):
// TODO Assignment 2: replace '???'
public static class ???
{
[FunctionName("???")]
public static async Task Run(
[BlobTrigger(??? + "/{name}", Connection = Constants.BlobStorageConnectionName)] Stream imageStream,
[Blob(??? + "/" + ColorMatrixProcessor.OutputFileName, FileAccess.ReadWrite)] CloudBlockBlob imageBlob,
string name,
TraceWriter log)
{
AppSettings.TrackWriterLogAll(log);
await ColorMatrixProcessor.Process(imageStream, imageBlob, name, log);
}
}
- Add a new C# class `ColorMatrixProcessor` in `GAB.ImageProcessingPipeline.Common\Functions`:
public static class ColorMatrixProcessor
{
public const string OutputFileName = "grayscale-{name}";
public static async Task Process(Stream imageStream, CloudBlockBlob imageBlob, string name, TraceWriter log)
{
log.Info($"[ColorMatrix] - Triggered for image name: {name}, size: {imageStream.Length} bytes");
try
{
// TODO Assignment 2: get a Bitmap from imageStream
var sourceImage = ???
var imageFormat = sourceImage.GetImageFormat();
var mimeType = sourceImage.GetMimeType();
// TODO Assignment 2: use a BitmapExtension-method from the common-library to draw image as grayscale
var generatedImage = ???
await ImageBlobUploader.UploadBlob(generatedImage, imageFormat, mimeType, imageBlob);
log.Info($"[ColorMatrix] - Completed for: {name}. File location: {imageBlob.Uri.AbsoluteUri}");
}
catch (Exception exception)
{
log.Error($"[ColorMatrix] - Failed: {exception.Message}", exception);
}
}
}
Test your new module. When you upload an image you should see 3 greyscaled images appear in the new module's section.
Deploy to Azure and test the new functionality when you're done with local testing.
Want an extra challenge? See "Challenge 2: Color matrices - more than just grayscaling" below
Take-away: You have learnt how to use Azure Cognitive Services and the Computer Vision API to extract tags and categories from an image, and display them as a new pipeline module.
Goal: User can see metadata (tags and categories) about the uploaded image.
- In the `GAB.ImageProcessingPipeline.Web` project, add a new Partial View `CognitiveServices.cshtml` in the `Views\Shared` folder:
<section class="cognitiveServices-section">
<div class="panel panel-default">
<div class="panel-heading">
<h3 class="panel-title">Computer Vision API</h3>
</div>
<div class="panel-body">
<div>
<img class="image-loading" src="../../../Content/ImageProcessingPipeline/ajax-loader.gif" alt="ComputerVision-data" />
<ul id="CognitiveServicesData"></ul>
</div>
</div>
</div>
</section>
- Add a new file `cognitiveServices-module.js` in the `Scripts\ImageProcessingPipeline\modules` folder:
var cognitiveServicesModule = (function () {
var sectionClassName = "cognitiveServices-section";
var dataContainerId = "CognitiveServicesData";
var onPollError = function (id, url) {
console.log("CognitiveServices-module: error");
baseModule.onPollError(sectionClassName, id, url);
};
var getFilteredData = function (data, filter) {
var items = data.filter(filter);
if (items.length === 0)
return null;
return items.map(e => e.Name).join(", ");
}
var onPollSuccess = function (id, url, data) {
console.log("CognitiveServices-module: success");
baseModule.onPollSuccess(sectionClassName);
if ($.isEmptyObject(data)) {
$("#" + id).append("<li><p><b>No data from Cognitive Services.</b></p></li>");
return;
}
var tags = getFilteredData(data.Tags, function (element) { return element.Confidence > 0.8; });
if (tags !== null) {
$("#" + id).append("<li><p><b>Tags</b>: " + tags + "</p></li>");
}
var categories = getFilteredData(data.Categories, function (element) { return element.Score > 0.6; });
if (categories !== null) {
$("#" + id).append("<li><p><b>Categories</b>: " + categories + "</p></li>");
}
};
var startPollingForResource = function (fileName) {
console.log("CognitiveServices-module: start polling for " + fileName);
baseModule.startPollingForResource(sectionClassName);
// TODO Assignment 3: replace '???' with the container url from config
tryLoadHttpResource(
dataContainerId,
??? + fileName + ".json",
onPollSuccess,
onPollError);
};
var reset = function () {
console.log("CognitiveServices-module: reset");
baseModule.reset(sectionClassName);
$("#" + dataContainerId).empty();
};
return {
startPollingForResource: startPollingForResource,
reset: reset
};
})();
- In `Views\Upload\Index.cshtml` add the following above `@section scripts`:
<br />
@Html.Partial("modules/CognitiveServices")
Add `reset()` and `startPollingForResource()` module calls to the `resetUpload()` and `processingInProgress()` JavaScript methods in the same file.
- In `BlobContainers.cs` add a constant for the blob container for the output from the cognitive services function:
public const string CognitiveServices = "imagepipeline-cognitiveservices";
Add the new constant to the `All` string array at the top of the class.
- In `JsConfig.cs` add the constant as a new property:
public string CognitiveServicesContainerUrl => StorageBaseUrl + BlobContainers.CognitiveServices + "/";
- In the `GAB.ImageProcessingPipeline.Functions` project, add a new function for extracting metadata: `CognitiveServicesFunction.cs`.

The function should be triggered by the original image blob container. In `CognitiveServicesFunction`, add a `BlobTrigger` that listens to `BlobContainers.Originals`. The blob output should go into `BlobContainers.CognitiveServices` as a JSON file.
Use the following template:
// TODO Assignment 3: replace '???'
public static class ???
{
[FunctionName("???")]
public static async Task Run(
[BlobTrigger(??? + "/{name}", Connection = Constants.BlobStorageConnectionName)] Stream image,
[Blob(???, FileAccess.ReadWrite)] CloudBlockBlob jsonBlob,
string name,
TraceWriter log)
{
AppSettings.TrackWriterLogAll(log);
await CognitiveServicesProcessor.Process(image, jsonBlob, name, log);
}
}
- Add a new C# class `CognitiveServicesProcessor` in `GAB.ImageProcessingPipeline.Common\Functions`:
public static class CognitiveServicesProcessor
{
public const string OutputBlobPath = BlobContainers.CognitiveServices + "/" + "{name}.json";
public static async Task Process(Stream image, CloudBlockBlob jsonBlob, string name, TraceWriter log)
{
log.Info($"[CognitiveServices] - Triggered for image name: {name}, size: {image.Length} bytes");
try
{
image.Seek(0, SeekOrigin.Begin); // rewind the stream before sending it to the API
var data = await GetCognitiveServicesData(image);
var json = JsonConvert.SerializeObject(data);
await JsonBlobUploader.UploadBlob(jsonBlob, json);
log.Info($"[CognitiveServices] - Analysis completed for {name}. File location: {jsonBlob.Uri.AbsoluteUri}");
}
catch (Exception exception)
{
log.Error($"[CognitiveServices] - Failed: {exception.Message}", exception);
}
}
private static async Task<AnalysisResult> GetCognitiveServicesData(Stream image)
{
// Remark: Use ApiRoot that is correct for the region in which
// you created your SubscriptionKey!
// Source: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/vision-api-how-to-topics/howtosubscribe
var subscriptionKey = AppSettings.CognitiveServicesSubscriptionKey;
var apiRoot = AppSettings.CognitiveServicesApiRoot;
// TODO Assignment 3: specify the visual features you want (tags and categories)
var features = ???
// TODO Assignment 3: create a VisionServiceClient
var visionClient = ???
// TODO Assignment 3: use the client to analyze the image and extract the features
return await ???
}
}
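For reference, the client module's filters above expect the serialized data in roughly this shape (field names are taken from the filter code in `cognitiveServices-module.js`; the exact serialized shape depends on the `AnalysisResult` type and your `JsonConvert` settings):

```javascript
// Example payload shaped like the data the module filters:
// tags with Confidence > 0.8 and categories with Score > 0.6 are shown.
var data = {
    Tags: [
        { Name: "outdoor", Confidence: 0.99 },
        { Name: "sky", Confidence: 0.95 },
        { Name: "fog", Confidence: 0.41 }  // filtered out: confidence too low
    ],
    Categories: [
        { Name: "outdoor_mountain", Score: 0.93 }
    ]
};

// The same filtering the module applies before rendering the list:
var tags = data.Tags
    .filter(function (t) { return t.Confidence > 0.8; })
    .map(function (t) { return t.Name; })
    .join(", ");   // "outdoor, sky"
```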
Test your new module. When you upload an image you should receive tags and categories from the Computer Vision API in the new module's section.
Deploy to Azure and test the new functionality when you're done with local testing.
Want an extra challenge? See "Challenge 3: Azure Cognitive Services & Computer Vision API - Descriptions and recognizing celebrities and landmarks" below
Take-away: You have learnt about other approaches for a client to get a resource once it's ready to be served to the user.
Goal: User sees processed images quicker as you have designed a more efficient solution that does not rely on polling for the processed image.
Polling is a way of checking whether an expected resource is available at a given interval; once the resource is available, the polling stops. One might poll at fixed intervals, or use intervals with exponential back-off to avoid hitting the server with fixed waves of requests. Instead of the server telling each client when the resource is ready, the client asks at regular intervals. This scales very well, at the cost of eventually getting the resource instead of getting it immediately.
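Exponential back-off can be sketched like this (parameter values are illustrative):

```javascript
// Double the wait after each miss, capped at maxMs, so clients spread
// their retries out instead of hammering the server in fixed waves.
function nextDelay(attempt, baseMs, maxMs) {
    return Math.min(baseMs * Math.pow(2, attempt), maxMs);
}

// With baseMs = 500 and maxMs = 8000 the schedule is:
// attempt 0 -> 500ms, 1 -> 1000ms, 2 -> 2000ms, 3 -> 4000ms, then capped at 8000ms.
```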
The customer wants the user to see the processed image as soon as possible. What other alternatives to polling are there? Can the web app get a notification from somewhere immediately when the processing of the image is complete, and then render the image immediately?
Take-away: You have learnt how to use color-matrices to add other effects to images.
Goal: User can see the uploaded image as sepia, with transparency, as a negative or other new variations of your choice.
Give Snapchat some competition by designing new fancy image filters with `ColorMatrix`. Get started by looking for inspiration in `BitmapExtensions.cs`.
Challenge 3: Azure Cognitive Services & Computer Vision API - Descriptions and recognizing celebrities and landmarks
Take-away: You have learnt more about the Computer Vision API and used it to get additional metadata about an image, such as image descriptions and Domain Specific Content.
Goal: User can see a description or the name of a famous person in the image.
Use Azure Cognitive Services and the Computer Vision API to get an image description, or why not the name of a celebrity or a landmark by using Domain Specific Content. Can you create a service that recognizes the Eiffel tower or Buckingham Palace?
Take-away: You have learnt about other types of Function triggers and have implemented some in order to make the architecture more time- and/or resource-efficient.

Goal: Given you have a working solution with `BlobTrigger` as the trigger type, now try to improve it by switching from `BlobTrigger` (and perhaps also blob containers?) to other types of Function triggers, such as Event Hubs, webhooks, Queue storage, Service Bus and others. The end-user should experience the improvements as quicker time-to-results in the user interface. How much quicker can you get the results?

Remove the use of `BlobTrigger` and try another type of trigger. Can you get away with just using simple webhooks, or perhaps even queue messages?