
Web Infrastructure - Homework 06

Back to Web Engineering '24 Winter

Setup

To keep your AWS server up-to-date with new features from our in-class activities, follow our week-by-week, comprehensive AWS Server Setup.

Lap 1: Cloud

For Lap 1 this week in Web Infrastructure, we have the following goal:

  • allow users to authenticate to our REST API with JSON Web Tokens (JWTs) or passwords, and learn best practices for both (see the sketch below)
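
As a preview of what Section 5 of the course covers, here is a minimal sketch of the two building blocks behind this goal: hashing passwords before storing them, and issuing a signed JWT. The package names (jsonwebtoken, bcrypt) and the JWT_SECRET environment variable follow the course's conventions and are assumptions; your project may organize this differently.

import jwt from "jsonwebtoken";
import bcrypt from "bcrypt";

// Hash a password before storing it; never store plain text.
export const hashPassword = (password: string): Promise<string> =>
  bcrypt.hash(password, 10);

// Compare a login attempt against the stored hash.
export const comparePasswords = (password: string, hash: string): Promise<boolean> =>
  bcrypt.compare(password, hash);

// Issue a signed JWT identifying the authenticated user.
export const createJWT = (user: { id: string; username: string }): string =>
  jwt.sign({ id: user.id, username: user.username }, process.env.JWT_SECRET as string);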

Readings and Watchings

Read and/or watch these two sections of the API design with NodeJS and Express course.

Log into your AWS server and make changes to the API server from Infra HW 03 and HW 02, where you started your Prisma database schema and began writing REST handlers for the GET and POST methods of the Product / Update example.
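
If you need a reminder of the shape of those handlers, here is a minimal sketch, assuming an Express Router and a Prisma Product model with a name field similar to the one you started in HW 02/03; the route paths and field names are illustrative, not required.

import { Router } from "express";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();
const router = Router();

// GET /product - list all products from the database
router.get("/product", async (req, res) => {
  const products = await prisma.product.findMany();
  res.json({ data: products });
});

// POST /product - create a product from the JSON request body
router.post("/product", async (req, res) => {
  const product = await prisma.product.create({
    data: { name: req.body.name },
  });
  res.json({ data: product });
});

export default router;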

Section 5: Authentication
Section 6: Route & Error Handlers

The entire course is linked here for reference, to give context to those sections, let you review previous sections, or look ahead to upcoming ones. https://frontendmasters.com/courses/api-design-nodejs-v4/

Some short notes accompanying the above. https://hendrixer.github.io/API-design-v4/

Optional Tutorial

If you would still like more practice running through a Prisma + REST API tutorial from scratch with detailed step-by-step commands, you can follow this tutorial from Digital Ocean:

https://www.digitalocean.com/community/tutorials/how-to-build-a-rest-api-with-prisma-and-postgresql

Code and Submit on GitHub

You should use this assignment to begin work on your final project. It is okay if you are still looking for teammates or only have a rough idea of your project. As you go, you will develop your idea further, attract teammates that are right for you, or grow your confidence as a solo programmer.

You can apply the code that you type along with the videos and in class, and adapt it for your project in the following directory.

You will need to use the cd and mkdir commands to change to the correct directory and create the new directory as necessary, as shown below.

<repo_dir>/web-24wi/projects/<your_project_name>
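
For example, using the same placeholders as above (where <repo_dir> is wherever you cloned the class repository):

mkdir -p <repo_dir>/web-24wi/projects/<your_project_name>
cd <repo_dir>/web-24wi/projects/<your_project_name>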

Whether you are working solo or with teammates, follow our Git Workflow.

If you get stuck outside of class times or office hours, reach out on the class Discord.

Code Interview

For your code interview, you will be asked to demonstrate hitting two or three of the routes with curl commands from the FrontEndMasters videos above, on your AWS server host or that of your teammate.
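
For example, requests along these lines; the port and route names here follow the course and are only illustrative, so substitute your own routes and a real token:

curl -X POST http://localhost:3001/user \
  -H "Content-Type: application/json" \
  -d '{"username":"alice","password":"hunter2"}'

curl http://localhost:3001/api/product \
  -H "Authorization: Bearer <your_jwt>"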

Your code does not have to work perfectly for you to request a code interview; you just need to be willing to describe what is happening and to keep making progress on it.

Lap 2: Rust

Reading

Read and do the quizzes in each chapter.

Rust Book: Chapter 09 - Error Handling

Rust Book: Chapter 10 - Generic Types, Traits, and Lifetimes

Code and Submit to GitHub

We will revisit our search-crawler from Lap 2 of Infra Homework 03.

To review: our search-crawler accepts three arguments and fetches pages recursively in a breadth-first traversal starting from the <start_url> argument, stopping at depth <max_depth> and returning any pages that match the <search_term> (by printing the matching line and its URL).

./target/debug/<search-crawler-bin> <start_url> <search_term> <max_depth>
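
If you are reading the arguments by hand, a minimal sketch using std::env::args follows; the names match the usage line above, and your crawler may already handle this differently.

use std::env;

fn main() {
    // Skip the binary name, then read the three positional arguments.
    let args: Vec<String> = env::args().skip(1).collect();
    if args.len() != 3 {
        eprintln!("usage: <search-crawler-bin> <start_url> <search_term> <max_depth>");
        std::process::exit(1);
    }
    let start_url = &args[0];
    let search_term = &args[1];
    let max_depth: u32 = args[2].parse().expect("max_depth must be a number");
    println!("crawling {start_url} for {search_term:?} up to depth {max_depth}");
}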

Previously, you may have skipped over broken links or any HTTP fetches that did not return status code 200, and you only crawled <a> anchor tags with HTTP URLs.

Now, add the following features to your code:

  1. For any HTTP fetches that result in error codes (anything other than 200), create enum types and save the failed HTTP fetches in a separate list from the successful search hits. (Discard the 200 fetches that are not search hits.) You might consider pattern matching; see the sketch after this list.
  2. In addition to crawling <a> anchor tags and HTML files, we would also like to crawl SVG files that are linked in <img> image tags. Use generics in your crawling function to handle both tag and file types; both HTML and SVG files should still be searched for the search term.
  3. In the comments of your code, include a discussion of how you might incorporate trait bounds, trait implementation functions, and the various lifetimes of trait members for crawling an HTML page that has multiple SVG elements to crawl, as well as outgoing links to other HTML pages.
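
Here is a minimal sketch of the direction features 1 and 2 point in, assuming your fetch step exposes an HTTP status code and that you parse pages into your own document types; all names here are illustrative, not required.

// Feature 1: classify non-200 fetches into an enum and keep them in a
// separate list from the successful search hits.
#[derive(Debug)]
enum FetchError {
    NotFound(String),         // 404, with the failing URL
    ServerError(String, u16), // 5xx, with the URL and status code
    Other(String, u16),       // any other non-200 status
}

fn classify(url: &str, status: u16) -> Result<(), FetchError> {
    // Pattern matching on the status code, as suggested in feature 1.
    match status {
        200 => Ok(()),
        404 => Err(FetchError::NotFound(url.to_string())),
        500..=599 => Err(FetchError::ServerError(url.to_string(), status)),
        other => Err(FetchError::Other(url.to_string(), other)),
    }
}

// Feature 2: a trait that both HTML and SVG documents can implement, plus a
// generic function that searches either one through a trait bound.
trait Crawlable {
    fn links(&self) -> Vec<String>;
    fn matches(&self, term: &str) -> bool;
}

fn search_page<T: Crawlable>(page: &T, term: &str) -> (bool, Vec<String>) {
    (page.matches(term), page.links())
}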

You can test your code on this example of a Wikipedia page with an SVG. It contains this SVG diagram, and you can use the search term albedo, which should match an SVG text item.
