Incremental Deliverable 1 Test Plan

Safe Zone Project Test Plan v1.03

Prepared by: Spencer Pilon (Test Lead)


1.0 Introduction

  • 1.1 Product
  • 1.2 Purpose

2.0 Scope

3.0 Testing Strategy

  • 3.1 Unit Testing
  • 3.2 Integration Testing
  • 3.3 System Testing
  • 3.4 User Acceptance Testing
  • 3.5 Performance Testing
  • 3.6 Automated Regression Testing

4.0 Hardware Requirements

5.0 Environment Requirements

6.0 Test Schedule

7.0 Testing Matrices

8.0 Planned Tests/Testability Investments


1.0 Introduction

1.1 Product

The Safe Zone System supports at-risk youth. The main system consists of an iOS app, an Android app, and a webpage; there is also an administrator page. Key features include:

  • “Message in a bottle” system that lets users create messages and schedule them to be sent automatically unless cancelled
  • Safe plan creator
  • Risk assessment tool
  • Links/resources/materials that the youth may find beneficial

1.2 Purpose

The purpose of this document is to:

  1. ensure that the product meets user requirements
  2. ensure that the product meets a high level of quality

2.0 Scope

  • Acceptance Tests
  • Regression Tests
  • Smoke Tests
  • System Tests
  • Integration Tests
  • Component Tests
  • Unit Tests

3.0 Testing Strategy

Testing should adhere to Test-Driven Development strategies.

Each phase of testing will correspond to a phase of design:

  • Requirements are validated by Acceptance Tests
  • Architecture design and use cases are validated by System Tests
  • High- and medium-level design will be validated by Integration Tests
  • Low-level design will be validated by Unit Tests

Pair programming will be used:

  • Each member of the dev team will have a partner from the testing team. The testing team member will be responsible for making sure their dev team partner's code is tested as outlined below.

3.1 Unit Testing

  • Definition: Testing of a single unit of software. A unit is the smallest piece of code that can be encapsulated and used on its own (e.g. a function or method). React component tests are also included in this category.
  • Participants: Developers
  • Methodology: Tests will be written using the technical requirements as a guide. Developers will write a unit test and confirm that it fails before writing the code under test (a sketch follows below).
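
A minimal sketch of what this workflow could look like, using Jest (see section 5.0). The calculateRiskScore function, its module path, and the scoring rules are hypothetical and only illustrate the red-then-green cycle:

```js
// riskScore.test.js - hypothetical unit test, written before the implementation exists
// so it fails first (the TDD "red" step) and passes once calculateRiskScore is written.
const { calculateRiskScore } = require('../src/riskScore'); // hypothetical module

describe('calculateRiskScore', () => {
  it('returns 0 when no risk factors are selected', () => {
    expect(calculateRiskScore([])).toBe(0);
  });

  it('sums the weights of the selected risk factors', () => {
    const factors = [{ weight: 2 }, { weight: 3 }];
    expect(calculateRiskScore(factors)).toBe(5);
  });
});
```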

3.2 Integration Testing

  • Definition: Testing of two or more modules together to ensure they still function as expected. Any testing of components used together within a single microservice is considered integration testing.
  • Participants: Testing Team
  • Methodology: Tests will be written based on the design documentation at the microservice level (a sketch follows below).
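
As an illustration, an integration test inside a single microservice could exercise an HTTP route together with its handler and data layer. The sketch below assumes the microservice is an Express app exported from its module and uses supertest; the /resources route and module paths are hypothetical:

```js
// resources.integration.test.js - hypothetical integration test for one microservice,
// exercising the route, its handler, and the (test) data layer together.
const request = require('supertest');
const app = require('../src/app'); // hypothetical Express app exported by the microservice

describe('GET /resources', () => {
  it('responds with a JSON list of resources', async () => {
    const res = await request(app)
      .get('/resources')
      .expect('Content-Type', /json/)
      .expect(200);

    expect(Array.isArray(res.body)).toBe(true);
  });
});
```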

3.3 System Testing

  • Definition: Testing of the entire system, or of more than one microservice at a time, to verify end-to-end functionality of the software.
  • Participants: Testing Team
  • Methodology: Tests will be written based on the high- and medium-level architecture design document(s) and the testing matrices (an end-to-end sketch follows below).
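
A system-level test could drive one of the mobile apps end to end against the running backend services. The sketch below uses Detox (see section 5.0); the screen flow and element testIDs are hypothetical:

```js
// messageInABottle.e2e.js - hypothetical Detox system test driving the mobile app
// end to end against the running backend microservices.
describe('Message in a bottle', () => {
  beforeAll(async () => {
    await device.launchApp({ newInstance: true });
  });

  it('lets a user schedule a message', async () => {
    await element(by.id('createMessageButton')).tap();            // hypothetical testID
    await element(by.id('messageInput')).typeText('Check on me'); // hypothetical testID
    await element(by.id('scheduleButton')).tap();                 // hypothetical testID
    await expect(element(by.text('Message scheduled'))).toBeVisible();
  });
});
```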

3.4 User Acceptance Testing

  • Definition: Functional testing which takes place after system, integration, and unit testing have occurred. The purpose is to ensure the user requirements have been met so that the software can be moved into the production environment.
  • Participants: Testing Team, Stakeholder, End Users
  • Methodology: Acceptance testing for this project will mainly be automated in simulated environments, which allows testing to be done as often as desired. Acceptance testing with real users will be done as time and scheduling allow (preferably after Incremental Deliverables). Tests will be written based on user requirements (a web-based sketch follows below).
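
For the webpage, automated acceptance tests could map directly to individual user requirements. The sketch below uses selenium-webdriver (see section 5.0) against a simulated environment; the requirement, test URL, and element ID are hypothetical:

```js
// acceptance/browseResources.test.js - hypothetical acceptance test for the webpage,
// tied to a user requirement such as "a youth can browse the resources page".
const { Builder, By, until } = require('selenium-webdriver');

describe('User requirement: browse resources', () => {
  let driver;

  beforeAll(async () => {
    driver = await new Builder().forBrowser('chrome').build();
  });

  afterAll(async () => {
    await driver.quit();
  });

  it('shows the list of resources to the user', async () => {
    await driver.get('http://localhost:3000/resources'); // hypothetical test URL
    const list = await driver.wait(until.elementLocated(By.id('resource-list')), 5000);
    expect(await list.isDisplayed()).toBe(true);
  });
});
```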

3.5 Performance Testing

  • Definition: Testing the speed of the application and its components, as well as checking resource usage (memory, disk, etc.). This includes testing the application under an expected normal load and under extreme load.
  • Participants: Testing Team
  • Methodology: Performance testing was not initially considered for ID0; this section will be updated. One possible approach is sketched below.
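
Once performance testing is planned, one possible (not yet chosen) approach is a Node.js load-generation tool such as autocannon; the endpoint and load figures below are hypothetical:

```js
// load/resources.load.js - hypothetical load-test sketch using autocannon,
// simulating an expected normal load against one microservice endpoint.
const autocannon = require('autocannon');

async function run() {
  const result = await autocannon({
    url: 'http://localhost:3000/resources', // hypothetical endpoint
    connections: 50, // simulated concurrent users (normal load)
    duration: 30,    // seconds
  });

  console.log('avg req/sec:', result.requests.average);
  console.log('avg latency (ms):', result.latency.average);
}

run();
```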

3.6 Automated Regression Testing

  • Definition: Re-running tests that have already been run to gain confidence that new changes haven't caused any defects in previously tested code.
  • Participants: Testing Team, Build Master
  • Methodology: Previously created unit, integration, system, and acceptance tests will be run every time code is pushed to the repository. If the time to run the full suite becomes an issue, only selected tests will be run, based on code coverage and the testing matrices (a configuration sketch follows below).
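
A minimal sketch of how the suites could be grouped so that a subset can be run when full regression becomes too slow, assuming Jest is the shared runner; the paths and coverage targets are hypothetical:

```js
// jest.config.js - hypothetical configuration grouping suites so regression runs
// can be narrowed (e.g. unit only) and coverage thresholds can be enforced.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { branches: 80, functions: 80, lines: 80, statements: 80 }, // hypothetical targets
  },
  projects: [
    { displayName: 'unit', testMatch: ['<rootDir>/src/**/*.test.js'] },
    { displayName: 'integration', testMatch: ['<rootDir>/tests/integration/**/*.test.js'] },
  ],
};
```

With a recent Jest version, a single group can then be run on its own (for example, jest --selectProjects unit) while the full set still runs for regression.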

4.0 Hardware Requirements

  • A Mac for building the iOS version of the application
  • A machine provided by the university to run all of the Docker containers
  • A machine provided by the university to run the Postgres database

5.0 Environment Requirements

  • Detox needs to be set up on a Mac (VM) for E2E testing of the mobile apps (iOS and Android)
  • Selenium can be set up on the same machine
  • Jest tests can be run locally

6.0 Test Schedule

  • Tests will be run every time code is pushed to the repository

7.0 Testing Matrices

8.0 Planned Tests/Testability Investments

The following items are planned to be implemented for ID2:

  • Detox still needs to be set up on a university computer (a Mac) so that end-to-end testing can be done
  • Because the project uses microservices, individual components can be tested independently rather than testing everything together all at once.
  • Logging frameworks need to be researched and added to the project