master test plan - N4SJAMK/teamboard-meta GitHub Wiki

INTRODUCTION

This is the master test plan for a product called Contriboard.

Project and project objective

Objective of the master test plan

The objective of the Master Test Plan is to inform everyone involved in testing about the testing approach, the activities it comprises, and how those activities relate to each other.

Involved in creating the master test plan

This test plan is created by Mikko Kemppinen and Arttu Henell.

Client

The clients are ordinary end users of Contriboard.

Provider

JAMK University of Applied Sciences provides Challenge Factory with a workspace, expertise, computers and technology.

Scope

Upcoming testable features are listed in the Roadmap. The application's testable functionalities are derived from use cases. Each use case is tested with both valid and invalid inputs to achieve high coverage. Stability, scalability and performance are also in scope.

Out of scope

Acceptants and acceptance criteria

Acceptance of a tested functionality is based on that functionality behaving as specified. No single function may crash the application, and serious errors may not exist in the application's compulsory functionalities. Each manual test case has predetermined pass/fail criteria.
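The predetermined pass/fail idea can be illustrated with a small sketch. The step texts and expected results below are hypothetical examples, not actual Contriboard test cases:

```python
# Minimal sketch of predetermined pass/fail evaluation for a manual test case.
# A case passes only if every observed result matches its expected result.

def evaluate_test_case(steps, observed):
    """steps: list of (action, expected result); observed: actual results."""
    for (action, expected), actual in zip(steps, observed):
        if actual != expected:
            return "failed", action
    return "passed", None

steps = [
    ("Open a board", "board view is shown"),
    ("Create a ticket", "ticket appears on the board"),
]
observed = ["board view is shown", "ticket appears on the board"]
print(evaluate_test_case(steps, observed))  # ('passed', None)
```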

DOCUMENTATION

TEST STRATEGY

Main functionalities are tested with Robot Framework, fMBT and manual test cases.

APPROACH

Test scenarios are created by following imaginary stories about clients using the application. For effective test automation, the stories follow a common storyline for a typical user.
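Such a storyline can be thought of as an ordered sequence of user actions executed one after another. A rough sketch (the action names are invented for illustration, not real scenario steps):

```python
# Sketch: a test scenario as an ordered "story" of user actions.
# Each step is a callable returning True on success; execution stops
# at the first failing step, mirroring how a scenario run aborts.

def run_scenario(story):
    for name, step in story:
        if not step():
            return f"failed at: {name}"
    return "scenario passed"

story = [
    ("register account", lambda: True),
    ("create board",     lambda: True),
    ("add ticket",       lambda: True),
]
print(run_scenario(story))  # scenario passed
```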

TEST LEVELS

1. Unit testing

  • The programmer creates functional tests during the coding phase with Karma. The created tests can be found here (functional testing).

2. System testing

Functional testing

  • Automated Robot Framework scenarios are run against the SUT (System Under Test) to find any new failures before integration (regression testing).

  • A manual test case pool is also created and executed; it can be found here. "Contriboard" is the selected test project.

  • Exploratory testing is done by the programmers during the development phase.

  • Regression testing is done with fMBT. The model-based testing model can be found here.
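fMBT generates tests from a model of the application; the core idea can be sketched in plain Python as a walk over a state machine, emitting the visited actions as a generated test. The states and actions below are a toy model, not the real fMBT model:

```python
import random

# Sketch of model-based test generation: walk a state model of the
# application and emit the action sequence as a generated test case.

MODEL = {
    "logged_out": {"log in": "board_list"},
    "board_list": {"open board": "board", "log out": "logged_out"},
    "board":      {"add ticket": "board", "close board": "board_list"},
}

def generate_test(steps=6, seed=0):
    rng = random.Random(seed)
    state, trace = "logged_out", []
    for _ in range(steps):
        action, state = rng.choice(sorted(MODEL[state].items()))
        trace.append(action)
    return trace

print(generate_test())
```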

Non-functional testing

  • Stability is tested with looped automation runs of Robot Framework scenario tests. These tests are run on a local client. All stability test statistics are saved for later inspection and analysis. Reliability and functionality are tested as well.

  • Locust is used for performance testing. The application is stressed with simulated users to determine how much load the system can handle. This testing is done in the Foreman test environment for realistic results.
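The stats-gathering loop behind the stability runs can be sketched in plain Python (this is not the actual Robot Framework harness; `run_scenario` is a stand-in for a real scenario execution):

```python
import csv, io, time

# Sketch of a looped stability run: execute the same scenario repeatedly,
# record duration and pass/fail per iteration, and save the stats as CSV
# for later inspection and analysis.

def run_scenario():
    return True  # stand-in: a real run would drive the application

def stability_run(iterations=5):
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["iteration", "seconds", "result"])
    for i in range(iterations):
        start = time.perf_counter()
        ok = run_scenario()
        writer.writerow([i, round(time.perf_counter() - start, 4),
                         "pass" if ok else "fail"])
    return out.getvalue()

print(stability_run(3))
```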

Security testing

3. Acceptance testing

  • Collaborative Testing Sessions are used to validate product quality before moving it into production.
  • Automated Robot Framework scenarios are again run against the SUT (System Under Test) or in the "Foreman" environment to find any new failures before acceptance (regression testing). This is also the smoke test phase.

TEST ENVIRONMENT

Testing tools

The same scenarios are used for the stability tests...

Tools used in testing process:

Hardware

Workstations for local testing

MANAGEMENT

Test process

Testing process for different tests:

  • Manual test cases are executed in TestLink. The tester follows the test case steps and executes the test in the SUT.

  • TestLink is used as a documentation tool for all tests, including manual tests. Manual tests are executed and reported in TestLink for later inspection. This lets us observe and manage result statistics during the testing phase.

  • Robot Framework with the Selenium2Library is used for automation tests. Scenarios are created in Robot Framework and run against different versions of the application.

  • Stability tests are run on a local client for long periods of time. Statistics and version tags are linked for later analysis.

  • NoiseGenerator is used to create a realistic network environment for the application by generating network delay.

  • Performance is tested with Foreman and Locust.
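The network-delay idea behind NoiseGenerator can be illustrated with a tiny wrapper that injects latency around a call. This is purely a stand-in for illustration, not NoiseGenerator's actual mechanism:

```python
import random, time

# Stand-in illustration of injected network delay: wrap an operation so
# every call is preceded by a randomized latency, simulating a noisy network.

def with_delay(operation, min_ms=10, max_ms=50, rng=random.Random(0)):
    def wrapped(*args, **kwargs):
        time.sleep(rng.uniform(min_ms, max_ms) / 1000.0)
        return operation(*args, **kwargs)
    return wrapped

slow_echo = with_delay(lambda msg: msg)
start = time.perf_counter()
result = slow_echo("ping")
elapsed = time.perf_counter() - start
print(result, round(elapsed * 1000, 1), "ms")
```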

Test infrastructure management

  • A test product environment called "Foreman" is created and split across multiple computers to reduce the load. This test environment is used to run tests throughout development. It was created for performance testing purposes and simulates the actual production environment. Build instructions can be found here.

  • We also have an additional test product environment called "SUT" (System Under Test). This environment runs on a single computer, so it cannot simulate the actual production environment. Functional and stability tests can be run on this platform, and multiple users can join it. The build process can be found here.

  • Robot Framework with the Selenium2Library is used for automation tests. Only one user can be on the board because this test environment is run locally. The test client is installed after cloning the GitHub repository; the README file explains everything you need to know.

Test product management

The test environment version is selected using GitHub tags for version control. The version can also be found in Contriboard's profile -> "about" dialog. This helps us determine which version we are testing.
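Picking the newest version under test from a list of tags can be sketched as a numeric comparison (the tag names below are made up, not Contriboard's actual tags):

```python
# Sketch: choose the newest version tag from a list of GitHub tags,
# comparing version parts numerically rather than alphabetically
# (so "v1.10.1" correctly ranks above "v1.2.0").

def latest_tag(tags):
    def key(tag):
        return tuple(int(part) for part in tag.lstrip("v").split("."))
    return max(tags, key=key)

tags = ["v1.2.0", "v1.10.1", "v0.9.3"]
print(latest_tag(tags))  # v1.10.1
```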

Tags can be found under the "branch:master"-button:

Defects procedure / Error management

All defects and enhancement proposals should be reported through GitHub issue tracking so that defects can be addressed and debugged. Defects and enhancement proposals are classified with different labels. In addition, customers can report defects or propose enhancements through the UserVoice feature in the product.
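The label-based classification can be sketched as a simple keyword-to-label mapping. The labels and keywords below are illustrative examples, not the project's actual label set:

```python
# Sketch: classify an incoming issue into labels by keyword matching.
# Anything that matches no rule falls back to a "triage" label.

RULES = [
    ("crash", "bug:critical"),
    ("error", "bug"),
    ("slow",  "performance"),
    ("would be nice", "enhancement"),
]

def classify(issue_text):
    text = issue_text.lower()
    return [label for keyword, label in RULES if keyword in text] or ["triage"]

print(classify("App crash when dragging a ticket"))   # ['bug:critical']
print(classify("It would be nice to export boards"))  # ['enhancement']
```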

Milestones

Challenge Factory uses Scrum, and every sprint is a milestone for testing.

Goal

Our main goal is successful test management and documentation throughout the project's development.

Results