Zephyr details 2 - abeedal/Abeedal GitHub Wiki

Reviewing test cases for automation

Firstly, we need to be aware that test automation does not mean the end of manual testing.

Rather than replacing human intuition and problem solving, test automation is about automating the right tests. Not everything that can be automated should be automated.

Good automation helps testers do their job. It's not there to replace them. It is a tool that enhances testing.

One of the most basic mistakes is not selecting the correct test cases for automation.

So how do we decide which tests to automate and which tests to leave for manual testing?

We don't just select any test or test suite.

We need to analyse the test cases thoroughly and select candidates for automation, considering the most important factor: Return on Investment (ROI).

Before we start automating a test, we need to understand what benefit automating it will bring. To judge that benefit, weigh the time, effort, and resources invested in automation against what the automated test will save over repeated runs.
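The ROI trade-off described above can be sketched as a simple break-even calculation. This is an illustrative model only; the figures and the function name are hypothetical, not measured values or part of any framework.

```python
# Illustrative ROI check: automation pays off once the cumulative manual
# effort it replaces exceeds the cost of building and maintaining the script.
# All figures below are hypothetical examples.

def automation_roi(build_hours: float, maintain_hours_per_run: float,
                   manual_hours_per_run: float, runs: int) -> float:
    """Return net hours saved over `runs` executions (positive = worth automating)."""
    automated_cost = build_hours + maintain_hours_per_run * runs
    manual_cost = manual_hours_per_run * runs
    return manual_cost - automated_cost

# A regression test run every release: 0.5h manual, 8h to automate,
# 0.1h upkeep per run. After 30 releases the net saving is positive,
# so the test is a reasonable automation candidate.
print(automation_roi(build_hours=8, maintain_hours_per_run=0.1,
                     manual_hours_per_run=0.5, runs=30))
```

Note how the same test automated for a single run would come out negative: this is the numeric form of "will only run once" being a poor candidate.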

Automation needs to be:

Focused

Informative

Adding value

Trustworthy

Repeatable

Time saving

The Review Process

Once tests have been written and reviewed for execution in a test suite/cycle (see Reviewing new test cases), they will have a status of Ready.

All test cases within all features that have a status of Ready are picked up and evaluated for automation taking the following guide into consideration:

What tests should we automate

Review Steps

Select a feature to review

Identify all test cases with a status of Ready within the feature

Assess the test cases against the automation guidelines (see How do we evaluate tests for automation?)

Based on the results of the review, set the status to one of:

Automation Candidate: Test has been identified as a candidate for addition to the automation framework

Manual: Test has been identified for manual testing only, as it is not possible or beneficial to automate

Obsolete: Test is no longer relevant, related functionality has been superseded or is covered by other test(s)

Review: Functionality has changed or the test may not be accurate and requires further analysis/clarification. Uncheck the Approve checkbox

Once a test is identified as an Automation Candidate, it will be treated as technical debt by the SDETs and will receive a further review as part of ongoing automation work.
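The review outcomes above form a simple decision flow, which could be modelled as follows. This is a sketch only; the status names and the review questions are taken from the steps above, but the actual Zephyr field values and workflow may differ.

```python
from enum import Enum

# Hypothetical model of the review statuses described above; actual
# Zephyr status names/values may differ.
class ReviewStatus(Enum):
    AUTOMATION_CANDIDATE = "Automation Candidate"
    MANUAL = "Manual"
    OBSOLETE = "Obsolete"
    REVIEW = "Review"

def triage(is_relevant: bool, is_accurate: bool, automatable: bool) -> ReviewStatus:
    """Map the review questions onto one of the four outcome statuses."""
    if not is_relevant:
        # Functionality superseded or covered by other tests
        return ReviewStatus.OBSOLETE
    if not is_accurate:
        # Needs further analysis/clarification; uncheck Approve
        return ReviewStatus.REVIEW
    return (ReviewStatus.AUTOMATION_CANDIDATE if automatable
            else ReviewStatus.MANUAL)

print(triage(is_relevant=True, is_accurate=True, automatable=True).value)
```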


What tests should we automate

Created by mark.gamble. Last updated: Aug 02, 22.

Below are the factors we need to consider to help identify which tests should or should not be automated.

When Should a Test Case Be Automated?

A test case should be automated if:

Critical path:

Focuses on the features or user flows that, if they fail, cause considerable damage to the business.

Follows key user journeys for the priority personas identified to use the app

Repetitive:

Needs to be run against every build/release of the application, such as smoke, sanity, and regression tests

Needs to run against multiple configurations, e.g. different OS and browser combinations.

Executes the same workflow but uses different data for its inputs on each test run, e.g. data-driven tests / scenario outlines.

Managed Data:

Requires specific data sets / combinations

Requires specific values or inputs that do not happen often to invoke specific functionality

Requires data to change or be updated during the test

Time-Consuming:

Takes a long time to perform manually

Better suited to running overnight.

Stable:

The requirements, the test, or the task are low risk, stable, and unlikely to change often.

Dependency:

Dependent on external integrations (e.g. Bookmakers)

Useful:

Can be utilised to support other types of testing, such as performance, stress, and load tests

Tests in other areas of the app depend on this functionality being exercised and working

Easy to Automate (may require SDET input):

Uses common Step definitions

Interacts with components that are already well covered

These qualifications allow us to set standards for testing across the team and prioritise tests based on the value they offer.
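The repetitive, data-driven criterion above maps directly onto parametrised automated tests: one workflow, many input rows. A minimal sketch in pytest; `apply_discount` is a hypothetical function under test, not part of the actual framework.

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

# Same workflow, different data on each run: the shape of a
# data-driven test / scenario outline.
@pytest.mark.parametrize("price, percent, expected", [
    (100.0, 10, 90.0),
    (59.99, 0, 59.99),
    (20.0, 50, 10.0),
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected
```

Adding a new scenario is one new data row rather than a new manual test run, which is where the repeated-execution ROI comes from.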

When Should a Test Case NOT Be Automated?

A test should not be automated if:

Will only run once (e.g. design-specific changes).

Classed as a user experience/usability test (one that requires a user to judge how easy the app is to use).

Needs to be run ASAP (e.g. a newly developed feature requires quick feedback, so it is tested manually at first and evaluated for automation later).

Requires ad hoc/random testing based on domain knowledge/expertise - Exploratory Testing.

Intermittent. Tests without predictable results cause more noise than value.

Requires visual confirmation. However, we can capture images during automated testing and then check the images manually.

Any test that cannot be 100% automated should not be automated at all unless doing so will save a considerable amount of preparation time for manual testing (e.g. large data setup).
