06. Dynamic Testing

Dynamic testing - Testing that involves the execution of the software of a component or system.


Black-box test techniques

Black-box test technique (black-box technique, specification-based technique, specification-based test technique) - A procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system, without reference to its internal structure.

They are called black-box because they view the software as a black box with inputs and outputs, but they have no knowledge of how the system or component is structured inside the box. In essence, the tester is concentrating on what the software does, not how it does it. When performing system or acceptance testing, the requirements specification or functional specification may form the basis of the tests. When performing component or integration testing, a design document or low-level specification may form the basis of the tests.

1. Equivalence partitioning - A black-box test technique in which test cases are designed to exercise equivalence partitions by using one representative member of each partition.

The idea behind the technique is to divide (that is, to partition) a set of test conditions into groups or sets where all elements of the set can be considered the same, so the system should handle them equivalently, hence 'equivalence partitioning'. Equivalence partitions are also known as equivalence classes: the two terms mean exactly the same thing. The EP technique then requires that we test only one condition from each partition. This is because we are assuming that all the conditions in one partition will be treated in the same way by the software. If one condition in a partition works, we assume all of the conditions in that partition will work, and so there is little point in testing any of the others. Conversely, if one of the conditions in a partition does not work, then we assume that none of the conditions in that partition will work, so again there is little point in testing any more in that partition.

For example, a savings account in a bank earns a different rate of interest depending on the balance in the account. In order to test the software that calculates the interest due, we can identify the ranges of balance values that earn the different rates of interest. For example, if a balance in the range $0 up to $100 has a 3% interest rate, a balance over $100 and up to $1,000 has a 5% interest rate, and balances of $1,000 and over have a 7% interest rate, we would initially identify three valid equivalence partitions (one for each interest rate) and one invalid partition (negative balances). So, for example, we might choose to calculate the interest on balances of -$10.00, $50.00, $260.00 and $1,348.00.
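To make this concrete, here is a minimal sketch in Python, where the `interest_rate` function is a hypothetical stand-in for the real calculation; under EP, one representative value per partition is enough:

```python
# Hypothetical stand-in for the interest calculation under test.
def interest_rate(balance: float) -> float:
    """Return the annual interest rate for a given balance."""
    if balance < 0:
        raise ValueError("balance cannot be negative")  # invalid partition
    if balance <= 100:
        return 0.03   # $0 up to $100
    if balance < 1000:
        return 0.05   # over $100 and below $1,000
    return 0.07       # $1,000 and over

# One representative value from each valid partition.
for balance, expected in [(50.00, 0.03), (260.00, 0.05), (1348.00, 0.07)]:
    assert interest_rate(balance) == expected

# The invalid partition (negative balances) is tested on its own.
try:
    interest_rate(-10.00)
    assert False, "a negative balance should be rejected"
except ValueError:
    pass
```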

Summary of EP characteristics:

• Valid values should be accepted by the component or system. An equivalence partition containing valid values is called a valid equivalence partition.

• Invalid values should be rejected by the component or system. An equivalence partition containing invalid values is called an invalid equivalence partition.

• Partitions can be identified for any data element related to the test object, including inputs, outputs, internal values, time-related values (for example before or after an event) and for interface parameters (for example integrated components being tested during integration testing).

• Any partition may be divided into sub-partitions if required, where smaller differences of behaviour are defined or possible. For example, if a valid input range goes from -100 to 100, then we could have three sub-partitions: valid and negative, valid and zero, and valid and positive.

• Each value belongs to one and only one equivalence partition from a set of partitions. However, it is possible to apply EP more than once and end up with different sets of partitions, as we will see later under 'Applying more than once' in the section on 'Extending equivalence partitioning and boundary value analysis'.

• When values from valid partitions are used in test cases, they can be combined with other valid values in the same test, as the whole set should pass. We can therefore test many valid values at the same time.

• When values from invalid partitions are used in test cases, they should be tested individually, that is, not combined with other invalid equivalence partitions, to ensure that failures are not masked. Failures can be masked when several failures occur at the same time but only one is visible, causing the other failures to be undetected.

• EP is applicable at all test levels.

2. Boundary value analysis - A black-box test technique in which test cases are designed based on boundary values.

Boundary value analysis (BVA) is based on testing at the boundaries between partitions that are ordered, such as a field with numerical input or an alphabetical list of values in a menu. It is essentially an enhancement or extension of EP and can also be used to extend other black-box (and white-box) test techniques.

As an example, consider a printer that has an input option for the number of copies to be made, from 1 to 99.

To apply BVA, we will take the minimum and maximum (boundary) values from the valid partition (1 and 99 in this case) together with the first or last value respectively in each of the invalid partitions adjacent to the valid partition (0 and 100 in this case). In this example we would have three EP tests (one from each of the three partitions) and four boundary value tests.

Let's return to the savings account system described in the previous section. Because boundary values are defined as those values on the edge of a partition, we can identify the following boundary values: -$0.01 (an invalid boundary value because it is at the edge of an invalid partition), together with $0.00, $100.00, $100.01, $999.99 and $1,000.00, all valid boundary values.

Two-value and three-value boundary value analysis for the printer example:

• Two-value: 0, 1, 99, 100

• Three-value: 0, 1, 2, 98, 99, 100
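A minimal sketch of the two approaches, using a hypothetical `validate_copies` function for the printer example:

```python
# Hypothetical validation of the printer's copy count (valid range 1-99).
def validate_copies(n: int) -> bool:
    return 1 <= n <= 99

# Two-value BVA: each boundary plus its nearest neighbour in the
# adjacent partition.
two_value = {0: False, 1: True, 99: True, 100: False}

# Three-value BVA: the values just before, on and just after each boundary.
three_value = {0: False, 1: True, 2: True, 98: True, 99: True, 100: False}

for n, expected in {**two_value, **three_value}.items():
    assert validate_copies(n) is expected
```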

3. Decision table testing - A black-box test technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.

Decision tables provide a systematic way of stating complex business rules, which is useful for developers as well as for testers. Decision tables can be used in test design whether or not they are used in the development, as they help testers explore the effects of combinations of different inputs and other software states that must correctly implement business rules.

For example: if you are a new customer opening a credit card account, you will get a 15% discount on all your purchases today. If you are an existing customer and you hold a loyalty card, you get a 10% discount. If you have a coupon, you can get 20% off today (but it cannot be used with the new customer discount).
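As a sketch, here is one plausible reading of these rules in Python (it assumes the coupon replaces, rather than stacks with, any other discount), with one test case per decision table column:

```python
# One plausible toy implementation of the discount rules.
def discount(new_customer: bool, loyalty_card: bool, coupon: bool) -> int:
    if coupon:
        return 20   # the 20% coupon does not stack with other discounts
    if new_customer:
        return 15
    if loyalty_card:
        return 10
    return 0

# Decision table: (new_customer, loyalty_card, coupon) -> expected discount.
# The combination new_customer + loyalty_card is infeasible (a brand-new
# customer cannot already hold a loyalty card), so those columns are omitted.
decision_table = {
    (True,  False, False): 15,
    (True,  False, True):  20,
    (False, True,  False): 10,
    (False, True,  True):  20,
    (False, False, True):  20,
    (False, False, False):  0,
}

# One test case per decision table column.
for conditions, expected in decision_table.items():
    assert discount(*conditions) == expected
```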

4. State transition testing (finite state testing) - A black-box test technique using a state transition diagram or state table to derive test cases to evaluate whether the test item successfully executes valid transitions and blocks invalid transitions.

State transition testing is used where some aspect of the system can be described in what is called a 'finite state machine'. This simply means that the system can be in a limited (finite) number of different states, and the transitions from one state to another are determined by the rules of the 'machine'. A state transition model has four basic parts:

• The states that the software may occupy (open/closed or funded/insufficient funds).

• The transitions from one state to another (not all transitions are allowed).

• The events that cause a transition (closing a file or withdrawing money).

• The actions that result from a transition (an error message or being given your cash). A minimal sketch covering these four parts follows below.
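Here is a minimal sketch of state transition testing for the open/closed file example above; the transition table, event names and actions are assumptions for illustration:

```python
# (current state, event) -> (next state, action); all names are invented.
transitions = {
    ("closed", "open"):  ("open",   "file handle returned"),
    ("open",   "close"): ("closed", "buffers flushed"),
}

def step(state: str, event: str) -> tuple[str, str]:
    try:
        return transitions[(state, event)]
    except KeyError:
        return state, "error message"  # invalid transition: state unchanged

# Valid transitions should be executed...
assert step("closed", "open") == ("open", "file handle returned")
assert step("open", "close") == ("closed", "buffers flushed")

# ...and invalid transitions should be blocked, for example closing a
# file that is already closed.
assert step("closed", "close") == ("closed", "error message")
```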

5. Use case testing (scenario testing, user scenario testing) - A black-box test technique in which test cases are designed to execute scenarios of use cases.

A use case is a description of a particular use of the system by an actor (a human user of the system, external hardware or other components or systems). Each use case describes the interactions the actor has with the subject (i.e. the component or system to which the use case is applied), in order to achieve a specific task (or, at least, produce something of value to the actor). Use cases are a sequence of steps that describe the interactions between the actor and the subject. A use case is a specific way of designing interactions with software items, incorporating requirements for the software functions represented by the use cases.
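As a sketch, here is a scenario-style test for a hypothetical ATM withdrawal use case (the `Atm` class and its behaviour are invented for illustration); each test walks one scenario, either the main success flow or an exception flow:

```python
class Atm:
    """Hypothetical subject of the use case."""
    def __init__(self, balance: float):
        self.balance = balance

    def withdraw(self, amount: float) -> str:
        if amount > self.balance:
            return "insufficient funds"   # exception flow
        self.balance -= amount
        return "cash dispensed"           # main success flow

# Main success scenario: the actor requests an amount within the balance.
atm = Atm(balance=100.0)
assert atm.withdraw(40.0) == "cash dispensed"
assert atm.balance == 60.0

# Exception scenario: the requested amount exceeds the balance.
assert atm.withdraw(200.0) == "insufficient funds"
assert atm.balance == 60.0
```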

White-box test techniques

White-box test technique (structural test technique, structure-based test technique, structure-based technique, white-box technique) - A procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.

They are called white-box because they use the internal structure of the software to derive test cases and require knowledge of how the software is implemented, that is, how it works.

White-box test techniques typically apply at the component level of testing and focus on the structure of a software component, such as statements, decisions, branches or even distinct paths.

1. Statement coverage - The percentage of executable statements that have been exercised by a test suite.

2. Decision coverage - The coverage of decision outcomes. (Note: this is the Glossary definition at publication, but a fuller definition would be: The percentage of decision outcomes that have been exercised by a test suite.)
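The difference between the two is easiest to see on a tiny example. In the sketch below (an invented function), a single test achieves 100% statement coverage but only 50% decision coverage:

```python
def grant_bonus(sales: int) -> int:
    bonus = 0
    if sales > 100:   # one decision with two outcomes (True/False)
        bonus = 10
    return bonus

# Test A executes every statement, so statement coverage is 100%, but
# only the True outcome of the decision is exercised (50% decision
# coverage).
assert grant_bonus(150) == 10

# Adding test B exercises the False outcome; together, A and B give
# 100% decision coverage, which always implies 100% statement coverage.
assert grant_bonus(50) == 0
```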

Experience-based test techniques

Experience-based test technique (experience-based technique) - A procedure to derive and/or select test cases based on the tester's experience, knowledge and intuition.

In experience-based test techniques, people's knowledge, skills and background are a prime contributor to the test conditions, test cases and test data. Experience-based test techniques are used to complement black-box and white-box techniques, and are also used when there is no specification, or when the specification is inadequate or out of date. This may be the only type of technique used for low-risk systems, and the approach can be particularly useful under extreme time pressure.

1. Error guessing - A test technique in which tests are derived on the basis of the tester's knowledge of past failures, or general knowledge of failure modes.

Error guessing is a technique that is best used as a complement to other, more formal techniques. The success of error guessing is very much dependent on the skill of the tester, as good testers know where the defects are most likely to lurk.

There are no rules for error guessing. The tester is encouraged to think of situations in which the software may not be able to cope. Here are some typical things to try: division by zero, blank (or no) input, empty files and the wrong kind of data (for example alphabetic characters where numeric are required).
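As a sketch, here are some error-guessing checks against a hypothetical `parse_quantity` function, trying the failure modes listed above:

```python
# Hypothetical input parser under test.
def parse_quantity(text: str) -> int:
    if not text.strip():
        raise ValueError("input is blank")
    if not text.strip().isdigit():
        raise ValueError("input is not numeric")
    return int(text)

# Guessed error cases: blank input and the wrong kind of data.
for bad_input in ["", "   ", "abc", "12a"]:
    try:
        parse_quantity(bad_input)
        assert False, f"expected {bad_input!r} to be rejected"
    except ValueError:
        pass

# Division by zero is another classic guess, for example in a function
# that computes a unit price from a total and a quantity.
def unit_price(total: float, quantity: int) -> float:
    return total / quantity   # a guessed weak spot: quantity == 0

try:
    unit_price(9.99, 0)
    assert False, "expected a ZeroDivisionError"
except ZeroDivisionError:
    pass
```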

Error guessing may be based on:

• How the application has worked in the past.

• What types of mistakes the developers tend to make.

• Failures that have occurred in other applications.

2. Exploratory testing - An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.

Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter: a short declaration of the scope of a short (one- to two-hour) time-boxed test effort, the objectives and the possible approaches to be used.

One typical way to organize and manage exploratory testing is to have sessions, hence this is also known as session-based testing. A test charter will give a list of test conditions (sometimes referred to as objectives for the test session), but the testing does not have to conform completely to that charter, particularly if new areas of high risk are discovered in the session. The tester is constantly making decisions about what to test next and where to spend the (limited) time.

This is an approach that is most useful when there are no or poor specifications and when time is severely limited. It can also serve to complement other, more formal testing, helping to establish greater confidence in the software.

3. Checklist-based testing - An experience-based test technique whereby the experienced tester uses a high-level list of items to be noted, checked, or remembered, or a set of rules or criteria against which a product has to be verified.

Checklist-based testing is testing based on experience, but that experience has been summarized and documented in a checklist. Testers use the checklist to design, implement and execute tests based on the items or test conditions found in the checklist (a minimal sketch of this follows the list below). The checklist may be based on:

• The experience of the tester.

• Knowledge, for example of what is important for the user.

• An understanding of why and how software fails.
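As a minimal sketch of how a checklist drives testing, the example below uses an invented `submit_form` function as the item under test and a few toy checklist items:

```python
def submit_form(name: str) -> str:
    """Invented stand-in for the application under test."""
    if not name.strip():
        return "error: field 'name' is required"
    return "saved"

# The documented checklist: each item pairs a description with a check.
checklist = [
    ("blank input is rejected",
     lambda: submit_form("") != "saved"),
    ("error messages name the offending field",
     lambda: "'name'" in submit_form("   ")),
    ("valid input is saved",
     lambda: submit_form("Alice") == "saved"),
]

# Execute every checklist item and report its outcome.
for description, check in checklist:
    print(("PASS" if check() else "FAIL") + ": " + description)
```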