03. Test Levels - idavidov13/ISTQB-Foundation GitHub Wiki

Test Level

Groups of test activities that are organized and managed together. Each level requires a test environment: an environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

Component/Unit testing

The testing of individual hardware or software components.

The objectives of component testing include:

• Reducing risk (for example by testing high-risk components more extensively).

• Verifying whether or not functional and non-functional behaviours of the component are as they should be (as designed and specified).

• Building confidence in the quality of the component: this may include measuring structural coverage of the tests, giving confidence that the component has been tested as thoroughly as was planned.

• Finding defects in the component.

• Preventing defects from escaping to later testing.

Component testing: specific approaches and responsibilities

Typically, component testing occurs with access to the code being tested and with the support of the development environment, such as a unit test framework or debugging tool. In practice it usually involves the developer who wrote the code, who may alternate between writing code and testing it. Sometimes, depending on the applicable level of risk, component testing is carried out by a different developer, introducing independence. Defects are typically fixed as soon as they are found, without formally being recorded in a defect management tool.
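The paragraph above can be sketched as code. This is a minimal, hypothetical example: a small component (`apply_discount`) together with the kind of tests a developer would write against it in a unit test framework. All names are invented for illustration; in practice the test functions would live in a framework such as unittest or pytest rather than be called directly.

```python
# Hypothetical component under test: a discount calculator.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Component tests: verify specified behaviour and probe risky inputs.
def test_typical_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(80.0, 0) == 80.0

def test_invalid_percent_is_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # expected: defect prevented from escaping to later levels
    else:
        raise AssertionError("invalid percent was not rejected")

# Run the checks directly for the sketch; a real framework would
# discover and run these tests automatically.
test_typical_discount()
test_zero_discount_leaves_price_unchanged()
test_invalid_percent_is_rejected()
```

Note how the tests target the component in isolation, with no other parts of the system involved.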

Integration testing

Integration testing - Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

Component integration testing (link testing) - Testing performed to expose defects in the interfaces and interactions between integrated components.

System integration testing - Testing the combination and interaction of systems.

The objectives of integration testing include:

• Reducing risk, for example by testing high-risk integrations first.

• Verifying whether or not functional and non-functional behaviours of the interfaces are as they should be, as designed and specified.

• Building confidence in the quality of the interfaces.

• Finding defects in the interfaces themselves or in the components or systems being tested together.

• Preventing defects from escaping to later testing.

Integration testing: specific approaches and responsibilities

One extreme is that all components or systems are integrated simultaneously, after which everything is tested as a whole. This is called big-bang integration. Big-bang integration has the advantage that everything is finished before integration testing starts. There is no need to simulate (as yet unfinished) parts. The major disadvantage is that in general it is time-consuming and difficult to trace the cause of failures with this late integration.

Another extreme is that all programs are integrated one by one, and tests are carried out after each step (incremental testing). Between these two extremes, there is a range of variants. The incremental approach has the advantage that the defects are found early in a smaller assembly when it is relatively easy to detect the cause. A disadvantage is that it can be time-consuming, since mock objects or stubs and drivers may have to be developed and used in the test.
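The stubs and drivers mentioned above can be illustrated with a short, hypothetical sketch: an `OrderService` is integrated and tested before the real payment component is finished, so a stub stands in for it, and driver code exercises the interface between the two. All class and method names here are invented for the example.

```python
class PaymentGatewayStub:
    """Stub simulating the not-yet-integrated payment component."""
    def __init__(self, succeed: bool = True):
        self.succeed = succeed
        self.charges = []  # record calls so the test can inspect the interaction

    def charge(self, amount: float) -> bool:
        self.charges.append(amount)
        return self.succeed

class OrderService:
    """Component under integration; collaborates with a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount: float) -> str:
        if amount <= 0:
            return "rejected"
        return "paid" if self.gateway.charge(amount) else "payment-failed"

# Driver code: exercises the interface in both the success and failure cases.
service = OrderService(PaymentGatewayStub(succeed=True))
assert service.place_order(49.99) == "paid"

failing = OrderService(PaymentGatewayStub(succeed=False))
assert failing.place_order(49.99) == "payment-failed"
```

Because only one small assembly is involved, a failing assertion points directly at the interface between `OrderService` and its gateway, which is exactly the early-detection advantage of incremental integration.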

System testing

System testing - Testing an integrated system to verify that it meets specified requirements. (Note that the ISTQB definition implies that system testing is only about the verification of specified requirements. In practice, system testing is often also about validation that the system is suitable for its intended users, as well as verifying against any type of requirement.)

System testing may include tests based on risk analysis reports; system, functional, or software requirements specifications; business processes; use cases or other high-level descriptions of system behavior; and interactions with the operating system and system resources. The focus is on end-to-end tasks that the system should perform, including non-functional aspects such as performance.

System testing: objectives

• reducing risk

• verifying whether or not functional and non-functional behaviours of the system are as they should be (as specified)

• validating that the system is complete and will work as it should and as expected

• building confidence in the quality of the system as a whole

• finding defects

• preventing defects from escaping to later testing or to production.

System testing: specific approaches and responsibilities

System testing is most often the final test on behalf of development, performed to verify that the system to be delivered meets the specification and to validate that it meets expectations; one of its purposes is to find as many defects as possible. Most often it is carried out by specialist testers who form a dedicated, and sometimes independent, test team within development, reporting to the development manager or project manager.

System testing should investigate end-to-end behaviour of both functional and non-functional aspects of the system. An end-to-end test may include all of the steps in a typical transaction, from logging on, through accessing data and placing an order, to logging off and checking the order status in a database. Typical non-functional tests include performance, security and reliability.
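The end-to-end transaction described above can be sketched as a single system-level scenario. The `System` class below is a hypothetical in-memory stand-in for the fully integrated system under test; in a real system test the same steps would run against the deployed application in a production-like test environment.

```python
class System:
    """Hypothetical stand-in for the fully integrated system under test."""
    def __init__(self):
        self.sessions = set()
        self.orders = {}
        self.next_id = 1

    def log_on(self, user: str) -> None:
        self.sessions.add(user)

    def place_order(self, user: str, item: str) -> int:
        if user not in self.sessions:
            raise PermissionError("not logged on")
        order_id = self.next_id
        self.next_id += 1
        self.orders[order_id] = {"user": user, "item": item, "status": "placed"}
        return order_id

    def order_status(self, order_id: int) -> str:
        return self.orders[order_id]["status"]

    def log_off(self, user: str) -> None:
        self.sessions.discard(user)

# One end-to-end scenario: log on, place an order, check its status, log off.
system = System()
system.log_on("alice")
order_id = system.place_order("alice", "book")
assert system.order_status(order_id) == "placed"
system.log_off("alice")
```

Unlike the component and integration examples, this scenario exercises the whole chain of behaviour the user would experience, which is the defining characteristic of a system-level test.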

Acceptance testing

Acceptance testing - Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

Acceptance tests typically produce information to assess the system's readiness for release or deployment to end-users or customers. Although defects are found at this level, that is not the main aim of acceptance testing. The focus is on validation, the use of the system for real, and how suitable the system is to be put into production or actual use by its intended users. Regulatory and legal requirements and conformance to standards may also be checked in acceptance testing; although these should have been addressed at an earlier level of testing, the acceptance test confirms compliance with the standards.

Acceptance testing: objectives

• establishing confidence in the quality of the system as a whole

• validating that the system is complete and will work as expected

• verifying that functional and non-functional behaviours of the system are as specified.

Different forms of acceptance testing

  • User acceptance testing (UAT) - Acceptance testing conducted in a real or simulated operational environment by intended users focusing on their needs, requirements and business processes.

  • Operational acceptance testing (OAT) - (production acceptance testing) Operational testing in the acceptance test phase, typically performed in a (simulated) operational environment by operations and/or systems administration staff focusing on operational aspects, for example recoverability, resource-behavior, installability and technical compliance.

  • Contractual acceptance testing - Acceptance testing conducted to verify whether a system satisfies its contractual requirements.

  • Regulatory acceptance testing - Acceptance testing conducted to verify whether a system conforms to relevant laws, policies and regulations.

Acceptance testing: specific approaches and responsibilities

The goal of acceptance testing is to establish confidence in the system, part of the system, or specific non-functional characteristics, for example usability of the system. Acceptance testing is most often focused on a validation type of testing, where we are trying to determine whether the system is fit for purpose. Finding defects should not be the main focus of acceptance testing.
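The accept/reject decision described above can be sketched as a simple go/no-go check over acceptance criteria. This is a hedged illustration only: the criteria, the helper function, and the results are all hypothetical, and in practice such criteria are often expressed in a business-readable BDD tool rather than plain code.

```python
def acceptance_decision(results: dict) -> tuple:
    """Given {criterion: passed} results, return (accept?, failed criteria)."""
    failed = [name for name, passed in results.items() if not passed]
    return (len(failed) == 0, failed)

# Hypothetical acceptance criteria covering user, operational, and
# regulatory acceptance concerns.
results = {
    "intended users can complete the core business process": True,
    "regulatory reporting output conforms to the relevant standard": True,
    "operations staff can restore the system from backup": False,
}

accept, failed = acceptance_decision(results)
assert accept is False
assert failed == ["operations staff can restore the system from backup"]
```

The point of the sketch is that acceptance testing produces a decision about fitness for purpose, not a defect count: a single failed criterion is enough to withhold acceptance.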
