07. Test Management - idavidov13/ISTQB-Foundation GitHub Wiki

TEST ORGANIZATION

Independent testing - If there is a separate test team, approaches to organizing it vary, as do the places in the organizational structure where the test team fits. Since testing is an assessment of quality, and since that assessment may not always be perceived as positive, many organizations strive to create an organizational climate where testers can deliver an independent, objective assessment of quality.

Levels of independence - The degree of independence ranges from no independent testers (developers testing their own code), through independent testers within the development team, to an independent test team within the organization and, at the far end, testers external to the organization. You might find specialists in the business domain (such as users of the system), specialists in technology (such as database experts), and specialists in testing (such as security testers or performance test experts) in a separate test team, as part of a larger independent test team, or as part of a contracted, outsourced test team.

Potential benefits of independence - An independent tester brings a skeptical attitude of professional pessimism, a sense that, if there's any doubt about the observed behavior, they should ask: 'Is this a defect?'

Potential drawbacks of test independence - It is possible for the testers and the test team to become isolated. An independent team can be a bottleneck and a source of delay. Independent testers may not have all of the information they need about the test object, since they may be outside the development organization.

Tasks of the test manager and the tester

Test manager - The person responsible for project management of testing activities and resources, and for evaluation of a test object; the individual who directs, controls, administers, plans and regulates the evaluation of a test object. Typical tasks include:

• At the outset of the project, in collaboration with the other stakeholders, devise the test objectives, organizational test policies (if not already in place), and test strategies.

• Plan the test activities, based on the test objectives and risks, and the context of the organization and the project. This may involve selecting the test approaches, estimating time, effort and cost for testing, acquiring resources, defining test levels, types and test cycles and planning defect management.

• Write and update over time any test plan(s).

• Coordinate the test plan(s) with other project stakeholders, project managers, product owners and anyone else who may affect or be affected by the project or the testing.

• Share the testing perspective with other project activities, such as integration planning, especially where third-party suppliers are involved.

• Lead, guide and monitor the analysis, design, implementation and execution of the tests, monitor test progress and results, and check the status of exit criteria (or definition of done).

• Prepare and deliver test progress reports and test summary reports, based on information gathered from the testers.

• Adapt the test planning based on test results and progress (whether documented in test progress or summary reports or not) and take any actions necessary for test control.

• Support setting up the defect management system and adequate configuration management of the testware, and traceability of the tests to the test basis.

• Produce suitable metrics for measuring test progress and evaluating the quality of the testing and the product (test object).

• Recognize when test automation is appropriate and, if it is, plan and support the selection and implementation of tools to support the test process, including setting a budget for tool selection (and possible purchase, lease and support and training of the team), allocating time and effort for pilot projects and providing continuing support in the use of the tool(s). (See Chapter 6 for more on tool support for testing.)

• Decide about the implementation of test environment(s) and ensure that they are put into place before test execution and managed during test execution.

• Promote and advocate for the testers, the test team and the test profession within the organization.

• Develop the skills and careers of testers, through training, performance evaluations, coaching and other activities, such as lunch-time discussions or presentations.

Tester - A skilled professional who is involved in the testing of a component or system. Typical tasks include:

• Reviewing and contributing to test plans from the tester's perspective.

• Analyzing, reviewing and assessing requirements, user stories and acceptance criteria, specifications and models (that is, the test basis) for testability and to detect defects early.

• Identifying and documenting test conditions and test cases, capturing traceability between test cases, test conditions and the test basis to assist in checking the thoroughness of testing (coverage), the impact of failed tests and the impact on the tests of changes in the test basis.

• Designing, setting up and verifying test environment(s), coordinating with system administration and network management.

• Designing and implementing test cases and test procedures, including automated tests where appropriate.

• Acquiring and preparing test data to be used in the tests.

• Creating a detailed test execution schedule (for manual tests).

• Executing the tests, evaluating the results and documenting deviations from expected results as defect reports.

• Using appropriate tools to help the test process.

• Automating tests as needed (for technical test specialists), as supported by a test automation engineer or expert or a developer. (See Chapter 6 for more on test automation.)

TEST PLANNING AND ESTIMATION

Test plan - Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities.

Test planning - The activity of establishing or updating a test plan.

Test strategy (organizational test strategy) - Documentation that expresses the generic requirements for testing one or more projects run within an organization, providing detail on how testing is to be performed and is aligned with the test policy.

Test approach - The implementation of the test strategy for a specific project.

  • Analytical - In this strategy, tests are determined by analyzing some factor, such as requirements (or other test basis) or risk.

  • Model-based - In this strategy, tests are designed based on some model of the test object.

  • Methodical - In this strategy, a pre-defined and fairly stable list of test conditions is used.

  • Process- or standard-compliant - In this strategy, an external standard or set of rules is used to analyze, design and implement tests.

  • Directed - In this strategy, stakeholders or experts (technology or business domain experts) may direct the testing according to their advice and guidance.

  • Regression-averse - In this strategy, the most important factor is to ensure that the system's behavior and performance do not deteriorate when it is changed and enhanced. To protect existing functionality, automated regression tests would be used extensively, as well as standard test suites and the reuse of existing tests and test data.

  • Reactive - In this strategy, the tests react and evolve based on what is found while test execution occurs, rather than being designed and implemented before test execution starts.

Entry criteria (definition of ready) - The set of conditions for officially starting a defined task.

Exit criteria (completion criteria, test completion criteria, definition of done) - The set of conditions for officially completing a defined task.
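Exit criteria are usually expressed as a checklist of measurable conditions. As a minimal sketch (the thresholds below are invented examples, not ISTQB-prescribed values), a definition of done might be evaluated like this:

```python
# Exit criteria check: testing is officially "done" only when every
# agreed condition holds. Thresholds are hypothetical examples.

def exit_criteria_met(coverage: float, open_blockers: int, pass_rate: float) -> bool:
    """All conditions must hold; a single unmet criterion keeps testing open."""
    return coverage >= 0.80 and open_blockers == 0 and pass_rate >= 0.95

print(exit_criteria_met(coverage=0.85, open_blockers=0, pass_rate=0.97))  # True
print(exit_criteria_met(coverage=0.85, open_blockers=2, pass_rate=0.97))  # False
```

Treating the criteria as a conjunction makes the status check objective: either every condition holds or testing continues.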

Test estimation - The calculated approximation of a result related to various aspects of testing (for example, effort spent, completion date, costs involved, number of test cases), which is usable even if the input data may be incomplete, uncertain or noisy.

Metrics-based technique: estimating the test effort based on metrics of former similar projects, or based on typical values.

Expert-based technique: estimating the test effort based on the experience of the owners of the testing tasks or by experts.
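The metrics-based technique above can be sketched as a simple projection from historical data. All figures in this example are hypothetical, not values the syllabus prescribes:

```python
# Metrics-based test estimation: project test effort from the
# test-to-development effort ratios observed on similar past projects.

def metrics_based_estimate(dev_effort_days: float, past_ratios: list[float]) -> float:
    """Estimate test effort as development effort times the average
    historical test-to-development effort ratio."""
    avg_ratio = sum(past_ratios) / len(past_ratios)
    return dev_effort_days * avg_ratio

# Hypothetical history: past projects spent 25-35% of dev effort on testing.
estimate = metrics_based_estimate(200, [0.25, 0.30, 0.35])
print(round(estimate))  # 200 days * 0.30 average ratio = 60 days
```

An expert-based estimate would replace the historical average with judgments gathered from the owners of the testing tasks (for example, via Wideband Delphi), but the reconciliation into a single figure looks much the same.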

TEST MONITORING AND CONTROL

Test monitoring - A test management activity that involves checking the status of testing activities, identifying any variances from the planned or expected status and reporting the status to stakeholders.

Test control - A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.
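The relationship between monitoring and control can be illustrated with a small sketch: monitoring compares actual status against the plan, and a detected deviation triggers a control action. The metrics and thresholds below are invented examples:

```python
# Test monitoring: compare actual execution status against the plan and
# flag a deviation that test control would then act on. Figures are invented.

def pass_rate(passed: int, executed: int) -> float:
    """Fraction of executed test cases that passed."""
    return passed / executed if executed else 0.0

planned_executed, actual_executed = 120, 90   # cases planned vs run so far
passed = 81

rate = pass_rate(passed, actual_executed)
behind_schedule = actual_executed < planned_executed

print(f"executed {actual_executed}/{planned_executed}, pass rate {rate:.0%}")
if behind_schedule or rate < 0.95:
    print("deviation -> test control: re-plan, add resources or re-prioritize")
```

The point is the separation of concerns: monitoring only measures and reports; control decides what corrective action to apply.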

CONFIGURATION MANAGEMENT

Configuration management - A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.

Configuration management for testing may involve ensuring the following:

• All test items of the test object are uniquely identified, version controlled, tracked for changes and related to each other, that is, what is being tested.

• All items of testware are uniquely identified, version controlled, tracked for changes, related to each other and related to a version of the test item(s) so that traceability can be maintained throughout the test process.

• All identified work products and software items are referenced unambiguously in test documentation.

RISKS AND TESTING

Risk - A factor that could result in future negative consequences.

Risk level (risk exposure) - The qualitative or quantitative measure of a risk defined by impact and likelihood.

Product risk - A risk impacting the quality of a product.

Project risk - A risk that impacts project success.

Risk-based testing - Testing in which the management, selection, prioritization and use of testing activities and resources are based on corresponding risk types and risk levels.
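Since risk level is defined by impact and likelihood, a common working approximation is to score each risk as their product and test the highest-scoring items first. The 1-5 scales and the risk items below are hypothetical examples:

```python
# Risk-based prioritization: score each product risk as likelihood x impact
# (both on a hypothetical 1-5 scale) and address the highest scores first.

def risk_level(likelihood: int, impact: int) -> int:
    """Simple quantitative risk exposure: likelihood times impact."""
    return likelihood * impact

risks = [
    ("payment processing fails", 3, 5),   # rare but severe
    ("report layout misaligned", 4, 2),   # frequent but cosmetic
    ("login lockout too strict", 2, 3),
]

prioritized = sorted(risks, key=lambda r: risk_level(r[1], r[2]), reverse=True)
for name, likelihood, impact in prioritized:
    print(f"{risk_level(likelihood, impact):>2}  {name}")
```

Many teams score qualitatively (high/medium/low) instead of numerically; the ordering principle — effort follows risk level — is the same either way.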

Defect management - The process of recognizing and recording defects, classifying them, investigating them, taking action to resolve them and disposing of them when resolved.
