Quality Assurance concepts - tinasamson/QAautomation GitHub Wiki
Testing
Software that doesn't work or has even minor issues can cause loss of reputation, time, money, or even loss of life. Our goal should always be to create stable, functional software.
Objectives
- Identify defects: finding defects reduces the overall level of insufficient quality.
- Gain confidence: confidence that the software has reached the required quality and has met the requirements.
- Inform stakeholders: assess the quality of the software and inform stakeholders so they can make decisions based on that information.
- Prevent defects: evaluate work products such as requirements, user stories, design, etc. to prevent defects from being introduced.
- Verify requirements fulfillment: verify that all specified requirements have been fulfilled and implemented.
- Validation: Checking whether the system will meet user and other stakeholder needs in its operational environments.
- Compliance: To comply with contractual, legal, or regulatory requirements or standards (Example: ISO)
All these objectives serve one overarching goal: reducing the risk of insufficient software quality. Objectives can vary depending on:
- Component or system context being tested.
- Where we are in the development life cycle.
- Development life cycle model used.
Why is testing necessary?
It can help reduce the risk of failures during operation. Fixing defects contributes to the quality of the components or systems.
Testing techniques:
- Dynamic testing: Execution of the component or system being tested with some test data. This requires that the software code is implemented and running. It means the bugs are already there and we are trying to find them.
- Static testing: A strategy that prevents bugs from appearing in the software. It doesn't involve the execution of the component or system being tested (no code to execute or run). It involves techniques such as reviews of documents (requirements, user stories, etc.).
Both dynamic and static testing can be used to achieve similar objectives. Both provide information that can be used to improve the system being tested as well as the development and testing processes.
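As a minimal sketch of the dynamic side: the code must be implemented and runnable, and we execute it with concrete test data to look for defects. The `apply_discount` function here is hypothetical, invented purely for illustration.

```python
# Dynamic testing sketch: execute the component with test data
# and compare the actual outcome with the expected one.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount (hypothetical code under test)."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Execute the component with concrete test data and check the result.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(50.0, 0) == 50.0

test_apply_discount()
print("dynamic tests passed")
```

Static testing, by contrast, would review the specification or the source of `apply_discount` without ever running it.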
Who can use these techniques?
Anyone in the software life cycle (developers, customers, and/or system testers).
How testing contributes to the success of the software
- By involving testers in requirements reviews or user story refinement, defects can be detected early, which reduces the risk of incorrect or untestable functionality being developed.
- By having testers work closely with system designers while the system is being designed, both sides gain a better understanding of the design and how to test it. This reduces the risk of design defects and enables tests to be identified at an early stage.
- By having testers work closely with developers while the code is being developed, both sides gain a better understanding of the code and how to test it, which reduces the risk of defects in both the code and the tests.
- By verifying and validating the software prior to release, testers can detect failures that might otherwise have been missed and support the process of removing the defects that cause those failures (this is dynamic testing). This increases the likelihood that the software meets stakeholder needs and satisfies the requirements.
Dynamic testing: -> A good test is one that finds a defect if there is one. -> A test that doesn't find a defect has consumed resources but added almost no value.
How to design tests to find defects?
- Effective: By using proven, documented test design techniques.
- Efficient: Find the defect with the least effort, time, cost and resources.
Quality Assurance is typically focused on adherence to proper processes, in order to provide confidence that appropriate levels of quality will be achieved.
Work products: When processes are carried out properly, the work products (the outputs created by those processes) are generally of higher quality, which contributes to defect prevention.
Quality Control: Involves test activities that support the achievement of appropriate levels of quality. These test activities are part of the overall software development or maintenance process.
Quality assurance supports proper testing. Testing contributes to the achievement of quality in a variety of ways.
Quality management: Includes both quality assurance and quality control among other activities.
Reasons for errors
- Time pressure
- Human fallibility
- Inexperienced staff
- Miscommunication
- Complexity -> code, design, technologies, etc
- Complex interfaces
- New technologies
Defects
- Functional: for example, a defect caused by a miscalculation.
- Non-functional: for example, a system that can't accept more than 1000 users at a specific time.
With testing we ensure that key functional and non-functional requirements are examined before release and that defects are reported to the developers to fix. We measure the quality of the software in terms of the number of defects found, the tests run, and the parts of the system covered by the tests.
Do you think testing increases the quality of the software?
No, testing gives confidence in the quality of the software and the quality increases when defects are fixed.
Failure
- Caused by defects in the code.
- Caused by environmental conditions -> defects in firmware due to pollution, radiation, etc.
False positive vs False negative
We did something wrong (False) -> NEGATIVE: We didn't find the defect -> tests that fail to detect existing defects -> more dangerous, and can lead to severe problems after release.
We did something wrong (False) -> POSITIVE: We thought we found a defect -> it isn't actually a defect, just a bad bug report.
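A false negative can be made concrete with a tiny sketch: a buggy function whose defect escapes a weakly chosen test. Everything below is invented for illustration.

```python
# False negative illustration: the test passes even though the code
# contains a defect, because the chosen test data masks the bug.

def add(a: int, b: int) -> int:
    return a * b  # defect: should be a + b

# False negative: 2 * 2 == 2 + 2, so this check passes and the defect escapes.
assert add(2, 2) == 4

# Better test data exposes the defect: add(2, 3) returns 6, not the expected 5.
assert add(2, 3) != 5
print("the defect escaped the first check but was caught by the second")
```

A false positive would be the opposite: a test that fails (and produces a bug report) even though the code under test is actually correct, for example because the test itself encodes the wrong expected value.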
The root cause of a defect is the earliest action or condition that contributed to creating it.
Analyze defects: Identify root causes, so that we can reduce the occurrence of similar defects in the future.
Testing vs debugging
Testing: Executing tests can show failures that are caused by defects. Debugging: A development activity that finds, analyzes and fixes defects. Re-testing / confirmation testing: Checks whether the fixes resolved the defects.
Test coverage
Test coverage: A metric in software testing that measures the amount of testing performed by a set of tests. It measures the effectiveness of our testing.
What parts can we measure for coverage?
- Requirements coverage: Has the software been tested against all requirements.
- Structural coverage: Has each design element of the software (e.g., classes, functions) been exercised during testing.
- Implementation coverage: Has each line of code of the software been exercised during testing.
Why do we do test coverage?
- To find the areas in the specified requirements that are not covered by our tests
- To know where we need to create more test cases to increase our test coverage
- To obtain a quantitative measure of test coverage, which is an indirect method of quality checking
- To identify meaningless test cases that do not increase coverage.
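Requirements coverage in particular reduces to a simple set comparison, sketched below. The requirement IDs are invented for illustration.

```python
# Requirements coverage sketch: compare the set of requirement IDs
# against the requirements actually referenced by our test cases.

requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}  # the specified requirements
tested = {"REQ-1", "REQ-3"}                          # requirements exercised by existing tests

uncovered = requirements - tested                    # areas needing more test cases
coverage = len(tested & requirements) / len(requirements) * 100

print(f"coverage: {coverage:.0f}%")                   # quantitative measure: 50%
print(f"needs more test cases: {sorted(uncovered)}")  # ['REQ-2', 'REQ-4']
```

The same idea generalizes to structural and implementation coverage, where the sets would contain design elements or executed lines of code instead of requirement IDs.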
The seven testing principles
- Testing shows the presence of defects, not their absence
- Exhaustive testing is impossible
- Early testing saves time and money
- Defects cluster together
- Beware of the pesticide paradox
- Testing is context dependent
- Absence of errors is a fallacy
How much testing is enough?
Depends on the risk: -> The risk of missing important faults -> The risk of failure costs -> The risk of releasing untested or under-tested software -> The risk of losing credibility and market share -> The risk of missing a market window -> The risk of over-testing or ineffective testing
Risk information
Is used to determine: -> What to test first -> What to test most -> How to allocate the available testing time by prioritizing testing -> What not to test -> How thoroughly to test each item
Test condition: An item or event of a component or system that could be verified by one or more test cases.
Test case: A set of input values, preconditions, expected results and postconditions, developed for a particular objective or test condition.
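The test case definition above can be sketched as a small data structure. This is only an illustrative sketch; the field names and the login scenario are invented.

```python
# A test case as data: inputs, preconditions, expected result and
# postconditions, developed for one particular test condition.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_condition: str            # the item/event being verified
    preconditions: list[str]
    inputs: dict
    expected_result: str
    postconditions: list[str] = field(default_factory=list)

tc = TestCase(
    test_condition="Login rejects an invalid password",
    preconditions=["User 'alice' exists"],
    inputs={"username": "alice", "password": "wrong"},
    expected_result="Error message shown; user not logged in",
    postconditions=["Failed attempt is logged"],
)
print(tc.test_condition)
```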
Test Suite:
- Allows you to categorize test procedures in such a way that they match your planning and analysis needs.
- A test procedure can be added to multiple test suites and test plans; a test suite, in turn, can contain any number of tests.
- Are created based on the cycle or based on the scope. It can contain any type of tests, functional or non-functional.
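The points above can be sketched with Python's standard `unittest` module, which has an explicit `TestSuite` type. The `LoginTests` class and its placeholder checks are invented for illustration.

```python
# Grouping test procedures into suites that match planning needs:
# the same test can be added to more than one suite.
import unittest

class LoginTests(unittest.TestCase):
    def test_valid_login(self):
        self.assertTrue(True)        # placeholder for a real check

    def test_invalid_login(self):
        self.assertEqual(1 + 1, 2)   # placeholder for a real check

# Suites built by scope: a small smoke suite and a fuller functional suite.
smoke = unittest.TestSuite([LoginTests("test_valid_login")])
functional = unittest.TestSuite([
    LoginTests("test_valid_login"),   # same test procedure, second suite
    LoginTests("test_invalid_login"),
])

runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(functional)
print(f"ran {result.testsRun} tests, failures: {len(result.failures)}")
```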
The test process
The test process consists of various test activities one should go through to do proper testing. The test process depends on many contextual factors. Test process in context: -> test activities and tasks -> test work products -> traceability between the test basis and test work products.
Test planning: Where we define the objectives of testing and decide what to test, who will do the testing, how they will do the testing, etc.
Test Monitoring and Control
- Test monitoring: Is the ongoing activity of comparing actual progress against the test plan using any test monitoring metrics defined in the test plan.
- Test control: Taking any necessary action to stay on track and meet the targets.
Evaluating exit criteria
-> During test monitoring and control, we evaluate the exit criteria. -> Evaluating exit criteria is the activity where test execution results are assessed against the defined objectives.
Test Analysis
- It is concerned with the fine detail of knowing what to test, breaking it down into fine-grained testable elements that we call test conditions.
- It is the activity during which general testing objectives are transformed into real test conditions.
- During test analysis any information or documentation we have is analysed to identify testable features and define associated test conditions.
Test basis: Is any sort of documentation that we can use as a reference or base to know what to test.
Test Analysis activities:
- Analyzing and understanding any documentation that we will use for testing to make sure it is testable. Examples of the test basis include:
- Requirements specifications
- Design and implementation information
- The implementation of the component or system itself
- Risk analysis reports.
- Evaluating the test basis and test items to identify defects of various types such as: ambiguities, omissions, inaccuracies, contradictions, inconsistencies, superfluous statements.
After analyzing, understanding and evaluating the test basis, we should be able to:
- Identify features and sets of features to be tested.
- Define and prioritize test conditions for each feature based on analysis of the test basis, considering functional, non-functional and structural characteristics, other business and technical factors, and levels of risk.
- Our goal is to reduce the likelihood of omitting important test conditions and to define more precise and accurate test conditions.
- The application of black-box, white-box and experience based test techniques can be useful in the process of test analysis.
Test design
- The test conditions are elaborated into high-level test cases, sets of high-level test cases and other testware.
- Test analysis answers the question "What to test?" while test design answers the question "How to test?"
Activities
- Designing and prioritizing test cases and sets of test cases.
- Identifying necessary test data to support test conditions and test cases.
- Designing the test environment and identifying any required infrastructure and tools.
- Capturing bi-directional traceability between the test basis, test conditions, test cases and test procedures.
Test design can also reveal similar types of defects in the test basis, and this identification of defects is an important potential benefit.
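The bi-directional traceability mentioned above can be sketched as two simple maps: a forward map from test basis items (requirements) to test cases, and a backward map derived from it. All IDs are invented for illustration.

```python
# Bi-directional traceability sketch between the test basis and testware.

forward = {  # requirement -> test cases that cover it
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
}

# Derive the backward direction: test case -> requirements it traces to.
backward: dict[str, list[str]] = {}
for req, cases in forward.items():
    for tc in cases:
        backward.setdefault(tc, []).append(req)

print(backward)  # {'TC-1': ['REQ-1'], 'TC-2': ['REQ-1'], 'TC-3': ['REQ-2']}
```

Keeping both directions current is what lets us answer "which tests cover this requirement?" and "why does this test exist?" at any point in the process.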
Test implementation
Answers the question: "Do we now have everything in place to run the tests?"
Activities
- Developing and prioritizing test procedures, and potentially creating automated test scripts.
- Creating test suites from the test procedures and automated test scripts.
- Arranging the test suites within a test execution schedule in a way that results in efficient test execution.
- Building the test environment
- Preparing and implementing test data and ensuring it is properly loaded in the test environment.
- Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites.
Test design, test analysis and test implementation
- Test conditions -> test analysis
- Test cases -> test design
- Test procedures -> test implementation
We create test conditions in test analysis and those test conditions grow into test cases in test design and we put the steps to create the test procedures in test implementation.
- Design the data -> test design
- Implement the data -> test implementation
Test cases can't be test cases without data.
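That point can be sketched with a data-driven test: the same test logic executed against several sets of test data prepared during test implementation. The `to_celsius` function is hypothetical, invented for illustration.

```python
# Data-driven testing sketch: test data turns a test condition into
# concrete, executable test cases.

def to_celsius(fahrenheit: float) -> float:
    """Hypothetical code under test: Fahrenheit to Celsius, rounded to 1 decimal."""
    return round((fahrenheit - 32) * 5 / 9, 1)

# (input, expected) pairs: the prepared test data.
test_data = [
    (32.0, 0.0),
    (212.0, 100.0),
    (98.6, 37.0),
]

for fahrenheit, expected in test_data:
    actual = to_celsius(fahrenheit)
    assert actual == expected, f"{fahrenheit}F: expected {expected}, got {actual}"
print(f"{len(test_data)} data-driven checks passed")
```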
Test execution
-> Test suites are run in accordance with the test execution schedule. -> As tests are run, their outcomes (the actual results) need to be logged and compared to the expected results. -> Whenever there is a discrepancy between the expected and actual results, a test incident (bug report) should be raised to trigger an investigation.
Activities
- Keeping a log of testing activities, including the outcome (pass/fail) and the versions of software, data and tools, recording the IDs and versions of the test items or test object, the test tools and the testware used in running the tests.
- Running test cases in the determined order manually or using test automation tools.
- Comparing actual results with expected results.
- Analyzing anomalies to establish their likely causes.
- Reporting defects based on the failures observed with as much information as possible and communicate them to the developer to try and fix them.
- Retesting or repeating test activities to confirm that the bug was actually fixed, which is called confirmation testing.
- Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures and test results.
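The core of the execution loop above can be sketched in a few lines: run each test, compare actual vs. expected results, log the outcome, and raise an incident on any discrepancy. The names, the `square` test object and the log structure are all invented for illustration.

```python
# Test execution sketch: log outcomes and raise incidents on discrepancies.

def square(n: int) -> int:
    """Hypothetical test object."""
    return n * n

tests = [  # (test_id, input, expected result)
    ("TC-1", 3, 9),
    ("TC-2", 4, 16),
]

log, incidents = [], []
for test_id, value, expected in tests:
    actual = square(value)
    outcome = "pass" if actual == expected else "fail"
    log.append({"id": test_id, "actual": actual,
                "expected": expected, "outcome": outcome})
    if outcome == "fail":
        # A discrepancy triggers an incident (bug report) with as much
        # information as possible for the investigation.
        incidents.append({"id": test_id, "actual": actual, "expected": expected})

print(f"{len(log)} tests logged, {len(incidents)} incidents raised")
```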