VV Midterm - KU-SKE17/Software-Process GitHub Wiki
- week1
- week2
- Week 3 - Test Plan, Test Report
- 1. What Information is Included in a Test Plan?
- 2. Why do we need a good test plan?
- 3. What is the "focus" of a test plan, according to the UM presenter?
- 4. Test Plan Activities
- 5. What are the Stages of Software Testing?
- Test reports
- Risk based testing
- 6. What are the 5 main Defect Reporting Activities?
- 7. What are the risks associated with testing and test planning?
- 8. Defects are recorded in a Defect Tracking System
- 9. Contents of a Defect Report
- Week 4 - Project State
- Week 5 - Test Doubles
- Week 6 - Cucumber
- Week 7 - White-box, Black-box Testing
- Is the product **right**? - Testing: consistency, completeness, correctness
- Is it the **right** product? - Checking: is it what the customer wants?
- **Set up**: prepare the conditions needed for the tests
- **Invocation**: execute the test case
- **Assessment**: check the test output and behavior
- **Teardown**: close and delete files
- Automate checks with an oracle
- Run test cases and provide feedback
- Allow developers to unit test methods
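The four phases above can be sketched as a plain-Java test (a minimal illustration only; the temp-file fixture and class name are invented, not from the course):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LifecycleDemo {
    // Set up: prepare the conditions the test needs (here, a temp file).
    static Path setUp() throws IOException {
        Path f = Files.createTempFile("fixture", ".txt");
        Files.writeString(f, "hello");
        return f;
    }

    // Invocation: execute the behavior under test.
    static String invoke(Path f) throws IOException {
        return Files.readString(f).toUpperCase();
    }

    // Assessment: compare the output against the expected value (the oracle).
    static void assess(String actual) {
        if (!actual.equals("HELLO"))
            throw new AssertionError("expected HELLO, got " + actual);
    }

    // Teardown: delete the file so tests stay independent of each other.
    static void tearDown(Path f) throws IOException {
        Files.deleteIfExists(f);
    }

    public static void main(String[] args) throws IOException {
        Path f = setUp();
        try {
            assess(invoke(f));
            System.out.println("test passed");
        } finally {
            tearDown(f); // teardown runs even when assessment fails
        }
    }
}
```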
Fault -> Latent Error -> Effective Error -> Failure

- **Fault**: the cause of an error (a defect in the system)
- **Error**: the part of the system state that may lead to a failure
- **Failure**: the delivered service deviates from the service specification
- **Fault avoidance** - prevent faults by construction
- **Fault tolerance** - tolerate faults via redundancy
- **Fault removal** - minimize faults by verification
- **Fault forecasting** - estimate faults by evaluation
- **Availability**: readiness for correct service
- **Reliability**: continuity of correct service
- **Safety**: absence of catastrophic consequences on the user(s) and the environment
- **Integrity**: absence of improper system alteration
- **Maintainability**: ability to undergo easy maintenance (repair)
| Activities or phases in SDLC | V | V&V, including different kinds of testing | | |
|---|---|---|---|---|
| Requirements Elicitation or... | <-> | create acceptance test | -> | Acceptance Testing |
| System Design (Arch) | <-> | create system test | -> | System Testing, Integration Testing |
| Software, Module, or Detail Design | | | | Module Testing |
| Implementation (coding) | | create Test Plan | | Unit Testing |
- In Agile
- developers are supposed to write and run unit tests while writing code
- use C.I.
- In V-Model
- the last row [Implementation, Test Plan, Unit Testing] might overlap in time
- **Psychological problem**
  - The developer doesn't try very hard to make his own code fail
  - Thorough tests -> more work to fix code
- **Understanding or perspective problem** (testing from the code, not from the spec)
  - You write tests based on the code you already wrote, not on what the software should do
  - Your thinking is biased (you have looked at the code for too long)
  - Your tests might be incomplete (fail to test some part of the spec)
- **Code Coverage** -> statement coverage, branch coverage, path coverage
- **Mutation Testing** -> modify some part of the code and run the tests to see if the change is detected
  - Change operators: [+, -, *, /, //, %, or, and, not]
- Systems with multiple threads and/or events that can arrive in an **undetermined order**
- Different hardware environment can have different number of cores and CPUs, so the number of simultaneously running threads may differ, too.
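The undetermined arrival order can be seen in a tiny two-thread sketch (illustrative only; the thread names are invented):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class OrderDemo {
    public static void main(String[] args) throws InterruptedException {
        List<String> log = new CopyOnWriteArrayList<>();
        Thread a = new Thread(() -> log.add("A"));
        Thread b = new Thread(() -> log.add("B"));
        a.start(); b.start();   // the scheduler decides which runs first
        a.join(); b.join();

        // Both events always arrive, but their order is undetermined:
        // one run may log [A, B], another [B, A]. A test that asserts a
        // fixed order here would be flaky.
        if (log.size() != 2 || !log.contains("A") || !log.contains("B"))
            throw new AssertionError("both events must arrive exactly once");
        System.out.println(log);
    }
}
```

Note the assertion checks only order-independent properties (both events present), which is the safe way to test such code.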
- **Scope**
- **Schedule**
- **Test Environment** - equipment, software, servers
- **Resources** - people, equipment, software, servers
- Requirements Traceability Matrix (RTM) - trace requirements to tests, and tests to requirements
- What is *not* tested
- Test cases and scripts - separate documents
- **Entry-exit criteria**
- **Improve communication** between developers and management
- **Organize**, schedule, and manage the testing effort
- **List what outputs are expected**: tools, test artifacts, and reports the testers will create
- Helps with **measuring software quality**
- **Know when to stop**
- **Gain management understanding** and support

"Focus on the test plan **as a tool**"
Note: Concerns of tests
- Rapid change
- TBD requirement
- Passing tests ≠ no bugs left; have a backup plan
- Write only what is needed, but be complete
- List what you cannot test
- Have the test plan reviewed
- Use a Template, or design one
- Make it a "living" document
- Keep it up to date & relevant
- Part of the project "information radiator" [Scrum and XP from the Trenches]
- Test plan is online
- Unit Test Plan, including the RTM
- Create the test code - when is the test code written? (TDD?)
- Running unit tests (test plan: how to run tests?)
- Automate - use C.I. -> have results sent to your "information radiator" - EveHolder used a GitHub webhook to send notifications to Discord (information radiator)
- Integration tests: verifies that components work together as intended
- Functional tests - done by test team (& devs). The largest part of testing.
- DVT Plan & Test Cases
- System tests (E2E testing)
- Non-functional test "-ilities"
- Usability, Reliability, Scalability (load testing services)
- Performance
- Security
- Test plan is written by the test group with involvement of the customer
- Requires agreement by customer, management, and other "stakeholders"
- Beta tests
Test status report | Test report |
---|---|
How test cycle is going | How testing effort went |
Report after each test cycle | Report at the end |
It should have:
- What’s tested / not tested
- Show where you are in the schedule
- List open defects
**Risk**

- Impact: Depth (severity), Breadth (cost of the damage done)
- Loss & Likelihood (how likely an error is)
- Problems
- **Risk analysis**: determine the impact of various risks
- **Risk equation**: Risk = Impact × Likelihood
- **Risk appetite**: the amount of loss that management will accept
- **Risk mitigation**: REDUCE, AVOID, MANAGE, or TRANSFER the risk
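The risk equation can be shown with a small worked example (the 1-5 scoring scale, the appetite threshold, and the two scenarios are invented for illustration):

```java
public class RiskDemo {
    // Risk = Impact * Likelihood, with both factors scored on an
    // assumed 1..5 scale (the scale itself is not from the lecture).
    static int risk(int impact, int likelihood) {
        return impact * likelihood;
    }

    public static void main(String[] args) {
        int appetite = 10; // management accepts at most this risk score

        int dataLoss = risk(5, 3); // severe impact, moderate likelihood
        int uiGlitch = risk(2, 4); // minor impact, frequent

        System.out.println("data loss risk = " + dataLoss
                + (dataLoss > appetite ? " -> mitigate" : " -> accept"));
        System.out.println("UI glitch risk = " + uiGlitch
                + (uiGlitch > appetite ? " -> mitigate" : " -> accept"));
    }
}
```

Anything scoring above the appetite threshold gets a mitigation strategy (reduce, avoid, manage, or transfer); anything below is accepted.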
- **Verify it's really a defect**
- **Find the root cause** (was the test wrong? was the test or the app executed incorrectly? is the product wrong?)
- **Determine if it is reproducible** or repeatable (logs are key) - how to reproduce
  - **Find the minimum steps** to reproduce (investigate alternative paths)
  - Attempt to **isolate the issue**
  - **Additional info** that could be useful to developers
- (Determine if it is worth reporting)
  - Ensure it's not a duplicate
  - Talk with the developer
- Enter it into the system (defect tracker)
- **Make sure it gets fixed**
  - What must be done so the defect report is noticed?
- When? The **status** of a defect is **changed** by devs
- Result:
  - **fixed** - passes tests
  - **tests still fail** - send back to devs
  - **fixed, but a new defect is discovered** [e.g., a regression defect]
- Who closes a defect report? Depends on the project and organization.
- Tester can close
- Defect Review Board decides
- Lack of management support
- Lack of customer involvement
- financial
- material
- employee
- lives
- license
- reputation
- Searchable
- Consists of Defect Reports
**Identifying information**
- ID or number
- Submitter
- Submit Date
- Program, Component, or product it applies to.
- Product or component version
- Platform
**Description** of the problem
- Title
- be brief and descriptive, try to make it useful for search
- Description
- what actually happened and what should have happened
- the test case used
- any helpful information
**Status**
- Overall report status: open | closed | re-open
- Severity: High (critical), Medium, Low (can be worked around), Trivial
- Priority: P1, P2, ... (Bugzilla uses these priorities)
- Resolution status: (is someone working on this? is it fixed? will it ever be fixed?)
**Comments/Notes**

**Miscellaneous**: minor info about the defect, steps to reproduce
**Supporting Information**
- Error output
- Screenshots
- Test case (code)
- "flash drive with data or files"
- trace files, error logs, etc.
Activities | Unit Test | Design Verification | System Validation | Customer Acceptance |
---|---|---|---|---|
Integration testing | X | yes | X | X |
Functional test | X | yes | X | X |
Unit test plan | yes | X | X | X |
End-to-end test | X | X | yes | X |
Beta Testing | X | X | yes | yes |
Done by developers | yes | yes | X | X |
Done by test team | yes | yes | yes | yes |
**Why**

- Expensive to construct the real component
- Want to avoid side effects on the real component
- Test doubles help avoid flaky tests (tests that sometimes pass, sometimes fail)
- Provide a "fake" ecosystem for efficient unit testing
Provides test inputs:

- **Test Double** -> the general term for stubs, mocks, and fakes
- **Stub** -> an object that provides predefined answers to method calls
- **Mock** -> an object on which you set expectations
- **Fake** -> an object with limited capabilities (for the purposes of testing), e.g. a fake web service
- **Mockito**: a framework for creating test doubles
- An object that has no implementation, used purely to populate arguments of method calls that are irrelevant to your test
- (we do **not care about the details** of these objects in the test)
- Fill dummy values into objects required as parameters by the system under test but irrelevant to the test
- Won't bother creating it fully, especially if it's complicated
- Dummy input data sources used by the system under test (like a fake database)
- Don't want to mess with the real database
- Lightweight implementation of a heavyweight dependency, such as a database
- Use an in-memory database (instead of the external database)
- Fake objects created to run the test
- A way to determine whether our system is using other systems correctly
- **Spy object**: wraps a real object to monitor interactions
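The stub and spy ideas above can also be hand-rolled in plain Java without Mockito (a minimal sketch; the `Weather` interface and `advise` method are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class DoublesDemo {
    interface Weather { int tempFor(String city); }

    // System under test: depends on the Weather service.
    static String advise(Weather w, String city) {
        return w.tempFor(city) < 15 ? "coat" : "t-shirt";
    }

    // Stub: returns a predefined answer, no real network call.
    static class ColdStub implements Weather {
        public int tempFor(String city) { return 5; }
    }

    // Spy: records interactions so the test can verify correct usage.
    static class SpyWeather implements Weather {
        final List<String> calls = new ArrayList<>();
        public int tempFor(String city) { calls.add(city); return 20; }
    }

    public static void main(String[] args) {
        // Stub drives the SUT down the "cold" path deterministically.
        if (!advise(new ColdStub(), "Oslo").equals("coat"))
            throw new AssertionError("stub test failed");

        // Spy checks that the SUT queried the service with the right city.
        SpyWeather spy = new SpyWeather();
        advise(spy, "Bangkok");
        if (!spy.calls.equals(List.of("Bangkok")))
            throw new AssertionError("SUT did not use the service correctly");
        System.out.println("double tests passed");
    }
}
```

Mockito generates these classes for you; the sketch just makes the roles visible.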
In build.gradle

```gradle
dependencies {
    testImplementation 'org.mockito:mockito-core:3.+'
}
```
In testFile

```java
import static org.mockito.Mockito.*;

import org.junit.Before;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class CoffeeMakerTest {
    // test Mock objects
    @Mock
    private RecipeBook mockRecipeBook;

    @Before
    public void setUp() {
        // inject a mock object for each @Mock attribute
        // (initMocks(this) is deprecated; openMocks is the replacement)
        MockitoAnnotations.openMocks(this);
    }
}
```
- Matches .feature files to Java step-definition files
- Looks for directory matching and string pattern matching
- And just follows the steps
- Regular expression pattern -> method arguments
- Feature:
  - a keyword
  - 1 per .feature file
  - may have many scenarios
- Patterns allow matching on strings and can be combined into complex recognizers
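The "regular expression pattern -> method arguments" binding can be mimicked with `java.util.regex` (an illustration of the matching idea only; the step text and pattern are invented, and real Cucumber does this binding through its glue code and annotations):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StepMatchDemo {
    // A Cucumber-style step pattern: each capture group becomes one
    // argument of the step-definition method.
    static final Pattern STEP =
            Pattern.compile("I deposit (\\d+) baht into \"([^\"]*)\"");

    static String run(String stepText) {
        Matcher m = STEP.matcher(stepText);
        if (!m.matches()) return "no matching step definition";
        int amount = Integer.parseInt(m.group(1)); // group 1 -> int argument
        String account = m.group(2);               // group 2 -> String argument
        return "deposit(" + amount + ", " + account + ")";
    }

    public static void main(String[] args) {
        // Cucumber would now invoke deposit(100, "savings") on the step class.
        System.out.println(run("I deposit 100 baht into \"savings\""));
    }
}
```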
- **Test Case**: a set of inputs, execution conditions, and pass/fail criteria
- **Test Case Specification**: a requirement to be satisfied by 1 or more tests
- **Test Obligation**: a partial test specification; a stricter requirement to exercise some property deemed important through testing
- **Test Suite**: a set of test cases
- **Test Execution**: the act of executing test cases and evaluating their results
- **Test Selection**: selecting the right inputs for tests
- **Test Adequacy**: determining whether or not the tests meet their requirements
**Goal**

- Approximate adequacy (to determine correctness and to reach the desired level of dependability)
- Measured (through the test suite) against program structure, inputs, and requirements
- **Adequacy Criteria**: a predicate over a program [true (criteria satisfied), false (not satisfied)]
- **Set of test obligations**
- **Functional** test: from the spec
  - At least 1 test for each requirement
- **Fault-based** test: from common developer mistakes (bugs)
- **Model-based** test: from a system model
  - State machines -> visit every state
  - Use cases -> measure code coverage
  - UML diagrams -> check system behavior
- **Structural** test
  - Statement Coverage
  - Branch Coverage
  - Decision Coverage
- **Result**
  - **Satisfied**
    - Every test obligation is satisfied (per the specification) by 1 or more tests
    - All tests pass
  - **Not satisfied**
    - Provides info for improvement (tells you which requirements need more tests)
  - What to do about unsatisfiability?
    - Exclude (remove) unsatisfied obligations -> NO GOOD; it is hard to tell which are executable
    - Conclude by measuring a coverage percentage (e.g. 85% satisfied is okay)
      - Pro: the satisfied % tells the progress of testing
      - Con: the unsatisfied % might contain **effective faults**
**Comparing Adequacy Criteria**

- Empirical
  - Study the effectiveness of different approaches (depends on code structure)
- Analytical
  - Based on effectiveness guarantees (a stronger criterion gives a stronger guarantee)
Situations where adequacy criteria are useful (check the criteria -> define more tests or check the requirements)

- **Test Selection** approaches -> derive test cases to cover statements
- **Revealing Missing Tests** -> use statement coverage
- **Automated Test Generation** approaches -> generate tests automatically
**Goal**

- **Effective** at finding faults
- **Robust** to simple changes
- **Reasonable** number of tests and analysis effort
- **S**pecification [implemented by the Program]
- **P**rogram [-> program paths -> assess test coverage metric -> more tests]
- **T**est inputs [executed on the Program]
- **O**racle [evaluates the program paths]
Is testing adequate?
- Use the MC/DC metric (MC/DC is good when there are complex boolean expressions)
- Generate tests to achieve the metric
TOc = P * S * 2^T * O
To find more faults
- P: non-inlined -> inlined (better)
- O: change the Oracle to look at all internal state (observing every variable -> NO GOOD)
- Note: it is not enough to look at inputs/outputs; also look at the program structure and the oracle
- **Reachability**: the likelihood of being able to reach every branch/path of the program
  - use: MC/DC, path coverage, test doubles
  - Most coverage criteria already ensure reachability
- **Observability**: being able to observe the effect of inputs and state on the system by examining the outputs
  - use: MC/DC, strong mutation
MS = (#Dead / (#Mutants - #Equivalent)) * 100

Note:
- MS = Mutation Adequacy Score %
- Dead = mutants whose output differs from the original's
- Mutants = all mutated programs
- Equivalent ≈ 0 (can't find them)
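A worked example of the formula (the mutant counts are invented for illustration):

```java
public class MutationScoreDemo {
    // MS = (#Dead / (#Mutants - #Equivalent)) * 100
    static double score(int dead, int mutants, int equivalent) {
        return 100.0 * dead / (mutants - equivalent);
    }

    public static void main(String[] args) {
        // Example: 90 mutants generated, 2 turn out to be equivalent
        // (behave identically to the original program, so they can never
        // be killed), and the test suite kills 72 of the rest.
        // MS = 72 / (90 - 2) * 100 ≈ 81.8%
        System.out.printf("MS = %.1f%%%n", score(72, 90, 2));
    }
}
```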
**Impact**

- Reachability: mutants in **hard-to-reach statements** are **hard to kill**
- Observability: reachable mutants can still be stubborn, so improving observability will **improve the mutation score** by killing those mutants
- Most oracles are neither sound nor complete
- Oracle strength = effectiveness (a stronger oracle kills mutants more easily -> more effective)
- **Soundness**: when the oracle reports that a test fails, it is always correct
- **Completeness**: when the oracle reports that a test succeeds, it is always correct
- **Partition testing**: dividing a set of test conditions into groups (classes) whose members can be treated as the same

**Partition principle**

- Failures are usually scattered unevenly over the input space
- Find the portions of the input space that are dense in failures
- Partition the input space into classes

**Quasi partitions**

- Allow overlapping partitions
- Hope to find partitions that are dense in failures
- **Combinatorial testing**: a testing technique in which multiple combinations of the input parameters are used to test the software product

Break the set of values into categories before input:

- **Category partition testing** -> put values into categories
- **Pairwise testing** -> pair values from each category
- **Catalog based testing** -> use experience to put attributes into categories
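Partition testing can be illustrated with a small sketch (the `classify` function and its equivalence classes are invented for illustration):

```java
public class PartitionDemo {
    // System under test: classify an age value.
    static String classify(int age) {
        if (age < 0 || age > 120) return "invalid";
        return age < 18 ? "minor" : "adult";
    }

    public static void main(String[] args) {
        // Partition the input space into equivalence classes and pick one
        // representative per class, plus the boundary values between them.
        check(-1, "invalid");  // class: below the valid range
        check(0, "minor");     // lower boundary of the valid range
        check(17, "minor");    // class: 0..17
        check(18, "adult");    // boundary between minor and adult
        check(120, "adult");   // upper boundary of the valid range
        check(121, "invalid"); // class: above the valid range
        System.out.println("all partitions covered");
    }

    static void check(int age, String expected) {
        String got = classify(age);
        if (!got.equals(expected))
            throw new AssertionError(age + ": expected " + expected + ", got " + got);
    }
}
```

Six tests cover every class and boundary; testing many more values inside one class adds little, because all members of a class are treated as the same.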
-> no need for a program specification
- **Statement Coverage** -> every statement is executed at least once
- **Branch Coverage** -> execute both the if and the else cases (at least 2 tests)
- **Decision Coverage** -> execute both if and else even when the decision contains multiple conditions
- **MC/DC** (Modified Condition/Decision Coverage) -> for complex boolean expressions
- OMC/DC
  - uses mutation, approximated by tagging semantics
  - more robust than MC/DC
  - checks [==, !=, <, >, <=, >=]
- Selective mutation testing
  - generate only simple mutants [not all]
  - examine the operators, and fix the number of mutants based on the operators
  - create just enough mutants to have confidence
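The difference between statement and branch coverage can be shown with one function (an invented example):

```java
public class CoverageDemo {
    // An if with no else: the single test clamp(-5) executes every
    // statement (100% statement coverage), but the false branch of the
    // if is never taken. Branch coverage needs a second test, clamp(5),
    // that skips the body of the if.
    static int clamp(int x) {
        int y = x;
        if (x < 0) {
            y = 0;
        }
        return y;
    }

    public static void main(String[] args) {
        System.out.println(clamp(-5)); // true branch taken  -> 0
        System.out.println(clamp(5));  // false branch taken -> 5
    }
}
```

This is why branch coverage is strictly stronger than statement coverage: every test set that covers all branches covers all statements, but not vice versa.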
Bad requirements: unclear, imprecise, ambiguous

Write tests for:
- [Given] Input -> what to initiate
- [When] Test Procedure -> what/how to measure
- [Then] Expected Output -> what the expected result is

*Test cases can help define better requirements
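The Given/When/Then structure above can be laid out directly in a plain-Java test (a minimal sketch; the stack scenario is invented):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class GivenWhenThenDemo {
    public static void main(String[] args) {
        // [Given] Input -> what to initiate: a stack with one element
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);

        // [When] Test Procedure -> what/how to measure: push and peek
        stack.push(2);
        int top = stack.peek();

        // [Then] Expected Output -> what the expected result is:
        // the new element is on top, and nothing was lost
        if (top != 2 || stack.size() != 2)
            throw new AssertionError("expected top=2 and size=2");
        System.out.println("scenario passed");
    }
}
```

The same three-part shape is what Cucumber's Given/When/Then keywords make explicit in a .feature file.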