VV Midterm - KU-SKE17/Software-Process GitHub Wiki

Week 1

Verification

  • Are we building the product right?
  • Testing: consistency, completeness, correctness

Validation

  • Are we building the right product?
  • Checking: what the customer wants

Testing

  • Set up: prepare the conditions needed for the tests
  • Invocation: execute the test case
  • Assessment: check the test output and behavior
  • Teardown: clean up, e.g., close connections and delete temporary files

Test Frameworks

  • Automate checking by an oracle
  • Run test cases and provide feedback
  • Allow developers to unit test methods
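
A minimal JUnit 4 sketch (the Calculator class and its add method are hypothetical) showing a test framework driving the four phases listed above:

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {
  private Calculator calc;  // hypothetical class under test

  @Before
  public void setUp() {
    // Set up: prepare the conditions needed for the test
    calc = new Calculator();
  }

  @Test
  public void testAdd() {
    // Invocation: execute the test case
    int result = calc.add(2, 3);
    // Assessment: the assertion acts as the oracle and checks the output
    assertEquals(5, result);
  }

  @After
  public void tearDown() {
    // Teardown: release resources, close or delete temporary files
    calc = null;
  }
}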

Week 2

Mistake

Fault -> latent Error -> effective Error -> Failure
  • Fault: the cause of an error (e.g., a defect in the code)
  • Error: the part of the system state that may lead to a failure
  • Failure: the delivered service deviates from the service specification
  • Example: a typo in the code (fault) puts a wrong value into a variable (error), which makes the program return a wrong result to the user (failure)

Dependability

  • Fault avoidance - prevent faults by construction
  • Fault tolerance - provide redundancy so the system keeps delivering service despite faults
  • Fault removal - minimize faults by verification
  • Fault forecasting - estimate remaining faults by evaluation

Dependability Measure

  • Availability: readiness for correct service
  • Reliability: continuity of correct service
  • Safety: absence of catastrophic consequences on the user(s) and the environment
  • Integrity: absence of improper system alteration
  • Maintainability: ability for easy maintenance (repair)

V-Model

Activities or phases on the left side of the SDLC "V" are matched with V&V activities on the right side, including different kinds of testing:

  • Requirements Elicitation <-> write acceptance test plan <-> Acceptance Testing
  • System Design (Architecture) <-> write system/integration test plan <-> System Testing, Integration Testing
  • Software, Module, or Detailed Design <-> Module Testing
  • Implementation (coding) <-> write test plan <-> Unit Testing
  • In Agile
    • developers are supposed to write and run unit tests while writing code
    • use C.I.
  • In V-Model
    • the last pair [Implementation, Test Plan, Unit Testing] might overlap and happen at the same time

Problems with having a developer write tests for his own code

  • psychological problem
    • The developer doesn't try very hard to make his own code fail
    • Thorough tests -> more work to fix code
  • understanding or perspective problem
    • (test from the code, not test from the spec)
    • You write tests based on the code you already wrote, not based on what software should do
    • Your thinking is biased (you have been looking at the code for too long)
    • Your tests might be incomplete (fail to test some part of the spec)

2 techniques to measure how thorough our tests are

  • Code Coverage -> statement coverage, branch coverage, path coverage
  • Mutation Testing -> modify some part of the code & run the tests to see if the change is detected
    • Change operators [+, -, *, /, //, %, or, and, not]
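
An illustrative mutation-testing sketch (the MathUtil class and its max method are hypothetical): a mutation tool applies a change operator to the code, and a thorough test suite should detect (kill) the mutant.

// Illustrative sketch (hypothetical class)
class MathUtil {
  static int max(int a, int b) {
    return (a > b) ? a : b;          // original code
  }

  static int maxMutant(int a, int b) {
    return (a < b) ? a : b;          // mutant: change operator replaced '>' with '<'
  }
}

// A test asserting max(7, 3) == 7 passes on the original but fails on the
// mutant, so the mutant is killed, i.e., the change is detected by the tests.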

What is a concurrent system, and why is it difficult to test?

  • System with multiple threads and/or events that can arrive in undetermined order
  • Different hardware environment can have different number of cores and CPUs, so the number of simultaneously running threads may differ, too.

Week 3 - Test Plan, Test Report

1. What Information is Included in a Test Plan?

  • Scope
  • Schedule
  • Test Environment - equipment, software, servers
  • Resources - people, equipment, software, servers
  • Requirements Traceability Matrix (RTM) - trace requirements to tests, and tests to requirements
  • What is not tested
  • Test cases and scripts - separate documents
  • Entry-exit criteria

2. Why do we need a good test plan?

  • Improve communication between developers and management
  • Organize, schedule, and manage testing effort
  • List what outputs are expected: tools, test artifacts, and reports the testers will create
  • Helps with measuring software quality
  • Know when to stop
  • Gain management understanding and support

3. What is the "focus" of a test plan, according to the UM presenter?

"Focus on test plan as a tool"

Note: Concerns of tests

  • Rapid change
  • TBD requirements
  • Testing ≠ "no bugs left"; have a backup plan

4. Test Plan Activities

  • Write only what is needed, but be complete
  • List what you cannot test
  • Have the test plan reviewed
  • Use a Template, or design one
  • Make it a "living" document
  • Keep it up to date & relevant
  • Part of the project "information radiator" [Scrum and XP from the Trenches]
  • Test plan is online

5. What are the Stages of Software Testing?

1. Unit Test

  • Unit Test Plan, including the RTM
  • Create the test code - when is the test code written? (TDD?)
  • Running unit tests (test plan: how to run tests?)
  • Automate - use C.I. -> have results sent to your "information radiator" - EveHolder used a GitHub webhook to send notifications to Discord (information radiator)

2. Design Verification Test

  • Integration tests: verifies that components work together as intended
  • Functional tests - done by test team (& devs). The largest part of testing.
  • DVT Plan & Test Cases

3. System Validation Test

  • System tests (E2E testing)
  • Non-functional test "-ilities"
  • Usability, Reliability, Scalability (load testing services)
  • Performance
  • Security

4. Customer Acceptance Testing

  • Test plan is written by test group with involvement of customer
  • Requires agreement by customer, management, and other "stakeholders" (e.g., beta tests)

Test reports

  • Test status report: how the test cycle is going; written after each test cycle
  • Test report: how the whole testing effort went; written at the end

It should have:

  • What’s tested / not tested
  • Show where you are in the schedule
  • List open defects

Risk based testing

Risk

  • Impact: depth (severity) and breadth (cost of the damage done)
  • Loss & Likelihood (how likely the error is to occur)
  • Problems

Risk analysis: determine the impact of various risks

Risk equation: Risk = Impact * Likelihood
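
For example (illustrative scales): a risk with impact 4 (on a 1-5 scale) and likelihood 0.5 has Risk = 4 * 0.5 = 2, which can be compared with other risks to decide what to test first.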

Risk appetite: amount of loss that management will accept

Risk mitigation: REDUCE, AVOID, MANAGE, TRANSFER risk

6. What are the 5 main Defect Reporting Activities?

1. Analyze the defect

  • Verify it's really a defect
  • Find the root cause (was the test wrong, was the test or app executed incorrectly, or is the product wrong?)
  • Determine if it is reproducible or repeatable (logs are key)
  • How to reproduce it
  • Find the minimum steps to reproduce (investigate alternative paths)
  • Attempt to isolate the issue
  • Additional info that could be useful to developers
  • (Determine if it is worth reporting)

2. Report it

  1. Ensure it's not a duplicate
  2. Talk with the developer
  3. Enter into the system (defect tracker)
  4. Make sure it gets fixed
    • what must be done so the defect report is noticed?

3. Track the status (make sure devs notice it and act on it)

4. Retest

  • When? When the status of the defect is changed by the devs
  • result:
    • fixed - passes tests
    • tests still fail - send back to devs
    • fixed, but a new defect is discovered [e.g., a regression defect]

5. Close the defect report

  • Who closes a defect report? Depends on the project and organization.
  • Tester can close
  • Defect Review Board decides

7. What are the risks associated with testing and test planning?

  • Lack of management support
  • Lack of customer involvement
  • financial
  • material
  • employee
  • lives
  • license
  • reputation

8. Defects are recorded in a Defect Tracking System

  • Searchable
  • Consists of Defect Reports

9. Contents of a Defect Report

Identifying information

  • ID or number
  • Submitter
  • Submit Date
  • Program, Component, or product it applies to.
  • Product or component version
  • Platform

Description of the problem

  • Title
    • be brief and descriptive, try to make it useful for search
  • Description
    • what actually happened and what should have happened
    • the test case used
    • any helpful information

Status

  • Overall report status: open | closed | re-open
  • Severity: High (critical), Medium, Low (can be worked around), Trivial
  • Priority: P1, P2, ... (Bugzilla uses these priorities)
  • Resolution status: (is someone working on this? is it fixed? will it ever be fixed?)

Comment/Notes

Miscellaneous: (minor defect info, steps to reproduce)

Supporting Information

  • Error output
  • Screenshots
  • Test case (code)
  • "flash drive with data or files"
  • trace files, error logs, etc.

Week 4 - Project State

here!

Which activities belong to which test stage (yes = applies, X = does not apply):

Activity | Unit Test | Design Verification | System Validation | Customer Acceptance
Integration testing | X | yes | X | X
Functional test | X | yes | X | X
Unit test plan | yes | X | X | X
End-to-end test | X | X | yes | X
Beta testing | X | X | yes | yes
Done by developers | yes | yes | X | X
Done by test team | yes | yes | yes | yes

Week 5 - Test Doubles

VAV2021 mockobjects

Why

  • Expensive to construct the real component
  • Want to avoid side effects on the real component
  • Test doubles help avoid flaky tests (tests that sometimes pass and sometimes fail)
  • Provide a "fake" ecosystem for efficient unit testing

Provide controlled test inputs to the system under test.

Notes

  • Test Double -> the general term for stubs, mocks and fakes.
  • Stub -> an object that provides predefined answers to method calls.
  • Mock -> an object on which you set expectations.
  • Fake -> an object with limited capabilities (for the purposes of testing), e.g. a fake web service.
  • Mockito: framework to create test doubles

Dummy Objects

  • An object that has no implementation, used purely to populate arguments of method calls that are irrelevant to your test.
  • (we do not care about the details of these objects in the test)
  • Fill in dummy values for objects required as parameters of the system under test but irrelevant to the test
  • Saves the effort of creating the real object, especially if it is complicated to construct

Test Stubs

  • Dummy input data sources used by the system under test (like a fake database)
  • Useful when we don't want to mess with the real database

Fake Objects

  • Lightweight implementation of heavyweight processes like database
  • e.g., use an in-memory database instead of a real external database

Mock Objects

  • Fake objects created for the test, with expectations set on how they will be called
  • A way to determine whether our system is using other systems correctly
  • Spy object: wrap around real obj to monitor interactions

Create mock objects

VAV2021 using-mockito

In build.gradle

dependencies {
    testImplementation 'org.mockito:mockito-core:3.+'
}

In the test file

import static org.mockito.Mockito.*;
import org.junit.Before;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class CoffeeMakerTest {
  // test Mock objects
  @Mock
  private RecipeBook mockRecipeBook;

  @Before
  public void setUp() {
    // inject mock objects for each @Mock attribute
    MockitoAnnotations.initMocks(this);  // deprecated; newer Mockito versions use MockitoAnnotations.openMocks(this)
  }
}
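
A sketch of how the mock might then be stubbed and verified in a test. The RecipeBook.getRecipes() method and the CoffeeMaker constructor used here are assumptions about the project's API, and JUnit's @Test and assertArrayEquals are assumed to be imported:

  @Test
  public void testGetRecipes() {
    Recipe[] recipes = new Recipe[] { new Recipe() };
    // Stub: give the mock a predefined answer
    when(mockRecipeBook.getRecipes()).thenReturn(recipes);

    // Exercise the system under test with the mock injected (constructor is assumed)
    CoffeeMaker coffeeMaker = new CoffeeMaker(mockRecipeBook, new Inventory());
    assertArrayEquals(recipes, coffeeMaker.getRecipes());

    // Verify that the system under test called the mock as expected
    verify(mockRecipeBook, times(1)).getRecipes();
  }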

Week 6 - Cucumber

gift note

Use cases -> test cases

  • Matches .feature files to Java step-definition files
  • Looks for directory matches and string pattern matches
  • Then just follows the steps in each scenario
  • Regular expression pattern -> method arguments (see the step-definition sketch at the end of this section)

Use the Gherkin language: structured English

  • Feature:
    • Keyword
    • 1 per .feature file
    • May have many scenarios

Regular expressions (regex)

  • Allow pattern matching for strings and can be combined into complex recognizers
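
A minimal sketch of a Cucumber step definition in Java (the step text, class name, and captured value are illustrative): the anchored regular expression matches a Gherkin step, and the capture group becomes the method argument.

import io.cucumber.java.en.When;

public class AccountSteps {
  // Matches the Gherkin step:  When I deposit 100 baht
  // The capture group (\d+) is converted to the int parameter "amount".
  @When("^I deposit (\\d+) baht$")
  public void iDepositBaht(int amount) {
    // call the system under test with the captured amount here
  }
}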

Week 7 - White-box, Black-box Testing

Terminology

  • Test Case: Set of inputs, execution conditions, pass/fail criteria
  • Test Case Specification: Requirement to be satisfied by 1 or more tests
  • Test Obligation: a partial test case specification, requiring some property deemed important to thorough testing
  • Test Suite: Set of test cases
  • Test Execution: Act of executing test case and evaluating results

Adequacy Testing

  • Test Selection: selecting the right inputs for tests
  • Test Adequacy: determining whether or not the tests meet the requirements

Goal

  • approximate adequacy (to determine correctness and to reach the desired level of dependability)
  • by measuring (through the test suite) program structure, inputs, and requirements

Adequacy Criteria

  • Adequacy Criteria: a predicate over a program and its test suite [true (criteria satisfied), false (not satisfied)]

  • Set of test obligations

    • Functional test: spec
      • At least 1 test for each requirement
    • Fault-based test: common developer mistakes (bugs)
    • Model-based test: system model
      • State machines -> visit every state
      • Use cases -> measure code coverage
      • UML diagram -> check system behavior
    • Structural test
      • Statement Coverage
      • Branch Coverage
      • Decision Coverage

Result

satisfied

  1. Every test obligation (from the specification) is satisfied by 1 or more tests
  2. All tests pass

not satisfied

  • Provide info for improvement (lets you know which requirements need more tests)
  • What to do with unsatisfiable obligations?
    1. exclude (remove) the unsatisfied obligations -> NOT GOOD, it is hard to tell which obligations are actually satisfiable (e.g., which statements are executable)
    2. measure a coverage percentage instead (e.g., 85% satisfied is acceptable)
      • pro: the satisfied % tells the progress of testing
      • con: the unsatisfied % might still contain effective faults

Comparing Adequacy Criteria

  • Empirical
    • Study the effectiveness of different approaches experimentally (depends on the code structure)
  • Analytical
    • Compare criteria by the strength of the guarantees they give (a stronger criterion gives a stronger guarantee)

Situations where adequacy criteria are useful (check the criteria -> define more tests or re-check the requirements)

  • Test Selection approaches -> derive test cases to cover statements
  • Revealing Missing Tests -> use statement coverage
  • Automated Test Generation approaches -> generate tests automatically to satisfy the criteria

Goal

  • Effective at finding faults
  • Robust to simple changes
  • Reasonable number of tests and analysis effort

Test artifacts/process

  1. Specification [implements Program]
  2. Program [-> program paths -> assess test coverage metric -> more tests]
  3. Test inputs [Executed on Program]
  4. Oracle [Evaluate Program path]

Is testing adequate?

  • Use the MC/DC metric (MC/DC is good for complex boolean expressions)
  • Generate tests to achieve the metric
TOc = P * S * 2^T * O

To find more faults

  • P: non-inlined -> inlined (better)
  • O: change the Oracle to look at all internal state (observing every variable -> NOT GOOD)
  • Note: it is not enough to look only at inputs/outputs; also look at the program structure and the oracle

Reachability

  • Reachability: likelihood to be able to reach every branch/path of the program
  • use: MC/DC, path coverage, test doubles
  • Most coverage already ensures reachability

Observability

  • Observability: to be able to observe the effect of inputs and state on the system by examining the outputs
  • use: MC/DC, strong mutation

Mutation test

MS = (#Dead / (#Mutants - #Equivalent)) * 100

Note.

  • MS = Mutation Adequacy Score (%)
  • Dead = mutants whose output differs from the original (killed mutants)
  • Mutants = all mutated programs
  • Equivalent ≈ 0 (equivalent mutants usually cannot be found)
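
For example (illustrative numbers): if a tool generates 100 mutants, 5 of them are equivalent, and the test suite kills 76, then MS = (76 / (100 - 5)) * 100 = 80%.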

Impact

  • Reachability: Mutants in hard-to-reach statements are hard to kill
  • Observability: reachable mutants can still be stubborn, so improving observability improves the mutation score by killing those mutants

Oracle

  • Most oracles are neither sound nor complete
  • Oracle strength = effectiveness (a stronger oracle kills mutants more easily -> more effective)

Soundness

  • Soundness: correctly determines that a test fails
  • test -> always correct

Completeness

  • Completeness: correctly determines that a test succeeds
  • program -> always correct

Black-box Testing: Test App

Partition Test

  • Partition testing: divide a set of test conditions into groups or sets that can be considered the same
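
For example (illustrative): if a field accepts ages 18-65, the input space can be partitioned into three classes: below 18 (invalid), 18-65 (valid), and above 65 (invalid); one representative test per class is usually enough.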

Partition principle

  • Failures are usually not evenly scattered: they are sparse in the whole input space but dense in some parts
  • Find the portions of the input space that are dense in failures
  • Partition the input space into classes

Quasi Partitions

  • Allow partitions to overlap
  • Hope to find partitions that are dense in failures

Combinatorial Test

  • Combinatorial testing: a testing technique in which multiple combinations of the input parameters are used to perform testing of the software product

Break the set of input values into categories before testing:

  • Category partition testing -> put values into categories
  • Pairwise testing -> pair values from each category (see the example below)
  • Catalog-based testing -> use experience (a catalog) to put attributes into categories
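
For example (illustrative): with three parameters OS {Windows, Linux}, Browser {Chrome, Firefox}, and DB {MySQL, Postgres}, exhaustive testing needs 2 x 2 x 2 = 8 combinations, but the 4 pairwise test cases Windows/Chrome/MySQL, Windows/Firefox/Postgres, Linux/Chrome/Postgres, and Linux/Firefox/MySQL already cover every pair of values.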

White-box Testing: Test Code

Structural testing

-> does not need the program specification

  • Statement Coverage -> every statement is executed at least once
  • Branch Coverage -> execute both the if and else cases (at least 2 tests; see the sketch after this list)
  • Decision Coverage -> exercise both if-else outcomes even when the decision contains multiple conditions
  • MC/DC (Modified Condition/Decision Coverage) -> for complex boolean expressions
  • OMC/DC
    • use mutation, approximate by tagging semantics
    • more robust than MC/DC
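
A small sketch of the difference between statement and branch coverage (the abs method is hypothetical):

class Example {
  // hypothetical method under test
  static int abs(int x) {
    if (x < 0) {
      x = -x;      // statement A
    }
    return x;      // statement B
  }
}

// Test 1: abs(-5) executes A and B -> 100% statement coverage,
//         but only the "true" branch of the if is taken.
// Test 2: abs(5) is also needed for branch coverage: it exercises
//         the "false" branch where statement A is skipped.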

Boundary Value testing

  • check [==, !=, <, >, <=, >=]
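
For example (illustrative): for a condition such as age >= 18, test at least age = 17, 18, and 19 so that an off-by-one mistake (e.g., > written instead of >=) is caught at the boundary.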

Mutation testing

  • Selective mutation testing
    • generate only simple mutants [not all of them]
    • examine the operators and fix the number of mutants based on the operators
    • create just enough mutants to have confidence

Requirement

A bad requirement is unclear, imprecise, or ambiguous

Write test for:

  • [Given] Input -> what to initiate
  • [When] Test Procedure -> what/how to measure
  • [Then] Expected Output -> what is expected result

*Test cases can help define better requirements
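
For example (illustrative): Given a registered user with a balance of 100 baht, When the user withdraws 30 baht, Then the new balance is 70 baht.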
