Feedback on SWEBOK V4 - samm82/TestGen-Thesis GitHub Wiki

All information here is limited to the software-testing-related areas of the document; currently, this includes the introductory pages, Ch. 1 Software Requirements, and Ch. 5 Software Testing.


In increasing order of importance:
(G)rammatical/typographical issue
(S)uggestion (stylistic or structural)
(A)mbiguity (unclear meaning)
(O)mission (either from SWEBOK v3.0 or just not included)
(E)rror (in content)


Page xxiv

  • (G) "2004" shouldn't be on a new line after "SWEBOK", and the period should be removed
  • (G) Shouldn't it be '…the term "software". The term "software engineering" was…'? Same on page xxv, where the former term is italicized (which would also work)

Software Requirements

Page 1-20

  • (S) Why not combine the two sentences "Determining an expected result is not always possible. Additional business domain expertise might be necessary," into "…is not always possible and may require additional business domain expertise"?

Software Testing

Page 5-1

  • (G) In the sentence containing "…essential for the execution, such as…", "execution" should be more specific; for example, "execution of a test".

Page 5-4

  • (S) Subsections 1-6, 10, 12, and 14 of Section 1.2 Key Issues seem to focus more on "areas for consideration" or "topics" of testing than actual "issues"; could this section be split into two: one of actual issues and one of these "areas for consideration"?

Page 5-5

  • (O) More information on the types of oracles would be beneficial; what are the differences between them?
  • (G) Shouldn't it be "The automation of mechanized oracles…"?
  • (A) What does "managing the infeasible paths" mean? Identifying them? Removing them?
  • (G) The phrase "infeasible paths can also be connected to the analysis and detection process for security vulnerabilities" seems confusing; why not say "the analysis and detection process for security vulnerabilities sometimes involves infeasible paths"; does that change the intent?
  • (E) The sentence "Additionally, infeasible paths can also be connected to the analysis and detection process for security vulnerabilities and can improve accuracy," implies that "infeasible paths … can improve accuracy", which doesn't make sense.
  • (S) Similar issue as the one described for Page 5-4: "Evaluating the SUT, measuring a testing technique’s efficacy, and judging whether testing can be stopped are important issues for software testing...."; I don't think "issues" is the best word choice here.
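
For context on the oracle comments above: a test oracle is any mechanism for deciding whether the SUT behaved correctly on a given input. A minimal sketch of one common kind, a pseudo-oracle that compares the SUT against an independent reference implementation (the function names and the summation example are invented for illustration, not from SWEBOK):

```python
def fast_sum(xs: list[int]) -> int:
    """Hypothetical SUT: a hand-rolled summation we want to test."""
    total = 0
    for x in xs:
        total += x
    return total

def reference_sum(xs: list[int]) -> int:
    """Independent, trusted implementation acting as a pseudo-oracle."""
    return sum(xs)

# The oracle's verdict: pass iff the SUT agrees with the reference.
for case in ([], [1, 2, 3], [-5, 5]):
    assert fast_sum(case) == reference_sum(case)
```

Contrasting this kind with others the chapter names (e.g., human vs. mechanized oracles) is exactly the information whose omission is noted above.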

Page 5-6

  • (S) Similar issue as the one described for Page 5-4: The section on Off-Line vs. Online Testing should likely be moved; why not to Section 3.7. Selecting and Combining Techniques?
  • (G) It looks weird to see "off-line" with a hyphen and "online" without. I personally would remove the hyphen in "off-line".
  • (S) The last sentence of the Off-Line vs. Online Testing section doesn't seem to add much information, since this is true of almost every type of testing! Should this be removed?

Page 5-7

  • (G) Should be "…the end-users' expectations…" (with an apostrophe).

Page 5-8

  • (S) "…test cases to increase the rate of fault detection, the likelihood of revealing faults, the coverage…" seems a bit redundant. Why not just say "…test cases to increase the rate and likelihood of fault detection, the coverage…"?

Page 5-9

  • (A) "Scalability testing evaluates the capability to use and learn the system and the user documentation. It also focuses on the system's effectiveness in supporting user tasks and the ability to recover from user errors"; this seems to describe usability testing, which evaluates "how easy it is for end-users to learn to use the software" (p. 5-10).
  • (E) Software Engineering by Ian Sommerville is cited as the only source for the section on elasticity testing, but the words "elasticity" and "elastic" don't even show up in the book!
  • (A) The definition of "elasticity testing" says that one of its objectives is "to evaluate scalability"; wouldn't this be an objective of scalability testing instead?
  • (G) It should be "…after a system crash or other disasters." to be consistent with the singular "a system crash", as it was in v3.0.

Page 5-10

  • (G) It should be "…whether a component's interface provides the correct…" or "…whether the components' interfaces provide the correct…" for consistency.

Page 5-11

  • (G) Should "Boundary-Value Analysis" and "All-Combinations Testing" have hyphens or not? They don't in ISO/IEC/IEEE 29119-4:2021, for example (see 3.3 and 3.12, respectively).

Page 5-12

  • (S) It should be more explicit that "fuzzy testing" (why not "fuzz testing", which is more common?) is a subset of random testing, as mentioned in v3.0.
  • (A) Evidence-based testing isn't really defined/described, just how it applies to software engineering as a whole. What questions would be relevant? What kinds of evidence would be useful? It seems more like a "workflow" for designing other kinds of tests rather than an actual type of testing.
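
As a concrete illustration of the v3.0 framing mentioned above (fuzz testing as a special case of random testing), a minimal sketch; `parse_digits` and the input distribution are invented for illustration:

```python
import random

def parse_digits(s: str) -> int:
    """Hypothetical SUT: should reject malformed input cleanly, never crash."""
    if not s.isdigit():
        raise ValueError("not a number")
    return int(s)

def fuzz(trials: int = 200, seed: int = 0) -> int:
    """Random testing specialized to robustness: feed randomly generated
    (mostly malformed) strings and count unexpected crashes."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(trials):
        s = "".join(chr(rng.randrange(32, 127)) for _ in range(rng.randrange(1, 10)))
        try:
            parse_digits(s)
        except ValueError:
            pass          # expected, graceful rejection of bad input
        except Exception:
            crashes += 1  # anything else would be a defect
    return crashes
```

The "random testing" part is the random input generation; the "fuzz" specialization is the focus on malformed inputs and crash-style failures rather than checking exact outputs, which is why making the subset relationship explicit (as v3.0 did) would help.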

Page 5-13

  • (O) No explicit definition of control flow testing in general
  • (O) Does control flow testing for "blocks of statements[] or specific combinations of statements" have specific names? Definitions?
  • (O) No explicit definition of data flow testing in general, or of most of its subtypes
  • (G) Who does "his/her" refer to in Experience-Based Techniques? This should be replaced with "the tester's".
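
Since the chapter leaves control flow testing undefined, a one-line working definition helps frame the omission: control flow testing derives test cases from the program's control flow graph until a chosen coverage criterion (statements, branches, paths) is satisfied. A minimal branch-coverage sketch, with an invented function:

```python
def classify(x: int) -> str:
    """Hypothetical SUT with two decision points."""
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"

# Branch coverage: every decision outcome is exercised at least once.
#   x < 0  -> true via -5; false via 0 and 7
#   x == 0 -> true via 0;  false via 7
branch_tests = {-5: "negative", 0: "zero", 7: "positive"}
for x, expected in branch_tests.items():
    assert classify(x) == expected
```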

Page 5-14

  • (A) How is monkey testing different from fuzz testing?
  • (A) The definition of buddy testing seems vague; where does the "buddy" part come into play? How is it different from pair testing?
  • (G) I think "Pair testing allows for generating test cases…" is more clear.
  • (G) I think "…with broader and better test coverage." is more consistent.
  • (A) The purpose of quick testing is unclear; how can "a very small test suite" provide a guarantee of something? I'm assuming the "SUT components that are not fully operational" are what would be the target of this testing?
  • (A) Are knowledge-based testing and ML-based testing actual testing techniques, or do they just outline where information that can be used for testing can be obtained? This seems like a trivial distinction to make (e.g., why not "literature-based testing" or "encyclopedia-based testing"?).
  • (A) How are knowledge-based testing and ML-based testing different?
  • (G) "Exploit" doesn't seem like a good word choice; why not "use"?

Page 5-15

  • (O) Was the omission of "operational testing" as a form of testing intentional? It was defined in SWEBOK v3.0 (p. 4-6).

Page 5-16

  • (S) I'm not sure that this section on combining functional and structural testing is beneficial. I don't think anyone assumes that one would only use tests at one phase, for example; would people really think about using just functional or structural testing?
  • (O) "Combining different testing techniques" seems to be the focus of this section (3.7. Selecting and Combining Techniques), but there is no information about how deterministic and random testing would be combined.
  • (S) Is there a better place for Section 3.7.2 Deterministic vs. Random? It doesn't really talk about testing techniques, but about techniques for determining the source of the test data, and it also doesn't discuss how one could/would combine the two.
  • (O) The claim is made that "Several analytical and empirical comparisons have been conducted to analyze the conditions that make one approach more effective than the other," but no examples (or their results!) are given.

Summary of Changes

Potentially incomplete

  • Acceptance testing is now in the "target" section of Levels instead of "objective"; other changes to it
  • Removed "Reliability Achievement and Evaluation", including definition of "operational testing"
  • Added "Prioritization Testing"
  • Stress testing separated from load testing; no longer has "the goal of determining the behavioral limits" (2014, p. 4-7)
  • Added load, volume, failover, compatibility, scalability, elasticity, and infrastructure testing and defined reliability testing
  • Added privacy testing and added "API testing" term to heading
  • Renamed "Input Domain-Based" to "Specification-Based" and "Code-Based" to "Structure-Based" Testing
  • Elaborated on combinatorial testing
  • Added syntax testing, decision tables, cause-effect graphing, state transition testing, scenario-based testing, evidence-based, and forcing exception
  • Moved "Experience-Based Techniques" to after "Specification-" and "Structure-Based Techniques"
  • Moved error guessing to be an experience-based technique
  • Added more examples of ad-hoc testing and info on exploratory testing
  • Added list of types of data flow testing
  • Moved mutation testing from being a fault-based technique to being its own category on par with it
  • Removal of Model-based Testing Techniques section, with the following moves/renames:
    • Finite-State Machines → State Transition Testing
    • Formal Specifications → Syntax Testing (although "formal specification-based testing" is given as a synonym)
    • Workflow Model → a subset of Scenario-based Testing
    • All the above, and Decision Tables, were re-categorized as Specification-based Techniques
  • Addition of cloud-, blockchain-, big data-, and AI/ML/DL-based software, security and privacy-preserving software, and mobile apps to Application-based Techniques