09. Implementing Test Cases - dewayneh/testing GitHub Wiki
Alongside delivering your code, it is extremely important to implement test cases that validate your code with the highest level of code coverage possible. Typically you will need to implement 3 or 4 types of test cases. With CI/CD, the goal is to implement these test cases in a fully automated fashion where the pipeline, not humans, can run AND interpret the results automatically. This allows teams to greatly increase their speed, quality, and consistency.
Refer to the Contract Testing User Guide for more information.
PLE has two courses available on contract-based testing.
See also the wiki page that defines the guidelines for using contract-based testing (Contract Testing User Guide).
User Interface Testing is a testing technique that identifies defects in the software under test through the application's Graphical User Interface (GUI).
- Manual Based Testing: Testers manually check graphical screens in conformance with requirements stated in the business requirements document.
- Record and Replay: Performed using automation tools in 2 steps. During Record, test steps are captured by the automation tool. During playback, the recorded test steps are executed on the Application Under Test. QTP is an example of such a tool.
- Model-Based Testing: A model is a graphical description of system behavior. Models help generate efficient test cases from the system requirements. Model-based testing is an evolving technique; its main advantage, compared with the above methods, is that it can determine undesirable states your GUI can attain.
The following open-source tools are available for GUI testing:
- Selenium
- AutoHotKey
- Sikuli
- Watir
- Robot Framework
- Dojo Toolkit
The procedures explained here use JWebUnit along with JUnit 4 to enable UI testing during the Maven build phase.
- Add the JWebUnit dependency to the microservice pom.xml, with test scope.
Dependency
<dependency>
<groupId>net.sourceforge.jwebunit</groupId>
<artifactId>jwebunit-htmlunit-plugin</artifactId>
<version>3.3</version>
<scope>test</scope>
</dependency>
- Create the test class under the service/src/test/java directory and write the test cases. See the sample class below.
Sample Test Class
import net.sourceforge.jwebunit.junit.WebTester;
import org.junit.Before;
import org.junit.Test;

public class ExampleWebTestCase {
    private WebTester tester;

    @Before
    public void prepare() {
        tester = new WebTester();
        tester.setBaseUrl("http://localhost:8080/test");
    }

    @Test
    public void test1() {
        tester.beginAt("home.xhtml"); // Opens the browser on http://localhost:8080/test/home.xhtml
        tester.clickLink("login");
        tester.assertTitleEquals("Login");
        tester.setTextField("username", "test");
        tester.setTextField("password", "test123");
        tester.submit();
        tester.assertTitleEquals("Welcome, test!");
    }
}
- Run mvn install to execute the tests.
It takes some effort to write good, effective test cases. While it is not hard to write a test case, especially a bad one, a poor test case adds no value to the project and can create extra work, complexity, cost, and delay. See some of the most common test-case-writing mistakes below.
Test cases are as much a part of the source code as any file that implements business function. They persist for the life of the function they test, so they may be in service for a long time. Because functionality changes over time, other developers will have to support your test cases.
Don’t assume readers will understand what the test case does. Be explicit; include documentation describing what it tests, its expected outcomes, and how it works in general. This will help anyone who later has to fix or extend the case.
Test cases must use the same coding conventions and standards as the rest of the product. They are after all part of the product. Using "tricky" code helps no one, so keep them clear, understandable, and supportable by others.
When one test case depends on the outcome or state set up by another, it results in a fragile and unpredictable condition. It may work under very specific circumstances, but will fail elsewhere. These dependencies must not exist. Any test case must be able to perform separately and still function correctly. Every test case must be a self-contained test.
This can happen accidentally; for example, if a test case doesn’t reset the system’s state before testing it. Someone may have inadvertently coded it to rely on the state remaining in some condition set by a previous test case. To prevent accidental test case dependency, every test case must ensure the predictable and consistent state of the system/component/object prior to testing. In JUnit, use "@Before" methods to do this.
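The reset-before-every-test idea can be sketched as follows. `Counter`, `StateResetDemo`, and the method names are hypothetical; in a real JUnit 4 class the reset logic would live in a method annotated with `@Before`, and the framework would invoke it before each `@Test` method automatically.

```java
public class StateResetDemo {
    static class Counter {            // hypothetical component under test
        private int value;
        void increment() { value++; }
        int value() { return value; }
    }

    private Counter counter;

    // In JUnit 4 this would be:  @Before public void setUp() { ... }
    void setUp() {
        counter = new Counter();      // fresh, predictable state for every test
    }

    // Each "test" starts from the same known state, so run order never matters.
    boolean testStartsAtZero() {
        setUp();
        return counter.value() == 0;  // passes even if another test ran first
    }

    boolean testIncrementOnce() {
        setUp();
        counter.increment();
        return counter.value() == 1;
    }

    public static void main(String[] args) {
        StateResetDemo demo = new StateResetDemo();
        // Run in either order; results are identical because of the reset.
        System.out.println(demo.testIncrementOnce() && demo.testStartsAtZero());
    }
}
```

Without the `setUp()` call, `testStartsAtZero` would fail whenever `testIncrementOnce` happened to run first; with it, the tests are order-independent.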
Test cases must be able to run in any order at any time; one test case must not depend on another running first. Automated harnesses may not run tests in a predictable order, and the order can change from one run or platform to another.
Write test cases to test a specific function. All too often, a system’s quality is measured by the number of existing test cases. This is a very poor metric because it encourages people to develop test cases just to inflate the perceived quality of the product.
Instead, a test case should test one, and only one functional capability of a product or component. Test cases can be either positive or negative, but they should never try to do both.
Creating multiple test cases to test the same function results in waste, delay, and increased cost. It’s not at all hard to do, especially as the project progresses. To avoid this, structure your test cases by functional area or component, and ensure that you test each function with the minimum number of test cases. Using code coverage reports and adjusting test cases to improve coverage will help eliminate duplicate testing.
Test cases MUST be portable! You have no way to know what environment the test cases may run on. They should have the ability to run on every developer’s workstation, as well as the build systems, and possibly as part of formal testing performed by a testing team. This means that test cases must not rely on any environment, specific software product, or configuration specific to any one environment. Structure test cases so that they use the standard tools and frameworks that are part of the project and automatically included when building it for testing (like with Maven). No developer can ever inject personalized software or configurations that are not part of the project standard, and therefore unique to only their workstation.
The exception mechanism indicates abnormal conditions. Test cases should never catch and ignore exceptions, as doing so can report inaccurate or false-positive results that could impact product quality.
In the case of a test framework like JUnit, declare on the test case any exceptions that can be thrown from the component under test. The test runner framework will catch the exceptions and report the appropriate failures. If you expect an exception as part of the test case (a positive result), then use the appropriate annotations to tell the test runner framework, and it will handle it correctly.
In some cases, the test may need to catch an exception. As part of the try-catch, use assertions to test if it handled the exceptional state correctly (a try-catch is a programmatic way to execute statements and handle any exceptions that arise from them). In this case, the catch should be for the most explicit exception classes possible.
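The two exception rules above can be sketched in one place. `parsePositive()` is a hypothetical component under test; in JUnit 4, the expected-exception case would normally be declared with `@Test(expected = IllegalArgumentException.class)` rather than written by hand as below.

```java
public class ExceptionTestDemo {
    // Hypothetical component under test: rejects non-positive values.
    static int parsePositive(String s) {
        int v = Integer.parseInt(s);
        if (v <= 0) throw new IllegalArgumentException("value must be positive");
        return v;
    }

    // Hand-rolled equivalent of @Test(expected = IllegalArgumentException.class).
    static boolean negativeInputRejected() {
        try {
            parsePositive("-5");
            return false;                      // no exception: the test fails
        } catch (IllegalArgumentException e) { // catch the MOST specific class
            return true;                       // expected exception: the test passes
        }
        // Note: catching Exception or Throwable here would also swallow
        // unrelated failures (e.g. NumberFormatException), hiding real bugs.
    }

    public static void main(String[] args) {
        System.out.println(negativeInputRejected()); // prints "true"
    }
}
```

A catch block for `Exception` would have masked a `NumberFormatException` from malformed input as a passing test; catching only `IllegalArgumentException`... would too, since it is its superclass, which is exactly why the narrowest class that expresses the expectation should be chosen deliberately.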
Test cases, especially when used in a CI (Continuous Integration) environment, must be fast and not excessively complex, and shouldn’t perform excessive setup and initialization. The presence of excessive setup usually means that the test is doing too much, or is relying on complex data stores or facilities that mocks or other capabilities could simplify.
Sometimes the setup may reset a system state unrelated to the test. In this case, the extra work to do the setup/reset is wasteful, as it has no impact on the test at all.
If many test cases in the same class (such as a JUnit class) share a setup, but the requirements for it vary from test to test, you should create separate test classes. This makes the tests simpler, and the setup more understandable and specialized for the tests that need it.
Most application components utilize databases, messaging frameworks, UI frameworks, security frameworks, and more. These dependencies add unnecessary complexity to the test environment. The test needs to ensure correct component behavior, given the appropriate inputs and the state of the component. It does not need to test the data store or messaging mechanisms; they may be too complex and require too much initialization. Using a mocking facility (such as Mockito) allows you to insert a simulation of these facilities that provides the appropriate input to the component. It is also easier and less complex to use and initialize.
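The idea can be sketched with a hand-written test double; the `UserStore` interface, `Greeter` class, and method names are hypothetical. A mocking framework such as Mockito can generate this kind of stand-in automatically, but a plain in-memory fake shows the principle without any extra dependency.

```java
import java.util.HashMap;
import java.util.Map;

public class MockingDemo {
    interface UserStore {                     // abstraction the component depends on
        String findName(int id);
    }

    static class Greeter {                    // hypothetical component under test
        private final UserStore store;
        Greeter(UserStore store) { this.store = store; }
        String greet(int id) {
            String name = store.findName(id);
            return name == null ? "Hello, stranger!" : "Hello, " + name + "!";
        }
    }

    public static void main(String[] args) {
        // In-memory fake: no database, no connections, instant and portable.
        Map<Integer, String> rows = new HashMap<>();
        rows.put(1, "Alice");
        UserStore fake = rows::get;

        Greeter greeter = new Greeter(fake);
        System.out.println(greeter.greet(1));  // Hello, Alice!
        System.out.println(greeter.greet(99)); // Hello, stranger!
    }
}
```

The test exercises only `Greeter`'s behavior; whether the real `UserStore` is backed by a database or a message queue is irrelevant to it, which is exactly the separation the paragraph above calls for.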
Use the test framework's ability to report on test failures, rather than relying on logging. In JUnit’s case, every "assertion" can include a message as the first argument. This message is written to the output when the assertion fails, and it should provide the information needed to know why the test failed.
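A minimal sketch of the failure-message behavior: in JUnit 4 the optional first argument to `assertEquals()` is printed only on failure, and the tiny hypothetical helper below mimics that so the effect can be seen standalone.

```java
public class FailureMessageDemo {
    // Hypothetical stand-in for JUnit's assertEquals(message, expected, actual).
    static void assertEquals(String message, Object expected, Object actual) {
        if (expected == null ? actual != null : !expected.equals(actual)) {
            throw new AssertionError(message + " (expected <" + expected
                    + "> but was <" + actual + ">)");
        }
    }

    public static void main(String[] args) {
        try {
            // A bare assertion failure tells the reader nothing; a message
            // names what broke, so the failure is actionable.
            assertEquals("login page title wrong after clicking 'login' link",
                    "Login", "Welcome");
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

On failure this prints the descriptive message plus the expected/actual values, which is usually all a developer needs to decide whether the test or the tested code is at fault.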
Getters and setters almost never need testing. If an object’s state needs testing, it can happen once in a test case that tests the class’s creation aspects. Likewise, a class often has functions that don’t need testing because they don’t impact its operation (such as toString).
Other methods may exercise lower-level methods. In that case, you can test the higher-level methods to test the lower-level methods. If using this approach adequately tests the lower-level method (verified by code coverage), then any test that uses the lower method is superfluous and wasteful.
A test case must test one, and only one, functional capability of the component. Testing more than one functional capability impacts other developers’ comprehension and supportability of the test case, risking duplicate test cases and other issues.
Test cases can be either positive or negative. A positive one tests a valid condition, expecting the component to succeed. A negative one passes known invalid data or state to the component, expecting it to fail. Both types are important and should be included. However, many developers only write positive test cases because it’s easier to think in the positive case. This results in incomplete testing; they need to use negative cases as well.
Most software mistakes occur at the boundaries between valid and invalid data. For example, if a component expects input with a length from 1 to 20 characters, and an error exists in the length checking, it may accidentally accept a value of 21 characters. Likewise, it might incorrectly reject a valid value with a length of 20. Known collectively as "off by one" errors, these are extremely easy to accidentally code. You should structure test cases to test on the boundaries of a component, as well as positive and negative cases.
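Boundary testing for the 1-to-20-character rule above might look like the sketch below; `validLength()` is a hypothetical component under test. The point is to exercise both sides of each boundary, where off-by-one errors hide.

```java
public class BoundaryTestDemo {
    // Hypothetical component: accepts strings of length 1 to 20 inclusive.
    static boolean validLength(String s) {
        return s != null && s.length() >= 1 && s.length() <= 20;
    }

    public static void main(String[] args) {
        String ok1   = "a";              // length 1:  lower boundary, valid
        String ok20  = "a".repeat(20);   // length 20: upper boundary, valid
        String bad0  = "";               // length 0:  just below, invalid
        String bad21 = "a".repeat(21);   // length 21: just above, invalid

        System.out.println(validLength(ok1));   // true
        System.out.println(validLength(ok20));  // true
        System.out.println(validLength(bad0));  // false
        System.out.println(validLength(bad21)); // false
    }
}
```

If the check had been written with `< 20` instead of `<= 20`, only the length-20 case would catch it; a test that used a comfortable mid-range value like 10 would pass and the bug would ship.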
A test case should either be positive, or negative, but not both. A test case should test one functional capability, and in one mode (positive or negative) only.
When tests fail (and they will), they need to provide enough information to understand why. Many developers write assertions without failure messages. This does not help anyone correct the failure, either by determining that the test is invalid or that the tested functionality is broken. Use meaningful messages that tell the test case user what went wrong.
No test case should EVER rely on external manual processes, procedures, or activities of any kind. Test cases must have the ability to run stand-alone, with automation, on any platform at any time, anywhere.
Every test case must be able to run individually. This means that the system/component must be put into a predictable initial state, and the test executed based on that condition. If tests don’t do this, then dependencies can form between them, and when the tests run in a different order, they’ll have different results. This makes the tests unpredictable and useless for measuring software quality. To prevent accidental test case dependency, every test case must ensure that the system/component/object is in a predictable and consistent state prior to testing. In JUnit, use "@Before" methods to do this.
When writing test cases, it’s easy to assume that you’re testing all of the functional capability’s paths. In reality, however, it’s very difficult to determine the actual paths being tested, and you MUST use code coverage tools and reports to confirm the tests.
Make your tests as simple and understandable as possible. They should test one functional capability of a component, and they must not mix both positive and negative testing.