# TDD & Unit Testing

*ayaohsu/Personal-Resources GitHub Wiki*
## Cheatsheet
```python
# test fixture
import pytest

@pytest.fixture
def logProcessor():
    return LogProcessor()

def test_read_lines_log_file(logProcessor):
    ...  # use logProcessor

@pytest.fixture(autouse=True)
def anotherFixture():
    # automatically used by every test
    pass
```
```python
# test that a function raises the expected exception
import pytest

with pytest.raises(ErrorDuringOffException):
    logProcessor.readLinesFromLog(fake_file_path)
    logProcessor.getErrorTimestamps()
```
```python
# skip a test case
import pytest

@pytest.mark.skip
def test_something_but_skipped():
    pass
```
```python
# patch and mock_open
from unittest.mock import mock_open, patch

def test_read_lines_log_file(logProcessor):
    with patch('logparser.open', new=mock_open(read_data=fake_file_content)) as f:
        content = logProcessor.readLinesFromLog(fake_file_path)
    f.assert_called_once_with(fake_file_path, 'r')
```
```python
# test approximate values
import pytest

assert 0.1 + 0.2 == pytest.approx(0.3)
```
```python
# parametrization
import pytest

@pytest.mark.parametrize("maybe_palindrome, expected_result", [
    ("", True),
    ("a", True),
    ("Bob", True),
    ("Never odd or even", True),
    ("Do geese see God?", True),
    ("abc", False),
    ("abab", False),
])
def test_is_palindrome(maybe_palindrome, expected_result):
    assert is_palindrome(maybe_palindrome) == expected_result
```
```shell
# command line arguments
$ pytest -v                    # verbose output
$ pytest -s                    # do not capture stdout
$ pytest test_1.py             # run only this test module
$ pytest sub_directory/        # run only tests in this subdirectory
$ pytest -k "test2 or test3"   # run only tests whose names match the expression
$ pytest -m "test1"            # run only tests marked with @pytest.mark.test1
```
## Why TDD
Buggy software:
- Slows down new feature development
- Lowers confidence in deploying
- Makes customers unhappy through low-quality software and slipped deliveries
- Simply put, it hurts the business
The solution is a multi-layer safety net:
- The first layer: a suite of automated unit tests
## Overview of TDD & Unit Testing
Levels of testing:
- Unit Testing - Testing at the function level
- Component Testing - Testing at the library and compiled binary level
- System Testing - Testing the external interfaces of a system or its sub-systems
- Performance Testing - Verifying that timing and resource usage are acceptable
### Unit Testing Specifics
- Tests individual functions
- A test should be written for each test case of a function (all positive and negative cases)
- Groups of tests can be combined into test suites for better organization
- Executes in the development environment rather than in production
- Execution of the tests should be automated
Common structure of a unit test:
```python
def test_str_length():
    testStr = "a"               # Setup
    result = str_len(testStr)   # Action
    assert result == 1          # Assert
```
Good practice:
Write a small bit of test code, then the associated production code, then another test, more code, and so on. Do not write all the tests or all the production code at once.
Benefits:
- Gives you confidence to change the code.
- Gives you immediate feedback.
- Documents what the code is doing.
- Drives good object-oriented design.
Three phases:
- RED: Write a failing unit test
- GREEN: Write just enough production code to make that test pass
- REFACTOR: Refactor the unit test and the production code to make it clean
- Repeat the three phases until the feature is complete
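One RED/GREEN cycle can be sketched with a trivial string-length function (`str_len` is hypothetical; in practice the test is written first and seen to fail before the implementation exists):

```python
# RED: write this test first; it fails while str_len does not exist yet.
def test_str_len_of_single_char():
    assert str_len("a") == 1

# GREEN: write just enough production code to make the test pass.
def str_len(s):
    return len(s)

# REFACTOR: clean up the test and production code, keeping the test green.
```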
## PyTest Overview
PyTest provides the ability to create Tests, Test Modules, and Test Fixtures.
Test Discovery:
- Test functions should have names beginning with "test".
- Test classes should have names beginning with "Test" and must not have an `__init__` method.
- Test files should be named `test_*.py` or `*_test.py`.
Setup and Teardown:
- `setup_function(function)`, `teardown_function(function)`: called before/after each test function
- `setup_module()`, `teardown_module()`: called once at the file/module level
- `setup_class()`, `teardown_class()`: called once at the class level
Test Fixtures:
- Fixtures are functions that run before each test function to which they are applied.
- A fixture can be applied by:
  - declaring it as a function argument
  - the `@pytest.mark.usefixtures` decorator
- Teardown:
  - use the `yield` keyword
  - use `request.addfinalizer`
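A sketch of `yield`-based teardown (the connection dict stands in for a real resource): everything before the `yield` is setup, the yielded value is handed to the test, and everything after the `yield` runs as teardown:

```python
import pytest

def open_connection():
    conn = {"open": True}    # setup: acquire the resource
    yield conn               # the test runs here, using conn
    conn["open"] = False     # teardown: release the resource

# register the generator function as a fixture
connection = pytest.fixture(open_connection)

def test_connection_is_open(connection):
    assert connection["open"] is True
```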
Test fixture scopes:
- Function
- Class
- Module
- Session
You can pass an optional `params` argument to run the test multiple times, once for each element in `params`:
```python
@pytest.fixture(params=[1, 2, 3])
def setup(request):
    retVal = request.param
    print("\nSetup! retVal = {}".format(retVal))
    return retVal

def test1(setup):
    print("\nsetup = {}".format(setup))
    assert True
```
test1 will run three times, with `setup` equal to 1, 2, and 3.
pytest provides an `approx` function to assert approximate equality, with a default relative tolerance of 1e-6.

```python
from pytest import approx
```
## Test Doubles
Test doubles are objects used in unit tests as replacements for the real production collaborators. This allows us to test in a controlled environment.
- Dummy: Objects that can be passed around as necessary but have no implementation
- Fake: Objects with a simplified functional implementation of a particular interface
- Stub: Implementations with canned answers
- Spy: Implementations that record the values that were passed in
- Mock: Implementations pre-programmed to expect specific calls
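As a sketch, a hand-rolled stub and spy for a hypothetical mailer collaborator (all names here are illustrative, not from a real library):

```python
class StubMailer:
    """Stub: returns a canned answer, never really sends anything."""
    def send(self, to, body):
        return True

class SpyMailer:
    """Spy: records the arguments it was called with."""
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))
        return True

def notify_shipped(mailer, user):
    # hypothetical production code under test
    return mailer.send(user, "your order has shipped")

def test_notify_shipped_sends_to_user():
    spy = SpyMailer()
    assert notify_shipped(spy, "a@example.com") is True
    assert spy.sent == [("a@example.com", "your order has shipped")]
```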
## Mocking Framework
- Makes it easy to create any of these test doubles at runtime, which is much more efficient than implementing custom mock objects by hand
- Python provides `unittest.mock` in the standard library
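A minimal sketch with `unittest.mock` (the repository collaborator and `fetch_user_id` are hypothetical): a `MagicMock` acts as a stub via `return_value` and as a spy/mock via its call-assertion methods:

```python
from unittest.mock import MagicMock

def fetch_user_id(repo, key):
    # hypothetical production code that depends on a repository collaborator
    return repo.load(key)["id"]

repo = MagicMock()
repo.load.return_value = {"id": 42}   # canned answer (stub behavior)

assert fetch_user_id(repo, "user:42") == 42
repo.load.assert_called_once_with("user:42")  # recorded call (spy/mock behavior)
```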
## Best Practices
- Do the next simplest test case
- Use descriptive test names
- Keep tests fast, because you want fast feedback
- Use code coverage tools
- Run your tests in random order (to ensure there are no dependencies between test cases)
- Use a static code analysis tool
- Test behavior rather than implementation
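To illustrate the last point, assert on observable behavior through the public interface, not on private internals (the `Counter` class is hypothetical):

```python
class Counter:
    def __init__(self):
        self._n = 0            # private detail; tests should not reach for this

    def increment(self):
        self._n += 1

    def value(self):
        return self._n

def test_counter_counts_increments():
    counter = Counter()
    counter.increment()
    counter.increment()
    # assert through the public interface, not counter._n
    assert counter.value() == 2
```

If `Counter` later changes its internal representation, this test keeps passing as long as the behavior is unchanged.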