Testing
Overview
The Psephos project uses a comprehensive testing strategy that includes unit tests, integration tests, and code quality checks.
Testing Framework
Pytest
Pytest is our primary testing framework, chosen for its simplicity and powerful features.
Key Features:
- Simple assert statements (no special assertion methods; see the sketch after this list)
- Powerful fixtures system for test setup
- Parallel test execution (via the pytest-xdist plugin)
- Rich plugin ecosystem
- Async test support
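Because pytest rewrites plain assert statements to produce rich failure messages, a first test needs no boilerplate. A minimal sketch (the slugify helper and its import path are hypothetical, for illustration only):
import pytest

from app.utils.text import slugify  # hypothetical helper; substitute any real function

def test_slugify_lowercases_and_hyphenates():
    # a bare assert is all pytest needs; failures show both operands
    assert slugify("Test Workspace") == "test-workspace"

def test_slugify_empty_string_fails():
    # failure paths are tested with pytest.raises
    with pytest.raises(ValueError):
        slugify("")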
Running Tests
Basic Test Execution
# Run all tests
pytest
# Run specific test file
pytest tests/test_workspaces.py
# Run tests with verbose output
pytest -v
# Run tests with output (print statements)
pytest -s
Coverage Reports
# Run tests with coverage
pytest --cov=app
# Generate HTML coverage report
pytest --cov=app --cov-report=html
# Coverage with specific target percentage
pytest --cov=app --cov-fail-under=80
Test Structure
Directory Organization
tests/
├── conftest.py            # Shared fixtures
├── test_accounts.py       # Account functionality tests
├── test_workspaces.py     # Workspace functionality tests
├── test_groups.py         # Group functionality tests
├── test_polls.py          # Poll functionality tests
├── test_permissions.py    # Permission system tests
└── integration/           # Integration tests
    ├── test_auth_flow.py
    └── test_api_endpoints.py
Test Naming Convention
- Test files: test_<module_name>.py
- Test functions: test_<functionality>_<expected_outcome>
- Test classes: Test<FeatureName>
Examples:
def test_create_workspace_success(): ...
def test_create_workspace_duplicate_name_fails(): ...
def test_get_workspace_unauthorized_fails(): ...
Fixtures
Common Fixtures (conftest.py)
import pytest
from fastapi.testclient import TestClient
from app.app import app
@pytest.fixture
def client():
    """HTTP test client"""
    return TestClient(app)

@pytest.fixture
async def test_user():
    """Create test user"""
    user = ...  # user creation logic goes here
    yield user
    # cleanup logic goes here

@pytest.fixture
def auth_headers(test_user):
    """Authentication headers for requests"""
    # assumes create_access_token is importable from the app's auth utilities
    token = create_access_token(test_user.id)
    return {"Authorization": f"Bearer {token}"}
Using Fixtures
def test_create_workspace(client, auth_headers):
    response = client.post(
        "/workspaces",
        json={"name": "Test Workspace", "description": "Test"},
        headers=auth_headers
    )
    assert response.status_code == 201
    assert response.json()["name"] == "Test Workspace"
Test Categories
1. Unit Tests
Test individual functions and methods in isolation.
def test_hash_password():
    password = "testpassword"
    hashed = hash_password(password)
    assert verify_password(password, hashed)

def test_create_workspace_action():
    workspace_data = WorkspaceCreate(name="Test", description="Test")
    workspace = create_workspace(workspace_data, user_id="123")
    assert workspace.name == "Test"
2. Integration Tests
Test API endpoints and workflows.
def test_workspace_creation_flow(client):
    # Register user
    register_response = client.post("/auth/register", json={
        "email": "test@example.com",
        "password": "password",
        "first_name": "Test",
        "last_name": "User"
    })
    assert register_response.status_code == 201

    # Login
    login_response = client.post("/auth/jwt/login", data={
        "username": "test@example.com",
        "password": "password"
    })
    token = login_response.json()["access_token"]

    # Create workspace
    workspace_response = client.post(
        "/workspaces",
        json={"name": "Test Workspace", "description": "Test"},
        headers={"Authorization": f"Bearer {token}"}
    )
    assert workspace_response.status_code == 201
3. Permission Tests
Test authorization and access control.
def test_workspace_access_permissions(client, test_user, other_user):
    # Create workspace as test_user
    workspace = create_test_workspace(test_user)

    # Try to access as other_user (should fail)
    response = client.get(
        f"/workspaces/{workspace.id}",
        headers=auth_headers_for(other_user)
    )
    assert response.status_code == 403
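The test above leans on two helpers that are not shown in the snippet; a minimal sketch of what they might look like (the names and internals are assumptions, not the project's actual implementation):
def create_test_workspace(user):
    # Persist a workspace owned by the given user, reusing the action-layer
    # call from the unit-test example above
    workspace_data = WorkspaceCreate(name="Perm Test", description="Test")
    return create_workspace(workspace_data, user_id=user.id)

def auth_headers_for(user):
    # Same shape as the auth_headers fixture, but for an arbitrary user
    token = create_access_token(user.id)
    return {"Authorization": f"Bearer {token}"}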
Test Data Generation
Using Faker
from faker import Faker

fake = Faker()

def generate_test_user():
    return {
        "email": fake.email(),
        "first_name": fake.first_name(),
        "last_name": fake.last_name(),
        "password": "testpassword123"
    }

def test_user_registration_with_random_data(client):
    user_data = generate_test_user()
    response = client.post("/auth/register", json=user_data)
    assert response.status_code == 201
Async Testing
import pytest

# With asyncio_mode = auto (see Test Configuration below), the explicit
# marker is optional; it is shown here for clarity.
@pytest.mark.asyncio
async def test_async_database_operation():
    workspace_data = WorkspaceCreate(name="Test", description="Test")
    workspace = await create_workspace_async(workspace_data)
    assert workspace.id is not None

    retrieved = await get_workspace_async(workspace.id)
    assert retrieved.name == workspace.name
Mocking and Patching
import pytest
from unittest.mock import patch

@patch('app.utils.mongo.get_database')
def test_database_error_handling(mock_db):
    mock_db.side_effect = ConnectionError("Database unavailable")
    with pytest.raises(ConnectionError):
        get_workspace("workspace_id")
Test Configuration
pytest.ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
asyncio_mode = auto
markers =
    slow: marks tests as slow
    integration: marks tests as integration tests
    unit: marks tests as unit tests
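Markers declared in the config are attached to tests with decorators; an illustrative example (the test names are made up):
import pytest

@pytest.mark.integration
def test_full_poll_lifecycle():
    ...  # exercises several endpoints end to end

@pytest.mark.unit
@pytest.mark.slow
def test_expensive_permission_matrix():
    ...  # a test may carry multiple markers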
Running Specific Test Types
# Run only unit tests
pytest -m unit
# Run only integration tests
pytest -m integration
# Skip slow tests
pytest -m "not slow"
Code Quality Tools
Flake8 (Style Checking)
flake8 app tests --max-line-length=120
Common style fixes:
- Line length (max 120 characters)
- Import organization
- Whitespace consistency
- Unused imports
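The command-line flags can also be pinned in a config file so every contributor and CI run uses identical settings; a minimal sketch, assuming a setup.cfg at the repository root (the exclude list is illustrative):
[flake8]
max-line-length = 120
exclude = .git,__pycache__,.venv,htmlcov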
MyPy (Type Checking)
mypy app
Benefits:
- Catch type-related bugs early
- Better IDE support
- Self-documenting code
- Pydantic integration
Example MyPy Configuration (in pyproject.toml)
[tool.mypy]
python_version = "3.11"
plugins = ["pydantic.mypy"]
ignore_missing_imports = true
strict_optional = true
warn_redundant_casts = true
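As a concrete illustration of what strict_optional catches, consider this self-contained hypothetical snippet (not project code):
from typing import Optional

class Workspace:
    def __init__(self, name: str) -> None:
        self.name = name

def lookup(workspace_id: str) -> Optional[Workspace]:
    return None  # stand-in for a real database lookup

def find_workspace_name(workspace_id: str) -> str:
    workspace = lookup(workspace_id)
    # mypy (with strict_optional) errors here:
    # Item "None" of "Optional[Workspace]" has no attribute "name"
    return workspace.name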
Test Coverage Goals
- Minimum Coverage: 80%
- Critical Paths: 95%+ (authentication, permissions, data integrity)
- New Features: 90%+ coverage required
Coverage Report Example
---------- coverage: platform linux, python 3.11.0 -----------
Name                       Stmts   Miss  Cover
----------------------------------------------
app/actions/workspace.py      45      5    89%
app/actions/group.py          38      2    95%
app/api/workspaces.py         52     10    81%
app/dependencies.py           25      3    88%
----------------------------------------------
TOTAL                        160     20    87%
Continuous Integration
Tests run automatically on:
- Pull requests
- Pushes to main branch
- Scheduled nightly builds
GitHub Actions Workflow
name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.11", "3.12"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          pip install -r test-requirements.txt
      - name: Run tests
        run: |
          pytest --cov=app --cov-report=xml
      - name: Upload coverage
        uses: codecov/codecov-action@v3
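The trigger list above covers pushes and pull requests; the nightly builds mentioned earlier would use an additional schedule trigger, roughly like this (the cron time is an assumption):
on:
  push:
  pull_request:
  schedule:
    - cron: "0 3 * * *"  # nightly at 03:00 UTC (assumed time)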
Best Practices
- Test Isolation: Each test should be independent (see the fixture sketch after this list)
- Clear Names: Test names should describe what is being tested
- Single Responsibility: One assertion per test when possible
- Fast Execution: Keep tests fast for quick feedback
- Realistic Data: Use realistic test data that matches production
- Error Cases: Test both success and failure scenarios
- Documentation: Comment complex test logic
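Test isolation in particular is often enforced with an autouse fixture that resets shared state around every test; a sketch, assuming a hypothetical cleanup helper:
import pytest

@pytest.fixture(autouse=True)
async def clean_database():
    # autouse fixtures run for every test without being requested by name
    yield
    await drop_test_collections()  # hypothetical helper that empties test collections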
Debugging Tests
# Run single test with debugging
pytest tests/test_workspaces.py::test_create_workspace -v -s --pdb
# Run with print output
pytest -s tests/test_workspaces.py
# Run tests and stop on first failure
pytest -x
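Two more built-in flags that pair well with the above:
# Re-run only the tests that failed on the previous run
pytest --lf
# Select tests by keyword expression
pytest -k "workspace and not duplicate"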