Testing Guide.md - himent12/FlashGenie GitHub Wiki
# FlashGenie Testing Guide

A comprehensive guide to testing strategies, frameworks, and best practices for FlashGenie development. This guide covers everything from running the existing tests to writing new test suites.
## Testing Overview

FlashGenie uses a comprehensive testing strategy that includes:

- **Unit tests**: Test individual components in isolation
- **Integration tests**: Test component interactions
- **End-to-end tests**: Test complete user workflows
- **Performance tests**: Ensure optimal performance
- **Security tests**: Validate security measures
### Testing Framework Stack

- **pytest**: Primary testing framework
- **pytest-cov**: Code coverage reporting
- **pytest-mock**: Mocking and patching
- **pytest-asyncio**: Async testing support
- **hypothesis**: Property-based testing
- **tox**: Testing across multiple Python versions
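A `dev-requirements.txt` mirroring this stack might look like the sketch below. This is an illustration, not the project's actual pin list; `pytest-xdist` is an assumption added because the parallel `pytest -n auto` invocation shown in this guide requires it:

```text
# dev-requirements.txt (sketch; pin versions as the project requires)
pytest
pytest-cov
pytest-mock
pytest-asyncio
pytest-xdist   # needed for `pytest -n auto`
hypothesis
tox
```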
## Quick Start

### Running Tests

```bash
# Run all tests
python -m pytest

# Run with verbose output
python -m pytest -v

# Run a specific test file
python -m pytest tests/unit/test_flashcard.py

# Run a specific test method
python -m pytest tests/unit/test_flashcard.py::TestFlashcard::test_create_flashcard

# Run tests with coverage
python -m pytest --cov=flashgenie --cov-report=html

# Run tests in parallel (requires pytest-xdist)
python -m pytest -n auto
```
### Test Categories

```bash
# Run only unit tests
python -m pytest tests/unit/

# Run only integration tests
python -m pytest tests/integration/

# Run only fast tests (skip slow ones)
python -m pytest -m "not slow"

# Run only tests matching a keyword
python -m pytest -k "flashcard"
```
## Test Structure

### Directory Organization

```
tests/
├── unit/                      # Unit tests
│   ├── core/
│   │   ├── test_flashcard.py
│   │   ├── test_deck.py
│   │   ├── test_quiz_engine.py
│   │   └── test_difficulty_analyzer.py
│   ├── data/
│   │   ├── test_storage.py
│   │   ├── test_importers.py
│   │   └── test_exporters.py
│   └── interfaces/
│       └── test_cli.py
├── integration/               # Integration tests
│   ├── test_quiz_workflow.py
│   ├── test_data_flow.py
│   └── test_plugin_system.py
├── e2e/                       # End-to-end tests
│   ├── test_user_workflows.py
│   └── test_cli_commands.py
├── performance/               # Performance tests
│   ├── test_large_decks.py
│   └── test_memory_usage.py
├── fixtures/                  # Test data and fixtures
│   ├── sample_decks.json
│   ├── test_cards.csv
│   └── mock_data.py
└── conftest.py                # Pytest configuration and fixtures
```
### Test Configuration

```python
# conftest.py
import tempfile
from pathlib import Path

import pytest

from flashgenie.core.flashcard import Flashcard
from flashgenie.core.deck import Deck
from flashgenie.data.storage import DataStorage


@pytest.fixture
def temp_data_dir():
    """Create a temporary directory for test data."""
    with tempfile.TemporaryDirectory() as temp_dir:
        yield Path(temp_dir)


@pytest.fixture
def sample_flashcard():
    """Create a sample flashcard for testing."""
    return Flashcard(
        question="What is Python?",
        answer="A programming language",
        tags=["programming", "basics"]
    )


@pytest.fixture
def sample_deck(sample_flashcard):
    """Create a sample deck with flashcards."""
    cards = [
        sample_flashcard,
        Flashcard("What is ML?", "Machine Learning", tags=["AI", "concepts"]),
        Flashcard("Define API", "Application Programming Interface", tags=["programming"]),
    ]
    return Deck("Test Deck", "A deck for testing", flashcards=cards)


@pytest.fixture
def mock_storage(temp_data_dir):
    """Create mock storage for testing."""
    return DataStorage(data_dir=temp_data_dir)
```
## Writing Tests

### Unit Test Examples

#### Testing Core Classes

```python
# tests/unit/core/test_flashcard.py
import pytest
from datetime import datetime

from flashgenie.core.flashcard import Flashcard


class TestFlashcard:
    """Test suite for the Flashcard class."""

    def test_create_flashcard(self):
        """Test basic flashcard creation."""
        card = Flashcard("What is Python?", "A programming language")

        assert card.question == "What is Python?"
        assert card.answer == "A programming language"
        assert card.difficulty == 1.0
        assert card.tags == []
        assert isinstance(card.created_at, datetime)

    def test_flashcard_with_tags(self):
        """Test flashcard creation with tags."""
        tags = ["programming", "python", "basics"]
        card = Flashcard("Test question", "Test answer", tags=tags)
        assert card.tags == tags

    def test_add_tag(self):
        """Test adding tags to a flashcard."""
        card = Flashcard("Test", "Test")
        card.add_tag("new_tag")
        assert "new_tag" in card.tags

    def test_remove_tag(self):
        """Test removing tags from a flashcard."""
        card = Flashcard("Test", "Test", tags=["tag1", "tag2"])

        result = card.remove_tag("tag1")
        assert result is True
        assert "tag1" not in card.tags

        result = card.remove_tag("nonexistent")
        assert result is False

    @pytest.mark.parametrize("difficulty,expected", [
        (0.5, 0.5),
        (3.0, 3.0),
        (6.0, 5.0),   # should be clamped to the maximum
        (-1.0, 0.1),  # should be clamped to the minimum
        (0.0, 0.1),   # zero should be clamped to the minimum
    ])
    def test_difficulty_clamping(self, difficulty, expected):
        """Test difficulty value clamping."""
        card = Flashcard("Test", "Test")
        card.update_difficulty(difficulty)
        assert card.difficulty == expected

    def test_record_review(self):
        """Test recording review attempts."""
        card = Flashcard("Test", "Test")

        # Record a correct answer
        card.record_review(correct=True)
        assert card.review_count == 1
        assert card.correct_count == 1

        # Record an incorrect answer
        card.record_review(correct=False)
        assert card.review_count == 2
        assert card.correct_count == 1

    def test_success_rate(self):
        """Test success rate calculation."""
        card = Flashcard("Test", "Test")

        # No reviews yet
        assert card.get_success_rate() == 0.0

        # Add some reviews
        card.record_review(True)
        card.record_review(True)
        card.record_review(False)
        assert card.get_success_rate() == pytest.approx(66.67, rel=1e-2)

    def test_to_dict(self):
        """Test dictionary serialization."""
        card = Flashcard("Question", "Answer", tags=["tag1"])
        card_dict = card.to_dict()

        assert card_dict['question'] == "Question"
        assert card_dict['answer'] == "Answer"
        assert card_dict['tags'] == ["tag1"]
        assert 'id' in card_dict
        assert 'created_at' in card_dict

    def test_from_dict(self):
        """Test dictionary deserialization."""
        card_data = {
            'id': 'test-id',
            'question': 'Test Question',
            'answer': 'Test Answer',
            'tags': ['test'],
            'difficulty': 2.0,
            'created_at': '2023-01-01T00:00:00',
            'review_count': 5,
            'correct_count': 3,
        }

        card = Flashcard.from_dict(card_data)
        assert card.question == 'Test Question'
        assert card.answer == 'Test Answer'
        assert card.difficulty == 2.0
        assert card.review_count == 5
        assert card.correct_count == 3
```
#### Testing with Mocks

```python
# tests/unit/core/test_quiz_engine.py
from unittest.mock import Mock, patch

from flashgenie.core.quiz_engine import QuizEngine, QuizSession
from flashgenie.core.difficulty_analyzer import DifficultyAnalyzer


class TestQuizEngine:
    """Test suite for the QuizEngine class."""

    def test_start_session(self, sample_deck):
        """Test starting a quiz session."""
        engine = QuizEngine()
        session = engine.start_session(sample_deck)

        assert isinstance(session, QuizSession)
        assert session.deck == sample_deck
        assert session.total_questions == 0
        assert session.correct_answers == 0

    @patch('flashgenie.core.quiz_engine.DifficultyAnalyzer')
    def test_engine_with_custom_analyzer(self, mock_analyzer_class, sample_deck):
        """Test the quiz engine with a custom difficulty analyzer."""
        mock_analyzer = Mock(spec=DifficultyAnalyzer)
        mock_analyzer_class.return_value = mock_analyzer

        engine = QuizEngine(difficulty_analyzer=mock_analyzer)
        session = engine.start_session(sample_deck)

        # Verify the analyzer is used
        assert session._difficulty_analyzer == mock_analyzer


class TestQuizSession:
    """Test suite for the QuizSession class."""

    def test_get_next_card(self, sample_deck):
        """Test getting the next card in a session."""
        engine = QuizEngine()
        session = engine.start_session(sample_deck)

        card = session.get_next_card()
        assert card is not None
        assert card in sample_deck.flashcards

    def test_submit_answer_correct(self, sample_deck):
        """Test submitting a correct answer."""
        engine = QuizEngine()
        session = engine.start_session(sample_deck)
        card = session.get_next_card()

        result = session.submit_answer(card.answer)
        assert result is True
        assert session.correct_answers == 1
        assert session.total_questions == 1

    def test_submit_answer_incorrect(self, sample_deck):
        """Test submitting an incorrect answer."""
        engine = QuizEngine()
        session = engine.start_session(sample_deck)
        card = session.get_next_card()

        result = session.submit_answer("Wrong answer")
        assert result is False
        assert session.correct_answers == 0
        assert session.total_questions == 1

    def test_end_session(self, sample_deck):
        """Test ending a quiz session."""
        engine = QuizEngine()
        session = engine.start_session(sample_deck)

        # Answer some questions
        card1 = session.get_next_card()
        session.submit_answer(card1.answer)

        card2 = session.get_next_card()
        session.submit_answer("Wrong answer")

        # End the session
        summary = session.end_session()
        assert summary['total_questions'] == 2
        assert summary['correct_answers'] == 1
        assert summary['score_percentage'] == 50.0
        assert 'duration' in summary
```
### Integration Test Examples

```python
# tests/integration/test_quiz_workflow.py
from flashgenie.core.flashcard import Flashcard
from flashgenie.core.deck import Deck
from flashgenie.core.quiz_engine import QuizEngine
from flashgenie.core.difficulty_analyzer import DifficultyAnalyzer
from flashgenie.data.storage import DataStorage


class TestQuizWorkflow:
    """Integration tests for the complete quiz workflow."""

    def test_complete_quiz_session(self, temp_data_dir):
        """Test a complete quiz session from start to finish."""
        # Create test data
        cards = [
            Flashcard("Q1", "A1", difficulty=1.0),
            Flashcard("Q2", "A2", difficulty=2.0),
            Flashcard("Q3", "A3", difficulty=1.5),
        ]
        deck = Deck("Integration Test Deck", flashcards=cards)

        # Initialize components
        storage = DataStorage(data_dir=temp_data_dir)
        analyzer = DifficultyAnalyzer()
        quiz_engine = QuizEngine(difficulty_analyzer=analyzer)

        # Save the deck
        storage.save_deck(deck)

        # Start a quiz session
        session = quiz_engine.start_session(deck)

        # Simulate quiz interaction
        results = []
        while session.has_more_cards():
            card = session.get_next_card()
            if card:
                # Simulate the user answering (alternate correct/incorrect)
                is_correct = len(results) % 2 == 0
                answer = card.answer if is_correct else "wrong"
                result = session.submit_answer(answer)
                results.append(result)

        # End the session
        summary = session.end_session()

        # Verify results
        assert summary['total_questions'] == len(cards)
        assert summary['correct_answers'] == len([r for r in results if r])
        assert 'duration' in summary
        assert 'score_percentage' in summary

        # Verify difficulty adjustments were made
        for card in deck.flashcards:
            assert hasattr(card, 'last_reviewed')
            assert card.review_count > 0
```
### Property-Based Testing

```python
# tests/unit/core/test_flashcard_properties.py
from hypothesis import given, strategies as st

from flashgenie.core.flashcard import Flashcard


class TestFlashcardProperties:
    """Property-based tests for the Flashcard class."""

    @given(
        question=st.text(min_size=1, max_size=1000),
        answer=st.text(min_size=1, max_size=1000),
        difficulty=st.floats(min_value=0.1, max_value=5.0),
    )
    def test_flashcard_creation_properties(self, question, answer, difficulty):
        """Test flashcard creation with a wide range of inputs."""
        card = Flashcard(question, answer, difficulty=difficulty)

        assert card.question == question
        assert card.answer == answer
        assert 0.1 <= card.difficulty <= 5.0

    @given(
        tags=st.lists(st.text(min_size=1, max_size=50), min_size=0, max_size=20)
    )
    def test_tag_operations_properties(self, tags):
        """Test tag operations with arbitrary tag lists."""
        card = Flashcard("Test", "Test", tags=tags)

        # All tags should be present
        for tag in tags:
            assert tag in card.tags

        # Adding an existing tag should not create a duplicate
        if tags:
            original_count = len(card.tags)
            card.add_tag(tags[0])
            assert len(card.tags) == original_count
```
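The property being exercised here (difficulty always lands in [0.1, 5.0]) can also be spot-checked without hypothesis, using a plain random loop. The `clamp_difficulty` helper below is a stand-in for the behavior `Flashcard.update_difficulty` is expected to have, not FlashGenie's actual implementation:

```python
import random

# Bounds from the difficulty-clamping tests above.
MIN_DIFFICULTY, MAX_DIFFICULTY = 0.1, 5.0


def clamp_difficulty(value: float) -> float:
    """Stand-in for the clamping Flashcard.update_difficulty should perform."""
    return max(MIN_DIFFICULTY, min(MAX_DIFFICULTY, value))


# Spot-check the invariant over many random inputs, as hypothesis would.
rng = random.Random(42)  # fixed seed keeps the check reproducible
for _ in range(1_000):
    value = rng.uniform(-10.0, 10.0)
    result = clamp_difficulty(value)
    assert MIN_DIFFICULTY <= result <= MAX_DIFFICULTY
    if MIN_DIFFICULTY <= value <= MAX_DIFFICULTY:
        assert result == value  # in-range values pass through unchanged
```

Hypothesis does the same thing more thoroughly (shrinking failing inputs to minimal counterexamples), which is why it is the preferred tool here.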
## Test Configuration

### pytest.ini

```ini
[tool:pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
    --strict-markers
    --strict-config
    --verbose
    --tb=short
    --cov=flashgenie
    --cov-report=term-missing
    --cov-report=html:htmlcov
    --cov-fail-under=80
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests as integration tests
    unit: marks tests as unit tests
    e2e: marks tests as end-to-end tests
    performance: marks tests as performance tests
filterwarnings =
    ignore::UserWarning
    ignore::DeprecationWarning
```
### tox.ini

```ini
[tox]
envlist = py38,py39,py310,py311,flake8,mypy,coverage

[testenv]
deps =
    pytest
    pytest-cov
    pytest-mock
    pytest-asyncio
    hypothesis
commands = pytest {posargs}

[testenv:flake8]
deps = flake8
commands = flake8 flashgenie tests

[testenv:mypy]
deps = mypy
commands = mypy flashgenie

[testenv:coverage]
deps =
    pytest
    pytest-cov
commands =
    pytest --cov=flashgenie --cov-report=html --cov-report=term
    coverage report --fail-under=80
```
## Coverage and Quality

### Code Coverage

```bash
# Generate an HTML coverage report
python -m pytest --cov=flashgenie --cov-report=html

# View the report in a browser (macOS; use xdg-open on Linux)
open htmlcov/index.html

# Fail the run if coverage drops below the threshold
python -m pytest --cov=flashgenie --cov-fail-under=80
```
### Quality Metrics

```bash
# Linting
flake8 flashgenie/ tests/

# Static type checking
mypy flashgenie/

# Security scanning
bandit -r flashgenie/

# Cyclomatic complexity analysis
radon cc flashgenie/ -a
```
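These checks can be run automatically before every commit with the pre-commit framework. The `.pre-commit-config.yaml` below is a sketch, not part of FlashGenie's actual configuration; the `rev` pins are examples and should be updated to current releases:

```yaml
# .pre-commit-config.yaml (sketch; update rev pins to current releases)
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 6.1.0          # example pin
    hooks:
      - id: flake8
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.5.1         # example pin
    hooks:
      - id: mypy
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.5          # example pin
    hooks:
      - id: bandit
        args: ["-r", "flashgenie/"]
```

After `pip install pre-commit` and a one-time `pre-commit install`, the hooks run on every `git commit`.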
## Performance Testing

### Load Testing

```python
# tests/performance/test_large_decks.py
import time

import pytest

from flashgenie.core.deck import Deck
from flashgenie.core.flashcard import Flashcard
from flashgenie.core.quiz_engine import QuizEngine


class TestPerformance:
    """Performance tests for FlashGenie."""

    @pytest.mark.slow
    def test_large_deck_creation(self):
        """Test performance with a large deck."""
        # perf_counter is monotonic, so it is better for timing than time.time()
        start_time = time.perf_counter()

        # Create a large deck
        cards = [
            Flashcard(f"Question {i}", f"Answer {i}")
            for i in range(10000)
        ]
        deck = Deck("Large Deck", flashcards=cards)

        creation_time = time.perf_counter() - start_time

        # Should create a large deck in reasonable time
        assert creation_time < 5.0  # 5 seconds max
        assert len(deck.flashcards) == 10000

    @pytest.mark.slow
    def test_quiz_session_performance(self):
        """Test quiz session performance."""
        # Create a test deck
        cards = [Flashcard(f"Q{i}", f"A{i}") for i in range(1000)]
        deck = Deck("Performance Test", flashcards=cards)
        engine = QuizEngine()

        start_time = time.perf_counter()
        session = engine.start_session(deck)

        # Answer 100 questions
        for _ in range(100):
            card = session.get_next_card()
            if card:
                session.submit_answer(card.answer)

        session_time = time.perf_counter() - start_time

        # Should handle 100 questions quickly
        assert session_time < 1.0  # 1 second max
```
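The directory layout above also lists `tests/performance/test_memory_usage.py`. A memory test can be built on the standard-library `tracemalloc` module; the sketch below uses plain tuples as a stand-in for card objects so it runs anywhere (the real test would build `Flashcard` instances, and would carry a `@pytest.mark.slow` marker, omitted here to keep the sketch dependency-free):

```python
# tests/performance/test_memory_usage.py (sketch)
import tracemalloc


def measure_peak_memory(n_cards: int) -> int:
    """Return peak traced memory (bytes) while building n_cards Q/A pairs.

    Tuples stand in for Flashcard objects so this sketch is self-contained.
    """
    tracemalloc.start()
    cards = [(f"Question {i}", f"Answer {i}") for i in range(n_cards)]
    _current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    assert len(cards) == n_cards  # keep the list alive until measurement
    return peak


def test_large_deck_memory():
    """Peak memory for 10,000 cards should stay under a generous budget."""
    peak = measure_peak_memory(10_000)
    # 50 MB is a deliberately loose, illustrative bound
    assert peak < 50 * 1024 * 1024
```

Budgets like the 50 MB bound here (and the timing bounds above) are environment-dependent, so keep them loose enough to avoid flaky failures on slow CI machines.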
## Continuous Integration

### GitHub Actions

```yaml
# .github/workflows/test.yml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r dev-requirements.txt
      - name: Run tests
        run: python -m pytest --cov=flashgenie --cov-report=xml
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
```
## Best Practices

### Test Writing Guidelines

- **Test names**: Use descriptive names that explain what is being tested
- **Arrange-Act-Assert**: Structure tests with clear setup, execution, and verification phases
- **One assertion**: Focus each test method on a single behavior
- **Test data**: Use fixtures and factories for consistent test data
- **Mocking**: Mock external dependencies to isolate the unit under test
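The "Test data" guideline is often implemented as a factory: a function with sensible defaults that each test can override, so variations do not require repeated boilerplate. A minimal sketch, using a stand-in dataclass rather than FlashGenie's real `Flashcard` (the `Card` and `make_card` names are illustrative):

```python
from dataclasses import dataclass, field


@dataclass
class Card:
    """Minimal stand-in for Flashcard, just for this sketch."""
    question: str
    answer: str
    tags: list = field(default_factory=list)
    difficulty: float = 1.0


def make_card(**overrides) -> Card:
    """Factory: sensible defaults, overridable per test."""
    defaults = {"question": "What is Python?", "answer": "A programming language"}
    defaults.update(overrides)
    return Card(**defaults)


# Arrange-Act-Assert in miniature:
card = make_card(difficulty=2.0)   # Arrange: build only what the test needs
card.tags.append("programming")    # Act: exercise one behavior
assert card.difficulty == 2.0      # Assert: verify the outcome
assert card.tags == ["programming"]
```

In a pytest suite, `make_card` would typically be exposed as a fixture in `conftest.py` so every test module can reuse it.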
### Test Maintenance

- **Regular updates**: Keep tests in sync with code changes
- **Flaky tests**: Fix or remove unreliable tests immediately
- **Test coverage**: Maintain high coverage, but favor quality over quantity
- **Performance**: Keep the test suite fast to encourage frequent runs
- **Documentation**: Document complex test scenarios and edge cases
Ready to write comprehensive tests? Start with our test templates and ensure FlashGenie remains reliable and robust!