testing guide - MrTibbz2/CaveBot GitHub Wiki
Testing Guide
Overview
The testing framework covers all system components through unit tests, integration tests, and hardware-in-the-loop testing.
Test Structure
tests/
├── __init__.py # Test package initialization
├── test_ui_simulation.py # UI simulation tests
├── test_rpi_serial.py # RPI serial interface tests
├── test_spikeprime_commands.py # Spike Prime command tests
├── test_integration.py # System integration tests
├── requirements.txt # Test dependencies
└── run_tests.py # Test runner script
Running Tests
Quick Start
# Install test dependencies
cd tests/
pip install -r requirements.txt
# Run all tests
python run_tests.py
Individual Test Modules
# Run specific test file
python -m unittest test_ui_simulation.py
# Run with verbose output
python -m unittest test_ui_simulation.py -v
# Run specific test method
python -m unittest test_ui_simulation.TestRobotSimulator.test_line_intersection_basic
Using pytest (Alternative)
# Install pytest
pip install pytest pytest-mock
# Run all tests
pytest
# Run with coverage
pytest --cov=src
# Run specific test
pytest tests/test_ui_simulation.py::TestRobotSimulator::test_sensor_config_integrity
Test Categories
Unit Tests
test_ui_simulation.py (UI Simulation Tests)
- Line intersection calculations: Verify sensor ray collision detection
- Sensor configuration: Validate sensor positioning and angles
- Robot movement: Test movement and rotation functions
- Data structures: Verify sensor data queue operations
def test_line_intersection_basic(self):
    # Test basic line segment intersection
    p1, p2 = (0, 0), (10, 0)
    p3, p4 = (5, -5), (5, 5)
    result = self.simulator._line_segment_intersection(p1, p2, p3, p4)
    self.assertEqual(result, (5, 0))
test_rpi_serial.py (RPI Serial Tests)
- Port detection: Test Pico USB port discovery
- Message parsing: Validate JSON message parsing
- Connection handling: Test connection establishment and error recovery
- Data validation: Verify incoming data format compliance
def test_parse_pico_message_valid_json(self):
    test_message = 'INFO: { "type": "system_status", "status": "ready" }'
    result = self.interface._parse_pico_message(test_message)
    expected = {'identifier': 'INFO', 'type': 'system_status', 'status': 'ready'}
    self.assertEqual(result, expected)
test_spikeprime_commands.py (Spike Prime Tests)
- Command formatting: Test BLE command structure
- Parameter validation: Verify input parameter checking
- Connection management: Test BLE connection handling
- Error handling: Validate error response processing
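To make the command-formatting and parameter-validation categories concrete, here is a minimal, self-contained sketch of the kind of check such a test exercises. `format_move_command`, its wire format, and its limits are illustrative assumptions, not the project's actual BLE protocol:

```python
# Hypothetical command formatter standing in for the real BLE command builder.
def format_move_command(distance_mm: int, speed: int) -> str:
    """Build a move command string, rejecting out-of-range parameters."""
    if distance_mm <= 0:
        raise ValueError("distance_mm must be positive")
    if not 0 < speed <= 100:
        raise ValueError("speed must be in 1-100")
    return f"MOVE:{distance_mm}:{speed}"

# Parameter validation: well-formed input passes, bad input raises.
assert format_move_command(100, 50) == "MOVE:100:50"
try:
    format_move_command(-1, 50)
except ValueError:
    pass
else:
    raise AssertionError("negative distance should be rejected")
```

A real test would assert the same two behaviours through the `Prime` API with the BLE connection mocked out.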
test_integration.py (Integration Tests)
Data Flow Testing
- Pico to UI pipeline: Test complete data flow from sensors to UI
- Command protocol consistency: Verify command formats across components
- Message transformation: Test data format conversions
- Timing and synchronization: Validate real-time data handling
System Protocol Testing
- Communication protocols: Test all inter-component communication
- Error propagation: Verify error handling across system boundaries
- State synchronization: Test system state consistency
- Recovery procedures: Validate system recovery from failures
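The error-propagation bullet can be sketched as a two-stage pipeline in which a parse failure in one component surfaces as a structured error at the system boundary. The function names and the error-message shape here are illustrative assumptions, not project code:

```python
import json

def parse_stage(raw_line):
    """Parse an 'IDENT: {json}' line; raise ValueError on malformed input."""
    ident, sep, body = raw_line.partition(": ")
    if not sep:
        raise ValueError(f"missing identifier separator: {raw_line!r}")
    try:
        return {"identifier": ident, **json.loads(body)}
    except json.JSONDecodeError as exc:
        raise ValueError(f"bad JSON payload: {exc}") from exc

def pipeline(raw_line):
    """Boundary wrapper: convert lower-level failures into error messages."""
    try:
        return parse_stage(raw_line)
    except ValueError as exc:
        return {"type": "error", "detail": str(exc)}

# A valid line flows through; a garbled one propagates as a structured error.
assert pipeline('INFO: {"type": "system_status"}')["type"] == "system_status"
assert pipeline("garbage with no separator")["type"] == "error"
```

An integration test would then assert that this error object, not an exception, is what reaches the UI.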
Mock Testing
Hardware Mocking
Tests use extensive mocking to simulate hardware components:
@patch('turtle.Screen')
@patch('turtle.Turtle')
def setUp(self, mock_turtle, mock_screen):
    self.simulator = RobotSimulator()
@patch('serial.Serial')
def test_serial_connection(self, mock_serial):
    interface = PicoSerialInterface(baudrate=115200)
    # Test without actual hardware
BLE Mocking
@patch('primeCommands.BLEConnection')
def test_spike_prime_commands(self, mock_ble):
    prime = Prime("TEST_HUB")
    prime.moveForward(100)
    mock_ble.return_value.send_command.assert_called()
Test Data
Sample Messages
# Pico message format
PICO_SENSOR_MESSAGE = 'Core1: { "type": "data_stream", "status": "active", "payload": [ {"sensor_id": "front_left", "average": 150.2} ] }'
# UI WebSocket format
UI_WEBSOCKET_MESSAGE = {
    "type": "data_stream",
    "subtype": "distance_read",
    "timestamp": "2025-01-01T12:00:00Z",
    "payload": {"sensor_leftfront": 150.2}
}
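The transformation between these two formats can be sketched as follows. The sensor-id mapping and the helper name `pico_to_ui` are assumptions inferred from the two sample messages, not the project's actual converter:

```python
import json

# Assumed id mapping, inferred from the sample messages above; the real
# mapping table may differ.
SENSOR_KEY_MAP = {"front_left": "sensor_leftfront"}

def pico_to_ui(pico_line, timestamp):
    """Convert a raw Pico serial line into the UI WebSocket message shape."""
    _ident, _sep, body = pico_line.partition(": ")
    msg = json.loads(body)
    payload = {SENSOR_KEY_MAP[r["sensor_id"]]: r["average"]
               for r in msg["payload"]}
    return {"type": msg["type"], "subtype": "distance_read",
            "timestamp": timestamp, "payload": payload}

pico_line = ('Core1: { "type": "data_stream", "status": "active", '
             '"payload": [ {"sensor_id": "front_left", "average": 150.2} ] }')
ui_msg = pico_to_ui(pico_line, "2025-01-01T12:00:00Z")
assert ui_msg["payload"] == {"sensor_leftfront": 150.2}
```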
Test Fixtures
# Sensor configuration test data
EXPECTED_SENSORS = ["leftfront", "leftback", "rightfront", "rightback",
"frontleft", "frontright", "backleft", "backright"]
# Valid distance ranges
VALID_DISTANCE_RANGE = (0, 400) # HC-SR04 sensor range in cm
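A sketch of how these fixtures might back a validation helper; `is_valid_reading` itself is illustrative, not project code:

```python
EXPECTED_SENSORS = ["leftfront", "leftback", "rightfront", "rightback",
                    "frontleft", "frontright", "backleft", "backright"]
VALID_DISTANCE_RANGE = (0, 400)  # HC-SR04 sensor range in cm

def is_valid_reading(sensor_id, distance_cm):
    """Accept only known sensor ids with distances inside the sensor's range."""
    low, high = VALID_DISTANCE_RANGE
    return sensor_id in EXPECTED_SENSORS and low <= distance_cm <= high

assert is_valid_reading("leftfront", 150.2)
assert not is_valid_reading("leftfront", 450)   # beyond HC-SR04 range
assert not is_valid_reading("unknown", 100)     # unrecognised sensor id
```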
Hardware-in-the-Loop Testing
Manual Testing Procedures
Pico Firmware Testing
- Flash firmware to Pico
- Connect to serial terminal
- Send test commands: CMD_STATUS, CMD_START_SENSORREAD
- Verify response format and timing
- Test error conditions (invalid commands)
RPI Interface Testing
- Connect Pico to RPI
- Run python main.py in the rpi/ directory
- Monitor console output for connection messages
- Verify data parsing and forwarding
- Test reconnection after Pico reset
Web UI Testing
- Start Web UI server
- Open a browser to http://localhost:8000
- Verify WebSocket connection
- Test simulation controls
- Validate real-time data display
Spike Prime Testing
- Upload PyBricks code to hub
- Run Python control script
- Test movement commands
- Verify BLE communication
- Test error recovery
Automated Hardware Tests
def test_end_to_end_data_flow():
    """Test complete data flow from Pico to UI."""
    # 1. Start all system components
    # 2. Send test sensor data from the Pico
    # 3. Verify the data appears in the UI
    # 4. Validate data accuracy and timing
Performance Testing
Timing Tests
- Sensor reading frequency: Verify 10Hz minimum data rate
- Communication latency: Test serial and WebSocket delays
- Command response time: Measure command execution timing
- System throughput: Test maximum sustainable data rate
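The 10 Hz data-rate check above can be expressed as a small helper over collected timestamps; the helper name and the sample data are illustrative:

```python
def data_rate_hz(timestamps):
    """Effective sample rate from a list of monotonically increasing timestamps (s)."""
    if len(timestamps) < 2:
        return 0.0
    span = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / span

# Eleven samples spaced 0.1 s apart -> 10 Hz effective rate.
samples = [i * 0.1 for i in range(11)]
assert abs(data_rate_hz(samples) - 10.0) < 1e-9
```

A timing test would feed this helper the timestamps of real sensor messages and assert the result meets the 10 Hz minimum.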
Memory Tests
- Memory usage: Monitor RAM consumption over time
- Memory leaks: Test for memory growth during operation
- Buffer overflow: Test with high data rates
- Resource cleanup: Verify proper resource deallocation
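One way to automate the memory-usage check is with the standard-library tracemalloc module; the wrapper below is a sketch, not the project's harness:

```python
import tracemalloc

def measure_peak_kib(workload):
    """Run workload() under tracemalloc and report peak allocation in KiB."""
    tracemalloc.start()
    try:
        workload()
        _, peak_bytes = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    return peak_bytes / 1024

# Holding 512 one-KiB buffers at once should show a peak of at least 512 KiB.
peak = measure_peak_kib(lambda: [bytearray(1024) for _ in range(512)])
assert peak >= 512
```

Running the same workload repeatedly and asserting the peak stays flat is a simple leak check.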
Continuous Integration
Automated Test Execution
# Example GitHub Actions workflow
name: Run Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.10"  # quoted so YAML keeps it as a string, not the float 3.1
      - name: Install dependencies
        run: pip install -r tests/requirements.txt
      - name: Run tests
        run: python tests/run_tests.py
Test Coverage
- Target: >90% code coverage
- Tools: pytest-cov, coverage.py
- Reports: HTML and XML coverage reports
- Integration: Coverage reporting in CI/CD pipeline
Debugging Tests
Common Test Failures
Import Errors
# Fix Python path issues
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
python -m unittest tests.test_ui_simulation
Mock Issues
# Verify mock setup
@patch('module.Class')
def test_function(self, mock_class):
    mock_class.return_value.method.return_value = expected_value
    # Test code here
    mock_class.assert_called_once()
Timing Issues
# Use time mocking for consistent tests
@patch('time.time')
def test_timing(self, mock_time):
    mock_time.return_value = 1640995200  # Fixed timestamp
    # Test time-dependent code
Test Debugging Tools
- pdb: Python debugger for interactive debugging
- pytest --pdb: Drop into debugger on test failure
- logging: Add debug logging to tests
- assert messages: Include descriptive assertion messages
def test_with_debugging(self):
    result = function_under_test()
    self.assertEqual(result, expected, f"Expected {expected}, got {result}")