# Contributing Workflow
This guide outlines the complete workflow for contributing to IoWarp MCPs, from issue creation to deployment.
## Creating Issues

Create well-structured issues that provide clear context:
## Issue Template
```markdown
### Issue Type
- [ ] Bug Report
- [ ] Feature Request
- [ ] Documentation
- [ ] Performance Issue
- [ ] Security Issue
### Description
[Clear description of the issue or feature request]
### Expected Behavior
[What should happen]
### Actual Behavior
[What actually happens - for bugs only]
### Steps to Reproduce
[For bugs only]
1. Step 1
2. Step 2
3. Step 3
### Environment
- OS: [e.g., Ubuntu 22.04]
- Python Version: [e.g., 3.11.5]
- MCP Server: [e.g., Pandas MCP v1.2.0]
- Claude Desktop Version: [if applicable]
### Additional Context
[Any additional information, logs, screenshots, etc.]
### Acceptance Criteria
[For features only - what constitutes "done"]
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3
```
## Issue Labels

Use consistent labels for issue categorization:
**Type Labels:**

- `bug` - Something isn't working
- `enhancement` - New feature or improvement
- `documentation` - Documentation related
- `performance` - Performance related
- `security` - Security related
- `testing` - Testing related

**Priority Labels:**

- `priority/low` - Nice to have
- `priority/medium` - Should have
- `priority/high` - Must have
- `priority/critical` - Urgent fix needed

**Status Labels:**

- `status/triage` - Needs initial review
- `status/accepted` - Approved for development
- `status/in-progress` - Currently being worked on
- `status/blocked` - Blocked by external dependency
- `status/review` - Ready for review

**Component Labels:**

- `component/server` - MCP server related
- `component/tools` - Tool implementation
- `component/docs` - Documentation
- `component/tests` - Testing
- `component/ci` - CI/CD related
## Branching Strategy

Follow a GitFlow-inspired branching model:

```
main                   # Production-ready code
├── dev                # Integration branch
│   ├── feature/xxx    # Feature branches
│   ├── bugfix/xxx     # Bug fix branches
│   └── hotfix/xxx     # Hotfix branches
└── release/x.x.x      # Release preparation branches
```
## Development Workflow

**Create Feature Branch**

```bash
git checkout dev
git pull origin dev
git checkout -b feature/your-feature-name
```
**Development Guidelines**

- Follow coding standards outlined in the implementation guides
- Write tests for new functionality
- Update documentation as needed
- Follow security best practices
- Add appropriate logging and error handling (a sketch appears at the end of this section)
**Commit Message Format**

```
<type>(<scope>): <subject>

<body>

<footer>
```
Types:
- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation-only changes
- `style`: Formatting, missing semicolons, etc.
- `refactor`: Code change that neither fixes a bug nor adds a feature
- `perf`: Performance improvement
- `test`: Adding missing tests
- `chore`: Changes to the build process or auxiliary tools
Example:

```
feat(pandas): add data visualization tool

- Implement matplotlib integration
- Add support for multiple chart types
- Include error handling for invalid data

Closes #123
```
**Regular Commits**

- Make small, logical commits
- Each commit should represent a single logical change
- Write clear commit messages
- Push regularly to back up work
**PR Template**

```markdown
## Description

[Brief description of changes]

## Type of Change

- [ ] Bug fix (non-breaking change that fixes an issue)
- [ ] New feature (non-breaking change that adds functionality)
- [ ] Breaking change (fix or feature that causes existing functionality to not work as expected)
- [ ] Documentation update

## Related Issues

Closes #[issue number]

## Testing

- [ ] Unit tests pass
- [ ] Integration tests pass
- [ ] Manual testing completed
- [ ] Performance testing (if applicable)

## Checklist

- [ ] Code follows project style guidelines
- [ ] Self-review completed
- [ ] Documentation updated
- [ ] Tests added/updated
- [ ] No breaking changes (or properly documented)
- [ ] Security considerations addressed

## Screenshots/Logs

[If applicable, add screenshots or logs]

## Additional Notes

[Any additional information for reviewers]
```
**PR Requirements**

- Target the `dev` branch (unless hotfix)
- Pass all CI/CD checks
- Include comprehensive tests
- Update documentation
- Follow security guidelines
- Add a meaningful description
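The logging and error-handling guideline above is easiest to show in code. Below is a minimal sketch of the pattern; the `DataLoadTool` class, its parameters, and its return shape are illustrative, not part of the iowarp-mcps API.

```python
# Illustrative sketch only: tool name, parameters, and return shape are
# hypothetical, not taken from the iowarp-mcps codebase.
import logging

logger = logging.getLogger(__name__)


class DataLoadTool:
    """Example tool showing the logging and error-handling conventions."""

    async def execute(self, params: dict) -> dict:
        path = params.get("path")
        if not path:
            # Validate inputs early and fail with a clear message
            raise ValueError("'path' parameter is required")

        logger.info("Loading data from %s", path)
        try:
            data = self._load(path)
        except OSError as exc:
            # Log with context, then surface a structured error to the caller
            logger.error("Failed to load %s: %s", path, exc)
            return {"status": "error", "message": str(exc)}

        logger.debug("Loaded %d records", len(data))
        return {"status": "success", "data": data}

    def _load(self, path: str) -> list:
        with open(path) as f:
            return f.readlines()
```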
## Code Review

For reviewers:
**Code Quality Review**

- Code follows established patterns
- Error handling is appropriate
- Security best practices followed
- Performance considerations addressed
- No code smells or anti-patterns
**Functionality Review**

- Requirements are met
- Edge cases handled
- Integration points work correctly
- User experience is intuitive
**Testing Review**

- Adequate test coverage
- Tests are meaningful and not just for coverage
- Integration tests included where appropriate
- Manual testing scenarios covered
**Documentation Review**

- Code is self-documenting
- Complex logic is commented
- API documentation updated
- User documentation updated
Review Process:
- Assign reviewers (at least 2 for significant changes)
- Address feedback promptly
- Re-request review after changes
- Require approval from all reviewers
- Merge only after all checks pass
## Testing Requirements

**Unit Tests**

```python
import pytest

from your_mcp_server.tools import YourTool


@pytest.mark.asyncio
async def test_tool_success_case():
    """Test successful tool execution."""
    tool = YourTool()

    # Arrange
    input_data = {"param1": "value1", "param2": "value2"}
    expected_result = {"status": "success", "data": "processed"}

    # Act
    result = await tool.execute(input_data)

    # Assert
    assert result == expected_result


@pytest.mark.asyncio
async def test_tool_error_handling():
    """Test tool error handling."""
    tool = YourTool()

    # Test invalid input
    with pytest.raises(ValueError):
        await tool.execute({"invalid": "data"})
```
**Integration Tests**

```python
import pytest

from your_mcp_server.server import MCPServer  # adjust to your server module


@pytest.mark.integration
@pytest.mark.asyncio
async def test_full_mcp_workflow():
    """Test complete MCP server workflow."""
    # Test server startup
    server = MCPServer()
    await server.start()

    try:
        # Test tool discovery
        tools = await server.list_tools()
        assert len(tools) > 0

        # Test tool execution
        result = await server.call_tool("test_tool", {"data": "test"})
        assert result["status"] == "success"
    finally:
        await server.stop()
```
**Performance Tests**

```python
import time

import pytest

from your_mcp_server.tools import YourTool


@pytest.mark.performance
@pytest.mark.asyncio
async def test_tool_performance():
    """Test tool performance requirements."""
    tool = YourTool()

    start_time = time.time()
    result = await tool.execute({"large_dataset": range(10000)})
    execution_time = time.time() - start_time

    # Performance requirement: complete within 5 seconds
    assert execution_time < 5.0
    assert result["status"] == "success"
```
**Shared Fixtures**

```python
# conftest.py
import asyncio
from unittest.mock import Mock

import pytest


@pytest.fixture(scope="session")
def event_loop():
    """Create event loop for async tests."""
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()


@pytest.fixture
def mock_database():
    """Mock database for testing."""
    db = Mock()
    db.query.return_value = [{"id": 1, "name": "test"}]
    return db


@pytest.fixture
def test_config():
    """Test configuration."""
    return {
        "api_key": "test_key",
        "database_url": "sqlite:///:memory:",
        "log_level": "DEBUG",
    }


def pytest_configure(config):
    """Register marks for test categorization."""
    for mark in ("unit", "integration", "performance", "security"):
        config.addinivalue_line("markers", f"{mark}: {mark} tests")
```
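For reference, a test consumes these fixtures simply by naming them as parameters; the test body below is illustrative:

```python
import pytest


@pytest.mark.unit
def test_query_returns_rows(mock_database, test_config):
    """Fixtures are injected by name; no imports from conftest.py needed."""
    rows = mock_database.query("SELECT * FROM users")
    assert rows == [{"id": 1, "name": "test"}]
    assert test_config["log_level"] == "DEBUG"
```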
## CI/CD Pipeline

```yaml
# .github/workflows/ci.yml
name: CI/CD Pipeline

on:
  push:
    branches: [ main, dev ]
  pull_request:
    branches: [ main, dev ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install uv
          uv pip install --system -r requirements-dev.txt
      - name: Lint with ruff
        run: |
          ruff check .
          ruff format --check .
      - name: Type check with mypy
        run: mypy src/
      - name: Run unit tests
        run: |
          pytest tests/unit/ -v --cov=src/ --cov-report=xml
      - name: Run integration tests
        run: |
          pytest tests/integration/ -v
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml

  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run security scan
        uses: pypa/[email protected]
        with:
          inputs: requirements.txt
      - name: Run Bandit security scan
        run: |
          pip install bandit
          bandit -r src/ -f json -o bandit-report.json
      - name: Upload security scan results
        uses: actions/upload-artifact@v3
        with:
          name: security-report
          path: bandit-report.json

  performance:
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: |
          pip install uv
          uv pip install --system -r requirements-dev.txt
      - name: Run performance tests
        run: |
          pytest tests/performance/ -v --benchmark-only
      - name: Performance regression check
        run: |
          python scripts/check_performance_regression.py
```
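The regression step runs `scripts/check_performance_regression.py`, which lives in the repository and is not shown here. As a rough sketch of what such a check can do, assuming the performance job also writes pytest-benchmark JSON output (e.g., via `--benchmark-json=benchmark.json`) and a committed baseline file — both file names and the threshold below are assumptions:

```python
# Illustrative sketch, not the actual repository script.
import json
import sys

ALLOWED_SLOWDOWN = 1.20  # fail if a benchmark is >20% slower than baseline


def mean_times(path: str) -> dict:
    """Map benchmark name -> mean runtime from a pytest-benchmark JSON report."""
    with open(path) as f:
        report = json.load(f)
    return {b["name"]: b["stats"]["mean"] for b in report["benchmarks"]}


def main() -> int:
    baseline = mean_times("benchmarks/baseline.json")
    current = mean_times("benchmark.json")
    failures = [
        name
        for name, mean in current.items()
        if name in baseline and mean > baseline[name] * ALLOWED_SLOWDOWN
    ]
    if failures:
        print(f"Performance regressions detected: {', '.join(failures)}")
        return 1
    print("No performance regressions detected.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```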
## Pre-commit Hooks

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-json
      - id: check-merge-conflict
      - id: check-added-large-files

  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.1.6
    hooks:
      - id: ruff
        args: [--fix, --exit-non-zero-on-fix]
      - id: ruff-format

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.7.1
    hooks:
      - id: mypy
        additional_dependencies: [types-all]

  - repo: https://github.com/pycqa/bandit
    rev: 1.7.5
    hooks:
      - id: bandit
        args: [-c, pyproject.toml]

  - repo: local
    hooks:
      - id: pytest-unit
        name: pytest-unit
        entry: pytest tests/unit/
        language: system
        pass_filenames: false
        always_run: true
```
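After adding this config, each contributor installs the hooks once; they then run automatically on every commit:

```bash
pip install pre-commit
pre-commit install          # set up the git hook
pre-commit run --all-files  # optional: check the whole tree once
```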
## Release Process

Use semantic versioning (MAJOR.MINOR.PATCH):

- MAJOR: Breaking changes
- MINOR: New features (backward compatible)
- PATCH: Bug fixes (backward compatible)

For example, starting from v1.2.0: a bug fix ships as v1.2.1, a new backward-compatible tool as v1.3.0, and a change that breaks an existing tool's interface as v2.0.0.
**Prepare Release**

```bash
# Create release branch
git checkout dev
git pull origin dev
git checkout -b release/v1.2.0

# Update version numbers
# Update CHANGELOG.md
# Run final tests
```
**Release Checklist**

- All tests passing
- Documentation updated
- CHANGELOG.md updated
- Version numbers bumped
- Security scan clean
- Performance benchmarks acceptable
- Breaking changes documented
- Migration guide created (if needed)
**Create Release**

```bash
# Merge to main
git checkout main
git merge release/v1.2.0
git tag -a v1.2.0 -m "Release v1.2.0"
git push origin main --tags

# Merge back to dev
git checkout dev
git merge main
git push origin dev

# Delete release branch
git branch -d release/v1.2.0
```
**Post-Release**

- Create GitHub release with notes (example below)
- Update documentation
- Announce in relevant channels
- Monitor for issues
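For the GitHub release step, the `gh` CLI can create a release from the tag pushed above; the notes file name is illustrative:

```bash
# Create a GitHub release for the tag pushed above
gh release create v1.2.0 --title "v1.2.0" --notes-file RELEASE_NOTES.md
```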
## Documentation Requirements

**Code Documentation**

- Update docstrings for new/changed functions (a sketch follows at the end of this section)
- Add type hints
- Include usage examples
- Document exceptions and edge cases
**API Documentation**

- Update OpenAPI specifications
- Add new tool descriptions
- Include parameter examples
- Document error responses
**User Documentation**

- Update README files
- Add tutorial content
- Include troubleshooting guides
- Update installation instructions
Documentation review checklist:

- Technical accuracy review
- Grammar and style check
- Link validation
- Example code testing
- User experience validation
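As a reference point for the code documentation expectations above, a docstring in this style might look like the following (the function and its details are illustrative):

```python
def filter_rows(rows: list[dict], column: str, value: str) -> list[dict]:
    """Return the rows whose ``column`` equals ``value``.

    Args:
        rows: Records to filter, one dict per row.
        column: Name of the column to compare.
        value: Value the column must equal.

    Returns:
        The matching rows, in their original order.

    Raises:
        KeyError: If ``column`` is missing from any row.

    Example:
        >>> filter_rows([{"name": "a"}, {"name": "b"}], "name", "a")
        [{'name': 'a'}]
    """
    return [row for row in rows if row[column] == value]
```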
## Definition of Done

A feature or fix is considered "done" when:

**Functionality**

- Requirements implemented correctly
- Edge cases handled
- Error handling implemented
- Performance requirements met
**Code Quality**

- Code review completed and approved
- Follows coding standards
- No code smells
- Security best practices followed
**Testing**

- Unit tests written and passing
- Integration tests passing
- Manual testing completed
- Performance tests passing (if applicable)
**Documentation**

- Code documented
- API documentation updated
- User documentation updated
- CHANGELOG.md updated
**Review & Approval**

- Peer review completed
- Technical lead approval
- Security review (for security-related changes)
- Architecture review (for significant changes)
## Quality Metrics

Monitor these metrics to maintain quality:

- Code Coverage: > 80% (enforcement example below)
- Test Pass Rate: 100%
- Security Scan: No high/critical issues
- Performance: Within established benchmarks
- Documentation Coverage: All public APIs documented
- Review Participation: All PRs reviewed by ≥2 people
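The coverage target can be enforced locally and in CI with pytest-cov's threshold flag:

```bash
# Fail the run if total coverage drops below 80%
pytest tests/ --cov=src/ --cov-fail-under=80
```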
## Troubleshooting

**Test Failures**

```bash
# Run tests locally
pytest tests/ -v --tb=short

# Run a specific test
pytest tests/test_specific.py::test_function -v -s

# Run with debugging
pytest tests/ -v --pdb
```
**Linting Issues**

```bash
# Fix formatting
ruff format .

# Fix linting issues
ruff check . --fix

# Check a specific file
ruff check src/your_file.py
```
**Type Check Failures**

```bash
# Run mypy locally
mypy src/

# Check a specific file
mypy src/your_file.py --show-error-codes
```
**Merge Conflicts**

```bash
# Update your branch
git checkout your-branch
git fetch origin
git rebase origin/dev

# Resolve conflicts and continue
git add .
git rebase --continue
```
**Commit Message Issues**

```bash
# Amend the last commit message
git commit --amend -m "New commit message"

# Interactive rebase to fix multiple commits
git rebase -i HEAD~3
```
This workflow ensures consistent, high-quality contributions while maintaining security and performance standards throughout the development lifecycle.