Best-Practices - su-record/vibe GitHub Wiki
Best Practices
Proven techniques for maximizing Vibe's effectiveness in your workflow.
Table of Contents
- Specification Best Practices
- Planning Best Practices
- Task Execution Best Practices
- Code Quality Best Practices
- Team Collaboration
- Performance Optimization
- BDD & Contract Testing Best Practices
- Common Anti-patterns
Specification Best Practices
1. Answer All 6 Questions Completely
❌ Bad:
Q1. Why?
"Because we need it"
Q2. Who?
"Users"
✅ Good:
Q1. Why?
"Current: 15% weekly churn
Problem: Excessive push notifications (20/day avg)
Goal: Reduce churn to <10%
ROI: Retain 5,000 users/month = $50k MRR"
Q2. Who?
"Primary: All users (100k DAU)
Secondary: Support team (reduced complaints)
Scale: 500k MAU, growing 20% MoM
Platform: 95% mobile (iOS/Android), 5% web"
Why it matters: Incomplete answers lead to incomplete plans and missing requirements.
2. Include Metrics and Numbers
❌ Bad:
"The API should be fast"
"We have a lot of users"
✅ Good:
"P95 API latency < 500ms"
"100k DAU, 500k MAU, peak: 10k concurrent users"
Why it matters: Quantifiable targets enable verification and prevent scope creep.
3. Document Edge Cases
❌ Bad:
Q3. What?
"Notification settings with toggles"
✅ Good:
Q3. What?
"Core: 6 category toggles (likes, comments, follows, mentions, feed, marketing)
Edge Cases:
- User disables all → Show warning modal
- Toggle fails → Retry 3x, then show error toast
- New user → Auto-apply defaults
- Existing user → Migrate with current behavior = ON
- Concurrent updates → Last write wins
- Network offline → Queue changes, sync on reconnect"
Why it matters: Edge cases discovered during implementation cause delays and technical debt.
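A precisely documented edge case leaves nothing to interpret at implementation time. For instance, the "Toggle fails → Retry 3x, then show error toast" line above pins down behavior that can be sketched directly (the function name and the use of `ConnectionError` are illustrative, not part of Vibe):

```python
def save_toggle_with_retry(save, max_attempts=3):
    """Try save() up to max_attempts times; signal the error toast after that."""
    for attempt in range(1, max_attempts + 1):
        try:
            save()
            return "saved"
        except ConnectionError:
            if attempt == max_attempts:
                return "show_error_toast"  # all retries exhausted

# Succeeds on the 3rd attempt: within the documented retry budget
calls = {"n": 0}
def flaky_save():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError
print(save_toggle_with_retry(flaky_save))  # → saved

# Never succeeds: the SPEC says to surface an error toast
def always_fails():
    raise ConnectionError
print(save_toggle_with_retry(always_fails))  # → show_error_toast
```

Because the SPEC said "3x" rather than "retry a few times", the implementation, its tests, and the code review all agree on the same number.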
4. Be Specific About Tech Stack
❌ Bad:
Q6. With What?
"Python backend, mobile app"
✅ Good:
Q6. With What?
Backend:
- Language: Python 3.11
- Framework: FastAPI 0.104+
- Database: PostgreSQL 17 (existing)
- Cache: Redis 7.2 (existing)
- Auth: JWT (existing)
Frontend:
- Framework: Flutter 3.24+
- State: Provider (existing)
- HTTP: Dio (existing)
Infrastructure:
- Cloud: GCP Cloud Run (existing)
- CI/CD: GitHub Actions (existing)
New Dependencies: None (100% reuse)
Why it matters: Prevents technology drift and ensures agents use correct patterns.
5. Link Business Value to Technical Requirements
✅ Good:
Q1. Why?
"Reduce churn 15% → 10% = 5k users/month = $50k MRR"
Q4. How?
"P95 < 500ms (user studies show 500ms = patience threshold)
Rate limit 100 req/min (prevents API abuse)
Redis cache 1h TTL (settings change <1% per hour)"
Why it matters: Technical decisions justified by business value survive code reviews and refactoring.
Planning Best Practices
1. Create CLAUDE.md First
Before running vibe plan, document your tech stack:
# Create CLAUDE.md
cat > CLAUDE.md << 'EOF'
# CLAUDE.md
## Tech Stack
### Backend
- Framework: FastAPI 0.104+
- Database: PostgreSQL 17
- Cache: Redis 7.2
### Frontend
- Framework: Flutter 3.24+
- State Management: Provider
- HTTP Client: Dio
### Infrastructure
- Cloud: GCP Cloud Run
- CI/CD: GitHub Actions
EOF
# Now plan
vibe plan "feature-name"
Why it matters: Agents read CLAUDE.md for tech stack detection. Without it, they may suggest the wrong technologies.
2. Review Generated Plans
Don't blindly accept generated plans:
vibe plan "notification settings"
# Review the plan
cat .vibe/plans/notification-settings.md
# Check:
# - Architecture makes sense?
# - Database schema is normalized?
# - API design follows REST conventions?
# - Timeline is realistic?
# - Cost estimate is accurate?
If something is wrong:
- Edit the plan manually
- Or regenerate:
vibe plan "notification settings" --detail detailed
3. Use Detailed Planning for Complex Features
# Simple feature (< 8 hours)
vibe plan "dark mode toggle"
# Complex feature (> 24 hours)
vibe plan "search system" --detail detailed --architecture
Detailed mode includes:
- Architecture diagrams (Mermaid)
- Detailed cost breakdown
- Risk mitigation strategies
- Performance optimization plan
Task Execution Best Practices
1. Execute Phase by Phase
❌ Bad:
vibe run --all # Runs all 30 tasks without verification
✅ Good:
# Phase 1: Backend
vibe run --phase 1
git diff # Review changes
vibe verify "feature-name"
git commit -m "feat(backend): Add notification settings"
# Phase 2: Frontend
vibe run --phase 2
git diff
vibe verify "feature-name"
git commit -m "feat(frontend): Add settings UI"
# Phase 3: Integration
vibe run --phase 3
git diff
vibe verify "feature-name"
git commit -m "feat: Complete notification settings"
Why it matters:
- Early error detection
- Easier code review
- Clearer git history
- Safer rollbacks
2. Use Dry Run for Complex Tasks
# See what would happen
vibe run "Task 1-5" --dry-run
# Output shows:
# - Files that would be changed
# - MCP tools that would be used
# - Estimated time
# - Dependencies
# If it looks good, execute
vibe run "Task 1-5"
3. Generate Guides First
# Generate implementation guides without executing
vibe run "Task 1-1" --guide-only
vibe run "Task 1-2" --guide-only
vibe run "Task 1-3" --guide-only
# Review all guides
ls .vibe/guides/
# Then execute
vibe run --phase 1
Use case: When you want to review the approach before implementation.
4. Let Agents Use MCP Tools
❌ Bad:
# Manually analyzing code
find . -name "*.py" | xargs wc -l
grep -r "def " . | wc -l
# ... manual complexity calculation
✅ Good:
# Let Vibe's MCP tools do it
vibe analyze --code
Why it matters: MCP tools are optimized and consistent, and agents invoke them automatically.
5. Trust but Verify
After task completion:
# 1. Review code changes
git diff
# 2. Check quality report
cat .vibe/reports/verification-*.md
# 3. Run tests manually
pytest tests/
flutter test
# 4. Test locally
curl http://localhost:8000/api/v1/users/123/notification-settings
Code Quality Best Practices
1. Set Quality Standards in Config
Edit .vibe/config.json:
{
"quality": {
"max_complexity": 10,
"max_cognitive_complexity": 15,
"max_function_length": 20,
"max_nesting": 3,
"min_test_coverage": 80,
"require_type_hints": true,
"require_docstrings": true
}
}
Agents will enforce these standards automatically.
2. Run Analysis Before Refactoring
# Before refactoring
vibe analyze --code
# Identify hot spots
# - High complexity functions
# - Low cohesion modules
# - Tight coupling
# Create refactoring SPEC
vibe spec "refactor user service"
# Execute refactoring
vibe run --phase 1
# Verify improvements
vibe analyze --code
# Should show lower complexity
3. Use Quality Reviewer
# After feature implementation
vibe verify "notification settings"
# Quality Reviewer checks:
# - Code complexity
# - Test coverage
# - Security issues
# - Performance bottlenecks
# - Accessibility
4. Maintain Consistent Style
Create project-specific style guide:
mkdir -p skills/custom/
cat > skills/custom/project-style.md << 'EOF'
# Project Style Guide
## Naming Conventions
- Files: snake_case.py
- Classes: PascalCase
- Functions: snake_case()
- Constants: UPPER_SNAKE_CASE
## Code Structure
- Max file length: 300 lines
- Max function length: 20 lines
- Max nesting: 3 levels
## Documentation
- All public functions: docstring required
- Complex logic: inline comments
- API endpoints: OpenAPI descriptions
EOF
Agents will read and apply custom styles.
Team Collaboration
1. Share CLAUDE.md
Commit CLAUDE.md to git:
git add CLAUDE.md
git commit -m "docs: Add tech stack documentation"
git push
Benefits:
- New team members understand stack instantly
- Agents use correct technologies
- Prevents technology drift
2. Use Consistent Naming
# ✅ Good: Descriptive feature names
vibe spec "user-notification-settings"
vibe spec "oauth-social-login"
vibe spec "api-rate-limiting"
# ❌ Bad: Vague names
vibe spec "settings"
vibe spec "login"
vibe spec "feature-123"
3. Document Decisions
SPECs serve as documentation:
# 6 months later, new developer asks:
# "Why do we use Redis for settings cache?"
# Answer is in SPEC:
cat .vibe/specs/notification-settings.md
# Q4. How?
# "Redis cache 1h TTL (settings change <1% per hour,
# reduces DB load by 95%)"
4. Code Review Workflow
# Developer A
vibe spec "feature"
vibe plan "feature"
vibe tasks "feature"
git add .vibe/
git commit -m "spec: Add feature specification"
git push
# Create PR for SPEC review
gh pr create --title "SPEC: Feature"
# After approval
vibe run --phase 1
git commit -m "feat: Implement feature backend"
git push
# Create PR for code review
gh pr create --title "feat: Feature backend"
Benefits:
- SPEC reviewed before implementation
- Catches requirement issues early
- Code review focuses on implementation, not requirements
Performance Optimization
1. Profile Before Optimizing
# 1. Measure baseline
vibe analyze --code
# 2. Run performance tests
wrk -t 10 -c 100 -d 30s http://localhost:8000/api/v1/endpoint
# 3. Identify bottlenecks
cat .vibe/reports/analysis-*.md
# 4. Create optimization SPEC
vibe spec "optimize user service"
# 5. Implement optimizations
vibe run --phase 1
# 6. Measure improvement
wrk -t 10 -c 100 -d 30s http://localhost:8000/api/v1/endpoint
2. Use Caching Strategically
Document caching decisions in SPEC:
Q4. How?
"Caching Strategy:
- Redis: User settings (1h TTL, <1% change rate)
- Redis: User profiles (10m TTL, frequent updates)
- CDN: Static assets (1 day TTL)
- In-memory: Config (app lifetime)
Invalidation:
- Settings: On user update
- Profile: On profile edit
- Assets: On deployment"
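The TTL-plus-invalidation pattern documented above can be sketched without Redis. This minimal in-memory stand-in (class and key names are illustrative) shows the two rules the SPEC encodes: entries expire after the TTL, and a write invalidates immediately instead of waiting for expiry:

```python
import time

class TTLCache:
    """Minimal stand-in for a Redis cache with TTL and explicit invalidation."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: evict lazily on read
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def invalidate(self, key):
        # Called on user update: don't wait for the TTL
        self._store.pop(key, None)

# Settings change rarely, so a long TTL is safe...
cache = TTLCache(ttl_seconds=3600)
cache.set("user:123:settings", {"comments": True})
print(cache.get("user:123:settings"))  # → {'comments': True}

# ...but a write must invalidate immediately, not an hour later
cache.invalidate("user:123:settings")
print(cache.get("user:123:settings"))  # → None
```

The SPEC's "1h TTL, invalidate on user update" line maps one-to-one onto the ttl_seconds argument and the invalidate() call.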
3. Optimize Database Queries
# Analyze queries
vibe analyze --arch
# Check for:
# - N+1 queries
# - Missing indexes
# - Inefficient joins
# Create optimization tasks
vibe tasks "optimize user queries"
# Common optimizations:
# - Add indexes
# - Use eager loading
# - Implement query caching
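The N+1 pattern the analysis looks for is easy to reproduce. This self-contained sqlite3 sketch (table and column names are made up for illustration) contrasts one query per user against a single eager join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE settings (user_id INTEGER, category TEXT, enabled INTEGER);
    INSERT INTO users VALUES (1, 'ana'), (2, 'bob');
    INSERT INTO settings VALUES (1, 'comments', 1), (2, 'comments', 0);
""")

# ❌ N+1: one query for the users, then one extra query per user
rows = conn.execute("SELECT id, name FROM users ORDER BY id").fetchall()
n_plus_1 = [
    (name, conn.execute(
        "SELECT enabled FROM settings WHERE user_id = ?", (uid,)
    ).fetchone()[0])
    for uid, name in rows
]  # 1 + len(rows) round trips

# ✅ Eager loading: a single join fetches everything at once
joined = conn.execute("""
    SELECT u.name, s.enabled
    FROM users u JOIN settings s ON s.user_id = u.id
    ORDER BY u.id
""").fetchall()

print(n_plus_1)  # → [('ana', 1), ('bob', 0)]
print(joined)    # → [('ana', 1), ('bob', 0)]
```

Both produce the same data, but the first issues 1 + N queries; with 10k concurrent users that difference dominates latency, which is exactly what the analysis report surfaces.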
BDD & Contract Testing Best Practices
1. Write Gherkin Scenarios Early
✅ Good workflow:
# 1. Create SPEC (automatically generates Feature file)
/vibe.spec "notification settings"
# 2. Review Feature file
cat .vibe/features/notification-settings.feature
# 3. Ensure scenarios cover all requirements
# - Each REQ-XXX should map to at least one Scenario
# - Edge cases should have dedicated scenarios
Feature file example:
Feature: Notification Settings
Scenario: User enables notification category
Given the user is logged in
And the notification settings page is displayed
When the user toggles "Comments" to ON
Then the setting should be saved
And the API should return 200
And response time should be < 500ms
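In pytest-bdd, each Gherkin step binds to one Python function, with shared state carried between steps. This plain-Python sketch shows the shape of those step definitions without the framework (in real pytest-bdd these would be @given/@when/@then-decorated functions; SettingsService is a hypothetical stand-in for the real API client):

```python
class SettingsService:
    """Hypothetical stand-in for the notification-settings API client."""

    def __init__(self):
        self.saved = {}

    def toggle(self, category: str, enabled: bool) -> int:
        self.saved[category] = enabled
        return 200  # HTTP status the real API would return

# Given the user is logged in / the settings page is displayed
def given_logged_in_user(context):
    context["service"] = SettingsService()

# When the user toggles "Comments" to ON
def when_user_toggles(context, category, enabled):
    context["status"] = context["service"].toggle(category, enabled)

# Then the setting should be saved / the API should return 200
def then_setting_saved(context, category, enabled):
    assert context["service"].saved[category] is enabled
    assert context["status"] == 200

# A scenario is just the steps run in order against a shared context
context = {}
given_logged_in_user(context)
when_user_toggles(context, "Comments", True)
then_setting_saved(context, "Comments", True)
print("scenario passed")  # → scenario passed
```

The framework's job is the wiring: parsing the .feature file, matching step text to functions, and injecting the shared context as fixtures.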
2. Use Contract Testing for API Changes
✅ Good practice:
# Backend team (Provider):
# Task 1-9: Create Provider Contract
# - Define API schema in contract file
# - Verify provider meets contract
# Frontend team (Consumer):
# Task 2-9: Create Consumer Contract
# - Define expected API responses
# - Mock provider using contract
# Integration:
# Task 3-4: Verify contracts match
# - Run Pact verification
# - Ensure provider meets consumer expectations
Benefits:
- Frontend and backend can develop in parallel
- API changes don't break consumers
- Contract serves as living documentation
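Pact automates this, but the core idea is simple: the consumer records the response shape it relies on, and the provider is verified against it. A minimal plain-Python sketch of that idea (the endpoint and field names are illustrative):

```python
# The consumer pins down the response shape it depends on...
consumer_contract = {
    "GET /api/v1/users/{id}/notification-settings": {
        "status": 200,
        "body_types": {"user_id": int, "comments": bool, "likes": bool},
    },
}

def verify_provider(contract, endpoint, status, body):
    """Return a list of mismatches between a provider response and the contract."""
    expected = contract[endpoint]
    problems = []
    if status != expected["status"]:
        problems.append(f"status {status} != {expected['status']}")
    for field, ftype in expected["body_types"].items():
        if field not in body:
            problems.append(f"missing field {field!r}")
        elif not isinstance(body[field], ftype):
            problems.append(f"{field!r} should be {ftype.__name__}")
    return problems

# ...and the provider's actual response is checked against it
response = {"user_id": 123, "comments": True, "likes": False}
print(verify_provider(consumer_contract,
                      "GET /api/v1/users/{id}/notification-settings",
                      200, response))  # → []

# A breaking change (e.g. renaming a field) fails verification
broken = {"user_id": 123, "comment_notifications": True, "likes": False}
print(verify_provider(consumer_contract,
                      "GET /api/v1/users/{id}/notification-settings",
                      200, broken))  # → ["missing field 'comments'"]
```

Pact adds the missing pieces this sketch omits: contract files generated from consumer tests, a broker for sharing them between teams, and mock providers driven by the contract.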
3. Map BDD Scenarios to Tasks
Ensure each scenario has corresponding implementation tasks:
## Scenario: User enables notification
→ Task 1-4: Backend API endpoint
→ Task 2-4: Frontend toggle component
→ Task 3-3: BDD Step Definition for this scenario
4. Test-First Development
✅ Correct order:
# 1. Write Contract (Provider or Consumer)
# 2. Write BDD Step Definitions
# 3. Write Unit Tests
# 4. Implement actual code
# 5. Verify all tests pass
❌ Don't:
# Write code first, tests later
Why it matters: Test-first ensures you implement only what's needed and catches issues early.
5. Run BDD Tests in Verification
# After Phase 3 completion
/vibe.verify "notification settings"
# Should show:
# ✅ BDD Scenarios: 5/5 passed
# ✅ Contract Tests: 2/2 passed (Provider + Consumer)
# ✅ Unit Tests: 45/45 passed
# ✅ Integration Tests: 8/8 passed
6. Use BDD Tools by Language
| Language | Recommended BDD Tool | Contract Tool |
|---|---|---|
| Python | pytest-bdd | Pact Python |
| JavaScript/TS | cucumber, jest-cucumber | Pact JS |
| Java/Kotlin | Cucumber JVM | Pact JVM |
| Dart/Flutter | gherkin, flutter_gherkin | Pact Dart |
Configure in .vibe/config.json:
{
"testing": {
"bdd_tool": "pytest-bdd",
"contract_tool": "pact",
"contract_broker": "https://pact-broker.example.com"
}
}
Common Anti-patterns
❌ 1. Skipping SPEC Phase
Don't:
# Jump straight to coding
vibe tasks "some feature" # Without SPEC
Problem: No clear requirements = scope creep, missing edge cases, unclear acceptance criteria.
Do:
vibe spec "feature"
vibe plan "feature"
vibe tasks "feature"
❌ 2. Running All Tasks at Once
Don't:
vibe run --all # For complex feature with 30 tasks
Problem: One failure blocks everything, hard to debug, large git commits.
Do:
vibe run --phase 1
# Review, test, commit
vibe run --phase 2
# Review, test, commit
vibe run --phase 3
❌ 3. Ignoring Verification
Don't:
vibe run --phase 1
git add .
git commit -m "done"
# No verification!
Problem: Bugs reach production, technical debt accumulates.
Do:
vibe run --phase 1
vibe verify "feature"
# Check quality score
# Address issues
git commit
❌ 4. Not Documenting Tech Stack
Don't:
# No CLAUDE.md
vibe plan "feature"
# Agent guesses wrong tech stack
Problem: Wrong patterns, incompatible libraries, refactoring needed.
Do:
# Create CLAUDE.md first
cat > CLAUDE.md << 'EOF'
## Tech Stack
...
EOF
vibe plan "feature"
❌ 5. Vague Feature Names
Don't:
vibe spec "settings"
vibe spec "feature-123"
vibe spec "stuff"
Problem: Unclear scope, confused team members, merge conflicts.
Do:
vibe spec "user-notification-settings"
vibe spec "oauth-google-login"
vibe spec "api-rate-limiting-redis"
❌ 6. Manual Tool Invocation
Don't:
# Manually calling MCP tools
# (They're for agent use, not direct CLI)
Problem: Tools are designed for agent orchestration, not manual use.
Do:
# Let agents use tools
vibe run "Task 1-1"
vibe analyze --code
vibe verify "feature"
❌ 7. Editing Task Files During Execution
Don't:
vibe run "Task 1-1"
# While running, edit .vibe/tasks/feature.md
Problem: The agent reads the task file once at start, so mid-run edits are lost.
Do:
# Edit before execution
vim .vibe/tasks/feature.md
vibe run "Task 1-1"
Quick Checklist
Before Starting
- CLAUDE.md exists and is current
- .vibe/config.json has quality standards
- Git branch created for feature
During SPEC Phase
- All 6 questions answered completely
- Metrics and numbers included
- Edge cases documented
- Tech stack specified
- SPEC reviewed by team
During Planning
- Plan reviewed and approved
- Architecture makes sense
- Timeline is realistic
- Cost is acceptable
During Execution
- Execute phase by phase
- Review code after each phase
- Run tests after each phase
- Commit after each phase
Before Merging
- vibe verify shows 100%
- All tests passing
- Code reviewed by team
- Documentation updated
Summary: The Vibe Way
- Document First - CLAUDE.md and .vibe/config.json
- Specify Clearly - Answer 6 questions with metrics
- Plan Thoroughly - Review generated architecture
- Execute Incrementally - Phase by phase with verification
- Verify Continuously - Use MCP tools at every step
- Collaborate Openly - Share SPECs, plans, and decisions
Result: High-quality code, clear requirements, happy team 🚀
Next Steps:
- Troubleshooting - Solve common issues
- Examples - See best practices in action
- Home - Back to main page