# Testing Guide

> ⚠️ **Work in Progress**: This documentation is currently being developed and may be incomplete or subject to change.
## Overview

This guide covers testing practices for the Antimetal System Agent: unit tests, integration tests, and end-to-end (E2E) testing strategies. Following these practices keeps the codebase reliable and safe to change.
## Testing Philosophy

- **Test-Driven Development (TDD)**: Write tests before implementation
- **Comprehensive Coverage**: Aim for >80% code coverage
- **Fast Feedback**: Unit tests should run in seconds
- **Realistic Testing**: Run integration tests against realistic environments
- **Continuous Testing**: All tests run in the CI/CD pipeline
## Testing Levels

### 1. Unit Tests

Unit tests verify individual components in isolation.

#### Structure
```go
// collector_test.go
package collectors

import (
	"context"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestCPUCollector_Collect(t *testing.T) {
	tests := []struct {
		name    string
		setup   func()
		want    *CPUStats
		wantErr bool
	}{
		{
			name: "successful collection",
			setup: func() {
				// Mock /proc/stat
			},
			want: &CPUStats{
				Total: CPUTime{User: 100, System: 50},
			},
			wantErr: false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			tt.setup()

			collector := NewCPUCollector(logger, config)
			got, err := collector.Collect(context.Background())

			if tt.wantErr {
				assert.Error(t, err)
				return
			}
			require.NoError(t, err)
			assert.Equal(t, tt.want, got)
		})
	}
}
```
#### Best Practices

- Use table-driven tests
- Mock external dependencies
- Test error conditions
- Verify edge cases
- Keep tests focused and fast
### 2. Integration Tests

Integration tests verify components working together.

#### Structure
```go
// integration_test.go
//go:build integration

package integration

import (
	"context"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
)

func TestControllerIntegration(t *testing.T) {
	// Create fake Kubernetes client
	client := fake.NewSimpleClientset()

	// Start controller
	controller := NewController(client)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	go controller.Run(ctx)

	// Create test resources
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "test-pod",
		},
	}
	_, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	require.NoError(t, err)

	// Verify the controller processes the resource
	Eventually(t, func() bool {
		return controller.GetProcessedCount() > 0
	}, 10*time.Second)
}
```
#### Test Categories

- Controller integration
- Collector integration
- API client integration
- Multi-component workflows

### 3. End-to-End Tests

E2E tests verify the complete system in real environments.

#### KIND Setup
```bash
#!/bin/bash
# scripts/e2e-setup.sh
set -e

# Create KIND cluster
kind create cluster --name test-cluster --config kind-config.yaml

# Install agent
kubectl apply -f deploy/

# Wait for agent to be ready
kubectl wait --for=condition=ready pod -l app=antimetal-agent -n antimetal-system

# Run tests
go test -tags=e2e ./test/e2e/...
```
#### Test Structure

```go
// e2e_test.go
//go:build e2e

package e2e

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestE2EDataCollection(t *testing.T) {
	// Deploy test workload
	deployTestWorkload(t)

	// Wait for data collection
	time.Sleep(30 * time.Second)

	// Verify data in platform
	client := antimetal.NewClient(apiKey)
	metrics, err := client.GetMetrics(clusterName, time.Now().Add(-1*time.Minute))
	require.NoError(t, err)
	assert.NotEmpty(t, metrics.CPU)
	assert.NotEmpty(t, metrics.Memory)
}
```
## Testing Utilities

### Test Helpers
```go
// testutil/helpers.go
package testutil

import (
	"os"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// TempFile creates a temporary file with the given content and removes it
// when the test finishes.
func TempFile(t *testing.T, content string) string {
	t.Helper()

	file, err := os.CreateTemp("", "test")
	require.NoError(t, err)

	_, err = file.WriteString(content)
	require.NoError(t, err)
	require.NoError(t, file.Close())

	t.Cleanup(func() {
		os.Remove(file.Name())
	})
	return file.Name()
}

// Eventually retries a condition until it succeeds or the timeout elapses.
func Eventually(t *testing.T, condition func() bool, timeout time.Duration) {
	t.Helper()

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if condition() {
			return
		}
		time.Sleep(100 * time.Millisecond)
	}
	t.Fatal("condition not met within timeout")
}
```
### Mock Generators

```go
// testutil/mocks.go
package testutil

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// MockProcStat returns mock /proc/stat content
func MockProcStat() string {
	return `cpu 100 0 50 1000 10 0 0 0 0 0
cpu0 50 0 25 500 5 0 0 0 0 0
cpu1 50 0 25 500 5 0 0 0 0 0`
}

// MockKubeClient returns a fake Kubernetes client preloaded with test data
func MockKubeClient() kubernetes.Interface {
	return fake.NewSimpleClientset(
		&v1.Node{
			ObjectMeta: metav1.ObjectMeta{Name: "test-node"},
		},
		&v1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		},
	)
}
```
## Running Tests

### Unit Tests

```bash
# Run all unit tests
make test

# Run with coverage
make test-coverage

# Run a specific package
go test ./pkg/collectors/...

# Run a specific test
go test -run TestCPUCollector ./pkg/collectors
```
### Integration Tests

```bash
# Run integration tests
make test-integration

# Run against a real Kubernetes cluster
KUBECONFIG=~/.kube/config make test-integration
```

### E2E Tests

```bash
# Set up a cluster and run E2E tests
make test-e2e

# Run against an existing cluster
CLUSTER_NAME=test make test-e2e-only
```
## Test Coverage

### Generating Coverage Reports

```bash
# Generate coverage report
go test -coverprofile=coverage.out ./...

# View coverage in a browser
go tool cover -html=coverage.out

# Get the total coverage percentage
go tool cover -func=coverage.out | grep total
```
### Coverage Requirements
- New code: >80% coverage
- Critical paths: >90% coverage
- Collectors: 100% coverage for parsing logic
## Continuous Integration

### GitHub Actions Workflow

```yaml
# .github/workflows/test.yml
name: Tests

on: [push, pull_request]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version: '1.21'
      - name: Run unit tests
        run: make test-coverage
      - name: Upload coverage
        uses: codecov/codecov-action@v3

  integration-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Create KIND cluster
        uses: helm/kind-action@v1
      - name: Run integration tests
        run: make test-integration
```
## Testing Best Practices

### Do's
- ✅ Write tests first (TDD)
- ✅ Use descriptive test names
- ✅ Test one thing per test
- ✅ Mock external dependencies
- ✅ Test error conditions
- ✅ Use test fixtures for complex data
- ✅ Clean up test resources
### Don'ts
- ❌ Skip writing tests
- ❌ Test implementation details
- ❌ Use global state
- ❌ Ignore flaky tests
- ❌ Hard-code test data
- ❌ Leave debug code in tests
## Debugging Tests

### Verbose Output

```bash
# Run tests with verbose output
go test -v ./...

# Show logs for a single test
go test -v -run TestName
```
### Debugging in IDE
Most IDEs support Go test debugging:
- Set breakpoints in test code
- Run test in debug mode
- Step through execution
### Common Issues

1. **Flaky Tests**
   - Add retries for timing-dependent tests
   - Use the `Eventually()` helper
   - Increase timeouts in CI

2. **Resource Cleanup**
   - Use `t.Cleanup()` for cleanup
   - Defer cleanup functions
   - Verify cleanup in CI

3. **Mock Failures**
   - Verify mock setup
   - Check mock expectations
   - Use real implementations when possible
## Performance Testing

### Benchmarks

```go
func BenchmarkCPUCollector(b *testing.B) {
	collector := NewCPUCollector(logger, config)
	ctx := context.Background()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_, err := collector.Collect(ctx)
		if err != nil {
			b.Fatal(err)
		}
	}
}
```
Run benchmarks:

```bash
# Run all benchmarks
go test -bench=. ./...

# Run a specific benchmark
go test -bench=BenchmarkCPUCollector ./pkg/collectors

# Compare benchmark runs
go test -bench=. -benchmem ./... > new.txt
benchcmp old.txt new.txt
```
## See Also
- Contributing - Development guide
- Custom Collectors - Collector testing
- Troubleshooting - Debug test failures