Testing - JohanDevl/Export_Trakt_4_Letterboxd GitHub Wiki

Testing Framework Documentation

This document provides detailed information about the testing framework implemented for Export Trakt for Letterboxd.

Overview

The testing framework is designed to ensure code quality, prevent regressions, and make it easier to add new features. It consists of unit tests, integration tests, and benchmarks for performance testing.

Testing Structure

The tests are organized in the following directory structure:

Export_Trakt_4_Letterboxd/
├── pkg/                    # Package directory
│   ├── api/                # API package
│   │   └── api_test.go     # Tests for API functionality
│   ├── config/             # Configuration package
│   │   └── config_test.go  # Tests for configuration
│   ├── export/             # Export package
│   │   └── export_test.go  # Tests for export functions
│   ├── i18n/               # Internationalization package
│   │   └── i18n_test.go    # Tests for i18n
│   └── logger/             # Logger package
│       └── logger_test.go  # Tests for logging
├── cmd/                    # Command-line tools
│   └── export_trakt/       # Main application
│       └── main_test.go    # Tests for main
├── tests/                  # Test suite directory
│   ├── integration/        # Integration tests
│   │   └── export_test.go  # End-to-end export tests
│   ├── benchmarks/         # Performance benchmarks
│   │   ├── benchmark.go    # Benchmark code
│   │   └── README.md       # Benchmark documentation
│   └── mocks/              # Mock API responses and helpers
│       └── mock_client.go  # Mock API client
└── test_data/              # Test data files
    ├── watched.json        # Sample watched data
    ├── ratings.json        # Sample rating data
    └── config.toml         # Sample configuration

Running Tests

The testing framework uses Go's built-in testing package and the standard go test tooling, so no extra test runner is required.

Unit Tests

To run unit tests:

# Run all tests
go test ./...

# Run tests for a specific package
go test ./pkg/api

# Run with verbose output
go test -v ./...

Code Coverage

To generate a code coverage report:

# Run tests with coverage
go test -coverprofile=coverage.out ./...

# Generate HTML coverage report
go tool cover -html=coverage.out -o coverage.html

The HTML report (coverage.html) can be opened in a browser to see line-by-line coverage of each file.

Integration Tests

Integration tests validate the interaction between multiple components:

# Run integration tests
go test ./tests/integration

Benchmarks

Benchmarks measure the performance of key operations:

# Run all benchmarks
go test -bench=. ./tests/benchmarks

# Run specific benchmark
go test -bench=BenchmarkGetWatchedMovies ./tests/benchmarks

Mock Testing

The testing framework uses mock objects to simulate API responses instead of making real API calls. This keeps tests fast and deterministic, and lets them run without network access or Trakt credentials.

API Mock Example

// Create a mock API client
mockClient := &mocks.MockTraktClient{
    WatchedMoviesResponse: []api.WatchedMovie{
        {
            Movie: api.Movie{
                Title: "Test Movie",
                Year:  2023,
            },
            WatchedAt: "2023-01-01T12:00:00Z",
        },
    },
}

// Use the mock client in tests
exporter := export.NewLetterboxdExporter(mockClient, config, logger)
result, err := exporter.ExportMovies()

Test Helpers

Common test helper functions:

  • createTempDir(): Creates a temporary directory for test output
  • loadTestData(): Loads test data from JSON files
  • assertFileContains(): Validates the contents of exported files
  • setupTestEnvironment(): Sets up a clean test environment
  • teardownTestEnvironment(): Cleans up after tests

Continuous Integration

Tests are automatically run in the GitHub Actions CI/CD pipeline. The workflow is defined in .github/workflows/test.yml.

Tests are executed for:

  • Pull requests targeting the main branch
  • Pushes to the main branch
  • Release creation
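A workflow covering those triggers typically looks something like the sketch below. This is illustrative only; the project's actual configuration is in .github/workflows/test.yml:

```yaml
# Minimal sketch of a Go test workflow (not the project's actual file)
name: Tests

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  release:
    types: [created]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.22"
      - name: Run tests with coverage
        run: go test -coverprofile=coverage.out ./...
```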

Writing Tests

Unit Test Example

func TestGetWatchedMovies(t *testing.T) {
    // Setup
    client := NewTraktClient(mockConfig, mockLogger)
    client.httpClient = &mockHTTPClient{} // Inject mock HTTP client

    // Execute
    movies, err := client.GetWatchedMovies()

    // Assert
    assert.NoError(t, err)
    assert.Len(t, movies, 2)
    assert.Equal(t, "Test Movie", movies[0].Movie.Title)
}

Table-Driven Test Example

func TestParseDate(t *testing.T) {
    testCases := []struct {
        name     string
        input    string
        expected time.Time
        wantErr  bool
    }{
        {"Valid ISO date", "2023-01-01T12:00:00Z", time.Date(2023, 1, 1, 12, 0, 0, 0, time.UTC), false},
        {"Invalid format", "2023/01/01", time.Time{}, true},
        {"Empty string", "", time.Time{}, true},
    }

    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            result, err := ParseDate(tc.input)
            if tc.wantErr {
                assert.Error(t, err)
            } else {
                assert.NoError(t, err)
                assert.Equal(t, tc.expected, result)
            }
        })
    }
}

Benchmarking

Performance benchmarks test the speed and memory usage of key operations:

func BenchmarkExportWatchedMovies(b *testing.B) {
    // Skip if no API credentials
    if os.Getenv("TRAKT_CLIENT_ID") == "" {
        b.Skip("Skipping benchmark: No Trakt API credentials")
    }

    // Setup (log and options are assumed to be initialized elsewhere)
    _, cfg, cleanup := setupTestEnvironment(b)
    defer cleanup()

    client, err := api.NewTraktClient(&cfg.Trakt, log)
    if err != nil {
        b.Fatal(err)
    }

    // Run the benchmark
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        err := export.ExportWatchedMovies(client, log, options)
        if err != nil {
            b.Fatal(err)
        }
    }
}
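Benchmarks are normally run via go test -bench, but the standard library's testing.Benchmark also lets you invoke one programmatically, which is handy for quick experiments. The sketch below measures a stand-in formatting operation; the function being timed is purely illustrative:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// formatRow is a stand-in for an export operation worth benchmarking.
func formatRow(title string, year int) string {
	return strings.Join([]string{title, fmt.Sprint(year)}, ",") + "\n"
}

func main() {
	// testing.Benchmark runs the function with increasing b.N until the
	// timing stabilizes, just as `go test -bench` would.
	result := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			_ = formatRow("Test Movie", 2023)
		}
	})
	fmt.Println(result.String(), result.MemString())
}
```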

Best Practices

When writing tests, follow these best practices:

  1. Isolation: Tests should be independent of each other
  2. Cleanup: Always clean up temporary resources
  3. Mock External Services: Use mocks for APIs and external dependencies
  4. Test Edge Cases: Include tests for error conditions and edge cases
  5. Table-Driven Tests: Use table-driven tests for multiple test cases
  6. Coverage: Aim for high code coverage, particularly for complex logic
  7. Test Public APIs: Focus on testing the public interfaces of packages
  8. Test Real-World Scenarios: Integration tests should model real usage