
QA Automation Workflow

A comprehensive testing workflow covering web, API, mobile, and performance testing, with automated reporting and CI/CD integration.

Overview

This workflow is designed for QA teams managing:

  • Multi-platform Testing: Web, mobile, and API testing
  • Automated Test Execution: CI/CD integration and scheduling
  • Performance Testing: Load, stress, and endurance testing
  • Test Reporting: Automated reporting and metrics
  • Quality Gates: Automated quality validation

Team Structure

  • 2-3 QA Engineers
  • 1 Test Automation Engineer
  • 1 Performance Testing Specialist
  • 1 Mobile Testing Specialist

Collection Setup

Repository: company/qa-automation

# Initialize collection
mkdir qa-automation && cd qa-automation
git init
git remote add origin git@github.com:company/qa-automation.git

# Create collection structure
mkdir -p bin/{web,api,mobile,performance,reporting,infrastructure}
mkdir -p docs/{web,api,mobile,performance,reporting,infrastructure}
mkdir -p config/{environments,browsers,devices}
mkdir -p tests/{web,api,mobile,performance}
mkdir -p reports/{templates,assets}

Collection Metadata (dotrun.collection.yml):

name: "qa-automation"
description: "Comprehensive QA automation and testing tools"
author: "QA Team"
version: "2.0.0"
dependencies:
  - node
  - npm
  - docker
  - git
  - python
optional_dependencies:
  - playwright
  - selenium
  - k6
  - appium
  - newman
categories:
  - web
  - api
  - mobile
  - performance
  - reporting
  - infrastructure
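
The scripts in this collection source a shared dependency check from DotRun's config directory ($DR_CONFIG/helpers/pkg.sh) before doing any work. That helper ships with DotRun; the sketch below only illustrates the behavior the scripts rely on and may differ from the real implementation:

# helpers/pkg.sh (sketch): fail fast when a required CLI tool is missing
validatePkg() {
  local missing=()
  local pkg
  for pkg in "$@"; do
    command -v "$pkg" >/dev/null 2>&1 || missing+=("$pkg")
  done
  if [[ ${#missing[@]} -gt 0 ]]; then
    echo "❌ Missing required tools: ${missing[*]}" >&2
    return 1
  fi
}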

Web Testing Scripts

Full Test Suite

bin/web/run-full-suite.sh:

#!/usr/bin/env bash
### DOC
# Run complete web application test suite
# Includes unit, integration, E2E, and accessibility tests
### DOC
set -euo pipefail

source "$DR_CONFIG/helpers/pkg.sh"
validatePkg node npm docker

main() {
  local environment="${1:-staging}"
  local browser="${2:-chrome}"
  local test_type="${3:-all}"
  local parallel="${4:-2}"

  echo "๐Ÿงช Running web test suite"
  echo "๐ŸŒ Environment: $environment"
  echo "๐ŸŒ Browser: $browser"
  echo "๐Ÿ“Š Test type: $test_type"
  echo "โšก Parallel workers: $parallel"

  # Create test results directory
  local results_dir="reports/web/$(date +%Y%m%d_%H%M%S)"
  mkdir -p "$results_dir"

  # Start test infrastructure
  echo "๐Ÿš€ Starting test infrastructure..."
  docker-compose -f config/docker-compose.test.yml up -d

  # Wait for services to be ready
  echo "โณ Waiting for services to start..."
  dr web/wait-for-services "$environment"

  case "$test_type" in
    unit | all)
      echo "๐Ÿ”ฌ Running unit tests..."
      npm run test:unit -- \
        --reporter=json \
        --outputFile="$results_dir/unit-tests.json" \
        --coverage \
        --coverageDirectory="$results_dir/coverage"
      ;;
  esac

  case "$test_type" in
    integration | all)
      echo "๐Ÿ”— Running integration tests..."
      npm run test:integration -- \
        --env="$environment" \
        --reporter=json \
        --outputFile="$results_dir/integration-tests.json"
      ;;
  esac

  case "$test_type" in
    e2e | all)
      echo "๐ŸŒ Running E2E tests..."
      npx playwright test \
        --project="$browser" \
        --workers="$parallel" \
        --reporter=json \
        --output="$results_dir/e2e-tests.json" \
        tests/e2e/
      ;;
  esac

  case "$test_type" in
    accessibility | all)
      echo "โ™ฟ Running accessibility tests..."
      npm run test:accessibility -- \
        --env="$environment" \
        --browser="$browser" \
        --outputFile="$results_dir/accessibility-tests.json"
      ;;
  esac

  case "$test_type" in
    visual | all)
      echo "๐Ÿ‘๏ธ  Running visual regression tests..."
      npx playwright test \
        --project="visual-$browser" \
        --output="$results_dir/visual-tests" \
        tests/visual/
      ;;
  esac

  # Generate combined report
  echo "๐Ÿ“Š Generating test report..."
  dr reporting/generate-web-report "$results_dir"

  # Cleanup infrastructure
  docker-compose -f config/docker-compose.test.yml down

  # Check test results
  local exit_code=0
  if ! dr reporting/check-test-results "$results_dir"; then
    exit_code=1
  fi

  echo "โœ… Web test suite complete!"
  echo "๐Ÿ“Š Report available at: $results_dir/index.html"
  echo "๐Ÿ“‚ Results directory: $results_dir"

  return $exit_code
}

main "$@"

Cross-Browser Testing

bin/web/cross-browser-test.sh:

#!/usr/bin/env bash
### DOC
# Run tests across multiple browsers
# Supports Chrome, Firefox, Safari, and Edge
### DOC
set -euo pipefail

main() {
  local test_suite="$1"
  local environment="${2:-staging}"
  local browsers="${3:-chrome,firefox,safari}"

  echo "๐ŸŒ Running cross-browser tests"
  echo "๐Ÿ“ Test suite: $test_suite"
  echo "๐ŸŒ Environment: $environment"
  echo "๐ŸŒ Browsers: $browsers"

  # Create results directory
  local results_dir="reports/cross-browser/$(date +%Y%m%d_%H%M%S)"
  mkdir -p "$results_dir"

  # Convert browsers string to array
  IFS=',' read -ra browser_array <<<"$browsers"

  # Run tests for each browser
  local failed_browsers=()
  for browser in "${browser_array[@]}"; do
    echo "๐Ÿš€ Testing in $browser..."

    local browser_results_dir="$results_dir/$browser"
    mkdir -p "$browser_results_dir"

    if npx playwright test \
      --project="$browser" \
      --reporter=json \
      --output="$browser_results_dir/results.json" \
      "tests/$test_suite/"; then
      echo "โœ… $browser tests passed"
    else
      echo "โŒ $browser tests failed"
      failed_browsers+=("$browser")
    fi

    # Capture browser-specific artifacts
    if [[ -d "test-results" ]]; then
      mv test-results "$browser_results_dir/"
    fi
  done

  # Generate cross-browser report
  echo "๐Ÿ“Š Generating cross-browser report..."
  dr reporting/generate-cross-browser-report "$results_dir"

  # Summary
  if [[ ${#failed_browsers[@]} -eq 0 ]]; then
    echo "๐ŸŽ‰ All browser tests passed!"
    return 0
  else
    echo "โš ๏ธ  Failed browsers: ${failed_browsers[*]}"
    echo "๐Ÿ“Š Detailed report: $results_dir/index.html"
    return 1
  fi
}

main "$@"

API Testing Scripts

API Test Suite

bin/api/run-api-tests.sh:

#!/usr/bin/env bash
### DOC
# Comprehensive API testing with Newman and custom scripts
# Tests functionality, performance, and security
### DOC
set -euo pipefail

source "$DR_CONFIG/helpers/pkg.sh"
validatePkg newman curl

main() {
  local environment="${1:-staging}"
  local test_type="${2:-all}"
  local collection_file="${3:-tests/api/postman-collection.json}"

  echo "๐Ÿ”— Running API tests"
  echo "๐ŸŒ Environment: $environment"
  echo "๐Ÿ“Š Test type: $test_type"
  echo "๐Ÿ“ Collection: $collection_file"

  # Load environment configuration
  local env_file="config/environments/$environment.json"
  if [[ ! -f "$env_file" ]]; then
    echo "โŒ Environment file not found: $env_file"
    exit 1
  fi

  # Create results directory
  local results_dir="reports/api/$(date +%Y%m%d_%H%M%S)"
  mkdir -p "$results_dir"

  case "$test_type" in
    functional | all)
      echo "โš™๏ธ  Running functional API tests..."
      newman run "$collection_file" \
        --environment "$env_file" \
        --reporters json,htmlextra \
        --reporter-json-export "$results_dir/functional-tests.json" \
        --reporter-htmlextra-export "$results_dir/functional-report.html" \
        --bail
      ;;
  esac

  case "$test_type" in
    security | all)
      echo "๐Ÿ”’ Running security API tests..."
      newman run "tests/api/security-collection.json" \
        --environment "$env_file" \
        --reporters json,htmlextra \
        --reporter-json-export "$results_dir/security-tests.json" \
        --reporter-htmlextra-export "$results_dir/security-report.html"
      ;;
  esac

  case "$test_type" in
    performance | all)
      echo "โšก Running API performance tests..."
      dr performance/api-load-test "$environment" \
        --output "$results_dir/performance"
      ;;
  esac

  case "$test_type" in
    contract | all)
      echo "๐Ÿ“‹ Running contract tests..."
      npm run test:contract -- \
        --env "$environment" \
        --output "$results_dir/contract-tests.json"
      ;;
  esac

  # Generate combined API report
  echo "๐Ÿ“Š Generating API test report..."
  dr reporting/generate-api-report "$results_dir"

  echo "โœ… API tests complete!"
  echo "๐Ÿ“Š Report: $results_dir/index.html"
}

main "$@"

API Monitoring

bin/api/monitor-endpoints.sh:

#!/usr/bin/env bash
### DOC
# Continuous API endpoint monitoring
# Tracks availability, response times, and errors
### DOC
set -euo pipefail

source "$DR_CONFIG/helpers/pkg.sh"
validatePkg curl jq bc
main() {
  local environment="$1"
  local duration="${2:-3600}" # 1 hour default
  local interval="${3:-60}"   # 1 minute default

  echo "๐Ÿ“Š Monitoring API endpoints"
  echo "๐ŸŒ Environment: $environment"
  echo "โฐ Duration: ${duration}s"
  echo "๐Ÿ”„ Interval: ${interval}s"

  # Load environment configuration
  local base_url
  base_url=$(jq -r '.values[] | select(.key=="base_url") | .value' "config/environments/$environment.json")

  # Create monitoring directory
  local monitoring_dir="reports/monitoring/$(date +%Y%m%d_%H%M%S)"
  mkdir -p "$monitoring_dir"

  # Define endpoints to monitor
  local endpoints=(
    "/health"
    "/api/v1/users"
    "/api/v1/orders"
    "/api/v1/products"
  )

  local end_time=$(($(date +%s) + duration))
  local check_count=0

  echo "timestamp,endpoint,status_code,response_time_ms,error" >"$monitoring_dir/monitoring.csv"

  while [[ $(date +%s) -lt $end_time ]]; do
    check_count=$((check_count + 1)) # ((check_count++)) would exit under set -e when the count is 0
    echo "🔍 Check $check_count at $(date)"

    for endpoint in "${endpoints[@]}"; do
      local url="$base_url$endpoint"
      local timestamp
      timestamp=$(date +%s)

      # Make request and capture metrics
      local response
      response=$(curl -w "%{http_code},%{time_total}" -s -o /dev/null "$url" 2>&1 || echo "000,0")

      local status_code
      status_code=$(echo "$response" | cut -d',' -f1)
      local response_time
      response_time=$(echo "$response" | cut -d',' -f2)
      local response_time_ms
      response_time_ms=$(echo "$response_time * 1000" | bc -l)

      local error=""
      if [[ "$status_code" -ge 400 ]]; then
        error="HTTP_ERROR"
      elif [[ "$status_code" == "000" ]]; then
        error="CONNECTION_ERROR"
      fi

      echo "$timestamp,$endpoint,$status_code,$response_time_ms,$error" >>"$monitoring_dir/monitoring.csv"

      # Real-time alert for critical errors
      if [[ "$status_code" -ge 500 ]] || [[ "$status_code" == "000" ]]; then
        echo "๐Ÿšจ ALERT: $endpoint failed with status $status_code"
        dr reporting/send-alert "API Endpoint Failure" "$endpoint failed with status $status_code"
      fi
    done

    sleep "$interval"
  done

  # Generate monitoring report
  echo "๐Ÿ“Š Generating monitoring report..."
  python scripts/generate_monitoring_report.py \
    --data "$monitoring_dir/monitoring.csv" \
    --output "$monitoring_dir/monitoring_report.html"

  echo "โœ… API monitoring complete!"
  echo "๐Ÿ“Š Report: $monitoring_dir/monitoring_report.html"
}

main "$@"

Mobile Testing Scripts

Mobile Test Execution

bin/mobile/run-mobile-tests.sh:

#!/usr/bin/env bash
### DOC
# Run mobile app tests on real devices and emulators
# Supports Android and iOS testing with Appium
### DOC
set -euo pipefail

source "$DR_CONFIG/helpers/pkg.sh"
validatePkg appium

main() {
  local platform="$1"
  local device="${2:-emulator}"
  local test_suite="${3:-smoke}"
  local app_path="${4:-apps/latest.apk}"

  echo "๐Ÿ“ฑ Running mobile tests"
  echo "๐Ÿ“ฑ Platform: $platform"
  echo "๐Ÿ“ฑ Device: $device"
  echo "๐Ÿ“ Test suite: $test_suite"
  echo "๐Ÿ“ฆ App: $app_path"

  # Validate app file
  if [[ ! -f "$app_path" ]]; then
    echo "โŒ App file not found: $app_path"
    exit 1
  fi

  # Create results directory
  local results_dir="reports/mobile/$(date +%Y%m%d_%H%M%S)"
  mkdir -p "$results_dir"

  # Start Appium server
  echo "๐Ÿš€ Starting Appium server..."
  appium --port 4723 --log "$results_dir/appium.log" &
  local appium_pid=$!

  # Wait for Appium to start
  sleep 10

  # Setup device/emulator
  case "$platform" in
    android)
      dr mobile/setup-android-device "$device"
      ;;
    ios)
      dr mobile/setup-ios-device "$device"
      ;;
    *)
      echo "โŒ Unsupported platform: $platform"
      kill $appium_pid
      exit 1
      ;;
  esac

  # Install app on device
  echo "๐Ÿ“ฒ Installing app on device..."
  dr mobile/install-app "$platform" "$device" "$app_path"

  # Run tests
  echo "๐Ÿงช Running $test_suite tests..."
  case "$test_suite" in
    smoke)
      npm run test:mobile:smoke -- \
        --platform "$platform" \
        --device "$device" \
        --app "$app_path" \
        --output "$results_dir"
      ;;
    regression)
      npm run test:mobile:regression -- \
        --platform "$platform" \
        --device "$device" \
        --app "$app_path" \
        --output "$results_dir"
      ;;
    accessibility)
      npm run test:mobile:accessibility -- \
        --platform "$platform" \
        --device "$device" \
        --app "$app_path" \
        --output "$results_dir"
      ;;
    performance)
      dr mobile/performance-test "$platform" "$device" "$app_path" "$results_dir"
      ;;
    *)
      echo "โŒ Unknown test suite: $test_suite"
      kill $appium_pid
      exit 1
      ;;
  esac

  # Cleanup
  echo "๐Ÿงน Cleaning up..."
  kill $appium_pid
  dr mobile/cleanup-device "$platform" "$device"

  # Generate mobile test report
  echo "๐Ÿ“Š Generating mobile test report..."
  dr reporting/generate-mobile-report "$results_dir"

  echo "โœ… Mobile tests complete!"
  echo "๐Ÿ“Š Report: $results_dir/index.html"
}

main "$@"

Device Farm Testing

bin/mobile/device-farm-test.sh:

#!/usr/bin/env bash
### DOC
# Run tests on multiple devices in parallel
# Supports AWS Device Farm, BrowserStack, and local device farm
### DOC
set -euo pipefail

main() {
  local platform="$1"
  local test_suite="$2"
  local app_path="$3"
  local device_farm="${4:-local}"

  echo "๐Ÿญ Running device farm tests"
  echo "๐Ÿ“ฑ Platform: $platform"
  echo "๐Ÿ“ Test suite: $test_suite"
  echo "๐Ÿญ Device farm: $device_farm"

  # Create results directory
  local results_dir="reports/device-farm/$(date +%Y%m%d_%H%M%S)"
  mkdir -p "$results_dir"

  case "$device_farm" in
    local)
      # Get available local devices
      local devices
      devices=$(dr mobile/list-devices "$platform")

      # Run tests on each device in parallel
      local pids=()
      while IFS= read -r device; do
        echo "๐Ÿš€ Starting tests on $device..."
        dr mobile/run-mobile-tests "$platform" "$device" "$test_suite" "$app_path" &
        pids+=($!)
      done <<<"$devices"

      # Wait for all tests to complete
      for pid in "${pids[@]}"; do
        wait "$pid"
      done
      ;;
    aws)
      # Upload app to AWS Device Farm
      echo "โ˜๏ธ  Uploading app to AWS Device Farm..."
      aws devicefarm create-upload \
        --project-arn "$AWS_DEVICE_FARM_PROJECT" \
        --name "$(basename "$app_path")" \
        --type ANDROID_APP \
        --output text --query 'upload.arn'

      # Run tests on AWS Device Farm
      python scripts/aws_device_farm_test.py \
        --platform "$platform" \
        --test-suite "$test_suite" \
        --app-path "$app_path" \
        --output "$results_dir"
      ;;
    browserstack)
      # Upload app to BrowserStack
      echo "โ˜๏ธ  Uploading app to BrowserStack..."
      curl -u "$BROWSERSTACK_USERNAME:$BROWSERSTACK_ACCESS_KEY" \
        -X POST "https://api-cloud.browserstack.com/app-automate/upload" \
        -F "file=@$app_path"

      # Run tests on BrowserStack
      python scripts/browserstack_test.py \
        --platform "$platform" \
        --test-suite "$test_suite" \
        --app-path "$app_path" \
        --output "$results_dir"
      ;;
    *)
      echo "โŒ Unknown device farm: $device_farm"
      exit 1
      ;;
  esac

  # Generate device farm report
  echo "๐Ÿ“Š Generating device farm report..."
  dr reporting/generate-device-farm-report "$results_dir"

  echo "โœ… Device farm tests complete!"
  echo "๐Ÿ“Š Report: $results_dir/index.html"
}

main "$@"

Performance Testing Scripts

Load Testing

bin/performance/load-test.sh:

#!/usr/bin/env bash
### DOC
# Comprehensive load testing with k6
# Supports multiple load patterns and scenarios
### DOC
set -euo pipefail

source "$DR_CONFIG/helpers/pkg.sh"
validatePkg k6

main() {
  local target_url="${1:-}" # default to empty so the usage check below is reachable under set -u
  local test_type="${2:-standard}"
  local duration="${3:-5m}"
  local virtual_users="${4:-10}"

  if [[ -z "$target_url" ]]; then
    echo "Usage: dr performance/load-test <target-url> [test-type] [duration] [vus]"
    echo "Test types: standard, spike, soak, stress, breakpoint"
    exit 1
  fi

  echo "โšก Running load test"
  echo "๐ŸŽฏ Target: $target_url"
  echo "๐Ÿ“Š Type: $test_type"
  echo "โฐ Duration: $duration"
  echo "๐Ÿ‘ฅ Virtual users: $virtual_users"

  # Create test results directory
  local test_dir="reports/performance/$(date +%Y%m%d_%H%M%S)"
  mkdir -p "$test_dir"

  # Select test script based on type
  local test_script
  case "$test_type" in
    standard)
      test_script="tests/performance/standard-load.js"
      ;;
    spike)
      test_script="tests/performance/spike-test.js"
      ;;
    soak)
      test_script="tests/performance/soak-test.js"
      duration="30m" # Override for soak test
      ;;
    stress)
      test_script="tests/performance/stress-test.js"
      ;;
    breakpoint)
      test_script="tests/performance/breakpoint-test.js"
      ;;
    *)
      echo "โŒ Unknown test type: $test_type"
      exit 1
      ;;
  esac

  # Run k6 test
  k6 run "$test_script" \
    --out json="$test_dir/results.json" \
    --out influxdb=http://localhost:8086/k6 \
    --env TARGET_URL="$target_url" \
    --env DURATION="$duration" \
    --env VUS="$virtual_users"

  # Generate HTML report
  echo "๐Ÿ“Š Generating performance report..."
  node scripts/k6-to-html.js \
    "$test_dir/results.json" \
    "$test_dir/performance_report.html"

  # Performance analysis
  echo "๐Ÿ” Analyzing performance metrics..."
  python scripts/analyze_performance.py \
    --results "$test_dir/results.json" \
    --output "$test_dir/analysis.json"

  # Check performance thresholds
  if ! python scripts/check_performance_thresholds.py \
    --results "$test_dir/results.json" \
    --thresholds "config/performance-thresholds.json"; then
    echo "โŒ Performance thresholds not met"
    return 1
  fi

  echo "โœ… Load test complete!"
  echo "๐Ÿ“‚ Results: $test_dir"
  echo "๐Ÿ“Š Report: $test_dir/performance_report.html"
}

main "$@"

Performance Monitoring

bin/performance/monitor-performance.sh:

#!/usr/bin/env bash
### DOC
# Continuous performance monitoring
# Tracks response times, throughput, and system metrics
### DOC
set -euo pipefail

source "$DR_CONFIG/helpers/pkg.sh"
validatePkg curl k6 bc
main() {
  local target_url="$1"
  local monitoring_duration="${2:-3600}" # 1 hour default
  local check_interval="${3:-30}"        # 30 seconds default

  echo "๐Ÿ“Š Starting performance monitoring"
  echo "๐ŸŽฏ Target: $target_url"
  echo "โฐ Duration: ${monitoring_duration}s"
  echo "๐Ÿ”„ Interval: ${check_interval}s"

  # Create monitoring directory
  local monitoring_dir="reports/monitoring/performance/$(date +%Y%m%d_%H%M%S)"
  mkdir -p "$monitoring_dir"

  local end_time=$(($(date +%s) + monitoring_duration))
  local check_count=0

  # Initialize CSV file
  echo "timestamp,response_time_ms,status_code,throughput_rps,cpu_percent,memory_mb,disk_io" >"$monitoring_dir/metrics.csv"

  while [[ $(date +%s) -lt $end_time ]]; do
    check_count=$((check_count + 1)) # ((check_count++)) would exit under set -e when the count is 0
    local timestamp
    timestamp=$(date +%s)

    echo "๐Ÿ” Performance check $check_count at $(date)"

    # Measure response time
    local response_time
    response_time=$(curl -w "%{time_total}" -s -o /dev/null "$target_url" | awk '{print $1*1000}')

    # Get HTTP status
    local status_code
    status_code=$(curl -w "%{http_code}" -s -o /dev/null "$target_url")

    # Measure throughput (requests per second)
    local throughput
    throughput=$(k6 run --duration 10s --vus 5 tests/performance/throughput-test.js \
      --env TARGET_URL="$target_url" --quiet | grep "http_reqs" | awk '{print $3}' | cut -d'/' -f1)

    # System metrics (if monitoring local server)
    local cpu_percent="0"
    local memory_mb="0"
    local disk_io="0"

    if [[ "$target_url" =~ ^https?://localhost ]]; then
      cpu_percent=$(top -bn1 | grep "Cpu(s)" | awk '{print $2+$4}' | cut -d'%' -f1)
      memory_mb=$(free -m | awk 'NR==2{printf "%.0f", $3}')
      disk_io=$(iostat -d 1 2 | awk 'END{print $4+$5}')
    fi

    # Log metrics
    echo "$timestamp,$response_time,$status_code,$throughput,$cpu_percent,$memory_mb,$disk_io" >>"$monitoring_dir/metrics.csv"

    # Real-time alerts
    if [[ $(echo "$response_time > 2000" | bc -l) -eq 1 ]]; then
      echo "๐Ÿšจ ALERT: High response time: ${response_time}ms"
      dr reporting/send-alert "High Response Time" "Response time: ${response_time}ms"
    fi

    if [[ "$status_code" -ge 400 ]]; then
      echo "๐Ÿšจ ALERT: HTTP error: $status_code"
      dr reporting/send-alert "HTTP Error" "Status code: $status_code"
    fi

    sleep "$check_interval"
  done

  # Generate performance monitoring report
  echo "๐Ÿ“Š Generating performance monitoring report..."
  python scripts/generate_performance_monitoring_report.py \
    --data "$monitoring_dir/metrics.csv" \
    --output "$monitoring_dir/monitoring_report.html"

  echo "โœ… Performance monitoring complete!"
  echo "๐Ÿ“Š Report: $monitoring_dir/monitoring_report.html"
}

main "$@"

Reporting Scripts

Unified Test Report

bin/reporting/generate-unified-report.sh:

#!/usr/bin/env bash
### DOC
# Generate unified test report from all test types
# Combines web, API, mobile, and performance results
### DOC
set -euo pipefail

main() {
  local test_results_dir="$1"
  local output_file="${2:-$test_results_dir/unified_report.html}"

  echo "๐Ÿ“Š Generating unified test report"
  echo "๐Ÿ“‚ Results directory: $test_results_dir"
  echo "๐Ÿ“„ Output file: $output_file"

  # Collect all test result files
  local web_results=()
  local api_results=()
  local mobile_results=()
  local performance_results=()

  # Find result files
  while IFS= read -r -d '' file; do
    case "$file" in
      *web*) web_results+=("$file") ;;
      *api*) api_results+=("$file") ;;
      *mobile*) mobile_results+=("$file") ;;
      *performance*) performance_results+=("$file") ;;
    esac
  done < <(find "$test_results_dir" -name "*.json" -print0)

  # Generate unified report
  python scripts/generate_unified_report.py \
    --web-results "${web_results[@]}" \
    --api-results "${api_results[@]}" \
    --mobile-results "${mobile_results[@]}" \
    --performance-results "${performance_results[@]}" \
    --output "$output_file" \
    --template reports/templates/unified-report.html

  # Generate executive summary
  echo "๐Ÿ“‹ Generating executive summary..."
  python scripts/generate_executive_summary.py \
    --unified-report "$output_file" \
    --output "$test_results_dir/executive_summary.pdf"

  echo "โœ… Unified report generated!"
  echo "๐Ÿ“Š Report: $output_file"
  echo "๐Ÿ“‹ Summary: $test_results_dir/executive_summary.pdf"
}

main "$@"

Team Usage

QA Engineer Daily Workflow

# Import QA collection
dr import git@github.com:company/qa-automation.git qa

# Run daily smoke tests
dr qa/web/run-full-suite staging chrome smoke
dr qa/api/run-api-tests staging functional
dr qa/mobile/run-mobile-tests android emulator smoke

# Cross-browser testing
dr qa/web/cross-browser-test smoke staging "chrome,firefox,safari"

# Performance validation
dr qa/performance/load-test https://staging.app.com standard

Test Automation Engineer Workflow

# Setup test infrastructure
dr qa/infrastructure/setup-test-environment
dr qa/infrastructure/start-selenium-grid

# Run comprehensive test suites
dr qa/web/run-full-suite production firefox all 4
dr qa/api/run-api-tests production all
dr qa/mobile/device-farm-test android regression apps/latest.apk

# Generate reports
dr qa/reporting/generate-unified-report reports/latest
dr qa/reporting/send-results team-slack

Performance Testing Workflow

# Performance testing suite
dr qa/performance/load-test https://app.com standard 10m 50
dr qa/performance/load-test https://app.com stress 5m 100
dr qa/performance/monitor-performance https://app.com 7200 60

# Performance analysis
dr qa/performance/compare-baselines reports/performance
dr qa/performance/trend-analysis last-30-days

CI/CD Integration

# .github/workflows/qa-pipeline.yml
name: QA Pipeline
on: [push, pull_request]

jobs:
  web-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install DotRun
        run: curl -fsSL https://raw.githubusercontent.com/jvPalma/dotrun/master/install.sh | sh
      - name: Import QA tools
        run: dr import . qa
      - name: Run web tests
        run: dr qa/web/run-full-suite staging chrome

  api-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install DotRun
        run: curl -fsSL https://raw.githubusercontent.com/jvPalma/dotrun/master/install.sh | sh
      - name: Import QA tools
        run: dr import . qa
      - name: Run API tests
        run: dr qa/api/run-api-tests staging

  mobile-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install DotRun
        run: curl -fsSL https://raw.githubusercontent.com/jvPalma/dotrun/master/install.sh | sh
      - name: Import QA tools
        run: dr import . qa
      - name: Run mobile tests
        run: dr qa/mobile/run-mobile-tests android emulator smoke

  performance-tests:
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v3
      - name: Install DotRun
        run: curl -fsSL https://raw.githubusercontent.com/jvPalma/dotrun/master/install.sh | sh
      - name: Import QA tools
        run: dr import . qa
      - name: Run performance tests
        run: dr qa/performance/load-test ${{ secrets.STAGING_URL }} standard

This QA automation workflow gives QA teams comprehensive test coverage across web, API, mobile, and performance targets, with repeatable execution and detailed reporting.
