AI-assisted Test Automation - up1/training-courses GitHub Wiki
AI-assisted Test Automation and MLOps for Developers
Software requirements
Day 1: The Evolution of QA in the AI Era
- Introduction to Software Development Life Cycle (SDLC)
- Shift-Left vs. Shift-Right
- Integrating AI into the SDLC
- Moving from Manual to Automated Testing
- Identifying "Automation Debt" and how AI reduces maintenance
- AI-Driven Test Generation & Execution
- Generative AI for Test Cases: Using LLMs to transform User Stories into Playwright/Selenium scripts
- Self-Healing Tests: Implementing intelligent object recognition to fix brittle UI tests automatically
- Autonomous Exploration: AI agents for "monkey testing" and edge-case discovery
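The first step of LLM-based test generation is assembling a prompt from the user story and its acceptance criteria. A minimal sketch of that step, assuming a hypothetical `build_test_prompt` helper (the template wording and function name are illustrative, not from any specific tool):

```python
# Sketch: turn a user story into an LLM prompt that asks for a
# Playwright (Python) test. The returned string would be sent to an
# LLM API; that call is omitted here to keep the example self-contained.

def build_test_prompt(story: str, acceptance_criteria: list[str]) -> str:
    """Assemble an LLM prompt requesting a runnable Playwright test."""
    criteria = "\n".join(f"- {c}" for c in acceptance_criteria)
    return (
        "You are a QA engineer. Write a Playwright test in Python.\n"
        f"User story:\n{story}\n"
        f"Acceptance criteria:\n{criteria}\n"
        "Return only runnable code using pytest and playwright.sync_api."
    )

prompt = build_test_prompt(
    "As a user, I can reset my password via email.",
    ["A reset link is emailed within 60 seconds",
     "The link expires after 24 hours"],
)
print(prompt)
```

In practice the generated script is then executed in CI, and failures are fed back to the model for repair.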
- Intelligent Defect Detection
- Pattern recognition in log files to identify root causes
- Visual Regression: Using Computer Vision to detect pixel-perfect UI anomalies
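The log-analysis idea above can be sketched with plain regex templating: mask the variable parts of each line (ids, timestamps, hex addresses) so that errors with the same "shape" cluster together, then rank the clusters. This is a stdlib-only illustration of the pattern-recognition step, not a specific tool's algorithm:

```python
import re
from collections import Counter

def normalize(line: str) -> str:
    """Collapse variable parts into placeholders so lines with the
    same structure group under one template."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

def top_error_patterns(lines, k=3):
    """Most frequent error templates - candidates for the root cause."""
    counts = Counter(normalize(l) for l in lines if "ERROR" in l)
    return counts.most_common(k)

logs = [
    "INFO request 101 ok",
    "ERROR timeout after 5000 ms on conn 0x1f2a",
    "ERROR timeout after 7000 ms on conn 0x9bc0",
    "ERROR disk full on /var/log",
]
print(top_error_patterns(logs))
# Both timeout lines collapse into one template with count 2.
```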
- Workshop
- Build a "Self-Healing" test suite that adapts to UI changes without manual script updates
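The core of the self-healing workshop can be reduced to locator fallback: try the primary selector, then progressively more stable alternates (test id, visible text) when the DOM changes. A minimal sketch, where a plain dict stands in for a real Playwright page:

```python
# Self-healing locator sketch. The `page` dict is a stand-in for a
# browser DOM; with Playwright you would attempt page.locator(loc)
# for each candidate instead.

def find_element(page: dict, locators: list[str]):
    """Return (locator_used, element) for the first locator that resolves."""
    for loc in locators:
        if loc in page:
            return loc, page[loc]
    raise LookupError(f"No locator matched: {locators}")

# The button's CSS id changed in a redesign, but its test id survived.
page = {"[data-testid=submit]": "<button>", "text=Submit": "<button>"}
used, element = find_element(
    page, ["#submit-btn", "[data-testid=submit]", "text=Submit"]
)
print(used)  # the "healed" locator that was actually used
```

A production version would also log which fallback fired, so the primary selector can be updated (or auto-rewritten) later.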
Day 2: MLOps Foundations & Pipeline Engineering
- Shift focus to the "Ops" of AI, learning to build resilient pipelines for model training and deployment
- The MLOps Lifecycle & Architecture
- Bridging the Gap: Data Scientists (Experimentation) vs. Developers (Production)
- Core Pillars: Data Versioning, Model Registry, and Feature Stores
- Experiment Tracking & Versioning
- Tracking hyper-parameters and metrics with MLflow or Weights & Biases
- Managing Model Lineage: "Which data produced this specific model version?"
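The lineage question above ("which data produced this model?") is commonly answered by content-hashing the inputs and recording the fingerprints alongside each model version. A stdlib sketch of that idea (the `register_model` helper is hypothetical; MLflow and similar tools record this for you):

```python
import hashlib
import json

def fingerprint(obj) -> str:
    """Deterministic content hash: identical data/params give an
    identical fingerprint, so lineage can be compared across runs."""
    payload = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

def register_model(registry: dict, data, params, version: str):
    """Record which dataset and hyper-parameters produced a model version."""
    registry[version] = {"data": fingerprint(data), "params": fingerprint(params)}

registry = {}
register_model(registry, data=[[1, 2], [3, 4]], params={"lr": 0.01}, version="v1")
register_model(registry, data=[[1, 2], [3, 4]], params={"lr": 0.10}, version="v2")

# v1 and v2 share data lineage but differ in hyper-parameters:
print(registry["v1"]["data"] == registry["v2"]["data"])      # True
print(registry["v1"]["params"] == registry["v2"]["params"])  # False
```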
- Automating the Training Pipeline
- Designing Directed Acyclic Graphs (DAGs) for automated training
- Integrating Continuous Training (CT): Triggering retrains based on schedule or data changes
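A training pipeline expressed as a DAG can be resolved with the standard library alone; orchestrators like Airflow do the same topological ordering with scheduling and retries on top. A minimal sketch using `graphlib` (Python 3.9+), with step names chosen for illustration:

```python
from graphlib import TopologicalSorter

# Each step maps to the set of steps it depends on.
pipeline = {
    "ingest": set(),
    "validate": {"ingest"},
    "features": {"validate"},
    "train": {"features"},
    "evaluate": {"train"},
    "register": {"evaluate"},
}

# static_order() yields steps so every dependency runs before its dependents.
order = list(TopologicalSorter(pipeline).static_order())
print(order)
# ['ingest', 'validate', 'features', 'train', 'evaluate', 'register']
```

Continuous Training then amounts to re-running this DAG from a scheduler or a data-change trigger.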
- Workshop
- Set up an MLflow server to track experiments and version-control a model artifact
Day 3: Advanced CI/CD Integration & Governance
- CI/CD Pipelines
- Validating Models in the Pipeline: Precision/Recall gates before deployment
- Performance Testing for AI: Testing inference latency and resource consumption (CPU/GPU)
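A precision/recall gate is just a pass/fail check computed from the confusion counts on a held-out set; the pipeline proceeds only if both metrics clear their thresholds. A minimal sketch (thresholds are illustrative):

```python
def gate(tp: int, fp: int, fn: int,
         min_precision: float = 0.90, min_recall: float = 0.85):
    """Return (passed, precision, recall); a CI job would fail the
    build when `passed` is False, blocking the deployment."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    passed = precision >= min_precision and recall >= min_recall
    return passed, round(precision, 3), round(recall, 3)

# Precision is fine (0.947) but recall (0.818) misses the 0.85 bar:
print(gate(tp=90, fp=5, fn=20))   # (False, 0.947, 0.818)
print(gate(tp=90, fp=5, fn=5))    # (True, 0.947, 0.947)
```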
- Deployment & Monitoring in Production
- Canary & Shadow Deployments: Testing new models on live traffic without impacting users
- Drift Detection: Monitoring for Data Drift and Concept Drift to trigger automated rollbacks
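A very small data-drift check: score how far the live feature mean has shifted from the training baseline, in units of the baseline's standard deviation, and alarm above a threshold. Real systems use richer tests (PSI, KS), but the trigger logic is the same; the numbers and threshold here are illustrative:

```python
import statistics

def drift_score(baseline: list[float], live: list[float]) -> float:
    """Shift of the live mean, measured in baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma

baseline = [10, 11, 9, 10, 12, 10, 11]   # feature values seen at training time
live_ok = [10, 11, 10, 9, 11]            # production traffic, no drift
live_drifted = [18, 19, 20, 18, 21]      # distribution has moved

THRESHOLD = 3.0  # above this, trigger a rollback or retrain
print(drift_score(baseline, live_ok) > THRESHOLD,
      drift_score(baseline, live_drifted) > THRESHOLD)
```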
- Reliability, Scalability, and Governance
- Automated Rollback Strategies: Implementing "circuit breakers" for degrading models
- Responsible AI: Integrating bias detection and explainability (XAI) into the pipeline
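The "circuit breaker" for degrading models can be sketched as a rolling accuracy window: once enough outcomes are observed and accuracy falls below the floor, the breaker opens and traffic is routed to a fallback. Class name, window size, and thresholds are illustrative:

```python
from collections import deque

class ModelCircuitBreaker:
    """Opens (stops serving the model) when rolling accuracy degrades."""

    def __init__(self, window: int = 100, min_accuracy: float = 0.8,
                 min_samples: int = 10):
        self.results = deque(maxlen=window)
        self.min_accuracy = min_accuracy
        self.min_samples = min_samples
        self.open = False  # open circuit = route traffic to fallback

    def record(self, correct: bool) -> None:
        self.results.append(correct)
        if len(self.results) >= self.min_samples:
            accuracy = sum(self.results) / len(self.results)
            if accuracy < self.min_accuracy:
                self.open = True

breaker = ModelCircuitBreaker(window=20, min_accuracy=0.8)
for ok in [True] * 8 + [False] * 4:   # a run of bad predictions
    breaker.record(ok)
print(breaker.open)  # True: accuracy dropped below 0.8
```

A production breaker would also support a half-open state to probe whether a retrained model has recovered.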
- Workshop
- Build a CI/CD pipeline that triggers an automated model evaluation and blocks a deployment if accuracy falls below a threshold