Telemetry & DevOps Integration for Pipelining

We maintain two notebooks, and an Azure Data Factory pipeline can be set to trigger the Trigger notebook:

  1. Auto Tune ML Master
  2. Auto Tune ML Trigger

The Trigger notebook calls all the functions explained above on the desired dataset, specifying intermediate run information: the dataset filepath in Azure Data Lake, the key identifier of the experiment run (used to locate it in telemetry), the filepath to which telemetry data is pushed, the Azure workspace and subscription details, and the Service Principal and Tenant ID of the subscription hosting the workspace. The Service Principal credentials bypass the interactive Azure authentication prompt each time the experiment is submitted, which is necessary because the run is automated on a pipeline trigger. A minimal sketch of this setup follows.
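The sketch below illustrates what the non-interactive portion of the Trigger notebook might look like, using the `azureml-core` SDK's `ServicePrincipalAuthentication`. All IDs, names, and paths are placeholder assumptions, not values from AutoBrewML:

```python
# A minimal sketch of non-interactive authentication in the Trigger notebook.
# All IDs, paths, and names below are placeholders, not AutoBrewML defaults.
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

# Service Principal credentials let the pipeline submit experiments
# without an interactive Azure login on each scheduled run.
sp_auth = ServicePrincipalAuthentication(
    tenant_id="<tenant-id>",                       # tenant of the subscription hosting the workspace
    service_principal_id="<app-client-id>",        # application (client) ID of the Service Principal
    service_principal_password="<client-secret>",  # ideally pulled from Key Vault, not hard-coded
)

ws = Workspace.get(
    name="<aml-workspace-name>",
    subscription_id="<subscription-id>",
    resource_group="<resource-group>",
    auth=sp_auth,
)

# Intermediate run information handed to the master functions.
dataset_path = "abfss://<container>@<account>.dfs.core.windows.net/data/input.csv"  # dataset in ADLS
telemetry_path = "abfss://<container>@<account>.dfs.core.windows.net/telemetry/"    # telemetry sink
run_key = "<experiment-run-key>"  # key identifier used to locate this run in telemetry
```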
The telemetry captured encompasses the key identifier of the experiment run, the accuracy scores of the experiment, function-call information, and the results and generation time of each step. The telemetry, along with the actual vs. predicted data, is written to Azure Data Lake and can be fetched into Power BI from the Data Lake over a live connection. Each time the telemetry files are updated, the Power BI report reflects the updated information and the current run status relative to previous runs.
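As an illustration of the telemetry write, here is a hedged sketch that pushes one telemetry record to the Data Lake with the `azure-storage-file-datalake` SDK. The field names and file layout are assumptions for illustration, not the exact AutoBrewML telemetry schema:

```python
# A sketch of writing one telemetry record to ADLS as a JSON line;
# the record fields mirror the telemetry described above but are illustrative.
import json
import time
from azure.storage.filedatalake import DataLakeServiceClient

record = {
    "run_key": "<experiment-run-key>",   # key identifier of the experiment run
    "function": "<function-name>",       # which master function was called
    "accuracy": 0.87,                    # accuracy score for this step (example value)
    "result": "success",                 # result of the step
    "generated_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
}

service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",
    credential="<account-key-or-credential>",  # or the Service Principal credential from above
)
fs = service.get_file_system_client("<container>")
file_client = fs.get_file_client("telemetry/run_telemetry.json")

line = (json.dumps(record) + "\n").encode("utf-8")
# Overwrite-or-create for brevity; a production run would append new rows to the
# existing file so the Power BI live connection picks up each update.
file_client.upload_data(line, overwrite=True)
```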
