
# Batch Benchmark Config Generator

## batch_benchmark_config_generator

### Overview

Generates the config for the batch score component.

Version: 0.0.9

View in Studio: https://ml.azure.com/registries/azureml/components/batch_benchmark_config_generator/version/0.0.9

### Inputs

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| configuration_file | An optional configuration file to use for deployment settings. This overrides passed-in parameters. | uri_file | | True | |
| scoring_url | The URL of the endpoint. | string | | False | |
| model_type | Type of model. One of 'oai', 'oss', or 'vision_oss'. | string | | False | ['oai', 'oss', 'vision_oss'] |
| authentication_type | Authentication type for the endpoint. One of 'azureml_workspace_connection' or 'managed_identity'. | string | azureml_workspace_connection | False | ['azureml_workspace_connection', 'managed_identity'] |
| connection_name | The name of the connection from which to fetch the API_KEY for endpoint authentication. | string | | True | |
| deployment_name | The deployment name. Only needed for managed OSS deployments. | string | | True | |
| debug_mode | Enabling debug mode prints all debug logs in the score step. | boolean | False | False | |
| additional_headers | A stringified JSON object of additional headers to add to each request (see the example after this table). | string | | True | |
| ensure_ascii | If true, all incoming non-ASCII characters are escaped in the output; if false, they are output as-is (see the example after this table). For details, see https://docs.python.org/3/library/json.html. | boolean | False | False | |
| max_retry_time_interval | The maximum time (in seconds) spent retrying a payload. If unspecified, payloads are retried an unlimited number of times. | integer | | True | |
| initial_worker_count | The initial number of workers to use for scoring. | integer | 5 | False | |
| max_worker_count | The maximum number of workers; overrides initial_worker_count if necessary. | integer | 200 | False | |
| response_segment_size | The maximum number of tokens to generate at a time. If set to 0 (the default), the full response is generated all at once. If greater than 0, tokens are generated incrementally in segments; during each increment, the request and the previous partial response are sent to the model to generate the next segment, and the segments are stitched together to form the full response. | integer | 0 | False | |
| app_insights_connection_string | Application Insights connection string to which the batch score component will log metrics and logs. | string | | True | |
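Both `additional_headers` and `ensure_ascii` follow plain Python JSON semantics. The sketch below is illustrative only: the header name is a made-up placeholder, and `ensure_ascii` simply mirrors the same-named option of Python's `json` module.

```python
import json

# additional_headers must be passed as a stringified JSON object.
# "x-custom-header" is a placeholder name, not a header this component requires.
additional_headers = json.dumps({"x-custom-header": "benchmark-run-1"})

payload = {"prompt": "Résumé for a naïve café owner"}

# ensure_ascii=True escapes non-ASCII characters in the serialized output;
# ensure_ascii=False writes them through unchanged.
print(json.dumps(payload, ensure_ascii=True))   # {"prompt": "R\u00e9sum\u00e9 ..."}
print(json.dumps(payload, ensure_ascii=False))  # {"prompt": "Résumé for a naïve café owner"}
```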

### Outputs

| Name | Description | Type |
| ---- | ----------- | ---- |
| batch_score_config | The config JSON file for the batch score component (see the pipeline sketch below). | uri_file |
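A minimal sketch of pulling this component from the azureml registry and wiring it into a pipeline with the `azure-ai-ml` SDK. The subscription, workspace, endpoint URL, and connection name are placeholders, and the downstream batch score step that would consume `batch_score_config` is omitted; this is not the canonical usage, just one way the inputs and output might be wired.

```python
from azure.ai.ml import MLClient, dsl
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# Workspace client used to submit the pipeline job (placeholder identifiers).
ml_client = MLClient(
    credential,
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Registry client used to load the published component.
registry_client = MLClient(credential, registry_name="azureml")
config_generator = registry_client.components.get(
    name="batch_benchmark_config_generator", version="0.0.9"
)

@dsl.pipeline(description="Generate config for the batch score component")
def benchmark_config_pipeline():
    config_step = config_generator(
        scoring_url="https://<endpoint-name>.<region>.inference.ml.azure.com/score",
        model_type="oss",
        authentication_type="azureml_workspace_connection",
        connection_name="<workspace-connection-name>",  # placeholder
        initial_worker_count=5,
        max_worker_count=200,
        response_segment_size=0,
    )
    # config_step.outputs.batch_score_config is the uri_file that a
    # downstream batch score step would consume.
    return {"batch_score_config": config_step.outputs.batch_score_config}

pipeline_job = ml_client.jobs.create_or_update(benchmark_config_pipeline())
```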

### Environment

azureml://registries/azureml/environments/evaluation/labels/latest
