
Compute Metrics

compute_metrics

Overview

Calculate model performance metrics, given ground truth and prediction data.

Version: 0.0.36

Tags

type: evaluation
sub_type: compute_metrics

View in Studio: https://ml.azure.com/registries/azureml/components/compute_metrics/version/0.0.36

Inputs

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| task | Task type | string | tabular-classification | False | ['tabular-classification', 'tabular-classification-multilabel', 'tabular-regression', 'tabular-forecasting', 'text-classification', 'text-classification-multilabel', 'text-named-entity-recognition', 'text-summarization', 'question-answering', 'text-translation', 'text-generation', 'fill-mask', 'image-classification', 'image-classification-multilabel', 'chat-completion', 'image-object-detection', 'image-instance-segmentation'] |
| ground_truth | Ground truths of the test data, as a 1-column JSON Lines file | uri_folder | | True | |
| ground_truth_column_name | Name of the column that contains the ground truths in the ground_truth URI file. (Optional if the file has only one column.) | string | | True | |
| prediction | Model predictions, as a 1-column JSON Lines file | uri_folder | | False | |
| prediction_column_name | Name of the column that contains the predictions in the prediction URI file. (Optional if the file has only one column.) | string | | True | |
| prediction_probabilities | Prediction probabilities, as a 1-column JSON Lines file | uri_folder | | True | |
| evaluation_config | Additional parameters required for evaluation. | uri_file | | True | |
| evaluation_config_params | JSON-serialized string of evaluation_config | string | | True | |
| openai_config_params | OpenAI parameters required for computing GPT-based metrics for the question-answering task | string | | True | |
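The ground_truth, prediction, and prediction_probabilities inputs each point to a folder containing a 1-column JSON Lines file (or a multi-column file paired with the matching *_column_name input). As an illustrative sketch for a text-classification task, with hypothetical file and column names, the ground truth folder might contain a file like:

```json
{"label": "positive"}
{"label": "negative"}
```

and the prediction folder a file like:

```json
{"prediction": "positive"}
{"prediction": "positive"}
```

With multi-column files you would set ground_truth_column_name and prediction_column_name to the appropriate column (here, "label" and "prediction"); since each example file above has exactly one column, those inputs could be omitted.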

Outputs

| Name | Description | Type |
| ---- | ----------- | ---- |
| evaluation_result | Output folder containing the computed metrics | uri_folder |
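Below is a minimal sketch of wiring this component into a pipeline with the Azure ML v2 Python SDK (azure-ai-ml). The component name, version, and input names come from the tables above; the workspace details, data paths, and column names are placeholder assumptions for illustration.

```python
from azure.ai.ml import MLClient, Input, dsl
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# Client scoped to the azureml registry, where this component is published.
registry_client = MLClient(credential, registry_name="azureml")
compute_metrics = registry_client.components.get(
    name="compute_metrics", version="0.0.36"
)

# Client scoped to your own workspace (placeholders are illustrative).
ml_client = MLClient(
    credential,
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

@dsl.pipeline(name="compute-metrics-example")
def eval_pipeline(ground_truth: Input, prediction: Input):
    # Invoke the component with the inputs documented above.
    metrics_job = compute_metrics(
        task="text-classification",
        ground_truth=ground_truth,
        ground_truth_column_name="label",      # hypothetical column name
        prediction=prediction,
        prediction_column_name="prediction",   # hypothetical column name
    )
    return {"evaluation_result": metrics_job.outputs.evaluation_result}

pipeline_job = eval_pipeline(
    ground_truth=Input(type="uri_folder", path="<path-to-ground-truth-folder>"),
    prediction=Input(type="uri_folder", path="<path-to-prediction-folder>"),
)
submitted = ml_client.jobs.create_or_update(pipeline_job)
```

After the job completes, evaluation_result can be downloaded from the pipeline run or fed into downstream pipeline steps like any other uri_folder output.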

Environment

azureml://registries/azureml/environments/model-evaluation/labels/latest
