models qna coherence eval - Azure/azureml-assets GitHub Wiki
"QnA Coherence Evaluation" is a model for evaluating Q&A Retrieval Augmented Generation (RAG) systems. It leverages state-of-the-art Large Language Models (LLMs) to measure the quality and safety of your responses; using GPT-3.5 as the measurement model aims to achieve higher agreement with human evaluation than traditional mathematical metrics.
Inference type | CLI | VS Code Extension
---|---|---
Real time | deploy-promptflow-model-cli-example | deploy-promptflow-model-vscode-extension-example
Batch | N/A | N/A
Sample input:

```json
{
  "inputs": {
    "question": "What feeds all the fixtures in low voltage tracks instead of each light having a line-to-low voltage transformer?",
    "answer": "The main transformer is the object that feeds all the fixtures in low voltage tracks."
  }
}
```
Sample output:

```json
{
  "outputs": {
    "gpt_coherence": 5
  }
}
```
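The request and response shapes above can be sketched in client code. This is a minimal, hypothetical helper for preparing a scoring request and reading back the coherence score; the endpoint URL, key, and transport are placeholders (a real call would POST the body to the deployed endpoint's scoring URL), but the JSON shapes follow the samples shown.

```python
import json


def build_request(question: str, answer: str) -> str:
    """Serialize the flow inputs into the JSON body shown in the sample input."""
    return json.dumps({"inputs": {"question": question, "answer": answer}})


def parse_response(body: str) -> int:
    """Extract the gpt_coherence score from the sample-output-shaped JSON."""
    return json.loads(body)["outputs"]["gpt_coherence"]


if __name__ == "__main__":
    req = build_request(
        "What feeds all the fixtures in low voltage tracks instead of each "
        "light having a line-to-low voltage transformer?",
        "The main transformer is the object that feeds all the fixtures in "
        "low voltage tracks.",
    )
    # A real deployment call would send `req` to the scoring URL with an
    # Authorization header, e.g. using the `requests` library (placeholder):
    #   requests.post(scoring_url, data=req,
    #                 headers={"Authorization": f"Bearer {key}"})
    print(parse_response('{"outputs": {"gpt_coherence": 5}}'))  # -> 5
```

Whether the deployed endpoint expects the `inputs` wrapper exactly as shown can depend on the serving layer; the structure here simply mirrors the sample input and output above.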
Version: 3
View in Studio: https://ml.azure.com/registries/azureml/models/qna-coherence-eval/version/3
is-promptflow: True
azureml.promptflow.section: gallery
azureml.promptflow.type: evaluate
azureml.promptflow.name: QnA Coherence Evaluation
azureml.promptflow.description: Compute the coherence of the answer based on the question using an LLM.
inference-min-sku-spec: 2|0|14|28
inference-recommended-sku: Standard_DS3_v2