components openai_completions_finetune_pipeline - Azure/azureml-assets GitHub Wiki
Fine-tune your own Azure OpenAI (OAI) model. See https://learn.microsoft.com/en-us/azure/cognitive-services/openai/ for more information.
Version: 0.1.3
Contact: [email protected]
View in Studio: https://ml.azure.com/registries/azureml/components/openai_completions_finetune_pipeline/version/0.1.3
Inputs

Name | Description | Type | Default | Optional | Enum |
---|---|---|---|---|---|
model | GPT model engine | string | gpt-35-turbo | False | ['babbage-002', 'davinci-002', 'gpt-35-turbo', 'gpt-4'] |
train_dataset | Input dataset (file or folder). If a folder dataset is passed, all nested files are included. | uri_folder | | False | |
validation_dataset | Input dataset (file or folder). If a folder dataset is passed, all nested files are included. | uri_folder | | True | |
task_type | Dataset type: chat or completion | string | | False | ['chat', 'completion'] |
registered_model_name | User-defined registered model name | string | | False | |
n_epochs | Number of training epochs. If set to -1, the number of epochs is determined dynamically from the input data. | integer | -1 | False | |
learning_rate_multiplier | The learning rate multiplier to use for training. | number | 1.0 | False | |
batch_size | Global batch size. If set to -1, the batch size is determined dynamically from the input data. | integer | -1 | False | |
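The `task_type` input selects between the two standard OpenAI fine-tuning data formats, both JSON Lines (one JSON object per line). The snippet below is a minimal sketch of what a record in each format looks like; the example contents are hypothetical, and the exact schema accepted by the service should be confirmed against the Azure OpenAI fine-tuning documentation.

```python
import json

# Hypothetical record for task_type = 'chat': a list of role-tagged messages.
chat_example = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris."},
    ]
}

# Hypothetical record for task_type = 'completion': a prompt/completion pair.
completion_example = {
    "prompt": "What is the capital of France?",
    "completion": " Paris.",
}

def to_jsonl(records):
    """Serialize records as JSON Lines: one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

# Each line of the train/validation files is one such record.
train_jsonl = to_jsonl([chat_example])
print(train_jsonl)
```

A folder passed to `train_dataset` would contain one or more such `.jsonl` files.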
Outputs

Name | Description | Type |
---|---|---|
output_model | Dataset with the output model weights (LoRA weights) | uri_folder |
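One way to consume this component is to reference it from a pipeline job YAML by its registry path. The fragment below is a sketch following the Azure ML v2 pipeline job schema; the job name, display name, registered model name, and datastore path are placeholders, and only the inputs listed in the table above are wired up.

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: oai-finetune-example   # placeholder name
jobs:
  finetune:
    # Reference the component from the azureml registry at this version.
    component: azureml://registries/azureml/components/openai_completions_finetune_pipeline/versions/0.1.3
    inputs:
      model: gpt-35-turbo
      task_type: chat
      registered_model_name: my-finetuned-gpt35   # placeholder
      train_dataset:
        type: uri_folder
        path: azureml://datastores/workspaceblobstore/paths/data/train/   # placeholder path
      n_epochs: -1
      learning_rate_multiplier: 1.0
      batch_size: -1
```

The optional `validation_dataset` input can be wired the same way as `train_dataset`; with `n_epochs` and `batch_size` left at -1, both are determined dynamically from the input data.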