AI Providers - Azure/az-prototype GitHub Wiki
AI Providers
Overview
az prototype supports three AI providers, all hosted on Azure or Microsoft infrastructure. Non-Azure providers (public OpenAI, Anthropic direct, Cohere, AWS Bedrock, etc.) are explicitly blocked at multiple layers: config validation, factory construction, and endpoint checks.
| Provider | Key | Default Model | Authentication |
|---|---|---|---|
| GitHub Copilot | copilot | claude-sonnet-4 | GitHub Copilot subscription (OAuth token) |
| GitHub Models | github-models | gpt-4o | GitHub PAT with models:read scope |
| Azure OpenAI | azure-openai | gpt-4o | Azure identity (az login / DefaultAzureCredential) |
The provider is configured via the ai.provider key in Configuration. All AI calls flow through a common AIProvider base class that provides a unified chat() interface with tool calling support.
GitHub Copilot
The default and recommended provider. Uses the GitHub Copilot enterprise API endpoint (api.enterprise.githubcopilot.com) which exposes the full model catalogue including Claude, GPT, and Gemini families.
Setup
- You need an active GitHub Copilot subscription (individual, business, or enterprise).
- Authentication is handled automatically via copilot_auth, which resolves tokens from the OS keychain, environment variables, or the gh CLI.
- No manual token configuration is needed.
Configuration
az prototype config set --key ai.provider --value copilot
az prototype config set --key ai.model --value claude-sonnet-4
Default Model
claude-sonnet-4 is the default model when using the Copilot provider.
Dynamic Model Discovery
The Copilot provider supports dynamic model discovery via the /models endpoint. This means new models become available automatically as GitHub adds them to the Copilot catalogue.
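Discovery reduces to parsing the model listing returned by that endpoint. The sketch below assumes an OpenAI-style payload shape ({"data": [{"id": ...}]}); the real response may carry extra fields per model:

```python
def list_model_ids(models_response: dict) -> list:
    """Extract model ids from a /models listing.

    Assumes an OpenAI-style payload: {"data": [{"id": "..."}]}.
    """
    return sorted(entry["id"] for entry in models_response.get("data", []))
```

Because the catalogue is fetched at runtime rather than hard-coded, newly added models need no client update.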
Supported Model Families
- Claude (Anthropic) -- claude-sonnet-4, claude-sonnet-4.5, and other Claude models
- GPT (OpenAI) -- gpt-4o and other GPT models available through Copilot
- Gemini (Google) -- Gemini models available through Copilot
Technical Details
- Endpoint: https://api.enterprise.githubcopilot.com
- Authentication: raw OAuth token (gho_, ghu_, or ghp_ prefix) sent as a Bearer token
- Timeout: 600 seconds (configurable via the COPILOT_TIMEOUT environment variable)
- No SDK dependency -- uses direct HTTP calls via requests
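The pieces above combine into request parameters as sketched here. The function name is hypothetical; the endpoint, token prefixes, and timeout default come from the details listed above:

```python
import os

COPILOT_ENDPOINT = "https://api.enterprise.githubcopilot.com"
OAUTH_PREFIXES = ("gho_", "ghu_", "ghp_")


def copilot_request_params(token: str):
    """Build the Bearer auth header and timeout for a direct HTTP call."""
    if not token.startswith(OAUTH_PREFIXES):
        raise ValueError("expected a GitHub OAuth token (gho_/ghu_/ghp_)")
    # COPILOT_TIMEOUT overrides the 600-second default
    timeout = int(os.environ.get("COPILOT_TIMEOUT", "600"))
    return {"Authorization": f"Bearer {token}"}, timeout
```

These values would then be passed to requests calls against COPILOT_ENDPOINT.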
GitHub Models
Uses the GitHub Models inference API, which provides access to models hosted on Azure infrastructure through GitHub's model marketplace.
Setup
- Authenticate with a GitHub Personal Access Token (PAT) that has the models:read scope.
- The token is resolved automatically via GitHubAuthManager, which handles the authentication flow.
Configuration
az prototype config set --key ai.provider --value github-models
az prototype config set --key ai.model --value gpt-4o
Default Model
gpt-4o
Technical Details
- Endpoint: https://models.inference.ai.azure.com
- Uses the OpenAI Python SDK with an OpenAI-compatible client pointed at the GitHub Models endpoint
- The token is passed as the API key to the OpenAI client
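Although the provider goes through the OpenAI SDK, the underlying request it produces can be sketched as below; the exact wire format is an assumption based on the standard OpenAI-compatible /chat/completions path, and build_chat_request is a hypothetical helper:

```python
import json

GITHUB_MODELS_ENDPOINT = "https://models.inference.ai.azure.com"


def build_chat_request(pat: str, model: str, messages: list):
    """Assemble the raw request an OpenAI-compatible client would send.

    The GitHub PAT simply takes the place of an OpenAI API key.
    """
    url = f"{GITHUB_MODELS_ENDPOINT}/chat/completions"
    headers = {
        "Authorization": f"Bearer {pat}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body
```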
Azure OpenAI
Uses your own Azure OpenAI Service deployment. Authentication is exclusively via Azure identity (DefaultAzureCredential) -- API key authentication is not supported.
Setup
- Deploy an Azure OpenAI resource with at least one model deployment.
- Run az login to authenticate with your Azure account.
- Configure the endpoint and deployment name.
Configuration
az prototype config set --key ai.provider --value azure-openai
az prototype config set --key ai.azure_openai.endpoint --value https://my-resource.openai.azure.com/
az prototype config set --key ai.azure_openai.deployment --value gpt-4o
Config Keys
| Key | Description |
|---|---|
| ai.azure_openai.endpoint | Azure OpenAI endpoint URL (must be *.openai.azure.com) |
| ai.azure_openai.deployment | Deployment name within the resource (defaults to gpt-4o) |
Default Model
gpt-4o (used as the deployment name if none is specified)
Security Constraints
The endpoint is validated at three layers:
- Config validation (config/__init__.py) -- rejects endpoints that do not match *.openai.azure.com
- Provider validation (azure_openai.py) -- regex check plus blocked-endpoint list
- Factory validation (factory.py) -- provider allowlist enforcement
Blocked endpoints include api.openai.com, chat.openai.com, platform.openai.com, and openai.com.
API key authentication is explicitly rejected. When you attempt to set ai.azure_openai.api_key, the CLI returns an error directing you to use az login instead.
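The combined pattern-plus-blocklist check can be sketched as follows; the function name and exact regex are illustrative, while the blocked hosts and the *.openai.azure.com requirement come from the constraints above:

```python
import re
from urllib.parse import urlparse

BLOCKED_HOSTS = {"api.openai.com", "chat.openai.com", "platform.openai.com", "openai.com"}
AZURE_HOST = re.compile(r"^[a-z0-9-]+\.openai\.azure\.com$")


def endpoint_allowed(endpoint: str) -> bool:
    """True only for *.openai.azure.com hosts not on the blocklist."""
    host = (urlparse(endpoint).hostname or "").lower()
    return host not in BLOCKED_HOSTS and bool(AZURE_HOST.match(host))
```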
Provider-Model Compatibility
Certain models are only available through specific providers. The factory enforces these constraints before making any API calls.
Copilot-Only Model Prefixes
Models with the following prefixes can only be used with the copilot provider:
- claude- -- all Anthropic Claude models
- gemini- -- all Google Gemini models
If you attempt to use a Claude or Gemini model with github-models or azure-openai, the CLI returns an error with instructions to either switch to the Copilot provider or choose a compatible model.
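A minimal sketch of that pre-call check, assuming the prefixes and provider keys described above (the function name is illustrative):

```python
COPILOT_ONLY_PREFIXES = ("claude-", "gemini-")


def check_model_compatibility(provider: str, model: str) -> None:
    """Raise before any API call if the model needs the copilot provider."""
    if provider != "copilot" and model.startswith(COPILOT_ONLY_PREFIXES):
        raise ValueError(
            f"{model!r} is only available via the copilot provider; "
            "switch providers or pick a compatible model"
        )
```

Failing fast here avoids a confusing downstream API error from a provider that has never heard of the model.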
Default Models by Provider
| Provider | Default Model |
|---|---|
| copilot | claude-sonnet-4 |
| github-models | gpt-4o |
| azure-openai | gpt-4o |
Switching Providers
Change the active provider with:
az prototype config set --key ai.provider --value copilot
az prototype config set --key ai.provider --value github-models
az prototype config set --key ai.provider --value azure-openai
When switching providers, ensure your model selection is compatible. If you were using claude-sonnet-4 with Copilot and switch to github-models, you should also update the model:
az prototype config set --key ai.provider --value github-models
az prototype config set --key ai.model --value gpt-4o
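The "keep the model if compatible, otherwise fall back to the new provider's default" logic can be sketched like this; model_after_switch is a hypothetical helper, with the prefixes and defaults taken from the sections above:

```python
COPILOT_ONLY_PREFIXES = ("claude-", "gemini-")
DEFAULT_MODELS = {
    "copilot": "claude-sonnet-4",
    "github-models": "gpt-4o",
    "azure-openai": "gpt-4o",
}


def model_after_switch(new_provider: str, current_model: str) -> str:
    """Keep the current model if the new provider supports it;
    otherwise fall back to that provider's default."""
    if new_provider != "copilot" and current_model.startswith(COPILOT_ONLY_PREFIXES):
        return DEFAULT_MODELS[new_provider]
    return current_model
```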
Model Override
The model can be set in two ways:
- At init time, using the --model flag: az prototype init --name my-project --model gpt-4o
- After init, using the config command: az prototype config set --key ai.model --value claude-sonnet-4.5
The model setting is stored in ai.model in prototype.yaml and applies to all subsequent AI calls. See Configuration for the full config structure.