# Governance Policies: Azure OpenAI

Governance policies for Azure OpenAI (Azure/az-prototype GitHub Wiki).

Domain: `azure-ai`
| Name | Description |
|---|---|
| Azure OpenAI with private endpoint and RBAC | Secure Azure OpenAI deployment with no public access, managed identity, and private connectivity |
| Anti-pattern | Instead |
|---|---|
| Do not use API key authentication for Azure OpenAI | Set `disableLocalAuth=true` and use managed identity with the Cognitive Services OpenAI User role |
| Do not deploy models without version pinning | Always specify `model.version` and set `versionUpgradeOption` to `NoAutoUpgrade` |
| Do not leave `publicNetworkAccess` as `Enabled` | Set `publicNetworkAccess` to `Disabled` and use private endpoints |
| Check | Severity | Description |
|---|---|---|
| AZ-AOI-001 | Required | Deploy Azure OpenAI with managed identity and disable API key authentication |
| AZ-AOI-002 | Required | Deploy model instances with explicit capacity and version pinning |
| AZ-AOI-003 | Recommended | Implement content filtering policies on all deployments |
| AZ-AOI-004 | Recommended | Configure rate limiting and retry logic in consuming applications |
## AZ-AOI-001: Deploy Azure OpenAI with managed identity and disable API key authentication
Severity: Required
Rationale: API keys are long-lived credentials that cannot be scoped; managed identity eliminates credential management
Agents: terraform-agent, bicep-agent, cloud-architect
Resource types:

- Microsoft.CognitiveServices/accounts
| Resource | Name | Purpose |
|---|---|---|
| Microsoft.Network/privateEndpoints | pe-openai | Private endpoint for Azure OpenAI to eliminate public network exposure |
| Microsoft.Network/privateDnsZones | privatelink.openai.azure.com | Private DNS zone for Azure OpenAI private endpoint resolution |
| Microsoft.Insights/diagnosticSettings | diag-openai | Diagnostic settings to route audit and request logs to Log Analytics |
| Microsoft.Authorization/roleAssignments | Cognitive Services OpenAI User | RBAC role assignment granting consuming identity the Cognitive Services OpenAI User role |
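The wiki does not show how AZ-AOI-001 is enforced, so the following is a minimal sketch of a policy check, assuming the account is represented as an ARM-style `Microsoft.CognitiveServices/accounts` definition; the `check_aoi_001` helper name is hypothetical, not part of any Azure tooling:

```python
def check_aoi_001(account: dict) -> list[str]:
    """Flag AZ-AOI-001 violations in an ARM-style Cognitive Services
    account definition (hypothetical checker, not an Azure API).

    Assumes the dict follows the Microsoft.CognitiveServices/accounts
    resource shape: identity.type and properties.disableLocalAuth.
    """
    findings = []

    # API key auth must be disabled so callers are forced onto Entra ID / RBAC.
    props = account.get("properties", {})
    if not props.get("disableLocalAuth", False):
        findings.append("disableLocalAuth must be true (API key auth must be disabled)")

    # A managed identity must be configured on the account.
    identity_type = account.get("identity", {}).get("type", "None")
    if "SystemAssigned" not in identity_type and "UserAssigned" not in identity_type:
        findings.append("a managed identity (SystemAssigned or UserAssigned) must be configured")

    return findings
```

A compliant account definition yields an empty findings list; each violation adds one human-readable finding, which suits both CI gating and report generation.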
## AZ-AOI-002: Deploy model instances with explicit capacity and version pinning
Severity: Required
Rationale: Unpinned model versions cause non-deterministic behavior; unset capacity causes throttling
Agents: terraform-agent, bicep-agent, cloud-architect
Resource types:

- Microsoft.CognitiveServices/accounts
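In the same hypothetical-checker style, an AZ-AOI-002 check could validate an ARM-style `accounts/deployments` definition for version pinning and explicit capacity (property paths `properties.model.version`, `properties.versionUpgradeOption`, and `sku.capacity` are assumed from the deployment resource shape):

```python
def check_aoi_002(deployment: dict) -> list[str]:
    """Flag AZ-AOI-002 violations in an ARM-style Azure OpenAI model
    deployment definition (hypothetical checker, not an Azure API)."""
    findings = []
    props = deployment.get("properties", {})

    # The model version must be pinned explicitly for deterministic behavior.
    if not props.get("model", {}).get("version"):
        findings.append("model.version must be pinned explicitly")

    # Auto-upgrade would silently move the deployment to a new version.
    if props.get("versionUpgradeOption") != "NoAutoUpgrade":
        findings.append("versionUpgradeOption must be NoAutoUpgrade")

    # Capacity left unset leads to default quotas and throttling under load.
    if not deployment.get("sku", {}).get("capacity"):
        findings.append("sku.capacity must be set explicitly to avoid throttling")

    return findings
```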
## AZ-AOI-003: Implement content filtering policies on all deployments
Severity: Recommended
Rationale: Content filtering prevents misuse and ensures responsible AI compliance
Agents: terraform-agent, bicep-agent, cloud-architect
Resource types:

- Microsoft.CognitiveServices/accounts
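A sketch of how AZ-AOI-003 could be checked, assuming each deployment references its content filtering (responsible AI) policy through a `properties.raiPolicyName` field, as ARM-style Azure OpenAI deployment definitions do; the helper and the policy name in the test are illustrative:

```python
def check_aoi_003(deployment: dict) -> list[str]:
    """Flag AZ-AOI-003 violations: a deployment should reference a
    content filtering (RAI) policy (hypothetical checker)."""
    # Assumption: the policy reference lives at properties.raiPolicyName.
    if not deployment.get("properties", {}).get("raiPolicyName"):
        return ["deployment should reference a content filtering (RAI) policy via raiPolicyName"]
    return []
```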
## AZ-AOI-004: Configure rate limiting and retry logic in consuming applications
Severity: Recommended
Rationale: Azure OpenAI enforces TPM and RPM limits; clients must handle 429 responses gracefully
Agents: app-developer, csharp-developer, python-developer, cloud-architect
Resource types:

- Microsoft.CognitiveServices/accounts
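Since the wiki prescribes the behavior but not an implementation, here is a minimal, transport-agnostic sketch of 429 handling with exponential backoff: `call_with_retry` and the shape of `send` (returning status, headers, body) are assumptions for illustration, and a `Retry-After` header, when the service returns one, takes precedence over the computed backoff:

```python
import random
import time


def call_with_retry(send, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call send() and retry on HTTP 429 with exponential backoff.

    `send` is any callable returning (status_code, headers, body);
    this sketch is transport-agnostic, so it works with whatever HTTP
    client the consuming application already uses.
    """
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return status, body
        if attempt == max_attempts - 1:
            break  # retries exhausted
        retry_after = headers.get("Retry-After")
        if retry_after is not None:
            # Honor the service-provided delay when present.
            delay = float(retry_after)
        else:
            # Exponential backoff with a little jitter to avoid thundering herds.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.25)
        sleep(delay)
    raise RuntimeError("rate limited: retries exhausted")
```

Injecting `sleep` keeps the helper testable without real delays; in production the default `time.sleep` applies.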