By Michael Crane and Mike Gibbs
In today’s cloud environments, safeguarding user identities is paramount. By implementing modern authentication protocols and robust security measures, such as multi-factor authentication, organizations can significantly reduce the risk of unauthorized access. For instance, Microsoft Entra provides a solution for securing Generative AI (GenAI) resources through Conditional Access. When paired with the principle of least privilege, this ensures users are granted only the necessary permissions to complete their tasks, effectively reducing the risk of misuse, particularly as technologies like GenAI become more integrated into cloud infrastructures.
This blog post will explore how to use Kusto Query Language (KQL) and Azure Policy to restrict the creation of GenAI resources in Azure. Additionally, we will examine how to monitor resource creation events using KQL and enforce access controls to prevent unauthorized use of GenAI resources. The steps outlined will help ensure your Azure environment remains both secure and compliant.
A critical first step in protecting GenAI is securing Azure AI Foundry and Azure AI Studio, as covered in the previous post on leveraging Conditional Access. Once this foundational layer of security is in place, it's essential to deploy and further protect the environment.
1. Create service principals for the “Azure AI Studio” and “Azure OpenAI Studio” apps so that Conditional Access policies can be applied to them for enhanced security.
There are several reasons for this approach. Conditional Access lets us implement a multi-layered security strategy, tailoring authorization signals for enhanced protection. **Repeat this for each application.** After deploying the protection below and following the Protect AI with Conditional Access guidance, you’ll have 4 service principals ready for authN/authZ protection. Two example policies follow, along with a quick Sentinel check to verify they are applying:
- Example 1: Enforce a policy that requires phishing-resistant authentication methods, device compliance, and trusted locations.
- Example 2: Create a Conditional Access policy to mandate authorization through a Microsoft Defender for Cloud Apps (MDCA) session control policy, further controlling app access.

Example 1: Conditional Access Policy Enforcement

Example 2: Conditional Access App Control with MDCA
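To verify the policies are taking effect, you can check sign-in activity in Sentinel. Below is a minimal KQL sketch, assuming the service principals surface in SigninLogs under the display names shown (adjust these to match your tenant):

SigninLogs
| where AppDisplayName in ("Azure AI Studio", "Azure OpenAI Studio")
| summarize Attempts = count() by AppDisplayName, UserPrincipalName, ConditionalAccessStatus
| order by Attempts desc

Rows with a ConditionalAccessStatus of notApplied indicate sign-ins that your Conditional Access policies are not yet covering.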

2. Another effective solution is Azure Policy. Azure Policy offers built-in policy definitions that help govern the deployment of AI models delivered as Models-as-a-Service (MaaS) and Models-as-a-Platform (MaaP). These policies let you control which models developers are permitted to deploy within the Azure AI Foundry portal. For further details, refer to the policy titled “[Preview]: Azure Machine Learning Deployments Should Only Use Approved Registry Models.”
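Once the policy is assigned with a deny effect, blocked deployment attempts land in the Azure Activity log. Here is a minimal KQL sketch of how those denials could be surfaced in Sentinel; the operation name filter is an assumption, so verify it against your own Activity log entries:

AzureActivity
| where OperationNameValue =~ "Microsoft.Authorization/policies/deny/action"
| where Properties has "Microsoft.MachineLearningServices"
| project TimeGenerated, Caller, ResourceGroup, Properties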

3. The next solution is leveraging Microsoft’s SIEM solution, Microsoft Sentinel, to track model creation events. This can be achieved by forwarding Azure Activity logs (spanning from the tenant root down to individual resources) into a Sentinel workspace. Azure Activity log ingestion is free in Sentinel, offering an efficient and cost-effective way to monitor activity. To strengthen your security posture, deploy the relevant analytic rules in Sentinel to trigger alerts based on specific events. See the Sentinel examples below; a sketch of the underlying query follows them.
Analytic Rule 1: Machine Learning Creation – Any Model Deployed
Analytic Rule 2: Machine Learning Creation – Cloud App Events – Models
Analytic Rule 3: Machine Learning Creation – DeepSeek Deployed
Example 3.1: Analytic Rules

Example 3.2: Machine Learning Results

Example 3.3
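As a starting point for Analytic Rules 1 and 3 above, here is a minimal KQL sketch over AzureActivity. The operation name and the DeepSeek filter are assumptions based on how managed online endpoint deployments typically surface in the Activity log; tune both against your own logs before saving the rule:

AzureActivity
| where OperationNameValue has "Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments/write"
| where ActivityStatusValue =~ "Success"
// To scope to a specific model family (Analytic Rule 3), e.g.:
// | where Properties has "deepseek"
| project TimeGenerated, Caller, CallerIpAddress, ResourceGroup, _ResourceId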
4. A key strategy is to create a custom role at the tenant level to restrict AI model deployments across the environment. By defining this custom role, you can ensure that only authorized users have the necessary privileges to deploy AI models. To enforce this, assign the custom role to a Privileged Access Group, which further limits access to critical actions and provides an added layer of control. This approach helps enforce governance and reduces the risk of unauthorized deployments across the tenant.
Example 4.1: Create Custom Role **JSON template below**

{
  "properties": {
    "roleName": "Azure AI Deployment",
    "description": "",
    "assignableScopes": [
      "/providers/Microsoft.Management/managementGroups/Cyberlorians"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.MachineLearningServices/workspaces/hubs/write",
          "Microsoft.MachineLearningServices/workspaces/write",
          "Microsoft.MachineLearningServices/workspaces/endpoints/write",
          "Microsoft.KeyVault/vaults/write",
          "Microsoft.Storage/storageAccounts/write",
          "Microsoft.Resources/deployments/validate/action"
        ],
        "notActions": [],
        "dataActions": [],
        "notDataActions": []
      }
    ]
  }
}
Example 4.2: Assign Privileged Access Group to Custom Role

Example 4.3: User blocked unless member of privileged access group

Example 4.4: Tracking group elevation
Analytic Rule 4: AI Deployment – Group PIM Tracking
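Here is a minimal KQL sketch for Analytic Rule 4, assuming PIM for Groups activations flow into the AuditLogs table and that the privileged access group is named “AI Deployment” (a hypothetical name; substitute your own):

AuditLogs
| where Category == "GroupManagement"
| where OperationName has "Add member to group"
| where TargetResources has "AI Deployment" // hypothetical privileged access group name
| extend Activator = tostring(InitiatedBy.user.userPrincipalName)
| project TimeGenerated, OperationName, Activator, Result

Alerting on each elevation gives you an audit trail tying every AI model deployment back to a deliberate, time-bound group activation.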
