Microsoft Purview's New AI Risk Management Tools: Securing the Agent-Driven Enterprise
#Security

Microsoft Purview's New AI Risk Management Tools: Securing the Agent-Driven Enterprise

Cloud Reporter
5 min read

Microsoft Purview's latest Insider Risk Management updates bring purpose-built controls for AI agents, Security Copilot triage capabilities, and DSPM for AI to help organizations govern both human and machine-driven risks.

As AI agents become integral to enterprise workflows, organizations face a new challenge: how to govern and secure autonomous systems operating at machine speed. Microsoft's latest Compliance Meets AI session revealed significant advancements in Purview Insider Risk Management that address this exact concern, with particular focus on Copilot integration, agent governance, and AI-powered security operations.

The New Reality: Humans and Agents as Risk Vectors

One of the most striking revelations from the session was Microsoft's expanded approach to insider risk. Traditional insider risk management focused primarily on human behavior—employees accessing sensitive data, sharing files inappropriately, or violating compliance policies. But as Kevin Uy demonstrated, the landscape has fundamentally shifted.

"AI security no longer stops at users," Uy emphasized. "Agents operate at machine speed, and organizations need the same level of governance, risk scoring, and investigation capabilities to keep pace."

This recognition that AI agents are now legitimate risk vectors represents a significant evolution in enterprise security thinking. When Copilot agents autonomously access data, make decisions, and interact with sensitive systems, they create risk pathways that traditional security tools weren't designed to monitor.

Risky Agents: Purpose-Built Monitoring for AI Systems

Microsoft's new Risky Agents capability (currently in preview) provides organizations with visibility and governance specifically for agents built in Copilot Studio and Azure AI Foundry. This isn't simply repurposing existing user monitoring tools—it's a purpose-built solution designed for the unique characteristics of AI agents.

Key features include:

  • Agent-specific risk scoring that evaluates behavior patterns unique to autonomous systems
  • Policy templates designed for agent interactions with sensitive data
  • Behavioral baselines that understand normal agent operations versus anomalous activity
  • Integration with existing Insider Risk Management workflows for consistent investigation processes

This capability addresses a critical gap: while organizations have been rapidly deploying AI agents to automate workflows and enhance productivity, most lack the governance frameworks to ensure these agents operate within acceptable risk parameters.

Security Copilot Triage Agents: AI Helping Security Teams

Perhaps the most innovative announcement was the integration of Security Copilot Triage Agents. This feature leverages AI to help security teams prioritize what truly matters by summarizing and contextualizing Insider Risk and DLP alerts.

In practice, this means Security Copilot can:

  • Analyze alert patterns across multiple systems to identify genuine threats
  • Provide context-rich summaries that explain the "why" behind risk scores
  • Suggest investigation steps based on similar historical incidents
  • Automate initial triage to reduce the burden on security analysts

This represents a significant advancement in security operations, where the volume of alerts often overwhelms human analysts. By using AI to filter and contextualize risks, organizations can focus their limited security resources on the most critical threats.

DSPM for AI: Beyond the Microsoft Ecosystem

Data Security Posture Management (DSPM) for AI extends Microsoft's risk management capabilities beyond its own ecosystem. This feature provides organizations with insight into risky AI usage, prompts, and responses—regardless of where the AI services are hosted.

This cross-platform visibility is crucial as enterprises adopt AI services from multiple providers. Whether employees are using OpenAI's services, Google's AI offerings, or other third-party AI tools, DSPM for AI can monitor for:

  • Sensitive data exposure in AI prompts and responses
  • Unauthorized AI service usage that bypasses corporate controls
  • Compliance violations related to data handling in AI interactions
  • Shadow AI usage where employees use unapproved AI services

Real-World Risk Scenarios and Live Demos

The session included practical demonstrations of these capabilities in action. Uy walked through several real-world scenarios, including:

  • Agent data exfiltration: A Copilot agent that inadvertently shared sensitive customer data through an external API
  • Prompt injection attacks: Malicious prompts designed to extract confidential information from AI systems
  • Unauthorized agent deployment: Employees creating AI agents that violate corporate data policies
  • Cross-system data correlation: How multiple risk signals combine to indicate serious insider threats

These live demos illustrated not just the technical capabilities but the practical workflows security teams can implement to address AI-related risks.

Implementation Considerations for Organizations

For organizations looking to implement these new capabilities, several key considerations emerged:

1. Policy Development: Organizations need to develop specific policies for AI agent behavior, not just human user policies. This includes defining acceptable use cases, data access boundaries, and monitoring requirements.

2. Integration Planning: The new features integrate with existing Purview workflows, but organizations should plan for the expanded scope of monitoring and investigation.

3. Skill Development: Security teams need to understand AI agent behavior patterns and develop new investigative techniques specific to autonomous systems.

4. Change Management: As AI agents become subject to the same governance as human employees, organizations need to communicate these changes to all stakeholders.

The Path Forward: Balancing Innovation and Security

The session made clear that Microsoft recognizes the tension between enabling AI innovation and maintaining security. The new capabilities aren't designed to restrict AI adoption but to provide the governance frameworks necessary for responsible deployment.

As organizations continue their AI transformation journeys, tools like Risky Agents, Security Copilot Triage, and DSPM for AI provide the visibility and control needed to move forward with confidence. The message is clear: AI governance isn't optional—it's essential for any organization serious about secure AI adoption.

For those who missed the live session, the recording is available at aka.ms/Compliance-Meets-Ai-Insider-Risk-Management, and all past recordings can be found on Jay Cotton's YouTube channel. Organizations can also register for upcoming sessions in the Compliance Meets AI 2026 series through the Microsoft Community Hub.

Comments

Loading comments...