Model Security Is the Wrong Frame – The Real Risk Is Workflow Security
#Security

Model Security Is the Wrong Frame – The Real Risk Is Workflow Security

Security Reporter
7 min read

As AI systems become embedded in daily operations, security teams are still focused on protecting the models themselves. But recent incidents show the real vulnerability lies in the workflows that surround these models, where data flows, integrations, and user interactions create new attack surfaces that traditional security controls can't address.

The Shift from Model Protection to Workflow Security

Featured image

When security teams think about AI risks, they often focus on the model itself—its training data, its parameters, its potential for bias or manipulation. But the most dangerous attacks we're seeing today don't target the AI algorithms at all. They target the workflows where AI operates.

Consider two recent incidents that illustrate this shift. First, two Chrome extensions posing as AI helpers were caught stealing ChatGPT and DeepSeek chat data from over 900,000 users. These extensions didn't compromise the AI models—they simply sat between users and the AI services, siphoning off conversation data. Second, researchers demonstrated how prompt injections hidden in code repositories could trick IBM's AI coding assistant into executing malware on a developer's machine. Again, the AI model itself remained untouched; the attack exploited the context in which it operated.

These incidents reveal a fundamental misunderstanding in how we secure AI systems. We're building defenses for the model while leaving the surrounding workflow exposed.

Why AI Workflows Create Unique Vulnerabilities

New n8n Vulnerability (9.9 CVSS) Lets Authenticated Users Execute System Commands

Modern AI systems are becoming workflow engines. Businesses now rely on them to connect applications and automate tasks that were previously manual. An AI writing assistant might pull a confidential document from SharePoint and summarize it in an email draft. A sales chatbot might cross-reference internal CRM records to answer customer questions. Each scenario blurs the boundaries between applications, creating new integration pathways on the fly.

What makes this risky is how AI agents operate. They rely on probabilistic decision-making rather than hard-coded rules, generating output based on patterns and context. A carefully written input can nudge an AI to do something its designers never intended, and the AI will comply because it has no native concept of trust boundaries.

This means the attack surface includes every input, output, and integration point the model touches. Hacking the model's code becomes unnecessary when an adversary can simply manipulate the context the model sees or the channels it uses.

Why Traditional Security Controls Fall Short

Critical n8n Vulnerability (CVSS 10.0) Allows Unauthenticated Attackers to Take Full Control

These workflow threats expose a critical blind spot in traditional security approaches. Most legacy defenses were built for three assumptions that AI-driven workflows break:

1. Deterministic Software vs. Probabilistic AI Most general applications distinguish between trusted code and untrusted input. AI models don't. Everything is just text to them, so a malicious instruction hidden in a PDF looks no different than a legitimate command. Traditional input validation doesn't help because the payload isn't malicious code—it's just natural language.

2. Stable User Roles vs. Dynamic AI Behavior Traditional monitoring catches obvious anomalies like mass downloads or suspicious logins. But an AI reading a thousand records as part of a routine query looks like normal service-to-service traffic. If that data gets summarized and sent to an attacker, no rule was technically broken.

3. Clear Perimeters vs. Blurred Boundaries Most security policies specify what's allowed or blocked: don't let this user access that file, block traffic to this server. But AI behavior depends on context. How do you write a rule that says "never reveal customer data in output"?

4. Periodic Reviews vs. Continuous Change Security programs rely on periodic reviews and fixed configurations, like quarterly audits or firewall rules. AI workflows don't stay static. An integration might gain new capabilities after an update or connect to a new data source. By the time a quarterly review happens, a token may have already leaked.

Securing AI-Driven Workflows: A Practical Framework

c

A better approach treats the whole workflow as the thing you're protecting, not just the model. Here's how to implement this:

1. Map Your AI Footprint

Start by understanding where AI is actually being used, from official tools like Microsoft 365 Copilot to browser extensions employees may have installed on their own. Many organizations are surprised to find dozens of shadow AI services running across the business.

Actionable steps:

  • Conduct a comprehensive audit of AI tools in use
  • Document what data each system can access
  • Identify what actions each AI agent can perform

2. Implement Workflow-Level Guardrails

If an AI assistant is meant only for internal summarization, restrict it from sending external emails. Scan outputs for sensitive data before they leave your environment. These guardrails should live outside the model itself, in middleware that checks actions before they go out.

Practical implementation:

  • Deploy output filtering at the workflow level
  • Use data loss prevention (DLP) tools configured for AI outputs
  • Implement content scanning for sensitive information in AI-generated text

3. Apply Principle of Least Privilege

Treat AI agents like any other user or service. If an AI only needs read access to one system, don't give it blanket access to everything. Scope OAuth tokens to the minimum permissions required, and monitor for anomalies like an AI suddenly accessing data it never touched before.

Example:

  • An AI writing assistant for marketing should only access the marketing SharePoint site, not the entire corporate file system
  • A sales chatbot should only query CRM records for active leads, not all historical customer data

4. Educate and Vet

Educate users about the risks of unvetted browser extensions or copying prompts from unknown sources. Vet third-party plugins before deploying them, and treat any tool that touches AI inputs or outputs as part of the security perimeter.

Training topics:

  • How to identify legitimate vs. malicious AI extensions
  • The risks of sharing sensitive data with AI chatbots
  • How prompt injection attacks work in practice

The Role of Dynamic SaaS Security Platforms

c

In practice, doing all of this manually doesn't scale. That's why a new category of tools is emerging: dynamic SaaS security platforms. These platforms act as a real-time guardrail layer on top of AI-powered workflows, learning what normal behavior looks like and flagging anomalies when they occur.

Platforms like Reco provide this capability. They give security teams visibility into AI usage across the organization, surfacing which generative AI applications are in use and how they're connected. From there, you can enforce guardrails at the workflow level, catch risky behavior in real time, and maintain control without slowing down the business.

How These Platforms Work

  1. Discovery: Continuously scan for AI tools and extensions across the organization
  2. Behavioral Baseline: Learn normal patterns of AI usage across different workflows
  3. Anomaly Detection: Flag unusual behavior, like an AI suddenly accessing new data sources
  4. Workflow Enforcement: Apply policies at the integration points, not just at the model level

Key Features to Look For

  • Real-time monitoring of AI interactions and data flows
  • Context-aware policy enforcement that understands workflow semantics
  • Integration with existing security tools (SIEM, DLP, IAM)
  • Automated response capabilities for high-risk behaviors

Moving Forward: A New Security Mindset

The incidents with Chrome extensions stealing AI chat data and prompt injections in code repositories aren't isolated events. They're symptoms of a broader shift in how AI is being used and how it's being attacked.

Security teams need to expand their focus from model security to workflow security. This means:

  • Understanding AI as a workflow component, not just a standalone tool
  • Applying security controls at integration points, not just at the model boundary
  • Monitoring behavior patterns, not just access logs
  • Educating users about the unique risks of AI-powered workflows

The model itself is just one piece of a much larger puzzle. The real security challenge—and the real opportunity for improvement—lies in securing the entire workflow that makes AI useful in the business context.

As AI continues to embed itself in daily operations, those who recognize this shift early will be better positioned to protect their organizations from the next generation of AI-powered threats.

Comments

Loading comments...