A startup focused on securing enterprise use of generative AI has closed a significant funding round, highlighting growing corporate concerns about data leakage and model governance as employees increasingly adopt custom AI tools.

The enterprise AI security space saw a notable deal this week: WitnessAI announced a $58 million Series B round led by Sound Ventures. The investment brings the company's total funding to $85 million, according to a report by Ionut Arghire in SecurityWeek. The company plans to use the capital to accelerate its global go-to-market strategy and expand its product offerings.
The Problem: Unmonitored AI Usage
WitnessAI's core proposition addresses a specific and growing pain point in corporate environments: the proliferation of custom generative AI models used by employees. While many organizations have deployed official, company-sanctioned AI tools, employees frequently turn to third-party platforms or build their own models for specific tasks. This creates significant security risks, including data exfiltration, intellectual property leakage, and compliance violations.
The company's technology operates by intercepting these custom model interactions and applying safeguards in real time. This approach differs from traditional security tools that might block access outright; instead, WitnessAI aims to monitor, analyze, and apply policy-based controls to AI usage without completely stifling productivity.
Technical Approach and Market Context
The security challenge WitnessAI tackles is nuanced. Unlike web filtering or data loss prevention (DLP) tools designed for structured data, monitoring generative AI usage requires understanding context, intent, and the nature of the output. A developer using a custom model to debug code presents a different risk profile than an employee using the same model to draft a sensitive email.
The company's solution likely involves several layers, with a rough illustrative sketch after the list below:
- Model Interception: Capturing prompts and responses from custom models, whether accessed via APIs, web interfaces, or locally deployed instances.
- Content Analysis: Scanning for sensitive information (PII, credentials, proprietary data) and policy violations.
- Behavioral Monitoring: Tracking usage patterns to identify anomalous or high-risk activities.
- Policy Enforcement: Applying rules that can range from alerting security teams to blocking specific actions or redacting sensitive information in outputs.
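The SecurityWeek report does not describe WitnessAI's implementation, so the following is a minimal, hypothetical sketch of how the interception, content analysis, and policy enforcement layers fit together. The names (PATTERNS, analyze, enforce_policy) and the regex-based detectors are illustrative assumptions; a real product would pair such checks with far richer classifiers and context models.

```python
import re

# Hypothetical sketch of the layered approach described above.
# Names and patterns are illustrative, not WitnessAI's actual API.

# Content analysis: simple pattern-based detectors for sensitive data.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def analyze(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

def enforce_policy(prompt: str, policy: dict) -> tuple[str, str]:
    """Apply a simple allow/redact/block decision to an intercepted prompt."""
    findings = analyze(prompt)
    if not findings:
        return "allow", prompt
    if policy.get("action") == "block":
        return "block", ""
    # Default: redact matched spans and let the request continue.
    redacted = prompt
    for name in findings:
        redacted = PATTERNS[name].sub(f"[REDACTED:{name}]", redacted)
    return "redact", redacted

# Example: an intercepted prompt destined for a third-party model.
decision, outbound = enforce_policy(
    "Summarize this ticket from jane.doe@example.com, key sk-abcdef1234567890",
    policy={"action": "redact"},
)
print(decision, outbound)
```

Even in a toy version like this, the interesting design decisions sit in the defaults: whether analysis failures fail open or closed, and whether redaction is visible to the user or silent.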
This funding round reflects a broader trend in the AI security market. As generative AI becomes embedded in workflows, the attack surface expands. Traditional security perimeters are insufficient when employees can access powerful models from anywhere. Venture capital is flowing into startups that promise to make AI usage safer and more compliant.
Limitations and Considerations
While the approach is logical, several challenges remain:
- Performance Overhead: Intercepting and analyzing every AI interaction could introduce latency, potentially frustrating users and impacting productivity. The balance between security and user experience is critical.
- Evasion Techniques: Determined employees might find ways to bypass monitoring, such as using personal devices or encrypted channels. No security solution is foolproof.
- Model Diversity: The landscape of custom models is vast and rapidly evolving. A security tool must adapt to new architectures, APIs, and usage patterns without requiring constant manual updates.
- False Positives: Overly aggressive policies could block legitimate work, leading to frustration and a potential shadow IT problem where employees seek workarounds.
The Broader AI Security Landscape
WitnessAI is not operating in a vacuum. The AI security market is becoming crowded with companies addressing different facets of the problem:
- Data Security: Companies like Protect AI and HiddenLayer focus on securing the AI development pipeline and protecting models from attacks.
- Access Control: Solutions like Palo Alto Networks' AI Security and Cisco's Secure AI integrate AI security into broader network and cloud security platforms.
- Compliance and Governance: Startups are emerging to help organizations track AI usage for regulatory compliance, such as GDPR or the EU AI Act.
WitnessAI's specific focus on custom models used by employees differentiates it from competitors focused on securing pre-trained models or the AI development lifecycle. This is a pragmatic niche, as the most immediate risk for many enterprises comes from the uncontrolled use of existing models, not the training of new ones.
What This Means for Enterprises
For security and IT leaders, the funding round for WitnessAI signals that the market for AI governance tools is maturing. It's no longer sufficient to simply provide an approved AI tool and hope employees use it exclusively. Organizations need strategies to monitor and manage the AI tools employees are already using.
Key questions for enterprises considering such solutions include:
- Integration: How does the tool integrate with existing security stacks (SIEM, DLP, identity management)?
- Scalability: Can it handle the volume and variety of AI interactions across a large organization?
- User Impact: How does it affect developer and employee workflows? Is the security friction acceptable?
- Policy Flexibility: Can policies be tailored to different departments, roles, and use cases?
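On that last point, one way to evaluate policy flexibility is whether enforcement actions can be scoped by department, role, and data category. The configuration sketch below is purely illustrative; the schema and names are assumptions, not any vendor's actual format.

```python
# Hypothetical per-department policy table; schema is illustrative only.
POLICIES = {
    "engineering": {"source_code": "allow", "credentials": "block", "pii": "redact"},
    "finance":     {"source_code": "block", "credentials": "block", "pii": "redact"},
    "default":     {"source_code": "redact", "credentials": "block", "pii": "block"},
}

def action_for(department: str, category: str) -> str:
    """Look up the enforcement action for a department/data-category pair."""
    dept = POLICIES.get(department, POLICIES["default"])
    return dept.get(category, "block")  # fail closed for unknown categories

print(action_for("engineering", "credentials"))  # -> block
print(action_for("marketing", "pii"))            # -> block (falls back to default)
```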
The $85 million in total funding suggests investors see a substantial market for these solutions. However, the true test will be in deployment. WitnessAI's success will depend on its ability to demonstrate clear value—reducing security incidents, ensuring compliance, and doing so with minimal impact on productivity. As AI usage continues to proliferate, the demand for such guardrails will only intensify, making this a space to watch.