Who is liable when AI agents go wrong in business?

Regulation Reporter

As AI agents promise to 'run the business,' the question of liability remains murky, with vendors pushing back on responsibility while regulators insist that humans remain accountable for AI-driven decisions.

The promise versus the reality

Major enterprise software vendors are touting AI agents as transformative tools capable of automating decisions across HR, finance, and supply chain management. Oracle's AI Agent Studio, for example, promises technology that can "actively run the business, with the governance, trust, and security that enterprises require."

However, the legal reality is far more complicated. Malcolm Dowden, senior technology lawyer at Pinsent Masons, explains that traditional software warranties assume predictable behavior, but AI agents operate differently. "The more we get down the chain to what used to be called non-deterministic AI – mostly what falls into that agentic AI category – that gives a much greater scope for unexpected behaviors."

Regulatory clarity: Humans remain accountable

The UK's Financial Reporting Council has been unequivocal in its guidance. "While technology changes, the fundamental principle of our regulatory framework does not: it is people – the firms and Responsible Individuals – who are accountable for audit quality."

Mark Babington, FRC executive director, put it even more bluntly: "You can't blame it on the box. If you use this technology, you are still accountable for it."

This principle extends beyond financial services. Organizations deploying AI for tasks such as screening job applications must comply with data protection laws, as they are considered data controllers under UK law. The Information Commissioner's Office supports automation but requires users to monitor for bias, maintain transparency with job seekers, and explain to applicants their right to recourse.

The vendor perspective: Limited liability

Vendors are pushing back against accepting broad liability for AI agent outputs. Dowden notes that negotiations are focusing on establishing which party bears responsibility. "Both sides are essentially looking to establish the other as the liable party."

Instead of accepting legal liability, vendors are emphasizing monitoring, observability, and audit capabilities. Gartner's Balaji Abbabatulla explains that vendors are implementing "guardian agents" for continuous monitoring to identify exceptions, but liability remains the "key challenge for all vendors."

The scale of risk

Gartner predicts that by mid-2026, new categories of unlawful AI-informed decision-making will generate more than $10 billion in remediation costs across global AI vendors and enterprises. The concern is that AI agent decisions can cascade quickly and unnoticed, magnifying any errors.

Georgina Kon, Linklaters partner in digital, data and commercial law, highlights the magnification risk: "The magnification risk is massive but also there is the difficulty in working out who is responsible. A lot of the current laws don't really lend themselves particularly easily, because it assumes always that a human or company is doing something and that's not true."

Defensible AI as a solution

Gartner's Lydia Clougherty Jones recommends organizations adopt "defensible AI" – techniques that can "reliably and repeatedly withstand scrutiny, questioning, and examination." This includes:

  • Making AI-ready data "AI-decision-making ready"
  • Substantially improving ML model explainability
  • Deploying content and decision-making guardrails across the entire AI lifecycle
  • Implementing continuous monitoring and observability

Market dynamics and sector differences

The willingness to deploy AI agents varies significantly by sector. Financial services and healthcare are expected to be more conservative, while other industries may accept greater risk to gain competitive advantages or process efficiencies.

With AI investment set to reach $2.52 trillion this year, vendors are eager to see returns on their substantial outlays. However, the legal framework for AI liability remains underdeveloped, and court cases will likely be needed to establish clearer precedents.

The unanswered questions

When approached for comment on their liability positions regarding AI agents, major vendors – including Microsoft, SAP, Workday, Salesforce, ServiceNow, and Oracle – either declined to comment or did not respond. The silence underscores the sensitivity and complexity of the issue.

As the market continues to evolve, organizations deploying AI agents must navigate a landscape where:

  • Regulatory bodies insist on human accountability
  • Vendors limit their liability through contractual terms
  • The technology itself can behave in unpredictable ways
  • The legal framework remains unclear

The challenge for businesses is balancing the transformative potential of AI agents against the very real risks of liability when things go wrong – a calculation that will likely vary significantly across different industries and risk appetites.
