Enterprise AI agent rollouts slow outside the lab • The Register


Regulation Reporter

AI agent implementations are hitting reality as enterprises struggle to move beyond pilot projects, with only the largest organizations making meaningful progress despite massive investment forecasts.

Anyone scanning the news might think it's pedal to the metal as far as AI agent implementations go, but there is a slump in rollouts as many organizations figure out what to do next, Redis CEO Rowan Trollope told The Register.

The company behind the Redis database, which built a following as a cache in cloud application architectures and went on to become the most popular database on AWS, is trying to help users move AI agent projects out of the lab and into production.

Earlier this month, Gartner forecast that investment from software vendors and cloud providers would propel a trillion-dollar increase in AI spending this year as investment hits $2.52 trillion. Enterprise users, however, are in the "trough of disillusionment" as reactions to enterprise project pitches go from "that was a great idea" to "where's my revenue?" the research firm said.

Trollope said the phenomenon was reflected in his experience of helping customers build projects implementing AI agent platforms in business. "I've seen fewer examples of real successful production agents than I would have imagined [in terms of] anything outside of engineering," he said. "It is still quite hard to do, and only the biggest companies in the world understand this is the future they're investing in. I don't think they're going to stop. They realize they need to have this next-generation platform."

Redis started life in 2009 as an attempt to build a performant key-value database. By late 2020, it was the most popular choice as a cache and message broker in cloud-native application stacks. Redis has since broadened its ambitions, adding features for machine learning and support for JSON documents in a bid to evolve beyond its caching roots. Now it is supporting AI implementations.

Last year, it announced LangCache, a fully managed REST service designed to reduce expensive and latency-prone calls to LLMs by caching previous responses to semantically similar queries. While Gartner sees a lot of enterprise LLM spending going to large application vendors as users seek low-risk options by upgrading software they already use, Trollope said organizations need to think about the range of sources they have to draw from to get AI agents to make decisions.

While Salesforce might store what discount you gave a customer and Workday stores information about employees, agents making decisions may also require information from email, instant messaging platforms, and other sources, he argued. Hence, organizations building out AI agent systems are turning to frameworks from Microsoft, Google, or LangChain, an independent engineering platform for building, testing, and deploying reliable AI agents.

"The information needed to make the most relevant decisions is often not immediately obvious to the agent," said Trollope. "For example, if I were to build an agent that is going to interface with my customers and allow it to do pricing, why and when is the agent allowed to make exceptions to the standard pricing policy? If all you want is the standard pricing policy, that's very easy, but you're not going to replace any human beings with that. What you need is to find out where the humans apply their judgment and what data they used to make that decision. That's where pulling that data together is difficult, because it's often unstructured. It's sitting in Slack threads, in email chains, in text messages. That's what we see as the number one problem."

The data requirements for AI agents to make meaningful decisions are part of the motivation for vector features in databases. A slew of vendors, from general-purpose players such as Redis and Oracle to dedicated vector database specialists, are backing the concept. With a paucity of successful case studies, the jury might still be out on whether returns will follow. But Redis, at least, sees big businesses continuing to invest despite the challenges.


The reality gap in enterprise AI adoption

The disconnect between AI agent hype and actual production deployments reveals a fundamental challenge in enterprise technology adoption. While vendors and cloud providers project massive spending increases, the practical reality on the ground tells a different story. Organizations are discovering that moving from proof-of-concept to production requires solving problems that weren't apparent in the lab.

Why the biggest companies are pulling ahead

Trollope's observation that only the largest organizations are making meaningful progress with AI agents points to a critical insight: successful implementation requires substantial resources, both technical and organizational. Large enterprises have the budget to experiment, the data infrastructure to support complex AI systems, and the organizational patience to work through deployment challenges.

These companies also have the advantage of scale. When an AI agent can automate processes across thousands of employees or millions of customer interactions, the potential return on investment justifies the significant upfront costs and ongoing maintenance requirements.

The data integration challenge

Perhaps the most revealing insight from Trollope's comments is the complexity of data integration for AI agents. The example of pricing exceptions illustrates a broader pattern: AI agents need access to the same contextual information that human employees use to make decisions.

This means integrating data from multiple sources:

  • Structured data in CRM systems like Salesforce
  • HR information in platforms like Workday
  • Unstructured communications in Slack and email
  • Customer interaction histories
  • Policy documents and decision logs

Each of these data sources presents its own challenges. Structured data is relatively straightforward to access, but unstructured communications require sophisticated processing to extract meaningful information. Different systems may use incompatible formats or have different access controls.
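The integration pattern described above can be sketched in a few lines. Everything here is hypothetical: `ContextItem`, `gather_context`, and the toy connectors stand in for real adapters to systems like Salesforce, Workday, or Slack, which would each need their own authentication and retrieval logic.

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    source: str   # e.g. "crm", "hr", "slack", "email"
    content: str  # text the agent can reason over

def gather_context(query: str, sources: dict) -> list[ContextItem]:
    """Pull candidate context for `query` from each registered source.

    `sources` maps a source name to a callable returning the text
    snippets that system considers relevant to the query.
    """
    items = []
    for name, fetch in sources.items():
        for snippet in fetch(query):
            items.append(ContextItem(source=name, content=snippet))
    return items

# Toy stand-ins for real connectors (Salesforce, Slack, ...).
sources = {
    "crm":   lambda q: ["Customer Acme: 15% discount approved in Q3"],
    "slack": lambda q: ["#sales thread: exceptions need VP sign-off"],
}

context = gather_context("Acme pricing exception", sources)
```

The hard part in practice is not the plumbing but deciding, per query, which snippets from Slack threads or email chains actually carry the judgment a human would have applied.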

Vector databases as a solution

The push toward vector features in databases represents an attempt to solve the unstructured data problem. Vector databases can store and search through embeddings of text, images, and other data types, making it possible to find semantically similar information even when it's not explicitly labeled or categorized.
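At its core, the similarity search a vector database performs is a nearest-neighbor lookup over embedding vectors. A minimal sketch, using toy three-dimensional vectors in place of the hundreds of dimensions a real embedding model produces (document names and vectors are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(query_vec, index):
    """Return the (doc_id, score) of the most similar stored vector."""
    best_id, best_score = None, -1.0
    for doc_id, vec in index.items():
        score = cosine(query_vec, vec)
        if score > best_score:
            best_id, best_score = doc_id, score
    return best_id, best_score

# Toy "embeddings"; a production system would index millions of
# model-generated vectors with an approximate-nearest-neighbor structure.
index = {
    "refund-policy":  [0.9, 0.1, 0.0],
    "pricing-policy": [0.1, 0.9, 0.2],
}
doc, score = nearest([0.2, 0.8, 0.1], index)
```

Production systems replace the linear scan with approximate-nearest-neighbor indexes (such as HNSW graphs) so lookups stay fast at scale, but the semantics are the same: the query finds content by meaning, not by exact keyword match.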

Redis's LangCache service exemplifies this approach, using vector similarity to reduce LLM calls by finding cached responses to similar queries. This not only saves costs but also improves response times, addressing two of the major concerns with AI agent implementations.
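The semantic-caching idea can be illustrated with a short sketch. This is not LangCache's actual implementation, just the general pattern under simple assumptions: a lookup hits when a previously cached query's embedding is within a similarity threshold, and `embed()` is a crude stand-in for a real embedding model.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class SemanticCache:
    """Cache LLM responses keyed by query embedding.

    get() returns a cached response when an earlier query's embedding
    is at least `threshold` cosine-similar; otherwise the caller falls
    through to the (expensive, slow) LLM call and put()s the result.
    """
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, embedding):
        for cached_emb, response in self.entries:
            if cosine(embedding, cached_emb) >= self.threshold:
                return response
        return None  # cache miss: call the LLM, then put()

    def put(self, embedding, response):
        self.entries.append((embedding, response))

# Crude stand-in for an embedding model, for demonstration only.
def embed(text):
    return [1.0, 0.0] if "price" in text else [0.0, 1.0]

cache = SemanticCache()
cache.put(embed("what is the price?"), "Standard list price is $100.")
hit = cache.get(embed("tell me the price"))   # similar query: served from cache
miss = cache.get(embed("shipping times?"))    # dissimilar: fall through to LLM
```

The threshold is the key tuning knob: set it too low and users get stale or wrong answers to genuinely different questions; too high and the cache rarely hits.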

The enterprise software vendor strategy

Gartner's observation that enterprise LLM spending is flowing to large application vendors reveals another dynamic in the market. Companies are choosing the "safe" option of upgrading existing software rather than building custom AI agent solutions.

This strategy makes sense from a risk management perspective. Large vendors have the resources to ensure their AI features work reliably and comply with regulations. They can also provide support and maintenance that would be difficult for individual companies to replicate.

However, this approach may limit innovation. Custom AI agent solutions can be tailored to specific business processes and integrated with proprietary data sources in ways that off-the-shelf solutions cannot match.

Looking ahead: The path to production

The current state of enterprise AI agent adoption suggests that we're still in the early stages of this technology's evolution. The challenges that organizations face today (data integration, decision-making transparency, and measurable ROI) are likely to be solved over time.

As vector database technology matures and integration tools become more sophisticated, the barriers to entry should decrease. We may see a shift from the current pattern, where only the largest companies can afford to implement AI agents, to a more democratized landscape where mid-sized organizations can also benefit.

For now, however, the message is clear: the AI agent revolution is real, but it's happening more slowly and with more difficulty than the hype would suggest. Organizations considering AI agent projects should prepare for a challenging journey that requires significant investment in data infrastructure, integration capabilities, and organizational change management.

The companies that succeed will be those that can bridge the gap between the promise of AI agents and the practical realities of enterprise IT, finding ways to extract real value from these technologies despite the current limitations and challenges.
