AI workloads are straining network infrastructure as organizations focus on compute power while neglecting data movement capabilities, creating bottlenecks that threaten AI performance and ROI.
AI is reshaping the demands on network infrastructure, and many organizations are not prepared – including some of the so-called neocloud providers offering AI services. A study by analyst biz Omdia finds that many rent-a-GPU providers have scaled up their compute infrastructure to handle AI workloads, but their networking infrastructure is becoming a critical constraint.

The fundamental problem is that organizations have been laser-focused on compute capacity while treating networking as an afterthought. As Omdia Telco B2B Research Director Camille Mendler bluntly states: "Y'all been focusing on compute and forgot about how the data moves around."
The Neocloud Infrastructure Gap
Neocloud operators, or GPU-as-a-service providers, sprang up to take advantage of the huge demand for compute using GPU accelerators for AI. Many count hyperscalers such as Microsoft among their customers, as well as enterprises. As a result, AI performance increasingly depends on these providers' ability to process and move data securely across distributed environments and geographies.
However, the networking capabilities of different neoclouds vary dramatically from rudimentary to advanced, depending in part on their origins. Some, such as CoreWeave, started life as cryptocurrency mining operations, while others, such as Gcore, previously focused on content distribution or web hosting. Because of this, neocloud networking strategy is in flux globally, with many rushing to partner, buy, or build infrastructure as their dependency on networking increases.
"Network infrastructure will make or break neoclouds," warns Mendler. "Low latency, resilient and secure connectivity from backbone to edge is table stakes for success, not least because sovereignty spans where AI workloads move."
The Enterprise Network Challenge
Global network provider Lumen is jumping on the same bandwagon. CEO Kate Johnson issued an open letter to enterprise chiefs everywhere asking if their networks are AI-ready and pushing upgrades to support coming AI applications, as well she might.
Networking has traditionally been in the background, like plumbing, Johnson claims. "But in an AI-driven enterprise, the network is more like the nervous system. It controls and coordinates. It determines how fast you can move and whether your AI investments produce value."
The scale of the challenge is staggering. AI systems don't operate in a single location but involve constant data movement between clouds, datacenters, and edge endpoints, so networks must be adaptable and able to scale dynamically.
"The new corporate workforce is comprised of AI agents and bots. They're proliferating rapidly, operating continuously, insatiably consuming and generating data and dynamically interacting with other agents, bots and humans," Johnson says in the letter.
The Bot Traffic Explosion
And despite AI adoption still being in its early days at most businesses, more than 50 percent of internet traffic is already generated by these autonomous workers, she claims. The figure comes from Imperva's 2025 Bad Bot Report, which found that automated traffic has surpassed human activity, accounting for 51 percent of all internet traffic.
This shift has profound implications for network infrastructure. Traditional networks designed for human-centric traffic patterns are buckling under the weight of machine-to-machine communication that requires different characteristics:
- Constant connectivity: Unlike human users who have intermittent access patterns, AI agents need persistent connections
- High throughput: AI models require massive data transfers for training and inference
- Low latency: Real-time AI applications cannot tolerate the delays common in traditional networks
- Dynamic scaling: Traffic patterns can spike unpredictably as AI workloads shift
The Path Forward
To support the brave new world of AI, networks need to be completely adaptable, programmable and consumption-based, just like cloud, Johnson states, before exhorting enterprise chiefs to "make sure your network supports the future you're building."
The implications extend beyond just upgrading bandwidth. Organizations need to fundamentally rethink their network architecture:
- Edge computing integration: Processing data closer to where it's generated reduces latency and bandwidth requirements
- Software-defined networking: Programmable networks can adapt to AI workload demands in real-time
- Network function virtualization: Replacing hardware appliances with software solutions increases flexibility
- AI-native networking: Using AI to optimize network performance for AI workloads creates a virtuous cycle
Omdia warns enterprise customers to scrutinize potential suppliers beyond their raw compute capacity when considering AI compute services. The networking layer is now the critical bottleneck that determines whether AI investments deliver value or become expensive paperweights.
As AI continues its rapid adoption across industries, the organizations that recognize networking as a strategic priority rather than a commodity utility will be the ones that successfully harness AI's transformative potential. Those that don't may find their AI initiatives hamstrung by infrastructure that simply wasn't designed for the data-hungry, latency-sensitive workloads of the AI era.
