A new DDN research report reveals that more than half of enterprise AI projects are delayed or canceled due to infrastructure complexity, with two-thirds of IT leaders finding their AI environments too complex to manage. The findings echo broader industry warnings about AI implementation challenges, from MIT's Project NANDA to Gartner's predictions on agentic AI project cancellations.
A new research report paints a grim picture of enterprise AI adoption, finding that more than half of AI projects have been delayed or canceled within the last two years due to infrastructure complexities. The study, commissioned by data optimization company DDN in partnership with Google Cloud and Cognizant, surveyed 600 IT and business decision-makers at US enterprises with 1,000 or more employees.

The findings reveal a significant infrastructure gap: about two-thirds of respondents said their AI environments are too complex to manage. "If you look at the enterprise, there's just enormous enthusiasm to deploy AI, but the problem is that the infrastructure, the power, and the operational foundation that is required to run it just aren't there," Alex Bouzari, CEO of DDN, told The Register.
This infrastructure complexity manifests in several costly ways. Organizations face delayed IT projects, underutilized GPUs, and rising power costs. "The economics, I think, for lots of organizations don't pencil out because of these challenges," Bouzari explained.
The Broader Pattern of AI Implementation Struggles
This isn't an isolated finding. The DDN report aligns with several other industry studies highlighting enterprise AI challenges:
- MIT's Project NANDA found that 95% of organizations see zero measurable return from their generative AI investments
- Gartner predicted that more than 40% of agentic AI projects will be canceled by the end of 2027
- Forrester found that 25% of planned AI spend would be delayed into 2027, with only 15% of AI decision-makers reporting an EBITDA lift
These studies collectively suggest a pattern: enterprises are rushing into AI without adequate infrastructure planning, leading to widespread project failures and wasted investments.
The Cloud Isn't a Silver Bullet
While 97% of surveyed decision-makers believe scaling AI will require cloud deployment, Bouzari cautions that cloud migration doesn't solve the underlying infrastructure problems. "The same challenges that you would have on prem will follow you into the cloud," he said. "Cloud needs unified data, and the cloud needs orchestration at scale. So, it's all of these considerations."
The implication for infrastructure planning is that moving to the cloud doesn't eliminate the need for proper data architecture, orchestration strategies, and operational foundations. Organizations that haven't addressed these fundamentals on-premises will likely face the same challenges in the cloud.
The Education Gap
Bouzari also identifies a critical education gap. "There's an education process which needs to take place within the IT organization," he explained. That gap extends beyond technical teams to business decision-makers, who need to understand AI's true capabilities and limitations.
The education challenge is compounded by the rapid evolution of AI technology. What worked for early adopters may not apply to organizations starting their AI journey today. The infrastructure requirements have become more sophisticated as models have grown larger and more complex.
The Early Mover Advantage
Bouzari highlights a widening gap between early AI adopters and current enterprise adopters. Organizations that made substantial early bets and successfully transitioned from pilot projects to production systems are now generating ROI. Meanwhile, many enterprises just beginning their AI journey face significant infrastructure hurdles that early movers didn't encounter.
This creates a competitive disadvantage for late adopters, who must navigate more complex infrastructure requirements while competing against organizations that have already established AI capabilities.
The Role of System Integrators
Bouzari sees system integrators and consultants as key facilitators for overcoming infrastructure challenges. "I think that the education process is something that the facilitators can enable," he said. "If you look at organizations like Accenture and Deloitte, resellers who know how to deploy complex, turnkey business solutions for organizations, I think there's a ramp in that curve, which is starting to take place, and then we will have an accelerated adoption."
These organizations can help enterprises navigate the infrastructure complexity by providing turnkey solutions that address data orchestration, compute scaling, and operational management.
Beyond Chatbots: Finding Real Use Cases
A significant part of the infrastructure problem stems from poorly conceived use cases. Bouzari criticizes the default tendency to focus on customer service chatbots when discussing AI applications. "Rather than defaulting to customer service chatbots when the topic of use cases comes up, vendors and advisors need to help find capabilities that bridge an organization's data with AI," he said.
He argues that focusing on incremental cost reductions misses AI's transformative potential: "As opposed to, I'm going to lower my customer service cost from 3.7% of revenue to 3.1% of revenue. That is really shortchanging what AI can do."
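Bouzari's percentages translate into a concrete, if modest, number. A back-of-envelope check, using a purely hypothetical $1B in annual revenue (the revenue figure is an assumption, not from the report):

```python
revenue = 1_000_000_000            # hypothetical annual revenue, USD
before = 0.037 * revenue           # customer service at 3.7% of revenue
after = 0.031 * revenue            # reduced to 3.1% of revenue
savings = before - after

print(f"Annual savings: ${savings:,.0f}")  # Annual savings: $6,000,000
```

A $6M saving on a $1B business is real money, but it is an incremental efficiency gain, which is exactly the framing Bouzari argues undersells the technology.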
Infrastructure Requirements for Enterprise AI
For organizations planning AI deployments, the research suggests several critical infrastructure considerations:
Data Architecture: AI systems require unified data access across the organization. Siloed data creates bottlenecks and limits model effectiveness.
Compute Scaling: GPU utilization remains a challenge. Organizations need strategies for dynamic scaling and efficient resource allocation.
Power Management: As AI systems grow, power consumption becomes a significant cost factor. Infrastructure planning must account for power efficiency and cooling requirements.
Orchestration: Managing AI workloads at scale requires sophisticated orchestration tools and processes.
Operational Foundation: Beyond hardware, organizations need monitoring, maintenance, and optimization processes for AI infrastructure.
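The GPU-utilization point above is measurable today: `nvidia-smi --query-gpu=index,utilization.gpu --format=csv` reports per-GPU utilization in CSV form. A minimal sketch of flagging idle accelerators from that output; the sample string stands in for a live query, and the 20% idle threshold is an arbitrary assumption:

```python
import csv
import io

def parse_gpu_utilization(csv_text):
    """Parse `nvidia-smi --query-gpu=index,utilization.gpu --format=csv` output
    into a {gpu_index: utilization_percent} dict."""
    reader = csv.reader(io.StringIO(csv_text))
    next(reader)  # skip the header row
    return {int(row[0]): int(row[1].strip().rstrip(' %')) for row in reader}

# Sample output; in practice, capture it with
#   subprocess.run(["nvidia-smi", "--query-gpu=index,utilization.gpu",
#                   "--format=csv"], capture_output=True, text=True).stdout
sample = """index, utilization.gpu [%]
0, 93 %
1, 12 %
2, 7 %
3, 88 %"""

util = parse_gpu_utilization(sample)
idle = [gpu for gpu, pct in util.items() if pct < 20]  # arbitrary threshold
print(f"Underutilized GPUs: {idle}")  # Underutilized GPUs: [1, 2]
```

Point-in-time samples like this are noisy; in a real deployment you would aggregate utilization over hours or days before concluding that hardware is stranded.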
The Path Forward
The DDN report suggests that overcoming infrastructure challenges requires a combination of education, better use case selection, and leveraging experienced system integrators. Organizations must approach AI infrastructure as a strategic investment rather than a tactical add-on.
For infrastructure teams, this means:
- Assess current capabilities before committing to AI projects
- Plan for scale from the beginning, not as an afterthought
- Invest in education for both technical and business teams
- Consider total cost of ownership, including power and operational overhead
- Focus on use cases that leverage existing data assets and create measurable business value
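The total-cost-of-ownership bullet above is easy to under-scope. A back-of-envelope model shows how power and operations compound the hardware line item; every figure here is a hypothetical placeholder, not vendor pricing:

```python
# Hypothetical three-year TCO for a small GPU cluster.
# All inputs are illustrative assumptions, not real quotes.
gpus = 64
gpu_capex = 30_000        # purchase price per GPU, USD
gpu_power_kw = 0.7        # draw per GPU under load, kW
pue = 1.4                 # power usage effectiveness (cooling overhead)
price_kwh = 0.12          # electricity price, USD per kWh
ops_per_year = 500_000    # staffing, monitoring, maintenance, USD/year
years = 3

hardware = gpus * gpu_capex
energy_kwh = gpus * gpu_power_kw * pue * 24 * 365 * years
power = energy_kwh * price_kwh
ops = ops_per_year * years
total = hardware + power + ops

print(f"Hardware: ${hardware:,.0f}")
print(f"Power:    ${power:,.0f}")
print(f"Ops:      ${ops:,.0f}")
print(f"3-yr TCO: ${total:,.0f}")
```

Even with generous assumptions, power and operations add meaningfully to the hardware bill over three years, and both scale with exactly the underutilization the report flags: idle GPUs still draw power and still need to be operated.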
The research indicates that while enthusiasm for AI remains high, the infrastructure reality is forcing a more measured approach. Organizations that address these foundational challenges are more likely to join the early movers generating ROI, while those that ignore infrastructure requirements may join the growing list of canceled AI projects.
As Bouzari notes, the education process is beginning to take place through system integrators and consultants. This suggests that the AI infrastructure gap may narrow as more organizations gain experience and develop better practices for managing AI infrastructure at scale.
The key takeaway for enterprises: AI success requires more than just choosing a model and a cloud provider. It demands careful infrastructure planning, realistic use case selection, and a commitment to building the operational foundation needed to support AI at scale.