Google Cloud deepens its reliance on Intel's infrastructure processing units, ordering new custom SmartNICs while keeping Xeon processors central to its AI workloads.
Google Cloud is doubling down on Intel's infrastructure processing units (IPUs), ordering new custom SmartNICs to power its next-generation cloud infrastructure. The expanded partnership, announced Thursday, comes as Intel seeks to prove its datacenter relevance amid growing competition from custom silicon and Arm-based alternatives.
Google's Custom Silicon Strategy Takes a Different Path
Unlike Amazon Web Services, which has developed its own Nitro networking cards through its Annapurna Labs division, Google has chosen to partner with Intel for its SmartNIC needs. The collaboration began in 2022 with the Mount Evans IPU, which launched alongside Google's C3 instances and delivered 200 Gbps networking speeds.
The decision to continue working with Intel rather than developing in-house networking silicon represents a strategic choice. While AWS and Microsoft have invested heavily in custom networking solutions—Microsoft using FPGAs for custom logic—Google appears to be focusing its custom silicon efforts elsewhere, particularly on its Arm-based Axion CPU.
AI Infrastructure Demands Drive Next-Gen IPU Development
Intel's announcement hints at significant performance improvements in the next generation of Google's IPUs. Given the explosive demand for high-speed networking in AI compute clusters, industry analysts expect the new SmartNICs to far exceed Mount Evans' 200 Gbps capabilities.
The timing aligns with Google's aggressive AI infrastructure expansion. The company has been rapidly scaling its GPU and TPU deployments, requiring increasingly sophisticated networking solutions to handle the massive data flows between compute nodes in AI clusters.
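The scale of those data flows is easy to underestimate. As a rough, hypothetical illustration (the formula is the standard ring all-reduce communication cost; the model size, node count, and link speeds below are assumptions for the sketch, not figures from Google or Intel), here is how per-step gradient traffic stacks up against 200 Gbps-class links:

```python
def ring_allreduce_traffic_gb(grad_gb: float, nodes: int) -> float:
    # Ring all-reduce: each node transmits 2*(N-1)/N times the gradient size
    return 2 * (nodes - 1) / nodes * grad_gb

def comm_time_s(traffic_gb: float, link_gbps: float) -> float:
    # Convert the link speed from gigabits to gigabytes per second
    return traffic_gb / (link_gbps / 8)

grads = 20.0   # assumption: 10B parameters in fp16 ~= 20 GB of gradients
nodes = 8      # assumption: a small 8-node data-parallel cluster

traffic = ring_allreduce_traffic_gb(grads, nodes)
print(traffic)                      # 35.0 GB sent per node per step
print(comm_time_s(traffic, 200))    # 1.4 s on a 200 Gbps link
print(comm_time_s(traffic, 800))    # 0.35 s on a hypothetical 800 Gbps link
```

Even in this toy setup, a single gradient synchronization ties up seconds of link time at 200 Gbps, which is why analysts expect the next-generation IPUs to push well past Mount Evans' speeds.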
Xeon Processors Remain Central to Google's AI Strategy
Despite Google's development of its own Arm-based Axion processors, Intel emphasized that Xeon remains a key component of Google Cloud's architecture. The chipmaker noted that Xeon processors power "a variety of general purpose and AI workloads" across Google's infrastructure.
This dual approach—maintaining Intel processors while developing custom Arm alternatives—mirrors strategies employed by other hyperscalers. Microsoft similarly uses both custom Arm-based Cobalt processors and Intel/AMD x86 chips across its Azure cloud.
Custom ASIC Business Shows Strong Growth
The expanded partnership announcement comes on the heels of strong performance in Intel's custom ASIC business. During its Q4 2025 earnings call, Intel CFO David Zinsner revealed that the custom ASIC division grew more than 50 percent year-over-year and reached an annualized revenue run rate exceeding $1 billion.
This growth suggests that despite challenges in Intel's broader business, its custom silicon division is finding traction with major cloud providers like Google. The ability to deliver tailored solutions for specific workloads appears to be a key differentiator.
Market Dynamics Keep Intel Relevant
Industry analysts note that Intel's position in the cloud computing market remains surprisingly secure, despite predictions of its displacement by custom silicon. Several factors contribute to this resilience:
- Customer Preference: Many enterprise customers still prefer x86 architectures for compatibility and performance reasons
- Pricing Pressure: The presence of both Intel and AMD in the market creates competitive pricing dynamics that benefit cloud providers
- Workload Diversity: Different workloads have different requirements, making a one-size-fits-all approach impractical
Google's continued investment in Xeon processors, even as it develops Axion, reflects this reality. The company appears to be maintaining flexibility rather than committing exclusively to any single architecture.
The Future of Cloud Networking Infrastructure
The expanded Intel-Google partnership signals several broader trends in cloud infrastructure development:
Performance Scaling: As AI workloads become more demanding, networking infrastructure must scale accordingly. The next-generation Google IPUs will likely target multi-terabit speeds to support large-scale AI training clusters.
Specialization Over Generalization: Rather than building general-purpose solutions, both Intel and Google appear focused on tailoring their networking infrastructure to specific use cases, particularly AI and machine learning workloads.
Strategic Partnerships: The collaboration demonstrates how even competitors in certain areas can find mutually beneficial arrangements in others. Google gets access to Intel's networking expertise while Intel secures a major customer for its custom silicon business.
What This Means for Cloud Customers
For Google Cloud customers, the expanded partnership with Intel should translate to improved performance and capabilities, particularly for AI workloads. The new SmartNICs will likely enable:
- Faster data transfer between compute nodes in AI clusters
- Improved network security through hardware-based offloading
- Better resource utilization by freeing CPU cores from networking tasks
- Enhanced support for emerging AI frameworks and workloads
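The resource-utilization point can be made concrete with a back-of-envelope sketch. A long-standing rule of thumb puts software packet processing at roughly 1 CPU-GHz per Gbps of throughput; real numbers vary widely with packet size, protocol, and kernel-bypass techniques, so every figure below is an assumption for illustration only:

```python
def host_cores_for_networking(link_gbps: float,
                              ghz_per_gbps: float = 1.0,
                              core_ghz: float = 3.0) -> float:
    """Estimate host cores consumed by software packet processing.

    Uses the rough '1 GHz per Gbps' rule of thumb; actual overhead
    depends heavily on MTU, protocol, and the network stack in use.
    """
    return (link_gbps * ghz_per_gbps) / core_ghz

# Cores a 200 Gbps link could tie up without SmartNIC offload (illustrative)
print(round(host_cores_for_networking(200), 1))  # 66.7
```

Even if the rule of thumb is off by a factor of several, offloading that work to an IPU returns a meaningful slice of each host's cores to billable customer workloads.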
The continued investment in Xeon processors also suggests that Google will maintain strong x86 offerings alongside its Arm-based Axion instances, giving customers more architectural choices for their workloads.
Industry Context and Competitive Landscape
The Intel-Google partnership must be viewed against the backdrop of intense competition in the cloud infrastructure market. AWS continues to push its custom Nitro architecture, while Microsoft leverages its FPGA expertise. Google's approach of selective custom development combined with strategic partnerships represents a middle path.
This strategy allows Google to focus its engineering resources on areas where it can achieve the greatest differentiation—such as its Tensor Processing Units for AI—while relying on partners like Intel for infrastructure components where custom development may offer less competitive advantage.

The expanded collaboration between Google and Intel represents more than just another hardware purchase. It reflects the complex, multi-faceted nature of modern cloud infrastructure, where success depends not just on raw performance but on strategic flexibility, architectural diversity, and the ability to leverage specialized expertise across the technology ecosystem.
As AI workloads continue to drive demand for ever-faster networking and more specialized compute capabilities, partnerships like this one will likely become increasingly common. The future of cloud computing may well be defined not by companies that build everything in-house, but by those that can most effectively orchestrate a diverse ecosystem of technologies and partners to deliver optimal solutions for their customers.
