Google, Amazon, Microsoft, and Meta plan to spend a combined $725 billion on capital expenditure in 2026, a 77% increase from 2025, with rising memory chip costs accounting for billions in additional spending, particularly for AI infrastructure.
The semiconductor industry's supply chain dynamics are undergoing a dramatic transformation as major technology companies prepare to invest a record $725 billion in capital expenditure in 2026, representing a 77% increase over last year's already substantial $410 billion. This unprecedented spending surge reflects not just an AI arms race, but a fundamental shift in how Big Tech approaches hardware acquisition and component supply in an era of constrained semiconductor manufacturing capacity.
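As a quick sanity check, the headline growth figure follows directly from the two totals quoted above. This is a back-of-the-envelope sketch; the per-company split of the $725 billion is not given here, so only the aggregate rate is verified:

```python
# Back-of-the-envelope check of the headline capex figures cited above.
capex_2026_bn = 725  # combined Big Tech capex plan for 2026, in $bn
capex_2025_bn = 410  # prior-year combined spend, in $bn

growth_pct = (capex_2026_bn / capex_2025_bn - 1) * 100
print(f"Year-over-year growth: {growth_pct:.0f}%")  # prints 77%
```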
Memory Cost Inflation as Primary Driver
What makes this capex increase particularly noteworthy is the explicit acknowledgment by multiple companies that rising memory chip prices are a primary factor driving their budget adjustments. Microsoft's CFO, Amy Hood, directly stated that increased prices for memory chips and other components accounted for $25 billion of the company's record capex budget. Microsoft set its 2026 spending at $190 billion, significantly above the $152 billion average analyst forecast.
Similarly, Meta raised its full-year capex range to $125 billion to $145 billion, up from a prior ceiling of $135 billion, explicitly citing "higher component pricing this year, particularly memory" alongside rising costs for land, power, and skilled workers needed to build data centers that now consume 70% of the world's memory output.
These statements provide concrete validation for what market data and industry executives have been warning about for months. According to TrendForce, DRAM contract prices rose approximately 95% quarter over quarter in Q1 2026, with a further 58% to 63% increase projected for Q2. NAND flash memory is following a similar trajectory, with Q2 contract prices expected to climb 70% to 75%.
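Because these are sequential quarter-over-quarter moves, they compound. A minimal sketch using both endpoints of TrendForce's Q2 range shows DRAM contract prices roughly tripling across the two quarters:

```python
# Compounding the quoted QoQ DRAM contract price moves (TrendForce figures).
q1_rise = 0.95            # ~95% QoQ in Q1 2026
q2_range = (0.58, 0.63)   # 58-63% projected for Q2

for q2_rise in q2_range:
    cumulative = (1 + q1_rise) * (1 + q2_rise) - 1
    print(f"Q2 rise {q2_rise:.0%}: cumulative increase ~{cumulative:.0%}")
```

At both endpoints the cumulative rise lands between roughly 208% and 218%, i.e. a price slightly above 3x the starting level.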
The concentration of demand in specific memory segments is exacerbating the price increases. Server DRAM and high-density DDR5 RDIMMs are absorbing the bulk of production capacity, and all NAND output for 2026 is already committed, according to Phison CEO Khein-Seng Pua. Hood's $25 billion figure helps quantify the impact, demonstrating that memory cost inflation at a single company exceeds the entire annual capex of most semiconductor firms.
Cloud Growth and Contract Backlogs
Beyond memory costs, the cloud divisions of these companies are experiencing explosive growth, creating additional demand pressure across the semiconductor supply chain. Google's Cloud revenue reached $20 billion in the reported quarter, growing 63% year over year, outpacing both Amazon Web Services ($37.6 billion, up $8.3 billion) and Microsoft's Azure-driven cloud segment ($34.7 billion, up $7.9 billion).
The contract backlogs further illustrate the scale of future demand:
- Google's cloud contract backlog reached $460 billion, roughly double the $240 billion reported at the end of Q4 2025
- Amazon reported $364 billion in its own pipeline, which will expand further after a recent $100 billion computing contract with Anthropic over the next decade
- Microsoft's commercial remaining performance obligations hit $625 billion, up 110% year over year
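Summing the three figures gives a rough sense of scale, with the caveat that they are not strictly comparable (Google and Amazon report backlog or pipeline, while Microsoft reports remaining performance obligations), so this is indicative only:

```python
# Indicative total of the contracted-demand figures listed above, in $bn.
# Note: the three metrics are defined differently and sum is illustrative.
backlogs_bn = {
    "Google cloud backlog": 460,
    "Amazon pipeline": 364,
    "Microsoft commercial RPO": 625,
}

total_bn = sum(backlogs_bn.values())
print(f"Combined contracted future demand ~${total_bn} billion")  # $1449 billion
```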
Google attributes its cloud growth to its strategy of building custom AI chips, foundation models, and products in-house. The company's 7th-gen Ironwood TPU, which packs 192 GB of HBM3E per chip with 7.37 TB/s bandwidth in pods of up to 9,216 chips, is central to that strategy. Anthropic has committed to access up to one million of these TPUs. Google recently unveiled its 8th-gen TPUs, which are split into two distinct variants for training and inference.
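The per-chip Ironwood specs quoted above imply striking pod-level aggregates. A short sketch (using decimal unit conversions, and assuming the maximum pod size of 9,216 chips):

```python
# Aggregate memory and bandwidth implied by the per-chip Ironwood specs above.
hbm_per_chip_gb = 192      # HBM3E capacity per chip
bw_per_chip_tbs = 7.37     # memory bandwidth per chip, TB/s
chips_per_pod = 9216       # maximum pod size

pod_memory_pb = hbm_per_chip_gb * chips_per_pod / 1e6  # GB -> PB (decimal)
pod_bw_pbs = bw_per_chip_tbs * chips_per_pod / 1e3     # TB/s -> PB/s

print(f"Pod HBM capacity ~{pod_memory_pb:.2f} PB")       # ~1.77 PB
print(f"Pod memory bandwidth ~{pod_bw_pbs:.1f} PB/s")    # ~67.9 PB/s
```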
Custom Silicon Development
The capex figures reflect more than just GPU purchases, as each hyperscaler is now deploying or developing custom accelerators to reduce dependence on Nvidia for inference-based workloads:
- Amazon's Trainium3, built on a 3nm process with 144 GB of HBM3E and roughly 4.9 TB/s of bandwidth, is described by CEO Andy Jassy as "nearly fully subscribed" for 2026
- Meta has announced four generations of its MTIA inference chip, all developed with Broadcom and fabricated at TSMC, even as it signed GPU deals worth roughly $110 billion combined with AMD and Nvidia
- Microsoft's Maia 200 is deploying in U.S. Central data centers
This pattern extends beyond accelerators, as CPU demand for agentic AI workloads drives a parallel supply crunch. CPU lead times currently stretch to six months, with Intel reporting billions of dollars in unmet Xeon demand. Arm CEO Rene Haas has stated that agentic workloads require roughly 120 million CPU cores per gigawatt of data center capacity, four times what traditional AI training clusters need. Per Intel CFO David Zinsner, data center CPU-to-GPU ratios have already moved from 1:8 to 1:4, and are expected to converge further toward, or even beyond, parity.
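Haas's rule of thumb can be sketched numerically. The 2 GW campus below is a hypothetical example, not a figure from the article; only the cores-per-gigawatt ratio and the "four times" claim are taken from the quotes above:

```python
# Sketch of Rene Haas's quoted rule of thumb for agentic AI CPU demand.
agentic_cores_per_gw = 120_000_000                 # ~120M cores per GW
training_cores_per_gw = agentic_cores_per_gw // 4  # "four times" claim above

campus_gw = 2  # hypothetical data center campus size, for illustration only
print(f"Agentic workloads:  {agentic_cores_per_gw * campus_gw:,} cores")
print(f"Training-style:     {training_cores_per_gw * campus_gw:,} cores")
```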
Supply Constraints Beyond Capital
Despite record spending, all four companies have acknowledged supply constraints that additional capital alone can't resolve. The most significant bottleneck is advanced packaging capacity. Nvidia has booked an estimated 800,000 to 850,000 wafers of TSMC's CoWoS advanced packaging capacity for 2026, consuming over half of the total output and leaving AMD, Broadcom, and Google's TPU program competing for the remainder. CoWoS remains oversubscribed through at least mid-2026, and TSMC's U.S. packaging fabs aren't expected to reach volume until 2028.
Power infrastructure represents another critical bottleneck. Large power transformer lead times extend to roughly 128 weeks, and the IEA estimates that approximately 20% of planned global data center projects could be at risk of grid-related delays. TrendForce recently downgraded its full-year server shipment growth forecast from 20% to 13% because power management ICs (PMICs) and baseboard management controllers needed to assemble complete servers are stretching to 35- to 40-week lead times. Samsung's planned closure of its S7 eight-inch wafer fab in Korea will further tighten PMIC supply.
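The lead times quoted above are easier to grasp in years and months. A small conversion sketch, assuming 52 weeks per year:

```python
# Converting the quoted lead times into more intuitive units.
weeks_per_year = 52

transformer_weeks = 128      # large power transformer lead time
pmic_weeks = (35, 40)        # PMIC / BMC lead-time range

print(f"Transformer lead time ~{transformer_weeks / weeks_per_year:.1f} years")
for w in pmic_weeks:
    print(f"PMIC/BMC lead time of {w} weeks ~{w / weeks_per_year * 12:.0f} months")
```

The transformer figure works out to roughly two and a half years, consistent with the "beyond two years" characterization later in the article.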
Market Reaction and Strategic Implications
The market reaction to these announcements has been mixed. Meta's stock slipped 6% in after-hours trading following the earnings report, erasing roughly $113 billion in market value. The drop reflected both the $10 billion capex increase and CEO Mark Zuckerberg's lack of a firm timeline for releasing improved AI models. Investors are questioning whether Meta's historically capital-light business is becoming far more capital-intensive.
Amazon kept its $200 billion capex plan unchanged, while Microsoft CEO Satya Nadella characterized the end of his company's exclusive contract with OpenAI as beneficial, citing royalty-free access to OpenAI's frontier models and IP through 2032.
These developments paint a picture of an industry in transition, where hardware capabilities are becoming as strategically important as software innovation. The massive capex increases reflect recognition that control of the semiconductor supply chain and infrastructure will determine competitive advantage in the AI era.

The memory chip price increases are particularly noteworthy given their scale and concentration. When Microsoft attributes $25 billion of its AI budget specifically to memory cost inflation, it provides concrete evidence of the semiconductor industry's transformation from a commodity market to a strategic resource allocation challenge.
This trend has profound implications for the entire semiconductor ecosystem. As hyperscalers vertically integrate and commit massive portions of future production capacity, traditional semiconductor companies face an increasingly difficult environment in which to secure supply and plan capacity expansion. The shift toward custom silicon further complicates this dynamic, as specialized chips require different manufacturing processes and packaging technologies than general-purpose components.

The data center power constraint represents perhaps the most fundamental limitation to AI infrastructure expansion. With transformer lead times extending beyond two years and grid capacity constraints affecting 20% of planned projects, the semiconductor industry's ability to produce chips may ultimately be limited by the power industry's ability to deliver electricity to those chips.
Looking forward, these capex figures suggest that the semiconductor industry's growth will increasingly be determined by the intersection of chip design, manufacturing capacity, and power infrastructure development. The companies that can most effectively coordinate these three elements will likely emerge as the dominant players in the AI era.
