Anthropic enters discussions with UK startup Fractile to acquire DRAM-less inference chips, adding a fourth supplier to its AI silicon portfolio while pursuing more cost-efficient computing architectures.
Anthropic has initiated early discussions with London-based chip startup Fractile about purchasing the company's inference accelerators, according to sources familiar with the matter. The move would make Fractile Anthropic's fourth supplier of AI server silicon, complementing existing partnerships with Nvidia, Google, and Amazon as the company pursues diversified computing infrastructure.
The timing fits Fractile's roadmap: the startup does not expect its chips to be commercially ready until approximately 2027, placing any deployment outside Anthropic's near-term procurement plans but within the window of its expanded Google-Broadcom TPU partnership. In April 2026, Anthropic increased its TPU capacity commitment from 1GW to 3.5GW for the period from 2027 through 2031, indicating a multi-year strategic planning horizon.
Fractile's technical approach represents a significant departure from conventional AI chip architecture. Founded in 2022 by Oxford PhD Walter Goodwin, the company is developing inference accelerators that co-locate memory and compute on the same die using SRAM (Static Random-Access Memory) rather than relying on separate DRAM (Dynamic Random-Access Memory) chips. This architectural choice directly addresses one of the most persistent bottlenecks in AI computing: the energy and time-intensive process of shuttling data between processors and off-chip memory.
"The data movement between the GPU and off-chip DRAM is one of the main bottlenecks in running large AI models at speed," Goodwin explained to Fortune in July 2024. "Our design stores data needed for computations directly next to the transistors that perform the arithmetic, rather than relying on off-chip DRAM."
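Goodwin's point can be made concrete with a back-of-envelope estimate: during autoregressive decoding, every model weight must be streamed from memory once per generated token, so memory bandwidth, not arithmetic throughput, sets the latency floor. The sketch below uses purely illustrative assumptions (a 70B-parameter model, 8-bit weights, a ~3.35 TB/s off-chip HBM figure, and a hypothetical 100 TB/s aggregate on-die SRAM bandwidth); none of these are Fractile or Nvidia specifications.

```python
# Back-of-envelope sketch of the memory-bandwidth bound on decoding.
# All numbers are illustrative assumptions, not vendor specifications.

def time_per_token_s(param_count, bytes_per_param, bandwidth_bytes_per_s):
    """Decoding one token must stream every weight once, so a lower
    bound on per-token latency is total weight bytes / memory bandwidth."""
    return param_count * bytes_per_param / bandwidth_bytes_per_s

PARAMS = 70e9           # 70B-parameter model (assumption)
WEIGHT_BYTES = 1        # 8-bit quantized weights (assumption)

hbm = time_per_token_s(PARAMS, WEIGHT_BYTES, 3.35e12)   # off-chip HBM, ~3.35 TB/s
sram = time_per_token_s(PARAMS, WEIGHT_BYTES, 100e12)   # hypothetical on-die SRAM, 100 TB/s

print(f"HBM-bound:  {hbm * 1e3:.1f} ms/token")
print(f"SRAM-bound: {sram * 1e3:.2f} ms/token")
print(f"bandwidth-limited speedup: {hbm / sram:.0f}x")
```

Under these assumptions the on-die design wins by roughly the ratio of the two bandwidths; real speedups depend on how much of the model actually fits in SRAM and on compute limits this sketch ignores.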
Based on simulations conducted prior to physical chip fabrication, Goodwin claimed that Fractile's architecture could deliver dramatic performance improvements, potentially running large language models 100 times faster while reducing costs by a factor of 10 compared to Nvidia's current GPU offerings. These figures, while still theoretical, underscore the potential of near-memory computing architectures to disrupt the AI hardware market.
The company's technical credibility is bolstered by its team composition, which reportedly includes engineers with experience at leading semiconductor firms including Graphcore, Nvidia, and Imagination Technologies. Fractile is also developing a complementary software stack to maximize the efficiency of its hardware approach.
Financially, Fractile has demonstrated significant investor confidence. The company raised $15 million in seed funding co-led by Kindred Capital, the NATO Innovation Fund, and Oxford Science Enterprises. Current discussions indicate plans to raise $200 million at a valuation exceeding $1 billion, with participation from notable venture capital firms including Founders Fund, 8VC, and Accel.
Anthropic's interest in Fractile reflects a broader strategic approach to chip procurement. The company has deliberately avoided dependence on any single vendor, currently running its Claude AI system across Nvidia GPUs, Amazon's Trainium processors through Project Rainier, and Google's TPUs. This diversification strategy provides negotiating leverage and mitigates supply chain risks.
The company's rapid growth has intensified its need for cost-efficient computing infrastructure. Anthropic's annualized revenue run rate reached $30 billion in March 2026, a substantial increase from approximately $9 billion at the end of 2025. However, inference costs have reportedly become a drag on gross margins, driving the company to explore alternative computing architectures.
Unlike competitors OpenAI and xAI, which are investing heavily in developing proprietary data center infrastructure, Anthropic has primarily opted to rent computing capacity from multiple providers. This approach allows the company to maintain flexibility while leveraging diversified chip supply to negotiate favorable terms.
Fractile operates within a competitive landscape of inference-focused startups pursuing similar architectural innovations; Groq and Cerebras have also built SRAM-based or near-memory computing designs. The AI chip industry took notice when Nvidia acquired Groq for $20 billion in December 2025 and subsequently launched its own dedicated inference accelerator, the Groq 3 LPX. The move underscored the growing commercial pressure to optimize cost-per-token at scale, a metric that becomes increasingly critical as AI models grow larger and more complex.
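Cost-per-token itself is simple arithmetic, which is part of what makes it such an unforgiving benchmark at scale. A minimal sketch, with purely hypothetical server costs and throughput figures:

```python
# Cost-per-token arithmetic; every figure below is hypothetical.

def cost_per_million_tokens(server_cost_per_hour_usd, tokens_per_second):
    """Amortized serving cost per one million generated tokens."""
    tokens_per_hour = tokens_per_second * 3600.0
    return server_cost_per_hour_usd / tokens_per_hour * 1e6

# A hypothetical $40/hour GPU server sustaining 1,000 tokens/s...
baseline = cost_per_million_tokens(40.0, 1_000)
# ...versus a hypothetical accelerator with 10x throughput at the same price.
faster = cost_per_million_tokens(40.0, 10_000)
print(f"${baseline:.2f} vs ${faster:.3f} per million tokens")
```

At equal hardware cost, cost-per-token falls in direct proportion to throughput, which is why both incumbents and startups frame inference hardware claims in exactly these terms.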
The semiconductor industry's persistent challenges with DRAM pricing and availability have further accelerated interest in alternative memory architectures. Traditional GPU designs require substantial off-chip memory bandwidth, making them vulnerable to both DRAM cost fluctuations and supply constraints. By integrating memory directly onto the compute die, Fractile's approach potentially reduces dependency on the volatile DRAM market while improving performance characteristics.
From a manufacturing perspective, SRAM integration presents both opportunities and challenges. SRAM offers faster access times and lower power consumption compared to DRAM, but it is typically less dense and more expensive per bit. Fractile's design must balance these trade-offs to achieve the claimed performance and cost advantages. The company has not yet disclosed specific process node details or physical implementation strategies.
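The density trade-off can be quantified roughly. Using public ballpark figures for a 5nm-class SRAM bit cell and array overhead (not Fractile's undisclosed process or design), holding a large model's weights entirely in on-die SRAM implies a striking amount of silicon:

```python
# Rough die-area estimate for SRAM-resident model weights.
# Bit-cell area and overhead are ballpark public figures, not Fractile data.

BITCELL_UM2 = 0.021      # ~5nm-class SRAM bit cell (public ballpark)
ARRAY_OVERHEAD = 1.6     # periphery/routing overhead factor (assumption)
RETICLE_MM2 = 830        # approximate single-die reticle limit

def sram_mm2_for_bytes(n_bytes):
    """Die area in mm^2 needed to hold n_bytes in SRAM arrays."""
    bits = n_bytes * 8
    um2 = bits * BITCELL_UM2 * ARRAY_OVERHEAD
    return um2 / 1e6     # convert µm² to mm²

model_bytes = 70e9       # 70B parameters at 8 bits each (assumption)
area = sram_mm2_for_bytes(model_bytes)
print(f"~{area:,.0f} mm² of SRAM, i.e. ~{area / RETICLE_MM2:.0f} reticle-sized dies")
```

Numbers of this magnitude explain why SRAM-first designs typically spread a model across many networked chips, as Groq does, or across an entire wafer, as Cerebras does, and why capacity, not speed, is the binding constraint Fractile must engineer around.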
A deal with Fractile would represent Anthropic's most significant move into specialized AI hardware to date. While the company has historically relied on established providers, Fractile's performance claims and timeline appear to have warranted closer examination. Successful integration could give Anthropic a competitive advantage in inference efficiency, reducing operational costs while improving response times for its AI services.
The broader implications for the AI chip market extend beyond Anthropic's specific needs. If Fractile's technology delivers on its promises, it could accelerate industry-wide adoption of near-memory computing architectures, forcing established players like Nvidia to further optimize their product offerings. The intersection of AI workloads and hardware innovation continues to drive rapid evolution in semiconductor design, with inference efficiency emerging as a key differentiator in an increasingly competitive landscape.
As the discussions between Anthropic and Fractile continue, the industry will watch closely for any indications of potential acquisition terms, technical validation of Fractile's performance claims, and the timeline for commercial deployment. The coming years will likely see continued investment in specialized AI hardware, as companies seek to balance performance, cost, and energy efficiency in an environment of rapidly growing AI demand.
