AI chip startups collectively raised $1.1 billion in funding this week, with MatX securing $500 million for its SRAM-focused LLM accelerator, Axelera raising $250 million for edge-to-datacenter RISC-V chips, and SambaNova landing $350 million with Intel backing.
It's only Tuesday and AI chip startups have already soaked up $1.1 billion in funding, showing that venture capitalists remain undeterred by bubble fears as they chase alternatives to Nvidia's dominance.
MatX: The $500M SRAM Challenger
MatX, founded in 2022 by former Google engineers Reiner Pope and Mike Gunter, secured the largest share with a $500 million Series B round led by Jane Street and Situational Awareness LP. The startup plans to ship its first chip, the MatX One, later this year.
The chip takes an ambitious approach: it is optimized specifically for large language models, yet the company claims it can handle pre-training, reinforcement learning, and inference alike. Unlike competitors focusing solely on inference, MatX wants to be the Swiss Army knife of AI acceleration.
What makes MatX particularly interesting is its memory strategy. The company is betting heavily on SRAM (Static Random Access Memory) over the HBM (High Bandwidth Memory) used by AMD and Nvidia. SRAM offers orders-of-magnitude faster access speeds, which MatX claims will enable its chip to deliver more than 2,000 tokens per second for large 100-layer mixture-of-experts models.
However, SRAM's space efficiency is abysmal compared to HBM. Today's largest dies can only fit a few hundred megabytes of SRAM while still leaving room for compute. MatX's solution? Borrow from Cerebras and Groq's playbooks: build massive chips and scale horizontally.
Unlike Groq, which uses hundreds of chips for larger models, MatX plans to combine SRAM with HBM strategically. Rather than storing model weights in HBM, the company will use it for key-value (KV) caches—essentially the model's short-term memory, holding the attention keys and values for every token already processed in a context. This hybrid approach aims to deliver both the capacity of HBM-based GPUs and the speed of SRAM designs.
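A rough back-of-envelope calculation shows why KV caches, rather than weights, are a sensible fit for HBM: at long context lengths they can run to many gigabytes per sequence, far beyond what fits in on-die SRAM. The model dimensions below (grouped-query attention with 8 KV heads of dimension 128) are illustrative assumptions for a 100-layer model, not figures MatX has published:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Size of the key-value cache for one sequence.

    Each layer stores a key and a value vector (factor of 2) per token,
    across all KV heads, at the given precision (2 bytes for FP16/BF16).
    """
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Illustrative 100-layer mixture-of-experts model (assumed dimensions).
layers, kv_heads, head_dim = 100, 8, 128

per_token = kv_cache_bytes(layers, kv_heads, head_dim, seq_len=1)
cache_32k = kv_cache_bytes(layers, kv_heads, head_dim, seq_len=32_768)

print(f"KV cache per token:   {per_token / 1024:.0f} KiB")
print(f"KV cache at 32K ctx:  {cache_32k / 2**30:.1f} GiB")
```

Even under these modest assumptions, a single 32K-token sequence needs roughly 12.5 GiB of cache, which dwarfs the few hundred megabytes of SRAM a large die can carry but sits comfortably in HBM.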
Axelera: The $250M Edge-to-Datacenter Play
Dutch startup Axelera announced a $250 million funding round led by Innovation Industries to advance its low-power RISC-V based AI accelerators. The company takes a different approach: start at the edge and scale up.
Axelera's Europa and Metis AI accelerators target power-constrained edge workloads like computer vision and robotics. The Europa chip delivers up to 629 TOPS of INT8 performance fed by 64GB of DRAM with 200 GB/s bandwidth. This puts it on par with an Nvidia A100 while consuming just 45 watts—less than one-sixth the power.
The trade-offs are clear: Europa trails the nearly six-year-old A100 in both memory capacity (64GB of DRAM vs. 80GB of HBM2E) and bandwidth (200 GB/s vs. 2 TB/s). But Axelera's strategy is to develop a compute architecture that scales efficiently from edge to datacenter.
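The efficiency gap implied by the headline figures above can be sanity-checked with simple arithmetic. This sketch uses the 400 W rating of the SXM-form-factor A100 and its 624 INT8 TOPS figure (with structured sparsity); PCIe A100s are rated 250-300 W, which narrows the gap somewhat:

```python
# Back-of-envelope efficiency comparison using publicly stated figures.
chips = {
    "Axelera Europa": {"int8_tops": 629, "watts": 45},
    "Nvidia A100 (SXM)": {"int8_tops": 624, "watts": 400},
}

for name, spec in chips.items():
    # TOPS per watt: raw INT8 throughput divided by rated board power.
    efficiency = spec["int8_tops"] / spec["watts"]
    print(f"{name}: {efficiency:.1f} TOPS/W")
```

On these numbers Europa lands near 14 TOPS/W against the SXM A100's roughly 1.6 TOPS/W, an advantage of close to an order of magnitude on paper, though rated board power and delivered throughput rarely line up exactly in real workloads.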
The company is developing a new chip codenamed Titania in partnership with the EU's EuroHPC Digital Autonomy with RISC-V in Europe (DARE) program, aiming to create a domestic alternative to US chips for supercomputing.
SambaNova: The $350M Intel-Backed Contender
SambaNova received $350 million from Vista Equity, Cambium Capital, and Intel's investment fund. The funding was announced alongside a multi-year collaboration that will see the chip startup integrate Intel's Xeon processors into its AI servers.
The company also disclosed its new AI accelerator, the SN50, which will be deployed by SoftBank in Japanese datacenters starting later this year.
The Bigger Picture
These massive funding rounds underscore the ongoing gold rush in AI chip development. Despite concerns about an AI bubble, VCs are pouring money into startups promising alternatives to Nvidia's dominant position. Each company is taking a different tactical approach:
- MatX: SRAM-first with HBM for KV caches, targeting all AI workloads
- Axelera: Edge-first RISC-V architecture scaling to datacenter
- SambaNova: Dataflow architecture with Intel CPU integration
The common thread is that none of these companies are content to compete on Nvidia's terms. Whether through novel memory architectures, alternative instruction sets, or hybrid CPU-GPU designs, each is trying to carve out a defensible position in what remains one of tech's most competitive markets.

The question isn't whether these companies can build impressive chips—they clearly can. The challenge is whether they can build sustainable businesses in a market where Nvidia continues to iterate rapidly and maintain its ecosystem advantages. With $1.1 billion in fresh capital, these startups will have their chance to prove their approaches can scale beyond impressive benchmarks to real-world deployment at massive scale.
