Bengaluru-based hardware startup C2i Semiconductors has raised $15M in Series A funding led by Peak XV to develop direct 'grid-to-GPU' power systems, addressing the critical energy bottleneck in AI datacenters.
Bengaluru-based hardware engineering firm C2i Semiconductors has closed a $15 million Series A funding round led by Peak XV Partners (formerly Sequoia India & Southeast Asia), bringing total investment to $19 million. The company is developing what it describes as a "grid-to-GPU" power delivery system designed specifically for AI datacenters – technology arriving as power constraints threaten to throttle AI industry growth.
The Power Bottleneck Problem
Current AI infrastructure faces a fundamental limitation: while GPU computational capacity continues scaling exponentially, power delivery systems haven't kept pace. Traditional datacenter power architectures involve multiple conversion stages – from grid-level alternating current (AC) to facility-level direct current (DC), then stepped down again to voltage levels usable by individual GPUs. Each conversion stage dissipates 5-15% of the power passing through it, and because the stages are chained, total system losses often exceed 30%. For a 100MW datacenter (typical for large AI training facilities), that is roughly 30MW of wasted power – enough to supply 30,000 homes.
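The cascaded-loss arithmetic is worth making concrete: efficiencies of conversions in series multiply. The three-stage chain and per-stage figures below are illustrative assumptions for this sketch, not C2i's published measurements:

```python
# Illustrative: losses in a conventional AC -> facility DC -> rack
# delivery chain. Per-stage efficiencies are assumed for the example.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of conversion stages in series: the product."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

stages = [0.92, 0.90, 0.88]  # grid AC->DC, step-down, rack-level regulation
overall = chain_efficiency(stages)
facility_mw = 100.0
wasted_mw = facility_mw * (1.0 - overall)

print(f"Overall efficiency: {overall:.1%}")                       # -> 72.9%
print(f"Wasted at {facility_mw:.0f} MW input: {wasted_mw:.1f} MW")  # -> 27.1 MW
```

Three stages that each look respectable in isolation compound to roughly the 70-75% system efficiency the article cites as the industry baseline.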
C2i claims its patented architecture eliminates intermediate conversion stages, delivering grid power directly to GPU racks at native operating voltages (typically 48V DC). Early lab prototypes reportedly achieve 94% system efficiency, against the 70-75% typical of comparable conventional power delivery chains. The system integrates:
- Solid-state transformers replacing magnetic transformers
- Gallium nitride (GaN) power semiconductors
- Active voltage regulation at rack level
- Real-time load balancing algorithms
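The last item, real-time load balancing, can be sketched in simplified form. The proportional-capping policy below is an illustrative assumption, not C2i's actual algorithm: when aggregate rack demand exceeds the feed's power budget, every rack's allocation is throttled by the same factor.

```python
# Toy rack-level load balancer (illustrative policy, not C2i's).
# If total GPU demand exceeds the power budget, scale allocations
# proportionally so the feed is never oversubscribed.

def balance(demands_w, budget_w):
    """Return per-rack allocations whose sum never exceeds budget_w."""
    total = sum(demands_w)
    if total <= budget_w:
        return list(demands_w)      # headroom available: grant demand as-is
    scale = budget_w / total        # uniform proportional throttling
    return [d * scale for d in demands_w]

racks = [38_000, 42_000, 35_000]    # watts demanded per rack
alloc = balance(racks, budget_w=100_000)
print([round(a) for a in alloc], round(sum(alloc)))
```

A production controller would also weigh job priority and thermal headroom, but the core invariant is the same: allocations must track demand in real time without exceeding the delivery system's rating.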
Technical Constraints and Challenges
While promising, C2i's approach faces significant implementation hurdles:
- Grid Compatibility: Direct DC power delivery requires grid operators to support high-voltage DC transmission lines, currently limited to specialized installations
- Thermal Management: Concentrated power delivery creates hot spots requiring novel cooling solutions
- Fault Tolerance: Eliminating conversion stages reduces redundancy options during power anomalies
- Scalability: Current prototypes support rack-scale deployment; full datacenter integration requires substantial engineering validation
Peak XV's investment suggests confidence in C2i's engineering team, which includes power systems veterans from Intel, Qualcomm, and GE. The capital will fund pilot deployments with undisclosed hyperscalers in India and Southeast Asia – regions where new datacenter construction faces fewer legacy infrastructure constraints.
Market Context
This funding arrives amid unprecedented pressure on datacenter power systems:
- NVIDIA's Blackwell GPUs consume up to 1200W per unit
- AI training clusters regularly exceed 20MW per installation
- Power, not silicon, now constrains AI scaling according to industry analysts
Major cloud providers have begun exploring alternative approaches, including Microsoft's partnership with Helion for fusion power and Google's geothermal initiatives. C2i represents a hardware-focused solution within this landscape.
The company's technology faces competition from established power electronics firms like Delta Electronics and Vertiv, though C2i claims its integrated approach offers superior efficiency at GPU-level granularity. Successful implementation could reduce AI compute costs by 8-12% based solely on energy savings, according to independent analysis by Wood Mackenzie.
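The 8-12% figure is consistent with back-of-envelope arithmetic. The sketch below assumes energy is about 40% of total AI compute cost; that share is an assumption for illustration, not a figure from the Wood Mackenzie analysis:

```python
# Back-of-envelope: translating a power-delivery efficiency gain into
# a compute-cost reduction. The 40% energy share is an assumption.

def cost_reduction(eff_old, eff_new, energy_cost_share):
    """Fraction of total compute cost saved by raising delivery efficiency."""
    energy_saved = 1.0 - eff_old / eff_new   # less grid draw per unit of IT load
    return energy_saved * energy_cost_share

r = cost_reduction(eff_old=0.72, eff_new=0.94, energy_cost_share=0.40)
print(f"Estimated compute-cost reduction: {r:.1%}")   # -> 9.4%
```

Going from 72% to 94% delivery efficiency cuts grid draw per unit of IT load by roughly 23%; at a 40% energy cost share that lands inside the cited 8-12% range.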
As AI adoption strains global power grids, innovations in power delivery efficiency have become strategically critical. C2i's approach warrants cautious optimism but requires extensive field validation before declaring industry impact. The upcoming pilot deployments will serve as crucial proving grounds for this unconventional power architecture.