A cluster of Chinese AI firms just raised over $1 billion through IPOs in Hong Kong, but executives from Alibaba's Qwen, Tencent, and Zhipu AI warn that fresh capital cannot solve the fundamental compute shortage widening the gap with U.S. competitors.
China's AI sector made headlines this month with a cluster of domestic firms raising more than $1 billion through IPOs in Hong Kong. The listings were meant to signal confidence in China's AI ambitions, but they triggered unusually candid warnings from inside the industry that the gap with the U.S. is widening in ways that money alone cannot fix.
Justin Lin, head of Alibaba Group's Qwen open-source models, delivered the bluntest assessment at the AGI-Next summit in Beijing. He said Chinese companies have a "less than 20%" chance of "leapfrogging the likes of OpenAI and Anthropic with fundamental breakthroughs" in the near term. Executives at Tencent and Zhipu AI voiced similar concerns, pointing to a bottleneck that IPO proceeds cannot easily resolve.
The IPO Wave and Its Real Purpose
Zhipu AI and Minimax were among the first Chinese foundation model companies to go public, reflecting a deliberate policy shift by Beijing. Chinese regulators are actively steering tech companies toward domestic listings to reduce reliance on U.S. capital markets and funnel national savings into priority sectors like semiconductors and AI. Hong Kong serves as the preferred "offshore" venue that still offers global capital access while keeping companies within China's regulatory orbit.
For companies like Zhipu AI, training and deploying large language models is capital-intensive even before hardware constraints enter the equation. IPOs provide longer funding runways than traditional venture rounds, which have cooled significantly since 2021. They also reduce exposure to geopolitical swings and align private sector priorities with Beijing's national technology strategy.
But these listings buy no leverage over the most expensive part of the AI stack. Capital can hire engineers and lease data centers; it cannot conjure advanced GPUs or high-bandwidth memory (HBM). Even with funding pressure eased by the IPOs, several executives identified compute availability and power as China's decisive bottleneck.
The Compute Crunch
Lin's admission at the Beijing summit reframes how to interpret Chinese AI funding. "A massive amount of OpenAI's compute is dedicated to next-generation research, whereas we are stretched thin — just meeting delivery demands consumes most of our resources," he said.
The goal for Chinese firms isn't to outspend U.S. hyperscalers in absolute terms, but to sustain domestic AI development under constrained conditions for as long as possible. IPOs function as an endurance tool rather than a shortcut to dominance.

Chinese labs have made undeniable progress in open-source LLMs. Models such as Qwen, DeepSeek, and others have closed much of the performance gap on standardized benchmarks, particularly for Chinese-language tasks and domain-specific applications. Open models reduce duplication of effort, allow faster iteration, and make better use of limited compute by spreading training and fine-tuning workloads across a broader ecosystem. They also align with Beijing's preference for technology stacks that are auditable and controllable at a national level.
However, open models do not eliminate hardware limits. Training systems still require dense clusters of advanced accelerators, fast networking, and large pools of HBM. This is exactly where Chinese firms are hitting a wall.
The Hardware Embargo
U.S. export controls have cut China off from Nvidia's most capable data center GPUs and the advanced manufacturing tools needed to produce equivalents at scale. Domestic alternatives such as Huawei's Ascend series have improved rapidly, but even optimistic assessments place them behind current-generation U.S. hardware in raw performance and ecosystem support. More importantly, they are produced in far smaller volumes.

As a result, Chinese AI developers face a tradeoff that their U.S. counterparts largely do not. They can train more models, or they can train larger models, but doing both simultaneously strains available infrastructure. Several firms have responded by shifting emphasis away from general-purpose foundation models toward narrower, application-specific systems that can be trained and deployed with fewer resources.

The U.S. Advantage
The differentiator today is not just talent pipelines or research output—it is that the U.S. controls the bulk of the world's advanced AI compute. U.S. hyperscalers operate GPU clusters measured in tens of thousands of accelerators, with software stacks tuned over years of production use. Private investment in U.S. AI companies continues to dwarf that in China, even as Chinese firms turn to public markets.
U.S. companies can deploy capital directly into hardware procurement at a global scale, something Chinese firms cannot match under current geopolitical dynamics. Chinese executives have begun acknowledging this imbalance publicly, warning that U.S. AI infrastructure may be an order of magnitude larger than China's in effective capacity.
That gap compounds over time. More compute enables larger models, which attract more users, data, and revenue, which in turn fund even larger deployments.
A Bifurcated Future
A $1 billion IPO wave is impressive, but it still leaves China well behind the U.S. in the areas that matter most for cutting-edge AI. The capital ensures that China's AI firms remain viable and competitive domestically, but it does not, on its own, alter the global AI race.
Public listings impose discipline and transparency, and they lock firms more tightly into national industrial policy. Over the next few years, this is likely to produce a bifurcated outcome: China's AI ecosystem will advance quickly where raw compute scale matters less, such as consumer and industrial platforms and applied AI, while the cutting edge of general-purpose AI will remain anchored in environments with abundant compute.
Capital can sustain progress, but compute ultimately determines whether that progress will have any measurable impact outside of China.
