The AI industry's insatiable demand for faster data movement is pushing photonics from the periphery to center stage, but manufacturing constraints and routing challenges threaten to create the next major bottleneck in AI scaling.
The generative AI revolution has already disrupted chip manufacturing, strained power grids, and created copper shortages. Now, the industry faces its next bottleneck: the movement of data itself.
(Image credit: Getty Images / Bloomberg)
From Chips to Data: The Next Scaling Challenge
In its three-year history, generative AI's voracious appetite has reshaped multiple industries. First, it upended demand for high-end chips, pushing companies like Nvidia to record valuations and pressuring every part of the manufacturing process. Then it began straining power grids, forcing a rethink of how we deliver energy to data centers. Now, as those same data centers are leaned on ever harder for AI training and inference, the strain has spread to commodities like copper.
"We're targeting the scale-up domain of AI data centers, where we're increasingly limited not just by bandwidth, but the latency of predictability, especially as we scale to larger workloads and agentic workloads," said Vaysh Kewada, CEO and co-founder at Salience Labs, a silicon-photonics company focused on networking bottlenecks in AI data centers.
The Photonics Push
The shift toward photonics isn't just industry chatter. Wells Fargo Securities analyst Aaron Rakers recently wrote that 2026 is "the year of increasing visibility into design wins and building momentum for silicon photonics." The firm estimates the total addressable market for photonics could reach $10-12 billion by 2030.
But behind the bullish forecasts, industry insiders warn of new constraints. "Photonics is something that already exists within the data center today," explained Vivek Raghunathan, CEO and co-founder at Xscape Photonics. "Optical cables and the silicon photonics technology already exist when it comes to connecting different switches as part of a pluggable transceiver ecosystem."
What's changing is the push to move optics beyond switch-to-switch connections and into the ultra-fast links that bind large numbers of GPUs into a single compute fabric. "Ultimately, the network is the bottleneck for these workloads, because they're just far too large to run on a single GPU," Kewada said.
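To make the scale-up bottleneck concrete, here is a rough back-of-the-envelope sketch of how long a full gradient exchange takes across a training fabric. The model size, GPU count, and link speeds are illustrative assumptions, not figures from Salience Labs or the article; the 2*(N-1)/N factor is the standard ring all-reduce traffic estimate.

```python
# Back-of-the-envelope: time for one ring all-reduce across a GPU fabric.
# Every number below is an illustrative assumption, not a measurement.

def ring_allreduce_seconds(gradient_bytes: float, num_gpus: int, link_gbps: float) -> float:
    """Ring all-reduce moves roughly 2*(N-1)/N of the gradient volume per GPU."""
    bytes_moved = 2 * (num_gpus - 1) / num_gpus * gradient_bytes
    link_bytes_per_second = link_gbps * 1e9 / 8
    return bytes_moved / link_bytes_per_second

gradient_bytes = 140e9 * 2            # hypothetical 140B-parameter model in FP16
for gbps in (200, 400, 800):          # assumed per-GPU link speeds
    t = ring_allreduce_seconds(gradient_bytes, num_gpus=1024, link_gbps=gbps)
    print(f"{gbps} Gb/s links: ~{t:.2f} s per full gradient exchange")
```

Even with generous assumptions, the exchange time shrinks only as fast as the links speed up, which is why per-link bandwidth, rather than raw compute, increasingly sets the pace.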
Why Copper Can't Keep Up
Using photonics means moving 10 to 100 times more information back and forth from memory before outputting a single stream of inference, Raghunathan explained. This matters because AI workloads are changing fundamentally: users are moving from single prompts to running chains of tasks, and the hardware is already straining under current AI use.
"If it's a problem now, it becomes an even bigger problem when it comes to agentic workloads, and that's heavily latency- and balance-sensitive," Kewada explained.
Raghunathan warns that "the current approach is going to break down sooner than later." The answer, he argues, is photonics, where "you can squeeze a lot of bandwidth in a single strand of fiber, which is extremely small."
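As a rough illustration of that claim, consider how capacity multiplies across wavelengths in a single fiber. The channel count, symbol rate, and modulation below are assumptions for the sake of the math, not details of Xscape's design.

```python
# Rough aggregate-capacity math for one strand of fiber using
# wavelength-division multiplexing (WDM). All figures are assumptions.

def fiber_capacity_gbps(num_wavelengths: int, gbaud: float, bits_per_symbol: int) -> float:
    """Aggregate capacity = wavelengths * symbol rate * bits per symbol."""
    return num_wavelengths * gbaud * bits_per_symbol

# e.g. 64 wavelengths at 100 GBaud with PAM4 (2 bits per symbol)
total = fiber_capacity_gbps(num_wavelengths=64, gbaud=100, bits_per_symbol=2)
print(f"~{total / 1000:.1f} Tb/s on one fiber vs. ~0.2 Tb/s on a single copper lane")
```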
(Image credit: OpenAI video footage)
Industry Momentum and Standards
Dedicated AI data centers want to move from 200-gigabit-per-second (Gb/s) links to 400 Gb/s links as AI clusters behave less like traditional IT and more like large distributed machines. The photonics push inside AI data centers is increasingly being shaped by TSMC's COUPE (Compact Universal Photonic Engine) platform, which has become a key reference point for integrating photonic and electronic circuits on a wafer.
Other companies are active in the sector, too. Nvidia is helping define performance requirements that become standards by default. Broadcom has led the debate on co-packaged optics (CPO), betting that moving optics closer to the silicon is the only realistic way past copper's limits at scale. Marvell has pushed photonics-heavy designs for bigger AI clusters and backed them with acquisitions, including a deal to buy photonics startup Celestial AI for up to $5.5 billion.
The Routing Problem
Most of the focus has been on getting optics onto fiber, but once data is there, it still has to be routed around the cluster. "Billions of dollars have been spent on the I/O, but less has been thought about what you do once the data is there," Kewada said.
Salience Labs is betting on optical circuit switching. "Rather than it being an OEO [optical-electrical-optical] switch, it's a purely optical switch; we're never transforming that data into the electronic domain," Kewada explained. This matters because every optical-to-electrical-to-optical hop costs power and, crucially for AI, adds delay.
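A toy latency model shows why skipping the electronic domain matters. The hop counts, conversion costs, and fiber lengths below are invented for illustration, not vendor figures; the only physical constant used is roughly 5 ns of propagation delay per meter of fiber.

```python
# Toy comparison of path latency: OEO packet switches vs. a purely
# optical circuit switch. All hop counts and per-hop costs are assumptions.

FIBER_NS_PER_M = 5.0  # light in fiber covers roughly 0.2 m per nanosecond

def path_latency_ns(hops: int, per_hop_ns: float, fiber_m: float) -> float:
    """Per-hop switching/conversion cost plus time of flight in fiber."""
    return hops * per_hop_ns + fiber_m * FIBER_NS_PER_M

oeo_path     = path_latency_ns(hops=3, per_hop_ns=500.0, fiber_m=50)  # assumed OEO conversion cost per hop
optical_path = path_latency_ns(hops=3, per_hop_ns=5.0,   fiber_m=50)  # optical switch adds almost nothing
print(f"OEO path: ~{oeo_path:.0f} ns, all-optical path: ~{optical_path:.0f} ns")
```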
Manufacturing Constraints Loom
The industry faces significant manufacturing challenges. "I would say that it is mentally ready for it, but practically not ready for it," Raghunathan explained.
Many photonics components rely on III-V semiconductors (compound semiconductors made from elements in groups III and V of the periodic table), which aren't produced in anywhere near the same volumes as mainstream silicon. Raw-material supply is also tight, Kewada said. And that's before the packaging challenges.
"This requires sub-micron alignment," Raghunathan said. "They are typically glued onto that die. So now the industry has started thinking about, how do I make that interface detachable?"
(Image credit: Lightelligence)
The Inevitable Bottleneck
The push toward photonics feels inevitable, but inevitability doesn't mean the sector is ready. Manufacturing scale, raw materials, packaging, and reliability all remain open questions.
"The volume here is going to be two orders of magnitude higher than what the industry has seen so far," said Raghunathan. "The entire optics industry has never seen such a volume ever before."
Kewada shares this concern: "We're in for some interesting times if we see the attention on photonics continue to grow, and especially for next-generation bandwidth, if people look towards larger-scale deployment."
The AI industry has already lived through chip shortages, copper constraints, and power grid challenges. If it doesn't get ahead of photonics manufacturing constraints, it risks replaying the same shortages—but this time, the choke point could be the network itself.
