Microsoft launches its second-generation AI chip on TSMC's 3nm process while Nvidia invests $2B more in CoreWeave to expand AI infrastructure, amid EU regulatory actions against AI systems.

Microsoft has begun deploying its second-generation Maia 200 AI accelerator chips in Azure data centers, marking a significant step in its custom silicon strategy. Built on TSMC's 3nm process node, the Maia 200 continues Microsoft's investment in vertically integrated AI infrastructure, succeeding the first-generation Maia 100. The chips are now operational in Azure's US Central region, targeting high-intensity AI workloads such as large language model training and inference.
The architecture reportedly improves on its predecessor in compute density and memory bandwidth, though Microsoft has not released detailed performance benchmarks. Unlike Nvidia's general-purpose GPU approach, the Maia series uses a specialized architecture optimized for transformer-based models. The deployment comes as Azure expands its AI cloud capabilities to compete with Google's TPU infrastructure and AWS's Trainium chips.
Meanwhile, Nvidia announced an additional $2 billion investment in cloud provider CoreWeave, accelerating their joint plan to deploy over 5 GW of AI computing capacity by 2030. This follows Nvidia's previous $1.5 billion investment and includes deployment of Nvidia's new Vera CPU architecture. The expansion targets growing demand for AI-as-a-service infrastructure, particularly from enterprises deploying large multimodal models.
In regulatory developments, the European Commission opened a formal Digital Services Act investigation into xAI's Grok system over its generation of sexualized imagery. The probe focuses on content moderation failures around non-consensual deepfakes and could result in fines of up to 6% of global revenue. It follows similar EU actions against major platforms and signals increasing scrutiny of generative AI outputs.
Other significant developments:
- Anthropic launched direct app integrations within Claude, allowing interaction with tools like Asana, Figma, and Slack through chat
- Qwen released Qwen3-Max-Thinking, claiming reasoning performance comparable to leading proprietary models
- Intel showcased its Panther Lake CPUs built on the 18A process, showing competitive results against Apple's M5 in specific workloads
- Synthesia raised $200M at a $4B valuation for enterprise AI video generation
- Google settled a $68M lawsuit over unauthorized voice recording by Google Assistant
These moves highlight the industry's dual focus: advancing hardware efficiency for AI workloads while navigating increasingly complex regulatory environments. The Maia 200's real-world performance in Azure workloads will be closely watched as enterprises weigh cloud AI infrastructure options against rising operational costs.
Sources: Microsoft Technical Community | CoreWeave | EU Commission Proceedings