Nvidia Corp. (NVDA) is extending its stranglehold on global AI infrastructure, and the numbers tell a compelling story. The company's GB300 platform is on track to become the backbone of AI data centers worldwide, with industry analysts forecasting it will power somewhere between 70% and 80% of all AI server rack shipments next year.
GB300 Takes Center Stage
According to TrendForce, servers built around GB300 chips entered mass production last quarter and are quickly becoming the go-to choice for Taiwanese server manufacturers heading into 2026. TrendForce analyst Frank Kung noted that these systems are positioned as core models for the coming year's production runs.
This year marks a pivotal moment for AI servers overall. Shipments of GPU-based rack systems are expected to surge, driven not just by Nvidia's GB300 and its next-generation Vera Rubin 200 platform, but also by Advanced Micro Devices Inc.'s (AMD) MI400 offerings.
At the same time, cloud providers like Alphabet Inc.'s (GOOGL) Google, Amazon.com Inc.'s (AMZN) Amazon Web Services, and Meta Platforms Inc. (META) are ramping up their own custom ASIC-based AI infrastructure, according to the Taipei Times. So while Nvidia dominates, the competitive landscape is definitely getting more crowded.
Industry analysts point out that while the GB300 represents incremental improvements within Nvidia's Blackwell lineup, the Vera Rubin 200 platform is expected to see broader adoption after the third quarter. That platform brings a significant jump in power consumption, which has implications for data center design.
TrendForce analyst Fiona Chiu highlighted that higher power density combined with ongoing AI data center expansion is fueling stronger demand for liquid cooling solutions. Translation: these chips run hot, and traditional air cooling isn't going to cut it at scale.
Massive Deployments Already Underway
The GB300 isn't just a theoretical winner. Large-scale deployments are already happening, and the numbers are staggering.
Back in October, Nscale deepened its partnership with Microsoft Corp. (MSFT) on a massive AI infrastructure buildout centered squarely on Nvidia's GB300 GPUs. The plan calls for deploying approximately 200,000 units across the U.S. and Europe.
In Texas alone, Nscale is planning to install roughly 104,000 GB300 GPUs at a 240MW AI campus. Services for Microsoft are slated to begin in the third quarter of 2026, with long-term expansion targeting 1.2GW of capacity. Microsoft also holds an option for a second phase of about 700MW starting in late 2027.
Nscale's European deployments are equally ambitious. The company plans to deploy GB300 systems including around 12,600 GPUs in Portugal starting in early 2026, about 23,000 GPUs at its U.K. campus from 2027, and roughly 52,000 GPUs in Norway.