Oracle's Multi-Billion-Dollar AMD GPU Deal Shakes Up AI Chip Market
Oracle's recent multi-billion-dollar agreement with AMD to acquire 30,000 MI355X GPUs is shaking up the AI chip market, signaling a move away from sole reliance on Nvidia. The deal, announced during Oracle's Q2 2025 earnings call, covers AMD's high-performance MI355X GPUs, built on TSMC's 3nm node around AMD's CDNA 4 architecture. Each GPU carries 288GB of HBM3E memory with up to 8 TB/s of memory bandwidth, operates at a TDP of 1,100W, and requires liquid cooling. Oracle CTO Larry Ellison confirmed the significant investment, describing it as a "multi-billion dollar contract." These GPUs are positioned as a strong competitor to Nvidia's B100 and B200 offerings. While the location of the GPU cluster remains undisclosed, the deal's impact is undeniable.
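To put those per-GPU figures in perspective, a rough back-of-envelope calculation using only the numbers quoted above (30,000 GPUs, 288GB of HBM3E each, 1,100W TDP each) gives the aggregate memory and power footprint of the reported cluster. This is an illustrative sketch of scale, not a statement about the actual facility's provisioned power or usable capacity:

```python
# Back-of-envelope scale of the reported 30,000-GPU MI355X deployment,
# using only the per-GPU figures quoted in the article.
GPU_COUNT = 30_000
HBM_PER_GPU_GB = 288     # HBM3E capacity per GPU, from the article
TDP_PER_GPU_W = 1_100    # thermal design power per GPU, from the article

total_hbm_pb = GPU_COUNT * HBM_PER_GPU_GB / 1_000_000  # GB -> PB (decimal)
total_tdp_mw = GPU_COUNT * TDP_PER_GPU_W / 1_000_000   # W -> MW

print(f"Aggregate HBM3E capacity: {total_hbm_pb:.2f} PB")  # 8.64 PB
print(f"Aggregate GPU TDP:        {total_tdp_mw:.1f} MW")  # 33.0 MW
```

Roughly 33 MW of GPU thermal load alone (before CPUs, networking, storage, or cooling overhead) makes clear why the article notes a liquid-cooling requirement.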
This substantial investment in AMD chips is particularly noteworthy given Oracle's concurrent project, Stargate, which involves the deployment of 64,000 Nvidia GB200 GPUs at a Texas data center. Although Oracle anticipates signing contracts for Stargate "fairly soon," the AMD deal demonstrates a strategic diversification of hardware vendors. This trend reflects a broader shift among hyperscalers – large cloud service providers – to move beyond traditional AI chip leaders like Nvidia. Hyperscalers are increasingly seeking customized, efficient, and competitive solutions to meet their unique and evolving operational needs. The pursuit of such diverse partnerships aims to leverage the expertise and innovative technologies of multiple semiconductor companies.
This trend extends beyond Oracle. Google, for instance, has partnered with MediaTek for its next AI chip, supplementing its existing collaboration with Broadcom. Furthermore, companies like Meta are developing proprietary AI chips to reduce reliance on external vendors, even while remaining major customers of Nvidia. This internal chip development allows for optimization of processing power, memory architecture, and energy consumption, improving performance and efficiency while mitigating supply chain risks and price volatility. The growing complexity and scale of AI workloads, particularly the heavy demand for inference, are driving the push for both higher performance and greater energy efficiency in AI chips. The competitive landscape is evolving quickly, with hyperscalers adopting multifaceted strategies to maintain their edge in the rapidly developing AI sector.