Meta Platforms Inc. has strengthened its partnership with Nvidia Corp., agreeing to deploy millions of Nvidia processors and networking systems over the next several years. The move further cements Meta’s reliance on Nvidia as it rapidly expands its artificial intelligence infrastructure.
Meta, which accounts for roughly 9% of Nvidia’s revenue, will for the first time also deploy Nvidia’s Grace central processing units (CPUs) in standalone servers. These servers are designed to handle demanding workloads on their own, rather than serving purely as companions to Nvidia’s AI accelerators.
The rollout will include current-generation Blackwell accelerators as well as the forthcoming Vera Rubin AI accelerators, providing the foundation for Meta’s next-generation AI clusters.
“We’re excited to expand our partnership with Nvidia to build leading-edge clusters using their Vera Rubin platform to deliver personal superintelligence to everyone in the world,” said Meta CEO Mark Zuckerberg.
Strategic Importance Amid Shifting AI Landscape
The agreement underscores Meta’s commitment to Nvidia at a time when rivals, including AMD and Intel, are offering alternatives, and many tech companies are exploring in-house AI chips. Nvidia’s systems are still widely regarded as the benchmark for AI infrastructure, generating hundreds of billions of dollars in revenue for the chipmaker.
Following the announcement, Nvidia and Meta shares rose roughly 1%, while AMD fell about 3% in late trading.
Scale and Cost of Deployment
Nvidia’s AI accelerators, essential for developing and running AI models, cost an average of $16,061 per chip, according to IDC estimates. At that price, even one million chips would put Meta’s bill above $16 billion, before networking equipment or newer-generation chips are factored in. Meta was already Nvidia’s second-largest customer, spending about $19 billion with the chipmaker in the previous fiscal year.
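A rough back-of-envelope check of that figure (illustrative only: the per-chip price is IDC’s estimated average, and the one-million chip count is a hypothetical lower bound for “millions”):

```python
# Back-of-envelope estimate of accelerator spend (illustrative figures only).
AVG_PRICE_PER_CHIP = 16_061   # USD, IDC's estimated average accelerator price
CHIP_COUNT = 1_000_000        # hypothetical: "millions" implies at least this many

total_usd = AVG_PRICE_PER_CHIP * CHIP_COUNT
print(f"${total_usd / 1e9:.1f}B")  # ~$16.1B, excluding networking gear and newer chips
```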
Ian Buck, Nvidia’s Vice President of Accelerated Computing, emphasized that Nvidia alone offers the combined breadth of hardware and software ecosystem needed to lead in AI.
“Grace CPUs are an excellent back-end data center solution. They can handle a wide range of workloads and deliver twice the performance per watt on back-end tasks compared to alternatives,” said Buck.
Meta’s AI Infrastructure Expansion
Meta has made AI its top priority, pledging hundreds of billions of dollars in infrastructure spending. For 2026, the company plans to invest in multiple gigawatt-scale data centers across the U.S., including in Louisiana, Ohio, and Indiana. One gigawatt of capacity can power approximately 750,000 homes.
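For scale, that comparison implies an average draw of roughly 1.3 kilowatts per household (a rough sanity check, assuming the 750,000-home figure refers to average rather than peak consumption):

```python
# Sanity check on the gigawatt-to-homes comparison (rough averages only).
capacity_watts = 1e9   # one gigawatt of data center capacity
homes = 750_000        # homes powered, per the comparison above

print(f"{capacity_watts / homes / 1e3:.2f} kW per home")  # ~1.33 kW average draw
```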
The inclusion of Nvidia CPUs in standalone servers represents a shift into traditional data center territory, previously dominated by Intel and AMD, and provides an alternative to in-house chips developed by major operators like Amazon Web Services.
Meta will use the new systems for AI model training, data processing, and machine learning workloads, both internally and via Nvidia-powered computing capacity offered to other companies.
“There are many different types of workloads for CPUs. What we’ve found is Grace is excellent for back-end operations, handling behind-the-scenes computing tasks efficiently,” said Buck.
Looking Ahead
As Meta accelerates its AI ambitions, the expanded partnership with Nvidia provides the company with cutting-edge hardware and software integration, positioning it to remain a dominant force in the global AI ecosystem.