Meta Deploys Over 100,000 NVIDIA AI GPUs for Llama 4 Model Development
Meta sharpens its competitive edge in the race for AI superclusters by deploying more than 100,000 NVIDIA GPUs for Llama 4.
Meta CEO Mark Zuckerberg said the company is using a cluster of more than 100,000 NVIDIA H100 GPUs to train Llama 4, its next-generation AI model. The training cluster is believed to have cost more than $2 billion to build and ranks among the largest ever assembled, signaling how seriously Meta is investing in AI.
Speaking on an earnings call, Zuckerberg said the first version of Llama 4 should arrive later this year, with smaller models likely released first. “We’re training the Llama 4 models on a cluster bigger than anything I’ve seen reported for what others are doing,” he said of Meta’s investment in AI.
The deployment makes Meta one of the biggest customers of NVIDIA, which has led AI hardware innovation. NVIDIA CEO Jensen Huang recently acknowledged Meta’s large orders, suggesting the company has purchased more than 600,000 H100 GPUs in total. The partnership reflects a broader trend: as AI evolves rapidly, tech giants are spending heavily on high-performance AI infrastructure to stay competitive.
Meta’s buildout puts it in direct competition with Elon Musk’s xAI, which plans to double its AI supercomputer to 200,000 NVIDIA Hopper GPUs. As these giants race to assemble the most powerful AI clusters, industry observers anticipate major gains in AI model training efficiency and capability, despite the steep expense.
The development of Llama 4 underscores how companies are scaling up computational resources in the race to build larger and more sophisticated AI models.