With AI, it is possible to accelerate research, solve problems more quickly, and move society toward a future (sometimes dystopian) of automated systems and humanoid robots. Although much of that remains a distant dream, AI models must keep progressing, and NVIDIA graphics cards are currently the hardware of choice for training them. For this reason, Mark Zuckerberg's Meta plans to acquire no fewer than 350,000 NVIDIA H100 GPUs.
Artificial intelligence has become a top priority for most tech companies. Giants like Google, Microsoft, and Meta are investing billions of dollars in AI advancements. Microsoft's multi-billion-dollar investment has gone directly to OpenAI, the creator of ChatGPT, and through this partnership Microsoft has integrated various AI features into its programs and operating system. In fact, Windows 12 is expected to focus heavily on artificial intelligence.
Meta will have 350,000 NVIDIA H100 GPUs by the end of the year to train AI models
Google, for its part, has been developing Bard, a ChatGPT rival that hasn't achieved the same results, along with other interesting AI models. For example, Google's Instrument Playground lets users create AI-generated music with more than 100 instruments from around the world. Meta, meanwhile, has also been developing AI models, albeit with a different focus.
Its Llama and Llama 2 language models are free to use and aimed at researchers at institutions such as universities, as well as at entrepreneurs. Meta is now expanding its AI resources: Mark Zuckerberg plans to hold a total of 350,000 NVIDIA H100 GPUs by the end of the year, to be used for training the large language models at the heart of modern AI.
Meta will spend approximately $10.5 billion on NVIDIA GPUs
Meta's CEO will invest billions of dollars to boost AI and develop artificial general intelligence (AGI). Given that an H100 costs around $30,000, purchasing 350,000 of them would amount to a staggering $10.5 billion spent on NVIDIA GPUs alone. According to Zuckerberg, by the end of the year Meta will have the equivalent of about 600,000 H100s in computational power, counting this latest purchase.
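The $10.5 billion figure is simple back-of-the-envelope arithmetic from the article's two assumptions (unit price and GPU count); a minimal sketch:

```python
# Back-of-the-envelope check of the article's figures.
# Both constants are the article's estimates, not confirmed prices.
H100_UNIT_PRICE_USD = 30_000   # assumed street price per H100
GPU_COUNT = 350_000            # H100s Meta plans to hold by year's end

total_cost = H100_UNIT_PRICE_USD * GPU_COUNT
print(f"${total_cost / 1e9:.1f} billion")  # → $10.5 billion
```

Note that the real total depends heavily on negotiated bulk pricing, so this is an upper-bound estimate at list-price assumptions.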
Mark Zuckerberg has invested more in GPUs than companies such as Microsoft, Google, or Amazon, each of which purchased between 50,000 and 150,000 H100 GPUs last year. Elon Musk, for his part, bought 10,000 NVIDIA GPUs for his generative AI project. These numbers pale in comparison to Meta's purchase, and the company's next LLM is expected to be significantly better as a result.
Meta is currently using its existing GPUs to train Llama 3, which aims to compete with ChatGPT on an open-source basis. Zuckerberg also says the company will continue working on the Metaverse, envisioning a future in which virtual worlds, along with the characters that inhabit them and accompany us on our adventures, are generated by AI.
The news of Meta's multi-billion-dollar investment in 350,000 NVIDIA H100 GPUs for AI appeared first on El Chapuzas Informático.