Elon Musk’s AI Supercomputer: The Race to Build the Ultimate AI


**Elon Musk’s AI Startup Sets Out to Build a Supercomputer for Grok Chatbot**

LA News Center has revealed an ambitious plan by Elon Musk’s artificial intelligence (AI) startup, xAI. The company aims to build a massive supercomputer to serve as the backbone for the next generation of its AI chatbot, Grok.

A Chatbot with Supercharged Abilities

In a presentation to investors, Musk said he wants the supercomputer operational by fall 2025. The machine would dwarf existing GPU clusters. Sources at LA News Center indicate that xAI is weighing a partnership with Oracle, the tech giant, to realize the project.

Joining Forces with Chip Giants

The proposed supercomputer’s core will consist of interconnected Nvidia H100 graphics processing units (GPUs). According to Musk, it would be at least four times the size of today’s largest GPU clusters. Nvidia’s H100 GPUs are highly sought after for AI workloads and are often hard to acquire because demand far outstrips supply.

A Challenger in the AI Arena

Musk founded xAI in 2023 with the goal of becoming a formidable competitor to the Microsoft-backed OpenAI and to Alphabet’s Google AI. Musk was himself a co-founder of OpenAI.

The Growth of Grok

Earlier this year, Musk disclosed that training the Grok 2 model required a staggering 20,000 Nvidia H100 GPUs. He anticipates that the upcoming Grok 3 model and its successors will demand a fivefold leap to 100,000 Nvidia H100 chips.
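The scale claims above can be checked with some back-of-envelope arithmetic. The GPU counts (20,000 for Grok 2; 100,000 for Grok 3) come from the article itself; the "largest current cluster" figure below is merely what the "at least four times larger" claim implies, not an independently confirmed number.

```python
# GPU figures as cited in the article.
grok2_gpus = 20_000    # H100s reportedly used to train Grok 2
grok3_gpus = 100_000   # H100s Musk projects for Grok 3 and beyond

# Scale-up from Grok 2 to Grok 3.
scale_up = grok3_gpus / grok2_gpus
print(f"Grok 3 scale-up over Grok 2: {scale_up:.0f}x")  # 5x

# If the planned cluster is "at least four times" the largest existing
# GPU cluster, the implied upper bound on today's largest is roughly:
implied_largest_today = grok3_gpus / 4
print(f"Implied largest current cluster: ~{implied_largest_today:,.0f} GPUs")
```

So the jump from Grok 2 to Grok 3 is a fivefold increase in GPU count, and the "four times larger" claim implies today's leading clusters sit at roughly 25,000 GPUs or fewer.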

  • Elon Musk’s xAI plans to construct a supercomputer to empower the next version of its Grok AI chatbot.
  • The targeted completion date for the supercomputer is fall 2025.
  • xAI may team up with Oracle to develop this colossal computer.
  • The supercomputer will be built from Nvidia H100 GPUs and would be at least four times larger than the most prominent current GPU clusters.
  • xAI aims to challenge established AI players like OpenAI and Google AI.
  • The Grok 2 model required 20,000 Nvidia H100 GPUs for training, whereas future models are expected to necessitate 100,000 Nvidia H100 chips.