Heaptalk, Jakarta — xAI, an artificial intelligence company led by Elon Musk, has secured $6 billion in a Series C funding round (12/23). The funds will be used to build out further infrastructure, ship products, and accelerate the research and development of future technologies.
“xAI is primarily focused on the development of advanced AI systems that are truthful, competent, and maximally beneficial for all of humanity,” the company stated on its official blog (12/23).
Key investors participated in this financing round, including A16Z, BlackRock, Fidelity Management & Research Company, Kingdom Holdings, Lightspeed, MGX, Morgan Stanley, OIA, QIA, Sequoia Capital, Valor Equity Partners, and Vy Capital. Strategic investors Nvidia and AMD also participated and continue to support the company in scaling its infrastructure.
Training Grok 3
Significant technical progress has been made since the Series B announcement in May 2024, spanning Colossus, Grok 2, the xAI API, Aurora, and Grok on X. Colossus, the world's largest AI supercomputer, gives the company a decisive hardware advantage: it uses an Nvidia full-stack reference design with 100,000 Nvidia Hopper GPUs. The company plans to double Colossus to a combined total of 200,000 Nvidia Hopper GPUs, enabled by the Nvidia Spectrum-X Ethernet networking platform.
Grok 2 is a language model developed with reasoning capabilities. Grok 3 is currently in training, and the company is now focused on launching innovative new consumer and enterprise products that leverage the power of Grok, Colossus, and X to transform how we live, work, and play.
Furthermore, the large language model has been embedded in the X platform to understand what is happening in real time. The company said, “We recently added new features enhancing the 𝕏 experience like web search, citations, and our recent image generator, Aurora.”
Aurora is xAI’s proprietary autoregressive image generation model for Grok, which enhances multimodal understanding, editing, and generation capabilities. The xAI API provides developers with programmatic access to the company’s foundation models. It is built on a new bespoke tech stack that allows multi-region inference deployments for low-latency access worldwide.
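For developers, this kind of programmatic access typically follows a chat-completions style HTTP pattern. As a minimal sketch only: the endpoint URL, model identifier, and payload shape below are assumptions for illustration, not confirmed details from the article.

```python
import json
import urllib.request

# Assumed endpoint and model name -- hypothetical, for illustration only.
API_URL = "https://api.x.ai/v1/chat/completions"


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble (but do not send) an HTTP request for a Grok completion."""
    payload = {
        "model": "grok-2",  # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


req = build_request("Summarize today's top story on X.", "YOUR_API_KEY")
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen`) would route to whichever regional inference deployment is nearest, which is what the multi-region setup described above is designed to make fast.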