Reflection AI, a startup founded by two former Google DeepMind researchers, has raised $2 billion, lifting its valuation to $8 billion, a steep jump from its previous valuation of $545 million.
Initially focused on developing autonomous coding agents, Reflection AI is now positioning itself both as an open-source alternative to closed frontier labs such as OpenAI and Anthropic and as a Western counterpart to the Chinese AI company DeepSeek.
The round drew notable investors including Nvidia, former Google CEO Eric Schmidt, Citi, and 1789 Capital, the private equity firm backed by Donald Trump Jr. Existing investors Lightspeed and Sequoia also participated.
Founded in 2024 by Misha Laskin and Ioannis Antonoglou, Reflection AI builds tools that automate software development, one of the fastest-growing applications of artificial intelligence. Alongside the fundraising, the company said it has assembled top-tier talent from DeepMind and OpenAI, developed an advanced AI training stack it promises to make accessible to all, and identified a scalable commercial model that aligns with its open intelligence strategy.
Reflection AI currently employs around 60 people, mostly AI researchers and engineers working on infrastructure, training data, and algorithm development. Laskin, the company’s CEO, said Reflection AI has secured a compute cluster and plans to release a frontier language model next year, trained on “tens of trillions of tokens.”
In a post on X, Reflection AI stated, “We built something once thought possible only inside the world’s top labs: a large-scale LLM and reinforcement learning platform capable of training massive Mixture-of-Experts (MoE) models at frontier scale.” The company said the approach has proved especially effective in autonomous coding, and that it intends to extend these methods to general agentic reasoning.
The Mixture-of-Experts (MoE) architecture powers today’s frontier large language models (LLMs), and until recently only large, closed AI laboratories could train such models at scale. DeepSeek was the first company to do so openly, followed by other Chinese models such as Qwen and Kimi.
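For readers unfamiliar with the term, the toy sketch below (plain Python with NumPy) illustrates the core idea of an MoE layer: a learned router sends each token to only its top-k experts, so total parameter count can grow far faster than per-token compute. All names and dimensions here are hypothetical; this is a generic illustration, not Reflection AI's platform.

    # Minimal MoE layer sketch; hypothetical sizes, illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    D_MODEL = 8      # hidden size of the model
    D_FF = 16        # hidden size of each expert's feed-forward net
    N_EXPERTS = 4    # total experts in the layer
    TOP_K = 2        # experts activated per token (sparse routing)

    # Each expert is a small two-layer feed-forward network.
    experts = [
        (rng.normal(size=(D_MODEL, D_FF)), rng.normal(size=(D_FF, D_MODEL)))
        for _ in range(N_EXPERTS)
    ]
    # The router scores every token against every expert.
    router_w = rng.normal(size=(D_MODEL, N_EXPERTS))

    def moe_layer(x):
        """Route each token to its top-k experts and mix their outputs."""
        logits = x @ router_w                            # (tokens, experts)
        chosen = np.argsort(logits, axis=-1)[:, -TOP_K:] # top-k expert ids
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            # Softmax over only the selected experts' scores.
            scores = logits[t, chosen[t]]
            gates = np.exp(scores - scores.max())
            gates /= gates.sum()
            for gate, e in zip(gates, chosen[t]):
                w1, w2 = experts[e]
                out[t] += gate * (np.maximum(x[t] @ w1, 0) @ w2)  # ReLU FFN
        return out

    tokens = rng.normal(size=(5, D_MODEL))  # a batch of 5 token embeddings
    print(moe_layer(tokens).shape)          # (5, 8): only 2 of 4 experts ran per token

Because each token activates only a fraction of the experts, an MoE model can hold many more parameters than a dense model with similar per-token compute, which is what makes the architecture attractive at frontier scale.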
Laskin emphasized the urgency of the situation, stating, “DeepSeek and Qwen and all these models are our wake-up call because if we don’t do anything about it, then effectively, the global standard of intelligence will be built by someone else. It won’t be built by America.”
Reflection AI has not yet released its first model, but Laskin said the initial offering will be primarily text-based, with multimodal capabilities to follow. The company will use the new funds to acquire the computational resources needed to train its models, with the first release expected early next year.
Source: Original article