Chinese AI lab DeepSeek has unveiled its reasoning model, DeepSeek-R1, which it says rivals OpenAI’s o1 on several key AI benchmarks. The model, now available on the AI development platform Hugging Face under an MIT license, is open for commercial use without restrictions.
DeepSeek claims that R1 surpasses o1 on benchmarks such as AIME, MATH-500, and SWE-bench Verified. AIME is drawn from a competitive high school mathematics exam, MATH-500 is a collection of mathematical word problems, and SWE-bench Verified assesses real-world programming tasks.
How R1 works and what sets it apart
R1 is designed as a reasoning model, meaning it effectively checks its own work to avoid common pitfalls that trip up typical AI systems. While this self-checking process takes slightly longer — often seconds to minutes more — it tends to produce more reliable answers, especially in domains such as science, mathematics, and physics.
The full model contains 671 billion parameters; in general, models with more parameters are better at understanding and solving complex problems. Alongside the full version of R1, DeepSeek has also released smaller “distilled” versions, ranging from 1.5 billion to 70 billion parameters. The smallest versions are light enough to run on a standard laptop, while the full-scale R1 requires far more robust hardware.
For developers who need access to the full R1 but lack the necessary infrastructure, DeepSeek offers the model through its API at costs 90%-95% lower than those of OpenAI’s o1, making it an attractive option for many users.
Challenges and geopolitical implications
However, DeepSeek’s Chinese origins bring certain limitations. The model’s outputs must comply with regulations imposed by China’s internet watchdog, ensuring that its responses align with “core socialist values.” In practice, this means R1 declines to answer questions about politically sensitive topics, such as the Tiananmen Square crackdown or Taiwan’s autonomy. Many other Chinese AI models likewise avoid controversial discussions to remain in compliance with the government.
The launch of R1 coincides with rising tensions between the U.S. and China over AI technology. Recently, the Biden administration proposed stricter export rules, limiting China’s access to advanced AI chips and models. If implemented, these rules would tighten existing restrictions on the tools needed to develop cutting-edge AI systems.
In a policy recommendation last week, OpenAI urged the U.S. government to prioritize American AI development to maintain its competitive edge. Chris Lehane, OpenAI’s VP of policy, identified DeepSeek’s parent company, High-Flyer Capital Management, as a competitor to watch.
A growing trend in Chinese AI
> That a *second* paper dropped with tons of RL flywheel secrets and *multimodal* o1-style reasoning is not on my bingo card today. Kimi’s (another startup) and DeepSeek’s papers remarkably converged on similar findings:
>
> No need for complex tree search like MCTS. Just linearize…
>
> — Jim Fan (@DrJimFan) January 20, 2025
DeepSeek is not alone in challenging U.S. dominance in AI. Other Chinese labs, such as Alibaba and Moonshot AI (maker of Kimi), have also developed models they claim rival OpenAI’s o1. DeepSeek, however, was the first to preview its reasoning model, R1, back in November.
Dean Ball, an AI researcher at George Mason University, noted that these developments suggest Chinese AI labs are becoming “fast followers.” He highlighted the accessibility of DeepSeek’s distilled models, which allow powerful reasoning capabilities to operate on local hardware.
With models like R1, Chinese AI firms continue to push boundaries despite regulatory challenges and geopolitical tensions.