DeepSeek, an artificial intelligence company based in Hangzhou, China, has unveiled its latest innovation: DeepSeek-Prover-V2, an advanced open-source AI model specifically designed for formal mathematical theorem proving.
DeepSeek-Prover-V2 is built to analyse and verify mathematical proofs for logical consistency. It works in the Lean 4 programming language, in which every proof step is machine-checked, and represents a significant step forward in automated reasoning and mathematics-focused AI.
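To see what "machine-checked" means in practice, here is a hand-written toy theorem in Lean 4 (not taken from DeepSeek's benchmarks): the statement and each tactic step must pass Lean's kernel before the proof is accepted.

```lean
-- A toy Lean 4 theorem: the sum of two even naturals is even.
-- Lean only accepts this once the kernel verifies every step.
theorem add_even (a b : Nat)
    (ha : ∃ k, a = 2 * k) (hb : ∃ k, b = 2 * k) :
    ∃ k, a + b = 2 * k := by
  obtain ⟨m, hm⟩ := ha   -- unpack a = 2 * m
  obtain ⟨n, hn⟩ := hb   -- unpack b = 2 * n
  exact ⟨m + n, by omega⟩ -- a + b = 2 * (m + n) by linear arithmetic
```

Statements like this, drawn from benchmarks such as miniF2F, are exactly what a prover model must produce and have verified.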
We just released DeepSeek-Prover V2.
– Solves nearly 90% of miniF2F problems
– Significantly improves the SoTA performance on the PutnamBench
– Achieves a non-trivial pass rate on AIME 24 & 25 problems in their formal version
Github: https://t.co/E3p8SWFpvi
— Zhihong Shao (@zhs05232838) April 30, 2025
The model is available in a 7-billion-parameter version and a much larger 671-billion-parameter version. The larger model is based on DeepSeek-V3-Base, while the smaller builds on DeepSeek-Prover-V1.5-Base and offers a context window of up to 32,000 tokens. DeepSeek-Prover-V2 can solve a wide range of mathematical problems, from high-school to college level, and can also find and correct errors in existing proofs. Its applications extend to serving as a teaching tool that provides step-by-step explanations, and to assisting researchers and mathematicians in exploring and validating new theorems.
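As a hand-written illustration of what "finding and correcting errors in existing proofs" looks like at the Lean level (this example is mine, not from DeepSeek's materials), consider a proof that fails because the wrong lemma is applied, alongside its repaired form:

```lean
-- Broken attempt: `Nat.succ_le_succ` expects a proof of m ≤ n,
-- not a bare Nat, so Lean rejects this step with a type error.
-- theorem bad (n : Nat) : n ≤ n + 1 := Nat.succ_le_succ n

-- Repaired proof: `Nat.le_succ n : n ≤ n + 1` is the right lemma.
theorem fixed (n : Nat) : n ≤ n + 1 := Nat.le_succ n
```

Because Lean reports exactly where and why a proof fails, a prover model can use that feedback to locate and patch the faulty step.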
A significant technical advancement in this release is its cold-start training pipeline, which prompts the base model to decompose complex problems into smaller, manageable subgoals. This decomposition improves the model's ability to handle intricate proofs and supports reinforcement learning from the initial training phase.
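Subgoal decomposition has a direct counterpart in Lean 4 itself: a goal can be split into named intermediate steps with `have`, each proved independently. The following hand-written sketch (again, not from DeepSeek's training data) shows the shape of such a decomposition:

```lean
-- Decomposing one goal into two named subgoals with `have`.
theorem sum_bound (a b : Nat) (ha : a ≤ 10) (hb : b ≤ 10) :
    a + b ≤ 20 := by
  -- Subgoal 1: replace b with its upper bound.
  have h1 : a + b ≤ a + 10 := by omega
  -- Subgoal 2: replace a with its upper bound.
  have h2 : a + 10 ≤ 20 := by omega
  -- Combine the two subgoals.
  exact Nat.le_trans h1 h2
```

Each `have` is a self-contained proof obligation, which is what makes this style amenable to solving subgoals separately and recombining them.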
China is cooking!
DeepSeek has just dropped Deepseek-Prover-V2-671B, a new 671B parameters math model.
R2 is coming veeeery soon…
— Angry Tom (@AngryTomtweets) April 30, 2025
DeepSeek-Prover-V2 continues the company’s tradition of open-source releases, with the model available for download on both GitHub and Hugging Face. While the full architectural details and the complete training dataset remain undisclosed, the release demonstrates how iterative improvements in AI training can yield significant gains in specialised capabilities. The code is released under the MIT License and the model weights under a permissive licence, facilitating widespread use in academic and commercial contexts.
With DeepSeek-Prover-V2, DeepSeek is pushing the boundaries of AI in mathematics, offering a powerful tool for educators, students, and researchers worldwide, and further establishing China’s presence in the global AI landscape.