DeepSeek Releases New Mathematical Reasoning Model “DeepSeekMath-V2”

Published: November 28, 2025
Reading Time: 1 min read

DeepSeekMath-V2 debuts with a self-verifying training framework, achieving gold-medal performance on top math contests and pushing forward a new path for reliable mathematical intelligence.

DeepSeek has unveiled DeepSeekMath-V2, a next-generation mathematical reasoning model built around a self-verifying training framework. Developed on top of DeepSeek V3.2-Exp-Base, the system uses a large-language-model–powered verification module to automatically check the correctness of generated mathematical proofs, while continuously refining its performance using increasingly challenging, high-difficulty samples.

The model has demonstrated top-tier results across major competitions—gold-medal-level performance on IMO 2025 and CMO 2024, and a near-perfect 118/120 on the Putnam 2024 exam. According to the team, these results validate the feasibility of “self-verifying reasoning pathways,” offering a promising direction for building trustworthy, high-reliability mathematical intelligence systems.

Both the code and the model weights have been open-sourced and are now available on Hugging Face and GitHub.