DeepSeek Officially Launches V3.2: Claims Reasoning Performance on Par with GPT-5

Published: December 2, 2025

DeepSeek launches V3.2 and V3.2-Speciale, claiming reasoning performance on par with GPT-5 and introducing native thinking-plus-tool-calling in an open-source model.

On December 1, 2025, DeepSeek released the full version of DeepSeek-V3.2 and its long-thinking enhanced variant DeepSeek-V3.2-Speciale, with the official website, mobile app, and API platform updated simultaneously.

According to official benchmarks, DeepSeek-V3.2 achieves reasoning capabilities comparable to GPT-5 and close to Gemini-3.0-Pro, while producing significantly shorter outputs than Kimi-K2-Thinking to reduce computational cost. The Speciale edition integrates DeepSeek-Math-V2’s theorem-proving strengths and introduces a breakthrough fusion of thinking mode with tool-calling, enabling the model to invoke external tools during the reasoning process.
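The thinking-plus-tool-calling combination would typically be exercised through an OpenAI-compatible chat-completions request. The sketch below shows what such a request body might look like; the model identifier, endpoint, and tool schema are illustrative assumptions, not official values from the announcement.

```python
# Sketch of a tool-calling request to a DeepSeek-style OpenAI-compatible
# chat-completions endpoint. Model name, base URL, and the tool itself are
# illustrative assumptions, not confirmed details from the release notes.
import json

BASE_URL = "https://api.deepseek.com"  # assumed OpenAI-compatible endpoint

# A hypothetical tool the model could invoke mid-reasoning.
tools = [{
    "type": "function",
    "function": {
        "name": "search_papers",  # hypothetical helper, for illustration only
        "description": "Search a literature index for relevant theorems.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def build_request(user_prompt: str) -> dict:
    """Assemble the JSON body for a thinking + tool-calling chat request."""
    return {
        "model": "deepseek-v3.2-speciale",  # assumed model identifier
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": tools,
        "tool_choice": "auto",  # let the model decide when to call the tool
    }

body = build_request("Prove that the sum of two even integers is even.")
print(json.dumps(body, indent=2))
```

In a real exchange, the server would return `tool_calls` entries interleaved with the model's reasoning, and the client would execute each tool and post the results back as `role: "tool"` messages.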

Trained on large-scale synthetic agent data and reinforcement learning across more than 1,800 environments and over 85,000 complex instructions, the new release demonstrates markedly improved generalization. DeepSeek claims it ranks highest among current open-source models on agent evaluations, further narrowing the gap with closed-source counterparts.

The experimental DeepSeek-V3.2-Exp, released two months earlier, validated the DeepSeek Sparse Attention (DSA) mechanism, showing no notable performance degradation across scenarios. The Speciale version is currently available as a temporary API for community research and testing.

Source: Phoenix Technology