DeepSeek: DeepSeek V3.2 Speciale
Model Type
Proprietary Model
API access only
Recommended Use Cases
Text Generation
The high-compute variant of DeepSeek-V3.2 (December 2025), designed exclusively for deep reasoning tasks. V3.2-Speciale pushes reasoning capability to its maximum, surpassing GPT-5 and rivaling Gemini-3.0-Pro, but it does not support tool-calling.
Per DeepSeek:
Notably, our high-compute variant, DeepSeek-V3.2-Speciale, surpasses GPT-5 and exhibits reasoning proficiency on par with Gemini-3.0-Pro.
Role in V3.2 Series
V3.2-Speciale is optimized for maximum reasoning performance at the cost of higher token usage. Unlike the standard V3.2, it does not support tool-calling, making it better suited to pure reasoning tasks than to agentic workflows.
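Since the model is reached over an API and omits tool-calling, a request sketch makes the constraint concrete. This is a minimal sketch assuming an OpenAI-style chat-completions payload and the hypothetical model identifier `deepseek-v3.2-speciale`; neither is confirmed by this page, so check the provider's API documentation for the actual values:

```python
import json

def build_speciale_request(prompt: str, max_tokens: int = 4096) -> str:
    """Return a JSON request body for a pure-reasoning chat completion."""
    payload = {
        # Assumed model id for illustration only.
        "model": "deepseek-v3.2-speciale",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        # Deliberately no "tools" or "tool_choice" keys:
        # V3.2-Speciale does not support tool-calling.
    }
    return json.dumps(payload)

body = build_speciale_request("Prove that the sum of two odd integers is even.")
```

For agentic workflows that need function calls, the standard V3.2 endpoint would be the appropriate target instead.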
Key Features
- Architecture: 685B MoE with DeepSeek Sparse Attention (DSA)
- Context Window: 128K tokens
- Specialization: Deep reasoning tasks only (no tool-calling)
- License: MIT
Achievements
- 🥇 Gold medal on 2025 International Mathematical Olympiad (IMO)
- 🥇 Gold medal on 2025 International Olympiad in Informatics (IOI)
- 🥇 Gold-level on CMO and ICPC World Finals 2025