Open-weight reasoning model that triggered the DeepSeek market shock
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| MATH | Competition-level mathematics problems | 97.3% | #10 / 49 |
| MMLU | Knowledge across 57 subjects, from STEM to the humanities | 90.8% | #19 / 53 |
| ARC-C | Grade-school science questions requiring reasoning | 97.1% | #19 / 40 |
| HumanEval | Coding ability: generating correct Python functions | 92.8% | #24 / 49 |
| Arena Elo | Human preference ranking via blind pairwise comparisons | 1358 | #25 / 41 |
| Terminal-Bench (Artificial Analysis) | Agentic terminal coding tasks requiring multi-step execution | 15.9% | #31 / 37 |
| SWE-bench | Real-world GitHub issue resolution | 49.2% | #34 / 38 |
| GPQA | PhD-level science questions that even experts struggle with | 71.5% | #38 / 54 |
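Unlike the percentage scores above, the Arena Elo figure (1358) is a rating fitted to blind pairwise human votes. Arena-style leaderboards typically fit a Bradley-Terry model to those votes; the classic online Elo update is a close approximation and easier to show compactly. Below is a minimal sketch of that update in Python; the function names, the K-factor of 32, and the 1300-rated opponent are illustrative assumptions, not details from this leaderboard.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Elo-model probability that the model rated r_a beats the one rated r_b."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0) -> tuple[float, float]:
    """Apply one online Elo update after a single blind A-vs-B vote.

    k is the conventional chess K-factor, chosen here purely for illustration.
    """
    e_a = expected_score(r_a, r_b)  # expected win probability for A
    s_a = 1.0 if a_won else 0.0     # actual outcome for A
    delta = k * (s_a - e_a)         # zero-sum rating transfer
    return r_a + delta, r_b - delta

# Hypothetical vote: a 1358-rated model beats a 1300-rated one.
r_winner, r_loser = elo_update(1358.0, 1300.0, a_won=True)
print(f"{r_winner:.1f}, {r_loser:.1f}")  # winner gains ~13 points, loser drops the same
```

Because the expected score already favors the higher-rated model, it gains little for beating a weaker one and loses more for an upset, which is how a single rating can summarize thousands of uneven matchups.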