Reasoning at 75% lower cost than o1 made chain-of-thought economically viable.
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| MATH | Competition-level mathematics problems | 97.3% | #9 / 49 |
| HumanEval | Coding ability: generating correct Python functions | 93.1% | #22 / 49 |
| Arena Elo | Human preference ranking via blind comparisons | 1361 | #24 / 41 |
| GPQA | PhD-level science questions even experts struggle with | 77% | #31 / 54 |
| SWE-bench | Real-world GitHub issue resolution | 49.3% | #33 / 38 |
| MMLU | Knowledge across 57 subjects, from STEM to humanities | 87.5% | #34 / 53 |
| Terminal (Artificial Analysis) | Agentic terminal coding tasks requiring multi-step execution | 6.1% | #35 / 37 |