Opus-level performance at one-fifth the cost; the default model on claude.ai.
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| OSWorld | Computer use in real desktop environments | 72.5% | #4 / 6 |
| ARC-AGI | Novel reasoning tasks requiring fluid intelligence | 60.4% | #6 / 21 |
| SWE-bench | Real-world GitHub issue resolution | 79.6% | #7 / 38 |
| Terminal-Bench | Agentic terminal coding tasks requiring multi-step execution | 59.1% | #7 / 37 |
| MMMU | College-level multimodal reasoning across 30+ disciplines | 83.6% | #8 / 33 |
| Arena Elo | Human preference ranking via blind pairwise comparisons (see the Elo sketch below the table) | 1459 | #9 / 41 |
| MMLU-Pro | Harder 10-option successor to MMLU; more reasoning-focused | 87.3% | #10 / 30 |
| LiveCodeBench | Contamination-free competitive programming (problems filtered by cutoff date) | 82.1% | #13 / 31 |
| MMLU | Knowledge across 57 subjects, from STEM to the humanities | 89.3% | #23 / 53 |
| HumanEval | Coding ability: generating correct Python functions | 92.1% | #26 / 49 |
| MATH | Competition-level mathematics problems | 85.3% | #29 / 49 |
| GPQA | PhD-level science questions that even experts struggle with | 74.1% | #35 / 54 |
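
For context on the Arena Elo figure: under the standard Elo model, a rating gap maps to an expected win rate via E = 1 / (1 + 10^((R_b − R_a) / 400)). Chatbot Arena fits its ratings with a Bradley-Terry-style procedure rather than classic per-game Elo updates, so treat the sketch below as illustrative only; the 1459 comes from the table, while the 1409-rated rival is hypothetical.

```python
def elo_expected_score(rating_a: float, rating_b: float, scale: float = 400.0) -> float:
    """Expected probability that A is preferred over B under the standard Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / scale))

# Illustrative only: 1459 is the Arena Elo from the table; 1409 is a hypothetical rival.
print(f"{elo_expected_score(1459, 1409):.1%}")  # ~57.1% expected preference rate
```

A 50-point gap thus corresponds to roughly a 57% preference rate in blind pairwise comparisons, which is why even modest Elo differences near the top of the leaderboard are meaningful.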