Matched Opus 4 on coding at one-fifth the price, 1M context window
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| ARC-C | Grade-school science questions requiring reasoning | 97.5% | #14 / 40 |
| HumanEval | Coding ability: generating correct Python functions | 94.1% | #15 / 49 |
| ARC-AGI (ARC Prize) | Novel reasoning tasks requiring fluid intelligence | 5.9% | #18 / 21 |
| SWE-bench | Real-world GitHub issue resolution | 72.7% | #20 / 38 |
| MMLU-Pro (vals.ai) | Harder 10-option successor to MMLU; more reasoning-focused | 83.9% | #20 / 30 |
| HellaSwag | Common-sense reasoning about everyday situations | 92.4% | #21 / 36 |
| MMMU (vals.ai) | College-level multimodal reasoning across 30+ disciplines | 74.9% | #21 / 33 |
| Arena Elo | Human preference ranking via blind comparisons | 1368 | #23 / 41 |
| Terminal (Artificial Analysis) | Agentic terminal coding tasks requiring multi-step execution | 31.1% | #23 / 37 |
| MMLU | Tests knowledge across 57 subjects, from STEM to humanities | 89.2% | #24 / 53 |
| MATH | Competition-level mathematics problems | 86.2% | #26 / 49 |
| LiveCodeBench (vals.ai) | Contamination-free competitive programming (problems filtered by cutoff date) | 62.4% | #26 / 31 |
| GPQA | PhD-level science questions even experts struggle with | 74.8% | #34 / 54 |