Haiku-tier model matching Sonnet 4 on coding at one-third the cost
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| OSWorld | Computer use in real desktop environments | 50.7% | #6 / 6 |
| MATH | Competition-level mathematics problems | 95.3% | #15 / 49 |
| Terminal | Agentic terminal coding tasks requiring multi-step execution | 41% | #15 / 37 |
| MMLU | Tests knowledge across 57 subjects from STEM to humanities | 90.8% | #17 / 53 |
| SWE-bench | Real-world GitHub issue resolution | 73.3% | #18 / 38 |
| ARC-AGI | Novel reasoning tasks requiring fluid intelligence | 5.1% | #19 / 21 |
| MMLU-Pro | Harder 10-option successor to MMLU; more reasoning-focused | 78.7% | #27 / 30 |
| LiveCodeBench | Contamination-free competitive programming (filtered by cutoff date) | 41.2% | #29 / 31 |
| MMMU | College-level multimodal reasoning across 30+ disciplines | 46.1% | #33 / 33 |
| HumanEval | Coding ability: generating correct Python functions | 85.2% | #36 / 49 |
| GPQA | PhD-level science questions even experts struggle with | 73% | #36 / 54 |