Matched frontier labs on reasoning; trained on a 200K-H100 cluster
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| MMLU | Tests knowledge across 57 subjects, from STEM to the humanities | 92.7% | #6 / 53 |
| ARC-C | Grade-school science questions requiring reasoning | 97.5% | #15 / 40 |
| HumanEval | Coding ability: generating correct Python functions | 93.5% | #19 / 49 |
| MATH | Competition-level mathematics problems | 93.3% | #20 / 49 |
| Arena Elo | Human preference ranking via blind pairwise comparisons | 1402 | #20 / 41 |
| GPQA | PhD-level science questions that even experts struggle with | 84.6% | #21 / 54 |
| LiveCodeBench (vals.ai) | Contamination-free competitive programming, filtered by cutoff date | 76.2% | #21 / 31 |
| MMLU-Pro (vals.ai) | Harder ten-option successor to MMLU; more reasoning-focused | 81.4% | #24 / 30 |
| SWE-bench | Real-world GitHub issue resolution | 63.8% | #27 / 38 |
| Terminal (Artificial Analysis) | Agentic terminal coding tasks requiring multi-step execution | 17.4% | #29 / 37 |