First model to score 100% on AIME 2025 and 80% on SWE-bench
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| MMLU | Knowledge across 57 subjects, from STEM to the humanities | 93.8% | #1 / 53 |
| MATH | Competition-level mathematics problems | 100% | #1 / 49 |
| ARC-C | Grade-school science questions requiring reasoning | 98.9% | #1 / 40 |
| HellaSwag | Common-sense reasoning about everyday situations | 98.2% | #1 / 36 |
| HumanEval | Coding ability: generating correct Python functions | 96.9% | #2 / 49 |
| FrontierMath | Unpublished research-level mathematics problems | 40.3% | #2 / 4 |
| LiveCodeBench (vals.ai) | Contamination-free competitive programming (filtered by cutoff date) | 88% | #2 / 31 |
| GPQA | PhD-level science questions that even experts struggle with | 93.2% | #3 / 54 |
| SWE-bench | Real-world GitHub issue resolution | 80% | #5 / 38 |
| ARC-AGI | Novel reasoning tasks requiring fluid intelligence | 54.2% | #7 / 21 |
| Arena Elo | Human preference ranking via blind pairwise comparisons (see the sketch below) | 1458 | #10 / 41 |
| Terminal (Artificial Analysis) | Agentic terminal coding tasks requiring multi-step execution | 43.2% | #12 / 37 |
| MMMU (Artificial Analysis) | College-level multimodal reasoning across 30+ disciplines | 74.6% | #22 / 33 |
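For context on the Arena Elo figure: arena-style leaderboards derive ratings from many blind pairwise comparisons. Below is a minimal sketch of a standard Elo update, assuming an illustrative K-factor of 32 and example ratings; the function names and parameters are illustrative, not the leaderboard's actual implementation.

```python
# Sketch of an Elo-style update from one blind pairwise comparison,
# as used by arena-style human-preference leaderboards.
# K-factor and ratings here are assumptions for illustration only.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool,
               k: float = 32.0) -> tuple[float, float]:
    """Return both models' updated ratings after a single A-vs-B vote."""
    e_a = expected_score(r_a, r_b)          # A's expected win probability
    s_a = 1.0 if a_won else 0.0             # A's actual result (1 = win)
    r_a_new = r_a + k * (s_a - e_a)
    r_b_new = r_b + k * ((1.0 - s_a) - (1.0 - e_a))
    return r_a_new, r_b_new

# Example: a 1458-rated model beats a 1400-rated model.
new_a, new_b = elo_update(1458.0, 1400.0, a_won=True)
print(f"{new_a:.1f} {new_b:.1f}")  # winner gains ~13 points, loser drops the same
```

Because the expected score already favors the higher-rated model, beating a weaker opponent moves the rating only slightly, while an upset moves it a lot; aggregated over thousands of votes, this yields the stable ranking reported above.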