First AI model instrumental in building its own successor
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| Terminal-Bench | Agentic terminal coding tasks requiring multi-step execution | 77.3% | #1 / 37 |
| OSWorld | Computer use in real desktop environments | 74% | #2 / 6 |
| LiveCodeBench (vals.ai) | Contamination-free competitive programming (filtered by cutoff date) | 87.3% | #3 / 31 |
| MMLU | Tests knowledge across 57 subjects from STEM to humanities | 93% | #4 / 53 |
| GPQA | PhD-level science questions even experts struggle with | 92.6% | #5 / 54 |
| SWE-bench | Real-world GitHub issue resolution | 80% | #6 / 38 |
| ARC-AGI | Novel reasoning tasks requiring fluid intelligence | 54% | #8 / 21 |
| MATH | Competition-level mathematics problems | 96% | #14 / 49 |
| MMMU (Artificial Analysis) | College-level multimodal reasoning across 30+ disciplines | 78.5% | #16 / 33 |
| HumanEval | Coding ability: generating correct Python functions | 93% | #23 / 49 |