1M-token context, stronger coding and vision. Its SWE-bench Pro score of 64.3% beats GPT-5.4 and Gemini 3.1 Pro.
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| SWE-bench | Real-world GitHub issue resolution | 87.6% | #1 / 38 |
| ARC-AGI | Novel reasoning tasks requiring fluid intelligence | 82.2% | #1 / 21 |
| GPQA | PhD-level science questions even experts struggle with | 94.2% | #2 / 54 |
| MMLU-Pro | Harder 10-option successor to MMLU; more reasoning-focused | 89.9% | #3 / 30 |
| MMMU | College-level multimodal reasoning across 30+ disciplines | 85.5% | #5 / 33 |
| LiveCodeBench | Contamination-free competitive programming (filtered by cutoff date) | 84.7% | #10 / 31 |