Incremental upgrade with improved reliability and instruction following
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| HellaSwag | Common-sense reasoning about everyday situations | 97.8% | #3 / 36 |
| ARC-C | Grade-school science questions requiring reasoning | 98.4% | #5 / 40 |
| HumanEval | Coding ability: generating correct Python functions | 95.6% | #7 / 49 |
| MATH | Competition-level mathematics problems | 97.8% | #7 / 49 |
| LiveCodeBench (vals.ai) | Contamination-free competitive programming (filtered by cutoff date) | 85.5% | #7 / 31 |
| MMLU | Knowledge across 57 subjects from STEM to humanities | 92.4% | #9 / 53 |
| Terminal (Artificial Analysis) | Agentic terminal coding tasks requiring multi-step execution | 45.5% | #10 / 37 |
| SWE-bench | Real-world GitHub issue resolution | 76.3% | #12 / 38 |
| GPQA | PhD-level science questions that even experts struggle with | 88.1% | #13 / 54 |
| ARC-AGI (ARC Prize) | Novel reasoning tasks requiring fluid intelligence | 18.3% | #14 / 21 |
| Arena Elo | Human preference ranking via blind comparisons | 1425 | #16 / 41 |
| MMMU (Artificial Analysis) | College-level multimodal reasoning across 30+ disciplines | 75.5% | #18 / 33 |