405B-class performance distilled into 70B parameters
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| MMLU-Pro | Harder 10-option successor to MMLU; more reasoning-focused | 69.9% | #29 / 30 |
| ARC-C | Grade-school science questions requiring reasoning | 94.8% | #30 / 40 |
| LiveCodeBench | Contamination-free competitive programming (filtered by cutoff date) | 36.3% | #30 / 31 |
| HellaSwag | Common-sense reasoning about everyday situations | 86.2% | #31 / 36 |
| HumanEval | Coding ability: generating correct Python functions | 88.4% | #33 / 49 |
| MATH | Competition-level mathematics problems | 77% | #34 / 49 |
| Arena Elo | Human preference ranking via blind comparisons | 1247 | #38 / 41 |
| MMLU | Knowledge across 57 subjects, from STEM to humanities | 86% | #41 / 53 |
| GPQA | PhD-level science questions that even experts struggle with | 49% | #48 / 54 |
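Unlike the percentage scores above, Arena Elo is a relative rating: what matters is the gap to other models, which maps to an expected head-to-head win rate under the standard Elo formula. A minimal sketch (the 1350 opponent rating is a hypothetical example, not a figure from this table):

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of a model rated r_a against one rated r_b,
    using the standard Elo logistic curve with a 400-point scale."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# This model at 1247 vs. a hypothetical higher-rated model at 1350:
p_win = elo_expected_score(1247, 1350)
```

A 100-point deficit corresponds to roughly a 36% expected win rate, so small Elo gaps near the top of the leaderboard still imply fairly close head-to-head matchups.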