The largest open-weight model at 405B parameters, delivering GPT-4-class performance.
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| ARC-C | Grade-school science questions requiring reasoning | 96.9% | #20 / 40 |
| HellaSwag | Common-sense reasoning about everyday situations | 89.2% | #25 / 36 |
| MMLU | Tests knowledge across 57 subjects from STEM to humanities | 88.6% | #28 / 53 |
| HumanEval | Coding ability: generating correct Python functions | 89% | #32 / 49 |
| MATH | Competition-level mathematics problems | 73.8% | #37 / 49 |
| Arena Elo | Human preference ranking via blind comparisons | 1221 | #39 / 41 |
| GPQA | PhD-level science questions even experts struggle with | 51.1% | #47 / 54 |
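Raw ranks like "#20 / 40" are easier to compare across benchmarks of different sizes as percentiles. A minimal sketch (the rank/total pairs are copied from the table above; the percentile convention, rank 1 = best, is an assumption):

```python
# Hypothetical sketch: convert leaderboard ranks ("#20 / 40") into
# top-X% figures, assuming rank 1 is the best-ranked model.
ranks = {
    "ARC-C": (20, 40),
    "HellaSwag": (25, 36),
    "MMLU": (28, 53),
    "HumanEval": (32, 49),
    "MATH": (37, 49),
    "Arena Elo": (39, 41),
    "GPQA": (47, 54),
}

for name, (rank, total) in ranks.items():
    pct = 100 * rank / total  # smaller is better
    print(f"{name}: rank {rank} of {total} -> top {pct:.0f}%")
```

This makes it easy to see that a mid-pack rank on a large leaderboard can be a stronger showing than a similar rank on a small one.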