Open-weight model competitive with the GPT-4 class, with a massive fine-tuning ecosystem.
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| HellaSwag | Common-sense reasoning about everyday situations | 88% | #30 / 36 |
| ARC-C | Grade-school science questions requiring reasoning | 93% | #35 / 40 |
| HumanEval | Coding ability: generating correct Python functions | 81.7% | #40 / 49 |
| Arena Elo | Human preference ranking via blind comparisons | 1208 | #40 / 41 |
| MATH | Competition-level mathematics problems | 50.4% | #43 / 49 |
| MMLU | Knowledge across 57 subjects, from STEM to the humanities | 82% | #46 / 53 |
| GPQA | PhD-level science questions that even experts struggle with | 41.2% | #52 / 54 |
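Raw ranks are hard to compare across leaderboards of different sizes (#40 / 41 is near the bottom, while #40 / 49 is mid-pack). A minimal sketch that normalizes each rank into the fraction of models outranked; the helper name is illustrative, and the data is taken from the table above:

```python
# Hypothetical helper: convert a leaderboard rank like "#30 / 36"
# into the fraction of listed models this one outranks.
def outranked_fraction(rank: int, total: int) -> float:
    return (total - rank) / total

# (rank, leaderboard size) pairs from the benchmark table above
ranks = {
    "HellaSwag": (30, 36),
    "ARC-C": (35, 40),
    "HumanEval": (40, 49),
    "Arena Elo": (40, 41),
    "MATH": (43, 49),
    "MMLU": (46, 53),
    "GPQA": (52, 54),
}

for name, (rank, total) in ranks.items():
    print(f"{name}: outranks {outranked_fraction(rank, total):.0%} of models")
```

This makes it easier to see, for example, that the HumanEval position (#40 / 49) and the MATH position (#43 / 49) are closer in relative terms than the raw ranks suggest.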