An open-weight model that kickstarted the open-source LLM ecosystem.
| Benchmark | Score | Rank |
|---|---|---|
| HellaSwag (common-sense reasoning about everyday situations) | 85.9% | #32 / 36 |
| ARC-C (grade-school science questions requiring reasoning) | 85.3% | #36 / 40 |
| HumanEval (coding ability: generating correct Python functions) | 48.8% | #45 / 49 |
| MATH (competition-level mathematics problems) | 25.4% | #49 / 49 |
| MMLU (knowledge across 57 subjects, from STEM to the humanities) | 68.9% | #51 / 53 |