An open-weight MoE model that matched frontier closed models on coding
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| Arena Elo | Human preference ranking via blind comparisons | 1473 | #7 / 41 |
| MATH | Competition-level mathematics problems | 97.4% | #8 / 49 |
| MMLU | Knowledge across 57 subjects, from STEM to the humanities | 89.5% | #22 / 53 |
| Terminal-Bench | Agentic terminal coding tasks requiring multi-step execution | 31.1% | #24 / 37 |
| SWE-bench | Real-world GitHub issue resolution | 65.8% | #26 / 38 |
| GPQA | PhD-level science questions that even experts struggle with | 75.1% | #33 / 54 |
| HumanEval | Coding ability: generating correct Python functions | 85.7% | #35 / 49 |