The first open mixture-of-experts (MoE) model from Meta, natively multimodal, with a 1M-token context window.
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| ARC-C | Grade-school science questions requiring reasoning | 97.1% | #18 / 40 |
| HellaSwag | Common-sense reasoning about everyday situations | 93.4% | #18 / 36 |
| MMMU | College-level multimodal reasoning across 30+ disciplines | 62.1% | #28 / 33 |
| HumanEval | Coding ability: generating correct Python functions | 91.8% | #29 / 49 |
| Arena Elo | Human preference ranking via blind comparisons | 1328 | #29 / 41 |
| MATH | Competition-level mathematics problems | 82.6% | #31 / 49 |
| SWE-bench | Real-world GitHub issue resolution | 52.4% | #32 / 38 |
| Terminal-Bench | Agentic terminal coding tasks requiring multi-step execution | 6.8% | #33 / 37 |
| GPQA | PhD-level science questions that even experts struggle with | 68.2% | #39 / 54 |
| MMLU | Knowledge across 57 subjects, from STEM to humanities | 85.5% | #43 / 53 |