Natively multimodal with voice, 2x faster and 50% cheaper than GPT-4 Turbo
| Benchmark | Description | Score | Rank |
|---|---|---|---|
| HellaSwag | Common-sense reasoning about everyday situations | 95.3% | #13 / 36 |
| ARC-C | Grade-school science questions requiring reasoning | 96.7% | #22 / 40 |
| MMLU | Knowledge across 57 subjects from STEM to humanities | 88.7% | #26 / 53 |
| HumanEval | Coding ability: generating correct Python functions | 90.2% | #30 / 49 |
| MMLU-Pro | Harder 10-option successor to MMLU; more reasoning-focused | 62.7% | #30 / 30 |
| MMMU | College-level multimodal reasoning across 30+ disciplines | 56.6% | #30 / 33 |
| LiveCodeBench | Contamination-free competitive programming (filtered by cutoff date) | 26.4% | #31 / 31 |
| Arena Elo | Human-preference ranking via blind comparisons | 1285 | #33 / 41 |
| MATH | Competition-level mathematics problems | 76.6% | #35 / 49 |
| SWE-bench | Real-world GitHub issue resolution | 33.2% | #38 / 38 |
| GPQA | PhD-level science questions that even experts struggle with | 53.6% | #46 / 54 |