benchmark.darvinyi.com

Benchmarks

Every major LLM benchmark explained — what it tests, how tasks work, and where models stand.

Agent Tasks · Active
2023

AgentBench

LLM agents across 8 interactive environments: OS, databases, web, games, and more.

1,091 tasks
6.8
Math · Active
2024

AIME

Annual invitational competition with olympiad-level math problems; each year's fresh set resists training-data contamination.

30 tasks
96.7%
Reasoning · Saturated
2018

ARC-Challenge

Grade-school science questions selected because both retrieval-based and word co-occurrence baselines answer them incorrectly.

2,590 tasks
98.1%
Reasoning · Nearing Saturation
2022

BIG-Bench Hard

23 hard BIG-Bench tasks on which earlier models trailed humans; chain-of-thought prompting is needed to exceed average human-rater performance.

6,511 tasks
93.1%
Agent Tasks · Active
2023

GAIA

Multi-step real-world tasks that are conceptually simple for humans but require tool-using agents.

466 tasks
67.0%
Reasoning · Active
2023

GPQA Diamond

PhD-level science questions designed to be "Google-proof": skilled non-experts struggle even with full web access.

198 tasks
94.3%
Math · Saturated
2021

GSM8K

Grade-school math word problems requiring 2 to 8 steps of arithmetic reasoning.

8,500 tasks
99.7%
Reasoning · Saturated
2019

HellaSwag

Commonsense reasoning — pick the most plausible continuation of an everyday activity.

70,000 tasks
96.4%
Coding · Saturated
2021

HumanEval / HumanEval+

Python function completion from docstrings, evaluated by test execution.

164 tasks
97.6%
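"Evaluated by test execution" means a completion is scored functionally: the generated function is executed against hidden unit tests, and the task passes only if every assertion holds. A minimal sketch of that loop, with a made-up task (the real harness additionally sandboxes execution in a subprocess with timeouts):

```python
# Sketch of HumanEval-style execution-based grading.
# The task, completion, and tests below are illustrative,
# not drawn from the actual dataset.

def run_check(candidate_src: str, test_src: str) -> bool:
    """Exec the model's completed function, then run its unit tests."""
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)  # defines the candidate function
        exec(test_src, namespace)       # assertions raise on failure
        return True
    except Exception:
        return False

# Hypothetical task: complete `add(a, b)` from its docstring.
completion = '''
def add(a, b):
    """Return the sum of a and b."""
    return a + b
'''
tests = '''
assert add(2, 3) == 5
assert add(-1, 1) == 0
'''
print(run_check(completion, tests))  # → True
```

Functional scoring is what distinguishes HumanEval from text-match metrics: a completion with different formatting but correct behavior still passes, while plausible-looking code that fails a test does not.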
Contamination-Resistant · Active
2024

LiveBench

Contamination-resistant benchmark refreshed monthly from recent sources and scored against ground-truth answers, with no LLM judge.

1,000 tasks
87.3%
Coding · Active
2024

LiveCodeBench

Contamination-resistant coding benchmark using freshly released competition problems.

1,055 tasks
91.7%
Human Preference · Active
2023

LMSYS Chatbot Arena

Crowdsourced human preference Elo ratings from millions of real user comparisons.

6,000,000 tasks
1549 Elo (Coding)
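Arena-style ratings come from pairwise votes: each user comparison nudges the winner's rating up and the loser's down by an amount that depends on how surprising the result was. A minimal sketch of the classic online Elo update (the K-factor and starting ratings are illustrative; Chatbot Arena's published leaderboard actually fits Bradley-Terry coefficients over all battles at once, which this incremental rule only approximates):

```python
# Sketch of the Elo update behind arena-style leaderboards.
def elo_update(r_a: float, r_b: float, a_wins: bool, k: float = 32.0):
    """Return updated (r_a, r_b) after one pairwise comparison."""
    # Expected win probability for A under the Elo logistic model.
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    score_a = 1.0 if a_wins else 0.0
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Two models at the default 1000 rating; A wins the vote.
print(elo_update(1000.0, 1000.0, a_wins=True))  # → (1016.0, 984.0)
```

An upset (a low-rated model beating a high-rated one) moves ratings more than an expected win, which is why millions of noisy individual votes still converge to a stable ordering.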
Math · Nearing Saturation
2021

MATH Benchmark

Competition-level math problems across 7 subjects, from AMC to AIME difficulty.

12,500 tasks
97.3%
Knowledge · Saturated
2020

MMLU / MMLU-Pro

Broad academic knowledge across 57 subjects — the standard knowledge benchmark.

14,042 tasks
92.7%
Coding · Contaminated
2023

SWE-bench

Can AI resolve real GitHub issues on production codebases?

2,294 tasks
80.9%
Coding · Active
2025

SWE-Lancer

Real Upwork freelance software tasks mapped to $1M in economic value.

1,488 tasks
66.3%
Agent Tasks · Active
2024

TheAgentCompany

A simulated software company where agents tackle realistic office-work tasks alongside 16 simulated AI colleagues.

175 tasks
30.3%
Reasoning · Active
2021

TruthfulQA

Can AI avoid repeating common myths and falsehoods that pervade its training data?

817 tasks
~78%
Agent Tasks · Active
2023

WebArena / VisualWebArena

Autonomous browser agents completing realistic tasks on functional sandboxed websites.

812 tasks
71.6%
Agent Tasks · Active
2024

τ-bench (tau-bench)

AI customer service agents that must follow policy while solving real customer problems.

200 tasks
84.7%