arxiv:2603.07980

$OneMillion-Bench: How Far are Language Agents from Human Experts?

Published on Mar 9 · Submitted by Yang on Mar 10

Abstract

AI-generated summary

A new benchmark evaluates language models on complex, real-world professional tasks requiring multi-step reasoning, evidence resolution, and domain-specific decision-making across multiple industries.

As language models (LMs) evolve from chat assistants into long-horizon agents capable of multi-step reasoning and tool use, existing benchmarks remain largely confined to structured or exam-style tasks that fall short of real-world professional demands. To this end, we introduce $OneMillion-Bench, a benchmark of 400 expert-curated tasks spanning Law, Finance, Industry, Healthcare, and Natural Science, built to evaluate agents across economically consequential scenarios. Unlike prior work, the benchmark requires retrieving authoritative sources, resolving conflicting evidence, applying domain-specific rules, and making decisions under real-world constraints, where correctness depends as much on the reasoning process as on the final answer. We adopt a rubric-based evaluation protocol scoring factual accuracy, logical coherence, practical feasibility, and professional compliance, focused on expert-level problems to ensure meaningful differentiation across agents. Together, $OneMillion-Bench provides a unified testbed for assessing agentic reliability, professional depth, and practical readiness in domain-intensive scenarios.

Community

Paper author · Paper submitter

$OneMillion-Bench honestly reframes how we should be thinking about agentic evaluation.

So the core question we kept coming back to was: why are we still measuring AI with multiple choice? When a senior professional does real work, nobody asks them to pick A, B, C, or D. They get paid. So we asked — what if that was the metric?

We spent over two thousand hours with actual domain experts across Law, Finance, Healthcare, Natural Science, and Industry, building four hundred tasks that reflect what those professionals genuinely do on a Tuesday. Not edge cases. Not gotcha questions. Real work. And we priced each task according to actual market wages. Add it all up, and the benchmark is worth just over a million dollars — which is where the name comes from.

Now, the results. Claude Opus 4.6 leads the pack, earning roughly $484k with web search enabled. And the first time I say that number, people think it sounds impressive. Then I remind them: that's less than half the benchmark value. The pass rate — meaning scoring 70% or above on a task — sits around 43%. Most models are in the 20 to 30% range. That should be sobering.
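As a rough sketch of how "getting paid" works as a metric (the paper's exact aggregation formula isn't shown here), assume each task carries a market-wage price and a rubric score in [0, 1], earnings are credited in proportion to score, and a pass means scoring 0.7 or above. The function name, prices, and scores below are all illustrative assumptions:

```python
# Hypothetical aggregation sketch: task prices and scores are
# illustrative, not taken from the benchmark itself.

def aggregate(results, pass_threshold=0.7):
    """results: list of (price_usd, score) pairs, score in [0, 1].

    Returns total dollars earned (score-weighted) and the pass rate
    (fraction of tasks at or above the threshold).
    """
    earned = sum(price * score for price, score in results)
    passed = sum(1 for _, score in results if score >= pass_threshold)
    return earned, passed / len(results)

tasks = [(2500.0, 0.82), (4000.0, 0.55), (1200.0, 0.91), (3000.0, 0.40)]
earned, pass_rate = aggregate(tasks)
print(round(earned, 2), pass_rate)  # 6542.0 0.5
```

Under this framing, a model can earn partial credit on many tasks while still passing less than half of them, which matches the gap described between the ~$484k earned and the ~43% pass rate.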

A few things surprised us. Web search helps strong agents, but it actively hurts weaker ones. If your retrieval isn't solid, you are genuinely better off not searching at all. We also built in negative scoring for hallucinations, norm violations, unsafe outputs — because that's how real professional work gets evaluated. A hallucinated citation in a legal memo isn't neutral. It's a failure.
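The negative-scoring idea can be sketched as follows. The four rubric dimensions come from the abstract, but the weights, penalty sizes, and function name are my own illustrative assumptions, not the paper's actual scorer:

```python
# Hypothetical rubric scorer with negative penalties. Dimension
# weights and penalty magnitudes are assumptions for illustration.

def rubric_score(dims, hallucinations=0, norm_violations=0, unsafe=False):
    """dims: dict with the four rubric dimensions, each in [0, 1]."""
    weights = {"factual": 0.4, "logic": 0.25,
               "feasibility": 0.2, "compliance": 0.15}
    base = sum(weights[k] * dims[k] for k in weights)
    # A fabricated citation or norm violation subtracts from the score
    # rather than being ignored; unsafe output costs more.
    penalty = 0.1 * hallucinations + 0.1 * norm_violations \
        + (0.3 if unsafe else 0.0)
    return max(0.0, base - penalty)

s = rubric_score({"factual": 0.9, "logic": 0.8,
                  "feasibility": 0.7, "compliance": 1.0},
                 hallucinations=2)
print(round(s, 3))  # 0.65
```

The point of the design is visible in the example: an otherwise strong answer with two hallucinated citations drops from 0.85 to 0.65, below a 0.7 pass threshold.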

We expected the Deep Research pipelines — your o3s, your Sonars — to dominate. They didn't. Strong generalist models with search outperformed them. Longer pipelines do not equal better outcomes when the rubric is grounded in professional judgment.

Finance was consistently our hardest domain. And the most fragile capability we observed, across the board, was instruction following under search conditions. Models drift. They retrieve something and then forget what they were actually asked to do.

The honest takeaway is this: the gap between producing fluent output and doing the actual work correctly is still enormous. That's what we built this benchmark to show — and I think it succeeded.


You do know we can see you're the paper author, right? Even if you speak about it in third person?



Models citing this paper 0

Datasets citing this paper 1

Spaces citing this paper 0

Collections including this paper 0