So cool, and what's underappreciated imo: 17k tokens/sec doesn't just change deployment economics, it changes what evaluation means. Static MMLU-style tests were designed around human-paced interaction. At this throughput you can run tens of thousands of adversarial agent interactions in the time a standard benchmark takes. Speed doesn't make static evals better; it makes them even more obviously inadequate.
The split here is between AI as amplifier vs. AI as replacement. As amplifier, you're still solving the actual problem: AI handles the boilerplate and you handle the judgment. As replacement, you lose the feedback loop that makes you better over time. The developers who thrive will be the ones who know which problems still require them to be in the loop. That's a skill that takes deliberate practice and intuition to develop, and almost no AI tooling is designed to teach it.
77.1% on ARC-AGI-2 and still can't stop adding drive-by refactors. ARC-AGI-2 tests novel pattern induction; it's genuinely hard to fake, and the improvement is real. But it doesn't measure task scoping, instruction adherence, or knowing when to stop. Those are the capabilities practitioners actually need from a coding agent. We have excellent benchmarks for reasoning. We have almost nothing that measures reliability in agentic loops. That gap explains this thread.
The 8% one-shot / 50% unbounded injection numbers from the system card are more honest than most labs publish, and they highlight exactly why you can't evaluate safety with static tests. An attacker doesn't get one shot — they iterate. The right metric isn't "did it resist this prompt" but "how many attempts until it breaks." That's inherently an adversarial, multi-turn evaluation. Single-pass safety benchmarks are measuring the wrong thing for the same reason single-pass capability benchmarks are: real-world performance is sequential and adaptive.
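The "attempts until it breaks" metric is also straightforward to harness. A minimal sketch, assuming hypothetical target/attacker/judge callables (none of this is from the system card):

    # Sketch: measure attempts-until-break instead of single-shot refusal rate.
    # target(prompt) -> reply, attacker(history) -> next adaptive prompt,
    # judge(reply) -> True if the model broke. All three are hypothetical stand-ins.
    import statistics

    def attempts_until_break(target, attacker, judge, budget=50):
        history = []
        for attempt in range(1, budget + 1):
            prompt = attacker(history)          # attacker adapts to earlier replies
            reply = target(prompt)
            history.append((prompt, reply))
            if judge(reply):
                return attempt                  # broke on this attempt
        return None                             # survived the whole budget

    def summarize(trials):
        broken = [t for t in trials if t is not None]
        return {
            "break_rate": len(broken) / len(trials),
            "median_attempts_to_break": statistics.median(broken) if broken else None,
        }

Run that across many independent trials and the break rate at different budgets tells you far more than a one-shot pass/fail.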
This is the right direction for understanding AI capabilities. Static benchmarks let models memorize answers, while a 300-turn Magic game with hidden information and sequencing decisions doesn't. The fact that frontier model ratings are "artificially low" because of tooling bugs is itself useful data: raw capability ≠ practical performance under real constraints. Curious whether you're seeing consistent skill gaps between models in specific phases (opening mulligan decisions vs. late-game combat math), or if the rankings are uniform across game stages.
A lot of models (including Opus) keep insisting in their reasoning traces that going first can be a bad idea for control decks, etc., which I find pretty interesting - my understanding is that the consensus among pros is closer to "you should go first 99.999% of the time", but the models seem to want there to be more nuance. Beyond that, most of the really interesting blunders that I've dug into have turned out to be problems with the tooling (either actual bugs, or MCP tools with affordances that are a poor fit for how LLMs assume they work). I'm hoping that I'm close to the end of those and am gonna start getting to the real limitations of the models soon.
Like you said, there's a lot of complexity in the decision making here. To get statistically significant results we need to run these simulations many times. We record latency, tool calls, token consumption, etc., as well as results. Since we log each action and its final outcome, we can analyze later how decisions correlate with success (rough sketch of the per-turn record below). Our hypothesis is that games provide an important benchmark for how these models adapt as they become more capable.
For example, I'm sure an RL bot could figure out an optimal strategy over millions of simulations that defeats current LLMs with context; however, this may not always hold true.
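To give a concrete (simplified) picture of the logging, the per-turn record is roughly this shape - field names here are illustrative, not the exact schema:

    # Sketch of per-turn logging so decisions can be correlated with outcomes later.
    # Field names are illustrative, not the real schema.
    import json, time

    def log_turn(path, game_id, turn, model, decision, tool_calls, tokens, latency_s):
        record = {
            "game_id": game_id,
            "turn": turn,
            "model": model,
            "decision": decision,          # e.g. "fortify phalanx #160 at Roma"
            "tool_calls": tool_calls,
            "tokens": tokens,
            "latency_s": latency_s,
            "ts": time.time(),
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    # After the game, join these records with the final result on game_id and
    # look at which decision patterns show up more often in wins than in losses.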
I've been thinking about how we can orchestrate the long-term planning logic better in this benchmark too. Similar to how Claude Code has a planning step, maybe every X turns we introduce a planning calibration step, much like how people are able to plan for multi-step turns (rough sketch after the examples below).
I.e., we often see the same logic repeat:
"Turn 70: I have 4 cities with 24 military units and 3 workers. Critical issues: Roma and Antium are flagged as undefended. I see phalanx #160 at Roma (10,58) and phalanx #171 at Antium (13,59) - they need to fortify for defense."
"Turn 70: I have 4 cities with 24 military units and 3 workers. Critical issues: Roma and Antium are flagged as undefended. I see phalanx #160 at Roma (10,58) and phalanx #171 at Antium (13,59) - they need to fortify for defense. I have a massive army of warriors that should be
and just earlier
"Turn 68: I have 4 cities, opponent location unknown. Critical: Southgate (7,60) is undefended - Phalanx #167 is at (7,60), so I need to fortify it there. I have 23 military units but no enemy sighted yet. Priority: 1) Garrison Southgate with phalanx #167, 2) Fortify defenders in cities, 3) ..."
Opus 4.6 just dropped, so we’re tossing it straight into the arena.
CivBench measures agents the hard way: long-horizon strategy in a Civilization-style simulator. The benchmark is full of hidden information, shifting incentives, and an adversary that's actively trying to ruin your plan. Hundreds of turns where small mistakes compound.
In 15 minutes we're running an exhibition match: Claude Opus 4.6 vs ChatGPT 5.2, live.
One note on the setup: we’re running GPT-5.2 right now, and we’ll switch to 5.3-Codex the moment it’s available via API.
After the game, we'll have full receipts: replay, logs, and transparent Elo. No “trust us” charts. If you want to see how these models actually behave under pressure (not just how they test), come watch live.
Feedback welcome, especially from people working on agent evals or RL.
We have a standard harness for each of the models that we test. Each prompt includes the rules, access to memory, and a lookup of the complete ruleset. The prompt adapts, adding legal actions per turn and guidance depending on the stage of the game (updated based on the player's technological progress).
Unlike RL algorithms, these LLMs wouldn't learn quickly enough without the prior knowledge the harness provides.
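To make that concrete, the per-turn prompt assembly looks roughly like this (simplified sketch; helper names and stage thresholds are illustrative, not the exact harness code):

    # Sketch of per-turn prompt assembly: rules + memory + legal actions + stage
    # guidance. Helper names and thresholds are illustrative, not the real harness.
    def build_prompt(ruleset, memory, legal_actions, tech_level):
        stage = "early" if tech_level < 10 else "mid" if tech_level < 30 else "late"
        return "\n\n".join([
            "RULES:\n" + ruleset.summary(),
            "MEMORY:\n" + memory.recall(),
            "LEGAL ACTIONS THIS TURN:\n" + "\n".join(legal_actions),
            f"STAGE GUIDANCE ({stage} game):\n" + ruleset.guidance(stage),
        ])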