Hacker News: lieret's comments

[On the SWE-bench team] We read and analyzed a lot of trajectories, but it seems like only recently have models started to exploit this, and only in a small fraction of instances. But yes, it clearly shouldn't have happened (and is now fixed in the new container versions).


[On the SWE-bench team] As someone pointed out, SWE-bench Verified is a subset of tasks that were reviewed to be solvable (i.e., they have enough context in the task description) and that are scored with unit tests that aren't overly specific, so valid solutions aren't ruled out.

We've all read and analyzed a large number of agent trajectories. This loophole seems to be something that popped up with the more recent models, and we simply weren't aware of it.

As discussed in the GitHub issue, there's a fix in the new version of the SWE-bench containers (currently being rolled out) that makes sure the relevant commits aren't available.
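To make the idea concrete, here is an illustrative sketch (not the actual SWE-bench fix, and with made-up demo paths and commit contents): rewind a repo to the task's base commit and prune every later object, so an agent can't `git log` its way to the real fix.

```shell
#!/bin/sh
# Demo: strip all history after a "base" commit from a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name demo

echo "v1" > file.txt
git add file.txt
git commit -qm "base commit (what the agent should see)"
base=$(git rev-parse HEAD)

echo "v2" > file.txt
git commit -qam "upstream fix (must not be visible)"
fix=$(git rev-parse HEAD)

# Rewind the branch, then drop every remaining reference to the newer commit.
git reset -q --hard "$base"
git update-ref -d ORIG_HEAD 2>/dev/null || true  # reset leaves ORIG_HEAD at the fix
git reflog expire --expire=now --all
git gc --prune=now --quiet

if git cat-file -e "$fix" 2>/dev/null; then
    echo "fix commit still reachable"
else
    echo "fix commit pruned"
fi
```

After the reflog is expired, the newer commit is unreachable and `git gc --prune=now` deletes it outright, so even `git cat-file` can no longer retrieve it.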

Part of what makes SWE-bench a very interesting benchmark is the enormous action space available to the agents that compete on it. However, that also means unexpected things happen as models get better. We're currently working on making all agent runs easily browsable on a website (rather than requiring a download from our AWS buckets) to get even more eyes on the trajectories. Thanks to everyone who uncovered this loophole.


[Also on the SWE-bench team] Part of the reason this didn't surface earlier is that it only seems to affect more recent models, maybe as a result of reward hacking during post-training. We're currently working on making trajectories easier to access for everyone through a web tool (rather than having to download things from AWS) to get even more eyes on the trajectories. The interface will also include search & LM-inspection tools to specifically look for anything that might qualify as cheating.


We evaluated the new GPT models with a minimal agent on SWE-bench Verified. GPT-5 scores 65%, mini 60%, nano 35%. Still behind Opus 4.1 (68%), on par with Sonnet 4 (65%). But a lot cheaper, especially mini!

Cost is tricky to compare with agents, because agents succeed fast but fail slowly: if an agent doesn't succeed quickly, it keeps trying until it either succeeds or hits a runtime limit. And that's (almost) what happens in practice, so failed runs are the expensive ones.

But even so, it's very clear that

1. GPT-5 is cheaper than Sonnet 4.

2. GPT-5-mini is _incredibly_ cheap for what it provides (you sacrifice only some 5 percentage points, but pay maybe 1/5th of the total cost).
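Since failed runs still burn tokens, cost per *resolved* task is the honest way to compare. A tiny back-of-the-envelope sketch: the solve rates below are from this comment, but the per-run dollar costs are made-up placeholders, not measured numbers.

```python
# Solve rates from the comment above; per-run costs are hypothetical placeholders.
runs = {
    "gpt-5":      {"solve_rate": 0.65, "cost_per_run": 1.00},  # assumed $/instance
    "gpt-5-mini": {"solve_rate": 0.60, "cost_per_run": 0.20},  # ~1/5th the cost
}

# Dividing by the solve rate charges each success for the failures around it.
cost_per_solved = {name: r["cost_per_run"] / r["solve_rate"]
                   for name, r in runs.items()}
for name, cost in cost_per_solved.items():
    print(f"{name}: ${cost:.2f} per resolved instance")
```

With these placeholder prices, mini's small accuracy drop barely dents its cost advantage, which is the point of the comparison.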

All of the code to reproduce our numbers is open-source. There's a box at the bottom with the exact command to run to reproduce them.

Also very happy to answer questions here!


I'm curious if this might help with Cursor's lighting-money-on-fire problem?

https://pivot-to-ai.com/2025/07/09/cursor-tries-setting-less...

Is this enough of a price difference to make Cursor profitable?


I think gpt-5-mini should really help them. At least judging from these benchmark scores, there shouldn't be a huge performance degradation from letting gpt-5-mini drive most of the workflow. Of course, users might still want to run with the latest and greatest (but even gpt-5 will be cheaper, I think).


In 2024, we developed SWE-bench and SWE-agent at Princeton University and helped kickstart the coding agent revolution.

Back then, LMs were optimized to be great at chatting, but not much else. This meant that agent scaffolds had to get very creative (and complicated) to make LMs perform useful work.

But in 2025, LMs are actively optimized for agentic coding, and we ask:

*What is the simplest coding agent that could still score near SotA on the benchmarks?*

*Turns out, it just requires 100 lines of code!*

And this system still *resolves 65% of all GitHub issues in the SWE-bench Verified benchmark* with Sonnet 4 (for comparison, when Anthropic launched Sonnet 4, they reported 70% with their own scaffold, which was never made public).

Honestly, we're all pretty stunned ourselves. We've now spent more than a year developing SWE-agent, and we would not have thought that such a small system could perform nearly as well.

I'll link to the project below (all open-source, of course). The hello-world example is incredibly short & simple (and literally what gave us the 65%). But it is also meant as a serious command-line tool + research project, so we provide a Claude Code-style UI & some utilities on top of that.

We have some team members from Princeton/Stanford here today, let us know if you have any questions/feedback :)


Is there an option to learn from mistakes? Most coding agents I've tried, including the Sonnet 4-based one, will make the same mistake again and again in a new chat.

It would be great to have the agent add a memory (even locally) to avoid repeating mistakes, check for new versions of libraries, and write a list of tasks before execution (similar to Kiro and Trae SOLO).


Sorry, I missed that!

That's a little bit out of scope for this project (we were aiming for the bare minimum needed to get a performant agent, and unfortunately learning from mistakes also isn't measured by most benchmarks, since they require tasks to be solved independently).

However, you can always add "memory" to agents by asking them to write to and read from a file in your repo (CLAUDE.md, .cursorrules, etc.). You can also automate this process and have a mechanism by which the LM decides itself when to put something there, similar to how memories work in ChatGPT. I think Cursor also recently started doing that.
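A minimal sketch of that file-based memory, under stated assumptions: the file name `AGENT_MEMORY.md` and the `MEMORY:` line prefix are conventions I made up here, not part of any of the tools mentioned. The idea is just to prepend stored lessons to the prompt on every run and persist any lines the model explicitly flags.

```python
from pathlib import Path

MEMORY_FILE = Path("AGENT_MEMORY.md")  # hypothetical file name

def load_memory() -> str:
    """Return stored lessons, to be prepended to the system prompt."""
    if MEMORY_FILE.exists():
        return "Lessons from previous sessions:\n" + MEMORY_FILE.read_text()
    return ""

def save_lessons(reply: str) -> None:
    """Persist lines the model prefixed with 'MEMORY:' (an assumed convention)."""
    lessons = [line[len("MEMORY:"):].strip()
               for line in reply.splitlines() if line.startswith("MEMORY:")]
    if lessons:
        with MEMORY_FILE.open("a") as f:
            for lesson in lessons:
                f.write(f"- {lesson}\n")
```

Call `load_memory()` when building the system prompt and `save_lessons()` on each model reply; whether the model actually emits useful `MEMORY:` lines is then purely a prompting question.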

> checking for new versions of libraries, and write a list of tasks first before the execution

Just add it to the prompt! That's not always desired behavior for a command-line helper, but I think it shouldn't be too hard to get it to do that through prompting alone.

