Hacker News
LDB: Large Language Model Debugger via Verifying Runtime Execution Step by Step (github.com/floridsleeves)
2 points by panqueca on April 25, 2024 | hide | past | favorite | 1 comment


HumanEval Benchmark: 95.1 @ GPT-3.5

I wonder if it can be combined with projects like SWE-agent to build powerful yet open-source coding agents.

- https://paperswithcode.com/sota/code-generation-on-humaneval

- https://github.com/princeton-nlp/SWE-agent
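The "verifying runtime execution step by step" idea in the title can be illustrated with a small sketch: run the candidate program under a tracer, record intermediate variable states, and hand those states to a verifier (in LDB's case, an LLM) to check against the intended behavior. This is only a minimal illustration of the general idea, not LDB's actual implementation; `collect_trace` and `buggy_sum` are hypothetical names, and no LLM call is made here.

```python
import sys

def collect_trace(func, *args):
    """Record (relative line number, local variables) at each executed line of func."""
    trace = []

    def tracer(frame, event, arg):
        # Only record line events inside the function under inspection.
        if event == "line" and frame.f_code is func.__code__:
            trace.append((frame.f_lineno - func.__code__.co_firstlineno,
                          dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, trace

def buggy_sum(nums):
    # Intentional off-by-one bug: skips the last element.
    total = 0
    for x in nums[:-1]:
        total += x
    return total

result, trace = collect_trace(buggy_sum, [1, 2, 3])
# Each trace entry is a runtime "step" a verifier could be asked to check
# against the intended behavior (here: sum all elements).
for lineno, state in trace:
    print(lineno, state)
print("result:", result)  # prints 3, exposing the bug (expected 6)
```

Feeding per-step variable states rather than just the final output gives the verifier a chance to localize where the execution first diverges from the intent, which is the core debugging signal such an approach relies on.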



