I don’t know what your process is, but if someone else has reviewed a PR before I take my turn, I don’t ignore the code they’ve looked at. In fact I take the time to review both the original code and their comments or suggestions. That’s the point of review, after all: to verify the thinking behind the code as well as the code itself, and that applies equally to thoughts or code added by a reviewer.
> //...............................^^^^ Did we forget a closing ')'?
> ) {
> ...
> }
LLMs will only highlight exact problems they've seen before, miss other problems that linters would immediately find, and hallucinate new problems altogether.
While true for a subset of problems, linters will also miss stupid mistakes, because not everything is syntactic.
AI, for example, can catch that `phone.match(/\d{10}/)` might break because of spaces, while a linter has no concept of a "correct" regex as long as the pattern is syntactically valid.
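To make that concrete, here's a minimal sketch of the failure mode (the sample inputs and the stricter `isTenDigitPhone` helper are illustrative, not from the original comment):

```typescript
// A bare \d{10} is valid regex syntax, so a linter is satisfied,
// but it only matches ten consecutive digits:
const loose = /\d{10}/;

console.log(loose.test("5551234567"));   // true
console.log(loose.test("555 123 4567")); // false: spaces break the digit run

// One common fix: strip non-digits first, then anchor the pattern
// so extra digits can't slip through either.
function isTenDigitPhone(phone: string): boolean {
  const digitsOnly = phone.replace(/\D/g, "");
  return /^\d{10}$/.test(digitsOnly);
}

console.log(isTenDigitPhone("555 123 4567")); // true
console.log(isTenDigitPhone("55512345678"));  // false: eleven digits
```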
I don't think anyone is arguing that replacing linters with AI is the answer; rather, a combination of both is useful.
Linters are great at finding syntactic errors like the case you mentioned, but LLMs do a better job at finding logical flaws or enforcing things like non-syntactic naming conventions. The idea is not to replace linters but to complement them. In fact, one of the flows we're building next is fixing the linting issues that linters struggle to fix automatically.
I agree and disagree. You definitely need someone competent to take a look before merging code in, but you can do a first pass with an LLM to provide immediate feedback on any obvious issues, as defined in your internal engineering standards.
Especially helpful if you're on a team where there's wide variance in competency/experience levels.
This is where prompting and context are key: you need to keep the scope of the review limited and well-defined. And ideally, you want to validate the review with another LLM before passing it to the dev.
It still won't be perfect, but you'll definitely get to a point where it's a net positive overall, especially with frontier models.
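As a rough sketch of what that two-pass flow could look like (every name here is a hypothetical stand-in; `callModel` represents whatever LLM client you actually use, and the prompts are illustrative, not a real API):

```typescript
// Hypothetical two-pass review: a tightly scoped first pass, then a
// second pass validating the findings before anything reaches the dev.
type Finding = { file: string; line: number; comment: string };
type ModelCall = (prompt: string) => Promise<string>;

async function reviewDiff(
  callModel: ModelCall,
  diff: string,
  standards: string
): Promise<Finding[]> {
  // First pass: keep the scope narrow, only this diff, only these standards.
  const review = await callModel(
    `Review ONLY this diff against ONLY these standards:\n${standards}\n` +
      `Return findings as a JSON array of {file, line, comment}. ` +
      `Do not comment on anything outside the diff.\n\n${diff}`
  );
  const findings: Finding[] = JSON.parse(review);

  // Second pass: a validator drops hallucinated or out-of-scope findings
  // before they are forwarded to the developer.
  const validated = await callModel(
    `Keep only the findings this diff actually supports. ` +
      `Return the kept findings as the same JSON array.\n` +
      `Findings:\n${JSON.stringify(findings)}\n\nDiff:\n${diff}`
  );
  return JSON.parse(validated);
}
```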
That happens with human review too, and it often serves as an opportunity to clarify your reasoning to both the reviewer and yourself. If the code is easily misunderstood, you should take a second look at it and do something to make it easier to understand. Sometimes that process even turns up a problem that isn’t a bug now but could become one when the code is modified later.