Is the purpose of these tools really to spend less time? I think their main value is reducing mistakes through having one extra set of eyes, even if mechanical ones, looking at the code.
As the sole developer of a non-trivial open source project, I recently started using CodeRabbit. I was very skeptical about it, but right on the first PR it found a bug that my CI tests did not catch, so I decided to keep it after that.
Gemini Code Assist, on the other hand, made a first suggestion that would actually have introduced a bug, so that was out immediately.
What you are saying is true, and this is the feedback I hear every time I talk to a small team of developers (generally fewer than 15).
At this stage, you don't need "another set of eyes" because breaking something is not that big of a deal: a mistake is not going to cost you massive amounts of money.
All these teams need is a sanity check. They also generally (even without the AI code reviewers) do not have a strong code review process.
This is why, in the article, I have clearly mentioned that these are learnings based on talking to engineers at Series B and Series C startups.