When it comes to SQL writing we're more relevant. When it comes to speed, it's hard to benchmark exactly against Cursor and Windsurf, but we're obviously a bit slower (around ~600ms on average), and we know what we have to improve to speed it up.
Next on the list is next-edit suggestions dedicated to data work, especially with dbt (or SQL transformations), where changing one query means the downstream queries have to change as well.
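To make that concrete, here's a minimal sketch of the dependency lookup involved (not our actual implementation, and the model names are made up): parse the ref() calls out of each dbt-style model and walk the graph downstream, so an edit to one model surfaces every query that may need to change with it.

    # Hypothetical sketch, not the real feature: find every dbt-style model
    # downstream of one you just edited by parsing ref() calls and walking the graph.
    import re
    from collections import defaultdict, deque

    # Made-up models for illustration.
    models = {
        "stg_orders": "select id, amount_usd from raw_orders",
        "orders_daily": "select order_date, sum(amount_usd) from {{ ref('stg_orders') }} group by 1",
        "finance_report": "select * from {{ ref('orders_daily') }}",
    }

    def downstream_of(changed):
        # Reverse dependency graph: model -> models that reference it.
        dependents = defaultdict(list)
        for name, sql in models.items():
            for dep in re.findall(r"ref\('([^']+)'\)", sql):
                dependents[dep].append(name)
        # Breadth-first walk from the edited model.
        seen, queue, order = set(), deque([changed]), []
        while queue:
            for child in dependents[queue.popleft()]:
                if child not in seen:
                    seen.add(child)
                    order.append(child)
                    queue.append(child)
        return order

    # An edit to stg_orders flags orders_daily and, transitively,
    # finance_report for follow-up edits.
    print(downstream_of("stg_orders"))  # ['orders_daily', 'finance_report']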
Definitely! Giving the AI the ability to fetch the context it needs was a big challenge (since larger codebases can't all fit in an LLM's context window). It's not perfect yet, but the tools it has do give it a remarkable amount of insight into the overall codebase.
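For anyone curious, here's a rough, hypothetical sketch of the general idea (not our actual tooling): expose a small search tool the model can call, so it pulls in only the snippets it needs rather than the whole codebase.

    # Illustrative only (not the actual implementation): a tiny "fetch context"
    # tool that keyword-searches the repo and returns the best-matching snippets,
    # so the model reads only what it needs instead of the whole codebase.
    from pathlib import Path

    def search_repo(root, query, max_hits=5, window=3):
        """Return short snippets around lines mentioning the query term."""
        hits = []
        for path in Path(root).rglob("*.sql"):      # file pattern is an assumption
            lines = path.read_text(errors="ignore").splitlines()
            for i, line in enumerate(lines):
                if query.lower() in line.lower():
                    snippet = "\n".join(lines[max(0, i - window): i + window + 1])
                    hits.append(f"{path}:{i + 1}\n{snippet}")
                    if len(hits) >= max_hits:
                        return hits
        return hits

    # The agent calls this as a tool, e.g. search_repo(".", "orders_daily"),
    # and puts the returned snippets into its context window before answering.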
Super useful, thanks for the feedback! We're definitely thinking of building something that would run the reviews in your IDE directly, before you push the code.
Being able to carry the emotional budget over into the creative bucket is the most goddamn win-win corporate speak I've ever accidentally typed on HN. This is a wonderful strategy.
The good news with mrge is that it works just like any other AI code reviewer out there (CodeRabbit, Copilot for PRs, etc.). All AI-generated review comments sync directly back to GitHub, and interacting with the platform itself is entirely optional. In fact, several people in this thread mentioned they switched from Copilot or CodeRabbit because they found mrge's reviews more accurate.
If you prefer, you never need to leave GitHub at all.
Definitely! As AIs write a lot more code, I think that the PR/review space is going to become way more important.
If you're interested in stacked PRs, you should definitely check them out on Mrge. We support them natively (in beta atm): https://docs.mrge.io/ai-review/overview
The beta setting for stacked PRs seems to have no effect for me. Reading the mention of a CLI in the docs for PR stacks gives me shivers. Please don't say you're implementing it like Graphite, which is the absolute worst way to do it and makes Graphite useless for every Sapling and Jujutsu user, the people who would need it most. You can also reach me at mrge@ntr.io, would be happy to chat!
We've heard from users who've tried both that our AI reviewer tends to catch more meaningful issues with less noise, but that's really something you should try for yourself and find out! (The great thing is that it's really easy to start using.)
Beyond the AI agent itself (which is somewhat similar to Copilot), our biggest differentiator is the human review experience we've built. Our goal was to create a Linear-like review workflow designed to help human reviewers understand and merge code faster.
You're in rough competition: you're up against GitHub (Microsoft) on model quality, inference cost, and GitHub UI integration (one-click setup, comment replies, code diffs, the rest of the GitHub UI ecosystem), and don't get me started on training LLMs... Microsoft isn't going to be taken down anytime soon. It's going to be tough!