
Yes.

I didn't use to, but that was a very big mistake.

Just wait until something you signed off on doesn't run and folks ask, "But didn't you review this code and say that everything 'looked good'? It doesn't even run!"

It's embarrassing to say the least.

If you're doing a code review, run the darn code. What would you think of a mechanic who didn't make sure the car would start after claiming it was fixed?




I sometimes find code reviews can create a "gap" in the middle where neither person fully vets the change. The reviewer thinks, "Surely the author fully tested it; I'm just added assurance," and the author thinks, "They approved the change, so it must be good." The end result can be code that is less reviewed overall.

If someone pushes a change without anyone reviewing it, I sometimes find their fear of making a mistake makes them scrutinize the change more than if it had been reviewed. Not always, of course; it depends on the context and the people involved.


I think the fix for this is for every pull request to include a written test plan. Usually this is just a few bullets (often just "ran the test suite," where applicable). It's on the author to come up with a test plan and execute it. It's on the reviewer to review both the code and the test plan and ensure that the test plan is sufficient.
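
One lightweight way to enforce that is a CI step that rejects PRs whose description has no test plan section. A minimal sketch, assuming GitHub Actions (which exposes the event payload via GITHUB_EVENT_PATH); the "Test plan" heading convention here is made up, so adjust it to your own PR template:

    # check_test_plan.py -- fail CI if the PR description lacks a test plan.
    # Assumes GitHub Actions, which points GITHUB_EVENT_PATH at the event JSON.
    import json, os, re, sys

    with open(os.environ["GITHUB_EVENT_PATH"]) as f:
        event = json.load(f)

    body = (event.get("pull_request") or {}).get("body") or ""

    # Look for a "Test plan" heading with at least one non-empty line after it.
    # (Hypothetical convention -- match whatever your PR template actually uses.)
    if not re.search(r"(?im)^#*\s*test plan\s*$\n+\s*\S", body):
        print("PR description must include a 'Test plan' section.")
        sys.exit(1)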


I had a coworker who seemed to open PRs as just another part of the development workflow. As soon as the feature kind of worked, they'd open a PR, wait for someone to complain about problems, fix them, ask for another review, get more complaints about bugs, rinse and repeat. If the reviewer didn't pay attention, they would merge broken code.


Making the code correct is a shared responsibility between author and reviewer; if the code has a bug or doesn't even run, that means both people missed the problem. Unless the author is very new/junior (still at a stage where they need close guidance on individual changes), I would be more annoyed or concerned by an author who hasn't run their change at all than by a reviewer who hasn't run it and misses some problem. But I guess the exact balance of responsibility depends on the conventions of the organisation.

As a reviewer (in the places I've worked so far), it depends on the type of change. E.g., for a frontend change I might need to run it to actually try it out and check what it looks like, whereas for a backend change I might rely more on reviewing the tests; if the tests look good, I probably wouldn't bother running the thing locally to poke at it myself. And of course there are the general questions of how big and how complicated the change is.

Anyway, wherever possible mechanical stuff like "does it build and run" should be covered by a CI system or other automation, not by spending reviewer time on it.
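
Even a crude "does it build and start" smoke check in CI catches the embarrassing class of failure above. A sketch; the build command, binary name, port, and health URL are all placeholders for whatever your project actually uses:

    # smoke_check.py -- CI step: does it build, and does it come up at all?
    # Every command and URL below is a placeholder, not a real project's API.
    import subprocess, sys, time, urllib.request

    subprocess.run(["make", "build"], check=True)        # fail CI if it won't build

    app = subprocess.Popen(["./app", "--port", "8080"])  # start the thing
    try:
        for _ in range(30):                              # poll for up to ~30s
            try:
                with urllib.request.urlopen("http://localhost:8080/health") as r:
                    if r.status == 200:
                        sys.exit(0)                      # it runs; smoke check passes
            except OSError:
                time.sleep(1)
        sys.exit("app built but never became healthy")
    finally:
        app.terminate()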


Leaving it to the author to run it could lead to "it ran on my PC" surprises.


To be fair, it's more like "what would you think of a mechanic who doesn't verify that the mechanic who fixed the car actually checked that it ran?"

If you can't trust the person who created the PR to have at least tried to compile the code... what are they even doing?


In most of the situations where the code doesn't run, the person creating the PR didn't commit the code needed to make it run; they only have it locally. I've bitched at someone who approved a PR in about 10 minutes when there was a merge conflict whose ">>>>>>>" markers would have been apparent to anyone who even glanced at the code.
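
That particular failure is trivially automatable. A sketch of a pre-commit or CI check that scans tracked files for leftover conflict markers (plain Python over a git checkout, no other assumptions):

    # conflict_markers.py -- fail if tracked files still contain conflict markers.
    import subprocess, sys

    files = subprocess.run(["git", "ls-files"], capture_output=True,
                           text=True, check=True).stdout.splitlines()

    markers = ("<<<<<<< ", ">>>>>>> ")  # bare "=======" is too common to flag safely
    bad = []
    for path in files:
        try:
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, 1):
                    if line.startswith(markers):
                        bad.append(f"{path}:{lineno}: {line.rstrip()}")
        except (IsADirectoryError, FileNotFoundError):
            continue  # e.g. submodules, or files deleted in the working tree

    if bad:
        sys.exit("leftover merge conflict markers:\n" + "\n".join(bad))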

I'd say much of the value of code review for senior devs is making sure everything is actually committed and nothing is broken.


Just make the developer responsible for the code they write. It is not hard or complex. You shouldn't have to check out every branch, compile, run, and test it. That is not what a code review is...


My job isn't to make your code work; it's to find those little things that don't seem bad but are. I have tools for the obvious things.

Which is to say, "does it even work" is the kind of baseline check that reviewers shouldn't be wasting their time on.


Or have a machine run the code, like a CI system?


Often, it is not that simple. Many code bases are completely lacking any integration or end-to-end tests, despite having tons of unit tests. Unit tests can pass, but the system may still "not work" due to failures to call the new code in the correct place. The original developer can miss this and it may not be obvious from the context of the PR that something not already in the PR needs to also change. "You don't know what you don't know..."
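
A tiny illustration of that gap, with hypothetical names: the new function is fully unit-tested but never wired into its caller, so the unit test passes while the system still misbehaves. Only a test that goes through the real entry point exposes it:

    # Hypothetical example: validate_email() is new and has a passing unit test...
    def validate_email(address: str) -> bool:
        return "@" in address and "." in address.rsplit("@", 1)[-1]

    def register_user(address: str) -> str:
        # ...but the author forgot to call validate_email() here,
        # so invalid input still gets through.
        return f"registered {address}"

    def test_validate_email():
        assert not validate_email("nope")  # unit test: passes

    def test_register_rejects_bad_email():
        try:
            register_user("nope")          # integration-style test: fails,
        except ValueError:                 # exposing the missing call site
            return
        raise AssertionError("register_user accepted an invalid email")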


It seems to me it would be worth the effort to get CI in place rather than have reviewers manually run the same thing. The labor is more expensive than the computers.


Yes. Ideally we would write integration and e2e tests, and run them during CI. I personally favor those sorts of tests, but they require a lot of effort.


Review should ensure there are tests, and that those tests run on CI and on my laptop. When tests are difficult or not worth it (sometimes UI or end-to-end work), I think the reviewer should run the code to be sufficiently confident. It's possible to do without, but that's technical debt.



