I'm a senior frontend engineer doing a lot of code reviews and my team loves my MR (PR) submission standard:
- Require the author to attach screenshots of the feature in a markdown table format showing the before and after comparison. This is enforced using an MR template (see the sketch below for one way to automate the check).
- Require the author to attach a screen recording for complex user interactions.
The overall effect is that it saves everyone's time:
- The author has to verify that the feature works anyway, so taking screenshots / recordings doesn't take much additional effort.
- The reviewer can focus on the code itself instead of checking whether the code works.
- Easy for both parties to spot design and UX issues/regressions visually.
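The template itself is just a checklist, but the presence of the before/after table can also be verified in CI. A minimal sketch, assuming Danger JS on GitLab; the literal "Before" / "After" column headers and the file-extension heuristic are assumptions, not part of the template described above:

```typescript
// dangerfile.ts — a sketch of automating the template check, assuming Danger JS on GitLab.
// The "Before" / "After" table headers and the UI file-extension heuristic are assumptions.
import { danger, fail, warn } from "danger";

const description = danger.gitlab.mr.description || "";

// Block the MR if the before/after screenshot table from the template is missing.
const hasBeforeAfterTable = /\|\s*Before\s*\|\s*After\s*\|/i.test(description);
if (!hasBeforeAfterTable) {
  fail("MR description is missing the Before/After screenshot table from the template.");
}

// Nudge (but don't block) when UI components change and no recording seems to be linked.
const touchesUi = danger.git.modified_files.some(
  (f) => f.endsWith(".tsx") || f.endsWith(".vue")
);
if (touchesUi && !/\.(mp4|webm|gif)/i.test(description)) {
  warn("This MR touches UI components but no screen recording appears to be attached.");
}
```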
But you still have to test the actual functionality, don't you? What if the author makes a gif of clicking a button, but doesn't record if the browser back button still works?
I'd think that for frontend work you'd have a CI/CD pipeline that deploys the code to a staging server, where I can test it myself.
I remember a few companies ago we had a system that would deploy every branch to a separate subdomain (same name as the PR) that you could access. It was fantastically useful for just seeing what the deployed code would look like. I think (for UI things at least) this is a very reasonable solution.
Wish I could remember the github service that did this?
Isn't this what automated tests are for? Manually testing every change is a huge burden, requires most of the automation necessary for automated testing (from the infrastructure point of view), and actually discourages people from writing automated tests.
It's interesting that you brought up the back button. It is indeed an area where bugs frequently occur.
I haven't found a good solution except manually asking in every MR where I sense a potential issue. Maybe it would be a good idea to have it in the MR template as a checklist item.
Another problem with the back button is that the expected behaviour is usually missing from the design or requirements to begin with, requiring a lot of back-and-forth on what's actually expected (especially for SPAs).
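Once the expected behaviour is pinned down, the back-button case is also something an end-to-end test can cover, which ties into the automated-testing point above. A minimal sketch, assuming Playwright; the /search route, the item URLs, and the selectors are hypothetical placeholders:

```typescript
// back-button.spec.ts — a sketch of a back-navigation regression test, assuming Playwright.
// Assumes a baseURL is configured in playwright.config.ts; routes and selectors are made up.
import { test, expect } from "@playwright/test";

test("browser back button restores the previous view", async ({ page }) => {
  await page.goto("/search");

  // Navigate within the SPA, e.g. open the first result.
  await page.getByRole("link", { name: "First result" }).click();
  await expect(page).toHaveURL(/\/items\/\d+/);

  // The actual check: history navigation should bring back the search view intact.
  await page.goBack();
  await expect(page).toHaveURL(/\/search/);
  await expect(page.getByRole("searchbox")).toBeVisible();
});
```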
You can even automate this with something like Percy that takes screenshots and does visual diffs at selected points in the test suite. You just have to be careful about including random/date-based data in your tests or you’ll get more diffs than expected. Also not quite a substitute for someone calling out “this was a major UX change”.
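For concreteness, this is roughly what a visual checkpoint looks like in a test. A minimal sketch, assuming Playwright with @percy/playwright and run under the Percy CLI (something like `npx percy exec -- npx playwright test`); the /settings route and the .timestamp selector are hypothetical placeholders:

```typescript
// visual.spec.ts — a sketch of a Percy visual-diff checkpoint, assuming @percy/playwright.
// The /settings route and the .timestamp selector are placeholders, not a real app.
import { test } from "@playwright/test";
import percySnapshot from "@percy/playwright";

test("settings page visual snapshot", async ({ page }) => {
  await page.goto("/settings");

  // Hide anything random or date-based first, or every run produces spurious diffs.
  await page.addStyleTag({ content: ".timestamp { visibility: hidden; }" });

  // Uploads a DOM snapshot that Percy renders and diffs against the baseline.
  await percySnapshot(page, "Settings page");
});
```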
Interesting to hear that there are automation solutions around this. In your experience / expertise, how much work can be automated by such tools? 80%? 99%?
Word is actually really great for this stuff. You get a portable file and can add notes to the screenshots. There is even a screenshot button built into Word!
For recordings, if you use Windows you can bring up the game bar with Win+G. It has a very easy to use screen capture tool meant for gaming, but perfectly fit for small software demos. It even captures sound which is sometimes useful.
The fact that people think this is an excellent idea... screen recording as proof that the feature works? What is the alternative you are getting, people submitting features that don't work? Just fire them or teach them your set of standards. If my boss asked me to do this I would laugh and find a new job. I truly feel sorry for you, but also please stop ruining our industry with this nonsense.
Requiring a screen recording isn't about a lack of trust or competence. Rather, it's a way of eliminating information asymmetry between the author and the reviewer:
- The author might have checked that the code works fine (A) or might not have (B).
- The reviewer might decide to run the code locally (1) or not (2).
Combination A1 results in duplicated effort. Combinations A2 and B1 are fine, because exactly one person verifies the change. Combination B2 results in potential bugs.
The MR standard simply eliminates this uncertainty by sharing the information between the author and the reviewer automatically. The end result is that both parties know A2 is the process the other person follows.
All you have to do in a code review is review the code. You do not have to run the code, nor should you be, unless it is a very tiny project. It is the developer's responsibility to test the functionality and get it ready for the QA process. If there is a dedicated QA who can test the functionality, then QA is responsible for testing on that front. The code reviewer is not supposed to be testing that the feature works as intended for the product. The manager of the team or product, or the client or stakeholder, would be the ones testing the functionality and giving it a go-ahead. It is not the job of the code reviewer to do that. It means you have not taught your developers the set of standards and practices to follow. If you have to get a screen recording of a feature working (probably in local or dev, so it's about as useful as a unit test), then you or whoever is responsible has failed at creating a proper software development process.
Well, I apologize if I came off as harsh. I don't mean you specifically. I just adamantly believe that we need to work towards processes that allow you to trust the developer. Every situation is unique and I cannot speak for all of them, but I do think screen-recorded proof of work is not useful. Also, if people are forced to screen record, there is no scalability in the project's software development.
Lucky? I'm unlucky. Where tf are these jobs where you get away with submitting code that doesn't work? The amount of processes, procedures, and standards we have to follow is a lot, and it is exhausting. Need to find a job where I can get away with trash work.