Having said that, I'm sure it's a double-edged sword. No one wants to feel like someone is tearing apart their code.
But yes, one of my first reactions to a lot of bone-headed choices I see is usually, "nobody noticed that X would happen?"
Even if nobody else ends up attentively reading the document, it's valuable for myself as a point of reference after the work is done. It can be humbling to go back and read my own assumptions.
Usually some of the text can be transplanted into more permanent engineering documentation (Wiki or whatever), so it's also a good habit for documenting beyond just inline comments.
Some of my favorite posts online are of the sub-genre: company rebrands all design elements, and a well-respected design professional offers genuine critique. I also love reading about process at elite design firms like Pentagram, and how much thought (or emotion) goes into work.
I think a trusted online channel where you could solicit critique for UI/UX experiments, perhaps through synchronous live video, would be cool. It would probably be invite-only, heavily moderated, etc. But the benefit is input from use cases you yourself would never dream of. Naturally, a double-edged sword.
* We want to recruit certain demographics, most of which would not overlap with people hanging out in developer channels online.
* Recruiting and filtering participants is very time-consuming - we can run far more tests by simply paying participants.
* We don't always work on 'cool' systems - not many people are interested in testing a checkout flow or an employee intranet.
* Testing flows and providing feedback can be time-consuming and tedious, which leads to dropouts. Paying participants generally ensures they actually try to finish the tasks and provide feedback.
Live video (generally moderated testing, with follow-up questions from the interviewer) is done sometimes, but logistically it's quite hard and often more expensive. Generally speaking, it's easier to set up unmoderated tests, which are kind of like an interactive survey overlaid on the application being tested. With this method you can still do screen capture and record an audio/visual track of the participant giving their thoughts.
From my perspective, not everyone is able to conduct a code review, even if the knowledge is there. As a reviewer, you should be able to communicate in a way that doesn't make the person being reviewed feel "bad". It's not easy, and you definitely need to know the person whose code you are reviewing.
And maybe someone will read and review it as thoroughly as possible.
There are at least three dynamics going on here that overlap and get conflated frequently.
1. Ego, inability to accept criticism. Obviously this is just something the engineer needs to grow past.
2. Sometimes another engineer not familiar with the codebase will do a drive-by commit. They need to make a specific change and find the code around that change to be confusing, so they refactor it to be easier to understand in isolation. However, they don't understand the idioms and patterns of the codebase as a whole, and the net effect is that the system as a whole becomes less coherent and maintainable. This creates both-sides-are-right arguments, like, "Yes, you did give that variable a clearer name, except we use the other name consistently everywhere else, so if you rename it here you should rename it everywhere, and it's not really worth doing such a big refactor and invalidating everyone's familiarity, is it?"
3. Familiarity. When you work in the same system for a long time, you develop a mental map of how everything works. That map can be a source of great productivity. Sometimes it's actually not worth making an improvement because it will force the long-term maintainers to relearn how things work, and they have more important things to do than this improvement.
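To make point 2 concrete, here's a minimal hypothetical sketch (the function and variable names are invented for illustration): a codebase that consistently passes its request context as `ctx`, plus one drive-by refactor that renames it locally.

```python
# Hypothetical codebase convention: request context is always called `ctx`.
def render_header(ctx):
    return ctx["user"].title()

# A drive-by refactor renames it to the "clearer" `request_context` in one spot.
# Clearer in isolation - but now a grep for `ctx` misses this function, and
# readers have to juggle two names for the same concept.
def render_footer(request_context):
    return request_context["user"].lower()
```

Neither side is wrong in isolation; the cost is the inconsistency itself, which is why the usual resolution is "rename it everywhere or not at all."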