Instead the article is some strange "self-help"/"hustle-hard" blogspam, quoting Malcolm Gladwell and shading a picture of the normal distribution and pretending it's arrived at some brilliant insight.
- Treating PRs as communication and ensuring the reviewer has the information they need to check what you coded.
- Building a culture where discussion around alternate ways of doing things ('this doesn't block merge, but...') is accepted and expected (without becoming hostile/nit-picky or devolving into bikeshedding -- if it's not blocking, the submitter can always just merge)
- Having brown bags to talk about technology in a freer, non-deadline-constrained setting (sparks ideas, gets people building tech communication skills)
Really interested to hear what thoughts others have on this.
I kind of wish that there was a type of thing like a PR, but marked in a way where you couldn't actually merge it. It'd still get built by CI, if such a thing was configured; but the point of the submission of such a "proposal with prototype" object would be to discuss whether the design represented by the prototype implementation is a design worth going with.
The actions applicable to such objects would be "accept and close" or "reject and close." You'd be able to have several of these objects that live under a given issue (i.e. several potential solution-designs to the same problem), and—as long as the issue has at least one proposal-with-prototype object under it—the issue would be in a "pending" state until one such proposal object was Approved, and Approving one proposal object would Reject the others.
The point of this would be to replicate the thing that people go through with design discussions on mailing lists when they send in code samples to explain their designs—but with those code samples being working, buildable code, such that the properties of the design proposal can be tested against the current implementation and against any alternative proposals.
You could call these objects "RFCs" :)
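The semantics described above (an issue stays pending while it has open proposals; approving one proposal rejects the rest) could be sketched roughly like this. The class and state names here are my own invention, not any real tracker's API:

```python
# Minimal sketch of the "proposal with prototype" idea: an Issue holds
# several competing Proposals; the issue stays "pending" while any
# proposal is open, and approving one proposal rejects all the others.

class Proposal:
    def __init__(self, title):
        self.title = title
        self.state = "open"  # open | approved | rejected

class Issue:
    def __init__(self):
        self.proposals = []

    def add_proposal(self, title):
        p = Proposal(title)
        self.proposals.append(p)
        return p

    @property
    def state(self):
        if any(p.state == "approved" for p in self.proposals):
            return "resolved"
        if any(p.state == "open" for p in self.proposals):
            return "pending"
        return "unresolved"

    def approve(self, proposal):
        # Approving one design implicitly rejects the alternatives.
        for p in self.proposals:
            p.state = "approved" if p is proposal else "rejected"

issue = Issue()
a = issue.add_proposal("design A")
b = issue.add_proposal("design B")
assert issue.state == "pending"
issue.approve(a)
assert issue.state == "resolved" and b.state == "rejected"
```

The interesting design choice is that "approve" is an operation on the parent issue rather than on the proposal itself, which is what makes the mutual-exclusion rule enforceable.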
That way you could get feedback from other devs, and pushing commits to your PR would trigger CI runs so you could be made aware of any build failures you were causing. (The builds took hours, sometimes, so running the full test suite locally was impractical)
We used Github but I'm sure it would work in a lot of workflows.
So what my team does when we want to prototype something is we create a change and then push separate patch sets for different solutions (there is a nice diff tool to view differences between patch sets). The CI runs for each patch set and we then discuss the merits of each approach and "abandon" (reject) the change and use it for reference going forward.
So you can directly use them to show a what-if PR. I did, and it worked: it sparked a discussion and led us to design things in a better way.
My objection to using PRs for this is that people think the point of a PR is to merge the code, and people tend to nitpick the code during code-review with the goal of making it clean enough to merge.
The idea of a separate RFC object is that, unlike a PR, you literally cannot merge an RFC, so there's no temptation to nitpick, or really to talk about anything other than the design. It much more closely mimics the social mores of a mailing-list thread discussing a code snippet.
Also, being able to explicitly track Approved and Rejected RFCs on a system level would be nice, to know what discussion needs to be referenced when doing the final implementation. If you just used "what-if" PRs, both the chosen and not-chosen designs' PRs would just end up in the Closed state, and would show up equally in search. Ideally, Rejected RFCs would be filtered out of search by default.
> Building a culture where discussion around alternate ways of doing things ('this doesn't block merge, but...') is accepted and expected (without becoming hostile/nit-picky or devolving into bikeshedding -- if it's not blocking, the submitter can always just merge)
We had soooo much blocking nitpicking and bikeshedding at my last job.
The "code artistes" among us would block PRs for these sorts of debatable style issues and other nonessential issues that weren't even remotely blockers IMO. Those discussions were a real sap on productivity and team cohesion. And management was unwilling to give any direction.
(It was a particularly big problem on our team because our test suite was a real pig, and moving code through the build/test servers and out to production could take hours sometimes -- so highly debatable nitpicks could result in literal days of lost time)
When I wanted to give that sort of non-blocking constructive feedback, I always simply did what you mention: I left the feedback, discussed things with the submitter, and approved the PR. Not rocket science. Although, apparently, it was beyond some of our devs' comprehension.
Like you said, some people can't comprehend that. However, it's exactly the same "bad actor" situation. If people are executing a denial of service attack on your process in order to get their own way, then you need to address that situation -- outside the context of the PR. If you can't solve the problem, then it's probably time to consider voting with your feet. Working with bullies is never going to be fun.
It's not quite literate programming - the commentary would have to be interwoven with the code, but it's not permanently attached to it.
Existing comment systems handle interwoven change and commentary reasonably well by interleaving comments into the diff, but they still don't let you choose which files or diff chunks are displayed, or in what order.
 - https://gitlab.com/gitlab-org/gitlab-ce/issues/18037
This would IMO be a killer feature for Github or Gitlab to implement.
Typically, I explain complex PRs "manually" in the PR itself or by screen sharing them with the other devs, but this is not always very efficient.
I think, in short, every business would benefit a lot from having a PEP-like process where business owners and coders discuss what is wanted.
So while I am wishing
It shall be illegal, punishable by a week in the stocks to
- ask for an estimate verbally for any job that has not had at least 200 words describing the requirements and been responded to with interrogating questions and explanations
- to work on any project that does not have a 2-page summary and has not been broken down into a minimum of 20 separate 100-word requirements
- and a pony
In fact, please stop doing that for all serious vocations, life is not a game and that's all sports are: a game. That's why everything in sports is super simple, straightforward, and measurable [except when it isn't, but don't mind me].
Sports are not really a good analogy for most things, because sports are entirely relative and have no fundamental value. No soccer player has any inherent value except for being better than a large number of other soccer players. So it makes sense that soccer-player skill follows a normal distribution, since it's basically that by definition.
This is not the case for developers. Developers do not need to be relatively best to produce a lot of value, and mostly produce value by the nature of, well, producing it. If you create a useful website, you created a useful website, it really doesn't matter if you are the best website creator on the planet. The value of that website has no relationship to your abilities relative to others, and has much more of a relationship to things like whether it's useful for customers.
And since there are a range of domains, you actually end up with balderdash if you measure developers against each other, since some are good at debugging, some at C, some at websites, some at arguing with product owners, and some at writing AIs in Python. There's really no point or benefit from trying to homogenize them all into a group and rank and file them.
What we need is not "great" developers, what we need is higher standards for all developers, so that across all these varied domains we have lots of people who respect quality.
The "fundamental gap" between developers is likely not there and I think this thinking is actively harmful. All we need is to encourage more developers to care, which is already a hard enough problem without getting into the useless weeds of being the developer equivalent of Muhammad Ali, which seems like a great way to discourage literally 99% of developers because that is how sports work in practice and most sane people avoid sports [as a job] for that reason.
That's great, but then it says that in order to improve you need to declare a metric so you have something to measure, and then work to improve that. I don't even know where to begin with this; Goodhart's Law is the obvious response. But even more fundamentally than that, the thing that makes programming hard is entirely the qualitative aspects. Every single choice you make in software is a tradeoff with subtle implications; good software engineering is about understanding these tradeoffs in a given context over a period of time. There is absolutely no way you are going to reduce this to any kind of metric without throwing the baby and all its siblings out with the bathwater.
You don't have to keep the metric the same. Metrics are good when they help you measure your progress on particular tasks. They are bad if you marry specific ones for life. If you worry about throwing out the baby, set up a counter-metric.
Even at top tech companies I’d say 80% of the value was added by 20% of the staff.
I’d argue it is clearly present and far more pronounced than in sports.
The key difference as you have alluded is software development is not a winner takes all competition.
That may be true but it doesn't relate to my point as much as you might think. I believe the main reason most people don't contribute in a given organization is because the organization creates little incentive to do so, so mostly people are running off of their own motivation.
What I mean is, people contributing to most of the value are not such because they were born talented programmers or ground their nose like crazy, they mostly just actually care and invest into the subject.
Am I the only person seeing HN descend into terrible click bait with flame war comment sections? In this case, I really hope my observation is the outlier, and that people continue to find this site helpful and constructive.
Yes, maybe it feels obvious. Yes, it doesn’t cite a whole lot of evidence. But it resonates with people, and being reminded of the obvious continues to be useful if not glamorous.
So I’ll keep upvoting those that resonate with me!
It’s not often I’ll wholesale dismiss an article on HN because of one sentence, but this is one of those times. The fabulist Gladwell is not credible in general, and his 10,000 hour “rule” can no longer be thought to even remotely approach science, or even fact. The author lost all credibility with this sentence.
Basically, I don't blame people for still thinking it's true or a good rule of thumb. It's usually just used to say "if you want to be good at something, you have to practice", which is true.
I mainly blame Gladwell for this, of course, but people repeating this are definitely getting side-eye from me.
"Hmm, sounds too cliche"
> The well-known 10,000-hour principle, popularized by Malcolm Gladwell, illustrates an important lesson..
"Much better, I've used more words"
There's an episode of the Freakonomics podcast that describes the whole situation and includes interviews with both Gladwell and Ericsson:
> ERICSSON: Now, right. Gladwell basically thought that was kind of an interesting magical number and suggested that the key here is to reach that 10,000 hours. I think he’s really done something very important, helping people see the necessity of this extended training period before you reach high levels of performance. But I think there’s really nothing magical about 10,000 hours. Just the amount of experience performing may in fact have very limited chances to improve your performance. The key seems to be that deliberate practice, where you’re actually working on improving your own performance — that is the key process, and that’s what you need to try to maximize.
I'm not perfect by any means. I've had my share of confusing interactions (particularly when explaining technical things to peers in meetings), but at least I can proofread with a critical eye and fix confusing grammar and explanations.
"Ask HN: What topics/subjects are worth learning for a new software engineer?": https://news.ycombinator.com/item?id=18000410
I've been through several boom/bust cycles and right now we're living in a boom period where:
* Programming demands a high hourly rate
* There are many programmers on the market, tending towards making them interchangeable cogs
* Income inequality is high, leading to a wide discrepancy between wealth and technical literacy
After the coming web2.0/mobile bust in a few years, these will likely no longer be true. I think what will make a good programmer then is the ability to map use cases (design) to abstractions and then to business logic in as little code as possible (preferably none at all). We're going to go back to a time more like the 1980s, when nontechnical people were able to get Real Work done with spreadsheets and DBMSs like Microsoft Access.
I looked at Airtable a bit and it has some interesting ideas but is no panacea. There are other interesting attempts in that space, but almost everyone seems to be moving in the opposite direction, towards large volumes of highly imperative, object-oriented code which is prone to technical debt and requires a long-term support staff of technicians to maintain. In other words, maintaining is currently more lucrative than architecting, but that wasn't always true, and will likely not be true again in a few years.
- Ask Questions
- Do not Multitask. One problem at a time.
- Test control answers, i.e. if you want the result to be 10, also test that 9 and 11 are not produced. Testing for wrong results creates a more stable solution.
Speed has never been my strong suit, but when I deliver code to QA, you can bet that you will be testing a well thought out solution.
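The "control answers" idea above could be sketched like this; `compute_total` is my own toy stand-in, not from any real codebase:

```python
# Besides asserting the expected answer, also assert that near-miss
# "control" values are NOT produced. A vacuous or always-passing test
# would be caught by the control checks.

def compute_total(items):
    """Toy stand-in for the logic under test."""
    return sum(items)

result = compute_total([3, 3, 4])

assert result == 10   # the expected answer
assert result != 9    # control check: a near-miss must not pass
assert result != 11   # control check on the other side
```

The point is less the arithmetic than the habit: a test that only checks the happy value can pass for the wrong reasons, while control values probe the boundaries around it.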
- 10,000 hours mastery: Another power-of-10 based bullshit stolen from Rocky I.
- Stacked ranking: Bullshit belief that gives you 100% probability of ruining a company. It cost Microsoft 10 years to fix the damage it caused.
- Personality types: Horoscope-grade pseudoscientific bullshit used to justify bad management. See also: MBTI
- Hyperbolic decay: Often applied to impress people easily impressed with math, i.e.: bullshit
My thoughts and prayers to the 278+ people that upvoted this article.
Chapter 2 resonated the most: finding a meaningful metric and then measuring against it is something every engineer should do, not just in SW. Sadly, it's very common that teams just bumble along hoping for the best.
I think OOP specifically derailed programming because of how big it was, how fast it took over and how long it was considered to be the one true programming paradigm. When OOP hit, it hit hard and fast. It wasn't long before colleges were teaching OOP as The One True Way To Program. Every employer required knowledge of OOP before they would interview you. And I mean every employer, from startups to corporate enterprise. And once it took hold, it took people (and me!) 10-20 years to realize the OOP emperor had no clothes on.
I still don't understand why or how it got so big so quickly. All I can figure is that the software industry as a whole pays attention to the wrong people.
Unfortunately, it worked ... but eventually the bloat caused projects to hit a wall of unmaintainability. (If it had not worked so well in the medium-term, we would have moved past it much sooner).
Another reason it derailed us is simply the level of buy-in it received from the entire industry. Everyone drank the Kool-Aid, so advancement in more powerful approaches -- such as functional programming, data-structure-oriented programming and even data-oriented programming -- was neglected.
But the software world bought in to OOP much more deeply than science has bought into string theory.
0 - https://cis.org/North/Oracle-Sued-White-Males-Indians-Receiv...
There is a lot of advice out there telling you to go kill it, but no. To kill it you have to be exceptional, and usually people are not exceptional. The unfortunate part is that most people will think they are exceptional, and fail.
One caveat is that much of the advice is predicated on having a supportive environment.
> Application development is a team sport. Period. Full stop.
What if you work in an environment where people primarily work on their own? I have done a lot of that in the past.
> For many, the best way to learn is to teach.
What if there aren't feasible opportunities to teach or your management pushes a pace that makes it not plausible?
> Create side projects
There are managers that support the idea of doing side project but not the execution.
I'm sure you can find more examples of suggestions that aren't plausible in your environment.
Here is my answer to these examples (I wasn't just complaining): if you don't have opportunities to fulfill sections of this framework that you feel you are lacking in, do it anyway. If your manager tells you not to spend time learning and improving, ignore them. Do it anyway.
You might say, "that's fine for you Mr. internet commenter, but I'm the one that has to take the risk". Well, that's true but I've already done it. Here were my results.
Did exactly what my management asked me to do for two straight years. All reviews for two years were "Meets expectations" with a long list of things I should improve.
One year ago, I read a book with an abhorrent title but good information: "Stealing the Corner Office".
I stopped listening to what my manager told me to do and instead I picked up random side projects with little to no perceived business value for my team, spent 10x more time reading code and made commits in projects I shouldn't be working on, and started spending a lot more time understanding basic fundamentals of software engineering.
Have had "exceeds expectations" on every review since and glowing praise.
It's counter-intuitive but, don't give the people what they ask for, give them what they want!
Note: I got a bit carried away on this comment, I should have mentioned that in reality I spend 10 - 20 % of my time working rogue, and the rest of my time doing what my boss tells me.
Even when my contributions suck, people think of it as a positive and often teach me the right way to do the code or explain why the change is not necessary.
As far as books go I don't have anything to recommend. For me, improving fundamentals was really about figuring out what I didn't understand and diving in to learn it. I prefer online materials over books, but one book I have read that I thought was great is 'The Pragmatic Programmer'. https://www.amazon.com/Pragmatic-Programmer-Journeyman-Maste...
* When you listened to expectations and met those expectations, you were reviewed as meeting expectations
* When you met expectations PLUS did additional stuff, such as "spending a lot more time understanding basic fundamentals of software engineering", you exceeded expectations.
Sounds perfectly reasonable to me! Especially since becoming better at SE fundamentals probably made you an objectively better programmer.
I think it's often difficult for managers to describe how to exceed their expectations, it may be a natural human limitation.
Sure, it's flawed, but isn't everything?