Hacker News
Why do most top performers have the highest count of commits and pull requests? (swecareer.substack.com)
159 points by thathoo 7 months ago | 185 comments



The thing to note here is that this is a one-way correlation: top performers tend to produce lots of commits. That does not mean that people who produce a high amount of commits are the top performers in your company.

I've had to deal with plenty of colleagues who moved very fast, committed extremely often, and were praised by management for the amount of work they produced. But in reality their work was rushed, sloppy, riddled with issues and a nightmare to maintain. And it would inevitably fall upon us "lesser" performers to actually investigate and fix all those issues, making us look less productive in the process.

In my opinion a better metric for quantifying someone's performance is to count the number of problems they solve vs. the number of problems they create. I bet that many of the rockstar programmers out there would land on the negative side of that particular scale.


> praised by management for the amount of work they produced

Management literally can't tell the difference, sometimes even when the manager is a former dev. There are many ways to ship code faster.

-Don't test

-Don't worry about sanitizing inputs--extra effort slows you down.

-Optimize for writing, not maintaining

-Take a tech-debt loan. Bolt your feature somewhere convenient it doesn't belong.

-Put on blinders. Your n+1 query problem is Tomorrow's Problem

-Avoid refactors

-Choose the first architecture you think of, not the one that is simpler, more flexible or more useful

-DRY code takes too long to write--you'd have to read more code to do that! That's bottom performer thinking!

-Securing credentials is hard. Hardcode them!

Remember, if you want to be a top performer and have breathless articles written about you, ship many commits, fast! Also, this is a virtuous circle--if any of these become "problems" you'll get to ship more commits!
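To pick one item off that list: the "n+1 query" shortcut is what you get when you fetch a list and then issue one extra query per row. A minimal sketch with a hypothetical sqlite3 `users` table (table name, columns, and data are all made up for illustration):

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, team_id INTEGER, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, 7, "a@x.com"), (2, 7, "b@x.com"), (3, 8, "c@x.com")])

def emails_n_plus_one(team_id):
    """One query for the ids, then one more query per row: N+1 round trips."""
    ids = [r[0] for r in conn.execute(
        "SELECT id FROM users WHERE team_id = ?", (team_id,))]
    return [conn.execute("SELECT email FROM users WHERE id = ?", (i,)).fetchone()[0]
            for i in ids]

def emails_batched(team_id):
    """Same result, one round trip."""
    return [r[0] for r in conn.execute(
        "SELECT email FROM users WHERE team_id = ?", (team_id,))]

print(emails_n_plus_one(7))  # ['a@x.com', 'b@x.com']
print(emails_batched(7))     # ['a@x.com', 'b@x.com']
```

Both versions return the same data; the first just quietly turns one logical fetch into N+1 database round trips, which is why it ships faster today and becomes Tomorrow's Problem.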


> Don't worry about sanitizing inputs--extra effort slows you down.

I feel like many people downplay how important this is. I've wasted way too much time because of this. Doing code archeology to understand why data persisted to the database many years ago breaks some seemingly unrelated feature for a customer is definitely not my favourite part of the job. Writing a validator that someone was "too busy to add" in the first place is also not fun (and a waste of time, because the original author could probably have done it in a matter of minutes, whereas someone fixing things after the fact has to reverse engineer what is going on, check whether some funny data wasn't already persisted, and potentially handle it).

To phrase my frustration in a more constructive way: it's always a good idea to put explicit constraints on what you accept (What characters are allowed in a text-like id: only alphanumerics? Only ASCII? What happens with whitespace? How long can it be?). Otherwise you're likely to discover some "implicit" constraint down the road, i.e. another piece of code breaking.
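As a sketch of what "explicit constraints" could look like in practice, here is a hypothetical validator for a text-like id. The exact character set and length limit are assumptions for illustration, not anyone's real policy:

```python
import re

# Hypothetical contract for a text-like id: ASCII alphanumerics plus '-' and
# '_', between 1 and 64 characters. Whitespace and non-ASCII never get through.
ID_RE = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def validate_id(raw):
    """Reject anything outside the stated contract instead of persisting it."""
    if not ID_RE.fullmatch(raw):
        raise ValueError(f"invalid id: {raw!r}")
    return raw

validate_id("order_2024-001")   # passes
# validate_id(" order ")        # raises: whitespace was never allowed
# validate_id("ord\u00e9r")     # raises: non-ASCII
```

The point isn't the specific rules; it's that the rules are written down in one place, so the "implicit" constraint can never surface years later as a broken unrelated feature.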


Oh, lord. I don't usually have to think about sanitizing data coming from my own database, but of course any long-running database can have all sorts of crap in it. What a nightmare.


> Avoid refactors

On the contrary, small refactors are a great way to boost commit count! Management once tried to boost productivity by rewarding commits. Our team basically started manually committing automatic refactorings. We won by a landslide, but I don't think the company did. I got a nice prize out of it though.


Hah! That's some top performer thinking! I tip my hat to your team!


What differentiates highly productive developers is their ability to ship more complete functionality from the same input as other developers. Inputs can be tickets, requirements, or even notes from a meeting. Commits and pull requests are production steps; they don't tell the whole story, but they do give some level of indication.


the converse is also true, occasionally. depending on the business model, some engineers are too comfortable with a slow, methodical process. often, just getting something to work, despite accrued tech debt, helps a team iterate toward a better solution. if requirements or the rest of the realized architecture was perfectly understood from the beginning, then slow and methodical is the minimal risk implementation. but in the real world, that’s rarely the case. what i’m saying is that true talent is able to intuit when one approach is preferred over the other.


Comments without capitalization are optimized for writing once at the expense of many readers.


Fair.


Don't forget to not waste time on documentation!


Everybody knows code self-documents by definition. /s


What if every commit had to be reviewed by a certified code reviewer before it is accepted? That is how Google does it. It means you can't skip tests, etc. Not sure why more organizations don't do it; wouldn't banks, for example, benefit a lot from it?


Wonder to what extent that solves the problem. Quite often your code reviewer's interests and world view align with yours: "Right, we need to ship this feature asap, so let's worry about tests later" or "More commits boost your performance review, so I get why you sent 50 one-liner commits."

It reminds me of a similar problem in academia: the LPU (Least Publishable Unit) phenomenon, where people break one piece of work into multiple smaller pieces to inflate their paper count. It's so widespread that lots of paper reviewers do it too, so you don't get punished for it.


depends entirely on the team culture.


In my experience, it's the sloppy engineers who create problems that get the most credit. They are called upon to be 'heroes' again and again, to fix the crap they shipped with 1000 issues.


Yes, I've seen that a few times. What non-technical management sees is an engineer at the center of efforts to fix an issue in the midst of a crisis. What they don't see is that the work done by competent engineers doesn't blow up and become a visible crisis.

IMO this is a big red flag that something is wrong with both the technical and non-technical culture. Technical management isn't happening, at least not effectively, and non-technical management is getting too far into the weeds on technical details of production issues.


> -Don't test

In an environment that rewards commits, unit tests can be a source of many little commits.


Tests are fiddly to get working and could expose problems! Maybe in a new commit and PR later?


Plus tests are so slow to run, booooooring. How can you get anything done if you're staring at red text all day?


They're not fiddly if you do assertion-free testing


^^^ It will only work where no one else is reviewing your code
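For anyone who missed the joke, "assertion-free testing" looks something like this (a deliberately bad example with a made-up function):

```python
# A maximally "unfiddly" test suite: it exercises the code but asserts
# nothing, so it goes green no matter what the function actually returns.
def parse_price(s):
    return float(s.replace("$", ""))  # silently wrong for e.g. "$1,000"

def test_parse_price():
    parse_price("$42.50")  # no assert: this "test" can only fail if it crashes

test_parse_price()
print("all tests passed")  # and they always will
```

Coverage tools will happily report this line as covered, which is exactly why a reviewer has to be looking.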


I'm not so sure.

When you disagree in a code review with the "top performer with all the commits" it gets escalated to management, who sides with the "top performer" because they are a "top performer."


I've seen this over and over too.

Put a non-technical VP/manager in charge. Maybe someone who started out as a UI designer or non-technical PM and then somehow became VP of engineering because they were an early loyal employee.

Charismatic coders will take advantage and the manager will eat it up. The "rock stars" commit tons of code but create massive tech debt. They write their own buggy versions of things they should have just imported as industry-standard OSS: rule evaluation engines, broken file formats, broken home-grown encryption, broken distributed job systems, heavily broken home-grown DB frameworks.

They'll have a huge bias towards terse code that doesn't make a lot of sense. Abuse of functional code. Everything is the opposite of KISS.

Everyone else on the team is just constantly fixing their showstopper level bugs, of which there are always many.

They talk a good game and management constantly thinks the rock star is super smart and the rest of the team is deficient.

Then the tech debt reaches ridiculous levels, or you get bought and different management comes in and sees right through it. Managers get let go, new managers don't buy the rock star story. The rock star gets frustrated, leaves for another company where they can pull the same trick, and puts all this inflated stuff on the resume, claiming they wrote all this stuff they never should have written. A new naive manager falls for it, thinking they must be really smart, because the manager is never going to find out how broken it all was.

All this is WAY easier for a rock star to pull off post-2010 or so, because office culture in tech has become so PC that no one can be blamed for anything. Lots of stuff that would have gotten someone shown the door early in my career wouldn't even get called out today.


Upvoted and then retracted due to the PC bullshit at the end. You had great points, you shouldn't ruin it with your bitterness about having to be professional.


> I've had to deal with plenty of colleagues who moved very fast, committed extremely often, and were praised by management for the amount of work they produced. But in reality their work was rushed, sloppy, riddled with issues and a nightmare to maintain. And it would inevitably fall upon us "lesser" performers to actually investigate and fix all those issues, making us look less productive in the process.

This is one of the most common complaints I hear about fast devs: that they produce lots of bugs. But what I've seen is that a fast dev produces 5x more code than normal devs and 5x more bugs. The ratio is the same, but it feels different because they're producing so much more code. You then get the devs who say they have to spend lots of time investigating those bugs, so they look worse. But I've literally seen the fast dev pick up a bug that another developer had spent 2 weeks investigating and find the issue in the data within 5 minutes. You then have the slow and careful devs who will write 5 functional tests and 1 unit test, add 10 seconds to the build (not much, but it adds up when it's every issue), and still have the same ratio of bugs as the guy writing 1 unit test.

I think the reality is that a good dev is someone who produces the most business value. And something lots of devs don't want to hear is that the tiny little tweaks they make to improve code quality add very little business value, whereas a required feature that works 90% of the time adds lots of business value. I think a lot of the complaining about these rockstar devs is just jealousy. Code quality is one of the smallest tech problems, yet so many devs think it's super important. No: getting stuff delivered so other departments and customers can do things is. Being able to plan things out is super important. Having a good data model is super important.


> a good dev is someone who produces the most business value

this this this.

More code might equal more bugs, but if it's a net gain in business value everything else is a secondary concern.

It doesn't mean you can just ship crap all the time. If customers start complaining/seeing errors/bad performance, business value decreases and there are going to be some $discussions.

If developers are rewarded for shoveling garbage into the pipeline all the time, there are deeper organizational issues afoot.


This, there are company cultures where crap is shoveled and the investors buy it for a while.

The rock star thrives in this environment. The bugs don't catch up to them till later.


Isn't that just how startups code? You ship features fast and loose and then deal with the aftermath once the company is more mature? Wouldn't this be a case where the best business value is shipping fast and loose, and a slow and careful dev with 10x fewer bugs who is 10x slower is not what the company needs?

There seems to be a time and place for everything.


Yet you can do a few simple thought experiments.

What would those fast devs do in a vacuum? Or with only clones of themselves as coworkers?

What happens when one of those fast devs quits?

--

If everything is/would be fine, then yes, they are truly rockstars (in all the good senses of that word and none of the bad). Or maybe merely decent among mediocre peers.

Otherwise, there is a free-rider aspect in their approach.

--

Also, whether it can work at all depends on the criticality level of the industry.

Ship broken crap quickly (but fix it quickly too)? Fine if you're building yet another social website, maybe. Less good for, e.g., medical devices.

--

One more point: the business-value approach is not necessarily the only one to apply, especially if there are tech infrastructure components in what you ship. You can easily get stuck in local optima too far from the global one, and fossilize some aspects of your product. See for example the classic http://blog.zorinaq.com/i-contribute-to-the-windows-kernel-w... which includes an example of death by a thousand cuts.

What I mean is that moving fast is partly in the eye of the evaluator. Maybe you implement what the PM wants quickly, and that's cool, but maybe doing only what the PM wants is not the best thing for the project.

--

If you can easily plan things, have a good data model, and develop quickly, you probably don't have a real code quality problem to begin with. At least not in the parts you contribute to. I don't actually distinguish the data model from "code" that much: it's all design.

--

Final last thought: imagine you are actually a good fast dev like you describe, and your colleagues are less good, but imagine a case where the whole organization would actually benefit from you slowing down a bit and developing better ways to work more efficiently with others, or helping them improve, yielding even more business value overall. This can happen too.


> What would those fast devs do in a vacuum? Or with only clones of themselves as coworkers?

Well, considering my assertion is that the dev is just fast and produces the same quality as others: carry on working.

> What happens when one of those fast devs quits?

The company would need to replace them, or have the team produce less work with one fewer worker.

> Final last thought: imagine you are actually a good fast dev like you describe, and your colleagues are less good, but imagine a case where the whole organization would actually benefit from you slowing down a bit and developing better ways to work more efficiently with others, or helping them improve, yielding even more business value overall. This can happen too.

This seems like "What if it were better that you don't do the job you were hired for, but instead do a higher-level job without getting promoted and recognised for your skills?" Well, one: if a company wants their fast dev to teach others, they should make them a coach or something along those lines. Secondly, just because they're fast doesn't mean they can teach; sometimes the reason they're fast is that they have fewer interactions with people, and can code continuously without having to stop to talk to Jenny from Admin about how important a bug is (not actually a dev's job; there should be product/project management for that). Maybe the fast dev is fast because they've been there longer and understand the system, in which case the other devs just need to ramp up. Lastly, maybe the fast dev doesn't want to do that other role and just likes programming.


I'm not saying that the exact situation you describe can never happen, but I'm not 100% convinced it is all that frequent.

But if all your descriptions are precisely correct, then my opinion is that your conclusions broadly are too.

Simply, I will think and check a lot before projecting actual situations onto that model. For now I don't have the feeling I've ever encountered anything like what you describe. More often it was one of the variations I suggested, and maybe others.


I've worked with this type of colleague. I remember a specific example where, fresh out of university, I was assigned to pair with one. We had to create a front-end validator for the password reset feature, for users stored in LDAP. I was thinking "Wow, getting the password policies from LDAP must be a pain, it'll take a while to implement that feature". Nope, just copied whatever settings were currently in LDAP into our app's config file.

A couple months later, a pair of engineers on the team had to spend 4+ weeks developing an "installer" to properly install & configure our app, as it had grown too complex to install & configure by hand. Management couldn't really figure out why...


This used to drive me nuts, and I wish the rule were: if your code breaks something, you are not allowed to work on anything else until it's fixed, and if possible it will be reverted.

There needs to be some incentive not to let people shit all over the codebase for everyone else to clean up. Reviews aren't enough. All code was required to be reviewed by owners and there was still lots of this.


Most problems are multi-causal, and if you have a blame culture this will degenerate into a litany of finger pointing.


I'm not sure what "blame culture" is, but any professional software team should ideally have some kind of "accountability culture".

Whatever the fallout from people feeling "blamed" may be, attrition of all of your genuine programming talent due to tech-debt-machine peers being promoted ahead of them is not exactly an ideal outcome either.

What's more, very often genuine potential in naturally talented new programmers can be stunted if they're rewarded for lazy faux "productivity". I've seen this: a programmer has a genuine interest and passion for quality, but loses it over time due to a focus on doing (different) work that gets them promoted.

Code ownership (being required to "own" one's work along with the bugs and maintenance burden that come with it) is one of the most valuable ways programmers learn. This needs to be balanced, as one runs the risk of a bus factor of 1, but it's still vital.


Blame culture is when bugs, downtime, etc. happen and people both look for somebody to blame and simultaneously look to exculpate their own behavior. It leads to CYA behavior, backstabbing and massive risk aversion. It can also lead to or be a result of a toxic work environment.

In multi-causal bugs it can lead to people downplaying causes they had something to do with and exaggerating the role of a co-worker they don't like. This often leads to confused attribution and poor rectification of systemic issues, e.g. writing more unit tests even when more unit tests won't really help.

"Accountability culture" sounds like it could be the same thing. Or not. I'm not really sure.


This sounds like a bad faith culture in general: i.e. "paying a price for mistakes", rather than "owning postmortems". Treating a mistake as an opportunity for punishment (be it purely social or otherwise), rather than for learning, is more about how you treat staff & peers than how you deal with mistakes.

> "Accountability culture" sounds like it could be the same thing.

Can I ask what specific part of my description of accountability culture sounded like it lined up with your description of blame culture?


> This sounds like a bad faith culture

Yes, that’s what blame culture is: a dysfunctional organizational culture where, when mistakes are made, the organizational priority is to find and punish those responsible. The consequence is CYA and finger pointing.


There is "blame culture", where every problem turns into a witch hunt, and there is the opposite, call it "anti-blame culture" where no problem can be declared, because, well, it's never the only problem to solve, and thus can never be solved.

Competitive entities almost never fall victim to the anti-blame kind, but where there is little competition, it is just as common as its opposite.


This is somewhat analogous to programmers who type very fast but delete half of what they type. Again, this is very subjective; most of the deletes can be legitimate. There is a difference between deletes that are mistakes and deletes that arise from a thorough process of change. I digress, but to a casual observer the person typing fast appears like a pro, someone who is really good at their job. While this can be true of other professions (a carpenter who is good at his craft will be extremely fast), I'm not sure the same holds for the typing speed of programmers.

If you are a trial-and-error type of programmer then of course speed matters a lot, because it directly affects how fast you can iterate and increase the sample space of your trials. The frequency of commits and PRs can be seen in the same light.

So IMO it's hard to find a true universal measure to identify the top programmers.


Yeah this is a terrible measure of productivity. Counting commits and PRs strikes me as the same as counting lines of code, just with more steps.

What's even worse is if this metric ever becomes a target, Goodhart's Law will apply, and then Campbell's law will make it as useless as LOC.


Maybe a proxy for good contributions is the lifespan of their contributions rather than frequency?


https://dl.acm.org/doi/pdf/10.1145/3386321

Check out pages 71:26 and 71:27 for "Codebase introduction and retention" between Clojure and Scala. I'd like to see more graphics like these to illustrate "lifespan of commits"


Those are great graphics. I really wouldn't mind my contributions being measured in this way, a sense I've never had about commit metrics before.


I think this can also be problematic. Long lived code _must_ work, which is good. However, it can also live a long time for bad reasons. I have seen long lived code that only lives a long time because it is so complicated that no one can understand how it works, or written in a way that is very hard to change. So it lives a long time, because no one wants to take the time or effort to touch it. Finally, when it _must_ be changed to support a new feature, it might require a full rewrite.
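For concreteness, the "lifespan of contributions" idea discussed above could be reduced to a toy retention calculation. This sketch assumes you have already extracted, per commit, how many lines it added and how many still survive in HEAD (e.g. by parsing `git blame` output); the numbers below are entirely made up:

```python
# Toy "commit retention" metric. The survival counts would come from tooling
# such as parsing `git blame --line-porcelain`; here they are invented data.
commits = [
    {"sha": "a1", "added": 120, "surviving": 90},
    {"sha": "b2", "added": 40,  "surviving": 0},   # fully rewritten since
    {"sha": "c3", "added": 10,  "surviving": 10},
]

def retention(commits):
    """Fraction of all lines ever added that are still alive in HEAD."""
    added = sum(c["added"] for c in commits)
    if added == 0:
        return 0.0
    return sum(c["surviving"] for c in commits) / added

print(round(retention(commits), 3))  # 0.588
```

As the comment above notes, a high score here can mean "solid code" or "code nobody dares to touch"; the metric can't distinguish the two on its own.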


Being considered the go-to person means you are first in line to make small tweaks (and config changes in code).

This increases the change count.

If there is a review process where another engineer has to approve work, this exacerbates the gap, as the go-to person can get their reviews done quickly. If they're trusted, the reviews might not be thorough.

This increases the rate at which changes go in.

These and other factors suggest that it's hard to split cause and effect here. Being seen as productive increases change count :)


This also seems to be labeling people as "top performers" based on how much code they get done.

And then measuring how many commits they make and wondering why those are correlated? Even besides the good reasons you point out, this seems very obvious.

Also, my experience has shown that smaller commits are easier to work with so people with experience tend to make more smaller commits. This also seems to be fairly widely talked about.


With GitHub's API, commits are so easy to game. I'd never use commits as a measure of how good of a programmer someone is. I made a Python script in 15 minutes to commit to a repo every second, and the repo is sitting at about 750,000 commits.
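The trick in question needs nothing beyond plain git. A sketch along those lines (assuming `git` is on the PATH; the loop count and author identity are arbitrary):

```python
import os
import subprocess
import tempfile

# Sketch of how trivially commit counts can be inflated: create an empty repo
# and stack meaningless commits. Three is enough to make the point; a loop
# with a one-second sleep would reproduce the commenter's script.
repo = tempfile.mkdtemp()
env = {**os.environ,
       "GIT_AUTHOR_NAME": "bot", "GIT_AUTHOR_EMAIL": "bot@example.com",
       "GIT_COMMITTER_NAME": "bot", "GIT_COMMITTER_EMAIL": "bot@example.com"}

def git(*args):
    subprocess.run(["git", "-C", repo, *args],
                   check=True, capture_output=True, env=env)

git("init")
for i in range(3):
    # --allow-empty: no file changes needed, the commit object alone counts.
    git("commit", "--allow-empty", "-m", f"totally real work #{i}")

count = subprocess.run(["git", "-C", repo, "rev-list", "--count", "HEAD"],
                       capture_output=True, text=True, env=env).stdout.strip()
print(count)  # 3
```

Every one of those commits shows up in a raw commit count, which is the whole problem with the metric.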


are you the guy hosting that "time as a service" github repo? Where the time is always updated and you can curl the json or whatever and get the time?


Nope, not at all. Do you have any links to what you're talking about?


Perhaps some of the more productive workers are the ones that don't hesitate to make necessary code changes, test the changes adequately, and then move on to the next thing.

I have noticed that pretty much every software engineer to some degree has problems that they procrastinate on. Folk can spend 10x more time talking about doing something than it takes to do it. For hard problems, that discussion is necessary and beneficial, but lots of problems just need someone to open up the text editor and get it done.


I worked as a contractor with some companies and peer coded with their engineers.

What I found was that it's not just procrastination. Many folks are literally scared to commit code, and I could never get a real reason for it. At one point I added some code based on the direction the requirements were taking, but I could not convince him to commit it.

So we finally agreed to leave it there commented out, only to find a week later that we needed it.


Sounds about right! I think that's an indication of lack of sufficient regression test coverage, coupled with complacency/fatigue from working on the same code base for an extended period of time.

Unfortunately, the most business-critical pieces of code tend to be the least regression tested. It's the earliest stuff, written before any test frameworks had matured; it's been hacked on countless times by half the team in response to shifting requirements, to the point that no one understands it fully; and any changes to it are such a high priority that it's "faster" to test manually or in production.

I am of course guilty of that myself on some pieces of code. I try to prioritize cleaning up expensive tech debt, though. When folk are hesitant to modify a piece of code, it's a strong indication that the code is due for refactoring. It's always worth it as long as you implement regression testing in the process.


Pretty much this. I suffer huge anxiety about committing code where I work.

The regression testing is flaky and the CI/CD procedure is ridiculously complex and black-box.

There's very little transparency about what is or is not covered. Failures are common, cryptic, and intermittent.

Where I know where the tests live, I commit without hesitation.

Other random shit I happen across, I avoid like the plague.

It doesn't help how big the sites are, either.


Because the reasons for fear are not something people would tell you. I was in a similar state twice, and neither time would I have been eager to explain it to someone external.

I was "afraid" to commit in a team where code review had evolved into heavy micro-management with inconsistent requirements on what good code looks like. The reviewer iteratively forced you to change things again and again, each time with "this is bad code, can't go in" comments. But you could never learn what he considered good code, because it was different every time. I left after that.

The second time, I was "afraid" to commit to a part of the code after our main architect had a completely disproportionate blow-up over a previous bug. I had made an easy-to-fix bug, which was my mistake. But it turned into a massive public blow-up about the work being shitty, about us intentionally ignoring his needs, about there being tons of bugs and an unusable version (there was literally one bug). Then he wanted a massive refactoring to avoid the possibility of the same bug... and I made a bug in the refactoring, which led to the same blow-up, again claiming the work was done without care, etc.

After that, I really did not want to make any changes in that code. I can't guarantee a complete lack of bugs in my code. Other people write bugs too, for that matter; I don't think I make that many more of them. But he was under pressure and stress that had nothing to do with me, and I became the lightning rod for it.


That sounds miserable. I think, as a reviewer, it's important to keep in mind that most problems have many solutions. You have to be flexible and work with the person who did the hard work of implementing it. Everyone messes up and lets a bug slip through every now and then; that's what multiple layers of thorough testing are for.

On my current team, I'm considered the most thorough code reviewer. Folk usually thank me rather than scorn me, though. Some are afraid to send me their PRs not because they fear my feedback, but because they think reviewing their code will take up too much of my time. Reviewing code thoroughly doesn't take much time at all, though, if you make it a habit.

If it's a bug, I explain my concern and suggest a fix. If it's a style nit, I link to the relevant style guide section. If it's a suggestion or personal preference, I explicitly say that, write out my rationale, then offer to chat more. If it's a weak suggestion, I tell them right off the bat that I'm fine either way. If it's a strong suggestion, I try to give them an "early out" by suggesting they simply add a // TODO comment. I'll let a coworker get away with murder as long as they leave a TODO comment.

For new team members that send me a PR for the first time, I typically send a message at the start describing what they should expect. That helps a lot, because everyone's first few PRs are going to be rough until they've gotten up to speed on the existing team's expectations.

I will say that there are some engineers who just don't take technical feedback well. When they join an existing team, they can be stubborn and refuse to adapt to the established culture. Instead, they either misinterpret criticism as personal attacks, or get frustrated and insist that the team conform to their preferences right off the bat. Team culture can and should evolve over time, but that requires respect for and understanding of the status quo. It's possible for an open-minded engineer to join a team, embrace its current culture, then radically change it over the course of a few months. It's also possible for a closed-minded engineer to not get more than a dozen PRs approved over the course of a year. Not to say that such engineers are good or bad one way or another, but rather that folk should seek out projects compatible with their personality.


Also, it helps a lot to hammer out design and implementation details before writing any code. For any work project that I anticipate will take more than a few hours to code, I spend an hour or so writing an "architecture document", which I then distribute to anyone I think will have an opinion about it. That gives folk a place to ask questions and "bike shed" the problem long before I've invested any "artisanal coding energy". By the time the code reviewer looks at the code, they know what it's supposed to do and why it was written that way. As a pleasant bonus, I usually code about 3x faster when I have the document to reference.

I try to get new hires into this habit as early as possible, with varying degrees of success. The folk who embrace it tend to be very productive; I don't know if it's causation or just correlation, though. I have a slide deck emphasizing function over form, titled "Writing Mediocre Architecture Documents". It's a cult classic hit.


> Writing Mediocre Architecture Documents

Would you be willing to share this slide deck?

My question in this regard: how do you write an architecture doc if you're exploring the problem space? Say it takes 5h to explore, experiment, and find the solution; 0.5h to write the doc; and 0.5h to make the change once you know the solution and have reverted the experimental/exploratory changes.

At that point, is the doc necessary as a precursor to the change? By that I mean, do you send it out for feedback first and wait, OR do you send it out / add it as a comment on your PR and push up the final change?

Additionally, how do you know you'll get feedback at all from sending out the doc? In my experience people are always busy with their own work, and sending them a big chunk of text... well, I wouldn't expect a response as quickly as for a short 1-2 line question.


I consider any code that's written to be "speculative" until its design has been vetted by another engineer. By speculative, I mean that there's no expectation that the code will be approved and merged in its present form. An architecture document is simply a good way to get that buy-in. If I write down what I plan on doing in a document, and then solicit feedback on it from a coworker, there's a very good chance that the same coworker will approve the resulting code without anything but bug and style fixes. If you're exploring the problem space, yeah of course go ahead and hack away at some code. If the experiments go exceedingly well and you end up with production quality code, go ahead and throw it in a PR for review. You have to be receptive to "bike shedding" feedback, though, since the PR is the first opportunity you gave for folk to give that type of feedback.

It's also perfectly reasonable to write and distribute a document that effectively says, "I have no idea what the requirements are, but I have an arbitrary idea for how we should proceed anyway." More often than not, your coworkers will help flesh out the requirements, or agree with your arbitrary design decisions. Either way, it's a lot easier to have that type of discussion on a free-form wiki page rather than in a github PR that you already spent 4 hours getting to compile and pass test suites.

Re: soliciting feedback, even verbose architecture documents tend to be relatively easy for your coworkers to digest. It is communication written for human consumption rather than machine consumption, after all. For one thing, you can have a lot of fun with them. One of my recent arch documents had at least a dozen MC Hammer puns in it. I had no trouble getting anyone to read through that. Also, if you give your coworkers the opportunity and they don't follow up on it, it does give you the high road in any eventual PR contention. "I mentioned this a week ago and asked you for feedback on it, why are you making a fuss about it now?" Not that you should be setting up your coworkers like that. You should make a good faith effort to solicit feedback, and keep politely pestering folk until they get to it. Until you get buy-in, any code written is still speculative, even if you tried your best and your coworkers let you down.

It helps to lead by example. If you prioritize giving feedback to coworkers over your own implementation work (reading their documents, doing their code reviews in a timely way, etc.), then your coworkers will tend to notice and reciprocate.


That's very insightful. Appreciate the write up. Thank you.

I noticed the slide deck stuff in a sibling comment, +1


Is that slide deck publicly available?


Nah, but here's the content. You'll have to add your own clip art hastily pulled from google image search:

---

What’s the point?

Architecture documents are for you, not for everyone else

* Adds structure to chaos
* Makes feedback progressive rather than regressive
* You get to tell people to “Read The Friendly Manual”

---

But there’s so much boilerplate!

So don’t fill out the useless parts...

* Delete sections that aren’t relevant
* If there’s a better format, just do it that way
* Complain to <template owner> if the template is silly

---

The requirements are too ill-defined to write down

I’m sure writing code will solve the problem then. /sarcasm

* This is exactly why architecture documents are helpful
* Just barf out some bad requirements and ask for feedback
* If no one has any ideas, we’ve got bigger problems

---

I’m the one writing the code, back off!

Software engineers are experts about everything and tend to have opinions.

* Enumerate alternative approaches and politely explain why they are dumb
* Detail out the chosen approach
* When in doubt, write out function signatures

---

When is enough enough?

When it starts to feel passive aggressive

* Writing code without an architecture doc is speculative
* “Wasting” 30 minutes on a doc can save hours in a PR
* I’ve never regretted writing an architecture document

---

Super Fun Activity Time

Whatever you’re currently working on, write an architecture doc for it.

(If you already have one, quit wasting time and go write code!)

You have 15 minutes!


> where code review evolved into huge micro-management with inconsistent requirements on what good code looks like.

In my org this can be somewhat of a problem (different teams have different code style guidelines for long historical reasons which can make cross-team changes tricky).

One thing that helped a lot with this is publishing these guidelines in a central location, but more importantly we also made bots that reviewed every code review to point out common mistakes that people made, so reviewers could focus on the change itself instead of naming/spacing nits.
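
As a sketch of what such a bot can look like: this toy version only flags trailing whitespace with grep (a real setup would run whatever linters/formatters the org standardises on), but the shape is the same — scan the changed files, report the nits, and let humans review the change itself. Repo, identity, and file names here are all made up:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com \
       GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
git commit -q --allow-empty -m "base"
printf 'clean line\n' > a.txt
printf 'trailing space \n' > b.txt
git add a.txt b.txt
# The "bot": scan the files in the change and report style nits,
# so a human reviewer never has to type them out.
nits=0
for f in $(git diff --cached --name-only); do
  if grep -nH ' $' "$f"; then nits=$((nits+1)); fi
done
echo "files with nits: $nits"
```

In a real pipeline the loop body would call the team's linter and post the findings as review comments on the PR.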


I know I've been testy with a coworker or junior before, but it's so counterproductive to have a workplace with fear. Process can lead to trust and confidence, so I try to focus on identifying and improving that (as well as mentoring around risky patterns and such): tests, reviews, tooling, etc. If simple mistakes are affecting the team, it's really a process failure.

Fear driven development is toxic.


Obviously these are people problems. If my next team suffers from your first problem, I might try to ease it by introducing a code formatting tool like prettier. I don't always love the choices those tools make, but I do love eliminating that class of PR comment.

It doesn't eliminate the people problem, but it does limit its scope.


Commented-out code is a terrible idea. It is the reason you look at source code and see it filled with garbage. When you have a developer who has no confidence in their code, you ask the usual questions about requirements, testing, code review, etc. If the code still isn't committed, then there's clearly a developer issue to work out with management.

When a developer fails to follow through on their tasks and takes extra time, it increases the budget of the project.


I understand I have put that up almost without any context, so let me add some.

It was (and is) a very large project, still in development (my last interaction was more than 3 years ago), being built by a very big company in the tech space. Various parts of it were outsourced to various vendors (including Indian software shops), and I was a consultant to one such vendor. There were a huge number of layers of management, while all the usual questions you mentioned would have the answer "yes".

So yeah, the devs were in general scared of both the code and management. They wouldn't ask questions, just say "yes" to everything.


That's also a sign they aren't comfortable with the version control system. I used to be like that, and with cvs and svn it was somewhat justifiable. Now that we have git and it's everywhere I'm far less anxious. Not that git is perfect and things don't happen, but it's extremely good at not Screwing Up. It may not do exactly what you expected, but it doesn't break the repo. It also tries really hard to never forget anything unless you make an effort to force it. It might take a bit of digging around in the reflog to figure out how to get back to a sane state, but in every case I've had to deal with it was possible.
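
For anyone who hasn't had to do that reflog digging yet, the recovery usually looks something like this (throwaway repo, made-up identity):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com \
       GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
git commit -q --allow-empty -m "first"
git commit -q --allow-empty -m "precious work"
git reset -q --hard HEAD~1      # oops: "precious work" vanishes from the branch...
git reflog                      # ...but the reflog still remembers every move of HEAD
git reset -q --hard 'HEAD@{1}'  # jump back to where HEAD was before the reset
git log -1 --format=%s          # prints: precious work
```

The same `HEAD@{n}` trick recovers from botched rebases and branch deletions too, as long as the objects haven't been garbage collected.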


Have I been working in a bubble?

I'll give a coworker a hard time for making large infrequent commits, but I've never seen someone afraid to commit code. This sounds like the value proposition for version control hasn't really clicked for them.

Are they comfortable branching?


I think it's less about version control and more about the fact that the change is then associated with the employee's name, so if something ever goes wrong, it would be possible (easy?) to blame him. Therefore he's scared of doing anything, because it could cause trouble for him at some point in the future.

And, imho, that goes back to not enough testing and no safety nets to check for code errors (like code review, static analysis, ...).


Clearly I am living in a bubble, because I cannot imagine working somewhere where I would rather be seen doing literally nothing than to do the exact thing I was hired for.


By commit, I think the intended meaning is "put on a path that will ultimately affect production".

For my current project, merging bad code typically means breaking a bunch of regression test suites for any coworkers that cut their branches at the wrong time. In more nefarious outcomes, it means a delayed software version release and, potentially, damaged UAV hardware at our test site.

For my previous project, merging bad code could have resulted in someone losing control of a fully drive by wire car in a closed parking lot or test track. That's what big red buttons are for!

For my project before that, some of my code was involved in handling literally every single sensor and actuator on a rocket and the space capsule on top of it. A subtle bug that slipped through to production could have been rather serious indeed.


If individuals are fearing the repercussions from doing their jobs on a safety critical system like that, then the process has already failed.

Your process needs to be so bulletproof that everyone involved feels absolutely sure that a defect will be caught. If there's any doubt in their minds then that part of the chain needs to be addressed and corrected.

I'm not saying that people shouldn't take it seriously or should get sloppy, but if you're worried that making a commit could end in disaster then you're moving too fast.


What do you consider to be large and infrequent? I see so many micro-commits that I start to wonder if some competition is going on...


> Folk can spend 10x more time talking about doing something than it takes to do it

When I see this happening in the team, I immediately try to get the engineers to implement 90% and then talk about the missing 10%. There are a million reasons why a solution isn't perfect, and we need to put as much thought into it as possible. At the same time, we have to keep in mind that everything we do is a tradeoff. If you think your solution is perfect, you probably lack knowledge.

I prefer to ship 90% and then see how we can improve on that with data I can't produce if I don't ship anything. Talking about hypothetical future problems in circles doesn't solve the problems we have right now.

Don't get me wrong, I'm one of those 10x-talking-time guys myself. But I know how to keep the ball rolling in a team and take responsibility for these types of decisions, which look like educated corner cutting.


I think this point depends heavily on the complexity of the project. If the project is very complex (e.g. P2P, blockchain, distributed messaging, machine learning, ...), then you actually want engineers to think really hard about every line of code they write.

The best, most long-lasting code I ever wrote took a long time to write. Sometimes I have to think about a single small feature for multiple days before I begin to implement. Some foundational structural technical decisions require weeks of thinking and analyzing. It's the best way to guarantee that you don't have to come back to rewrite the code later.

When I was younger, I would refactor some of the foundation logic every few months as I added more changes on top. These days I almost never refactor the foundations. That extra time is totally worth it. It's very hard to come up with a good design.


I think this and discipline are key aspects. I see average-performing engineers code by assumption, or they know the "right way" to do something, which may take a lot of setup and refactoring and testing, and think no, it'll take too long. But then your code goes to review, and it takes 2-4 days and 3 rounds to get approved. Maybe that extra 2 hours of coding time would pay off over those 4 days. I think the top people have experienced this, know it, and do all of this in their first pass unprompted.


There's a pretty subtle pitfall that teams can fall into here.

Let's say a team has 2 developers. Josh is 20% "better" than John, or simply started earlier and has more context on the code base. So initially, Josh is 20% faster, but now John has to spend an extra 20% of his time reviewing Josh's code in a pull request (or understanding Josh's code so he can make a change) instead of making forward progress. So now John is actually more than 20% less effective than he could be, and he has less time to actually code. And since he's less productive, Josh has __more__ time to code, so he's even faster, which means he writes more code. It compounds, and even an initial slight advantage in speed or context for one developer can amplify itself over time.

A good engineering manager or senior engineer can detect when that's happening and try to correct the balance. But often the team kind of settles into a mode where Josh is known to be better and more productive and everything is funneled to him.


Paraphrasing:

Josh has the initiative, forcing John to react.

--

In my experience, Josh is a firehose of chaos: doesn't test his own work, colors outside the lines. So in addition to reviewing Josh's torrent of bs, John is always playing catch-up and always has to do more rework.

Further, it's not a balanced relationship. Josh creates urgency to fast-track approval of his own PRs, then goaltends John's work: pedantry over everything; letting PRs go stale, so John has to re-merge, resetting the whole process; insisting the commits are "too big", "hard to understand", and therefore need to be broken up.

Etc.

Individual agility and velocity are evil; they reward dysfunctional behavior.

If the whole team isn't committed to getting the whole team across the finish line, it's not a proper team.

PS- Additional dysfunction if John is constitutionally incapable of refactoring, removing dead code, and other good citizenry.


I've found myself in this sort of situation a lot, including on 2-person teams. While I'm pointing out all the bugs, maintainability issues, etc. in their commit, they're busy writing a new commit full of the same types of issues. And on the flip side: because I self-review my code with the same attention, I rarely have any of those same sorts of issues in code that I ask others to review.

I'm lucky in that my company recognizes and appreciates the quality that I provide with my work and encourage from others, but I'm not sure how to actually address the imbalance. Many times I've thought of just asking engineers to put more effort into self-reviewing their code, but I always feel like it would be too rude.


> If they find an issue along the way, they make a note of it and come back to fix it. Or they might fix in as they go.

One consistent tension is that the old-hand developers have a "mental issue queue" that is enormous but, without fail, every time, just can't be transferred.

These can't be made into issues and farmed out to other people. Inconsistencies in the data model, for instance, might exist, but a better solution isn't obvious. You can hand it to someone fresh, and after significant effort (on both of your parts) they agree with the inconsistency, but they won't propose a solution that's any better.

Once you've contributed enough of the main functions of a code base, you just never lack for something to do. All code is bad, because the business focus is on expansion of responsibilities over refinement of the existing ones.

Those ghost issues are best communicated with a code change, at which point all observers say WOW, WHY DID WE NOT SEE THIS?! But the issue without the code change gets gawking blank stares.

EDIT: but fixing unrelated issues as you go is bad, don't do it!


> old-hand developers have a "mental issue queue" that is enormous

> a better solution isn't obvious. You can hand it to someone fresh, and after significant effort (on both of your parts) they agree with the inconsistency, but they won't propose a solution that's any better

> Once you've contributed enough of the main functions of a code base, you just never lack for something to do.

Hello, friend, I see we know each other well.


Lol, when I worked at a unicorn one of my buddies got an award for "most testing code commits"

He confided after drinks that it was because he didn't know how to squash/amend his commits.

So I'm not so sure about that metric....


Is that why people are always saying you should squash commits? To help with collecting metrics?!

I view it as a clear antipattern (since the history within a branch can be valuable later if you need to cherry-pick apart a feature or find a bug with git-bisect) and have asked superiors in numerous places why they require it, and the response is usually a vague mention of “it cleans things up” and “history isn’t important”. It feels like the kind of practice that was mentioned on a screencast and just got cargo-culted, but I have to think it originally had some purpose.
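
To make the git-bisect point concrete, here's a toy repro anyone can run: several small commits, one of which introduces a "bug", and `git bisect run` pinpoints it automatically. (Throwaway repo; the "bug" is just a marker string, and the test command is a stand-in for a real test suite.)

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com \
       GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
for i in 1 2 3 4 5; do
  echo "feature $i" >> app.txt
  if [ "$i" = 3 ]; then echo "BUG" >> app.txt; fi   # commit 3 sneaks in the defect
  git add app.txt
  git commit -q -m "small change $i"
done
# Bad = current HEAD, good = the very first commit; let the test drive bisect.
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"
git bisect run sh -c '! grep -q BUG app.txt'
git bisect log | grep 'first bad commit' | tee culprit.txt
git bisect reset
```

With squashed history, bisect can only tell you "the bug is somewhere in this one giant commit", which is exactly the granularity loss the parent is describing.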


To me it's rebase > squash > merge. Nobody wants to read a commit history and see a page-long list of "fix audit", "fix typo", "retry ci", "test: change foo", "Revert: test: change foo", "Revert: Revert: retry ci"... That's why you rebase into a sequence of logical, if fictional, worksteps. But if you can't do that, squash is second-best.
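
For what it's worth, the "rebase into logical worksteps" cleanup can be scripted for demonstration purposes: a "fix typo" commit recorded as a fixup, then folded into the commit it repairs with `--autosquash`. A sketch in a throwaway repo (made-up names; `GIT_SEQUENCE_EDITOR=true` just accepts the generated todo list non-interactively):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com \
       GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
git commit -q --allow-empty -m "base"
echo "fetaure" > f.txt && git add f.txt && git commit -q -m "add feature"
echo "feature" > f.txt
git commit -q -a --fixup=HEAD            # records the noise as "fixup! add feature"
# Autosquash reorders and folds the fixup into the commit it targets:
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash HEAD~2
git log --format=%s                      # just "add feature" and "base" remain
```

Interactively you'd run `git rebase -i` yourself and reorder/squash/reword in the editor; the end state is the same clean sequence of logical commits.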


My preferred approach is rebase (exactly as you describe) while keeping the original fork point (unless it becomes necessary to depend on some later work from another branch, the original base or not), and then merge.

So you end up with a cleaned-up branch history via rebase, and a master branch that's similarly clean, with a higher level view of 'logical commits' that 'merge X feature', 'merge Y bug fix'.


With rebase I keep my commits small; they're easier to merge then. I've seen strange things slip through large commits, e.g. someone (me) missed the deleted files, rebased, and hence recreated them.

Then I tried to squash them before merging. But if you squash your commits you get one large commit, and you get the same problem. Oh well...

I guess it's easier when you have small features.


No, that's not why.

If your workplace requires just 1 commit per change, no matter the scope, that doesn't make a lot of sense, but there's a lot of room between never squashing and always squashing all changes to 1 commit. Both extreme approaches make little sense to me. Some history is important, some is not.

Squashing commits doesn't have to mean turning 50 commits into 1. It can mean reordering, squashing some commits, tidying up commit messages and generally editing until the set of changes is clear and coherent. This lets you commit early and often during development on an unpublished branch without concern, then tidy that up into a coherent set of changes for readers (including your future self). The absolute numbers don't really matter, for me at least it's more about reorganising and editing changes to read coherently and be properly separated. For example if you need steps 1,2,3,4 to make a change, keep those separate but don't include 2a,2b,2c which were exploring 2 and finding a few places you missed a change when you tested it.

I see it as basic respect for future readers, much as you might revise and edit an essay or novel before publication, revising and editing your code changes at least once often makes them better and clearer.


Commits should be squashed to clean up iterative work, corrections, etc. I commit constantly while developing and testing, especially if I want/need to let a CI pipeline do builds during development. It's good to squash them all when the work is ready to be merged, so that there's a single, clean, clearly explained, atomic commit to add functionality. No one gains anything from seeing a dozen work-in-progress/cleanup commits. The history of how I got to the merge point isn't important. A single unit of new, tested functionality that's ready to merge only needs one clear commit in most cases.


Squashing commits actually makes history important. The history of how you reached the solution in your branch is not important; I am not interested in a WIP commit, which could indeed have a valid commit message but still isn't final. Only the committer is interested in that.

So I think squashing the commits just before merging makes complete sense. Before that, the committer is free to do whatever they like.

Of course, if the diff is so big that multiple commits are sensible, that means the PR is not broken up correctly. Then it's fine to have multiple commits, but the problem lies elsewhere, not in squashing them.


I think that squashing commits sometimes makes sense. Like, is it worth retaining two commits if the second one is just fixing the formatting, logs, or metrics from the first one?

Also in open source I think it can be easier to keep track of the history if one commit == one PR.

I agree people kinda just cargo cult it though, good to be thoughtful about the trade offs for your team or project.


I think it's just people protecting their egos and hiding their dev process. I've never been reading the commit history and thought, "boy, I wish these were squashed." Much more often I wish commits were in smaller, more reviewable bites.

I understand why it's there. People want to sweep the details away. I feel it about my own commits as well. However, if I have to go back and read the history, I'd much rather read the ground truth.


It might depend on the CI process. If CI is only run on the tip of the branch being merged, then the PR should be squashed; otherwise, if a rollback is required, it would be possible to roll back to a commit that was never tested by CI. Unless there is a list of "known good commits" somewhere.
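
Concretely, a squash merge guarantees every mainline commit corresponds to a branch tip that CI actually tested, so any rollback target is a known-good state. A sketch in a throwaway repo (made-up names; `$main` captures whatever the default branch is called):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com \
       GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
git commit -q --allow-empty -m "mainline: initial"
main=$(git symbolic-ref --short HEAD)    # works whether the default is main or master
git checkout -q -b feature
echo a >  f.txt && git add f.txt && git commit -q -m "wip 1"
echo b >> f.txt && git commit -q -am "wip 2"
# CI tested the tip of `feature`; squash-merge so the mainline gains exactly
# that tested state as a single commit you can cleanly roll back to:
git checkout -q "$main"
git merge --squash -q feature
git commit -q -m "feature X as one CI-tested unit"
git log --format=%s                      # mainline history has no wip noise
```

Hosted platforms' "squash and merge" button does the equivalent of the last three commands.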


Don't most systems have a merge commit for the PR? I know TFS/Azure DevOps does.


“Top performer” here means “top performer in a team, relatively to teammates”, which narrows the definition substantially. There are mysterious geniuses who can deliver a great piece of software just as an experiment/PoC out of the blue, but they don’t tend to shine in such environments—they could be founders or indie consultants, or it could be their side-project persona.

(I.e., stating the obvious: if you are intensely working on something on your own, magically starting to atomically commit each change with a thoughtful message will not make you better but can easily eat 1/3 of your time. It's OK for your commits to be "fat" as long as you yourself manage to keep track of your work in the meantime.)


I batch it based on what could possibly make sense to revert.


I batch it based on "ok, I need to commit and push this in case my computer dies".


When I batch, I batch based on what could make sense to lose. More often than not it's small quick-fire tweaks, though. I even made a small CLI tool to quickly deploy it: `happy "Message"`, and even optionally release an npm version: `happy --minor`.
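
The real `happy` tool is the commenter's own; purely as a guess at the commit-and-push part, a minimal stand-in might look like this (the npm-release flag is omitted, and the repo in the demo is a throwaway):

```shell
# Hypothetical stand-in: stage everything, commit with the given message, push.
happy() {
  git add -A &&
  git commit -q -m "${1:-wip}" &&
  { git push -q 2>/dev/null || true; }   # push only if a remote is configured
}

# Demo in a throwaway repo:
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com \
       GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
echo hi > f.txt
happy "Message"
git log -1 --format=%s                   # prints: Message
```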


3 years later, good commit messages are extremely useful, and most of the time very short ones and/or giant commits are mostly useless.

I occasionally read back multiple-page commit messages I wrote and wish I had dumped even more info from my brain at the time.

If you work on small and short projects, you can do pretty much anything though.

Of course it can still be unreasonable, but that's true of quite anything else. If in doubt, I would say write more, because if you really track your time in a detailed way, you will see that the marginal additional time is often shorter than it feels. It taking 1/3 of your time may even be justified in some cases (though maybe then you should be writing some of it as documentation in another form).

And I would not be so sure about it not magically making you better: it is sometimes extremely useful to just write things down, much like it is useful to explain a bug to a rubber duck.


True; during times when I juggle multiple mature projects, I find commits extremely handy, since I can't comfortably fit enough context in my head.

But a lot of the time my work is a string of exploration and in-depth sizeable PoCs until something solid and worth documenting rigorously is born. I strongly feel that if I spend time writing up each change as a “green” atomic commit at earlier stages when things are routinely rewritten and rethought, I might never arrive at the latter stage in a satisfactory way.

During such intense work, commenting about my implementation is a radically smaller context jump for me than switching my mind into “commit mode”. This is likely individual: I tend to be overly perfectionist with commits, trying to never mix style and logic changes in one, to make sure a fully functional version of software can be built at any given point in history, etc.

From another angle, I think a lot can be achieved with other forms of documentation, which you mention and which should not be underused either. Of course, best is having your APIs, architecture and units properly documented. It is undeniably bad form to refer others or future yourself to comments, especially for why’s. However, it’s arguably even worse to direct them to commit messages (though in a team it could be an invaluable troubleshooting instrument, that is not the context I have in mind).

(I would never advocate “fat commit” approach to be applied to teamwork on larger projects. Just wouldn’t recommend taking commit count as an axiomatic measure applicable at all times.)


OP mentions three things: staging their work, velocity, and sense of agency/ownership. All three are problematic.

Often those who have worked with a component the longest and/or wrote significant parts of it become its maintainers, either formally or de facto. As a maintainer (I've been one myself) it's simply easier to get commits in, not necessarily because you're better at the work but because of the role itself. You never have to re-work your commits to conform to someone else's idea of how things should work. People trust you, so reviews are often cursory. Many maintainers return that courtesy by subjecting others' work to excruciating review, slowing them down. Sometimes it's intentional, sometimes not, but the result's the same.

Also, a maintainer often has many commits that started off as someone else's idea but that person didn't have the time or knowledge to complete them, so those are kind of low-hanging fruit that inflate the numbers. A mediocre maintainer will usually still have more commits than even the most talented non-maintainer. Calling them "top performers" because of something that's part of the role seems a bit circular.

So much for velocity and ownership. As for the part about staging commits, I'd like to see some evidence. In my experience there's no difference, or sometimes the maintainers are even less likely to break up commits. This can be because maintainers are often charged with making commits with lots of internal dependencies that make them harder to break up, or because it's easy to get a stamp even on a questionable commit from people who are dependent on their goodwill to get their own commits in.


OTOH I'm tired of people who don't "have the time or knowledge to complete [their ideas]".

Ideas are cheap. Show me the code.

Asking, explicitly or implicitly, for others to "implement/finish" their ideas is easy. I would even call never finishing/polishing anything very disrespectful: I'm not (and should not be) here to clean up after "talented" individuals. This is detrimental to my own "ideas."

So at least, if nothing else, I'd better be recognized for the boring and tedious maintainership work I do, which the talented people with their "ideas" refuse to perform...


> Ideas are cheap. Show me the code.

I mostly agree with you, but also don't want to be too developer-centric. Sometimes the ideas come from people who aren't primarily developers - production engineers, system architects, etc. Let's say an idea from such a person is fundamentally good and will benefit the project but they lack the time or skill to do more than prototype it. What should happen?

(a) Drop it on the floor.

(b) Let their patch(es) languish and eventually get reaped. Same result in terms of functionality, plus it contributes to a "maintainers aren't open to new ideas" reputation, which harms recruitment/retention.

(c) Get a developer with more knowledge/skill to finish it. In general, this is just going to be the maintainer, because nobody else will have a strong enough sense of ownership to sacrifice time toward their own goals for it.

Obviously this is going to be case by case, but I'd say that (c) is at least sometimes a valid answer. Depending on the makeup of people involved with the project, which in turn might depend on the nature of the project itself, it might even be quite often.


I kind of agree with you.

I was thinking of people perfectly able to do/finish the job. If that's not their job to begin with, they have another one and can be evaluated on it. If it's just that their time is too precious to do it (and it may well be!) and they will actually bring value in another way, then so be it, but let's not overestimate the value of the dev on that project compared to the people actually doing the work. Because we are not talking about design vs. production; we are usually talking about rough high-level design vs. high-level refinement/debugging + all-level design + QA + iterating, rinse and repeat...

And like I said: ideas are cheap. Fundamentally. Well, most ideas, but it would take case-by-case judgment to recognize the remarkable ones out of the mass, so I'll continue with my simplification that "ideas are cheap." If I say that a microkernel still makes a lot of sense and should be the future of new OSes / VMs / etc., well, it is controversial, and maybe Linus won't agree. But if enough people put in the effort to develop an excellent and very successful system from that, maybe Hurd done right, while I simply move on to my next idea, then even if the system is eventually successful I should not be overly praised for it. The people who did the work should be. The mere idea-bringers will be judged on other aspects anyway, and this takes nothing away from their contributions elsewhere, maybe in other activities, etc. I'm not trying to declare the superiority of development over other things. I'm trying to explain that development is to be judged by development criteria.

And so in the end, counting the number of commits (or whatever) is kind of an idiotic metric, yes, but maybe not as much as some people seem to think. I see it like estimating complexity via the number of LOC: it's rough, you may need to apply various adjustments here and there if you look more closely, it may be biased by various non-optimal feedback loops, it may lead to a few false positives and negatives, but it is still an interesting metric to begin to understand what is happening in a project.


Well said. I guess it's a good example of how metrics can be useful but not definitive.


It does sound from the article as if incumbents will always have a huge advantage and will be more likely to be labeled top performers.


I've worked with some amazing programmers who produce fabulous amounts of code. And, often, that's who you need. I have been envious of their prodigious production. I think, often, those folks solve problems by adding more code.

I think, sometimes, that mountain of code starts to become a liability. I'm a little better at reading a lot of code, consolidating, and fixing bugs. Importantly, fixing bugs without breaking other stuff (usually; coding is hard).

If you buy the adage "make it work, make it right, make it fast", you'd probably buy that most people fall into one of those categories and excel at it (there are rare jewels who are amazing at all three; Carmack is maybe a good example).

Anyway, I'm not a top performer. I have my moments of glory, and I think I deliver good value. I try to avoid git stats. I peek from time to time, and I'm super pleased that I've deleted about 2x the amount of code I've added, but that's maybe me protecting my ego.

Everybody needs code, some people need code to be right, even fewer people need code to be fast. Different people bring different skills to the table. Be real careful about how those different aspects play into reaching goals.


It would help if people had more awareness of the personality spectrum in the population and the distribution of traits within a team.

Not knowing leads to lot of misunderstandings.

https://en.wikipedia.org/wiki/Big_Five_personality_traits

https://en.wikipedia.org/wiki/Temperament_and_Character_Inve...

Different traits become strengths or weaknesses depending on the type of problem being solved.

If the solution is known, the disciplined/conscientious trait holders shine. If the solution is unknown, the neurotic shines, because they don't methodically explore the search space (which matters a lot if it's large). If there is a lot of conflict, everyone loves the agreeable trait holder. And teams full of introverts get boring, as people don't develop the deep personal connections that extroverts enable, etc.


> I think, often, those folks solve problems by adding more code.

Indeed, I think this is a big component in differences in how many lines someone is adding. Some people solve problems by copying code and then modifying it, while others try to come up with general solutions to remove duplication. They're two different styles, both with their advantages and disadvantages. I think duplicating gets you to an initial prototype more quickly, while general solutions are more maintainable, as you only have to fix a bug in one place.
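To make the two styles concrete, here's a toy sketch (all function names invented for illustration, not from any real codebase):

```python
# Style 1: copy, paste, tweak -- fast to write, but a bug fix has to be
# applied in every near-identical copy.
def total_order_price(items):
    total = 0
    for price, qty in items:
        total += price * qty
    return total

def total_refund_amount(refunds):
    total = 0
    for price, qty in refunds:
        total += price * qty  # same logic duplicated; same bug surface twice
    return total

# Style 2: one general helper -- slower to arrive at, but a single place
# to fix, test, and understand.
def weighted_total(pairs):
    return sum(price * qty for price, qty in pairs)

orders = [(10.0, 2), (5.0, 1)]
assert total_order_price(orders) == weighted_total(orders) == 25.0
```

Note the generalized version also *reads* as fewer lines added, which ties back to why commit/LOC stats can diverge so much between the two styles.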


Uh, kinda. I think I get what you're saying, but I think you're framing it in a fairly shallow way.

I think someone good will avoid just copying and pasting the code. Shallow similarity makes it easy to thread a boolean or do something tricky with dependency injection. What I'm getting at is folks who are very capable, very fast programmers: if things aren't easy to reuse, they'll just go ahead and implement a whole new subsystem with different logs, metrics and error conditions.

I guess an example might be a type checker versus a compile-time evaluator. They have a lot in common. But they're different. Adding another traversal of the AST isn't that big of a deal, really. But there will come a time when all those passes start to be an issue; they have a lot in common. Maybe it's better to fold them all up into one or two passes.

Sometimes things are complicated. Sometimes you need to hold all that complexity in your head at once and really pick out the commonality. But that's rare. Just adding more code is a great answer for a long, long time.


Deleting code saves so much money for your organisation as it’s deleted for every person who touches the codebase! I don’t think this is ego protection at all, maybe the people who write shed loads of code are protecting their egos just a little.


`git shortlog -sne --oneline --since='Jan 1 2010'`

I like running the above command because it gives a good sense of coding productivity at the very least. And then you can dig into specific people to understand why exactly they have lots of commits or not. Most people use Git in a very similar fashion so you can very quickly make a generalization about how they commit if you look through their last N commits.
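If you want to post-process that output programmatically, `git shortlog -sn` prints one `count<TAB>author` line per contributor (with `-e` adding the email). A small sketch, assuming that format (the sample names are made up):

```python
def parse_shortlog(output):
    """Parse `git shortlog -sne` output into {author: commit_count}."""
    counts = {}
    for line in output.strip().splitlines():
        count, author = line.strip().split("\t", 1)
        counts[author] = int(count)
    return counts

sample = "   120\tAlice <alice@example.com>\n    45\tBob <bob@example.com>\n"
print(parse_shortlog(sample))
# {'Alice <alice@example.com>': 120, 'Bob <bob@example.com>': 45}
```

From there it's easy to sort, chart, or compare across date ranges, which is exactly the "dig into specific people" step.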

Some 'high commit count' people have lots of low value commits, e.g. lots of 'fix it' commits.

Other 'high commit count' people are very productive but mix in a lot of atomic commits, e.g. lots of 'another commit to change this comment', leading to a PR of 9 super small commits + 1 real commit that's readable alongside their actual work.

Others are actually just more productive than other people. That might be because their code changes are simpler, or in an area that's easier to be productive in and write lots of code. Or they just work more. Or they work at a higher velocity because they understand the codebase and domain better.

Definitely don't _only_ use these metrics because some people just code slower and put out 1 large PR, but I can definitely believe a pattern of people at the top end of productivity who put out both small and large PRs at a higher velocity than the 'less productive' people.

I would honestly just attribute those high value, high commit count people to being stronger developers overall, in my limited experience. Overall as in, not weak in any particular area, and quite strong technically in every area. The people you can put in any situation in the domain and they'd probably succeed. Because they're strong across the entire codebase, their productivity is just generally higher no matter what they're doing.


I’m somewhat self conscious about the sheer volume of commits that I make. The reason there are so many is that I want each atomic unit to be distinguishable.

Part of that is motivated by the will to save often in case something goes wrong. Part is to let continuous integration validate with high granularity. Part is to allow for binary search on revision history to reproduce rare bugs and isolate the specific change which introduced it.
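That binary-search payoff is worth spelling out: with N atomic commits, `git bisect` needs only about log2(N) build-and-test steps to isolate the change that introduced a bug. A toy model of the search (invented data, mimicking what bisect does, not git's actual implementation):

```python
import math

def first_bad(commits, is_bad):
    """Binary search for the first commit where is_bad() holds;
    returns (index of culprit, number of test steps taken)."""
    lo, hi, steps = 0, len(commits) - 1, 0
    while lo < hi:
        mid = (lo + hi) // 2
        steps += 1
        if is_bad(commits[mid]):
            hi = mid      # bug already present: look earlier
        else:
            lo = mid + 1  # still good: look later
    return lo, steps

# 1024 hypothetical commits; the bug sneaks in at commit 700.
commits = list(range(1024))
culprit, steps = first_bad(commits, lambda c: c >= 700)
print(culprit, steps)  # finds 700 in at most log2(1024) = 10 steps
assert steps <= math.ceil(math.log2(len(commits)))
```

The flip side: if those 1024 changes had been squashed into 10 giant commits, bisect would only narrow the bug down to a huge diff, which is the case for keeping commits atomic.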

These considerations may explain the correlation with “top performing” - although for what it’s worth I try to make commits which remove more lines of code than they add whenever possible.


Same. I got self conscious when a co-worker mentioned it once, but at my most productive I commit when I complete a unit of thought, usually at a pace of 3-10 commits per hour.

Edit: I do think the considered criticisms of this piece are valid, especially the idea that when a correlated measurement (e.g LOC) of a thing (e.g productivity) becomes rewarded, it blows that correlation as everyone starts optimizing the measurement rather than for that thing itself.


Top performers do more work (generally, not always). This is contrary to the HN conventional wisdom that lines of code are a bad metric, and I see where those people are coming from. But generally people capable of writing more code faster are also capable of writing the right code.


> Top performers do more work (generally, not always).

Yes, that is true by definition. What is not necessarily true is that all of that work involves writing code.

> generally people capable of writing more code faster are also capable of writing the right code

I disagree here though, and is highly dependent on what you consider to be "the right code". In my experience the people capable of writing more code faster are the people who don't necessarily take the step back and think "should this code even be written at all?".


Maybe, but I see speed as a proxy for ability. If all I know about someone is they can write code fast then I would bet on them being an above average programmer.


IDK I've made PRs with multiple trivial commits within an hour. I've also spent over 30 hours on PRs with only 3 commits. It's a useful heuristic but doesn't map very well to work invested.


For me at least, I am most productive when I can do many SMALL commits and am able to push them to prod.

You have to be able to make your change, either a refactoring or a functionality change, and get it through the QA, review and deploy process. Nowadays I spend a lot of time dealing with multiple branches and deployment environments, and my productivity is through the floor. In a previous job I was able to deploy 2 or 3 changes a day to prod. It was very fulfilling.


Actually that seems to be one of the main points of the blog post: being able to stage their work.


In the olden days they used to measure performance by SLOC, Source Lines of Code.

https://en.wikipedia.org/wiki/Source_lines_of_code


I read that most enterprise software developers write about 3k sloc per year on huge projects, in terms of final code add size (not lines added), but that it varies quite a bit. Has anyone ever found these sorts of metrics to be helpful in their experience? Wouldn’t be any good for individual performance but what about for projects as a whole?


Code size is a great indicator of the complexity of some software and how much work it will be to maintain it. It is also a reasonably good, but far from perfect indicator of how much work went on building it.

That measure is also one of the arguments for highly expressive languages, because developers seem to produce approximately the same length of code whatever language they use. And leads to some very good questions about why that length changes a lot from one place to another, but doesn't change much when one changes the more development-centric options (like the language or editor/IDE).

But, of course, if you start ranking people based on it, it will stop being a good predictor of anything.



Which reminds me of this (attributed to Bill Gates):

Measuring programming progress by lines of code is like measuring aircraft building progress by weight.


Fun fact: in at least certain parts of offshore construction mass is used as a very reliable indicator - not of completion but of cost.

A simplified version:

- higher weight means more raw materials means more materials cost and time to assemble.

- higher weight means fewer cranes will be competing for the lifting operations, again driving up cost.

- and tangentially: higher weight of equipment also drives higher weight of supporting structures.

Which means if you can save a ton in equipment it can make a huge difference.

AFAIK and IIRC, experienced engineers who work with calculations will have worked through decades of projects (either as part of the teams at the time or by reviewing budgets and books afterwards) to compile cost calculations, but roughly, if you tell those guys the type of the construction and the weight, they already have a fair idea of the cost.


> Measuring programming progress by lines of code is like measuring aircraft building progress by weight.

...which is a very misleading comparison. Measuring the build by weight is a rough but not terrible metric, and it applies to cars as well.

It also implies that there's such thing as reaching 100%, which is very wrong for software: most poorly written software uses 10x or 100x SLOC than what is required.


Yea, I get the point of the quote, but as someone actually building an airplane (in my garage), weight is something I'm constantly aware of and looking to minimize. It's not a measure of project progress, but it's a big factor in the final performance of the end product.


Yet, if you had to build the aircraft out of metal shavings one sliver at a time in a laborious, error-prone process, you might not know how far along you are, but "well, we're halfway there by mass" at least gives some measure for the project as a whole. If you game any metric, it becomes trash.


That assumes you have a finished plane design and know the target weight, which I don't think is what the analogy is after.


I was a top performer on my team in the past for a couple of years. It was nice being an expert, but they never promoted me and were always playing games. I think it's better to just be an average or below-average person and not have to deal with the BS that comes with being a top performer. At this point in my career I doubt I'll ever make it back to that level of skill anyway.


Why not?


Why won't I get back to that level?

I have a kid, so I don't have any time to study outside of work or put in tons of extra hours. After years of being screwed over and passed over, I don't really have the drive/hope to get to that level since it wasn't rewarded the first time. Also, the work is very boring now and isn't transferrable to other groups or companies, so I don't have any interest in being an expert just to throw away that knowledge (like I was forced to do in the past).


> "... since it wasn't rewarded the first time"

Our industry, unfortunately, doesn't promote from within. We almost always have to leave for another company that is willing to give us a better title and salary.


Yeah. My problem is that there aren't very many companies using FileNet or NeoXam.


Hmm. If they are important to the companies that use them, you may be in a good position to move, assuming that is also hard to find people experienced in those technologies


The problem lies in the question. In the worst circumstances, it's indicating survivorship bias, and in the best circumstances the only truths it reveals are tautological. Using the number of commits to measure productivity is the new version of thinking that the more lines of code a programmer writes, the better.

Not that this will stop a segment of the industry from flushing a non-trivial amount of resources down the drain learning this lesson, and inevitably leaving a chunk of stalwarts who never really learn anything. It was clear that was going to happen when the "social coding" site that everyone was flocking to put so much emphasis on activity measured in number of commits, forks, and other administrative details. Things which were only any good for advertising the site's own user engagement in its pre-IPO/-acquisition phase, and too many people mistaking it as a measurement of something else.


Because they are willing to take the risk of publicly being wrong.

Committing your source code to the company git makes it very easy for others to point at you later, if things break. And it's pretty much impossible to undo once someone else has pulled your change.

In my opinion, many top performers are simply people who do what needs to be done, and when it's needed.


These types often protect themselves and their clique with a CoC that effectively prohibits criticism from people who care about correctness and a lean code base.


I don't disagree this could be true and the reasoning (makes sense)™ but it would be nice to have some hard data when making such assertions.


Because most people don't do anything at all.

Even trying to do something puts you in the top half, even if it's poor quality, because a huge fraction of people are completely useless and don't ever do anything productive at all.


Not to take away from the premise offered in the article ...

But is there a selection bias here? Some kinds of work invariably involve more GitHub activity, small fixes and the like. Those things are generally, unambiguously 'productive' and 'leave the code better', which leads us to label their authors 'top performers'. Surely, this might be true, but I think only within a specific context.

I find solving new or novel problems involves a lot of work that is hacky, experimental, quick trial - often the kinds of things that in most cases don't even get checked in.


Why not? Doing experimental things that might not work out, or need radical revisions, is exactly when VCS helps the most.


Because the scratch code is usually pointless, it's the 'key notes' that matter.

200 lines of crazyballs scratch code is not 'the insight'; really, 'the insight' was that the API works 'really slowly upon first iteration, but very quickly after n iterations' - which implies x, y, z possible courses of action.

I suppose you could jam it in the VCS but I've never personally cared.

Now that I think about it ... it's interesting because that's definitely not what a VCS is for, though it could absolutely be used in that way.

A VCS really is not that great to store arbitrary, secondary related activity and notes wherein 'the code' really isn't the important thing.

What's missing here is really a form of document/information sharing that just hasn't caught on very well. Or perhaps I'm still caught up in the ridiculous Confluence/Atlassian garbage, which is the worst wiki ever made.


By such a metric people who write the final implementation on the first commit are complete slackers.


If you look at it from a slightly different viewpoint, there's nothing remotely surprising or controversial about this. You wouldn't be surprised to hear that the best factory workers tend to produce the most widgets per hour, or that the best butchers process the most meat per hour. Productive output is definitely highly correlated with job skill, almost tautologically so. Yes, a code commit isn't exactly the same thing as a widget, but it's similar enough in broad strokes to still be a useful measurement.


I’m trying to understand the DevOps of this concept. Is everybody working in the same repo, or are these personal forks that get hammered until nice merged PRs can be submitted? Do devs complete whole features before submitting, or do they submit partial work that everybody else is trying to contribute to? Are multiple developers working on the same code modules, or have they been split out? Are there tests and interfaces written ahead of time, or do they do tasking by prose?

The goal of software management is to keep everybody productive. If you have an imbalance in commits or pull requests in the main repo/branch, that’s a strong sign of a broken process. Tasks should be given according to familiarity and skill so this doesn’t happen.

This also says something about software design. Good design is easy to split up. Bad design requires a ‘go to’ person. Therefore, a ‘go to’ person is by definition not a good software engineer, because their design was bad, and/or they never fixed it. And if it worked the first time, you wouldn’t have to ‘go to’ anybody.

The skill ladder of software engineering goes something like: watching, practicing, contributing, designing, teaching, leading. Every developer goes through this process from scratch in every project. Getting stuck is a problem (as is skipping steps). Preventing other people from progressing is a bigger problem.

Perhaps rethink this.


In a mature project, the number of removed lines is a stronger proxy for productivity.

It is related to the unavoidable tech debt that any project has and only the strongest people see it and work on it.


Once you are the top-committer in a project, you have the advantage. You know most of the code, even if it doesn't have tests. It's easy to figure out where reported bugs are, that you have introduced. Meanwhile, the rest of the team is busy reviewing your code and rebasing their PRs.

I have seen this pattern so many times. It's a good strategy to be recognized by management.

Also, what happens when the top-committer leaves? Suddenly the rest of the team can breathe and flourish.


One reason not mentioned (or maybe it's only me):

If I've got bunch of problems, I'll work on the easiest ones first, saving the hardest for last. This clears my mind from the drag from the easier problems, and as a side effect I wind up committing more.

There's no particular correlation between how hard a problem is to solve vs how much time it takes to solve them. So hit the easy ones first and make your users happy!


Top performers are motivated and do more work. But that does not necessarily translate into real progress. If you shit the bed and clean it up, it's not real progress. Unfortunately, management doesn't see it that way.

I won't consider myself a top performer. But I have my moments of glory. Most of my impactful commits were deleting unwanted code and needless libraries in the codebase.


Old maxim: "If you want something done, give it to a busy person"


> ... top performers ... highest count of commits and pull requests

> ... define top performers as ... a go to person

So circular logic. Got it.

In a related story: why do employed engineers have code contributions to the company that are way higher (actually, infinitely higher) than those of people not at the company?


My theory: simply put, a large number of commits (setting aside excess or cheating for the metric's own sake) can indicate lots of qualities of a good software engineer: order, planning, precision and the capacity to partition a problem into smaller units.


> "the top performer's commits out number the second highest by 50% or more. [...] This might indicate some form of Pareto principle at play. Perhaps?"

Price's Law?

"50% of the work is done by the square root of the total number of people who participate in the work" -> "You are working on a team, and there are the superstars who do most of the work or seem to produce most of the outcomes and then there is everyone else."

https://expressingthegeniuswithin.com/prices-law-and-how-it-...
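Price's law is easy to sanity-check with a toy simulation: give everyone a heavy-tailed (here lognormal, an arbitrary modeling choice, seeded for reproducibility) individual output and see what share the top sqrt(N) capture. This illustrates the concentration effect, it doesn't prove the law:

```python
import math
import random

random.seed(42)

n = 100  # hypothetical team size
# Model individual output with a heavy-tailed (lognormal) distribution.
output = sorted((random.lognormvariate(0, 1.5) for _ in range(n)), reverse=True)

top_k = round(math.sqrt(n))  # Price's law: sqrt(N) people...
share = sum(output[:top_k]) / sum(output)
print(f"top {top_k} of {n} contributors produce {share:.0%} of the output")
```

With any heavy-tailed distribution the top sqrt(N) end up far above their "fair" proportional share, which is the pattern the quote describes.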


The two top performers in my organization definitely don't have a lot of commits. Instead, I would say their strongest skill is written communication; in emails, log messages, code review comments, and chat messages.


I worked once on a piece of software to track construction projects. The CEO was always grumbling about the pace of development, citing the example of programmers who added Gantt charts to it in two days, and why was I so much slower at “simpler” features.

Fast forward a couple of months, and we had a customer complain about the Gantt charts being off. I had a closer look at the thing, and it turned out that the bars on the chart had been drawn using the Math.random() function. They came out different each time you refreshed the page and were in no way related to the real thing.


When you have a lot of commits you are working off yourself, while everyone else needs to get caught up on what you did and then contribute after. It's essentially twice (or whatever) as much work as a laggard. It becomes one person's project and the others are observing. For better or worse, because just running with something is often faster than deliberating at every turn. But it can also backfire in many ways. It's similar to group projects in college.


I hope that someday this "version control craze" will end: this perpetual race where everybody is rushing to fire off PRs to raise their status in the team, this sensation that before being a good engineer you have to know git dark arts to the core, in order to rewrite "HISTORY" and amend each misdoing as you see fit. We need to remember what our job is: writing software that makes sense, not committing code.


"version control craze" !

Really? Version control is one of the biggest improvements to software development I have seen.

Now if we could get the rest of the customer chain to start version controlling their requirement docs and properly minuting meetings and action points.


Version control is great as a productivity improvement tool and a way to organize contributions from multiple developers. It's not great when treated as a social network or productivity measuring tool. When you're firing off PRs to get your name up there again or to fill in your square on the calendar, you're kind of abusing the tool.


I don't get this. Version control is essential. I've worked at places that didn't have it and it's absurdly unnecessarily unpleasant.

If you really don't like git try Mercurial. I haven't worked with it in years but it was very easy to work with


This is in line with my experience as well. I know it is common wisdom not to measure performance with lines of code or number of commits. But in the big picture it's always the top performers that make the most commits.

If I was a manager and asked to lay off X percent of the engineers, I would totally take into account number of commits among other signals.


One important point missed:

- Top performers tend to be motivated.

- Just because someone is a "top performer" in one project/team/company doesn't mean this person will be one in another project/team/company. Reasons can be manifold, motivation can play a big role between someone being a good and someone being a top performer.


That's plainly not true.

There are top performers that do not produce the largest number of commits and PRs.

They produce the most difficult ones to get right.

There are mediocre performers that produce an awful lot of commits and PRs, confusing volume with substance.


“Top performer” is meaningless unless you’re looking back with enough distance to quantify all manner of contributions. The people who generated the most complexity the fastest are just as likely to be dangerous forces of chaos.


Could not disagree more. The article's main argument is wrong, and dangerous to an extent! A software engineer's productivity depends on the quality of the thinking that goes on between his two ears - not on the number of lines of code (LOC), number of commits or some other meaningless measure. I don't think the author knows/understands what it takes to operate at a Senior/Staff level and beyond. Yeah, maybe for someone just out of school productivity can be measured in terms of lines of code. OTOH, if a Principal Engineer on the team comes to me and says that he feels great just because he added 500 commits/30k lines of new code to the code base, I will just say one thing: "run for your life!" as soon as possible.


It's not really constructive to start your comment with an "utter piece of crap".


Since this comment was downvoted let me elaborate this with a personal experience (of course it is anecdotal).

In June of this year, my entire team was struggling after moving to a new Kafka cluster because of low throughput in one of the backend services (about 250k/min with approx 32 instances). This was causing issues for our downstream dependencies, as we were not able to reply back within one hour of consuming the record from upstream, and our L2 support was getting numerous pages daily, which were also escalating to us. Then my manager asked me to take a look at what was going on and whether we could improve the throughput somehow. After almost a week of banging my head against different theories and tons of experiments in QA, I finally figured out that the new Kafka client we were using had a setting where it required acks from all brokers (whose count had increased to 5 in the new cluster from 3 in the previous one), and this caused a huge increase in blocking time even though we were using an async framework. The async task just waited too long for completion and, once completed, could not get the thread pool back due to competition with other threads. Solution: simple, I just changed the Kafka producer ack from "ALL" to "ONE", requiring acknowledgement from one broker only. Throughput with the same 32 instances jumped from 250k/min to around 700k.

I ask the intelligent readers of HN - do you think that based on this I should be penalized, since the change was only one line of config code? Yes, that's all it took to resolve this outstanding issue - one line of config change/one commit, albeit after one week of thinking and experimenting!
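For readers unfamiliar with the setting involved: Kafka's producer `acks` config controls how many broker acknowledgements each send waits for. A sketch of the kind of one-line change described (the config keys are Kafka's standard producer config names; the server address and everything else here are purely illustrative, not from the commenter's system):

```python
# Illustrative Kafka producer settings -- only `acks` is the point.
producer_config = {
    "bootstrap.servers": "kafka.internal:9092",  # hypothetical address
    # "acks": "all",  -- wait for every in-sync replica to acknowledge:
    #                    safest, but each send blocks on the whole replica set
    #                    (5 brokers in the story above, up from 3).
    "acks": "1",      # wait for the partition leader only: the one-line change
}

assert producer_config["acks"] == "1"
```

The trade-off is real: `acks=1` gives up some durability (an acknowledged record can be lost if the leader dies before replication), so whether it's the right fix depends on the workload.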


Agreed that your week of combing through the settings to get that kind of increase was well worth the time, and it would not be fair to measure your productivity by number of commits

I think you are getting downvoted because of your comment "utter piece of crap" as against the guidelines

I suggest you edit that post and take it out

Your observation, stripped of negativity and emotionalism, would probably be up voted!


Thanks. Updated the main reply.


Quality is usually how fast they can complete features with the fewest bugs for a faster release while navigating feature changes as they go. That's where the rubber hits the road.


Another interesting characteristic I've seen over the years from top performers is that they also have deleted the most code.


Maybe, just maybe... the person you think is a Top Performer is not actually the Top Performer and you are creating a fake correlation to their high number of commits and pull request. Maybe we need to define who is the Top Performer in the first place.


I mean I won't disagree with your first point but if you read the article you would know that the author very explicitly defines 'Top Performer.'


I read it and don't think a go-to person is a Top Performer. I wasn't talking about the author's view of a Top Performer but the reality. I don't think I can properly explain my take on this KPI situation in a comment. I will try to write a blog post about it asap. I noticed I have a lot to say about creating KPIs for humans. :)


I suspect that it's nearly impossible to separate one's definition of a top-performing developer from some sense of how often they commit. For better or for worse.

I've worked on teams where I had colleagues who were incredibly deliberative. They would spend lots of time working with stakeholders to deeply understand their problems, and then produce elegant solutions that made their jobs drastically easier. Small changes with huge payoff. But management, and even most other team members, didn't recognize this. They just saw a slow programmer. Credit tended to go almost exclusively to other teams, when their members started using those tools to radically improve their own processes. The company's dev org largely didn't care about that effect, because it didn't positively influence their own KPIs.


Obligatory: https://github.com/artiebits/fake-git-history (does just what it says on the tin)

"I don't encourage people to cheat. But if anybody is judging your professional skills by the graph at your GitHub profile, they deserve to see a rich graph"


Wow, the premise of this article is very wrong; it's deeply concerning that people are falling for this.

Those who make a lot of commits are not top performers, they are mostly engineers who are overly concerned with their 'optics'; they are engineers who are good at projecting themselves as 'top performers' but if you actually look at the results of their work a few years down the line, you will see that they are in fact low performers of the worst kind because they tend to add a lot of unnecessary complexity and technical debt because they don't think through things enough and just implement.

I'm absolutely shocked to learn that people are falling for this.


That's why there's 'most' in the title. No one here assumes that more commits = better, but the article simply points out a correlation: MOST top performers tend to have the most git activity, and I see a similar thing in my professional experience. That doesn't mean I think that anyone with a lot of PR reviews or commits is better than other people...


I disagree with the 'most' premise as well. Totally a false signal. In my view, the opposite is true. The top performers tend to commit less. People who are genuinely passionate about coding don't tend to put that much effort into the more pedantic aspects of the process; they're more focused on big ideas such as the structurally important parts of the architecture.

The developers who make a lot of commits are often more concerned with optics and they will argue for hours over unimportant tedium while sometimes missing the big very important ideas (the ones which will have real impact and flow-on effects for years to come). Basically they can't tell the distinction between what is important and what is not. They can't tell apart bureaucracy from value creation. Obsessive focus on commit size and other tedium is a key signal that someone doesn't know what is really important. They tend to be conventional thinkers (driven by peer pressure and peer approval) and their idea of productivity is distorted by false consensus (like the one this article attempts to instigate).


My experience is the same as the author's. If one defines top performer as you did, those who are "more focused on big ideas such as the structurally important parts of the architecture", and you run the numbers, you'll find most of these people will also be outliers in commit count.

This is true in just about every team I've worked in. And it makes sense too. The person who best understands the design would be the fastest to implement it. They can both do the design and implement 2/3 of it in the time it takes the other engineer to grok the design and fumble through the other third.

Frankly, I think the role of "design architect" is make-believe. A project only has so much "design work" to be done, so the top dev isn't going to be sitting around twiddling their thumbs through the implementation phase. Even people like Jeff Dean still write a ton of code, to my understanding.

What you describe in the second paragraph sounds like something that would occur only in a dysfunctional workplace with dysfunctional manager and dysfunctional colleagues. (And even then I have a hard time seeing it happen for long -- someone is going to get annoyed doing all those code reviews of pointless changes and put a stop to it).


If you have to work on a very complex project, you need to plan the architecture at a high level or else it will be a disaster. I've seen the code-as-you-go approach fail miserably over and over.

Try coding a decentralized P2P system where multiple instances of the same code running on many different kinds of machines communicates with itself in a scalable and DoS-resistant way. There is no way anyone can implement this properly without careful architecture planning. You would never get it right the first time unless you plan and analyze all the main use cases carefully.

Or try to design a distributed pub/sub system or message queue with automatic sharding across the cluster. Also needs a lot of planning. You can't just code along and hope to get it right.


Yep, I agree completely. I'm just saying that in my experience, the person who leads that planning also does a whole lot of the implementation. I've been around the business 20 years from startups to enterprise to FAANG, from embedded life support devices to dev tooling to cloud services, and I can't recall an instance where there was a dedicated "architect" role that didn't dig pretty heavily into the codebase as well.



