The thing to note here is that this is a one-way correlation: top performers tend to produce lots of commits. That does not mean that people who produce a high number of commits are the top performers in your company.
I've had to deal with plenty of colleagues who moved very fast, committed extremely often, and were praised by management for the amount of work they produced. But in reality their work was rushed, sloppy, riddled with issues and a nightmare to maintain. And it would inevitably fall upon us "lesser" performers to actually investigate and fix all those issues, making us look less productive in the process.
In my opinion a better metric to quantify someone's performance is to count the number of problems they solve vs. the number of problems they create. I bet that many of the rockstar programmers out there would land on the negative side of that particular scale.
> praised by management for the amount of work they produced
Management literally can't tell the difference, sometimes even when the manager is a former dev. There are many ways to ship code faster.
- Don't test
- Don't worry about sanitizing inputs--extra effort slows you down.
- Optimize for writing, not maintaining
- Take a tech-debt loan. Bolt your feature somewhere convenient it doesn't belong.
- Put on blinders. Your n+1 query problem is Tomorrow's Problem
- Avoid refactors
- Choose the first architecture you think of, not the one that is simpler, more flexible or more useful
- DRY code takes too long to write--you'd have to read more code to do that! That's bottom performer thinking!
- Securing credentials is hard. Hardcode them!
Remember, if you want to be a top performer and have breathless articles written about you, ship many commits, fast! Also, this is a virtuous circle--if any of these become "problems" you'll get to ship more commits!
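To make the n+1 item concrete, here is a minimal sketch (the table and function names are invented for illustration) of the "convenient" version, which fires one query per row, versus the batched fix that does a single join:

```python
import sqlite3

# Hypothetical schema purely for illustration: authors and their posts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'a'), (2, 1, 'b'), (3, 2, 'c');
""")

def titles_n_plus_1():
    """The 'ship it fast' version: one query for the list, then one per row."""
    result = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors ORDER BY id"):
        rows = conn.execute(
            "SELECT title FROM posts WHERE author_id = ? ORDER BY id", (author_id,)
        ).fetchall()  # an extra round trip for every author
        result[name] = [title for (title,) in rows]
    return result

def titles_joined():
    """The fix: a single join, grouped in application code."""
    result = {}
    query = """
        SELECT a.name, p.title FROM authors a
        JOIN posts p ON p.author_id = a.id ORDER BY p.id
    """
    for name, title in conn.execute(query):
        result.setdefault(name, []).append(title)
    return result
```

Both return the same mapping, but the first issues one query per author, which is invisible with two rows in a demo and a disaster with a million rows in production.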
> Don't worry about sanitizing inputs--extra effort slows you down.
I feel like many people downplay how important this is. I've wasted way too much time because of it. Doing code archeology to understand why data persisted to the database many years ago breaks some seemingly unrelated feature for a customer is definitely not my favourite part of the job. Working on a validator that someone was "too busy to add" in the first place is also not fun (and a waste of time, because the original author could probably have done it in a matter of minutes, whereas someone fixing things after the fact needs to reverse engineer what is going on, check whether some funny data wasn't persisted already, and potentially handle it).
To phrase my frustration in a more constructive way: it's always a good idea to put explicit constraints on what you accept (What characters are allowed in a text-like id: only alphanumerics? Only ASCII? What happens with whitespace? How long can it be?). Otherwise it's likely you will find some "implicit" constraint down the road, i.e. some other piece of code breaking down.
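As a sketch of the kind of explicit, up-front validator being described (the specific limits here are invented for illustration; the point is that the contract is written down and enforced at the boundary):

```python
import re

# Invented constraints for illustration: 1-64 chars, ASCII alphanumerics
# plus '-' and '_'. The exact rules matter less than having them be explicit.
ID_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def validate_text_id(raw: str) -> str:
    """Reject anything outside the documented contract before it is persisted."""
    value = raw.strip()  # decide explicitly what happens with whitespace
    if not ID_PATTERN.fullmatch(value):
        raise ValueError(
            f"id must be 1-64 ASCII alphanumerics, '-' or '_', got {raw!r}"
        )
    return value
```

With this in place, `validate_text_id("  order-42 ")` returns a normalized `"order-42"`, while an id containing spaces or non-ASCII characters fails loudly at write time instead of breaking an unrelated feature years later.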
Oh, lord. I don't usually have to think about sanitizing data coming from my own database, but of course any long-running database can have all sorts of crap in it. What a nightmare.
On the contrary, small refactors are a great way to boost commit count! Management once tried to boost productivity by rewarding commits. Our team basically started manually committing automatic refactorings. We won by a landslide, but I don't think the company did. I got a nice prize out of it though.
What differentiates highly productive developers is their ability to ship more complete functionality from the same input as other developers. Inputs can be tickets, requirements, or even notes from a meeting. Commits and pull requests are production steps, but they don't tell the whole story, even if they do give you some level of indication.
the converse is also true, occasionally. depending on the business model, some engineers are too comfortable with a slow, methodical process. often, just getting something to work, despite accrued tech debt, helps a team iterate toward a better solution. if the requirements and the rest of the realized architecture were perfectly understood from the beginning, then slow and methodical would be the minimal-risk implementation. but in the real world, that's rarely the case.
what i’m saying is that true talent is able to intuit when one approach is preferred over the other.
What if every commit has to be reviewed by a certified code reviewer before it is accepted? That is how Google does it. Means you can't skip tests etc. Not sure why not more organizations do it, like wouldn't banks etc benefit a lot from it?
Wonder to what extent that solves the problem. Quite often your code reviewer's interests/world view align with yours: "Right, we need to ship this feature asap, so let's worry about tests later" or "More commits boost your performance review, so these 50 one-liner commits? I get it."
It reminds me of a similar problem in academia: the LPU (Least Publishable Unit) phenomenon, where people tend to break a work into multiple smaller pieces to get a higher paper count. It's so widespread that lots of paper reviewers are doing it too, so you don't get punished.
In my experience, it's the sloppy engineers who create problems that get the most credit. They are called upon to be 'heroes' again and again, to fix the crap they shipped with 1000 issues.
Yes, I've seen that a few times. What non-technical management sees is an engineer at the center of efforts to fix an issue in the midst of a crisis. What they don't see is that the work done by competent engineers doesn't blow up and become a visible crisis.
IMO this is a big red flag that something is wrong with both the technical and non-technical culture. Technical management isn't happening, at least not effectively, and non-technical management is getting too far into the weeds on technical details of production issues.
When you disagree in a code review with the "top performer with all the commits" it gets escalated to management, who sides with the "top performer" because they are a "top performer."
Put a Non-technical VP/Manager in charge. Maybe someone who started out as a UI designer or non technical PM and then somehow became VP of engineering because they were an early loyal employee.
Charismatic coders will take advantage and the manager will eat it up. The "rock stars" commit tons of code but create massive tech debt. They write their own buggy versions of stuff they should have just imported industry-standard OSS versions of: rule evaluation engines, broken file formats, broken home-grown encryption, broken distributed job systems, heavily broken home-grown DB frameworks.
They'll have a huge bias towards terse code that doesn't make a lot of sense. Abuse of functional code. Everything is the opposite of KISS.
Everyone else on the team is just constantly fixing their showstopper level bugs, of which there are always many.
They talk a good game and management constantly thinks the rock star is super smart and the rest of the team is deficient.
Then the tech debt reaches ridiculous levels, or you get bought and different management comes in and sees right through it. Managers get let go, new managers don't buy the rock star story. The rock star gets frustrated, leaves for another company where they can pull the same trick, and puts all this inflated stuff on the resume about how they wrote all this stuff they shouldn't have written. A new naive manager falls for it, thinking they must be really smart, because they're never going to find out how broken all of it was.
All this is WAY easier for a rock star to pull off post-2010, because the office culture in tech has become so PC that no one can be blamed for anything. Lots of stuff that used to get someone shown the door in my early career would not even get called out today at all.
Upvoted and then retracted due to the PC bullshit at the end. You had great points, you shouldn't ruin it with your bitterness about having to be professional.
> I've had to deal with plenty of colleagues who moved very fast, committed extremely often, and were praised by management for the amount of work they produced. But in reality their work was rushed, sloppy, riddled with issues and a nightmare to maintain. And it would inevitably fall upon us "lesser" performers to actually investigate and fix all those issues, making us look less productive in the process.
This is one of the most common complaints I hear about fast devs: that they produce lots of bugs. But what I've seen is that a fast dev produces 5x more code than normal devs and produces 5x more bugs. Which means the ratio is the same, but it feels different because they're producing so much more code. You then get the devs who say they have to spend lots of time investigating these bugs, so they look worse. But I've literally seen the fast dev go on to a bug that one of the other developers had spent 2 weeks investigating and find the issue in the data within 5 minutes. You then have the slow and careful devs who will write 5 functional tests, 1 unit test, and add 10 seconds to the build (not much, but it adds up when it's every issue) and still have the same ratio of bugs as the guy who is writing 1 unit test.
I think the reality is, a good dev is someone who produces the most business value. And something lots of devs don't want to hear is that the tiny little tweaks they make to improve the code quality add very little business value, whereas a required feature that works 90% of the time adds lots of business value. I think a lot of the complaining about these rockstar devs is just jealousy. Code quality is one of the smallest tech problems, yet so many devs think it's super important. No: getting stuff delivered so other departments and customers can do things is. Being able to plan things out is super important. Having a good data model is super important.
> a good dev is someone who produces the most business value
this this this.
More code might equal more bugs, but if it's a net gain in business value everything else is a secondary concern.
It doesn't mean you can just ship crap all the time. If customers start complaining/seeing errors/bad performance, business value decreases and there are going to be some $discussions.
If developers are rewarded for shoveling garbage into the pipeline all the time, there are deeper organizational issues afoot.
Isn't that just how start-ups code? You ship features fast and loose and then deal with the aftermath once the company is more mature? Wouldn't this be a case where the best business value is shipping fast and loose, and a slow and careful dev with 10x fewer bugs who is 10x slower is not what the company needs?
There seems to be a time and place for everything.
What would those fast devs do in a vacuum? Or with only coworkers who were clones of themselves?
What happens when one of those fast-dev quit?
--
If everything is/would be fine, then yes, they are truly rockstars (in all the good meanings of that word and none of the bad). Or maybe merely decent among mediocre ones.
Otherwise, there is a free-rider aspect in their approach.
--
Also it can work only depending on the criticality level of the industry.
Ship broken crap quickly (but fix it quickly too)? Good if you are creating yet another social website maybe. Less good for e.g. medical devices.
--
One more point: the business value approach is not necessarily the only one to apply, esp. if there are tech infrastructure components in what you ship. You can easily be stuck in local optima too far from global ones, and fossilize some aspects of your product. See for example the classical http://blog.zorinaq.com/i-contribute-to-the-windows-kernel-w... which includes some examples of death by a thousand cuts.
What I mean is that moving fast is partly in the eye of the evaluator. Maybe you implement what PM wants quickly, and that's cool, but maybe also doing only what PM wants is not the best thing for the project.
--
If you are easily able to plan things, have a good data model, and can develop quickly, probably you don't have a real code quality problem to begin with. At least not in the parts you contribute to. I don't actually distinguish the data model from "code" that much: it's all design.
--
Final last thought: imagine you are actually a good fast-dev like you describe, and your colleagues are less good, but imagine a case where the whole organization would actually benefit from you slowing down a bit and working on developing better way to work more efficiently with others or making them improve, overall yielding even more business value at the end. This can happen too.
> What would those fast-dev do in a vacuum? Or with only clone of themselves coworkers.
Well, considering my assertion is that the dev is just fast and producing the same quality as others: carry on working.
> What happens when one of those fast-dev quit?
The company would need to replace them, or have the team produce less work due to having fewer workers.
> Final last thought: imagine you are actually a good fast-dev like you describe, and your colleagues are less good, but imagine a case where the whole organization would actually benefit from you slowing down a bit and working on developing better way to work more efficiently with others or making them improve, overall yielding even more business value at the end. This can happen too.
This seems like "What if it is better that you don't do the job you were hired for, but do a higher job without getting promoted and recognised for your skills?". Well, one: if a company wants their fast dev to teach others, they should make them a coach or something along those lines. Secondly, just because they're fast doesn't mean they can teach; sometimes the reason they're fast is that they have fewer interactions with people and are therefore able to continually code without having to stop to talk to Jenny from Admin about how important a bug is (not actually a dev's job; there should be product/project management for this). Maybe the fast dev is fast because they've been there longer and understand the system, in which case it's just a case of other devs needing to ramp up. Lastly, maybe the fast dev doesn't want to do this other role and just likes programming.
I'm not saying that the exact situation you describe can never happen, but I'm not 100% convinced it is all that frequent.
But if all your descriptions are precisely correct, then my opinion is that your conclusions broadly are too.
Simply, I will think and check a lot before projecting actual situations on that model. For now I don't have the feeling I've ever encountered anything like you describe. More often it was some of the variations I suggested, and maybe others.
I've worked with this type of colleague. I remember a specific example where, fresh out of university, I was assigned to pair with one. We had to create a front-end validator for the password reset feature, for users stored in LDAP. I was thinking "Wow, getting the password policies from LDAP must be a pain, it'll take a while to implement that feature". Nope, just copied whatever settings were currently in LDAP into our app's config file.
A couple months later, a pair of engineers on the team had to spend 4+ weeks developing an "installer" to properly install & configure our app, as it had grown too complex to install & configure by hand. Management couldn't really figure out why...
This used to drive me nuts, and I wish the rule were: if your code breaks something, you are not allowed to work on anything else until it's fixed, and if possible it will be reverted.
There needs to be some incentive to not let people shit all over the code base for everyone else to clean up. Reviews aren't enough: all code was required to be reviewed by owners and there was still lots of this.
I'm not sure what "blame culture" is, but any professional software team should ideally have some kind of "accountability culture".
Whatever the fallout from people feeling "blamed" may be, attrition of all of your genuine programming talent due to tech-debt-machine peers being promoted ahead of them is not exactly an ideal outcome either.
What's more, very often genuine potential in naturally talented new programmers can be stunted if they're rewarded for lazy faux "productivity". I've seen this: a programmer has a genuine interest and passion for quality, but loses it over time due to a focus on doing (different) work that gets them promoted.
Code ownership (being required to "own" one's work along with the bugs & maintenance burden that come with it) is one of the most valuable ways programmers learn. This needs to be balanced, as one runs the risk of having a bus factor of 1, but it's still vital.
Blame culture is when bugs, downtime, etc. happen and people both look for somebody to blame and simultaneously look to exculpate their own behavior. It leads to CYA behavior, backstabbing and massive risk aversion. It can also lead to or be a result of a toxic work environment.
In multi-causal bugs it can lead to people downplaying causes which they had something to do with and exaggerating causes that a co-worker they don't like had something to do with. This often leads to confused attribution and poor rectification of systemic issues - e.g. writing more unit tests even when more unit tests won't really help.
"Accountability culture" sounds like it could be the same thing. Or not. I'm not really sure.
This sounds like a bad faith culture in general: i.e. "paying a price for mistakes", rather than "owning postmortems". Treating a mistake as an opportunity for punishment (be it purely social or otherwise), rather than for learning, is more about how you treat staff & peers than how you deal with mistakes.
> "Accountability culture" sounds like it could be the same thing.
Can I ask what specific part of my description of accountability culture sounded like it lined up with your description of blame culture?
Yes, that’s what blame culture is: a dysfunctional organizational culture where, when mistakes are made, the organizational priority is to find and punish those responsible. The consequence is CYA and finger pointing.
There is "blame culture", where every problem turns into a witch hunt, and there is the opposite, call it "anti-blame culture" where no problem can be declared, because, well, it's never the only problem to solve, and thus can never be solved.
Competitive entities nearly never fall victim to the anti-blame kind, but where there is little competition, they are just as common as their opposite.
This is somewhat analogous to programmers who type very fast but delete half of what they type. This is very subjective again; most of the deletes can be legit deletes. There is a difference between deletes that fix mistakes and deletes that arise because of a change in the thought process. I digress, but to a routine observer the person typing fast appears like a pro, someone who is really good at their job. While this can be true of other professions--a carpenter who is good at his craft will be extremely fast--I'm not sure the same holds true for the typing speed of programmers.
If you are a trial and error type of programmer then of course speed matters a lot because it has a direct correlation on how fast you can iterate and increase your sample space of trials. The frequency of commits and PRs can be seen in the same light.
So IMO it's hard to find a true universal measure to identify the top programmers.
Check out pages 71:26 and 71:27 for "Codebase introduction and retention" between Clojure and Scala. I'd like to see more graphics like these to illustrate "lifespan of commits"
I think this can also be problematic. Long lived code _must_ work, which is good. However, it can also live a long time for bad reasons. I have seen long lived code that only lives a long time because it is so complicated that no one can understand how it works, or written in a way that is very hard to change. So it lives a long time, because no one wants to take the time or effort to touch it. Finally, when it _must_ be changed to support a new feature, it might require a full rewrite.