Debugging code and reading others' code are the two big skills worth pointing out; they are usually just as important as writing good code.
In my experience, you can compare programmers on these tasks at a micro level and see drastically different results.
Top programmers tend to not only write code with good foundations, but also have an uncanny sense for the root of seemingly obscure issues, as well as the ability to understand other code almost on sight.
When they call me up frustrated that they've wasted hours hunting a bug, whether it's as innocuous as a typo, as subtle as a type error, or as painful as a quirk in the framework, I still consider the time spent debugging worthwhile. That's when you pull apart the guts of the code, stretch your understanding of it, and learn to isolate the flow of data within the system. Then you can put it back together in a better way.
At least, that's how I learned to code.
To me, that's programming in a nutshell. Or at least what it should be.
On the other hand, feature-poor software is not necessarily simple. Simplicity is hard to achieve. It requires a lot of thought and usually a fair amount of iteration. "Don't touch that code because it doesn't have a good ROI" is also a surprisingly good way to slam the brakes on a project.
Maintaining velocity (or even increasing it) requires a delicate balance of avoiding work that will harm you and encouraging work that will help you. Accomplishing this requires a two-way dialog between stakeholders and developers.
Yes, this swings both ways. I worked at a startup where every feature improvement was shouted down as a waste of opportunity cost.
They found a decent local maximum, but their growth stalled and they degraded into a consultingware company. The more ambitious devs bailed because there was no room for growth.
I'm curious - what were the devs doing with their time, if feature improvements were constantly being vetoed? Were they being kept busy with other tasks which you consider lower-priority, or were they just sitting idle?
Yeah, it was a really weird culture. There was definitely some quiet time when I would propose a product improvement but still get no traction. E.g., the product website was embarrassingly '90s; they would let us A/B test customer deployments but not their own site.
There were also a lot of repetitive content-scraping tasks that should have been improved and that resulted in excessive pager duty (hard to explain, but think fragile regexes that parsed customer HTML). That was the last straw for me; I'll do pager duty, but not every night just because the PHB is a fool.
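To make the fragility concrete, here's a contrived sketch (hypothetical markup and a hypothetical `PriceParser`, not the actual scrapers in question) of why regex-based HTML parsing pages you at night: the pattern hard-codes the exact markup, so a harmless change on the customer's side breaks it, while a real parser keeps working:

```python
import re
from html.parser import HTMLParser

html_v1 = '<span class="price">19.99</span>'
# The customer adds one harmless attribute to their page:
html_v2 = '<span class="price" data-currency="USD">19.99</span>'

# A brittle regex that hard-codes the exact markup:
pattern = re.compile(r'<span class="price">([\d.]+)</span>')
assert pattern.search(html_v1).group(1) == "19.99"
assert pattern.search(html_v2) is None  # silently breaks -> 2 a.m. page

# A real parser only cares about the structure it was asked about:
class PriceParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.price = None

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) tuples; extra attributes don't matter
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.price = data
            self.in_price = False

for doc in (html_v1, html_v2):
    p = PriceParser()
    p.feed(doc)
    assert p.price == "19.99"  # survives the markup change
```

Swapping the regexes for even a basic parser like this is exactly the kind of unglamorous improvement that kills recurring pages.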
They mostly burned our time on trivial consultingware requests instead of improving the core product. I eventually started sneaking core enhancements in by padding out the customer work and just not telling anyone.
This is the same PHB who threatened to fire me because I had to rush my partner to hospital with a concussion. Wonderful human being that.
It's not a strategy I approve of, but it can be effective from a commercial standpoint, especially if the half that doesn't work is the half barely used in real life.
This was especially true in the past, with on-premise software, where clients were in a kind of lock-in with the software provider. Today, as we move more and more toward SaaS, this approach is far riskier because your client can easily switch to another service provider.
More seriously, this kind of situation can happen without ill intent: give a team one fast developer with a strong personality and he will become the lead for everything and the only one able to understand the code base.
Also, the velocity of an individual developer can be overvalued and mistaken for 10x output. It's certainly easier and faster to write spaghetti code than a well-designed, well-architected code base, but doing so can have dramatic effects down the line, or even immediately for the other developers on the team. Still, it can be an asset in critical situations (like "our startup will shut down if feature X is not implemented in two days").
It can be hard to differentiate between a developer like the one I describe and a real 10x developer like the one you describe. The applications we design can be quite difficult to implement depending on the complexity of the domain-specific logic they deal with. Sometimes it's impossible to distinguish between complexity caused by the application architecture (generally avoidable) and complexity inherited from the overall domain logic (nearly unavoidable).
And lastly, it's never, ever as black and white as I describe.
That being said, your colleague seems like a really good developer, able to see the big picture and steer a code base in the right direction. I hope for you that he will be part of your team for a very long time.
The 2nd edition of Peopleware summarises it: the 10x programmer is not a myth, but the comparison is of the best to the worst, NOT the best to the median. It's also not about programming specifically; it's simply a common distribution across many metrics of performance.
The rule of thumb Peopleware states is that you can rely on the best outperforming the worst by a factor of 10, and you can rely on the best outperforming the median by a factor of 2.5. This of course indicates that a median developer, middle of the pack, is a 4x developer. Obviously, this is a statistical rule, and if you've got a tiny sample size or some kind of singular outlier or other such; well, we're all adults and we understand how statistics and distributions work.
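The arithmetic behind that "4x" claim, spelled out (assuming the multiplier is measured relative to the worst performer):

```python
# Peopleware's rules of thumb:
best_over_worst = 10.0    # the best outperform the worst by ~10x
best_over_median = 2.5    # the best outperform the median by ~2.5x

# So the median performer, measured against the worst:
median_over_worst = best_over_worst / best_over_median
assert median_over_worst == 4.0  # the "median developer is a 4x developer" claim
```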
Peopleware uses Boehm (1981), Sackman (1968), Augustine (1979), and Lawrence (1981) as its sources. ["Peopleware", DeMarco and Lister, 1987, p. 45]
Now, all this is only an extrapolation of the above power law, which was originally meant to describe papers published within a time frame. To make things harder, it's really difficult to compare people across teams, let alone companies. And finally, it's all relative within a company.
From other posts, it feels like people tend to overestimate others' ability. I'm sure all those anecdotes are true, but the evaluation of a given individual as a 10xer may be overstated without a sensible metric.
It's incredibly hard to estimate anyone's relative abilities in knowledge fields, and this includes yourself (probably worst of all: estimates of your own ability are remarkably wrong much of the time).
How much of someone else's ability came down to an "aha! moment"? How much was because of what they've done before? How much was because of what they heard someone say about the problem that no one else caught? How much was from seeing three other teammates' efforts, noticing they were reproducing work, and cutting some of their workload? How much was because they kept everyone else excited and motivated to finish the project and do well?
Some things are easy to compare - times to run a 5K, how many biscuits you can roll in an hour, how many bricks you can carry at once. Most things aren't that simple.
Analyzing the relative ability multiplier for any given contributor can really only be done after the fact (often well after), if at all.
10x itself comes from the long-term observation that some programmers are just dramatically more productive than most. It doesn't come from one trick or technique; rather, it's a combination of strong modeling, internalizing the high-level goals, and being able to move through the layers of abstraction very fluidly. The 10x comes from building elegant models that bypass whole rafts of problems and unnecessary code that lesser programmers would create. It's more about the code they don't write than their choice of tools.
Consider git vs svn, for example. Linus was familiar with Subversion and its predecessors (CVS, RCS) before he created git. He knew that that model of change control was inherently flawed and that he could create a better base model. According to Linus, he built the git core in 10 days. Even though svn's lineage stretched back decades and tens of thousands of hours had been put into it, it took only a few years for git to supplant svn. The reason? The problem was modeled better. After 7 years of using Subversion, I was still getting bitten by weird bugs, had a terminal fear of branching, and felt general uncertainty about how the internals work. Within 1 month of using git, I had a better mental model of how it worked than I ever had of svn. The ecosystem around git today, especially considering how user-hostile the porcelain CLI is, is a testament to the original core Linus created in a remarkably short amount of time.
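The "better base model" is concrete enough to sketch. Git's object store is content-addressed: every blob is named by a hash of a small header plus its raw bytes. This toy function mirrors the documented blob format (it is an illustration, not git itself):

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """Hash file content the way git's object store does:
    SHA-1 over a "blob <size>\\0" header plus the raw bytes."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Identical content always maps to the same object, no matter which
# filename, directory, or branch it appears in...
assert git_blob_hash(b"hello\n") == git_blob_hash(b"hello\n")
# ...and any change yields a different object, so integrity checking and
# rename detection fall out of the model rather than being bolted on.
assert git_blob_hash(b"hello\n") != git_blob_hash(b"hello, world\n")
```

Once everything is an immutable, hash-named snapshot, branching stops being scary: a branch is just a pointer to one of these objects.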
Another example in the Ruby world is how Yehuda Katz and Carl Lerche created Bundler. Rubygems already existed, of course, but there was no way to freeze dependencies, and as the Ruby ecosystem exploded in the mid-2000s we rapidly entered our own version of the famed "DLL hell". These guys came in and, over a relatively short period of time, hammered out a solution for freezing gem dependencies, layered over Rubygems (whose antagonism they often faced), addressing a huge range of use cases, and got it adopted as a de facto standard. To this day, Python has made 3 or 4 attempts, none of them as good as Bundler. NPM, which started after Bundler was already out, was inferior and has only caught up in the last couple of years with Yarn.
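A toy illustration of what "freezing dependencies" buys (this is not Bundler's actual resolver, and the package names and version numbers are made up): resolve loose requirements once, pin the exact versions in a lockfile, and every later install becomes reproducible:

```python
# Hypothetical registry state at resolve time:
available = {
    "rails": ["5.2.0", "6.0.0", "6.1.4"],
    "rack":  ["2.0.1", "2.2.3"],
}

def _key(version: str):
    """Turn '6.1.4' into (6, 1, 4) for correct numeric comparison."""
    return tuple(int(part) for part in version.split("."))

def resolve(requirements):
    """Pick the newest available version satisfying each '>=' floor,
    returning an exact pin per package (the 'lockfile')."""
    pinned = {}
    for name, floor in requirements.items():
        candidates = [v for v in available[name] if _key(v) >= _key(floor)]
        pinned[name] = max(candidates, key=_key)
    return pinned

lockfile = resolve({"rails": "6.0.0", "rack": "2.0.0"})
assert lockfile == {"rails": "6.1.4", "rack": "2.2.3"}
# With the lockfile checked in, a later "7.0.0" release no longer changes
# what teammates install: they read exact pins, not open-ended ranges.
```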
These are the types of things 10x developers do: they build systems that pay huge dividends through their elegance, things that others can build on. Neither of these examples has anything to do with using a "wide variety of tools". Learning multiple tools is just something that happens organically over a career in most cases, but it's far from necessary. You could specialize entirely in C++ for your whole career, and while there are lots of problems it wouldn't be a fit for, that doesn't preclude being a 10xer.
The fact that BitKeeper prompted git does not really diminish the accomplishment in my eyes. Remember, BitKeeper was closed source, and the licensing crisis was precipitated by reverse engineering of BitKeeper in the broader kernel developer community. Linus specifically set out to build a replacement from first principles, and he built an incredible foundation in a very short amount of time. If it were a straight clone, that would be one thing, but in reality git is its own thing, and fairly par for the course for a top engineer leveraging past experience to build something better.
Joking aside, I only use a subset of git's functionality, which generally suffices. The part that keeps me happy is that the tools for detecting file renames and the diffing tools continue to improve and have become very usable (despite the diff options being quite arcane at times). There was an article a while back discussing how git's straightforward approach of storing the original data blocks allowed the continued improvement of the diff and rename-tracking tools, in contrast to others like Mercurial or Bazaar, which tried more sophisticated delta (?) techniques up front. Wish I could find that article again, but it would support the parent's premise that choosing the right models and frameworks makes a 10x programmer.
Being a 10xer means being fast at finding the right abstraction within a reasonable time frame, which in turn reduces the time constraints on execution.
In other words, think (well) before you act.
To be bloody minded, what happens if 10X says: "Why use RoR for an API-only app with no database when you could go with Sinatra? Or even serverless?"
And all the <10X devs on the team say "But we know RoR."
Then it's a cost-benefit analysis of whether retraining the team to adopt a "better" tool is worth it. Sometimes it's blindingly obvious that you shouldn't be reinventing the wheel when there's a wonderful library that'll do it for you; most of the time it's not so simple.
If you think you can tell within less than 1-3 years, then it's probably not the real thing. I.e., just spewing out code faster than someone else, or using some technology that appears to work, isn't really the test. There are those who do things very quickly, then redo them, redo them again, and fix them, all very quickly. A lot of action; not much gets done. There are things that are done slowly, without a lot of noise, but that get you to a completely different place. Very rarely is this purely a question of tooling, language, editor, etc.
That's not to say that being able to move quickly doesn't have value for proving a concept or prototyping an idea, where maybe stability, scalability, or maintainability don't matter. But when you're building for the long term, those are the 10x factors.
10x is producing code of such a high quality that time spent in QA doing break/fix dev is negligible.
10x is thinking up front about the data model and technical design before writing any code.
10x is maximizing simplicity (no unnecessary abstractions, etc.)
10x is having a deep understanding of the environment (OS, language, supporting libraries, etc.)
The difference between 1x and 10x is decreased time spent downstream in the development process, reworking code that doesn't work, maintaining it after, etc.
I came into my current company as a dev lead to create a modern dev shop from scratch.
There were some areas where I had no practical experience and didn't have the breadth of knowledge I needed and of course no one else in the company did either.
My first major green field project would have turned out a lot worse if I didn't have a network of former coworkers I could ask recommendations from.
That's in line with research by people like Alex Pentland who have identified face-to-face networking as the most important factor that distinguishes high performers from everybody else.
Extremely successful people often have a strong network of peers who they can rapidly bounce ideas off and iterate through new solutions.
The discussion on top performers often seems to have a narcissistic tone and focuses too much on individual aptitude.
I encounter this attitude frequently. If I were the more competent dev, why should I let in a person with such an attitude?
It is just not true that any 1x programmer can be a 10x programmer.
General intelligence, working memory and the ability to focus for extended periods of time can be slightly improved but cannot be improved by an order of magnitude.
It is a noble fiction to pretend otherwise. But it's still a lie.
In a large organization, it arguably helps to make the 1x programmers into 1.5x (or 2x or 3x) programmers, and the 2x programmers into 3x ones. But it does not make a 1x programmer into a 10x one. Whether that's possible has probably already been decided before you met the person.
The last five years have been pretty kind to me - senior developer at two companies, delivering a number of successful projects, building up development teams, delivering talks and open-source tools, and leaving each company in a much better position than when I joined.
From my CV, you'd probably think that I was a solid developer, but in reality I've felt like I'm a passable developer that simply learned how to use one framework really well. I was a senior .NET developer, but I felt like an imposter when people would talk about Linux, or many of the tools I see on HN every day.
So, I left a cushy job, and I've jumped into the deep end. It's fairly obvious that I'm not a 10x developer because I've found it REALLY hard! I can go from C# and read a bit of Ruby, but if you were to throw me onto a project and say "fix that bug" it'd definitely take me twice as long as a competent Ruby dev, and I'd probably add more bugs than I had fixed.
Hopefully things will improve, and within a year or so I'll feel comfortable enough in a range of languages and frameworks to feel less like an imposter. While it'd be nice to be a 10x developer, I'd be much happier just feeling less like an imposter.
I've seen 10x'ers that mostly used existing code and political/influence moves to go faster than everyone else. Smart people for sure, but not just code slingers.
Building off existing code is far far smarter most of the time.
For instance, we just moved to AWS. I am not going to spend hours upon hours learning in depth all of AWS's services. I did watch a few videos and subscribe to the AWS podcast so I could be aware of the different services to know what they have. When there is a business need to implement X, I'll know what's out there and then do a proof of concept to see if what AWS has fits our needs.
Best practices transcend technology. We've had "consultants" come into the company where I'm the dev lead and question my choices of technologies. I ask them two questions: what's the business case, and what industry-standard best practices are we not following?
Yes, but do you have any evidence of this leading to "10x" productivity, even anecdotal?
If they had Elo ratings for programming (like they do for chess), then a "10X" programmer would have a grandmaster rating of 2500+.
You know a "10X" programmer when you see one. Until you see one and experience working with one in real life, you simply won't understand. You will attempt to rationalize the concept by examining all of the programmers you know, and assuming that the best one must be a "10X" one. You may or may not be right.
Interestingly enough, 10X programmers almost never get paid 10 times what the average programmer makes. From what I've seen, they are lucky to get paid even twice what an average programmer makes.
How many devs would pass this test? Hardly any.
Engineers at Google simply have smart people they can leverage and easy access to experts and specialized contractors, and if all else fails, no need to worry: there's an ad-tech monopoly in place that keeps minting money.
You simply cannot discuss "Engineer" without also talking about the constraints involved.
Engineers at these companies are worse engineers, since the most important constraints are already taken care of by their respective monopolies.
Also software developer salaries are still quite similar to real estate prices. What matters is location, location and location.
He said he hates dealing with people. Even today, he has just 3 devs and still operates the same way handling multi-million dollar ad revenue.
Knowing something completely different like R, TensorFlow, or Erlang, and the specific scenarios where those are exactly the right tools for the problem you are facing, can completely change the kinds of problems you can solve in a short amount of time.
- Restrict cleaning, washing, and hygiene by and large to the surfaces seen by customers.
- Reuse food from leftovers.
- Use expired or different ingredients.
In the restaurant industry, if you do this and get caught, you get sanctioned. In software, there's no such thing. Do the equivalent of these things on purpose in software, and many times it's even encouraged.
Neglecting requirements in order to distort the perception of how much progress has been made, or the product value, is anti-consumer behavior.
I am not interested in anti-consumer behavior, but rather in doing my job and giving back to society. That means shipping code that meets the expectations of what's being sold.