It's MUCH better to have an engineer who takes 3 days to implement a feature in such a way that it doesn't need to be revised or fixed for at least a year than to have an engineer who takes 4 hours to build the same feature in such a way that it has to be revised 10 times by 5 different engineers over the course of that year.
The second approach actually consumes much more time in the medium and long term. By putting too much pressure on engineers to implement features quickly, you encourage them to create technical debt which another engineer will have to deal with later.
It's basically a blame-shifting strategy.
The only problem with this (correct) approach is that somebody has to be able to foresee what will happen in a year. Since that is a relatively rare skill, many decision makers choose the quick-and-dirty solution.
I haven't worked at Google but I have worked in both startups and big corporations and I find that big corporations are usually slower-paced and engineers have more 'generous timeframes' to complete features (and to do them properly).
The downside of the big-company approach, though, is that managers tend to use the extra time to create bureaucracy around the project (in an attempt to manage risk), and this slows things down further. So while big companies tend to 'get it right the first time', they do so at a significantly slower pace than what you would see in a startup.
As engineers, we rarely pick the absolute best solution for every problem - Sometimes we pick the second, third, fourth... best solution (because they have smaller initial implementation costs).
If management puts too much pressure on engineers to implement features quickly, they will rarely choose the best possible solution to a problem; they will keep picking the 6th or 7th most optimal solution over and over again, for every problem they encounter. In the end, the project will be littered with suboptimal solutions and the whole thing will slow down to a crawl.
Better solutions usually take more time to implement - The 10x engineer doesn't type code 10x faster than a regular engineer.
You could have a 10x engineer whose code is so inflexible that they slow everyone else down by 20x - In that case, they are not a true 10x engineer - In reality, they are a (1/n)x engineer where n is proportional to the total number of engineers working on that project.
For every task you need some minimal IQ. Some tasks require a higher IQ than others. A programmer is someone who can at least do the least IQ-demanding task; he will fail at some of the more difficult ones.
IQ is just a stand-in for mental compute capacity.
Inventing Calculus was hard, but virtually anyone can learn to pass a Calculus test given enough time and incentives.
I've encountered concurrency challenges in my day-to-day work that have no cookie cutter solutions, are specific to my application (particularly in constraints and requirements), and I haven't even been able to fully convince myself of the correctness of my own solution, let alone contrive a way of systematically proving or even testing it.
On the other hand, programmers are often faced with technical challenges such as optimizing and approximating in the face of limited resources, where the necessary heuristics and intuition can only be discovered and learned through experience and trial and error (pattern building and "finding" at its best). These meta "design patterns" aren't found in any books yet, especially because they are pretty hard to articulate in the English language.
There is an inventive element to the sort of problem solving that good programmers perform that goes well beyond pasting solutions from StackOverflow. This is why duct tape and bubble gum cobbling of out-of-the-box solutions rarely works in practice (and why good engineers are paid so well). Clever hacks are actually pretty damn clever.
It's scary how many "best practices" I've only learned through building my own production systems and having actual users test what I've coded -- having already done a fair deal of sitting in classrooms being inundated with CS theory from the very minds who conceived it -- even years of building pet projects of my own haven't prepped me for the challenges I am encountering now.
That sounds like an accurate description of 95% of the software in existence.
If the end user experience is anything better than catastrophic and it's making a profit, it's very hard to make anyone care. At best, there will be a big cleanup project that takes it from being totally incoherent and inefficient to being merely very stupid, but in a fairly consistent way.
Also like you say, everyone won't pass calculus. Not even programmers.
For a novel formulation of the problem, googling the original problem description will not give you any advantage, and implementing a standard algorithm will not work if your application needs n to be large.
I will admit that it's a slightly contrived example ;)
I'd guess that you get to at least a few percent if you count grad students, researchers, people working on performance-critical stuff like GPU drivers, people working on state-of-the-art tooling like Google internals or large OS libraries, etc.
The truly bad programmers destroy more than they create, making extra cleanup work for competent and productive programmers like me and everyone reading this.
On the other hand, that's an average. Most anyone can solve any individual problem once.
Apply the Intermediate Value Theorem to this example, as needed.
Working memory: memorize things that surprise you, then try to understand why you were surprised. You don't have to memorize things you understand.
"Thus, if we ignore the most extreme cases, the differences between the slowest and the fastest individuals are by far not as dramatic as the 28:1 figure suggests"
Why would you ignore the most extreme cases?
And after a (inordinately long) time reading, the light bulb moment happened and I added a single line of code.
In my view, when you are in the business of making Seven-League Boots, you don't need to sprint.
Yes, we want to deliver products quickly, but the link between good products, effective business, and speed of code writing is tenuous at best. Take your time, line up your shots, and be sure you are a value multiplier. (That's the real 10x programmer: 10x as valuable. That might mean using good SEO techniques to get paying customers, while using boring old SQL back ends.)
(Link to the years old article I have not written yet, comments welcome http://www.mikadosoftware.com/articles/slowcodemovement)
To me the main job of a lead dev is setting up all the boring processes of build scripts, continuous deployment, package libraries, testing environments, etc., that make the other developers' jobs easier.
It's the other way around. If you take the time to do things better, then you build up momentum and make future work easier. If you want to improve productivity, slow down and do things right, consistently. It'll pay off in the long run.
I don't buy it. Sometimes you need to do things deliberately slowly in order to think everything through. All the use cases, the potential edge cases, failure modes, and so on.
IMO a "10x" programmer is someone who knows when to crank out code and when to take things slow.
I'd love to believe that, but it assumes that the incoming rate of things to be done is and will remain lower than your sustainable velocity. In practice, that "more time for doing them better" has a significant chance of becoming additional technical debt.
The main findings from this investigation of the dataset variance.data can be summarized as follows:
The interpersonal variability in working time is rather different for different types of tasks.
More robust than comparing the slowest to the fastest individual is a comparison of, for example, the slowest to the fastest quarter (precisely: the medians of the quarters) of the subjects, called SF.
The ratio of slowest versus fastest quarter is rarely larger than 4:1, even for task types with high variability. Typical ratios are in the range 2:1 to 3:1. The data from the Grant/Sackman experiment (with values up to 8:1) is rather unusual in comparison.
Caveat: Maybe most experiments represented in variance.data underestimate the realistic interpersonal variability somewhat, because in practical contexts the population of software engineering staff will often be more inhomogeneous than the populations (typically CS students) used in most experiments.
Still only little is known about the shape of working time distributions. However, variance.data exhibits a clear trend towards positive skewness for task types with large variability.
The effect size (relative difference of the work time group means) is very different from one experiment to the next. The median is about 14%.
The oft-cited ratio of 28:1 for slowest to fastest work time in the Grant/Sackman experiment is plain wrong. The correct value is 14:1.
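The quartile comparison described above (SF: ratio of the medians of the slowest and fastest quarters) is easy to compute. A minimal sketch, using made-up work times rather than the real variance.data values:

```python
import statistics

# Hypothetical work times in minutes (NOT the real variance.data values).
times = sorted([35, 40, 44, 50, 55, 60, 66, 75, 82, 95, 110, 140])

n = len(times)
fastest_quarter = times[: n // 4]      # quickest 25% of subjects
slowest_quarter = times[-(n // 4):]    # slowest 25% of subjects

# SF: median of the slowest quarter over median of the fastest quarter.
sf = statistics.median(slowest_quarter) / statistics.median(fastest_quarter)

extreme = times[-1] / times[0]         # naive slowest-vs-fastest ratio
print(f"SF = {sf:.2f}, extreme ratio = {extreme:.2f}")  # SF = 2.75, extreme ratio = 4.00
```

Even in this toy sample the SF ratio (2.75:1) lands in the paper's "typical" 2:1 to 3:1 range while the extreme-individual ratio is larger, which is the point of using quartile medians.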
For the last couple days I've been doing Project Euler with Ruby and the lack of lag time translates into much better focus.
Another issue, of course, is that the person who repays technical debt on every completed JIRA ticket will probably be a bit slower. And the person who removes lines of code and asks whether a feature is really necessary is another beast. All that thinking is going to slow down your LOC per second.
Dividing the data into quarters instead of looking at individuals is definitely going to have a smoothing effect.
To «prove» the existence of a 10x programmer you would need a bimodal distribution over the percentiles of workers' speed.
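This point can be illustrated with a simulation: a single unimodal, right-skewed population (here lognormal, an assumption for illustration, not a claim about the actual data) already produces a large slowest-to-fastest extreme ratio with no 10x subgroup built in, so a big ratio alone is not evidence of bimodality.

```python
import random
import statistics

random.seed(42)

# One unimodal, right-skewed population of "work times" (lognormal).
# There is no separate 10x subgroup anywhere in this model.
times = sorted(random.lognormvariate(4.0, 0.5) for _ in range(200))

n = len(times)
extreme_ratio = times[-1] / times[0]  # slowest individual vs fastest
sf = statistics.median(times[-(n // 4):]) / statistics.median(times[: n // 4])

print(f"extreme ratio ~{extreme_ratio:.1f}:1, quartile ratio ~{sf:.1f}:1")
```

The extreme ratio comes out far larger than the quartile-median ratio, mirroring the paper's finding that comparing extremes exaggerates variability; to argue for a distinct 10x population you would have to look at the shape of the distribution, not the ratio.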
Grant/Sackman, Peopleware, and The Mythical Man-Month all try to answer a question that is tricky: what makes a creative person productive?
People focus on the speed. But they are just forgetting the most important part of the experiment.
One of the most important parts of the G/S experiment that everybody forgets is the lack of correlation between performance and 1) diploma; 2) experience after 2 years of practice.
Having done more than one job, in other fields of work that are also creativity-based, my «feeling» is that this applies not only to coders but also to musicians, intellectual professions, journalists, project managers...
What are the implications of this lack of relation between diploma/experience and performance?
1) Diplomas are overpriced; the job market is artificially skewed in favor of those who have the money for them;
2) New devs are underpaid, old devs overpaid.
The burden of proof that a diploma/experience is relevant for a job should be in the hands of those selling the diplomas. Diplomas, especially in computer science, seem to be a SCAM.
The effects of this scam are:
1) young workers enslaved by loans in jobs they may not be good at or may not like;
2) a rigid job market that prevents people from moving, artificially making full employment harder to achieve;
3) artificially exacerbated competition resulting in cheating from both sides.
Don't be concerned about it, but try to get faster.
>  Three of the twelve subjects did not use the recommended high-level language JTS for solving the task, but rather programmed in assembly language instead. Two of these three in fact required the longest working times of all subjects. One might argue that the decision for using assembly is part of the individual differences, but presumably most programmers would not agree that doing the program in assembly is the same task as doing it in a high-level language.
In my experience, making the right decisions like that is the real difference between good and not-so-good programmers. Good programmers make better choices on average, which results in less code, code that is easier to maintain and reason about, and a language and architecture that fit the problem at hand. It is not that good programmers usually develop so much faster.
Finding metrics which work well even when people try to game them is incredibly difficult (if not impossible).
Inefficient and complicated solutions build up, and the mediocre developer ends up fixing old problems.
(Slow is just an indication of mediocre.)
Unfortunately the long term effects are not visible until after a long time (duh), hiding the individual differences.