In 50 years of programming I've worked with a few. One characteristic is a deep understanding and command of all that they touch.
For example, I worked with Bill Schelter (GNU Common Lisp). In 1980, prior to the Internet, we were working on a problem (tail recursion). He was using the emacs editor and he found a bug.
Bill stopped what he was doing, fetched the sources for emacs (using ftp), found the source of the bug, fixed it, and sent off a patch. After that he restarted emacs and continued right where he left off. It was a virtuoso performance. It was like watching Victor Borge (https://www.youtube.com/watch?v=dKeqaDSjy98).
I know a guy who plays guitar. He can listen to a performance and repeat it as easily as I can type what I read. He can transpose keys, add riffs, change tuning "on the fly", and even play through with a broken string, adjusting chords to the new "5 string" form. He also knows guitar characteristics, pedal effects, etc., basically "all things guitar".
Great software engineers I've known seem to have complete command of all of the ideas, tools, and techniques. They understand their tools "all the way down to the metal" and all the way "up to the esthetics". You can see great people in all crafts (like woodworking).
They also write beautiful "crafted" code. I'm keyboarding a FORTRAN punched card deck (for an historical museum). You can see from the code that the author wrote clean, careful, and clear code. You can almost feel how good he was.
Greatness isn't about "team player", "hard working", "good at time estimation", or anything "social". Greatness is a property of the person, a deep love of the craft, and a focus on "doing it right".
>> Bill stopped what he was doing, fetched the sources for emacs (using ftp), found the source of the bug, fixed it, and sent off a patch. After that he restarted emacs and continued right where he left off.
>> Greatness isn't about "team player", "hard working", (...) Greatness is a property of the person, a deep love of the craft, and a focus on "doing it right".
In your story I see a great deal of "hard work" and "team player". Moreover, to be able to perform like that, I'm quite sure he had been working hard for a long time. Of course, there is talent and virtuosity, which I'm not downplaying. My point is that when people see such a virtuosic performance (in any field, like the ones you mentioned), where there is clearly a lot of talent, there is a tendency to downplay the hard-work part.
Hard work is the essence of what produces so-called talent. To my understanding, talent is more the culmination of a chain of clever thinking, which ultimately yields deeper understanding and a natural association with a subject, than a deterministic, natural-born trait. Although there are most definitely physical strengths tied to DNA, it wouldn't be entirely far-fetched to claim that different people have different mental strengths as well.
However, hard work doesn't always produce greatness. It would be misleading to say that all hard work is equivalent. It's not. This is captured by the phrase "practice makes perfect," though the phrase should really be amended to "perfect practice makes perfect." Aimless, bad practice will lead to aimless, bad results.
Mental focus and mental blocks are huge components as well. The greats are just as human as the rest of us. Comparison is mentally debilitating for everyone, especially if "talented" individuals have a higher rate of growth than the average. Yet I have to wonder: does the individual who "desires greatness" aspire to achieve validation for ego-boosting fictional accomplishments, or truly wish to push the edge of their capabilities to their respective human limits? The former have no chance at putting their energy into prospective efforts. The latter may continue to develop and create something the world hasn't seen before, and/or realize that they can do things they had believed they couldn't.
Can't speak for the OP but I read "hard working" in double quotes as something that's not the same as someone who spent a lot of time perfecting their craft. So in the Dilbert sense of the word.
I do notice the paper defines it as "someone who is willing to work for more than 8 hour days to deliver the software product". I'm thinking meh. I think hard work at some point is a requirement for achieving excellence in most disciplines. In something like martial arts or music, pretty much everyone I consider to be great has spent some years where their whole life was the discipline. I'm not really aware of anyone who got there by raw talent without the practice (there might be examples). My experience with software engineering is similar.
However that's not the same as saying you'll become great by working 10 hour days, i.e. that sense of "hard working". You need to be in the right mindset.
Also, this sort of folklore is cool, but you have to consider the context: great musicians are not necessarily great in any style of music or with any instrument. That said, some people are surprisingly skilled in broad areas, and skills do tend to transfer within a discipline to some degree. On top of that, to the novice, an intermediate may be considered great. You really need to have some perspective to be able to recognize greatness and appreciate it.
EDIT: random aside, but most 6-string guitar chords work just as well with a missing string. Most simple chords have 3 notes in them, so with 6 strings there's enough redundancy. If you know where the root, third, and fifth are in the chord (which most intermediate guitar players do), you'll immediately know whether the missing string removes any of those (and even if it does, the chord will still sound fine; you don't necessarily need all 3 notes). Transposing on the fly is a little harder, but you don't really need to be a master to do that... So stuff like that may seem like magic but isn't necessarily so... Playing by ear is cool; usually people who start playing young are better at this, especially if this is how they started. This skill, to some degree, is also reasonably common...
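To make the redundancy claim concrete, here's a tiny sketch using an open E major chord as the example (the chord choice is mine; the notes are the standard open-position voicing):

    # Open E major, strings low to high, sounds E B E G# B E.
    # The triad is E (root), G# (third), B (fifth).
    strings = ["E", "B", "E", "G#", "B", "E"]
    triad = {"E": "root", "G#": "third", "B": "fifth"}

    for broken in range(6):
        remaining = {note for i, note in enumerate(strings) if i != broken}
        lost = [role for note, role in triad.items() if note not in remaining]
        print(f"string {broken + 1} broken: lost {lost or 'nothing'}")

Only breaking the G string (fretted to sound G#) drops the third, and as noted above, the chord can still sound fine without it.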
I think there's work and there's work. The people I've observed with great engineering talent have certainly worked to get it, but they have been selective to the point of selfishness about what work they do. That work might look like "fun" to colleagues, so they are diplomatic about it, or don't do it completely out in the open.
Of course the proliferation of free engineering tools makes it possible for them to have two lives, one doing the "hard work" that satisfies colleagues during the day, and the other "hard work" that develops talent at night.
I've also seen many young engineers plunge straight into "hard work" consisting of running the CAD terminal and fighting fires, and those engineers were recognized as being productive but lost their fundamental knowledge and did not become great engineers.
TLDR: Scott and his younger brother were both enrolled in piano class. His brother took to it and is now an internationally known musician. Scott meanwhile started off trying equally hard but never got anywhere in music - but won writing competition after writing competition without it feeling like effort to him.
I like to compare endeavors to athletics. People have natural talent that, by itself, will take them only so far. People with less natural talent can work hard and surpass people with superior natural talent who don't work hard.
To be the top of the top, you must have natural talent and nurture and bolster that with hard work. The common denominator of all the exceptional people is hard work.
I appreciated the first 90% of your comment, and consider the picture of greatness you painted therein something to aspire to. Deep thoughtfulness, personal responsibility, craftsmanship, and virtuosity are all traits I hope to one day embody. But as a matter of argumentation, the last line is simply discordant: it does not follow from the anecdote (beyond the obvious fact that the anecdote is anecdotal).
In my experience the most brilliant people aren't brilliant individually or brilliant as a team player or brilliant at X or brilliant at Y, they're simply brilliant. The one exception to this observation, in my experience, is that I've met a few brilliant mathematicians who are very narrowly brilliant.
It definitely could be the case that my last point and GP's last point overlap to some extent. I read it (with the sentence before it) more as a claim that certain characteristics aren't factors when it comes to "greatness".
I encounter so many bugs nowadays in my tools. I would probably spend all my time fixing those bugs in multi-million-LoC codebases (assuming that tool is open source). That story sounds like something that could happen back in the day, when things were simple enough. I wonder if it could happen today.
It's like with polymaths. The higher up the stack we move, in science, the harder it is to know multiple fields, let alone have a major impact on several.
Documenting bugs is providing a lot of value already.
It would also be inefficient for you to spend time learning how the tool works, setting up the environment, etc., when there are people who know it already.
> Bill stopped what he was doing, fetched the sources for emacs (using ftp), found the source of the bug, fixed it, and sent off a patch.
That would be far more difficult if the source code was in a poor state. Some libraries are easier to patch than others, I can think of a couple of libraries I've wanted to quickly patch and soon (not soon enough) realised that it's unpatchable because it's a mess (or perhaps even worse, is too clever by half and has far too much indirection and not enough comments) and would need extensive surgery.
That's not to diminish the strength of your story about Bill Schelter but only to point out that it's easier to do your best work if others are doing theirs.
Personally, I think it's two things - working hard and lack of fear/anxiety.
In your example, this Emacs bug fixing, there are a lot of people who work hard but would just assume that the bug is probably too difficult to fix (they would fear it taking too long) and decide not to do it.
I know personally how much anxiety can block you from doing things that help you progress. It also makes decision-making slower. I think playfulness can help to alleviate it, but it's hard to maintain in a "professional" setting.
> Great software engineers I've known seem to have complete command of all of the ideas, tools, and techniques. They understand their tools "all the way down to the metal" and all the way "up to the esthetics". You can see great people in all crafts (like woodworking).
Do people like this exist today? I mean, the complexity at every level in the stack seems to have grown exponentially (maybe?) for decades. Are there engineers who are experts covering every layer across modern hardware architecture, CPU instruction sets, modern operating system architecture, cybersecurity, networking protocols, web technology, database systems, file systems, cloud infrastructure, quality control, data science, AI, AR, VR, machine learning, UI framework design, wireless interfaces, etc.? And that's all before you even consider the non-technical skills of software engineering. The bar moves higher every day and I'm not sure this level of expertise is achievable/maintainable.
You don't have to excel at every layer. If you understand the basics of every layer, you're so far ahead in the game that you might as well be a magician.
You're making a mockery of the original statement. If you focus on YOUR stack (not just any stack), then with a bit of focus it should be possible; but you need to stop learning yet another JS framework.
Never met one. I've met many who looked very talented, but once the stack changed they were completely out of their depth, like fish out of water.
They were very productive with what they knew, but at some point they got tired or didn't have enough time to reach the same level of mastery again. I'm talking mostly about web/UI work.
The variety of tools that many software engineers work with today is so large that I find it virtually impossible to fully master them all.
Let's say a software engineer writes a C++ application that stores documents in an inverted index and he also takes care that the program is stored in a docker container and deployed to Elastic Kubernetes Service via Jenkins. Good luck understanding your IDE, C++, Tries/Radix trees, docker, Jenkins and EKS "all the way down to the metal".
It may be possible for a small subset of those but if the expectation for great engineers really is to deeply understand "all that they touch", I wonder how many can keep up with this standard.
After a few years focused on those areas, with a solid understanding of computers and software as a foundation, it may even be possible for many engineers. The issue is the tech landscape shifts so quickly that you probably won't be using that exact stack in just a couple years. Swap jenkins with gitlab ci, docker with podman, c++ with golang. Functionality is still pretty close, but implementation details are different. And the road to mastering them is reset.
Don’t confuse familiarity with greatness. Of course if you wrote the damn thing you can demonstrate technical virtuosity.
Edit: Maybe I read too fast. Anyways, intuition is sort of the precursor to greatness. You know those designers that pick out colors innately? No amount of color theory studying will get you to that level. You are probably describing someone with an intuition for debugging.
One of those things in life everyone has to learn eventually. Don’t fight the tide, find the waves that are certainly for you.
Can't claim I read the entire thing, but I got down to the methodology and must say it doesn't look particularly impressive. The study is a survey sent out strictly to Microsoft employees asking them how they rank a set of pre-defined criteria about what makes a great software engineer. Those criteria include things like "hard working", "honesty", "team player", "creates a safe haven", etc.
Obviously no Microsoft employee is going to say something like "Lazy liars who treat people like crap make great software engineers."
At the very least you'd expect a study to have a control group in order to filter out these useless questions. For example, send a similarly worded survey to a group of grocery cashiers and ask them whether attributes such as "hard work, honesty, integrity, long-term thinking, being a team player" make someone a great engineer.
My hypothesis is you'd end up getting similar answers from taxi drivers as you would from Microsoft employees and consequently that there isn't much value derived from this survey or their selection of participants.
Anyways, I'm by no means an expert and could be misjudging this research but frankly it looks to be beyond useless.
Most academic software engineering “research” I’ve seen is pretty useless, especially these kinds of human factors studies. Questionnaires and interviews are usually too underpowered or specific to be statistically significant, and their terms are defined vaguely enough to draw any conclusions the authors want. Occasionally there is some interesting demographic information, but most of the papers I’ve read are usually glorified position papers under the guise of an empirical study.
What is your take on the DevOps DORA research? From what I could tell of the explanations of methodology in Accelerate (as well as Jez Humble and/or Nicole Forsgren talking about it) they seem to do relatively solid survey-based work.
One of the key takeaways from the paper for me is the relative ranking of those 54 attributes. If the employees ranked “long-term thinking” above “creates a safe haven”, I have possibly learned something. If engineers rank attributes differently than managers or if cashiers rank attributes differently than tech workers...
Side note: I also thought the negative phrasing methodology used was interesting. They asked “is it possible to be a great engineer without this quality?” which I think is most tuned towards qualifying and ranking attributes.
Will it tell you individually how to become a great engineer? No, there are hundreds of posts, many on medium.com for that. This paper may not be perfect, but I found a lot to consider inside and “beyond useless” is pretty far from my judgment.
The paper may not offer insightful prescriptions for experienced engineers, but can work like this still be useful for informing future studies in a meaningful way? The authors repeatedly note the widespread inadequacies of the current research landscape. (To anyone familiar with the literature, is their assessment accurate?) In my eyes, the message is that the paper represents an incremental step in the direction of a truly detailed understanding of the factors involved in developer productivity. Even if there's a broad intersection between the answers one would get from taxi drivers and from computer scientists, the distance between the two fields makes that an unexpected result, which should prompt us to change how we think about computer science and/or how we think about studying it.
> The authors repeatedly note the widespread inadequacies of the current research landscape.
This is standard language in academic research papers. It is there to sell the importance of the research to the reader, in particular to journal editors or peers who review the article. It is mere puffery.
In grad school we had a forced module on information systems in HR in which one case was about RBC and their new formerly-Bain Director of HR Data and the 3 or 4 year effort to “quantify” labor. Cases should be taken with some skepticism but let’s assume the case was correct.
I wanted to tear my hair out. Not only was the output or ultimate deliverable of this gargantuan activity an infographic of largely non-falsifiable random words (trustworthy, reliable, predictable, etc.), I did a quick back-of-the-envelope calculation that the approach of repeatedly soliciting feedback on every employee from every business line and every customer they intersect could generate 5 trillion outbound survey touches of "friction" for their customers and stakeholders in a given year, give or take 2 orders of magnitude. What does that cost in terms of non-enjoyment? Does it even work, and at what response rate, at that scale? And what does it cost to maintain, since surely the Canadian treasury at least sort of requires, or at worst encourages, banks to employ trustworthy employees? Worse, how implementable are the outcomes and/or insights? For a real open requisition, is evaluating "trustworthiness" at time of hire possible? Is it possible to compare two candidates on a relative or absolute basis on this metric? And if they are hired, going forward, how is that tracked? More surveys?
Perhaps the case wasn't as bad as I remember, and I'm sure the paper in the OP HN post has subtleties in survey design or methods I don't appreciate, but just to pick on them by cherry-picking the first few attributes that stick out: passionate, systemic, data driven, focused, hardworking, persevering, etc.?!?!?
Worse, where's the actual data? When Google apparently decided to get rid of middle management, it felt like a reversible scientific experiment with measurable outcomes and metrics. This doesn't. For example, from personal experience: in investment banking, being hard working, especially at the very lowest and newest ranks, is similarly lauded, required, and emphasized. But with a fairly simple app that tracks, at the second level, which app is open and when, and can assign a rule-based score, reviewing a few years of data showed that the data contradicted the folksy saying.
The data showed that above XX hours per week my work was non-restorative, that further above XX hours there was some incremental inefficiency increase, and that above even that there was a "tech debt"-like factor which later required recuperation (which could also be calculated, but maybe that was an accident). In English: if (simple numbers) working 10 hours leads to 8 hours of work and 2 hours of goofing off for some productivity, maybe working 14 hours is 10/4, and maybe working 17 is 11/6 and also requires 2 hours of rest in a subsequent trailing period. Depending on what's going on, maybe it makes sense to do that, maybe it doesn't. The software engineer view would be much more interesting.
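Just to make the shape of that model explicit, here's a minimal sketch; the breakpoints and ratios are the illustrative numbers above, not measured data:

    # Piecewise-linear model of productive hours vs. hours at the desk,
    # anchored at the illustrative points: 10 -> 8/2, 14 -> 10/4, 17 -> 11/6.
    def split_hours(at_desk: float) -> tuple[float, float]:
        anchors = [(0.0, 0.0), (10.0, 8.0), (14.0, 10.0), (17.0, 11.0)]
        for (x0, y0), (x1, y1) in zip(anchors, anchors[1:]):
            if at_desk <= x1:
                productive = y0 + (y1 - y0) * (at_desk - x0) / (x1 - x0)
                break
        else:
            productive = 11.0  # flat past 17h: more desk time buys nothing
        return productive, at_desk - productive

    def recuperation_debt(at_desk: float) -> float:
        # ~2h of later rest owed at 17h, accruing past the 14h mark.
        return max(0.0, (at_desk - 14.0) * 2.0 / 3.0)

    for h in (8, 10, 14, 17):
        p, g = split_hours(h)
        print(f"{h:>2}h at desk: {p:.1f}h productive, {g:.1f}h goof-off, "
              f"{recuperation_debt(h):.1f}h recuperation debt")

The point isn't the exact numbers; it's that marginal productivity falls and eventually goes negative once you account for the debt.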
It would be fascinating to compare even arbitrary metrics of consenting individuals across a large pool. Who gets the biggest bonus relative to hours worked? What time do peak performers come in and leave at and are there any wfh days? Even trivial but theoretically transferable insights would help: “what percent of top performers drink no caffeine or tea” or “pair programming and velocity”. Heck, buzzfeed baiting analysis like “do top engineers type faster? A 10 part slideshow on why you should switch to a mechanical Dvorak keyboard” or “Can you type XYZ words per minute? Find out if you are dragging your team down”.
Anything but more lengthy repetitions of “trust” and “respect” and whatever the word of the day is.
It’s interesting there are so many different answers given to this question whenever it is asked. If we asked, “What makes a good surgeon?” or as pointed out in another comment, “What makes for a great basketball player?” it seems that there would be less debate.
It seems to me that this debate is related to another question, “Why do so many software projects fail when you don’t see any skyscrapers collapsing under their own weight?”
If you could build a skyscraper according to rules limited only by our imagination, then perhaps we’d see a lot more of them collapsing. And given that frequency of failure, we’d see a lot more debate about what makes for excellence in skyscraper-building.
Maybe a better question to ask is, "What makes for a poor software engineer?" That question seems a lot easier to answer. A poor engineer takes a long time to write unreadable, unmaintainable, buggy code that only partially solves problems that no one has. Ironically, such code often goes unnoticed by its very nature: it takes a lengthy period of time to even come into existence; when it does exist, it is so buggy that its failure seems to stem from the bugs rather than from the fact that it solves no problems; and the unreadable nature of the code is mistakenly attributed to essential complexity rather than incidental complexity.
And that last point is where I think we get closest to an answer: Good engineers reduce complexity. Bad engineers magnify complexity.
Tolstoy wrote, "All happy families resemble one another, but each unhappy family is unhappy in its own way." It seems to me that the same could be said of software engineers.
So many of the thought exercises present in your comment are missing rather obvious validations.
Why do we not ask this of other professions? Because other professions have metrics and regulatory bodies.
Why are no skyscrapers falling? Because they are well regulated and required a huge infusion of cash to build. A software company with such a large cash infusion is also likely to not fail.
Software engineers are best analyzed on a bell curve, and once identified according to such plotting relative to other developers, the criteria become clear. The problem, for many people, is that selection bias prevents them from seeing the objective distinctions. If not for those blinders, the question of what makes an excellent software engineer largely answers itself.
> Software engineers are best analyzed on a bell curve, and once identified according to such plotting relative to other developers, the criteria become clear
Absolutely. Four out of six software engineers are average and interchangeable - they get tickets of average complexity done, and make mistakes, some of which are caught by other average programmers in code reviews.
One out of six engineers is junior, or is effectively junior, and is a drag on teams until they get up to speed. They can only do simple tickets, every pull request must be carefully monitored etc.
One of six engineers is one standard deviation above the rest. If a new system or module or library is to be built, the team defers to them for architecture. They notice subtle problems others miss. They are up to date on the stack and know what features and problems are coming down the road. They tend to do the needed work outside of what the standard ticket requires.
One feature I have noticed about engineers one or more deviations above the mean is their focus, when needed, on the build system. They work to make a simple, working, fast as possible build. If the build is breaking, or gets too complex, they will address the problem in the build system and then get back to work.
> Software engineers are best analyzed on a bell curve, and once identified according to such plotting relative to other developers, the criteria become clear.
So how are you plotting this bell curve without having criteria to begin with? Sounds like you already decided what you were looking for.
Objectively, it doesn't matter. You need several measurable criteria. Each of those criteria is itself a subjective choice. They could be things like quantity of delivery, originality, speed of change, and whatever else you might want. You just need to measure something to establish a plot.
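As a minimal sketch of what that plotting could look like (the criteria, signs, and numbers here are all made up for illustration):

    from statistics import mean, stdev

    # Hypothetical per-engineer measurements; any criteria would do.
    engineers = {
        "alice": {"tickets": 48, "review_comments": 120, "defects": 3},
        "bob":   {"tickets": 30, "review_comments": 60,  "defects": 9},
        "carol": {"tickets": 55, "review_comments": 90,  "defects": 2},
    }

    def zscores(values):
        m, s = mean(values), stdev(values)
        return [(v - m) / s if s else 0.0 for v in values]

    # Normalize each criterion across the group, flip the sign where
    # lower is better (defects), and average into a composite score.
    names = list(engineers)
    signs = {"tickets": 1, "review_comments": 1, "defects": -1}
    composite = dict.fromkeys(names, 0.0)
    for crit, sign in signs.items():
        for name, z in zip(names, zscores([engineers[n][crit] for n in names])):
            composite[name] += sign * z / len(signs)

    for name in sorted(composite, key=composite.get, reverse=True):
        print(f"{name}: {composite[name]:+.2f} SD from the group mean")

Whether those criteria actually capture "greatness" is, of course, exactly the contested part.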
"Why do so many software projects fail when you don’t see any skyscrapers collapsing under their own weight?"
I think this is a great question to use as a thought exercise. We don't see the designs that fail, because they don't pass review. Using the analogy of software engineering as the design stage (vs. the build/construction stage), this would be a closer comparison. How many skyscraper designs fail before one ends up being taken up?
Also, just because buildings don't collapse and fail catastrophically, I assume there are many flaws in the design that get "worked around" during construction. Many flaws (bugs) do likely end up "in production", but they are more of a technical debt type of issue that will be a burden for building maintenance and/or future tenants.
Yep. In our coworking space, a fire marshal comes by to make inspections. If we're not up to snuff, we'll get closed down. I cannot imagine too many software projects where people would submit to random audits and accept total shutdown if they don't meet specific criteria.
As other commenters have noted, this is a quite generic article with very little specific to software engineering. For a much more informative take on the same topic, read "Norris Numbers" by Lawrence Kesteloot:
I recommend reading that Norris Number article. It is very interesting in terms of describing walls you hit at various code sizes, and it matches my experience. I've been thinking in similar ways, although in "complexity" rather than lines of code.
I think the whole "10×" programmer debate makes more sense when considered in context of Norris Numbers. If the task is on one side of the complexity wall, both programmers perform roughly the same. But if the task is on the other side, one programmer will flounder around while the other will succeed, resulting in the fabled 10× performance.
Imo great engineers produce a lot of good-to-great-quality work fast. They care about the big picture and get involved with the actual domain they're working in. They see code reuse opportunities and friction company-wide vs. only whatever team they're on. Great software engineers will write tools, packages, and guidelines that the entire company can use, not just their small team. They know how to stand up to management and produce quality work that users actually want.
I like to think a great engineer is also not afraid to write some iffy code in 1/10 the time if it solves 90% of the problem, and isn't bothered by the code not being too pretty.
The trick is to know the bounded context of the "iffy-ness". Iffy code that might occasionally lead to latency or a timeout for some users using your service, is different from iffy code that charges the user the wrong dollar amount at checkout, or iffy code that could brick a user's device. Great engineers know which things are safe (enough) to take what types of shortcuts on.
It can be that high. It's not the number of lines of code produced but the value/utility created. A great engineer can actually make the same product in X days where it takes an average developer 10X days to come up with the same. The better engineer might (and in most cases, would) have written less code.
Yes it can be that high depending on _how you're measuring_. I think this is why the myth has gone on for a long time. It also greatly depends on who you're comparing. Are you comparing Peter Norvig to a 1st year undergraduate, or someone on Peter's team who has around the same experience and skillset?
Fair enough, but the problem is you can't always find a team full of people "who has around the same experience and skillset as Peter Norvig".
That's basically what "The Mythical Man-Month" (the origin of the 10x developer) says: "Always recruit star programmers if you can, but most of the time you can't. Therefore learn how to best utilise the average ones."
A steaming dumpster fire of a paper that assumes greatness can be determined by a ranked numerical model, then smears this assumption all over the floor and carpet.
The algorithmic HR startup that inevitably follows will be gamed into oblivion.
As an aside, in his appearance on JRE, there's one part where Joe Rogan asks him about working hard and JC says something like (going off memory) "I don't really like that idea - of working really hard being the thing. Personally, I've noticed that after a while the quality of my work really drops off. Around 13 hours in I'm not really doing a good job". I remember the moment because I was listening to this while driving 90 mph on I-280 and I laughed out loud.
Personally, it reminded me of how people say "it's not a sprint, it's a marathon". Yeah, but Haile Gebrselassie runs the marathon at an average of 20 km/h. Some people really do operate at a heck of a limit.
This rings close to home for me; I'm a firm subscriber to the mantra that laziness is a good quality for a software engineer to have. It has saved me many times already when working on difficult/never-seen-before problems, and (conversely) not adhering to it has bitten me multiple times.
Often it simply takes some time for solutions to new problems to percolate, and you need to take a step back regularly during the process. Forget about the problem completely and do something entirely different, interspersed with short bursts of research and reading on related topics. Sooner or later the contours of a solution will start to form, and you will (hopefully) be able to realize it much quicker and at higher quality compared to 'working hard' by pounding away and trying to force things.
I cannot count the number of times I almost gave up on a problem after working myself into multiple dead-ends, and almost instantly seeing a path forward after taking a few steps back and allowing my brain to work itself out of these dead-ends.
Not spending 1 week of 'hard work' on a bad solution can save you months of work in the future.
from [1]
> People always wind up extrapolating sort of unacceptably, where people think "oh, I worked 18-hour days" or something, and I have to say no, I never worked 18-hour days, because I know my productivity falls off a cliff after 13 hours. That's about the longest that I can do any effective kind of computer work, and the key to even being able to get an effective 13 hours is having multiple tasks that you can switch between, rather than just kind of sitting heads-down grinding, beating your head against one specific topic. For most of my career now, I like working a 60-hour workweek. I like being productive. Nowadays I have family and kids and I usually miss that target by a bit, but if I ever don't hit 50 hours a week I feel I'm being a slacker.
My favourite part about this is that he's clearly not bragging or anything. He's just explaining this view, but his threshold is so ridiculously high that it completely defeats the explanation and he doesn't even realize it because he's been doing it for 30 years. In the audio, you can hear him talk with such sincerity.
> Personally, it reminded me of how people say "it's not a sprint, it's a marathon". Yeah, but Haile Gebrselassie runs the marathon at an average of 20 km/h. Some people really do operate at a heck of a limit.
I wonder if that might be missing the broader point of the saying. Yes, it's true that some people do operate "at a heck of a limit", but it's much more true to say that human beings, compared to other animals, have distinct advantages and disadvantages. Bear in mind that few other species can actually /generally/ run marathons: we evolved the ability to sweat and engage in endurance running /in pursuit/ of persistence hunting.
Maybe it's just a personal thing these days, but I'm even more curious about min-floors than max-ceilings. Not just because of patterns like this, but because the thought leadership trope of "let's look at a leader in the field and figure out what we can learn from them" is not very useful. Figuring out why it's so easy for me to get into decent aerobic shape pretty quickly as a human being (for millions of years, it was useful for hunting) is a lot more enlightening.
So to get back to your point, JC is probably even more of a "marathon runner" than he looks like. I've seen that with all great engineers (and founders) -- they have the patience and opportunism to pull solutions out of a different problem space from scratch for a six to seven year stretch of time.
It's not about working hard. It's not about working fast. It's about merely surviving -- and getting creative with how you go about doing that. And it is almost always about efficiency.
> Personally, it reminded me of how people say "it's not a sprint, it's a marathon". Yeah, but Haile Gebrselassie runs the marathon at an average of 20 km/h.
He is so agile he can run the marathon as a sequence of sprints. I wonder if software developers will be expected to do the same.
This reminds me of the saying that you shouldn't expect someone to do something when their paycheck depends on doing the opposite. That said, yes, a great software engineer would be writing software BUT would have said no to a lot of other tasks, so that the task she is working on is vital and has a big impact.
No, they don't. Writing new code is often cleaner and simpler than adding a library. Exactly when to do either is arguable, and picking the right choice requires skill. So always going the library route when possible means you aren't a great software engineer.
Then write down what you mean; people aren't mind readers. However, you will find it quite hard to write down what you mean in this case, since your statement doesn't work even if you apply it to other situations.
Like, should you hire a guy to manually set a field on new entries in the database every night instead of writing a short script to do it for you? That is a non-code solution, so your answer would be yes. But I'd say no, that is a dumb solution. You could say "but don't follow this rule when it obviously doesn't make sense", to which I say that figuring out when the rule applies and when it doesn't is itself part of being a great software engineer. Short platitudes like what you wrote sound great but are actually mostly nonsense.
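(For the record, the "short script" in question is about this big; the table, column, and file names are made up, and you'd schedule it nightly with cron, e.g. 0 2 * * * python3 backfill_flag.py:)

    import sqlite3

    # Nightly job: set the default flag on any rows that arrived since
    # the last run, instead of paying someone to do it by hand.
    conn = sqlite3.connect("/var/data/app.db")
    with conn:  # commits on success, rolls back on error
        conn.execute("UPDATE entries SET reviewed = 0 WHERE reviewed IS NULL")
    conn.close()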
I'll give you an example about how it can be interpreted:
A good engineer stops unrealistic/bad ideas before they even reach a point where people start to think about how the idea could be carried out in detail.
A bit OT, but... I find it interesting, even fascinating, how people in the field of software seem to absolutely obsess over greatness in individual engineers.
You see this in many shapes and forms: whether it's a discussion of what makes one a true "10x" engineer, what lifestyle habits result in excellent engineers, or what personality traits to look for when searching for world-class software engineers, etc.
Having been in many other (technical) fields outside software engineering, I've never seen the same mentality or obsession there.
I think it might be related to the leverage Software Engineers have compared to people in other fields.
It's possible for a single Software Engineer to stand up an entire system by themselves. Most other fields don't have anywhere near that amount of leverage.
It’s a form of narcissism. The same way humans marvel at how Earth could possibly have the perfect conditions for life to exist. Shit must have been meant to be, right?
In our own small little worlds, we build software that sort of fits the business, and collect a check. Woah, how’d that happen? I wonder if we’re like, amazing or something?
I think it's because measurement, metrics and estimates are so hard in software engineering.
How long should it take to do X? If X is building a shelf, servicing a power plant or designing a car headlight I would imagine at least within the field the order of magnitude is pretty clear. Sure, better tools and experience may make someone twice or thrice as fast or the new guy is taking double the normal time, but such spans fit neatly into the human brain.
But now observe how neither of these statements seems off:
Getting my new web-based CRM tool written and published took me the whole afternoon.
Getting the new web-based CRM tool written and published took us 24 months.
Now add to that the fact that nobody really understands software top-to-bottom (or at all) and it is no surprise that everyone is confused all the time, builds weird mental models of things and skills while vaguely striving to achieve the first scenario and not the second.
The more experienced I get, the more doubtful I am about all code.
In the end I will be a hermit on a mountain, imagining a code base that does not evolve into a monstrous mess.
Honestly, I envy the young mavericks who code shit and get work done. They at least sleep at night.
> Honestly, I envy the young mavericks who code shit and get work done. They at least sleep at night.
Ignoring the burden-shifting of on-call rotations, I often find the young mavericks don't sleep at night -- they're firefighting and debugging their work.
As a freelancer, I am usually hired 3 months before the delivery of the product to help with the firefighting.
In the end, juniors don't deal with the real issues in their code. We, senior developers, do.
Note: after a stroke of lucidity, I stopped accepting those kinds of missions.
Weird to me how down these comments are on what I read as the main message of the paper, i.e. that engineers aren’t just code monkeys.
Being a “great engineer” is not strictly determined by technical ability; in fact, social ability plays a big role in a so-called “individual contributor role.”
What is even a code monkey? A coder takes a problem described in human language and solves it using computer code. In order to do this well you need to both be able to properly read and understand problems in human language and be able to write and structure code. There is no world where you can write code without thinking about the hard stuff, at least not if you write something more complex than a basic static frontend webpage.
I think the term has negative connotations because of people who build the wrong thing before asking critical questions or clarifying anything that would help shape the requirements.
I understand some people enjoy being just implementers, though. I don't think that's enough but to each their own.
Question for everyone.
Doesn't everyone in the industry know what makes great software engineers? And that usually there are only "blockers" that stop you from being great?
Let me give an analogy. If you're paid to play basketball, wouldn't you know what makes a great basketball player? If you're playing pro basketball, you can be a great basketball player. The only reason you can't be great is that there are certain things outside of playing basketball that you have to worry about.
I guess I'm asking because it's kinda annoying when people tell me what makes a great engineer. Anyone I know who has been doing this a few years knows what makes a great engineer. It doesn't mean they want to be a great engineer, because the trade-offs just aren't worth it.
People always seem to want to pretend it isn't a factor, but natural talent also matters, for both basketball and software engineering.
In basketball, it is quite easy to prove that some people are simply unable to ever be a great basketball player; there has never been an NBA player shorter than 5'3, only 25 shorter than 6'0, and only 10 under 5'10. Are you really telling me NO ONE shorter than 5'3 ever decided the trade offs were worth it?
No, at some point, it doesn't matter how hard you work or whether or not you have any outside issues; some people simply don't have the genetic attributes needed to be great.
Now, everyone can get better, but we all have some limit to our abilities that we approach but don't cross as we work harder and practice. Not everyone who isn't great should feel like it is because they didn't choose to be great, some people (most of us, probably) just aren't capable of being that great.
Of course, most of us aren't that close to our limits, and it is impossible to know what those limits are.... but they are there nonetheless.
>> In basketball, it is quite easy to prove that some people are simply unable to ever be a great basketball player
There's a simple cure for this: height classes.
You can still be a GREAT basketball player if you're only 5 feet 6 inches tall. Is the best 5'6" basketball player in the world not great? They just can't compete against the physically taller players. What if you made a league exclusively for players between 5 and 6 feet tall?
Floyd Mayweather is a great boxer. But if there weren't weight classes in boxing, and you put him in the ring with some journeyman heavyweight, he'd lose. He's just too small to win.
> In basketball, it is quite easy to prove that some people are simply unable to ever be a great basketball player; there has never been an NBA player shorter than 5'3, only 25 shorter than 6'0, and only 10 under 5'10. Are you really telling me NO ONE shorter than 5'3 ever decided the trade offs were worth it?
Anecdotally: I'm 6'1. At middle-school age, I was _hounded_ into playing basketball (until they realised I had no talent whatsoever, nor any inkling that I wanted to try to improve that). There's definitely a selection bias in who gets started playing basketball in the first place.
Back on topic,
> some people simply don't have the genetic attributes needed to be great.
I think this is a massive stretch. It's fair to say that some people don't have the attributes to be in the top 0.01% of their game (in basketball, there are a few hundred(?) players in the NBA; that's the equivalent of the John Carmack/Peter Norvig types that have come up here), but is there any proof whatsoever, of any form, that an "average-to-poor" engineer can't become a "great" engineer (in basketball terms, going from the person who can't dribble to the person who absolutely dominates your after-work league consistently)?
> Anecdotally: I'm 6'1. At middle-school age, I was _hounded_ into playing basketball (until they realised I had no talent whatsoever, nor any inkling that I wanted to try to improve that). There's definitely a selection bias in who gets started playing basketball in the first place.
This has had a bigger impact than you'd imagine :-) For example quite a few tall football players have been nudged when they were kids to play basketball. But luckily some more open minded coaches let them play and found out that they were actually good. The best example is Jan Koller: https://en.wikipedia.org/wiki/Jan_Koller
2.02 m (6'8"), so definitely towering above most other people :-)
But the thing is, genes do matter at a certain level. I really think you can't be an NBA player if you're 1.50 m (about 4'11"). No matter how fast or resilient you are, you'll just be outmuscled, and dunking will probably be almost physically impossible.
And the only example that has been given is that someone who is legally considered disabled cannot compete in the top 0.01% of professional sports. I had a bit of a search, and Jahmani Swanson [0] is 4'5", and would absolutely wreck pretty much every single non-NBA basketball player you would ever meet in your life (I am aware that his team is an exhibition team, not an NBA team). I'm sure there are hundreds of other examples of people who won't be able to compete at the top 0.01% of the activity in question, but are unquestionably "great".
We can argue semantics about how great is great, but you can pretty much guarantee that he's in the top 1% of basketball players worldwide. That's pretty great.
If you are saying he would be top 1% of the general population, sure, but I don't think you can define that as great. If you are saying top 1% of people who actually play basketball regularly, I think you are wrong... he isn't going to be better than anyone who plays at high-school level or better, and would be average at best on an adult rec-league team.
I would certainly not call someone who is only top 1% of the general population great... 1% of the world's population is 70 million people... there are an estimated 27 million or so software developers, so more than half of the top 1% of software developers in the world don't even write software. I would hardly call them great.
No, they do not. There isn't even a consensus on what "great" means.
You play basketball in a very static environment. The rules don't change from game to game, or team to team. The duration of a match is constant. How well the team performs is a well known metric.
People do not do engineering in a static environment. The needs asked of one change from team to team. The quality metrics vary from application to application. The people he/she relies on vary even when you stay on the same team. An engineer who is really useful at one company may be harmful to another.
I suspect you haven’t played much basketball! The same player on different teams can be much better or worse - some players fit in with almost any team and others need a team built around them.
(I don’t disagree with your larger point! Just that basketball is an example of a “very static environment“ relative to programming)
> You play basketball in a very static environment
I would strongly disagree - basketball is a highly dynamic environment, simply because you have an opponent. Everything you are trying to do, you have someone else trying to stop you. They are as adaptive as you are.
Because of this dynamism, many players who were great in previous eras couldn't survive in the modern era (and vice versa, really)... for example, most centers who played even 10 years ago would be abused in the modern game by opposing centers shooting threes. It simply wasn't part of the game 10 years ago, and now everyone does it. There was literally a player (Roy Hibbert) during the transition who was a star before and suddenly became unplayable as teams adapted to the new environment. He was out of the league in 2 years.
Software engineer is too many things. The person implementing a database engine in a huge tech organization and the person implementing non-scaling CRUD services as the only technical person in the company are both software engineers, but what each needs in order to be great is almost completely disjoint.
Basketball has an almost endless list of examples of superior talent not excelling compared to those with less talent (this can be measured objectively, physically: a player with clear physical advantages who never reaches full utilization). There are graveyards of these cases.
There are 6'6" bums in the NBA, top draft picks, who had less of a career than, say, the 5'3" Muggsy Bogues.
In Basketball, part of your greatness is how well you optimized for the cards dealt to you.
Tech is not as much of a meritocracy as sports; it's a way to make a living. I'd almost say the tech industry is not equipped to assess greatness. Since it's literally people's livelihoods, we won't (and shouldn't) try to analyze this. In sports, if you come out of a top school as a top draft pick, fans will eventually say "hey, you are a top draft pick but you do about the same thing as someone that went way later in the draft". We in tech won't ever go "look Facebook, your app looks like the same shit everyone else builds". It can't hold up to that scrutiny.
Open source is probably the only place where you can objectively assess.
It’s not a competition, because if it was, and tech was a sport, fans would tear it up as to what is a 10x engineer. Speaking as a sports fan, they are savage.
I think this thread is not defining ‘greatness’ well enough. If we’re really talking about greatness, there should be like 0.5% of HN that fit the bill, or something like that across the industry.
Reframe the question: who are our Bachs and Beethovens? Were they great because they created a lot of good work fast? Is good even enough? None of that shit defines greatness.
Our best developers came from open source. The stuff they made was undeniable. Is being super productive the same as making undeniably useful programs(languages, tools)?
Look at our greats and I think you’ll see they made a breed of work that sits outside workplace metrics of productivity, the same way a Hitchcock was genre defining.
- they describe their insights as (inter alia) "ecologically valid"
----
This is HN, so... I'd approach the title as "when is x10 possible?"
- insight into a far simpler or faster etc. implementation (e.g. better choice of modularity, factoring out hidden commonality, dynamic programming);
- insight about a better way to achieve the end result - might be no code at all
So I'm talking about insight. What personal qualities lead to insight? (note that there isn't always a better way to be found, so personal qualities don't guarantee it)
- be really really smart. Especially, large working memory, to be able to see connections. Or at least be able to load in (cram) the information temporarily.
- the ability to step back and notice the bigger picture. Executive function.
- ability to manage stress, so they have the breathing space to do that (the wisest engineers will choose a workplace that makes this easier)
Skill in proofs is a predictor: flexibility, handling complexity, managing the top-level purpose as well as the detail (forest-for-trees), noticing connections. Mathematical knowledge itself can help sometimes too.
But probably skill in any intellectual discipline is an equally good predictor, e.g. philosophy, history, law.
OK, so I admit I just started with the abstract, but the first item listed is "writing good code". Umm, I would hope that is kinda part of the definition? I was actually looking for some valuable insight (e.g. how are great software engineers able to consistently write good code) but I didn't see it in this report.
Even narrower, what expert Microsoft engineers think makes a great engineer. That seems extremely likely to shift the responses towards qualities important at large enterprise-focused companies.
I'm generally liked where I end up working, but tend to just do stuff on my own. I'm polite, but don't go out of my way to interact with others typically.
I do check the boxes on the code quality stuff however, especially writing code that plans for the future.
> “At the end of the day, to make changes [to software], it still takes a developer, a butt in a seat somewhere, to type [Source Control System] commit. — Dev Manager”
Not entirely relevant, but this opening comment just struck me as funny.
Plot twist: it's not the Adderall, but actually the ADHD. The engineers with an interest-based nervous system who have trouble switching off and hyperfocus on their latest interest staying up coding all night tend to learn a lot of shit faster than their peers because they simply spend more aggregate time doing it.
Source: ADHD engineer. I don't actually take Adderall. Tried Ritalin briefly but stopped as it doesn't affect me at all. Might try Adderall and report back.
Given that it is presumably part of a journal, I am not sure that HTML would have worked. The purpose was to publish in one of those journals; we are a secondary audience.
I'm surprised there is no reference to time estimation. An important part of their role is estimating how long a task will take to complete, and I've found many people, even engineers with a lot of experience, are terrible at this.
People in general are terrible at predicting the future... I don't think being clairvoyant is a quality that should be expected out of an engineer or anybody.
This whole time estimation thing is akin to predicting when the next hurricane or earthquake will occur. The main problem is that business people don't understand that, so they place this unrealistic burden on engineers.
A manager or business guy who needs constant, very accurate time predictions is a bad manager, overly reliant on engineers and lacking understanding of software. A good manager should have the technical knowledge to make a technical guesstimate himself (which will also likely be wrong) and the foresight to manage delays and allow for buffer time.
A great team of people creating a product consists of both great technical product managers and great software engineers. A rockstar software engineer alone may not have the ability to manage the politics of unrealistic expectations.
> This whole time estimation thing is akin to predicting when the next hurricane or earthquake will occur.
This myth needs to die. Can you predict whether an item will take closer to a month or a decade? If so, then software is far easier to predict than hurricanes or earthquakes. You might not make predictions as accurate as management wants all the time, but most people can predict how long things will take within a factor of 3x or so, and it will be within that margin most of the time; a person who could do that for hurricanes or earthquakes would be the greatest genius in history.
And yes as you get more skilled your predictions will become more accurate. Hence accurate predictions being a sign of skill.
Let me spell it out with an example: sports. Horse racing or basketball. You have a team of highly skilled players with a bunch of information quantified, including height, weight, scoring statistics, rebound statistics, biography... etc. And these guys are in a game with a very, very controlled set of rules, under exactly the same time pressure, and everyone still fails to predict the outcome.
In software you have a product. The product is usually not concretely defined, and you have a complex code base; you can never be 100% sure exactly how the new product will integrate with that code base... you're also not 100% sure how the code will be put together to define the product. Additionally, are you 100% familiar with the stack? Do you know every possible primitive of psql or ruby or python or C++ that you could be using to create your project? Because I pretty much guarantee you every basketball player more or less knows every possible move and rule of a basketball game.
You're also working with a team that includes people you have much less information on than normal. You've worked with a guy for, what, at most two years; does that give you accurate statistical information to the degree of, say, a basketball player's? Also, there are bound to be people you're less familiar with working on the project as well. Are you interacting with other teams? Does the outcome of your project hinge on the completion of a feature by an entire team outside of your own?
People can be experts on horse races or sports. Even then they can't predict things accurately. If you were to start making bets on software completion dates, you would also massively fail, because not only are there more variables in a software project, but you have much less information.
Chaos is a phenomenon that happens even in systems we have close to perfect information about. If we have perfect information on all the particles in a weather system except, say, one particle, the missing information about that one particle will make our mathematical calculation wildly inaccurate.
For software, we don't have anything close to perfect information, in a system with multitudes of variables. Chaos will throw any prediction off.
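You can see the underlying effect in one line of math. Here's a minimal sketch (my example, not anything specific to software) using the logistic map, a textbook chaotic system:

    # Two states that differ by one part in a billion, iterated under the
    # logistic map x -> r*x*(1-x) with r in the chaotic regime.
    r = 3.9
    x, y = 0.5, 0.5 + 1e-9

    for _ in range(60):
        x = r * x * (1 - x)
        y = r * y * (1 - y)

    # After 60 steps the nano-scale difference has grown to order one.
    print(f"x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.6f}")

If a one-in-a-billion gap in your knowledge of the initial state does that, a project plan with dozens of unknown variables has no hope of precise prediction.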
> but most people can predict how long things will take within a factor of 3x or so, and they will be within that margin most of the time. A person who could do that for hurricanes or earthquakes would be the greatest genius in history.
3x of what? 3x can be big or small depending on x. So if I predict a project will take one year, I can be off by 3 years under your logic. If I predict a month, then I can be off by 3 months. If I predict a week, 3 weeks. 3x is pretty horrible if you ask me; it's easy to make guesses within those parameters.
I predict that both a hurricane and an earthquake will happen within a century. I'll only be off by 3x, or 3 centuries. Actually, I can do better than that: I'm 100% sure that multiple earthquakes and multiple hurricanes will happen in the next century, and I'm 100% sure that I will be 0x off, let alone 3x... Look, I'm the greatest genius in history.
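For what it's worth, some of the disagreement here may be about what "within a factor of 3x" means. A throwaway sketch of the multiplicative reading (the helper name and numbers are mine, purely illustrative):

    # "Within a factor of k" read multiplicatively: the actual duration
    # lands anywhere between estimate/k and estimate*k.
    def within_factor(estimate_days: float, actual_days: float, k: float = 3.0) -> bool:
        return estimate_days / k <= actual_days <= estimate_days * k

    print(within_factor(30, 80))   # True: inside the 10-90 day band
    print(within_factor(30, 100))  # False: more than 3x over a 30-day estimate

Read this way, the band grows with the estimate, which is exactly the complaint above: a 3x margin on a one-year guess spans from months to years.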
Thanks for this, I'm going to use the sports analogy in the (near) future the next time I discuss this with anyone.
It's all politics. Estimation is a purely political game by EMs/TPMs/PMs to try and shirk responsibility for engineering outcomes.
Here's another neat tack to try in this conversation. Ask for individual examples of "good estimators," and ask for details about what makes them good. You'll rarely (if ever) hear that someone was so accurate that it materially impacted a project in a positive way. What would that even look like? "I was sooo accurate that the client success people were able to say that our new product would be ready 6 months ago, and now didn't have to send a follow-up email to update the timeline!!!"
The only answers I've gotten are that people who are good at estimating are the ones where nobody ever really has to look at their schedule, and there is no drama because they are always getting things done. That is highly contextual and rarely has to do with the person's individual estimating skill. It has to do with their project, team, EM, PM, experience relative to teammates, etc.
Recently I was given the example of a client-side developer who is far more experienced than the back-end guy building his API, so he's always just sitting around waiting for the API to be finished so he can build his features. He almost always finishes 3/4 of his tasks ahead of time, then waits for the API to be done and marks his ticket completed at that point. So he's not really estimating at all, and he's not accurate; it's just that the project is bottlenecked on back-end dev speed regardless, so nobody ever cares about his estimates.
It all just seems so wishy-washy and bullshit. It's just about pushing people to work hard and get more done ("you need to hit your estimates, think of them like commitments!"). Framing all of this as single-handedly the engineer's problem is a sign of someone who doesn't know how to operate a software development team effectively. Estimation is a team effort in scheduling, delegation, planning, and communication.
Why in the world (if not for politics) would anyone invent a concept that punishes someone who over-achieves and chooses to work harder for a short period of time? ("You should try to be more accurate with your estimates, even if you have to slow down your work, and overworking like this leads straight to burnout, be careful!") It's all just nonsense invented by MBAs who have no actual experience in anything except inane "policy" and "oversight."
Software project estimation is a real thing, but it has nothing to do with one person's ability to predict the future. It's an analytical, data-driven methodology far more than an individual, experiential skill.
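As one concrete example of that kind of methodology (my illustration, not necessarily the parent's): Monte Carlo simulation over historical task durations, which yields a distribution of completion times instead of a single promised date. All the numbers below are made up:

    import random

    history_days = [2, 3, 3, 5, 8, 1, 4, 13, 2, 6]  # past task durations
    remaining_tasks = 20
    trials = 10_000

    # Resample history to simulate many possible futures for the project.
    totals = sorted(
        sum(random.choice(history_days) for _ in range(remaining_tasks))
        for _ in range(trials)
    )
    # Report percentiles, not a point estimate.
    for p in (50, 85, 95):
        print(f"P{p}: {totals[int(trials * p / 100)]} days")

The team's answer becomes "85% chance of landing within N days," which is data about the system rather than a promise extracted from an individual.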
One place I worked said that you weren't allowed to make an estimate that was longer than three weeks. There are apparently studies showing that estimates longer than that tend to have much larger errors. So if it was going to be longer, we had to break it up into pieces until each piece was smaller than three weeks.
That could get tedious. On the other hand, we did do a lot better than normal at hitting our dates.
(I believe this shorter-than-three-weeks idea came from Extreme Programming, but I'm not quite certain of that.)
This is also why I like estimating tasks using the Fibonacci scale, without a direct correlation to time.
In my teams, we generally set 8 points as something that would take an entire day. Every number after that jumps up in relatively large increments, since bigger tasks are harder to estimate accurately.
Not the OP, but one of the teams I'm on uses Fibonacci somewhat differently.
Anything at 1-2 points is estimated at < 1 day. Some are literally 20-minute fixes, but... you don't always know that up front; you just generally know it may be pretty small. A 2 might sometimes go up to a day.
A 3 is assumed to be a day or two.
A 5 is assumed to be 3-4 days.
An 8 would be 1-2 weeks.
Anything higher is backlogged until it's broken down into smaller segments.
Not sure how well that compares to usages by other teams, but that's one data point for your question.
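Just to make that scale concrete, here's a throwaway sketch of the mapping described above (the names and cutoffs are mine, and every team's will differ):

    # Rough mapping from Fibonacci points to working days, per the
    # ranges above. Precision intentionally drops as size grows.
    POINTS_TO_DAYS = {
        1: (0, 1),   # could be a 20-minute fix, could be most of a day
        2: (0, 1),
        3: (1, 2),
        5: (3, 4),
        8: (5, 10),  # one to two working weeks
    }

    def triage(points: int) -> str:
        if points not in POINTS_TO_DAYS:
            return "backlog it until it's broken into smaller segments"
        lo, hi = POINTS_TO_DAYS[points]
        return f"expect roughly {lo}-{hi} days"

    print(triage(5))   # expect roughly 3-4 days
    print(triage(13))  # backlog it until it's broken into smaller segments

The useful property is the last branch: anything past 8 doesn't get a worse estimate, it gets refused until it's decomposed.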
The idea was that the three week pieces add up to the whole of the larger task. (Yeah, I know - only if you didn't miss anything. Take the time to think it through well enough that you don't do that. And what if you have things you don't know? Then you have to do a research project to find them out before you can give valid estimates.)
"And what if you have things you don't know?" I usually find stuff out in the middle of work - a question comes up I don't have an answer for, and many times, no one else does either. In effect, no one can estimate it, but we didn't even know that up front. And... I've often hit things where the time to give an 'accurate' estimate takes more time than the actual work effort. Is that common in your "limit everything to 3 weeks" world?
If the time to give a more accurate estimate takes more time than the actual work, you aren't dealing with an estimate longer than three weeks. If the estimate is less than a day, it's not worth getting more precise.
To your first point: Yes, that happens sometimes. When it does, your estimates can be wrong. (Hey, they're estimates - they're not prophecies.) If that happens very often, though, you might add a fudge factor for "that kind of thing". Maybe something like "unknown surprises crop up most of the time, and when they do, they take about 20% of the effort, so we'll make our best estimate, then add 20%". That won't be perfect either - sometimes it will be 40%, and sometimes 0. But, you know, estimates...
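As arithmetic, that fudge factor is just a multiplier on the base estimate. A trivial sketch with made-up numbers:

    # Pad the best estimate by the historical average cost of surprises.
    def add_fudge(base_estimate_days: float, surprise_rate: float = 0.20) -> float:
        return base_estimate_days * (1 + surprise_rate)

    print(add_fudge(10))        # 12.0: best estimate plus the usual 20%
    print(add_fudge(10, 0.40))  # 14.0: a stretch where surprises ran hot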
And then agile jumps through large hoops to hide that it's still asking people to estimate. Sure, it's not hours, it's "velocity" and "difficulty", and you don't estimate, you play "Fibonacci poker".
But at the end of the day, the question "can we do this in the allotted amount of time" still gets asked and answered.
What agile got right is realizing that the error bars increase superlinearly with duration, and that scope isn't fixed - so frequent estimates with frequent course correction. But you're still estimating.
First, allotting an amount of time to delivering value is an anti-pattern in itself.
Second, Agile doesn't ask people to estimate ("responding to change over following a plan"). Management asks people to estimate.
Jeff Patton says it best in User Story Mapping, the "client-vendor anti-pattern"
> It's the client's job to know what he wants, and explain the details to the vendor. It's the vendor's job to listen, understand, and then think through a technical approach for delivering what the client asked for. The vendor then gives her estimate - which in software lingo actually means "commitment"...
> The real tragedy is the client understands their problem better than she's able to predict what will solve it. But in the anti-pattern, conversations about problems and solutions are replaced by discussions and agreements about requirements. No one wins.
> Try showing up at your doctor's office and giving her your "requirements". Tell her the prescriptions you'd like written and the operations you'd like scheduled. If she's nice, she'll smile and say, "That's interesting, tell me where it hurts."
> In my head, I picture a continuum where on one side is the word waiter, and on the other is the word doctor. Try to make your working relationships more like doctor-patient and less like waiter-diner.
Pretty much all existing incarnations of agile have a planning meeting. And an entire edifice around managing longer-term planning. (Stories. Epics. Sagas.)
Sure, we can play no-true-Scotsman, but in practice agile asks you to estimate.
In the client-vendor context, you can sidestep that with a fixed price bid. Somewhat. If you're bad at estimating your fixed price, your business will burn.
In the employee context you can sidestep that somewhat as long as you consistently deliver more value than you cost, but even then, making choices requires having an idea of opportunity cost. If you can't give that idea at all, there are usually better uses of the money.
In almost all contexts, you are compensated for your time. Almost no one likes writing blank checks.
> Second, Agile doesn't ask people to estimate ("responding to change over following a plan"). Management asks people to estimate.
I think it would be more accurate to say that those who pay for your time will ask you to estimate the value you expect to deliver in said time. That seems like a fair question to me.
That may be true ipso facto, but as with all creative work, they should prefer to pay you for your work product.
And I could write a book on it here (several others have!), but I think your comment points at the heart of this entire problem, namely the disconnect between the work being done and "those who pay you."
If they were truly invested in the value-fulfillment cycle, they would never ask for an estimate; they would be clear about what hill needed to be taken next in service of the product or customer (i.e., "here's where it hurts, doctor") and then help you agree on the smallest possible experiment to take the hill.
> and then help you agree on the smallest possible experiment to take the hill.
In most corporate environments, that won't cut it, though... what management wants, and what product owners are pressured to deliver, are large wins and overall product milestones, not incremental updates (outside of bug fixes).
> the disconnect between the work being done and "those who pay you."
Yes, I think in many places there is a fundamental tension between "corporate thinking" and "agile thinking", and without real synchronization of methodologies and culture, any kind of "buy-in from management" will usually lead to dysfunction and overall disappointment.
Perhaps a corollary is that not enough great engineers exist to give good estimates, so we attempt to avoid giving estimates with much consequence attached.