Hacker News
Athletes and musicians pursue virtuosity in fundamental skills (andymatuschak.org)
159 points by JustinSkycak 14 days ago | 154 comments



The force of competition at work. There are vanishingly few desirable opportunities in music and sport, and many potential contenders who will be selected on their individual skills and performance vying for those spots. Often to put in the hours needed to be competitive you have to also love the game; in all fields including knowledge work I’d speculate the top performers are more likely to enjoy the activity for its own sake than the median performer.

Knowledge work like programming is just much less competitive. In a graduating class of 5000 computer science majors at a good university, I’d be surprised if the majority fail to “make it” to a 100k job and be able to support themselves. Once you secure a spot in the workforce it’s pretty easy to hang onto it as an average contributor without much objective measure or comparison against your peers.

Compare to sport, at the same university maybe there are 50 spots on a sports team, and 10 good teams at the school. What percent of kids who start out playing a sport at 10 years old get to have one of those spots, and what percent of those who make the college team go on to make 100k, support their family etc playing the sport?

That competition forces rigor - if I had to compete like that to get a software job, maybe I’d be “practicing scales” on the weekend too, not just when gearing up for interviews.


> Once you secure a spot in the workforce it’s pretty easy to hang onto it as an average contributor without much objective measure or comparison against your peers.

There's also the weird "success path" that goes from developer to manager. It's as though the end goal of learning to be a concert pianist was becoming a conductor, or perhaps a concert hall manager.

If your "success metric" is earnings, then "The force of competition at work" for knowledge work doesn't necessarily drive you to practice your Rust development "scales" every day, or to be the best Javascript dev in the team - it is probably a better use of your time to be "good enough" at your developer job, and hone your schmoozing and office politics skills to make the jump to better paid non development roles like "chief architect" of "VP in charge of {whatever}" or ultimately CTO or something.


> It's as though the end goal of learning to be a concert pianist was becoming a conductor, or perhaps a concert hall manager.

Is it not? Wouldn't be surprised either way.


They also tend to need objectively measured skills more than knowledge workers. We know what good batting looks like but we still can’t say what good code is in any reasonably objective way.


Sure we do.

* execution performance time

* build time

* regression frequency

* defect quantity

* code size

* dependency quantity

* test automation coverage

* test automation execution duration

The problem isn’t that we don’t know. The problem is that developers don’t measure things and simultaneously bitch about strong opinions.
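A minimal sketch of what acting on two of these measures might look like, assuming a hypothetical per-release log (all names and numbers here are made up for illustration):

```python
# Hypothetical per-release records: (version, defects_reported, regressions, kloc).
releases = [
    ("1.0", 12, 3, 40.0),
    ("1.1", 9, 0, 42.5),
    ("1.2", 15, 4, 47.0),
]

# Defect quantity, normalized to code size (defects per thousand lines).
for version, defects, regressions, kloc in releases:
    print(f"{version}: {defects / kloc:.2f} defects/kloc, {regressions} regressions")

# Regression frequency: share of releases that shipped at least one regression.
regression_rate = sum(1 for r in releases if r[2] > 0) / len(releases)
print(f"regression frequency: {regression_rate:.0%}")
```

Even a toy table like this supports trend comparison across releases, which is the commenter's point: the measures exist, they just have to be collected.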


At the same time, those criteria are very context-dependent. Thus, great developers are usually only identified as such because of how they perform relative to others working in a similar context--not because of the above criteria in any universal or objective sense.


Lynx is a lean, efficient, fast-to-compile, low-dependency browser - and has the fewest open defect reports of any major browser.

I'm not sure the metrics you propose capture what makes good software good.


Would you say that Lynx is not good software?


Absolutely: "good software" has to be useful for its intended purpose. Lynx may be well-written, but it's useless if you want to do most things on the modern web: it just isn't capable of doing them. If you want to use a web browser to, for instance, use your bank's web interface, a buggy browser that can do this is infinitely more useful than a non-buggy browser that simply doesn't have the technical capacity to do so.


Is the intended purpose of Lynx to access the "modern web"?


+1, it worked well when I needed it, e.g. reading manuals while recovering a system that lacked a GUI

Ok, but then what game is being played according to these metrics? There is no game of "code" like there is a game of "football" or "golf".

Perhaps these metrics matter in some small fields but overall most places that need software couldn't care less about most of these, as long as the business objectives are reached. And the business "winning" relies on more than just the code running its software.


Some measures are better than no measures. Crying about what measures to use when you currently have none is just crying. That crying is the biggest distinction between software and all other examples these comments mention.


When used to evaluate employee performance (the context here), no objective measures are better than bad measures.

Measuring the wrong metrics is as good as not understanding the sport. It's best if the person advocating the wrong metrics steps down.

Why? What are the worst consequences of wrong metrics? The problem there is that it's a paradox. Seriously, consider what you are saying. You are saying measurements should not be conducted, because if somebody believes they are bad measures some mysterious unknown consequence will come to pass. The fallacy there is that the same consequences will come to pass irrespective of any measures conducted, because measurements alone do not dictate courses of action. However, if you don't conduct measurements you won't have any idea either way, which makes the very concern of bad measures entirely moot.

Bad measurements are only bad when they disqualify better measurements and better measurements only occur when some measure more directly proves/disproves the desired result.


Yet weighing all these factors is decidedly a subjective call. Unless it's something stupid like half an hour for an app that's considerably simpler than a web browser or OS, I would fire any manager who prioritized build time or test execution time over live performance.

Also, "dependency quality" just recursively moves the entire quality criteria to the next level.


These are good attributes for code to have, but I would strenuously disagree that they are what makes code good. Harder to measure, but I would say more important, are softer qualities like being clear to read, safe to modify, easy to learn, and above all, solving a valuable problem.


Sure, those are important too. Now that you have some application written that solves a valuable problem then how do you assess quality/value objectively? The keyword is objectivity. You have to measure something and compare those measures against something else.

This is why non-developers believe developers are generally hyper-autistic. For most developers everything must be about clear, easy, simple, safe. These are all super subjective self-serving opinions that don't do anything for the product or the labor that builds that product. Product owners will scream about this, developers will pretend to hear it, will immediately discard it, and then repeat the same insanity where they find comfort.

Step back, take a deep breath, understand that it's not about you or what you want, and finally discard the self-serving circular insanity and only focus on measuring your time to complete a task, time for the application to do things, and frequency of user engagement.


I get your point, you are being pragmatic, but the reason I sometimes behave exactly in the way you just criticized is those very same product owners.

Example: I have a talk with the PO where we agree on certain features and certain things that do NOT have to work. Oftentimes, when I make decisions during development that rest on the assumption that these certain things don't have to work, I later get told to incorporate them anyway. So I have lost a lot of trust in POs, or anyone who is not a developer, telling me how a piece of software is supposed to function.

Example 2: I am currently dealing in my department with a case where a Product Manager talked to a Product Owner and they contractually agreed with a customer to deliver one of our internal development tools, which is absolutely not ready for production and was never meant for any customer.

Yes I will be "hyper-autistic" about my code because sometimes I do not have a choice.


Goodness is fundamentally a subjective attribute. Trying to measure it objectively is attempting alchemy. You cannot make the subjective out of the objective - you will always have smuggled subjectivity in somehow.

Suppose I say that good code should have the lowest number of curly braces, or the lowest number of subroutines, or the flattest or deepest object hierarchy. A measure being objective doesn't make it good. So why is test coverage good?

In fact, every single one of the metrics proposed in the parent post is something I have seen gamed to the point of maladaptiveness.

* execution performance time - So frequently overly focused on that you can find google hits for "Premature optimization is the root of all evil." I have seen developers spend hours or days saving (sometimes imagined!) run time, where all the time they saved over the life of the product wouldn't add up to the time spent. Extreme pursuit of performance leads to code that is hard to work on - who cares about those milliseconds when critical new features take months to write, or are even impossible?

* build time - It is very easy to diminish build time by destroying critical architecture. I have done it myself. :)

* regression frequency - Insisting on only making very safe changes is how you wind up spending six weeks lobbying change control boards for one line of code.

* defect quantity - In environments that actually track this, people merge issues into a single "fix what's wrong" ticket, degrading the utility of the ticket system. Defect granularity is not actually obvious!

* code size - Obfuscated X contest entries are often very compact, and people who obsess on saving lines and characters can wind up leaning towards that style.

* dependency quantity - Leads to attempts to homebrew encryption.

* test automation coverage - Automated testing ossifies design and architecture, which can paralyze, e.g., an experimental prototype. Full coverage is also costly - time and energy spent maintaining pointless tests can come at the expense of mitigating more realistic risks. I realize I depart from prevailing wisdom in this, but there are times and places when automated testing is simply inappropriate.

* test automation execution duration - Sometimes the right way to write a test is slow.

I'm not disagreeing that these are generally good things to strive for. They are! I'm saying that if you think these things define goodness, each one can lead you to a cursed place. (I hasten to add that there are times when a metric really does define goodness - sometimes you need speed or reliability or whatever, at any cost. Recognizing that circumstance and its limits - "any cost" does not generally mean any cost - is subjective.) Goodness is subjective, and while objective measures can help you assess it, such measures cannot define it - when and how you use which measures, and when you think they've gone off the rails, is itself a judgement call.
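The test-coverage point above is easy to demonstrate. Here is a toy sketch (not any real coverage tool, just a line tracer built on `sys.settrace`): a "test" that calls the code but asserts nothing achieves full line coverage while letting an obvious bug through.

```python
import sys

def buggy_abs(x):
    if x < 0:
        return x  # bug: negatives are returned unchanged
    return x

executed = set()

def tracer(frame, event, arg):
    # Record each line executed inside buggy_abs (a toy line-coverage tracker).
    if event == "line" and frame.f_code.co_name == "buggy_abs":
        executed.add(frame.f_lineno)
    return tracer

# A "test" that exercises both branches but asserts nothing.
sys.settrace(tracer)
buggy_abs(-5)
buggy_abs(5)
sys.settrace(None)

print(f"executable lines of buggy_abs hit: {len(executed)}")  # all 3 lines
print(f"buggy_abs(-5) = {buggy_abs(-5)}")  # the defect survives "full coverage"
```

The metric reads 100% while the function is wrong for every negative input, which is exactly the kind of cursed place a coverage target can lead to.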

I once inherited a system that was both essential for business operations and a thorn in everyone's side. The guy I inherited it from (and the guy he inherited it from) had taken over a year to learn how to use it. I set about reorganizing, rewriting, documenting, abstracting - all those soft changes in pursuit of clarity and obviousness. They aren't objective, but they do pay off: when I handed the system off to the next guy (and three more after him!), he was off and building on it in a day. That was how I knew I had succeeded! When doing that sort of thing, you do always wonder if what you're writing is clearer for everybody or just you. But surely even the most hardheaded bean counter can see the value of training developers in a day rather than a year. That's good. :)

Goodness is contextual and subjective. I can agree that your goals are generally right, and I can point to circumstances where they're overemphasized or even outright wrong. Sometimes, when the sky is falling, a nasty little bash script that meets none of the usual criteria for "quality" is the best possible thing.

There are people who use subjectivity as a haven for vanity, and build mountains of pointless code in pursuit of some idea of goodness that serves no practical purpose or is even harmful. It is important that we retain our own ability to criticize on subjective grounds, precisely to counter that sort of activity - because you will find it in the land of the objective advocates as well, building mountains of metrics that don't serve any practical purpose either. To recognize a bad abstraction and a bad metric is the same skill, and requires the same confidence in your own good judgement.

Objectivity is no refuge from the necessity of good taste.


> A measure being objective doesn't make it good

That completely misses the point. It’s not about what’s good. It’s not even about what’s better. It’s only about how much better, the distance between theirs and ours. Better is objective only when like aspects of competing items are compared within accepted bounds of precision using evidence.

The interesting thing about measuring stuff isn’t that people are otherwise entirely wrong in their assumptions more than 80% of the time, but that they are typically wrong by one or more orders of magnitude.


>> A measure being objective doesn't make it good

>That completely misses the point. It’s not about what’s good

The loss of context here makes me wonder if I am talking to an AI. The comment you originally replied to was,

    We know what good batting looks like but we still can’t say what good code is in any reasonably objective way.
You replied with, "Sure we do" and proposed a list of metrics.

Are we tracking? The original comment claimed that we do not know how to objectively measure goodness in code. You are (apparently) claiming to know how to do it. I am claiming it is impossible even in principle.

In this context, I find your response ("It’s not about what’s good", and the claim that "better" is easier to measure) bizarre and nonsensical. Like an AI, you seem to have lost track of what we are talking about. We are talking about whether we can objectively measure code being good. It is exactly "about what's good".

Of course it is possible to measure things about code, but equating those measures to goodness relies on artificially constrained circumstances -- like a code golf contest or a PO declaring test coverage a metric to maximize. Athletes find themselves in such constrained circumstances all the time because it is a pursuit dominated by competition and games! It is most obvious in sports like sprinting or powerlifting, which are analogous to something like code golf, but even sports like basketball in which "goodness" is harder to define are heavily artificially constrained such that goodness in a player is about maximizing an objective measure - team score. This might be analogous to a programmer who sees his mission in terms of ticket closed per week. By contrast, programmers in general are usually working in a context in which the quality of what they produce is measured by a lot of complex impacts - on users, on business, on other programmers.

Some of what code needs to accomplish - conceptualizing a problem well, communicating clearly - is inherently subjective, having to do with how it is received by another mind. Programmers (myself included) are generally focusing on these sorts of characteristics when talking about code in isolation, partly because we feel the impacts to ourselves most keenly. But I would contend that there is something deeper and less obvious here - that maximizing this subjective goodness profoundly improves the situation in more objective areas. Well architected code, well communicated code, clear code is resistant to defects in a way that mere test coverage can't accomplish. This is not obvious, but it is deep wisdom arising from experience, and is part of what drives programmers to emphasize the ineffable in their understanding of goodness.

In fact, I was originally going to draw a parallel between being a good basketball player, maximizing team score, and a programmer maximizing business revenue. But I stopped myself, because this is an element of the deep wisdom of good code: focusing on ineffable, subjective excellence is profoundly positive for revenue. It's not something you can measure directly so much as it defines the circumstances the business finds itself in. This observation isn't unique to me - here's a pg essay making a similar point: https://paulgraham.com/avg.html Programmers talking about code being good in this sense may only be building sandcastles in the air - but they also may not be. You must possess the wisdom yourself to tell the difference. But there is certainly more to it than selfish vanity.

But even leaving that aside, because code must meet a broad array of conflicting demands, optimizing among those depends both on the circumstance and the values held by the people in it. Hence, goodness in code will always have a subjective element, and (in the athletic sphere) is most like saying you are "healthy and fit" or "your best self". You can certainly bring measurements to bear, and we are certainly talking about something real, but there is an inescapable subjective dependence on the value judgement of the judge.

This actually touches on a broader philosophical debate: Are value judgements mere meaningless personal preferences, or are they (often imperfect) attempts to articulate something real? In code, and in life, I believe the subjective is pursuing the real, and moreover that anyone who thinks it's worth arguing about intuitively agrees with this assessment. By contrast, the view that only the objective is real, popular as it has been for the last couple of centuries, and attractive as its promises are, has been increasingly producing absurd results.


You are really wanting this to only be about code quality, whatever that means, and not product quality, which is something that can be measured from outside the organization by people with no understanding of your skills. I can repeat all day that it’s not about what you or other developers want but I somehow suspect you will circle back to code quality and goodness because those are important to you. That is why I claimed the prior comment misses the point. The complete inability for developers to accept, on any level, that it’s not about what they want, I believe, is why non developers stereotype developers as autistic. They aren’t wrong.

There are several things in this comment I find odd.

The comment about developers being autistic is particularly funny because I personally am autistic, and my wife (who you are responding to) is very much not. It was a major source of tension for us for many years.

Likewise, the emphasis on people "outside the organization" and the suggestion that she is somehow deficient on that front is laughable -- she's extremely highly regarded by customers/clients, and has been for decades, specifically for her ability to understand and solve their problems by getting them a great product whether or not they have any idea what her actual coding ability is.

And then the comment about "really wanting this to only be about code quality" is strange, since the subthread going back to kasey_junk's comment is about how "we still can’t say what good code is". Your initial response is entirely about measurable attributes of code (like code size and build time); Dove introduces "softer qualities" including "solving a valuable problem" (which is more about "quality product" than any of your metrics), and then you go back to a different set of metrics. She responded with an extended comment that specifically noted the importance of "complex impacts - on users, on business, on other programmers". Her comments have consistently been more about how good code impacts the functional product, while yours have been about measuring things about the code, but then your complaint is that she's too focused on the code and missing the point. Then you make comments about her own state of mind: what she is "wanting" to do and what you "suspect" she will do and her "complete inability" to accept certain things.

This very much feels like you just have a point you want to spike about measuring things (FWIW she does make it a point to measure things and to train her subordinates on making sure they're measuring things), and her comments (and my other one) are more excuses for you to repeat your point than actual ideas you're trying to interact with. Like you're not actually interested in engaging with the core idea that "Some of what code needs to accomplish - conceptualizing a problem well, communicating clearly - is inherently subjective, having to do with how it is received by another mind." It's just another opportunity for you to say that your set of objective measures are the only thing that matter, which, as I noted elsewhere, is itself a subjective position about which things to value.


> "the view that only the objective is real ... has been increasingly producing absurd results."

As you rightly noted a couple of comments back, what this view does is it smuggles in subjective assumptions. That is, someone operating under this view is going to objectively measure something (like kloc or number of tickets closed or execution time on a test data set) but the selection of what to measure, and the selection of how to value each individual measure of an objective quantity in order to determine overall "goodness", is subjective. The step where they assign meaning to a measurement is a subjective step.

It's interesting to watch the development of "objective" measures in basketball and the dialogue around how to determine if a player is the best, most valuable, etc. over time. Decades ago, the only stats we had were "counting stats" -- points, rebounds, and assists. Steals and blocks came a bit later. There is a correlation between putting up big counting stats and winning games, but it's not as strong as you might naively suppose. Once more sophisticated metrics were developed, something that "subjective" observers had always noticed ("losing player with good stats" is something that was often said about specific players) started to be quantified: some players put up big stats because they're doing inefficient things that result in individual stats at the expense of the team, like taking a high volume of shots even if they're lower percentage shots than a teammate could get on that play, or not contesting an opponent's shot but trying to chase the rebound instead (leading to more opponent scoring but also more personal rebounds over the course of a large number of shots.) In the modern era, advanced stats like PER, VoRP, WS, and BPM are basically more sophisticated models built on top of counting stats that try to scale them and weight them according to regression models. These stats are better, but they still don't capture everything, they only capture things that can be inferred from counting stats. They don't capture things like -- Steph Curry has such strong "shooting gravity" that his teammates often have extra space to shoot because multiple defenders are trying to make it hard for him to get a good look, or Rudy Gobert being on the court changes an opposing team's play selection because he's such a good shot blocker that plays that would lead to a bucket against a different player are leading to him getting a block so teams avoid those plays. 
Someone insistent on "objective measures" won't even consider these as things to potentially care about unless they have a way to measure them (which, now that we have sophisticated player-movement-tracking, we can actually measure things like how close the nearest defender is on shots by a Curry teammate when he's on court vs off court, or what percentage of opponents' shots are taken in a specific part of the court when Gobert is defending them vs when he isn't. So those measurements are coming online over time.) And, of course, understanding that it means something that Curry's teammates have extra space to shoot, or that Gobert's opponents might not be running their strongest offensive plays because of his shot blocking, puts us in the realm of meaning rather than mere measurement. Knowing to make the value judgment of "it matters how this player is impacting the game in ways we don't have a good numerical measurement of, but that a sophisticated observer who values those things can watch for and give subjective consideration of" puts us in the world of meaning rather than mere measurement.

That seems to be the same issue underlying this discussion. Knowing that conceptualizing a problem well matters -- and that it will profoundly impact the end result in objective areas even though it's not directly measured -- is wisdom.


Ohhhhhh shit, that has to be the longest paragraph in all of HN. Have you seen the movie Moneyball?

Just start with the premise that bias isn't helpful, and even less helpful when implicit. Let that determine what to measure and you will not be harmed. If such decisions twist you into knots then you are not the person qualified to make such decisions.


> "bias isn’t helpful ... Let that determine what to measure"

Determining what counts as "bias" is itself a subjective activity. People who grow up in different cultures have different baselines for what factors matter the most and what factors they consider to be overvalued, undervalued, inappropriately accounted for, and so on. Not just different countries, but different subcultures within the same country (like, my cousins from the farm see a lot of things in society as biased toward big cities, which I never considered because I've lived in big cities for essentially my entire life.)

"start with the premise that bias isn't helpful" is, by the way, also not a measurable goal, which IMO supports what I'm saying. Knowing how to conceptualize a problem well (of which "eliminate bias" is a small subset) isn't something you can objectively measure, but it's something that will impact your objective measures down the line.


You didn't even include "solve the customer's problem" or "has features people care about" or portability. Electron apps are not taking over the world because "execution performance time" is the one true metric by which all code should be judged.

The first two are product quality, not code quality.

Yes, but presumably we all work at businesses that are trying to make money. Code quality and product quality are inextricably linked in the business world.

This is not true in practice because the consequences of bad code show up months and years later and in a way that makes it impossible to attribute to any one business decision.

This is totally fair and I agree with that. What I was trying to say and didn't express particularly eloquently is that you need to consider both "abstract" measures of code quality (performance, test coverage, complexity, rate of regressions/defects, etc.) and specific product metrics. You can deliver "high quality code" that checks off all the abstract metrics, but if it doesn't actually solve your business problem then you've basically just succumbed to Goodhart's Law.

Agree. Except, to be very pedantic, with this?

> then you've basically just succumbed to Goodhart's Law

We're talking about adding more metrics that should be considered, and treating or not treating them as the ultimate goal is orthogonal to choosing the set of metrics! Product metrics don't save you from goodharting.


Your metric is a laundry list. (Somewhat unavoidably, at this stage in the development of software development.)


> * execution performance time
>
> * build time
>
> * regression frequency
>
> * defect quantity
>
> * code size
>
> * dependency quantity
>
> * test automation coverage
>
> * test automation execution duration

Why would these metrics be measured against human engineers when many of these issues should instead be used to inform how we improve our systems and tooling to help devs?


Because in many cases human engineers will directly fight the mere suggestion of measuring anything in order to achieve self-satisfying desires like easy, clean, clear, safe, and so on. It's not about what the developers want. It's only about the product. Human engineers that put themselves before the product are an ethical liability. Ethical failures are more the norm than the contrary in software, and one way to fix that is to hold people (entire teams) liable for failures to perform above certain established baselines.

For me it's all about lowering regressions and internal execution time at all steps as much as possible, because I would rather do something else with my time than repeat myself on the same slow failures over and over. Getting my time back for personal use is a win for me and it allows the employer to ship a faster and more durable product.


You missed the most important ones: how much business benefit does it provide, and how overall expensive is it?


It blows my mind you didn’t include readability and documentation (or, in one word, grokability)

Since code is primarily for communication between humans, I suppose we should look to the humanities and ask them what good writing is, but therein lies a parallel exercise to the one suggested in the article.


I say this often. Code is for people, and the qualities of great code parallel those of great writing: clarity, efficiency, and a sort of profound obviousness and inevitability.

The job of software architects is the role Heidegger ascribes to man when he calls us "shepherds of Being". To understand the world as it is, and find the right abstractions to describe it. To constantly evolve those abstractions toward better ones, clearer ones, to seek out ways to represent things that make solutions seem obvious and inevitable.

A mathematics professor once said to me that as time went by, the definitions in mathematics became more complex while the theorems became simpler. "Why is that?", I asked, and he replied, "It's progress!" This is the sort of progress that the software architect seeks as well: to set up the problem so excellently that the solutions are smooth and the process is enlightening.

Great writing, and great code, are first and foremost about great ideas. About brilliant ideas that change how you view the world for the better.


No. It needs to be treated as engineering if we want any stability and maturity in this field. Humanities are too subjective.


Well-written code in one language looks nothing like good code in a different language family. They won't even share a similar syntax, grammar or idioms.

The only real lesson to be learned is "write for your audience" in my humble opinion.


A good analogy for how good prose looks different in different languages, to different cultures or professions, etc.

Code is primarily for execution. It is very important that it also communicates to people, for the sake of the business that the code supports, but the main thing is the execution.


Not intended to be pedantic or contrarian, but I think this falls apart when the code is making sure a signal is sent somewhere (aka communication). If I write a module that allows IPC am I focusing on execution or communication? I understand one can say they were “executing the communication” but at that point we ought to remove execute from the phrase if it has to preface anything you do.

To fully disprove my point, at what point do we never execute something with code so the distinction is even worth mentioning?


OP talks about asking humans what good code looks like, and that's the scope of my response. Code as a communication tool between developers is not its primary reason to exist.


Then why do we keep code after it has been compiled to a functional executable? Why do we (those of us working on teams) have rules limiting how code is written, or even in what language? Why do we study abstraction strategies like OOP and FP? Codifying and storing a solution isn't essential to the solution itself; it satisfies an external pressure. My conjecture is that this pressure is the authors' need to transmit solutions across time (e.g. to oneself when it stops running for whatever reason) and space (e.g. to collaborators), so the solution can be read and understood, revised and extended, the same way we transmit other ideas in natural-language prose or other symbolic systems. If we want to treat this as outside the humanities, we need to abandon high-level languages like bash and C and return to manipulating registers and memory directly.

Because we need to change it on a regular basis. We usually only ship binaries - literally what you described.

Doing some rough numbers:

There are ~2400 NFL players in the US, which means about 1 in 140k people.

If you figure that 10% of people care enough about football to make an attempt, then even someone in the 99th percentile of that group (someone many people would call undeniably good at the game) still has to survive far stricter cuts to actually make a go of it as a career.
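The back-of-envelope arithmetic sketches out like this (the ~336M US population is my assumption; the ~2,400-player roster figure is from the comment above):

```python
us_population = 336_000_000  # assumed rough 2023 figure, not from the comment
nfl_roster = 2_400           # active NFL players, per the comment

# Per-capita odds of holding an NFL roster spot:
per_capita = us_population // nfl_roster
print(f"about 1 in {per_capita:,} people")

# If 10% of the population makes some attempt, a roster spot still
# means being in roughly the top 0.007% of that attempt pool:
attempt_pool = us_population * 0.10
print(f"top {nfl_roster / attempt_pool:.3%} of attempters")
```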


You’re off by an order of magnitude.

There are ~165 million males in the US, a figure that includes many living ex-professional football players and some kids who will eventually play. I doubt 10% of boys try out for high school or college football, but let's say 10% make even some vague attempt.

Many people play professionally for a very short period, but arbitrarily suppose 1/7th of men are in the age range to play professionally. 165,000,000 / 2400 / 7 / 10 … So something like 1 out of 1,000 guys who make even a vague attempt end up in the NFL.

More realistically I doubt even 1% of boys ever approached football with serious intentions. Still horrible odds, except there’s a lot more athletic scholarships than openings in the NFL.


No, your math is cockeyed and the parent is much closer.

"Roughly speaking, there were 1,083,308 high school football players competing, and eventually 251 made it to pro. After simple calculation, we can get that the percentage of student-athletes going pro is approximately 0.023%."

https://u.osu.edu/groupbetaengr2367/junran-add-things-here-f...


High school lasts 4 years. Some people play for all 4 years but if the average is 2 years you’re looking at 0.5 million people playing for the first time each year.

Most players aren’t drafted. The average NFL career lasts 3.3 years. 2400 / 3.3 = 727 new players per year.

500,000 / 727 would be 1 in 687 or 0.15%.


That 1M is a 4 year rate and the 250 is a 1 year rate. Additionally you don’t have to be drafted to play in the NFL. So it’s about 5x that.
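A quick sketch of that correction, using figures from the comments above (the 2-year average high-school career and the career-length adjustment for undrafted entrants are those commenters' assumptions):

```python
hs_playing_now = 1_083_308   # HS players in a given season (linked figure)
avg_hs_career = 2            # years; assumption from a comment above
new_hs_per_year = hs_playing_now / avg_hs_career

nfl_roster = 2_400
avg_nfl_career = 3.3         # years; turnover implies ~727 entrants/year,
new_nfl_per_year = nfl_roster / avg_nfl_career  # drafted or not

pct = new_nfl_per_year / new_hs_per_year * 100
print(f"{pct:.2f}% of new HS players eventually reach the NFL")
# roughly 0.13%, i.e. about 5x the naive 0.023% figure
```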


That’s selection bias, not competitive pressure raising the top level. It’s not that the scarcity of professional positions for artists raises the bar. It’s that you are only looking at the top of the competent musicians, who are actually far more numerous but keep music as a hobby because there is no job there. Take a similar sample of the "best" at anything and you will find similar traits.

> in all fields including knowledge work I’d speculate the top performers are more likely to enjoy the activity for its own sake than the median performer

In soccer, a sport I am very familiar with on the playing side at a reasonably high level, this is not true. The median professional enjoys training and playing far less than the median spectator and median player of lesser quality would expect.

The common refrain, "If I had that opportunity, I would train like crazy," "I cannot understand how people can make that kind of money and live the dream of being a professional player and yet..." clashes with the reality of young people, like most top soccer players, who want to party, not train as much, and live a life of equal wealth but fewer obligations.


Does a single university with a good reputation graduate 5,000 new CS majors per year? Or is that number just for the sake of example?


Totally random number, based vaguely on there being 2000 enrollments in UC Berkeley's CS61A per semester. Not nearly that many graduate in CS or EECS, but I couldn’t easily find a number.

UIUC awarded 500-ish CS degrees in 2023, so 10x less!


>> In a graduating class of 5000 computer science majors at a good university

Jesus Christ, and people wonder why there is a job crisis underway. I had to attend my nephew's university graduation this summer and was shocked to see some 500 computer science majors, thinking: where will all these people find a job? And you're talking about 5000 as a "sure thing".

Well, it ain't no sure thing anymore, and graduate inflation surely doesn't make things better.


It’s a totally made up number


If you want to look at something specific like software engineering, it is not even a little bit similar to being an athlete or musician.

Athletes and musicians are performers. They are repeating a set sequence of movements over and over. They are reacting to the same situation with minor variations over and over.

If your knowledge work is in any way similar to that, then it should have already been automated. Probably by you, if not someone else before.

And there is no live requirement for doing programming while someone watches in a particular time frame. In fact, it's better to take your time. That will allow you to solve more difficult problems more robustly.

I would almost say that programming is just about the opposite of something performative like a sport or playing music.

You can get better at reading and solving problems by practicing that. But I don't see how toy exercises are usually important at all for professional programmers. Much less something like reading for the sake of practice.


This is the fundamental distinction yes.

Performance is about doing something exactly right the first time on the night. Concert musicians, sportspeople, firefighters, surgeons, airline pilots, and military personnel all operate to a greater extent under this kind of constraint. When you are called upon to perform, you need to get it right. So you train and practice and drill to make sure you have all the basics down, and you rehearse and prepare for the specific performance you expect to do next.

Knowledge work is specifically work that is not like that. It’s work that will involve evaluating information and making decisions and incorporating novel insights and it doesn’t have to be right first time - there’s room for iteration and experimentation and bouncing ideas around.

Now, there are parts of some of those performance oriented jobs that require improvisation and creativity and evaluation on the fly - and there are parts of more knowledge-work jobs that require well drilled fundamentals (think about incident response in software operations). So the reality is these jobs all fall on a spectrum between structured performance and freewheeling discovery.

But asking ‘why don’t we rehearse how to do knowledge work?’ is nonsensical; knowledge work is precisely the work that involves incorporating and applying knowledge, and the only way to learn how to do it is by doing it.


Programming is a fractal of tasks. There's big stuff like how you architect a program, then down to how you write functions, and then down even further into grit below that.

And one of the joys of programing is that at each level, there's not one right answer. But even with there being different things you can optimize for, there's also a ton of poor choices that could be made as well.

Practice lets you focus on one aspect at one level, and improves your ability there. If you were to practice writing a function focused on correctness, another time on readability, and lastly writing the function based on performance optimization, you would almost certainly be able to write a better function later. You've expanded your tools, you've learned new techniques, and you've consciously evaluated your work from different perspectives.


Wow, this comment sounds like it was written by someone who has never interacted with "musicians", or who thinks a musician is a person who goes on stage and plays the same song over and over, and who further thinks that engineering or being an "athlete" or "musician" is a very narrowly scoped job.

Musicians, like any other "professionals", have a broad range of functions, from arrangers and composers to songwriters, performing musicians, session musicians, touring musicians and so on.

A jazz musician (who might be performing live on stage) will likely not play the same song the same way twice. Is a jazz musician then not a musician because they aren't "repeating a set of movements over and over?"

If anything, writing a CRUD-type application IS something that could be automated, because the patterns and the goals are largely the same, but taking this example and applying it to what athletes do is pretty misguided.

Most athletes are dynamically reacting to their environment or situation, taking into account the newest data and formulating a plan "on the fly" to meet their goals (scoring a goal, landing punches, etc.).

My own credentials include multiple engineering degrees and experience designing equipment for "musicians" and "athletes", so I don't think I'm talking out my ass here.


Another key difference is that music and athletics are physical activities where reaction times have to be faster than the time required for conscious thought. If your hands aren't already moving to do the right thing before you have time to think about it, you'll never make it as a top-level musician or athlete. There's nothing really like this in programming.

I think knowledge work perhaps is more similar to being a composer rather than a musician.

Musicians performing existing pieces add touches of nuance but are penalized if deviating too much from the original.

A composer on the other hand should be adding significantly new components to the existing body of works and is measured on how much they are deviating from the existing body of works.

A sports analogy (although a bit weaker) would be Olympic/NBA/NFL/MLB/FIFA sports which have known rules and limits versus X-game/RedBull type sports which are pushing boundaries of non-existent rules.

Tech activities more similar to athletes/musicians that benefit from repetition would probably be like timed competitive leetcode or competitive Excel tournaments[1].

[1] Financial Modeling Worldcup https://fmworldcup.com/excel-esports/


Healthcare is at least somewhat similar to sports and music, in the sense of performing complex tasks under tight time constraints. Certain aspects of healthcare have been automated but it's still mostly humans performing hands-on procedures.


> Athletes and musicians are performers. They are repeating a set sequence of movements over and over. They are reacting to the same situation with minor variations over and over.

This is peak hacker news.


My grandparent was a big organist, he practiced the same movements over and over on new organs before big performances. How do you think musicians learn to play every note close to perfectly?

If you mean a musician working with a DAW, then yeah, it is much closer, but such musicians work much more like knowledge workers and don't need to focus on the fundamentals that much, just like programmers, at least not more than the undergrad-level equivalent that programmers get.


Not all musicians consider "close to perfect" a virtue. I suspect that outside of western classical and pop music it's quite rare. Certainly the music I listen to isn't anything like that.

Musicians who play in an orchestra may practice a difficult sequence a lot, but they do not get to play the whole concert again and again. They also need to be able to react to other sections, because otherwise the whole orchestra desynchronizes. Sometimes they unexpectedly play something different.

Big groups of musicians are massive chaos.


Extra points if they can compare to bridge engineering too.


This is something I am currently thinking about. I am a software engineer who also happens to be an amateur musician. I used to do at least 2h of exercises on my instrument a day for a year, and then not much less than that for many years after. I allocated a lot of time to fundamentals and standard songs I did not want to lose, and even today, more than a decade after my peak and most active time, I have a feel for where I am skill-wise when it comes to those things I practised.

But for software engineering? This seems a lot harder to me. What seems to make most sense to me currently is really high-level stuff like "build up a local dev environment from scratch", "implement a minimal change that is visible in the frontend but results in a change to the data storage in the backend" and "write an integration test". Those seem to touch on many areas of skill and should be "trainable" in some sense, making them a good target for deliberate practice.

Thoughts or experiences anyone? :)


While learning to write compilers, I would memorize small but critical programs, like converting a char range such as [a-zA-Z_] from string format into a table, or reporting an error if the range was invalid. At my peak, I could implement the function that did this, about 60 lines of Lua, in about 3 minutes.

I haven't done exercises like that recently, but I found it helpful at the time.
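For anyone curious what such an exercise looks like: here is a rough Python sketch of the kind of function described (the original was in Lua, and details like the table representation and error behavior are my guesses, not the commenter's code):

```python
def char_range_table(spec: str) -> list[bool]:
    """Parse a char-range spec like "a-zA-Z_" into a 256-entry
    membership table; raise ValueError on an invalid range."""
    table = [False] * 256
    i = 0
    while i < len(spec):
        lo = spec[i]
        # A '-' with characters on both sides denotes a range.
        if i + 2 < len(spec) and spec[i + 1] == "-":
            hi = spec[i + 2]
            if ord(lo) > ord(hi):
                raise ValueError(f"invalid range {lo}-{hi}")
            for c in range(ord(lo), ord(hi) + 1):
                table[c] = True
            i += 3
        else:
            # Lone character (including a leading or trailing '-').
            table[ord(lo)] = True
            i += 1
    return table
```

Memorizing something this small is plausible precisely because the control flow is a single scan with one branch.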


20-30 years ago that was a role on a competitive programming team: the fast typist with knowledge of data structures, whose job was exactly that, very fast and bug-free writing of them during competition.


I came to believe recently that memorization is a way-too-underrated skill. Most programmers, myself included, ask why you should memorize something if you can look it up, but... I'm not really sure anymore.

Perhaps we rely too much on our ego, believing we can come up with everything on the fly, when we should instead look at how other crafts did it in the past?


I've spent maybe 40-60 hours a week on average programming since I was 15 or so (42 now, but I don't program as much nowadays). I'm very logical in general, so I was naturally drawn to programming, but that's a ton of work to put into something. Software engineering comes easily to me now, from the very high level to the very low level.

I don't know tons of stuff about tons of stuff, but I do have a fairly good sense for how computers work at all levels of the stack, which helps.


Slightly off topic.

But I feel that for a lot of "knowledge work", what is proclaimed to be the fundamentals and what the actual fundamentals are diverge quite a bit.

E.g., is automata theory a fundamental of (generic) software development? IMHO it's not. Sure, it's a fundamental of many things you use for software development (e.g. programming languages, compilers, various "foundation libraries" like regex, etc.). It's also a neat tool to have from time to time. But it's definitely not a fundamental for most software development jobs.

Though it's also a bit of a question of what you define as "fundamentals". There are fundamentals you need to understand at least somewhat to _effectively_ improve yourself, at least past a certain (often medium) skill level, e.g. how colors work (physically and perceptually) for painting and/or graphic design. And there are fundamentals the field is technically based on that barely matter for using it: a lot of things related to the Chomsky hierarchy and grammars are irrelevant to most software developers most of the time. Not always, sure. But most times you end up needing that stuff at work, you should pause and wonder "is this a good idea?". Because, let's be honest, while it's often fun, most times you are reinventing a wheel, or making things more complicated than they should be, or less maintainable, etc., sometimes in subtle ways (e.g. a custom config file format instead of leveraging existing formats like JSON, TOML, etc.).


Athletes and musicians pursue fundamentals because they have time; their “work” occurs in intense but short bursts of performances, leaving them the rest of their days to practice. Knowledge workers don’t “practice” because their job (long-term research or whatever) demands much more time and commitment.


perhaps if you have a very narrow definition of what a "musician" is.

a modern music producer will literally spend 700 hours on a single song.


Does that kind of musician also spend time practicing?

Because the comparison we’re being offered is to a concert musician, and their work routine is likely very different to a music producer.


I would argue the process of producing a song falls under “practice” of the craft.


Programming also falls under "practice" of the craft, but we aren't talking about that kind of practice, we are talking about training sessions.


It's all relative, of course. But keep in mind that 700 hours is about 4.5 months of full-time work. I have certainly had epics lasting much longer than that. And the "concert" is just shipping/launching while I'm already working on the next epic. Maybe later on we bug-fix, but we never truly get to "own" a feature the way a musician owns a song and gets to re-perform it in their repertoire.

I'm not really trying to establish which is harder or easier, just that the pipelines and payoffs differ immensely.


In this instance, what would qualify as "pursuing fundamentals" for a producer, as defined by the author?


Some of this is covered in the book "The Creative Act" by noted producer Rick Rubin.

https://www.penguinrandomhouse.com/books/717356/the-creative...


Resting ears. Ear training. Doing your scales. Training your taste. (re)learning old/new tech/tools/instrument/history/theory of music. (re)listening to known/new music from separate styles/periods/cultures. Listening to artists you work for/with. Training/mentoring others.


Many top-tier knowledge workers are also teachers, who review their fundamentals every time they teach them, and whose ideas are regularly vetted by students in class discussions.


This doesn't seem to take into account that professional athletes and musicians work very few hours in a year. Imagine if programmers were like NFL players, working for four 15-minute sessions, seventeen times a year.


What an odd thing to say. How is practice and training not work?


I'm pretty sure they mean that NFL players (and musicians) have dedicated practice time separate from their "performance" time (games for athletes, concerts/recording/writing sessions/etc. for musicians), whereas software engineers are (generally) expected to produce useful output during all of their work time, and aren't generally allotted time in the schedule for self-improvement.


> and aren't generally allotted time in the schedule for self-improvement.

I never met a company that didn't expect to have this in the schedule and budget for their employees.

Given the breadth of technologies and the pace of the industry, I don't get how a tech-dependent company could afford not to.


In contrast, I have only met companies that don't invest in real training or schedule/budget for any knowledge gains.

The only exceptions so far have been FAAMG, but all of them also have it as optional not mandatory and rarely encourage it tbh.


Same here. The only time I've seen a company schedule/budget for training was when I worked at Intel in the early 2000s. They were very serious about sending people for training classes in various technologies. After I left that company because of a big downturn, I never saw this kind of support for training at all.


Ask 3/4 of my last studios. Not even much introductory training. Just sat down and given a task after the first week of setting up authentications.

Also, contracting doesn't usually let you bill training as part of the work. That's just "research" for the problem.


But even then the training to "performance" ratio is not even close (and for good reason) for developers vs athletes/musicians.


Training is work, but their main "events" are much shorter than, say, a programmer's. Even the longest athletic event, like the TdF, lasts only several weeks. A programmer or lawyer working 9 to 5 (at a minimum) doesn't have time after work to practice.


And how much of that 9 to 5 is writing code?

This whole line of reasoning is ridiculous and smells like something said by people who know fuck all about athletics and musicianship.


Not writing code doesn't equate to not working. Thinking, documenting, designing, discussing, etc are all important parts of their job. I don't understand why you seem offended by my take on this. This line of reasoning doesn't by any means diminish athletes/musician's work.


Because the identical logic works when used on athletes and musicians. You're just being obtuse and refusing to recognize it. I don't know if you get off on feeling superior to people who aren't developers, but I think it's weird you're trying to argue this angle so hard.

In all these professions, and many more, we're expected to maintain a certain level of performance (or capacity for performance) whether we're officially given the time for it at work and whether we're paid for it or not.

I also don't see how you can possibly in good faith compare "public performance" to whatever you think the equivalent "performance" is as a developer.

It sounds disrespectful and unnecessary, honestly.


My main point is that these jobs are fundamentally different and people spend their time differently as a consequence. At this point, I am more curious about how what I argue even comes across as me feeling superior or being disrespectful to people who aren't developers. (I am not a developer and would rather spend all my time running if I could)


Practice and training are work.


Imagine if you told your boss you were only going to actually write code or investigate bugs from 1pm-5pm on Fridays, and the rest of the week you were going to practice "fundamentals". Your boss would think it completely ridiculous, but it's still a larger ratio of performance to practice / training than a professional musician or athlete.


The nature of the work is not the same, hence, the proportion of the modalities of your work are not identical.

The problem is considering that only your office boss work view is the one that qualifies as work.

The problem is also considering that a software engineer's performance is in writing code/investigating bugs, whereas it is really in the whole process/intellectual pursuit. In some cases, you will need to spend a whole week going back to fundamentals or training or whatever, to be able to solve your issue in a few hours on Friday.


>The nature of the work is not the same, hence, the proportion of the modalities of your work are not identical.

I guess we solved that problem then. We're comparing apples to oranges and wondering why oranges don't have edible skin.

>The problem is considering that only your office boss work view is the one that qualifies as work.

So you're suggesting that more programmers should practice in their free time?

>The problem is also considering that a software engineer performance is in writing code/investigating bugs, whereas it is in the whole process/intellectual pursuit.

You can argue we're always performing, or never performing, in that case. Or perhaps our "performance" is the crunch for a deadline, or right after a product ships.

Either way, it's fundamentally different from the practice/performance scheduling of athletes or musicians. A performance is where you put 110% into an act, often in a burst. Physically or mentally, we can't afford to operate at 110% every day; that's why there's often a rest day for musicians/athletes. For knowledge workers it's much more spotty.


He separates those from “performance”.


Yes, but that's still a peculiar separation.

Learning, training, practicing, teaching, rehearsing, performing are just several different modalities of (the) work, with distinct proportions depending on the job and the role(s).

It's not because one's not in a 9-5 office job that it's not work either.

Your plumber, or locksmith, or carpenter, or physician is also often performing for you only a few minutes/hours. You may think you're watching/paying for this performance only, but you're really watching all the experience that goes into this performance, that is the result of their previous studies, training, and practice and other performances.

The "difference" with a "typical" office job is that you don't get to have them in the single same place all the time, and watch/see them work through all those modalities, sanctioned by some manager. It's much more open than that.

What is amazing is how normalised the "controlling" factory/office work culture has become.


I think that's the point - you're practising at your desk everyday.


Taxi drivers practice at their car everyday, but we don't see many stock car teams poaching experienced taxi drivers to compete as pilots.


With how good WLB is for the intelligent in tech, many of them do perform at about that exact amount per year!

Seriously, this is the source of “rest and vest” as a mentality and why some companies, e.g. Microsoft, are seen as tech retirement homes. They hire the brilliant lazy.

Yes, there really are a lot of FAANG caliber engineers who don’t work much harder than you described. Yes really.


I can't tell if this is meant to be sarcastic or not.


I don't think it is. An NFL player is hired to perform well for just a few hours a year; the rest doesn't matter. A live musician needs to play well during the few hours of concerts they give each year; what they do when the public is not there doesn't matter.

It means all of their job is concentrated in a few hours per year, so they have to be damn good at it. In order to do that, they need training, which is most of their working time. For most other jobs, there is much less time to train, and it doesn't matter as much because what counts is the average performance, not just a few key moments.

For programming, the parallel would be competitive programming. A competitive programmer will spend days studying algorithms like no one else, because it will matter for the hour or two of the competition. For typical programmers the loss of productivity for not knowing the algorithms is less than the time spent studying them.


If you interpret OP's term "work" as "perform on the public stage", then it comes across as non-sarcastic.

I don't think they meant to say that NFL players loaf around all year except for a cumulative 17 hours.


Yes, by "work" I meant "do what they are specifically paid to do, i.e., perform in front of an audience."


I think it was confusing people because pro sports players are often paid by an org or team that will fire them if they don't show up for the endless hours of training. They also get huge amounts of resources to make that training more effective.

Versus, say, a touring rock band that gets a cut of the performance revenue. I don't think there's anyone paying them a multi-year contract salary.


Saying training is work is also confusing, because just doing your work is not training either. I'm kind of angry at the school system for never teaching me about deliberate practice. For the longest time, I believed practicing something was just doing it over and over.

I'm having a lot of trouble understanding your definition of work. Even if you mean perform: most athletes have to perform continuously outside of games in order to earn minutes in games. And touring musicians perform way more hours than you cite.

Imagine if programmers were like NFL players, constantly measured on their performance against their peers.


NFL players wouldn’t have time to do drills if their games last 8 hours and they need to play everyday.


I think a great example of the point you're trying to make is taxi drivers and motorsport pilots.


The article starts with a legitimate problem but meanders to false or suboptimal solutions. It is true that, for example, when I read a text, I can usually only (consciously) remember a few high-level points. But the solution is not note-taking, spaced repetition, or "Inbox Zero". This might be anecdotal, but I have tried all these and failed to discern any noticeable improvements. One technique the article mentions that might work is Ben Franklin's practice of rewriting a previously read text in one's own words.


Yeah, that was probably my primary disagreement. There's no universal fundamental that fits all. And it's been established for a while that, generally, continuous active practice (funny, just like athletes and musicians) will yield better results than studying theory and then trying to "perfectly" approach a problem.

Your fundamentals will change per field, and even per-domain. A back and front end web dev will have very different fundamentals.


I'm probably missing the forest for the trees, but elite players and world-class musicians are the p99 in their fields. I'm pretty sure some top-notch people at FAANG, in medical research, etc. do at the very least teach/mentor other people, which makes them go through "the basics" often enough.

Also, muscle coordination is something completely different from "knowledge work", unless the knowledge worker needs to learn Hamlet by heart.


The piece compares exceptional athletes with average knowledge workers.

It’s not interesting to compare an extraordinary athlete - fill in your favorite professional hall-of-famer or multiple Olympic gold medalist here - to your average knowledge worker. Of course the extraordinary person does things differently.

More interesting to compare an extraordinary knowledge worker instead: top-tier CEO, famed author, Noam Chomsky, Einstein, whatever.


You're explicitly choosing the basis of comparison on the grounds that it will deliver the answer you prefer to hear. The piece compares professions. Professions are a way you make money to live. They should compare programmers who make $150K/yr to musicians who make $150K/yr.

Ironically, the fact that you think it is obvious that you would only compare the 0.1% of "knowledge workers" to musicians and athletes means that you find the conclusions obvious. Of course knowledge workers, at the same salary as athletes and musicians, are far worse in quality.

Not that it's an injustice. Athletes and musicians are paid to be passively watched and listened to; it's a demand thing. Who's going to pay to watch or listen to someone average? The average programmer (and the average garbageman, who also doesn't put a lot of time into the fundamentals) gets stuff done.


So are we comparing million dollar software architects to the average or below average athlete in that case?

I just don't think salary is a good metric or comparison here. Concert musicianship as a paid profession has been declining for decades. Tech is a multi-trillion-dollar industry, and even outside of tech every business needs to establish basic IT and security.

It's a mix of demand, supply of cash, supply of workers, general respect, and a few other factors. Everyone needs school teachers, but how do we treat them (outside of the whole "essential worker" schtick the one time they really should have stayed home)? Child care, on the other hand, pretty much lacks the funds to compensate any better, given all the regulations that need to be upheld.


Your second paragraph is a refutation of your first.


Speaking from personal experience: it depends on what level of musician you aim to become. If you intend to be one of the top classical solo musicians (violin, cello, piano, flute, etc.) you will have to practice 8h a day - every day - for years. Even if you "only" aim for a top position in a major orchestra.


Where are we with robot solo musicians?


If you care about music: at least 100 years, almost certainly much longer. I don't think any of us in this comment section will live to see an AI that truly understands human music.

If you care about money and don't mind making the world a more terrible place: humiliate a few dozen human classical pianists by making them record hundreds of hours of motion capture, invest in engineering a good robo-arm, and I would guess in 5 years you'd have something passable.


I would think self-play and recording would be enough to iterate. Chess engines don’t really need any human games any more and can learn from self-play from scratch. I don’t see why music would be different.


Games are "easy" for reinforcement learning because they have win conditions that can be automatically evaluated. Music isn't really like that.
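To make the contrast concrete, here's a minimal sketch (in Python, names and structure are my own illustration, not from any real engine): a game like tic-tac-toe exposes a win condition a program can evaluate mechanically, which is exactly the reward signal self-play needs; there is no analogous function for "this performance is good music."

```python
def tic_tac_toe_reward(board):
    """Return +1 if 'X' has three in a row, -1 if 'O' does, else 0.
    board is a 3x3 list of 'X', 'O', or None."""
    lines = (
        [[board[r][c] for c in range(3)] for r in range(3)]      # rows
        + [[board[r][c] for r in range(3)] for c in range(3)]    # columns
        + [[board[i][i] for i in range(3)],
           [board[i][2 - i] for i in range(3)]]                  # diagonals
    )
    for line in lines:
        if line == ["X"] * 3:
            return 1
        if line == ["O"] * 3:
            return -1
    return 0


def music_reward(performance):
    """No such function exists: musical quality is judged by human
    listeners, so a self-play loop has nothing objective to optimize."""
    raise NotImplementedError("no automatically checkable win condition")
```

A self-play loop can call `tic_tac_toe_reward` millions of times per second and improve against it; the best you can do for music is train against a learned model of human preferences, which is a much weaker and more gameable signal.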


I am skeptical of a robot's ability to interpret a piece of music from, say, Gershwin, Bernstein, Mozart or Beethoven. The beauty of human musicians is that each high-level musician may have his own approach to a given piece of music. It is not just about the score as written but about interpreting what the composer might have thought or felt. A robot playing these pieces might send the aforementioned composers spinning in their graves.

Some new Steinway pianos have a 'high resolution player piano' feature where the performances of concert pianists can be played back on your Steinway. Not really a robot soloist, but I thought it was interesting. Not interesting enough that if I suddenly came into Steinway money I'd get one with that feature, though.


record players are perfect


robot lips are pretty poor. They exist, but not at the point of making brass/wind instruments practical.

most other things can be operated adequately by robots.

But then, you're better off with either synthesisers or direct genAI synthesis.

However I do wonder what the point of offloading the distillation of human emotion to robots is. The whole point of music is that it contains what someone _feels_.


I have been suffering from this lately, ironically because I'm venturing into business.

I find huge gaps in my ability to rigorously read (and push through) boring (but important) paperwork, take notes, and do the required work afterwards (or make an important decision based on what I have read).

I find it very difficult to organize myself (and others) to do chores especially ones that are very disruptive and not technical.

I'm starting to see the value of project managers and other non-technical or semi-technical people in companies. Their work now seems much more difficult than I imagined previously. Their skills are much less "interesting" and maybe "easier" in isolation, but at the same time they need to perform at high levels constantly.


Former customer success manager with a technical SaaS company here. Formerly very much in the practical intersection of support, consulting, and sales. Always interesting to me that "not everyone" understands the value streams delivered by functional teams/roles. Human orgs really do need the technical and non-technical and everyone in between.


This is simply a case of "How good do you need/want to be?"

This is everywhere. Humans can get remarkably competent at lots of things with a small investment of time (100 hours or so) and some intermittent practice.

The problem is that reaching the next layer almost always takes a big jump in time commitment. Want to be better at that foreign language? Yeah, 1000+ hours of practice and memorization incoming. Want to play something on guitar other than Wonderwall? Yeah, 1000+ hours of scales and metronome work. Want to win more at chess? Yep, 1000+ hours of tactics along with some basic opening memorization.


The other problem is that modern society gives us less time even if we do want to put that time in. Especially now that bosses expect to be able to ping you at any time on the mobile pocket PC we call a phone.

So that leaves the idea of treating your job as "practice". And that works for a few years. Then you realize you're mostly pushing pencils after that and not truly pushing to the next layer.


I agree that practice can be tremendously valuable in knowledge work, but at some point in the skill curve, once you have built up the ability to accurately self-evaluate, the value of practice goes down because the work itself is the practice.

Benjamin Franklin was referenced in the article, and there are far more examples in his life than just the writing exercises where he employed deliberate practice to improve his ability in an area. But he didn't continue these once he was performing those skills at a world-class level - instead he switched to practicing new skills he wanted to add.

But if you haven't tried doing some deliberate practice - I'd highly recommend it.


For athletes, everything is less subjective. Diet, gear, tactics, and strategies either produce results or they don't.


I disagree. “Results” can be pretty hard to quantify in athletics, especially in team athletics. Sure, there is a binary result for a win or loss, but there are so many confounders that it can be pretty difficult to determine whether a given strategy, diet, or piece of gear is optimal, or even better than another.


When athletes exercise they keep track of sets, repetitions, weight, miles run, distances, times, etc.

If you make an adjustment that enhances your performance you may notice it.

Tactics and strategies are harder to track because they are situational rather than universally superior. You can still see results if the athletic performance of the team improves... you can see it in metrics like ball possession and such.


It varies. If doing quantitative or research work, there is a better pursuit of fundamentals, but even so, it's no more than is necessary. Yet, it is fundamentals that push boundaries, as for example with innovative neural network architectures.


+1 for hard columns. I can't stand fluid web interfaces; it makes it really hard for my mind to remember the placement of, and visualize, the words I've previously read. I did not have this problem with textbooks.


As a knowledge worker who went from math academia to software engineering to management, it turns out the fundamentals of each job haven’t actually mattered that much for the next job.


this fits... i'm constantly amazed by how bad most programmers are at maths and problem solving, and how few of them do anything in their own time to improve their skills.


Do musicians do any practice in their free time on top of 8 hours of daily work?

Their job is to practice, they don't need to use their free time for that.


Comparing top tier anything to normal people is apples to oranges.

I am a knowledge worker, but I don't often take notes in meetings, because the purpose of meetings is to get agreement, not to forge new knowledge. Sure, there are minutes and actions. But meeting notes are not a "kata" that I practice to be better at my job.

I work in a research org at a FAANG, which supposedly puts me in the "top tier". I do not have a doctorate, or a masters. The thing that makes me "good" is that I am able to communicate how to do x with y, and direct people to use z with building blocks omega and theta. The thing I practice every day is working out how to translate an infrastructure problem to a researcher who couldn't give a shit and just wants to put what they have running locally on the GPU farm, but faster.

That is my kata, that is what I strive to be better at.

I write to explain, not to remember; that's just a nice side effect. Is that writing perfect prose? Fuck no, but it's a fucksight better than most of my peers'. It has to be, because I'm a shit engineer otherwise.

> People seem to forget most of what they read

Yes, and musicians forget music. Sure they have a repertoire of core pieces that they can pull out, but they are often learning one off pieces, or semi-sight reading stuff (session musicians are fucking ace by the way. Some are able to read music like a news reader does an autocue.) That core repertoire is kept alive because they need to play it often. For me, my professional repertoire is threading, message passing, and large scale dataflow. But my sight reading is computer vision shit.

In the same way a phd student will master and expand a tiny part of human knowledge, a musician will tend to specialise in a few composers, styles or periods.

> confusing a sense of enjoyment with any sort of durable understanding

Again, that's not what a knowledge worker does. Learning for fun is not the same as the core knowledge/skills required for someone to perform a job. That's someone pissing about and learning new things for enjoyment, and they should fucking do it regardless of the snobbery from people who want "completeness".

One of the amusing things about this whole argument is that the writer must have been able to write, spell and read well from a young age. The ancient Greeks would have been very suspicious of that kind of working, because they thought that writing was the death of memory, deliberation and debate. Socrates would have been particularly pissed off by the assertions on memory.

I couldn't write meaningfully until I learnt to touch type. So for me, everything was memory. I work differently to most people, so I'm not arrogant enough to produce sources and say that I have the best way to be a knowledge worker. I don't, but it works for me. The author would do well to remember that untested assertions are not science. (Yes, even if you cite papers.)


I'm curious....

How many of you use ZettelKasten note taking?


What is the underlying technology for notes.andymatuschak.org? Seems like a nice note-taking application.


I believe he uses Bear notes and exports them for this closed source web app. I think he has mentioned that he just wasn't ready for it to be open sourced.


Comparing top-tier athletes and top-tier musicians against average knowledge workers will lead to this sort of result.

I suspect that top-tier athletes at least are not spending 40 hours a week on training and performing. Writers, too. Musicians I am not sure about.


The saying goes "smooth and slow to go fast". It applies to all sorts of physical abilities: playing an instrument, driving, motorcycle riding, shooting, and athletics.

There's a lot of neural and muscle learning and tuning involved in specializing in those skills. But in order to start, you need to learn and tune the right things. As they also say, it's very hard to unlearn things.

It's what amazes me when I watch baseball (I'm a nut for baseball). We watch these guys perform "routine" stuff on the field every day, but we also watch them bumble, slip, drop things, miss balls, etc. And these are the REALLY GOOD players. There are 10,000 other players in the minor leagues. They try to make it look easy, but demonstrably, it's not.

But if you watch how they train, it's all about fundamentals. Arm angle, foot placement, where to look, when to look, and that's even before you talk about "baseball" knowledge -- knowledge of the game itself, field awareness, etc. This is just getting the ball in the mitt or the bat on the ball.

In our field?

Not so much. It's far less important.

My favorite anecdote was when a junior programmer at work came to me and we were talking about his project, a little GUI front end to a SQL database. He was done with the project, and I asked him how it went. He said it went fine, but he was confused about something. He wasn't sure what the difference was between RAM and disk.

So, here was a fellow, who accomplished something, using then modern tools while essentially ignorant of how a computer even works. This is a testament to the tools and platforms of the day. How with just some syntax knowledge, and a bit of a logical head on his shoulders, he can accomplish productive work.

For many, computer work is borderline blue collar work. It's assembly line stuff, know what to do, not necessarily how or why it's done. Drag and drop, cut and paste, commit it and ship it. And now, of course, we have the AIs to help.

This is not a bad thing.

I've managed to get through my entire career without a deep understanding of networking, firewalls, BGP, routing, all of that stuff. Can I configure a DNS server? Nope. Despite the Petabytes of information I've shipped hither and yon across such things, when it comes down to the core level? The lower layers of the stack? "Contact your Network Administrator" because that's not me.

I am ignorant of cache lines and such like that. I know they exist, I certainly understand what they do, but I've never given them any consideration in my work. None of that has ever been necessary.
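(For anyone curious what a cache line actually buys you: here's a rough, hedged sketch of the classic demonstration - traversing the same 2D structure in memory order versus jumping across it. The function names are my own; in a language like C the gap is dramatic, while CPython's object indirection dampens but usually doesn't erase it.)

```python
import time

def make_grid(n):
    """An n x n grid as a list of row lists."""
    return [[1] * n for _ in range(n)]

def sum_row_major(grid):
    # Visits elements in the order each row was allocated, so
    # consecutive reads tend to hit data already pulled into cache.
    return sum(x for row in grid for x in row)

def sum_col_major(grid):
    # Jumps to a different row on every read, touching a new
    # region of memory (and often a new cache line) each time.
    n = len(grid)
    return sum(grid[r][c] for c in range(n) for r in range(n))

if __name__ == "__main__":
    grid = make_grid(2000)
    t0 = time.perf_counter(); a = sum_row_major(grid); t1 = time.perf_counter()
    b = sum_col_major(grid); t2 = time.perf_counter()
    assert a == b == 2000 * 2000  # same answer either way
    print(f"row-major: {t1 - t0:.3f}s  col-major: {t2 - t1:.3f}s")
```

Both traversals compute the same sum; only the access pattern differs, which is the whole point - correctness never forced you to learn this, only performance does.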

It's certainly valuable to get exposure to all of the parts of the puzzle, even if you don't have a full understanding of them. I know I resort to core fundamentals about how things work to understand problems all the time. But the truth is, for a lot of the work available, and that needs to be done, that level of detail is unnecessary.

I've worked with folks who are fascinated with the craft and field, always learning and growing. And I've worked with the 9-5 folks, who learn precisely what they need to accomplish the job, and just...stop. Do the work, but just the work. They have other interests elsewhere.

Doesn't mean they can't do the job though. These are not bad people.



