"The use of FORTRAN, like the earlier symbolic programming, was very slow to be taken up by the professionals. And this is typical of almost all professional groups. Doctors clearly do not follow the advice they give to others, and they also have a high proportion of drug addicts. Lawyers often do not leave decent wills when they die. Almost all professionals are slow to use their own expertise for their own work. The situation is nicely summarized by the old saying, “The shoe maker’s children go without shoes”. Consider how in the future, when you are a great expert, you will avoid this typical error!"
Richard W. Hamming, “The Art of Doing Science and Engineering”
Today, lawyers delegate many paralegal tasks like document discovery to computers, and doctors routinely use machine learning models to help diagnose patients.
So why aren’t we — ostensibly the people writing software — doing more with LLMs in our day-to-day?
If you take seriously the idea that LLMs will fundamentally change the nature of many occupations in the coming decade, what reason do you have to believe that you’ll be immune from that because you work in software? Looking at the code you’ve been paid to write over the past few years, how much of that can you honestly say is truly novel?
> Looking at the code you’ve been paid to write over the past few years, how much of that can you honestly say is truly novel?
While the code I write is rarely novel, one of the primary intrinsic motivators that keeps me working as a software engineer is the satisfaction of understanding my code.
If I just wanted software problems to be solved and was content to wave my hands and have minions do the work, I'd be in management. I program because I like actually understanding the problem in detail and then understanding how the code solves it. And I've never found a more effective way to understand code than writing it myself. Everyone thinks they understand code they've only read, but when you dig in, that understanding is almost always flawed and surface-level.
Intrinsic reward is always part of the compensation package for a job. That's why jobs that are more intrinsically rewarding tend to pay less: not because employers are exploiting workers, but because workers add up all of the rewards of the job, including the intrinsic ones, when deciding whether to accept it.
If I have to choose between a job where I'm obligated to pump out code as fast as possible by having an LLM churn out as much as it can, and a job that is slower and more deliberate, I'll take the latter even if it pays less.
> Looking at the code you’ve been paid to write over the past few years, how much of that can you honestly say is truly novel?
We’re really not as clever as we think we are.