On the one hand, this part of the abstract does make me feel a bit better about my own AI doomerism:
> Overall, we find muted effects of AI on employment due to offsetting effects: highly-exposed occupations experience relatively lower demand compared to less exposed occupations, but the resulting increase in firm productivity increases overall employment across all occupations.
This is generally good on its face, but the devil - as always - is in the details:
> We find that higher-paid occupations are, on average, more exposed to AI—the workers at the 90th percentile of the pay distribution have the highest mean exposure.
This is bad. These developments come at a time of immense wealth inequality, higher costs of living, stagnant wages, and growing civil unrest. Disrupting the highest-paid workers, who are also more likely to be dealing with the student debt crisis, only exacerbates the above problems. I know tech workers get roundly mocked and sneered at for whining on Blind about TCs in the millions, but the reality is those were middle-class jobs in America, particularly in big cities; disrupting those, along with doctors and lawyers, will have major knock-on effects on societal order. This is where my doomerism stems from, and I continue to maintain that position in general: disrupting higher-wage employment at a time of gargantuan wealth inequality would be highly destructive to economic and social order.
Okay then, so what does the paper say about that specifically? Is my doomerism founded in the case of Large Language Models (LLMs) and Machine Learning (ML)? The answer - thankfully - appears to be NO:
> Consistent with our conceptual framework, we find that occupations with high dispersion in AI exposure experience relatively higher employment growth, underscoring the importance of within-occupation task complementarities.
If I'm understanding this correctly (and I encourage actual scientists to correct me if I'm wrong, so I can learn something new), this seems to suggest the real consequence of AI use within an organization is to increase the employment of generalists who can function independently across domains and leverage AI for specialty tasks or in-the-moment upskilling, while over-specialized roles in areas highly exposed to AI (like, say, a Python, C++, or web developer) see a decrease in employment in favor of broader-skilled generalists at most firms. In other words, the more AI complements your skills instead of replacing them, the more likely you are to be employed and in demand. This is, to a degree, the expected outcome (and why I've fiercely resisted over-specialization: the risk-reward formula does not pan out in my personal calculus in the face of technological progress), but it also seemingly reaffirms my original thesis from when LLMs first sprang up two years back and everyone said they'd eliminate highly paid jobs:
> These (LLMs) would not be the great replacements of workers their boosters proclaim, but they're likely to be the final warning shot before whatever does.
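To make the mean-versus-dispersion distinction concrete, here's a toy sketch of how I read the paper's mechanism. The task lists and exposure scores below are entirely invented for illustration; this is not the paper's actual exposure measure, just my attempt at the shape of the argument:

```python
# Toy illustration of "mean exposure" vs. "dispersion in exposure".
# All task lists and scores are invented; the paper's actual measure
# is not reproduced here.
from statistics import mean, stdev

# Each occupation is a list of (task, ai_exposure) pairs, exposure in [0, 1].
occupations = {
    "specialist_web_dev": [
        ("write boilerplate code", 0.90),
        ("fix CSS layout bugs", 0.80),
        ("implement CRUD endpoints", 0.85),
    ],
    "product_generalist": [
        ("draft marketing copy", 0.90),    # AI largely substitutes
        ("negotiate with vendors", 0.10),  # AI barely touches this
        ("prototype a dashboard", 0.70),   # AI assists
        ("run user interviews", 0.20),
    ],
}

for name, tasks in occupations.items():
    scores = [s for _, s in tasks]
    print(f"{name}: mean exposure = {mean(scores):.2f}, "
          f"dispersion (stdev) = {stdev(scores):.2f}")
```

The specialist comes out with uniformly high exposure (high mean, low dispersion): AI substitutes for most of the job. The generalist mixes exposed and unexposed tasks (high dispersion): AI speeds up some of their work while the human stays essential for the rest, which is the within-occupation complementarity the paper associates with relatively higher employment growth.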
So while the report does kind of soothe my broader anxieties ("what if the worst-case scenario I've prepared for isn't bad enough and I'm wrong?"), I still stand by my personal concerns about the long-term effects of next-generation machine learning and AI tooling on a fragile employment landscape.
It corresponds with the qualitative observation that violent revolutions are usually led by the "middle class". This itself corresponds with quantitative theories like Turchin's Structural Demographic Theory.
Excellent report.