Hacker News
Occupational Downgrading: AI and Our Performative Work Future (stateofthefuture.substack.com)
32 points by lawrencelundy 5 months ago | 4 comments


> By the late 2020s, office jobs in developed countries will be primarily about overseeing AI systems—and even that "oversight" will be largely performative.

The constant in all of these articles is that the writer invariably doesn't understand what these roles are for or actually do.

The notion that lawyers are going to replace their paralegals with an LLM is farcical, for example. It's possible that a paralegal would replace a legal assistant by using an LLM for clerical work (e.g. "Take this list of data from this Word document and put it into an Excel table"), but one of the primary services the legal industry provides is document assurance - that is, reviewing a document to certify that it contains what it needs to contain. You can't use an LLM for this in many jurisdictions, because doing so is illegal and would expose you to criminal and civil liability if you swore an affirmation knowing you hadn't actually conducted a review.

Accounting is similar. I've been hearing it would be automated for the last fifteen years. The idea that a computer is going to assume responsibility for these tasks is farcical, because the whole point of having a human in the loop is that you need them there (both legally and as a matter of practicality) to catch errors, omissions, and frauds, as well as to affirm the documents produced by the accounting process.

You can in theory close the loop: have every transaction recorded by a point-of-sale terminal, arranged by a program into a database that generates reports, and maybe even dumb down access to that database so it's as simple as asking the computer to print you a graph in Excel. But the issue is that this is still specialized domain knowledge. The executive requesting the documents won't know what he doesn't know; neither would an LLM. What's worse, the executive might at least know that he doesn't know something, but an LLM doesn't know anything at all. This is exactly the same problem you run into with programming: an LLM can produce serviceable boilerplate code, but it breaks down on more complex tasks like choosing an architecture.

You could argue that all of this is just friction introduced by social and technical constraints, and that automation is inevitable. On a long enough timeline that's probably true, but it won't be happening during the current hype cycle over LLMs.


> In a world of AI-generated abundance, the ability to distinguish what is worth paying attention to becomes the ultimate scarce resource.

I am an optimist so I hope in the end we will come full circle and realize the true value of authenticity. Maybe the hollow shell the author references will implode, having no meaningful substance.


AI is a big lie. The planned reset is the truth. ML is good for the surveillance state and for killing people. There are other useful things, but they are not enough to drive the hype train.

The digital economy is driven by faith in the "best" tomorrow, not by rationality or critical thinking. Rationality is faked through statistical interpretation. Bias is ignored for the sake of shareholder growth.

Progress is the new religion. A global Babylonian model of existence. In short: Dystopia.

The models behind the tech are antihumanistic. Transhumanistic. Posthumanistic.


> Progress is the new religion

"Progress as religion" is not only not remotely new, but probably in the weakest state it's been in... I dunno, 150 years?



