> It doesn’t really matter how good AI systems get, that’s not going to change, and since most white collar work deals with these kinds of problems, there is little danger in it being replaced.
Does the accounts payable team keep their jobs because their manager enjoys chatting with them? Does the junior analyst stay employed because the VP values their specific personal opinion on the Q3 revenue forecast? Note that the article is about work.
I wouldn’t frame it as “chatting with”; it’s more that corporations want people in certain roles to deal with things, more than they want just the results those people produce. Depends on the job and situation, of course.
When you have X employee in a certain role, you know someone is “handling” a particular thing. With AI that isn’t really clear. Maybe you just get the same person owning the responsibilities that previously were under 3 people.
I think the word "entirely" is missing from the last line. A significant number of white-collar tasks are being replaced, and eventually that leads to a need for fewer white-collar employees, which in turn also means less communication overhead and less need for humans in the loop to interpret subtleties, desires, etc. But that need will always be there at some level, or we'll have very intelligent AI agents that very intelligently blackmail your vendor's CEO because they have determined that to be the fastest way to get the TPS report you asked for. Humans still need to be there as guardrails at a minimum, but also because humans understand humans, and humans are your customers.
So yes, white collar jobs will be replaced, but they won't be replaced entirely.
It's the same error pattern every time: identify what AI is currently "bad" at, define that as the essential core of the work, declare the work safe. Wait 6 months, shocked Pikachu gif.
Basically we do not rationally analyze what work can be automated and what work is forever safe. We just assume that "sexy work" is safe, and work backwards to figure out how to explain this belief to ourselves.
Such a fascinating blog post! At first I could not believe it was written in 2013. But the more I think about it, the less I understand what he is actually trying to say. Anyway, the point that we (erroneously) see less prestigious jobs as more automatable is spot-on.
Did you read the article? I don't think it claimed AI is bad at anything in particular; it claimed that certain kinds of problems need human judgement even if AI is good at them.
It's extremely common for people to be unable to project into the future when there is a bias in the way. Any time you see a person blatantly fail to look beyond the tip of their nose, it's almost always their own biases getting in the way (i.e. irrationality: they're giving up reason in exchange for not having to challenge their own positions).
The other side of that irrationality coin is 2D extrapolation: a thing happened (or the context is such-and-such), so I extrapolate it happening again (once or many times) into the future along a smooth line, so as to fit my bias.
I built a Klatt formant synthesizer with Claude from ~120 academic papers. Every parameter cites its source, and the architecture is declarative YAML with phased rule execution and dependency resolution.
Note how every rule and every configurable thing in the synthesizer pipeline has a citation to a paper? I can generate a phrase, and with the "explain" command I can see precisely why a certain word was spoken a certain way.
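For anyone unfamiliar with this style of setup, here is a hypothetical sketch of what a citation-annotated rule in such a YAML pipeline might look like. The field names (`phase`, `depends_on`, `citation`) and all values are illustrative assumptions, not taken from the actual repo:

```yaml
# Illustrative only -- shows the idea of a declarative rule where every
# tunable value carries a citation, and rules declare phase ordering
# and dependencies for the execution engine to resolve.
rules:
  - id: vowel_nasalization
    phase: coarticulation          # assumed phase name
    depends_on: [nasal_context]    # assumed upstream rule
    when:
      next_phoneme_class: nasal
    set:
      FNP: 270    # nasal pole frequency, Hz (value is a placeholder)
      FNZ: 450    # nasal zero frequency, Hz (value is a placeholder)
    citation: "Klatt (1980), Software for a cascade/parallel formant synthesizer"
```

With rules shaped like this, an "explain" command only has to replay which rules fired and print each rule's `citation` field.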
Well, I've learned what a "formant" is today. Looking at the repo, it's not obvious to me which .md files were authored by you and which were system-generated. This is an observation, not a criticism. I was looking for the prompts you used to specify the paper summarization, which is very nice.
Do you think that bad things happening is just hilarious in general? Do you like to see good behavior punished? I'm really trying to understand what you get out of making this comment. Also, what happens when ... this doesn't happen? You just polluted the epistemic commons a bit more with some cynical bullshit, sans consequence? Enough. I think it's time to start calling this garbage out when I see it.
Two things can be true at the same time. It can notionally be a “good” decision and also a straightforward act of Anthropic continuing their PR that they’re some sort of benevolent entity, despite continuing to pursue a typical corporate capitalistic structure. It is what it is. The game is the game. But I’m not going to sit here and pretend their virtues are as pure as snow. I’m sorry that’s upset you.
> Costco is a really popular subject for business-success case studies but I feel like business guys kinda lose interest when the upshot of the study is like "just operate with scrupulous integrity in all facets and levels of your business for four decades" and not some easy-to-fix gimmick
I don't know; staff at my two Costcos feel much more uninterested and rude than I remember a decade ago. It used to feel fun, but now it's miserable.
At peak times they run out of carts and tell customers to go hunting in the lot for them, door greeters shout at members across the floor, checkout queues stretch the length of the warehouse, and they start half-blocking the gas station entrance 30 minutes before close so trucks can't get in. So maybe they're turning those profit screws.
Ah, right, by being actually good, as in - being okay with mass surveillance as long as it isn't being done in the US, being okay with Claude assisting in killing people as long as it isn't fully autonomous, and being actively hostile to open-weight LLMs and open research on LLMs? This kind of "good"?
No, OP is right, their PR department is doing a great job.
Correct. Protect our citizens' rights, as we are the ones under the jurisdiction of our government. Yes, design competitive weapons systems that can stand up to the threats that adversary powers are creating, but do so while maintaining human control.
Sibling comment summed it up pretty well: my country is considered an ally of yours, but even left-leaning Americans seem to take it for granted that we deserve mass AI surveillance/blackmail/manipulation if there’s a chance it could benefit US citizens in the short term. I suppose we deserve it for being complicit in American crimes for so long.
You're assuming things I didn't state. I don't particularly want mass AI surveillance at all, but considering how much more dangerous a government's mass spying is to its own citizens living in it 24/7, it's not unreasonable for that to be the focus.
> You're assuming things I didn't state. I don't particularly want mass AI surveillance at all
That's fair, sorry for that.
> considering how much more dangerous a government's mass spying is to its own citizens living in it 24/7, it's not unreasonable for that to be the focus
The US government is actively trying to influence politics in my country and spending huge amounts of money to do it. The US government is a much larger threat to us than our own government.
All of our tech is owned and operated by US companies, which means the US government has read/write access to all of our data. If we attempt to incentivize domestic software production (e.g. by taxing imported software, or by stipulating where our data can be stored and who can access it), the US government will destroy our economy. This has played out a few times recently.
I can't believe we were so foolish as to let this situation grow. It's going to be a painful few decades.
I feel this is a facile interpretation of the phrase, kind of like complaining that "Measure Twice, Cut Once" would lead to selling illegally adulterated flour. A more steelmanned interpretation of POSIWID--the way I think it's intended to be understood--would be:
"The practical outcomes of a system over the long-term reveal something important of the the true-preferences of the various interests which control that system, and these interests may be very different from the system's stated goals."
> The purpose of a cancer hospital is to cure two-thirds of cancer patients...

These are obviously false. The purpose of a cancer hospital is to cure as many patients as possible, but curing cancer is hard, so they only manage about two-thirds.
I don't see the contradiction here. The purpose of a cancer hospital is to cure as many patients as possible. "What it does" is cure as many patients as possible. The fact that as many patients as possible is currently (presumably) two-thirds is irrelevant. If major advancements in medicine or new types of cancer emerged which changed the percentage of people cured it wouldn't matter at all. "What it does" and "the purpose of the system" is still unchanged.
“If a system is maintained over an extended period and has observed behavioral traits that are consistent within that period, that is, in itself, strong evidence that those behavioral traits are consistent with the purpose for which the system is permitted to exist” is kind of a mouthful, though, and there is value in succinctness.
(Although there is another message, there, too: “the purpose of a system, insofar as it can be said to exist separate from what it actually does, has no weight in justifying the system’s existence or design”.)
Great read. I've always noticed that the type of argument invoked is often less telling than when and in which context you invoke that argument.
You can make a lot of claims and they can match to reality a lot - normally people think of evaluating things in terms of a strict "does this fit or does this not", but it's often the meta-style (why do you keep bringing up that argument in that context?) that's important, even if it's not "logically bulletproof".
Wow, that post is bad. The author clearly never actually attempted to understand what POSIWID means and where it comes from. Perhaps, instead of looking at Twitter, they should have opened Wikipedia. Or, better yet, Stafford Beer's books (though admittedly, he was a pretty atrocious writer).
The follow-up is slightly better, but still not very convincing, IMO. They get far too stuck on a literal interpretation of something that self-describes as a heuristic.
The phrase does not make more sense even if we go all the way back to Beer. I certainly don't feel alone in not understanding how he went from his (fair) observation that "[there's] no point in claiming that the purpose of a system is to do what it constantly fails to do" to his more controversial conclusion: "the purpose of a system is what it does" (aka POSIWID).
Surely, there were many more sensible (but perhaps less quippy) stops between the two.
Being quippy is the point. That's how aphorisms work: creating a short, pithy distillation of a complex argument, that you can then use pars pro toto to make a point.
I certainly agree that POSIWID is easily (and perhaps frequently) misused. But that doesn't invalidate it in general.
Same way you handle preserving any other property you want to preserve while "vibecoding" -- ensure tests capture it, ensure the tests can't be skipped. It really is this simple.
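As a minimal sketch of what "ensure tests capture it" means in practice, assuming pytest-style assertions and a hypothetical `slugify` helper standing in for whatever behavior you want preserved:

```python
# Minimal sketch: pin down a property you care about so that any
# AI-generated rewrite that breaks it fails in CI. slugify() is a
# hypothetical stand-in, not from any particular project.
import re

def slugify(title: str) -> str:
    # The implementation may be rewritten freely (by hand or by an agent);
    # the tests below define the contract that must survive.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slug_is_url_safe():
    assert slugify("Hello, World!") == "hello-world"

def test_slug_has_no_doubled_separators():
    assert "--" not in slugify("a -- b")

if __name__ == "__main__":
    test_slug_is_url_safe()
    test_slug_has_no_doubled_separators()
    print("ok")
```

The second half of the advice, "ensure the tests can't be skipped," is a process property: run them as a required CI check rather than something the agent (or you) can opt out of.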