An old favorite that might find new relevance with the growth of OpenAI. Substance of the article begins after the quote from John Stuart Mill. Some of my favorite passages:
"One reason for my own skepticism is the fact that in recent years the AI landscape has come to be progressively more dominated by AI of the newfangled 'deep learning' variety [...] But if it’s really AI-as-cognitive science that you are interested in, it’s important not to lose sight of the fact that it may take a bit more than our cool new deep learning hammer to build a humanlike mind.
[...]
If I am right that there are many mysteries about the human mind that currently dominant approaches to AI are ill-equipped to help us solve, then to the extent that such approaches continue to dominate AI into the future, we are very unlikely to be inundated anytime soon with a race of thinking robots—at least not if we mean by “thinking” that peculiar thing that we humans do, done in precisely the way that we humans do it."
My impression of cognitive science is that it is a branch of psychology/neuroscience largely focused on stimulus -> response, and on studying what happens in between. There is a lot to learn here, but I would be surprised if it's the whole potato that unlocks blatantly sentient AI.
> How shall we find meaning and purpose in a world without work?
What? Everyone in the world is brain damaged enough to all keep repeating this useless idea.
Work is just the shit you do for other people to stay alive. If a robot is doing that thing, I'll do something else. If robots do everything a person can do for zero marginal cost, then we are in a post-scarcity utopia, so why is that bad?
In the land/labor/capital triad, the only thing that will become post-scarce due to AI is labor. Anyone who lacks one of the others will be screwed, since the only thing they have to trade - their labor - will be worthless.
If nobody can buy your land or trade your capital, what do you really have? We need the value produced by labor for there to be a market for anything else.
Labour being replaced by autonomous robots driven by AI means that the output is cheap and plentiful.
But this requires initial capital to get going, and that capital (and land) is what gets traded. At some point, when capital is no longer required, it would become worthless, but by then I think humanity would've reached a Star Trek level of post-scarcity (or killed itself with a war).
It's not impossible, but it is highly unlikely to happen that way… There's a certain group of humans that are determined to hoard all that they can, and they don't want to share.
> What? Everyone in the world is brain damaged enough to all keep repeating this useless idea.
I feel like it's some kind of Stockholm syndrome: they don't want to realize the sheer, absolute lack of logic and meaning in their day-to-day life, so they rationalize that fulfilling random tasks for most of their day/life somehow _is_ the meaning of life.
To me it signals a complete lack of imagination about what life could be. Opening any book from the 50s/60s/70s about the "future of work" that automation was going to bring would blow their minds. They're too caught up in the rat race to even notice that there is something else.
> If robots do everything a person can do for zero marginal cost, then we are in a post-scarcity utopia, so why is that bad?
Because you won't have the money to buy anything, perhaps?
Just because things can be produced at zero marginal cost in no way means those things will be free. The people who own the AIs doing the stuff will want to gain wealth for doing so.
If robots can do everything, that includes duplicating the robots. And it only takes one country willing to tolerate robot "intellectual property theft" for free duplicates of the do-everything-robots to proliferate.
There's currently some incentive for governments in poorer countries to put up with IP-friendly laws favored by richer countries. It might be the only way to get their exports into the European market, or get development loans, for example. But if robots can do literally everything then there is no reason that these carrots matter any more. Instead of taking development loans and trying to become the next low-cost-manufacturing location after China, just copy the robots and ignore everyone who says it's theft. If you're a Sri Lankan, Bolivian, or South African politician looking to improve your country and become wildly popular at the same time, "copy the robots" is the fastest way to develop. It's also the fastest way to modernize and expand a poor country's military, if they're afraid of gunboat diplomacy from the richer countries.
That only means that the robots will also be in a similar situation: in the end the only things that will be of any worth are resources (land, mines, oil fields) and market share, while labour and IP will become worthless as we reach a plateau of science and development, and patents and copyrights run out.
Given the societal impact that the "resource curse" has, future societies might look more like some African country or the Russian oligarchy, where a small percentage lives in utter luxury while the masses live in extreme poverty.
I agree that the ownership of natural resources would still matter. I don't think that concentrated oligarchy is the probable outcome, though. If robots do everything, trying to prevent copies is like trying to prevent copyright infringement. It's the clanking, macroscopic equivalent of having a replicator. It's as easy to make weapons as to make appliances. I can see things spiraling out of control to the point where e.g. routine civilian air travel shuts down, following the wide availability of MANPADS, and nuclear proliferation runs amok, but I don't see a stable equilibrium that ends with secure oligarchies. There are too many rival oligarchies in the world for them to coordinate on the problem of keeping do-everything robots locked down in the interests of their own safety. There will be defectors.
With a self-replicating robot you can build an army of robots that can build rockets that can take self-replicating robots to asteroids, which can be turned into goods and habitats by the robots.
I searched "obsolete", "Obsolete Short Story", "Obsolete Alien", "Obsolete Alien Short Story", and "Obsolete Alien Sandstone" and couldn't find anything. Do you have a link?
>Work is just the shit you do for other people to stay alive.
A disheartening realization my sister-in-law pointed out is that we don't perform work, we perform jobs. Even the unemployed and retired have many jobs. Sibling, friend, parent, child, partner, spouse. Our most vital and rewarding jobs don't have a uniform or an RFID badge, but the robots may nevertheless take them too.
You and I might be hesitant to replace friends and family with artificial people but they might not feel the same way. Why risk rejection when compatibility can be programmed?
It requires a collective leap of altruism for humanity to bridge the gap from “work for food” to a society free from scarcity. The most likely outcome is a society that drowns in its own selfishness as the fires that sustain our current world order cool.
Work and accomplishing tasks are baked into our sense of purpose. Look at anyone who's long-term unemployed. They mostly lie around the house playing video games or watching TV at best, or they get involved in criminal stuff at worst. Alcoholism and drug abuse are commonplace. Neglect of dependents is commonplace.
People need work to feel that their lives have meaning.
Post-scarcity utopia is exactly that. A fantasy. If people have nothing to do then civilization will collapse.
> Those things don't have to be work in the sense we mean it today.
I think the word "work" is overloaded with two meanings: one being the regular meaning of doing work (for a wage/price), the other being to exert oneself.
People _do_ find meaning in exerting themselves, but only for a purpose which is not merely sustenance. For example, intrinsically valuable work like artistic pursuits would still be done in a post-scarcity world.
The way we designed our current society makes unemployed people into some kind of pest; unemployment is not glorified, quite the opposite.
But that's not a god-given rule; nothing obliges us to follow that path, especially if (big if) AIs/robots take over most things.
If anything you're flat-out wrong: automation moved people from stable and fulfilling jobs (small companies, family companies, lifelong careers, &c.) into more and more precarious situations (mini-jobs, student jobs, faceless corporations, hashed careers, &c.).
Productivity shot through the roof while wages stagnated.
PS: I've been unemployed for large chunks of my life and had perfect meaning: I was productive, learned stuff, created stuff, enjoyed nature, got in shape, &c.
That is assuming there are still tasks that robots are not good at, which I predict will remain the case for a long time. Without those, humans just become unnecessary biomass. Anyway, if you expect some part of these fantasies to come to pass within your lifetime, expect to give up your reproductive rights in exchange for UBI or some such.
> Work is just the shit you do for other people to stay alive. If a robot is doing that thing, I'll do something else. If robots do everything a person can do for zero marginal cost, then we are in a post-scarcity utopia, so why is that bad?
We already live in a post-scarcity thing (I don't know if it's a utopia or a dystopia; for some people it's the former, for others the latter, and for some reason I think the dystopia has the higher population), but the surplus of production is destroyed or put into landfills (with police security so no one can steal it from the trash).
Power is going to be vacuumed up by a few people who are good at doing that, and when the vast majority of the rest of the population is both comparatively powerless and useless to the elites, what incentive will the elites have to keep them around?
Yeah, before, you needed soldiers, and many of them, so you couldn't make all of them part of the elite 1%. Those people were obviously sympathetic to people outside the 1%, so you could only go so far until the soldiers mutinied or just... didn't fight the revolutionaries with the zeal needed to be successful. With AI you'll have way better tools at your disposal. You can man a 24/7 control room of 10 people who hate the poors, all of them 1%ers, controlling an army of 10 million robots.
But on the other hand, elites themselves also have hierarchies. There used to be kings, lower nobility, higher nobility, etc. Those would fight all the time about how much power each of them had. If the top 10% of the human population had a secret meeting to kill the bottom 90%, and used their combined powers to do that, then the group that used to be the next 9% (everyone in the top 10% but not the top 1%) suddenly becomes the bottom 90% of what's left. Probably those people get really scared that the same will happen to them at the next meeting of the top-10% council.
> When the system does need a human pilot to do something, it usually just needs the human to expertly execute a particular sequence of maneuvers. Mostly things go right. Mostly the humans do what they are asked to do, when they are asked to do it. But it should come as no surprise that when things do go wrong, it is quite often the humans and not the machines that are at fault.
Actually that’s dead wrong: it is the system that is at fault here, rarely or never the human. Otherwise I thought this was a really good read.
Not really. That's the famous difference between Boeing and Airbus autopilots. In Boeing planes the pilot is in command and can override the autopilot by applying sufficient force against the autopilot's suggestions. You can still override the Airbus autopilot, but it is a detailed procedure designed to make the pilot reconsider whether that's the right thing to do. And it's unclear which is the right perspective.
First, that’s my bad on quoted text selection; in context it is clear that the system under discussion is not an autopilot but rather the entirety of the ATC system. But since that wasn’t obvious, I will attempt to respond in your context!
In that scenario, a time-sensitive decision will automatically lead to failure in cases where the autopilot is wrong. That’s a trade-off I wouldn’t take, but they chose it because in those same time-sensitive scenarios pilots are most prone to misunderstanding the situation and making a bad decision on that basis. However, in safety we do not ever blame the pilot for their misunderstanding and bad decision. Instead we look at the systemic forces that left them confused and attempt to improve the system.
If this is all conceptually new, I can highly recommend Dekker's The Field Guide to Human Error, which is about exactly this, though the field has progressed since it was published in 2002.
I'm confident in 10 years we'll look back on this and laugh at how people thought AI would take over, and in 200 years we'll be upset they didn't do more
We're not far off from people owning AI agents who will directly act in economic and political spheres. The potential leverage on behalf of an ownership class (of not necessarily just humans) with the ability to evaluate data, formulate hypotheses, and execute strategy in the real world will make Wall St. stock strats seem quaint.
AI in that capacity will be functioning as a highly competent secretary. Not necessarily revolutionary except for the latency between request for information and fulfillment.
"One reason for my own skepticism is the fact that in recent years the AI landscape has come to be progressively more dominated by AI of the newfangled 'deep learning' variety [...] But if it’s really AI-as-cognitive science that you are interested in, it’s important not to lose sight of the fact that it may take a bit more than our cool new deep learning hammer to build a humanlike mind.
[...]
If I am right that there are many mysteries about the human mind that currently dominant approaches to AI are ill-equipped to help us solve, then to the extent that such approaches continue to dominate AI into the future, we are very unlikely to be inundated anytime soon with a race of thinking robots—at least not if we mean by “thinking” that peculiar thing that we humans do, done in precisely the way that we humans do it."