Is it a bad thing to have dumb jobs done by machines?
I'm always amused when I read "stop making progress, there are some risks!". Of course there are, duh.
As for the singularity, I think it's an overrated theory. Some solutions are not just a question of intelligence. As if "intelligence" were the solution to everything... You also need resources, time and luck.
> I'm always amused when I read "stop making progress, there are some risks!". Of course there are, duh.
It's ironic that you caricature one point of view as facile, yet your point of view is similarly so - with no caricature needed.
What is being discussed is existential risk, and 'unfortunately' you'll probably never get to look silly for taking your point of view, because if it does come to pass we won't be here to discuss it.
I mean, to take the concept of making progress and its associated risks and then make such a generalisation is just not sensible. Progress does not have to keep the same properties it has had historically (which is why putting too much stock in history and its wisdoms is perhaps dangerous). We could be heading towards a proverbial waterfall.
The last remark about intelligence seems rather optimistic... resources, time and luck are largely limiting factors because of our own biological limitations.
Perhaps we are simply a link in evolution between biological life and cybernetic life. Who knows?
I still think, however, that the super-AI singularity is a phantasm. Intelligence is not the key to everything. As fast as ideas can go, putting them into action is the tricky part.
We should be so lucky to have machines outsmart us.
This article fails to communicate the unfathomable chain of events such technology would set in motion. Without hyperbole, all previous events in human history would pale in comparison.
That is hyperbole; for all we know, the first human-level AI (however you define human level) might just want to read lolcats and watch sports (or study mathematics but not help us, etc.)
That's what most humans want as well, but there is a system of incentives that keeps people working. Needless to say, the 'productive' versions of AI will end up with a lot more computing resources than the 'lazy' ones.
Apparently not, and I'm somewhat dismayed at the lack of opposition to this point of view... so I'll try to make a case against it.
Creating smarter-than-human AI would be humanity giving up its dominion, and who can say what will happen as a result. It is not melodramatic to say that there is a major chance it will be the end of humanity, the end of history and the end of everything that most humans value.
I would submit that our values - what we find wise, beautiful, kind, humorous or otherwise virtuous, our appreciation of the natural world and our legacies - are all relative to our human condition, and that jeopardising this, for everyone, is recklessness without comparison. Yet enough of us would do it.
Where is this complicity in our own demise coming from? Are supporters of AI and the singularity misguided - by their own values - or do we have irreconcilable philosophies on the matter?
My suspicion is that a lot of support for the singularity comes from death anxiety: the way things are, in the long run I'm dead anyway, so I may as well have a punt (50-50?) on immortality, with humanity as the stakes. That is, the singularity is a plausible alternative to an afterlife. Call me a wanker, but I think one should take fatality like a man (i.e. die), and not be so selfish.
For me, one good, or at least indisputable, reason is disillusionment with humanity... I would not be able to argue against someone who lived through the trenches in WWI and held such an opinion - that this world as we know it is just not worth it. However, I would not agree.
Finally, I think a lot of support for the singularity comes down to pure hare-brained optimism and reading too much sci-fi. It actually made me a bit sad to watch the recent Star Trek movie and its clumsy attempts to make the characters relevant (a sword fight? come on!). Traditional speculative science fiction used to be about final frontiers and buccaneering captains; now it's struggling to reconcile the future with anything we might want from a narrative.
Smarter-than-human AI may mean our story is coming to an end...
The technology to end humanity is already here (nuclear and soon biological weapons). In the long run superhuman AI may be the only thing that can prevent us from destroying ourselves.
Even if it were desirable, stopping the advance of technology is impractical. If superhuman AI is possible, it will be built. A more practical argument would be that superhuman AI should be strictly controlled, though advancing technology will eventually make that difficult too. The only solution in the long (really long) term is for us to become superhuman ourselves and attempt to preserve the things we find important in the transition.
> Where is this complicity in our own demise coming from? Are supporters of AI and the singularity misguided - by their own values - or do we have irreconcilable philosophies on the matter?
This reminds me of an (unintentionally?) hilarious LW post:
"For example, most existential-risks activists (scientists doing networking and research about risks like unFriendly AI) are male, and I plan a top-level post to assert that not having reliable access to sex with the kind of sexual partners who can most improve the life of an existential-risks activist should be considered a large disability in a male prospective existential-risks activist -- in the same way that, e.g., an inability to stop rationalizing one's own personal agenda should be considered a large disability."
So there you go. When Skynet gets invented, blame it on sexual frustration :).
It sounds like you're equating "AI" with "sentient intelligence with its own values and goals which will only arbitrarily coincide with ours." Certainly, that's a possibility--and one AI researchers would do well to avoid.
But the possible intelligences we can observe on this planet have more variety than that, and the useful intelligences we've created have even more. There's no reason a smarter-than-human AI couldn't simply be a general-purpose tool in the way that a faster-than-human spam classifier is a special-purpose tool.
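To make "special-purpose tool" concrete, here is a toy sketch of the kind of spam classifier being gestured at - assuming scikit-learn and a made-up four-message training set - a tool that beats any human at one narrow task while having no goals of its own:

```python
# A toy "special-purpose tool": a naive Bayes spam filter.
# Illustrative sketch only; assumes scikit-learn is installed and
# uses a made-up training set.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = ["win cash now", "cheap pills online",
               "meeting at noon", "lunch tomorrow?"]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)  # word counts per message

classifier = MultinomialNB()
classifier.fit(X, train_labels)

# Far faster than a human, yet it has no values or goals of its own:
# it only maps word counts to a spam/ham label.
print(classifier.predict(vectorizer.transform(["win cheap cash"])))  # [1]
```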
I know it was a typo, but software that's sentimental as well as sentient is probably what we need, as opposed to the vast, cool and unsympathetic variety.
I think most proponents of strong AI include the idea that they should be built on the foundation of a "premise-state" compatible with our human ideals such as intrinsic respect for life, appreciation of complexity and diversity, decentralisation of power, privacy of thought and freedom of action, etc etc.
Which, sadly, demonstrates how far AI as a discipline has to go. I am much more encouraged by the development of Roombas and similar robots that seek out an electrical outlet, and so are actually motivated to learn by something akin to hunger.
It is a reasonable concern - the foundation of our economy is built around labor being rewarded with recompense which in turn is used to drive the services and consumer goods economy.
Robots don't consume services or goods - all they require is manufacturing and maintenance. What happens when your gas station needs fewer employees because everything involved in pumping and payment is automated (as many are in California)? What happens when checkouts are all automated (starting to happen at Home Depot, Walmart and others)?
The answer, of course, is that people move up the food chain in employment and are freed from such low-level positions as technology improves. But, and here is the catch, as technology improves, the low-water mark may, in fact, start to rise above where people are capable of competing with technological solutions - then what happens?
Imagine a world in which fast food restaurants, grocery stores and gas stations were all 95%+ automated. That may happen within the next 10 years. What are the new jobs for those people? What happens when taxis, trains and buses are automated (the SkyTrain in Vancouver, BC has run without drivers under normal circumstances for 20+ years)?
There will always (in my lifetime) be jobs that technology won't be able to replace, but many, many jobs are going to disappear.
Maybe I am overly optimistic about human intelligence, but I don't really see a "low water mark", at least in a practical sense -- certainly not anytime soon.
There will always be a "least skilled job" in the employment hierarchy. By the time our society reaches a level at which its lowest form of employment exceeds the capabilities of our current checkout clerks and fast food workers, staying alive will likely require very little in the way of monetary resources.
The conventional wisdom says that the economy will create 50 million new jobs to absorb all the unemployed people, but that raises two important questions:
- What will those new jobs be?
They won't be in manufacturing -- robots will hold all the manufacturing jobs. They won't be in the service sector (where most new jobs are now) -- robots will work in all the restaurants and retail stores. They won't be in transportation -- robots will be driving everything. They won't be in security (robotic police, robotic firefighters), the military (robotic soldiers), entertainment (robotic actors), medicine (robotic doctors, nurses, pharmacists, counselors), construction (robotic construction workers), aviation (robotic pilots, robotic air traffic controllers), office work (robotic receptionists, call centers and managers), research (robotic scientists), education (robotic teachers and computer-based training), programming or engineering (outsourced to India at one-tenth the cost), farming (robotic agricultural machinery), etc. We are assuming that the economy is going to invent an entirely new category of employment that will absorb half of the working population.
- Why isn't the economy creating those new jobs now?
Today there are millions of unemployed people. There are also tens of millions of people who would gladly abandon their minimum wage jobs scrubbing toilets, flipping burgers, driving trucks and shelving inventory for something better. This imaginary new category of employment does not hinge on technology -- it is going to employ people, after all, in massive numbers -- it is going to employ half of today's working population. Why don't we see any evidence of this new category of jobs today?
Although I think he goes a bit overboard in some cases (robotic actors?), I still find this to be rather persuasive.
Enlighten us, if you don't mind, as to which results from complexity theory indicate that these things are not possible. By my reading none of them say anything of the sort.
None of the results I know of apply to computers any more than they do to humans, who reason no less algorithmically than computers (even if our algorithms are currently more subtle). These results tend to say that certain magic cannot be performed without invoking new classes of computation, which has no bearing on whether computers can do the kind of practical research humans have been engaging in for our entire history.
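For concreteness, the flavor of result in question is the halting problem's diagonalization, sketched below in Python with a hypothetical `halts` oracle (not a real function in any library; the point of the argument is that no such decider can exist):

```python
# Sketch of the classic halting-problem diagonalization.
# `halts` is a hypothetical oracle claimed to decide, for any function f
# and input x, whether f(x) eventually halts.
def make_paradox(halts):
    def g(x):
        if halts(g, None):   # if the oracle predicts g halts...
            while True:      # ...g loops forever instead,
                pass
        return None          # ...otherwise g halts immediately.
    return g

# Whatever halts(g, None) answers is wrong, so no such oracle exists.
# The limit applies to ANY algorithmic reasoner, silicon or biological;
# it says nothing about everyday practical research.
```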
"Although I think he goes a bit overboard in some cases (robotic actors?)"
Not at all overboard - there are numerous successful animated movies. Granted, they still use humans to provide the voices, but I find it plausible that artificial speech could be constructed.
Jobs are a signal of needs. If all production jobs are eliminated, then that is a good thing. It clearly means the end of scarcity as we know it. Economics as we know it would largely disappear.
If that happens, we humans can just relax and enjoy life as it is, sucking up the system's resources while robots produce ever more stuff for us to consume.
Jobs are not an end in themselves. Rather, they are signals that tell us the demands of individuals in a world of scarce inputs. If robots multiply those inputs until they reach a level where humans cannot possibly consume it all for many years, then that is a good thing.
> What happens when your gas station needs fewer employees because everything involved in pumping and payment is automated (as many are in California)? What happens when checkouts are all automated (starting to happen at Home Depot, Walmart and others)?
Have you ever been to Japan?
> But, and here is the catch, as technology improves, the low-water mark may, in fact, start to rise above where people are capable of competing with technological solutions - then what happens?
Ancient Greece had lots of slaves; ancient Rome had a whole lot of slaves - in fact, their economy collapsed when they ran out of slaves.
The future we can look forward to, in this order:
1. An economy much like modern day Japan.
2. An economy similar to ancient Greece where people busy themselves with pursuits of the mind.
3. An economy similar to ancient Rome where a whole lot of people have nothing to do and we have a vital need for bread and circuses, luckily easily taken care of by robots.
4. Machines as smart as or smarter than man - Singularity!
The thing is, Japan has a relatively homogeneous society and a declining population, which may better equip them to deal with social problems (although they encounter their own social difficulties, e.g. hikikomori, freeters, etc.).
The same technological trends may play out very differently in a more diverse or younger population where competition for resources is more necessary, acute and violent. I'm struck by the fact that a lot of Americans whose careers were invested in manufacturing, and who didn't have an exit or transfer strategy, now feel unable to adapt and tend to become politically polarized as a result. Depending on their political affiliations they may place the blame on capitalism, globalism, China, illegal aliens, computers, or wherever... The underlying point is that there are a lot of people finding themselves in an economic dead end with limited future prospects, and they're very angry about it. Such people tend to be both pessimistic and confrontational.
There will come a time when mankind ceases to exist. The only question is: will we exterminate ourselves, or will we transform ourselves into something greater?
Constraints usually aren't that hard to implement. For example, "computer worms and viruses that defy extermination" is and always will be a computer security problem, not a philosophical AI problem.
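As a minimal sketch of what "implementing constraints" can look like in practice - here, POSIX resource limits on a child process (Unix-only; `untrusted_agent.py` is a hypothetical script name, and a real sandbox would do far more):

```python
# Cap CPU time and memory for a child process before it runs.
# Unix-only sketch; a serious sandbox would also use namespaces,
# seccomp filters, network isolation, etc.
import resource
import subprocess

def limit_resources():
    # At most 5 seconds of CPU time.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    # At most 256 MiB of address space.
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))

subprocess.run(
    ["python3", "untrusted_agent.py"],  # hypothetical untrusted program
    preexec_fn=limit_resources,         # applied in the child before exec
    timeout=10,                         # hard wall-clock cutoff
)
```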
Have you ever seen a BMW factory? - http://www.youtube.com/watch?v=9vRg64HX5gA