

Scientists Worry Machines May Outsmart Man  - lnguyen
http://www.nytimes.com/2009/07/26/science/26robot.html

======
shin_lao
Of course robots will take jobs from human beings.

Have you ever seen a BMW factory? -
<http://www.youtube.com/watch?v=9vRg64HX5gA>

Is it a bad thing to have dumb jobs done by machines?

I'm always amused when I read "stop making progress, there are some risks!".
Of course there are, duh.

As for the singularity, I think it's an overrated theory. Some solutions are not
just a question of intelligence. As if "intelligence" were the solution to
everything... You also need resources, time and luck.

~~~
lsd5you
> I'm always amused when I read "stop making progress, there are some risks!".
> Of course there are, duh.

It's ironic that you caricature one point of view as facile, yet your own point
of view is similarly so - with no caricature needed.

What is being discussed is existential risk, and 'unfortunately' you'll
probably never get to look silly for taking your point of view, because if it
does come to pass we won't be here to discuss it.

To take the concept of making progress and its associated risks and then make
such a generalisation is just not sensible. Progress does not have to keep the
same properties it has had historically (which is why putting too much stock in
history and its wisdom is perhaps dangerous). We could be heading towards a
proverbial waterfall.

The last remark about intelligence seems rather optimistic... resources, time
and luck are largely limiting factors because of our own biological
limitations.

~~~
shin_lao
Perhaps we are simply a link in evolution between biological life and
cybernetic life. Who knows?

I still think however that the Super AI singularity is a phantasm.
Intelligence is not the key to everything. As fast as ideas can go, putting
them into action is the tricky part.

------
jcapote
Am I the only one looking forward to this?

~~~
lsd5you
Apparently not, and I'm somewhat dismayed at the lack of opposition to this
point of view ... so I'll try to make a case against it.

Creating smarter than human AI would be humanity giving up its dominion, and
who can say what will happen as a result of this. It is not being melodramatic
to say that there is a major chance that it will be the end of humanity, the
end of history and the end of everything that most humans value.

I would submit that our values, what we find wise, beautiful, kind, humorous
or otherwise virtuous, our appreciation of the natural world and our legacies,
are all relative to our human condition, and that jeopardising this - for
everyone - is recklessness without comparison. Yet enough of us would do it.

Where is this complicity in our own demise coming from? Are supporters of AI
and the singularity misguided - by their own values - or do we have
irreconcilable philosophies on the matter?

My suspicion is that a lot of support for the singularity comes from death
anxiety - the way things are, in the long run I'm dead anyway, so I may as well
have a punt (50-50?) on immortality, with humanity as the stakes. That is, the
singularity is a plausible alternative to an afterlife. Call me a wanker, but
I think that one should take mortality like a man (i.e. die), and not be so
selfish.

For me, one good, or at least indisputable, reason is disillusionment with
humanity... I would not be able to argue against someone who lived through the
trenches in WWI and held such an opinion, that this world as we know it is just
not worth it. However, I would not agree.

Finally, I think a lot of support for the singularity comes down to pure
hare-brained optimism and reading too much sci-fi. It actually made me a bit
sad to watch the recent Star Trek movie and its clumsy attempts to make the
characters relevant (a sword fight? come on!). Traditional speculative science
fiction used to be about final frontiers and buccaneering captains; now it's
struggling to reconcile the future with anything we might want from a
narrative.

Smarter-than-human AI may mean our story is coming to an end ...

~~~
modeless
The technology to end humanity is already here (nuclear and soon biological
weapons). In the long run superhuman AI may be the only thing that can prevent
us from destroying ourselves.

Even if it were desirable, stopping the advance of technology is impractical.
If superhuman AI is possible, it will be built. A more practical argument
would be that superhuman AI should be strictly controlled, though advancing
technology will eventually make that difficult too. The only solution in the
long (really long) term is for us to become superhuman ourselves and attempt
to preserve the things we find important in the transition.

------
unignorant
"they said there was legitimate concern that technological progress would
transform the work force by destroying a widening range of jobs"

They sound rather crazy... perhaps I am misreading the implication?

~~~
ghshephard
It is a reasonable concern - the foundation of our economy is built on labor
being rewarded with recompense, which in turn drives the services and consumer
goods economy.

Robots don't consume services or goods - all they require is manufacturing and
maintenance. What happens when your gas station needs fewer employees because
everything involved in pumping and payment is automated (as many in California
already are)? What happens when checkouts are all automated (starting to happen
in Home Depot, Walmart, and others)?

The answer, of course, is that people move up the food chain in employment and
are freed from such low-level positions as technology improves. But, and here
is the catch, as technology improves, the low-water mark may in fact rise above
where people are capable of competing with technological solutions - then what
happens?

Imagine a world in which fast food restaurants, grocery stores, and gas
stations were all 95%+ automated. That may happen within the next 10 years.
What are the new jobs for those people? What happens when taxis, trains, and
buses are automated (the SkyTrain in Vancouver, BC has run without drivers on
its trains under normal circumstances for 20+ years)?

There will always (in my lifetime) be jobs that technology won't be able to
replace, but there are many, many, many jobs that are going to disappear.

We need to consider the consequences.

~~~
msluyter
Marshall Brain's "Robotic Nation" explores this in some detail:

<http://www.marshallbrain.com/robotic-nation.htm>

His own view seems to be summed up as:

 _The conventional wisdom says that the economy will create 50 million new
jobs to absorb all the unemployed people, but that raises two important
questions:

- What will those new jobs be?

They won't be in manufacturing -- robots will hold all the manufacturing jobs.
They won't be in the service sector (where most new jobs are now) -- robots
will work in all the restaurants and retail stores. They won't be in
transportation -- robots will be driving everything. They won't be in security
(robotic police, robotic firefighters), the military (robotic soldiers),
entertainment (robotic actors), medicine (robotic doctors, nurses,
pharmacists, counselors), construction (robotic construction workers),
aviation (robotic pilots, robotic air traffic controllers), office work
(robotic receptionists, call centers and managers), research (robotic
scientists), education (robotic teachers and computer-based training),
programming or engineering (outsourced to India at one-tenth the cost),
farming (robotic agricultural machinery), etc. We are assuming that the
economy is going to invent an entirely new category of employment that will
absorb half of the working population.

- Why isn't the economy creating those new jobs now?

Today there are millions of unemployed people. There are also tens of millions
of people who would gladly abandon their minimum wage jobs scrubbing toilets,
flipping burgers, driving trucks and shelving inventory for something better.
This imaginary new category of employment does not hinge on technology -- it
is going to employ people, after all, in massive numbers -- it is going to
employ half of today's working population. Why don't we see any evidence of
this new category of jobs today?_

Although I think he goes a bit overboard in some cases (robotic actors?), I
still find this to be rather persuasive.

~~~
Dilpil
The results of complexity theory heavily suggest that automated science and
automated engineering are impossible.

~~~
ewjordan
Enlighten us, if you don't mind, as to which results from complexity theory
indicate that these things are not possible. By my reading none of them say
anything of the sort.

All the results I know of don't apply to computers any more than they do to
humans, who reason no less algorithmically than computers (even if our
algorithms are currently more subtle); these results tend to say that certain
magic cannot be performed without invoking new classes of computation, which
has absolutely no bearing on whether computers can do the type of practical
research humans have been engaging in for our entire history.

------
die_sekte
There will come a time when mankind ceases to exist. The only question is: will
we exterminate ourselves, or will we transform ourselves into something
greater?

------
tybris
Constraints usually aren't that hard to implement. For example, "computer
worms and viruses that defy extermination" is and always will be a computer
security problem, not a philosophical AI problem.

------
kingkawn
I'm sure Skynet only has our best interests in mind.

------
kingkawn
All assuming of course that sentient machines want anything to do with us.
They may just join the dolphins in swimming around having fun.

------
rw
When it comes to intelligent nonhuman organisms, the question seems to be:
dominate or cooperate?

