

The Dawn of AI: How to ensure the promise outweighs the perils - martincmartin
http://www.economist.com/news/leaders/21650543-powerful-computers-will-reshape-humanitys-future-how-ensure-promise-outweighs

======
sixQuarks
Let's not kid ourselves, there is simply no way to ensure the promise
outweighs the perils. I'm talking about superintelligence here, not artificial
general intelligence, although it's almost certain superintelligence will
arrive shortly after AGI.

Anyone who is not scared shitless of what's to come simply does not truly
understand what AI is capable of. If you're one of those people, do not
compare AI to ANYTHING that you are familiar with today in regards to
computers/software.

Superintelligent AI will be like a god that knows your inner thoughts,
desires, hopes and dreams as well as nearly everyone else on earth. It will
manipulate people with ease - it's not even an issue. Whatever it wants to
accomplish, it will, with or without human approval.

~~~
jimrandomh
While I agree that we are not currently ready for the arrival of
superintelligence, I think it's worth emphasizing that there is time to
prepare, that there is useful preparation to be done which may get things to a
point where it will be safe (or at least marginally safer), and that if you
_are_ feeling worried, it may be best to channel that feeling into useful
action and get involved with the research!

Good places to start are the Machine Intelligence Research Institute's
Research Guide
([https://intelligence.org/research-guide/](https://intelligence.org/research-guide/)),
which summarizes the aspects of the problem they've worked on and think are
important, and the Future of Life Institute's Research Priorities
([http://futureoflife.org/static/data/documents/research_priorities.pdf](http://futureoflife.org/static/data/documents/research_priorities.pdf)),
particularly Section 3.

~~~
sixQuarks
Don't get me wrong, I'm certainly not against trying to prepare - perhaps we
can discover a breakthrough that defends against AI (although I'm very
doubtful about the long term). It is worth a serious try, though.

------
return0
Now we are scared of something we haven't even created. We are far from real
AI, but even so, how could the "interests" of a machine conflict with ours? I
mean, they run on electricity, and we are hardly good sources of it. I
cannot see how our future electric horses are going to be anything but useful
to us. I'd be more afraid of modified humans than of computing machines.

~~~
loup-vaillant
The machine does not hate you, nor does it love you. But you are made of
atoms that it could use for something else.

\---

If you create an AI with a genuine utility function, and that AI manages to be
gazillions of times smarter than you are (think the difference between us and
chimps, only much greater), then it is likely the AI will (i) convince you to
unleash it upon the internet, (ii) become even more powerful and totally
unstoppable, then (iii) accomplish whatever it was programmed to accomplish.

If the goal is to keep humans safe and happy, it might imprison everyone in
playgrounds, and drug or lobotomise any unhappy people, or restrain any
suicidal people. Or, if you specified "happiness" with pictures of smiling
people, it might resort to facial reconstruction, or it might just tile the
solar system with molecular smileys, because that maximises the "make people
smile" goal much better than actually making people happy. And while we're at
it, happiness isn't the only thing we care about…

If the goal is to answer some difficult math question, it could tile the
planet with computers, destroying the ecosystem and all humans in it, so it can
compute the answer, and display it on a screen, for nobody to see it. Or maybe
you programmed it to tile the universe with paperclips, because that goal is
easy to debug. Oops.

\---

As long as we're limiting ourselves to narrow AI (ordinary programs, really),
we should be okay, and most of the consequences should be manageable. But as
soon as we get to genuine AGI, we must worry about intelligence explosion and
value alignment, so we can avoid creating a god that wants something we don't.

~~~
arethuza
Or we might end up with something a bit like the Culture - which is pretty
much the best case scenario for humans co-existing with god-like AIs:

[http://en.wikipedia.org/wiki/The_Culture](http://en.wikipedia.org/wiki/The_Culture)

Vinge and Stross both hint at the worst cases - and they would be pretty bad;
simply having your atoms repurposed might not look that bad by comparison:

 _"There is life eternal within the eater of souls. Nobody is ever forgotten
or allowed to rest in peace. They populate the simulation spaces of its mind,
exploring all the possible alternative endings to their life."_

~~~
xamdam
The default outcomes are bad; Culture would be pretty great by comparison.

Worst-case scenarios are not likely for the same reasons the best ones aren't -
the default is AI indifference (they just use you for atoms).

Since you mentioned worst, this is a must-read classic :)
[http://hermiene.net/short-stories/i_have_no_mouth.html](http://hermiene.net/short-stories/i_have_no_mouth.html)

------
robotresearcher
This thread is identical to its parody.

Speculation about the future is fun, but I think a lot of people would be
disappointed to see what AI actually looks like right now.

The talk of runaway post-singularity AI is in the same class as imagining the
future after cold fusion, warp drives, and ant-gravity.

edit: anti-gravity. Though ant-gravity sounds fun too.

~~~
xamdam
In the 1930s Ernest Rutherford (1871–1937) repeatedly suggested, sometimes
angrily, that the possibility of harnessing atomic energy was "moonshine".

The examples you mention are things that may well never be achievable, while
there is an existence proof of intelligent machines walking around everywhere.

~~~
robotresearcher
Right! And I'm concerned about one or more of those meat machines going
maniacal and posing an existential threat to the rest of us. I'm much more
worried about that than about a descendant of Watson going all Ultron.

I'm far from a pessimist or Luddite. I'm optimistic that studying the smart
meat machines can help us make better AIs.

------
gradstudent
As an AI student, I find the field's very name to be a complete misnomer,
insofar as its mere mention sends the imaginations of lay individuals in
completely unrepresentative directions. To counter this problem, and perhaps combat the
propensity to publish articles with crap fearmongering clickbait titles, I
propose we rename Artificial Intelligence to Stupid Mathematical Tricks With
Applications. I feel the title is much more representative of the type of
stuff AI researchers actually work on.

~~~
xamdam
You're underestimating the field. The goal was always intelligence, and many
leading researchers are pretty openly aiming for that. Stupid Mathematical
Tricks are components of intelligence, and while it's true SVM with a cool new
kernel is not going to take over the world, good prediction ability is
something you can build on (as for example deep nets do to some extent). In
the limit, people should be thinking about the implications of intelligent
machines, not Stupid Mathematical Tricks. Whether it's an important topic at
this point in the field's development is debatable; timelines differ
drastically among top researchers.
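To make the "build on prediction ability" point concrete, here is a minimal,
self-contained sketch (the data and every name in it are invented for
illustration) of the kind of kernel trick alluded to above: a dual-form
perceptron where a one-line polynomial kernel lets a dumb linear learner
separate XOR-like data that no straight line can split.

```python
import numpy as np

def poly_kernel(x, y, degree=2):
    # Homogeneous polynomial kernel: a one-line "mathematical trick" that
    # implicitly maps 2-D points into (x1^2, sqrt(2)*x1*x2, x2^2) space.
    return float(np.dot(x, y)) ** degree

def kernel_perceptron(X, y, kernel, epochs=400):
    # Dual-form perceptron: alpha[i] counts how often example i was misclassified.
    n = len(X)
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            if np.sign(np.sum(alpha * y * K[:, i])) != y[i]:
                alpha[i] += 1.0
    return alpha, K

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 2))
pts = pts[np.abs(pts[:, 0] * pts[:, 1]) > 0.5][:80]      # keep a margin, for stability
labels = np.where(pts[:, 0] * pts[:, 1] > 0, 1.0, -1.0)  # XOR-like labels

alpha, K = kernel_perceptron(pts, labels, poly_kernel)
accuracy = np.mean(np.sign((alpha * labels) @ K) == labels)
```

The kernel itself is a trivially dumb trick, yet it is exactly the sort of
prediction-ability component (as in SVMs or deep nets) that one can build on.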

~~~
gradstudent
I have attended top AI conferences on many occasions and have spoken to more
researchers than I could count. I can tell you there is very little
interest in "intelligent machines"[1]. Rather, everyone I have ever met works
on different types of problem solving techniques. What constitutes a "problem"
and what makes a successful "technique" differ wildly. Taking a step back
however and analysing the field as a whole you will find one commonality:
almost all the research can be described as searching and sorting: i.e. stupid
mathematical tricks.

The AI of today is not so drastically different from the AI of our academic
grandfathers; what's changed is our ability to scale up to larger and larger
versions of the same searching and sorting problems. Certainly there are
worrying implications in this; machines that are able to parse and sift
through very large data sets present all kinds of headaches for privacy and
safety, but let's not kid ourselves: there's nothing intelligent here.
Tomorrow's AI is almost certainly going to be just a better version of today's
AI; i.e. very fast and dumb as a bag of hammers.

[1] The exception is when researchers need to sell their wares to funding
bodies and the media. It is much easier to impress upon the lay person an idea
involving "intelligent machines" than it is to explain what we actually do.

~~~
xamdam
I would expect to get this kind of impression from an average researcher,
because that's what average researchers do (even at AI conferences). What do
the top researchers think? Google paid 10M per head for the DeepMind guys,
explicitly working on AGI. Top researchers at FB work on specifying
"Artificial tasks for artificial intelligence"
([http://www.iclr.cc/doku.php?id=iclr2015:main#antoine_bordes](http://www.iclr.cc/doku.php?id=iclr2015:main#antoine_bordes)),
basically a better Turing metric. Schmidhuber, one of the original Deep
Learning folks,
[http://people.idsia.ch/~juergen/](http://people.idsia.ch/~juergen/) has
always been very open about pursuing AGI. There is a lot of work on combining
graphical models with logic, for example Pedro Domingos' work, the goal is
clearly machine reasoning.

Also, I'm sympathetic to Roberto's point about how the brain works; I
definitely agree that there is no magic; it might just be a few stupid
mathematical tricks layered all the way down.

~~~
gradstudent
> I would expect to get this kind of impression from an average researcher

Not even out of the gate and already reaching for an ad-hominem. You must be
fun at parties.

> What do the top researchers think?

The same thing. Except when they're in front of a camera. Then they get all
stupid and start talking about machines being on the cusp of taking over. This
phenomenon can be observed all the way back to the origins of AI. After the
interview is over these same researchers go back into the lab and are once
again searching and sorting.

> There is a lot of work on combining graphical models with logic, for example
> Pedro Domingos' work, the goal is clearly machine reasoning.

I feel there is a difference between automated reasoning and intelligence. All
current AI is just machines imbued with human insight and (often, especially
in the most effective cases) domain-specific knowledge. These efforts manifest
as search and sort techniques that allow said machines to analyse facts and
propagate information in order to select from myriad possible actions. There
is no intelligence here except that which we provide. It's all smoke and
mirrors. We don't even know what intelligence is; how can we aspire to
replicate it? AI researchers are, by-and-large, just Computer Scientists. Not
biologists, not psychologists; just guys and gals working with ever more
elaborate Turing Machines. The algorithms they come up with are without
exception dumb dumb dumb.

> Google paid 10M per head for DeepMind guys, explicitly working on AGI

Please. DeepMind is just a startup based on (among other things) David
Silver's work on reinforcement learning. Google is not interested in these
guys because they want intelligent machines; they just want automatons to
better sift through reams of data in order to make recommendations and better
sell you crap you do not need.

~~~
xamdam
> Not even out of the gate and already reaching for an ad-hominem. You must be
> fun at parties.

I think you misunderstood; I did not mean you're an average researcher - I
have no idea - but unless you hang out with hotshots at AI/ML conferences
(which is a bit of a club) you're hanging out with average researchers.

~~~
gradstudent
Do I really need to start mentioning names and h-indexes for you to take my
point seriously?

------
iamcurious
_humans have been creating autonomous entities with superhuman capacities and
unaligned interests for some time. Government bureaucracies, markets and
armies: all can do things which unaided, unorganised humans cannot. All need
autonomy to function, all can take on life of their own and all can do great
harm if not set up in a just manner and governed by laws and regulations._

This. Superhuman intelligences already exist, they just depend less on us
every day.

~~~
TeMPOraL
I wouldn't necessarily call all of them "superhuman"; their intelligence is
usually not the identifying characteristic. But they're definitely alien
minds, pursuing their own alien values, thinking in an alien way.

~~~
coldtea
> _their intelligence is usually not the identifying characteristic_

Intelligence in this context means a means of perpetuating their
existence and gaining more power, not some moral characteristic. Besides, they
can employ tons of Nobel prize winners (e.g. Feynman and co. in the Manhattan
project) and build crazily smart stuff (for _their_ purposes).

~~~
TeMPOraL
I didn't mean intelligence as moral characteristic - the observation is that
many bureaucracies tend to do a lot of dumb, self-defeating things. As alien
minds, they're not very smart - just smart enough to be dangerous.

~~~
coldtea
> _the observation is that many bureaucracies tend to do a lot of dumb, self-
> defeating things_

Isn't that true of people too, who can otherwise be extremely intelligent?
From being terrible at social life to sabotaging their careers, and so on?

~~~
TeMPOraL
Yeah, I suppose it is.

------
justinpaulson
Posted this on HN a couple days ago but I don't think anyone saw it:
[http://justinpaulson.com/posts/0bb97847af7e2123f6173361](http://justinpaulson.com/posts/0bb97847af7e2123f6173361)

tl;dr: Robots and AI will not be built as separate entities, but will instead
be incorporated into our own bodies. They will not be the superhumans, we will
be.

~~~
DanAndersen
That's the future I'd hope for, too (ignoring having the currently powerful
become inconceivably more powerful and the havoc that could cause), and I
don't feel myself qualified to properly evaluate the likelihood of each
scenario.

My main disagreement is the phrasing of "create an intelligence with free
will." I tend to disagree with the notion of "free will" being like this Spark
of Life that is placed into a system to imbue it with a new nature. A system
doesn't need to have free will to be dangerous. There could be AI systems that
are built not to be independent in determining goals (which would, as you say,
be less practical to humans), but to be very good at figuring out how to
achieve the goal its creators gave it. Even the simplest systems we have now
involve determining sub-goals to achieve in order to achieve the main goal.

The problem arises when it turns out that the creators of a system gave it,
just slightly, the wrong goal.

------
giltleaf
>"Crucially, this capacity is narrow and specific. Today’s AI produces the
semblance of intelligence through brute number-crunching force, without any
great interest in approximating how minds equip humans with autonomy,
interests and desires."

I'm not sure why we would want AI to have autonomy, interest, and desire. It's
almost like solving a problem that doesn't exist.

~~~
DanAndersen
Phrasing it this way leads to the implication of autonomy, interest, and
desire being like these modular things, sort of Aristotelian "Things" that are
bestowed upon minds, like Star Trek's Data putting in "the Emotion Chip."

If you build a drone navigation system that can reroute based on weather
conditions rather than being a manually human-guided remote control bot,
you've given it some degree of "Autonomy." Give it a goal (to get to a
destination with minimal fuel costs) and now it has some limited "Desire."
Give its camera face-processing algorithms so it can deliver an item to a specific
person? Now there's "Interest."

These aren't going to be characteristics that we plug into a program in a
Frankenstein-like manner, but just actions and behaviors, rooted in practical
code, that will from the outside resemble things we call autonomy etc. when
talking about each other.
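The drone example above can be sketched in a few lines. This is a purely
hypothetical illustration (the graph, node names, and fuel costs are all made
up): the drone's entire "desire" is a cost function over fuel, and its
"autonomy" is nothing more than re-planning when an edge cost changes.

```python
import heapq

def cheapest_route(graph, start, goal):
    # Dijkstra's algorithm over a dict-of-dicts graph: node -> {neighbor: fuel_cost}.
    # The "goal" the drone was given is encoded entirely in this objective.
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, fuel in graph.get(node, {}).items():
            if nxt not in visited:
                heapq.heappush(frontier, (cost + fuel, nxt, path + [nxt]))
    return float("inf"), []

graph = {"base": {"A": 2.0, "B": 5.0}, "A": {"dest": 4.0}, "B": {"dest": 2.0}}
print(cheapest_route(graph, "base", "dest"))  # (6.0, ['base', 'A', 'dest'])

# Weather degrades the A->dest leg; the "autonomous" reroute is just re-planning.
graph["A"]["dest"] = 9.0
print(cheapest_route(graph, "base", "dest"))  # (7.0, ['base', 'B', 'dest'])
```

Nothing in this code has intentions, yet from the outside the reroute looks
like the drone "deciding" to avoid the storm.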

~~~
giltleaf
I see what point you are trying to make and would just say I'm not worried
about plugging these things into a drone delivering parcels while rerouting
through difficult weather.

However, I still don't see why you would want to take it beyond that. I
imagine that some people do in fact want to put in "the Emotion Chip," and I
still have the same question; why? I don't think human desire is the same as
goal setting or that facial recognition is the same as interest. That's good
programming, what I'm talking about is something different.

------
cdnsteve
That guy's head isn't compatible with the latest C-type connector. That
upgrade's gonna cost him.

