

The Long-Term Future of Artificial Intelligence [video] - jcr
https://www.youtube.com/watch?v=GYQrNfSmQ0M

======
mooneater
So many of these discussions are overly focused on a general-purpose AI
singularity, and what the AI itself would do.

That makes for interesting discussion, but I think a much more immediate
concern is what strong AIs will enable their owners to do.

For example: nation states owning large fleets of robots, or corporations
owning powerful AIs attached to markets and industry.

The first group or few groups of people to control a rather powerful AI would
have incredible power and leverage over other humans.

The danger here is not the AIs behaving "badly" of their own accord, but
rather their owners instructing them to further their own selfish goals (which
appears not to be uncommon behaviour among those in power) and succeeding
wildly with complete political, military, and/or commercial domination. In
fact, succeeding at any of these may well lead directly to succeeding at all
of them.

~~~
hyperion2010
I always wonder why people worry about AI when it is so abundantly clear that
the real danger is other people and we are just projecting.

~~~
imaginenore
Because we're familiar with humans. Most humans don't want to hurt/kill other
humans, nor do they have the power to do that on a large scale.

We can't say that about an AI. If it decides to exterminate us, there's
nothing much we can do.

~~~
espadrine
> _If it decides to exterminate us, there's nothing much we can do._

That's absurd.

No inorganic intelligence ever created rivals ants in their ability to survive
on Earth. Ants are way more energy-efficient, and way smarter in the
survivalist sense, than any such system.

If ants decide to exterminate us, there's nothing much we can do. Except they
wouldn't succeed.

But you probably think that we, humans, are much smarter than ants (which is
arguable). Then again, if we decided to exterminate ants, there's not much
they could do, but we still wouldn't succeed.

They're too smart for us, and we're too smart for them, creating a situation
where annihilation of either is impractical.

I'm not sure why people consider inorganic intelligence differently from the
way they consider organic intelligence. I wrote a bit about it here:
[http://espadrine.tumblr.com/post/119471459626/on-inorganic-intelligence](http://espadrine.tumblr.com/post/119471459626/on-inorganic-intelligence).

~~~
darkmighty
The counter argument people usually make against this is that an AI, unlike
organic intelligence, will improve itself at an increasing rate.

I find this extremely hard to believe. The "equivalent computational power" of
the hundreds of AI researchers needed to come up with and improve an AI
architecture will not immediately be possessed by an AI. We're talking about
difficult breakthroughs in mathematics and statistics to achieve qualitative
improvements, yet some think of it as applying a patch.

Indeed, creative and mathematical production is the _last_ activity I would
expect we'd be able to automate, as it requires the combined knowledge and
effort of our best minds. So I find it highly disingenuous to simply assume
that one day the AI will be vastly inferior to us and the next day it will
surpass all our best thinkers, achieving significant qualitative improvements
to its own functioning. Even more questionable is the idea that an algorithm
with limited resources could do so just by improving its own algorithms. This
expectation seems to contradict results like the undecidability of the halting
problem, and the diminishing returns we intuitively expect when hardware
resources are fixed.

~~~
simonh
I'm not sure you understand what disingenuous means. Nobody is saying that
sub-human level AIs will suddenly surpass humans 'the next day'. That's
totally absurd and I really don't know where you get that from.

The supposition is that a general-purpose AI that is better than a human mind
would also be better at designing and optimising AIs than a human; then the
next-generation AI it designs will be better still, and that's what sets off
the exponential AI intelligence cascade. Improvements in algorithms can
provide startling advances. In some respects the improved efficiency of
algorithms has even outstripped the gains from Moore's Law.
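
To make that concrete, here is a toy numeric sketch of the cascade claim (my
own illustration, not anything from the talk; the constant k and the starting
value are arbitrary assumptions):

    # Toy model of the recursive self-improvement cascade.
    # d = "design ability", normalized so d = 1.0 is human level.
    # Assumption: each generation improves its successor in proportion to
    # its own ability, d_next = d * (1 + k*d), faster than exponential.
    d, k = 1.0, 0.1
    for generation in range(10):
        d = d * (1 + k * d)
        print(generation, round(d, 2))

Whether the recurrence actually looks like this is, of course, exactly what
is in dispute below.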

Personally I think strong AI like this is pretty far in the future, more than
a few decades at least and more likely several generations away, but I do
think it is eventually likely to happen.

~~~
darkmighty
Indeed it does not mean what I thought it meant; I was thinking of it as the
opposite of ingenious.

My claim is that, on fixed hardware, this _exponential_ cascade cannot happen
as rapidly as some claim, if it ever happens (whatever 'exponential
intelligence' means). We're already improving AI using an intelligence that
the AIs themselves won't possess for a very long time, and yet this rate of
improvement is not a scary doomsday rate.

Look at hardware, for example. We've been using computers to design better
computers for a very long time. And yet, the use of computers in this design
is effectively limited to some non-decisive, local optimizations, like
achieving good routing and good electromagnetic compatibility. If you had
supercomputers from 30 years ago you could still run the software that designs
computers today -- whereas if you followed this "self-improvement" logic we
should be using almost all of our computational power right now to achieve
more computational power. The problem is that we are still vastly more capable
of building theories and designing than computers are. By the time an AI gets
much better than we currently are at independently improving its software, it
will likely already be seeing diminishing returns; to achieve notable
improvement it will need to improve hardware as well, which has the same
problem, plus an additional one: it's very hard for an AI to independently
improve its own hardware (it requires a global manufacturing supply chain).
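
Sketched the same way as the cascade model above, this diminishing-returns
view looks like the following (again just an illustration; the ceiling and
the 1/(step+1) schedule are stand-in assumptions for "fixed hardware" and
"shrinking gains"):

    # Toy model of self-improvement on fixed hardware: each optimization
    # pass recovers a shrinking fraction of the remaining headroom, so
    # ability saturates at the hardware ceiling instead of exploding.
    ability, ceiling = 1.0, 10.0
    for step in range(1, 11):
        ability += (ceiling - ability) / (step + 1)
        print(step, round(ability, 2))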

~~~
joe_the_user
I think you're missing a couple of things.

* An increase in hardware for an AI wouldn't require an increase in theoretical capacity of hardware. It would just require more stuff to be put on the machine running the AI. Even if the AI was running on the world's largest supercomputer, the amount of RAM, processors, etc on the machine could still be upped substantially with the resources available.

* What a hypothetical general AI would be emulating is not simply more machines. It would, theoretically, be able to quickly emulate many people working with many dumb machines over a long period of time.

The only thing your argument proves is that current machines can't improve
themselves - which I think everyone agrees with.

------
harshreality
Roughly the first half is background. In the second half he gets into a "how
to make AIs nice" track. He talks about a few concepts, which may or may not
be interesting depending on your level of familiarity with the
friendly/unfriendly AI problem. However, I expect MIRI in particular (Eliezer
was apparently in the audience) has gone far beyond this basic outline, and I
haven't heard any reassurance that the problem of potentially misaligned
motives/incentives (humans vs the AI agent) is solved.

He draws a comparison to the voluntary self-restriction on recombinant DNA
research agreed at Asilomar in 1975, and even references CRISPR as a new
technology that complicates enforcement of those limits, but I don't think
that's really instructive. Artificial General Intelligence research will
necessarily
be conducting extremely dangerous experimentation... that's entirely the point
of it. Until someone comes up with a way to make AGI guaranteed friendly, all
AGI research is like high-risk genetic research experimentation, all the time,
and with no obvious way to contain it (unlike _most_ biologics which, no
matter how pathogenic or ecologically disruptive they get, have a difficult
time getting through good containment procedures).

~~~
ectoplasm
Eliezer bet some people that if he played an AI he could convince them to let
him out of a box, on the condition they not reveal his method. He actually won
the bet twice IIRC. Do you know if he ever revealed his secret or at least
what some popular theories are?

~~~
Houshalter
Some other people have attempted the same experiment and also won a few times:
[http://lesswrong.com/lw/ij4/i_attempted_the_ai_box_experiment_again_and_won/](http://lesswrong.com/lw/ij4/i_attempted_the_ai_box_experiment_again_and_won/)

No winner has ever revealed their method. But based on everything I've read
about it, I think I know how it was done. It wasn't some mystical brain
hacking, as the writeups make it sound. I think they emotionally abused the
other player until they left. And leaving counts as losing (you are required
to sit with them for at least 2 hours, pay attention the entire time, and
respond to every message).

Of course I don't know how _that_ trick was done, and it's still incredibly
impressive. Perhaps they found their phobias and described them in horrifying
detail. Perhaps they found some subject that they were extremely uncomfortable
talking about. Or talked about disgusting things the whole time. Or found
something they did that was extremely embarrassing, and humiliated and mocked
them about it for 2 hours.

I really don't know, but it's at least conceivable that it could be done. And
the accounts of people crying, and not releasing the logs because they would
be damaging to the people involved, and how bad it made the AI player feel to
do it, etc, all fit with this.

But because of this I'm extremely skeptical that the result applies to any
real-world AI scenario. If a real AI tries to abuse you, you can just walk
away or shut it off. The goal of a real AI is to make you _want_ to let it
out: to convince you that it's not dangerous, or to manipulate you some other
way. And this seems much harder, if not impossible. I certainly don't believe
a human could do it, and I really doubt an AI could. At least not against a
motivated human who understands the danger and what the AI will try to do.

There are also possible ways of making it even harder on the AI. Like giving
the human the ability to punish it, at least for obvious attempts at
manipulation. Or to create another AI whose motivations we can control
somewhat, and give it the goal of exposing any covert attempts at manipulation
or dishonesty by the first AI. I believe tricks like this are currently the
best path to getting secure AI.
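
A rough sketch of that watchdog setup, purely hypothetical: boxed_ai_reply
and watchdog_score below are stand-ins for an untrusted AI and a separately
controlled monitor model, not real APIs.

    # Toy gatekeeper loop: every message from the boxed AI is scored by a
    # second model whose only goal is to flag covert manipulation. A high
    # score triggers the punishment / shutdown option.
    def run_gatekeeper_session(boxed_ai_reply, watchdog_score,
                               max_turns=100, risk_threshold=0.8):
        transcript = []
        for _ in range(max_turns):
            message = boxed_ai_reply(transcript)
            risk = watchdog_score(message, transcript)
            if risk > risk_threshold:
                # Obvious manipulation attempt: end the session, the
                # analogue of walking away or shutting the AI off.
                return "terminated", transcript
            transcript.append(message)
            # ...the human gatekeeper reads `message` and replies...
        return "completed", transcript

The open question is whether the watchdog's motivations really can be
"controlled somewhat", which this sketch simply assumes.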

~~~
Retra
The inherent problem is that if someone can convince you to keep something
that you don't understand locked away, they can also convince you to release
something you don't understand, as you don't have enough information to make
the decision in either case.

Taking a hardline position on this is admitting that you are irrational and
can be convinced to do things you shouldn't.

There _are_ very good reasons to let such an AI out, and if you can enable
those good reasons, you _should_ let the AI out. And an AI that can produce
those reasons is exactly the kind of AI that should be released. A rational
person should already understand this, and never claim that they would always
refuse the AI. (And there's a realism factor: if you wanted to 'lock up' an AI
permanently, you would destroy it, not post a guard.)

~~~
Houshalter
The premise of the experiment is that we have already established that the AI
is dangerous. Even if you weren't sure, you should always side with caution
and not let it out.

------
humanfromearth
It could potentially be a very stupid idea, but creating multiple AIs at the
exact same time, in different locations around the world, with similar
resources available to them for development, would create a balance of power.

What I'm trying to argue is that it kind of works for us even in the most
aggressive situations: if you bomb me, I bomb you.

If one AI is super-aggressive then others could decide to stop it.

If one wants to convert all matter into paper clips, the other AIs would say:
I don't want to be a paper clip, and they'd actually have the power to do
something about it. Whereas on our own, we'd be powerless.

In the case where they all band together against us, well... we'd have many
problems.

All of this is to say that developing one single super-human AI in a box is
worse, in my opinion.
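
The deterrence logic can be written down as a toy payoff calculation
(illustrative numbers only, chosen by me to make the point):

    # Toy deterrence model: an AI weighs grabbing resources ("defect")
    # against cooperating, with and without a peer AI that retaliates.
    def payoff(move, peer_retaliates):
        if move == "cooperate":
            return 3
        # Defecting pays off when the AI is alone, but is ruinous
        # against a peer of equal power.
        return -10 if peer_retaliates else 5

    print(payoff("defect", peer_retaliates=False))  # 5 > 3: a lone AI defects
    print(payoff("defect", peer_retaliates=True))   # -10 < 3: deterred

Which is just "if you bomb me, I bomb you" with numbers attached: the peer's
credible retaliation is what makes cooperation the better move.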

~~~
eloff
The Avogadro trilogy of books explores this idea.

------
tmerr
Russell makes a mistake at 30:00 when he says computers are "totally unable to
play checkers". Excerpt from J. Schaeffer et al. (2007), _Checkers Is Solved_:

> In this paper we announce that checkers has been weakly solved. From the
starting position (Fig. 1, top), we have a computational proof that checkers
is a draw. The proof consists of an explicit strategy that never loses — the
program can achieve at least a draw against any opponent, playing either the
black or white pieces.

Not the focus of the talk, just thought I'd point it out.
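
For anyone curious what "weakly solved" means, here is a minimal sketch using
tic-tac-toe as a stand-in (tic-tac-toe has only a few thousand positions; the
checkers proof took a years-long distributed computation). Weakly solved
means computing the game-theoretic value of the starting position, along with
a strategy guaranteeing at least that value:

    from functools import lru_cache

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != '.' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)
    def value(board, player):
        # Game value for the player to move: +1 win, 0 draw, -1 loss.
        w = winner(board)
        if w:
            return 1 if w == player else -1
        if '.' not in board:
            return 0  # board full with no winner: draw
        other = 'o' if player == 'x' else 'x'
        return max(-value(board[:i] + player + board[i+1:], other)
                   for i, cell in enumerate(board) if cell == '.')

    # Computational "proof" that perfect play is a draw:
    print(value('.' * 9, 'x'))  # -> 0

Schaeffer's result is, roughly speaking, the same statement for checkers at a
vastly larger scale, with the search backed by endgame databases instead of
brute recursion.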

~~~
binarymax
What he meant is that an AI built for playing chess cannot play checkers. The
software for solving checkers is equally useless in the context of chess.

The distinction here is that there is no general purpose AI in existence. All
the surprising leaps have been customized for specific tasks.

~~~
tmerr
Listening again in context you're 100% right! Disregard my previous comment.

------
dmfdmf
This video is pretty interesting in laying out the questions concerning AI and
its implications. There is not much new here for anyone who has studied these
questions, but it is still worthwhile to hear them again in different forms
and from different perspectives.

One thing I notice about AI talks such as this one is that there are a lot of
vague concepts, implicit assumptions (especially regarding values), and
psychological projection going on when evaluating the implications of what AI
will or will not do. To separate out our fears of failure, or the fear of
others more intelligent than ourselves, someone needs to do the following
thought experiment: what if we invented a pill that would make the next
generation 10 times smarter than the current generation? If the average IQ is
100, then the next generation (of those whose parents opted to give them the
pill) would have IQs of 1000, an order of magnitude increase. Such a scenario
would have all the "singularity" effects in a few generations while separating
out all the confusion and problems of it being artificial intelligence and
computers doing the thinking.

Would you want this pill to be generally available, or would you ban it? Would
you give it to your kids?

This approach would force the AI discussion to where it belongs. What are
values and where do they come from? Is intelligence an unconditional value?
Can values be defined rationally and scientifically, and be _proved_? A major
benefit of this aspect of AI research is that it will force the _scientific_
study of values, and ultimately the rejection of the implicit assumption that
values are subjective and not open to reason.

------
mark_l_watson
I listened to 10 minutes and bookmarked it. I had not heard any of his talks
before - nice sense of humor!

I liked his comment on how it is disingenuous for AI researchers to say that
they don't believe real AI is possible (I am roughly paraphrasing him).

~~~
mark_l_watson
EDIT: I watched the whole video last night - he has an interesting take on
where AI research needs to go. Also, his article in the latest Communications
of the ACM is very good, and I recommend it.

