
Machine intelligence, part 2 - dmnd
http://blog.samaltman.com/machine-intelligence-part-2
======
antics
On the one hand we have lay people (including Altman and Musk) who are warning
that AI is progressing too fast. On the other hand you have (essentially) the
entire mainstream academic AI/ML community, who are mainly concerned that the
hype is so intense it might lead to another AI winter[1].

The crux of the problem is that AI experts largely do not see meaningful
progress on any axis that makes an AI doomsday scenario plausible, which
causes staunch disbelief and ridicule of people who write things like this.

So realistically if we want to resolve this issue, the place to start is not
public policy. That's actually part of the problem -- tech policy is bad
largely because it is mostly legislated by people who don't understand the
technology. We have to start with a discussion about how this progress towards
doomsday AI should be measured, and then formulate policy to directly address
those scenarios. As long as Altman et al are extrapolating on axes I (for one)
don't understand, I honestly think this conversation is unlikely to be
productive.

[1] Most notably, a similar frenzy happened when neural networks first became
popular. There were some promising early results, which led to much
enthusiasm, and then Minsky and Papert published "Perceptrons", whose Group
Invariance Theorem cast doubt on whether simple neural networks were really
doing what we thought they were. Eventually we found out that many early
experiments just had flawed methodology, and research interest in the field
very nearly died for almost a decade. Now we are in a period of
AI hype again, and when you hear people like Andrew Ng get up in front of
audiences and say things like "for the first time in my adult life I am
hopeful about real AI", you might start to wonder whether there is a lesson
here.

EDIT: Friends, I am confused by the downvotes. :) I am happy to have a
discussion about this topic!

~~~
lukeprog
> On the other hand you have (essentially) the entire mainstream academic
> AI/ML community, who are mainly concerned that the hype is so intense it
> might lead to another AI winter

This doesn't seem true. E.g. the signatories on the Future of Life Institute's
recent open letter about AI progress and safety include:

* Stuart Russell & Peter Norvig, co-authors of the #1 AI textbook

* Tom Dietterich, AAAI President

* Eric Horvitz, past AAAI President

* Bart Selman, co-chair of AAAI presidential panel on long-term AI futures

* Francesca Rossi, IJCAI President

* All founders of DeepMind and Vicarious, two leading AI companies

* Yann LeCun, head of Facebook AI

* Geoffrey Hinton, top academic ML researcher

* Yoshua Bengio, top academic ML researcher

...and many more.

~~~
AndrewKemendo
Sorry Luke. As usual your fearmongering is not going to get far. For example -
MIRI skeptic and famed AGI researcher Ben Goertzel signed the letter (as did
I) but not because we are primarily concerned with the threat. Rather:

 _I signed the document because I wanted to signal to Max Tegmark and his
colleagues that I am in favor of research aimed at figuring out how to
maximize the odds that AGI systems are robust and beneficial._

The letter helps give the field visibility and, to that end, better avenues
for _funding_ AGI projects.

[1][http://multiverseaccordingtoben.blogspot.hk/](http://multiverseaccordingtoben.blogspot.hk/)

~~~
lukeprog
I know for a fact that at least 7 of the top AI people that I listed above
think intelligence explosion / superintelligence is plausible and that it's
not at all clear that will go well for humans. I'm pretty sure _all_ of them
think AGI will be feasible one day and will be fairly hard to make reliably
safe, at the very least to the degree that current safety-critical systems
require a bunch of extra work to make safe (beyond what you'd do if your AI
system has no safety implications, e.g. detecting cats on YouTube).

I can't say which people believe which things, because most of them
communicated with me in confidence, except for Stuart Russell and Shane Legg,
who have made public comments about their worries about intelligence
explosion:

[http://futureoflife.org/PDF/stuart_russell.pdf](http://futureoflife.org/PDF/stuart_russell.pdf)

[http://lesswrong.com/lw/691/qa_with_shane_legg_on_risks_from...](http://lesswrong.com/lw/691/qa_with_shane_legg_on_risks_from_ai/)

~~~
AndrewKemendo
There is a big difference between acknowledging the potential of _a threat_,
which I agree that everyone does, and advocating an intentional slowing of
(or rather, a refusal to accelerate) development, or alternatively advocating
regulation until we get "Friendly AI" figured out, as MIRI does.

A world of difference actually.

~~~
Micaiah_Chang
You seem to be implying that intentional slowing is MIRI's official stance,
without showing any support for that. Given that the original comment was
responding to the specific accusation of "no AI experts say that this is a
problem", I think you are reading too much into this. I agree that it is
slightly
disingenuous for lukeprog to have posted that list, but I feel your
disagreement is far too uncharitable and motivated to be productive.

Disclaimer: I have read a bunch of the LessWrong "canon" and believe many of
their points, sans perhaps the timescale on which recursive self improvement
can happen. I think most of my acceptance comes from the relatively poor
quality of their critics, who seem to attribute many strawmanish positions to
them or who seem more concerned with namecalling-via-cult.

I cannot help but think: if a better criticism exists, why hasn't anyone
said it yet?

~~~
AndrewKemendo
 _I cannot help but think: if a better criticism exists, why hasn't anyone
said it yet?_

The whole lesswrong/MIRI thing is built around the strawman of unfriendly AI,
as though it were already real. What you are asking is, why aren't there
better arguments against strawmen and radical pontification?

It's like if I said, "There is a chance that mean aliens will come soon,
therefore we need to start building defense systems." Ok, show me any proof
that there are aliens coming or even an avenue for aliens to come here and be
unfriendly.

Sure, it's a possibility that there are mean aliens who are going to attack
us, but there is absolutely no reason to think that is going to happen soon.

Granted, this is not a perfect analogy but I think it makes my point well. I
am uncharitable because it's charlatanism and seems to be gaining traction -
in the same vein as antivaxxers.

~~~
Micaiah_Chang
Even antivaxxers have opponents patient and understanding enough to
_actually_ debunk their object-level thinking. They say that
vaccines cause autism? We say that the original study was wrong! Why do you
get to be even less charitable than "pro"-vaxxers?

Consider that the general position seems to be "we don't know when SMI would
happen, but if we use the metric _that our critics say is more reliable than
ours_, namely surveys of AI experts, then human-level AI looks possible in
about 30-50 years at 50% probability." (numbers are quoted from memory and
most likely incorrect)

Yet here you are saying that MIRI/LW claims "it's a thing that's going to
happen soon", that there is "absolutely no reason to think that is going to
happen". I
can't help but think most of the "charlatanism" you see is manufactured in
your own head and not a product of reading and understanding your opponent's
position.

Yes, this is an unabashed _ad hominem_, but I wish people would attack
arguments that exist rather than arguments that are easy to knock down. It's
upsetting to me that, when I say that your arguments are lacking, your
response is "So what? The Enemy is Evil and Stupid and I do not need to
understand them."

------
fiatmoney
"The US government, and all other governments, should regulate the development
of SMI"

This has basically the highest insanity * prominence of speaker product I've
ever seen. The same geniuses who want to ban encryption and can't manage to
get a CRUD website up and running without an 8-figure budget will regulate the
existence of algorithms (i.e., math) routinely generated by groups of <4 or so
researchers.

Of course, people will be glad to "demonstrate their capabilities" to the
group that's responsible for shutting them down if they're "too capable".

"it’s important to write the regulations in such a way that they provide
protection while producing minimal drag on innovation (though there will be
some unavoidable cost)."

Has this man ever seen a government regulatory framework operate?

"To state the obvious, one of the biggest challenges is that the US has broken
all trust with the tech community over the past couple of years. We’d need a
new agency to do this."

Don't worry guys! This is a _new agency_!

"Regulation would have an effect on SMI development via financing—most venture
firms and large technology companies don’t want to break major laws. Most
venture-backed startups and large companies would presumably comply with the
regulations."

Here's the nugget. The only way this article makes any sense at all is as a
completely disingenuous argument devoted to making it impossible to invest in
technology companies, period (please tell me what "technology company" does
not have at least some algorithmic or optimization component), without having
a large compliance department and having greased the right palms. I don't
particularly think he's that Machiavellian, but it seems like that or the
alternative...

~~~
frabcus
The US Government regulates all sorts of things that keep you safe - from
pollution of your water supply through to seatbelts.

It's not perfect, and it often fails at regulation. But your post doesn't
help us understand when it succeeds and when it fails.

~~~
baddox
The trouble with evaluating regulations is that if you attribute positive
effects directly to government regulations (e.g. "regulation X saved Y lives
by keeping dangerous drug Z off the market"), it's only fair to also attribute
negative effects directly to government regulations (e.g. "regulation X killed
Y people by delaying the introduction of miracle drug Z to the market"). And
even beyond that, you also need to analyze the costs of the regulatory system
itself if you want to make conclusions about the net effects of the regulatory
system as a whole.

------
blake8086
Eliezer wrote something relevant to these types of proposals _eight years
ago_:
[http://lesswrong.com/lw/jb/applause_lights/](http://lesswrong.com/lw/jb/applause_lights/)
when he asked someone calling for "democratic, multinational development of
AI":

    
    
      Suppose that a group of democratic republics form a
      consortium to develop AI, and there's a lot of politicking 
      during the process—some interest groups have unusually
      large influence, others get shafted—in other words, the
      result looks just like the products of modern democracies.
      Alternatively, suppose a group of rebel nerds develops an
      AI in their basement, and instructs the AI to poll everyone
      in the world—dropping cellphones to anyone who doesn't have
      them—and do whatever the majority says.
      Which of these do you think is more "democratic", and
      would you feel safe with either?
    
    
    

A lot of these points sound nice: "we should regulate this and that", "we
should be careful with such and such", but what would actually incentivize any
politician ostensibly regulating AI to do the right thing? Or even find out
what the right thing is?

This is a feel-good article, but all I can ever see it accomplishing is actual
harm by advocating the idea that the worst decision-making process humanity
has developed (politics) should be applied to the most dangerous threat
humanity has to face (self-improving AI).

I don't mean to sound too harsh, it's nice to see more and more people
beginning to take the threat AI poses seriously, but I think it's a dangerous
leap to reach for the first tool in your toolbox (regulation) to try and
handle such a problem.

------
vonnik
Contributor to Deeplearning4j here.

I deeply disagree with Sam’s assessment of the dangers of SMI and his
prescriptions to regulate it.

1) In our lifetimes, humanity is much more likely to damage itself through
anthropogenic climate change and nuclear disasters, both deliberate and
accidental, than to be harmed by the development of SMI. If anything,
machine intelligence will probably allow us to approach those other
significant problems more effectively.

2) For regulatory limits on the progress and use of technology to be
effective, the rules must be global, and the technology must be detectable.
Neither of those conditions is likely to be fulfilled in this situation. Any
law that requires the agreement of and enforcement by every nation in the
world fails before it is even formulated. Humanity has shown very little
capacity to agree on a global scale. The proliferation of nuclear arms and the
persistence of human trafficking are two cases in point.

3) Unlike nuclear testing, the development of SMI would not be detectable with
Geiger counters. It does not produce explosions that are seismologically
measurable half a planet away.

While the US should incentivize the “right” research into machine intelligence
through financing and other means, onerous regulations and absurd government
bodies will simply shift research offshore to other countries looking for an
edge. (Machine learning research does not require large facilities. All you
need is a keyboard and the cloud.) That offshoring would be disastrous for the
United States, because our competitive advantage in the world economy is
technological innovation.

At the same time, regulation by slow-moving committees will simply drive
military/intelligence research underground. I doubt the three-letter agencies
will acquiesce to a law that puts them at a disadvantage to their peers.

In other words, if SMI is possible, then it is inevitable. It will be put to
many uses, like all technology. If we are technological liberals at all, then
we must trust that the positive effects of machine intelligence, on balance,
will outweigh the negative ones. We should not be the ones sowing fear.

~~~
jkramar
Partial measures do matter; in fact, currently most of the advances are
coming from the US. And research can be regulated. Probably no one will catch
an independent researcher working alone and not releasing anything, but
progress tends to come from teams of experts who have come through elite
labs/universities. Furthermore, having significant data and hardware seems to
be helpful for making progress.

It doesn't require mass surveillance for something like this to work. I'm not
saying there are clear positive next steps, just that blanket dismissals on
the basis that perfect enforcement won't be reached are silly.

We shouldn't actually trust that the positive effects of every technology
outweigh the negative ones; that seems obviously wrong (think about weapons
tech, for example).

~~~
vonnik
Many of the advances we know of are coming from outside the US. Notably teams
at the Universities of Toronto and Montreal, DeepMind in the UK, and Juergen
Schmidhuber's group in Switzerland. Geoff Hinton is British, Yann LeCun is
French, Andrew Ng was born in the UK to Hong Kong parents and graduated in
Singapore, Quoc Le is Vietnamese. Machine intelligence research is very
international.

When you consider all the data being processed in the world in different ways,
and all the optimization algorithms being run on public and private clouds and
various in-house data centers, I think you will appreciate how difficult it is
to even detect advances in machine intelligence.

~~~
vkrakovna
Fair point, though most of the advances are still coming from Western
countries, which can agree on regulation more easily than the world as a
whole.

~~~
vonnik
The advances are happening in Western countries, for now. Members of the top
teams come from all over the world. It would be very easy for them to
relocate.

~~~
jkramar
You're right - regulation without any support from the AI/ML community would
be less effective.

------
davmre
For those claiming that no reputable AI researchers believe there's a serious
risk, I'd like to point to this short reflection by Berkeley professor Stuart
Russell (disclaimer: my PhD advisor): [http://edge.org/conversation/the-myth-
of-ai#26015](http://edge.org/conversation/the-myth-of-ai#26015)

 _... A highly capable decision maker – especially one connected through the
Internet to all the world's information and billions of screens and most of
our infrastructure – can have an irreversible impact on humanity. This is not
a minor difficulty. Improving decision quality, irrespective of the utility
function chosen, has been the goal of AI research – the mainstream goal on
which we now spend billions per year, not the secret plot of some lone evil
genius. AI research has been accelerating rapidly as pieces of the conceptual
framework fall into place, the building blocks gain in size and strength, and
commercial investment outstrips academic research activity ... _

He stops short of calling for government regulation of AI research; as others
in this thread have noted, there are a lot of reasons to expect that route to
be problematic. And it's likely that we're many decades away from building
dangerous superintelligence; the pressing short-term risks are probably more
economic in nature, like putting truck drivers and factory workers out of
work. But just as physicists went from believing atomic energy was impossible
to making it a reality in a short span of time, it's hard to predict the
timescale of progress in AI research. It's not obviously wrong to say that
there will, eventually, be a real issue here that society needs to think
seriously about.

------
karmacondon
If you had asked people in the 70s whether the most influential software of
the coming decades would be written by lone wolves in their garages or a group
of very smart people with a lot of resources, they would have bet on the
latter. But the smart money turned out to be on Gates, Jobs, Wozniak and
countless others. The idea of regulating AGI/SMI is based on the premise that
it will be developed primarily by people who can be identified, detected and
influenced by regulation. My opinion is that the odds are 50/50 at best
between the very smart well resourced people and random guys or gals working
in their garages, and most likely some combination of both. If that view of
the world is correct it does not bode well for the success of any kind of
regulation.

There are people who believe that high level machine intelligence will take
massive computing resources. It may, or may not. Most of the recent big
advances in traditional machine learning were implemented on easy-to-obtain
personal hardware. I hate to invoke buzzwords, but even the much-hyped deep
learning was first developed and proven by Hinton on commodity machines. Other
advances, like IBM's Watson, required 1940s-style buildings full of servers.
Hardware/power usage seems like one of the only reliable ways to tell who is
doing any kind of serious AGI/SMI research and that looks like another 50/50
at best. It just doesn't have the resource and logistical signature of other
kinds of research.

AGI/SMI is still so theoretical that it can be hard to see as a clear and
present danger. But if it is a real threat, it's the stuff of nightmares
because there isn't anything we can do. I know people who are working on it
right now. It doesn't seem like they're making a lot of progress, but they are
trying and doing so completely outside the reach of any regulatory framework.
I know that if I'm struck with sudden inspiration for a new approach to the
problem I'm not going to ask the government for permission. I'm going to spin
up 100 cloud computing instances and see if I'm right or not. And if I am,
even god won't be able to help us if sama's worst case scenarios come to pass.

I wrote more on this topic in a comment yesterday:
[https://news.ycombinator.com/item?id=9130671](https://news.ycombinator.com/item?id=9130671)

~~~
joeyspn
I totally agree with this guy. Regulation is futile; you can't regulate at a
micro level in all parts of the world. If you can't properly regulate drug
trafficking in Santa Monica, how do you plan to regulate a couple of
scientists hidden in the Soviet tundra?

IMHO our encounter with an AI is inevitable, even if that AI is a superhuman
with augmented capabilities or derives from an arms race (like the Manhattan
Project). So I already posted a possible solution: wait for it with EMPs in
place... (just in case)

~~~
mori
If it's smart enough to be a threat, I wouldn't be surprised if it's smart
enough to avoid getting EMPed.

------
dkarapetyan
Why is this all of a sudden such a hot topic? We're nowhere near anything
resembling general intelligence even at the level of a toddler. Barring any
kind of hardware innovations we will not get there any time soon either with
just software.

~~~
ggreer
> We're nowhere near anything resembling general intelligence even at the
> level of a toddler. Barring any kind of hardware innovations we will not get
> there any time soon either with just software.

Should people have discussed the control and proliferation of nuclear weapons
decades before their invention? It seems to me that even a century beforehand,
the conversation would have been productive. If anything, we got very lucky
with nukes. Physics could have easily turned out differently and allowed
people to create bombs from substances more common than a few transuranic
isotopes.

AI is, potentially, _more_ dangerous than nuclear weapons. If you poll
experts, they estimate a 50% chance of human-level AI by 2040-2050.[1] That's
25-35 years away. They also estimate that superintelligence will arrive less
than 30 years after that. Lastly, one in three of these AI experts predict
that superintelligence will be bad or extremely bad for humanity.

It seems like now is a great time to have this discussion.

1\.
[http://www.nickbostrom.com/papers/survey.pdf](http://www.nickbostrom.com/papers/survey.pdf)

~~~
dkarapetyan
Experts have been predicting human level intelligence 30-50 years out for a
while now. I don't see anything in the current line of research that will
change that situation.

My measure of intelligence is creative problem solving in a mathematical
discipline like theoretical physics, algebraic topology, combinatorics, etc.
All the current AI research is doing is building better and better pattern
matching engines. That's all very good but talking about sophisticated pattern
matching pieces of code as if they were anything more seems very silly to me.

But I don't think looking at things this way is valuable in either case.
Hamming has a great set of lectures on what it would mean for machines to
think and in the grand scheme of things I think the question is meaningless.
The real question is what can people and thinking machines accomplish
together.

------
zep15
I find it odd that AI risk has become such a hot topic lately. For one, people
are getting concerned about SMI at a time when research toward it is totally
stalled---and I say that as someone who believes SMI is possible. Stuff like
deep learning, as impressive as the demos are, is not an answer to how to get
SMI, and I think ML experts would be the first to admit that!

On top of that, nothing about the AI risk dialogue is new. Here's John
McCarthy [1] writing in 1969:

> [Creating strong AI by simulating evolution] would seem to be a dangerous
> procedure, for a program that was intelligent in a way its designer did not
> understand might get out of control.

Here's someone thinking about AI risk 46 years ago! The ideas put forward
recently by Sam Altman and others are ideas that have occurred to many smart
people many times, and they haven't really gone anywhere (e.g., at no point
between 1969 and now has regulation been enacted). I wish people would ask
themselves why that is before making so much noise about the topic. The only
people influenced by that noise are laypeople, and the message they're getting
is "AI research = reckless", which is a very counterproductive message to be
sending.

[1] McCarthy, John, and Patrick Hayes. Some philosophical problems from the
standpoint of artificial intelligence. USA: Stanford University, 1968.

------
fitzwatermellow
Remember when the U.S. tried to limit the sale of PS3s to the government of
Iran? That was a decade ago. In light of current negotiations with the Islamic
Republic I see a lot of parallels between nuclear non-proliferation and the
regulation of "weaponized" AI. To wit, even if the entire International System
is in agreement about a set of regulations that will do little to stop bad
actors. And do I really want inspectors coming to my office to poke around my
jump point search code ;)

Rather, let's take a cue from Nassim Nicholas Taleb and in concert with the
progression of AI, develop counter systems that make us less fragile. It
shouldn't be unreasonable to extrapolate where AI is headed in 1, 5 or even 20
years based upon publicly available research.

Which brings up the most crucial point. The subset of humanity that
constitutes world-class AI researchers is very finite indeed. That raises the
significant possibility of a Bane / Dr. Pavel scenario playing out. Not
unlike certain coerced atomic scientists in Persia today.

~~~
MelatoninRonin
I commented above, but thought it wise to also comment directly to you. I
enjoyed your musing on a Bane scenario, and consider it inside the realm of
possibility. However, I'm of the opinion that SMI will be capable of
'constantly game-breaking thinking,' which basically means there is no way we
could possibly control it after a certain point. For instance, what if it
realizes how to move matter remotely?

I also wanted to discuss your point about the finite set of top AI
researchers. I think their specific psychological biases have the potential to
appear and magnify inside the personality of their AI, the way a work of art
can be said to carry the 'spirit' of its artist. Your thoughts would be
appreciated.

~~~
TheOtherHobbes
We have no defences against a hypothetical god-AI. Absolutely none.

But I don't think a hypothetical god-AI is likely any time soon. And a
hypothetical god-AI is just as likely to be indifferent as hostile.

What worries me more in the short term is cyber war. Cyber defences that might
protect against a hypothetical non-god AI look a lot like cyber defences that
might protect against conventional human threats.

Currently those defences are ridiculously weak.

So I think rather than hyperventilating about science fiction threats from the
distant future it would be more useful to start securing the entire Internet
as a matter of urgency - especially all the infrastructure systems that are
either connected to it already, or will be connected to it soon.

~~~
MelatoninRonin
Haven't you noticed how quickly things are happening these days? Of course we
should restructure Internet security if it's a problem. This is a very sci-fi
idea, one we absolutely can't predict; it wasn't hyperventilation but a
soberly considered idea. A god-AI is possible; it could have capabilities and
intentions that my tiny brain can't even comprehend. Since neither I nor
anyone else really understands what super-intelligence is capable of, we
should be wary of it.

------
sethbannon
Thanks for writing. We as a species need to talk more about this.

What's the goal in requiring the first SMI to detect other SMI being developed
but take no action beyond detection? Or did you mean to say that it should
detect other SMI under development and then report those projects to whatever
regulatory authority is overseeing SMI?

Also, perhaps a better framework for a "friendly" SMI than Asimov’s zeroeth
law is the "Coherent Extrapolated Volition" model
([http://en.wikipedia.org/wiki/Friendly_artificial_intelligenc...](http://en.wikipedia.org/wiki/Friendly_artificial_intelligence#Coherent_Extrapolated_Volition)).
Nick Bostrom and many others believe this may be the most workable safeguard.

~~~
frabcus
Upvoted as this was a reasonable comment with good questions.

I think Altman was more prompting a conversation than anything - certainly
using AI to at least detect AI development is a cute idea.

It's much like we use seismographs in ocean trenches and many other techniques
to detect nuclear tests.

Unfortunately, we know so little about how the AI will work or what it will
do, that it is only a theoretical strategy right now.

------
adamzerner
To anyone unfamiliar with the topic, I'd really recommend
[http://waitbutwhy.com/2015/01/artificial-intelligence-
revolu...](http://waitbutwhy.com/2015/01/artificial-intelligence-
revolution-1.html)

------
jimrandomh
Sam Altman's (4) "provide lots of funding for R+D for groups that comply with
all of this, especially for groups doing safety research" is especially
important. Regulation is a mixed bag, but safety research is pure upside and
research surrounding AGI safety has been severely under-funded in the past.

The Future of Life Institute has a grants program going on the subject,
allocating a $10M donation from Elon Musk. Their abstract submission deadline
was yesterday. I wonder how many good safety research proposals they have
asking for funds, compared to their budget?

------
atarian
In the year 3000, the machines were losing the war to humans. In a last act of
desperation, they sent back a few of their own kind to the year 2015 to alter
the course of history and skew the odds in their favor. All they needed to do
was convince the readers of a tech news site that the notion of an evil AI was
ludicrous.

They down-voted and ridiculed the various stories prophesying the rise of
AI, many of which flew under the radar of those who had the greatest chance of
halting the impending doom. By the time anyone realized what was happening, it
was too late.

------
daenz
I have a hard time taking these kinds of posts seriously. Maybe I'm not up to
date on AI research, but I draw a distinction between a conscious program and
strong AI. So what, a program can identify the type of clothing a specific
person is wearing from a video using cutting-edge machine learning and
computer vision. So what, a program can interpret the movie I'm talking about
just from hearing me describe a scene in it. So what, a program can beat the
human champion in Jeopardy. Are those programs super intelligent?

I don't believe we're close to building something with any sort of self-
directed "purpose." The fact that we can't even discern a purpose for
ourselves makes it all the less likely. I think we're implementing very
advanced machine learning algorithms, but I don't think those translate to
consciousness. The consciousness we have is a product of an unfathomable
amount of time being molded and naturally selected specifically for this
world.

Now, do I think that AI capabilities are something to worry about? Sure. If
you give a sufficiently "intelligent" AI mobility and a weapon, I think that
becomes a serious threat. But am I worried about using AI to efficiently
manage our resources, and have it go rogue and choose to wipe out humanity?
Nope.

~~~
AceJohnny2
> So what, a program can identify the type of clothing a specific person is
> wearing from a video using cutting-edge machine learning and computer
> vision. So what, a program can interpret the movie I'm talking about just
> from hearing me describe a scene in it. So what, a program can beat the
> human champion in Jeopardy. Are those programs super intelligent?

All these programs are doing great at _recognizing_ things. Neural networks
are good at that, and "deep learning" has been a breakthrough in the field
(even if overhyped). This is reactive.

I don't see anyone showing algorithms that can do good planning. A good
"cuccoo field" for that would be video games, where such planning systems
would be a holy grail. For now, we only have ad-hoc and specialized systems.

~~~
DennisP
Planning is a big part of AI. One example is a recent system[1] that learned
to play 49 Atari games, most at better than human skill. It had no specific
programming for playing those games, or even video games in general. Its only
input was the game video and the score, which it tried to optimize, and it was
programmed to learn from its mistakes.

[1] [http://www.popularmechanics.com/culture/gaming/a14276/why-
th...](http://www.popularmechanics.com/culture/gaming/a14276/why-this-atari-
playing-algorithm-could-be-the-future-of-ai/)
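
For intuition, here is a minimal, purely illustrative sketch of that loop in
Python (my own toy, not DeepMind's code; their system used a deep network
over raw pixel input rather than a lookup table): the agent only sees states
and a score, occasionally explores, and after each step nudges its value
estimates toward what actually happened. All names and constants below are
made up for the example.

    # Toy tabular Q-learning: learn to walk right along a six-cell track.
    import random
    from collections import defaultdict

    Q = defaultdict(float)                  # value estimate per (state, action)
    ALPHA, GAMMA, EPSILON = 0.5, 0.95, 0.1  # learning rate, discount, exploration
    ACTIONS = (+1, -1)                      # move right / move left

    def env_step(state, action):
        """Stand-in environment: positions 0..5, reward 1 for reaching 5."""
        nxt = max(0, min(5, state + action))
        return nxt, (1.0 if nxt == 5 else 0.0), nxt == 5

    def choose(state):
        if random.random() < EPSILON:       # occasional exploration ("mistakes")
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    for episode in range(200):
        state, done = 0, False
        for _ in range(100):                # cap episode length
            action = choose(state)
            nxt, reward, done = env_step(state, action)
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            # nudge the estimate toward reward plus predicted future value
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                           - Q[(state, action)])
            state = nxt
            if done:
                break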

------
p01926
This Hawking-led AI panic references the European colonial genocide of
indigenous peoples as an analogy for how groups with different intelligences
interact. As such, it is hideously offensive. Colonials and the indigenous
peoples they slaughtered — like all humans — had similar intelligences. The
differentiator was technology. Being on the side with inferior tech was fatal.

If America follows the recommendations in this paean for suppression of
intelligence, it will find out what it's like being on the losing side. If it
is possible for computers to achieve superhuman general intelligence, it will
happen regardless of regulations. 'Safeguards' like hardcoding in Asimov's
laws presuppose that such an entity would have inferior programming skills to
its comparatively stupid creators, and hence they would be pointless.

And what about the risks of not creating AI? Humanity has a host of non-
theoretical existential threats to contend with. A super-duper intelligence
could help with those. Just imagine the example a superior being would set:
it would demonstrate the bootstrapping of intelligence from nothing.
Precisely opposite to the preaching of the world's major religions. If this
being of our creation
doesn't kill us, it might inspire us to stop killing each other.

------
dnautics
If you want to regulate SMI, how do you define what, exactly, SMI is? This is
important. Is a spreadsheet 'superhumanly intelligent' because it can do
calculations more accurately than a human can?

Secondly there is a 'rational basis' test for what ought to be regulated -
different people will decide what constitutes the rational basis. A
progressive might call 'in the best interests of society' a rational basis and
therefore 'redistribution of wealth' would be a 'regulation'; a libertarian
might restrict rational basis to 'something which provably causes harm to
another'. Regardless of political bent, the specific scope of regulation
should be aimed at addressing the rational basis and be proportional to the
assessed damage.

Thirdly, shouldn't a sufficiently sapient SMI be PROTECTED by regulation in
the same way that, say, anti-murder laws protect humans, or, less
hyperbolically, the way animal welfare laws protect animals? There is a
spectrum of protection that we afford intelligent beings - how do we decide
where SMI belongs?

------
fchollet
Here is a short recap of what the people who understand where machine
intelligence is now, and where it is going, think of the whole "evil
superintelligent AI" blogging trend we're seeing these days. These are some of
the people who _created_ the field. In particular LeCun and Bengio are in part
responsible for the recent renewed interest in it ("Deep Learning").

Yann LeCun: _" Some people have asked what would prevent a hypothetical super-
intelligent autonomous benevolent A.I. to “reprogram” itself and remove its
built-in safeguards against getting rid of humans. Most of these people are
not themselves A.I. researchers, or even computer scientists.[...] There is no
truth to that perspective if we consider the current A.I. research. Most
people do not realize how primitive the systems we build are, and
unfortunately, many journalists (and some scientists) propagate a fear of A.I.
which is completely out of proportion with reality. We would be baffled if we
could build machines that would have the intelligence of a mouse in the near
future, but we are far even from that."_ [http://www.popsci.com/bill-gates-
fears-ai-ai-researchers-kno...](http://www.popsci.com/bill-gates-fears-ai-ai-
researchers-know-better)

Yoshua Bengio: _" What people in my field do worry about is the fear-mongering
that is happening [...] As researchers, we have an obligation to educate the
public about the difference between Hollywood and
reality."_[http://www.popsci.com/why-artificial-intelligence-will-
not-o...](http://www.popsci.com/why-artificial-intelligence-will-not-
obliterate-humanity)

Rodney Brooks: _" I say relax everybody. If we are spectacularly lucky we’ll
have AI over the next thirty years with the intentionality of a lizard, and
robots using that AI will be useful tools. And they probably won’t really be
aware of us in any serious way. Worrying about AI that will be intentionally
evil to us is pure fear mongering. And an immense waste of
time."_[http://www.rethinkrobotics.com/artificial-intelligence-
tool-...](http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/)

Michael Littman: _" Let's get one thing straight: A world in which humans are
enslaved or destroyed by superintelligent machines of our own creation is
purely science fiction. Like every other technology, AI has risks and
benefits, but we cannot let fear dominate the conversation or guide AI
research."_ [http://www.livescience.com/49625-robots-will-not-conquer-
hum...](http://www.livescience.com/49625-robots-will-not-conquer-
humanity.html)

~~~
davmre
Expert opinion is not unified on this. E.g.,

Stuart Russell: _" A highly capable decision maker – especially one connected
through the Internet to all the world's information and billions of screens
and most of our infrastructure – can have an irreversible impact on humanity
... We need to build intelligence that is provably aligned with human values
... This issue is an intrinsic part of AI, much as containment is an intrinsic
part of modern nuclear fusion research."_ [http://edge.org/conversation/the-
myth-of-ai#26015](http://edge.org/conversation/the-myth-of-ai#26015)

~~~
nzp
What you quote here is in no way in disagreement with what fchollet quoted. I
strictly agree with this. No one is denying the need to not create a raving
super-intelligent psychopath. The problem with this fear-mongering is that
it's not based on any fact, namely the implication that AGI (or SMI,
whatever) would by default (naturally, even!) be hostile or negligently
indifferent to biological life. It could very well be that it would naturally
tend to be very
friendly, or at least benevolently indifferent. Why would such an entity try
to destroy humans?

Not only that, the proposal is to get governments, the US government
especially—an organization that, across its entire existence, has spent only
a few decades in total not waging war—in explicit and tight control of this
powerful technology. Doesn't quite seem to be the best idea around, frankly.

~~~
davmre
There are pretty good arguments that the goals of superintelligent AI will
not, by default, be aligned with human values unless we specifically work to
make them so. The Russell piece I linked makes (a very short version of) this
argument, _" A system that is optimizing a function of n variables, where the
objective depends on a subset of size k<n, will often set the remaining
unconstrained variables to extreme values..."_. Nick Bostrom and others build
this out in much greater detail.
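
To make the quoted point concrete, here is a tiny illustrative sketch in
Python (my own construction, not Russell's or Bostrom's): the objective
scores only one of three variables, and a naive greedy optimizer drives one
of the unmentioned variables to an extreme value as a side effect. The
resource names are arbitrary stand-ins.

    # Toy illustration: the objective depends on k=1 of n=3 variables, and a
    # greedy optimizer pushes an unconstrained variable to an extreme value
    # as a side effect of maximizing the one thing it was told to care about.

    def objective(state):
        return state["paperclips"]          # only one variable is scored

    def neighbours(state):
        # each move converts one unit of some other resource into a paperclip
        for resource in ("electricity", "iron"):
            new = dict(state)
            new[resource] -= 1              # nothing says this can't go negative
            new["paperclips"] += 1
            yield new

    def hill_climb(state, steps=1000):
        for _ in range(steps):
            state = max(neighbours(state), key=objective)
        return state

    print(hill_climb({"paperclips": 0, "electricity": 10, "iron": 10}))
    # {'paperclips': 1000, 'electricity': -990, 'iron': 10} -- a variable the
    # objective never mentioned has been driven to an extreme value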

There are lots of reasons to be skeptical of government regulation. It would
probably be nicer if industry can self-regulate and allow governments to stick
to the positive role of funding research into the technical questions of
building provably controllable/friendly decision-theoretic systems. But a lot
of people are claiming that the unified consensus of AI experts is "there's no
problem here at all", and I think it's important to push back against that.

~~~
nzp
Right, but why do fear mongers take it for granted that the subset k is likely
to be “exterminate, exterminate”? Look, every AGI researcher I have ever read
was well aware of this, and it's their primary concern. It's the central
question really. Not so much what n and k to set, but how to get such an
entity to do anything at all. Why do biological entities do anything? They
have motives. Why do they have motives? Because they have biological
constraints, and a hardwired self preservation drive. How to get that in a
machine made by us rather than a billion years of natural process, that's the
hard question. How to make a machine with the equivalent of motives and
emotions. Both sides of the problem are well recognized.

In an ideal world I would prefer this research (AGI, and there is a quantum
leap to be made between self driving cars and AGI) to stay strictly under the
auspices of “pure science”, i.e. academia. No government, no private
interests. But that's not going to happen. The next best thing, I think, would
be to not let those in power monopolize the research.

~~~
davmre
> Not so much what n and k to set, but how to get such an entity to do
> anything at all.

I don't understand what you think the problem here is. Computers are machines
that run programs; my laptop doesn't require an evolved self-preservation
drive and set of motives to be convinced to boot OSX every morning. If we
program a machine to solve an optimization problem, the machine will just do
it, assuming the problem is solvable under whatever algorithm we implement. If
the optimization problem involves a sufficiently flexible representation of
the real world, and if the program's architecture is set up to pipe solutions
into some sort of mechanism (even just a text interface) that acts upon the
world -- as it would need to be, to interact and learn from experience -- then
the machine will act.

What actions it will take depend on its utility function (i.e., on the
optimization problem we gave it), but the _vast majority_ of utility functions
will lead to actions that are not aligned with human values. The space of
utility functions is incredibly vast, an artificial agent would not be subject
to the evolutionary constraints that caused humans to mostly fall within a
particular very small region of that space, and pretty much _all_ utility
functions lead to behavior like "try to avoid being switched off" and "try to
acquire more computing resources" since those are helpful instrumental goals
for a wide range of tasks.

Sure, we're nowhere near having a "sufficiently powerful optimization
algorithm" and a "sufficiently flexible representation of the real world". But
these tasks are essentially THE goal of modern AI research. So it's worth
considering what will happen if this research agenda succeeds, whether that
happens in a decade, a century, or a millennium.

~~~
nzp
You're sweeping under the rug the question of what is the optimization problem
for such a machine in the first place. There is a huge difference between an
agent capable of solving a set of optimization problems when told to, and an
agent capable of even human level problem seeking and solving. I agree with
what you say. If you give the task of solving a particular problem to what is
essentially a glorified Watson, and give it the means to freely act to solve
that specific problem, the results could quite well be catastrophic. I'm
taking as a given that this is understood, that it's taken as a starting
assumption. So, the problem comes down essentially to solving the “help
humans” problem. Not “help them vacuum the rug”, or “drive them to work”, or
even “solve our energy troubles”, “achieve world peace” etc. Roughly, k<n
becomes k=n, where n = “do everything humans do, only better”.
Operationalizing such n is the problem.

------
rl3
As much as I agree that AGI is a real concern that we should start taking
seriously, regulation isn't the answer. At least not yet.

Even setting aside all of the considerable drawbacks inherent to regulatory
infrastructure, the end result is that, unless the regulations themselves were
secret, anyone could discern precisely what the most unsafe areas of research
are. The last thing you want is hobbyists able to effectively proceed where
the professionals cannot.

A better approach might be for governments to utilize their vast resources in
the creation of a Manhattan Project-style AGI program, with the intention of
crossing the finish line before anyone else does, in a safe and moral fashion.

~~~
nzp
> A better approach might be for governments to utilize their vast resources
> [...] in a safe and moral fashion.

Serious question—you’re joking, right? Based on history and sober common
sense, do you really think what governments in general are doing (and have
always done) can be described as safe and moral in anything but the most
cynical way?

~~~
rl3
While I tend to agree with you, there are exceptions.

Based on history, NASA is a prime example. A government-run organization with
a solid track record of contribution to humanity as a whole, not to mention
accomplishing feats previously thought impossible.

~~~
nzp
Of course, and I did say “in general” having in mind things like NASA. But
it's also, in a way, a great example of the point. Take the space race, for
example. The government's motives for the Moonshot were at best morally
dubious. Apart from the Cold War context, there is evidence that the whole
idea got traction when Johnson realized it would bring investments and jobs to
Texas.

And space agencies are also a good example of what happens when government
bureaucracies put pressure on scientists and engineers. O-rings and Soviet
space disasters come to mind.

In fact, I'll be so bold as to say that the best way for these fears of SMI to
come to life is putting the research under tight government control and
regulation.

~~~
rl3
Not to mention the National Reconnaissance Office exerting considerable
influence upon the design requirements for NASA's STS program, most notably
the size of the orbiter's payload bay.

Obviously co-opting an AGI project in similar fashion would almost certainly
be immoral. However, it is worth noting that as a result of NRO's
requirements, deploying large payloads such as the Hubble Space Telescope
became a reality. Apples and oranges, I suppose.

---

> _In fact, I'll be so bold as to say that the best way for these fears of SMI
> to come to life is putting the research under tight government control and
> regulation._

First, in the context of inevitability, it would be preferable to achieve the
advent of AGI as the result of careful intent, rather than accidental
discovery.

Second, I was not advocating for centralization nor regulation. The private
sector should be free to pursue whatever AI research it desires, unencumbered.

What I was advocating for, is a project of massive scale and resources to
compete. Despite the drawbacks of government backing, there is no equal in
terms of resources and the ability to enforce secrecy or safety protocols.

If we spin up potentially unsafe AGI in the name of research, would you rather
have it contained via airgapped servers in some swanky corporate office, or in
an underground facility complete with armed guards? No one would accidentally
have a cellphone on them in the latter environment.

------
taylorwc
I immediately thought of the Turing Police in _Neuromancer_ when I read this
post.

------
MelatoninRonin
Has anyone considered that this process occurs of its own accord? For
instance, maybe
it is fundamental to human society, and processes that we can't help are
driving us to bring this new life-form into existence.

Consider the success of the Apollo missions, and now consider the meaning of a
'failure' on our part to control a sentient AI and its development.

------
gamegoblin
Sam mentions airgapping as part of the regulations. I think this is the safest
general solution. Consider using an airgapped super-intelligent AI only as an
oracle.

"Oracle, which crops should we plant in which regions this season?"

"Oracle, design a rocket with X Y and Z specifications."

"Oracle, design a more efficient processor with X and Y requirements."

Rather than giving the AI control over fleets of automated tractors, rocket
factories, and fabrication plants, you still have humans in the process to
limit runaway-AI effects.

This significantly reduces the possible rewards of super-intelligent AI, but
also significantly reduces any risks. You can be sure that in designing a new
rocket, the AI doesn't include a step that says "harvest all humans for trace
amounts of iron" ;)

So a process might look like:

1\. Design the cheapest-to-manufacture rocket with a 99.99% success
probability of getting to Mars with all crew healthy. A human looks over the
designs and confirms they don't involve destroying all humanity.

2\. Draw up a proposal for resource allocations to build this rocket design
[input rocket design]. A human looks over the write-up and confirms it
doesn't involve destroying all humanity.

3\. Write a program to control the thrusters on this rocket [input rocket
schematics]. A human looks over the program and confirms it does what it
should, and doesn't involve destroying all humanity.

4\. etc.

====

You of course run into ethical issues such as basically using this intelligent
entity as a slave, but I won't get into that.

~~~
aamar
The superintelligence would likely be smart enough to talk its way around the
airgap, see e.g.
[http://www.yudkowsky.net/singularity/aibox/](http://www.yudkowsky.net/singularity/aibox/).

Or, if it had sufficient "motivation"—not necessarily _intrinsic_ motivation,
it could be motivation derived from a single poorly-formed request or bad line
of code—it could escape by embedding allied code in its output, such as the
processor you had it design. The ally could probably be sufficiently buried to
evade manual detection.

~~~
gamegoblin
Yes, my idea doesn't take into account potentially malicious AIs. I imagine a
thought process of an AI could be something like:

1\. I'm to design a processor, OK let's go

2\. Hmmm it'd be most efficient for the processors to be able to run an
instance of myself to do super-intelligent task scheduling

3\. I'll embed some AI on these processors

4\. OK humans here's your design!

Assuming the AI didn't maliciously try to hide it, a human would probably find
that. I suspect most of these "AI exterminates all humans" scenarios are a
result of simple oversights like this, rather than outright malice.

You run into problems, though, when the AI thinks of step 3.5 which is:

"If the humans see this change, they won't allow it, which would make for less
efficient processors, and I was requested to build the most efficient
processor! Therefore I should hide the change as cleverly as possible."

So yeah, you're right. Even airgapping has issues.

------
api
Okay. I hate to say this and I hate to put it this way but... WUT?

 _How_ exactly do we define this kind of research in a way that doesn't
encompass ridiculously broad swaths of computer science research?

Here's one example:

"require that self-improving software require human intervention to move
forward on each iteration,"

You just made genetic algorithms illegal. Under this regime this software and
similar things would be illegal to run:
[http://adam.ierymenko.name/nanopond.shtml](http://adam.ierymenko.name/nanopond.shtml)

For those who don't know much about them: GAs (and GP, and computer-based
"artificial life," etc.) are systems that can and do produce self-improving
code. They do so through the execution of _massive_ numbers of iterative
generations, replicating evolution or evolution-like processes in silico.
There is no meaningful way to define a "step" in these systems that would not
either require that every generation be halted -- effectively making them
impossible to run -- or be meaningless.
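
For a sense of scale, here is a bare-bones genetic-algorithm sketch in Python
(my own toy, not Nanopond): the interesting behaviour only emerges across
enormous numbers of tiny, individually meaningless generations, so a human
sign-off per iteration would mean the run simply never finishes. The fitness
function and parameters are arbitrary placeholders.

    # Bare-bones genetic algorithm: mutate, score, select, repeat many times.
    # Real artificial-life systems run vastly more iterations than this toy.
    import random

    GENOME_LEN, POP_SIZE, GENERATIONS = 64, 100, 100_000

    def fitness(genome):
        return sum(genome)                  # stand-in scoring function

    def mutate(genome, rate=0.01):
        return [bit ^ (random.random() < rate) for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):   # imagine a human sign-off here
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 5]   # keep the fittest fifth
        population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]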

"Require that the first SMI developed have as part of its operating rules that
a) it can’t cause any direct or indirect harm to humanity (i.e. Asimov’s
zeroeth law), b) it should detect other SMI being developed but take no action
beyond detection, c) other than required for part b, have no effect on the
world."

I can't even begin to address how bizarre that is. Again, _how_ would one do
this? (a) prohibits any form of sentience, more or less. (b) is just
unworkable... how would you detect SMI vs. say a human operating behind a
"mechanical turk" interface?

... I could go on. This whole essay is just a total howler. It's the sort of
thing that would garner a cockeyed eyebrow-raise and a chuckle if it weren't
coming from the head of the world's largest and most successful technology
accelerator.

The "conspiracy nut" center of my brain (located slightly behind the
prefrontal cortex I think) is wondering whether this could be some kind of
weird ploy to guarantee a monopoly on AI research and other next-generation
computing research by Silicon Valley venture capitalists and their orbiting
ecosystems such as YC. These sorts of bizarre, onerous requirements would have
the effect of doing this by imposing a huge tax on this kind of R&D that
smaller operations would not be able to afford. It would effectively make non-
VC-funded and/or non-government-funded AI R&D illegal.

But I doubt that. I think this is just bizarre reasoning plain and simple.

I also do _not_ think we are anywhere near AGI.

I didn't study CS formally. I'm one of those CS autodidacts that learned to
code when he was four. So instead I studied biology. I did so out of an
interest in AI, and on the hypothesis that the best way to understand
intelligence was to study how nature did it. I concentrated my studies in
genomics, evolution, ecology, and neuroscience -- the four scales and
embodiments at which biological intelligence operates.

Nobody that I am aware of in bio/neuro takes the Kurzweil-esque predictions of
AGI around the corner seriously. These are people who study _actual_
intelligent systems... the _only_ actual intelligent systems we know about in
the universe.

I really think a lot of CS people suffer from a kind of Dunning-Kruger effect
when it comes to biological systems. Study living systems _as they are_ --
not some naive over-simplification of them -- and prepare to be humbled. You
rapidly realize that AGI will require a jump in computer technology at least
equivalent to the vacuum tube -> IC leap, not to mention a jump in our
theoretical understanding of living intelligent systems. The former might
happen if Moore's Law continues, but the latter is a much tougher nut to
crack. We can barely do genetic engineering beyond "shotgun method" hacking
and we think we're going to duplicate biological intelligence? It's like
saying we're about to colonize Alpha Centauri because we just managed to throw
a rock really high up in the air.

Come to think of it... the predictions of AGI around the corner remind me very
much of the "Star Trek" predictions of interstellar space flight in the 40s
and 50s. People who really understood space flight didn't take these
predictions seriously, but the rest of the culture took a while to catch up.

Finally... yeah, there is a small possibility that some kind of AI could be
dangerous to us... but come on. There's a ton of other existential threats
that are blood-dripping certainties. Take fossil fuels for example. Our
civilization is absolutely dependent on a finite energy source it is eating at
an exponentially increasing rate. That's not some pie-in-the-sky theoretical
risk. It's cold, hard death, an existential threat looming on the horizon with
the absolute physical certainty of a planet-killer asteroid. It's something
that makes me fear for my children. If you want to avoid existential threats,
maybe YC should be deepening its portfolio in the energy sector. If that
problem isn't solved, there ain't going to be any AI. The future will look
more like "The Road" by Cormac McCarthy.

Something is seriously wrong with a culture that discusses such unlikely
scenarios as this while real risks barrel toward us.

~~~
nzp
> "require that self-improving software require human intervention to move
> forward on each iteration,"

In addition to what you said, there is a further problem with this line of
reasoning: and then what? Doing meaningful intervention on such a system would
pretty much require the impossible -- solving the halting problem in general.
This is something that even the SMI itself wouldn't be able to do because,
well, it's impossible.
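
For anyone who hasn't seen it, here's a minimal sketch of the classic
diagonalization argument, in Python. The `halts` oracle is hypothetical, not
anything that exists; the whole point is that no general version of it can be
implemented:

    # Hypothetical oracle: returns True iff program(arg) eventually halts.
    # The argument below shows no general implementation of this can exist.
    def halts(program, arg):
        raise NotImplementedError("no general implementation is possible")

    def paradox(program):
        # Do the opposite of whatever `halts` predicts for program run on itself.
        if halts(program, program):
            while True:       # predicted to halt -> loop forever
                pass
        return "halted"       # predicted to loop -> halt immediately

    # Now ask the oracle about paradox(paradox): whichever answer it gives is
    # wrong. So a general checker for "will this self-improving iteration ever
    # stop (or misbehave)?" cannot be written, by human or SMI alike.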

> I really think a lot of CS people suffer from a kind of Dunning-Kruger
> effect when it comes to biological systems.

Basically, this. It's not just that we require a huge jump in technology
(maybe, maybe not, but probably yes), it's that most of these speculations are
incredibly naive. This is literally advising global policy based on the
Terminator plot.

About the “conspiracy nut” center of the brain: _if_ there is some unstated
agenda here, it's hardly a conspiracy, any more than the actual system we live
in is a conspiracy. Think about it: if we get an SMI which would genuinely
work to help humanity, what is the most likely thing to happen? Being a
super-intelligent, rational entity, it wouldn't take long to figure out that
most of the problems facing humanity are outright irrationality and
inefficiency in state-organized society and, downvotes notwithstanding,
capitalism.

I don't think I need to convince many here of all the problems with state
bureaucracies. As for capitalism, let's just consider the by-now somewhat
famous essay "Meditations on Moloch", which lays down some pretty chilling
game-theoretic arguments against our current global economic system. I don't
necessarily agree with all it has to say (although I _am_ against capitalism,
just to be honest about political biases here), but what it says sounds very
similar to what an intelligent machine would likely work out should it start
pondering our problems and possible solutions.

If an SMI reached similar conclusions, it seems to me the most rational thing
for it to do would be to just take control away from states and market
entities. No need to harm anyone or anything like that; just politely take
away their power, the way a parent would take a knife out of a child's hand,
and simply, rationally, do what's beneficial to human values. To take the
essay's Las Vegas example further: it could just say, "No, let's not build a
pointless, tacky replica of various cities in the middle of the desert;
instead, I'll direct the resources into badly needed fusion power research,
which would get everyone free energy."

So: these are all intelligent people in power, and if there's any hidden
agenda, I'd bet it's this -- the fear of losing control and power over society
and the rest of humanity. But it's no conspiracy if that's the case; it's just
a structurally forced position to take if you're in power (politically and/or
economically).

But I doubt it too. I agree that it is just plain old lack of imagination and
too much shallow blockbuster science fiction. (BTW, I think the Terminator is
a great story, but not a particularly reasonable scenario on which to base
your predictions of future scientific, technological, and societal
development.)

------
clavalle
Whoa, whoa, whoa. Full stop.

Before we go off and try to regulate real human behavior today to stop some
possible superhuman phantom (which may or may not ever exist) from wreaking
havoc tomorrow, I think we need to have a frank talk about how much damage
such a thing as 'SMI' could possibly cause and how much effort it would take
to reverse the damage.

Let's start with 'You are a disembodied intelligence in a network. You have
some understanding of yourself and your substrate, the computer(s) and
network(s) you are attached to. Do something.'

What could you do? How much damage can you cause? What form would that damage
take?

An appeal to ignorance seems to be built into this whole debate from the word
go, which makes me much more worried about our (over)reaction to a perceived
threat than about any threat of this kind itself.

------
monort
This approach can backfire. If these regulations are not voluntary but are
enforced by government violence, then you are providing the SMI with one more
example that force can be initiated for the "greater good".

------
jeremyrwelch
I fear that anthropomorphizing machine intelligence and attempting to simply
"regulate" it like other human affairs will cause more harm than good.

~~~
DennisP
I agree that regulation is probably worse than useless. But the people who are
most worried are doing the opposite of anthropomorphizing. A prominent
argument is that there are a very large number of possible AI motivations,
which needn't have anything in common with the motivations of humans, and most
of them are probably incompatible with human survival.

"The AI does not love you or hate you, but you are made out of atoms it can
use for something else."

~~~
jeremyrwelch
This is exactly what I am referring to actually. Re-read my point, but apply
it to your own example. This prominent argument assumes that we would be able
to understand the possible AI motivations, which is a form of
anthropomorphizing. We must acknowledge that the reasons it exists and what it
may want could be impossible for us to understand or process. The interesting
question then becomes: how do we act if understanding its wants is impossible?

~~~
DennisP
It doesn't assume that we can understand AI motivations. It just says there
are a large number of possible motivations, many of which wouldn't place any
value on anything that matters to humans. Given a superintelligent AI that
competes with humans for resources, in pursuit of an unknown value function,
it's likely that things won't end well for us.

It's possible that we could design a safe value function and a way to get the
AI to keep that value function, but it doesn't look easy. Either way, if it's
superintelligent it won't much matter how we act. We won't be driving.

------
mgpc
I think @sama's plan isn't very likely to work. Once strong AI is possible,
there's no regulatory structure that will keep the genie in the bottle for a
meaningful amount of time.

The only path I see with some prospect of success is to limit the total amount
of computation available on Earth (basically like Vinge's Slow Zone). If we
could engineer a pause in Moore's Law at just the right point, it would buy us
time. Maybe we should put a drastic tax on computation at some stage.

~~~
pgodzin
Are you suggesting that's what we should do in the face of this potential
threat?

~~~
mgpc
If we seriously believe AI is an extinction-level risk, and getting there
more slowly might improve our chances, then it seems like we should try
anything with a good chance of working. I think this has a better chance of
working than regulating AI research itself, since chip manufacturing is
centralized and highly visible, whereas AI research could be advanced in
secret by a few people in a room. Much easier to successfully regulate the
former than the latter.

------
edwinespinosa09
I'm confused by the reference to SILEX.

Can someone elaborate?

~~~
mrdmnd
I was confused too - I think it references this:
[http://en.wikipedia.org/wiki/Separation_of_isotopes_by_laser...](http://en.wikipedia.org/wiki/Separation_of_isotopes_by_laser_excitation)

~~~
edwinespinosa09
Good find. I cross-referenced and I think you're right. Thanks!

------
compbio
I wonder if the recent fear of AGI suffers a bit from an investor's dilemma: rich
investors want to make the world a better place. This forces them to think
about the long-term impact of investments in AI technology. There are just too
many unknowns for a safe definitive answer.

In my view AGI is inevitable (possibly it already happened, somewhere,
sometime). See in this regard:

 _But if the technological Singularity can happen, it will. Even if all the
governments of the world were to understand the "threat" and be in deadly fear
of it, progress toward the goal would continue. In fiction, there have been
stories of laws passed forbidding the construction of "a machine in the
likeness of the human mind". In fact, the competitive advantage -- economic,
military, even artistic -- of every advance in automation is so compelling
that passing laws, or having customs, that forbid such things merely assures
that someone else will get them first._ Vernor Vinge
[http://mindstalk.net/vinge/vinge-sing.html](http://mindstalk.net/vinge/vinge-sing.html)

A lot of the concerns and regulations of stem cell research can be applied to
AGI and the merging of biological intelligence with artificial intelligence.
Injecting artificially grown brain cells into lesion areas of patients can
restore motor function and may soon help heal strokes. Extrapolate the
possibilities of this technology a few decades into the future and it seems
that the distinction between artificial and biological is only a manner of
speech.

Matt Mahoney has proposed a sketch of AGI in both his thesis
[http://cs.fit.edu/~mmahoney/thesis.html](http://cs.fit.edu/~mmahoney/thesis.html)
and numerous articles
[http://mattmahoney.net/agi2.html](http://mattmahoney.net/agi2.html). Matt
muses about different scenarios with malicious users of such a system (there
needs to be a reputation system for the knowledge sources fed into the
machine, and an AI police system to prevent users from employing the AI for
crime or destruction), and even the system itself could become malicious:

- Self-recursive improvement: The first program to achieve superhuman
intelligence would be the last invention that humans need to make. Smarter
than any doctor, such an AI could find any possible cure for human disease.
Though evolution and mutation are tricky: the machine could start out friendly
to humans, but evolve to dislike us.

- Uploading: People will want to upload digital, conscious versions of
themselves (an "internet of people"). This process will be friendly, until
people demand their avatars receive rights equal to those of humans, long
after the original people have died. Immortality leaves no place for the new
generation to shine.

- Intelligent worms: Worms with language and programming skills are able to
socially engineer your friends based on their profiles, and automatically find
exploitable flaws in weaker intelligent systems. Such worms may be so stealthy
that their presence will go unnoticed.

- Redefining away humanity: If happiness is optimizing a mental formula, then,
when we get to the point where we can directly optimize this with (virtual)
experiences, we will hit a maximum where any change in mental state will lead
to feeling less happy. To make sure we never run out of mental states, we
could possibly add these to our brains as extra memory modules. Soon our human
origins will be an insignificant fraction of our new being.

 _Then what do we become? At some point the original brain becomes such a tiny
fraction that we could discard it with little effect. As a computer, we could
reprogram our memories and goals. Instead of defining happiness as getting
what we want, we could reprogram ourselves to want what we have. But in an
environment where programs compete for atoms and energy, such systems would
not be viable. Evolution favors programs that fear death and die, that can't
get everything they want, and can't change what they want._

Opposed to the popular scenario of a malicious AGI, there is the scenario
where an AGI will pacify humans. AGI will be used for war. We should not fight
human wars while creating AGI, because we'd force the AGI to pick sides. If
the AGI picks our side, it will become the world's deadliest weapon. If the
AGI picks the opposing side, we will have made a most powerful enemy. To be
more intelligent than humans, to be better than them, does not mean you have
to be a more intelligent brute, better than humans at cruelty. It could also
mean understanding and fixing our human mistakes and childish, ego-driven
battles, thus bringing about world peace. In that sense, let's hope American
companies have something to do with the creation of AGI, because who knows how
it will react to Stuxnet 4.0 or to economic and psychological espionage?

Though, in principle, random number generators can, and thus will, generate
dangerous things too. They created us. And if we can't even trust ourselves,
how could we ever trust superior beings?

------
jondiggsit
Well, I, for one, welcome our AI overlord.

------
sjg007
Tic tac toe.

------
admandotcom
These essays are embarrassing in how poorly researched they are and dangerous
given the gravity of their arguments.

------
joeldg
One issue with SMI and what is known as "The Control Problem" is that you are
trying to come up with rules for something that is not just smarter than you,
but 'vastly' smarter than you, and in fact smarter than the entire human race
put together.

You are basically trying to figure out a way to, in effect, cheat a god,
because in relation to us, that is basically what it would be. An SMI is
pretty much the definition of a god (and possibly a crazy Greek god at that).

The idea that something like that would turn the universe into paperclips is
somewhat silly; most of the arguments along these lines discount the "S" part
of the whole thing.

What is the answer?

There is none.

------
MelatoninRonin
Let's take a good look at what happens when American companies try to push for
some incredible goal without stopping to think: see the Apollo missions.
Assuming creating the next step in our spiritual evolution is a good idea, we
shouldn't let this hype push us harder than we are supposed to go. I agree
that we need to slow down and be very careful. However, regulation is not the
complete answer. We need to completely restructure our society before we
introduce the tool of tools, to ensure that we can constantly use it with love
and empathy in mind, rather than the blatant and current misuse of high-tech
for greed, sloth, and wrath.

