
Stop Fearing Artificial Intelligence - frostmatthew
http://techcrunch.com/2015/04/08/stop-fearing-artificial-intelligence/
======
Moshe_Silnorin
Nobody is afraid of today's AI algorithms. But if we make machines that are
smarter than us and have desires, they will influence the future to achieve
their desires. If these desires conflict with our own, things will not end
well for the dumber party.

Since we really have no idea what we, collectively, consider a moral terminal
goal, let alone how to formalize one, there is no reason to expect the
first AIs to have goals that correspond to what we want. If AIs self-replicate
in a competitive ecology, what would be selected for would be agents millions
of times more intelligent than us who use their intellects only to make more
copies of themselves - using all available resources including those we need
to survive.

I'd recommend people read Stephen Omohundro's paper on the topic, Basic AI
Drives:
[https://selfawaresystems.files.wordpress.com/2008/01/ai_driv...](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf)

~~~
mbillie1
> But if we make machines that are smarter than us and have desires

It is a hypothesis that human desires - the conscious processes involved in
thinking "I want such-and-such" \- influence human behavior. Consciousness is
not whatsoever a solved problem, and it may be that we are just complex,
deterministic automatons whose consciousness and desires are _caused by_ our
behavior (like the steam produced by an engine), rather than causing it.

I'm not necessarily in this camp, but I bring this up to point out that there
is an awful lot of common prejudice bundled up in what we consider _our own_
intelligence/consciousness to be, and we project these biases onto what we
imagine AI will be. We almost certainly misunderstand our own consciousness
profoundly in this way, and at any rate there is no definitive scientific (or
philosophical) answer to "what is self awareness" or "what role do desires
play in human behavior" (again, the desire could be the steam from the
engine).

~~~
amelius
> and it may be that we are just complex, deterministic automatons whose
> consciousness and desires are caused by our behavior (like the steam
> produced by an engine), rather than causing it.

Then explain why we are talking about it.

~~~
mbillie1
I'm not sure I understand your point. Computers communicate with one another
constantly. There certainly does _seem_ to be something different about us,
but I've yet to see any scientific or philosophical proof that our self-
awareness is necessarily involved with our intelligence or behavior. There's a
good deal of philosophical literature around this idea (
[https://en.wikipedia.org/wiki/Philosophical_zombie](https://en.wikipedia.org/wiki/Philosophical_zombie)
), and virtually no scientific explanation of consciousness.

I grant of course that this may not be a terribly _useful_ way for human
beings to view themselves, and it is quite contrary to popular belief, but for
all that it could still be true.

------
ikeboy
Instead of going through each of the points made, let me link to some pages
that already dealt with many of them. It sounds like all this guy heard was
"AI might be dangerous" and decided to rebut that, without looking at the
arguments put forward by so-called "doomsayers". Protip: before demolishing
an argument, try to find one person (or a couple, if you have more time) who
makes the argument seriously, instead of taking your understanding of the
argument from news articles.

[https://intelligence.org/2015/01/08/brooks-searle-agi-voliti...](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/)

[http://lesswrong.com/lw/k4h/request_for_concrete_ai_takeover...](http://lesswrong.com/lw/k4h/request_for_concrete_ai_takeover_mechanisms/)

[http://slatestarcodex.com/2015/04/07/no-physical-substrate-n...](http://slatestarcodex.com/2015/04/07/no-physical-substrate-no-problem/)

[https://intelligence.org/files/AIPosNegFactor.pdf](https://intelligence.org/files/AIPosNegFactor.pdf)

[http://futureoflife.org/misc/open_letter](http://futureoflife.org/misc/open_letter)
(this is just to counter the implied claim that nobody who actually deals in
AI disagrees with him and it's only uninformed people like Bill Gates(!) that
worry.)

------
jeremynixon
> Superhuman Machine Intelligence does not have to be the inherently evil
> sci-fi version to kill us all. A more probable scenario is that it simply
> doesn’t care about us much either way, but in an effort to accomplish some
> other goal (most goals, if you think about them long enough, could make use
> of resources currently being used by humans) wipes us out. - Sam Altman

[http://blog.samaltman.com/machine-intelligence-part-1](http://blog.samaltman.com/machine-intelligence-part-1)

Oates seems to have missed the concept of an Intelligence Explosion, which is
why it is difficult to compare current AI limitations to the behavior and
capabilities of a superhuman machine intelligence.

I would strongly recommend reading Nick Bostrom's Superintelligence for a full
treatment of the source of worry for many brilliant minds.

~~~
heimann
I'm surprised by how many self-proclaimed experts in AI-related fields
completely ignore this point about resource optimization. Any article with a
title like OP's that doesn't address it seems pointless to me, just an attempt
to jump on the new bandwagon of dismissing fears of strong AI.

------
leot
It seems most people who take a position _against_ the feasibility of machine
superintelligence are basically making an argument-from-lack-of-imagination:
"I can't think up a plausible way that bad things could happen, therefore bad
things won't happen." This is then backed up by an argument-from-authority:
"And I've been in this field for X decades, so you should trust my ability to
imagine this kind of thing."

We have seen arguments like this throughout history. Often, what motivates
them is the writer's own failures and consequent life narrative: "I couldn't
do Y, so Y isn't possible right now."

~~~
at-fates-hands
I completely agree with your points.

It's strange to me to think that, as humans, we will remain at the top of the
food chain forever. It seems like a fault of our genetic makeup to assume
nothing could be capable of knocking us off that top rung of the evolutionary
ladder.

It's like we can't fathom building something that would undermine our own
existence, so we downplay the notion that an AI could have some sort of bad
intentions just because we built it.

~~~
leot
"Bad intentions" aren't even necessary -- out of control AGI will probably
look more like "oops, I hit >rm -rf /".

AFAIK, nearly every piece of software ever deployed has had bugs that humans
have had to subsequently patch. AGI must not have bugs, even though it would
probably be _much_ easier to build if it could.

~~~
TeMPOraL
> _AGI must not have bugs, even though it would probably be much easier to
> build if it could._

Which is exactly what raises the issue to the level of an existential threat:
it's _easier_ to build an Unfriendly AGI than a Friendly one, which means the
first AGI ever built is likely to be Unfriendly, and it takes only one
impatient team or person unleashing an Unfriendly AGI on the world to kill us all.

------
politician
HFT algorithms aren't even AI, but they are capable of doing enormous damage
when they become synchronized - the resulting flash crashes can wipe out large
amounts of capital.
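
To make that feedback loop concrete, here is a toy simulation. It's a minimal
sketch, nothing like real market microstructure; every threshold and constant
below is invented for illustration.

```python
import random

# Toy model of a synchronized flash crash. Nothing here resembles real
# market microstructure; all thresholds and constants are invented.
random.seed(1)

price = 100.0
last_move = 0.0
# Each bot sells when the previous tick's move breaches its threshold.
bots = [random.uniform(-0.6, -0.1) for _ in range(1000)]

for tick in range(15):
    move = random.gauss(0, 0.2)                      # ordinary market noise
    sellers = sum(1 for t in bots if last_move < t)  # bots spooked by last tick
    move -= 0.004 * sellers                          # the herd's price impact
    price += move
    last_move = move
    print(f"tick {tick:2d}: sellers={sellers:4d} move={move:+8.2f} price={price:9.2f}")
```

One noisy dip is enough: the bots' reaction becomes the next tick's signal,
and the loop feeds on itself until the price collapses.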

And it's most likely that these things will emerge from industries with no
effective regulatory oversight (e.g. HFT under the SEC), so effective controls
against damage will most likely just not exist.

There won't be a shotgun welded to the main processing core of the Wintermute
AI that decides, for whatever reason, to bankrupt your company, halt food
shipments to your grocery store, or shut down the systems that clean the water
supply.

What scares me about AI is the ineffective controls that will exist on these
things; given the state of most software development practices, I don't think
that's a huge stretch.

------
neonbat
Oates' argument generally seems to be "It won't be scary because it won't be."
He rejects the premise of Musk, Gates, etc. He basically says we won't be able
to build a superhuman AI. All his arguments are then predicated on this
assumption. And the assumption that we won't build a superhuman AI is just wrong.

For example:

> [Would this superhuman intelligence inherently go nuclear, or would it
> likely just slack off a little at work or, in extreme cases, compose rap
> music in Latin?]

This statement assumes an AI of the caliber of a human intelligence. Making
rap music in Latin is something a human would do. A superhuman AI is beyond a
human intelligence in the same way a human's capacity for thought and tasks is
beyond that of an ant.

The premise of a superhuman AI has at least some evidence supporting it. Oates
offers no solid evidence for his rejection of the premise. His argument is
therefore kinda bad (but not necessarily incorrect).

~~~
chubot
No, his argument is "nothing we have built so far (i.e. special purpose AI)
leads us to believe that we will build general purpose AI".

There is nothing inherently contradictory about a future with ONLY special
purpose AI. Technologically speaking, it doesn't really follow that one leads
to the other. He is using his credibility as an AI practitioner to state this,
although he doesn't go into details.

He's saying the burden of proof is on the doomsayers, which is reasonable. I
still don't see the argument either.

~~~
neonbat
I agree; that is part of his argument. Obviously, though, it is a completely
flawed argument. Up until the creation of the personal computer, most machines
were specialists. They were large computers designed for specific computation
tasks, or machines made specifically for manufacturing certain kinds of goods.
Generalist computers didn't have much precedent. That's kind of the point of
new technology. It's not necessarily predicated on something that came before.
It is orders of magnitude better.

To me, this kind of argument has historically fared worse than creative people
dreaming about singular, binary acts of creation. Historically, those people
have won.

~~~
icebraining
_Up until the creation of the personal computer most machines were
specialists. They were large computers designed for specific computation tasks
or machines made specifically for manufacturing certain kinds of goods._

Sorry, but that's just not true. Back in the 60s, way before personal
computers, there were many time-sharing companies selling general "computing
resources" to different people, who would dial up into the machines and run
their own programs.

~~~
neonbat
Yeah, I thought about that when I was writing the reply. I don't really
consider that a generalized use case. Is it more generalized? Yes. Computing
itself, however, can be argued to be a specialized task. My computer today
doesn't do "computing"; it lets me read news stories, talk on Skype, send
mail, and have philosophical discussions on Hacker News. The personal computer
is so clearly a generalized machine, whereas those time-sharing rigs back in
the 60s clearly were not. They were specialized tools for programmers.

~~~
icebraining
PLATO had graphical terminals with touch screens and software implementing
forums, message boards, online testing, e-mail, chat rooms, picture languages,
instant messaging, remote screen sharing, and multiplayer games. They weren't
just used by programmers by a long shot.

~~~
neonbat
Ok, fine. PLATO was a super cool _more generalist_ machine that happens to not
be a personal computer. My use of personal computers was just an example that
I thought would get a point across about the evolution of technology. The fact
that PLATO existed and was very different from what came before it (as per all
the cool stuff you listed) if anything strengthens the argument I was making.
Generalist machines were a big step up from what came before, orders of
magnitude better. A super-AI will also be _unpredictably_ orders of magnitude
better than what we've done so far.

~~~
icebraining
But my point is that the generalist computers weren't a sudden or
unpredictable step, they were the result of slow and progressive improvement.
If generalist computers are any example to go by, the fact that we don't have
anything close to a primitive general AI indicates we're far, far away from
developing anything close to a "super-AI".

------
pdkl95
I don't fear "AI". What I fear is AI being used as a replacement for human
judgment or as a justification for greed and/or bigotry.

We already have this problem, where decisions that should be made by the
humans involved are restricted under the "policy" excuse of bureaucracy, blind
to the reality of unusual situations that require an exception in the usual
routine. I fear AI because it is harder to fight complicated systems. I fear
AI because, like the dubious "studies" that came before it, it will become a
tool to obfuscate and hide bias and prejudice.

------
RA_Fisher
In a data mining / machine learning class I took in university, a key point my
teacher made was the 'No Free Lunch' theorem. In that class the theorem was
offered on its face, and I haven't yet read the original paper, just summaries
and recounts
([http://web.archive.org/web/20140111060917/http://engr.case.e...](http://web.archive.org/web/20140111060917/http://engr.case.edu/ray_soumya/eecs440_fall13/lack_of_a_priori_distinctions_wolpert.pdf)),
but the point of the theorem feels very related to the point the author makes.
The thesis boils down to: 'not having seen the data, there is no universally
superior algorithm.' The author's point of view seems to comport with this
theorem. I haven't gone through the paper myself, so I can't argue for or
against it.

However, in my experience as a professional statistician who has used Random
Forests, SVMs, Neural Networks, etc., finding the right algorithm takes
'feature engineering'
([http://blog.kaggle.com/2014/08/01/learning-from-the-best/](http://blog.kaggle.com/2014/08/01/learning-from-the-best/))
and close inspection of the data and basis function. Are the data fit well by
rectangles? Then you'll want to use decision trees / random forests. Curvy?
Give SVMs a try. Do the data take intricate patterns? Look into boosting
(fitting on residuals recursively).

Feature engineering boils down to the old statistical phrase, 'live with the
data.' That is, you actually need to use your own mind and beliefs to organize
inputs before feeding them into an algorithm. So my experience, what seems to
hold in Kaggle competitions, and the perspective of this author all point the
same way: algorithms are either tightly constrained and perform really well at
a specific task, or they perform poorly (but quickly) in a broad sense.
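
To make the rectangles-vs-curves heuristic concrete, here is a minimal sketch.
Python/scikit-learn is my assumption (the commenter works in R), and both
datasets are synthetic toys:

```python
# Sketch of "match the algorithm to the shape of the data". The datasets
# are synthetic toys; which model wins depends on the boundary's shape.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)

# "Rectangular" data: class 1 inside an axis-aligned box.
X_rect = rng.uniform(-1, 1, size=(500, 2))
y_rect = ((np.abs(X_rect[:, 0]) < 0.5) & (np.abs(X_rect[:, 1]) < 0.5)).astype(int)

# "Curvy" data: two interleaving half-circles.
X_curve, y_curve = make_moons(n_samples=500, noise=0.15, random_state=0)

for name, X, y in [("rectangles", X_rect, y_rect), ("curves", X_curve, y_curve)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    tree = DecisionTreeClassifier(max_depth=4).fit(Xtr, ytr)  # axis-aligned splits
    svm = SVC(kernel="rbf").fit(Xtr, ytr)                     # smooth boundaries
    print(f"{name}: tree={tree.score(Xte, yte):.2f}  rbf-svm={svm.score(Xte, yte):.2f}")
```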

~~~
elsewhen
If our computing systems continue to get significantly more powerful, won't a
brute-force system obviate feature engineering? In other words, can't Kaggle
eventually be outcompeted by an algo-of-algos that iterates through all known
approaches and then settles on the ideal candidate?

~~~
RA_Fisher
Hmm, great question. The caret package for R might actually make that
loop possible:
[http://caret.r-forge.r-project.org/](http://caret.r-forge.r-project.org/)
However, in my experience it's really easy to accidentally ask caret to
iterate over a grid of parameters for just one model that would effectively
take forever to complete (the useful grids change for each dataset). My bet
would be that the algorithm space will expand with more and more
computationally intensive methods such that we're always chasing this brute-
force method. Even with this great 'caret' abstraction layer today, boy would
it be hard to run through even a handful of algorithms in an algorithmic way
(not having seen the data ahead of time).
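
For what it's worth, here is a crude sketch of that loop (caret is R; this is
just the same idea expressed with scikit-learn, my assumption here, and a
deliberately tiny candidate list):

```python
# Crude "algo-of-algos": loop over candidate models, keep the best by
# cross-validation. scikit-learn and this candidate list are my choices,
# not anything from caret; realistic grids would be far larger.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
    "rbf svm": SVC(kernel="rbf"),
    "logistic": LogisticRegression(max_iter=5000),
}

for name, model in candidates.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:14s} mean CV accuracy = {score:.3f}")
```

The catch described above still bites: the useful parameter grid for each
model changes with the dataset, so an honest version of this loop blows up
combinatorially.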

------
azakai
> [combination of things necessary for an AI to become a threat to humanity]
> Sound reasonable to you? Me either

The question isn't whether it "sounds reasonable". If there is even a 2%
chance of that happening, we need to consider it. Anything that can end
humanity is worth considering even if it is unlikely.

> Even if this were the case, there is absolutely no reason to believe that,
> by virtue of running on a computer, an AI will be better at computers than
> we are.

No, there is every reason to believe that. Once we make an AI that is equal to
us, we will be able to scale it up by running it on the next generation of
CPUs, or running more such CPUs in parallel. We can't scale up human
intelligence in any similar way. If an AI can match us, it can far exceed us.
edit: in fact, the AI may well work to scale itself up

~~~
ikeboy
I think the list is misleading; the conditions it names are not necessary for an AI to be dangerous.

>It has an “I,” a sense of self distinct from others.

I deny this is required for a program to be dangerous.

>It has the intellectual capacity to step outside of the boundaries of its
intended purpose and programming to form radically new goals for itself (the
“I”).

Deny this as well; even carrying out programmed goals can be harmful. In the
words of Superintelligence (copied from
[https://intelligence.org/2015/01/08/brooks-searle-agi-voliti...](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/)):

>[W]e cannot blithely assume that a superintelligence with the final goal of
calculating the decimals of pi (or making paperclips, or counting grains of
sand) would limit its activities in such a way as not to infringe on human
interests. An agent with such a final goal would have a convergent
instrumental reason, in many situations, to acquire an unlimited amount of
physical resources and, if possible, to eliminate potential threats to itself
and its goal system.

>It has access to resources on a global scale to carry out the plan.

This sounds reasonable. As
[http://slatestarcodex.com/2015/04/07/no-physical-substrate-n...](http://slatestarcodex.com/2015/04/07/no-physical-substrate-no-problem/)
points out, _humans_ have made billions anonymously, just by being connected
to the internet.
[http://lesswrong.com/lw/k4h/request_for_concrete_ai_takeover...](http://lesswrong.com/lw/k4h/request_for_concrete_ai_takeover_mechanisms/)
gives some ways an AI could take over.

------
mbillie1
Virtually all of these discussions of AI come from the very peculiar way in
which we view our own intelligence. It's only the notion of a "conscious will"
or whatever unifying feature we find in reflective self-awareness that
prevents us from viewing intelligence as nothing more than large chains of
deterministic complexity. It is a hypothesis that there is something to _even
human behavior_ which is more than this deterministic complexity. In other
words, there may be no qualitative leap from the AI we currently have to some
sort of "real AI" or singularity or whatnot. It may be that you just build up
more complex cases over time.

That way of thinking is hurtful to our egos (we would be simply complex
deterministic automatons) but there is really nothing other than our pride to
suggest that this is not, or could not, be the case. It would also put to rest
concerns about machine AI, as it would be an extension of what currently
exists, simply on a larger scale.

------
ThomPete
Hardware and software evolve fast; humans don't. This is amplified by
increasing network complexity.

Unless you think there is evidence that this will slow down somehow, and that
we are becoming less and less dependent on technology, I don't see why the
burden of proof is on the doomsayers (who are not the only ones saying this).

------
zxcvcxz
This is a little off topic but maybe someone who knows more about this than I
do can chime in and give me some links to studies on this:

What is the possibility of creating a biological computational device and are
there currently efforts to do so? To me it seems like rather than physically
building computers out of processed materials, growing computers in labs might
be much more efficient and more powerful. The human brain is an example of
this (biological, though not lab-grown).

Are there any studies that look into harnessing the power of biological
computers? I mean, actually tapping into neurological pathways, giving input
and receiving output?

------
colordrops
All this hype about the coming AI menace seems like old fears about new
technology, like the 19th century fear of the explosion of printed material
exhausting and destroying children's minds.

We still don't know how to create a human level general intelligence and it
may be far more difficult than we think. I suspect that the pace of brain/mind
research will be tied in with AI research and the boundary between AI and the
human mind will blur before general AI has a chance to take over. We will be
augmenting our own intelligence in step with AI progress.

------
troels
One thing I don't see discussed much in these superintelligence/singularity
scenarios is the possibility that there is an upper limit to intelligence.
Maybe it simply isn't possible to be much smarter than we already are. So
maybe we build AGI, but even with all its smarts it can't improve beyond, say,
Einstein level (or what have you).

But surely I can't be the first to make that suggestion, so who has discussed
this possibility?

~~~
xamuel
Even if that's the case, quantity wins where quality fails. Imagine what you
could do if you had so many clones of yourself that you could monitor every
website, every forum, every person logged on IRC, 24 hours a day, giving each
one of them your complete undivided attention.

(Not that I'm taking an anti-AI stance. Machines do what we program them to
do; that's a tautology. If there is risk, it's from user error / intentional
terrorist action / AI upsetting the distribution of wealth, not from some kind
of magic Skynet event.)

~~~
troels
Sure, but there is communication overhead in a large group. One smart
person can be extremely efficient, but 10 smart people are probably not 10
times as efficient. And it gets worse with scale.
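
One crude way to put numbers on that intuition; the pairwise-overhead model
and the cost constant here are invented purely for illustration:

```python
# Toy model of coordination overhead: with n people and pairwise
# communication, overhead grows like n*(n-1)/2. The cost constant is
# invented; only the shape of the curve matters.
def effective_output(n, pair_cost=0.01):
    overhead = pair_cost * n * (n - 1) / 2   # fraction of time lost coordinating
    return max(0.0, n * (1.0 - overhead))

for n in (1, 10, 100, 200):
    print(f"{n:4d} people -> {effective_output(n):7.1f} person-equivalents")
```

Under these made-up numbers, 10 people come out at about 5.5
person-equivalents, and past a certain size the group produces nothing at all.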

------
foobarqux
It becomes much easier to understand the AGI threat if you consider whole
brain emulations -- human minds running at superhuman speed on silicon.
Hollywood has made the serious implications of that type of development
obvious, if they weren't already.

Whole brain emulations also make it clear that the issue has nothing at all to
do with human values: a human superintelligence would be a catastrophic risk
to humanity.

~~~
Balgair
Whole Brain Emulation is far off, even factoring in Moore's law, because we
are still far from understanding the brain. Yes, optogenetics and the
connectome are going to bring us a long way there, but in the end, we have to
accept that the brain is far too plastic to really know, at any point in time,
what is going on.

Experiments with macaque motor cortex (M1) provide a great example. When you
cut off a finger or toe and record from M1 at the same time, you can watch the
synapses change in real time. The remapping of M1 in the monkey starts the
second the nerves are severed. This remapping is poorly understood as it
stands. Typically, the brain will allow innervation by the areas that control
the other digits into the now useless area that mapped the missing digit. Even
more interesting is what happened when a pair of monkeys was stolen by
anti-vivisectionists (the Silver Spring incident, I think [0]). When they were
returned, the monkeys' M1 had totally remapped from its original state.
Hippocampal neurogenesis [1] is also very poorly understood today. Why our
brains make new neurons, and mostly for the short-term memory centers in the
hippocampus, is a mystery.

What I am getting at is that we do not understand our own intelligence, from
the standpoint of its mechanics, anywhere near well enough to emulate a brain.
It is a looooooong ways off. To truly emulate it, you have to know that it is
a moving target, and therefore any emulation is a moving one as well. One that
has to live in a body that you emulate, in a world that impinges on it. Trying
to take the mind out of the body is impossible, as is trying to take both of
them out of the world. The results would have no meaning.

[0][https://en.wikipedia.org/wiki/Silver_Spring_monkeys](https://en.wikipedia.org/wiki/Silver_Spring_monkeys)

[1][https://en.wikipedia.org/wiki/Neurogenesis](https://en.wikipedia.org/wiki/Neurogenesis)

~~~
foobarqux
The point is that the danger of AGI is much easier to understand through whole
brain emulation, not that whole brain emulation is necessarily imminent. That
said, the most pernicious risk in AGI is that it will happen "slowly, then all
at once"; that it will always be a long way away until the day it isn't.

Besides, it isn't obvious to what extent we need to understand the brain in
order to transfer it to another substrate: much of engineering relies on a
simplified understanding of the world.

------
chubot
I read "Superintelligence" by Nick Bostrom, essentially on the recommendation
of Elon Musk (he tweeted about it). It talks about the dangers of strong AI
and possible paths to it, and how humans can mitigate its effects.

The only reason I read past the beginning is because in the preface he says:
"This book is likely to be seriously mistaken in a number of ways".

So at least he's intellectually honest. I believe he's building 300 pages of
argument and analysis on a flawed premise.

As far as I can tell, the entire discussion rests on what he calls
"instrumental goals" vs. "final goals". (This article I found on Google has
similar content:
[http://www.nickbostrom.com/superintelligentwill.pdf](http://www.nickbostrom.com/superintelligentwill.pdf)
)

His example is the "paper clip maximizer":
[http://wiki.lesswrong.com/wiki/Paperclip_maximizer](http://wiki.lesswrong.com/wiki/Paperclip_maximizer)

In this situation, the final goal is: Produce the maximum number of paper
clips.

The instrumental goal is: acquire all the resources in the world so that you
can direct them toward paper clip production, which involves destroying all
humans, etc.

Personally, I don't believe this threat is worth thinking about at this point.
The supposed path to implementing such a technology isn't credible, and it
seems orders of magnitude less likely than, say, us having to evacuate the
entire planet.

In other words, I believe that we will be able to build very useful special
purpose AIs that accomplish our goals. I can see a future full of benign
"plant-like" intelligences, existing indefinitely. They are machines that take
in incredible amounts of information, and spit out ingenious answers that no
human could have come up with.

From that, it doesn't follow that there is any motivation to take over the
world.

We should think about the many, many challenges ahead with special purpose AI
instead, and our increasing dependence on computing.

All these special purpose AIs will be collecting everybody's personal data,
shaping our behavior, etc. For example, you can easily imagine a company like
Facebook or Google deciding to sway an election.

There are a lot more important problems to be thinking about now.

~~~
ikeboy
Why do you feel we will not build "Strong" AI? We know AI as smart as humans
_can_ theoretically exist, because humans exist, so intelligence is possible.
Is there a reason to think that humans are near the upper limit of what's
possible? If not, why can't we build something significantly smarter than us?

~~~
DougMerritt
Just to play Devil's Advocate, not because I believe this:

We can, and do, create machines that are faster than humans, and it is
reasonable to assume that eventually computers can be created that have
capacity and connectivity larger than humans.

But that doesn't prove that the result will be as intelligent as humans, let
alone more intelligent, because intelligence is clearly more than just speed
and capacity.

Edit: for instance, faster and bigger hardware does not change the big-O
complexity of an algorithm, so faster and bigger does relatively little to
improve our ability to solve exponential problems.

_Exactly_ what intelligence is, is still not known, and it is quite possible
that we will need an _algorithmic_ breakthrough to create something equivalent
to human minds.

And even if _that_ happens, making _that_ thing faster and bigger than human
minds may turn out not to result in superintelligence -- that would just be
another similar assumption as the first.

So it might need yet another algorithmic breakthrough to create
superintelligence, and it is possible that neither humans nor human-equivalent
AIs will be able to make that breakthrough.

Again, I don't believe the above, but I think it is purely a matter of
optimism versus pessimism, not about things that are proven.

~~~
ikeboy
>Exactly what intelligence is, is still not known, and it is quite possible
that we will need an algorithmic breakthrough to create something equivalent
to human minds.

Let's assume this is true.

>And even if that happens, making that thing faster and bigger than human
minds may turn out not to result in superintelligence -- that would just be
another similar assumption as the first. So it might need yet another
algorithmic breakthrough to create superintelligence, and it is possible that
neither humans nor human-equivalent AIs will be able to make that
breakthrough.

This seems unlikely to be true. Imagine we have a program that is as smart as
the human brain. It's running on hardware a million times as fast, though. So
at the very least, it can think in one day what a human can think in a million
days. Even if it's restricted to human-level intelligence, it should still be
far more effective than a human. This is even without considering self-
modifications.

~~~
DougMerritt
> So at the very least, it can think in one day what a human can think in a
> million days.

You are simply reiterating the original position that I was devil's-advocating
against.

It is not proven that a million days of work equals superintelligence, even
though obviously it would be more effective than 1 day.

If some activity is exponential, like if it takes 10^x days to accomplish,
then a human could solve problems of size 3 in about three years, and the
million-fold faster computer would take only 10^(x-6), or about a minute.

And in three years it could solve problems of size 9. But a problem of size
1000 would take so many multiples of the lifetime of the universe that it
would appear to be approximately as slow as a human, the difference is
negligible.
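
The arithmetic above checks out; here it is as a couple of lines, under the
same toy assumption that a problem of size n takes 10^n days:

```python
# Same toy assumption as above: size n takes 10**n days. Working in
# log10 keeps the size-1000 case from overflowing a float.
def log10_days(n, speedup_exp=0):
    return n - speedup_exp  # log10 of (10**n days / 10**speedup_exp)

for n in (3, 9, 1000):
    print(f"size {n:4d}: 10^{log10_days(n, speedup_exp=6)} days at a million-fold speedup")
# size 3 -> 10^-3 days (~1.5 minutes); size 9 -> 10^3 days (~3 years);
# size 1000 -> 10^994 days, where the speedup is invisible.
```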

This is not hypothetical; the best known complete (not approximate) solutions
to things like the traveling salesman problem, or to fitting odd sized
containers into a container ship, are in fact exponential like that as far as
we currently know.

The topic is not limited to algorithmic complexity theory like that; it's just
a proof of concept.

So the point is that superintelligence _might_ turn out to be something that a
million-fold improvement isn't enough to achieve.

We don't understand intelligence, so it's pretty obvious we don't have a
handle on superintelligence.

Again, all this is a matter of optimism or pessimism up until the subject is
understood far, far better.

~~~
ikeboy
>It is not proven that a million days of work equals superintelligence, even
though obviously it would be more effective than 1 day.

>This is not hypothetical; the best known complete (not approximate) solutions
to things like the traveling salesman problem, or to fitting odd sized
containers into a container ship, are in fact exponential like that as far as
we currently know.

Why would something need to solve problems like that to be significantly
smarter than us? On real problems, having 1000 times as much time to work
surely helps a lot. And that's without even taking into account how much
benefit you get from never being lazy, and so on.

~~~
DougMerritt
I am merely playing Devil's Advocate. I personally think that speedups of
1000-fold and million-fold etc. are a really big deal.

"Quantity has a quality all its own."

Nonetheless, it remains _possible_ that there is a difference between a fast
intelligence solving things faster than we can and a superintelligence
(from extraterrestrial aliens or something) that can create superior
inventions or art or math or whatever that humans couldn't, no matter how much
time was allowed.

Time is not always the issue.

Anecdote from the Feynman biography "Genius":

A physicist once said that there were two kinds of genius. The first kind was
the sort that a good but non-genius physicist could imagine that they, too,
could have achieved the same results as the genius, if only they could work on
just that for decades or even lifetimes.

But then he said that Feynman was the second kind of genius, whose results
seemed to have been created by magic, and that ordinary physicists could not
imagine achieving no matter how long they worked at it.

The key thing here is that some pretty smart people already believe that there
are at least two kinds of very high intelligence, one of which is not related
to speed.

So my point is merely that the same _might_ apply to these other topics. We
know so little about how to create a mind that there simply is no proof one
way or the other.

And that latter is an objective fact. There are no existing algorithms for
creating human-level minds that merely need more CPU + RAM + disk to work;
there just aren't. So we don't know. Different people choose to believe one way
or the other.

That's all.

------
bsenftner
Would your opinion change if you were told that article was written by an AI?

------
jonsen
We wouldn't wish for the end of humanity, but if it implies the end of
inhumanity, it would be a net gain.

------
michaelochurch
I fear AI far less than I fear humans.

First, twenty thousand years or more of human history is about us trying to
make other humans into machines: slavery, rape and forced prostitution,
political coercion, war, overreaching prison systems and draconian punishments
for minor crimes. We're not very nice to each other. Rather, we're controlling
assholes as a species. Most of the ones who get power in organizations small
and large are more focused on relative dominance than on absolute gain. From
the Epic of Gilgamesh to the Bible to modern Stalinism, we see the effort of
powerful people to turn the rest of humanity into a machine that exerts their
will.

Given 20,000 years of abusively turning humans into machines, tearing apart
their families and banning their religions, attempting to boil them down into
simple working devices, is it really likely that a control-freak species like
us is going to let a machine try to assert itself as human? In peacetime, I
think we'll be pretty good at preventing that from happening. We're good at
mechanizing labor and, now that we have devices that outperform us at menial
and dangerous work, that's becoming an asset rather than a flaw.

Sure, it's possible that we get outdone by AI, and in fact there's one context
in which it's likely: war. Counted among the casualties of an all-out or
existential war are all the rules that people once believed in, and all of our
assumptions about what humans (who, normally, aren't so terrified and
desperate or so power-hungry) will do. If a runaway AI destroys humanity, it
will probably begin from humans warring against other humans, and in an all-
out conflict where surrender is not seen as possible by either side.

This is not to downplay the risk. At best, I'd be saying that AI isn't
dangerous in the way that guns aren't dangerous -- and, of course, we know that
guns are extremely dangerous when used by humans to kill other humans.
Luckily, this desire to kill another person doesn't seem to exist at the scale
that would enable the existence of guns to be an existential threat.

So why might humans tend to kill other humans? Crime often results from
scarcity. Well, technological unemployment is only accelerating. What happened
to agricultural commodity prices in the 1920s, leading to widespread rural
poverty and a global depression in the 1930s, is happening to _almost all
human labor_ today. It's terrifying because ill-managed prosperity begets
scarcity and that begets fear and authoritarianism and war. While we're
decades away from being able to build a species-killing AI (which, of course,
would typically not be designed as such; it would probably be designed to kill
some humans before running amok and doing fatal damage) I do think that if we
are similar in character, by that time, to what we are now, it's a real
threat. Power accrues, in most human organizations, not to those who deliver
progress but to those who create scarcity. If this doesn't change, then wars
will never end and that fact alone is an existential risk.

~~~
joshrivers
An idea that I got after reading Guns, Germs, and Steel was that at a basic
level, killing other humans is the correct response to most threats,
annoyances, or disagreements. Manager wants to micromanage with story points?
Kill him. Problem solved.

Now, clearly, if everybody is killing everyone beyond their immediate family,
it is hard to have any sort of larger society. Living and collaborating under
constant threat of immediate death wouldn't work. So we have a large set of
social adaptations to limit the killing and restrain ourselves.

My point here is that 'why do humans kill other humans' is a less interesting
question than 'why are the social adaptations that prevent killing not taking
effect?' The problem of fighting isn't a moral failing, or some flaw in the
human soul. Killing makes sense within the decision-making scope of our
conscious brains; it results from an inadequate build-up of the collective
structures of inhibition required to have larger groups of people in closer
contact with each other. As the groups get bigger and the contact becomes
closer, we need to engineer new schemes for inhibiting the 'let's just kill
them off and solve this once and for all' urge.

------
tsaoutourpants
What an idiot.

------
nodata
Careful... Roko's basilisk
([http://rationalwiki.org/wiki/Roko%27s_basilisk](http://rationalwiki.org/wiki/Roko%27s_basilisk))

