
The Myth of a Superhuman AI - mortenjorck
https://backchannel.com/the-myth-of-a-superhuman-ai-59282b686c62
======
_greim_
> Temperature is not infinite [...] There is finite space and time. Finite
> speed.

Just want to point out that this is true; however, these quantities go
astronomically high.

> what evidence do we have that the limit is not us?

We can measure the speed impulses travel through neurons, and compare that to,
say, the speed of electrical impulses through silicon or light through fiber.

We can find the maximum head-size that fits through a vaginal canal, or the
maximum metabolic rate a human body could support, and try to determine if
these factors imposed any limitations on intelligence during human evolution.

We can look at other evolved/biological capabilities, like swimming or flying,
and compare them to state-of-the-art artificial analogs, and see if a pattern
emerges where the artificial analogs tend to have similar limitations as their
biological counterparts.

~~~
andrepd
> Temperature is not infinite [...] There is finite space and time. Finite
> speed.

There's no theoretical limit for temperature, and we believe spacetime could
be infinite even though the observable universe is not. Nevertheless this is a
very silly argument. If it's finite but absurdly high it's good enough for the
purpose.

~~~
adrianN
If the energy density in a fixed volume of space gets too high it collapses
into a black hole. That seems to suggest that there is a kind of limit to
temperature.

~~~
GotAnyMegadeth
I'm a complete noob in this area, but that doesn't mean that temperature has
reached its limit, does it? The black hole can carry on getting more dense and
hotter.

~~~
adrianN
No, the black hole can't get more dense. If you put more energy into it, it
gets bigger.

------
MR4D
First, let me say that I'm generally a Kevin Kelly fan.

That being said, I think his article shows extreme arrogance for one simple
reason: To suppose that superhuman AI (AI smarter than us) won't exist is
roughly the equivalent of saying that humans are at the limit on the spectrum
of intelligence. Really? Nothing will ever be smarter than us?? Highly
doubtful.

That should stand on its own, but I have other critiques. For instance, why
does silicon have to be assumed? Why not germanium or graphite, or something
else? I have little faith that a CPU circa 2050 will be built exclusively on
silicon. By 2100, no way.

Second, there is a simple definition of intelligence that is applicable to
many forms: intelligence is the ability to recognize patterns and make
accurate judgements / predictions based on previously seen patterns. The
higher the accuracy or the more complicated the pattern, the higher the
intelligence.

My final point of contention is the idea that AI must emulate human thinking.
Why? Maybe human thinking sucks. Maybe Dolphins have much better intelligence,
but due to a lack of opposable thumbs, they don't rule the world like we do.
And lest you think that less intelligent species can destroy others, could you
really doubt that roaches and ants will be extinct before us?

~~~
interfixus
_To suppose that superhuman AI (AI smarter than us) won't exist_

Which is exactly what Kelly doesn't say. He says that the _smarter_ concept is
ill-defined, and that our current fantasies of some universally superior AI
galloping onto the scene and taking over everything may be just that -
fantasies.

~~~
BoiledCabbage
> He says that the smarter concept is ill defined

Which isn't a contradiction like he claims it is. It just means that there are
many different ways that a future AI can be smarter than us. That intelligence
could be multi-dimensional.

But guess what: we can easily take that multi-dimensional input and find a
formula that reduces it to a single scalar value based on our practical
valuation of these forms of intelligence (almost like an intelligence 'utility
function' from economics), and problem solved. We're right back to a single
dimension for ranking intelligence.
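
To make that concrete, here is a toy sketch (the dimensions, scores, and weights are all invented for illustration) of collapsing a multi-dimensional intelligence profile into one scalar via a hypothetical utility function:

```python
# Hypothetical 'intelligence utility function': the weights reflect our
# practical valuation of each dimension (all numbers made up).
weights = {"math": 0.4, "language": 0.3, "planning": 0.3}

def utility(scores):
    # Reduce a multi-dimensional profile to a single scalar for ranking.
    return sum(weights[dim] * scores[dim] for dim in weights)

human = {"math": 60, "language": 90, "planning": 70}
ai = {"math": 95, "language": 85, "planning": 80}

# With these weights the AI ranks higher on the single collapsed dimension,
# even though the human scores higher on one axis (language).
print(utility(ai) > utility(human))  # True
```

The point of the sketch is only that *any* fixed weighting restores a total order; which weighting reflects "our practical valuation" is of course the contested part.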

It was a really weak argument he put forward.

Another weak argument was the branching / fan pattern of various species. Yes,
all living species are at the peak of evolution for their environment, but
they weren't all pressured to evolve more intelligence. Some evolved strength,
speed, or flight suited to their environment.

If instead natural selection began selecting only for intelligence (like
humans searching for AGI will), then you could definitely rank all animals
linearly on a single path of intelligence.

~~~
interfixus
_It just means that there are many different ways that a future AI can be
smarter than us. That intelligence could be multi-dimensional_

A condensed way of saying precisely what Kelly is saying in the article.
Allowing for the very real possibility that I am simply too dumb and not
grasping your point.

 _but they weren't all pressured to evolve more intelligence_

And it isn't claimed that they were. General evolution is used as an example
of potential patterns in evolution of various intelligences.

~~~
BoiledCabbage
He attempted to use the multi-dimensionality of intelligence to make the
following claim:

> Intelligence is not a single dimension, so “smarter than humans” is a
> meaningless concept.

This is poor reasoning. The fact that intelligence is multi-dimensional has no
bearing on our ability to declare something smarter than us. It isn't at all
meaningless. Because of this he claims that there will be no super-human AI.

Via analogy: he says, "you can't compare two football players because one may
be stronger, while another is faster." So the concept of "better" is
meaningless, and no player can be declared better.

My response is that that's absurd. A simple counter-example: a single player
can be both stronger and faster, and thus clearly better.

~~~
interfixus
A third player is weaker, but faster. And smarter. Or tougher. Or more agile.
More agile but not quite as smart. More persistent. Less predictable. And so
on and so forth. Your 'meaningless' only has meaning because you apply it to a
hugely simplified model.

~~~
BoiledCabbage
> we can easily take that multi-dimensional input, and find a formula that
> reduces it to a single scalar value based on our practical valuation of
> these forms of intelligences (almost like an intelligenc 'utility function'
> from economics)...

My original comment addressed that specific case.

------
Houshalter
This is completely silly. Superhuman AI is inevitable because there is nothing
magical about human brains. The human brain is only _the very first_
intelligence to evolve. We are probably very far away from the peak of what is
possible.

Human brains are incredibly small, a few pounds of matter. Any bigger and your
mother would be killed giving birth or you would take 10x as long to grow up.
They are incredibly energy constrained, only using a few watts of power.
Because any more and you would starve to death. They are incredibly slow and
energy inefficient; communication in the brain is done with chemical signals
that are orders of magnitude slower than electricity and use much more energy.
And they are far from compact - neurons are enormous and filled with tons of
junk that isn't used for computation. Compare that to our transistor
technology, which is approaching the limits of physics and is built at a
near-atomic scale.
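
The speed gap is easy to ballpark. A rough comparison, using commonly cited order-of-magnitude figures (my assumptions, not numbers from the comment):

```python
# Rough, commonly cited figures (assumptions, not measurements):
neuron_speed = 120.0      # m/s, upper end for fast myelinated axons
electrical_speed = 2.0e8  # m/s, signal propagation in a conductor (~2/3 c)

ratio = electrical_speed / neuron_speed
print(f"electrical signals are ~{ratio:.1e}x faster")  # ~1.7e+06x
```

Roughly six orders of magnitude, consistent with the "orders of magnitude slower" claim above.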

That's just the hardware specs of the human computer. The software is hardly
better. There are just more unknowns because we haven't finished reverse
engineering it (but we are getting there, slowly.)

But beyond that, the human brain evolved to be good at surviving on the
savannas of Africa. We didn't evolve to be good at mathematics, or science, or
engineering. It's really remarkable that our brains are capable of such things
at all! We have terrible weaknesses in these areas. For instance, a very
limited working memory. We don't realize how bad we are, simply because we
have nothing else to compare ourselves to.

Consider how even today, relatively primitive AIs are vastly superior to
humans at games like chess. Human brains also didn't evolve to be good at
chess after all. Even simple algorithms designed specifically for this game
_easily_ mop up humans. And play at a level of strategy far above what even
the best human players can comprehend.

Imagine an AI brain that is optimized for the purpose of mathematics, or
computer programming, science, or engineering. Or at doing AI research...
Imagine how much better it could be at these tasks than humans. It could
quickly solve problems that would take the greatest human minds generations.
It could manage levels of complexity that would drive humans crazy.

~~~
wamatt
>They are incredibly slow and energy inefficient;

Human brains are energy inefficient? Well, that's a first ;)

 _" In 1990, the legendary Caltech engineer Carver Mead correctly predicted
that our present-day computers would use ten million times more energy for a
single instruction than the brain uses for a synaptic activation."

"Last March, AlphaGo, a program created by Google DeepMind, was able to beat a
world-champion human player of Go, but only after it had trained on a database
of thirty million moves, running on approximately a million watts. (Its
opponent’s brain, by contrast, would have been about fifty thousand times more
energy-thrifty, consuming twenty watts.)"_

[1] [http://www.newyorker.com/tech/elements/a-computer-to-
rival-t...](http://www.newyorker.com/tech/elements/a-computer-to-rival-the-
brain)

~~~
Neeek
For that to be a fair comparison, wouldn't you need to look at all the energy
consumed by the human brain over the many hours it took them to become a Go
champion?

~~~
yk
I think that's a fair argument, but from the quote above

> "Last March, AlphaGo, a program created by Google DeepMind, was able to beat
> a world-champion human player of Go, but only after it had trained on a
> database of thirty million moves, running on approximately a million watts.
> (Its opponent’s brain, by contrast, would have been about fifty thousand
> times more energy-thrifty, consuming twenty watts.)"

Let's say AlphaGo trained for a year; that would be 1 MW·yr of energy
consumed. And let's assume that Lee Se-dol's brain consumed 20 W over 34 years
of his life doing nothing but working on Go; that would be 680 W·yr, still a
factor of 1000-ish smaller.
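
The arithmetic sketches out like this (same assumptions as above: one year of training at ~1 MW versus a 20 W brain over 34 years; note that 20 W x 34 yr is 680 W·yr):

```python
# Back-of-the-envelope check, all in watt-years (assumed figures from above).
alphago = 1_000_000 * 1   # ~1 MW for one year of training
brain = 20 * 34           # 20 W over 34 years = 680 W·yr

print(brain, round(alphago / brain))  # 680 1471 -- a factor of 1000-ish
```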

~~~
Neeek
Totally. I'm sure it's correct, and even if you were to bring the comparison
into line, the outcome is still "computer is watt hungry". The point is that
the original statement, while correct, doesn't really say anything useful.

------
bko
Overall I am sympathetic to the author's argument that fear of super-AI is
overblown. But I do take issue with some of his arguments.

> Even if the smartest physicists were 1,000 times smarter than they are now,
> without a Collider, they will know nothing new.

I'm not a historian, but I have read that some scientific discoveries are made
through pure logic. Einstein and relativity come to mind, as he was not an
empiricist. So perhaps there is some hope that AI can lead to scientific
discoveries without experimentation.

>So the question is, where is the limit of intelligence? We tend to believe
that the limit is way beyond us, way “above” us, as we are “above” an ant.
Setting aside the recurring problem of a single dimension, what evidence do we
have that the limit is not us? Why can’t we be at the maximum? Or maybe the
limits are only a short distance away from us? Why do we believe that
intelligence is something that can continue to expand forever?

The idea that humans could, just by chance, be pushing the limits of
intelligence strikes me as silly.

~~~
Retric
Einstein's work was based on a lot of evidence that seemed very strange. Light
somehow had a fixed speed in all reference frames WTF?

~~~
lazaroclapp
Sure. Question is: how many other discoveries await today such that we,
collectively, as a species, already have all the puzzle pieces, but haven't
yet put them together? An AI as smart as your average grad student, but which
somehow could retain in mind at the same time all of our accumulated
scientific knowledge, might be able to quickly draw all sorts of seemingly
brilliant conclusions. Now imagine its reasoning process works 6 to 7 orders
of magnitude faster than ours, even if not qualitatively different in its
logic or biases.

Dunno, I don't really believe we are that close to building that sort of AI,
but it doesn't seem fundamentally impossible, and it does seem like it could
do things that to us would look as "more intelligent" than us. It might in the
end be better at creating scientific knowledge in the way current computers
are better than us at solving arithmetic: faster and capable of holding more
in memory, rather than following any qualitatively different process. But even
that would be enough.

~~~
Retric
IMO, few major ones. We have billions of man years of research and things are
better than before that period, but diminishing returns are real.

Don't get me wrong, I think it would be useful, just that the gap from
human-level AI to 1,000x-human AI is simply not that huge. Let's say you moved
someone from 2006 to 2016, or even from 1996. Yeah, sure, there is real
progress, but not really that much. We have better drugs in terms of AIDS, for
example, but we are worse off in terms of antibiotics. Game graphics have
improved, but Quake is from 1996, so we already had real 3D first-person
shooters, and gameplay is not that different. Hell, FTP is arguably worse.
Further, that's 20 years, so we are talking literally millions of man-years of
effort and trillions of dollars' worth of R&D for not much.

In terms of machines, the SR-71 is still the fastest manned aircraft, and
there is no supersonic passenger aircraft. The tallest building is much
taller, but lacks much space on the top floors, making it more monument than
utility; the Sears Tower has more usable space and a smaller footprint.

~~~
TheOtherHobbes
Invention proceeds because of game-changing insights.

Maxwell's equations were a game changer. So were Newton's laws. So were
Relativity and QM.

Church-Turing was also a game changer. But I don't think there's been anything
equivalent in computing since.

There's been a lot of application, but no game-changing deep theoretical
insights.

Quantum computing may - should? - eventually throw up something new.

It's going to have to. If it doesn't I think we're going to be stuck with much
less progress than we're expecting.

~~~
jacquesm
> Church-Turing was also a game changer. But I don't think there's been
> anything equivalent in computing since.

Quantity when the difference is large enough becomes quality. The 9 orders of
magnitude or so that computers have gone through in storage capacity and speed
definitely count as a game-changer.

------
nohat
Better quality than most such posts, but still seems to be missing the point.
The remarkable thing about Bostrom's book is how well it anticipated the
objections and responded to them, yet no one seems to bother refuting his
analysis, they just repeat the same objections. I actually agree with a decent
bit of what he says on these points, though his application of these
observations is kinda baffling. He makes a lot of misguided claims and
implications about what proponents believe. I'll sloppily summarize some
objections to his points.

1\. This doesn't really bother making an argument against superhuman
intelligence. Yes, of course intelligence has many components (depending on
how you measure it), but that's not an argument against superhuman
intelligence. I'm reminded of the joke paper claiming machines can never
surpass human largeness, because what does largeness even mean? Why it could
mean height or weight, a combination of features, or even something more
abstract, so how can you possibly say a machine is larger than a human?

2\. Mainly arguing about the definition of 'general' without even trying to
consider what the actual usage by Bostrom et al is (this was in the
introduction or first chapter if I recall correctly). I agree that the
different modes of thought that AI will likely make possible will probably be
very useful and powerful, but that's an argument for superhuman ai.

3\. Well he makes his first real claim, and it's a strong one: "the only way
to get a very human-like thought process is to run the computation on very
human-like wet tissue." He doesn't really explore this, or address the
interesting technical questions about limits of computational strata,
algorithm efficiency, human biological limitation, etc.

4\. Few if any think intelligence is likely to be unbounded. Why are these
arguments always 'x not infinite, therefore x already at the maximum?' He also
seems to be creating counter examples to himself here.

5\. Lots of strong, completely unbacked claims about impossibilities here.
Some number of these may be true, but I doubt we have already extracted
anything near the maximum possible inference about the physical world from the
available data, which is basically what his claims boil down to.

~~~
rspeer
I haven't read Bostrom's book. I don't think I would enjoy it. Maybe I need to
grudgingly read it to be able to respond to what Bostromites say.

Here's the thing. If Bostrom's claims about AI are so strong, why does
everyone who's referring to his book as their source of beliefs about the
future spout non-sequiturs about AI?

Here's an example. 80000 Hours has a mission that I generally agree with, to
find the most important problems in the world and how people can most
effectively work on them. But somehow -- unlike cooler-headed organizations
like GiveWell -- they've decided that one of the biggest problems, bigger than
malaria, bigger than global warming, is "AI risk" (by which they mean the
threat of superhuman AGI, not the real but lesser threat that existing AI
could make bad judgments). [1]

To illustrate this, they refer to what the wise Professor Bostrom has to say,
and then show a video of a current AI playing _Space Invaders_. "At a super-
human level", they say pointedly.

What the hell does Space Invaders have to do with artificial general
intelligence?

For that matter, what the hell does deep learning have to do with AGI? It's
the current new algorithmic technique, but why does it tell us any more about
AGI than the Fourier Transform or the singular value decomposition? I would
say this is a bias toward _wanting to believe_ in AGI, and looking for what
exists in the present as evidence of it, despite the lack of any actual
connection.

Has 80000 Hours been bamboozled into thinking that playing Space Invaders
represents intelligence, or are they doing the bamboozling? And if Bostrom is
such a great thought leader, why isn't he saying "guys, stop turning my ideas
into nonsense"?

[1] [https://80000hours.org/career-guide/world-
problems/#artifici...](https://80000hours.org/career-guide/world-
problems/#artificial-intelligence-and-the-control-problem)

~~~
nohat
Bostrom is in no way in charge of people who happen to agree with him wrt AI
risk. For the book, he mostly collected and organized a lot of existing thought
on ai risk (not that he hasn't made his own novel contributions). That's very
valuable, largely because it makes for a good reference point to contextualize
discussion on the topic. Unfortunately the critics don't seem to have read it
because (in my experience) they repeat the same objections without reference
to the existing responses to those objections.

People do sometimes overblow AlphaGo / DQN playing Atari, but it's not
meaningless. These systems (and other deep learning based systems) can truly
learn from scratch on a decent variety of environments. One of the most
important unknowns is exactly how difficult various cognitive tasks will prove
to be for a machine. Each task accomplished is another data point.

~~~
rspeer
I wouldn't say that DeepMind learns Atari games "from scratch" any more than
Deep Blue learned chess from scratch. It learns to play Atari games because
it's a machine designed to learn to play Atari games.

~~~
Elrac
I strongly disagree. You don't seem to be aware of the difference in approach
between Deep Blue and DeepMind.

Deep Blue was hand-led directly and specifically to solve the problem of
chess: It was provided with a library of opening moves, some sophisticated
tactical algorithms relevant to the problem of chess, a library of strategies
for chess, and so on. Many actual human masters of chess were consulted,
directly or indirectly, to help with developing Deep Blue's approach to the
problem.

DeepMind, on the other hand, was created as a "blank slate" with no more hard-
wired instruction than "create optimal algorithms to achieve the winning
state, given the inputs." Critically, its learning phase is completely self-
directed. Essentially, the box is given access to the controls and the video
screen content and then sent on its way.

It's instructive to note that this is pretty much exactly how, very generally
speaking, evolution and intelligence solve the problem of survival: every
organism has controls and a glimpse of "game state" and has to learn
(collectively as a species, individually as an organism) to play the game
successfully.

~~~
Ngunyan
> DeepMind, on the other hand, was created as a "blank slate" with no more
> hard-wired instruction than "create optimal algorithms to achieve the
> winning state, given the inputs." Critically, its learning phase is
> completely self-directed. Essentially, the box is given access to the
> controls and the video screen content and then sent on its way.

Have you seen DeepMind's algorithm, to be able to say this? Are there other
people outside of Google who have seen the algorithm and can confirm Google's
press release?

~~~
Elrac
Maybe you have some legitimate concern about Google's claim as per their press
release and my comment. Who knows, maybe they have some reason to lie about
what they did!

But then I wonder why you aren't asking the same question of my parent poster.
Has he viewed the DeepMind code? Is he qualified to tell us it works the same
as chess code? Having made that claim backed by even less evidence than I had
for mine, I'd say his burden of proof is somewhat greater.

~~~
rspeer
I think there's a heavy dose of press release to what Google is saying. Most
people wouldn't call PR puff "lying", but only because standards are low.

I don't think Google has fundamentally different deep-learning technology than
everyone else. In fact, TensorFlow indicates that they have the same kind of
deep-learning technology as everyone else and they just want to do it more
cleanly.

Deep learning is parameter optimization. There are more parameters now, and
they optimize more things, but don't get caught up in wild visions of machines
designing themselves. Would you consider the bzip2 algorithm to be "self-
directed learning"? What's the difference, besides the number of parameters?
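
To illustrate what "parameter optimization" means here, a minimal sketch (toy data, nothing to do with DeepMind's actual code): a fixed update rule tunes one weight to fit y = 2x. Nothing in the loop rewrites the program itself:

```python
# Toy gradient descent: fit y = w*x to data generated by y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w, lr = 0.0, 0.05  # initial parameter and learning rate
for _ in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the same mechanical update, every step

print(round(w, 3))  # 2.0 -- the 'learning' is just this number moving
```

Scale the one parameter up to millions and the update rule to backpropagation, and you have deep learning; the mechanism stays parameter optimization either way.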

The PR people, when they say "blank slate", are discounting all the
programming that went into the system because it sounds more impressive that
way. This is unfortunate. It has happened in AI for decades. To be a
responsible consumer of AI press releases, you need to understand this.

~~~
Elrac
> _I don't think Google has fundamentally different deep-learning technology
> than everyone else._

That's true, and I never claimed otherwise, but that doesn't help you argue
your point - in fact, you just proved yourself wrong. From IBM's press
release:

> _Does Deep Blue use artificial intelligence? The short answer is "no."
> Earlier computer designs that tried to mimic human thinking weren't very
> good at it. No formula exists for intuition. So Deep Blue's designers have
> gone "back to the future." Deep Blue relies more on computational power and
> a simpler search and evaluation function._

I'll summarize for you: Deep Blue and DeepMind, similar names notwithstanding,
work in very different ways.

~~~
rspeer
What comparison are you even making here? I know that Deep Blue and Deep Mind
are different. There is 25 years (edit: sorry, 20 years) between them! Deep
Blue is not deep learning. Did the word "deep", used in two unrelated ways,
confuse you?

What I am saying is that I know how deep learning works, actual deep learning
of the present, and it does not involve "programming itself".

You are trying to tell me that it must be programming itself, because a press
release said so, and press releases would never lie or exaggerate. Based on
the current state of AI, this is very improbable. You should focus less on
trying to "prove" things with press releases.

I made the comparison to Deep Blue because there is little mystique around it
now, and because IBM was even reasonably responsible about avoiding AI hype in
their press at the time.

------
faragon
The article is wrong, in my opinion.

Regarding point #1, while still not formally wrong, world computing capability
is growing at an exponential rate. Not even the end of Moore's law will stop
that, e.g. 3D transistor stacking, strong semiconductor demand from the
consumer and industrial markets, etc. Also, the author doesn't know whether
there is already CPU capacity for matching human intelligence: maybe the
missing key is not the hardware, but the software (efficient algorithms for
"human" intelligence running on silicon).

Point #2 is clearly wrong. Demonstration: I, for one, if still alive, and
having the chance, will try to implement general-purpose intelligence "like
our own". And, come on, I know no hacker able to resist that.

Again, point #3 is wrong, unless you believe we're smart because of a
religious "soul".

Point #4 is a void argument: the Universe itself is finite.

Point #5 is right: a superintelligence may, or may not, care at all about our
problems. In the same way, you have no guarantee of a human government caring
about you (e.g. a totalitarian regime).

------
mcguire
Not a particularly well written article, but he has a few good ideas. Here's a
couple of important paragraphs:

" _I asked a lot of AI experts for evidence that intelligence performance is
on an exponential gain, but all agreed we don’t have metrics for intelligence,
and besides, it wasn’t working that way. When I asked Ray Kurzweil, the
exponential wizard himself, where the evidence for exponential AI was, he
wrote to me that AI does not increase explosively but rather by levels. He
said: “It takes an exponential improvement both in computation and algorithmic
complexity to add each additional level to the hierarchy…. So we can expect to
add levels linearly because it requires exponentially more complexity to add
each additional layer, and we are indeed making exponential progress in our
ability to do this. We are not that many levels away from being comparable to
what the neocortex can do, so my 2029 date continues to look comfortable to
me.”_

" _What Ray seems to be saying is that it is not that the power of artificial
intelligence is exploding exponentially, but that the effort to produce it is
exploding exponentially, while the output is merely raising a level at a time.
This is almost the opposite of the assumption that intelligence is exploding.
This could change at some time in the future, but artificial intelligence is
clearly not increasing exponentially now._ "

The last bit about requiring experiments in real time is also interesting.

------
bradfordarner
Interesting article from an opinion point of view, but I find very little real
substance behind his arguments.

He is fighting the original myth with his own myth, except that his myth is
founded upon his own assumptions and intuitions as opposed to those of someone
else.

It seems more likely that we simply don't know the answer to many of these
questions yet, because we still have major disagreements about exactly what
intelligence is. To paraphrase Richard Feynman's famous quote: if we can't yet
build it, then we don't understand it.

------
qsymmachus
Maciej Ceglowski's takedown of superintelligence is a much better articulation
of these arguments, and more (and it's funny):
[http://idlewords.com/talks/superintelligence.htm](http://idlewords.com/talks/superintelligence.htm)

~~~
randallsquared
And a tongue-in-cheek response from Scott Alexander:
[http://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-
ri...](http://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-risk/)

------
danm07
I didn't read the whole article... but of what I did read, I didn't find it
convincing. A few things:

AI doesn't need to exceed humans in every dimension to become a threat. Just
sufficient dimensions.

Humanity is basically a bacteria colony in a petri dish with I/O. Disrupt the
infrastructure and you disrupt the input, leading to changes in the size of
the colony. And mind you, much of our infrastructure resides in the cloud.

Of course, it will be a while before this even becomes an issue, but this is
basically how a machine would frame the problem.

Implementation-wise, AI doesn't need to be general. At its most inelegant (and
not too distant) design, ML can be configured as a fractal of specific
algorithms, with one on top tasked with designating goals and tasks, and
subordinates spawning off generations and evaluating performance.

Andy Grove had a good saying: "anything that can be done will be done".

Autonomous AI, if it does not break the laws of physics, will exist. Its
development will be spurred by our curiosity or profit.

------
RcouF1uZ4gsC
One of the big issues with people who talk about controlling superhuman
intelligence is that any talk of controlling it is fantasy. We cannot control
actual human intelligence for good. What makes us think we could control
superhuman intelligence?

~~~
JCzynski
If suddenly we had a black box containing a superhuman intelligence and no
details about how it worked, then absolutely, we could not control it. For
human minds we have something similar, but the box isn't _totally_ black;
we've done some neuroscience and psychology to figure out the heuristics and
biases it tends to use, how it's easily exploited, etc. And then we have
neural networks which duplicate the functioning of some subsystems, and of
course our own subjective experience which provides at least a _little_
evidence. It's not enough, but it means that, for example, we can write
effective propaganda and advertising.

If we didn't just have the results of scattered tests, but had an exhaustive
set of docs written by the creator of the black box, it still wouldn't be
easy. But we'd have a chance. This is why one of the main strands of AI value
alignment research focuses on building an AI that we can understand. If we can
build something we can understand, that gives us leverage to alter it to value
our interests.

(What "our interests" are, in a well-specified technical sense, is a whole
'nother problem, and one that there's very little progress on.)

~~~
RcouF1uZ4gsC
I don't see how you can call an AI created by humans, that humans can
understand, "super-human". By definition, a super-human AI would be able to do
stuff we could not understand.

~~~
johnsonjo
tl;dr: AI could be on a higher plane of thought, but I'm of the camp that it
could come up with new formal systems to explain its advances.

Well, there's the old saying, "If you can't explain it to a six year old, you
don't understand it yourself." I think if there were a superhuman
intelligence, it would likely understand where our limitations as humans lie,
and it would be able to break down the components of its discoveries into the
simplest terms and teach us, even if at an incredibly slower pace than it can
process.

This reminds me of Gödel's proof that our current formal systems of
mathematics cannot prove everything; it may even have gone as far as saying
that every formal system will always leave some things it cannot prove.
Obviously a robot would have to use some formal system to come to its
conclusions, so if it's really so smart, can it break down the system it used
so that we can understand its basic building blocks? Of course, there's always
the rate-of-computation and memory problem of humans in the way.

Of course if you're saying that the super human intelligence would be on an
entirely different plane of thought impossible for us to understand then
that's understandable, but probably less believable.

This line of thinking reminds me of the book Flatland by Edwin Abbott.
Flatland is basically the story of a two dimensional square who lives on a
two dimensional plane and only knows his world through his limited
perception of it. One day he is visited by a three dimensional sphere who
explains his own world in a way imperceivable to the square. The sphere
magically takes him off his plane of existence and shows him his world from
the sphere's view, then takes him on a tour of a host of worlds that
perceive their realities in different dimensions. He goes from Pointland to
Lineland, then to three dimensional space, and finally back to his home
plane, where he is eventually locked in a mental institution for telling
people of his adventures. Anyway, it's an interesting fantasy story; I
recommend it.

Hate to go all metaphysical on you all, but basically the story goes to show
that we only know things from our limited perspective of the things around
us. If there are any higher "planes" of perception, it's entirely possible
that we wouldn't know about them. Some things can only be known with certain
perceptions/experiences/knowledge.

It may sound ludicrous, but I would even say Christianity backs this idea to
some degree attributing God to a higher plane of thought like in Isaiah
55:8-9.

Of course I would never put a robot on that level, but I could see some
things being imperceptible to the human mind, so the principle is similar.
Can robots achieve a higher plane to any degree? Beats me. Honestly, I
suspect it wouldn't be so high that they couldn't explain their thoughts to
us.

~~~
RcouF1uZ4gsC
Great points. I do not disagree that a super human AI could explain stuff to
humans, just as I could explain stuff to a six year old. However, a group of
six year olds would be hard pressed to constrain actions you really wanted
to take. In addition, first grade teachers are experts at manipulating six
year olds, and a super human AI would likewise be very good at manipulating
humans and working around any constraints the humans tried to impose on it.

With super human AI, we would be in much the situation as the great apes -
their survival depends far more on what we do than on what they do. Just like
the great apes cannot constrain our actions, we would not be able to constrain
the super human AI's actions.

On a darker note, as Homo sapiens wiped out all the other hominids, there is a
good chance that super human AI would try to wipe out humanity, as we are
probably the greatest threat to their independence and well being.

~~~
johnsonjo
Definitely don't disagree with you there. These are all plausible. I think a
lot of an AI's choices concerning us would come down to whether it actually
cared for us at all.

------
AndrewKemendo
Oh boy. Much respect for Kevin Kelly, but I am afraid he missed the mark with
his analysis.

Unfortunately he gets hung up on the definition of Intelligence - and not
unreasonably so - because it is very ill defined and largely unknown. So all
of what he says is true, but orthogonal to the argument he is trying to
debunk.

It's basically setting up a pedantic straw man and then taking it apart.

There are other great and more compelling arguments against an all powerful
superhuman AGI, unfortunately he doesn't make any of those.

------
psyc
Ugh, not another AI article by a Wired editor. I skimmed it and saw only
strawmen and non-sequiturs.

These issues are mind-bending topics that stretch the imaginations of the most
brilliant people I am aware of. It takes them a lifetime to build good
intuitions and analogies. I wish that writers of this caliber _felt_ as
qualified to write one sentence about it as they actually are.

------
DiThi
This person doesn't understand the concept of super AI. Of course
intelligence is not one dimensional. But the current limit in pretty much
all of those dimensions is physical: it's the largest number of neurons and
connections we can fit in the smallest space that can pass through the
pelvis while still feeding enough energy to the brain.

You can imagine this as a bunch of people who speak with each other. The
faster they can communicate ideas to each other, the more potentially
intelligent the group can be. Machines can surpass the speed of this
collective intelligence by orders of magnitude, even if everything else is
exactly as in a human. This is exactly why we evolved to devote so many
brain resources to language.

~~~
rspeer
No, the current limit is not physical, it's that nobody has any idea how
general intelligence works.

You do not, in fact, get general intelligence by accident by throwing a lot of
connections in one place, just like you do not get a human just by throwing a
lot of organic molecules and water in one place.

~~~
mfav
As the other commenter noted, "letting a bunch of molecules sit around" was
precisely where we got intelligence from in the first place.

So it is _possible_ that we reach AI just by randomly permuting connections
and weights. Of course it's more likely we intelligently set (or "evolve")
these connections and weights, but this allows us to set an upper bound on
computation/time needed.
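A toy sketch of that "randomly permute connections and weights" idea (my
own illustration, not from the thread): pure random mutation plus
keep-the-improvements selection does eventually find good weights, just far
more slowly than a directed method would. The fitness function and the
two-weight "network" here are made up for the demonstration.

```python
import random

def fitness(w):
    # Hypothetical task: find weights computing y = 2*x1 - 3*x2,
    # i.e. get w close to the target vector (2, -3).
    target = [2.0, -3.0]
    return -sum((wi - ti) ** 2 for wi, ti in zip(w, target))

def evolve(generations=5000, sigma=0.1, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(2)]
    best_f = fitness(best)
    for _ in range(generations):
        # Random mutation of every weight...
        candidate = [w + rng.gauss(0, sigma) for w in best]
        cand_f = fitness(candidate)
        if cand_f > best_f:  # ...selection: keep only improvements
            best, best_f = candidate, cand_f
    return best

w = evolve()
```

Gradient-style methods reach the same point in a handful of steps; the
point is only that undirected mutation with selection gets there
eventually, which is what lets you state an upper bound at all.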

~~~
gls2ro
> As the other commenter noted, "letting a bunch of molecules sit around" was
> precisely where we got intelligence from in the first place.

I don't think this was the case. Yes, there is evolution, but it is not
random. Actually, most of the molecules sitting around did not evolve into
intelligence. In the case of our evolution we had natural selection. In the
case of AI we have artificial selection (selection made by humans), and even
if we consider ourselves smart enough to do this, we cannot prove that we
are able to make it happen (choosing the correct AIs/algorithms to survive)
until it happens. Maybe I cannot express this clearly enough, but the
advantage natural evolution has over artificial evolution is the huge number
of "experiments" - meaning it had time enough to make a lot of small changes
until something worked.

~~~
gnaritas
> Maybe I cannot express this clearly enough, but the advantage natural
> evolution has over artificial evolution is the huge number of
> "experiments" - meaning it had time enough to make a lot of small changes
> until something worked.

I think you have that backwards; natural evolution is absurdly slow because it
takes a very long time to cycle through generations of animals whereas genetic
programming on a computer to evolve algorithms can happen billions of times a
day because computers are much faster at cycling through possibilities.

> meaning it had time enough to do a lot of small changes until something
> worked.

Computers can do it faster.

~~~
gls2ro
Yes, now you've got me thinking more about how I see the difference between
evolution by natural selection and evolution by artificial selection.

And I agree with you that AI can be much faster.

I still think artificial selection can be influenced by us - humans - so we
might introduce flaws into the system from the beginning. Of course the AI
might learn to identify them. But maybe not: as when looking from inside a
system, one cannot see how the system really is, and so cannot fix it.

Of course what I say are just hypotheses, nothing proven, and I think they
cannot yet be falsified.

------
nolemurs
This article is just a series of strawman arguments. It sets out
misconceptions that proponents of strong AI mostly don't believe, then argues
against them.

I'll be honest, I didn't read the arguments in detail (since they're just
rebutting strawman arguments it hardly seemed worthwhile), but I was sort of
surprised at how poorly reasoned the arguments were even for the parts I agree
with.

------
hyperion2010
At a certain point it doesn't matter how much smarter you are; the limit on
progress is the ability to take action and to make measurements - enough
measurements that you can discern whether a subset of those measurements is
biased, and in what way. As a result I tend to think that in order to get
truly superhuman intelligences, they will need superhuman levels of agency,
and that is something much harder to build, and to get us meatbags to
support, than a really powerful brain in a jar. Systems with superhuman
agency also aren't something that happens just by accident.

~~~
hyperpallium
I agree; in the long-term, big-picture view, distributed eye and hand is
important. E.g., although relativity could have been deduced from the
evidence, first you have to obtain that evidence, by the action of
constructing observation devices.

But there's nothing to stop centralized computing from having distributed
terminal-like i/o devices, cameras and waldos.

A cognitive argument for distribution is that a diversity of ideas, developed
somewhat in isolation, using their local unique observation and unique action,
is more likely to innovate. Many points of view will see more. However, this
can be simulated with internal divisions.

------
theprop
Actually, seeing how well DeepMind has mastered certain video games with
minimal instructions, AI can already look superhuman in certain cases.

What EVERYONE is missing, though, is that enhanced human intelligence is
inevitable, and it will be vastly more "intelligent" than superhuman AI -
though as human intelligence increases, so naturally will AI. I think
enhanced human intelligence will have an immeasurably greater impact than
any conceivable technology, since it lets us engineer who we are. What is a
world like that's inhabited by 7 billion Leonardos?

------
sebringj
I was at the park the other day with my sons, and I noticed some other kids
on the swings: two kids turned and locked legs, then a third sat on their
joined legs like a huge human-made swing. The point being, I never thought
of doing that with my friends when I was a kid. An AI will be able to think
of things we never tried, because there are so many more things that we
haven't. Speculating on the short end of this seems laughable to me, like
someone from the 1800s speculating about balloon travel in the 2000s, basing
our limited understanding of possibility on our current limitations.

------
leni536
So, extremely intelligent AI can't take over Earth in the future because
"extremely intelligent" is ill defined. No worries everyone!

------
adamlett
I liked the article, and agree with it mostly. Here's a couple of ideas I see
again and again that bother me about the singularity theory and the hype in
general about general AI:

* The idea of exponential growth, which seems like an important underpinning of the singularity theory. Nothing in nature, as far as I am aware, grows exponentially. Some growth trajectories may for a time look exponential, but they always turn out to be the bottom of an s-curve. The idea that once we develop the first AI with superhuman intelligence, it will necessarily beget one of even greater intelligence, is deeply flawed for this reason. It is analogous to predicting that Moore's law will continue forever, because ever faster and more capable computers will assist in designing the next generation of chips. At some point the laws of physics will constrain further advances, and we will encounter the upper half of the s-curve.

* The idea of AI in a box. It's the idea that anything we would call intelligence can evolve divorced from a high-bandwidth, real-time sensory apparatus, and divorced from a way to directly manipulate its environment.

* The idea that _more_ intelligence always makes a significant difference. If we look at thinking disciplines where computers are already better than the best humans (chess, heads-up poker, recognizing certain kinds of objects etc.), the differences are small. If the best human's decisions are 99% optimal, say, a computer's may be 99.9% or 99.999% optimal. The point being that a computer can never make decisions that are better than 100% optimal.
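The s-curve point in the first bullet can be made concrete with a toy
comparison (all constants here are made up for the demonstration): a
logistic curve with carrying capacity K is nearly indistinguishable from an
exponential while far below K, then saturates.

```python
import math

def exponential(t, r=0.5):
    return math.exp(r * t)

def logistic(t, r=0.5, K=1000.0):
    # Solution of dx/dt = r*x*(1 - x/K) with x(0) = 1: growth that is
    # self-limited by the carrying capacity K.
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on the two curves agree to within a fraction of a percent...
early_gap = abs(exponential(2) - logistic(2)) / exponential(2)

# ...but much later the logistic has flattened near K while the
# exponential has run off into the millions.
late_exp, late_log = exponential(30), logistic(30)
```

So observing exponential-looking growth never tells you whether you are on
a true exponential or the bottom half of an s-curve.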

~~~
hedgew
1. An AI might not need long exponential growth to make humans obsolete.
Maybe the computing power of one data center is enough to outperform and
dominate humans? Imagine if you had infinite patience, and experienced time
1000x slower than everyone else. That number might be in the millions. Or
more.

2. I agree that it's a very strange and unpredictable scenario.

3. How optimal do you think humans are in the real world? In chess you have
a very limited number of possible actions at each point, but in reality your
possible moves are almost limitless.

~~~
adamlett
1. I'm not sure what you mean by the "computing power of one data center".
Surely not one present-day data center? And whether or not I can imagine
experiencing time 1000x slower than everyone else has no bearing on whether
or not that's a realistic scenario. I can imagine all sorts of things, like
Moore's law holding true forever, or P=NP.

3. That obviously depends on the endeavour. But humans are pretty great at a
lot of stuff. We learn quickly too.

~~~
_rpd
> And whether or not I can imagine experiencing time 1000x slower than
> everyone else, has no bearing on whether or not that's a realistic scenario.

It is a staple of AGI speculation that a computer program with even near-human
IQ would spark the singularity since, at least, the hardware running it could
be improved so that the AGI would be able to perform person-months of
cognitive labor in days. Since the first target of this labor would be
improving the AGI program and hardware, compounding improvements are expected.

~~~
adamlett
Yes, I get that. I just doubt whether it's physically possible. And like the
author of the original story, I also doubt very much that _cognitive labor_ is
the constraint that limits human progress. To believe so is what he calls
_thinkism_.

~~~
_rpd
> I just doubt whether it's physically possible.

Biological brains achieve their ends with relatively low speeds and energies.
Even simplistic substitution with equivalent electronic components would be
hundreds of times faster, and I'm sure we'll do better than that. I don't see
the difficulty in the conjecture.
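The "hundreds of times faster" conjecture can be put in rough numbers. This
is my own back-of-envelope sketch using ballpark textbook figures, not
figures from the comment:

```python
# Rough comparison of biological vs electronic signaling. All numbers
# are order-of-magnitude textbook estimates, not precise measurements:
# fast myelinated axons conduct at ~100 m/s and neurons fire at up to
# ~1000 Hz, while electrical signals propagate at roughly 2/3 the speed
# of light and commodity transistors switch at GHz rates.

axon_speed_m_s = 100.0        # fast myelinated axon (estimate)
signal_speed_m_s = 2.0e8      # signal in copper/fiber (estimate)
neuron_rate_hz = 1_000.0      # generous upper bound on firing rate
clock_rate_hz = 3.0e9         # a commodity 3 GHz clock

propagation_ratio = signal_speed_m_s / axon_speed_m_s  # millions-fold
switching_ratio = clock_rate_hz / neuron_rate_hz       # millions-fold
```

On these crude numbers both propagation and switching favor electronics by
roughly six orders of magnitude, which is the intuition behind the
conjecture; whether the rest of the system can exploit that is the open
question.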

> I also doubt very much that cognitive labor is the constraint that limits
> human progress

That statement is impossible to discuss without defining "human progress", but
if the work of the world's universities for the next 100 years was available
in one year's time, at the very least someone with access to that information
would have a significant competitive advantage. It seems clear that the next
100 years will include significant advances in automated fabrication, at which
point physical labor also essentially becomes cognitive labor.

~~~
adamlett
_Even simplistic substitution with equivalent electronic components would be
hundreds of times faster, and I'm sure we'll do better than that_

The way you state that, one wonders why we haven't already achieved
superhuman, general AI.

 _if the work of the world's universities for the next 100 years was
available in one year's time_

But that's a fundamental misunderstanding of the character of the work that
goes on in universities. Knowledge is only rarely produced by people
_thinking_ about problems. It's produced by mundane trial and error
experimentation. Experiments that take time. And money.

~~~
_rpd
> The way you state that one wonders why we haven't already achieved
> superhuman, general AI.

Brain emulation is still expensive ...

[https://en.wikipedia.org/wiki/Artificial_brain#/media/File:E...](https://en.wikipedia.org/wiki/Artificial_brain#/media/File:Estimations_of_Human_Brain_Emulation_Required_Performance.svg)

It'll be interesting to see what we can learn at the different levels of
emulation.

> It's produced by mundane trial and error experimentation.

Fair point, although experiments in computer science and applied mathematics
can usually be carried out without constructing physical apparatus. Also
identifying and designing experiments to efficiently characterize a problem
space is a large part of experimentation. And again, once automated
fabrication improves, some automated experimentation in physics and chemistry
becomes possible.

------
js8
I am not a believer in superintelligence, but for a different reason than
the author. I assume the following about superintelligence:

- It somehow needs to be distributed, that is, composed of smaller computing
parts, because there is a physical limit on what you can do in a unit of
space.

- It needs to change to adapt to its environment (learn), and so all the
parts need to potentially change.

From this follows that the parts will be subject to evolution, even if they
don't reproduce. And so the existence of the parts will depend on their
survival. This, in my opinion, inevitably leads to evolution of parts that are
"interested" in their own survival, at the expense of the "superintelligent"
whole. And it leads to conflict, which can eventually eat up all the
improvements in the intelligence.

Look at humans. Humanity (or biosphere in general) didn't become a
superintelligent whole, capable of following some single unified goal.
Instead, we became fighting factions of different units, and most of the
actual intelligence is spent on arms races.

Anyhow, even if superintelligence is possible, I believe the problem of
friendly AGI has a simple solution. We simply need to make sure that the AGI
doesn't optimize anything, but instead takes the saying "all things in
moderation" to heart. That means, every once in a while, the AGI should stop
whatever goals it pursues and reflect on the purpose of those goals, asking
whether it is, by some measure, going too far.

You can argue that we don't actually know how to make an AI stop and think.
I would respond that an AI that cannot do that, and only pursues some
pre-programmed optimum mindlessly, is not really general.

~~~
BoiledCabbage
It's really hard for me to understand a viewpoint that non-human higher
intelligence is anything but inevitable.

At some point (ignoring us destroying ourselves) we will be able to
accurately simulate a cell in software. As computing gets cheaper, we will
be able to simulate a human brain's worth of cells. We'll feed it inputs and
let it produce outputs just like a brain would. The only technological
challenge here is scanning and reading data from a live brain. A very small
challenge in the grand scheme of things.

Once that thing works it's a brain, and an artificial intelligence. Any other
discussion simply complicates the situation. Accurately simulate a larger
number of individual interconnected neurons and you're running an
intelligence.

No, I'm not 100% certain we'll ever be able to program intelligence the way
we do symbolic reasoning in math, but we sure as heck can engineer one.

~~~
ptr_void
A simulated heart does not pump real blood. Also, in the process, why should
one start or stop at the cell level? It seems arbitrary; go further down to
atoms and electrons etc., or go higher, to mental states or what not.

There are completed projects in which all the neurons of a smaller organism
have been simulated; there haven't been any revelations.

Our brain doesn't contain any 'data', so whoever decides to extract data
from it will have to decide what the data will be, or why it would be of any
use. There have also been many objections raised on why brain correlates are
more or less useless on the question of mind/intelligence.

~~~
BoiledCabbage
> Our brain doesn't contain any 'data'

It absolutely does! Where do you think our memories are stored?

It doesn't store it in 1s and 0s like computers, but I think it's fairly non-
controversial to say that all of everything you know is encoded in the
physical state of the cells and atoms of your body.

~~~
ptr_void
> Where do you think our memories are stored?

You are applying computer metaphor and then asking where the 'memory' is
'stored' or 'encoded'. Metaphors/abstractions are useful tool, but when
talking about differences, we should be more careful.

------
SubiculumCode
I don't think I worry about general AI so much as about having specialized
AI in almost every area, including collections of algos that recognize the
task at hand and select which specialized AI to engage, as well as other
collections of specialized AI algos that select which selector algos to use
based on longer term goals, etc. That is what makes me afraid.

~~~
abvdasker
Yeah, what you described is one of the current leading hypotheses of how we
get to generalized AI, and it's not totally unlike how the human brain
actually seems to work (specific tasks or stimuli will activate specialized
regions of the brain and not others).

~~~
SubiculumCode
;-) My point exactly.

------
partycoder
While one individual might not have "general purpose intelligence" to the
satisfaction of this author (being able to excel at different
fields/activities), at the population level it is fair to say human
intelligence is general purpose.

Then, there are aspects that are greatly overlooked in all these narratives:

Human geniuses occur very rarely and take literally decades to learn, while
the AI equivalent could be consistently "instanced" multiple times, live
forever, evolve after birth, and work 24/7 without sleep.

Then, humans have crappy I/O. AI is not bounded by the shortcomings of
writing/reading/typing/talking at low rates of words per minute...

Generally speaking, AI theoretically has a substantial advantage over
humans. Even if AI remains dumber for a time, these advantages are enough to
make it prevail.

~~~
tlb
If I gave an example of something machines can easily do today, but no human
could ever do, does that show that human intelligence is not general-purpose?
If not, what do you mean by general-purpose?

~~~
partycoder
The human brain has the ability to create models to solve problems in an
unsupervised way - be it problems around survival, reproduction, or things
higher in the hierarchy of needs that can be very abstract in nature.

Some people might then argue that the human experience is also about
creativity, entertainment, learning, social interaction, spirituality, fitness
and a long list of things.

Part of it is that the brain is not only raw problem solving power... it is
embodied in a human body, with bodily needs and hardwired behaviors coming
from older brain structures - like having an intuition for what may look
like a predator or threat (and therefore evokes fear), what may be fine to
eat, etc., and other stuff correlated with survival that guided our
evolution but does not necessarily have to do with survival.

But AI can be embodied into anything, and the equivalent to its primitive mind
can be played with. While we have many learned behaviors, there are aspects
that are not learned. AI will be different. What AI ends up developing into
will greatly depend on how that is done.

------
macsj200
"Humans do not have general purpose minds, and neither will AIs."

The author must not have met many humans.

~~~
mindcrime
_Humans do not have general purpose minds, and neither will AIs._

Our minds _are_ "general purpose" compared to, say, a chess playing computer
program. But they're not necessarily "general purpose" in the most, well,
general, sense. They're evolved with specific capabilities and talents that
are geared towards helping humanoid, bipedal, mammalian creatures survive and
replicate on a specific small blue planet, orbiting a particular yellow star.

As he pointed out in the article, there are examples of animals, like
squirrels, demonstrating "intelligence" of a form that humans don't even
remotely come close to having.

So, whether or not we have "general purpose mind" depends on how generally you
define "general purpose." Which I think is actually an interesting point, in
the context of what the author was driving at.

~~~
ebcode
"... orbiting a particular yellow star."

Our star actually casts white light. It just looks yellow from Earth. If the
sun were yellow, then the moon would look yellow when it was straight
overhead. The moon looks white overhead, because the light from our star is
white.

~~~
mindcrime
Good point. Guess I read too many Superman comics as a kid and that whole
"yellow star" thing really sank in.

------
IAmGraydon
Perhaps my self preservation instinct is completely broken, but why are people
so afraid of the possibility that the human race could be replaced by
hyperintelligent machines? We aren't perfect (quite the opposite), and a brain
that works in the way that ours does has severe built-in limitations. Perhaps
the greatest achievement the human race could ever obtain is to create
something greater than ourselves. Something that can carry on learning and
understanding the universe around us in ways that no human mind ever could.

~~~
tambourine_man
I agree, as long as it is better than us at _everything_.

My fear is being crushed by an amazing strategist that won't ever get poetry,
for instance.

------
khiner
I didn't read this at first because I thought it just sounded like
opinionated clickbait full of strawmen and superlatives. But then I caved in
and read it, and it turns out that my instincts were right. Then I saw the
author was Kevin Kelly and I felt sad. Almost as sad as when Stephen Hawking
said we should discontinue the SETI program because aliens would most likely
want to harvest our precious carbon and organs and enslave us if they found
out we were here. UNSUBSCRIBE!

~~~
khiner
For a much better and more constructive read toward a stringent definition
of intelligence, I would recommend anything from Shane Legg - most
pragmatically this:
[https://arxiv.org/pdf/1109.5951v2.pdf](https://arxiv.org/pdf/1109.5951v2.pdf)
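For context, the universal intelligence measure behind Legg's work (stated
from memory, so treat it as a paraphrase rather than the paper's exact
formulation) scores an agent $\pi$ by its expected performance
$V^{\pi}_{\mu}$ across all computable environments $\mu$, weighted toward
simpler environments via Kolmogorov complexity $K$:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

The simplicity weighting is what makes the measure a single number while
still covering every environment, which is relevant to the article's
complaint that intelligence is not one dimensional.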

------
ooku
The author should read this:
[https://arxiv.org/abs/1703.10987](https://arxiv.org/abs/1703.10987)

------
slavakurilyak
> Emulation of human thinking in other media will be constrained by cost.

Technology assets generally decline in price as more efficient means of
production and distribution become available (e.g. the cloud) and technology
components become more commoditized.

> I will extend that further to claim that the only way to get a very human-
> like thought process is to run the computation on very human-like wet
> tissue.

I think wetware (or human-like wet tissue) needs to be created first, before
any judgement can be made about its costs.

------
dboreham
Nice to see some pushback, but all the chatter in the MSM these days about
self-driving cars and AI that's going to replace everyone's job (except, I
note: lawyers..) makes me strongly suspect that somewhere there is someone
with an agenda driving this chatter. Someone's bonus depends on the received
wisdom being that "AI is a commin' ta getcha..", which it decidedly isn't,
imho. Yes, it works for facial recognition (sometimes) and deciding whether
to reject spam (sometimes), but not for large swathes of the blue- and
white-collar job world. Long, long way off.

Note: obviously there's nothing special about the meat between a human's ears,
so _one_ day someone in theory should be able to build a machine that matches
and exceeds a human's thinking ability. But that's not going to happen in any
of our lifetimes.

~~~
joshualedbetter
lol you may want to pull your head out of the sand
[https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/](https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/)

[http://www.cnbc.com/2017/02/17/lawyers-could-be-replaced-by-artificial-intelligence.html](http://www.cnbc.com/2017/02/17/lawyers-could-be-replaced-by-artificial-intelligence.html)

------
ansible
_These complexes of artificial intelligences will for sure be able to exceed
us in many dimensions, but no one entity will do all we do better._

Hah, I don't think so. For sure, future systems we design will have multiple
kinds of intelligence. And then we'll slap some pattern matching onto the
front end to help the system recognise which intelligence to apply to which
situation - much like how you recognise a math problem and pick up a
calculator, or encounter a concept you don't recognise and pick up a
dictionary.

So we'll develop systems that have many more of these intelligences, which
will each have superior abilities to what we have now (think infinite
precision math library vs a cheap handheld calculator) running at high speed
and in general able to handle much larger problems.

How is this not a superintelligence by any reasonable definition?

------
EGreg
I think this whole thing misses the point.

The main difference between these machines and biology is that, once an
improvement is discovered, it can be downloaded very quickly and cheaply onto
all the machines.

Copying is perfect and can be checksummed. Unlike learning in a university,
say.

This is also what enables things like deep learning across all the world's
medical data for Watson. A doctor somewhere can't know all the news
everywhere and discover statistics and patterns on command, while Watson can
not only ingest all this info but also upload the results to all the places.

This ability to perfectly replicate a program also makes the "self
preservation" aspect and the "identity" aspect of computers different than
that of biological organisms. What is identity, after all, if a program can be
replicated in many places at once?

~~~
gls2ro
> Copying is perfect and can be checksummed. Unlike learning in a university,
> say.

What if exactly this flawed way of copying information is what allowed us to
make discoveries?

I mean, what if it is exactly because a human transmits the
information/theory with less confidence than the one who discovered/invented
it that there is room to doubt it, making the next discovery more possible?

Edit: formatting

~~~
tlb
Adding perturbations is a common technique in machine learning. For example,
the evolution strategies approach
[[https://blog.openai.com/evolution-strategies/](https://blog.openai.com/evolution-strategies/)]
makes hundreds of randomly tweaked policies, evaluates them against the
task, and recombines them with weights proportional to their performance.

Another approach is to train several different neural networks (an ensemble)
on a task, and then train a final neural network based on the average of the
ensemble.
[[https://arxiv.org/pdf/1503.02531v1.pdf](https://arxiv.org/pdf/1503.02531v1.pdf)]

So you can probably replicate the useful features of flawed copying between
humans.
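A minimal sketch of that perturb-score-recombine recipe (an illustration of
the idea, not OpenAI's implementation; the two-parameter objective here is a
made-up stand-in for a real task):

```python
import random

def score(params):
    # Hypothetical task: get the parameters close to the point (3, -1).
    x, y = params
    return -((x - 3.0) ** 2 + (y + 1.0) ** 2)

def es_step(params, rng, pop=200, sigma=0.1, lr=0.03):
    # Sample a population of random perturbation directions...
    noises = [[rng.gauss(0, 1) for _ in params] for _ in range(pop)]
    # ...score each perturbed copy of the parameters on the task...
    scores = [score([p + sigma * n for p, n in zip(params, noise)])
              for noise in noises]
    baseline = sum(scores) / pop
    # ...and recombine: move toward perturbations that beat the average.
    # This weighted sum is an estimate of the score gradient.
    return [p + lr / (pop * sigma) *
            sum((s - baseline) * noise[i] for s, noise in zip(scores, noises))
            for i, p in enumerate(params)]

rng = random.Random(0)
params = [0.0, 0.0]
for _ in range(300):
    params = es_step(params, rng)
```

The loop approaches the optimum without ever computing a true gradient,
which is why the approach works on black-box tasks.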

~~~
gls2ro
In light of this, my idea does not hold up. Perturbations can be simulated,
so there is a way to add this type of chaos into the system. Thank you for
the counterarguments. I need to learn more about ML.

------
sullyj3
On the criticism of treating intelligence as a one-dimensional number, I
briefly note that the one dimensional conceptualization is a simplified but
useful abstraction. It abstracts over the fairly blatant fact that things can
be smarter than other things in ways that we care about. For example, I'm
smarter than a mouse, and Einstein is smarter than me. Maybe not along every
possible dimension. But for every operational intelligence metric you could
come up with, every way of assigning intelligences numbers that was
interesting and useful, I would probably have a higher number than a mouse and
Einstein would probably have a higher number than me.

------
mengibar10
The tools mankind has invented so far have been extremely productive as well
as destructive. I think the worry should not be whether one day superhuman
AI will take over mankind, but whether we will be able to stop/limit the
destruction that advanced tool can do in the wrong hands. The definition of
who's right or wrong is, unfortunately, contested. We are a species that
justifies its actions.

The leverage and exploitation of advanced AI in the hands of malicious
people/corporations/states are in a much closer timeline than the "Superhuman
AI" could get.

So OpenAI-style initiatives are very important to balance things out. Somehow I
am not optimistic.

------
FrozenVoid
Planes don't have to abide by the laws of bird flight. There will be some
breakthrough beyond bird-like mimicry of neural networks: algorithms that
perform in moments what NNs (our mechanical birds) need days to calculate.
Watch for research on how the black boxes of NNs are reverse-engineered and
mapped. "Superbird" AI is just discovering that more general laws (flight)
exist beyond bird emulation (bird flight), and applying them to directly
extract the algorithms that birds (NNs) produce internally (as instinct).

------
cttet
The word intelligence comes from natural language, so it is natural for
different people to have different interpretations of it.

And this article basically gives it a redefinition and then argues from that.

------
messel
Squirrel super memory: it's a combination of scent and memory
[http://jacobs.berkeley.edu/wp-content/uploads/2015/10/Animal-Behaviour-1991-Jacobs.pdf](http://jacobs.berkeley.edu/wp-content/uploads/2015/10/Animal-Behaviour-1991-Jacobs.pdf)

Possibly enhanced by smelling their own saliva? Just guessing

------
logicallee
You have heard this talk from faraway lands about a new type of machine that
can supposedly do more than a person with a stick and something to put it
against as a lever. The Watt steam engine, some people call it. If thousands
of years before Our Lord, humans could roll on logs, pull on pulleys, or push
on the inclined plane vast blocks of stone culminating in a Pyramid to their
pretended gods, the argument goes, what is to keep someone from making a
device more powerful than a man?

I am here to tell you that such lunacy rests on seven misconceptions.
While I will freely grant that perhaps it is possible to apply a lever, yet it
is human power and human power alone that moves that lever. The idea that
anything but a human could do work is absurd on its face. Nobody will ever get
from one town to another except on foot, or perhaps on a horse. To allow the
idea that a machine could do this or any other task is as deranged as
suggesting that machines will fly like birds across continents, carrying
people, or that one day men will simply climb up and into the atmosphere and
go and land and walk upon the moon. It is clear from first principles that
raising or moving anything takes work and power: it is just as clear that
nobody but man shall ever provide that power, let alone any more.

I do not have time to rewrite the above: substitute a hundred billion neurons
doing chemical reactions, and add that it is clear computers can never do
either the same or even less so, any more, and you will see how completely
wrong the author is in every way.

Nobody but a man can ever do work, and nothing but a hundred billion neurons
can or will ever think.

------
circlefavshape
I'm re-reading a bunch of Asimov robot stories that I read as a child, and in
them there is no concept of superintelligence at all. Robots have human-level
intelligence, but there is no suggestion that they have a possibility of
becoming anything other than useful tools for people.

I blame Iain M. Banks for all the AGI kerfuffle

------
stanfordkid
I agree with his proposition that there is no linear "better or worse"

That being said, there is no evidence that an AI that is fundamentally
different from (and potentially inferior to) humans could not be much more
effective at controlling human behaviors, thoughts, viewpoints, or actions.

Furthermore it may be the case that an AI can sense or understand information
we cannot simply because we do not have the "sensors" to understand such
information. The actual "intelligence" does not need to be very high if the
data is that much richer.

From another perspective: the AI may not be as intelligent but may have more
control over the environment than humans (e.g. controlling the smart grid,
traffic routing, etc.); because of this, its ability to influence human
behavior is larger.

Either of these two cases could be deemed "greater intelligence"... just
intelligence of a different kind. We need to look at intelligence less in
terms of human constructs and more in terms of "ability to manipulate human
behavior" -- this would be a human-centric definition.

------
ptr_void
The article is a good first step; the second step would be to pick up an
introduction to philosophy of mind and realize the enormous number of issues
one has to resolve, and methods that need discovering, before getting close to
answering such questions as whether AGI is possible.

------
parenthephobia
> The assumptions behind a superhuman intelligence arising soon are:
> ...
> 4\. Intelligence can be expanded without limit.

The only assumption _required_ is that intelligence can be expanded just
beyond human limits, which I think is a much less controversial claim.

~~~
mcguire
If I can solve an exponential problem with 5 elements in a reasonable amount
of time, a mind 1,000,000 times smarter than I can solve the same problem with
how many elements in the same time?
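To make that concrete, assume the running time scales as 2^n. Then a
millionfold speedup buys only about 20 extra elements in the same wall-clock
budget:

```python
import math

base_n = 5
speedup = 1_000_000
# Same time budget: 2**new_n = speedup * 2**base_n
new_n = base_n + math.log2(speedup)
print(new_n)  # ~24.93, i.e. about 25 elements, not 5,000,000
```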

------
hcs
Irrelevant anecdote: I first saw that radial evolution chart while wandering
the UT Austin campus, I think it was in a lobby, though I remember it being
dominated by bacteria. Interesting to think that might have been Hillis's lab.

------
rojobuffalo
It's hard to tell where this author is coming from. The three main assumptions
you have to make for AGI are (via Sam Harris):

1\. Intelligence is information processing.

2\. We will continue to improve our intelligent machines.

3\. We are not near the peak of intelligence.

The author's first counterpoint is:

> Intelligence is not a single dimension, so “smarter than humans” is a
> meaningless concept.

Intelligence is information processing so "smarter than humans" just means
better information processing: higher rate, volume, and quality of input and
output. Aren't some humans smarter than others? And isn't that a power that
can be abused or used for good? We don't have to worry about it being like us
and smarter; it just has to be smart enough to outsmart any human.

He then talks about generality like it's a structural component that no one
has been able to locate. It's a property, and it just means transferable
learning across domains. We're so young in our understanding of our own
intelligence architecture that it's ridiculous to build a claim around there
being no chance of implementing generality.

This statement is also incredibly weak:

> There is no other physical dimension in the universe that is infinite, as
> far as science knows so far...There is finite space and time.

There is evidence that matter might be able to be created out of nothing,
which would mean space can go on forever. We might only be able to interact
with finite space, but that isn't to say all of nature is constrained to
finite dimensions.

Even still, he doesn't make sense of why we need infinite domains. You only
need to reach a point where a programmer AI is marginally better at
programming AIs than any human or team of humans. Then we would no longer be
in the pilot's seat.

~~~
goatlover
> Intelligence is information processing

That's a claim. What is your support for this? Computers in the 1940s could
process information. Were they intelligent?

> just means better information processing: higher rate, volume, and quality
> of input and output.

Computers have been able to perform information processing better than humans
from the beginning, thus the reason for their creation. Information processors
are tools to extend human intelligence.

~~~
rojobuffalo
The exact quote is "Intelligence is a product of information processing in
physical systems."

It's the scientific definition rather than the colloquial definition.

~~~
goatlover
That's a philosophical assumption of what intelligence is. It's based on the
metaphor that thought is calculation, and computers are thinking by virtue of
manipulating symbols, and human brains are therefore doing the same thing in
wetware.

~~~
rojobuffalo
There isn't any counterevidence at this point showing it's a false premise.

------
blazespin
AI is undergoing exponential growth. As it gets better it becomes more
profitable and feeds investments into itself. It may not make everyone
redundant, but it will make most.

------
GolfJimB
Not sure about your list of 'some of the smartest people alive today'; it
makes me think the article was written by someone who is nowhere near such a
list.

------
goatlover
We already have the equivalent of Superhuman AI in the form of corporations,
governments, and society in general. I don't buy the claim that sometime in
the future a singular artificial mind will come into existence whose continual
self-improvement will make it smarter, with access to more resources, than
Google, the US government, or all of human civilization, whose billions of
organic human intelligences are already empowered by machines.

We've already achieved super intelligence. It's us empowered by our
organizations and technology.

------
akyu
I completely agree with the author. Hiding my head in the sand and plugging my
ears will completely avoid the AI apocalypse.

------
wwarner
refreshing. i think the cost factor isn't brought up enough. none of this is
to say that ai isn't going to change the shit out of everything, it's just
that the superhuman, "summoning the demon" rhetoric is imprecise, premature
and distracting.

------
aaroninsf
A one-sentence rebuttal to this is that the exponential take-off of human
civilization ~= accelerating distributed intelligence.

You can quibble about what an AI is; if you draw a box around human
civilization and observe its leverage and rate of change, well, the evidence
is that we are riding the superhuman takeoff.

------
macawfish
It may be a myth, but that doesn't mean people won't manifest powerful images
of it.

------
hasenj
It seems like the only thing this author can offer is playing on words to make
things seem obscure, blurry and unclear.

His central argument seems to be that intelligence is not a single thing, and
although he doesn't say it directly, I think he doesn't believe in IQ.

He's committing the same kind of fallacy committed by certain radical
ideologues, which says something along the lines of: since you cannot define
something with 100% accuracy, any statement about it is equally invalid.

We don't have to engage in this kind of meaningless argument about semantics.

There are clear and easy to understand examples of scenarios where super AIs
can cause harm to human societies that speakers like Sam Harris have
articulated pretty well.

------
Analemma_
When you want to discuss "the myth of a superhuman AI", it's important to
carefully separate the two categories of claims:

1\. The claims by economists that AI -- even if it's not "strong AI" -- will
put lots of people out of a job with potentially severe societal/economic
repercussions

2\. The claims by the Bostrom/Yudkowsky/etc. crowd that an AI intelligence
explosion will cause the extinction of humanity

Without saying anything about the plausibility or lack thereof of either 1 or
2, I think we can all agree that they are very different claims and need to be
analyzed separately. Right from the very first sentence the author seems to
muddle the two, so I don't think there's much cogent analysis in here.

~~~
ffwd
> 2\. The claims by the Bostrom/Yudkowsky/etc. crowd that an AI intelligence
> explosion will cause the extinction of humanity

Maybe someone can correct me if I'm wrong here, but I have a hard time
understanding what /any/ "utility function" would be, of the kind the super-AI
people talk about. It can't be a passive deep learning network that parses
information and gives an output; it has to be some kind of complex
perception/action loop of many neural nets and actuators in the real world
that somehow leads to intelligent, self-improving behavior. I guess you could
make a deep learning controller for self-driving cars, say, and if an input to
many cars is wrong, all the cars crash and create a big cascading mess of
wrong input values, but that kind of accident is a far cry from an intelligent
chain of events where every link in the chain is an intelligent decision but
the ultimate goal is bad.

And, do we even know any way to chain many deep learning networks together
that accurately give correct output values, that we then can hook up to a
controller to give a utility function, which can then lead to a cascade of
intelligent decisions across domains?

~~~
jononor
Check for instance the 'Paperclip maximizer' thought experiment,
[https://wiki.lesswrong.com/wiki/Paperclip_maximizer](https://wiki.lesswrong.com/wiki/Paperclip_maximizer)

~~~
goatlover
The Paperclip maximizer assumes that you could have an AI smart enough to turn
the world into paperclips, but not smart enough to understand what the human
meant by "make me some paperclips".

My guess is that paperclipping the world takes magnitudes more intelligence
than understanding what a human means when saying something ambiguous.

~~~
Smaug123
Not necessarily. It could be smart enough to know what you meant (and indeed
to deduce the entire history of your thoughts!) and simply not care, because
you told it to care about something else.

For example, maybe you inadvertently programmed it to "be totally literally
100% certain that you've completed the task I told you to do". Then from its
perspective, there's always a tiny, tiny chance that its sensors are being
fooled or malfunctioning, so it can't be literally 100% certain that it's
_ever_ made a paperclip successfully, so by your programming, it should keep
making more paperclips. This is independent of whether or not you wanted it to
do that: it's a result of what you programmed it to do and what you told it to
do.

~~~
goatlover
If a human being is smart enough to realize that you don't want to paperclip
the entire world, then surely a super AI would be smart enough, by definition.
Part of intelligence is knowing when to ignore all those tiny chances. Human
intelligence is less brittle than artificial because we can handle ambiguity
and know when it's ridiculous to continue a task, just because there might be
some small chance that it's not finished.

We also know that paperclipping the world will get in the way of other goals,
like going to the show or making money.

~~~
Smaug123
I did indeed say that a superintelligent AI could realise that we don't want
to paperclip the world. But it need not _care_ what we want, unless we're
really, really careful about how we program it.

What is "ridiculous" about continuing a task because we're not certain that
it's done yet? It's only your human moral system saying that. Just because
humans usually don't value things enough to pursue them to the exclusion of
all else…

~~~
goatlover
I tell my super intelligent assistant, who's much better at understanding
human language than Siri, to make me some paperclips.

It understands some to mean more than one and less than infinity. In fact,
some means less than "a lot". The meaning of a lot depends on the context,
which happens to be paperclips for me.

What is "some" paperclips for me? It depends on how many papers I might need
to clip (or whatever use I might have for paperclips). My super intelligent
assistant would be able to work out a good estimate.

After having an estimate, it can go make me "some" paperclips, and then stop
somewhere short of paper clipping the entire world.

Alternatively, it could just ask me how many "some" means.

~~~
Smaug123
You're still assuming that the assistant cares about doing what I mean, rather
than doing precisely what I say or doing what I programmed it to do. That only
happens if we're really, really careful about how we programmed the AI in the
first place. I grant you 100% that the AI probably knows precisely how many
paperclips I want, but you're assuming that it wants the same number of
paperclips that I want.

What an agent considers to be "good" is orthogonal to how intelligent that
agent is. An agent of arbitrary intelligence can have arbitrary goals; the
goals of an intelligent agent need not in principle look anything like those
of a human. The only reason a superintelligent AI's goals would look like
those of a human is because the humans very, very carefully programmed them
into the AI somehow. Very, very careful programming is not a feature of how
humans currently approach AI research.

------
bhouston
Superintelligence is a cargo cult for some (you know whom I am talking about),
but that doesn't mean it won't happen to some degree.

------
felipemnoa
I think it is OK to point out obvious errors in an approach when someone is
trying to create something new. But all I can read in this post is that you
cannot create superhuman AI just because he thinks it is not possible. I don't
think I read any real arguments.

All he is doing is trying to convince us that it is not possible to create
superhuman AI.

Hopefully nobody is convinced by this post not to try to create a superhuman
AI. Most of us will fail but at least one will succeed. I don't think it is
any exaggeration to say that this will probably be our last great invention,
for good or for bad. Of course, I may just be biased given my own interests in
AI.[1]

[1]
[https://news.ycombinator.com/item?id=14057043](https://news.ycombinator.com/item?id=14057043)

------
darawk
Why do people like Kevin Kelly? Everything i've ever seen him say is
consistently misinformed and poorly thought out. I tried to read one of his
books recently, because I heard a number of people recommend it, and I
couldn't even finish it (highly unusual for me).

Basically every point he makes in this post is just fundamentally wrong in one
way or another. He clearly has no understanding whatsoever of what he's
talking about, on the technical, biological, or psychological sides. He's just
saying things that seem true to him, with zero context or understanding of any
of the issues involved.

> Intelligence is not a single dimension, so “smarter than humans” is a
> meaningless concept.

Multi-dimensional vectors have magnitudes just like scalars do. When will
people get over this whole "intelligence is not one thing, therefore you can't
say anything at all about it" nonsense?
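For what it's worth, the magnitude point is just the Euclidean norm; a trivial
sketch (the dimensions and scores here are invented, not a real metric):

```python
import math

# Hypothetical scores along three dimensions of intelligence
mouse = [1.0, 2.0, 0.5]
human = [8.0, 9.0, 7.0]

def magnitude(v):
    """Euclidean norm of a multi-dimensional vector."""
    return math.sqrt(sum(x * x for x in v))

print(magnitude(mouse) < magnitude(human))  # True
```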

> Humans do not have general purpose minds, and neither will AIs.

False absolutism. Human minds are certainly _more_ general purpose than any
existing AI. When an AI has a mind that is more general purpose than ours, I
think it's fair to call it a general-purpose AI.

> Emulation of human thinking in other media will be constrained by cost.

According to who? The only person that could answer that would be someone who
already knew how to emulate the human brain. Although, come to think of it,
some 50% of the human population are able to create new brains, at quite
little cost. So it is empirically possible to synthesize new brains extremely
cheaply.

> Dimensions of intelligence are not infinite.

Lol, according to who? What does this even mean?

> Intelligences are only one factor in progress

Sure. So what?

There are plenty of perfectly legitimate, well thought out, informed critiques
of AI fear mongering. This, however, is not one of them. This is garbage.

~~~
Ngunyan
> When an AI has a mind that is more general purpose than ours, I think it's
> fair to call it a general-purpose AI.

Can you give an example of an intelligence that is more general purpose than
human intelligence?

~~~
darawk
I didn't claim one existed.

