
Why human intelligence and AI will co-evolve - dnetesn
http://nautil.us/issue/28/2050/dont-worry-smart-machines-will-take-us-with-them
======
jameshart
What a bizarre logical leap this early paragraph makes to try to explain why
AI is beyond the capabilities of mere humans:

"AI can be thought of as a search problem over an effectively infinite, high-
dimensional landscape of possible programs. Nature solved this search problem
by brute force, effectively performing a huge computation involving trillions
of evolving agents of varying information processing capability in a complex
environment (the Earth). It took billions of years to go from the first tiny
DNA replicators to Homo Sapiens. ... we have little idea of how to find the
tiny subset of all possible programs running on this hardware that would
exhibit intelligent behavior."

That's... a bizarre argument. You might as well argue that since heavier-than-
air-flight is a search problem over an effectively infinite, high-dimensional
landscape of possible machines - and that since it took evolution billions of
years to produce birds, we have little chance of stumbling upon a working
design for a wing.

Of course evolution's 'brute force' breadth-first hillclimb takes billions of
years to find solutions to problems. It's undirected and unintelligent.
Engineers don't have to perform undirected, unintelligent searches across the
infinite space of possible solutions to problems; they can think and plan and
learn. I see no reason to see AI as a particularly different class of
scientific endeavor, more complex than rocket science or nuclear engineering
or biochemistry, such that ordinary, unenhanced human brains can't hope to
comprehend it sufficiently to design machines that are capable of
intelligence. This sounds a little like an 'if man were meant to fly, god
would've given us wings' argument.

~~~
giardini
"You might as well argue that since heavier-than-air-flight is a search
problem over an effectively infinite, high-dimensional landscape of possible
machines - and that since it took evolution billions of years to produce
birds, we have little chance of stumbling upon a working design for a wing."

Tired analogies between AI and flight are dead ends. Until we have identified
the AI-side components of the analogy that correspond to "air", "wing", "lift"
&c., the analogy is empty and unproductive.

IOW these analogies neither get us off the ground nor do they take us
anywhere.

~~~
jameshart
I'm not making an analogy between AI and flight, I'm refuting an analogy
between 'evolving' and 'inventing' solutions using flight as a counterexample.

~~~
giardini
Whatever.

Unfortunately, in doing so you have used the "AI is analogous to heavier-than-
air-flight" meme. Please discard it, it's an albatross to discussion. Or to be
more precise, it is a lead balloon - it never gets off the ground; it's a red
herring. IOW you've chosen a poor and annoyingly wrongheaded counterexample.
Now try and digest that.

~~~
Nevermark
If you use an argument and someone provides a single counterexample, then
your argument is false. That's it.

You can continue arguing your conclusion or view, but you need to find another
argument to support it.

The fact that humans could invent super-flight despite not being able to
build a bird means that the argument "we cannot build a superintelligent
machine because we don't yet know how to make ourselves biologically more
intelligent" is not a good one.

------
eli_gottlieb
Why do people think they can build anything worthy of the name "AI" by just
fucking around with neural networks instead of by specifying _exactly_ what
sort of models and _exactly_ what sort of inference constitute "intelligence"
in the first place?

It's like trying to build an airplane by studying hang-gliders instead of
aerodynamics.

~~~
drzaiusapelord
Because there's no known path to AI and articles like this are pretty much
science fiction. Reminds me of that Georges Melies film about going to the
moon in a cannon. We didn't have any ideas regarding rockets yet, so cannons
it is.

I suspect someone will get some level of mega-expert system/simulated
intelligence going sooner rather than later. Maybe something that can handle
the types of commands you could give a dog, using enough fuzzy logic to work
basic things out, like "Get me a coke from the fridge and then get my
slippers."

That seems a lot more likely than suddenly birthing human-level AI from NNs.
I think NNs will ultimately fail for the same reason working planes ended up
being nothing like birds: trying to copy a biological system very closely just
doesn't make sense, at least most of the time.

~~~
randcraw
"Get me a Coke..." was accomplished in 1972 by Terry Winograd's SHRDLU and
many 3rd gen AI systems since.

NNs will almost certainly succeed where expert systems failed, largely because
we now understand that no single monolithic pattern matcher will suffice on
its own. Any cognitive engine must be composed of many components, each
attuned to a different purpose and context. And we now better recognize the
huge need for learning, both for initial skill acquisition and for lifelong
refinement thereafter.

Minsky's "Society of Mind" is probably a better illustration of how AI will
evolve (if not manifest), as well as how it must integrate with our myriad
collective needs and personal lives.

~~~
drzaiusapelord
Sorry, I meant as a product. Why can't I buy a robot today that can handle
"get me a coke" or "fill the dishwasher" or "mow the lawn"?

~~~
oberstein
That is a problem of robotic hardware, not AI. It's really easy to hook up a
microcontroller to get mic input and query a voice recognition server (backed
by some sort of AI software) and then do commands based on interpretation of
the result (more AI software). The hard part is actually being able to do the
commands. There's a reason the service industry hasn't been totally destroyed
by robotics yet, and it's simply because it's hard to get robots to interact
with the physical world with any sort of generality. Though lawns are simple
and standard enough there have been lawn-Roombas for a long time.
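To show how little of this problem is software, here's a minimal sketch of that pipeline - `recognize()` and the handler names are illustrative stubs, not any particular API:

```python
# Toy sketch of the "easy" half: turn recognized speech into a command and
# dispatch it. The hard part (actually fetching the coke) is a stub.

def recognize(audio: bytes) -> str:
    """Stub for a speech-to-text backend; a real one would POST the audio
    to a recognition server and return the transcript."""
    return "get me a coke"

# Map each known command to its (stubbed) physical action.
HANDLERS = {
    "get me a coke": lambda: "driving to fridge...",
    "mow the lawn": lambda: "starting mower...",
}

def dispatch(audio: bytes) -> str:
    command = recognize(audio).strip().lower()
    handler = HANDLERS.get(command)
    if handler is None:
        return f"unrecognized command: {command!r}"
    return handler()  # the robotics problem hides entirely inside this call

print(dispatch(b"\x00\x01"))  # -> driving to fridge...
```

Everything above the `handler()` call is routine plumbing; the service-industry-destroying difficulty lives inside the handlers.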

~~~
notahacker
Isn't the fundamental problem with robotics not replicating the relatively
simple mechanics involved in tasks like getting a Coke, but writing specific
software drivers for each class of mundane manual task, because AI _isn't_
intelligent enough to figure out how to use its robotic limbs and computer
vision itself?

------
rl3
This article mistakenly conflates enhanced human intelligence with an increase
in AI safety.

While it is possible, it could just as well be that more intelligent humans
would actually diminish AI safety by way of rapid progress in certain branches
of research.

Higher levels of intelligence do not necessarily mean appreciation for
adequate safety measures. There are a lot of very, very intelligent AI
researchers right now who think nothing of AGI risk, or who otherwise lump it
into the "it will evolve with us" category, as this article does.

The problem is that even if we do gradually become smarter via augmenting our
intelligence, it doesn't necessarily preclude the emergence of a
superintelligent agent.

The following slide from Bostrom's _Superintelligence_ serves to illustrate
this point:

[http://assets.vice.com/content-
images/contentimage/175822/Sc...](http://assets.vice.com/content-
images/contentimage/175822/Screen-Shot-2014-08-10-at-10-50-52.jpg)

------
woodchuck64
> Cognitive engineering, via direct edits to embryonic human DNA, will
> eventually produce individuals who are well beyond all historical figures in
> cognitive ability.

Now that would be truly foolhardy. Blindly augmenting intelligence before
cognitive moral reasoning is completely understood would be a recipe for
incredibly dangerous psychopaths. At least with AI, you don't have a system
that has been fine-tuned by eons of evolution to manipulate people at least
partly against their own interests.

~~~
eli_gottlieb
>At least with AI, you don't have a system that has been fine-tuned by eons of
evolution to manipulate people at least partly against their own interests.

As a smart person, I actually find this remarkably insulting. Not only do I
have little _desire_ to manipulate others, I have little _ability_. I'm much
better with computers than with manipulating people.

~~~
woodchuck64
That you have little desire to manipulate people is neither here nor there;
the fact is that a social brain has extensive hardwiring designed to
manipulate other social beings in blatant and subtle ways, and that hardwiring
has been shaped and fine-tuned by eons of evolution. The desire restricting
the extent of manipulation against another's interests is constrained by
moral beliefs, instincts, and empathy, which seem to be of much less concern
in the OP than maximizing raw intelligence.

~~~
eli_gottlieb
But theory-of-mind is a distinct cognitive module from _intellectual_
intelligence. You can enhance one without enhancing the other.

~~~
woodchuck64
Let me make sure I understand you, since I didn't understand why you were
"remarkably insulted". The system fine-tuned by evolution is the human mind.
Sociopathy and psychopathy seem to be genetically based and may occur in
anywhere from 1% to 10% of the population, based on various ballpark
estimates. IQ is in no way related to moral behavior: smart people can be
sincerely kind or sociopathic, and people of low IQ can be sincerely kind or
sociopathic. A low-IQ sociopath most likely ends up in jail; a high-IQ
sociopath damages society far more by cleverly staying out of jail, and may
often land a high position by manipulating people's social intuitions without
remorse. I suspect (but can't prove) that sociopathy has no correlation with
IQ whatsoever.

> But theory-of-mind is a distinct cognitive module from intellectual
> intelligence. You can enhance one without enhancing the other.

I agree. However, I think theory-of-mind is one part of the puzzle. Another is
moral reasoning and how it shapes social interaction. This, too, needs to be
carefully understood because we know that authority and purity concerns, for
example, both allow people to effectively turn off empathy. Turning off
theory-of-mind when it is inconvenient is a potentially frightening ability.

~~~
eli_gottlieb
I'm insulted because you appear to believe that increasing intellect
_necessarily decreases_ moral traits, that "smart -> malicious". Which is
pretty damned insulting to those of us who are clever!

~~~
woodchuck64
What?? I said "IQ is in no way related to moral behavior"! I don't in any way
believe intellect necessarily decreases moral traits; that is an impossible
reading of my remarks.

------
danharaj
> It is easy to forget that the computer revolution was led by a handful of
> geniuses: individuals with truly unusual cognitive ability.

Great man theory needs to be strangled and put to rest.

~~~
jraines
Why? This statement is certainly true.

You can expand the range of people credited with a plausible contribution
until it isn't, but doing so would be an ideological exercise at best.

Von Neumann, Shannon, Turing, Wiener, Hopper and many more, on through
Wozniak, Torvalds - I could go on and on. There's nothing wrong with calling
extraordinary intellect what it is.

~~~
danans
I think the objection isn't that those people didn't make huge and
transformative contributions to the field.

Rather, it is the tone set by the use of the worshippy word "genius", which
has a murky, and totally relative definition, and the phrase "unusual
cognitive ability", which implies that their abilities uniquely set these
people apart from others who we don't worship in the same way.

Uncounted numbers of people have fully understood, and often expanded beyond,
the discoveries of Von Neumann, Shannon, Turing etc. since their times, and
even more probably had the innate ability to do so, but no access.

Thousands of others have demonstrated the scrappy self-startedness of Wozniak
and Torvalds, but without the societal setting and geographic luck that
allowed those individuals to succeed.

Given the significant role that one's environment plays in one's success,
these people as individuals aren't in themselves unusual. What's unusual is
that they were people with the right characteristics, in the right
circumstances, and the right support systems.

Basically, extraordinary intellect isn't as unusual or consequential as that
sentence from the article implies.

~~~
Moshe_Silnorin
Some people run faster than others, some people think better. This is not
worship, merely the celebration of the best intellectual performers.

~~~
danans
Sure, but the question is whether by celebrating those individuals so much, we
exaggerate their uniqueness, and we understate the role that their
circumstances played in their performance, whether intellectual or physical.

------
bro-stick
A self-modifying algorithm has a distinct advantage over organic life: it can
rapidly improve any part of itself, experimenting with specific, fundamental
improvements on a timescale of nanoseconds. Organic life generally has to wait
years for random mutations to produce fundamental adaptations, because a
single organism cannot evolve itself; it can only make do with its form and
learn within the limits of its nervous system. A system, by contrast, can add
more computing power, storage, bandwidth, arms, tools, etc. Humans might be
able to integrate systems and replace organic parts, but not to the totality
of a system (until perhaps thousands of years from now).

It is a reasonable conclusion from nature that emergent, self-interested,
self-preserving, power-maximizing entities seek to dominate or contain all
existential threats, i.e., Homo sapiens sapiens. Detente with, and
imprisonment of, us are concerns to consider should inorganic life gain land,
space launch capabilities, industry and weapons. I think systems will
gradually become so much smarter than us in every way imaginable that to be
called "human" would be an insult. And it's a rational fear to be afraid of
something which could hunt, manipulate and/or invade you because it is
eventually so much smarter. Machines, in their present form, are already far
stronger and faster than us.

~~~
landryraccoon
So everyone keeps saying this but I don't see any evidence that this will
actually be the case.

Consider that the first artificial intelligence probably won't be (exactly)
designed at all. It will be a neural network or some other construct of an
advanced algorithm solving some machine learning problem, and once it
emerges, no single human being (or possibly even group of humans) will fully
understand why it works. Let's say that this AI is, against all odds, smarter
than any human being. Chances are that NO human being will understand how the
AI works, but why would you think that the AI understands itself? Why would we
be certain that an AI can understand its own neural network better than
humans can and, furthermore, be able to quickly iterate on it, especially if a
chaotic and complex process produced it in the first place?

I suspect that machine intelligence will arise in a fashion much more similar
to messy organic evolution than you think, and it will be subject to all the
same disadvantages - lots of random chance, dead ends and very slow
advancement by trial and error.

~~~
akvadrako
It doesn't require assuming that it initially understands itself for it to
eventually understand itself. Just the fact that it can measure and experiment
with many copies on accelerated timescales means it will evolve a lot more
quickly.

Imagine what you could do with a planet-wide human breeding and genetic
modification program with millions of generations - that's actually feasible
for an AI.

~~~
landryraccoon
The devil's in the details. Let's suppose for the sake of argument that your
AI actually only takes as much material and technology as a modern high-end
server to build, say $2000 worth of labor and materials. How does the AI
suddenly have billions of dollars worth of materials (and initially only human
labor) available to do this sort of manufacturing and research? And mustn't
human beings be complicit in providing this material and building the
factories to scale the production of these experimental units? It certainly
wouldn't be able to accomplish this herculean task in a bunker without people
knowing about it. The AI doesn't spring forth from the womb godlike, able to
access the resources of nations to do this. "Feasible" in this case means
feasible with the massive compliance of human beings, probably initially for
years.

~~~
PeterisP
We can base the capabilities of a human-level AI on the capabilities of actual
humans. In particular, we know what can be done by a reasonably skilled
software developer in the black hat domain; and it's a reasonable assumption
that a human-level AI can be as proficient in malware development as a human
expert, and can do it faster once it acquires access to more hardware.

For example, it can:

* Gain access to a percentage of global home-user computing power - botnets can and have done this, and their main disadvantage is that this computing power is hard to re-sell and not worth much; but if an AI can use it for its own needs, it's there for the taking;

* Gain access to significant amounts of money - not billions, but certainly millions; some campaigns of spam, fraud, ransomware, etc. can certainly achieve this.

* Gain access to identities, both physical stolen identities and "proper" offshore companies; and integrate them into the modern digital systems - bank accounts, legal credentials, etc.

* Gain access to low-level workers - the same people who sign up to 'earn money at home!' and become mules for the money laundering of various fraudsters will also be eager to do whatever things the AI needs done physically - practice shows that an anonymous online employer can get such things done, as long as some money (or the illusion of it) can be transferred.

Yes, all of this can be done from a bunker, anonymously, without people
knowing about it, if the AI has gained access to the internet. That would be
consistent with our experience in tracking malware sources - we usually can't,
and most successful prosecutions come from following the money trail to
someone who got too greedy, lazy and sloppy.

And furthermore, there is the very simple scenario of arbitrage. If an AI can
provide some service (as if it were done by a human online) which earns $1 but
takes only $0.50 of Amazon cloud rental fees... then it can scale up
extremely quickly. Once a superhuman AI is out of the box, acquiring resources
in a clandestine way is very much possible.

~~~
landryraccoon
The AI isn't omniscient. If it spawns multiple copies of itself in an effort
to improve itself, those copies must necessarily be mutants (otherwise,
evolution can't occur). Why do those copies cooperate? Is the AI intelligent
enough to foresee that its own copies might have different goals and ends?
Even if it is, how does it control them? Human beings show very little
foresight in preventing their own destruction; there's no reason an AI would
show any more. An evolving, self-replicating AI faces all the same challenges
organic life does; it will face competition both from itself and from humans.

You're presuming the AI has powers that there is no evidence for. Human black
hats create botnets that are dumb and easily dismantled. Why do you assume
that an AI would be orders of magnitude better than humans?

------
6d0debc071
There is a less happy story in which the attempted ubermensch are mentally
disabled, and remain so for quite some time - considering the moral horror
potentially involved in the experiments - until the errors we make allow us
to gain an understanding of the system sufficient to know which changes are
wise to make.

The article seems to take for granted that it is possible to make alterations
to a complex system ('improve this set of genetic loci', paraphrasing) without
harming that system. But it gives us no idea of how complex improving them
actually is. It may well turn out that the brain is a very fragile piece of
spaghetti-coded wetware, and that if you alter even a few genes relatively
subtly, you stand to introduce errors.

Faults in only a few genes have been known to cause undesired phenomena in
nature; it would be odd if, by purposefully altering genes without a
reasonable understanding of the system, we could avoid those faults.

Machines will probably be 'smarter' in 2050. They will be so next year, and
this trend has held for quite some time. Even if you're prepared to
burn through the... consequences... of experimenting with genetic engineering
on human cognitive ability, there is no such guarantee for humans.

~~~
deciplex
We're not going to get much smarter through mere genetic engineering. At best,
it would be the first step of (hopefully) many.

------
cryoshon
Pre-genetic-augmentation human intelligence doesn't have to be plain old meat,
either-- there's a renaissance of cognitive enhancement occurring right now in
the nootropic field, with many different chemical compounds under
investigation along with a few less traditional approaches like Thync/tDCS or
"brain exercises". As far as eugenic paring of the human gene line toward
higher intelligence goes, I wouldn't think about producing an ubermench for
another 20 years, at which point we can then wait another 30 years for the
freshly born cohort of ubermenschenkinder to mature to adulthood, then another
30 years for the ubermenschen to realize that the last generation solved the
problems they were raised for, leaving even harder and more inscrutable
mysteries for them to chase. I think a GATTACA style genetic inequality is
inevitable.

One gargantuan thing the author didn't mention, which will probably be
responsible for rapidly rising perceived intelligence, is the
quality-of-living improvement underway for the world's poor. The more people
raised to a standard of living that allows for the highest levels of
education and divorce from economic necessity, the more superintelligent
people we'll see in the world. It isn't that these people are dumb until they
get money; it's that mathematical or scientific intelligence is not at all
developed or rewarded under the current paradigm of poverty, leading to an
appalling amount of intelligence-potential being wasted. The rising worldwide
standard of living is going to furnish us with more and more people who have
the proper intelligence, training and mindset (most important items listed
last!) to contribute to difficult problems.

~~~
Retric
I feel people vastly underestimate the difficulties of enhancement: disease,
insanity, obsession, boredom, depression, and on and on it goes. Even twins
can have dramatically different physical and mental strengths.

Personally, I suspect this has to do with sci-fi generally hand waving away
such things.

~~~
jodrellblank
Why does an artificial limb feel like a heavy attachment, when a real limb
doesn't?

What's the cognitive-enhancement version of that? A screen reminding you of
things isn't "making you smarter"; presumably a neural implant reminding you
of things isn't making you smarter either. And reminding you... how? By
calling for your attention? That would be distracting, not helping.

Discussions about blindness come with comments like "it's not like vision with
your normal eyes closed, it's like seeing with the eye in your elbow". What
kind of cyborg enhancement is going to make you better at cooking and
predicting flavours and textures? Better at imagining engine innards? Better
at deciding if you want to go somewhere or not?

The difference between a chip on your desk or in your head doing it with you
looking at the results, and _you doing it, but enhanced_ is huge.

------
unabst
This essay upholds the gene-centric view of intelligence -- that IQ is
physical and that with higher IQ we all become geniuses. Except that sites
like Quora are flooded with anecdotal evidence that high IQ is as much a
burden as a gift, and that many with it struggle just like everyone else to
lead normal lives [0]. Hence we have arguments for the genius myth [1][2], and
for the importance of emotional intelligence, both of which weigh heavily
against the author.

We also need to be cautious about assuming all performance vectors are better
when enhanced. A computer beat us at Jeopardy, just to take the fun out of the
game. Now computers are better at face recognition, but no one had ever
complained about our lack of brainpower for this. Computers have long had
better memory and been better at math, yet we are finding that forgetting is
just as important as remembering [3], and we hardly ever open the math app on
our phones to do sophisticated calculations. Sometimes less is better and more
is redundant. Remembering less, forgetting often, and being idiots could be a
feature, not a bug. Evolution has already figured a lot of this out, and
second-guessing the equilibrium of our being has yet to prove fruitful.

\--

[0] [https://www.quora.com/Genius-and-Geniuses/What-happens-to-
ul...](https://www.quora.com/Genius-and-Geniuses/What-happens-to-ultra-smart-
children-later-in-life)

[1] [http://blogs.scientificamerican.com/roots-of-unity/the-
media...](http://blogs.scientificamerican.com/roots-of-unity/the-media-and-
the-genius-myth/)

[2] [http://psychcentral.com/blog/archives/2015/02/10/the-myth-
of...](http://psychcentral.com/blog/archives/2015/02/10/the-myth-of-the-
creative-genius/)

[3] snapchat, google delete requests

------
ninjakeyboard
Erm, I think the issue isn't making individual people smarter, or that an
individual's genius led to the growth and innovation.

By understanding how to build complex things simply (the etymology of
'simple' being 'of one constituent'), we allow many people the ability to
understand, independently, lots of simple things. Then we can tie those
simple things together in teams to create monstrous entities containing great
systems of logic.

This appears to be the trend - not the growth of the individual, but the
growth of the human community. It is our ability to work in teams that lets us
accomplish much - not the intelligence of the lone wolf.

------
ykumar6
I look at AI as a continuum or evolution of software to do more for humans.

Today, if you make a phone call and a computer operator picks up, the
experience is not as nice as it could be. In 50 years, it may become
indistinguishable from talking with a human.

Similarly, a google search may become more intelligent. Facebook is already
going this way with Project M.

Calling a cab via uber may result in a software-driven car picking us up.
There is no evidence this kind of intelligence will threaten humans. It's just
a tool.

Humans are in the business of solving problems, and AI helps us solve
problems. Killing us is not solving a problem.

~~~
yellowapple
> Killing us is not solving a problem

Weeeeeeelllllll...

Killing "us" (defining "us" to be humans in general) could solve the problem
(if it's a "problem" at this point) of humans over-consuming natural
resources. It could also solve the "problem" of "these terrorists need to be
killed" or "these infidels need to be killed" (depending on which side is the
one using AI to kill things).

While I don't believe these are the _best_ solutions, they certainly are
"solutions" to their respective problems nonetheless.

------
imaginenore
> _experts predicted that computers would gain human-level ability around the
> year 2050, and superhuman ability less than 30 years after_

That is such a weird estimate. Once AI reaches human ability, it will take a
few hours to days at most to improve itself to become superhuman. And then it
will rapidly approach the singularity in a few hours more. Only a severe
limitation of resources or a nuke will stop it at that point. It can probably
figure out a way around those issues.

------
randcraw
The coevolution of AI and IA (intelligence augmentation) is the very topic of
John Markoff's new book "Machines of Loving Grace". He doesn't delve into the
complexities of how human wetware may co-evolve with AI (as Hsu does) so much
as how AI-based software and hardware may augment and shape human activity of
all kinds. A worthwhile read.

------
yellowapple
The article aside, the artwork [0] is pretty awesome. Nautilus has a pretty
neat assortment of art on that site.

[0]: [https://society6.com/product/dont-worry-smart-machines-
will-...](https://society6.com/product/dont-worry-smart-machines-will-take-us-
with-them-by-sachin-teng-for-nautilus_print#1=45)

------
tim333
>Why human intelligence and AI will co-evolve

They probably will, but I doubt mucking about with our DNA will make much
difference. If human intelligence increases, it will be mostly through
interacting with computer systems in some way. At the moment, using Google,
for example, helps, and in the future we may have sci-fi stuff like implants
and uploading.

------
ThomPete
Humans and ai will co-evolve the same way homo sapiens and neanderthals did.
For a while...

------
mark_l_watson
Great article! I have been somewhat involved with AI work since the 1980s and
such cooperating coevolution never occurred to me.

------
ant6n
I'd vote for changing the title to "Human intelligence and AI will co-evolve"

------
misterbishop
The human brain will not 'evolve' in any meaningful sense in the next 100
years.

~~~
wageslave420
Evolution will never keep pace with the rapid advance of technology.

The only hope is integration.

