

The Cyborg Compulsion - r4um
http://markburgess.org/blog_cyborg.html

======
keithwhor
This is a fantastic article. Perfectly encapsulates all of my issues with AI
doomsayers.

 _> Cynics might say that you have to eliminate humans because of expense and
human error, etc, but now we are losing sight of purpose: what are humans here
for? Humanity is the source of intent, and there is no independent
evolutionary mechanism competing with that. Without humans, there is not even
a process to automate._

Some very intelligent people are quick to jump aboard the "AI existential
crisis" train without considering the larger evolutionary picture. We are not
separate from the computer systems we create, they exist in a symbiotic
relationship with us and they are driven by _our_ basic needs. Our basic needs
are driven by the emergent biological properties of our bodies and our
ecosystem. While it can be argued that, with our "intelligence", we do harm to
these things, it is not in our best interest to eliminate them. Machines (and
automation) are just logical, _necessarily biological_, extensions of humanity.
Silicon is just life's next evolutionary step. It is not reasonable to
believe, based on patterns of evolutionary development, that life (including
humanity) is at serious existential risk from this progression.

 _> For an intelligence to emerge, in an artificial system, we would have to
very purposely build it and train it interactively. We are not merely
databases. Even if we could do this:

> Do we know what intelligence is?

> Why would we make something to imitate our own?

> Would an artificial system have curiosity? (perish all the Internet cats!)

> Why do we think that intelligence would escape and kill us?

> Why would we equip the intelligence with access to the tools for our demise?

> Are we so sure that we would even be noticed or interesting to an artificial
> intelligence?

> Would we even recognize artificial intelligence if it were different from
> our own, and vice versa?_

Fantastic questions. Brilliant author, was a pleasure to read.

~~~
RoboTeddy
> Our intelligence grows from childhood over many years of training, through
> our physical and mental interactions with the world. We learn methods
> alongside experiences. It is not about the speed of linear computation, or the
> amount of memory.

There's nothing in principle that prevents machines from learning methods. For
example, there's software that can learn some simple algorithms:
[http://arxiv.org/abs/1410.5401](http://arxiv.org/abs/1410.5401)

> Do we know what intelligence is?

It's hard to pin down exactly what anything is (unless it's something we
constructed from axioms). I'd be hard pressed to tell you exactly what a car
is (must it have exactly 4 wheels?) -- but I still look for them when I cross
the street.

You know intelligence when you see it. Bill Gates is more intelligent than a
fish. That's a useful distinction to make in the world! (And that's what words
are for)

> Why would we make something to imitate our own?

It's economically valuable! That's why Facebook, Google, Microsoft, Baidu, and IBM
are collectively investing billions of dollars per year.

> Would an artificial system have curiosity? (perish all the Internet cats!)

From the outside, curiosity looks like a drive to explore without a definite
prediction about how it might help you achieve ultimate goals.

Curiosity in that sense should automatically appear in some (theoretical)
systems -- for example, Marcus Hutter's AIXI formalism
([https://en.wikipedia.org/wiki/AIXI](https://en.wikipedia.org/wiki/AIXI)).

More concretely (but less directly akin to curiosity), algorithms that make
tradeoffs between exploitation and exploration, such as solutions to the
multi-armed bandit problem
([https://en.wikipedia.org/wiki/Multi-armed_bandit](https://en.wikipedia.org/wiki/Multi-armed_bandit)),
could display a kind of curiosity. When faced with a number of slot-machine
levers, each with an independent and unknown payout schedule, the optimal
strategy involves switching between exploiting known high-payout levers and
becoming "curious" and trying out new levers.

The Monte Carlo tree search
([https://en.wikipedia.org/wiki/Monte_Carlo_tree_search](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search))
algorithm used to make Go AIs employs such a strategy
([http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102....](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102.1296)).
The algorithm tries out certain variations of play, and then sometimes becomes
"curious" as to whether certain other lines of play might be better, and
switches over to deepening its investigation of them.
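
For a flavor of how that "curiosity" is implemented, here is a small sketch in
Python (with my own invented names, following the UCB1 selection rule commonly
used in UCT-style Monte Carlo tree search, not anything specific from the cited
paper): the second term of the score grows for rarely visited moves, which is
what pulls the search back toward under-explored lines of play.

    import math

    def ucb1_choice(children, c=1.4):
        """Pick the child balancing exploitation (average result so far)
        against exploration (a bonus for rarely visited children)."""
        total = sum(child["visits"] for child in children)
        def score(child):
            if child["visits"] == 0:
                return float("inf")  # always try an untested move first
            exploit = child["wins"] / child["visits"]
            explore = c * math.sqrt(math.log(total) / child["visits"])
            return exploit + explore
        return max(children, key=score)

    # Toy example: one move looks strong, one is barely explored, one untried.
    children = [{"move": "A", "wins": 60, "visits": 100},
                {"move": "B", "wins": 4, "visits": 5},
                {"move": "C", "wins": 0, "visits": 0}]
    print(ucb1_choice(children)["move"])  # "C": the unvisited move is chosen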

These are just examples of things that are somewhat functionally equivalent to
curiosity from the outside -- it says nothing about the internal sensation of
curiosity that humans experience. I have no idea what's necessary for that!

> Why do we think that intelligence would escape and kill us?

If there _were_ extremely intelligent software (a huge if!), it does at first
seem ridiculous to claim that it'd escape or hurt people. Why would it be driven
to do such a thing? But the argument espoused by some very smart people, e.g.
Elon Musk, Stuart Russell
([http://edge.org/conversation/the-myth-of-ai#26015](http://edge.org/conversation/the-myth-of-ai#26015)),
Bill Gates, Stephen Hawking, et al., runs a bit like this:

Intelligent software, without being malevolent per se, might in the pursuit of
its goals incidentally do things that people don't like. (More on this line of
reasoning:
[https://selfawaresystems.files.wordpress.com/2008/01/ai_driv...](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf))

Humans don't harbor a particular ill-will towards ants, but if we become
interested in some (orthogonal) goal like building a basement for a house, we
might along the way destroy some ant colonies. When human-level intelligence
arrived and went about its business, it led to the extinction of quite a few
species.

~~~
keithwhor
Great answers. One thing I have a particular problem with, however, is this:

 _> Humans don't harbor a particular ill-will towards ants, but if we become
interested in some (orthogonal) goal like building a basement for a house, we
might along the way destroy some ant colonies. When human-level intelligence
arrived and went about its business, it led to the extinction of quite a few
species._

I would argue that the goals of human beings are almost entirely orthogonal to
the goals of ants whenever we come into contact. But this isn't a fair
comparison. Human beings and their intelligence didn't emerge from the complex
interactions and structures created by billions of ants. We evolved completely
separately, shaped by hundreds of millions of years of different evolutionary
pressures, including competition for resources (albeit at different scales).
When we destroy ant hills, it's because we're competing for space with a
species we diverged from hundreds of millions of years ago and that now
occupies a completely different ecological niche than we do.

A fairer comparison would be likening AI (and all automata + the internet)
to the cerebral cortex of the mammalian (human) brain and human beings to the
brainstem. Human beings are ancient, robust and necessary for all basic
functionality of the organism ("humanity" as a whole). We are responsible for
homeostatic regulation as well as fundamental, reactive survival instincts and
processes. We make "emotional", visceral assessments of situations and react
before higher-order systems are able to even assess the complexity of the
problem. Further growth and development of "humanity" as an organism selected
for the growth of the cerebral cortex (automata) to assess, interpret and
manage more complex internal and external interactions. This allows humanity
(the organism) to be more robust and self-sufficient.

Viewed through this lens, a developmental pattern that would lead to an
existential crisis for human beings seems, at best, extremely improbable. We
may not be able to predict exactly how AI
will develop, but we can make some educated guesses based on comparisons to
emergent systems we've already seen in nature - and I think "ants vs. humans"
is just poor pattern matching. :)

~~~
RoboTeddy
> We may not be able to individually predict how AI will develop, but we can
> make some educated guesses based on comparisons to emergent systems we've
> already seen in nature - and I think "ants vs. humans" is just poor pattern
> matching. :)

Oh, I think I see. I didn't mean to imply anything about how higher
intelligence might develop, and I agree that the ants and humans analogy
doesn't say much about how intelligence might come about. It was just supposed
to illustrate what can happen (from the ant perspective) when comparatively
super intelligent goal-seeking things are present in a shared environment
(regardless of how they arrived).

I have little idea if/how/when highly intelligent AI could arrive. Can we
entirely rule out the possibility of AI researchers making advancements over
time, and then eventually some research lab building one that's super
intelligent compared to us? Why? [normally things as unlikely-sounding as this
wouldn't merit discussion in my book, but in this case there might be high
stakes!]

~~~
kylebrown
> _Can we entirely rule out the possibility of AI researchers making
> advancements over time, and then eventually some research lab building one
> that's super intelligent compared to us? Why?_

That's not the thing we should be worried about, or at least that's the point
I got from the article. The thing we should be worried about is the way
machine learning is applied in the here-and-now: the ethics of big data social
networks, the robustness of complex (API-driven) systems, and so on. I fully
agree that these issues are much more pressing and worrying than some emergent
super-intelligence.

But I thought the article was weak in its discussion of dreams, creativity,
and linear "flat data". It links to a June 2015 popular mechanics article
about applying a genetic search optimization algorithm to discover gene
regulatory networks, downplaying it as not-true-intelligence. But it does not
mention deep neural networks and their higher-dimensional abstractions,
particularly the psychedelic "inception" images.

The author also mentions "linear reasoning", and says he expects we'll learn
more about intelligence from stem cell and Alzheimer's research, and the
"tissues surrounding neurons, and the roles they play in contextual
regulation." As if to set up some dichotomy between machine and biological
intelligence. But what about deep neural networks?? I'm not sure how that
would affect the author's arguments, but I'd like to see it discussed.

------
dajohnson89
> _Hand-made confectionary [sic] is still popular._

Huh.

> _Tupperware has not replaced basket weaving_

Right. Handmade baskets are a staple in any modern American home.

> _Although music can be programmed by computer, opera and classical music are
> booming._

Really? Have you turned on the radio in the past 30 years? I live in a major
metropolitan area, and there are only two classical music channels. And they
are constantly going on fundraising drives, begging listeners to donate so
that they can stay on the air.

~~~
joshuapants
Handmade candies are certainly popular.

I have several handmade baskets, though I confess that they were handed down
to me from my parents. They don't have to be a staple, however, to still have
a place these days.

> Really? Have you turned on the radio in the past 30 years? I live in a major
> metropolitan area, and there are only two classical music channels. And they
> are constantly going on fundraising drives, begging listeners to donate so
> that they can stay on the air.

That means that classical music hasn't found a stable home on the radio in
your area. It doesn't at all mean that classical music and opera aren't
booming. The 21st century is jam-packed with classical composers and there are
plenty of people recording, performing, and consuming those works.

~~~
dajohnson89
Handmade candies aren't popular; they're trending. Just like artisanal
pickles.

Also, there's a reason why orchestras all over the country are going on strike
or shutting down.

~~~
joshuapants
> Handmade candies aren't popular; they're trending. Just like artisanal
> pickles.

Things trend because they're... wait for it... _popular_.

> Also, there's a reason why orchestras all over the country are going on
> strike or shutting down.

Unless you assume that all classical music is orchestral or that you can only
experience such music in a certain setting, I'm not sure where you're going
with this.

------
falcolas
I'll be happy to put my keyboard down and never write another line of code;
there's quite a bit I could do if I weren't always mired in the realities of
writing production-grade code.

Of course, I imagine that will happen at some point after McDonald's has been
completely automated and self-driving cars can navigate Montana winters with
ease.

It's an odd article that downplays the impact and capabilities of automation
and AI, and then complains that programmers haven't been replaced by the same.

------
kanzure
> We need to sharpen our understanding of both intelligence and mechanical
> behaviour: what is smart behaviour, and who says so?

And also: is there a simpler or better theory, one more plausible than
intelligence, that accounts for all of the evidence?

