
Google's AI Boss Blasts Musk's Scare Tactics on Machine Takeover - rayuela
https://www.bloomberg.com/news/articles/2017-09-19/google-s-ai-boss-blasts-musk-s-scare-tactics-on-machine-takeover
======
TCM
It seems like right now "superhuman AI" is a buzzword that people like to use
when they want to be covered by the press. I'm surprised OpenAI didn't chime
in. Physicists seem to use aliens or multiple dimensions for this purpose (but
some also use AI for the same effect).

It sort of distracts people from asking the real questions, like how to use
AI/ML responsibly, because superhuman AI doesn't require much knowledge to
speculate about.

~~~
revelation
Journalists don't feel like they are qualified to report on the actual
technology (which is a good thing), don't bother learning anything in order to
become qualified (which isn't), and don't bother speaking to qualified people
on the front lines of this technology (which is horrible).

So what they have resorted to is reporting on these "philosophical" topics,
because all you need for that is a fucking opinion, right. It's a great
Faustian bargain because you then get all those companies and people, who
similarly have no clue but are fishing for PR, to pile on.

See "should the autonomous car hit the pedestrian or save its passengers" or
"this artist drew a lane marker around his beater car".

------
tyywebb
So he compared modern computers to a 4-year-old and then said worries of
superhuman AI are overblown? That's kind of worrying. 4-year-olds typically
get a lot smarter.

~~~
detaro
That was an older comparison; he has actually updated it to say that computers
are worse, especially when it comes to general-purpose tasks.

------
QAPereo
Comparing computers today to "four-year-olds" is lunacy... the four-year-old
is capable of many tasks no computer can easily accomplish. That said,
eventually it won't be a joke, and it would be nice if we had this
conversation as a species before then.

~~~
detaro
Apparently he agrees:

> _A few years ago, Giannandrea compared artificial intelligence to a 4-year-
> old child. Today, he revised his statement and said that it’s even worse
> than that. “They’re not nearly as general purpose as a 4-year-old child,” he
> said._

(quoted from [https://techcrunch.com/2017/09/19/googles-ai-chief-thinks-
re...](https://techcrunch.com/2017/09/19/googles-ai-chief-thinks-reports-of-
the-ai-apocalypse-are-greatly-exaggerated/), which also has video of the
entire thing and is IMHO a better resource than the thin article above)

~~~
QAPereo
Thanks, that's definitely the better source.

------
eighthnate
In Musk's defense, he was responding to questions about "general purpose" AI
(aka "real" artificial intelligence) and how, if that is achieved, we'd reach
a moment (the singularity) that would fundamentally change our place in the
world. That's what he called "summoning the demon," and that's a legitimate
point.

However, the odds of achieving true AI anytime soon are remote at best. So this
guy's complaint about Musk has some validity too.

~~~
otakucode
I used to think AGI was a long way off myself, until I saw this video:
[https://youtu.be/Aj-zNjff7wY?t=1621](https://youtu.be/Aj-zNjff7wY?t=1621)
He's not just simulating a neural network, which is useless if you want to
create an actual mind; he's involving the body. He's actually got the recipe
for the approach to AGI that has, in my opinion, by far the best chance of
succeeding.

I have only an amateur-interest level of knowledge of neuroscience research,
but even I know that the nervous system isn't just contained in your skull; its
neurons spread throughout your body. And dualism is flatly wrong. There is no
body/mind separation. They are the same thing. Changes to the body are
reflected by changes in the mind. People who experience total facial
paralysis, for one random example, lose the ability to express emotion. And
then lose the ability to feel it. And then lose the ability to even remember
the subjective sensation of feeling it. It changes them as a person on the most
fundamental level. How are you going to get that from a neural net with zero
inputs based upon the biofeedback mechanisms of our meat-based body? You
won't. If you get anything conscious at all, it would end up being profoundly
different from a human mind. Give it a body, however, and enable the
biofeedback even virtually... then you're talking.

------
forapurpose
The great risk of AI to survival is in military applications, I believe. Not
because of super-intelligent Skynets - in fact their lack of general
intelligence may make them more dangerous - but because of superhuman response
times. While banning military AI is as useful as banning nuclear weapons
(impossible on a practical level), I feel that Google and Facebook and others
should be leaders and address these issues, rather than conveniently bury
their heads because the problem, though unavoidable, is still on the horizon,
difficult, disturbing, and inconvenient to their happy visions.

Imagine a human stock trader trying to compete with a 'flash' trading program,
which operates on the scale of (micro?)-seconds. Now imagine they have guns
and are trying to kill each other. Imagine that the survival of millions, of
your nation, of democracy and liberty depend on winning. The military has no
choice but to build robots that autonomously decide and act to kill people and
to destroy the things that people build; it's that or surrender and learn to
speak Russian (or Chinese or the language of whoever does build those robots).

The U.S. military already has systems that do this, such as the guns that
defend ships from incoming missiles - no human could identify, aim at and
shoot down a supersonic missile quickly enough. They plan to implement it in
other situations such as, interestingly for HN readers, defending and
attacking IT systems: Imagine an AI that can launch and adjust attacks in
micro- or even just milliseconds, while the human sysadmin's brain is just
having the first inklings of thought about the response procedure to the first
attack. Again, either surrender or use autonomous AIs yourself.

AIs and associated military robots raise other serious issues: Throughout
human history military power was tied closely to the magnitude of wealth and
population; this restricted the major threats to just a few great powers. But
perhaps you will only need to build a robot army, not a human one; one good AI
development team and an underground manufacturing facility (anywhere in the
world) might be sufficient; your AI doesn't have to be precise or make good
decisions; it only needs to kill so rabidly that others surrender. Nuclear
weapons at least require very specialized materials, limiting potential
manufacturers (though even NK can do it now). Could Singapore do it and become
the Rome of the future? Saudi Arabia? Japan? Finland? In fact, who needs a
nation - wealthy individuals, groups, or corporations might pull it off. What
about GE, Boeing, Google - and those are just a few American companies.

Putin recently said that "the one who becomes the leader in this sphere will
be the ruler of the world."

~~~
otakucode
Well, first off, any time someone wants to try making their technology the
centerpiece of their war effort, you just ignore the technology. You attack
the people controlling it, the people building it, the infrastructure, etc.
And yeah, all the 'civilized' countries have invested very heavily in
ballistic weaponry and agreed that's how they will fight their wars... but
when it comes to survival, to wondering whether what is left after the war
will even be worth preserving, those biological and chemical weapons aren't
going to be left on the table.

The US's view of drone technology really disturbs me. They see drone strikes as
'not warfare' and have permitted civilian agencies with no military
involvement (the CIA) to conduct them. I'm not sure whether they realize that a
drone could quite easily fly in the air above Manhattan. Most likely
they will simply do the slimy thing of waiting until that happens and THEN
deciding it's an act of war when somebody else does it. If they can even
attribute it. I'd expect such uses of drones against a capable adversary to
transform the control rooms in Kansas into the primary target. You can shuffle
war around all you want, but no adversary you are trying to kill is going to
just sit back and let themselves be picked off while you and yours stay safe
at home, expecting the robots to take the fall. It's war. They're going to
show up wherever your blood-filled bodies are and they're going to destroy
them. If it's truly overwhelming, they might just decide to salt the earth and
leave you nothing to rule, destroying whatever it was you were killing them
for just to spite you. The only winning move is not to play.

But aside from that, the trading example you mentioned is a good one. So if
you build a system that is splendidly fast at reacting... you won't have
gotten much of anywhere. The arbitrage markets, where that kind of thing is
valuable, are already owned by people rich enough to make sure you never get a
piece. In the typical market, those systems have a weakness. They can't
account for their own impact upon the market. This is a fundamental problem,
and not one which can be solved with more computing power or even better
algorithms. In order to accurately predict a system that is impacted by the
actions you take in it, you need a system bigger than what you're simulating.
So until the whole of the global human economy can be simulated at a speed
greater than realtime, it probably won't be a problem there. The systems will
also face problems where they encounter unexpected feedback loops when
interacting in the same market as other automated systems.
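
(A toy sketch of that feedback-loop point, in Python, with completely made-up
coefficients: two trend-chasing bots whose own price impact feeds the very
"momentum" signal they react to, so a little noise compounds into large swings.)

    import random

    random.seed(0)
    price = 100.0
    last_price = price
    positions = [0.0, 0.0]              # holdings of bot A and bot B

    for step in range(20):
        price += random.gauss(0, 0.05)  # small external "news" noise
        momentum = price - last_price   # the signal both bots chase
        last_price = price
        for i in range(2):
            order = 15 * momentum       # trend-following order size (assumed)
            positions[i] += order
            price += 0.05 * order       # the order's own price impact (assumed)
        print(f"step {step:2d}  price {price:10.2f}")

    # Combined impact per step is 2 * 15 * 0.05 = 1.5x the momentum, so each
    # bot's reaction to the other's impact compounds instead of damping out.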

There are dangers, though. My primary worry is turning machine learning
systems loose on fundamentally unpredictable data, and then reading the
systems' outputs like the meaningless tea leaves they are but giving them total
trust. We know you can't predict 'black swan' events, but that doesn't stop
people from claiming they can. And nothing about machine learning will change
fundamentals like 'you can't predict a chaotic system whose initial
conditions you can't even know.' A high school in Dubai bought a 'crime
prediction' ML system and presumably is using it to target students before
they do anything wrong. I'll be surprised if that hasn't debuted in US schools
within 5 years. And probably on the street in the UK on the same timeframe.

The sad thing is, if the systems actually work (in the sense of actually
calling out bad things before they're done), no one will be satisfied. I
graduated high school right before Columbine happened. I watched as schools
adopted these lists of "warning signs" and used them to target students and
persecute them under the guise of 'preventing school shootings'. There was one
problem with those lists that nobody mentioned: a list of warning signs that
was accurate would identify 1 or 2 kids in 1 school every 5-10 years. Instead,
the lists identified a couple dozen kids. In every school. Every year. That's
what people actually WANTED. They just wanted tacit approval to grind 'problem
kids' under their thumb while claiming safety as their virtuous goal. Mean
little micro-dictators pretending to be superheroes while indulging the worst
flaws of their own character. And I expect the same will be the outcome of the
'prediction' systems that eventually come to dominate society. Just a way to
hide going after 'troublesome' people who don't fit into the spaces on the
forms the policy enforcers have.
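
(Back-of-the-envelope arithmetic behind that base-rate point, with purely
assumed numbers: even a screen that catches most genuine cases of something
this rare ends up flagging overwhelmingly innocent kids.)

    # Illustrative assumptions only: ~1 genuine future offender per 10,000
    # students, a screen catching 80% of them, and a 2% false-positive rate.
    students_per_school = 1000
    base_rate = 1 / 10_000
    sensitivity = 0.80
    false_positive_rate = 0.02

    true_cases = students_per_school * base_rate
    flagged_right = true_cases * sensitivity
    flagged_wrong = (students_per_school - true_cases) * false_positive_rate

    print(f"correctly flagged per school:  {flagged_right:.2f}")
    print(f"wrongly flagged per school:    {flagged_wrong:.0f}")
    print(f"share of flags that are wrong: "
          f"{flagged_wrong / (flagged_right + flagged_wrong):.1%}")

Under those assumptions you get roughly a couple dozen flagged kids per school,
of whom essentially none are real cases.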

------
divbit
Napkin-sketch prediction: before we get to actual AI that will benefit us
/ enhance free will and lives / not just enslave us for their own / rich
people's / military benefit, we will run into some fairly shitty versions.

~~~
divbit
I like this comment so I'm retiring this username so this stays at top of my
comment list. Who knows what the new username will be.

------
neo4sure
I think AI advancement will continue regardless of what Musk says. But his
voice is an important check on the advancement of this technology.

------
setum
Let the military incorporate AI in a significant way, and then we shall see...

------
swang
"I've talked to John about this. His understanding of the subject is limited."
\- Elon Musk, probably.

------
l5870uoo9y
Google employee discourages regulation and assures us that Google acts
responsibly.

------
martythemaniak
There's a simple explanation for Musk's views on the dangers of AI: it's too
dangerous to develop on Earth, so it must be regulated. OTOH, you can go to
Mars, where you don't have to worry about such dangers and would be free to
pursue all sorts of research (AI, genetics, etc.) that's too dangerous for
Earth.

Everybody knows getting to Mars is his life's work, but he doesn't want to
just get there, he wants to colonize it. The first several thousand people
will absolutely risk their lives for the sense of adventure and novelty alone
(I could be amongst them), but you're not going to get masses of hundreds of
thousands of people moving for those reasons, which will wear off quickly
anyway.

All previous mass movements had a strong economic incentive of some kind -
getting land, getting gold, trading natural resources for gold, etc. Mars
isn't going to be sending raw material back like the fur traders did, and even
after 2C global warming Earth will still be way more hospitable than Mars.

If Mars can become the only place where certain classes of activity are
allowed, if it's the only place where you can go and experiment and test with
little or no oversight, then exchange your findings for money from Earth, then
I can't imagine a more powerful economic driver for getting masses of people
there.

~~~
otakucode
I expect some of the first people to arrive on Mars will be military or police
or both. When was the last time you even saw scifi where the civilians got
there first and had the opportunity to build something new? No, the
governments on Earth will absolutely see Mars as a new colony they own, and
their laws will apply there. Getting there takes too much infrastructure for
them to just let people do it and face the almost inevitable threat of those
people declaring 'independence' once they're out of range of ballistic
weaponry. So send the weaponry with them, along with authority-projectors. The
idea of a fresh start for human society, of being able to jettison some of the
things that only stick around because everyone has forgotten why they were
started in the first place, is one of those things that's just too good to be
true. Possible, certainly, but too potentially dangerous for those trying to
hold onto the status quo back on the homeworld.

