
Google and Facebook warn AI may do dumb things - dhh2106
https://www.wired.com/story/google-microsoft-warn-ai-may-do-dumb-things/
======
nisten
Here's one dumb but plausible scenario of how the world ends:

A new automated network-exploit tool is being researched.

One day a national-security intern brings their vape inside the Faraday cage
at work. Completely forgetting the damn thing has a quad-core ARMv7, Bluetooth
and 802.11ac for Google Now functionality, of course.

After doing a quick battery check, they go outside to take a hit, and
unknowingly release into the wild the worst digital virus humanity has ever
experienced.

It does not take long for the vape to exploit their work phone, however this
is contained by the OS. What's not contained is the wifi security camera across
the street. The tool makes short work of the bootleg Windows XP POS terminal
inside the store too. It pwns the wifi router, finds a bunch of gaming PCs
with GPUs good enough to run hash-cracking on, and the rest is history. Turns
out all the Cisco and Huawei firmware keys were still up on pastebin, so the
tool starts mining Ethereum instead; no one knows why. Intel gets a mysterious
$75 billion order for FPGAs. Rolling blackouts start happening. Boston Dynamics
tries to file for IPO.

The 9 mortals left on earth who still know to reconfigure BGP(Border Gateway
Protocol) have all gone afk. No one is sure where they are or if they even
wanna help after all these years.

~~~
rdokelly
How do I downvote

------
pasta
(current) AI is not smart; it's a method for getting a reasonably predictable
outcome from a wide range of inputs. But it is still a linear program.

Say we program a self-driving car to estimate the impact of a collision with
an object. Then we 'teach' it that a crash with a soft object is better for
the human in the car.

So we think the car is smart because it can detect soft and hard objects. But
in case of an unavoidable crash it steers into a group of people instead of a
parked car...
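That failure mode can be sketched as a toy cost model (all names and numbers here are hypothetical illustration, not any real system):

```python
# Toy sketch of the failure mode above: a learned cost model that was only
# ever "taught" softness ends up ranking a crowd below a parked car.

def learned_impact_cost(obstacle):
    # Hypothetical learned scores: the model only knows "softer = better
    # for the occupant", nothing about harm to what it hits.
    softness = {"wall": 0.0, "parked_car": 0.1, "pedestrians": 0.9}
    return 1.0 - softness[obstacle]  # softer => lower predicted cost

def choose_crash_target(obstacles):
    # An unavoidable-crash planner that trusts the model blindly.
    return min(obstacles, key=learned_impact_cost)

print(choose_crash_target(["parked_car", "pedestrians"]))  # picks "pedestrians"
```

The model only ever saw 'softness', so it happily steers into the group of people.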

The Pentagon (DARPA) is now investing in AI that learns from previous
experiences (without feeding it a dataset over and over again). I guess other
companies are working on this as well. That will create scary AI, because then
the program will be altered all the time, making it 'smart'.

~~~
b_tterc_p
Pet peeve here. We don’t need to train AI to decide what to crash into. We
need to teach it to try to avoid crashing, and to do so foremost by using the
brakes. It’s ok if cars crash sometimes. It’s not ok if cars spazz out and
veer onto the sidewalk.

We definitely don’t want the car deciding prematurely that it can’t possibly
avoid a crash and choosing what to hit (or even aiming for it). That will lead
to truly dumb AI.

1) Minimize the likelihood of a crash.

2) Minimize the speed of likely crashes.
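That priority order reads as a lexicographic objective; a toy sketch (all names and numbers hypothetical):

```python
# Toy sketch of the two-level priority above: compare candidate actions
# first by crash likelihood, then by expected impact speed to break ties.

def pick_action(candidates):
    # Tuples compare left-to-right, which gives exactly the
    # "likelihood first, speed second" lexicographic ordering.
    return min(candidates, key=lambda c: (c["crash_prob"], c["impact_speed"]))

actions = [
    {"name": "coast",      "crash_prob": 0.9, "impact_speed": 15.0},
    {"name": "brake",      "crash_prob": 0.4, "impact_speed": 6.0},
    {"name": "hard_brake", "crash_prob": 0.4, "impact_speed": 2.0},
]
print(pick_action(actions)["name"])  # "hard_brake": tied on likelihood, slower impact wins
```

Note the candidate set only contains in-lane options; the policy never ranks "swerve onto the sidewalk" as a target at all.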

------
0xdada
Interesting to see this and "A.I. Shows Promise as a Physician Assistant" [0]
on the front page at the same time.

[0] https://news.ycombinator.com/item?id=19140220

------
czardoz
The article has Microsoft in the title, not Facebook.

