
Yoshua Bengio: ‘The dangers of abuse are real’ - Anon84
https://www.nature.com/articles/d41586-019-00505-2
======
monkeynotes
AI is like a mountain. No matter what humanity does, someone will summit it just
because it's there. We can control businesses to an extent, but no one has any
control over a curious person in a basement.

And good luck holding the military back.

One way or another humanity is on its way to the inevitable, and it's our
evolutionary destiny because we are basically hard-wired to pursue the
curiosity that is AI.

~~~
jononor
The amount of resources and thus damage that an individual in a basement can
do is rather limited.

We have treaties on nuclear and chemical weapons. The same could happen for
autonomous weapons and weaponized intelligent agents. But it might
require a serious case of abuse before we get there, as it did for chemical
and nuclear...

~~~
bitforger
Right. Currently in order to do state-of-the-art AI, you need a lot of
_computers_ and a lot of _data_. Neither of these is easy to obtain by an
individual in a basement.

We would need to consider, however, access to finished products (like trained
models). In some cases in the future, having access to the appropriate model
might be as dangerous as handing someone a nuke.

How hard would it be for someone to assemble an indiscriminate "killer drone,"
given the appropriate image recognition and flight control models?

~~~
jononor
Image recognition and flight control make a drone, but probably not a
_killer_ drone, as that would presumably require some sort of munitions - like
a gun or explosives. So it might be effective enough to just limit access to
those parts?

~~~
MRD85
People with low education levels can show you how to build all sorts of things
that go bang. Someone with an engineering background could vastly improve on a
lot of designs. If you could make a small, lightweight projectile weapon that
can be 3D-printed, then you have what you need.

My background is an engineering role in the military and my current path is
CS. I don't think it's infeasible that an individual could build a crude drone
that would kill a single person. Building an entire swarm of death robots is a
different story.

------
0x8BADF00D
> You have expressed concern that corporations have ‘stolen’ talent from
> academia.

Nobody is stealing anything. What they don’t realize is that the free market
will incentivize AI researchers to go to the private sector. It’s just simple
economics.

~~~
webmaven
If private industry is using inflated (in the short term) valuations and low
tax rates to fund a talent war, but publicly funded educational institutions
can't compete by increasing funding (through governments raising taxes or
printing money), then the playing field isn't exactly level, is it?

~~~
GuiA
Publicly funded educational institutions can compete by reassigning some of
their budget dedicated to sports and administrative roles to their
researchers, for starters.

For instance, the public US university I attended for grad school has an
endowment of about $1B and spent ~$150M on athletics in the past year. A
postdoc salary there is about $40k a year.

Why would anyone take this over $200k a year (with lots of sweet perks) at e.g.
Google? Because they are passionate about academic research (and potentially
teaching), something that the university is very aware of and has no problem
using to their advantage.

Talking about level playing fields here seems disingenuous.

~~~
webmaven
_> Publicly funded educational institutions can compete by reassigning some of
their budget dedicated to sports_

Athletics programs are a huge driver of both alumni donations and student
enrollment. Unfortunately. Not to mention licensing revenue from apparel and
the like.

_> Because they are passionate about academic research (and potentially
teaching), something that the university is very aware of and has no problem
using to their advantage._

This was more effective when the pay disparity was smaller.

------
i_am_proteus
How will regulating AI be any more successful than the 1990s attempts by the
US to regulate cryptography?

~~~
jagger27
The accessibility story is nearly identical. So I'd say nope.

------
novaRom
It's no secret that the largest military companies are actively investing in
AI. Enormous opportunities and dangers. And it's not just computer vision and
robotics - almost every aspect of modern warfare will be AI-extended.

------
woliveirajr
> "has raised concerns about the possible risks from misuse of technology"

Isn't misuse possible with all (or at least almost all) technologies?

Drugs can change mood, physical condition, hormones, and I think they can be
misused even as a long-term weapon against the population of some nation.

Nuclear. Explosives. Social media. TV programs. Data processing with punch
cards. Drones. Knives. Herbicides. Small plastic waste.

I understand the points Bengio tries to make, but it seems that it is the
same problem with all technological stuff: someone, somewhere, will find a way
to use it against others, with minor or giant consequences.

~~~
jononor
Most of your examples are covered by rules and regulations that try to
get the most out of the technology without too many adverse effects. The point
here is that we should do the same for "AI", and that we should start thinking
about this now.

------
novaRom
The AI arms race is real. Whether its outcomes will be good for civilization,
like those of the space race (the Internet, communications), nobody knows.

------
alwaysanagenda
Bengio is asked: What will be the next big thing in AI?

> "Deep learning, as it is now, has made huge progress in perception, but it
> hasn’t delivered yet on systems that can discover high-level representations
> — the kind of concepts we use in language. Humans are able to use those
> high-level concepts to generalize in powerful ways. That’s something that
> even babies can do, but machine learning is very bad at."

I read this as: "We have super-advanced skip-logic software that can produce
specific results when provided a large enough data set, but "intelligence" as
it is defined, does not exist."

AI is really just sophisticated software algorithms.

In my opinion, there is no true artificial intelligence, and it will be
unlikely that we will ever create such a thing for quite some time, if at all.
AI is being used as a buzzword to garner attention.

It seems much more likely that we will build a brain-computer interface before
true AI, and it will prove more efficient than what we have today, which is
effectively many computers churning through a super-long list of "if-then"
statements.

~~~
xthestreams
This joke about "if-then" statements has gone too far, and when people cite
it, it's clear that they don't understand the basics of how ML works today.
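To be concrete, here is a minimal sketch (assuming numpy; the weights are
random stand-ins for learned ones) of what a single neural-network layer
actually computes - a continuous weighted sum plus a nonlinearity, not a chain
of hand-written if-then branches:

```python
import numpy as np

# A single fully connected layer: y = relu(W @ x + b).
# The "knowledge" lives in continuous weights learned by gradient
# descent, not in discrete hand-written if-then rules.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # learned weights (random stand-ins here)
b = np.zeros(4)               # learned biases

def layer(x):
    # ReLU nonlinearity applied to a continuous linear map
    return np.maximum(0.0, W @ x + b)

x = np.array([0.5, -1.2, 3.0])
print(layer(x))  # 4 continuous activations, no branching on the input
```

A deep network is just a stack of such layers; no step of it enumerates cases
the way an "if-then" rule engine does.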

------
luxuryballs
It always seems like FUD to me when I read these articles. What is the
specific concern? Can we get some examples? Sometimes people are like oh no AI
is gonna take over! And I just think about how hard it is for humans to
integrate two software systems and laugh. What are we afraid of it doing? If
it gets out of control can’t we just you know... cut the power?

~~~
superhuzza
Did you even open the article?

"Killer drones are a big concern....dangers of abuse, especially by
authoritarian governments...AI can amplify discrimination and biases".

Bengio is specifically talking about malicious actors using AI. So saying
"just unplug it" isn't really applicable, especially when it comes to
governments.

~~~
luxuryballs
you don’t need AI to make killer drones

~~~
jhayward
This is an example of 'whataboutism', and is used to derail a discussion
without actually responding to the point.

~~~
luxuryballs
my original point was already a response to your quote from the article, what
is there to expound on? the problem isn’t an AI problem it’s a malicious actor
problem

------
peterwwillis
They keep using the term 'AI' when they seem to mean 'software'. We have all
these problems regardless of how 'intelligent' the artificial machine is.

------
drak0n1c
Meanwhile, Google has cancelled its AI ethics board in response to an internal
employee petition complaining about a mainstream conservative being included
in the 8-member committee.

[https://www.forbes.com/sites/jilliandonfro/2019/04/04/google...](https://www.forbes.com/sites/jilliandonfro/2019/04/04/google-cancels-its-ai-ethics-board-less-than-two-weeks-after-launch-in-the-wake-of-employee-protest/)

~~~
Dobbs
The Heritage Foundation think tank is a known anti-LGBT group. They spend
significant resources in the US and abroad to fight against LGBT rights.

I for one am f __king sick and tired of my existence being subject to debate
due in part to these groups.

~~~
satokema_work
What do LGBT issues have to do with AI discussions? I ask purely for
information.

~~~
CamperBob2
It means you can't assume the Heritage representative is going to participate
in good faith. They are bringing a religious agenda to the table, which has
(or at least should have) no place in the discussion.

