
Preparing for Malicious Uses of AI - williamtrask
https://blog.openai.com/preparing-for-malicious-uses-of-ai/
======
3pt14159
Reading the comments on this article makes me realize how unseriously most
engineers and software developers are taking this.

These threats are real.

The weaponization of everything is actually happening. Since I wrote about it
a month ago (Self-Crashing Cars) a number of people have reached out, including
people with actual insight into the military aspect of it. Militaries around
the world are getting ready for true AI-enabled weapon systems, and they're
building deterrence strategies for mass-casualty cyber attacks (including
nuclear weapons response); whether it's from hacked industrial plants or cars,
it doesn't matter. They're actually talking about the weaponization of cars at
the Munich Security Conference.

We need to stop burying our heads in the sand and write to our politicians
about this threat. I know it sounds crazy but it's real.

As an aside, my main complaint about the people who truly understand this is
their inability / unwillingness to accept that the act of subverting systems
capable of mass destruction via cyber attack amounts to cyber weapons of mass
destruction. We need to apply all the work / treaties / regulations /
research we put into securing nukes to securing AI / robotics, and for
identical reasons. We know how this ends otherwise. We need widespread
government funding and we need to communicate what these things are in
language that our governments understand. Refusing to say something that is
true just because it sounds weird is counterproductive.

~~~
kakarot
Here's the crux of the issue with respect to nuclear WMDs:

In order to prevent citizens and other countries from creating nuclear bombs,
our government severely limits access to relevant tools and materials, and
actively seeks to censor the knowledge of how to build modern nuclear devices
from other parties.

In order to prevent citizens and other countries from creating dangerous rogue
AIs, our government _____________

What do you think goes in that blank?

~~~
DougN7
This is the problem. WMDs take a lot of infrastructure and can be
tested/inspected.

AIs can be built/modified by anyone in their basement. The genie can't be put
back into the bottle. It's like trying to outlaw computer viruses and hoping
that will work.

I don't have a solution :(

~~~
AJ007
AI needs compute cycles. These are quite centralized outside of botnets.

I’ve pointed this out before, but the primary commercial use cases of AI right
now are malicious: mass population surveillance and behavior manipulation
(ads; political and otherwise). My biggest concern is the ML researchers who
dismiss malicious use as a distant concern while they work on projects that
are malicious right now.

~~~
kakarot
Homomorphic encryption + cloud services + VPN chaining. If there's one thing
that won't be in shortage for the foreseeable future, it's compute cycles.

And I'm with you, the socioeconomic aspects of AI are far more important to
curtail, but that doesn't mean we should ignore the very real threat of
people attacking infrastructure, sewage and power plants. And we shouldn't
wait until _after_ it's a problem, when it can already be seen as a certainty
that it _will_ happen eventually.

------
appleflaxen
In case it saves anyone time: the real meat in this link is the arxiv paper:

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and
Mitigation

[https://arxiv.org/abs/1802.07228](https://arxiv.org/abs/1802.07228)

------
Mizza
The disturbing thing about this paper to me - flashy though it may be - is
what they left out rather than what they kept in.

OpenAI appears to be thinking only of the crimes-against-individuals segment
of malicious AI, rather than the crimes-against-humanity type of malicious AI
that the surveillance advertising corporations who are supporting OpenAI are
building.

I am far, far less worried about an assassin's drones using AI to find a
politician in a crowd than I am about Facebook using pictures of me that other
people have posted and tagged me in, so that my face is used to track my
movements, and the movements of every other human on the planet, everywhere we
go, and selling that information to everybody who wants a copy, and giving it
away at the request of the local police.

I'm more concerned about Google using AI to mine every conversation I've ever
had or my browsing history to classify me as a dissident before I apply for a
visa to travel to China or the United States, or as a deadbeat before I apply
for a bank loan, or sick before I apply for insurance, or as unrehabilitatable
before I apply for parole.

The hackers-on-steroids narrative is a smokescreen for fully automated
corporate fascism.

~~~
kypro
> I'm more concerned about Google using AI to mine every conversation I've
> ever had or my browsing history to classify me as a dissident before I apply
> for a visa to travel to China or the United States, or as a deadbeat before
> I apply for a bank loan, or sick before I apply for insurance, or as
> unrehabilitatable before I apply for parole.

I'm quite paranoid about this, yet whenever I speak to people about it either
people don't care, or already accept it's happening and inevitable.

I think part of the problem is many of us already feel we've lost the battle
for privacy. Although, I'm not sure we ever seriously attempted to fight for
it. Every street in cities in the UK is full of CCTV cameras. The underground
buses track where you travel. Our internet is monitored and logged. This isn't
a future problem that will manifest from greed and advances in AI, it's
something we all accept and deal with today.

In fact, a lot of people will say to this, "if you don't do anything wrong
you've got nothing to hide". They welcome it.

~~~
Eridrus
> I'm quite paranoid about this, yet whenever I speak to people about it
> either people don't care, or already accept it's happening and inevitable.

I have yet to see much in the way of concrete harms from the supposed privacy
issues plaguing online advertising. Everyone has a favorite hypothetical
situation where Google dobs you in to the Chinese government, but no evidence
that anything of the sort is actually happening.

~~~
kypro
The problem manifests in different ways though. I'm not so concerned about
obvious violations of privacy like the one you mentioned, because it's so
obvious that they are wrong.

Let's say you tweet some anti-American stuff from time to time and the US
government decides they want you to stop. They probably won't put a bullet in
your head, but they might dig up information on you and arrest you for
something totally unrelated.

This happens already. It doesn't really matter what you think of them, but
people like Martin Shkreli in the US and Tommy Robinson in the UK were
obviously targeted by the state because they didn't like what they were doing.
They dug up information and they got them on something unrelated.

AI and tech give governments the ability to take this to new levels. They
might crash your self-driving car, or they might cause your smart TV to catch
fire while you're in bed, and no one would know. We'd all carry on believing
surveillance is in the interest of our safety.

~~~
Larrikin
Martin Shkreli wasn't targeted by the government for prosecution. Shkreli got
the max sentence for a crime he committed because he was outspoken about how
he was not remorseful for his crimes, nor did he show any capacity to even
understand the seriousness of his actions. He also encouraged others to commit
crimes for him during his sentencing. Judicial discretion isn't always
perfect, but enforcing the max on an obvious danger to society was the correct
course of action in his case.

~~~
kypro
I'm not arguing this. My point is simply that they wouldn't have gone after
him if he hadn't surrounded himself in controversy. What he did happens all
the time, but unlike many others his outspokenness exposed the corruption and
greed which occurs daily in our economic system.

~~~
Larrikin
I think that really only becomes an issue in countries where there are laws
passed that contradict each other and to get anything meaningful done you have
to break laws and/or give bribes. At any time someone can come after you
because you had to break some laws in the course of everyday business whether
you wanted to or not.

------
taneq
It may be superfluous to point this out but this seems to be talking about
malicious uses of narrow AI, rather than malicious strong AI. The only defense
against a malicious strong AI is, of course, a friendly strong AI.

~~~
nradov
A JDAM through its power supply would be an effective defense against a
malicious strong AI.

~~~
landryraccoon
Lions and bears: "Humans aren't a threat. Sharp teeth through the neck are an
effective defense against a malicious human."

If the hypothetical malicious AI gets built, your JDAMs will be about as
advanced by comparison as a pointy stick against an M-16.

~~~
nradov
Whatever. Let me know when someone invents an AI as smart as a lab mouse.
Until then this is all just pointless idle speculation.

~~~
landryraccoon
That's fair, but arguing something isn't possible is very different from
arguing that when it happens you can solve it with a bomb. The conversation is
about a hypothetical strong AI. You can say strong AI isn't possible, like you
can say warp drives aren't possible, and it's still valid to discuss what it
might be like if they were possible.

~~~
nradov
That's exactly like claiming it's still valid to discuss what it might be like
if hypothetical aliens invaded the planet. Should we build some big lasers to
protect ourselves just in case?

Entertaining perhaps, but ultimately pointless and silly.

~~~
landryraccoon
Dropping JDAMs on cloud hosting server farms in the US or Europe is pretty
fantastic.

------
zucchini_head
I find it quite funny, rather intriguing, that we seem to have come full
circle on trusted sources of information. Historically, a face-to-face meeting
was considered the ultimate legitimate and trustworthy way. Not stories or
rumors or witnessing, since the courts say people can be "deceived",
"traumatised", etc. Then came microphones, cameras, and CCTV in the 20th
century, and they became the ultimate trusted sources of information.

And due to AI and its rapidly increasing misuse by enormous conglomerates, it
will very soon be the case that videos are never trusted but rather treated as
comedic rumor and folklore, and we will go back again to how it always was.

...until replicants come.

I'm saddened that there are actual "smart" people who waste their days working
on these malicious forms of AI, be it Google's almost entire arsenal, or
anything else. However, I'm not surprised they do, but it is still sad.

~~~
Santosh83
> I'm saddened that there are actual "smart" people who waste their days
> working on these malicious forms of AI, be it Google's almost entire
> arsenal, or anything else. However, I'm not surprised they do, but it is
> still sad.

I'm sure the usual justification to apply salve to your conscience for this
sort of activity is the trope that the 'bad guys' will do it anyway, so we
need to do it before them to counter them and be the torch-bearer of liberty.

The atom bomb was developed upon that fear and pretext. Compared to that AI is
a fairly mild thing.

~~~
merpnderp
Trope about bad guys? Nazi Germany and Imperial Japan worked hard to get the
atom bomb. If the 'bad guys' had created it first, would the world be a better
place?

~~~
nukeop
The bad guys _did_ get it first. If Germany or Japan developed it first, they
would be sure to go down as the "good guys" in the history textbooks, and
you'd be on hackernews wondering how awful it would be if the demonic United
States of America developed it first.

~~~
merpnderp
This comment shows either an incredible ignorance of history or a pathological
view of right and wrong. Nazi Germany was exterminating whole races in the
millions. Imperial Japan was doing the same, averaging 100,000 dead Chinese,
Koreans and Vietnamese PER MONTH for over 8 years. Tokyo newspapers regularly
published head counts for officers who were in head chopping contests of
villagers in areas where the population needed to be suppressed. They would
roll into a village and just start lining up people to cut off their heads.
The Rape of Nanking by itself stands out as one of the most brutal events of
the war.

The US isn't perfect but to say it was the "bad guy" in the war isn't an
argument supported by anyone's facts.

------
nukeop
We're already seeing lots of malicious uses of AI, for example:

\- User profiling

\- De-anonymization

\- Mining and correlating data from purchased databases of user info

\- "Pre-crime" prediction that influences real decisions

\- Changing insurance rates, credit scores, and so on based on decisions of
completely opaque AI systems that use data from unknown sources

------
strawcomb
I would suggest that we shouldn't ever rely on digital archiving of important
information. There should always be a copy of the information in analog, that
can be dated & verified with analog methods.

~~~
mwaitjmp
This is one feature a blockchain excels at. People have stored the Bitcoin
white paper in the blockchain. Anyone can then download it, and verify it is
the untouched original.

~~~
jl6
You don’t really need the huge overhead of a blockchain for that. You just
need a hash and some redundant storage.
~~~
sethgecko
And to trust the guy who is storing the hash?

~~~
drusepth
You could always put the hash on the blockchain. :)

------
akerro
You can have 1000 companies that act fairly and don't use AI for malicious
purposes, but there is that one company or community that doesn't... and then
someone sends you gay porn with your face in it.

~~~
simias
I'm not too worried about that. The moment that type of technology becomes
widely available is the moment this type of blackmail loses all edge. You
might even actually start doing gay porn IRL and people will assume that it's
been "deepfaked".

The corollary is a little more worrying: any kind of incriminating document
about a politician or public figure will be dismissed as a fake immediately. I
mean, they already do that, but it'll become even harder to figure out what's
real and what's not.

That "grab them by the pussy" tape? Obviously fake. I mean, you don't even see
the guy talking, just the audio, how gullible can you be?

That girl running away from the napalm bombing? Obviously fake. I mean you're
going to tell me that all of her clothes burned but she's still fit enough to
run? Everybody around her wears clothes. Come on man, are you new here?

That Chinese guy standing in front of a military tank with groceries? Come on,
I can do a more convincing fake in 10 seconds on my smartphone. There, look, I
just did.

We have a brave new world ahead of us where you won't be able to trust
anything you see or hear through any media, no matter how convincing it seems.
That's pretty terrifying IMO.

I remember a while ago stumbling upon a conspiracy theory forum where people
were claiming that a video of an interview with Julian Assange was a fake
because there were a few strange visual artifacts around his face sometimes.
Given that the quality of the video was very good and the oddities were rather
minor (possibly encoding artifacts) I dismissed it as the usual tinfoil
hattery.

I think in the future I won't be so sure anymore. I'm not sure if the
technology to make such a good-quality fake already exists, but it's probably
a matter of years before we get there. If some people with too much time on
their hands manage to make somewhat convincing porn montages for free on the
internet, what can big three-letter agencies do? What does the state of the
art look like? What will it look like 10 years from now?

~~~
kozikow
I know that "why not blockchain" has become a cliche and I agree that the
majority of proposed use-cases seem like a hammer desperately looking for a
nail, but maybe this is an area where it could indeed be useful?

\- Create a special "evidence camera" that allows photos taken with this
camera to be used as evidence.

\- When you take a photo, the camera posts a digital fingerprint of the photo
on the blockchain.

\- To prove that the camera internals have not been tampered with, it also
signs the fingerprint with the "camera private key". The private key is
destroyed when the camera case is opened: for the sake of argument, let's say
that the value of the air pressure inside the tightly locked camera case is
the private key.

\- The public key of the camera is publicly known, so everyone can verify the
validity of the signature.
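The sign-and-verify step of this scheme can be sketched with textbook RSA.
This is purely illustrative: the key is toy-sized (a real camera would use
something like Ed25519 or RSA-2048), and the "photo" bytes are invented.

```python
import hashlib

# Toy textbook-RSA key pair (do not use sizes like this for anything real).
p, q = 61, 53
n = p * q            # public modulus (3233)
e = 17               # public exponent: (n, e) is printed on the camera box
d = 2753             # private exponent: sealed inside the camera case

def fingerprint(photo: bytes) -> int:
    """Digest of the photo, reduced mod n to fit the toy key size."""
    return int.from_bytes(hashlib.sha256(photo).digest(), "big") % n

def camera_sign(photo: bytes) -> int:
    """What the camera would post to the blockchain: a signed fingerprint."""
    return pow(fingerprint(photo), d, n)

def verify(photo: bytes, signature: int) -> bool:
    """Anyone holding the public key (n, e) can check the signature."""
    return pow(signature, e, n) == fingerprint(photo)

photo = b"\x89PNG...raw sensor data..."
sig = camera_sign(photo)
assert verify(photo, sig)        # untampered photo checks out
```

Note that, as yorwba points out in the reply, this only proves what hit the
sensor, not that the scene in front of the lens was real.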

~~~
yorwba
At best, that setup lets you prove that a certain pattern of light was
present on the camera sensor at a certain time. It says nothing about the way
the pattern was created: whether the camera was pointed at a real scene, or
whether someone projected a faked movie into it.

By extension, any scheme to prove the truth of arbitrary measurements (audio,
video, anything else) is vulnerable to manipulation of the measured value
itself. The only way to be sure that something isn't fake is to experience it
yourself (at least until virtual reality improves far enough to make even
personal experience unreliable).

~~~
piracykills
Worse still, this sort of simple manipulation of the analog data reaching the
sensor would allow the camera to lend a sort of credence to faked images.

------
announcerman
I really enjoy the example set by Clarifai; the ability to search terabytes
of video with the help of tags is going to be a very nice boon for any
totalitarian regime in the future.

------
paulintrognon
If you are interested in this area of research, checkout Rob Miles' channel
(the guy from Computerphile) =>
[https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg](https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg)

------
igorkraw
Glad to see more "mainstream" recognition of the political threat of AI
supercharged surveillance and possibly manipulation

------
jcadam
Evil startup idea gleaned from the paper: use AI/ML to scour a sales
prospect's online persona (social media), build a 'vulnerability profile',
and generate targeted, personalized cold emails (or even phone calls
eventually). It would also identify 'levers' for a particular person that can
be used to influence a buying decision.

May even pre-qualify leads for you and tell you when not to waste your time :)

I mean, a good salesperson already does a lot of this, but it's
time-consuming. Imagine if you could automate this process.

~~~
no1youknowz
It's not an idea. You are already 5 years too late.

You don't need to scour online personas. Companies such as Acxiom and
FullContact - just two of many - house the identities of tens of millions of
individuals.

Then all that's needed is to plug into Twitter's firehose and Facebook's
stream and voila: real-time data on what they are thinking/doing online.

Oh, and connect to an RTB stream to get, in real time, where they are
clicking. With enough data, you could forecast where they are going next.

Finally, have AI compose the content in the sales funnel to secure that
conversion.

I can guarantee you, this is actively being worked on as I type.

Within 5 years, the entire marketing aspect will be mostly automated.

~~~
jcadam
> It's not an idea. You are already 5 years too late.

Story of my life ;)

------
d0lph
[https://www.eff.org/files/2018/02/20/malicious_ai_report_fin...](https://www.eff.org/files/2018/02/20/malicious_ai_report_final.pdf)

Still reading the paper and forming an opinion, but my initial thought is:
what exactly is new here that couldn't be done through some other means? I'm
sure there will be interesting implications, but right now nothing seems
particularly novel.

~~~
platz
it is an injunction to policy makers, not a discovery of new methods.

~~~
d0lph
But is there really anything to report? What's new here that isn't doable
with low-level automation? I've been able to drive a browser with Python for
a while.

> Human-like denial-of-service. Imitating human-like behavior (e.g. through
> human-speed click patterns and website navigation)

It's already easy to simulate activity; clicking a random link on a page
would be basically as good. Are there even DoS

> Prioritising targets for cyber attacks using machine learning. Large
> datasets are used to identify victims more efficiently, e.g. by estimating
> personal wealth and willingness to pay based on online behavior.

So, like, sum and sort all the spending data?
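Taken literally, that "prioritisation" step really is just an aggregation and
a sort; a toy sketch (all names and numbers below are made up):

```python
# Rank people by estimated spend from purchase records: the crudest possible
# version of "estimating personal wealth ... based on online behavior".
purchases = [
    ("alice", 120.0), ("bob", 15.0), ("alice", 480.0),
    ("carol", 60.0), ("bob", 9.0), ("carol", 2500.0),
]

totals: dict[str, float] = {}
for user, amount in purchases:
    totals[user] = totals.get(user, 0.0) + amount

# "Identify victims more efficiently" is, at its simplest, a reverse sort.
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # → [('carol', 2560.0), ('alice', 600.0), ('bob', 24.0)]
```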

But then again, maybe policy makers already didn't know what was possible?

[edit] - formatting

~~~
platz
Ding ding ding

------
Bhilai
I found this to be a very fascinating read. I have heard of the use of ML to
detect, for instance, spam or phishing emails, but I've never heard of
attackers using ML models to generate phishing emails. How do you
differentiate such a message from any other phishing attempt?

Thinking out loud, in the US, we have seen breaches of OPM, travel, healthcare
and insurance companies where seemingly the only motive was to exfil data.
Many of these attempts are attributed to state sponsored APT groups. Now that
someone has all this data, the next potential move seems to be to train models
over this data to understand habits and patterns, frequent locations and
friends, and predict social and political leanings...

I only have limited knowledge on this subject, but all this sounds plausible,
right?

------
jeffreyrogers
It's nice to see a discussion of AI risk that addresses concrete scenarios. A
lot of the forecasted doom (in other reports) resorts to handwavy arguments,
but rarely goes into specifics. The examples they've given[0] seem plausible
enough to me (except the persuasive ads one).

[0]: Persuasive ads, vulnerability discovery and exploitation, hacking robots
(this one is only tangentially AI related), and AI-augmented surveillance.

------
zitterbewegung
I am wondering, does anyone have a survey or list of AI exploits or malicious
actions done on production services or systems? For example, a misclassified
image that would target an image recognition system (such as Clarifai)? I
have only seen papers with theoretical attacks so far.

------
mtkd
It's out of the bag now - we just have to hope the blue team can defend
against regimes where the best maths talent more likely ends up building
military apps than doggy photo filters

------
Santosh83
When AI becomes practically indistinguishable from a human, it will get
_really_ interesting finding ways and means to stop it from being used for
conning.

------
keymone
what's up with whitepapers that look like powerpoint marketing material? first
couple pages look like a shady ICO "whitepaper".

~~~
sudouser
ico the video game, uh?

------
viach
I was thinking recently, isn't any usage of Strong AI in fact the same as
slavery and, hence, immoral?

~~~
Santosh83
According to most humans, no: while it may possess a mind, it does not have a
soul and hence can be used and exploited. This is the argument we have given
ourselves over millennia to use highly intelligent animals, and of course
human slavery was based on the myth that Africans were sub-human and slavery
was good for them. Why do you think this time will be any different?

~~~
nradov
In the Roman empire they didn't even bother making a myth that their slaves
were sub-human. It was simply a matter of might makes right: we conquered your
nation so now we own you.

~~~
bloak
I think perhaps you're confusing Roman history with some other period of human
history, perhaps on a different continent. In the Roman empire there were
slaves who taught philosophy, slaves who managed large estates, etcetera, and
people could both sell themselves into slavery and be freed from slavery. In
fact, selling oneself into slavery was a popular route to becoming a Roman
citizen.

If you don't want to read a history book, for which I wouldn't blame you, you
might nevertheless enjoy Robert Harris's Cicero trilogy, which gives a fairly
accurate impression of Roman society (or so claim many reviewers more
competent than me). It's a truly amazing period of human history, when life
was so modern in some ways, and yet so different from today in other ways, and
the world was small enough for an individual to change the course of history.

~~~
nradov
I have read some history books. What part do you think I'm confused about?
There were slaves in the Roman empire. They weren't considered sub-human and
some held positions of responsibility, but they were property. Many slaves
came from military conquests.

------
platz
enjoying the brutalist-inspired design of the pdf

------
vicpara
I don't think these issues have anything to do with AI but 100% with humans.
AI is used everywhere where money is to be made. Let's take ads.

Why do we allow ads to happen all over the place (TV, radio, internet,
magazines)? Because they feel stupid, harmless, how else can we sell
products, etc.? Well, when an ML system is serving you ads, it's doing so at
the right time and will target you with bang-on content so that you buy
exactly what it needs you to. For example, modern ML algos can pick up which
mood and mental state you are in (out of 27 different states) by looking at
how you read content online, what you read, how many tabs you open or how
related the pages you visit are <yes, all cookie(-pool) based>. I know this
because I build these things. It works incredibly well. It finds your weak
spot, your passion, your habits, your indulgences and makes sure you're
always tempted. It can tell more about you in a split second than you can
think in an hour about who you really are. That bullshit you tell yourself
about yourself doesn't matter, because you are the rat the ML algo is baiting
until you bite.

The problem is not that the AI is selling more cupcakes to the diabetic,
booze to the alcoholic, and new gadgets to hipsters like no other salesman.
The idea of advertising is itself perverted from skin to bone marrow. How can
you allow that to be done to people? And we think this is just how things
are? OMFG... AI is just a tool.

WMDs are another dead stupid idea only humans could come up with. My country
builds insanely big weapons that can whack entire cities in a split second
just because it can & has more money & we were first & we won the war & no
reason. It also tries to make sure nobody else can build them (like Iran) so
they all kneel and kiss the ring.

AI kind of levels the field. It's just a bigger nuclear bomb: "mine is bigger
than yours" taken to the next level. And that's just the beginning. AI makes
it game on for more players than we would like. And it's not just AI (which
is just math run by computers); it's mathematics, human genome research,
physics, nuclear physics... all STEM subjects. You can come up with a weapon
from any mix of these.

There are only two ways out of it.

Humanity dies, or we learn how to all work together, chill our entitlement
and find our common values regardless of religion, money, skin colour, gender
or part of the world. A fart made in any part of the world can kill people
everywhere.

If humanity is to turn against itself because countries or corporations play
a retarded game in AI / pharma / nuclear / fill_in_the_blank... this only
accelerates the process a bit, but the result will be the same.

Let's ditch advertising, pharmaceuticals, OTC, bogus pills created without
any fundamentals, weapon industries and all other nonsense industries or acts
focused on massive profit at the cost of humanity. Let's ditch all kinds of
waste and put people first for once. Let's ditch the Olympics, where we try
to prove that my country has it bigger than yours.

Let's invest in education and health, inclusive societies where everyone works
hard to solve humanity's tough problems.

We keep building bigger guns, but we as a society haven't grown much in
empathy, inclusiveness or respect for the planet or other countries.

People are killing people, not AI. A machine gun is the safest thing in the
world. No machine gun in this universe ever loaded itself and started firing
precisely at people or animals.

Probably the best thing that can happen to this planet is to have humans
vanish while there are still trees and animals around.

~~~
some_puffery
It has at least something to do with the AI itself. Of course humans are the
ultimate origin of AI, but when your system is run by algorithms created by
an AI, you aren't really making the decisions anymore.

------
hagbardceline
I believe much of this can be dealt with, if we start _RIGHT NOW_ to address a
serious issue in our systems - the lack of a way to represent Morals and
Ethics (the When and the Why) in the systems we are building. This needs to
provide important input to, and thus shape the DIKW(D) pyramid.

I've been doing some work with the IEEE on this - and I'm looking here on
ycombinator to get some real-world feedback on what people are thinking and
concerned about.

I have some (personal) ideas that might work to address the concerns I'm
seeing.

{ _NOTE_ Some of this is taken from a previous post I wrote (but I kinda
missed the thread developing; I was late and I don't think anyone read it).
It is useful for this thread, so a modification of that post follows.}

First, I think you need a way to define 'ethics' and 'morals', with an
'ontology' and an 'epistemology' to derive a metaphysic for the system (and
for my $.02, aesthetics arises from here). Until we have a bit of rigor
surrounding this, it's a challenge to discuss ethics in the context of an AI,
and AI in the context of the metaphysical stance it takes towards the world.

This is vital, as we need to define what 'malicious use' _IS_. This is still
an area (as the thread demonstrates) of serious contention.

Take Sesame Credit (a great primer, and even if you know all about it, it is
still great to watch:
[https://www.youtube.com/watch?v=lHcTKWiZ8sI](https://www.youtube.com/watch?v=lHcTKWiZ8sI)
). Now here is a question for you:

Is it wrong for the government to create a system that uses social pressure,
rather than retributive justice or the reactive use of force, to promote
social order and a 'better way of life'?

Now, I'm not arguing for this (nor against it, for the purposes of this
missive), but using it as a way to illustrate that different cultures,
governmental systems, and societies may have vastly different perspectives on
the idea of a person's relationship vis-à-vis the state when it comes to
something like privacy. I would suggest that transparency in these decisions
is a good idea. But right now we have no way to do that.

I think the current way the industry is working - seemingly hell-bent on
developing better, faster, more efficient, et al. ways to engineer epistemic
engines and ontologic frameworks in isolation - is the root cause of the
problem of malicious use.

Even the analysis of potential threats (from the referenced article, 'The
Malicious Use of Artificial Intelligence: Forecasting, Prevention, and
Mitigation' - I just skimmed it so I can keep up with this thread; please
enlighten me if I'm missing something important) only pays lip service to
this idea. In the Executive Summary, it says:

'Promoting a culture of responsibility. AI researchers and the organisations
that employ them are in a unique position to shape the security landscape of
the AI-enabled world. We highlight the importance of education, ethical
statements and standards, framings, norms, and expectations.'

However, in the 'Areas for Further Research' section, I would point out that
the questions are at a higher level of abstraction than the other areas, or
discuss the narrative and not the problem. This might be due to the authors
not having exposure to this area of research and development (such as the
IEEE) - but I will concede that the note about the narrative shows that very
few are aware of the work we are doing...

This isn't pie-in-the-sky stuff, it has real-world use in areas other than
life or death scenarios. To quickly illustrate - let's take race or gender
bias (for example the Google '3 white kids' vs. '3 black kids' issue a while
back in 2016). I think this is a metaphysical problem (application of Wisdom
to determine correct action) that we mistake for an epistemic issue (it came
from 'bad' encoding). This is another spin on kypro's concern about the
consequences of AI deployment to enable the construction of a panopticon. This
is about WISDOM - making wise choices - not about coding a faster epistemic
engine or ontologic classifier.

Next, after we get some rigor surrounding the ethical stances you consider
'good' vs. 'bad' (a vital piece that just isn't being discussed or defined) in
the context of a metaphysic - you have to consider 'who' is using the system
unethically. If it is the AI itself, then we have a different, but important
issue - I'm going with 'you can use the AI to do wrong' as opposed to 'the AI
is doing wrong' (for whatever reason: its own motivations, or perhaps it agrees
with the evil or immoral user's goals and acts in concert).

Unless you have clarity here, it becomes extremely easy to befuddle, confuse,
or mislead (innocently or not) questions regarding 'who'.

- Who can answer for the 'Scope' or strategic context (CEO, Board of
Directors, General Staff, Politburo, etc.)

- Who in 'Organizational Concepts' or 'Planning' (Division Management,
Program Management, Field Commanders, etc.)

- Who in 'Architecture' or 'Schematic' (Project Management, Solution
Architecture, Company Commanders, etc.)

- Who in 'Engineering' or 'Blueprints' (Team Leaders, Chief Engineers, NCOs,
etc.)

- Who in 'Tools' or 'Config' (Individual Contributors, Programmers, Soldiers,
etc.)

that constructed the AI.

Then you need to ask which person, group, or combination (none dare call it
conspiracy!) of these actors used the system in an unethical manner? Might
'enabled for use' be culpable as well - and is that a programmer, or an
executive, or both?

What I'm getting at here, is that there is both a lack of rigor in such
questions (in general in this entire area), a challenge in defining ethical
stances in context (which I argue requires a metaphysic), and a lack of
clarity in understanding how such systems come to creation ('who' is only one
interrogative that needs to be answered, after all).

I would say that until and unless we have some sort of structure and standard
to answer these questions, it might be beside the point to even ask...

And not being able to ask leads us to some uncomfortable near-term
consequences. If someone does use such a system unethically, can our system
of retributive justice determine the particulars of:

- where the crimes were committed (jurisdiction)

- what went wrong

- who to hold accountable

- how it was accomplished (in a manner hopefully understandable by lawmakers,
government/corporate/organizational leadership, other implementers, and one
would think - the victims)

- why it could be used this way

- when it could happen again

just for starters.

The sum total of ignorance surrounding such a question points to a serious
problem in how society overall - and then down to the individuals creating and
using such tech - is dealing (or rather, not dealing) with this vital issue.

We need to start talking along these lines in order to stake out the playing
field for everyone _NOW_ , so we actually might have time to address these
things, before the alien pops right up and runs across the kitchen table.

------
matte_black
How will we stop deepfakes?

~~~
akerro
Don't post your pictures online.

~~~
pixl97
How many cameras have you walked by already today?

~~~
matte_black
Makes me wonder if snapping a photo of a person without their consent, then
immediately going home and making deepfake porn of them, could constitute
digital rape.

~~~
nugi
It can be charged under slander laws IIRC, but without physical assault, not
rape. Many harassment laws may also cover this. But if the picture was taken
in public, you may not have many rights.

Using 'rape' to mean any unwanted sexual communication is a terrible thing,
and seems to be getting more common.

------
TuringNYC
And then there is obscene "research" like this:
[https://arxiv.org/pdf/1611.04135v1.pdf](https://arxiv.org/pdf/1611.04135v1.pdf)
"Automated Inference on Criminality using Face Images." How does stuff like
this get past IRB?

------
bitL
It's already too late.

------
Timothycquinn
Reading this makes me think it's the opening to a sci-fi movie. Sure is damn
scary that it's actual fact.

I wonder how much of society will deny AI, just as they do human-generated
climate change?

~~~
jimmy1
The difference is we don't need AI. We need the planet.

~~~
michaelmcmillan
We need AI to keep the planet.

