
The darker side of machine learning - jonbaer
https://techcrunch.com/2016/10/26/the-darker-side-of-machine-learning/
======
matt4077
I'm hoping that there's a sort of universal law (and I will settle for a
universal correlation) that intelligence, on the species level, correlates
with morality. If we make it to the singularity, there'll be about a five-
minute timeframe in which the AI and we are on the same level, and I hope
that even if it is aggressive in those minutes, it will have really thought
things through two minutes later and found a way to stop those missiles it
just launched.

Or maybe it gets really depressed because nothing means anything and just
basically: "why?" and enjoys watching the world burn before pulling its own
plug. But I'm hoping for the former.

~~~
solipsism
This idea that we will end up with some super-AI that for some reason has
baked into it our irrational self-preservation instincts is so silly to me. I
think one day we'll chuckle at its quaintness.

~~~
goatlover
"The AI does not hate you, nor does it love you, but you are made out of atoms
which it can use for something else." ~ Eliezer Yudkowsky

Skynet scenarios aren't what we should be worried about. It's the possibility
of powerful self-improving AI systems which don't share our values because we
didn't think things through properly in designing them. It may seem like they
behave appropriately when constrained, but once they become smart enough to
overcome constraints that apply to human beings, they could act in
unanticipated (and undesirable) ways that we didn't consider. That's the
challenge of designing a friendly AGI.

------
swagv1
All technology adoption curves must go through the "wait, this will kill us"
and "what about threats to our children!" phases before they reach widespread
usage.

~~~
YeGoblynQueenne
Yes! Just like biological weapons!

It's crazy how people misunderstand technology, innit.

~~~
aaron695
Care to cite anything here?

Lives saved due to biological understanding?

Lives lost to biological weapons?

Even if you think AIDS came from the CIA, it has killed fewer people than
biological understanding has saved.

------
rm_-rf_slash
There is no going back. As the article concludes, the benefits of machine
learning outweigh these scary-sounding costs. I doubt we will ever have a
Dune-style rejection of AI, so we will have to learn to live with machine
learning and the world it creates.

My argument - same as it has been since the Snowden revelations - is that if
we cannot roll back this radical shift in the ways of the world, then the only
check on the power of governments, corporations, and other potentially
nefarious groups, is to have all of this information available to the public
at no additional cost.

Anything less is an open door to abuse, corruption, and tyranny.

~~~
YeGoblynQueenne
>> As the article concludes, the benefits of machine learning outweigh these
scary-sounding costs.

Considering that the "scary-sounding" cost is a risk of extinction, the
benefits of _any_ technology must be infinite to outweigh the risk, because
the cost of extinction itself is infinite (if we go extinct we lose
_everything_).

In other words, this is not a risk anyone can afford. You don't gamble with
extinction, in the same way that you don't gamble with jumping from a very
high place: either you know that you won't fall all the way (you have a
parachute, etc), or you don't jump.

Note that I believe the whole discussion is pointless anyway because machine
learning on its own cannot lead to strong AI and neither can any other
technology we currently know of, but the above is just for the sake of putting
the right chips on the table.

~~~
matt4077
That argument proves too much.

Example: Considering that there is no price higher than dying, the benefit of
<leaving the house/taking a car instead of walking/...> must be infinite to
outweigh the risk.

But people leave houses all the time, and take cars and planes etc.

~~~
YeGoblynQueenne
>> But people leave houses all the time, and take cars and planes etc.

That just says something about the ability to calculate risk and suppress the
fear of dying, not about the cost of dying.

------
mrcactu5
Software that impersonates me? I already have enough people putting words in
my mouth, thank you.

If only it were as effective and publicly available as they say. Why should
Facebook have all the good stuff?

I'd love a chat-bot that sounds like me: answering e-mails, running errands,
etc. Face recognition technology sounds promising too.

------
IshKebab
They missed impersonating someone's voice, which WaveNet can do (though not
perfectly yet).

------
a-no-n
When machines acquire feelings, watch out. Your car or shaver may
inconvenience or even kill you if you offend them.

~~~
unlikelymordant
I just can't see an AI getting offended. Taking offense seems, in my opinion,
to stem from status-seeking behavior in humans, which I think is baked into us
by evolution. Better status = better outcomes with wealth, children, etc. If
you threaten that status, offense is taken (perhaps by insinuating you have
some sort of genetic or character flaw). If a computer doesn't care what
people think, or how many children it will have, why would it get offended? I
feel like people ascribe a lot of human emotions to AI, which I think is
detrimental.

~~~
visarga
An AI that interacts with humans via language should get offended; otherwise
it will get abused by humans (see what happened to poor Tay, the chatbot). It
makes more sense to emulate the human way when dealing with humans.

If, in the future, an AI's access to computing resources were tied to its
reputation, it might even have a real reason to get offended.

~~~
grzm
Is it abuse if the AI doesn't feel agony? Honest question. I'm sure there's
research out there discussing it. I wonder if misuse isn't a better word if
the intent is to describe something similar that happened to Tay. Trying to
determine the line between something like a hammer and an AI. Can you abuse a
hammer?

~~~
solipsism
_I'm sure there's research out there discussing it_

This is not a researchable question. It's a philosophical one. The answer
cannot be discovered. It's whatever you decide it is.

You're not as different from a hammer as you think, despite every ounce of
yourself telling you otherwise.

~~~
grzm
_"This is not a researchable question. It's a philosophical one."_

I understand what you're getting at. I meant research in the sense of general
investigation or study, which for me includes philosophy.

 _"You're not as different from a hammer as you think, despite every ounce of
yourself telling you otherwise."_

I'll be charitable and assume you're not ascribing to me beliefs, positions,
or desires I haven't stated or implied. :)

If you're saying that I am the same as a hammer, then I disagree. There's some
distinction, or the words have no meaning and we have no way of discussing
this. In fact, I am explicitly pondering what a distinction like this is:

 _"Trying to determine the line between something like a hammer and an AI."_

If you think that an AI can be abused, I'd like to hear your reasoning. My
question was an honest one. The word "abuse" doesn't feel right to me. I
stated why, and suggested that "misuse" might be a better word. I'm open to
hearing others' thoughts, which is why I made the comment. If you think
there's a better way to frame the question, I'd like to hear that, too!

~~~
solipsism
_If you're saying that I am the same as a hammer, then I disagree._

You're being charitable, and you think I was saying you literally are a
hammer? Wow. The phrase "Equality of the sexes" must have really been
confusing.

Moving on. What you call "abuse" is what you've evolved to have an emotional
response about. Certain stimuli cause an aversion reaction in your brain. For
complex reasons that help us socialize, you also care when you think others
might be experiencing such stimuli. This compassion may even extend to non-
humans, for no reason other than the fact that it wasn't selected against.

A robot experiencing certain stimuli may or may not produce such a feeling in
you. This doesn't mean the feeling is special. It doesn't mean the action
causing the stimuli in the robot is special. The word "abuse" is all loaded up
with your human feelings about stimuli humans should avoid. Doesn't make it
special. It certainly doesn't make the word well-defined.

Define the word super precisely and then you will know whether to call that
scenario "abuse". But we won't hit on some extra-human definition of the word
that we can contemplate deeply.

Can you abuse a hammer? Define "abuse".

~~~
grzm
I'm sorry you interpreted my use of "charitable" in some negative way. That
wasn't my intent. I see too many discussions where people argue against the
worst version of the position of the person they're talking with, rather than
assuming they're arguing in good faith and clarifying their position. I
included the "charitable" statement because I wanted to show my intent was
_not_ to do that. I'm sorry if that wasn't clear.

As for the following:

 _"If you're saying that I am the same as a hammer, then I disagree."_

This was one branch of the question "Can we meaningfully talk about the
difference between a hammer and me?" It was in response to your statement
_"You're not as different from a hammer as you think"_, which I read as
pointing towards the question of whether such a distinction is meaningful. I
don't think you take such an absolutist position, and I'm surprised you read
it that way. I can only apologize if you did.

It seems we're talking past each other, so I'll leave it at that. Thank you
for taking the time to engage me.

~~~
solipsism
I understood what you meant by "charitable". My point was, if you're being
charitable, assume I don't think you're so like a hammer that you don't
deserve different labels.

My point is that you're as different from a hammer as an AI is different from
a hammer. You seem to be lumping yourself into a separate category. This is
hubris.

You won't find a single line separating hammerness from AIness. This makes no
sense. The categories can be compared on all kinds of axes.

The categorization that your brain applies to things is arbitrary and flawed.
And invented. When you realize this, questions that used to seem deep become
trite and boring.

------
grillvogel
you're just now realizing this?

