
When Algorithms Think You Want to Die - iron0013
https://www.wired.com/story/when-algorithms-think-you-want-to-die/
======
xg15
Interesting article, though I find some of the conclusions it reaches
somewhat unexpected:

> _The issue is not just about making graphic content disappear. Platforms
> need to better recognize when content is right for some and not for others,
> when finding what you searched for is not the same as being invited to see
> more, and when what's good for the individual may not be good for the public
> as a whole._

> _And these algorithms are optimized to serve the individual wants of
> individual users; it is much more difficult to optimize them for the
> collective benefit._

This seems to suggest that the psychological effects of recommenders and
"engagement maximizers" are not problematic per se - they are simply not being
used with the right objectives in mind today.

I find this view problematic, because "what's good for the public" is so
vaguely defined, especially if you divorce it from "what's good for the
individual". In the most extreme cases, this could even justify actively
driving people into depression or self-harm if you determined that they would
otherwise channel their pain into political protests.

If we're looking for a metric, how about keeping it at the individual level
but trying to maximize _long-term_ wellbeing?
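
To make that concrete, a toy sketch of what swapping the objective could look
like (everything here is invented for illustration; real recommenders are
vastly messier, and estimating "wellbeing" at all is the genuinely hard part):

    from dataclasses import dataclass

    @dataclass
    class Item:
        clickbait: float   # how compelling it is in the next minute
        wellbeing: float   # estimated long-term effect on the viewer

    def engagement_score(item: Item) -> float:
        # What recommenders optimize today: the next click.
        return item.clickbait

    def longterm_score(item: Item) -> float:
        # The alternative: still individual-level, but on a long horizon.
        return item.wellbeing

    def rank(items, score):
        return sorted(items, key=score, reverse=True)

    items = [Item(clickbait=0.9, wellbeing=-0.5),  # compulsive but harmful
             Item(clickbait=0.4, wellbeing=0.3)]   # less gripping, healthier

    print(rank(items, engagement_score))  # harmful item ranked first
    print(rank(items, longterm_score))    # healthier item ranked first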

~~~
ethbro
As the popular quip on HN goes, _'The majority of mobile web tech fortunes
have been built on avoiding the costs of consequences and regulation, while
retaining all the profits.'_

> when what's good for the individual may not be good for the public as a
> whole

Is as good a summary of what Facebook has done wrong as anything I've read.

The problem is not that Facebook and its ilk are inherently evil, but that
they seem willfully ignorant. Ignorant that past a certain scale they have an
obligation to the public: an obligation very different from the laissez-faire
world The Facebook started in.

The internet majors seem to be _gradually_ awakening to this, but I'd argue
that only Apple (with their stance on privacy) and Google (with their internal
privacy counterparties) really grok the change. And to be fair, both have
business models that can tolerate having principles.

When you've got a recommendation algorithm that could push someone to suicide
or change an election outcome, you have a responsibility to optimize for more
than corporate profit.

~~~
DoctorOetker
If that's the correct interpretation, then it should have been written as:

"when what's good for the _business_ may not be good for the public, _both as
a whole and as a set of individuals_"

~~~
ethbro
I guess a clearer analogy is via environmental economics and externalities.

The byproduct of dominant market share in an industry where you influence
people's thoughts is toxic responsibility.

And currently, some large companies are avoiding and externalizing the costs
of that responsibility.

------
weddpros
Instagram thinks you want to starve yourself to death when you search for
#fasting and offers help... but has no problem with #bingeeating or #meth...

Maybe it's not only the algorithms that have a problem: I think what's really
wrong is our belief that Instagram should do something to curb certain ideas
and push others.

~~~
omeid2
I think there are good and important reasons for unfettered social media and
an unfettered internet in general, but no one should knowingly serve graphic
content to kids, especially when they're actively targeting them.

------
proxygeek
Similarity and suitability are two radically different scores. But for various
reasons - including, but not limited to, the ease of calculating a similarity
score - we end up conflating the two in a lot of cases.

While calculating similarity scores is getting easier by the day across a lot
of content formats (think image classifiers, sentiment scores, etc), the same
is not true for suitability scores.

Technically, I'd think calculating suitability would require more than just
matching patterns against selected criteria, which is essentially how all
recommendation engines work today.
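
To make the contrast concrete, a toy sketch (all signals here are invented
for illustration):

    import math

    def cosine(a, b):
        # Similarity is cheap: compare two content embeddings.
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    post_a = [0.9, 0.1, 0.3]  # made-up embedding vectors
    post_b = [0.8, 0.2, 0.4]
    print(cosine(post_a, post_b))  # "these posts are alike": ~0.98

    # Suitability is a different question: alike *and* appropriate for this
    # viewer, in this state, right now. Even a crude version needs signals
    # that a similarity score never looks at:
    def suitable(post, viewer):
        if post["self_harm_adjacent"] and viewer["vulnerability_signals"]:
            return False
        if post["graphic"] and viewer["is_minor"]:
            return False
        return True

    print(suitable({"self_harm_adjacent": True, "graphic": False},
                   {"vulnerability_signals": True, "is_minor": False}))  # False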

------
Memosyne
This reminds me of the Person of Interest episode "Q&A", where a software company
was specifically targeting people with mental illnesses and debt to satisfy
their advertisers. Some serious effort needs to go into preventing algorithms
from suggesting this stuff, lest we risk a few negligent developers
threatening the lives of millions.

Then again, that course of action could be a slippery slope: what happens if
the algorithms start censoring things that could potentially upset us? We
could end up in bubbles, completely unprepared and unwilling to face the
hardships that life presents.

I think the problem with personalized advertising is that it often isn't
personal, since the algorithms base their assumptions on data gathered by
observing people who haven't lived through the same experiences. I mean, yes,
we can average things out and disregard outliers in the hopes of maximizing
our finances, but by doing so we'd be neglecting the individual circumstances
that have befallen a person.

I suppose this is the million-dollar ethical dilemma that advertising
companies are struggling with, right? Too much moderation makes content stale,
but a lack of it makes things dangerous.

~~~
Odenwaelder
I would think that the impact of this content is overrated. You aren't
unprepared for the hardships of life just because you are in an algorithmic
filter bubble. People don't kill themselves just because they see graphic
content. This reminds me of the discussion in the early 2000s about first
person shooter games. You don't turn into a school shooter because you played
those.

There's a life outside the internet.

~~~
sverhagen
>People don't kill themselves just because they see graphic content

This was never the point. The article describes how people who have a
predisposition to self-harm get an above-average amount of content related to
it, which compounds the negative feelings they may already have and thus
possibly accelerates their taking the actual step of harming themselves.
Respectfully, you seem very insensitive to the subject matter.

~~~
Odenwaelder
Is there research on the matter? Can displaying graphic content actually
nudge somebody into committing suicide? While this may sound insensitive, I'd
rather opt for having data than join in the general HN hatred of algorithmic
optimization of content.

~~~
DanBC
Yes, there is ample research showing that suicide and self-harm feature
elements of contagion and suggestibility. See eg the "Werther effect":

[https://en.wikipedia.org/wiki/Copycat_suicide](https://en.wikipedia.org/wiki/Copycat_suicide)

eg this example of media reporting guidelines that appear to have reduced
deaths by suicide:
[https://www.ncbi.nlm.nih.gov/pubmed/18082110](https://www.ncbi.nlm.nih.gov/pubmed/18082110)

> In Austria, "Media Guidelines for Reporting on Suicides", have been issued
> to the media since 1987 as a suicide-preventive experiment. Since then, the
> aims of the experiment have been to reduce the numbers of suicides and
> suicide attempts in the Viennese subway and to reduce the overall suicide
> numbers. After the introduction of the media guidelines, the number of
> subway suicides and suicide attempts dropped more than 80% within 6 months.
> Since 1991, suicides plus suicide attempts - but not the number of suicides
> alone - have slowly and significantly increased. The increase of passenger
> numbers of the Viennese subway, which have nearly doubled, and the decrease
> of the overall suicide numbers in Vienna (-40%) and Austria (-33%) since mid
> 1987 increase the plausibility of the hypothesis, that the Austrian media
> guidelines have had an impact on suicidal behavior.

~~~
Odenwaelder
Interesting, thank you! Is this effect specific to suicide or are other
behaviours such as mass shootings also related to this?

~~~
pjc50
It seems quite likely that the inevitable publicity of mass shootings
encourages copycats. Occasionally the shooters even leave a note / manifesto
explaining this.

------
DanBC
Here's Matt Hancock's letter to IT companies:
[https://twitter.com/MattHancock/status/1089864139835670528](https://twitter.com/MattHancock/status/1089864139835670528)

He's the current Secretary of State for the Department of Health and Social
Care. He's by far the most tech-orientated SoS we've had for years, doing a
lot of work to push digital in health. He's rampantly pro-IT.

Sometimes when politicians make requests like this (make it harder to access
images of self harm) people dismiss them as "think of the children". That
would be a mistake here. He's not asking for all images to be removed; he is
asking for the malgorithmic pushing of self harm content to vulnerable people
to be fixed.

People sometimes complain about laws that appear out of the blue. His tweet
above is the start of a long, slow process of building a law. It's a clear
warning: get better at self-regulating, or we'll regulate you.

The lead for suicide prevention in the UK (Professor Louis Appleby) has this
to say:
[https://twitter.com/ProfLAppleby/status/1089528954158043136](https://twitter.com/ProfLAppleby/status/1089528954158043136)

"Self-harm images on Instagram just part of problem we need to address. In our
national study, 1/4 under 20s who died by suicide had relevant internet use &
most common was searching for info on methods"

and this:
[https://twitter.com/ProfLAppleby/status/1089525522084884480](https://twitter.com/ProfLAppleby/status/1089525522084884480)

"Important change in political/social attititude. Just a few years ago,
internet seen as free space, no restrictions, complete lack of interest in
#suicideprevention from big companies. Now mood is for regulation, social
responsibility, safety."

Finally, here's my example of malgorithm ad placement. I've mentioned this
example before, and I think it got fixed (so thank you if you fixed it!), but
I search for suicide-related terms for my work, and sometimes the ads are
terrible.

[https://imgur.com/hhOYUJb](https://imgur.com/hhOYUJb)

~~~
Kalium
> People sometimes complain about laws that appear out of the blue. His tweet
> above is the start of a long, slow process of building a law. It's a clear
> warning: get better at self-regulating, or we'll regulate you.

You're absolutely right! One reading is that it's a request. Please fix this
problem, before we have to regulate you into fixing it.

Is it possible that there may be an alternative reading? A cynic might suggest
that humoring such a plea is a great way to demonstrate that content problems
like this can be solved! Then regulators can require those very useful tools
be applied to whatever they please in a much more general way.

The odds that whatever tools Secretary Hancock gets to solve the very real,
pressing problem he has so wisely pointed to will be _completely inapplicable_
to literally anything else are virtually zero. I can think of a few places
where safety and social responsibility mean things like never disagreeing with
The Party.

As technologists, it's on us to think through the consequences of our choices
where we can. It's often not plausible - nobody thought TCP/IP would lead to
malgorithmic ads! But tools designed to enforce arbitrarily defined social
mores?

------
esotericn
Social media seems to blur the lines between fantasy and reality for many
individuals in a way that they don't seem able to deal with.

In times gone by, we'd generally expect children to realise that what happens
in a movie or a video game is fantastical.

By contrast, social media is treated as a set of interactions with real
people, whether those be your friends or whoever else.

Even posting here on HN is an example. The platform guides me; my (and I
assume your) viewpoint of what the development community thinks about things
is swayed.

I don't think the platform creators are to blame as much as, well, the entire
society we're in. We really need to push for organic interactions with the
communities we're part of and the people around us, not online bubbles with
incredible bias that aren't even necessarily made up of real humans.

------
rhodo
It's the exact same thing as YouTube promoting pedophilia. These systems are
content-agnostic and will give you whatever they think you will click on. If
people somehow really, really liked videos about the number 27, then clicking
on one such video would make the system start showing you more of them. It
seems to me that this is a fundamental part of what these systems are. It's
nigh impossible to say "do this, but keep the bad stuff out" unless you have
human moderation. I'm certainly not saying these companies aren't culpable. It
just seems weird to talk about individual cases in the abstract, as if
Instagram were going out of its way to promote self-harm.
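
To illustrate how topic-blind that loop is, a toy sketch (tags and data are
made up):

    from collections import Counter

    # A content-agnostic recommender has no idea what a topic *means*; it
    # only knows you clicked things carrying certain tags.
    clicks = Counter()

    def record_click(tags):
        clicks.update(tags)

    def score(candidate_tags):
        # More overlap with your click history means shown more often. The
        # loop is identical whether the tag is "number-27" or something
        # genuinely harmful.
        return sum(clicks[t] for t in candidate_tags)

    record_click(["number-27"])
    print(score(["number-27", "numerology"]))  # 1, and rising with each click
    print(score(["gardening"]))                # 0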

~~~
colordrops
> nigh impossible

Couldn't these same systems be trained on moderator censorship to learn what
to weed out?
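
As a sketch of what I mean (features and data are invented; a real pipeline
would sit on top of text/image models rather than hand-picked numbers):

    # Treat moderator takedowns as labels and fit a classifier, so the
    # system learns an approximation of "what a moderator would remove".
    from sklearn.linear_model import LogisticRegression

    # Each row: invented signals for a post, e.g. [graphic_score, tag_risk].
    features = [[0.9, 0.8], [0.2, 0.1], [0.7, 0.9], [0.1, 0.3]]
    removed = [1, 0, 1, 0]  # 1 = a moderator took it down

    clf = LogisticRegression().fit(features, removed)
    print(clf.predict_proba([[0.8, 0.7]])[0][1])  # estimated removal odds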

------
minikites
Algorithms are brutally efficient and this is what happens when their creators
don't have any incentive to think about the end result of their work beyond
"user engagement" or "more clicks".

~~~
Operyl
I think time and time again the creators do end up understanding these
things; it's just that by the time they do, their ultimate owners don't care.

------
nullandvoid
As much as I don't like the whole 'blame the algorithm' movement that's been
going on, I think it's pretty clear that this is a large part of the issue for
me.

I know I've been caught in YouTube's recommendation trap before - good luck
getting out without deleting all your previous history. Whilst I'm 'wise'
enough to notice this and clear my YouTube history, does that option even
exist for Instagram, and would kids or those who are vulnerable think to do
it?

------
taneq
Information systems are a force multiplier on gaining knowledge. They don't
magically stop working if what you're interested in is not what "society"
thinks you should think.

~~~
delusional
Maybe they should.

~~~
johnchristopher
- Hey Siri, don't think about a pink elephant.

- Okie, I won't be thinking about a pink elephant.

~~~
taneq
I'm sorry, Dave, I'm afraid I can't not do that.

------
hamilyon2
Ethical AI is a serious business opportunity.

Machines are getting more and more intelligent. They find content for us,
summarize things, generate speech and text.

Look at how complicated human society is. Trying to directly program rules
against questionable and socially unacceptable behavior is next to impossible,
since the border is far too thin.

There are loads of unwritten rules - about minors, for example - that vary
from culture to culture: about what is OK at what age.

So anyone who uses machine intelligence opens himself up to liability.
Filters are needed, and everyone needs the same set of filters.
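
As a sketch of what such a shared filter layer might look like (the rules and
ages are illustrative only; the point is that they vary by culture and would
need to be maintained in one place):

    # A reusable filter any recommender could call before serving content.
    RULES = {
        "graphic_violence": {"min_age": 18},
        "self_harm": {"min_age": None},  # None = never algorithmically pushed
    }

    def allowed(content_labels, viewer_age):
        for label in content_labels:
            rule = RULES.get(label)
            if rule is None:
                continue
            if rule["min_age"] is None or viewer_age < rule["min_age"]:
                return False
        return True

    print(allowed(["self_harm"], viewer_age=35))         # False
    print(allowed(["graphic_violence"], viewer_age=16))  # False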

------
sigfubar
We wanted platforms that connect the world - now we’ve got them. We’ve given
everyone a voice: the pedophiles, the self-harm fetishists, the terrorists.
Now we’re reaping what we’d sown.

~~~
xg15
I think that misses the step inbeteewn, where the platforms that connect the
world started to prioritize "growth hacking" and keeping the user glued to the
screen as long as possible above all else.

You do not need aggressive autoplay, recommendations plastered everywhere,
and "we miss you" emails every few hours to organise the world's information.

