
If Facebook/Google wanted to program you - aszantu
... how would they do that, if they wanted you to engage in behaviour that is beneficial to you, while preserving the addiction that keeps you sticking with them?<p>Example: They want to help you be less fanatical about race, gender, etc....
right now the algorithm suggests more and more extreme views to keep you hooked, but it's not doing anything beneficial for you apart from providing the links
======
r721
Relevant:
[https://journals.sagepub.com/doi/full/10.1177/17470161155995...](https://journals.sagepub.com/doi/full/10.1177/1747016115599568)

>The Facebook study, entitled Experimental evidence of massive-scale emotional
contagion through social networks (Kramer et al., 2014), was a collaborative
endeavour between Facebook and Cornell University’s Departments of
Communication and Information Science. In it, Facebook researchers directly
manipulated Facebook users’ news feeds to display differing amounts of
positive and negative posts from the people they followed in order to
determine whether their subsequent posts were affected by the positivity or
negativity of the set of posts they were viewing. This effect, that more
positive or negative posts read by a user could change their own emotional
state positively or negatively, is the ‘emotional contagion’ referenced in the
article.
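The mechanism the study describes, showing users differing amounts of positive and negative posts, amounts to sentiment-conditioned filtering of a feed. A minimal sketch of that idea (all names and probabilities are hypothetical; the study's actual pipeline is not public):

```python
import random

def rerank_feed(posts, suppress="negative", omit_prob=0.5, seed=0):
    """Return a feed with a fraction of `suppress`-sentiment posts withheld.

    posts: list of (text, sentiment) tuples, sentiment in
    {"positive", "negative", "neutral"}. Loosely mirrors the contagion
    study's design: one experimental arm sees fewer negative posts,
    another would pass suppress="positive".
    """
    rng = random.Random(seed)
    feed = []
    for text, sentiment in posts:
        if sentiment == suppress and rng.random() < omit_prob:
            continue  # withhold this post from the user's feed
        feed.append(text)
    return feed

posts = [
    ("great day!", "positive"),
    ("awful news", "negative"),
    ("lunch", "neutral"),
    ("so angry", "negative"),
]
print(rerank_feed(posts, omit_prob=1.0))
```

The point of the sketch is that the manipulation is invisible to the user: nothing is fabricated, posts are only probabilistically withheld, which is exactly why participants could not tell they were in an experiment.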

Other links:

[https://www.theatlantic.com/technology/archive/2014/06/every...](https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/)

[https://www.bbc.com/news/technology-29475019](https://www.bbc.com/news/technology-29475019)

------
pdkl95
While the goals are slightly different, I highly recommend reading [1] about
the methods used by GCHQ's JTRIG unit (including the full Snowden
document [2]). They are not really about _specific_ methods to manipulate
groups of people, but the general ideas behind solving that type of problem.

A particularly important topic discussed in [2] is "Attention Management". The
first step humans use to process sensory input is deciding which (small) part
of that data is important. We end up _blatantly ignoring_ [3] most sensory
data, because our capacity for attention is surprisingly small _and very
easily manipulated[4]_ if you know how to flood someone's input buffers[5].

The more troubling problems start when it isn't a human deciding how to
manipulate attention, but instead it's a bunch of machine learning trying to
"optimize engagement" (attention).
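The "optimize engagement" loop above can be sketched as a simple epsilon-greedy bandit that learns to serve whatever keeps users clicking, with no term anywhere for user benefit (category names and click rates below are invented for illustration):

```python
import random

def engagement_bandit(click_prob, rounds=5000, eps=0.1, seed=1):
    """Epsilon-greedy bandit that maximizes clicks and nothing else.

    click_prob: dict mapping content category -> simulated click rate.
    Returns (per-category click-rate estimates, per-category serve counts).
    """
    rng = random.Random(seed)
    items = list(click_prob)
    est = {i: 0.0 for i in items}   # estimated click rate per category
    n = {i: 0 for i in items}       # times each category was served
    for _ in range(rounds):
        if rng.random() < eps:
            item = rng.choice(items)        # explore: try something random
        else:
            item = max(items, key=est.get)  # exploit: serve the top earner
        clicked = rng.random() < click_prob[item]
        n[item] += 1
        est[item] += (clicked - est[item]) / n[item]  # running mean update
    return est, n

# Hypothetical rates: "extreme" content gets more clicks than "moderate".
est, n = engagement_bandit({"moderate": 0.05, "extreme": 0.12})
print(est, n)  # the higher-click category ends up with the higher estimate
```

Nothing in the reward signal distinguishes "engaged because informed" from "engaged because outraged"; the optimizer drifts toward whichever content clicks best, which is the troubling dynamic the comment describes.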

[1] [https://theintercept.com/2014/02/24/jtrig-manipulation/](https://theintercept.com/2014/02/24/jtrig-manipulation/)

[2] [https://theintercept.com/document/2014/02/24/art-deception-t...](https://theintercept.com/document/2014/02/24/art-deception-training-new-generation-online-covert-operations/)

[3] how long does it take you to notice the difference between these two
flickering frames:
[https://www.cse.iitk.ac.in/users/se367/10/presentation_local...](https://www.cse.iitk.ac.in/users/se367/10/presentation_local/Change%20Blindness_files/plane.gif)

[4]
[https://www.youtube.com/watch?v=GZGY0wPAnus](https://www.youtube.com/watch?v=GZGY0wPAnus)

[5] Apollo Robbins is shockingly good at using banter to DoS someone's
attention:
[https://www.youtube.com/watch?v=dTa7rC1oUnk](https://www.youtube.com/watch?v=dTa7rC1oUnk)
, [https://youtu.be/1kkOKvPrdZ4?t=1882](https://youtu.be/1kkOKvPrdZ4?t=1882)

