
How YouTube Radicalized Brazil - otterley
https://www.nytimes.com/2019/08/11/world/americas/youtube-brazil.html
======
Synaesthesia
Apparently lies and fake news on WhatsApp played a huge role in Bolsonaro’s
election.
[https://www.washingtonpost.com/news/theworldpost/wp/2018/11/...](https://www.washingtonpost.com/news/theworldpost/wp/2018/11/01/whatsapp-2/)

------
voidhorse
Disclaimer: All the views expressed here are solely my own, and not those of
any company or corporation.

Doubtless, this is an incredibly complex issue, and companies behind
recommendation algorithms do have some degree of culpability for the outcomes
of the algorithms they design, but I can't help but feel that most of this is
an issue with human beings more than it is with algorithmics.

One could presume YouTube has an extreme ideological agenda and has developed
its algorithms in service to that agenda, yet a more likely possibility is
that YouTube's algorithm simply maximizes for the content that _humans are
drawn to engage with strongly_ which, surprise, surprise, all of history
will show you tends to be vicious, inelegant, untruthful, and ultimately
disastrous ideology.

The problem is not the algorithm. The problem is the masses that believe
everything they watch without questioning it. The problem is not internet
communities. The problem is the utter lack of localized communal structures in
the modern world which leaves people defenseless in the face of efforts of
galvanization toward some other, hateful view--there are no institutional
anchors sheltering unwitting individuals from being duped any longer, and
passions, blame, and fury still speak loudest among the figures of rhetoric.
The problem is not the dissolution of truth and spread of misinformation,
which has always existed. The problem is the increased efficiency with which
that misinformation spreads _without_ equivalent compensatory development of
the _intellectual and critical thinking_ armature we need to defend ourselves
and keep our reason sound and free of influence.

You cannot call on these companies to be stewards of the world's information,
grand censors, stiflers of expression in the name of the greater good. That's
tyranny that will leave the blind blind. What we should be focusing on is how
we can _equip citizenry_ to better inform themselves, better guard against new
means of propaganda and misinformation, better imbue them with the critical
sense to second-guess the radical and largely unsupported claims they
encounter on a daily basis. "Fake news" is not a problem solved at the level of the
distribution channels, which will _always_ be susceptible to dastardly uses--
it is a problem solved at the level of the _receivers_ of the information, who
need to be imbued with the critical foresight and reasoning skills required to
defend themselves from manipulation.

Yet no one seems to focus on that.

Instead, the pundits and essayists of our day assume the general population
will remain a dark and unenlightened herd, susceptible to every trick and puff
piece, slaves to the whims of algorithmic wizardry. It's a narrative that buys
into the very power structure it seems to want to question, and instead of
lauding and promoting the dignity of human thought, the possibility of
enlightenment, and a pursuit of a rational future, has abandoned all to a dark
technological determinism, assuming that we've already made the devil's trade
of human thinking for algorithmic obedience. We'll never solve this
problem so long as we continue treating human subjects as irrational receptors
of whatever ideology comes their way instead of trying to promote intellectual
edification. The very title of this article reveals this bleak worldview that
assumes human beings have no free will against the larger movements of
technology--note that the title is not "How Radical Far-Right Politicians
_used Youtube_ to radicalize Brazil" but rather "How _Youtube_ radicalized
Brazil"

We live in a techno-dystopian fantasy that posits we've already lost our
freedom to the "machines".

~~~
anigbrowl
_The problem is not the algorithm. The problem is the masses that believe
everything they watch without questioning it._

This is wrong. You're basically saying people should have been educated
against an emerging technology, a process that can take a generation. You let
the technologists off the hook on the basis that if they don't do it someone
else will, and the negative outcomes are the fault of the uneducated for having
failed to educate themselves.

The same argument was used to dismiss the way Facebook facilitated the
genocide in Myanmar. For sure, social media companies are not the originators
of human evil nor in any way the monopolists of it, but they are massively
profiting from it despite _themselves_ having the education and knowledge of
how their tools are being abused, so that makes them culpable.

~~~
voidhorse
That’s a fair point. It does seem morally reprehensible to be aware of the
fact that your platform has become a megaphone for genocide and to profit from
it nonetheless. So I agree that on those grounds we should hold Facebook
accountable.

Still, this is not a new problem. In 1939 an entire nation was susceptible
enough to propaganda to accept global war and genocide, and several other
nations were willing to turn a blind eye and ally themselves to that nation.
Mass propaganda has functioned successfully for a long time; the only
difference is that it used to be driven primarily by national interests, while
now it’s driven by global iconoclasts on platforms that have economic interests
but are otherwise neutral. The reason populations in ‘39 were susceptible to
harmful ideology is the same as the reason they are today—they are cultivated
in environs that, either because of material limits or cultural emphasis, do
not prioritize instilling the ability to conduct critical analysis of
argumentation in the wider population.

It’s important to recognize that while technology has certainly accelerated
the spread of harmful thought and populist rhetoric, it has not enacted a
fundamental change in the nature of that rhetoric or of its function.
Totalitarianism today looks much the same as it did a few years ago, and as it
did even further back in history—only its vehicle has changed. In fact, one of
the prime errors in the analysis of modern ills seems to lie in a willful
ignorance of history, which in flavor differs from our day but in essence
remains the same.

Also, I’m not making the argument that we should have adapted to technologies
we couldn’t have foreseen—I’m saying the time to adapt is _now_, not a few
years ago. I simply think focusing too heavily on attacking the technologists
(now) is the wrong angle and will fail to address the whole issue (though of
course, holding them accountable is definitely a big part of the story),
simply ensuring the problem lives on in another form.

The goal should be better education systems, and particularly ones that focus
on producing _rational human subjects_, not economic cogs (which is what
public education, for the most part, produces today). Those cogs, while plenty
efficient (and pleasingly expendable) for corporations, have no aptitude for
the discerning judgement or humanistic reasoning that might save them from
political radicalization.

~~~
anigbrowl
We're in general agreement, but the problem with the educational approach is
just the relatively long timeframe over which it takes place. Sticking with
your WW2 example, some people certainly saw the risks by the time of the Nazi
regime's ascent to power in 1933, but they lacked the time as well as the will
to adapt sufficiently quickly. We have less of an excuse: while a fully
networked society is certainly novel, we have far more historical perspective
than earlier generations with which to know, and really understand, the
downside risks.

