
Zuckerberg, Dorsey spar over Twitter’s flagging of Trump’s tweets - evo_9
https://www.politico.com/news/2020/05/28/zuckerberg-dorsey-spar-over-twitters-flagging-of-trumps-tweets-286881
======
AbrahamParangi
They’re not sparring. They’re not talking to each other at all. Zuck’s
positioning for advantage. Title should be: “Zuckerberg signals right-friendly
position to curry favor and hopefully avoid regulation”

------
chanmad29
In defense of Zuckerberg, he has always maintained this hands-off stance. But
isn't the reality that Facebook is already an arbiter of truth, given that it
banned Alex Jones? I would much prefer that these platforms owned up to their
responsibility the way Twitter did.

------
igotsideas
What happens when people start making false ads about Zuckerberg on FB? Does
he feel the same way and let it ride out?

~~~
roflchoppa
You're paying him money; I don't think he cares.

------
buboard
Does Snopes fall under Section 230? If so, why isn't Facebook? Twitter is
arguably even worse, since they don't even use the ruse of a third party.

------
dmode
I long for the day when I don't have to hear about Trump's useless tweets
anymore

~~~
bzb3
Don't consume media.

~~~
dmode
I mean, this is on Hacker News. It is unavoidable

------
vulcan01
> White House press secretary Kayleigh McEnany also told reporters later in
> the day the president was expected to sign an executive order Thursday aimed
> at social media platforms, contributing to speculation the administration
> would target a 1996 statute protecting the companies from lawsuits.

So Trump's plan is to suspend Section 230 and then let the DoJ loose.

------
robomartin
This is a difficult problem. One where popular agreement on anything other
than an emotional basis is unlikely. In other words, in this particular case,
half the population will think one way and the other half will take the
opposite stance.

However, these are not and should not be questions that address a specific
case, particularly that of someone who is notable, famous, important,
privileged, etc. (take your pick). I think this is an important question given
the reach and scale these companies have and, yes, the influence they can
exert. And it should be a question about how they treat the average person
rather than someone of note.

There are far more people posting nonsense online --sometimes very dangerous
nonsense-- than a single person of note. Even with Trump's 80 million follower
reach, he cannot compete with the networks of networks of people pushing all
kinds of nonsense on every platform every minute of every hour of every day.

It's an important question because of the scale and the way in which it
distorts society worldwide. Before social media these kinds of effects were
impossible. The scale, range, domain, reach and specificity of what's possible
today could not happen in an era when creating social graphs at scale was
very difficult.

Today it is plausible to have a million people worldwide believe that eating
rat feces will inoculate you against a disease...or that the earth is flat. Et
cetera.

What is the right answer then if what you are after is a better society?

Freedom of Speech, as implemented in the US, is often misunderstood. IANAL but
I made it a point to understand this over the years. The basics go something
like this: Freedom of speech does not mean you are free of the consequences of
your speech. The first amendment to the US constitution is intended to limit
the ability of government to restrict speech, not private enterprise. The
instant you enter a grocery store you can't stand there and say whatever you
want. Outside, in a public space, yes, go for it. Inside, no, you can't. And
you are subject to whatever rules the company might have in place. Same with
your employer.

And yet we have the effect of social media companies, where very public speech
is transacted every microsecond of the day, networks are created, fed,
encouraged and monetized by algorithms and passionate communications are pure
gold. Passion, hatred, outrage, despair, fear; everything on the negative side
of the human emotional scale feeds these companies millions of dollars per
year through the delivery of something every internet business craves:
Engagement.

OK, great. What do we do?

It's hard to imagine how to deal with this, given the many legal and ethical
tentacles behind the issue. We can, however, mark the two extremes: censoring
everything that does not meet some criteria is one end of the scale; the other
end is not interfering with communications in any way at all.

What's a business to do? What should a business be responsible for? How does a
business get past the inevitable biases of the people who make up the
business?

Do you force an employee who is against recreational drugs or abortion to not
tag posts related to those subjects?

Do you force software developers who are for Freedom of Choice to write code
that protects the opposite stance?

How do you deal with issues in different regions, countries, cultures,
religions and political systems?

Is a business to be responsible for the consequences of what they,
effectively, sell?

Yes, social media companies sell speech, emotion, networking, etc. If they are
responsible for the consequences of what they sell, should other companies be
charged with similar responsibility? Should Walmart be liable because they
sold a knife to someone who used it during a crime? Should the company who
sold the pressure cooker to the Boston marathon bombers be responsible for
what they did with their product? And, the big one, gun manufacturers and
retailers. Should they be responsible for what's done with the products they
sell?

Things become more interesting and complex as you start to explore the
generalization of what started with a simple tweet and the company choosing to
insert itself between the source and the audience in some way.

To get personal, we have two relatives who are in the grips of these horrible
effects social media can produce. I have watched each of them self-radicalize
(it's the only term that applies) over the years. It has been a very slow
process. The start of the effect, thinking back, was truly imperceptible. As
they descended deeper and deeper into their respective camps, the effect
intensified and the pace accelerated. Years in the making. Today they are so
deeply and
solidly entrenched in their camps that it would be impossible for us to pull
them back into a reasonable middle. One of them has absolute and total hatred,
virulent hatred, for Trump. He hates every molecule in Trump's body, his
family, Republicans, his very existence. The other hates everything Democrat
with equal intensity. They are polar opposites. And the worst part: One is the
son of the other.

How does this kind of self-radicalization happen? Not without social media. At
least not to the depth and intensity someone is able to reach today. And, yes,
it is absolutely destroying lives. In our case, most of the family dreads
talking to these two relatives. Nobody is interested in the conversations they
want to have. What's ironic is that this means they go back to their
respective social media dens and walk deeper into their respective dark caves.

The problem is the algorithms these companies are using.

The problem isn't censorship, moderation or the lack thereof.

What do I mean?

Concerned with what was happening to our relatives, I decided to conduct a
simple experiment:

I went on Facebook and clicked on the Video tab. I browsed until I found an
anti-Trump video. I watched it from start to finish. I then browsed some more
and found another one. Click. Watch. Within two or three clicks everything FB
was showing me was anti-Trump. The selection and intensity of the hatred
increased as I continued to click-watch these videos. It was truly horrific to
realize how quickly these algorithms allow someone to descend into a dark
hole.

I then decided to take the stance of someone who, perhaps, noticing he is
being fed the same kind of stuff, wants to pull the range of recommendations
back to an average setting. The only way to do it is to find and watch videos
on the other end of the scale. I sat there scrolling through videos for
probably 30 minutes before I came across something that was mildly supportive
of Trump. I click-watched. The effect on the recommendation set was
imperceptible. About 20 minutes of scrolling and I found another. Click-watch
again. Minor gains. Now it took about ten minutes. Click-watch.

It took somewhere on the order of two hours of resisting click-watching
hateful anti-Trump videos to feel like the recommendation set had some
semblance of balance. It certainly had nothing else. It was full of Trump-
centered videos (love or hate) and nothing else. No cute puppy videos. No
videos about anything else. It was a set of videos representing polar
opposites.

And click-watching any one of them would instantly drop you into a local
minimum that would radicalize the selection set within a few clicks.
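The feedback loop I'm describing can be sketched in a few lines. To be clear,
this is purely illustrative: the topic names, the boost factor, and the
weighting scheme are my assumptions, not Facebook's actual algorithm.

```python
import random

# Illustrative model of the click-watch loop described above: every watch
# multiplies the watched topic's weight, so a handful of clicks makes one
# topic dominate the sampling distribution. (Topics and boost factor are
# made up; no real platform's parameters are implied.)
def recommend(weights, k=5):
    """Sample k video topics in proportion to their current weights."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics], k=k)

def click_watch(weights, topic, boost=3.0):
    """Reinforce the watched topic -- the assumed engagement objective."""
    weights[topic] *= boost

weights = {"puppies": 1.0, "remodeling": 1.0, "anti-trump": 1.0}
for _ in range(3):                          # three click-watches in a row
    click_watch(weights, "anti-trump")
share = weights["anti-trump"] / sum(weights.values())
print(f"anti-trump share of recommendation weight: {share:.0%}")  # ~93%
```

Under these toy numbers, three click-watches take one topic from a third of
the pool to over ninety percent of it, which matches how fast the descent felt
in practice.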

This. This is it. This is horrible. And this is what's causing much damage.

These algorithms are great when you are trying to figure out how to remodel
your kitchen or get WebRTC working. You want "radicalization" because it leads
to better and better information. However, there are subject areas where these
algorithms lead to nothing positive. The outcome is damaging,
counterproductive and detrimental to society.

I firmly believe this is the step social media companies need to take: They
need to modify the way algorithms treat some subjects in order to always
attempt to present a balanced view of the world. Don't allow anyone to descend
into the Trump-hatred or Republican-hatred cave just as nobody should be
allowed to enter the Biden-hatred or Democrat-hatred tunnels. Same with other
subjects: abortion, drugs, religion, racism, etc. This isn't censorship, this
is balance.

Don't allow easy radicalization when it comes to key subjects. It should be
nearly impossible to be exposed to a constant stream of anti-<pick your topic>
videos and posts, no matter what someone does. The same algorithm that helps
you learn how to remodel your kitchen is harmful when the subject matter falls
into certain categories.

And so, by changing the recommendation engines such that they apply
reinforcement when warranted (home remodeling, looking for a vacation, coding,
learning, woodworking, etc.) and refuse to behave the same way with
potentially damaging topics (politics, hatred, etc.), the social media
companies could deliver a real service to society and do right by people: not
by censoring, but simply by not becoming a conduit for self-radicalization.
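As a sketch of what that gate could look like -- where the topic labels, the
SENSITIVE set, and the gating rule are all my own assumptions, not any
platform's real taxonomy:

```python
import random

# Hypothetical gate over a recommendation engine: keep reinforcement for
# benign subjects, but force an even sample across all topics whenever the
# last watch fell into a sensitive category.
SENSITIVE = {"politics", "religion", "abortion"}

def next_recommendations(history, catalog, k=3):
    """history: watched topics, newest last; catalog: topic -> videos."""
    last = history[-1]
    if last in SENSITIVE:
        # No reinforcement: spread picks evenly across the catalog so the
        # viewer cannot be funneled deeper into one cave.
        topics = random.sample(list(catalog), k=min(k, len(catalog)))
    else:
        # Benign subject (kitchen remodel, WebRTC, ...): reinforce it.
        topics = [last] * k
    return [random.choice(catalog[t]) for t in topics]

catalog = {"politics":   ["p1", "p2"],
           "remodeling": ["r1", "r2"],
           "puppies":    ["d1", "d2"]}
print(next_recommendations(["remodeling"], catalog))  # all remodeling picks
print(next_recommendations(["politics"], catalog))    # spread across topics
```

A real engine scores individual items rather than whole topics, but the gating
decision is the point: the same reinforcement machinery, switched off for a
short list of subjects.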

Change the algorithm. Save the world.

