
Facebook is rating the trustworthiness of its users on a scale from zero to one - bradleybuda
https://www.washingtonpost.com/technology/2018/08/21/facebook-is-rating-trustworthiness-its-users-scale-zero-one/
======
pjc50
"Lyons said she soon realized that many people were reporting posts as false
simply because they did not agree with the content."

Well, yes. Have they not learned from any of the preceding work on moderating?

~~~
mic47
I worked on FB's anti-spam team, and believe me, nobody trusted user feedback
absolutely (i.e. it's a good signal for sending content to review, but not for
automatic takedowns). I guess that with fake news the signal-to-noise ratio is
even worse.

The whole article sounds more like it's focusing on one implementation detail
of review-queue prioritization (i.e. how this user's definition of "fake news"
differs from Facebook's definition), rather than the bigger picture.

~~~
WiseWeasel
To me, fake news also includes slanted coverage of true events (i.e. covering
events in such a way as to convey a sponsored message, biased polling
methodology, etc.). That encompasses statistically approaching 100% of trashy
sources you’re likely to encounter as 'sponsored content', as well as a
healthy portion of more established sources' content. I'd be likely to report
a majority as fake news in perfectly good faith if given the option.

Maybe the term isn’t as narrowly defined as your (former?) employer would
like.

Edit: upon a bit more reflection, I do take for granted a desire to protect
life and liberty (in that order), and anything that appears to threaten that
view will draw special scrutiny, and potential abuse of reporting features.

~~~
rmrfrmrf
I think the use of "trustworthiness" is misleading. Really it's more "how
likely is this person to accurately flag an article that matches Facebook's
internal definition of fake news."

That would mean that your number would be on the lower end, but that's not a
reflection of your character. More likely it just means that whatever queue is
set up for review will wait until more and higher-scoring individuals also
flag the article before bumping it up the queue.

Funny how this article is something I would flag if I used your definition of
fake news. To me this article is trying to allude to some nefarious Black
Mirror nonsense.

~~~
WiseWeasel
Well said, and I agree completely about this story.

There's still an aspect of this with which I'm unsatisfied, however. What's
the character of leadership who would address misuse of the feature by
devising weights intended to devalue the opinions of large swaths of their
engaged user base? Significant numbers of people are expressing a sentiment
through their alleged abuse of the feature, and instead of trying to channel
that constructively, the company is devising ways to ignore it.

~~~
rmrfrmrf
By using "significant numbers" alone, you're vulnerable to coordinated
information suppression efforts (by state actors, for example).

~~~
WiseWeasel
Certainly there are more than just bad-faith actors abusing the feature, and
they could be separated out through changes to the interface.

------
ChuckMcM
I am tempted to paraphrase the Elephant Man, "I am not a sigmoid function, I
am a human being!" in response to that headline :-)

That said, I think it is great that they are trying to characterize this sort
of thing. The second law of institutions is that every process must require
more people to execute than can conspire without being detected[1]. But
relying so much on humans in one way or another is going to gum it up. My mom
for example, a certified human, would always flag anything that offended her
belief system as 'fake news.' Fact checking is fine but it is important to
understand that conspiracy folks actually believe what they are saying is
true, regardless of any facts presented that contradict their world view. I
have always attributed that to an internally rooted desire to "need" it to be
true, either for their own self image or their own conscience.

[1] No, I just made that up. But the kernel is true, if you have a system that
is being gamed you have to build a defense that cannot itself be gamed.

~~~
panarky
_> every process must require more people to execute than can conspire without
being detected_

Step 1 - let users report posts that are false

Step 2 - human fact checkers investigate user reports

Step 3 - rely more on users who report posts as false that really are false,
and less on users who report posts as false that really are true

Step 4 - repeat step 3 but for the fact checkers themselves

Step 5 - train an ML model to automate more of the fact checking work
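
Steps 3 and 4 could be scored with something as simple as smoothed counts of
confirmed vs. rejected reports. A toy sketch (all names and numbers are my own
invention, nothing to do with Facebook's actual system):

```python
from collections import defaultdict

class ReportQueue:
    """Toy sketch: weight user reports by how often the user's past
    reports were confirmed by fact checkers."""

    def __init__(self):
        self.confirmed = defaultdict(int)   # user -> reports verified false
        self.rejected = defaultdict(int)    # user -> reports verified true
        self.pending = defaultdict(float)   # post_id -> accumulated weight

    def trust(self, user):
        # Laplace-smoothed score in (0, 1); new users start at 0.5.
        c, r = self.confirmed[user], self.rejected[user]
        return (c + 1) / (c + r + 2)

    def report(self, user, post_id):
        # Each report contributes the reporter's trust, not a flat +1.
        self.pending[post_id] += self.trust(user)

    def next_for_review(self):
        # Highest trust-weighted report total goes to fact checkers first.
        return max(self.pending, key=self.pending.get)

    def record_verdict(self, user, report_was_correct):
        if report_was_correct:
            self.confirmed[user] += 1
        else:
            self.rejected[user] += 1
```

One report from a reliable flagger then outweighs one report from a habitual
mis-flagger, which is exactly the queue-prioritization effect the article
describes.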

~~~
degenerate
Step 6 - abandon the project when an edge case creates a PR nightmare

------
artemisyna
...and so does every other big tech company that needs to deal with spam.
(I.e., any social media site half worth its salt.)

~~~
currymj
yeah, if you have a spam detector that outputs a probability, then that's a
rating of trustworthiness on a scale of 0 to 1.

------
FabHK
I wonder whether there's some PageRank-like principal eigenvalue problem under
the hood (similar to EigenTrust).

As my Linear Algebra prof used to say - deep down, everything is just linear
algebra...

[https://en.wikipedia.org/wiki/PageRank](https://en.wikipedia.org/wiki/PageRank)

[https://en.wikipedia.org/wiki/EigenTrust](https://en.wikipedia.org/wiki/EigenTrust)
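
For what it's worth, the EigenTrust idea fits in a few lines: power iteration
with PageRank-style damping over a matrix of pairwise trust. A hypothetical
sketch (my own toy example, not anything Facebook has disclosed):

```python
def eigentrust(M, d=0.85, iters=100):
    """Principal-eigenvector trust scores via power iteration.
    M[i][j] is the fraction of user j's trust given to user i
    (columns sum to 1); d is the PageRank-style damping factor."""
    n = len(M)
    v = [1.0 / n] * n  # start from a uniform distribution
    for _ in range(iters):
        v = [(1 - d) / n + d * sum(M[i][j] * v[j] for j in range(n))
             for i in range(n)]
    return v
```

With three users where user 0 trusts user 1, user 1 trusts user 2, and user 2
trusts user 1, the scores concentrate on users 1 and 2; the damping term keeps
the iteration from oscillating and guarantees convergence, just as in
PageRank.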

~~~
akhilcacharya
I bet your graph theory professor thought otherwise!

~~~
antidesitter
I thought so too but then there’s the edge space and cycle space which are
essentially linear algebra over F2.

------
dragonwriter
> "Lyons said she soon realized that many people were reporting posts as false
> simply because they did not agree with the content."

So, people report posts as false when they do not agree that they are true?

I'm struggling to understand how that is a surprise, or what other possible
thing you could expect. That's literally exactly what “false” means.

~~~
ithilglin909
By "agree with the content", the author clearly doesn't mean "believe that
the content is _factually_ incorrect", but rather that they disagree with the
viewpoint expressed in the story.

~~~
dragonwriter
> By "agree with the content", the author clearly doesn't mean "believe that
> the content is factually incorrect"

I disagree that that is obvious, and, moreover, I would disagree with the idea
that the author would have any way of telling whether someone flagging
_merely_ disagreed with some aspect of the viewpoint/presentation outside of
the base facts or disagreed with both the viewpoint _and_ the fact claims,
seeing the latter as misrepresented to serve the former. Further, it's
virtually impossible to cleanly separate disagreement on viewpoint from
disagreement over facts, because the two mutually interact (particularly,
viewpoints shape not only the perception of facts but also perceptions of the
_relative importance_ of facts.)

For instance, there is a story right now circulating in even fairly
mainstream, especially conservative, media (notably including Fox News) and
being widely shared on social media alleging that a proposed California law
would ban restaurants from serving children any drinks except the two choices
of milk and water. This story is invariably couched in a lot of viewpoint
around crazy nanny state liberals and other conservative stereotypes of
California. It is very likely that lots of people would disagree strongly with
the viewpoint of the story.

But the actual proposal, as one discovers with a few minutes of research, (1)
doesn't limit what restaurants can offer to children (it limits “default drink
options” for children's meal combos) and (2) the limitation it proposes isn't
just milk and water (it includes milk, water, flavored water, and nondairy
milk alternatives). It is, in other words, factually false in several
dimensions.

Now, if one agrees with the viewpoint, one may be more likely to dismiss those
factual errors, even if you are aware of them, as minor and tangential. If one
disagrees with the viewpoint, one is likely more inclined to see them as
significant and central and indicative of a deliberate distortion to push an
agenda.

~~~
ithilglin909
Maybe I’m overly pessimistic about how people collectively behave.

But seriously, imagine an article posted on Facebook stating that Planned
Parenthood sells body parts of aborted fetuses (or something similarly likely
to elicit reactions). Obviously, that's either factual or it’s not. Yet I'm
certain that if Facebook users had the ability to vote on whether the story is
true or false, they would vote based on personal beliefs, not on what they
would find by fact-checking the story.

Likewise for anything else that in _any way_ touches on an inflammatory
issue. Do you actually think otherwise?

~~~
dragonwriter
> Obviously, that's either factual or it’s not.

Sure, but unless people actually work at PP or carry out their own individual
on-site investigation, their knowledge of the fact is going to be based on
what third-party sources they trust (including those acting as fact checkers),
which is tightly correlated with viewpoint.

> Facebook users had the ability to vote on whether the story is true or
> false, they would vote based on personal beliefs, not on what they would
> find by fact-checking the story.

They'd probably vote based on their _beliefs_ about the facts, which would
often include checking with sources they trust on facts. Of course, those
beliefs—and where fact checking is involved, the set of trusted sources—will
be strongly correlated with their beliefs about abortion.

The ivory tower ideal of a clean separation between how people address fact
claims and the value system in which they give significance to facts, where
you could then isolate which process was involved in a decision excluding the
other, is, while convenient, not at all a reflection of reality.

~~~
ithilglin909
> their knowledge of the fact is going to be based on what third-party sources
> they trust (including those acting as fact checkers), which is tightly
> correlated with viewpoint.

But that's true of nearly everything.

On anything controversial, users will generally vote based on what they like
to believe, and such a system of ranking news stories or posters of news
stories would never be reliable.

~~~
dragonwriter
> But that's true of nearly everything.

Yes, that's my whole point: the claim that users flag content based on
ideology _rather than_ factual disagreement is essentially impossible to
justify, since perception of facts and ideological viewpoints are deeply
intertwined.

~~~
ithilglin909
I don't disagree, but I'm sure that users will also flag stories as false that
are most likely true, but that don't reflect well on their side of an issue.

------
vorpalhex
> Because Facebook forwards posts that are marked as false to third-party
> fact-checkers, she said it was important to build systems to assess whether
> the posts were likely to be false to make efficient use of fact-checkers’
> time.

This seems like a "the road to hell is paved with good intentions" kind of
thing. On one hand, I understand not wanting to forward every single story
when a troll is mass-reporting; on the other hand, this seems like a very
1984 approach that, like all algorithms, is horrendously naive.

Any closed group is likely to have a skewed sense of reality. On one hand,
third party fact checkers are hopefully true to their title, but on the other
hand, they could just be adding their own bias to the decisions with no
oversight in a way that affects users going forward.

Add in no ability to appeal and no visibility into the system, and you have a
closed system ripe for abuse.

Good job Facebook, you have learned nothing.

~~~
mipmap04
What would be a viable alternative? I don't think they can just ignore the
issue and, as you mention, can't review every single story. They'll have to
rely on some form of sampling for the reviewers. Would it be better to just
take a truly random sample of all reports? I don't think that would be very
effective given the number of publications they have to review. Do you let
users review the content for veracity? Seems like that just creates a
situation where the group with the most participants pick the news. I'm not
sure this is the optimal path, but to say that Facebook has learned nothing is
disingenuous.

~~~
cabalamat
> What would be a viable alternative?

Not attempting to be the arbiter of truth and falsehood would be one.

Or reporting on what users think of a story's truthfulness: "49% rate this
story as true, 21% rate it as untrue".

Or, if they want to be honest, acknowledging that it is about policing taboo
thoughts. "Smith, 4980203 Winston. This article is crimethink! Warning: If you
read too many articles like it your Citizen Credit Score will be downgraded.
You have been warned!". OK, maybe they won't go for that :-)

~~~
web007
There are a lot of cases where truth and falsehood are absolute. Those cases
where something is provably false should be called out as such.

Think of something like Pizzagate - the fact that the restaurant in question
didn't have a basement belied the assertions that bad things were happening in
said basement. Any article or comment that passed on that provable lie should
be flagged as a lie.

Sure, there are grey areas beyond that. Those are opinions, and should be
separable from facts.

~~~
dragonwriter
> There are a lot of cases where truth and falsehood are absolute

Truth and falsehood are absolute for pretty much all fact claims.

Whether they can easily be determined is another story.

> Think of something like Pizzagate - the fact that the restaurant in question
> didn't have a basement belied the assertions that bad things were happening
> in said basement.

Sure, but suppose (they weren't, but for the sake of illustrating the
problem) bad things were happening in, say, an attached space controlled by
the owners of the restaurant, but somehow, as the story got out and before it
could be corrected, it was distorted so that it referred to a non-existent
basement instead. Would actively suppressing the story because of the
impossible location described really be in the public interest?

Sure, in strict terms any single factual error makes even the most involved
story false. But most accounts will accumulate some false elements in a
complex story: there has to be consideration of importance and materiality.

------
intended
>some users began falsely reporting items as untrue, a new twist on
information warfare for which it had to account

"new",

> But how these new credibility systems work is highly opaque, and the
> companies are wary of discussing them, in part because doing so might invite
> further gaming — a predicament that the firms increasingly find themselves
> in as they weigh calls for more transparency around their decision-making.

Fractal stupidity - this is the term for this situation

You build better systems, which don't ACTUALLY capture the core bad behavior.

Users figure out the edges of said system, and start finding newer ways to
slip through the gaps.

Now you need a new system.

Repeat till the history of the system looks like a Pulitzer Prize-winning
Kafka novel.

------
eivarv
I don't understand why big players such as Facebook and Google keep insisting
on using (appeals to) authority as a proxy for truth, when we have rules and
systems for critical thinking.

It seems to me that if the claims and stories in question could be re-posed
(e.g. as falsifiable, logically coherent statements or arguments) the worst of
"fake news" could be weeded out.

The rest (and biases, etc.) would likely have to be left as an exercise for
the (critical) reader.

~~~
chongli
Google and Facebook want to have their cake and eat it too: billions of users
without having millions of customer service workers.

What they're trying to accomplish, to my mind at least, would require the
invention of general AI. Automatic content moderation sounds very close to a
Turing test to me.

~~~
MichaelKSpencer
Google Home is maybe years ahead of what Facebook voice-AI will feel like.

------
luka-birsa
Is it anything like the Chinese credit system? Seems the two could share
notes.

~~~
cheeseomlit
Sounds just like it to me, except we are being categorized by corporations
instead of the state. How long until potential employers make a good social
score a prerequisite for hiring?

~~~
prolikewh0a
Will I be denied employment if I don't have social media?

~~~
cheeseomlit
I wouldn't be surprised if that's already considered a red flag. "No facebook?
Sounds anti-social.."

------
dunpeal
I wonder what the score would be if Facebook users were to rate its
trustworthiness on a scale from zero to one.

~~~
amelius
Perhaps there is a reciprocal element. If Facebook doesn't appear trustworthy
to me, then I don't trust Facebook, and then I will probably appear
untrustworthy to Facebook.

~~~
MichaelKSpencer
Yes I imagine most consumers will eventually come to think like this. You
might be a bit ahead of the curve though?

------
forapurpose
A bunch of related thoughts:

1. How do we know FB will assign the scores fairly and objectively? We don't,
of course, but is there any legal requirement or protection for users?

2. How is it like the Chinese government's social credit score? Right now
Facebook says it's used for filtering user reports, but if FB becomes
confident in it, what else might it be used for? Will they sell the score to
advertisers and vendors so that they can filter customers? To credit score
companies? To insurance companies and potential employers? Will it be used to
'nudge' mass behavior in certain directions? Is there anything preventing FB
from doing those things?

3. As usual it's the powerful (FB, potentially their advertisers and other
partners) who have information about ordinary people (FB users). Isn't it more
important that ordinary people have trust information on the powerful? What is
Mark Zuckerberg's trust score? Google's? Elon Musk's? Various politicians?

4. According to the article, FB isn't releasing the algorithm because it
would enable users to game it (in fairness, I don't remember if FB said that
or someone else). But those with resources and motivation, such as the high-
end propagandists, will learn how to game it anyway, which can have at least
two consequences: 1) Propaganda may become harder to identify, and 2) power
will be even more concentrated in the hands of the few - FB, their partners,
and those who can game the system.

5. As seems the norm for tech, the users have very little agency (other than
to try to please the system); they are objects of technology and of those who
control it. It bears repeating that we've come a long way from the world of
free-as-in-speech systems and the idea that the goal is end-user control. In
another context, an article posted to HN paraphrased Alan Kay as asking,
'Should the computer program the kid or should the kid program the computer?'
[0] I think our society has answered that question for the present.

[0]
[http://wheels.org/spacewar/stone/rolling_stone.html](http://wheels.org/spacewar/stone/rolling_stone.html)

------
deanclatworthy
It's smart to go public with this. There is established research in this area
and a trove of algorithms for measuring trust. Even if it's as simple as
lowering a user's trust ranking after false reports - when applied across a
dataset as large as FB's it's going to be somewhat accurate.

The article goes on to suggest this is just one of a range of heuristics used
to determine trustworthiness. I can imagine all kinds of profile interactions
that would be good indicators of this.
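
Even the simple version is easy to write down. A toy exponentially-weighted
update (my own sketch, not anything described in the article):

```python
def update_trust(trust, report_confirmed, alpha=0.1):
    """Nudge a 0-1 trust score toward 1 when a user's report is
    confirmed by fact checkers, and toward 0 when it is rejected.
    alpha controls how fast old behavior is forgotten."""
    target = 1.0 if report_confirmed else 0.0
    return (1 - alpha) * trust + alpha * target
```

Starting everyone at 0.5 and applying this per reviewed report keeps scores
bounded in [0, 1] and, across a dataset as large as FB's, would already sort
habitual mis-flaggers from reliable ones.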

------
pnathan
Before reading the article, I estimate that the technical detail is this:

a normalized statistical value is generated for a particular user's
willingness to share fake news

after reading the article:

a normalized statistical value is generated for a particular user's
willingness to report news they disagree with.

yeah, nothing to see here..... :-/

I'll write a note to the journalist, hopefully they can elide or contextualize
this kind of thing in further articles.

------
lez
I'm curious how Facebook will distinguish between fake news and a sci-fi
essay I just posted on my blog. I'm guessing I'll have close to zero visitors
on my sci-fi blog.

~~~
MichaelKSpencer
Try writing about stuff that is against Silicon Valley, all of these platforms
band up against you - it's amazing.

------
21
Can you GDPR request this?

~~~
knorthfield
It's not personally identifiable data, so probably not.

~~~
icebraining
From the GDPR, Art.4 (1): "‘personal data’ means any information relating to
an identified or identifiable natural person".

This certainly seems to fit.

------
safgasCVS
There have been numerous times in history where free speech has been stifled,
and it turned out to be a bad idea.

But I'm sure this time is going to be different

~~~
aglavine
Free speech is not being stifled. They want to regulate content on their
site. You can publish your content in other places.

I appreciate fake news being cleaned from my timeline, if the cost is no more
than some occasional legit news being pruned once in a while.

~~~
safgasCVS
Alternatively, don't offload the responsibility for deciding what is
acceptable content to a powerful 3rd party, and use your common sense. Free
speech isn't there to protect only the stuff you agree with.

~~~
aglavine
It takes valuable time to do that, and I don't care who does the cleaning as
long as it works. If it doesn't, people will be aware of that, and sites like
Facebook risk losing all of their business in that scenario.

------
qrbLPHiKpiux
Didn't I see this Black Mirror episode?

~~~
bopbop
No, this is from the next series and has a happy ending /s

------
octosphere
Facebook is already strict enough and doesn't need detection algorithms like
this. I once set up an array of doppelgänger accounts to see how long it would
take for Facebook to correlate the accounts and ban all those accounts in one
fell swoop. It was not long until each account was subjected to a rigorous
test, where I was asked to match people's Facebook photos with their username.
A daunting task if most of your friends were token friends added merely to
boost your profile and make it seem popular. Of course I managed to solve half
of those prompts, but the other doppelgänger accounts were swiftly banned,
leading me to think that Facebook's algorithm works half the time, which in
computer science terms is not good enough, but in reality, is enough to deter
a substantial amount of bad actors.

------
_bxg1
I'll just leave this here

[https://www.wired.co.uk/article/china-social-credit](https://www.wired.co.uk/article/china-social-credit)

~~~
pc86
A. A dictatorial, totalitarian government rating all its citizens based on
opaque criteria and using that to restrict access to essential services,
financial support, and general life and liberty.

B. A private company scoring its users to combat spam, as all social networks
do.

~~~
_bxg1
It's a short hop to banks (also private companies!) asking Facebook for those
numbers, and it's not a stretch for the government to do the same.

~~~
dahdum
I wonder if they will let you advertise to people based on trustworthiness,
seems like a metric that would be rather useful for a lot of different
scenarios.

Also fantastic as a hiring screen.

------
ada1981
So the bots will auto-post links to Wikipedia articles in certain groups, and
then 1 in 1,000 will be posted in the target groups with the fake news.

This will become like gaming Google rank.

~~~
MichaelKSpencer
I had a revelation a few days ago, a crypto "Airdrop" is nothing but a
campaign that turns people into bots.

------
jwilk
Archived copy without GDPR nag screen:

[https://web.archive.org/web/20180821143120/https://www.washi...](https://web.archive.org/web/20180821143120/https://www.washingtonpost.com/technology/2018/08/21/facebook-is-rating-trustworthiness-its-users-scale-zero-one/)

~~~
MichaelKSpencer
Thanks, I've yet to read the original.

------
kstenerud
Facebook is finally learning from the playbook of insurance companies, using
your hangouts, friends, job, acquaintances, activities and behavior to assign
you a risk score.

It's designed to reduce THEIR risk, which means by default that it will be
stacked against YOU, and they won't care about those getting ground up in the
wheels of the machine.

------
ForHackernews
Interesting. I wonder if this includes lies told in private messages?

~~~
MichaelKSpencer
That would be the day - encrypt your Messenger convos, but they show Ads and
ensure you aren't bullying or lying to anyone.

~~~
ForHackernews
"No honey, I'm still at work"

 _< GPS Location: Mistress' apartment>_

trustworthiness_score -= 0.1

------
gfo
Is there such a thing as a perfect algorithm, where you can publish it and
trying to "game" it is nearly impossible?

In this context, a "perfect" algorithm would be one where someone trying to
"game" the system would essentially be forced to post only true content to
achieve a near-perfect score.

I hope one factor in their scoring is whether someone repeatedly reports
factually accurate content as false, or does the same for opinion-based
content. Of course, that then requires a means of validating the content as
opinion, or fact-checking it, but they're already heading down that path.

~~~
trumped
It's not a matter of if they will game the system, it's a matter of when (no
software is perfect, not even PageRank).

------
halfjoking
"You're either a 1 or a 0, alive or dead"
-[https://m.youtube.com/watch?v=r3cM5P8tUhc](https://m.youtube.com/watch?v=r3cM5P8tUhc)

------
creaghpatr
I'm here for the Black Mirror 'Nosedive' comparisons.

~~~
dwd
It's very Brookerian.

I think we can all agree that the secret algorithm is going to be in the
social signalling - job status, who you know, where you live.

Make everyone's rating public and you're there, or worse make it available to
whoever pays.

------
j-c-hewitt
As they will hire biased 'fact'-checkers, you will wind up with people getting
greylisted by stating controversial but popular positions (e.g. "eating meat
is immoral" / "raising taxes lowers economic productivity" / "welfare programs
create perverse incentives" / "sales taxes are regressive" / "Islam is an
inherently expansionist religion" / "Islam has bloody borders" / "Christianity
is inherently misogynist" / "Abortion is murder" / "Republicans are Nazis in
discount menswear" / "I voted for Donald Trump and I support our president").

What you will be left with is what many people originally flocked to the
internet to avoid: the 'MSM echo chamber' of anodyne center-left opinion
against which no dissent will be tolerated.

------
makecheck
One of the things I liked about Slashdot was the option to periodically review
random comments (with context), and how they were rated, with the option to
state whether this was appropriate. Furthermore, the ratings themselves had
descriptors: Insightful, Funny, etc.

I think this idea should be taken further so that you _can’t post as much_ (or
even at all) unless you are actively participating in meta-moderating, i.e.
helping the community.

------
rmrfrmrf
With how many bits of precision?

------
tzm
If measuring emotional response is a signal for trust, does non-participatory
behavior yield the lowest trust "score"?

------
nixarian
But remember, China's social credit system is terrible! How could anyone think
of rating people like that?

------
protomyth
Why do I get the feeling someone is having fun making fuzzy system models?

[https://en.wikipedia.org/wiki/Fuzzy_logic](https://en.wikipedia.org/wiki/Fuzzy_logic)

------
justfor1comment
Now it's my personal goal to have a score of zero on FB.

------
misterbowfinger
Could we change the title to "Facebook is rating the trustworthiness of its
users"? The zero to one bit is irrelevant and obvious to programmers.

~~~
MichaelKSpencer
People just don't like being equated to a number, even if it's for a "good
cause".

------
InfiniteBeing
I'm going to guess that a lot of truly trustworthy right-leaning people are
going to get worse scores than they should, because they don't believe the
left-wing-dominated media's narratives.

~~~
prolikewh0a
>left-wing-dominated media's narratives

Like what?

------
diogenescynic
I’d rate Mark Zuckerberg a zero.

~~~
MichaelKSpencer
So not running for President in 2020? I think Bernie at 80 might make a better
leader.

------
synaesthesisx
Are you a one or a zero?

