Fact Checks (developers.google.com)
126 points by fanf2 on Oct 22, 2017 | 100 comments



I wonder how far they are willing to go with this. How about these kinds of claims:

"Increasing the minimum wage helps improve the economy"

* Fact check by Conservative-example: false

* Fact check by Progressive-example: true

Another good one:

"God exists":

* Fact check by Conservative-example: true

* Fact check by Progressive-example: false

This is going to be fun. This is typical of Google, providing a technical solution to a human problem that can't be solved easily. Not everything can be tweaked with an algorithm.


> "God exists"

> Fact check by Progressive-example: false

I know you're making an observation, so this isn't directed at you but at those who got us to this segregation; no offense intended towards you, friend.

It makes me so sad that this is the assumption. Yes, the traditional definition of God is not one many of us subscribe to, but grouping progressives with atheists and conservatives with religion is just as much a part of the problem as "fake news".

Case in point: myself, a progressive who found his spirituality in his late twenties while working at a startup and finding the church of consumerism unacceptable. Yet, look at that, still progressive.


Well, in the US, most people believe in God, regardless of political affiliation. The percentage of Democrats who are also atheists is larger, but it's still under 15%.


True, but "nones" (those with no religious affiliation) are on the rise, now accounting for a fifth of the U.S. population, and the overwhelming reason for that is a backlash against the religious right.

Sources: Gallup, https://www.sociologicalscience.com/articles-vol1-24-423/


The "Nones" are actually two different groups of people:

(1) Those with religious belief but no religious affiliation. They believe in God and/or an afterlife, but they don't participate in or identify with any organised religion.

(2) Those with neither religious belief nor affiliation. They say that there is no good reason to believe that God exists and that when we die we irreversibly cease to exist.

Both groups are growing, but group (1) is significantly bigger than group (2).

I'm not sure how helpful it really is to lump these two groups together.


Unfortunately, while I personally appreciate you sharing this, I suspect you are being downvoted because your post is too far off the primary topic--which is an appropriate reason to downvote--and not because your idea lacks merit (I think it has some).


"Progressive" and "Conservative" are sometimes used in extra-political contexts this way, e.g. in linguistics when characterizing dialects.


> "God exists"

> Fact check by Progressive-example: false

Not even false. Religion is immune to fact checks.


But if you introduce categories that are immune to fact checking, the flat earth society will claim that status too. The same goes for evolution.


[flagged]


-1 Flamebait


Not my intent, but can totally see how it could look like that.


The reason you were down-voted to oblivion was your presumption that religious belief is irrational. Rationality is a slippery concept (if not a completely empty one), and it seems like you've chosen to define it as strict adherence to the doctrines of Physicalism. This kind of view plays well in /r/atheism, but outside those quarters it comes across as arrogant and naive.

My advice: try to empathize with those who don't share your chosen axioms. Certainty is the enemy of critical thinking.


Your appeals to physicalism as a limiting factor might do well in a university debate class but they aren't philosophically sound so much as they are hand-waving.

Nice straw-man of my position though, that was masterful.


Would you please stop posting acerbic swipes to HN? You've done this a ton, and it's against the site guidelines: https://news.ycombinator.com/newsguidelines.html.

(Taking a bit of a leap from your username: there's no bigger fan of wicked British wit than I, but you can't routinely do it here. The container is too fragile to withstand acid, and all that happens if you breach it is tidal waves of internet shite in your wake. Following the rules is a bit of work and a bit boring and bland, but not nearly as boring as it'll be if HN ends up in that state, i.e. dead. Blandness is the cost we have to pay to have a public forum.)


That is the best case scenario. The worst case scenario is when one of those is prohibited from publishing "fact checks" because they "don't meet our high standards" (which Google is free to set as they see fit) while the other is free to declare "false" anything the opposing tribe says, because everybody knows they're a bunch of damn liars anyway.

Also, "fact checkers" are free to misinterpret the content of the article, i.e.:

"Increasing the minimum wage helps improve the economy"

* Fact check by Conservative-example:

Claim: "we should take all the money away from successful people and give it free to drug addicts and moochers"

Verdict: false

* Fact check by Progressive-example:

Claim: "we should let poor people to just die from hunger"

Verdict: false

etc. Neither of these contributes much to the discussion; in fact, it makes discussion harder by caricaturing the positions of opposing sides and making it harder to weigh the actual arguments, all while wearing the veil of "objective fact checking". The expression "fact checking", which once meant verifying objective facts, has already been hijacked by propagandists to make their propaganda sound more plausible. I predict more such abuse is going to happen and the term will be further eroded until it's completely meaningless.


But your examples aren't 'facts', they're 'opinions'. Unless we're going to just throw away the definition of 'fact', we shouldn't be pretending that those example claims can possibly be 'facts'.


> But your examples aren't 'facts', they're 'opinions'.

Exactly.

> we shouldn't be pretending that those example claims can possibly be 'facts'.

We shouldn't. But "we" do all the time. Slapping an API and a UI with a Google logo on it won't change that. But it will let people pretend that now these are "facts" - after all, the source of all knowledge in the universe, Almighty Google, says right here "fact check"! How's that not a fact?!


Yes, but that's actually a great thing. It allows Google to discern controversy. If people disagree about a fact, that's fine, Google can choose how to present facts that have that property. For the ones people do agree about, they can present them as truth.


There are things that are false that everyone believes and things that are true that nobody believes...

Google's algorithms weren't designed to tell the truth, only to tell what's popular.


Thank you. For example, one of the worst forms of evidence, not permitted as science, is eyewitness testimony. People are too fallible to be relied on. We only forget that because of our judicial system, unfortunately. But what's even worse is what's next in line: a collection of testimonial opinions, or a democracy on truth, especially one run by a non-open-source algorithm.

Maybe nothing bad will come of this, but every step Google makes has me worried that they will abuse their power. There's something that seems dangerous about a nation of people fact checking primarily via a single corporation.

Remember when NASA burned up the Challenger space shuttle because of the groupthink that prevented a couple of engineers from letting everyone know they knew this would happen? And NASA is not evil, nor does it have incentives to allow this, which is why we need to work together to be diligent in protecting ourselves from these problems--they easily sneak in.


Otherwise known as "common sense". If this is an engine to promote common sense, then we're just digging ourselves a bigger hole.


I'm not sure what your point is. If a thing is true but nobody believes it, then there is no way to determine that truth from Google searches in the first place.


That's exactly my point. Google is advertising this as a truth-telling mechanism, but it's a "majority belief" telling one instead.


Did you actually read the link? It's clear that this Google feature is designed to identify fact checks from websites, as opposed to providing an authoritative fact check itself.

If you run a website that hosts fact checks, you are now encouraged to annotate these fact check articles with "ClaimReview" elements that describe the fact in question and your website's verdict on the fact.

In the search results, users see "Fact check by Example.com". Google does not present fact checks as a "truth-telling mechanism" in any way, shape, or form.
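
For anyone curious what this looks like in practice, here is a minimal sketch of a ClaimReview annotation based on the schema.org vocabulary the docs reference. All of the URLs, names, dates, claim text, and rating values below are placeholders, not anything Google prescribes:

    <!-- Hypothetical example: every value below is a placeholder. -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "ClaimReview",
      "url": "https://example.com/fact-checks/minimum-wage",
      "datePublished": "2017-10-22",
      "claimReviewed": "Increasing the minimum wage helps improve the economy",
      "author": {
        "@type": "Organization",
        "name": "Example.com"
      },
      "itemReviewed": {
        "@type": "CreativeWork",
        "author": {
          "@type": "Organization",
          "name": "Example News Site"
        },
        "datePublished": "2017-10-20"
      },
      "reviewRating": {
        "@type": "Rating",
        "ratingValue": 3,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Half true"
      }
    }
    </script>

The "Fact check by Example.com: Half true" line in the search results would presumably be assembled from the author name and the rating's alternateName.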


> In the search results, users see "Fact check by Example.com"

Correction: the user sees "Fact Check by example.com: False"

https://developers.google.com/search/docs/data-types/images/...

If that is not Google "presenting it as a truth-telling mechanism", it's a hell of a dark pattern.


It is completely clear that the fact check comes from a particular source. Stop calling it a "dark pattern" when this element is not deceptive in any way, shape or form.


It's not like people didn't know how to disagree on the internet before. Google, however, purports to structure this disagreement in a way that predefines certain semantic roles. For example, it designates one content provider as "fact-checker", making their claims inherently more authoritative (because a fact check is always more authoritative than the article under review, isn't it?) and capable of rendering verdicts which cannot be further challenged, while the other role is completely passive - even the claims are formulated not by the original article but by the checker, leaving all the authority in the hands of the checker role.

This does not look like a good setup for structuring the discussion. Why would somebody get so much authority just for using some API? It's like giving people passwordless sudo access and then wondering whether it will work well.


Why are you assuming that Google is going to trust these people implicitly? They are providing markup so they can associate claims of fact with identities. They can then choose how to weight those claims using other evidence.


> Why are you assuming that Google is going to trust these people implicitly?

How are they not going to? Hand-verify each provider and each claim? Not scalable, and volatile: a reliable site today can become trashy in two years. It's either a completely uncurated list (welcome, 4chan trolls, a food bonanza for you!) or a list curated according to the biases of a small overworked team isolated somewhere deep in the guts of the Google corporate behemoth. What other options are there for selection?

> They are providing markup so they can associate claims of fact with identities

URLs already provide that association. Google's markup assigns a specific elevated role to "fact-checkers", implying they are somehow more authoritative than the original article; otherwise there would be no need for a special designated status for them.

> They can then choose how to weight those claims using other evidence.

That's what I am afraid of - that they (Google) would choose to weight the claims however they see fit, thus blocking information they consider unnecessary for me and skewing my informational stream.


It's more important to start with the easy stuff than to worry about the hard stuff.

And this isn't about a technical solution to conspiracy theories, it's about the markup that will help surface the highest quality human answers to conspiracy theories.

Reminds me of the Bielefeld Conspiracy [0], a German satire of conspiracy theories that spread on Usenet.

The theory poses three questions:

1) Do you know anybody from Bielefeld?

2) Have you ever been to Bielefeld?

3) Do you know anybody who has ever been to Bielefeld?

A majority are expected to answer no to all three queries.

Anybody claiming knowledge about Bielefeld is promptly disregarded as being in on the conspiracy or having been themselves deceived.

Conspiracy theories will always be with us, but at least now Google can help show high-quality answers to easy questions like Bielefeld.

[0] https://en.wikipedia.org/wiki/Bielefeld_Conspiracy


>I wonder how far they are willing to go with this.

It's reducing assertions to single sentences. Calling it fact-checking is also deceptive. That phrase is a tool of the media to get viewers to stop looking at counter-arguments and to take the pundit's side. It's another 'headline', and for sure it will attract click-bait content from news companies.

It would be different if it showed the assertions and then linked to rebuttals (and then linked rebuttals to rebuttals -- like a debate, you'd have a limited number of "rounds" constrained by the amount of space Google allows you). However, dropping a bool is going too far. That implies the matter is settled, and in politics it isn't likely to be settled. At the very least, it needs "mostly true/false" and "partially true/false".


Agree with you. It actually isn't boolean, though:

-1 = "Hard to categorize"

1 = "False"

2 = "Mostly false"

3 = "Half true"

4 = "Mostly true"

5 = "True"

What it doesn't have, though, is "debatable" or "unknown". As if we always have perfect information. The "Hard to categorize" value fits better for unintelligible questions that don't make sense at all.

I guess Google could assign "debatable" if different sources assign different truth values to the same claim. But that's pretty wack too, because a single source can't concede that a question is debatable and is forced to pick a side.
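
In markup terms, each of these presumably maps to the ratingValue of the ClaimReview's reviewRating, with the human-readable verdict carried in alternateName; the -1 "hard to categorize" case would then sit outside the declared worstRating..bestRating range. A sketch, where the numeric bounds and the "Half true" label are this example's assumptions rather than canonical values:

    <!-- Sketch only: the bounds and label here are placeholder assumptions. -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "ClaimReview",
      "claimReviewed": "Example claim",
      "reviewRating": {
        "@type": "Rating",
        "ratingValue": 3,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Half true"
      }
    }
    </script>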


I had interpreted the "hard to categorize" as "hard to put in any of the other categories", which could also mean "unknown" or "conflicting information".


> This is typical of Google, providing a technical solution to a human problem that can't be solved easily.

Google didn't want to do this. It was the traditional media attacking Google, Facebook and social media after the last election.

> Not everything can be tweaked with an algorithm.

It's not a tweak. It's getting google to align with the traditional media's rhetoric.

If the Washington Post or the New York Times says X, then Google has to adhere to X.


I'm not sure I understand this objection.

Google already lists sites that claim god does and does not exist (and many variations on those themes) and presumably will return them in searches based on whatever the Google algorithm does. This new thing just lets the site encode that information for machine readability when it is responding to a specific published claim.


These kinds of claims are almost never what fact-checkers are after, even in places with particular political leanings.


> Not everything can be tweaked with an algorithm.

But the problem they are trying to solve was kind of caused by algorithms...


This will be abused by trolls more than used by the "good guys". It's just a matter of time before some trolls decide to use this feature to create more lies about elections, Hitler, the Holocaust, immigrants, and every conspiracy theory out there. You're going to search "who did 9/11" and have false websites trying to "fact check" the top pages (which, at least today, seem to be sane).


Yep, 4chan will have a field day with this. I think the goal is to harness the wisdom of experts, but I can't help but think the end result will just be more noise in the search signal-to-noise ratio.

Without human input as to which of these inputs are correct and real, it's just another input that can be gamed.


Pretty much. Google was sad when it realized that 10^x layers of "Deep Learning" couldn't detect propaganda, and it will be sad when trying to get people to do it for free doesn't work, either.


I could see this working if sites with a good track record were trusted more when making claims.

A site's trustability could be punished when one of its fact checks is flagged for review and discovered to be wrong.

Will be interesting with Wikipedia in the mix. It could be the web's fact checking backbone.


> I could see this working if sites with a good track record were trusted more when making claims.

Google mentions this in the docs, but if it's page ranking, that can be gamed, and if it's "pay us to be a fact checker", well...


It could be "if your fact check gets flagged a lot we'll review it and punish your reputation if it's inaccurate."

Not ideal to leave Google in control of Truth (and they might not want the burden), so it'd be interesting to offload that responsibility to Wikipedia, which seems to have a pretty good system for coming to a consensus.


I suppose "trolls" also includes companies with an interest in pushing their own agenda, which historically have been far more influential than the more diverse and random Internet trolls.


Or say ... Russian propaganda machines. And now we've come full circle.


Assuming Google trusts anything marked as a fact. Which would be a ridiculous assumption.


They seem pretty lax about "rich snippets" already. Perhaps they don't trust them, but they present them as things to be trusted.


It seems like you need to be manually approved by Google? (Hence the "If your organization is interested in implementing, or seeing issues with, ClaimReview, submit your contact information here".)


The linked page explains that fact checks aren't guaranteed to be displayed; an algorithm similar to PageRank chooses which checks, if any, should be shown.

Fact checking doesn't automatically mean people will believe the fact checker over the top link anyway. Exposing people to disagreement on the issue from the start may be important, because once people form a belief, they're much more likely to hold that belief despite contradictory evidence.


Indeed. Google already has a huge astroturfing problem and there is no evidence this is going to do anything to address this at all.

Incidentally, those who think ML would help need to take a look at the training corpus.


Indeed, fact checking systems are only as good as the link between identity credentialing services and a person.

http://schema.org/ClaimReview (as mentioned in this article) is a good start.

A few other approaches to be aware of:

"Reality Check is a crowd-sourced on-chain smart contract oracle system" [built on the Ethereum smart contracts and blockchain]. https://realitykeys.github.io/realitycheck/docs/html/

And standards-based approaches are not far behind:

W3C Credentials Community Group https://w3c-ccg.github.io/

W3C Verifiable Claims Working Group https://www.w3.org/2017/vc/WG/

W3C Verifiable News https://github.com/w3c-ccg/verifiable-news


In terms of verifying (or validating) subjective opinions, correlational observations, and inferences of causal relations, #LinkedMetaAnalyses of documents (notebooks) containing structured links to their data as premises would be ideal. Unfortunately, PDF is not very helpful in accomplishing that objective (in addition to being a terrible format for review with screen readers and mobile devices): I think HTML with RDFa (and/or CSVW JSON-LD) is our best hope of making at least partially automated verification of meta-analyses a reality.


Everyone: "We've got a problem with fake news and journalistic integrity on the internet."

Google engineers: "Oh, I can make an API for that!"


Well, this is a problem caused by technology to begin with.

I don't get the negativity directed at the intention of this. Sure, you can criticise the particular execution.


Partisan fact checking -- which this effort intends to standardize and elevate -- is part of the problem, not the solution.


Can you explain this?


It's a problem that has always existed. We are only now more aware of it because there are fewer barriers to communication.

Technology is an easy scapegoat for what are very human problems.


It's a problem that can't be stamped out, yes, but we should try to mitigate it, no?


This sounds very useful, and I can think of at least one claim I would very much like Google to fact check. It involves an alleged difference in the distribution of personality rates between men and women.


Fact check: False. Verified by: You want to be fired, don't you?


That assumes such a question could be definitively answered. First you'd need to quantify the scale of personality traits. Too much of the discussion around this is hand-wavy.


My understanding is that the big 5 personality traits have been shown to be fairly robust and quantifiable.

Caveat: I have only recently begun digging into this area.


What's a personality rate?


I meant to say "traits". I'm not even going to try to figure out how that came out as "rates".


Nice try, Mr. Damore ;)


I'd love to know how you would check 'true or false' on logic-based speculation such as this article on the recent Las Vegas shootings https://www.veteranstoday.com/2017/10/20/las-vegas-massacre-...


-1 => "Not even wrong/conspiracy theory"

Honestly, this isn't the sort of thing that needs to be fact-checked. It just needs to be tagged as baseless. It's a little different from the types of things this solution is aimed at.


I disagree. This is Google's tools being used against speculation on why there are so many discrepancies and unexplained things in the 'official' narrative of what happened. As I said previously, how do you 'fact check' something that is questioning those logic lapses?

'Being tagged as baseless' reminds me of the old Soviet Union.


I think the trolls will obviously come out to attack this, but it seems like a nice extra data point from sites. Google isn't obligated to show the data; they could just whitelist sites that they believe are 100% not trolls and then consider other sources based on a history of correctness or human verification. Imagine if they just added the top 10 news sites as valid sources. Then I bet a site like Snopes would pop up as valid because it has a history of agreement with some credible source. It's like PageRank with trust instead of inbound links.


Google has a logistical leg up here, in that the internet isn't a sea of faceless IP addresses like it might be for a startup in the same position.

Google has a lot of information on us that can inform how it rates this new information.


What could possibly go wrong?


I guess there's no point in trying? /s


Right, but when the most likely outcome is that it will make the problem worse rather than solving it, the argument for not trying becomes a little stronger.


The only winning strategy is skepticism of all claims.

The idea that computers will help discern absolute truth in matters that mostly are based on opinions and prejudice is pitiable and arrogant.


Big Brother further pressing their place in defining what is and isn't true.


It's a good thing Edward Bernays never ran Google.


Pretty sure Schmidt has read all of his books.


A good attempt, and it's good to see some CSR in this domain, along with the demonetization drive going on on YouTube. Kids (and adults) may be slowly getting better at discerning truth from motivated narratives in advertising, but these critical literacy skills are somewhat underdeveloped when it comes to making such a discernment when truth is algorithmically generated. Whether this will be an extension of the problem or a partial solution, we'll have to wait and see.

Those hoping that it will stem some of the reactionary influence we've seen establish itself over the last two years will be disappointed, I fear. A fact checker avails you little against a fiercely anti-intellectual movement.



Looks like this is only for approved Google News sources. It’s a good start, but I’d be worried about relying on something like this.


Given that Google News literally showed a 4chan thread about an invented Islamic terrorist as a news source during the Las Vegas shooting, I'm extremely skeptical that this will end well.


Sure, but at some point Google has to decide whether they're a search engine and should return relevant results based on what the user is looking for or a publisher that curates results and is therefore responsible for the content that appears in their results.


I don't really think this is even remotely useful, and it will probably be abused within the hour.

Plus, I'm not sure I want to trust Google to check whether what my website or blog says is true. I don't see any extensive track record of Google being a fully neutral entity when it comes to things they care about, which would only hamper discussion on the topic.


We don’t need technology to fix this. We need better education. This is a problem not created by technology but by ignorance.


Arguments that are based on the premise that everyone else is stupid are condescending and wrong.

https://xkcd.com/1901/


How do we solve the issue of trust? This is the fundamental issue I think is at stake here.

It reminds me of the two generals problem https://en.wikipedia.org/wiki/Two_Generals%27_Problem


This is an interesting idea that seems like a good attempt at informing others.

But the big issue is that people don't change their beliefs even when presented with information that refutes them. They believe "facts" because they align with their ideology.


But of course you are above that sort of thing. Condescension isn't an argument.


I think the best method for detecting propaganda or fake news is to look for rhetoric. If the author is overly verbose or presents something other than basic facts or logic, they're probably lying.


This is a great way to dismiss in-depth investigations and listen only to soundbites.


Pretty much, Google is sending out the message that they have failed.

Can we expect the overly hyped Palantir stock to crash now that Google has publicly stated it can't do even basic text/sentiment analysis?


Supercool idea! I wonder what the W3C has to say about it though.


Why should the W3C have anything to say about it? It doesn't conflict with HTML in any way; it's just structured content inside HTML.


It's JSON-LD, which has been a standardised W3C recommendation since 2014.


Ministry of Truth as a Service, now that's something google can get behind!

Either way, this factbook would greatly improve Siri, Cortana and the Google counterpart.


It's absurd for content creators to implement it. Google will leech your work and use it to build their Google brain bigger; when you're no longer necessary, Google will ditch you.

Same with the stupid Google captchas that make me sick.


Suppose a left-leaning fact checker is more critical of conservatives? Many believe this is common.

"Conservatives just lie more"

Suppose they are more critical regardless of the truth.

And what about statements which can be interpreted as false by an uncharitable reader? This is extremely common.


Someone at Google should have read and understood this first: https://en.wikipedia.org/wiki/Epistemology


Quasi-distributed censorship.


We don't need Russians anymore. Google's new mission: organising the world's fake information and making it accessible to everyone.



