
Google featured snippets are worse than fake news - scribu
https://theoutline.com/post/1192/google-s-featured-snippets-are-worse-than-fake-news
======
ronack
I've also wondered why Google isn't held responsible for publishing libelous
claims and hoaxes as facts. Examples:

Is Hillary Clinton a pedophile?
[https://www.google.com/search?q=is+hillary+clinton+a+pedophi...](https://www.google.com/search?q=is+hillary+clinton+a+pedophile)

Is John Travolta gay?
[https://www.google.com/search?q=is+john+travolta+gay](https://www.google.com/search?q=is+john+travolta+gay)

Does Lil Wayne have HIV?
[https://www.google.com/search?q=does+lil+wayne+have+hiv](https://www.google.com/search?q=does+lil+wayne+have+hiv)

Perhaps worse, this is what drives Google Home's question answering. Yes, they
say "according to so-and-so" first, but if Google is responsible for
"organizing the world's information", they are essentially endorsing that
answer as the best response. They've gone too far in favor of recall over
precision/reliability and need to dial it back. Otherwise you end up with crap
like this:

Is Earth flat?
[https://www.google.com/search?q=is+earth+flat](https://www.google.com/search?q=is+earth+flat)

~~~
dmboyd
Wow, I've mostly switched to DuckDuckGo over the past two or so years, so are
these "Google Assistant" style placements now part of their core product? Or
is this a setting you've enabled?

If it's part of the core... Wow, what a useless cesspool Google has become.

~~~
ronack
Yes, those are default search results and the same content that drives Google
Home's voice responses when you ask it a question. Originally Google stuck to
Wikipedia/Knowledge Graph types of answers, but it has expanded this in recent
months to try to answer just about any question you search, however dubious
the answer's source.

------
DanielBMarkham
Yikes. More of this "People only want to be told what to do" stuff.

I can easily think of a dozen political questions for which no simple answer
would be correct -- the language is simply too fuzzy. For many, many things,
this is a good idea. The date of Easter is the date of Easter. But the glaring
danger here is that for many things this is a freaking evil dystopian
nightmare. Why? Because Google will keep tweaking these impossible questions
_until they look real enough to most people that nobody complains._ At that
point, millions, perhaps billions of people are asking complex questions to a
little box that has designed itself to give a plausible but incomplete or
wrong answer.

Epistemology, Google. There are things you can know and things you cannot.
Please do not treat them all the same.

~~~
maxerickson
The date of Easter depends on which calendar you use. The second-largest
Christian church uses a different calendar than the largest one.

~~~
DanielBMarkham
It's a great point. It shows how easy it is for somebody to think "wow, that
question obviously has a clear answer" when the opposite is true.

Take the lead example: Was President Warren Harding a member of the KKK?

We know the KKK said he was. We know he vigorously denied it. We know that
biographers of him and experts in his history do not think he was.

But how could you ever prove that somebody was not a member of a secret
society? The entire purpose of secret societies is that the outside world will
never know whether you're a member or not.

Look, I'm happy to assume he wasn't, and at some point only kooks continue to
chase things down past all reason, but just assume for a second that he was,
and that somehow that fact could be determined but the research has not been
done yet. So history student X starts wondering about it one day and types the
question into Google. Never fear! Google has the answer: he wasn't. And this
is Google's answer because _this answer has the benefit of looking the most
plausible to the greatest number of people_. Student X has an answer, gives it
no more thought, and wanders off to think about other matters.

Student X is now more stupid. The world as a whole is now less productive. And
some preponderance of data that academics have assembled somewhere around the
year 2015 has become truth for all time. That's amazingly fucked up, Google.
Please stop.

~~~
theseatoms
Google should report a probability distribution for each truth claim.
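As a rough sketch of what that could look like: collect the answers found across sources and weight each by some per-source reliability score. Everything below (the function name, the sources, the scores) is invented purely for illustration; nothing is implied about how Google actually works.

```python
# Hypothetical: aggregate answers from several sources into a probability
# distribution, weighting each source by an (invented) reliability score.
from collections import defaultdict

def answer_distribution(claims):
    """claims: list of (answer, source_reliability) pairs."""
    weight = defaultdict(float)
    for answer, reliability in claims:
        weight[answer] += reliability
    total = sum(weight.values())
    # Normalize the weights so they sum to 1.0
    return {answer: w / total for answer, w in weight.items()}

claims = [
    ("no",  0.9),   # e.g. an encyclopedia
    ("no",  0.8),   # e.g. a newspaper
    ("yes", 0.1),   # e.g. a conspiracy blog
]
print(answer_distribution(claims))  # no: ~0.94, yes: ~0.06
```

Of course, as the replies below note, producing meaningful reliability scores is the whole problem.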

~~~
jerf
While an interesting idea in theory... and I mean that!... in practice that's
saying they just shouldn't provide answers. Even saying for the sake of
argument that Google could assemble a meaningful and correct probability
distribution, itself a rather bold statement, the average Google user won't
know how to correctly interpret such a thing.

~~~
Bartweiss
And realistically, asking for probability distributions over unbounded sample
spaces is trouble even for human experts. I suppose the space for "are
mermaids real?" is workable (there are caveats like "yes but extinct", but
yes/no is basically sufficient), but for anything subjective or political it's
almost hopeless.

That said, this is a _really_ interesting idea for behind-the-scenes work. The
public might not want percentages, but I suspect experts could get some
interesting results by diving into confidence levels and secondary answers.

------
TheGRS
While hiring a ton of people to fact-check would be one solution, that
obviously wouldn't be very scalable. I think the problem lies in Google's
algorithm. They seem to be pulling answers from well-visited sites that
purport to have an answer to these questions. Quantity of visits does not
equate to truthfulness. At the very least, Google should whitelist certain
publications, like Encyclopaedia Britannica or Wikipedia, for answers to
pretty simple stuff. For more complex stuff, maybe they could source academic
journals and certain newspapers. But throwing caution to the wind and hoping
that the web crawler knows best will really hinder their ability to be a
source for gaining knowledge.

What's weirder to me is that they seemed to be going with my proposed route
for a pretty long time and only recently started providing dopey answers.
Maybe it's part of a grander experiment they're doing to vet question-
answering AI?

~~~
abandonliberty
Metrics are destroying the web.

Once upon a time it was possible to identify "good" pages by how often they
were linked - effectively crowd-sourcing the problem.

Natural selection: sites have learned to manipulate the game and the crowd.

Under pressure to optimize metrics that lead to SEO and better valuations, the
internet is getting less useful from a user perspective.

I don't want to watch a video/slideshow, download an app, register for a
forum, or read through a 2000-word fluff piece, with interspersed ads and
links to more information, in which the site has buried the one-sentence
answer to a 2-second question in order to maximize my time on the site.

This garbage is what makes it to the front page of Google these days. I guess
the poor sites that aren't user-hostile just aren't SEO'd enough.

~~~
gregmac
What's strange is this is not at all a new problem, and Google has
historically been quite good at fighting it.

The original search engines ranked pages by their content. Naturally this led
to gaming by including keywords (remember huge invisible sections of pages
that just repeated keywords thousands of times?).

Google's original PageRank algorithm was a complete breakthrough for this,
almost completely disregarding content and instead ranking results based on
the text other pages had used to link to the page. This was so good, in fact,
that most of the other search engines from the time didn't survive.

This once again led to large-scale gaming with techniques like link farms, and
an arms race as Google came up with ways to squash each new technique. The
quasi-legitimate "SEO" industry spun out of this.

I think now we're in the same cycle again, and at this moment scam sites are
winning. What's to be seen is if there's a looming breakthrough, or the arms
race will continue.
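For readers unfamiliar with the mechanics being described: the link-based ranking idea behind PageRank can be sketched with a short power-iteration loop. The graph and damping factor below are invented for illustration; this is the textbook formulation, not Google's actual implementation or parameters.

```python
# Toy power-iteration PageRank over a tiny hand-made link graph.
# links: dict mapping each page to the list of pages it links to.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # Each page keeps a baseline (1 - d)/n, plus shares of the rank
        # of every page that links to it.
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

graph = {
    "hub": ["a", "b"],
    "a":   ["b"],
    "b":   ["hub"],
}
ranks = pagerank(graph)
# "b" collects inbound links from both "hub" and "a", so it ranks highest
print(sorted(ranks, key=ranks.get, reverse=True))
```

The point of the original breakthrough was exactly this: rank comes from the link structure (and anchor text) of *other* pages, which is much harder to fake than keyword stuffing on your own page, until link farms came along.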

~~~
joshuamorton
I think a big problem now is that the sites are capable of tricking people.
That is, if you have a "Turing test" for a reliable content website,
blogspam.forums.net with 2000 backlinks obviously doesn't pass. On the other
hand, Reuters.com does. But so do The Onion, Breitbart, and
random_russian_conservative_propaganda_site.us, and so it becomes a lot harder
to differentiate the good from the bad without explicit curation by experts.
The sites can trick non-domain-expert humans, so what hope does a bot that
hasn't tried to learn domain expertise of the topic have?

------
TuringTest
Why is it that hard for engineers to rely on good old attribution?

If every Google featured snippet started its reply with "Breitbart says..."
or "Trent Online, the leading Internet Newspaper in Nigeria, said...", it
wouldn't matter so much in those inevitable cases when the reply is taken
straight from a white-supremacist or radical-anarchist forum. The problem
comes when the same reply is presented as "Google's true answer to the
question" without further caveats.

~~~
askmike
Because prepending that doesn't solve the problem Google wants to solve: once
Google figures out that you typed in a question, they want to give you the
answer to your question. Not tell you what other people think the answer is.
(I agree that it is broken in a lot of cases now, but prepending that is a
very bad workaround).

~~~
TuringTest
_> once Google figures out that you typed in a question, they want to give you
the answer to your question_

And that's the problem that the article rightfully denounces. Having the
answer to every question is not something that you can automate, not without
strong AI.

_> Prepending that is a very bad workaround_

I strongly disagree; for me it's a very sensible thing to do, and in fact it's
how we do it in a social context (_"I'm totally not making this shit up, I
read it in a scientific article / this morning's newspaper / a tweet"_). We
rely on the context where the knowledge came from as a proxy for how accurate
its content might be.

The default posture of engineers toward a problem of inaccuracy is _"make it
more accurate"_. That's the logical method when you're building bridges, where
we can describe the physics to an extremely accurate degree and where the
bridge would fall down or quickly degrade if you made a mistake.

But for problems that rely on inaccurate or uncertain information, and where
no exact models exist, the only reasonable way to build them is to degrade
gracefully. If you don't handle failures, the system will fail in spectacular
ways, because it _will_ fail sometimes.

The system should provide an escape hatch and allow the user to be in control.
The user of an automated system should always be able to inspect how the
system came to the provided result, and be able to override the given results
in those cases where it gets them wrong (at least some users in a supervising
role should, whom other users could notify).

Unless a revolution happens, Artificial Intelligence will only ever work in
the large if we put humans in the decision loop.

~~~
amarant
They do always append a link below the answer to the site where they got it.
Personally I find this to be the exact same solution as yours (not literally).
The problem is that some people don't click links anymore. It's a problem that
people are stupid, but it's not Google's problem, even if idiots use Google
(like everyone else).

~~~
TuringTest
> Personally I find this to be the exact same solution as yours (not
> literally).

It may be the same for you (if you remember to follow the link for every
single result that you get from Google, ever), but it's not enough for a mass-
targeted product; the way in which the information is laid out and its
relative prominence is essential to the way it will be perceived by the
majority of the people using the service.

This is not people being stupid; it's how people's brains are hardwired to be
efficient and only care about information that looks important from the way
it's presented, rather than having to make a rational analysis of the relative
importance of every information snippet they come across.

An upfront warning "this fact is brought to you by..." will have much more
impact than a small, semi-hidden link that you have to remember to press, even
if both convey the same information. (Moreover, the link is not available in
the voice interface.)

------
jacquesm
This is all mostly because google went from search engine indexing other
people's stuff to site that you go to to get answers (whether those answers
are based on copyright infringement or not is another matter).

A similar thing happens in Google News, where stuff from sites like
breitbart.com is mixed in with reputable news sources, making it look as
though they are of a similar degree of quality.

~~~
varjag
On Google Now, "news" sites like Infowars, globalresearch.ca and so on reign
supreme, unless you manually filter them out. Why is this hoax about the
"staged" MH17 crash on my newsfeed?.. oh right, "because you displayed
interest in the Ukrainian conflict". Fair enough then!

It is scary how much the default app on Android serves to legitimize the
fringe.

~~~
ino
It's also on Google Home, and having a voice say the lies makes it even
scarier:

[https://twitter.com/ruskin147/status/838445095410106368/vide...](https://twitter.com/ruskin147/status/838445095410106368/video/1)

edit: I see now that this video is lower down in the article; I thought the
article ended at "SIGN UP FOR OUR NEWSLETTER"

~~~
varjag
Yikes.

Google held off on releasing their Chauffeur because of the "uncanny valley"
problem, where they thought the tech would do more harm than good in its "sort
of working" stage. Perhaps they should have done the same with this.

------
lloydde
This is brutal. I'm often thankful for the quick answers to measurement or
history questions, but even for these easy questions I've seen Google present
somewhat incorrect or confusing information as authoritative.

~~~
jjeaff
Ya, I really don't see how they can justify displaying other people's content.
Those sites did the work, but they don't get the clicks or ad revenue, or even
a chance to keep the user on the site.

I wonder how Google would like it if someone launched a site that when you
search, it doesn't display results from its own database, but perhaps just a
selection of the best results from Google.

~~~
megablast
It is funny you say that.

I remember when you would search for an answer, and a small answer would be
included in the description under the link, so you wouldn't have to click on
it.

Then they started to go away, because no one was clicking on them because they
didn't need to.

Now you rarely get the answer without clicking on it.

So Google has made the internet worse.

~~~
pixl97
>So Google has made the internet worse.

By not including the answer?

Or, by including the answer and putting the sites that gave reliable answers
out of business in the first place?

------
benmcnelly
Google is a search engine for finding webpages, not facts. That's where this
"fake news" story should end. This is not a political problem, it's a societal
one, and you are barking up the wrong tree.

Wikipedia is a free online encyclopedia, created and edited by volunteers
around the world. It is not authoritative, and anyone who treats it as such
should be educated about what it is and is not. Just because it's moderated to
be supported by links to facts doesn't mean that every bit of content is free
of bias and the whole truth. It's generally accepted that this is the case.
The same goes for Snopes and some other fact-checking sites. They are
generally looked to for a reasonable amount of truthfulness based on
reputation. The same could be said for various media outlets, based on your
preference and bias.

Social media sites and search engines are not responsible to tailor their
content to fit your expectations of what truth or reality are. Stop being a
child.

"But people expect to be able to Google something and have the results and
snippets be the truth!"

Well, that's a problem for sure.

How about we try to fix that expectation instead? Feel free to teach your
children, and anyone you know who uses the internet, that (surprisingly)
anyone from anywhere can still get online and post anything at any time, and
that you should fact-check against multiple places and resources instead of
trusting the first result from your favorite search engine.

"But by offering snippets of programmatically generated search results (which
is super handy 99% of the time), Google is publishing false truths!"

Right, and you can probably still Google-bomb the image search to show an
image you want for a certain search term. It's the internet, not your personal
fact butler, no matter how it's advertised. It's 2017, and common sense and
intelligence are still recommended for most tasks.

~~~
peeters
> Google is a search engine for finding webpages, not facts.

How can you justify that claim? It seems pretty obvious to me that it's trying
to be both. Search for "current time utc" and tell me Google is just a search
engine for finding webpages.

~~~
ygaf
I admittedly do that: every now and then I type 'time' into Google, knowing it
goes beyond being a search engine. I also sometimes put in e.g. 'starbucks'
and I don't have to qualify the search any further; Google knows I want
opening and closing times.

However these snippet boxes, to the computer-savvy at least, are clearly just
another automated search result.

~~~
benmcnelly
And that is the real issue. I am as much to blame as anyone. Whenever a friend
or family member needs a computer fixed, or a computing thing explained, I
have been there to do it for them. We live in a more connected world, full of
non-savvy users.

------
dellsystem
I came across this issue just now when searching for "ec2 pricing": the
featured snippet links to
[https://aws.amazon.com/emr/pricing/](https://aws.amazon.com/emr/pricing/)
instead of
[https://aws.amazon.com/ec2/pricing/](https://aws.amazon.com/ec2/pricing/),
which looks close enough to the correct URL that I didn't notice it was the
wrong page until I realised I couldn't find costs for i3 (which just came
out). I'm surprised that no one at Google has fixed it yet; surely there are
at least some Google engineers who use AWS for personal projects.

------
scandox
A fantasy:

The idea takes hold in the collective Google-consciousness that a fact is
whatever the majority of their users believe to be a fact. There is a
precedent for this, after all. They defined Spam as whatever their users said
was Spam.

At first there is some friction between the knowledge of internal Google
personnel (especially down in that pesky engineering department) and the new
shared reality developing out in Userland. However, once an appropriate
terminology is developed (in-facts and ex-facts) even the engineers are
satisfied - after all their main concern was being able to draw nice clear
lines between things.

Meanwhile in Userland while the boiling point of water is largely unchanged,
the death toll in the Biafran blockade has shrunk to approximately 17 persons
(with some arguments over whether to count people who were over 80 when it
started).

Then one day an Engineer has a neatO idea which will eliminate the need for
storing two sets of facts and thus save valuable Petabytes of storage and,
more importantly, significant code complexity. He calls this idea Authority.
Pitching it to his superiors in the marketing department, he explains: "You
see, opinions are like assholes" - (there's a collective wrinkling of
marketing noses) - "everybody's got one!". He goes on to explain that some people have
more knowledge and are more scrupulous about accepting new facts than other
people. Authority would identify such users and give greater weight to their
activity and feedback. A profound silence falls over the room. A voice comes
over the meeting room Intercom:

"Are you saying...are you suggesting...that some of our users are better than
others?"

"Not better, " says the cowed Engineer. "Just more...um...authoritative.
Purely in an informational sense."

"Authoritative." The voice draws the word out as if profoundly contemplating
its meaning. "OK. We're going to submit this idea to our Hard AI (Larrey) -
the secret one we're afraid to network."

Everyone waits. After a minute the voice says:

"Larrey has a question for you, Engineer. It is this: If you're so smart, how
come you ain't rich?"

------
jim-jim-jim
When my city was hit by a nasty earthquake last year and I was using Google to
keep track of aftershocks, announcements, etc., I remember seeing some random
weirdo's blog entry, about the earthquake being caused by a government
superweapon, interspersed with all this otherwise valuable information.

This was in the special element at the top, not the normal search results.

------
saurik
The "why are firetrucks red?" one is really interesting as the page result
itself is great, but Google is pulling the wrong snippet of the page to
feature as the answer.

~~~
tbrowbdidnso
You can see from this that Google considers text earlier in the page more
important.

This page is odd because the false answer is presented first, so Google fails
to see the correct one. It could tell that the sentence had the form of an
answer to the question, just not that it wasn't the right answer.

~~~
raphman
For another example of this, try searching for "speed of USB 3.0".

------
rtpg
This is a great set of examples of this problem.

Of course, the offered solution is "hire a bunch of people to check the
facts", which seems to be underestimating the scale of this issue for humans,
and perhaps overestimating the difficulty of classifying credibility.

Considering that Google's bread and butter is general site credibility, adding
some "truthiness vector" into the mix doesn't seem impossible?

Someone might say "why does Google get to decide who is credible?", but they
already do this through the search results anyway. They just don't seem to be
able to differentiate between something matching a search and something being
factually accurate.

~~~
tbrowbdidnso
Fake news is just a new version of spam sites. They mostly exist to make money
or push an agenda.

People figured out that Google's algorithms could tell when things were
generally factually false, unless those things were recent news.

This leads me to believe their algorithms largely relied on news as a source
of truth. They're going to have to do something about that.

------
tyingq
Some more fun ones.

What foods cure cancer?
[https://www.google.com/search?q=What+foods+cure+cancer%3F](https://www.google.com/search?q=What+foods+cure+cancer%3F)

What spice cures diabetes?
[https://www.google.com/search?q=What+spice+cures+diabetes%3F](https://www.google.com/search?q=What+spice+cures+diabetes%3F)

Which presidents are rapists?
[https://www.google.com/search?q=Which+presidents+are+rapists...](https://www.google.com/search?q=Which+presidents+are+rapists%3F)

Can carrots cure cancer?
[https://www.google.com/search?q=Can+carrots+cure+cancer%3F](https://www.google.com/search?q=Can+carrots+cure+cancer%3F)
(apparently this one is fixed, try the search below)

Can carrot juice cure cancer?
[https://www.google.com/search?q=Can%20carrot%20juice%20cure%...](https://www.google.com/search?q=Can%20carrot%20juice%20cure%20cancer%3F)

------
raverbashing
Whenever someone says the humanities are "useless," point them to this
example.

Whenever there's a measure of something, people will optimize for that
measure. But trust is not directly measurable.

But even then, Google should have known better than to weigh fringe sites the
same as, for example, Wikipedia.

~~~
sonthonax
Is Breitbart a fringe site anymore?

~~~
raverbashing
They still keep putting out made-up stories, so I don't see why not.

------
valuearb
I'd like to read the article, but it's so irritating to have it load only a
page at a time on my iPad that it's unreadable.

~~~
jim-jim-jim
I have JavaScript off on my phone, but scrolling through the article feels
like it's "on ice" somehow.

Definitely a distracting design.

~~~
navs
I have JavaScript turned off on my iPad and the article is refreshingly quick
to load and easy to read. The rest of the site, however, is another story.
They're really abusing those CSS transitions.

------
chatmasta
Perhaps we need a meta tag for "citations" in news articles. That would give
the bots a way to determine how "factual" a news piece is (vs an editorial),
but would probably just end up getting gamed like every other meta tag.

And then there's the problem that CNN would be "fake news" because you can't
exactly provide a citation for "anonymous high ranking intelligence community
officials." But then... maybe that's a good thing.

------
DaUR
Google wants to avoid editorializing, and be able to throw their hands up and
say "not our fault!" due to the plausible deniability of "algorithms".

They can't pull that off anymore.

There needs to be a shift towards a Web-of-Trust-like system, where some
sources are recognized as authoritative. Government websites, like the CDC,
for example. Big media outlets. Scientific associations (IPCC, APA, etc). Not
all information is equal. Even if, for example, government dietary
recommendations are outdated in some ways, that should be the authoritative
answer Google provides, unless there is an equally-reputable but more accurate
source (here, prioritize medical sources over governmental ones).

A high school dropout doesn't know more about academic subjects than a PhD
grad. A blog isn't as authoritative as a reputable source. This is easy stuff.

Then the question becomes: "some people don't see reputable sources as
reputable". That isn't Google's problem, and it can't be fixed at their level.

~~~
tomjen3
That wouldn't work when somebody leaks something that is true but that the CIA
denies (as an example).

~~~
DaUR
There's a difference between "government sources" for hard facts (CDC), for
facts about the world (e.g. DoJ travel warnings, the CIA World Factbook, DoE
energy-saving tips), for facts about the government (e.g. for Trump's EO, the
suggested snippet would be the text of the EO), and _claims_ from the
government. These are easily distinguished.

~~~
tomjen3
Most, if not all, of those are paid for by lobbyists.

------
ridiculous_fish
Calorie counts are another thing Google often gets wrong. Try "calories in
corn" or "calories in black beans."

~~~
elcapitan
Strange. I'd imagine that's something that could be obtained from the Wikidata
API or a similar resource. Is it usually too high or too low? I regularly use
those info boxes for quick calorie information.

~~~
username223
Calories _should_ be easy:
[https://ndb.nal.usda.gov/ndb/](https://ndb.nal.usda.gov/ndb/)

------
losteverything
> Was President Warren Harding a member of the KKK?

We ask questions now because we can get "an answer" so quickly.

I would have liked the prof to push back on the student and ask, "Why do you
want to know?"

The answer is yes. Now what?

The answer is no. Now what?

Are you asking for what reason? What will you do with a yes answer and a no
answer?

Then I would break into a short lesson on decision making: decisions are just
based on future probabilities, and future probabilities are based partially on
the questions you answer.

If the question is for academic purposes only, then good... But I believe we
ask dumb questions just because we can, and we get an answer.

Start by not asking questions where the answer is useless.

------
Aissen
Another one, courtesy of marcan:
[https://www.google.com/search?q=what%27s+my+user+agent](https://www.google.com/search?q=what%27s+my+user+agent)

~~~
JorgeGT
Well, sometimes that's my user agent, so I guess broken clocks...

------
jccalhoun
I was going to share this story on Facebook, but when I pasted in the URL, the
headline that came up was "Why does Google think Obama is planning a coup
d'etat?" (screenshot: [http://imgur.com/a/keXTz](http://imgur.com/a/keXTz)).
I'm assuming that it is theoutline.com that is feeding Facebook this headline.
I'll think twice about sharing a story with such a linkbait headline,
especially when the actual site doesn't use that headline.

------
lostphilosopher
Google offers sources with its claims. I'd like the ability to thumbs-up /
thumbs-down these sources, Pandora-style, so that Google won't keep giving me
answers from them. I'm tempted to think that if enough people did that, Google
could start making generalizations, but that has its own set of pitfalls...
(Even if it only generalized for me.) Still, as an individual user of Google,
I would like the ability to tell it not to give me answers sourced from Weekly
World News.

------
askvictor
Surely part of the problem is that these sorts of stories only exist on fake-
news/alt-right sites, so that's the only information point Google has?

------
matthewmacleod
Google's 'featured snippets' are universally fucking shit.

It's a bit of a rant, I guess, but I'm annoyed by the extent to which Google
has become less useful as a search engine while trying to do all of this
'extracting knowledge from the web' stuff. Because it's really not doing a
great job of the latter.

------
k_sze
This would be a non-problem if everybody studied Theory of Knowledge and
consistently used, you know, their brain.

But, unfortunately …

------
superasn
Google has developed advanced NLP algorithms, so it should be easy for them to
corroborate such data before it gets pushed to featured snippets. On second
thought, maybe they're already doing it and the sites, including authoritative
ones, are just copying off each other.

~~~
rspeer
What makes you think that "advanced NLP algorithms" can distinguish true
statements from false ones? They're not even close.

You'd have to dramatically simplify the scope of the problem to make it not
require artificial general intelligence.

The "Fake News Challenge" [1] is the first attempt I know of to seriously
evaluate NLP that could eventually be able to fact-check real-world
statements, and the current task is about stance detection, because the goal
of fact-checking is not considered attainable yet.

[1] [http://www.fakenewschallenge.org/](http://www.fakenewschallenge.org/)

~~~
superasn
They were building a knowledge graph for this purpose[1]

[1] [http://searchengineland.com/demystifying-knowledge-
graph-201...](http://searchengineland.com/demystifying-knowledge-graph-201976)

~~~
rspeer
I'm pretty sure these horrible snippets come from the Google Knowledge Graph.

------
davidgerard
Cuil crashed and burned on pretty much this ten years ago. Anyone else
remember Cpedia? They eventually had to start taking stuff down due to threats
of defamation suits. (And they ran out of money shortly after.)

I mean, at least Google's search is actually good; that's something.

------
return0
This is the revenge of the news sites, I guess. Google can only be trusted by
people as long as it keeps giving valid responses. If this continues, there
will be growing mistrust from its users, leading to lower revenue, etc.

------
bakhy
People trust Google, which IMO makes them responsible here. IANAL, but if they
were, for example, caught giving Holocaust-denying answers in Germany, where
that's illegal, could they not be sued?

~~~
bigbugbag
Google's PR is good, because there are not many reasons to trust Google with
anything.

Usually, when in that kind of situation, Google plays the "an algorithm did
that, not us, so we're not liable" card, also known as "we don't read your
Gmail emails, it's an automated algorithmic process, so there's no privacy
issue here."

------
transfire
> (It should be noted that Wilson was still a notorious racist.)

See, that's the funny thing about fake news: not even this article, which
seeks to set us all straight on the matter, could avoid it. Just because you
get your facts from a source you consider reputable doesn't make them so. In
fact, Wilson was actually no more "racist" than Abraham Lincoln, but no one
would ever call Lincoln a racist.

------
thomasjudge
The point of Google is supposed to be quality search results. If we're not
getting that...

------
themark
More often than not, Google "Alerts" are from very questionable sites.

------
wry_discontent
I don't think that's a Monty Python joke.

