Facebook is gaslighting the web. We can fix it. (dashes.com)
276 points by slig on Nov 21, 2011 | 89 comments

I work on the team that generated the warning that seems to be the crux of this post. I am pretty convinced that it is a bug.

His central theme, though, is a bit misguided. I don't understand why 1) using opengraph, or 2) using a like button implies facebook should trust your link and whitelist it. Even pages with those integrations can be malicious.

In this actual case, though, the notification link (generated from the commenting widget) seems to be malformed, and that is tripping a security check. I've pinged a bunch of people about figuring out what is happening and getting it fixed. The guy sitting next to me is currently trying to repro.

As for convincing Google/Microsoft to warn users when visiting facebook.com because of security false-positives, I'll leave that discussion for you guys.

Let's argue by reductio ad absurdum.

Why doesn't Google pop up similar warnings when you click on its search results?

- Because Google depends on the richness and abundance of third-party websites for its search to be meaningful.

What is the objective of Facebook?

- To suck users into facebook.com, and sandbox them there. Similarly, the smaller objective of Facebook Social plugins is to lift the userbase from third party websites and move it into Facebook.

You seem to be under a collection of interesting misapprehensions...

1. Google does warn in various ways when it detects possible badness. As it should.

2. We don't gate ALL links through such warnings. This can be verified by going to your news feed and clicking just about anything.

3. This is about a specific issue with notifications generated from comment widgets (a very common spam vector).

4. Detecting all badness via the domain name at "write-time" is not a sufficient solution to the malicious link problem.

5. Whatever that was, it wasn't reductio ad absurdum.

I have a problem with the way Facebook allows me to share links. Even while using your 'advanced' login security measures, I have been required to fill in CAPTCHAs hundreds of times for legitimate links. For instance, why is a link to an imgur JPEG file considered possible spam? Do you not whitelist domains? And isn't the repeated use of CAPTCHAs generally an abuse of the actual purpose of a CAPTCHA?

Additionally, Facebook has disallowed me from posting specific legitimate links. You've failed as a communication medium when you censor links. There was no indication that anything was wrong with these links that I shared with friends. There's no excuse for this practice.

Yet, at the same time, you allow seriously terrible practices on your own site, such as pages which require users to click on fake button images to do actions. It makes absolutely no sense how you are "policing" the integrity of your own site and the linking to other parts of the greater web.

>4. Detecting all badness via the domain name at "write-time" is not a sufficient solution to the malicious link problem.

Doesn't Google have this problem too, in that detecting badness at "indexing time" is not a sufficient solution? The content of a site may change between their checks, yet no pop-ups are shown in between indexing passes.

With your abuse reporting volume, you should be able to almost instantly detect statistically significant malicious links, and remove them from your news feeds, should the content change to malicious after "write-time".

I can't believe I'm posting something that might be taken as defending Facebook but...

If a site appears to contain malicious content at time X but not at time Y, then I would PREFER to be notified that it is a dubious site until the site has earned back trust in some way. Continuing to warn users about a site that historically contained badness seems to me to be a FEATURE.

That's actually exactly Google's solution: they give you a set of reports and an indication of when the malware or malicious link was last detected on the site.

But I don't think that's the issue here. That facebook warning does not, as far as I know, get generated from a positive malware/spam/badness metric. It's just thrown up as a default action when someone links to an unblessed site on the web. That's what the poster doesn't like: it goes against the whole idea of hyperlinking.

But as lbrandy mentioned, this is one rare case and appears to be the result of a bug. If this happened when most links were clicked leaving Facebook, you can rest assured that all publishers would be up in arms.

Hopefully it's a bug. But I see it routinely. It's definitely not a "rare case" by any metric I can think of.

Google scans for malicious sites as well. The difference is of course, that on FB people share links whereas Google can just decide not to include a link in the results and the user will never know.

If they had a limited objective of stopping malicious sites like Google does, they could reject malicious domains at the moment someone attempts to post them, instead of scaring users away from leaving facebook.com each time they try to navigate away.

I happen to run a well-known service, and we encountered the malicious-links problem. It never even crossed our minds to display those pop-ups; instead, we stop malicious links from being posted once a domain is reported or otherwise detected.

Google has been criticized for banning fewer websites than it should - its preference is clearly towards false negatives, whereas Facebook's is clearly towards false positives.

Google doesn't seem to have a problem with accounts being compromised from visiting malicious sites. I do like the incentive razor you've applied, but I think there is a simpler explanation in this case.

I appreciate your thoughtful response and want to make clear: I'm not ascribing ill intent to you or to any of your individual coworkers. What I am suggesting, instead, is that the overall goals of Facebook as a company combine to yield this result, and that the overall result is a deliberate outcome of the company's strategy.

That being said, I'd eagerly await resolution of the bugs you've described.

I do not mean to suggest that use of OG or the like button should imply trust, but rather that Facebook's consistent crawling of a site over months or years should show whether it has ever been a bad actor, or whether it's ever been flagged by others as a site with ill intent; indeed, that's exactly what Stop Badware et al. do.

FWIW, I see that warning every single time I click a link from inside the FB app.

I find it annoying as hell, but I took it as a bad UX decision and not a conspiracy.

Thank you for expressing this so cogently and calmly. It's easy to get defensive when somebody starts flinging wild accusations like Anil does here, but informative and level-headed responses like this are much better at keeping the conversation on track.

It’s hardly calm and cogent — in fact it’s sort of a wild accusation — to describe a post as “flinging wild accusations” when it does nothing of the sort. Yes, Anil’s post is sharply argued and it’s definitely a polemical pushback against Facebook’s practice, but it’s a valuable part of the debate (especially with the added value of the comments below it), not a flame.

I don't see how you can describe the statements "Facebook is gaslighting the web"† and "Facebook has moved from merely being a walled garden into openly attacking its users' ability and willingness to navigate the rest of the web" as anything but wild accusations. You might argue that, as wild as they are, they're nonetheless true — but they are accusations and extreme ones at that.

I certainly feel that Anil's post could have benefited from at least a cursory application of Hanlon's razor.

† For those who don't know, the term "gaslighting" refers to a form of mental abuse where you undermine a person's confidence in their own perceptions and competence in order to retain their belief or loyalty. The typical example is an abusive husband who keeps his wife from leaving by making her feel like it's all her fault.

Agreed. I was expecting a much more substantial argument from Anil, but instead the post offers a list of disparate concerns without any real attempt made to link them together or to justify such strong statements. There are all kinds of explanations for discontinuing RSS import support and for malware warnings that do not point to nefariousness on the part of Facebook.

Appreciate the support, HalSF -- I try very hard to be intellectually honest even when making an obviously strongly-felt point.

Do you really think so? It's intellectually honest to suggest that Stop Badware — a system intended to cordon off viruses, spyware and surreptitiously installed junkware — should be used to punish a site because its threat detection algorithms are overly cautious? That doesn't strike you as even a little bit spiteful?

Stop Badware regularly flags sites that issue spurious security warnings in an attempt to mislead users. Facebook certainly falls under that description.

Can you give me an example? Sites get entered in there if they try to install malware on people's machines. I think it's straight-up ridiculous to say that FB's text there is an attempt to mislead users. It gives good advice that most people don't keep in mind at a funnel point that's known to be relevant to an enormous number of phishing and other attacks.

Do you actually believe the things you're saying? I'm struggling here.

(StopBadware actually doesn't flag sites at all.)

But you didn't make a case either that Facebook is attempting to mislead users, or indeed that their actions are misleading users at all. A number of sites gate outbound traffic for security reasons, and the language Facebook shows users is totally consistent with this - it doesn't read "You are about to visit a dangerous site", instead it warns users in generic language of phishing, etc. The error here is attributing malice to a bug or annoyingly-implemented feature. That feature clearly falls short of any reasonable "badware" criteria.

I've had a domain spam-listed that has always been nothing but a simple redirect to our Facebook game. I've sent countless appeals for almost a year now and never heard back. Have something to suggest?

Having been bitten by this warning message, all this sounds a bit too familiar.

At the time it was caused by McAfee (yes, dust off the anti-virus conspiracy theories), which had flagged our domain as untrusted because our main virtual host (www.) was returning an HTTP 200 on a 404 Not Found page. Yes, that's the "security risk" they found. Sigh.
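The "soft 404" misconfiguration described above (a server answering HTTP 200 for a page that doesn't exist) is easy to reproduce. Here's a minimal, hypothetical sketch using only Python's standard library; the handler name and setup are illustrative, not McAfee's actual check. It shows why a scanner on such a host can no longer tell real pages from missing ones:

```python
# A "soft 404": the server sends HTTP 200 even for pages that don't exist.
# Automated scanners treat this as suspicious because a status-code probe
# can no longer distinguish real pages from missing ones.
import http.server
import threading
import urllib.request

class Soft404Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Misconfigured: every path, including missing ones, gets a 200.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Not found</h1>")  # looks like a 404, isn't one

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Soft404Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Probe a path that clearly doesn't exist.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/no-such-page") as resp:
    status = resp.status  # a correctly configured server would return 404

print(status)
server.shutdown()
```

A correctly configured server would answer 404 for the missing path, which is what link scanners key on; a blanket 200 makes every probe look like a valid page.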

We've been big users of Facebook, but now we're all cancelling our accounts. We used it to organise trips and meet people, but this is all becoming too suspicious.

The cache link is essential. Of course, while the term 'gaslighting' [1] may be common in some groups, it was new to me. The general theme is that Facebook is making changes which benefit Facebook more and are less user-friendly. I'm not sure it rises to the level of abuse implied by the term, but that is clearly subjective. The 'answer', of course, is to leave.

I know, I know, "But all my friends are there!" or "Nothing else has the reach of Facebook!" or "I've invested thousands of hours in Facebook!". At the end of the day, Facebook is on the road to becoming a 'public' company, and they are making choices which are in Facebook's interest (mostly about the whole Open Graph stuff, which they will sell to advertisers for revenue).

The 'good' Facebook you are looking for has to charge its users for accounts, because that is the only way to pay the bills without selling you off to less prurient interests.

[1] https://en.wikipedia.org/wiki/Gaslighting (warning Wikipedia link)

The 'good' Facebook you are looking for has to charge its users for accounts, because that is the only way to pay the bills without selling you off to less prurient interests.

Is that related to how the only 'good' search engine would be one that has to charge you per-search lest it be forced to sell your click info and search terms to less prurient interests?

Personally, I would rather be stalked by a search engine than by a social network that has seen my photos and knows who my friends are.

However, with the arrival of G+, this issue has become far more serious...

You can always ignore G+ and browse like you used to.

Hell, create a whole new Google Account completely disconnected from your prior history and G+ connections on your main account. That's what I did right as it came out.

The difference is that the connections on G+ hold much less meaning than the connections on Facebook. Facebook automatically populates specifically-named friend lists based on your network data and asks that you refine their lists to improve their data collection.

Essentially, users who choose not to provide Facebook with their close friends, family, university, and location will be scoped out and tagged by their own friends using these smart lists. It's a disturbing strategy for data collection.

It's a fair point. One would hope that the only information you share with advertisers is what the person is looking for, not things like the fact that they just signed up for a ski vacation with family, or recently became single, etc.

One of the ways Blekko (a search engine) differentiates itself in this space is by having a privacy policy that lets you keep all of your data private, up to and including not being advertised to at all. (And yes, I work there, and no, this isn't me pushing 'my' search engine; it's just that you applied your response to search engines, and there is an existence proof of folks who took that step to give you the option of not sharing anything about your searches with anyone.)

>(warning Wikipedia link)

What's the warning for? If it's warning against being confronted with Jimbo's piercing gaze, that seems a bit snarky.

I took it as a warning that you're about to click a link that has a high potential of sending you down a rat hole that eventually ends up on the Philosophy article.

I took it as a riff on the "Please be careful" popup warning from the article.

What's the warning for?

Same reason as TvTropes. Potential tab explosion/epic wasted time event.

I generally prefer to quote somewhat more authoritative sources than Wikipedia. It isn't that they are always (or even mostly) incorrect or ill-written; it's just that for subjective things they cannot be relied upon. It is the nature of Wikipedia, of course.

Oh, it's snarky. But not for that reason.

I'm usually pretty tolerant of Facebook's more aggressive initiatives, but I've started blocking all of the "frictionless sharing" apps. The user interface they present is ridiculous.

1. I see a friend read an article that looks interesting. I click it.

2. Every time I click one of these I'm asked to add the application before I'm allowed to view the link.

3. My options are "ok" and "cancel". The first few times I assumed I couldn't read the article without clicking ok so I just hit the back button. It turns out "cancel" really means "don't add the app, just take me to the link".

Without that confirmation dialog (which most people probably blindly click through) this is exactly how social media worms work.

Perhaps alert users will click "cancel" the first 5 or 10 times, but eventually they're going to accidentally click "ok" or just give in. Not cool.

Plus, why would I want to see articles that friends read, but didn't think were worthy of manually posting about in the first place?

I tried clicking “Cancel” on an app today and was returned to FB without reading the article. I won’t bother trying again to discover whether it was a one-off, whether it was just that app, or whether the problem had something to do with whatever I was using to read FB (Flipboard, Facebook for iPad, Mobile Safari, Desktop Safari).

It is enough for me to know that regular links always work (even though FB tracks click-through) and “frictionless” links don’t.

Yes, you should block apps that you don't want to see. They are (per your reasonable definition) badly behaved, and you should exclude them from your stream. I think that's exactly the behaviour that FB would want, because it ties the utility/economics of those apps to the quality of their user-experience.

(Disclosure: I start at FB next week, though I don't have any inside info on this.)

It's an ok solution for me, but unfortunately I think most users will just click "ok" so they can see their friend's link without realizing what they're actually agreeing to.

I think that's what Facebook wants: "making the world more open and connected"... at all costs. They're constantly pushing the limit of what level of openness users will accept. It's going to backfire.

If you believe FB is out-there trying to conquer and destroy the web then please don't use FB comments on your website.

Or just don't use FB.

Everything I read about it just makes me happier that I don't.

This is the part I don't understand - last week there was an article about Salman Rushdie and FB. I doubt anything is going to change by complaining - FB is continuously going to do everything in its power to monetize and lock down its users. Why not just stop using FB? Maybe I don't 'get' the importance of FB, but I think people can happily communicate with just email, phone, and a bunch of other sites, which are less offensive than FB.

But as a website user I don't get to choose the commenting system.

As a blog maintainer, there are three major drop-in comment providers (Disqus, IntenseDebate, and Facebook among them), and if I've had gripes with one of the others, I might end up going for FB comments, despite allowing FB to extend its "creep" there.

Regarding "Web sites are deemed unsafe, even if Facebook monitors them", wouldn't it be worse if Facebook deemed websites that it monitored 'safe'? Then Facebook would be saying: "use Facebook services on your site, or we'll scare all of your users with an interstitial message"

This is true. I presume a preferred option would be dumping the warnings altogether, but this would be a big boon for phishers.

What I don't like about the article is the suggestion that the best way of combating Facebook's excessively paranoid and usually unwarranted warnings about offsite content is to show excessively paranoid warnings to people trying to get into Facebook. Particularly when one of the principal gateways to Facebook is the browsers and websites operated by Google - not exactly a disinterested third party.

Why would it be a boon to phishers? Existing solutions for phishing/malware blacklisting are mature and quite well supported. There's lots of competition in the market, and they work quite well overall. It's not like Facebook couldn't just buy one of these for decent coverage.

The point is that putting up what amounts to a false-positive malware warning on basically every "minor" link on Facebook (I know it's not really a false positive, but that's how it's perceived) is just terrible overkill for the problem, and a terrible experience for the users.

That it survives at all leads one to wonder seriously about facebook's intentions.

People who use Facebook are silly people. No this is not a flame, it's AOL all over again. The walled garden, except the garden is a dystopian big brother state.

I don't mind people being in the dystopia of their own choosing at all. I enjoy going for walks outside the cyberdome. It's quiet and peaceful out here and people are not monitoring and trying to manipulate and control me.

Those who enjoy being a cog in a machine I am sure lead happy fulfilling lives inside The Facebook.

You know, the dystopia in which I'm forced to share pictures of my newborn son with my family, and talk to friends on the other side of the country that I haven't seen for months, is better than pretty much any other dystopia I can imagine.

>the dystopia in which I'm forced to share pictures of my newborn son with my family, and talk to friends on the other side of the country that I haven't seen for months

the things impossible without Facebook :)

The main thing that Facebook does isn't [technically] enabling some specific communications in some specific space; the main thing is that Facebook puts people into that space, bumps them into one another, and thus forces/nudges/tempts them to perform acts of communication that otherwise may not have happened. Facebook makes one [helps to unleash] a social beast.

Vs. the force in dystopias, in the Facebook case we have dopamine-generating activities grounded in the Facebook environment. While they can be replaced by dopamine-generating activities in other environments [I enjoy mine here on HN, for example], why would a social, dopamine-addicted beast that gets its fix on schedule bother?

I guess I just haven't met these Facebook victims - vegetables chained to their computers, grinning and drooling as they click reflexively on adverts all day, unaware that they could be outside, surrounded by swarms of butterflies and running with wild horses in the warmth of the Summer sun.

Mostly it's just a website.

I'm definitely not criticizing anyone who wants to be in the Matrix, I believe in freedom and free will. I read the referenced article and it was yet another one by someone in the Matrix complaining about the Matrix. It seems such a strange complaint since people are not only able to leave at any time but there are many other cyberworlds they could inhabit alternatively, and many they can even create themselves to their own specifications and desires. There is no need to be unhappy with the offerings of a giant multinational conglomerate run by a cynical sociopath with publicly stated contempt for his customers in an age of freedom and choice.

I'd rather save criticism of this fabulously wealthy conglomerate to be directed at the disturbing ways they are trying to claim ownership of things outside their Matrix, such as that of my personal identity, and to track my comings and goings outside their Matrix without my consent. Those parts are disturbing and dangerous, especially when coupled with their deep interest in mixing it up in politics and influencing laws to create a captive universal audience.

Matrix is fine, but I want it to stay bottled up and optional.

Okay, so long as we're not being incredibly creepy and paranoid then.

You know what's incredibly creepy? 3000 faceless people in Palo Alto know more about your distant friends than you will ever know.

Depending on how much time you spend on the site, they probably also have profiled a sizable portion of your thoughts simply by tracking clicks and pageviews.

And yet this Christmas we've _still_ all had to manually make our Amazon wish lists, despite being watched round the clock by these crazy sociopaths who know exactly what we're all thinking.

Point 3 ("WEB SITES ARE DEEMED UNSAFE, EVEN IF FACEBOOK MONITORS THEM") has been addressed. Here are the other two.


False. Facebook's API allows all sorts of external content to enter Facebook. They're just shutting down their app that does that automatically. There are plenty of third party apps that already solve this problem.


False. The Washington Post has chosen to embed their stories within the Facebook canvas pages, but that's not a requirement. The other popular news sites on Facebook, The Guardian and Yahoo, do not do this.

This entire post is woefully misinformed.

Thanks Dewey.

The only thing that will "fix" facebook is the next thing to popup to diminish their influence.

Of course shooting themselves in the foot wouldn't hurt either.

Remember MySpace? How about Digg?


It is only when the annoyance levels reach a breaking point that the FB alternatives will be made as user-friendly as FB and brought to the attention of the masses. One will emerge as dominant. And the cycle begins again.

Every itch gets properly scratched, eventually.

There's also the "fad" and "uncool" factor.

When everyone's mom and dad (and their mom and dad) is also on Facebook, it's going to slowly start losing its edginess.

Then the next "cool" site will emerge.

I put more hope in the "uncoolness" factor than in the privacy and security concerns to drive FB's demise.

I see them as the same. It may take some years for non-nerds to see the "uncoolness" of FB, but I believe they will see it, in time.

I am no great fan of facebook. The timeline sucks.

I think the warnings FB uses are necessary; there are so many worms and spam wall postings. You can debate the wording and motive. Many users need paternalism.

You can AUTOMATICALLY have posterous post a link on your wall every time you write a blog post. It doesn't use the notes system at all. It sounds like fb are stopping people using notes for something they weren't designed for.

If he's saying every dumb aol/xp/ie6 user will be too scared to ever leave fb for the rest of the web, wouldn't that be the end of the Eternal September, which some would welcome?

Apologies for the server flakiness; Trying to address it now. Please feel free to repost/share -- everything is CC licensed.

If that doesn't come up, the text-only link works for me:


This sounds pretty similar to the complaints from SEO gamers whenever Google changes their algorithms and removes them from the top ranks for a search. I don't agree that any of the examples used by the author is anything particularly harmful.

I applaud the author's use of detailed documentation and his ability and willingness to dig deeper into the technical side of what he believes to be the problem. As soon as I installed the Firefox add-on NoScript, I began to notice the Facebook scripts put in place on many sites having nothing to do with Facebook. Their real interest is not in being nice by providing you with a free service, but in using data aggregated from its large user base to find patterns - and to sell that information to the highest bidder. Pretty soon advertisers and governments will know more about you than you do yourself.

The Ghostery and RequestPolicy add-ons. Don't leave home without 'em.

I agree completely.

Well thought out and sound reasoning.

And the effort is probably hopeless, but maybe it will at least draw some attention to Facebook's abhorrent practices. Then again, they get plenty of negative press already, and it doesn't seem to slow them down.

I figure it's going to take a lot more efforts like this to stop the abuses when portals gain monopoly power over users' attention.

Users will put up with it (and probably put up with much worse); there is no alternative to Facebook for what Facebook does and is. Chickens and eggs ...

Good luck with that. Imagine all the YouTube-quality comments that would flood crbug:


We can have discussions all we like about whether or not Facebook is a net-positive or a net-negative for the Web, but there's no way Google, Microsoft, or Apple is going to blacklist them.

Exactly! I hate to say this because it makes me sound like a tech elitist, but some people really do need those warnings because they can't tell what website they're on.

See the ReadWriteWeb article that had to put a big "this is not Facebook" notice on it.

I like to login to FB for several minutes once or twice a week - a quick way to see what some friends are doing. However, I logout as soon as I am done. I also went through and disabled all FB apps, except for my own test app.

Given these simple precautions, is there really anything wrong with FB? Am I missing something?

If you think that people less prudent than you should be "protected" from Facebook, then your participation may be harmful to them because it's part of what encourages them to participate.

The only way to stop Facebook is to protest and boycott it. It worked in 2006 and 2007 (News Feed and Beacon, but Beacon might have been 2008) and it'll work again.

I agree with your first sentence. Whether it worked in the past is arguable, since they continue to do slimy things. But one thing is sure: they can't abuse your data if you don't give it to them.

.. and that it's not "abuse" if you know what they're doing with it and expect it.

Don't share anything on Facebook that you wouldn't want to see on the news the next morning. Simple and easy.

That sounds simple and easy but for today's teens there's no limit. I know people who have gotten expelled from private schools for posting stupid things on Facebook. It's a disturbingly common occurrence. It's not Facebook's fault obviously, but for kids, dangers about "oversharing" go in one ear and out the other.

Aside: I really like the typography of your title/subheads. I mistook the font for Gill Sans Light initially, though.

Disappointing. "We can beg other powers to intervene" is not "we can fix it".

How ironic that this blog's own comment system relies on Facebook.

It's not ironic, it's deliberate. I want to make sure I'm making informed criticisms of Facebook based on actual usage of the service as both an individual user and as a publisher.

Been a reader of your blog for a while now, and I enjoy your insight. I do feel that chimeracoder's criticism is valid, regardless of your argument about establishing objectivity through participation. As a reader, the inclusion of Facebook Connect and Comments really detracts from the credibility, er, sincerity, of your article.

"toolbar that helps you shop online more effectively but neglects to mention that it will send a list of everything you buy online to the company that provides the toolbar."

What about a website that injects its content on every website you visit, regardless of your willingness to participate as a user? Or tracks every visit regardless of your willingness to participate?

Exhibit A: http://mashable.com/2011/11/17/facebook-reveals-its-user-tra...

There is no terms of service or privacy statement on your site disclosing that you are effectively sharing my activity with Facebook. By the way, I opt-out through the disconnect and Ghostery Chrome extensions, so your site has no comment system... just a Comments heading.

"Facebook has moved from merely being a walled garden into openly attacking its users' ability and willingness to navigate the rest of the web."

As a website operator that uses Facebook Connect, you are signaling to Facebook that you are OK with their current strategy to exist on every page on the internet and dictate the way content should be shared. You cannot complain that they are getting rid of the ability to automatically share your blog content on Facebook when they have given you the ability to incorporate all the same functionality directly on your page. It is a genius move on their part. They no longer have to worry about users visiting facebook.com less over time if they are on every other page the user might visit.

Sorry, but this seems like a platitude.

If you had actually used the Facebook Comments system as a user outside of your own blog, you would realize that the behavior you're complaining about doesn't exist (except on your site, where a bug is apparently causing it.)

Comments on TechCrunch, for example, which uses this system, do not flag any errors.

Haha, I agree!

The walls are closing in on Facebook. It's only a matter of time for its business model. People eventually wake up. Better squeeze that IPO for all it's worth.

