His central theme, though, is a bit misguided. I don't understand why 1) using Open Graph, or 2) using a Like button implies Facebook should trust your link and whitelist it. Even pages with those integrations can be malicious.
In this actual case, though, the notification link (generated from the commenting widget) seems to be malformed, causing it to trip a security check. I've pinged a bunch of people about figuring out what is happening and getting it fixed. The guy sitting next to me is currently trying to repro.
As for convincing Google/Microsoft to warn users when visiting facebook.com because of security false-positives, I'll leave that discussion for you guys.
Why doesn't Google pop up similar warnings when you click on its search results?
-Because Google depends on the richness and abundance of third-party websites for its search to be meaningful.
What is the objective of Facebook?
- To suck users into facebook.com, and sandbox them there. Similarly, the smaller objective of Facebook Social plugins is to lift the userbase from third party websites and move it into Facebook.
1. Google does warn in various ways when it detects possible badness. As it should.
2. We don't gate ALL links through such warnings. This can be verified by going to your news feed and clicking just about anything.
3. This is about a specific issue with notifications generated from comment widgets (a very common spam vector).
4. Detecting all badness via the domain name at "write-time" is not a sufficient solution to the malicious link problem.
5. Whatever that was, it wasn't reductio ad absurdum.
Additionally, Facebook has disallowed me from posting specific legitimate links. You've failed as a communication medium when you censor links. There was no indication that anything was wrong with these links that I shared with friends. There's no excuse for this practice.
Yet, at the same time, you allow seriously terrible practices on your own site, such as pages which require users to click on fake button images to do actions. It makes absolutely no sense how you are "policing" the integrity of your own site and the linking to other parts of the greater web.
Doesn't Google have this problem too, in that detecting badness at "indexing time" is not a sufficient solution? The content of a site may change between checks. Nevertheless, no pop-ups are shown in between indexing passes.
With your abuse reporting volume, you should be able to almost instantly detect statistically significant malicious links, and remove them from your news feeds, should the content change to malicious after "write-time".
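The detection idea above can be sketched as a simple rate test: flag any link whose abuse-report rate, relative to its click volume, is too high to be noise. This is a minimal illustration, not Facebook's actual pipeline; the function name and thresholds are assumptions, and a real system would derive thresholds from historical report rates for known-good links.

```python
def flag_suspicious_links(report_counts, click_counts,
                          min_reports=20, max_report_rate=0.01):
    """Flag URLs whose abuse-report rate is implausibly high.

    report_counts / click_counts map URL -> count. The minimum-report
    floor avoids flagging a link off one or two spite reports; the
    rate threshold (1% here, purely illustrative) catches links that
    a meaningful fraction of viewers report.
    """
    flagged = []
    for url, n_reports in report_counts.items():
        n_clicks = max(click_counts.get(url, 0), 1)  # avoid divide-by-zero
        if n_reports >= min_reports and n_reports / n_clicks > max_report_rate:
            flagged.append(url)
    return flagged

# 50 reports out of 1,000 clicks (5%) trips the filter;
# 2 reports out of 100,000 clicks does not.
reports = {"http://evil.example/x": 50, "http://fine.example/y": 2}
clicks = {"http://evil.example/x": 1000, "http://fine.example/y": 100000}
print(flag_suspicious_links(reports, clicks))  # ['http://evil.example/x']
```

A flagged link would then be removed from feeds (or warned on) after "write-time", which is the commenter's point: with Facebook's volume, the statistics converge quickly.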
If a site appears to contain malicious content at time X but not at time Y, then I would PREFER to be notified that it is a dubious site until it has earned back trust in some way. Continuing to warn users about a site that historically contained badness seems to me to be a FEATURE.
But I don't think that's the issue here. That facebook warning does not, as far as I know, get generated from a positive malware/spam/badness metric. It's just thrown up as a default action when someone links to an unblessed site on the web. That's what the poster doesn't like: it goes against the whole idea of hyperlinking.
I happen to run a well-known service, and we encountered the malicious links problem. It never even crossed our minds to display those pop-ups; instead we stop malicious links from being posted once a domain is reported or otherwise detected.
That being said, I'd eagerly await resolution of the bugs you've described.
I do not mean to suggest that use of OG or the Like button should imply trust, but rather that consistent crawling of a site by Facebook over months or years should show whether it has ever been a bad actor, or whether it's ever been flagged by others as a site with ill intent; indeed, that's exactly what StopBadware et al. do.
I find it annoying as hell, but I took it as a bad UX decision and not a conspiracy.
I certainly feel that Anil's post could have benefited from at least a cursory application of Hanlon's razor.
† For those who don't know, the term "gaslighting" refers to a form of mental abuse where you undermine a person's confidence in their own perceptions and competence in order to retain their belief or loyalty. The typical example is an abusive husband who keeps his wife from leaving by making her feel like it's all her fault.
Do you actually believe the things you're saying? I'm struggling here.
(StopBadware actually doesn't flag sites at all.)
At the time it was caused by McAfee (yes, dust off the anti-virus conspiracy theories), which had flagged our domain as untrusted because our main virtual host (www.) was returning an HTTP 200 on a 404 Not Found page. Yes, that's the "security risk" they found. sigh
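The misconfiguration described above, a "soft 404" where the server answers 200 for pages that don't exist, is easy to probe for: request a path that almost certainly doesn't exist and check the status code. This is a hedged sketch, not McAfee's actual heuristic; the fetcher is injected so the example runs without network access, and `www.example.com` is a placeholder host.

```python
import random
import string

def looks_like_soft_404(fetch_status, host):
    """Return True if `host` answers HTTP 200 for a path that almost
    certainly doesn't exist (the "soft 404" misconfiguration).

    fetch_status(url) -> int status code is injected here so the
    sketch needs no network; in practice it would wrap an HTTP HEAD
    request to the generated URL.
    """
    junk = "".join(random.choices(string.ascii_lowercase, k=24))
    return fetch_status(f"http://{host}/{junk}") == 200

# A misconfigured vhost that returns 200 for everything gets flagged...
assert looks_like_soft_404(lambda url: 200, "www.example.com") is True
# ...while a well-behaved server returning 404 does not.
assert looks_like_soft_404(lambda url: 404, "www.example.com") is False
```

Scanners treat soft 404s as a risk signal because the same behavior lets a compromised or parked host serve arbitrary content at any URL.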
I know, I know, "But all my friends are there!" or "Nothing else has the reach of Facebook!" or "I've invested thousands of hours in Facebook!". At the end of the day, Facebook is on the road to becoming a 'public' company, and they are making choices which are in Facebook's interest (mostly around the whole Open Graph stuff, which they will sell to advertisers for revenue).
The 'good' Facebook you are looking for would have to charge its users for accounts, because that is the only way to pay the bills without selling you off to less prurient interests.
https://en.wikipedia.org/wiki/Gaslighting (warning: Wikipedia link)
Is that related to how the only 'good' search engine would be one that has to charge you per-search lest it be forced to sell your click info and search terms to less prurient interests?
However, with the arrival of G+, this issue has become far more serious...
Hell, create a whole new Google Account completely disconnected from your prior history and G+ connections on your main account. That's what I did right as it came out.
Essentially, users who choose not to provide Facebook with their close friends, family, university, and location will be scoped out and tagged by their own friends using these smart lists. It's a disturbing strategy for data collection.
What's the warning for? If it's warning against being confronted with Jimbo's piercing gaze, that seems a bit snarky.
Same reason as TvTropes. Potential tab explosion/epic wasted time event.
1. I see a friend read an article that looks interesting. I click it.
2. Every time I click one of these I'm asked to add the application before I'm allowed to view the link.
3. My options are "ok" and "cancel". The first few times I assumed I couldn't read the article without clicking ok so I just hit the back button. It turns out "cancel" really means "don't add the app, just take me to the link".
Without that confirmation dialog (which most people probably blindly click through) this is exactly how social media worms work.
Perhaps alert users will click "cancel" the first 5 or 10 times, but eventually they're going to accidentally click "ok" or just give in. Not cool.
Plus, why would I want to see articles that friends read, but didn't think were worthy of manually posting about in the first place?
It is enough for me to know that regular links always work (even though FB tracks click-throughs) while "frictionless" links don't.
(Disclosure: I start at FB next week, though I don't have any inside info on this.)
I think that's what Facebook wants: "making the world more open and connected"... at all costs. They're constantly pushing the limit of what level of openness users will accept. It's going to backfire.
Everything I read about it just makes me happier that I don't.
As a blog maintainer, I see three major drop-in comment providers (Disqus, IntenseDebate, and Facebook among them), and if I've had gripes with the others, I might end up going with FB Comments, despite allowing FB to extend its "creep" there.
What I don't like about the article is the suggestion that the best way of combating Facebook's excessively paranoid and usually unwarranted warnings about offsite content is to show excessively paranoid warnings to people trying to get into Facebook. Particularly when one of the principal gateways to Facebook is the browsers and websites operated by Google - not exactly a disinterested third party.
The point is that putting up what amounts to a false-positive malware warning on basically every "minor" link on Facebook (I know it's not technically a false positive, but that's how it's perceived) is just terrible overkill for the problem, and a terrible experience for the users.
That it survives at all leads one to wonder seriously about facebook's intentions.
I don't mind people being in the dystopia of their own choosing at all. I enjoy going for walks outside the cyberdome. It's quiet and peaceful out here and people are not monitoring and trying to manipulate and control me.
Those who enjoy being a cog in a machine I am sure lead happy fulfilling lives inside The Facebook.
the things impossible without Facebook :)
The main thing that Facebook does isn't [technically] enabling some specific communications in some specific space; the main thing is that Facebook puts people into that space, bumps them into one another, and thus forces/nudges/tempts them to perform acts of communication that otherwise might not have happened. Facebook makes [helps to unleash] one a social beast.
Versus the force in dystopias, in the Facebook case we have dopamine-generating activities grounded in the Facebook environment. While they can be replaced by dopamine-generating activities in other environments [I enjoy mine here on HN, for example], why would a social, dopamine-addicted beast that gets its fix on schedule bother?
Mostly it's just a website.
I'd rather save criticism of this fabulously wealthy conglomerate to be directed at the disturbing ways they are trying to claim ownership of things outside their Matrix, such as that of my personal identity, and to track my comings and goings outside their Matrix without my consent. Those parts are disturbing and dangerous, especially when coupled with their deep interest in mixing it up in politics and influencing laws to create a captive universal audience.
Matrix is fine, but I want it to stay bottled up and optional.
Depending on how much time you spend on the site, they probably also have profiled a sizable portion of your thoughts simply by tracking clicks and pageviews.
"YOU CANNOT BRING YOUR CONTENT IN TO FACEBOOK"
False. Facebook's API allows all sorts of external content to enter Facebook. They're just shutting down their app that does that automatically. There are plenty of third party apps that already solve this problem.
"PUBLISHERS WHOSE CONTENT IS CAPTIVE ARE PRIVILEGED"
False. The Washington Post has chosen to embed their stories within the Facebook canvas pages, but that's not a requirement. The other popular news sites on Facebook, The Guardian and Yahoo, do not do this.
This entire post is woefully misinformed.
Of course shooting themselves in the foot wouldn't hurt either.
Remember MySpace? How about Digg?
It is only when the annoyance levels reach a breaking point that the FB alternatives will be made as user-friendly as FB and brought to the attention of the masses. One will emerge as dominant. And the cycle begins again.
Every itch gets properly scratched, eventually.
When everyone's mom and dad (and their mom and dad) are also on Facebook, it's going to slowly start losing its edginess.
Then the next "cool" site will emerge.
I think the warnings FB uses are necessary; there are so many worms and spam wall postings. You can debate the wording and motive. Many users need paternalism.
You can AUTOMATICALLY have posterous post a link on your wall every time you write a blog post. It doesn't use the notes system at all. It sounds like fb are stopping people using notes for something they weren't designed for.
If he's saying every dumb aol/xp/ie6 user will be too scared to ever leave fb for the rest of the web, wouldn't that be the end of the Eternal September, which some would welcome?
Well thought out and sound reasoning.
And the effort is probably hopeless, but maybe it will at least draw some attention to Facebook's abhorrent practices. They get plenty of negative press already, though, and it doesn't seem to slow them down.
I figure it's going to take a lot more efforts like this to stop the abuses that arise when portals gain monopoly power over users' attention.
Users will put up with it (and probably with much worse); there is no alternative to Facebook for what Facebook does and is. Chickens and eggs...
PLZ FIX TEH FACEBOOK LOGIN SCREEN!!
See the ReadWriteWeb article that had to put a big "this is not Facebook" notice on it.
Given these simple precautions, is there really anything wrong with FB? Am I missing something?
Don't share anything on Facebook that you wouldn't want to see on the news the next morning. Simple and easy.
"toolbar that helps you shop online more effectively but neglects to mention that it will send a list of everything you buy online to the company that provides the toolbar."
What about a website that injects its content on every website you visit, regardless of your willingness to participate as a user? Or tracks every visit regardless of your willingness to participate?
Exhibit A: http://mashable.com/2011/11/17/facebook-reveals-its-user-tra...
There is no terms of service or privacy statement on your site disclosing that you are effectively sharing my activity with Facebook. By the way, I opt-out through the disconnect and Ghostery Chrome extensions, so your site has no comment system... just a Comments heading.
"Facebook has moved from merely being a walled garden into openly attacking its users' ability and willingness to navigate the rest of the web."
As the website operator that uses Facebook Connect, you are signaling to Facebook that you are OK with their current strategy to exist on every page on the internet and dictate the way content should be shared. You cannot complain that they are getting rid of the ability to automatically share your blog content in facebook when they have given you the ability to incorporate all the same functionality directly on your page. It is a genius move on their part. They no longer have to worry about users visiting facebook less over time if they are on every other page the user might visit.
If you had actually used the Facebook Comments system as a user outside of your own blog, you would realize that the behavior you're complaining about doesn't exist (except on your site, where a bug is apparently causing it.)
Comments on TechCrunch, for example, which uses this system, do not flag any errors.