
I work on the team that generated the warning that seems to be the crux of this post. I am pretty convinced that it is a bug.

His central theme, though, is a bit misguided. I don't understand why 1) using opengraph, or 2) using a like button implies facebook should trust your link and whitelist it. Even pages with those integrations can be malicious.

In this actual case, though, the notification link (generated from the commenting widget) seems to be malformed, causing it to trip a security check. I've pinged a bunch of people about figuring out what is happening and getting it fixed. The guy sitting next to me is currently trying to repro.

As for convincing Google/Microsoft to warn users when visiting facebook.com because of security false-positives, I'll leave that discussion for you guys.

Let's argue by reductio ad absurdum.

Why doesn't Google pop up similar warnings when you click on its search results?

- Because Google depends on the richness and abundance of third-party websites for its search to be meaningful.

What is the objective of Facebook?

- To suck users into facebook.com, and sandbox them there. Similarly, the smaller objective of Facebook Social plugins is to lift the userbase from third party websites and move it into Facebook.

You seem to be under a collection of interesting misapprehensions...

1. Google does warn in various ways when it detects possible badness. As it should.

2. We don't gate ALL links through such warnings. This can be verified by going to your news feed and clicking just about anything.

3. This is about a specific issue with notifications generated from comment widgets (a very common spam vector).

4. Detecting all badness via the domain name at "write-time" is not a sufficient solution to the malicious link problem.

5. Whatever that was, it wasn't reductio ad absurdum.

I have a problem with the way Facebook allows me to share links. Even while using your 'advanced' login security measures, I have been required to fill in CAPTCHAs hundreds of times for legitimate links. For instance, why is a link to an imgur JPEG file considered possible spam? Do you not whitelist domains? And isn't the repeated use of CAPTCHAs generally an abuse of the actual purpose of a CAPTCHA?

Additionally, Facebook has disallowed me from posting specific legitimate links. You've failed as a communication medium when you censor links. There was no indication that anything was wrong with these links that I shared with friends. There's no excuse for this practice.

Yet, at the same time, you allow seriously terrible practices on your own site, such as pages which require users to click on fake button images to do actions. It makes absolutely no sense how you are "policing" the integrity of your own site and the linking to other parts of the greater web.

>4. Detecting all badness via the domain name at "write-time" is not a sufficient solution to the malicious link problem.

Doesn't Google have this problem too, in that detecting badness at "indexing time" is not a sufficient solution? The content of a site may change between their checks, yet no pop-ups are shown in the meantime.

With your abuse reporting volume, you should be able to almost instantly detect statistically significant malicious links, and remove them from your news feeds, should the content change to malicious after "write-time".

I can't believe I'm posting something that might be taken as defending Facebook but...

If a site appears to contain malicious content at time X but not at time Y, then I would PREFER to be notified that it is a dubious site until it has earned back trust in some way. Continuing to warn users about a site that historically contained badness seems to me to be a FEATURE.

That's actually exactly Google's solution: they give you a set of reports and an indication of when the malware or malicious link was last detected on the site.

But I don't think that's the issue here. That facebook warning does not, as far as I know, get generated from a positive malware/spam/badness metric. It's just thrown up as a default action when someone links to an unblessed site on the web. That's what the poster doesn't like: it goes against the whole idea of hyperlinking.

But as lbrandy mentioned, this is a rare case and appears to be the result of a bug. If this happened whenever most links were clicked leaving Facebook, you can rest assured that all publishers would be up in arms.

Hopefully it's a bug. But I see it routinely. It's definitely not a "rare case" by any metric I can think of.

Google scans for malicious sites as well. The difference, of course, is that on FB people share links, whereas Google can simply decide not to include a link in its results and the user will never know.

If they had the limited objective of stopping malicious sites, as Google does, they could reject malicious domains at the moment someone attempts to post them, instead of scaring users each time they try to navigate away from facebook.com.

I happen to run a well known service, and we encountered the malicious links problem. It never even crossed our mind to display those pop-ups; instead, we stop malicious links from being posted once a domain is reported or otherwise detected.
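The write-time approach described here can be sketched in a few lines. This is a hypothetical illustration, not the commenter's actual implementation; the blocklist contents and the `allow_post` name are assumptions for the example.

```python
# Hypothetical sketch of write-time link blocking: reject a post if the
# link's host matches (or is a subdomain of) a reported-malicious domain.
from urllib.parse import urlparse

# Assumed blocklist of domains reported or otherwise detected as malicious.
BLOCKLIST = {"evil.example"}

def allow_post(url):
    """Return True if the URL's host is not on (or under) a blocked domain."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in BLOCKLIST)
```

The point of checking at write time is that the user who posted the link gets the rejection, rather than every reader being interrupted by a warning at click time.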

Google has been criticized for banning fewer websites than it should: its preference is clearly towards false negatives, whereas Facebook's is clearly towards false positives.

Google doesn't seem to have a problem with accounts being compromised from visiting malicious sites. I do like the incentive razor you've applied, but I think there is a simpler explanation in this case.

I appreciate your thoughtful response and want to make clear: I'm not ascribing ill intent to you or to any of your individual coworkers. What I am suggesting, instead, is that the overall goals of Facebook as a company combine to yield this result, and that the overall result is a deliberate outcome of the company's strategy.

That being said, I'd eagerly await resolution of the bugs you've described.

I do not mean to suggest that use of OG or the like button should imply trust, but rather that Facebook's consistent crawling of a site over months or years should show whether it has ever been a bad actor, or whether it's ever been flagged by others as a site with ill intent. Indeed, that's exactly what StopBadware et al. do.

FWIW, I see that warning every single time I click a link from inside the FB app.

I find it annoying as hell, but I took it as a bad UX decision and not a conspiracy.

Thank you for expressing this so cogently and calmly. It's easy to get defensive when somebody starts flinging wild accusations like Anil does here, but informative and level-headed responses like this are much better at keeping the conversation on track.

It’s hardly calm and cogent — in fact it’s sort of a wild accusation — to describe a post as “flinging wild accusations” when it does nothing of the sort. Yes, Anil’s post is sharply argued and it’s definitely a polemical pushback against Facebook’s practice, but it’s a valuable part of the debate (especially with the added value of the comments below it), not a flame.

I don't see how you can describe the statements "Facebook is gaslighting the web"† and "Facebook has moved from merely being a walled garden into openly attacking its users' ability and willingness to navigate the rest of the web" as anything but wild accusations. You might argue that, as wild as they are, they're nonetheless true — but they are accusations and extreme ones at that.

I certainly feel that Anil's post could have benefited from at least a cursory application of Hanlon's razor.

† For those who don't know, the term "gaslighting" refers to a form of mental abuse where you undermine a person's confidence in their own perceptions and competence in order to retain their belief or loyalty. The typical example is an abusive husband who keeps his wife from leaving by making her feel like it's all her fault.

Agreed. I was expecting a much more substantial argument from Anil, but instead the post offers a list of disparate concerns without any real attempt made to link them together or to justify such strong statements. There are all kinds of explanations for discontinuing RSS import support and for malware warnings that do not point to nefariousness on the part of Facebook.

Appreciate the support, HalSF -- I try very hard to be intellectually honest even when making an obviously strongly-felt point.

Do you really think so? It's intellectually honest to suggest that Stop Badware — a system intended to cordon off viruses, spyware and surreptitiously installed junkware — should be used to punish a site because its threat detection algorithms are overly cautious? That doesn't strike you as even a little bit spiteful?

Stop Badware regularly flags sites that issue spurious security warnings in an attempt to mislead users. Facebook certainly falls under that description.

Can you give me an example? Sites get entered in there if they try to install malware on people's machines. I think it's straight-up ridiculous to say that FB's text there is an attempt to mislead users. It gives good advice that most people don't keep in mind at a funnel point that's known to be relevant to an enormous number of phishing and other attacks.

Do you actually believe the things you're saying? I'm struggling here.

(StopBadware actually doesn't flag sites at all.)

But you didn't make a case either that Facebook is attempting to mislead users, or indeed that their actions are misleading users at all. A number of sites gate outbound traffic for security reasons, and the language Facebook shows users is totally consistent with this - it doesn't read "You are about to visit a dangerous site", instead it warns users in generic language of phishing, etc. The error here is attributing malice to a bug or annoyingly-implemented feature. That feature clearly falls short of any reasonable "badware" criteria.

I've had a domain that has always been a simple redirect to our Facebook game get spam-listed. I've sent countless appeals for almost a year now and never heard back. Anything to suggest?

Having been bitten by this warning message myself, all this sounds a bit too familiar.

At the time, the cause was McAfee (yes, dust off the anti-virus conspiracy theories), which had flagged our domain as untrusted because our main virtual host (www.) was returning an HTTP 200 on its 404 Not Found page. Yes, that's the "security risk" they found. Sigh.
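The "soft 404" misconfiguration described here (serving an error page with a 200 status) is easy to test for. A minimal sketch, assuming stdlib-only Python; the function name and the bogus path are made up for the example:

```python
# Check for a "soft 404": request a path that should not exist and verify
# the server answers with a real 404 status rather than a 200 error page.
import urllib.error
import urllib.request

def returns_real_404(base_url, bogus_path="/this-page-should-not-exist-xyz"):
    """Return True if the server answers a nonexistent path with HTTP 404."""
    try:
        urllib.request.urlopen(base_url + bogus_path)
    except urllib.error.HTTPError as e:
        return e.code == 404  # proper 404 status: good
    return False  # HTTP 200 on a missing page: the "soft 404" problem
```

Scanners that treat soft 404s as suspicious do so because the same pattern is used by some parked or cloaked domains, which is presumably how a site ends up flagged for it.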

We've been big users of Facebook, but now we're all cancelling our accounts. We used it to organise trips and meet people, but this is all becoming too suspicious.
