Hacker News
Tell HN: Facebook is blocking the publiccode.eu open letter initiative as spam
316 points by dusted on April 28, 2022 | 66 comments
Just try and share the link on facebook, it's blocked as spam.

It's next to impossible to believe that this is not deliberate, considering what kind of fake news and scams they happily take money to advertise and spread.



Why would Facebook block an open source initiative, considering that they've contributed a ton of open source software (React, GraphQL, PyTorch, etc.)? The simpler explanation is that an automated detector misclassified the link because it shares patterns with spam links.


Another alternative explanation that wouldn't require malice on Facebook's part, assuming that this initiative does have its opponents, is that some of Facebook's reporting tools were weaponised, i.e. lots of false reports caused the automated detector to misclassify it.

(I don't actually expect it to have such staunch opponents, though, so the misclassification is probably due to other reasons.)


Open source software from a bigco is only smoke and mirrors to make it look like they are nice.

https://publiccode.eu/ and the current legislation trends in the EU are not fun for them, because they are not able to drive the narrative.


First, whether you like React or not, it's hard to argue that the most popular JavaScript framework is "just smoke and mirrors". It might be great PR, but there's also a lot of substance to it.

Second, even if publiccode.eu doesn't help Facebook in any way, that's not evidence that they have motive to intentionally block it. You'd have to demonstrate that they have some reason to want the initiative to fail, not just claim they don't care.


Does Metabook create government-related software? I can understand such behaviour from Microsoft; not sure what the motivation would be for FB.


Workplace from Meta is not targeted specifically at governments, but it is used by government orgs I've worked at.


“open source software from bigco it's only a smoke and mirrors to make it look like they are nice.”

So the React ecosystem is smoke and mirrors? GraphQL?


I mean... yes. This is the case with Microsoft, Apple, Google and pretty much anyone else on the Fortune 500 that owns a software stack. I still find it hard to believe Facebook would go out of their way to intentionally flag random political articles for seemingly minimal gain.


[flagged]


Could you please explain why you think this is being naive?

Same as GP, I don't see what interest Facebook would have in blocking this initiative. It would have very little impact on Facebook's business. Their moat is their userbase, and no amount of good "public code" would change that.


Facebook is fighting against the "huge target on its back" effect. The bigger an audience you have, the more interest there is in posting abusive content, whether it's spam, trolling, or something else.

The result is usually that you end up digging a bigger and bigger moat around your submission and curation process, so that your service is not completely flooded with crap.

I think this is what you are seeing here, rather than censorship of this specific link or cause.

I once posted a URL to a forum site I was developing for a community to Facebook, and somehow it set off the spam detector. For the next several months, I would get regular notices that a comment I left on another group's discussion was marked as spam, and was not visible to anyone else. I was posting regular comments, without links, apolitical, not critical of anyone, just adding to the discussion.

The notices have since stopped, but I still rarely get any response to my comments, and anything I post to my profile's feed usually gets "ignored" completely, while in the past I'd get a pretty stable number of likes from friends (in the low double digits or high single digits.)

As someone who has moderated a decent-sized subreddit in the past, I understand that there's no malicious intent on Facebook's part in this. They are just so much bigger than this one individual that they simply cannot see me, the same way I cannot see a bacterium, and may not notice a tiny gnat.


Or maybe they are trying to save costs by using algorithmic methods to block people and spam? And maybe, just like the little guy, they should curtail that, and moderate manually?

There is no reason they cannot, other than it costs. And if the "little guy" can manually moderate a forum, and respond to unblock requests, then it is no problem for Facebook, or any other big player too.

In truth, Facebook would still save over "the little guy's" moderation methods, as Facebook has the size and scope to develop software tools to significantly improve the workflow of manual moderators.

All this "but it is impossible!" blather I hear, re: manual moderation just means "cause our profits would drop a little".

Oh boo hoo! Manual moderation is the future.


I ran a site in the mid 2000s with about 2500 forums. The site barely paid the server cost. I was no Facebook ofc. I had about a million visitors per month. So me not being able to monetize this circumstance led to me manually fighting spam, which took up all my time. Plus answering support requests and continued development of the site as well as maintenance.

I can absolutely understand that they automate it. It IS impossible to manage the volume of posts and comments manually, even with the hired labor Facebook can afford.

It's not as simple as you make it out to be. How many moderators do you need to moderate millions of people's posts?


Similar to arguments about getting human support from Google being impossible... There is a very simple solution: allow users to pay a modest amount, say €25, to get 10 minutes of real human support. This would immediately rule out people spamming at scale. The other side of this equation is that if the ban is upheld, they must justify it in human terms, beyond "the ML said it's bad". A positive judgement could then be used to train the ML to be more lenient on this user going forward, at least in borderline cases.


That would be immediately seen as Facebook charging people to be unbanned. That would lead to people claiming FB bans accounts to make money.


Then have the money go straight to some uncontroversial charity. Facebook still benefits by the reduction in frivolous support requests, perhaps making this a feasible option.


> That would lead to people claiming FB bans accounts to make money.

More realistically it probably would lead to FB banning accounts to make money. Theoretically you could set up the incentives such that money from banned accounts can only go towards paying moderators, though I'm not sure if anyone would believe you.


Meta just posted billions in profit. Profit is not money spent on improving moderation. With such affluence flowing into coffers instead of investing in sorely needed improvements, GP is absolutely correct in the underlying premise: Facebook/Meta is doing next to nothing about its moderation problem. Even though they can easily afford to put a billion into it.


Facebook has a much more profitable revenue stream than you ever had, and as I mentioned, the ability to code tools to vastly improve the moderation workflow.

There is nothing wrong with automatic flagging, only with automatic action after that flagging.

And key here is, a way to legitimately dispute.

Your case is not a parallel case.


You seem confident in this, so I assume you have run the numbers. How many support agents do you need to moderate 3bn monthly active accounts?


We don't know because they haven't even tried, they'd rather make excuses which conveniently also make them richer.


I haven't tried it either, but I've tried moderating a decent size community, though nowhere near the numbers of Facebook, of course.

There are two challenges which combine into one problem:

1) The amount of bad content posted to a forum rises superlinearly with the size of the membership: for every x new readers, roughly x^y bad posts arrive (for some y > 1). This is because a large target becomes much more attractive to spammers, trolls, and anyone else looking for a cheap audience.

2) It is extremely difficult to find good moderators. A good moderator will be, within reason, consistent, reliable, AND unbiased. Even with a salary at stake, it is difficult to find people who are even two of those things.

In order to succeed in moderating content well, you need to leverage reliable moderators (today, this means rare humans) with unreliable ones (text analysis and less rare humans) through some sort of statistical system. To date, we have not found a reliable system which can work at scale with this problem, but my faith is in a transparent and verifiable web of trust type system. Slashdot does pretty well, but it's still too opaque, in my opinion.
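The leveraging idea described above can be sketched as a reliability-weighted vote. This is a toy model: `Verdict`, `spam_score`, and all the numbers are invented for illustration and are not anything Facebook or Slashdot actually uses.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    is_spam: bool       # the judgement of one flagger (human or ML)
    reliability: float  # 0..1, estimated from past confirmed decisions

def spam_score(verdicts: list[Verdict]) -> float:
    """Reliability-weighted fraction of 'spam' votes."""
    total = sum(v.reliability for v in verdicts)
    if total == 0:
        return 0.0
    return sum(v.reliability for v in verdicts if v.is_spam) / total

# One reliable human "not spam" outweighs two shaky automated "spam" flags:
votes = [Verdict(True, 0.3), Verdict(True, 0.3), Verdict(False, 0.95)]
print(spam_score(votes))  # ~0.387, below a 0.5 action threshold
```

The point is the leverage: a few rare, reliable moderators can overrule many cheap, unreliable signals, so long as the system tracks who has historically been right.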


They don't have to look at every single post manually. But there needs to be a proper appeal process in place for posts/comments flagged as spam by their algorithms that doesn't require making a big fuss about it on social media / HN to get an actual human to respond to you. Big companies like Facebook, Google, etc. would have absolutely no problem affording that, they're simply greedy and prefer to maximize profit at the expense of users.


I agree that leveraged manual moderation is the future, and there is no way around it.

The problem for Facebook is that the requirement for more moderators grows roughly linearly with the userbase, but the attractiveness to abusers grows exponentially.
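A toy calculation makes the mismatch concrete. Both growth rates here are assumptions picked for illustration, not measured figures:

```python
import math

MODS_PER_MILLION_USERS = 10    # assumed staffing ratio
ABUSE_GROWTH_PER_DOUBLING = 3  # assumed: abuse triples each time users double

def moderators_needed(users_millions: float) -> float:
    # Headcount scales linearly with the userbase.
    return users_millions * MODS_PER_MILLION_USERS

def abuse_volume(users_millions: float, base: float = 100) -> float:
    # Abuse scales exponentially in the number of doublings past 1M users.
    doublings = math.log2(users_millions)
    return base * ABUSE_GROWTH_PER_DOUBLING ** doublings

for u in (1, 8, 64):
    print(u, moderators_needed(u), round(abuse_volume(u)))
```

Under these assumptions, at 1M users each moderator faces about 10 abusive items; at 64M users, about 114, so the per-moderator queue grows without bound as the site scales.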


You know Reddit is basically based on that, and it is suffering heavily from not being able to find moderators for its communities? Now scale that up exponentially and you get FB communities.


Indeed. It's possible that there's someone maliciously flagging your link as spam, but to a great extent Facebook and Twitter etc do not think deeply beyond "does this look like spam / is this getting reported as spam a lot".

Try setting up a publiccode Facebook page and see if that gets flagged.


They should hire more people to do things they have not successfully automated!

Oh, maybe that would carve too much into the profit margin?

Luckily we do have governance that can make companies comply with doing ethical business -- even though it doesn't hit the bullseye every time.


Legally, the government can force compliance by large technical companies. In practice, there's no current chance of that happening. Believing that Facebook, Google, Twitter et al. are beholden to the government, given the past several years, is somewhat laughable. Government is slow, and tech business is fast. Our politicians cannot keep up, even when they try.


Tech corporate change has slowed down significantly, in that the market is now full of entrenched players, and the marketplace is extremely static compared to 20+ years ago.

Plus, 20 years ago only a small number of people were on the internet compared to now.

Where did all of Google's, of Facebook's growth come from? An expanding internet.

That's over. US adoption rates went from basically single digits, to essentially everyone, over 15+ years.

Other places are even pulling away, or becoming more restrictive. EG, China, Russia, and even the EU slapping fines around.

Google, Facebook, the massive growth is over here, in their entrenched markets.

Meanwhile, more and more politicians grew up with the internet, use it daily, have smart phones, and "get" it.

EG, an early adopter at 18, in 2000, would be 40 now. And, adopters of tech at 30, in 2010, are 42 now!

During the rise of Google, eg 2000s, even the youngest Senators and Congress members were unaware of email, or the implications of smartphones, ad networks, tracking.

Now, they almost all use the Internet daily, see annoying banner ads, notice how they are tracked.

Further, some ... before entering politics, will experience the horrors of trying to deal with a real problem with Google, Facebook, and their lack of customer service.

My point is, it was not the speed of market change, but instead, that it was a new market.

That's over now. The tech market is old.


And to speak to this...

Electronic computers have essentially been around for a century. The internet is essentially a subset of that market, and bam!, that massive growth.

With that growth has come all of this change. Some of this change is not even about the internet, but instead, about storage, capacity growth, and thus centralized control.

(Not just government, but for example, large banks have a central office approve or deny things, where 50 years ago branches made such decisions.)

My point is, big change in an existent market, due to new tech.

That market has cooled, with respect to change. Another grows: biotech.

Fueled by computing changes, and other tech changes, it will soon be as transformative as the internet was. Just genetically engineer humans as a start.

But all the other tech surrounding this, it will make the change the internet wrought seem tiny in comparison.

Just imagine furries everywhere, for example.

shudder


> Electronic computers have essentially been around for a century.

That's like saying cars have been around for more than a millennium, since the Romans and Egyptians already had horse-drawn carriages.

World War 2 spurred innovation in automated computing. Post-war, the first computers appeared. Still not enough for the famous quote from the 60s: "I believe there's a world market for maybe 5 computers." Whether or not that's an urban legend, in the 60s that quote was not a priori ridiculous. And let's not claim that fewer than 5 suffices for "being around" (e.g., animals considered extinct have later been found in such numbers).

Nevertheless, let's graciously say that computers have been around since 1960. That's 62 years; rather far short of a century.


Eniac was around in the 40s, and electromechanical computers before it.

Eniac meets the test, hands down. But, 80 years or 100, close enough.


This sort of naive, haughty attitude is why I can't stand hanging out with tech people in person any more.


Naive - definitely. But I'm speaking to what seems to me a fairly common observation. Feel free to criticize! I'm not so sure about haughty- that doesn't feel like a good faith interpretation.


> tech business is fast

Not nearly as fast as it used to be, except when it comes to breaking things. They get away with it because of folks' relentless optimism towards anything tech-related, and the fact that we are all used to muddling through random issues

Every time I hear someone express that users' bandwidth or compute are free and freely available, I cringe so hard...


> They are just so much bigger than this one individual that they simply cannot see me, the same way I cannot see a bacteria, and may not notice a tiny gnat.

"They're too big to do the right thing, so suck it up." What a terrible idea.

If Facebook is really that big, they should be broken up.


The reality is that Facebook is huge, and I personally cannot break them up, nor can I take any action that is likely to make it happen, so I am left with finding the best way to operate in this reality.

For me, that means creating my own little corner of the Internet which I'm comfortable in, and limiting my interactions with the giants.


Totally agree. You're describing Hanlon's razor. No malice behind blocking it, just a result of incompetence.


> I understand that there's no malicious intent on Facebook's part in this

The mere existence of such a system is malicious intent.


On the contrary, it's not even slightly difficult to believe that this is not deliberate because it happens all the time.

Not everything has to be a mad conspiracy. It's roughly one billion times easier to believe that a bad automated flagging system has triggered and marked this particular link as spam, than it is to believe that someone at Facebook is twirling their moustache, stroking their long-haired white cat, and hatching a plot to thwart the dastardly freedom-fighters of the open-source movement.

Facebook's automated moderation systems are famously shit.


> Facebook's automated moderation systems are famously shit.

Unlike Tumblr's NSFW filter, which is always correct /s

Read: ALL automated systems are bad, the whole AI thing is a joke, and will still be for a long time.


Update: I'm now able to post the link, I'm totally sure it has NOTHING AT ALL to do with this making HN frontpage, just a funny coincidence.


Something similar happened to me after criticizing Youtube's playlist drag speed on an HN comment, then they fixed that. Happy coincidences, right?


When you say 'blocked as spam' - I just pasted it into a new post, and it worked just fine? Or do you mean other people can't see it?


I was able to post it recently on FB, from the UK.


Something I wrote in the last 2 weeks on FB was autoflagged as spam. As a result I set my account to be deleted. I thought I was shadow banned. When I reactivated it yesterday, I was shown a wizard for the post marked as spam, with "object" or "confirm" options. Idk if that's new, but it does feel like they're training a new AI and crowdsourcing that process.

But then again they might genuinely suppress this initiative, idk.


What link? Can you provide a little background for those of us who have no idea what you're talking about?




Trying to sign this, I got a 429 "too many requests" when clicking on confirm, and something about 20 per 1 hour. Did I sign, or didn't I? Reloading the page tells me there's no pending confirmation ID.


do they also block it in messenger? afaik they block any url pattern that people tend to share often



Why would it not be spam? If many accounts are sending that link to their contacts, doesn't that exactly mimic bot-spam?


Absolutely not.

Proposing that just because something mimics something else, it is equivalent to it, is absurd.


I thought this is what trending means.


With so many large "crime waves", these small ripples which seem like crimes... are they? Is MZ (read: Mark Zuckerberg) next to AC (read: Al Capone)? Does he still, in his heart of hearts, refer to his users as "stupid *f's" (as he referred to them in the 2000s for "trusting" him)? Nothing changed to make his users any smarter, so getting caught openly admitting how he feels did not change his opinion of his users; it merely changed who he talks openly with, and the medium.


Does anyone here not consider Facebook users “stupid f*’s” for trusting him?


You could say this about any other network people take part in. The majority is always stupid from the point of view of someone above average intelligence. Does that justify exploiting them though? Are people fair game to be ripped off because they're naive or too busy with other stuff to be able to care?

I can trick a child into handing over their proverbial candy too but that doesn't make it mine to keep.


.eu isn’t the most legitimate looking domain. They probably just have some rules around which domains tend to be spam/scam/bs


Let me guess: You are not from the EU?

.eu domains are not uncommon over here.


".eu isn’t the most legitimate looking domain." -> Is this based on actual and specific information?


Brexit means Brexit.


A few years ago I changed the domain of a customer's website from .eu to .nl because it refused to show up in Google. Anecdotal evidence suggests .eu is not a good domain name for your business.


i think it's comparable to .us

neither are commonly used, but neither are spam domains


.eu domains are somewhat common; from a quick grep of my Firefox history:

adamj.eu alsd.eu alternatives.eu belleslettres.eu bergfreunde.eu berthub.eu bitsnbites.eu bizin.eu btcdirect.eu carlschwan.eu ceridap.eu coolblue.eu cpcwiki.eu cubilis.eu datadoghq.eu doshaven.eu epicompany.eu eui.eu eupl.eu euplf.eu eurescrossborder.eu europa.eu forum.eu framelabs.eu gdprhub.eu geizhals.eu gmic.eu hownormalami.eu ibabs.eu inflation.eu itgovernance.eu johnmathews.eu juliareda.eu list.eu maxlath.eu noyb.eu politico.eu postgresql.eu publiccode.eu qonto.eu sagefund.eu secondwheels.eu sifted.eu skikk.eu successfactors.eu surnamemap.eu tjinstoko.eu xahteiwi.eu z80cpu.eu

Some of these are fairly small/obscure; some are fairly significant: europa.eu is the site for a lot of what the EU does; politico.eu is a well known media organisation; coolblue.eu is a major Dutch retail business, etc.


Does it matter if a domain LOOKS legitimate? I mean .com doesn't look legitimate anymore either because of the amount of bad websites on there.




