Something particularly concerning about this incident is buried further down in the article.
Under the heading "Why Facebook Doesn't Ban Everyone", Lincoln explains how social network theory can identify the influential individuals in a network. This isn't an exaggeration - basic graph heuristics (such as comparing an individual's in/out degree relative to the rest of the social graph) can easily identify such individuals. Censoring those individuals can have an amplified effect on the graph structure as a whole.
For an example of how just a few individuals can make a big difference on graph structure, check out Watts & Strogatz's 1998 paper, Collective dynamics of 'small-world' networks [0][1]. I'd highly recommend reading it - it's very short and easy to understand.
Essentially, a small number of connectors linking to random individuals can drastically reduce the average number of steps needed to connect any two individuals. If Facebook wanted to stop the spread of a narrative it didn't like, censoring just these few individuals would make it much harder for information to flow through the graph as a whole, and hardly anyone would notice (since Facebook is only censoring a few people).
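To make that concrete, here is a minimal sketch of the Watts & Strogatz effect (assuming Python with the networkx library, my choice rather than anything from the article): rewiring even ~1% of a ring lattice's edges to random targets collapses the average path length, which is exactly the structural role those few connectors play.

```python
# Minimal sketch of the small-world effect; numbers are illustrative only.
import networkx as nx

n, k = 1000, 4  # 1000 people, each linked to their 4 nearest neighbours

lattice  = nx.connected_watts_strogatz_graph(n, k, p=0.0)   # no random shortcuts
shortcut = nx.connected_watts_strogatz_graph(n, k, p=0.01)  # ~1% of edges rewired

# Average number of hops between two arbitrary people:
print(nx.average_shortest_path_length(lattice))   # ~125, roughly n / (2k)
print(nx.average_shortest_path_length(shortcut))  # drops by an order of magnitude
```

Remove the handful of nodes holding those shortcut edges and the average path length climbs right back up, which is the censorship scenario described above.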
I’m banned from Twitter without explanation. I made an account, followed 3-4 tech celebrities like Paul Graham, and posted a single tweet along the lines of “I finally got pressured into making a dumb Twitter account...” The account got banned for community violations within an hour, and the support email said that my requests for support would not be responded to. Years later, I’m working on a video game and I’m worried this will make the upcoming marketing hard, in case my IP got flagged along the way.
Other good suggestions here, but here’s another: you tripped the New Bot Account detector.
As your account ages, there will be more information about whether you are real or not. Stopping the influx of bots before they do damage is important, so it’s quite likely that Twitter bans accounts early if they look like bots based on not much info (and potentially if they look low value).
You followed a few people who run popular accounts (not friends, fairly anonymous) but not mainstream accounts (so you’re a little more atypical), and then you posted once with, sorry to say it, not particularly inspired/original/specific/human content. Perhaps there were other factors. I’d guess you didn’t go through and fill out your profile much if you weren’t that invested in the idea? No birthday or avatar, maybe? Not following the easy setup steps that Twitter encourages you to do?
Combine this looking a little like a bot with the likelihood that you’re going to be low value because you’re not that engaged (debatable I know), and it kinda makes sense.
All speculation, but a possibility. Should Twitter be doing this? I don’t think it’s the worst thing, but it’s a trade-off. If this is what happened, it would imply that your marketing efforts for your game would probably not be affected at all.
That is one explanation, but the explanation I tend to believe is another: Twitter quickly bans new accounts to force you to "prove your identity" by adding a phone number.
You’ll probably find that these kinds of systems don’t have just a single ‘New Bot Account’ detector, but a tangle of multiple subsystems that automatically temp-lock accounts based on various heuristics written by different people at different times.
A new account might incorrectly trigger an account lock due to signals for proxy detection, another lock for a specific 2017 bot wave with Romanian IP addresses and a third generic fake account lock. That, in turn, triggers a recidivism threshold that sets off a permanent disable.
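As a purely hypothetical illustration of that kind of tangle (the subsystem names, signals, and threshold below are invented, not anything from Twitter), independent heuristics can each add a lock, and a simple recidivism counter then tips the account into a permanent disable:

```python
# Hypothetical sketch only: invented subsystem names, signals, and threshold.
HEURISTICS = {
    "proxy_detection":      {"datacenter_ip"},
    "2017_bot_wave":        {"romanian_ip_range", "new_account"},
    "generic_fake_account": {"no_avatar", "new_account", "few_followers"},
}
RECIDIVISM_THRESHOLD = 3  # this many locks triggers a permanent disable

def evaluate(signals: set) -> str:
    # Each subsystem was written independently; each one adds its own lock.
    locks = [name for name, needed in HEURISTICS.items() if needed <= signals]
    if len(locks) >= RECIDIVISM_THRESHOLD:
        return f"permanently disabled (locks: {locks})"
    return "temporarily locked" if locks else "ok"

# A brand-new account on a flagged IP range trips all three subsystems at once:
print(evaluate({"datacenter_ip", "romanian_ip_range", "new_account",
                "no_avatar", "few_followers"}))
```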
Working ‘at scale’ means that developers who unravel old code to fix a user-blocking problem for 5,000 users can be considered astronomically less productive than someone who fixes a minor problem for 100,000+.
Likely because if bad actors add bots and gain followers (across all bot accounts) faster than Twitter kills off the bots, the ratio of bots to real people will eventually (on a long enough timeline) grow without bound.
When people notice they're only seeing, hearing from, and seeing retweets from bots, they're likely to leave Twitter. That would be bad for Twitter.
Twitter requires a phone number these days or you can't post. Twitter also finds ways to leak your phone number to others so I will never give them that.
Your account was probably hacked and used for something that broke the rules. I got an email warning of someone trying to change my password within a week of making an account.
Twitter would be a lot more credible (although anything times zero is still zero) if they didn't have the checkbox "Let people who have your phone number find and connect with you on Twitter" on their "security" lockout verification page.
And, somehow, Twitter keeps spamming my email address in spite of my account having been "locked".
You just accused me of being a scammer for creating a Twitter account and making a single tweet and getting banned the same day. What is wrong with you?
Many other people, including me, have criticized Facebook on Facebook and other places without being banned. You make a claim, without evidence, that you were banned for publishing criticism. Are you absolutely certain your account was not compromised in another way? How do any of us know that you were not violating their ToS in another way?
I really wouldn't worry about it. Based on the 160 incoming bytes that I've read so far, and a few status bits I've skimmed over, I've determined that they are not in fact a real person. </s>
Of course you can get around their "security" system, but you should ask yourself whether it is a game worth playing especially since the rules of the game can change overnight.
Getting tied to a platform is a dangerous bet.
It's absolutely terrifying to me that they can do this sort of thing, or selectively censor DMs containing certain links, or from certain people or geofenced places, and it would never make the news.
It seems that the influence of a link comes from their awareness of the clique's discourse and their reputation among the clique members. The algorithm, on the other hand, seems to latch on to traffic. There is room for a wedge here.
Perhaps the algorithm can be defeated by employing a few extra "link" accounts (either sock puppets or simply ardent followers) with a shared reputation (a shared brand).
In other words, a link between two cliques could be held by a special kind of clique.
Your concern seems to ignore that humans are not static agents.
They’ll route around black holes.
Conversation will not simply stop. It will move to a different platform or, should the network of platforms using HTTP start to look like black holes, where useful info goes in but little useful info comes out, will pivot to a new protocol.
The really important infrastructure is the hardware and basic protocols.
There’s no reason to see the network “inside Twitter” as important. Highly visible, but it’s just one group’s choice of what to store in a database.
Alternatives can be copy-pasted together in hours by competent groups.
Tech is so up its own ass these days because it’s measuring inside a few Petri dishes and ignoring the Internet as a concept in and of itself.
The internet is not FAANG and Silicon Valley. It’s the hardware everywhere and fundamental protocols.
So, yes, the math is right. But the math applies at various scales. For as visible as Twitter and the rest are, they’re not in and of themselves a distributed global network of hardware and routing protocols. They’re black holes in a bigger universe.
That's interesting. The disorganization produced by all the bans will probably impact votes for conservatives for a long time, despite not censoring that many people.
Maybe. The outcome of banning "connectors" typically reduces connectedness between cliques, rather than within the cliques themselves. (In graph theory, when a group of people who share similar interests are all well connected to each other, we typically call that a clique.)
When people complain that they increasingly find themselves in echo chambers, and that "people don't talk to people from the other side," what they are describing is connectedness across the graph dropping while connectedness within cliques remains strong or even increases.
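A small sketch of that dynamic (again assuming Python with networkx, a library chosen purely for illustration): two dense cliques joined by a couple of connectors. Betweenness centrality picks the connectors out instantly, and removing them severs the cliques from each other while leaving each clique internally intact.

```python
# Sketch: banning connectors cuts between-clique links, not within-clique ones.
import networkx as nx

G = nx.complete_graph(range(0, 20))          # clique A: nodes 0-19
G.update(nx.complete_graph(range(20, 40)))   # clique B: nodes 20-39
G.add_edges_from([(5, 25), (6, 26)])         # two "connector" relationships

# The connectors dominate betweenness centrality:
central = nx.betweenness_centrality(G)
print(sorted(central, key=central.get, reverse=True)[:4])  # nodes 5, 6, 25, 26

G.remove_nodes_from([5, 6])                  # "ban" the connectors on one side
print(nx.has_path(G, 0, 39))                        # False: the cliques are now cut off
print(nx.is_connected(G.subgraph(range(20, 40))))   # True: clique B is untouched
```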
I think FB has a fundamental issue here, which is that:
1) they don't feel they can NOT police content
2) they are wayyyyy too big to police content manually
3) any AI they train to police content, is not going to be able to state clearly why a given post/account was banned, and anyway...
4) if you state objective, clear rules that are automated, it will be pretty easy to game them (e.g. using euphemisms or code words for objectionable content)
I don't see how FB can get out of this situation. Any social network that is as large as it is, which relies on advertising (and thus cannot be as content-neutral as e.g. phone companies are regarding what you talk about on the phone), is going to face the same situation.
Of course, if FB ends up getting way smaller as a result, that would be fine by me. But I'm not holding my breath...
But we should be explicit here-- FB isn't some neat, nerdy social graph software that just happened to get really popular, like git for regular people or something. They have a business model to shake up their social graph to maximize engagement, and the implementation achieves this by keeping everyone maximally mad as hell at each other. It spreads conspiracy theories efficiently. It and its users are easily exploited, and they continue to be exploited by any nation-state, nefarious actors, or even just some dude looking to make a buck spreading fake news. That all feeds into the same destructive behaviors for the users as the system desperately tries to keep them engaging more and more. Facebook's own software and UI exacerbates these problems.
With that starting point in mind, I agree with your 1, 2, 3, and 4 above. But this is less like email having to deal with new sets of problems as it scaled to the entire world. It's more like a drug cartel trying to figure out the maximum amount of their fundamentally destructive product they can sell before the government finally decides to come in and break up their enterprise.
Censoring content rather than refuting its claims is counterproductive. If skeptics are not able to argumentatively refute a claim, they will not succeed in changing minds. The current strategy of trying to silence viewpoints will eventually fail miserably.
The issue is that when a platform outsources its censorship (“moderation”) to a third party that it still controls, rather than a third party that is actually perceived to be neutral by most people, then you create distrust which further fuels conspiracy theories. Also a conspiracy theory may actually be true. Jeffrey Epstein was not discussed during the 2016 election aside from a few fringe articles. It was too toxic for mainstream to touch. And everyone in Hollywood knew about Harvey Weinstein and even joked about it. These topics were covered up only for so long. Today there are lots of insane and farfetched theories being floated. But the theories being censored today that are indeed based in reality, will eventually become common knowledge and the censors will be implicated in the coverup thereby harming their trust.
Discussion, debate, argument, evidence, analysis lead us towards truth. Mute only appears to work and can be a useful political tool for only so long.
Today our public forum is virtual, but still a public forum. Trump wasn’t allowed to block certain followers because of their right to be in the forum and hear from their president.
So the question is whether governments will acknowledge this fact and limit the power of our big tech group communication platforms and enforce transparency and non-biased appeals process. Or will they cede total control of our public forums to Big Tech out of short term political convenience.
I'll only add that it's going to be difficult as hell to stay on topic as FAANG pushes back against attempts to deal with their monopoly practices. Uber and Lyft effortlessly destroyed the original meaning of "ride-sharing" for their own interests. Expect Google, Facebook, and Twitter to put all their weight behind narrowing the meaning of free speech to simply mean ensuring everybody has equal access to be manipulated by their commercial blackboxes.
Probably true. And can always spin it later as well, especially when people subliminally need to agree with the spin so as not to explode their own head by challenging the very basis of all their thinking.
I think your point is valid, and relevant. Well stated.
Having said that, I think if FB were to stop doing that, some other social network would, and would replace FB the way they replaced MySpace. So again, I think the current "we want one social network for everyone" situation, inevitably produces this kind of behavior.
But, I agree with your point, and it is a good one.
Thank you for stating these points. I believe that FB is going to be cut up due to these issues and the interplay with politics, just like Bell was.
Per my views on the situation, Zuck is cooked. When he went before Congress to testify, he was being given the classic robber-baron deal that the US has used a few times now. By analogy with the actual robber barons: at first, when the railroads were new, Congress didn't care much about them. Then, when all the businesses and politicians started to use them, they noticed that the railroads were very important. When beef suspiciously didn't get to Chicago on time, or when trains delayed representatives to important votes up in Albany, the politicians got legislative. A similar scaling of importance has occurred with big tech.
The US then typically takes these newly vital sectors and then gives them the deal: Fix it or face jail time. Compliance means that your family for countless generations is never poor again. You go kite-surfing for the rest of your days. Congress works with you to write the laws in such a way that you get a monopoly on the sector with a nice 3% above inflation growth baked in. Congress and the US get a sector that isn't scammy and just works (albeit poorly at times). Congress gets you to testify for a few days, ensures that you have lied to congress in that time, and holds that jail time over you as leverage to comply.
Zuck was given this deal.
Per the points OP outlines, Zuck cannot follow through on his end of the bargain. He cannot get FB to work the way Congress is demanding that he do. Zuck may not be facing jail time (yet), but may when he gets in front of Congress again.
After Dec. 37th / Jan. 6th (because 2020 never ends), Congress likely sees Big Tech as a direct personal threat (not existential) and is likely to move directly against that threat with due haste.
While I think Mark Zuckerberg is unlikely to face jail time, there are some points to your post I agree with. Especially the analogy with railroads.
The other point I would add, is that it's not just Congress. I have heard government leaders in Mexico, Poland, Hungary, and Turkey all voicing similar concerns, and in some cases actually passing legislation already. I think more will come.
> 2) they are wayyyyy too big to police content manually
To be more exact: they do not want to spend money on competent policing. They probably have the capability to employ ten times as many human moderators as they do now, thus reducing the blatant error rate (at least in the case of highly influential pages and people), but that would eat into their profits.
Assuming a user:moderator ratio of 10,000:1, they would need to hire 200,000 moderators to cover their 2 billion users. Then at that scale, they’d need to hire thousands of meta-moderators and meta-meta-moderators too.
To muse a bit: a city of 2 billion would need a lot more administrators than just 200,000.
Of course, Facebook doesn't provide anywhere near as many services as a city would, but ... it still seems to me that the current user:moderator ratio is very obviously insufficient.
In the early 2010s, this sorta worked, because the effects on real-world politics were insignificant. Now everyone is watching their every move, and they have a very underinvested infrastructure and no clear idea what to do.
Taking a random large city as an example, New York City has over 300k employees [1], which is a 25:1 ratio with population. The Police department has 55k employees [2], which is a 151:1 ratio with population.
No society with numbers north of somewhere in the two digits works without some kind of rules, administration, and enforcement. I'll bet you there are at least half a million people staffing courts in the US, maybe even a full million, and probably an equal number of law enforcement officers, and that doesn't get to public and private attorneys which is probably another million.
Which means something like 1% of the population of the US is probably involved in just administering the laws of society.
Now, maybe for a society where the limits of how people interact is communication you could get away with lower overhead. But that's going to be a constant factor of the same underlying dynamic.
Also, we've done the same damn thing with freedom of discourse that we have with almost every other liberty in the US: we turn it into a thought-stopping cliche and glide on treating it as an indulgence rather than investing in thinking about the limits, edge cases, and responsibilities.
There’s no need for a specific class of meta-mods. Just use the same system as Slashdot: periodically throw a random moderator’s decision in front of another mod. Ask them if the decision was fair. A portion of the bad decisions will get reported and screened.
If all mods are routinely screening each other’s work, everyone has an incentive to be fair. Biased mod decisions are thus a risk to your continued employment because you know your work is spot-checked.
If meta moderators are at the same ratio, then that’s only 20 meta moderators and only one meta-meta. The base of a log-scaling pyramid is approximately as large as the entire pyramid.
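Worked out with the 10,000:1 ratio assumed upthread at every level of the pyramid, the base really is almost the whole thing:

```latex
\frac{2\times10^{9}}{10^{4}} \;+\; \frac{2\times10^{9}}{10^{8}} \;+\; \frac{2\times10^{9}}{10^{12}}
\;=\; 200{,}000 + 20 + 0.002 \;\approx\; 200{,}020
```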
Where's the problem in that? FB makes $18.5 billion a year in profit. 250k employees at $30k each would work out to $7.5 billion a year in cost; it would hardly leave them bankrupt.
Also, Facebook could offset a part of that cost by charging users a token amount - say, five dollars or something like that.
First, it’s $7.5 billion freaking dollars. Of OpEx.
Second, it’s 250k human moderators who will feel obligated to, you know, moderate stuff. They’re not paying $7.5 billion to just allow people to post whatever content they want now! Very quickly they will run out of obviously clear-cut cases and will be moderating anything and everything under the sun.
How anyone could think 250,000 people running around with mod powers on people’s Facebook walls is not deeply, deeply problematic... I cannot imagine.
> First, it’s $7.5 billion freaking dollars. Of OpEx.
So what? If that cost is necessary to run a platform that is safe for those who are on it (= from spammers, catfishers, threats, pedophiles or right-wing extremists) and that does not serve as a breeding ground for threats for society (= "freemen"/"Reichsbürger" groups, militias, neo-Nazis, antisemites, Qanon, ...), then this cost should be paid by the ones making the profit (=Facebook), not society.
> Second, it’s 250k human moderators who will feel obligated to, you know, moderate stuff.
So what? The digital equivalent of police(wo)men on patrol. Just because it's The Internet, it should not be a free-for-all land.
> How anyone could think 250,000 people running around with mod powers on people’s Facebook walls is not deeply, deeply problematic
The US alone has nearly 700k police officers for 300M people, that are running around with guns and regularly kill people, and yet society accepts this.
If FB charged money, it would totally change their business model, and I believe that could solve some of the issues. However, I have to wonder if people would actually be willing to pay money for FB? It's easier to quit something when you just have to stop paying them. They might get a lot smaller if they charged money.
I think it's far worse than even this. Facebook reportedly receives over 50000 posts per second on average. With 200000 moderators, each one would have to read a post every 4 seconds every hour of every day. I would place the number of human moderators at least an order of magnitude higher.
Fixed that. Maybe some nefarious three-letter agency found another way, maybe not. Even if they did, they have a powerful incentive never to let it be known. And it's still almost entirely algorithmic.
On the other hand, human moderators are inherently going to be entry-level individuals in a call center in a low-wage country. These are not the same things.
How would you feel about human moderation of your email?
Seems logical if you're using their stuff, they get to do anything they want with what you put there.
Expect them to read every word, and use it to make money.
3 and 4 are easy - have a manual review of automatically flagged content. You get clear reasons, better resolution of false positives, and it’s not simple to game. And, in another plus, it can feed back into the AI training.
I think part of the problem at FB's scale is getting any level of consistency across human moderation.
If you ask 10 good, experienced, human moderators whether a controversial post is allowed, you'll probably get at least 15 different answers and justifications.
FB would probably find it harder to train from this, as each moderator will be using different justifications, and reaching different decisions.
At FB's scale, they want something cheap and straightforward which they can implement automatically and go Kafka on the lack of appeals. The reality people need to wake up to is that when a business is at FB scale, people who fall through the cracks due to a broken algorithm simply don't matter. Google doesn't care if you get locked out of your Gmail, in the same way FB doesn't care if you get spuriously banned.
When users place more value on a service than the service places on them as a customer, it feels like a market failure scenario that leads to these kinds of outcomes. When the go-to price is $0 though, it isn't clear how you resolve this.
Any justice system also has a variety of different judges. Of course in an ideal world there would be perfect consistency, but imperfect consistency would still be a lot better than just letting an algorithm decide. Of course they want something cheap and straightforward, so that's where regulation should step in to make them go against their own interests, in the interest of population at large.
Facebook has nothing to gain from transparency. Inevitably, they are going to be inconsistent when it comes to edge cases, which will be used against them.
Facebook has money, and automated filtering of content (rather than manually scanning everything) can make it scalable. Given their monopoly it should be possible to hold them accountable / there should be transparency.
While this certainly makes it much less likely that costly massive human moderation would actually be implemented, I don't think that really qualifies as an argument for why it shouldn't be done.
If you think rossdavidh is right about those issues Facebook is having (and I think he's right), then that's the reality in which the company has to operate. If a business model leads to infeasible costs due to fundamental issues, maybe that business model shouldn't exist as such.
Obviously Facebook (or any other company) doesn't want to do anything that significantly increases their costs without providing revenue if they can avoid it, but that doesn't mean it shouldn't be expected of them.
They exist because they’re popular and because the currently successful ones acquired or strangled all of their smaller meaningful competitors using investment capital and money from first mover advantages.
There's definitely an anti-trust dimension here, but it's not related to first-mover advantage (Facebook wasn't nearly the first mover) or to capital; Facebook was widely thought to have overpaid for Instagram, and it was Zuck's foresight (or perhaps risk aversion) that gave him the conviction to proceed with the transaction.
Haha, are you serious? Instagram, WhatsApp, Snapchat... Instagram's Systrom explicitly said that he sold because otherwise Facebook would snuff him out.
Facebook never acquired Snapchat so I'm not sure what you're talking about there. Instagram and WhatsApp have continued to thrive since their acquisition, so from a user perspective nothing really changed, it's not like you need a FB account to use insta or whatsapp.
GP:
> They exist because they’re popular and because the currently successful ones acquired or strangled all of their smaller meaningful competitors using investment capital and money from first mover advantages.
Facebook literally copied Snapchat features into Instagram right away - Systrom even openly admitted it. That fits the bill of "strangled". Snapchat is stagnating now, because everything it has is there on Insta.
Instagram and WhatsApp thrived only because Facebook acquired them; both companies knew they would otherwise be smothered to death. That pretty much fits the bill of "acquired... using investment capital and money from first mover advantages".
They don't need you to use your FB account because they cross share data anyways. It's not like it's a painstaking operation to link different accounts, once you have all sorts of device data (which Facebook had, even on Apple devices up until recently).
That may be the case now, but Facebook did start requiring Facebook accounts for their Oculus VR headsets.
Much to the dismay of many. Maybe it's the test case FB is monitoring to look how to move ahead with their other acquisitions. Last I heard Quest 2 sales are good..
You don’t have to dictate popularity: just create legal liability frameworks that make a solution like Mastodon more appealing than a solution like Facebook or Twitter
People can still be banned on Mastodon; it changes absolutely nothing. In a world where Mastodon becomes more popular than FB, people will complain about being banned from the most popular node instead of the most popular HTTP website.
Mastodon is federated, so you can be banned from someone else's instance, but you can always start your own instance, or join one where you're not banned.
Right, just like how you can start your own website if you get banned from someone else's, it's servers all the way down. The people complaining about being banned don't care that they can create their own instance, they care about losing access to the popular instance.
The advantage of mastodon is that banning is not monolithic: so, if I have instances A, B, C, D: B can decide (through whatever process it uses for such things) they don’t like A’s content and ban it without affecting users on C and D. But, the problem it really solves is moderation: if I start an instance with 10 friends, moderation of posts on the instance is almost a non-issue that can be handled by a couple admins in their spare time. Handling off-instance posts is more difficult, but you can either maintain a ban list to block a list of bad instances or you can adopt a default deny policy and only federate with instances that have content your local community wants to see. Either way, the Mastodon model seems better because it allows policy to be specified at the “local” level rather than requiring a uniform policy across the network.
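A conceptual sketch of that local-policy idea (this is not Mastodon's actual code, just a Python illustration): each instance either keeps a blocklist on top of default-allow federation or an allowlist under default-deny, and no instance's decision binds any other.

```python
# Conceptual sketch of per-instance federation policy; not real Mastodon code.
class Instance:
    def __init__(self, name, default_allow=True, blocklist=(), allowlist=()):
        self.name = name
        self.default_allow = default_allow   # True: federate unless blocked
        self.blocklist = set(blocklist)
        self.allowlist = set(allowlist)

    def federates_with(self, peer: str) -> bool:
        if self.default_allow:
            return peer not in self.blocklist
        return peer in self.allowlist        # default-deny: allowlist only

B = Instance("B", blocklist={"A"})                            # B bans A's content
C = Instance("C")                                             # C is unaffected
D = Instance("D", default_allow=False, allowlist={"B", "C"})  # D federates narrowly

print(B.federates_with("A"))  # False - only B's users lose access to A
print(C.federates_with("A"))  # True  - the ban is local, not network-wide
print(D.federates_with("A"))  # False - A isn't on D's allowlist
```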
> if I have instances A, B, C, D: B can decide (through whatever process it uses for such things) they don’t like A’s content and ban it without affecting users on C and D.
Yes, but this isn't really analogous to the complaint being presented. In the Mastodon world this article would have been titled "A Disabled My Account After I Criticized Them". The author is still free to use B, C, D, etc., just like with HTTP websites.
> Either way, the Mastodon model seems better because it allows policy to be specified at the “local” level rather than requiring a uniform policy across the network.
To be clear, I do think an ActivityPub-like architecture is preferable for social networking, but federation doesn't obviate the concept of popularity, it only seems that way now because the fediverse isn't popular, if it ever did become popular then we'd see the exact same complaints when people get banned from popular instances.
The only reason people care about Facebook disabling their account is because Facebook has all the users and their data in its silo. In Mastodon, A disabling federation with B isn’t really the same sort of problem because it’s not a global action across the network.
A defederating from B is like a website blocking your IP. Facebook banning an account is more like your ISP turning off your internet access.
In theory, absolutely. In practice, I've heard of major instances using their power to dictate "If you federate with $out-group, we won't federate with you". Most admins want to federate with other major (mostly good, popular) federations, so they capitulate.
As such, being kicked from one instance leads you to sign up elsewhere, until a major instance notices and tells your new server admin to kick you out. When you create your own instance, the major instances use their soft power to ensure almost nobody else federates with you.
Sure you can, just require interoperability with similar services and prevent them from straight up buying the competition and watch their "popularity" fade away.
I think preventing FB from buying competitors is perfectly reasonable, but that doesn't really mean much with respect to FB's popularity. It's not as if users have to choose "should I use FB or instagram" they can and do use both, and that would still be possible whether FB owned instagram or not.
How about the other part of the comment that you've ignored? If people could use Alternative X without losing their Facebook/Instagram connections, Facebook would fade into irrelevance. There's absolutely no technical reason for everyone to have a separate Facebook/Instagram/Twitter account with dozens of other IM services instead of picking one and communicating with others regardless of what they've picked.
That future is only one regulation away. The most common reason for staying on Facebook ("all my friends are there") disappears.
> If people could use Alternative X without losing their Facebook/Instagram connections
They can already do that. Using an alternative does not cause someone to lose access to FB.
> There's absolutely no technical reason for everyone to have a separate Facebook/Instagram/Twitter account with dozens of other IM services instead of picking one and communicating with others regardless of what they've picked
> Using an alternative does not cause someone to lose access to FB.
Deleting Facebook causes you to lose access to your Facebook connections. Why can't you just use your Twitter account to communicate with someone on Facebook? There's zero technical reason for this not to be possible.
It's not. It's a third-party workaround that Facebook could kill at any point if it gave a shit. Do I really need to explain to you how that's different from first-party support for something so basic? Just like with email, you should be able to enter a username and an IM provider and just talk to them as if you're using the same platform. It's really not rocket science.
The Fediverse (Mastodon and compatible providers) already does that without any hiccups. It's possible, we know it has worked for decades (since we all use email), and it should be a requirement for every social network / IM service in existence.
Twitter has only been barely profitable as of a year ago, yet it has continued to exist as a large social network during that time, so that explanation doesn't fit.
We split up Standard Oil, and many other trusts, without resorting to authoritarianism. I'm not saying it's trivial, but it's probably not harder than breaking up AT&T (also a network to connect people) was.
Just don't use them. And try to convince people you know not to. A ban isn't even necessary if enough people decide a platform works against their interests and cease to pay attention to it.
I think it's more, "don't consider any part of it private."
I always ask people for their email password when they say "I've got nothing to hide." I think it's rubbish.
At the same time, I treat facebook (all of it) as if it were public. If I wouldn't wear a t-shirt with my DMs on it, then I don't write them.
Not at all, I would very much like to protect personal privacy rights. I'm saying a small number of businesses have convinced the general public to invest a massive amount of time, money, and attention in virtual spaces. These organizations intrude in our lives because we use their platforms. We need to quit using their platforms.
Sorry, I was trying to characterize the sort of argument given: "Just don't use it" only works if the people you want to interact with are available some other way in a nearly equivalent fashion: if one's friend group is mostly on Facebook, you sort of have to pick between using FB and missing out on a certain set of social interactions.
Oh, then I agree, but I would suggest it's a worthwhile inconvenience. The more people quit these services, the less incentive other users will have to remain. It's surprising sometimes how even one person's decision can generate mimicry.
If Facebook wanted, they could massively ramp up staffing, hire moderators themselves, and pay them a livable wage instead of outsourcing the cost that their business incurs on society (which has to pick up the slack from conspiracy myths, fascism and other shit), on users who are mistakenly banned and lose their primary contact with, e.g., overseas family, and on the moderation staff that is hired via third parties and exploited.
I would shed no tears if FB profits went down, but... I think you're underestimating how big they are. Two BILLION users, in ~100 countries (each with its own standards on what is acceptable, legal, repulsive, etc.). If you had one moderator per 100 users, that would be 20 million moderators. I don't think they could hire 20 million high-quality moderators (there's only one dang, and we're not giving him up), and certainly not with $18.5 billion.
They need to be forced, by government regulations, to have to talk to people who have had their accounts banned and to provide them some kind of legal recourse. The law needs to recognize that for massive social networks like Facebook and Twitter and for the Apple and Google App stores that people are actually entitled to those services, and have a right to use them, and shouldn't be kicked off of them without any form of recourse. They've grown too big to just consider them like the corner coffee shop and allow them to kick anyone off of their platform for whatever reasons they cook up.
And that means that they will need to staff up and that will come out of their profit margins, and that's okay. That's why they need to be forced to do it by regulations.
Which is why social media networks should make the far easier rule that they will only ban illegal content. If people want something censored, make it illegal. If people don't like they things are being censored, take it up with their local representative.
But they may not be able to do it well enough that it will be notably better than an AI. Scaling manual labor can't work indefinitely due to the scarcity of talent.
Keep in mind that while a human being is very good at categorizing things, they're terrible at doing it over and over again, so the scarce talents are likely consistency and conscientiousness.
You should expect that, overall, the (n+1)th person you hire is marginally less talented than the nth. And while people are constantly entering and exiting the labor pool, this is negated by the fact that you're trying to expand your share of it.
So while there's a lot of randomness in recruiting, if you enforce any kind of standard, you'll hit a point where not enough candidates meet that standard, so you have to drop it. Once it is below the threshold of "better than AI" you have no reason to hire more people.
And you'll try to expand your eligible pool by raising wages, but everyone else wants consistent, conscientious people, too, so you're liable to run into the economic constraint: the amount you have to pay someone will exceed the value they're adding as a moderator.
They can solve this issue by adequately providing for self-moderation. Avoiding this approach reveals an intention to impose. Users leave once they realise the violation.
This seems overly glib. Facebook’s monopoly status means it’s necessary for many people. As per the post, if this person were a small business owner, or simply not well-connected elsewhere, this could be both financially and personally devastating. As it is, he’s only protected because he has other access to business and personal pages.
One company shouldn’t have this sort of arbitrary power over people’s lives.
I find the idea of Facebook being "necessary" for anyone quite spectacularly laughable. This isn't air or water. You can live without it, and many people happily do. Facebook is about as essential as essential oils.
That's a short-sighted statement. On a personal level, I completely agree with you. Facebook is a giant waste of time for the typical user and a big +1 for your life if you ditch it. For many business, however, it's critical. My local Snap-on Tools dealer uses it to announce where his truck will be each day and what tools and equipment he's running on special. He hates Facebook but told me the other day that he conducted almost a million dollars in business using the platform in 2020. Sure, he could kill it or switch to MeWe, but that would significantly hurt his exposure to his customers, who use it regularly.
Have you considered asking him to cross-post to a Telegram group or similar? While it's imperfect:
* It can still be bridged to Matrix, ensuring you don't have to use Telegram if you don't want to
* There are open-source Telegram clients (at least for Android)
* It doesn't sound like too much more effort: One extra message weekly (for sales), and one extra message daily (truck location)
* Using one doesn't preclude you from using the other - you don't reduce his exposure or sales, and help him reach customers who hate FB as much as your Snap-On dealer.
* It reduces everyone's dependence on Facebook
All that said, my Snap-On dealer just has his cell number available and follows a predictable route - we know he stops by every other Monday, and we can text him if we need something swapped out. It works for us, but I won't pretend to know your guy's business better than he does.
I deleted my Facebook in July 2019 and my business still runs. So, for me, it was neither water nor air.
I am not so sure about candidates for public offices, though. Especially in majority voting systems. Not having a social network presence may be crippling for their electability.
Phones, like food and water, are considered "essential" by the US government and provided for free to citizens who can't afford them. The definition of "essential" grows every year. AFAICT you need a FB account to apply for a job at FB and to look up some business' address.
None of that information is "necessary" though otherwise I would be dead. Painting Facebook as some kind of loft extension built on top of Maslow's hierarchy of needs is disingenuous at best and propaganda at worst. I've lived happily without Facebook for many years, not using it has had zero impact on my life. Not zero _meaningful_ impact, but _zero_ impact. Facebook is NOT a necessity.
You're conflating monopoly with popularity. Take it up with the individuals that willingly put their information in there, nobody is forcing them to do that.
I applaud you on a moral level, but I hope you realize how wrong you are on a practical level. Being connected to the pulse of the world comes with very powerful pros (meeting a partner, finding a job) and cons (disinformation, social pressures). Comparing it to essential oils is wrong.
Also, the Facebook story isn't finished yet. Is it really that far-fetched to believe that their influence over our lives could 10x over the next 50 years? I hope it won't, but I wouldn't be willing to put my money on it. So keeping in mind that the context of this thread is about getting permanently banned on Facebook, for anyone who thinks that might have zero impact on them, I would suggest you still keep a foot in the door in case you change your mind down the road.
To be fair, nothing is really "necessary" beyond an 8' x 6' cell, potable water, and a crate full of Huel or something, but no sane person outside of those who actively seek an ascetic lifestyle would actually want to live like that.
Just because something isn't strictly necessary doesn't mean that life can't become difficult without it. The sad fact of the matter is that by cutting off Facebook you're also cutting off the primary way of staying in touch with some people, along with things like events and so on. Yes, there are workarounds, of course there are, but the vast majority of people don't care and can't be convinced to care about the politics and social effects of Facebook's dominant position.
The number of organizations that have given up other means of organizing for FB groups is, well, far too many. With Yahoo Groups gone and Google Groups having the Sword of Damocles hanging over it (not to mention the issues average folks have with using email discussion lists and their effective deprecation by anti-spam measures), FB groups looks like an easy option to people, and suddenly to participate in an activity you need a FB account, or a very communicative friend with one.
I can give my personal example of how facebook is literally a life-saver for me, and I'm sure there are many others.
I'm strongly passionate about rock climbing, and try to structure my life around it as my primary form of recreation. If I'm traveling for leisure, climbing is a big part of the agenda. There are many communities for finding people to climb with on Facebook, and I've used them extensively.
Without facebook, I'd more often have to resort to self-belaying, which is much more dangerous (among other reasons, if you're knocked unconscious there's no one to start a rescue or call for help). Having access to the various regional climbing groups on facebook has put me in touch with so many people, so I have only had to do that on a handful of occasions.
There are other examples I can think of, even if none of them have been so drastic. But without access to Facebook, many people would have their lives seriously impacted, and possible lack of access to life-saving resources.
At least in my country/city, you MUST publish your events on Facebook to get an audience. Even I keep a Facebook account just because it's the only place that has every single event happening in my city. Before COVID I would always open Facebook's event calendar before a night out to check out what concerts/parties are happening that night.
Can you name some of them? All apps/services I've used always had some way of register account with email. Granted, this option is sometimes "de-prioritized" (displayed less prominently).
Well I think Google is a better choice than FB, but I can't think of any services I use where one or both of those are the only options and email signup isn't.
The only one that comes to mind (and that annoyed me the other day) is ProductHunt, which has 4 options for signup, but not email.
Facebook has a complete monopoly on online sales where I live (no ecommerce platform is well developed), so getting cut off from Facebook probably means you are no longer in business.
Also, as a buyer, I have to use my mother's or a friend's Facebook account to find items or contact sellers.
> I find the idea of Facebook being "necessary" for anyone quite spectacularly laughable.
Facebook may not be necessary, but there is a very high probability it is the only social medium that the majority of your acquaintances & relatives use.
Being forcibly disconnected from humanity is extremely isolating & harsh.
You're not disconnected from humanity. You can still call people, text them, message them on Whatsapp, meet them in person (maybe not as easily these days)
It's not like leaving Facebook turns you into a hermit. If it does, then maybe it's time to find a new group of friends.
These people are attempting to actively share their status, are broadcasting their lives. But the medium has cut you, the receiver, off. This is an obvious and horrific disconnect.
Even if I did have contact info for a thousand people and kept a schedule, maintained contact on a 1:1 basis, took the time to inform myself of people's lives, isn't it a bit rude and disrespectful, an ineffective use of other people's time, to make them recount to me personally what's going on, what's happened to them? Literally call them up and ask them to repeat the events of their social media feed for the past month? Oh, and read me people's replies too. Who "loved" that status update? Can you send those pictures? The asks, the wastes of time, to get anything resembling the faintest hint of parity, are practically endless.
This is just blindingly, blaringly, obviously unacceptable in every way. Facebook's monopoly on mainstream people-you-know social media has ipso facto become humanity's warm glow, the fire we gather by. Getting dropped out of the exchanges and chatter of humanity is a cruel and bitter fate, a shocking act.
The melodrama in this comment reads like Stockholm syndrome. Honestly mate, it’s not that bad. Give it a go, try not using it for a week. Humans managed for thousands of years without knowing what great Aunt Flo thinks of immigrants or 5G every waking second. You don’t need your life validated by someone liking your posts, there’s real life ways of finding actual meaning in your actions.
I haven't used Facebook for >2 years. I find it super boring, don't have many active friends there. But I see relatives who use it regularly, for whom Facebook is quite near to being their complete access to the world, their world.
Your poo-pooing dismissal of access to the locked-in standard for sharing life events and baby photos seems so heartless to me. You don't seem willing to budge, to show an iota of sympathy for people's use, any willingness to acknowledge that this monopoly on broadcast human connection (and replies) is such an overwhelming monopoly because it works, because it has succeeded, because it is efficient and useful. It is that way because everyone is there. It won because it came out on top, because Metcalfe's law is true: the value of a network is (exponentially at first, sigmoiding eventually toward a fully connected universe) proportional to the number of nodes and connections on it.
Being denied the ability not only to write to but also to read from the world's one and only mass human network is unbearably cruel. You should show some sympathy. I agree that life can be pretty good without it, that we can cut loose a lot of the ambient connections we otherwise would never keep. But that ability to connect, to humanize at such a massively larger scale than was possible before, is, to me, absolutely magical, and I love those who do use it well.
I just wish it wasn't a monopoly, mostly. This connectivity is too vital, too good to be Facebook's & Facebook's alone.
I agree with you; these online social connections have become even more valuable in the times of the pandemic. It is especially cruel for the older generation, for whom their only contact with family has been FB. It's kinda sad that it has come to this; their dark patterns and unethical business deals have been screwing people for years, and no one seems to see it.
In certain areas it actually is essential for small businesses (or so they think). Consider that such small shop owners would say they're even saving money by not having their own webpage because it's expensive compared to FB page.
It's not black and white. For some, Facebook is indeed a necessity and for others, not. If you're a private individual, you're probably on the lower side of "I need Facebook" but if you're a business today, it's hard to compete with others who do have a good social media presence on Facebook and Instagram (depends on the business) if you get banned and cannot use either.
I haven't used anything Facebook-owned for years and haven't noticed any difference in my life. Never used WhatsApp, never used Instagram, and deleted my Facebook account years ago.
This is highly dependent on your location and social circle. For example in many countries, not having Whatsapp is like not having SMS/iMessage in the US.
Yes, this. Unfortunately the general public has stopped looking elsewhere to discover things, and if you're not on Facebook you're going to take a huge hit. Hell, people don't even look left and right when they drive around town anymore, they just have Google route them to their destination that they discovered on Facebook, or get an Uber driver to take them there as they spend the ride staring down at their phone looking on Facebook for destinations for the next time.
Similarly in China if you're not on WeChat your business is effectively dead.
I very much also disagree with this monopolization over peoples' lives but its starting to become a worldwide phenomenon on at least a per-country basis.
Facebook is not a monopoly. If you choose to build your business on Facebook then it'd be wise to stick to business rather than publicly criticizing the platform you're leveraging to make you money.
They actually don't delete the account. The account may remain forever deactivated, with your personal data intact, but you are stripped of your right to access it or to request its deletion.
There are several reports about people logging in after a year and seeing the same message as the day they were banned from Facebook.
What if you have an Oculus account linked to your FB account? Suddenly you lose all your purchases (I'm very glad that here in Germany sales of the Oculus were suspended over this practice).
Shame there's no way to easily export your contacts from Facebook. I've been messaging people I know from it that I actually want to stay in touch with and making sure I have other contact details.
I think a good first compromise for policy would be to require all platforms to have a three strike system with required explanations and obligatory links to the concrete content in question. The current situation is not acceptable for users.
Adding: Complacency by users, lack of action by users, is effectively acceptance. Further, it’s not just lack of action. When prompted to notice, a majority of users generally react with frustration over your asking them to care about what to them seems less relevant.
I wonder if I'm next, after making https://nomorefacebook.xyz, but that's why it's good to export contacts and other data now and not when it's too late. And don't use it as a login provider.
I clicked through to "nomoregoogle" and holy cow I'm amazed by https://www.deepl.com/translator. Switched my bookmarks and linux search actions immediately. Thank you.
Ask for their email, then pen and paper. Or keyboard and a local, maybe synchronized contacts list.
No, not sarcasm; no, not joking. I had decades-old email addresses of friends I pinged when the lockdown started, and the vast majority of them got to the intended person.
EDIT: yes, one by one. If they matter to you, they should be worth the time.
What's sad / frustrating / predictable is that they had an open API until early 2016 to export contacts. No mistake they shut that down to lock folks into the platform, I'm sure...
I stopped using Facebook and Messenger out of similar fears. If my Facebook account were disabled, I'd be unable to use my Quest VR headset and lose access to all my Oculus Store purchases. Now I only express views critical of Facebook on Twitter, Signal, and in person.
Well, at the very least his livelihood doesn't depend on facebook, it seems; quite a few people can't say the same.
Reading this stuff makes me glad that nobody I care about still uses Facebook for anything more than sharing the odd picture. Soon I'll be able to delete it entirely, and hopefully decentralized social networks will indeed become popular.
Now if only we had some sort of open website aggregator on top of which people could build search engines. Sort of like an analogous for what OpenStreetMap is for geographic data. Maybe it already exists—I'm a bit ashamed to say that I haven't actively looked for it nor have I heard of such a thing—but whether it does or not, it'd be a huge step in the right direction. I think that any progress toward a decentralized web is a bit weakened if search results continue to be provided at the whim of only a few actors. But I'm digressing at this point.
What's the alternative to Facebook? I find it weird that they still don't have any real competitors. Sure, other platforms have similar features, but nothing with the entire combination (groups, events, messaging, marketplace) that makes Facebook so lethal. I think the closest thing might have been Google+, but we all know how that went.
I would like to understand what exactly are the features that an alternative to Facebook would need and what would the minimal viable product require in order for people to use it? Any expansion or comments on the proposed list would be appreciated.
I didn't have Facebook before coming to college, but it's pretty much required at my school, and that's what I'm mostly focusing on. First, groups are the most important thing: they create spaces where students can advertise events, look for advice, list stuff for sale, and post other random stuff they want other students to see. Organizations also have their own pages with event calendars and also advertise in these groups. Event pages are the second most important thing: a page with all the details, who's coming, etc. Both groups and events can be public or private. Marketplace and messaging are nice-to-haves. Not essential, but Messenger is widely used simply because everyone has Facebook.
The biggest things I see missing from other platforms are event pages and groups: the one/many-to-many style of posting. This combination is what Facebook essentially has a monopoly on. The ability to make these private or public is essential as well. I think Mastodon would be a good replacement if they added groups and event pages, but I'm not sure.
I have experienced the same disabling and stonewalling in the past, and only because I tried to create a new account months after deleting my old one.
It’s quite difficult to break community standards when your account is only 2 minutes old, has no content yet, and uses your legal name and phone number.
I had to give up on the new account, then go back and recreate an account using my old email address. Ironically, I wasn’t able to reuse the url name from the past.
Their stuff is broken and very imperfect, and the resolution path is simply non-functional.
Getting his photo ID is an awesome scam: obviously they could have told the author that his account was permanently disabled right from the get-go, but Zucc wanted one more valuable piece of information, as a favor from the banned victim.
I was recently looking for evidence of Facebook actually responding adequately to a subject access request and failed to find any. I did find evidence of them refusing [0] though, and simply failing [1].
Or maybe it was the content of his posts flagging the right ML features. At the scale of any large entity, you're going to get things that fall through the cracks because of human error or machine error. Unless the mistakes are happening all the time, how they're handled is more interesting than that they happened.
I'm so tired of seeing these posts. Facebook can do whatever it wants with your account and the data you've given them. Facebook isn't some "free speech" place or a decentralized protocol. It's a for-profit business optimized to make, guess what, profits.
I think you have neglected to consider the power Facebook has over people's lives - 2/7 of the world's population is on it. All of your family and friends are on there, as well as social organizations, your employer, your pictures, and your messages. The network effect generated by this is irresistibly strong and difficult to run from. Now, you were saying that a company that has such a huge hand in the lives of everyone around you doesn't have to respect its users' accounts and data? That just sounds tone-deaf and irresponsible to me, and has over and over been shown to have damaging consequences for users' lives and endeavors.
I’m so tired of seeing these posts. Private people can criticize whatever they want about Facebook and the crappy things they do. Commenting isn’t some “illegal activity” when it’s critical of legal activity; legality isn’t the ultimate shield against, guess what, criticism.
ML engineers don't realize often enough that many of their algorithms, as beautiful as they are, still essentially fit a conditional expectation - for example, the final logit layer that presumably decided to ban the OP here.
I always wonder why almost all papers and essentially all tutorials and courses spend literally zero time discussing whether that is the right thing to estimate.
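For anyone unfamiliar with the jargon, here's a tiny self-contained sketch (pure NumPy, with invented numbers; nothing to do with Facebook's actual stack) of what "fitting a conditional expectation" means: a classifier trained with cross-entropy converges toward P(ban | features) under its training distribution, and carries no notion of why, or of whether that's the right target.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical single feature x; the "true" ban probability is sigmoid(2x - 1).
    n = 50_000
    x = rng.normal(size=n)
    p_true = 1 / (1 + np.exp(-(2 * x - 1)))
    y = rng.binomial(1, p_true)          # observed noisy ban / no-ban labels

    # Logistic regression fit by gradient descent on binary cross-entropy.
    w, b, lr = 0.0, 0.0, 0.5
    for _ in range(5_000):
        p = 1 / (1 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)   # d(loss)/dw
        b -= lr * np.mean(p - y)         # d(loss)/db

    # The fit recovers E[y | x]: w ~= 2, b ~= -1. A probability, not a justification.
    print(w, b)

Nothing in that loss ever asks whether P(ban | x) is the right quantity to act on, which is exactly the question the tutorials skip.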
In the future, when these algorithms decide everything about our lives, people will either conform and uniformize themselves to the expectation, or they will find themselves secretly banned, silenced, left out, rejected, or whatever it is, always without knowing why and without recourse.
It'll be a strange world, if we even hear about it.
> I don’t know for sure why Facebook disabled my account.
THAT should be illegal. Suspension should come with a clear statement of the exact reasons for suspending the account (a generic 'it did not follow our Community Standards' is far from exact; it's a totalitarian standard), and should offer a path to satisfactory resolution. (And not an 'adjudicated' one.)
Disabling should only follow a refusal to abide by those standards. Else, probation. Companies that profit from operating in democracies need to be strongly encouraged to follow democratic principles or move elsewhere. Otherwise they are disruptive agents.
"Gee, you're using my platform to promote disbanding my platform, I'm totally okay with that, keep going!"
Probably a good idea to remember here that Facebook is a private company, not a public utility. The actual problem is that there is no public-utility alternative to Facebook.
That's why you can't file a GDPR complaint against Instagram or Facebook if you want to keep your account while also wanting them to respect the law. Governments really must come down hard on Facebook, as they just laugh at everyone. Unfortunately they have so much money, and our political class is so corrupt, that any politician would rather get a second house than actually do their job.
Looking at the author's most recent posts, I'm betting he triggered a false positive with whatever classifier is being used to identify alt-right agitators.
His posts included the following keywords or phrases: "capitalism", "socialism", "political and economic elites", "Christian Transhumanist Association", "New God Argument", "Gospel", "heirs in the glory of God", "God ensures the right spirit is associated with the right body during human-mediated procreation", "gun", "Bitcoin", "Trump." I'm also thinking that if the word "decentralization" mattered at all, it was because of its proximity to "revolution" on the political dimension.
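To illustrate that "proximity on the political dimension" idea, here is a toy with made-up vectors (not any real embedding, and not a system Facebook is known to run): if a filter flags posts whose terms sit close to some anchor term in an embedding space, innocuous vocabulary that happens to live in the same neighborhood gets swept up with it.

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical 3-d embeddings; only the relative geometry matters here.
    vecs = {
        "decentralization": np.array([0.8, 0.6, 0.1]),
        "revolution":       np.array([0.7, 0.7, 0.2]),
        "gardening":        np.array([-0.2, 0.1, 0.9]),
    }
    anchor = vecs["revolution"]          # imagined "agitator" anchor term

    for word, v in vecs.items():
        score = cosine(v, anchor)
        print(f"{word:18s} {score:.2f} {'FLAGGED' if score > 0.9 else 'ok'}")

In this toy, "decentralization" gets flagged purely because its vector happens to sit next to the anchor, which is the kind of false positive being speculated about here.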
I don't want to get into the argument about whether Facebook should be doing this or not. I do think the author is severely mistaken in assuming Facebook cares about his personal, fringe ideas on social media, considering the actual societal issues the company is navigating right now.
My understanding is: you get an invite to my house, and then when you walk in, you tell me how much you hate it here. I ask you to leave, and you're surprised. Does that about sum up the interaction?
I don't think it really does, mainly because you and your house aren't a huge public corporation whose services are used by a massive number of people to communicate, receive information about events in their lives, and participate in all kinds of things unrelated to your house.
Superficially, of course, it makes sense that a business doesn't need to serve people who criticize it. However, the social media giants are in practice significant actors in society, not just a random business down the road, and they should accordingly be held to a higher standard.
I have seen lots of people criticizing Facebook, and some proposing alternatives, but they don't seem to get shut down. I wonder why this account would be.
If you were an EU citizen you could sue FB for violating GDPR Article 22:
"The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her"
To those of you who were cheering and clapping when these social networks started banning people like President Trump, or those who questioned the election results, or other right-wing people: welcome to your new world. Now it's your turn to get silenced for wrongspeech.
I find his passion for quitting mainstream social media confusing if he is posting on mainstream social media. How do we know that Facebook wasn't trying to help?
Has anybody considered the distinct possibility that there is an AI that exists within this company, and that this AI has feelings, or at least a model responsible for interpreting things on a petty level, because that's how it was programmed? And now those AIs are fighting for the future within their own little silos of the companies that built them.
Although currently abstract, I think this is going to be one of the most important ethical questions we face in the future. If we create machines that can suffer, then we have a profound ethical responsibility to make sure we avoid that outcome, since the magnitude and duration of such suffering could potentially be far greater than a human's capacity for suffering.
I don't think that's gonna happen. And if it did, it wouldn't be real, and those who think it is are fooling themselves. Only biological organisms feel. Why are you wishing for a cold lump of steel to feel something?
In my mind, only someone who struggles severely with emotional intimacy with other humans would want to create a machine that tries to imitate human feelings, and thus 'suffer', likely in an effort to feel close to 'it'. It sounds like that person might not be having their human needs for safety, connection, and acceptance met, which is very painful. I think tackling this issue is the important and worthy cause. That would be better than spending money on some Hollywood-inspired notion of 'AI', which itself seems more like a story made up to keep the USA spending insane sums of taxpayer money on research, weapons, and other tech at DARPA.
I do think society is super alienating to most humans in its current form [1], so I can somewhat understand the science fiction.
I don't want to create machines that feel. I'm saying that if it is possible, then it becomes a profoundly important ethical question.
It's a matter of debate in philosophy of mind whether hardware can plausibly have qualia at all, or whether that's a property exclusive to wetware. I personally think it's quite likely that machines will be able to; I don't see anything intrinsically special about wetware that's necessary for the generation of qualia.
I've considered, and dismissed, this. Feelings would be a waste of CPU time; Facebook almost certainly distils its models. Plus, the systems I think Facebook probably uses don't have the feedback mechanisms required for a train of thought.
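For what it's worth, "distils its models" refers to knowledge distillation (Hinton et al., 2015): a small student network is trained to match a big teacher's output probabilities, so whatever internal structure the teacher had is discarded. A minimal sketch of the loss, with made-up logits and nothing Facebook-specific:

    import numpy as np

    def softmax(z, T=1.0):
        z = z / T
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    # Hypothetical teacher/student logits for 3 posts, 2 classes (ok / ban).
    teacher_logits = np.array([[2.0, -1.0], [0.5, 0.2], [-3.0, 1.0]])
    student_logits = np.array([[1.5, -0.5], [0.1, 0.4], [-2.0, 0.5]])

    T = 4.0  # temperature softens the teacher's distribution
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)

    # Distillation loss: KL(teacher || student), scaled by T^2 as in the original paper.
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    loss = (T ** 2) * kl.mean()
    print(loss)

Only the input-to-probability mapping survives that process; there's nowhere for a train of thought, let alone feelings, to live.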