Facebook Censored Me For Mentioning Open-Source Social Network Mastodon (complete.org)
638 points by joeyh on Sept 11, 2021 | 284 comments



I doubt anyone at Facebook is freaking out about Mastodon and setting out to censor all mention of it. It's probably just a keyword or link that tends to co-appear with other, actually rule-breaking content, and some automated system has learned to block it.

Still, it does seem like the sort of thing that could get Facebook in trouble with a regulator if you squint at it.


Yes, Facebook's upper management is definitely not freaking out about Mastodon. However, from my own experience working at a large, somewhat ethically-challenged organisation (not fb), it would not surprise me in the least if they gently yet actively pushed down a competitor, provided it can be done discreetly.


There may be more reason to see Mastodon as a potential threat. There's something of a different paradigm to it. First of all, it is decentralized (federated): anyone can spin up servers for themselves or a community of users, set their own topic and rules (CoC, ToS), and connect with the larger social network (the Fediverse). Second of all, this Fediverse is based on open standards, and anyone can develop their own social apps and integrate them with others. Mastodon just happens to be the most popular / well-known. Other apps, such as Pixelfed and Peertube, offer nice, ad-free, and 'calmer' experiences than their large-scale social media counterparts (Instagram and YouTube). See https://fediverse.party/

Also note that Twitter has taken an interest in decentralized protocols with its Bluesky project.


Uuhm, going back to a time before the algos took over my timeline would be awesome.


Designing your own works much better than human labor + big brother deciding for you.

On the contrary, Newscorp, for example, is great for getting dynamic patterns of headlines you don't want to read.


For those innocent days of Usenet k00ks, crapfloods, MAKEMONEYFAST, sockpuppets, crossposts, GREEN CARD, spam, warez, pr0n, Serdar Argic, and Hasan Mutlu.

And the ~100,000 or so people who actually used that network.


Mastodon is just multiuser microblogging with linkbacks.


More than just that. Mastodon is one of multiple microblogging apps on the fediverse that can seamlessly interact, and it is gradually becoming more integrated with other types of applications, e.g. for blogging, media publishing, livestreaming, podcasting, event planning, etc.


What happened to Bluesky? They made a bunch of promises and it sort of... disappeared after that.


They recently hired Jay Graber as project lead: https://twitter.com/arcalinea/status/1427314482154414080


They've also started setting up community working groups -- the whole thing's coordinated in a Discord server, which is very weird to me, but you can see what they're up to.


Current hypothesis is that they're just moving slowly; there are signs of life.


There are far more discreet approaches than marking these messages as spam and notifying users. My guess is that an actual person reported this as spam and Facebook automatically accepted the report.


Re: discreet, how true... Since their news feed prioritization algorithm is a black box, they can just rank posts about Mastodon very low, so low they'd probably show your American contacts ads about monkey-proofing your house (something relevant for India) before showing that post...


Spam is an easy plausible deniability


Mastodon is a “competitor” to Facebook the way a grandma doing middle school tutoring in her backyard is a competitor to Harvard.


Yes, of course, at least for now. Yet it does present a non-negligible risk, in the same way that some startups can become unicorns, or some college kid's web project can become a global giant.


> Yet, it does present a non-negligible risk,

Negligible: so small or unimportant as to be not worth considering; insignificant.

The first page of search results for Mastodon, for me, is all about a heavy metal band from Atlanta. Sounds like a negligible risk to me.


But when you have near-infinite resources, everything is technically "non-negligible," and that's the case here. Let's say there's a 0.01% chance that Mastodon beats Facebook. Okay, what's 0.01% of however much Facebook makes? That's how much they should spend on it, and I'd bet that amount of money is nothing to scoff at.
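(To put a rough number on that hypothetical 0.01%: Facebook's 2020 revenue was about $86B, and 0.01% of $86B is roughly $8.6M a year. That funds a whole team.)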


It sounds like you are arguing that there is no level of risk low enough to be ignored, and that there is no such thing as negligible risk.

So if there is a 2^-500 chance of an alien invasion, let's put a team on it!


It sounds like you're missing the point of this entire thread's argument.

Does the team cost about 2^-500 of your resources and attention? Then, YES, do it. That's my point.

I'm not saying that this is what you SHOULD do all of the time, I'm saying that it is entirely plausible, even likely, that Facebook might deliberately "go after" Mastodon because it very very easily can.


There is a cost, clearly. Also, what is the benefit? Now that the project's creator can't talk about it to his friends and family, it's doomed to fail? lol.


Loving, LOVING this fantasy world you're living in, in which a primary driver of a person's behavior at a big company is the esteem of friends and family, and, you know, not because the boss said so and/or ruthless thirst for profit and domination.


Your response is somewhat unintelligible...

I was asking what the impact (damages) is if this guy can't post about his software on Facebook to his friends and family. Do you even think this would be in the top 100 ways to crush the competition???


Yeah, we're missing each other somewhere. I'm just talking about Facebook's incentives here. It makes sense for someone who works for Facebook to try to deliberately crush Mastodon: even if the risk of Mastodon beating Facebook is very low, it's not zero, and Facebook's pockets are so deep that it might as well. I'm not sure what individual guy you're talking about?


Grandmas don't tend to undergo exponential growth.

Networks can.

Whether or not Mastodon will is another question. (I've been on it since ~2016.) But in FB's position, paranoia pays, and is worth throwing billions at, if deemed a sufficient threat.


... and Instagram was just a photo sharing app - how could it ever compete with Facebook?


Yes. It costs them nothing to censor mere mentions of the word 'Mastodon', since they can do it anyway, and it amounts to a weak form of anti-competitive behaviour.

What you're really looking for is what happened to Parler, which looked like a serious act of anti-competitive behaviour and showed the true brutish nature of these large companies destroying an alternative social network.

That was a much worse form of 'anti-competitive behaviour' than this.


If that were so, why censor mentions of it?


You don't know that they're censoring mentions of it. You have one post that got removed that happened to have it in it, and an article trying to play the victim and jump on the 'Facebook is evil' bandwagon. Do you have proof of a trend?


The post has now been updated; it mentions that at least three more FB posts with links to joinmastodon.org were marked as spam: https://octodon.social/@yhancik/106897948169079191

Four data points ain't a lot, but you can no longer claim it's a one-off event either.


Same reason the above comment mentions. It's a wild west, and while that brings freedom it also brings cowboys (i.e. people using it to post dangerous links or whatever), or so the reasoning goes.


You can't do such a thing discreetly, since people will notice if you take down their posts, and Facebook is not known to ban mentions of their actual competitors like YouTube, Twitter or TikTok.


Doubt it, not worth the liability risk.


What liability? A few million dollar fine? Look at what Microsoft did to numerous competitors over decades: Borland, Mozilla, etc. FB squelching Mastodon is minor league.


Yes, Microsoft has famously never had any major issues with antitrust


While they have run afoul of the government a few times, the penalties and "negative press" never amounted to anything that slowed their growth or revenue. So, to the OP's point, if you can afford the fine it doesn't really matter?


I would love to live in the world in which there were some actual liability risk to facebook on this (or for that matter, on anything important to it.)

I genuinely am curious: in Facebook's world today, how do you see this playing out in a way that actually hurts Facebook? I legit want to know so I can actively work toward making it happen.



Recalled that too. So far Zuckerboi is not known to 'be evil' when it comes to competition and/or what one may think of in the context of so-called 'opponent research'. And yet, seemingly not being 'known for' it doesn't necessarily mean not seriously involved and not guilty. Epstein, Prince Andrew, and the Weinstein creep were not 'known' either -- however, it turned out, eventually, that there were and are some who knew and appear to know...


Well, Facebook bought the Israeli startup Onavo explicitly so they could track which new social apps were starting to become popular on users' phones, so they could quickly acquire those or copy their features and kill the competition.


But Mastodon isn't becoming popular (and likely won't be).


And that 'kill the competition' is, in essence, what it's all about. No matter what seems obvious or likely or not -- given FB's history, Zuckerboi's history...


Given the date of the post, the content, and the fact that it was removed with a delay, I am quite sure his friends who disagreed with him politically reported it as spam.

Perhaps after a certain number of people report it, it automatically gets marked as such (such logic would be more reliable on FB than on, for example, reddit or HN, since most FB users have a single profile tied to their real name).


It's probably an AI thing. When these things occur, it's a good idea not to presumptively jump to attributing them to human interference until more evidence surfaces.


I would argue the opposite. We should not allow companies to use "AI" as an excuse to avoid responsibility. I think it is perfectly reasonable to hold Facebook directly responsible for this.


Exactly. Delegating decisions does not reduce responsibility.


> could get Facebook in trouble with a regulator if you squint at it.

How could it?


I think congressional Republicans might hold a hearing about this… But I don’t think this runs afoul of mainstream interpretations of current regulations. Happy to stand corrected if someone has a plausible explanation for what agency would bring what kind of action on this.


Congressional Republicans? Lina Khan from the FTC, who is personally taking up targeting FB as her claim to fame, is a Democrat. Rep. Mike Doyle, chair of the House subcommittee on Communications and Technology, who grilled FB in 2020 on misinformation and Section 230, is also a Democrat.

Make no mistake, this is bi-partisan. Both sides have their own agendas against Big Tech.


" it does seem like the sort of thing that could get Facebook in trouble with a regulator if you squint at it."

Oh, I hope so...


I have personally never heard of, or seen, any scams or malware related to Mastodon - aren't you giving too much benefit of the doubt?

My best guess is that their lists of competitors to keep an eye on got mixed up with other stuff. Or, of course, that they simply don't want to promote competitors on their platform, which would be normal for any non-monopoly.


I could imagine that the wording regarding corporate control could have erroneously triggered a filter, because that phrase appears, for example, in anti-vax and conspiracy theory posts.

It would be interesting to do some experiments here: post the same text to see if it gets removed again, and then repost it and remove sentences to see which one triggered the filter.
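If you wanted to semi-automate the second experiment, it's essentially a bisection over sentences. A minimal sketch in Python, where `is_flagged` is a hypothetical stand-in for "post the text and see whether it gets marked as spam":

  def find_trigger(sentences, is_flagged):
      """Narrow down to a chunk of sentences that still trips the filter."""
      if not is_flagged(" ".join(sentences)):
          return None  # full text doesn't trip it; nothing to find
      while len(sentences) > 1:
          mid = len(sentences) // 2
          first, second = sentences[:mid], sentences[mid:]
          if is_flagged(" ".join(first)):
              sentences = first       # trigger is in the first half
          elif is_flagged(" ".join(second)):
              sentences = second      # trigger is in the second half
          else:
              break  # trigger spans both halves; stop narrowing
      return sentences

In practice you'd repeat runs, since the filter is presumably noisy and may key on combinations rather than a single sentence.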


> aren't you giving too much benefit of the doubt

The largest Mastodon instance has 500k users. Facebook has two billion users. If you can post Twitter and TikTok and Tumblr links on Facebook, do you seriously think there's someone sitting at Facebook taking names and making lists about a social network that practically nobody even uses?

There are competitors a hundred times as large that you can link to. My first guess is it probably tripped some NSFW filter, because on some Mastodon instances there's quite a lot of porn.


> do you seriously think there's someone sitting at facebook taking names and making lists about a social network that practically nobody even uses

Yes, when I worked at a 500-person startup there was an employee whose sole task was to stay aware of our established competitors and nimbler startups. It was jokingly nicknamed the "office of paranoia."

That said, I highly doubt FB is using their list of competitors to block posts mentioning them.


There are a number of extremely poorly moderated Mastodon instances with a large amount of hateful and hurtful content. I’d imagine they’re automatically blocking a lot of these sites.

Source: I used and helped maintain one of the "don't federate with these instances" lists.


I mean, sure, but the post content doesn't mention `examplebarelylegal.moe` or `exampleviolenceagainst.gay` or some such domain -- it's just the project's main instance list, which lists only instances intended to be compliant with a moderation standard.


Oh, Fediblock! The list of based instances. It's amazing to me that the people who maintain these lists don't seem to realize that they're maintaining a useful directory for all the people they don't like to help them find places to congregate, meanwhile isolating them from more moderate influences.


The explanation is much simpler: look at the date, the text of the post, and the fact that it was removed with a delay. His friends who disagreed with him politically must have reported it as spam. Since FB accounts are tied to real names, FB, unlike other social sites, can trust reports more and start blocking automatically.


As we say in crypto, "it's probably nothing".


But… it makes for great publicity for Mastodon.


[flagged]


If you think that Mastodon is in any way a 'competitor' to FB, or that anyone at FB gives a shit about Mastodon, then the bridge you are trying to sell only exists in your mind. Programmers are lazy and almost always breaking shit; the probability that this is anything more than some sloppy code somewhere pruning an offending URL in a way that blocks more than it should is so remote as to be laughable.


Amazon will delete a review if it mentions alternative-to-amazon ways to acquire the product. It's a known tactic among our cybernetic overlords.

Because they feel threatened by your neighbor's garage sale? Of course not.

I imagine it's like squashing the ants in your kitchen. We aren't really afraid that one of them is gonna make off with the milk jug. But all the same we don't like them touching our stuff. So we implement that policy. Almost unconsciously.


What is the violation? I'm serious. If I post a link to a Target product page on Walmart, it will be taken down. I'm sure Amazon does the same. Do businesses have to allow you to promote their competitors?

Such a weird thing for regulators to be chasing; there seem to be so many more obvious issues than this. Is this a political winner? In other words, does the average person think they can put up Burger King flyers in a McDonald's store?


Walmart is not a site devoted to helping users create and share their content, and they are open about their moderation rules, a good ethical standard.

Walmart's legalese:

> C. Prohibited Content

> * contains advertisements, solicitations, or spam links to other web sites or individuals, without prior written permission from Walmart;

https://www.walmart.com/help/article/walmart-com-terms-of-us...

Facebook, on the other hand, promotes itself as a place for creating and sharing user content. If it then moderates in ways that are undisclosed and opaque, with little recourse, that deceptive behavior is neither ethical nor in line with how it promotes itself as a service.


Facebook is an entirely different kind of business than Walmart or Amazon.

Facebook (and social media in general) is essentially a public forum and comes with the expectations of such since these companies control such a large part of internet discourse.

If all of them said "we won't allow anyone to talk about any of our competitors on our site" then it strikes me as a company using their dominance to silence other players in the market, i.e. anti competitive behavior.


Sounds anti-competitive in the UK. Using dominant market power to censor.


That is a weird UK rule, then. In the US the common issues are things like tying, predatory pricing, exclusive dealing, loyalty discounts, bundling, etc. Being required to allow competitors onto your own platform is unusual in the UK, I think - I haven't seen a case like that in the US, but I don't follow closely.


Communication platforms are a bit different. Nobody would be okay with Comcast blocking the signup page or mentions of Google Fiber or your municipal broadband network, or Verizon disconnecting your call if you mention AT&T.


I think folks really do not understand the difference between Verizon calls (regulated under common carrier rules) and Facebook (a social network).

Facebook claims to be a social network. Those types of networks normally DO NOT allow you to promote other social networks.

Even review platforms, which are a bit more communication-like in nature, block posting of other sites' reviews.


> Facebook claims to be a social network. Those types of networks normally DO NOT allow you to promote other social networks.

I don't see how that makes it legal.

Here are other examples of potential actions that would also clearly qualify as illegal anti-competitive behavior:

Microsoft could decide to block the download pages for Chrome and Firefox from being shown in IE.

Google could block results related to Bing in their search engine or browser.

I don't see how this behavior by Facebook is any different. If it can be shown that this was done deliberately by Facebook, I have little doubt that it would also qualify as anti-competitive behavior.


Microsoft does steer you away from Chrome.

When you first search on Bing for Chrome downloads, they will put up a big Edge promo above the results. After you download, they will pop up a box asking if you are sure you want to switch, as Edge is "faster and more secure".

The problem for Mastodon is it has a TON of content that is against Facebook policies. So Facebook can simply say: users are reporting this crap as spam, we've blocked it. Done.


Yes, Facebook is clearly closer to Walmart or McDonald's than to an internet provider. Yet Section 230 applies to ISPs and FB, but not to Walmart...


It's always worth remembering that the European and American legal philosophy on anticompetitive markets are different, and lead to different conclusions.

European law tends to favor maximizing competitors. American law tends to favor maximizing consumer value. At first glance, these can be considered equivalent, but they differ at the margins (which is why, for example, Amazon keeps getting hit with antitrust in France but not in the US).


I doubt Facebook even thinks of Mastodon as a competitor.


No doubt - but there was also a big move of Gab and some other more right-wing communities to Mastodon at one point, I think. I'd be curious to know if the Mastodon links are all showing as high-scoring in whatever generic Facebook system is being used. I could easily imagine some involve links to stuff Facebook isn't interested in pushing.


Gab and Torba are awesome. Crusaders against the tech hegemony


Well, they already realised that early on, so they built their own social networks, and those are still up and running today. This 'censoring' should be unsurprising to everyone here.

I thought what happened to alternative social networks was a warning showing not only that they can do it to anyone, but also how anti-competitive they really are.

What happened here to this person mentioning Mastodon is no different, but it was like 1% of what Facebook and many other private platforms can really do.


I remember reading an article about censorship in China: someone talking to his friend on the phone mentioned a particular term that the government didn't like, and the line got disconnected a second later. Was it a coincidence, or was someone listening, or was it computers?

Nowadays most of our communication channels are owned by corporations. Are you okay with them getting to decide what we're allowed to talk about? Zoom, for example, banned meetings discussing the Tiananmen Square massacre; you can't post links to The Pirate Bay in private chats on Facebook... in private chat!

And this is coming from me, someone who's quite okay with Twitter banning Trump and other idiots off their platform.


Interesting how "because COVID" has become the thinly veiled excuse here. A similar manifestation of the "think of the children" trope that can be used to manipulate emotions and shut down dissent.


> Interesting how "because COVID" has become the thinly veiled excuse here.

The message says it triggered spam filters. It's not related to COVID misinformation.

The only place COVID appears is in the warning that their manual review queues are longer than normal due to COVID.


Yes, that's the excuse. It's equivalent to "because we don't want to pay for it."


If there's one job that doesn't exactly depend on what location you're working from...


I disagree with the notion that anyone anywhere can do the job. These moderators don't just moderate the rants your grandma posts about shopping malls. There's a documentary about what the content moderation people do, and they end up seeing horrific stuff like (sexual) child abuse, gore, and all the other worst things humanity has to offer. Many of these people end up depressed or in therapy.

I don't know if Facebook has their stuff together, but I think it's unethical to have people review random user uploaded content without close access to a mental health specialist.

You can have several degrees of intensity a reviewer might be able to see (so as not to expose all reviewers to the very worst on a regular basis), but no algorithm can reliably identify the nastiest of the nasty content. The algorithm sees "government pedo club", flags it as fake news, and who knows what the shared content actually contains. It could be a conspiracy nut; it could just as well be actual child porn. The probability is low, but you need someone standing by just the same, in my opinion.


Isn’t it a bit silly that these Big Brother companies (e.g. Facebook, Twitter) are so afraid of global communication being taken over by small, yet persuasive groups, that they set about to take over global communication by their…small, yet persuasive company?


> If there's one job that doesn't exactly depend on what location you're working from...

No, the opposite is true: Any job that requires reviewing potentially private content must be done in a controlled environment.

I wouldn't be surprised if content reviewers weren't even allowed to have cameraphones at their desks.

Can't risk having someone snap photos of the screen while reviewing content flagged as sensitive. Doing this job from home is not an option.


…then it's not content moderation, which AFAIK companies like Facebook and Google require to take place on-site, in a controlled environment with no electronic devices, due to the potential data security issues involved.


It is more nuanced.

FB's team can be overwhelmed with Covid-related misinformation.

A lot of content moderation is outsourced to countries like India, where productivity and availability may have degraded due to Covid ravaging the country.

Many firms still have backlogs from Covid disruption.

If you have worked at / started any half-decent-sized company, you'd know this.


This seems to be a wildly incongruous conclusion to draw from a clear indication that a global pandemic is impacting operational efficacy and staffing levels. Not that that's an excuse or justification for blanket censorship.


At this point anything done with covid as an excuse should be thought of as a power grab.

We have vaccines for those that want them.

We have drugs that treat it very well.

There’s no reason to censor information or shut things down over it.


That seems like a much more complex line of thinking than "a combination of lots of ongoing misinformation campaigns combined with global staffing and availability issues means content moderation teams are less responsive".


https://joinmastodon.org/ Might as well see what FB is scared about


If I used social networks, I might give it a try. (I did actually check it out though) But all social media networks seem anti-productive to me and, well, just not actually very social.

With respect to connecting with family & friends, I'd much prefer a pure platform based pretty much on just that.

With respect to other people with interesting things to say, I'd prefer blogs aggregating & curated sites like, well, HN itself.

For the former, I don't know how you get to a "pure" platform like that where you can communicate & share experiences/photos with each other without also letting meme-ish "lol this person of <political affiliation I hate> is an idiot" posts through, but at the very least it could avoid surfacing them algorithmically and rewarding them with "internet points".


IME mastodon/pleroma/"the fediverse" is really cool because you have two kinds of communities you interact with. Depending on the instance, you can have a grand old time interacting with people in your "local" timeline (your instance only), but you can also venture into the wild west that is the federated timeline from all instances.


You’re right that it’s infeasible to block everything dumb from an online system. Humans gonna human.

> avoid surfacing them algorithmically and rewarding them

This is the key. And I’ve come to believe that the only way to prevent the platforms doing their algorithmic engagement maximization thing is to encrypt everything E2E.


The social networks you may be used to are historically manipulative, censoring and shaping your feed to benefit the social network, not society.

The nice thing about these open protocols is that they are simply reverse chronological. You see what you choose to see, in the order it was published.

It's a totally different experience than the engineered rollercoaster that is corporate social media.


So it seems Mastodon is fragmented, or segmented, into many independent networks. Do those interact with each other in any way? I'd hate to join one and miss truly interesting content on another... Is it possible to distill multiple into one on the user end? Just asking, as I've never tried it before and the concept seems a little confusing.


Yes, on mastodon you can follow @someone@anothermastodon.local even if you are on firstmastodon.local, and the server running your instance will phone out to anothermastodon.local to retrieve posts from the person you follow.
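Under the hood, the lookup step is WebFinger (RFC 7033), which Mastodon uses to resolve a handle to an ActivityPub actor. A rough sketch in Python of what your server does; the .local domains are just the placeholders from above:

  import json, urllib.request

  def resolve(handle):  # e.g. "someone@anothermastodon.local"
      user, domain = handle.lstrip("@").split("@")
      url = (f"https://{domain}/.well-known/webfinger"
             f"?resource=acct:{user}@{domain}")
      with urllib.request.urlopen(url) as resp:
          jrd = json.load(resp)
      # the rel="self" link is the ActivityPub actor document that your
      # server then fetches (and subscribes to) to pull in new posts
      return next(l["href"] for l in jrd["links"] if l.get("rel") == "self")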


Thanks. But how would I find said user... Is there a way to get, erm, the top posts of the day from anothermastodon and firstmastodon together?


There is a timeline that randomly mixes in posts from users from other instances whom users from your instance follow


Perfect. I'll give it a go. Any specific recommendations? I figured mastodon.social just because it's the largest, is that bad reasoning?


It's not bad for a user, but somewhat kills the distributed effect in the long run.


In reality, Mastodon instances block a lot of other neighboring instances a lot of the time.


Beware: many instances will blacklist other instances simply based on rumor or clique, which makes it impossible to follow or read users on those other instances via the first.

I recommend evaluating instances primarily based on the censorship policies of the instance operator. For example, the list of servers on joinmastodon is restricted to those actively engaged in censorship of legal speech (fully uncensored instances are not indexed there), so you may be interested in searching for instances not shown there, depending on your attitude toward censorship.


If you're into Free and Open Source Software, fosstodon.org is an option, but you'll have to wait until your account is manually reviewed.


While mastodon.social would ensure you are always on the latest branch of the mainline Mastodon server software, as it's the 'flagship' maintained by Mastodon's main/original developer, its large size has caused an increasing number of instances to mute it (still allowing their users to follow users on mastodon.social, but not including its posts in their 'federated timeline') or to block it outright (meaning none of the posts on mastodon.social are accessible to that instance's users at all). Reasons for these decisions can include but are not limited to:

- the instance has grown too big and thus some consider it counter-productive towards the federated nature of the protocol

- disagreement with the direction its main developer / maintainer is taking Mastodon, such as intentionally hiding the local timeline from the official iPhone app

- some consider it under-moderated, or not responding quickly enough to reports

- disagreement over its content moderation guidelines

- in case of a mute, it could also be not wanting their federated timeline to be flooded with primarily mastodon.social posts

Lack of federation between these instances and mastodon.social could be a reason not to pick mastodon.social. (Similar situation applies to mastodon.online btw, which is a spin-off server of m.s.)

Another reason to pick a different instance could be not wanting to use mainline Mastodon software. For example because you want to run your own instance on limited hardware (Mastodon can get a bit resource intensive), don't like Ruby, miss certain features, don't like the front-end (though alternative external front-ends to Mastodon do exist), or some other reason.

Personally I've switched my primary use over to an account on an instance that runs Mastodon Glitch Edition, also known as Glitch-Soc (https://glitch-soc.github.io/docs/), which is a compatible fork of Mastodon which implements a bunch of nice features such as increased post character count (Mastodon defaults to 500 characters per post, Glitch-Soc supports increasing this in the server settings), Markdown support (though only instances that also support HTML-formatted posts will see your formatting; mainline Mastodon servers will serve a stripped down version of your post instead), and improved support for filters / content warnings / toot collapsing, optional warnings when posting uncaptioned media, and other additional features.

Another alternative Mastodon fork is Hometown (https://github.com/hometown-fork/hometown) which focuses more on the local timeline (showing posts only from your own instance) with the addition of local-only posts, to nurture a tighter knit community.

Aside from Mastodon there are other implementations of ActivityPub which can still federate with Mastodon instances, such as:

- Misskey (https://github.com/misskey-dev/misskey)

- diaspora* (https://diasporafoundation.org/) (which actually uses its own protocol rather than ActivityPub, and which AFAIK inspired Google Plus back in the day)

- Hubzilla (https://hubzilla.org//page/hubzilla/hubzilla-project)

- Peertube (https://joinpeertube.org/) (focused on peer-to-peer video distribution)

- Friendica (https://friendi.ca/)

- Pleroma (https://pleroma.social/)

- Socialhome (https://socialhome.network/)

- GoToSocial (https://github.com/superseriousbusiness/gotosocial)

- Pixelfed (https://pixelfed.org/) (which started as a sort of federated Instagram alternative) and more.

Fediverse.party (https://fediverse.party/) is a nice way to discover various protocols that make up the bigger Fediverse.

Instances.Social (https://instances.social/) can also be used as an alternative to find instances, though I believe it is limited to Mastodon-based instances.


There's also the minimalist Honk[1] from 'tedunangst

[1] https://humungus.tedunangst.com/r/honk


I've found koyu.space to be very friendly, as long as you're cool with left leaning politics posts in there.


Does firstmastodon.local need to be federated with anothermastodon.local for you to follow someone on it?

I don't understand the system well enough to know if this is a dumb question or not.


Ain't a dumb question at all! It actually takes reading the ActivityPub specs to answer it, so no surprise if you didn't get it just from reading the landing page ;)

The answer is: it'll happen automatically. Just search for someone's handle, and your server will talk to that other server. When you follow that other user, your server will start federating with that other server.

Note though that servers might block each other. For example, many Western servers block Japanese pawoo.net, since it allows posting lolicon. Western servers don't want this content in their timelines and caches, so they block it. If your server blocks another.social, you won't be able to follow anyone on there.

But your question also hints at a real problem with Fediverse (of which Mastodon is a part), which is: each instance only sees a subset of the Fediverse. Thus, searching by hashtag will only get you a subset of all posts that contain it. Full-text search is even more complicated.


Gotcha, thanks for the info. That does seem like a real problem, and I do see the complexity of the issue. Are there any current proposals for tackling it without adding centralization? Or do we just acknowledge/accept that that's a tradeoff?


There are ActivityPub relays: they are actors that re-broadcast anything sent to them to anyone who subscribes.

They are still decentralised, the aim is to populate the federated timeline. Obviously, the visibility is still a subset of the network.


As far as I can tell, Fediverse embraces this as a privacy feature. (It's more of a bump than a roadblock, but still.) I'm not aware of any projects to fix this (although I'm not very immersed into Fediverse development, so maybe check https://fediverse.party, they have a list of Fedi projects).


Theoretically you could layer something on top right? Something that consumes and publicizes everything. And then I guess you come full circle back to centralization but at least you could potentially view and search both a "local" and "global" feed.


Sure. Centralization could probably be avoided by storing the index in a DHT or something.
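A toy sketch of the placement idea (Kademlia-style XOR distance, purely illustrative): each tag's index lives on whichever node's ID is closest to the hash of the tag, so no single server has to hold the whole thing.

  import hashlib

  def node_for(tag, node_ids):  # node_ids: ints drawn from the hash space
      h = int(hashlib.sha256(tag.encode()).hexdigest(), 16)
      return min(node_ids, key=lambda n: n ^ h)  # XOR distance, a la Kademlia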


Do you have to use federation, or can you create your own little bubble and everything is self contained?


Federation is not forced. You can set up an instance just for you and your friends/family.


It's self-contained by default and as you follow outside users it starts receiving their posts and so on. You don't have to participate in the whole fediverse.


Apologies, I should have worded that differently. As the server admin, can I configure it so that it's impossible to follow someone outside of that server and that server has no outbound network dependencies?

My /etc/resolv.conf currently uses

  options attempts:1 timeout:1 max-inflight:1
  nameserver 127.0.0.50
There is nothing listening on 127.0.0.50 and iptables does not permit outbound connections.
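The iptables side is roughly of this shape (illustrative, not my exact rules):

  iptables -P OUTPUT DROP             # default-deny all outbound traffic
  iptables -A OUTPUT -o lo -j ACCEPT  # keep loopback working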


Yes, but then it's probably not the best tool for whatever you need.


Thank you. Yes, I currently use Murmur/Mumble + UnrealIRCd + phpBB for friends. Just exploring options. The UIs of my existing choices are a little dated.


Federation can be disabled and it will still work totally fine.
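If I remember correctly, mainline Mastodon even has a switch for this (it was called whitelist mode at some point; check the docs for the current name), something like this in .env.production:

  LIMITED_FEDERATION_MODE=true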

I am just wondering why it is such a hard requirement for you to stop other people from following someone who is not part of your server? Costs?


Bleedover between communities. My circle of friends are isolationists.


I am sorry, but I have to ask: why does "isolationists" sound like code for "people who are doing and sharing all types of illegal content and can not leak any info?"


I'm sure that is a thing, but my friends are not that exciting. We are just privacy lunatics and old fuddy duddies that study the behavior of corporations, governments and others. We postulate what countermeasures could be enacted to minimize the sprawling intrusive nature of the internet into our lives. We always seem to reach the same conclusion as Joshua. "The only winning move is not to play" [1]

[1] - https://www.youtube.com/watch?v=MpmGXeAtWUw


I don't get a reply link for rglullis so will reply here.

I am the crazy bold one of the group and the canary in the coal mine. You won't see any of my friends interacting on this or any other social site. I on the other hand email and make phone calls to everyone at every level of government, C-levels in corporations, investors, military leaders, scientists, influencers, etc...


Interesting. Follow-up question: how do you know that? What makes you so sure that you are the only one that is breaking out of the circle?


Ok, now I am wondering how you reconcile what you are saying with the fact that we are discussing this on a forum maintained by venture capitalists who are not-so-indirectly shaping the minds of, and financing, all these companies that are very focused on intruding even further into our lives.


The appearance of impropriety is in the eye of the beholder


They are independent but federated.

The best example of how it works would be email. You can set up your own email server, and interact with other independent email servers seamlessly, or just find a provider you trust and get your email access from them.


You can "subscribe" to hashtags which allows you to follow them in a column ("timeline") as you would your followers, mixing hashtags into a single timeline if you wish.

There's also the option of adding "featured hashtags" to one's profile, allowing a user to search for users of a particular interest.

Along with the "Federated Timeline", which others have mentioned, and the people you follow boosting posts (akin to retweeting), I've found it quite easy to find a diverse list of people to follow and interact with.


They're not fragmented or segmented at all. It's like email—just because you have an account on gmail.com doesn't mean that you can only email people who use GMail, you can email anyone who supports the same protocol. In the same way, users with an account on any Mastodon server can follow any other users who support the same protocol, whatever server they happen to be on. (Modulo moderation—many servers with lax moderation policies may find themselves blocked by other server administrators. Again, just like email)


Or maybe OP wanted a whole bunch of people here to post joinmastodon.org to their FB pages. I bet it worked, too.


The big problem I have with Mastodon is that it has a culture of censorship equivalent to Facebook. My recollection is that to be listed in their directory of instances, you need to abide by the content rules created by key Mastodon people, and those instances in turn are required to only peer with other instances that follow the same rules. Those rules basically include moderation based on various progressive political stances, so you can’t honestly discuss controversial topics from different perspectives. It creates a federated network that is still an echo chamber rather than a platform for civil discourse and free thought. And if that’s the case I am not sure why I need Mastodon or why I would lend it attention or credence.


Mastodon follows the ActivityPub protocol. There are other backends that implement the same protocol, like Pleroma; the Fediverse (all the systems that use ActivityPub) is bigger than Mastodon, and much bigger than mastodon.social and co.

There's a sort of blocking firewall around mastodon.social and sites broadly on the same 'side' as it, in that all these servers tend to share blocklists. One of the things they'll block a server for is being 'free-speech maximalists'.

But outside of the mastodon.social bubble, there are lots of free speech maximalist fediverse instances that don't block anyone, or block different people.

Pleroma instances tend to be more free-speech oriented (because the technical choice of using Mastodon or Pleroma as your backend became part of a signalling game). I think Pleroma's better software, anyway.


We at mastodon.social don't copy our blocklist from anybody and don't consider ourselves to be on anyone's "side". Our blocklist is of quite a reasonable length for 5 years of operation and is based solely on the personal experiences of our moderation team:

https://mastodon.social/about/more#unavailable-content


I've seen your blocklist, and meant more that smaller instances cut from similar broad-culture cloth are likely to copy mastodon.social's blocklist (I think doing that makes sense if you know you share sensibilities with mastodon.social's moderators).

I follow plenty of people on mastodon.social, mastodon.tech, etc., as well as people from a lot of the suspended instances in your list, and I can clearly see/feel two (really more) different 'cultures' in the fediverse.

The GP is more aligned with the second culture, I think- the culture you could label 'free speech maximalist'.

I don't really think mastodon.social should change: you've banned things you don't like, you perceive certain messages as pernicious enough to warrant a ban, and that's fair. You've made a space with a certain tone and flavor, one that's suitable for a certain type of person.

But people are diverse, and so there are plenty that find the culture and tone of mastodon.social inferior to, say, Poast.

I think people with different sensibilities are suited to different spaces, and that it makes sense to point out that some of the spaces on mastodon.social's blocklist have value: maybe not to the median member of mastodon.social, but to people not of your culture.

To be frank, you're progressives. There's nothing wrong with that! Some of my best friends are progressives! But it's a lens that colors how you view the world, and what's ban-worthy. Again, nothing wrong with making a space that conforms to your sensibilities, but I wanted to make it clear to the GP that there are plenty of instances that don't have the same sociopolitical 'flavor' as mastodon.social, and that mastodon.social sits at the graph-center of a particular subset of the federated network that is of similar flavor.


> My recollection is that to be listed in their directory of instances, you need to abide by the content rules created by key Mastodon people, and those instances in turn are required to only peer with other instances that follow the same rules. Those rules basically include moderation based on various progressive political stances, so you can’t honestly discuss controversial topics from different perspectives.

Here are the requirements for us to promote your server:

https://joinmastodon.org/covenant

The only hard requirement related to content is that racism, sexism, homophobia and transphobia be not allowed. There is no requirement to peer with anyone or have any specific political stances. If your political stance or perspective requires you to dehumanize people of different races or sexual orientation, then yes, you are not welcome.


> The only hard requirement related to content is that racism, sexism, homophobia and transphobia be not allowed.

I don't know anyone who's a self-declared racist, sexist, homophobe or transphobe.

But these things mean different things to different people.

So let's say JK Rowling wants to join. Would she be welcome?

What about Glenn Greenwald?

Or Andy Ngô?


On one hand this might be a problem; on the other hand, one has to realize that someone hosting Mastodon is also, in some regions of the world, responsible for preventing illegal content on the service. If I hosted a Mastodon instance, for example, I would have to check everything because of upload-filter rules, and if other people uploaded or posted inappropriate stuff, I might face legal consequences for it. This needs to be kept in mind.


I think you are correct about having your instance listed, but I created my own server from a DigitalOcean droplet (newathens.net) (it's quite easy) and I can follow anyone. So I don't know about that second part.


Censorship is always abused. Maybe this time or maybe not. But it's always abused.

And yes, it's worse than misinformation.


I'm not sure it's worse than misinformation. In my field, bad data often has a more damaging impact than no data.

But I suppose it will depend on the circumstances, and I'd honestly be interested to hear your thoughts on why censorship is worse.

As for the inevitability of abuse? When it comes to corporate interests, that seems to be nearly axiomatic. The Verge's collection of fascinating & horrifying exchanges at Apple about app approvals & secret deals makes for a great case study in this. [0]

[0] https://www.theverge.com/22611236/epic-v-apple-emails-projec...


Censorship is bad data, because it is selectively excluded data.

If gamma rays randomly excluded one post in a thousand, that would be missing data. Censors excluding one post in ten thousand is worrying because they have motivations of their own, which gamma rays do not.


It looks like we're going to get a massive test of largely misinformation (US) vs. largely censorship (China) writ large in the coming decades. Place your bets on the outcome.


From my understanding, China's model of media control focuses more on dilution and distraction than on overt censorship.

Both exist. But the larger effort is put into distraction.

The recent Russian model is more on bullshit and subverting notions of trust entirely.

American propaganda seems largely based on a) what sells, b) promoting platitudes and wishful thinking, and c) (at least historically) heart-warming (rather than overtly divisive) notions of nationalism.

The c) case is now trending more toward divisive and heat-worming.


> Both exist.

Yes, censorship and propaganda go hand in hand. In 1922 Walter Lippmann wrote in his seminal work, Public Opinion,

> Without some form of censorship, propaganda in the strict sense of the word is impossible. In order to conduct a propaganda there must be some barrier between the public and the event. [1] [2]

[1] https://www.gutenberg.org/cache/epub/6456/pg6456.html

[2] https://en.wikipedia.org/wiki/Public_Opinion_(book)#News_and...


Right.

Both are also tied inherently to monopoly, along with surveillance and both general and targeted manipulation.

https://joindiaspora.com/posts/7bfcf170eefc013863fa002590d8e...

https://news.ycombinator.com/item?id=24771470


This isn't 'some bad data' vs 'no data'.

This is 'some bad data' vs 'systemically biased data', and the latter is much worse. Most datasets will contain some bad data, but it can be worked around because the errors are random.


Bad data vs no data at all? I would think no data would put you out of a job while bad data would require more hires to filter the data.


I prefer employees who say "I don't know" over confident bullshitters.


No data clearly indicates that there is no data.

A statement of "I don't know' clearly indicates a lack of knowledge.

A statement of "I have no opinion" clearly indicates that the speaker has not formed an opinion.

In each case, a spurious generated response:

1. Is generally accepted as prima facie evidence of what it purports.

2. Must be specifically analysed and assessed.

3. Is itself subject to repetition and/or amplification. With empirical evidence suggesting that falsehoods outcompete truths, particularly on large networks operating at flows which overload rational assessment.

4. Competes for attention with other information, including the no-signal case specifically, which does very poorly against false claims as it is literally nothing competing against an often very loud something.

Yes: bad data is much, much, much, much worse than no data.


Data that's had data censored from it is bad data.


False.

Outlier exclusion is standard practice.

It's useful to note what is excluded. But you exclude bad data from the analysis.

Remember that what you're interested in is not the data but the ground truth that the data represent. This means that the full transmission chain must be reliable and its integrity assured: phenomenon, generated signal, transmission channel, receiver, sensor, interpretation, and recording.

Noise may enter at any point. And that noise has ... exceedingly little value.

Deliberately inserted noise is one of the most effective ways to thwart an accurate assessment of ground truths.


Defining terms here is important, so let's avoid the word bad for a moment because it can be applied in different ways.

1) You can have an empty dataset.

2) You can have an incomplete dataset.

3) You can have a dataset where the data is wrong.

All of these situations, in some sense, are "bad"

What I'm saying is that, going into a situation, my preference would be #2 > #1 > #3.

Because I always assume a dataset could be incomplete, that it didn't capture everything. I can plan for it, look for evidence that something is missing, try to find it. If I suspect something is missing but can't find it, then I at least know that much, and maybe even the magnitude of uncertainty that adds to the situation. Either way, I can work around it, understanding the limits of what I'm doing, or, if there's too much missing, make a judgement call and say that nothing useful can be done with it.

If I have what appears to be a dataset that I can work with, but the data is all incorrect, I may never even know it until things start to break or, before that if I'm lucky, I waste large amounts of time to find out that the results just don't make sense.

It's probably important to note that #2 and #3 are also not mutually exclusive. Getting out of the dry world of data analysis, if your job is propaganda & if you're good at your job, #2 and #3 combined is where you're at.


I'd argue Facebook's censorship leaves us with 2 and 3. They don't remove things because they're wrong; they remove them because they go against the current orthodoxy. Most things are wrong, so most things that go against the modern orthodoxy are wrong... but wrong things that go WITH the modern orthodoxy aren't removed.

It's a scientist who removes outliers in the direction that refute his ideas, but not ones in the direction that support it.


Let's note that this thread's been shifting back and forth between information which is publicised over media and data, with the discussion focusing on use in research.

These aren't entirely dissimilar, but they have both similarities and differences.

Data in research is used to confirm or deny models, that is, understandings of the world.

Data in operations is used to determine and shape actions (including possibly inaction), interacting with an environment.

Information in media ... shares some of this, but is more complex in that it both creates (or disproves) models, and has a very extensive behavioural component involving both individual and group psychology and sociology.

Media platform moderation plays several roles. In part, it's performed in the context that the platforms are performing their own selection and amplification, and that there's now experimental evidence that even in the absence of any induced bias, disinformation tends to spread especially in large and active social networks.

(See "Information Overload Helps Fake News Spread, and Social Media Knows It". (https://www.scientificamerican.com/article/information-overl...), discussed here https://news.ycombinator.com/item?id=28495912 and https://news.ycombinator.com/item?id=25153716)

The situation is made worse when there's both intrinsic tooling of the system to boost sensationalism (a/k/a "high engagement" content), and deliberate introduction of false or provocative information.

TL;DR: moderation has to compensate and overcome inherent biases for misinformation, and take into consideration both causal and resultant behaviours and effects. At the same time, moderation itself is subject to many of the same biases that the information network as a whole is (false and inflammatory reports tend to draw more reports and quicker actions), as well as spurious error rates (as I've described at length above).

All of which is to say that I don't find your own allegation of an intentional bias, offered without evidence or argument, credible.


An excellent distinction. In the world of data for research & operations, I only very rarely deal with data that is intentionally biased. Counted on the fingers of one hand. Cherry-picked is more common, but intentionally wrong data, crafted to present things in a different light, that's rare.

Well, it's rare that I know of. The nature of things is that I might never know. But most people that don't work with data as a profession also don't know how to create convincingly fake data, or even cherry pick without leaving the holes obvious. Saying "Yeah, so I actually need all of the data" isn't too uncommon. Most of the time it's not even deliberate, people just don't understand that their definition of "relevant data" isn't applicable. Especially when I'm using it to diagnose a problem with their organization/department/etc.

Propaganda... Well, as you said, there's some overlap in the principles. Though I still stand by my preference of #2 > #1 > #3. And #3 > 2&3 together.


Does your research data include moderator actions? I imagine such data may be difficult to gather. On reddit it's easy since most groups are public and someone's already collected components for extracting such data [1].

I show some aggregated moderation history on reveddit.com e.g. r/worldnews [2]. Since moderators can remove things without users knowing [3], there is little oversight and bias naturally grows. I think there is less bias when users can more easily review the moderation. And, there is research that suggests if moderators provide removal explanations, it reduces the likelihood of that user having a post removed in the future [4]. Such research may have encouraged reddit to display post removal details [5] with some exceptions [6]. As far as I know, such research has not yet been published on comment removals.

[1] https://www.reddit.com/r/pushshift/

[2] https://www.reveddit.com/v/worldnews/history/

[3] https://www.reveddit.com/about/faq/#need

[4] https://www.reddit.com/r/science/comments/duwdco/should_mode...

[5] https://www.reddit.com/r/changelog/comments/e66fql/post_remo...

[6] https://www.reveddit.com/about/faq/#reddit-does-not-say-post...


Data reliability is highly dependent on the type of data you're working with, and the procedures, processes, and checks on that.

I've worked with scientific, engineering, survey, business, medical, financial, government, internet ("web traffic" and equivalents), and behavioural data (e.g., measured experiences / behavour, not self-reported). Each has ... its interesting quirks.

Self-reported survey data is notoriously bad, and there's a huge set of tricks and assumptions that are used to scrub that. Those insisting on "uncensored" data would likely scream.

(TL;DR: multiple views on the same underlying phenomenon help a lot --- not necessarily from the same source. Some will lie, but they'll tend to lie differently and in somewhat predictable ways.)

Engineering and science data tend to suffer from pre-measurement assumptions (e.g., what you instrumented for vs. what you got). "Not great. Not terrible" from the series Chernobyl is a brilliant example of this (the instruments simply couldn't read the actual amount of radiation).

In online data, distinguishing "authentic" from all other traffic (users vs. bots) is the challenge. And that involves numerous dark arts.

Financial data tends to have strong incentives to provide something, but also a strong incentive to game the system.

I've seen field data where the interests of the field reporters outweighed the subsequent interest of analysts, resulting in wonderfully-specified databases with very little useful data.

Experiential data are great, but you're limited, again, to what you can quantify and measure (as well as having major privacy and surveillance concerns, and often other ethical considerations).

Government data are often quite excellent, at least within competent organisations. For some flavour of just how widely standards can vary, though, look at reports of Covid cases, hospitalisations, recoveries, and deaths from different jurisdictions. Some measures (especially excess deaths) are far more robust, though they also lag considerably from direct experience. (Cost, lag, number of datapoints, sampling concerns, etc., all become considerations.)

It's complicated.


I've worked with a decent variety as well, though nothing close to engineering.

>Self-reported survey data is notoriously bad

This is my least favorite type of data to work with. It can be incorrect either deliberately or through poor survey design. When I have to work with surveys I insist that they tell me what they want to know, and I design it. Sometimes people come to me when they already have survey results, and sometimes I have to tell them there's nothing reliable I can do with them. When I'm involved from the beginning, I have final veto. Even then I don't like it. Even a well-designed survey with proper phrasing, unbiased Likert scales, etc. can have issues. Many things don't collapse nicely to a one-dimensional scale. Then there is the selection bias inherent when, by definition, you only receive responses from people willing to fill out the survey. There are ways to deal with that, but they're far from perfect.


Q: What's the most glaring sign of a failed survey analysis project?

A: "I've conducted a survey and need a statistician to analyse it for me."

(I've seen this many, many, many times. I've never seen it not be the sign of a completely flawed approach.)


Bad data is often taken as good data, because sifting through it incurs 100x more friction than taking it at face value. When you ultimately get bad results you can just blame the bad data, and you still end up with a paycheck for the month(s) you wasted.


As a metaphor, you can imagine a blind person in the wilderness who has no idea what is in front of him. He will proceed cautiously, perhaps probing the ground with a stick or his foot. You could also imagine a delusional man in the same wilderness incorrectly believing he's in the middle of a foot race. The delusional man just runs forward at full speed. If the pair are in front of a cliff...

As the saying goes, it's not what you don't know that gets you into trouble. It's what you know for sure that just ain't so.


Not quite: if you have no data, you get new hires and new systems to collect and track it.

You may be ignorant, but you know it, and can deal with it. Let's call it starting from 0.

When you have bad data, you frequently don't know that you have bad data until things go very very wrong. You aren't starting from 0. 0 would be an improvement.


This seems like extending the "known knowns" concept to an additional dimension, involving truth.

In the known-knowns model, you have knowledge and metaknowledge (what you know, what you know you know):

     K   U   -- What you know
  K  KK  KU
  U  UK  UU
   \
    What you know you know
If we add truth to that, you end up with a four-dimensional array with dimensions of knowledge, knowledge of knowledge, truth-value, and knowledge-of-truth-value. Rather than four states, there are now 16:

         TT   TF   FT   FF   (Truth & belief of truth)
        ---- ---- ---- ----
  KK  | KKTT KKTF KKFT KKFF
  KU  | KUTT KUTF KUFT KUFF
  UK  | UKTT UKTF UKFT UKFF
  UU  | UUTT UUTF UUFT UUFF
False information is the FT and FF columns.

In both the TF and FT columns, belief of the truth-value of data is incorrect.

In both the KU and UU rows, there is a lack of knowledge (e.g., ignorance), either known or unknown.

(I'm still thinking through what the implications of this are. Mapping it out helps structure the situation.)
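
As a quick sanity check, the 16 cells can be enumerated mechanically; a minimal Python sketch:

    from itertools import product

    # Dimensions: knowledge (K/U), metaknowledge (K/U),
    # truth-value (T/F), belief-of-truth-value (T/F).
    for know, meta, truth, belief in product("KU", "KU", "TF", "TF"):
        print(know + meta + truth + belief)

This prints KKTT through UUFF, one cell of the table per line.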


Reminds me of epistemology, where there is a distinction between certainty and truth.


Yes, there's an element of that in this.


Authoritarian countries collapse because everyone is lying about reality. The same thing happens with metric-driven management.


Any platform where people can speak to an audience needs some kind of 'censorship', otherwise you'll quickly find it's a platform solely for trolls and the like.


Censorship is the sloppiest possible solution to the epistemological crisis. I thought we figured this out during the Enlightenment.


Censorship, meet Filter Bubble. Filter Bubble, Censorship. And this little tyke you've got with you, what's his name? Engagement Metric? Oh, how cute. Nice to meet you. You look innocent. I'm guessing you couldn't do any major societal damage at all. You're certainly not a little problem child.


Filter bubble is not a real problem:

https://twitter.com/degenrolf/status/1261164727486615559?lan...

https://twitter.com/degenrolf/status/1067780924014772224

Whereas censorship is lindy among things that have bad effects on society.

https://en.wikipedia.org/wiki/Lindy_effect

So reserve the most caution for the proven bad thing, not the one you're in a trendy moral panic about.


So Rolf Degen argues that filter bubbles aren't a problem, and if you think they are, then it's because you're a "political junkie" trapped in one. That's some pretty twisted logic mixed with a nice helping of poisoning the well. I guess political junkies would never do anything crazy like assault the Capitol building to prevent certification of an election result. Yep, nothing to see here.

I'm going to adopt this style of argument from now on.

"Oh, you think that X is a big problem? Well, it isn't, because you have problem X, and only think that way because of it! It's your cognitive distortions talking! Zing!"


A couple of boomers went on an unguided tour, unarmed. I didn't know insurrectionists tended to leave their guns at home


Almost every photo I've seen of the event shows neither "a couple of" anyone nor largely "boomers".

On a similar note, I somehow doubt that if people broke through the doors to enter your home, assaulted people trying to protect it, yelled about how they wanted you dead, and then took some of your stuff, you'd be calling it an "unguided tour".


The question is: do we want to learn the lessons of history the easy way, or the hard way?


New people are being born every day who weren't around for the steps forward made in the past. If only there was an institution that could step up to the task of teaching them. Instead there are institutions for getting them to buy toys and making them do algebra drills.


It's a really hard problem to solve. History has shown time and again how easy it is to coopt institutions as well.


If my kids were late coming home from school, and I asked you if you knew what had happened, and you either:

1. Don't say anything because my neighbour tapes your mouth shut

2. Lie and say, "They were brutally murdered by your neighbour", resulting in a dead neighbour followed by my kids showing up unharmed from school

...can you explain in this scenario how censorship is worse than misinformation?

I'm not trying to be a jerk. I hear your argument a lot (especially on tech-heavy web sites) and I want to understand it.


I think there is a bit of a jump if you are acting this quickly and this harshly on information without verifying it.

Concretely, to your hypothetical: don't attribute to misinformation what is really the fault of your barbaric reaction. That's not to say the liar should not be punished; they should bear a big share of responsibility for the consequences. But at the end of the day it was not the liar who killed your neighbor; you did.


It would be, but in this version you also showed him pictures to prove it; he just didn't know they were photoshopped. And you linked him to a news article on thebostontribune.com reporting that his kids were dead. And his family and friends were sharing their condolences.

It's not as if folks AREN'T acting on misinformation, or AREN'T showing that they can't really distinguish between the two. Tons can. And tons won't realize that The Boston Tribune isn't real.

We're having to deal with almost literally shouting "fire!" in a crowded theater when there's no fire, except with special effects and major campaigns to convince people there's a fire, not just someone taking a guy at his word and stampeding because of it.


This seems like really stretching the analogy just to remove personal responsibility.

If I am the father of the missing children and I see the "family and friends" sharing their condolences, I would go talk to them first. If someone comes with pictures trying to accuse someone of something, no matter how shocking the accusations, there would still be the question of (a) why is someone bothering with taking pictures instead of going to the authorities beforehand, and (b) what are the consequences for me if I went on a rampage based on bogus evidence.

To get a little bit on topic: the reason that censorship is worse than misinformation is that we should always operate on the premise that our information is incomplete, inaccurate or distorted by those controlling the information channels.

Without censorship, I can listen to different sources (no matter how crazy or unsound they are) and try to discern what makes sense and what does not. With censorship, any dissent is silenced, so we get one source of information, which can never be questioned. Or worse, we get to see many sources of information, but only the ones aligned with the censors, which gives us a false consensus and the illusion of quality information.

Only idiots can walk around in the world of today and confidently repeat whatever they hear from "official" sources as unquestionable truths.


Thanks for your reply, rlgullis.

The extremes of my example were only to show that there could be real and serious consequences from misinformation rather than silence. If we dial it back from "killing my neighbour" to "lost my job" or even "missed my bus", I believe my point still stands. In many scenarios that we experience every day, we would be better served by accepting censure over misinformation.

You claim "we should always operate on the premise that our information is incomplete, inaccurate or distorted by those controlling the information channels" and I agree with you in theory. But in practice this is impossible. The human brain is physically unable to work everything through from first principles. This makes sense conceptually and has been verified in research.

And this to me is the fundamental issue of our time:

In theory, social media and unrestrained free speech are a boon for all society.

In practice they have turned people against each other with very real and serious consequences.


> In many scenarios that we experience every day, we would be better served by accepting censure over misinformation.

No. Not at all. I refuse your premise. Not only are you begging the question here (what scenarios? Your example was terrible and I really don't think you can come up with a good one), I honestly worry more about those who believe this rhetoric than about the "victims" of misinformation.

Also, it's curious how those that so easily accept censorship never think that they will eventually be on the wrong side of the taser gun.

> I agree with you in theory. But in practice this is impossible. The human brain is physically unable to work everything through from first principles.

Good thing then that this is NOT WHAT I AM SAYING.

There is no need to "work through things from first principles". The idea is NOT to determine a priori what is "right" or "safe" and then make a binary decision. The base idea is to decide what action to take (or refuse to take) by asking yourself: what is the worst possible thing that can happen if the information I have is wrong? What are the odds of me being wrong?

I'd suggest you get acquainted with Nassim Taleb and Joe Norman to understand better how to deal with complexity and uncertainty.

> In practice they have turned people against each other with very real and serious consequences.

Bullshit. There was no Facebook during the time of the Crusades. There was no Twitter during the Cold War and no smartphones during WW1 and WW2. None of these things would have been avoided if only we could censor wrongthink.

On the other hand, THERE ARE video records of Tiananmen Square which have been successfully hidden from an entire country for an entire generation.

(Sorry for the harsh language, but I start reading any kind of censorship-apologetic and fighting instincts kick in. If you don't see how much of a sign of being morally bankrupt it is to casually defend the hellish things like state-sponsored censorship, I see no point in continuing the "debate")


Hey, no worries, rlgullis. I get heated too. :) I don’t know if it will be fruitful to continue this discussion here either, but I appreciate your comments and I suspect that if we spent an afternoon trying we’d find our common ground was vast. Have a good one!


There are some things that - no matter how much "common ground" we have - simply can not be discussed in relative terms. Advocating that we all should be subjected to censorship and silence anyone who speaks against the status quo is one of them.

To think that it is okay to have one all-too-powerful entity controlling information channels is stepping into fascism and totalitarianism. This is a lesson that we should have learned already: no possible good comes out of that.


[flagged]


Publishing provably false statements or reports with intent to deceive (or even just gross negligence) falls pretty squarely under the definition of misinformation. This isn’t very controversial, except among nut cases.


Oh. I see. So when a computer system sends signals over a wire, if it represents "provably false statements", it's actually misinformation, and not information. All those 1s and 0s instantly switch from information to misinformation, the minute their final representative form embodies a "provably false statement".

Who decides what's provably false, by the way?

Are the novels of Tolkien "misinformation", since it could presumably be easily proved that the events described in them didn't actually happen?

And what is the burden of proof for "intent to deceive"? And in which court is this all decided?

Who decides? Just people in your group, right? Whatever your group happens to be. Sure hope we all worship your god then, because the other gods are all "misinformation".


Censorship being “worse than misinformation” seems like whataboutism, given that FB has serious problems with both.


It's not a whataboutism, it's pointing out that with one of the most popular proposed solutions to misinformation, censorship, the cure is worse than the disease.


I like the quote from the Supreme Court on the topic:

"If there be time to expose through discussion, the falsehoods and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence"

However, I think it's also important to recognize that in today's algorithmically driven content presentation, "more speech" is often comically ineffective because it is never consumed in the emergent content bubbles that silo people from contradictory information. Not to mention the fact that misinformation that confirms your preconceptions is a much more powerful influence than actual information that contradicts them. Given this, an important caveat embedded in the above quote is: "If there be time". A recognition of the fact that, in some circumstances, there will not be an opportunity for more speech to prevail.

I don't have a solution to this. There may be no good solution to this, except lesser degrees of bad solutions.


The degree to which social media allows both the amplification, and as you said, siloing of viewpoints to a mass audience, is IMO qualitatively different from the media available pre-Internet.


Buddy, buddy, WAKE UP. Facebook is not a public service; it's not there to serve a nobody like you. It's a private company that exists to make money. You can't use their service to trash them and promote their competition; that is not a reasonable business model for them. Financing you by giving you a free platform to promote their competition and trash them is not a good trade for them. DO YOU UNDERSTAND???


If it serves the public, it's a public service. Utility companies were private companies until people decided they weren't.


What companies would not be public services under that definition?


Ones that aren't natural or in-practice monopolies


Here's an idea. If everybody who dislikes Facebook removed their profile picture, then it would quickly become a dull place and people would flee the platform.

Alternatively, instead of removing one's profile picture, one could replace it by the Mastodon logo to make a statement.


Or delete your account


Yep. Facebook censored me for linking to an entry about COVID-19 growth on my personal webpage — https://samiam.org/COVID-19 for the record — saying the post violated Facebook’s community standards, falsely claiming it was spam (no, I do not have a single ad over on my personal website).

Also, interviewing for a job at Facebook has been one of the worst job interview experiences I have ever had.


Tell me more about what happened at your Facebook interview


They asked a lot of really senior-level questions about B-trees. Since I tend to use hashes instead of B-trees for my data structures, I was caught completely flat-footed.

To interview for Facebook, study B-trees like crazy (no, the recruiters did not warn me about this).

Also, after the interview, the recruiters at Facebook I was in contact with completely ghosted me. Very rude.


A recruiter explicitly told me to study trees and related algorithms and even offered an FB study guide. However this happened after I told them I didn’t do game show style interviews. YMMV.


I'd be willing to bet that the phrase "corporate-controlled network" is what set off the censor, and not Mastodon. That more often appears next to links to fake news sites.

On the other hand, the opaque/nonexistent review and appeal process is sleazy and YouTubesque.


Facebook has a blacklist of competitors; Facebook wants to control the narrative. There's a reason your feed isn't in timeline order: it's artificially controlled.

Same thing happens on youtube, twitter, etc.

I've been using add-ons and RSS feeds to go back to time/date-ordered feeds, so I don't miss things I want to follow.

You can use RSS feeds to get around soft censorship, using apps like IFTTT, the PocketTube add-on for YouTube, etc.
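
As a concrete illustration, here's a minimal sketch using the feedparser package; the YouTube channel-feed URL pattern is to the best of my knowledge current, and CHANNEL_ID is a placeholder to fill in:

    import feedparser  # pip install feedparser

    # Pull a channel's uploads straight from its RSS/Atom feed and sort
    # them by publish date, sidestepping any recommendation algorithm.
    url = "https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID"
    feed = feedparser.parse(url)
    for entry in sorted(feed.entries, key=lambda e: e.published):
        print(entry.published, entry.title, entry.link)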


The naivety in here is astonishing. "The AI just messed up." Facebook's entire existence is built on 1) identifying the smallest details in information and 2) buying up small competitors before they become an issue.


Exactly, don't accept "blame the algorithm" excuses. Facebook/Google/Twitter have entire departments working on tricking and controlling people with psychology. If they can bend you to their narrative, it's great for advertising, voting, etc.

The whole hacker/programmer community is/was about freeing information, never trusting governments or monopolies. Culture sure has changed.


This has happened to me lots of times, for innocuous posts. Like, for example, copying a screenshot of a graph from the CDC, alongside a link pointing to the CDC website I obtained it from. Or copying an inflation graph from a blog, and then linking to that blog in the same comment or post.

Not some conspiracy, just incompetence on FB's part. Of course some people would prefer to believe something nefarious.


I've been censored in the exact same way for linking a friend to relevant government legislation. I think it's a combination of the tone / wording coupled with something about my posting behavior (I took a break for a while then came back). I dunno but it was annoying as heck (though somewhat understandable) that there was no way to appeal.


I'm fighting Facebook by not participating. I implore the rest of you to do the same. As we leave, the folks remaining will have fewer reasons to stay. A trickle can turn into a torrent. Don't even worry about an alternative; we'll find something new when we get there, and until then, a bit of a detox from the feed will do us good.


Did that a long time ago and can't recommend it enough. You can keep a calendar to keep up with birthdays...


Haven't used Facebook in years. The only reason I ever access it is to see some business's info, since for some stupid reason many businesses don't have a proper internet presence with up-to-date information.


I used Marketplace for a minute before becoming permanently banned beyond appeal for selling a computer and making a Bitcoin joke in the listing. So I’m done there.


Ech. I understand the general point OP is making, but I don't mind admitting I see things differently. I am free to enter Joe's Cafe and do 99.9% of the normal things one might do in any similar setting. But if I enter Joe's Cafe and tell my friends there that if they don't like the way Joe runs his cafe they should be aware of Lenny's Cafe, I feel that Joe has every right to stop me from doing that. To pedants: yes, I know the analogy isn't perfect. But please don't pretend it isn't relevant.

This said, I remember when many MAGA friends announced they were leaving for Parler or MeWe or Gab and I never heard any of them claim their posts were removed (the ones who didn't leave right away).


Where did he post this? My guess is it wasn't just to his "wall."

If he posted it to a group (i.e., his college alumni group, or a sports fan forum, or a gamers group), no doubt several people tagged it as spam--which it may have been--and the algorithm kicked in.

There's no Big Conspiracy at Facebook to keep Mastodon down.


I'm just sort of discovering the fediverse "scene". I still have a lot of research to do, but decentralization is very appealing to me. I have some questions, and maybe the answers will help other people too:

* What aspects are currently priority for improvements?

* What are lower priority problems or features that could be started now and worked on at a slower pace?

* Where is a good place for someone with programming, networking, and/or engineering skills to start getting involved with development?

* What can someone with little-to-no programming/networking or related "technical" skills do to further the development and uptake of decentralized social media?

* Are there any suggestions for good reading on this topic, both technical and non-technical? Websites, books, people/groups to follow?


Hey, I'm a contributor to Mastodon and while I can only answer for myself, and not the project, here are my current thoughts:

* What aspects are currently priority for improvements?

I really want to see better authorization & authentication features across different instances. Right now, a really high-priority feature for users is "Disable replies", but the way replies work, anyone can construct an activity that is set as "replying" to whatever posts they want, just by linking to those posts. Figuring out some way to "authorize" those replies (we have a few ideas, but need to work out a lot of the details) is important for us. Additionally, we've been thinking for a while about implementing more group-focused experiences, something kind of like old LiveJournal comms or the new Twitter Communities, and now that there are a few different projects looking into similar things, we think it's an idea whose time has come. And of course, improving on-boarding and general user experience are always at the top of our priority list.
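
To make the reply problem concrete, here's a minimal sketch (my own illustration, with made-up URLs) of the kind of activity any server can emit today; nothing in the protocol currently lets the target approve or reject the inReplyTo claim:

    # An ActivityStreams "Create" wrapping a Note that unilaterally
    # declares itself a reply to someone else's post.
    reply_activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": "https://example.social/users/mallory",
        "object": {
            "type": "Note",
            "content": "An unwelcome reply",
            "inReplyTo": "https://other.example/users/alice/statuses/12345",
        },
    }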

* What are lower priority problems or features that could be started now and worked on at a slower pace?

Interoperable clients. When the ActivityPub network was first envisioned, the idea was that servers would be completely generic, like email servers, and users could connect multiple different, opinionated "clients" to get different UI experiences of the same inbox. However, most of the current fediverse projects implement only the server-to-server federation experiences, and use more standard, domain-specific REST APIs for client communication. I think closing that gap is worthwhile work that can happen at a slower pace.

* Where is a good place for someone with programming, networking, and/or engineering skills to start getting involved with development?

It depends on your inclination! My perspective is that you should always write code that you know you yourself are going to use, because that's the best way to ensure that you're going to stick to it long term.

As a more practical suggestion, https://blog.joinmastodon.org/2018/06/how-to-implement-a-bas... and https://blog.joinmastodon.org/2018/07/how-to-make-friends-an... are still the two best tutorials out there on how to implement the basic ActivityPub protocol.

* What can someone with little-to-no programming/networking or related "technical" skills do to further the development and uptake of decentralized social media?

Use it! Invite your friends to use it. There are lots of non-programming technical skills that are always in demand for these types of projects—UX, design, product management, support, fundraising, comms—but besides those, the biggest way you can support decentralized social media is simply by using it! The more people who are part of the community, the more vibrant, stable, and welcoming it's going to be for new members.

* Are there any suggestions for good reading on this topic, both technical and non-technical? Websites, books, people/groups to follow?

There's a lot of good writing out there, but it's hard to recommend anything that I would regard as really authoritative and summing things up. I think we're kind of in a place where we need fewer people writing about possible futures, and more people building them. As a comparison, you can write all you want about possible startups people could make, but the thing that's really valuable is going out there and trying them. Execution, as always, is 99% of the game.


Remember: private platform, and only governments can censor. Also, First Amendment. Free speech only applies to the government, not private companies. They can show you the door if you try to use their platform to market your competing product...


How does that work when the government admits to directly working with that private company to ask for specific posts and people to be censored? Is that still not a violation of 1A?


This is not surprising as there is no such thing as 'free speech' on private platforms like Facebook.

I thought that the author of this post knew this given the mass de-platforming going on throughout the years.

This shows once again that it can happen to anyone. Facebook and the rest of them will never change.


This seems like an anticompetitive practice

Even if it's towards an open-source federated network that has no head and can host marginalized content.

The implementation here seems to be an anticompetitive practice, which is sanctionable by governments in the US


One can dream. That's still free.


I'm looking forward to dreams as a service, with personalized ads sprinkled in. Only 9.99 a month!


Related:

https://dxe.pubpub.org/pub/dreamadvertising/release/1 ("Advertising in Dreams is Coming: Now What?")


"practice" assumes a lot here.


okay, replace the word with “action”.


This exact thing happened to me last week.

I was notified on Sept 4 at 6:40pm.

Clicking on a link in my post to joinmastodon.org notifies you that the link goes against their community standards.


So they are at it again.

In late 2015, WhatsApp, which was acquired by Facebook in 2014, was caught with its pants down intentionally crippling functionality when detecting links to Telegram.

https://www.androidpolice.com/2016/09/09/whatsapp-is-blockin...


Facebook is not known to ban mentioning their actual competitors like Twitter, YouTube, TikTok, Telegram etc. so why would it single out this small social network?

It seems more like it was a false positive of some moderation system that triggered because the post sounded too much like an advertisement.


Wouldn't be the first time they've done something like this: https://heavy.com/tech/2018/10/facebook-block-minds-com-unse...


They block https://joinmastodon.org but not, say, https://mastodon.social - so probably it's not a part of a strategy.


try it again with pleroma and see what happens!


I'm no fan at all of Facebook. Don't have it, never have. Think it should be destroyed utterly. You can find me on Mastodon.[0]

That said: the company sees 2--3 billion MAU,[1] and sees on the order of 5 billion pieces of content submitted per day.

As best I understand, their measure of exposure is not items but "prevalence", that is, the total number of presentations of a particular piece of content.[2] Long-standing empirical media evidence suggests that this follows a power curve, where the number of impressions is inverse to the number of items. So, say, 1 item might see 1 million impressions; 10: 100k; 100: 10k; 1,000: 1k; etc.

This means that a service can budget and staff for either the minimum prevalence threshold before manual review, or the total number of items granted more than some maximum unreviewed threshold. Machine-assisted filtering can help. In either case, though, mistakes will happen, and at 5 billion items/day, the number of misclassifications even at very high accuracy is large:

- 1%: 50m/dy

- 0.1%: 5m/dy

- 0.01%: 500k/dy

- 0.001%: 50k/dy

... which necessitates secondary review and additional costs, and which of course invites malicious appeals by bad-faith actors. If the filtering system is fed by user reports (flags and the like), then malicious or simply disagreement-based flags may well trigger moderation. (Crowdsourcing has its own profound limits.)
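
A quick back-of-the-envelope of that arithmetic in Python, assuming the ~5 billion items/day figure above:

    # Daily misclassification volume at various hypothetical error rates.
    ITEMS_PER_DAY = 5_000_000_000

    for rate in (0.01, 0.001, 0.0001, 0.00001):
        print(f"{rate:.3%} -> {ITEMS_PER_DAY * rate:,.0f} items/day")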

Another element is that, especially with AI-based filtering systems, what results is determination without explanation. We know that a specific item was rejected, but not why. And in all likelihood, FB and its engineers cannot determine the specific reason either.

(I've encountered this situation more often from Google, again, as I don't use FB, but the underlying mechanics of AI-based decision systems are the same between such systems.)

The upshot though is:

- Moderation is necessary.

- It's ultimately capricious and error-prone. There are initiatives and proposals for greater transparency and appeals.

- Cause-determination is ... usually ... poorly founded.

________________________________

Notes:

0. https://toot.cat/@dredmorbius Also Diaspora (see below).

1. Monthly active users. https://investor.fb.com/investor-news/press-release-details/...

2. See Guy Rosen, VP of Integrity for both content and prevalence references: https://nitter.kavin.rocks/guyro/status/1337493574246535168?... I've written more on the topic here: https://joindiaspora.com/posts/f3617c90793101396840002590d8e...


Mastodon Ivory is illegal to sell—my money is on anti-poaching filters and dumb coincidence.

FB, I'm for hire! Plenty of experience spin-doctoring/downplaying incidents for PR.


Checking with https://blacklistalert.org/, there's a blacklist which lists joinmastodon.org's IP:

http://www.justspam.org/check-an-ip?ip=66.111.4.71
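
For anyone curious how these DNS blacklists work under the hood, here's a minimal sketch; the query zone dnsbl.justspam.org is my assumption, so check the list's documentation for the real one:

    import socket

    # DNSBLs are queried by reversing the IP's octets, prepending them to
    # the blacklist zone, and doing an A-record lookup: an answer means
    # "listed", NXDOMAIN means "not listed".
    def dnsbl_listed(ip: str, zone: str = "dnsbl.justspam.org") -> bool:
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:
            return False

    print(dnsbl_listed("66.111.4.71"))  # the IP checked above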


I think this is highly unlikely to be a targeted attempt to censor Mastodon. The simpler explanation (and the explanation with most historical evidence behind it) is just that Facebook's AI algorithms are kind of bad and nobody in the company understands how they work or what associations they build.

However, the underlying idea that Facebook would block links to competitors is historically valid. As recently as 2016, Facebook blocked links to competing networks from Instagram (https://www.theverge.com/2016/3/3/11157124/instagram-blocks-...), and leaked internal emails from Facebook have shown that the company has an extremely broad view of what does and doesn't count as a competitor (https://panatimes.com/facebook-bought-instagram-to-neutraliz...). The company is extremely anti-competitive, it's not shy about this, and internal emails show that this anti-competitive attitude is entrenched very deeply and very consciously within upper management.

I think taking down this post in specific is very unlikely to be deliberate because:

A) Mastodon is likely not a large enough service to warrant it, and because

B) The explanation based on Facebook's AI being weird, opaque, and generally untested is a much cleaner, simpler explanation that requires fewer jumps in logic.

But it would be completely in character for Facebook to target a real competitor in this way. The reason it's unlikely to be deliberate is not because Facebook would never do something like this, and it's not because Facebook would be too frightened of regulators to do it so openly. Facebook has very openly done stuff like this in the past. It's just that there are other explanations that are more likely, and it's that if Facebook was going to start doing this, Mastodon probably wouldn't be among the first competitors they would target. I need a lot more evidence to show that this is deliberate before I jump off of the (extremely compelling) explanation that automated moderation is really buggy across the board and regularly does unexpected things.

The article comes off as a little uncurious to me, I feel like the author is jumping too quickly to a specific conclusion without a lot of critical thought. But part of why Facebook has these problems with people jumping to conclusions about how it tracks and moderates is because Facebook has a very real history of being openly corrupt in these areas, and Facebook has a real history of being deceptive about their motivations behind decision-making processes. The reputation hasn't come out of nowhere.


Definitely agree that the first assumption you should jump to re FB is that the algorithm is bad, or something broke. Of course FB knows this, so their lack of investment in people who could clean up after these repeated errors is damning.


I'm sure someone could link here to the Mozilla thread about Google being slow to fix "convenient" mistakes that broke Firefox, regardless of whether those mistakes were intentional. Sometimes even accidents can be revealing about where a company's priorities lie and what things they actually think are important.

But that's probably a much deeper, longer conversation to have. I do believe that Facebook regularly uses the poor performance of its moderation algorithms at scale as a shield against public scrutiny, and as a way to occasionally influence public policy.


This was the final straw that led me to start moving away from Facebook Messenger. They were preventing me from sending seemingly random links in 1:1 chats. That was reason enough to start moving friends to a solution with E2EE.

This example may not have been malicious, but it is a stark reminder that you are allowing them to see, and control, your communication. That is something I would prefer not to occur.


The fact that people buy into any of these excuses (Russian manipulation, Covid misinformation) as anything but an excuse to shut down competition and consolidate control is utterly beyond me.

Read literally a single history book, people.


Seems pretty lucrative right now to be crying censorship.


FB can ban whomever they want as long as it is not a protected class.

FB can ban you because you like to eat broccoli, or for any reason whatsoever.

Many people supported this idea during Trump's ban. So you will just need to suck it up.


Weird take. People's concern here is that this sort of behavior is anti-competitive; enforcing rules that applied to everyone re: inciting violence is pretty different.

Seems odd to think that folk have to support a generic action rather than how that action is done. Like, there are people who like baseball but would probably be a bit upset if you randomly threw a ball at them at 90mph in the middle of the street despite them being really supportive of it in a different context.


Political affiliation is a protected class where FB and TWTR are headquartered. Considering the overwhelming majority of Republicans remain on the platform, it seems the removal had nothing to do with class.


> "Political affiliation is a protected class where FB and TWTR "

This is false.

What is true is that in some limited cases and with a lot of caveats, you cannot be fired for your political affiliation in California, assuming that affiliation is expressed outside of work, etc. That's it. Please do not try to stretch that bit of weak labor law to protecting posts on TWTR because they are headquartered in CA.


Try linking to pushbullet.com in FB messenger


I don't have either, could you just tell us what happens?


It doesn't send the message. "Could not send the message. Tap for details."

When you tap, then tap Learn More, it just sends you to this page:

https://m.facebook.com/help/messenger-app/1723537124537415


The tech oligarchs are out of control. We need to classify them as common carriers now.


This part is interesting...

"We have fewer reviewers available right now because of the coronavirus (COVID-19) outbreak..."

That smells fishy. Seems like a job that would be a really good fit for work-from-home. Wouldn't you then have more reviewers available?

Or maybe the exposure to graphic content means they do this in the office?


There was a long article about how they have to do it in an office, and how you can't bring any electronics with you, and how you have to click a special button when you go to pee, and there's a daily limit on how much you can use it.

edit: https://www.theverge.com/2019/2/25/18229714/cognizant-facebo... it was probably this one


I know someone who works in content moderation at Google and they said the company requires them to come in to the office for data security reasons. They even have to put their phone in a locker while they are actually reviewing content. I think it makes sense considering the kind of content they review (including CSAM).


I believe that using the line "our lines are busy right now" cuts down on complaints. I assume this was A/B tested and found to have that effect.

Until very recently, Google Play also had a similar notice without mentioning COVID when an app was in for review.


Frankly: Google is only now-ish regaining the balance from the hiring slowdown COVID brought. It isn't far-fetched that some team went short-staffed until recently. And then, updating notices was not their top priority.


It seems perfectly plausible that the reviewers they do have are busier right now, because more people are using and abusing facebook, so they have "fewer available".


The biggest unreported scandal in big tech is the 100s of 1000s of contractors that work exclusively for these companies. Accenture provides thousands of content moderators to FB, none of whom have any recourse to FB HR or rights to pursue union status at FB. Google is the same.


Why should they have the right to pursue union status at FB? They should do that with whatever company is employing them, and if there are that many of them, it might even be realistic.


I think it’s BS for the biggest, most profitable companies in the world to subcontract vital work. It’s professional apartheid. If you’re a product manager, you deserve a FB.com email and a sushi bar in the office and the best benefits, but if you are an Accenture contractor that reports to that FB product manager, you get no benefits. For all the talk of pay equity that activists within these companies do, they are largely silent on the pay disparities between themselves and the outsourced contractors on which they are utterly dependent.


It’s actually outsourced to Accenture.


Covid is just a convenient excuse that people somehow still keep swallowing. If 2 years on covid is still a problem for you, that's less the fault of covid and more that you are incompetent at running your business.

Surprisingly enough I've had more "our response to covid-19" and similar crap from tech companies that would be near-immune to it than from companies that would legitimately be impacted by it (those whose business requires on-site staff, etc).


Perhaps tech companies care about their employees more than, say, food processors do.


I went to joinmastadon.org

Please disable your ad blocker and reload the page.

I disable adblock and reload.

Please disable your ad blocker and reload the page.

Yknow


It's joinmastOdon.org, not joinmastAdon.org. We don't own the misspelled domain. I wonder how many people fall prey to it though.


A shocking number of people in the comments are misspelling it despite it being written in the headline.


It's some kind of brain thing.


thanks


Ugh. I made the mistake of trying to join the art mastadon. Now I can't do anything until my invite gets processed.


...


That's what he was doing?!??! (And yes, promotion is a big part of making a product)


If the article is correct, that’s what they are doing and Facebook is using their market and financial dominance to stop it.

How is “they’re a private company, make your own” still being used as an argument when the situation is obviously beyond that? We have conclusive evidence from FAANG and governments that they work together.

You practically CAN’T make your own Facebook. Facebook will stop you one way or another. Google, which has a dollar or two, tried and failed spectacularly. Do you know how much better an organic startup would need to be to rival Google’s Day 1 investment in Plus?


"You practically CAN’T make your own. Facebook will stop you one way or another. Google who has a dollar or two"

Google+ was successful and Google shut it down for Google reasons. If they gave away Google+ instead I can't think of anyone who wouldn't gladly take it off of their hands.

You can make your own Facebook and Facebook will not stop you. But people don't want another Facebook; many are realizing they probably want off of Facebook, so replacing it with something similar isn't helpful. What Facebook offers (the network effect) is the main value of the platform, and replicating that is virtually impossible, never mind in the same form as Facebook.

Having Google+'s code day one would mean little without the users.


A social media platform refused to let you advertise a competing social media platform.

Is this really news? Isn't this just business as usual in corporate America?


If this is the case it's actually anti-competitive and supposed to be against the law. Obviously there is a difference between braking the law and getting caught though.


supposedly* breaking the law. /s


So, the author is surprised that a private business doesn't let him use their free platform to promote their competition while simultaneously trashing them...

The level of delusion and entitlement of some people is simply too hard for me to understand.


Facebook isn't just a "private business" or just a "free platform"; they're a gigantic global entity that has integrated its product into the lives of billions of people.

If it was a free service by a mom and pop shop with "use at your own risk" in the agreement, then yes, it would be entitlement.

However, there exist people for whom 90% of their communication happens via Facebook or social media. And it's not even by choice: kids are born into it being the status quo, and if 100% of your friends are using it while you're growing up, the chances that you won't use it too are slim to none.

Thus, the company needs to hold responsibility for providing open communication. Censoring posts about their competitors goes against that.


Given the number of articles from Facebook saying how they won’t censor speech (doing a basic Google search here) that all seems like false advertising.

Note: I don’t use Facebook so perhaps I’m missing something.


Probably just a bug. I work at a different social media company, but I see these kinds of posts pop up about it when something happens by accident. The comment threads are filled with people throwing out unfounded accusations and "obvious" conspiracy theories. The mob is riled up.

One of the most amusing things about actually working in one of these companies is just seeing how confidently wrong some internet commenters are about what is actually happening when an article or outage happens.


Well, maybe the big powerful corporation should be up front about what it moderates and what it doesn't, and about changes in policy as it learns. Then abide by and reference that.

That way people can understand why their post was removed without having to speculate.

Maybe filtering changes should be rolled out slowly at first, with every customer complaint analyzed, to catch these bugs you speak of before they become widespread, frustrating, and obviously suspicious-looking.

Maybe the big powerful corporation should hire staff in proportion to their mistakes, instead of blaming a pandemic for its record profits, er ..., I mean lack of interest in finding ethical solutions to problems.

Maybe if the company made good-faith explanations of mistakes, and actually fixed them, instead of letting them fester or continually playing hide-and-seek with information, speculation would not be a necessity.

Your attitude about your company's customers is equally disappointing.


You're making excuses for being confidently uninformed.


I have not bothered to speculate why they took action in the case being discussed here.

Not informing customers, then being amused that they are uninformed (and some invariably speculate), is not a solution to anything.


I'm amused by how confidently people assume they know what's going on when they're clearly clueless. They're not my customers, and I'm not speaking on behalf of my employer. It's just that in the past I have had real information about what was happening in related situations (e.g. someone gets banned, a post is moderated, technical outages), and I find it funny how clueless people are, yet they act like they're not.

Am I not allowed to find amusement in dumb people? Then maybe use that as a reminder to relax when you don't have all the information.


I have no idea why you think you are smarter than anyone else.

You are calling people who are left uninformed dumb because information that is relevant to them is being withheld and they don't have inside information or corporate experience like you.

People work with the information and experiences they have. If a lack of information leads to wild misunderstandings, that is why corporations should communicate better.


You're not dumb because you're uninformed; you're dumb when, despite being uninformed, you act like you aren't.

I'm not smarter (or maybe I am), but in some cases I am more informed.



