Still, it does seem like the sort of thing that could get Facebook in trouble with a regulator if you squint at it.
Also note that Twitter has taken an interest in decentralized protocols with its Bluesky project.
On the contrary, News Corp for example is great for getting dynamic patterns of headlines you don't want to read.
And the ~100,000 or so people who actually used that network.
Negligible: so small or unimportant as to be not worth considering; insignificant.
The first page of Mastodon search results, for me, is all about a heavy metal band from Atlanta. Sounds like a negligible risk to me.
So if there is a 2^-500 chance of an alien invasion, let's put a team on it!
Does the team cost about 2^-500 of your resources and attention? Then, YES, do it. That's my point.
I'm not saying that this is what you SHOULD do all of the time, I'm saying that it is entirely plausible, even likely, that Facebook might deliberately "go after" Mastodon because it very very easily can.
I was asking what the impact (damages) would be if this guy can't post about his software on Facebook to his friends and family. Do you even think this would be in the top 100 ways to crush the competition???
Whether or not Mastodon will is another question. (I've been on it since ~2016.) But in FB's position, paranoia pays, and is worth throwing billions at, if deemed a sufficient threat.
What you're really looking for is what happened to Parler, which looked like a serious act of anti-competitive behaviour and showed the true brutish nature of these large companies destroying an alternative social network.
That was a much worse form of 'anti-competitive behaviour' than this.
Four data points ain't a lot, but you can no longer claim it's a one-off event either.
I genuinely am curious: in Facebook's world today, how do you see this playing out in a way that actually hurts Facebook? I legit want to know so I can actively work toward making it happen.
Perhaps after a certain number of people report it, it automatically gets marked as spam (such logic would be more reliable on FB than on, say, Reddit or HN, since most FB users have a single profile tied to their real name)
How could it?
Make no mistake, this is bi-partisan. Both sides have their own agendas against Big Tech.
Oh, I hope so...
My best guess is that their lists of competitors to keep an eye on got mixed up with other stuff. Or, of course, that they simply don't want to promote competitors on their platform, which would be normal for any non-monopoly.
It would be interesting to do some experiments here: post the same text to see if it gets removed again, and then repost it and remove sentences to see which one triggered the filter.
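This is essentially delta debugging over sentences. A minimal sketch of the bookkeeping, assuming a hypothetical is_removed(text) check that stands in for the manual step of posting and seeing whether the filter fires (Facebook exposes no API for this):

    def find_trigger_sentences(sentences, is_removed):
        """Repost the text with one sentence dropped at a time; any
        variant that survives implicates the dropped sentence."""
        triggers = []
        for i in range(len(sentences)):
            variant = " ".join(s for j, s in enumerate(sentences) if j != i)
            if not is_removed(variant):        # variant passed the filter,
                triggers.append(sentences[i])  # so sentence i was the suspect
        return triggers

    # Toy demo with a fake filter that hates the word "mastodon":
    post = ["Check out my project.", "It runs on Mastodon.", "Thanks!"]
    fake_filter = lambda text: "mastodon" in text.lower()
    print(find_trigger_sentences(post, fake_filter))  # ['It runs on Mastodon.']

(In practice you'd also want to wait between attempts and vary the order, since repeat-posting identical text is itself a spam signal.)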
The largest Mastodon instance has 500k users. Facebook has two billion users. If you can post Twitter and TikTok and Tumblr links on Facebook, do you seriously think there's someone sitting at Facebook taking names and making lists about a social network that practically nobody even uses?
There are competitors a hundred times as large that you can link to. My first guess is it probably tripped some NSFW filter, because on some Mastodon instances there's quite a lot of porn.
Yes, when I worked at a 500-person startup there was an employee whose sole task was to stay aware of our established competitors and nimbler startups. It was jokingly nicknamed the “office of paranoia.”
That said, I highly doubt FB is using their list of competitors to block posts mentioning them.
Source: I used and helped maintain one of the “don’t federate with these instances” list.
Such a weird thing for regulators to be chasing - there seem to be so many more obvious issues than this. Is this a political winner - in other words, does the average person think they can put up Burger King flyers in a McDonald's store?
> C. Prohibited Content
> * contains advertisements, solicitations, or spam links to other web sites or individuals, without prior written permission from Walmart;
Facebook, on the other hand, promotes itself as a place for creating and sharing user content. If it then moderates in ways that are undisclosed, opaque, and with little recourse, that deceptive behavior is neither ethical nor in line with how it promotes itself as a service.
Facebook (and social media in general) is essentially a public forum and comes with the expectations of such since these companies control such a large part of internet discourse.
If all of them said "we won't allow anyone to talk about any of our competitors on our site" then it strikes me as a company using their dominance to silence other players in the market, i.e. anti competitive behavior.
Facebook claims to be a social network. Those types of networks normally DO NOT allow you to promote other social networks.
Even review platforms, which are a bit more communication-oriented in nature, block posting reviews from other sites.
I don't see how that makes it legal.
Here are other examples of potential actions that would also clearly qualify as illegal anti-competitive behavior:
Microsoft could decide to block the download pages for Chrome and Firefox from being shown in IE.
Google could block results related to Bing in their search engine or browser.
I don't see how this behavior by Facebook is any different. If it can be shown that this was done deliberately by Facebook, I have little doubt that it would also qualify as anti-competitive behavior.
When you first search on Bing for Chrome downloads, they put up a big Edge promo above the results. After you download, they pop up a box asking if you are sure you want to switch, as Edge is "faster and more secure".
The problem for Mastodon is that it has a TON of content that is against Facebook policies. So Facebook can simply say: users are reporting this crap as spam, so we've blocked it. Done.
European law tends to favor maximizing competitors. American law tends to favor maximizing consumer value. At first glance, these can be considered equivalent, but they differ at the margins (which is why, for example, Amazon keeps getting hit with antitrust in France but not in the US).
I thought what happened to alternative social networks was a warning showing not only that they can do it to anyone, but also how anti-competitive they really are.
What happened here to this person mentioning Mastodon is no different but was like 1% of what Facebook and many other private platforms can really do.
Nowadays most of our communication channels are owned by corporations. Are you okay with them deciding what we're allowed to talk about? Zoom, for example, banned meetings discussing the Tiananmen Square massacre; you can't post links to The Pirate Bay in private chats on Facebook... in private chat!
And this coming from me, who's quite okay with Twitter banning Trump and other idiots off their platform.
The message says it triggered spam filters. It's not related to COVID misinformation.
The only place COVID appears is in the warning that their manual review queues are longer than normal due to COVID.
I don't know if Facebook has their stuff together, but I think it's unethical to have people review random user uploaded content without close access to a mental health specialist.
You can have several degrees of intensity a reviewer might be able to see (to not expose all reviewers to the very worst on a regular basis), but no algorithm can clearly identify the nastiest of the nasty content. The algorithm sees "government pedo club", flags it as fake news, and who knows what the shared content actually contains. It could be a conspiracy nut, it could just as well be actual child porn. The probability is low, but you need someone standing by just as well, in my opinion.
No, the opposite is true: Any job that requires reviewing potentially private content must be done in a controlled environment.
I wouldn't be surprised if content reviewers weren't even allowed to have cameraphones at their desks.
Can't risk having someone snap photos of the screen while reviewing content flagged as sensitive. Doing this job from home is not an option.
The FB team may be overwhelmed with COVID-related misinformation.
A lot of content moderation is outsourced to countries like India, where productivity and availability may have degraded due to COVID ravaging the country.
Many firms still have backlogs from Covid disruption.
If you have worked at or started any half-decent-sized company, you'd know this.
We have vaccines for those that want them.
We have drugs that treat it very well.
There’s no reason to censor information or shut things down over it.
With respect to connecting with family & friends, I'd much prefer a pure platform based pretty much on just that.
With respect to other people with interesting things to say, I'd prefer blogs aggregating & curated sites like, well, HN itself.
For the former, I don't know how you get to a "pure" platform like that where you can communicate & share experiences/photos with each other without also letting meme-ish "lol this person of <political affiliation I hate> is an idiot" posts through, but at the very least it could avoid surfacing them algorithmically and rewarding them with "internet points".
> avoid surfacing them algorithmically and rewarding them
This is the key. And I’ve come to believe that the only way to prevent the platforms doing their algorithmic engagement maximization thing is to encrypt everything E2E.
The nice thing about these open protocols is that they are simply reverse chronological. You see what you choose to see, in the order it was published.
It's a totally different experience than the engineered rollercoaster that is corporate social media.
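You can reproduce the "reverse chronological, from sources you chose" model yourself in a few lines. A rough sketch using the feedparser library, with RSS as a stand-in for any open protocol (Mastodon profiles expose RSS feeds; the URLs are just examples):

    import feedparser  # pip install feedparser

    FEEDS = [
        "https://mastodon.social/@Gargron.rss",  # Mastodon profiles have .rss
        "https://example.com/blog/atom.xml",     # placeholder blog feed
    ]

    entries = []
    for url in FEEDS:
        for e in feedparser.parse(url).entries:
            if getattr(e, "published_parsed", None):
                entries.append((e.published_parsed, e.get("title", ""), e.get("link", "")))

    # Newest first, strictly by publication time -- no engagement ranking.
    for _, title, link in sorted(entries, reverse=True)[:20]:
        print(title, link)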
I recommend evaluating primarily based on the censorship policies of the instance operator. For example, the list of servers on joinmastodon is restricted to those who are actively engaged in censorship of legal speech (full uncensored instances are not indexed there) so you may be interested in searching for instances not shown there, depending on your attitude toward censorship.
- the instance has grown too big and thus some consider it counter-productive towards the federated nature of the protocol
- disagreement with the direction its main developer / maintainer is taking Mastodon, such as intentionally hiding the local timeline from the official iPhone app
- some consider it under-moderated, or not responding quickly enough to reports
- disagreement over its content moderation guidelines
- in case of a mute, it could also be not wanting their federated timeline to be flooded with primarily mastodon.social posts
Lack of federation between these instances and mastodon.social could be a reason not to pick mastodon.social. (Similar situation applies to mastodon.online btw, which is a spin-off server of m.s.)
Another reason to pick a different instance could be not wanting to use mainline Mastodon software. For example because you want to run your own instance on limited hardware (Mastodon can get a bit resource intensive), don't like Ruby, miss certain features, don't like the front-end (though alternative external front-ends to Mastodon do exist), or some other reason.
Personally I've switched my primary use over to an account on an instance that runs Mastodon Glitch Edition, also known as Glitch-Soc (https://glitch-soc.github.io/docs/), a compatible fork of Mastodon that implements a bunch of nice features: increased post character count (Mastodon defaults to 500 characters per post; Glitch-Soc supports increasing this in the server settings), Markdown support (though only instances that also support HTML-formatted posts will see your formatting; mainline Mastodon servers will serve a stripped-down version of your post instead), improved support for filters / content warnings / toot collapsing, optional warnings when posting uncaptioned media, and other additional features.
Another alternative Mastodon fork is Hometown (https://github.com/hometown-fork/hometown) which focuses more on the local timeline (showing posts only from your own instance) with the addition of local-only posts, to nurture a tighter knit community.
Aside from Mastodon there are other implementations of ActivityPub which can still federate with Mastodon instances, such as:
- Misskey (https://github.com/misskey-dev/misskey)
- diaspora* (https://diasporafoundation.org/) (which AFAIK inspired Google Plus back in the day; note that diaspora* actually speaks its own protocol rather than ActivityPub)
- Hubzilla (https://hubzilla.org//page/hubzilla/hubzilla-project)
- Peertube (https://joinpeertube.org/) (focused on peer-to-peer video distribution)
- Friendica (https://friendi.ca/)
- Pleroma (https://pleroma.social/)
- Socialhome (https://socialhome.network/)
- GoToSocial (https://github.com/superseriousbusiness/gotosocial)
- Pixelfed (https://pixelfed.org/) (which started as a sort of federated Instagram alternative) and more.
Fediverse.party (https://fediverse.party/) is a nice way to discover various protocols that make up the bigger Fediverse.
Instances.Social (https://instances.social/) can also be used as an alternative to find instances, though I believe it is limited to Mastodon-based instances.
I don't understand the system well enough to know if this is a dumb question or not.
The answer is: it'll happen automatically. Just search for someone's handle, and your server will talk to that other server. When you follow that other user, your server will start federating with that other server.
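Concretely, the first step of that handle search is a WebFinger lookup (RFC 7033), which Mastodon and most other fediverse servers implement. A sketch of roughly what your server does under the hood, using Python's requests library:

    import requests

    def resolve_handle(handle):
        """Map @user@domain to the URL of their ActivityPub actor document."""
        user, domain = handle.lstrip("@").split("@")
        r = requests.get(
            f"https://{domain}/.well-known/webfinger",
            params={"resource": f"acct:{user}@{domain}"},
            timeout=10,
        )
        r.raise_for_status()
        # The "self" link points at the actor document the two servers
        # then use to start federating (following, delivering posts).
        for link in r.json().get("links", []):
            if link.get("rel") == "self":
                return link.get("href")

    print(resolve_handle("@Gargron@mastodon.social"))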
Note though that servers might block each other. For example, many Western servers block Japanese pawoo.net, since it allows posting lolicon. Western servers don't want this content in their timelines and caches, so they block it. If your server blocks another.social, you won't be able to follow anyone on there.
But your question also hints at a real problem with Fediverse (of which Mastodon is a part), which is: each instance only sees a subset of the Fediverse. Thus, searching by hashtag will only get you a subset of all posts that contain it. Full-text search is even more complicated.
They are still decentralised; the aim is to populate the federated timeline. Obviously, the visibility is still a subset of the network.
My /etc/resolv.conf currently uses
options attempts:1 timeout:1 max-inflight:1
I am just wondering: why is it such a hard requirement for you to stop other people from following someone who is not part of your server? Costs?
 - https://www.youtube.com/watch?v=MpmGXeAtWUw
I am the crazy bold one of the group and the canary in the coal mine. You won't see any of my friends interacting on this or any other social site. I on the other hand email and make phone calls to everyone at every level of government, C-levels in corporations, investors, military leaders, scientists, influencers, etc...
The best example of how it works would be email. You can set up your own email server, and interact with other independent email servers seamlessly, or just find a provider you trust and get your email access from them.
There's also the option of adding "featured hashtags" to one's profile, allowing a user to search for users of a particular interest.
Along with the "Federated Timeline", which others have mentioned, and your follower's boosting posts (akin to retweeting) I've found it quite easy to find a diverse list of people to follow and interact with.
There's a sort of blocking firewall around mastodon.social and sites broadly on the same 'side' as it, in that all these servers tend to share blocklists. One of the things they'll block a server for is being 'free-speech maximalists'.
But outside of the mastodon.social bubble, there are lots of free speech maximalist fediverse instances that don't block anyone, or block different people.
Pleroma instances tend to be more free-speech oriented (because the technical choice of using Mastodon or Pleroma as your backend became part of a signalling game). I think Pleroma's better software, anyway.
I follow plenty of people on mastodon.social, mastodon.tech, etc., as well as people from a lot of the suspended instances in your list, and I can clearly see/feel two (really more) different 'cultures' in the fediverse.
The GP is more aligned with the second culture, I think- the culture you could label 'free speech maximalist'.
I don't really think mastodon.social should change- you've banned things you don't like, you perceive certain messages as pernicious enough to warrant a ban, and that's fair- you've made a space with a certain tone and flavor, one that's suitable for a certain type of person.
But people are diverse, and so there are plenty that find the culture and tone of mastodon.social inferior to, say, Poast.
I think people with different sensibilities are suited to different spaces, and that it makes sense to point out that some of the spaces on mastodon.social's blocklist have value- maybe not to the median member on mastodon.social, but to people not of your culture.
To be frank, you're progressives. There's nothing wrong with that! Some of my best friends are progressives! But it's a lens that colors how you view the world, and what's ban-worthy. Again, nothing wrong with making a space that conforms to your sensibilities- but I wanted to make it clear to the GP that there are plenty of instances that don't have the same sociopolitical 'flavor' as mastodon.social, and that mastodon.social sits at the graph-centre of a particular subset of the federated network that is of similar flavor.
Here are the requirements for us to promote your server:
The only hard requirement related to content is that racism, sexism, homophobia and transphobia not be allowed. There is no requirement to peer with anyone or to have any specific political stance. If your political stance or perspective requires you to dehumanize people of a different race or sexual orientation, then yes, you are not welcome.
I don't know anyone who's a self-declared racist, sexist, homophobe or transphobe.
But these things mean different things to different people.
So let's say JK Rowling wants to join. Would she be welcome?
What about Glenn Greenwald?
Or Andy Ngô?
And yes, it's worse than misinformation.
But I suppose it will depend on the circumstances, and I'd honestly be interested to hear your thoughts on why censorship is worse.
As for the inevitability of abuse? When it comes to corporate interests, that seems to be nearly axiomatic. The Verge's list of fascinating & horrifying exchanges at Apple about app approvals & secret deals makes for a great case study in this.
If gamma rays randomly excluded one post in a thousand, that would be missing data. Censors excluding one post in ten thousand is worrying because they have motivations of their own, which gamma rays do not.
Both exist. But the larger effort is put into distraction.
The recent Russian model is more on bullshit and subverting notions of trust entirely.
American propaganda seems largely based on a) what sells and b) promoting platitudes, wishful thinking, and c) (at least historically) heart-warming (rather than overtly divisive) notions of nationalism.
The c) case is now trending more toward the divisive than the heart-warming.
Yes, censorship and propaganda go hand in hand. In 1922 Walter Lippmann wrote in his seminal work, Public Opinion,
> Without some form of censorship, propaganda in the strict sense of the word is impossible. In order to conduct a propaganda there must be some barrier between the public and the event.  
Both are also tied inherently to monopoly, along with surveillance and both general and targeted manipulation.
This is 'some bad data' vs 'systemically biased data' and the latter is much worse. Most datasets will contain some bad data but it can be worked around because the errors are random.
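A toy simulation makes the difference concrete: random errors wash out as the dataset grows, while a systematic bias survives any amount of data. (Standard library only; the numbers are made up.)

    import random

    truth = 100.0
    n = 10_000
    random.seed(1)

    noisy  = [truth + random.gauss(0, 10) for _ in range(n)]  # random errors
    biased = [truth + random.gauss(5, 10) for _ in range(n)]  # systematic +5 shift

    print(sum(noisy) / n)   # ~100: noise averages away
    print(sum(biased) / n)  # ~105: more data never fixes the bias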
A statement of "I don't know' clearly indicates a lack of knowledge.
A statemnt of "I have no opinion" clearly indicates that the speaker has not formed an opinion.
In each case, a spurious generated response:
1. Is generally accepted as prima facie evidence of what it purports.
2. Must be specifically analysed and assessed.
3. Is itself subject to repetition and/or amplification. With empirical evidence suggesting that falsehoods outcompete truths, particularly on large networks operating at flows which overload rational assessment.
4. Competes for attention with other information, including the no-signal case specifically, which does very poorly against false claims as it is literally nothing competing against an often very loud something.
Yes: bad data is much, much, much, much worse than no data.
Outlier exclusion is standard practice.
It's useful to note what is excluded. But you exclude bad data from the analysis.
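One mechanical version of that practice, for the record, is the common 1.5 x IQR rule: exclude the tails, but keep a record of what was dropped rather than silently discarding it. (A sketch; the data is made up.)

    import statistics

    def split_outliers(xs):
        q1, _, q3 = statistics.quantiles(xs, n=4)
        iqr = q3 - q1
        lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
        kept     = [x for x in xs if lo <= x <= hi]
        excluded = [x for x in xs if x < lo or x > hi]
        return kept, excluded  # analyse `kept`, report `excluded`

    kept, excluded = split_outliers([9.8, 10.1, 10.0, 9.9, 10.2, 57.3])
    print(kept, excluded)  # the 57.3 reading gets noted and set aside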
Remember that what you're interested in is not the data but the ground truth that the data represent. This means that the full transmission chain must be reliable and its integrity assured: phenomenon, generated signal, transmission channel, receiver, sensor, interpretation, and recording.
Noise may enter at any point. And that noise has ... exceedingly little value.
Deliberately inserted noise is one of the most effective ways to thwart an accurate assessment of ground truths.
1) You can have an empty dataset.
2) You can have an incomplete dataset.
3) You can have a dataset where the data is wrong.
All of these situations, in some sense, are "bad".
What I'm saying is that, going into a situation, my preference would be #2 > #1 > #3.
Because I always assume a dataset could be incomplete, that it didn't capture everything. I can plan for it, look for evidence that something is missing, try to find it. If I suspect something is missing but can't find it, then I at least know that much, and maybe even the magnitude of uncertainty that adds to the situation. Either way, I can work around it, understanding the limits of what I'm doing, or, if there's too much missing, make a judgement call and say that nothing useful can be done with it.
If I have what appears to be a dataset that I can work with, but the data is all incorrect, I may never even know it until things start to break or, before that if I'm lucky, I waste large amounts of time to find out that the results just don't make sense.
It's probably important to note that #2 and #3 are also not mutually exclusive. Getting out of the dry world of data analysis, if your job is propaganda & if you're good at your job, #2 and #3 combined is where you're at.
It's a scientist who removes outliers in the direction that refutes his ideas, but not those in the direction that supports them.
These aren't entirely dissimilar, but they have both similarities and differences.
Data in research is used to confirm or deny models, that is, understandings of the world.
Data in operations is used to determine and shape actions (including possibly inaction), interacting with an environment.
Information in media ... shares some of this, but is more complex in that it both creates (or disproves) models, and has a very extensive behavioural component involving both individual and group psychology and sociology.
Media platform moderation plays several roles. In part, it's performed in the context that the platforms are performing their own selection and amplification, and that there's now experimental evidence that even in the absence of any induced bias, disinformation tends to spread especially in large and active social networks.
(See "Information Overload Helps Fake News Spread, and Social Media Knows It".
(https://www.scientificamerican.com/article/information-overl...), discussed here https://news.ycombinator.com/item?id=28495912 and https://news.ycombinator.com/item?id=25153716)
The situation is made worse when there's both intrinsic tooling of the system to boost sensationalism (a/k/a "high engagement" content), and deliberate introduction of false or provocative information.
TL;DR: moderation has to compensate and overcome inherent biases for misinformation, and take into consideration both causal and resultant behaviours and effects. At the same time, moderation itself is subject to many of the same biases that the information network as a whole is (false and inflammatory reports tend to draw more reports and quicker actions), as well as spurious error rates (as I've described at length above).
All of which is to say that I don't find your own allegation of an intentional bias, offered without evidence or argument, credible.
Well, it's rare that I know of. The nature of things is that I might never know. But most people that don't work with data as a profession also don't know how to create convincingly fake data, or even cherry pick without leaving the holes obvious. Saying "Yeah, so I actually need all of the data" isn't too uncommon. Most of the time it's not even deliberate, people just don't understand that their definition of "relevant data" isn't applicable. Especially when I'm using it to diagnose a problem with their organization/department/etc.
Propaganda... Well, as you said, there's some overlap in the principles. Though I still stand by my preference of #2 > #1 > #3. And #3 > #2 & #3 together.
I show some aggregated moderation history on reveddit.com, e.g. r/worldnews. Since moderators can remove things without users knowing, there is little oversight, and bias naturally grows. I think there is less bias when users can more easily review the moderation. And there is research suggesting that if moderators provide removal explanations, it reduces the likelihood of that user having a post removed in the future. Such research may have encouraged reddit to display post removal details, with some exceptions. As far as I know, such research has not yet been published on comment removals.
I've worked with scientific, engineering, survey, business, medical, financial, government, internet ("web traffic" and equivalents), and behavioural data (e.g., measured experiences / behavour, not self-reported). Each has ... its interesting quirks.
Self-reported survey data is notoriously bad, and there's a huge set of tricks and assumptions that are used to scrub that. Those insisting on "uncensored" data would likely scream.
(TL;DR: multiple views on the same underlying phenomenon help a lot --- not necessarily from the same source. Some will lie, but they'll tend to lie differently and in somewhat predictable ways.)
Engineering and science data tend to suffer from pre-measurement assumptions (e.g., what you instrumented for vs. what you got). "Not great. Not terrible" from the series Chernobyl is a brilliant example of this (the instruments simply couldn't read the actual amount of radiation).
In online data, distinguishing "authentic" from all other traffic (users vs. bots) is the challenge. And that involves numerous dark arts.
Financial data tends to have strong incentives to provide something, but also a strong incentive to game the system.
I've seen field data where the interests of the field reporters outweighed the subsequent interest of analysts, resulting in wonderfully-specified databases with very little useful data.
Experiential data are great, but you're limited, again, to what you can quantify and measure (as well has having major privacy and surveillance concerns, often other ethical considerations).
Government data are often quite excellent, at least within competent organisations. For some flavour of just how widely standards can vary, though, look at reports of Covid cases, hospitalisations, recoveries, and deaths from different jurisdictions. Some measures (especially excess deaths) are far more robust, though they also lag considerably from direct experience. (Cost, lag, number of datapoints, sampling concerns, etc., all become considerations.)
>Self-reported survey data is notoriously bad
This is my least favorite type of data to work with. It can be incorrect either deliberately or through poor survey design. When I have to work with surveys I insist that they tell me what they want to know, and I design the survey. Sometimes people come to me when they already have survey results, and sometimes I have to tell them there's nothing reliable I can do with them. When I'm involved from the beginning, I have final veto. Even then I don't like it. Even a well-designed survey with proper phrasing, unbiased Likert scales, etc. can have issues. Many things don't collapse nicely to a one-dimensional scale. Then there is the selection bias inherent when, by definition, you only receive responses from people willing to fill out the survey. There are ways to deal with that, but they're far from perfect.
A: "I've conducted a survey and need a statistician to analyse it for me."
(I've seen this many, many, many times. I've never seen it not be the sign of a completely flawed approach.)
As the saying goes, it's not what you don't know that gets you into trouble. It's what you know for sure that just ain't so.
You may be ignorant, but you know it, and can deal with it. Let's call it starting from 0.
When you have bad data, you frequently don't know that you have bad data until things go very very wrong. You aren't starting from 0. 0 would be an improvement.
In the known-knowns model, you have knowledge and metaknowledge (what you know, what you know you know):
                 What you know
                  K      U
   What you   K   KK     KU
   know you   U   UK     UU
   know

Crossing those four states against truth and belief of truth:

          TT     TF     FT     FF    (truth & belief of truth)
         ----   ----   ----   ----
    KK | KKTT   KKTF   KKFT   KKFF
    KU | KUTT   KUTF   KUFT   KUFF
    UK | UKTT   UKTF   UKFT   UKFF
    UU | UUTT   UUTF   UUFT   UUFF
In both the TF and FT columns, belief of the truth-value of data is incorrect.
In both the KU and UU rows, there is a lack of knowledge (e.g., ignorance), either known or unknown.
(I'm still thinking through what the implications of this are. Mapping it out helps structure the situation.)
Whereas censorship is lindy among things that have bad effects on society.
So give the most caution against the proven bad thing and not the one you're in a trendy moral panic about.
I'm going to adopt this style of argument from now on.
"Oh, you think that X is a big problem? Well, it isn't, because you have problem X, and only think that way because of it! It's your cognitive distortions talking! Zing!"
On a similar note, I somehow doubt if people broke through the doors to enter your home, assaulted people trying to protect it, yelled about how they want you dead, and then took some of your stuff you'd be calling it an "unguided tour".
1. Don't say anything because my neighbour tapes your mouth shut
2. Lie and say, "They were brutally murdered by your neighbour", resulting in a dead neighbour followed by my kids showing up unharmed from school
...can you explain, in this scenario, how censorship is worse than misinformation?
I'm not trying to be a jerk. I hear your argument a lot (especially on tech-heavy web sites) and I want to understand it.
Concretely, to your hypothetical: don't attribute to misinformation what is really down to your own barbaric reaction. Not to say that the liar should not be punished; they should bear a big responsibility for the consequences of the actions. But at the end of the day it was not the liar who killed your neighbor, it was you.
It's not as if folk AREN'T acting on misinformation or showing that they aren't really capable of distinguishing between the two. Tons can. And tons won't realize that The Boston Tribune isn't real.
We're having to deal with almost literally shouting "fire!" in a crowded theater when there's no fire, only there's special effects and major campaigns to convince people there's fire, not just taking some guy at their word and stampeding because of it.
If I am the father of the missing children and I see the "family and friends" sharing their condolences, I would go talk to them first. If someone comes with pictures trying to accuse someone of something, no matter how shocking the accusations, there would still be the question of (a) why is someone bothering with taking pictures and not taking them to the authorities beforehand, and (b) what are the consequences for me if I go on a rampage based on bogus evidence.
To get a little bit on topic: the reason that censorship is worse than misinformation is that we should always operate on the premise that our information is incomplete, inaccurate or distorted by those controlling the information channels.
Without censorship, I can listen to different sources (no matter how crazy or unsound they are) and I can try to discern what makes sense and does not. With censorship, any dissent is silenced, so we get one source of information - who can never get questioned - or worse we get to see many sources of information but only the ones that are aligned with the censors and gives us a false consensus and the illusion of quality in information.
Only idiots can walk around in the world of today and confidently repeat whatever they hear from "official" sources as unquestionable truths.
The extremes of my example were only to show that there could be real and serious consequences from misinformation rather than silence. If we dial it back from "killing my neighbour" to "lost my job" or even "missed my bus", I believe my point still stands. In many scenarios that we experience every day, we would be better served by accepting censorship over misinformation.
You claim "we should always operate on the premise that our information is incomplete, inaccurate or distorted by those controlling the information channels" and I agree with you in theory. But in practice this is impossible. The human brain is physically unable to work everything through from first principles. This makes sense conceptually and has been verified in research.
And this to me is the fundamental issue of our time:
In theory, social media and unrestrained free speech are a boon for all society.
In practice they have turned people against each other with very real and serious consequences.
No. Not at all. I refuse your premise. Not only are you begging the question here (what scenarios? Your example was terrible and I really don't think you can come up with a good one), I honestly worry more about those who believe this rhetoric than about the "victims" of misinformation.
Also, it's curious how those that so easily accept censorship never think that they will eventually be on the wrong side of the taser gun.
> I agree with you in theory. But in practice this is impossible. The human brain is physically unable to work everything through from first principles.
Good thing then that this is NOT WHAT I AM SAYING.
There is no need to "work through things from first principles". The idea is NOT to determine a priori what is "right" or "safe" and then make a binary decision. The base idea is to decide on what action to take (or to refuse to take) by asking yourself: what is the worst possible thing that can happen if the information I have is wrong? What are the odds of me being wrong?
I'd suggest you get acquainted with Nassim Taleb and Joe Norman to understand better how to deal with complexity and uncertainty.
> In practice they have turned people against each other with very real and serious consequences.
Bullshit. There was no Facebook during the time of the Crusades. There was no Twitter during the Cold War, and no smartphones during WW1 and WW2. None of those would have been avoided if only we could have censored wrongthink.
On the other hand, THERE ARE video records of Tiananmen Square that have been successfully hidden from an entire country for an entire generation.
(Sorry for the harsh language, but I start reading any kind of censorship-apologetic and fighting instincts kick in. If you don't see how much of a sign of being morally bankrupt it is to casually defend the hellish things like state-sponsored censorship, I see no point in continuing the "debate")
To think that is okay to have one all-too-powerful entity controlling information channels is stepping into fascism and totalitarianism. This is a lesson that we should have learned already: no possible good comes out of that.
"If there be time to expose through discussion, the falsehoods and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence"
However, I think it's also important to recognize that in today's algorithmically driven content presentation, "more speech" is often comically ineffective because it is never consumed in the emergent content bubbles that silo people from contradictory information. Not to mention the fact that misinformation that confirms your preconceptions is a much more powerful influence than actual information that contradicts them. Given this, an important caveat embedded in the above quote is: "If there be time". A recognition of the fact that, in some circumstances, there will not be an opportunity for more speech to prevail.
I don't have a solution to this. There may be no good solution to this, except lesser degrees of bad solutions.
Alternatively, instead of removing one's profile picture, one could replace it by the Mastodon logo to make a statement.
Also, interviewing for a job at Facebook has been one of the worst job interview experiences I have ever had.
To interview for Facebook, study B-trees like crazy (no, the recruiters did not warn me about this).
Also, after the interview, the recruiters at Facebook I was in contact with completely ghosted me. Very rude.
On the other hand, the opaque/nonexistent review and appeal process is sleazy and YouTubesque.
Same thing happens on youtube, twitter, etc.
I've been using addons and rss feeds to go back to time/date ordered feeds, so I don't miss things I want to follow.
You can use RSS feeds to go around soft censorship, using apps like IFTTT, etc. Pockettube addon for youtube. etc.
The whole hacker/programmer community is/was about freeing information, never trusting governments or monopolies. Culture sure has changed.
Not some conspiracy, just incompetence on FB's part. Of course some people would prefer to believe something nefarious.
This said, I remember when many MAGA friends announced they were leaving for Parler or MeWe or Gab and I never heard any of them claim their posts were removed (the ones who didn't leave right away).
If he posted it to a group (i.e., his college alumni group, or a sports fan forum, or a gamers group), no doubt several people tagged it as spam--which it may have been--and the algorithm kicked in.
There's no Big Conspiracy at Facebook to keep Mastodon down.
* What aspects are currently priority for improvements?
* What are lower priority problems or features that could be started now and worked on at a slower pace?
* Where is a good place for someone with programming, networking, and/or engineering skills to start getting involved with development?
* What can someone with little-to-no programming/networking or related "technical" skills do to further the development and uptake of decentralized social media?
* Are there any suggestions for good reading on this topic, both technical and non-technical? Websites, books, people/groups to follow?
I really want to see better authorization & authentication features across different instances. Right now, a really high priority feature for users is "Disable replies", but the way replies work, anyone can construct an activity that is set as "replying" to whatever posts they want to reply to, just by linking to those posts. Figuring out some way to "authorize" those replies (we have a few ideas, but need to work out a lot of the details) is important for us. Additionally, we've been thinking for a while about implementing more group-focused experiences, something kind of like old LiveJournal comms or the new Twitter Communities, and now that there are a few different projects looking into similar things, and we think it's an idea whose time has come. And of course, improving on-boarding and general user experience are always at the top of our priority list.
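To make the "anyone can construct a reply" problem concrete: nothing in vanilla ActivityPub stops a stranger's server from minting a Note whose inReplyTo names any post at all. All identifiers below are hypothetical:

    # A perfectly well-formed ActivityPub Note "replying" to a post
    # whose author never invited replies:
    reply = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Note",
        "id": "https://stranger.example/notes/123",
        "attributedTo": "https://stranger.example/actor",
        "inReplyTo": "https://your.instance/users/you/statuses/456",  # any URL
        "content": "unwanted reply",
    }

So "disable replies" can't be just a client-side flag; it needs some scheme for the original server to approve or reject attached replies, which is the authorization work described above.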
Interoperable clients. When the ActivityPub network was first envisioned, the idea was that servers would be completely generic, like email servers, and users could connect multiple different, opinionated "clients" to get different UI experiences of the same inbox. However, most of the current fediverse projects implement only the server-to-server federation portion, and use more standard, domain-specific REST APIs for client communication. I think that's still a big missed opportunity.
It depends on your inclination! My perspective is that you should always write code that you know you yourself are going to use, because that's the best way to ensure that you're going to stick to it long term.
As a more practical suggestion, https://blog.joinmastodon.org/2018/06/how-to-implement-a-bas... and https://blog.joinmastodon.org/2018/07/how-to-make-friends-an... are still the two best tutorials out there on how to implement the basic ActivityPub protocol.
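To give a flavour of what those tutorials walk you through: the central artifact is an "actor" document served as JSON from your domain. Roughly its shape, written here as a Python dict (the domain, username, and key are placeholders):

    actor = {
        "@context": [
            "https://www.w3.org/ns/activitystreams",
            "https://w3id.org/security/v1",
        ],
        "id": "https://example.com/actor",      # served as application/activity+json
        "type": "Person",
        "preferredUsername": "alice",
        "inbox": "https://example.com/inbox",   # where other servers POST activities
        "publicKey": {                          # used to verify signed requests
            "id": "https://example.com/actor#main-key",
            "owner": "https://example.com/actor",
            "publicKeyPem": "-----BEGIN PUBLIC KEY-----...",  # elided placeholder
        },
    }

Pair that with a WebFinger endpoint and an inbox that verifies HTTP signatures, and you can already be followed from Mastodon, which is roughly what the second tutorial demonstrates.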
Use it! Invite your friends to use it. There are lots of non-programming technical skills that are always in demand for these types of projects—UX, design, product management, support, fundraising, comms—but besides those, the biggest way you can support decentralized social media is simply by using it! The more people who are part of the community, the more vibrant, stable, and welcoming it's going to be for new members.
There's a lot of good writing out there, but it's hard to recommend anything that I would regard as really authoritative and summing things up. I think we're kind of in a place where we need fewer people writing about possible futures, and more people building them. As a comparison, you can write all you want about possible startups people could make, but the thing that's really valuable is going out there and trying them. Execution, as always, is 99% of the game.
I thought that the author of this post knew this given the mass de-platforming going on throughout the years.
This shows once again that it can happen to anyone. Facebook and the rest of them will never change.
Even if it's toward an open-source federated network that has no head and can host marginalized content.
The implementation here seems to be an anticompetitive practice, which is sanctionable by governments in the US
https://dxe.pubpub.org/pub/dreamadvertising/release/1 ("Advertising in Dreams is Coming: Now What?")
I was notified on Sept 4 at 6:40pm.
Clicking on a link in my post to joinmastodon.org notifies you that the link goes against their community standards.
In late 2015, WhatsApp, which was acquired by Facebook in 2014, was caught with its pants down intentionally crippling functionality when it detected links to Telegram.
It seems more like it was a false positive of some moderation system that triggered because the post sounded too much like an advertisement.
That said: the company sees 2-3 billion MAU, and on the order of 5 billion pieces of content submitted per day.
As best I understand, their measure of exposure is not items but "prevalence", that is, the total number of presentations of a particular piece of content. Long-standing empirical media evidence suggests that this follows a power curve, where the number of impressions is inverse to the number of items. So, say, 1 item might see 1 million impressions, 10 items: 100k, 100: 10k, 1,000: 1k, etc.
This means that a service can budget and staff for either the minimum prevalence threshold before manual review, or the total number of items granted more than some maximum unreviewed threshold. Machine-assisted filtering can help. In either case, though, mistakes will happen, and at 5 billion items/day, the number of misclassifications even at very high accuracy is large:
- 1%: 50m/dy
- 0.1%: 5m/dy
- 0.01%: 500k/dy
- 0.001%: 50k/dy
... which necessitates secondary review and additional costs, as well as, of course, malicious appeals by bad-faith actors. If the filtering system is fed by user reports (flags and the like), then malicious or simply disagreement-based flags may well trigger moderation. (Crowdsourcing has its own profound limits.)
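(A quick sanity check of the arithmetic above:

    ITEMS_PER_DAY = 5_000_000_000  # the 5 billion items/day figure, as above

    for err in (0.01, 0.001, 0.0001, 0.00001):
        print(f"{err:.3%} error rate -> {int(ITEMS_PER_DAY * err):,} misclassified/day")

which reproduces the 50m / 5m / 500k / 50k per day ladder.)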
Another element is that, especially with AI-based filtering systems, what results is determination without explanation. We know that a specific item was rejected, but not why. And in all likelihood, FB and its engineers cannot determine the specific reason either.
(I've encountered this situation more often from Google, again, as I don't use FB, but the underlying mechanics of AI-based decision systems are the same between such systems.)
The upshot though is:
- Moderation is necessary.
- It's ultimately capricious and error-prone. There are initiatives and proposals for greater transparency and appeals.
- Cause-determination is ... usually ... poorly founded.
0. https://toot.cat/@dredmorbius Also Diaspora (see below).
1. Monthly active users. https://investor.fb.com/investor-news/press-release-details/...
2. See Guy Rosen, VP of Integrity for both content and prevalence references: https://nitter.kavin.rocks/guyro/status/1337493574246535168?... I've written more on the topic here: https://joindiaspora.com/posts/f3617c90793101396840002590d8e...
FB, I’m for hire! Plenty of experience spin-doctoring/downplaying incidents for PR.
However, the underlying idea that Facebook would block links to competitors is historically valid. As recently as 2016, Facebook blocked links to competing networks from Instagram (https://www.theverge.com/2016/3/3/11157124/instagram-blocks-...), and leaked internal emails from Facebook have shown that the company has an extremely broad view of what does and doesn't count as a competitor (https://panatimes.com/facebook-bought-instagram-to-neutraliz...). The company is extremely anti-competitive, it's not shy about this, and internal emails show that this anti-competitive attitude is entrenched very deeply and very consciously within upper management.
I think taking down this specific post is very unlikely to have been deliberate because:
A) Mastodon is likely not a large enough service to warrant it, and because
B) The explanation based on Facebook's AI being weird, opaque, and generally untested is a much cleaner, simpler explanation that requires fewer jumps in logic.
But it would be completely in character for Facebook to target a real competitor in this way. The reason it's unlikely to be deliberate is not that Facebook would never do something like this, and not that Facebook would be too frightened of regulators to do it so openly; Facebook has very openly done stuff like this in the past. It's that there are other explanations that are more likely, and that if Facebook were going to start doing this, Mastodon probably wouldn't be among the first competitors it would target. I'd need a lot more evidence that this was deliberate before I abandon the (extremely compelling) explanation that automated moderation is really buggy across the board and regularly does unexpected things.
The article comes off as a little uncurious to me, I feel like the author is jumping too quickly to a specific conclusion without a lot of critical thought. But part of why Facebook has these problems with people jumping to conclusions about how it tracks and moderates is because Facebook has a very real history of being openly corrupt in these areas, and Facebook has a real history of being deceptive about their motivations behind decision-making processes. The reputation hasn't come out of nowhere.
But that's probably a much deeper, longer conversation to have. I do believe that Facebook regularly uses the poor performance of its moderation algorithms at scale as a shield against public scrutiny, and as a way to occasionally influence public policy.
This example may not have been malicious, but it is a stark reminder that you are allowing them to see, and control, your communication. That is something that I would prefer not to occur.
Read literally a single history book, people.
FB can ban you because you like to eat broccoli, or for any reason whatsoever.
Many people supported this idea during Trump's ban. So you will just need to suck it up.
Seems odd to think that folk have to support a generic action rather than how that action is done. Like, there are people who like baseball but would probably be a bit upset if you randomly threw a ball at them at 90mph in the middle of the street despite them being really supportive of it in a different context.
This is false.
What is true is that in some limited cases and with a lot of caveats, you cannot be fired for your political affiliation in California, assuming that affiliation is expressed outside of work, etc. That's it. Please do not try to stretch that bit of weak labor law to protecting posts on TWTR because they are headquartered in CA.
When you tap, then tap Learn More, it just sends you to this page
"We have fewer reviewers available right now because of the coronavirus (COVID-19) outbreak..."
That smells fishy. Seems like a job that would be a really good fit for work-from-home. Wouldn't you then have more reviewers available?
Or maybe the exposure to graphic content means they do this in the office?
edit: https://www.theverge.com/2019/2/25/18229714/cognizant-facebo... it was probably this one
Until very recently, Google Play also had a similar notice without mentioning COVID when an app was in for review.
Surprisingly enough I've had more "our response to covid-19" and similar crap from tech companies that would be near-immune to it than from companies that would legitimately be impacted by it (those whose business requires on-site staff, etc).
> Please disable your ad blocker and reload the page.
I disable adblock and reload.
How is “they’re a private company, make your own” still being used as an argument when the situation is obviously beyond that? We have conclusive evidence from FAANG and governments that they work together.
You practically CAN’T make your own Facebook. Facebook will stop you one way or another. Google, which has a dollar or two, tried and failed spectacularly. Do you know how much better an organic startup would need to be to rival Google’s day-one investment in Plus?
Google+ was successful and Google shut it down for Google reasons.
If they gave away Google+ instead I can't think of anyone who wouldn't gladly take it off of their hands.
You can make your own Facebook, and Facebook will not stop you. But people don't want another Facebook; many are realizing they probably want off of Facebook, so replacing it with something similar isn't helpful. What Facebook offers (the network effect) is the main value of the platform, and replicating that is virtually impossible, never mind in the same form as Facebook.
Having Google+'s code day one would mean little without the users.
Is this really news? Isn't this just business as usual in corporate America?
The level of delusion and entitlement of some people is simply too hard for me to understand.
If it was a free service by a mom and pop shop with "use at your own risk" in the agreement, then yes, it would be entitlement.
However, there exist people, for whom 90% of their communication happens via Facebook or social media. And it's not even by choice, kids are born into it being the status quo, and if 100% of your friends are using it while you're growing up, chances that you won't use it too are slim to none.
Thus, the company needs to hold responsibility for providing open communication. Censoring posts about their competitors goes against that.
Note: I don’t use Facebook so perhaps I’m missing something.
One of the most amusing things about actually working in one of these companies is just seeing how confidently wrong some internet commenters are about what is actually happening when an article or outage happens.
That way people can understand why their post was removed without having to speculate.
Maybe filtering changes should be rolled out slowly at first, with every customer complaint analyzed, to catch these bugs you speak of before they are widespread, frustrating and look obviously suspicious.
Maybe the big powerful corporation should hire staff in proportion to their mistakes, instead of blaming a pandemic for its record profits, er ..., I mean lack of interest in finding ethical solutions to problems.
Maybe if the company made good-faith explanations of mistakes, and actually fixed them, instead of letting them fester or continually playing hide-and-seek with information, speculation would not be a necessity.
Your attitude about your company's customers is equally disappointing.
Not informing customers, then being amused that they are uninformed (and some invariably speculate), is not a solution to anything.
Am I not allowed to find amusement in dumb people? Then maybe use that as a reminder to relax when you don't have all the information.
You are calling people who are left uninformed dumb because information that is relevant to them is being withheld and they don't have inside information or corporate experience like you.
People work with the information and experiences they have. If a lack of information leads to wild misunderstandings, that is why corporations should communicate better.
I'm not smarter (or maybe I am), but in some cases I am more informed.