Message franking is a technique Facebook uses in its own E2EE chats that allows it to trust user reports about E2EE messages; without it, someone could claim they received something abusive and there would be no way to verify the claim. Believe it or not, "just block the sender" is not a sufficient solution: it works for the individual, but when spammers are allowed to run rampant and thousands of users regularly receive spam from different senders, the quality of the user experience for the whole platform drops significantly.
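At its core, franking is a cryptographic commitment to the plaintext. A minimal Python sketch of the commit/verify idea (function names and the bare-HMAC construction are illustrative; the real scheme is more involved and binds the commitment to the ciphertext):

```python
import hmac
import hashlib
import os

def frank(message: bytes) -> tuple[bytes, bytes]:
    # Sender commits to the plaintext with a fresh franking key.
    # The tag travels outside the ciphertext; the key travels inside it.
    key = os.urandom(32)
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return key, tag

def verify_report(message: bytes, key: bytes, tag: bytes) -> bool:
    # Platform side: on an abuse report, the recipient reveals the
    # plaintext and franking key, and the platform rechecks the commitment.
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

key, tag = frank(b"abusive message")
assert verify_report(b"abusive message", key, tag)    # genuine report
assert not verify_report(b"doctored text", key, tag)  # fabricated report
```

The platform never sees the plaintext in transit; it only learns it if the recipient chooses to report, and the commitment stops the recipient from fabricating a different message.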
>For once am not happy about Mastodon being on the HN frontpage. Someone submitted a work-in-progress pull request there. Half the comments are circlejerking about free speech as always, the other half gives opinions without understanding the context of the feature (which is not surprising given that it's a work-in-progress pull request on GitHub and not a goddamn press release).
I feel that Mr. Gargron has been his own worst enemy at times. The mastodon.social instance is notoriously bad: overmoderated and often downright abusive.
While Mastodon itself is in many ways a brilliant project with a great dev team and direction, it's just that they seem somewhat socially inept/extreme at times.
Gargron has shown that they do not care much for the community or the fediverse at large.
Pleroma is also cleaner code-wise, has more active development, and is lightweight relative to Mastodon, so there's that.
not paid by Pleroma btw, I just use it daily and love it wayyy more than Mastodon
Pleroma does what Mastodon’t
Looking at the linked issue, it's for PMs: https://github.com/tootsuite/mastodon/issues/1093
I'm all for "many eyes make bugs shallow", but this is somewhat-widely deployed software, and I have approximately 0% confidence (based on my previous experiences with Mastodon releases and code quality) that this will be strong and safe for its first public release. (Happy to be proven wrong, mind you.)
Perhaps this could be done in an official testing fork, and merged back in when actual cryptographers are more confident about it?
The idea of shipping this in the standard Mastodon release cycle is terrifying, and I really hope they don't intend to do that.
Ultimately, from a design perspective, I'd much rather see ActivityPub implementations support good profile deep linking to existing (read: safe) messengers rather than trying to graft e2e onto a federated messaging protocol that happens to support DMs. Do one thing and do it well, and all that. (Also: backwards-compatibility downgrade attacks, anyone?) We all know how well previous attempts at e2e encryption of federated protocols went (spoiler: they didn't).
The modern day version of Zawinski's Law of Software Envelopment seems to be that apps will always attempt to expand until they can send and receive DMs. The consequence of this should not be that every app bundles key generation, key encipherment, key backup, secure key distribution, federated key authentication, and a message cryptosystem simply to support e2e DMs. That's (dangerous) madness.
And the default is to mash both of them together and make it public. Unsurprisingly, it's a source of toxicity and needs intensive moderation, because a broadcasted address is mostly employed in a narrative sense, with the person at the other end reduced to a character in the story. A timeline creates a space, but in a shared timeline, whether it's a Twitter hashtag or a comments section on a small blog, the space is made by spamming your narrative more often.
With a decentralized, privacy-enabled solution like ActivityPub, there are many tools to reshape the extent of the narrative so that you always own your own space, but the tools themselves are quite complex and pressure our engineering and UX capabilities.
And yet - broadcast by itself is not hard, if done in pull-orientation like RSS. And secure messaging is challenging but mostly solved. I have some unfinished thoughts that perhaps simpler is possible by changing the system's orientation further, because I don't think the current designs are quite it.
Why isn't this solved fairly easily via a digest metaphor? One can imagine many different implementations, but something as simple as "your recent updates appear in a group and that group can't be bumped to the top of people's feeds more than twice a day" already seems better than the barrage of puke hiccups that is Twitter today.
The main benefit of integrated messaging for most users is leveraging the network effects that come from having a built-in way to message others. This goes away entirely if you have to have every user download a different app and link it and hope that the people they want to talk to will do the same. And the fact of the matter is that "many apps that each do one thing well" is only a good user experience for hackers who can spend a lot of time learning and understanding the quirks of many individual apps, or who already have preferences on which apps do what best.
> and I have approximately 0% confidence (based on my previous experiences with Mastodon releases and code quality)
Could you elaborate on what these problems were? We've had security bugs in the past, but not many more than any other large, complicated app, and to my knowledge we've always fixed them very quickly after they were discovered. We also have a pretty good track record in encouraging adoption of critical security fixes across a large and diverse ecosystem.
This is something myself and a few others are interested in pursuing.
Mastodon even has a simple key-value table that any profile can enable and fill with details such as "website: example.com", "twitter: @jack" and so on.
This feature already has "verifications". Meaning that you can add proof that example.com, keybase etc are really yours.
This could easily be expanded to verify ownership of SMS, or messenger apps. Provided those apps have some form or authentication/proof in place.
I have this on our roadmap for our "fediverse linkedin" project (another story for another time) so would gladly offer help here; keybase with my contact details in my hn profile.
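For reference, Mastodon's existing link verification works by checking that the claimed page links back to the profile with a rel="me" link. A rough sketch of that backlink check (class and function names are mine; real verification also fetches the page over HTTPS and normalizes URLs):

```python
from html.parser import HTMLParser

class RelMeFinder(HTMLParser):
    # Collects href targets of <a>/<link> tags that carry rel="me".
    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        rel = (a.get("rel") or "").split()
        if tag in ("a", "link") and "me" in rel:
            self.links.add(a.get("href"))

def backlink_verifies(page_html: str, profile_url: str) -> bool:
    # True if the claimed page links back to the profile with rel="me".
    finder = RelMeFinder()
    finder.feed(page_html)
    return profile_url in finder.links

page = '<a rel="me" href="https://example.social/@alice">fediverse</a>'
assert backlink_verifies(page, "https://example.social/@alice")
assert not backlink_verifies(page, "https://example.social/@mallory")
```

Extending this to messenger apps would need each app to expose some equivalent machine-readable proof, as the comment above notes.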
It would be easy after a refactoring. The code handling this in Mastodon is not ugly per se, but rather unfortunate: tightly coupled to the God-model "User" (which seems to happen to every Rails project at some point) and spread over some JSON store, a model, and unrelated controllers.
I'm more interested in exploring getting the entire AP ecosystem be encrypted by default, using Chris' work.
I don't think I care that much about profile deep-linking to other apps. I would rather obsolete that need.
You have forgotten your origins, James, and it bums me out. Email sucks. Requiring email to join a site sucks. Let's cite a popular essay from our old community:
* Registration keeps out good posters. Imagine someone with an involving job related to your forum comes across it. This person is an expert in her field, and therefore would be a great source of knowledge for your forum; but if a registration, complete with e-mail and password, is necessary before posting, she might just give up on posting and do something more important. People with lives will tend to ignore forums with a registration process.
* Registration lets in bad posters. On the other hand, people with no lives will thrive on your forum. Children and Internet addicts tend to have free time to go register an account and check their e-mail for the confirmation message. They will generally make your forum a waste of bandwidth.
* Registration attracts trolls. If someone is interested in destroying a forum, a registration process only adds to the excitement of a challenge. One might argue that a lack of registration will just let "anyone" post, but in reality anyone can post on old-type forum software; registration is merely a useless hassle.
But I also think asking for at least an email is not that unreasonable. Sure, I value my privacy, but I also value a good community and keeping the worst spammers out often helps. Talking smaller instances here, obviously, but I think that's where the value is with something like mastodon. Tight knit communities with the option to reach out to others.
I also like the idea of using BTC you mentioned in another post to state good intentions, but acquiring BTC in the first place seems like much more hassle; registration and sharing of data are required.
You can get BTC with cash; if you already have it, it's a nice option.
"If it can be abused, it will be abused".
Captchas and small btc payments can't easily be abused against the user but email and phone number can. Email and phone number are also not costly to an attacker to spoof or churn out at a high rate. Captchas and small payments have accumulating prohibitive cost for an attacker.
If I create a Gmail account just to sign up, it certainly is much more of a hassle. Even temporary inbox services are a hassle these days compared to a 20-second captcha.
That email is a hassle, but not a big one, is exactly why sites ask for one.
And captchas? You talked about spying before, and that is exactly what captchas do these days. They're massively invasive, plus they were always annoying and never really worked well for their intended purpose.
I have to say, it seems a bit like your argument is stuck a few decades ago. While I quite like the thought experiment of how the internet could work with essentially anonymous and frictionless participation, I don't think it ever has, or at least not for decades. And examples like HN and Reddit are interesting, but every other blog's open comments section shows that this working is the exception rather than the norm.
I consider myself privacy conscious, and of the options captcha, email, and BTC, email for me strikes the right balance of ease of use, anonymity, and effectiveness. More options would always be welcome, though. Concepts like IndieAuth, for example, look promising, but let's not kid ourselves: privacy-wise, they require more of me than providing a fake address to get an email. And just like the BTC solution, it would probably keep many more people out if it were the only option.
Anyway, in the same spirit one could say that "there is a strong comorbidity between being a harasser who jumps on random threads just to insult people, subscribing to racist and authoritarian ideologies, and using Mastodon," which (while true in my personal experience) would be unfair to the nice people that use Mastodon and could prejudice those who read it against Mastodon users because of their software of choice.
I run a Masto instance and I am just speaking from my experience in what shows up in the tags where us admins share block recommendations; there are bad actors using Masto too! I defederated from a Masto instance just this morning after seeing its admin aggressively escalate a conflict with well-practiced speed. One of those pops up about every six months. Pleromae where the admin encourages escalating conflicts and wears an anime girl icon are a constant feature of the block suggestion tags.
Things like #fediblock you mean? Most of the block suggestions there are
- against instances because they federate (as in, they let their users view and reply to posts) with certain other instances (gab, kiwifarms, etc) even if indirectly <https://tabletop.social/@host/103308182908324413>
- against instances that host anyone who kyzh doesn't like
- against instances that host someone who got tagged by someone else that resides in an already blocked instance <https://toot-lab.reclaim.technology/@djsundog/10402250286634...
- against instances for no reason given at all <https://tenforward.social/@guinan/104015065073049864>
- against instances for made-up reasons
- against non-instances <https://mastodon.art/@Curator/103768019516091512>
At first glance, seeing an instance on a block tag does not mean anything other than that certain extremist elements have found that said instance doesn't strictly follow their party line.
> Watching anime is fine by me, I’ve enjoyed the hell out of a lot of the stuff in my time.
What was your intention in focusing on the anime avatars, then? To promote prejudice? To focus on the fact that they are introverts? To make fun of them for being unmanly and having cute avatars? There seems to be a lot of bullying directed at people just because of their avatars.
As I said before my experience is exactly the opposite. I started making all of my statuses private because I was getting stressed every time that someone from mastodon reposted or replied to my status. There seems to be an endless supply of mastodon users willing to jump on random threads just to insult, harass, and shit-talk people behind their backs.
I just think the fact that this association exists is fucking hilarious, it's a detail that nobody even began to get anywhere near imagining in all the sci-fi I read growing up.
Except it is not rare at all. Did you read what I posted regarding fediblock?
A simple question, do you block fedi.absturztau.be by any chance?
(I also don’t go looking at any fediverse site besides my own unless I am investigating reports of bad behavior, tbh. So if there is a vast constellation of chill moe Pleromae I’m missing it. And quite possibly forming invalid stereotypes due to this bias in my sampling.)
It's a recent thing they're doing. I routinely create at least half a dozen free email addresses a year, and phone numbers did not use to be an issue. They've locked things down pretty tight.
Hopefully this is helpful!
It would be fine if the users opted into this, but it happens silently and arbitrarily by admins, oftentimes based on speculation or gossip, not even real abuse. It’s all of the worst of tribalism, polarization, guilt by association, and preemptive censorship (regardless of whether or not admins are “in the right” by censoring messages between x and y flowing through their own machines).
It’s also not easy or reliable to migrate your account between instances without losing your followers, and none of the server implementations yet support virtual hosting, so you can’t migrate hosts while keeping your own domain/handle.
There are real problems there, and casually dismissing the major censorship issues in the ecosystem doesn’t begin to solve any of them.
Some people want to exercise their rights over their computers (pick any ideology, FOSS included) and don't want certain bytes shipped to their machines. Who cares about the reason.
Some people don't have the time, energy, money, and technical experience to exercise their rights of byte-shipping in a competent manner, so they carefully delegate that power to someone they trust. And some want to join in a community that is purposefully run this way. To categorically paint this use case as "insidious" ("silently", "arbitrarily") is in denial of these peoples' real needs.
Forcing peers to accept your bytes, with the assumption that they must examine them with their own eyeballs in order to overcome a zealous interpretation of "censorship", blatantly disregards the humanity of a peer and their real needs.
Would a web host performing MITM on an HTTP connection to alter or redact your blog posts be bad? After all, it's their hardware...
This is a categorically different problem than MITM.
There are instances which require an account in order to see the bans (cyber.space). There are instances which do not list bans at all. There are instances with made up reasons of banning made up instances (mastodon.art). Even that flagship instance lists incorrect reasons for removing instances (claims that certain instances shared illegal content when said instances do not allow any form of illegal content).
In addition most mastodon instances do not disclose their policies via AP. See for example https://fediverse.network/mastodon.art/federation
However, this is not a systematic censorship problem, unlike centralized services with opaque policy language and a complete boot out the door. People are free to run their own instances or have multiple accounts across different instances.
Whether you think they're correct is irrelevant to the question at hand. Freedom of speech and association means you're free to not federate/talk to those problematic instances, and maybe you'd be much happier for it. On the other hand, not being OK with it and trying to fight for transparency means you're trying to externally force these communities to be run the way you want, which may be received well, but not always, because forcing unwanted change is exactly the opposite of the point of federation: communities will be built the way their members want to build them. Like the real world, some value transparency and some don't.
It's one thing to argue specific bans about specific instances and disagree on the other party's interpretation; it's a totally different claim to say that the entire system is corrupt with opaque censorship.
Mastodon != Fediverse
In the future, it would definitely help me and others understand your motivation better if you could even include one more sentence in your communication like "Just here for a correction: some instances are transparent..."
I will strive to be more charitable.
That can be a simple issue of jurisdiction. Mastodon.social is hosted in Germany (IIRC), so they have to adhere to German law. That means, for example, while hatespeech isn't strictly illegal in the US, it certainly is in Germany, it even has a fairly good legal definition. Or take the Japanese instances, which aren't well federated or have media-bans because of differences in media legality. And lastly it can also be simply the case that the instance is not moderating (ie, they write 'no illegal content' but do not care).
Both the statement that an instance shared illegal content and that the same instance was banned for illegal content can be true at the same time.
Why does it matter if any instance decides they don't want to associate with you? It doesn't affect your ability to use the service beyond not being able to interact with folks who probably don't want to talk to you anyway.
Forcing someone to make their server software talk to yours is just as much of a "free speech" infringement, if not more so.
It prevents people on that instance who explicitly want to follow me from doing so.
It also prevents me from following people on that server from my primary account on my homeserver, even if those people explicitly want the whole world to be able to read their public messages.
Both of those are undesirable interference between mutually-desired communication by Alice and Bob, by Mallory.
This doesn't address my point at all that you cannot argue that my server is somehow obligated to process bytes from your server.
Your peers' routers are allowed to drop your packets, but nobody is arguing that that's good or beneficial. There is a difference between "within rights" and "good".
If a company with a near-monopolistic network effect (Google, FB, etc.) censors speech or who-can-talk-to-who/see-what on their platform, it seems that most folks agree that this is bad, whether or not they're willing to do anything about it.
So, instead, we have decentralized services (and semi-decentralized ones, like Mastodon). At what point does a Mastodon community operator's decision to censor speech or who-can-talk-to-who/see-what on their platform become problematic? When a community achieves a large size? If new community members aren't made aware of the censorship? Is the difference between these communities and an ultra-ubiquitous one like "having a Google account" or "being connected to friends on Facebook" a difference of degree, or a qualitative one?
It's simple to argue an extremist position of "all speech between any set of parties must not be suppressed for any reason, even by the parties themselves", but I don't think most people want to live in that world. Similarly, it's pretty hard to isolate a line past which an operator of a community-service should be held to a different standard of conduct because of how ubiquitous/depended-on their community is, but a lot of people seem to think that this line exists.
The technology itself is of course open, but if your content is not approved by the main Mastodon federation, then users will have to be signed into multiple Mastodon federations (if that's what they wish), one to see the main Mastodon federation, and one to see the one that got banned. Because of this extra hurdle, a ban from the main Mastodon federation does shut out a large portion of the Mastodon users.
Mastodon is often presented as this 'free speech social network', in reality it's just a decentralized social network, with all the censorship that comes with being a modern social network.
Based on what principles should server owners be forced to federate with third party servers if they don't want to? How is not wanting to federate with anybody "censorship"?
I've just setup a mastodon instance on a VPS to give it a try. For less than $5 per month you can have your own instance where you can invite like minded people and find people to federate with.
And if you can't find anybody to join you server or federate with you... Maybe you should think about what that says about you instead of screaming that you're being censored?
This group of people have shifted to this position because they no longer have the "de-platform/systemic-censorship" argument that arises when someone is banned from a centralized service, resulting in a total loss of access to the entire platform. Conversely, on the Fediverse they're still there but simply can't talk to some % of users. And that can easily be rectified by being a part of multiple communities and abiding by their rules.
I've tried to write about how ActivityPub (which Mastodon uses) is not a censorship-resistant network, and that the point of federation is to build lots of custom communities and have them politely talk to each other, or ignore the ones that violate a community's expectations. The feedback I literally got here on HN was "I'm disappointed in you", when I think it's an accurate and realistic view, especially when standing in the shadow of FreeNet.
The same liberty of free-speech and free-association that lets a far-left community thrive, and a far-right community thrive, also lets them block each other (which is a good thing -- it would be ugly otherwise).
Vehement agreement from a Masto admin who works to keep her instance a nice quiet chill place for people like her, with some connections to other nice quiet chill parts of the Fediverse, if you want to argue then go to Twitter or go to a "free speech" instance - and accept that you will probably be cut off from the chill places unless you make a second account and abide by the chill rules.
I honestly doubt that. Are you sure that you are not straw-manning them? In my experience they usually complain about how admins strip the ability to read their posts from the users registered in said instances.
>If you come to my house and starts yelling things that offend me and I kick you out, I'm not infringing on your freedom of speech.
It'd be more like your neighbor being offended and therefore kicking out the person yelling things that offend him. If you don't want to be kicked out of the apartment complex, you are required to share the same views as everyone else.
From the mastodon wikipage:
Gab, a controversial social network with a far-right user base, changed its software platform to a fork of Mastodon and became the largest Mastodon node in July 2019. Gab's adoption of Mastodon allowed Gab to be accessed from third-party Mastodon applications, although four of them blocked Gab shortly after the change. In response, Mastodon stated that it was "completely opposed to Gab’s project and philosophy", and criticized Gab for attempting "to monetize and platform racist content while hiding behind the banner of free speech" and for "paywalling basic features that are freely available on Mastodon".
In the case of Mastodon, they've deliberately avoided putting themselves in the shoes of a platform/paper/platform owner in the interest of a system where no central authority fully controls its use.
That the creators of Mastodon are "completely opposed to Gab’s project and philosophy" is in no way a departure from that ideology. They're criticizing Gab's use of free speech using their own free speech, but have deliberately relinquished their right to exert any more authority than that on an ideological basis. That they can't control it themselves is their unique selling point.
> I'm sure the Mastodon creators wouldn't like it, but they'd have to ideologically support the very idea of allowing them this, as it comes down to free speech.
And I argued that it does not come down to free speech, and that whether "Mastodon ideologically support the very idea of allowing them this" is unclear at best.
The developer decided to implement instance blocking, not the other way around.
It was sad to have witnessed such a bright future unfold, interesting discussions on language, on tech, on culture, and then to have half of the world just cut off.
The fediverse was the future...
There was a block-list circulating around, and if you do not block every instance on the list, your instance is misogynist, pedophile and far-right.
It just feels very weird to me that the word "fediverse" is thrown around like a universe, when it is really a balancing act of not getting thrown out for deviating from the norm. Perhaps it is just me who has this fantasy of everyone being in one place, at least on the Internet, but jerks are jerks.
String phone in one hand, scissors in another.
I must have missed the memo, because I run a medium-sized instance, don't follow any blocklist, and no one ever complained about it.
I block instances when they either flood or I find I don't want to have anything to do with them (I do tolerate opinions I disagree with, of course; but not patent bigotry).
And only based on evidence I gather myself, I don't trust screenshots or copy-pastes (but I understand some mod teams do, and that's ok if that's what their users want).
> I wonder how often you get complaints about wanting an instance blocked, and how you manage them.
As I said, I never got complaints. I also never got instance-block requests personally, although I do sometimes see other instance admins saying they blocked a given instance. When that happens, I take a look at that instance's public pages. Usually that's enough to make up my mind, e.g. because their public timeline is overrun by literal nazis and/or lolicon.
(I was looking for examples as I was writing this, and it turns out most of the nazi instances I blocked don't exist anymore. Oh well.)
The hardest part is dealing with big instances with many "well-behaved" users, but also a very lax moderation policy that tolerates trolls. So far I only banned individual trolls in this case, but it requires work, and I understand not all moderators want to spend so much time.
I was just so frustrated when the instance cut me off from people I follow, sorry for the language.
As an added note, authentication can, and for most cases should, be done on a session basis, when establishing the session key (which, by the way, should also be generated with care to provide forward secrecy).
The idea is that if individual messages aren't signed, you gain plausible deniability toward third parties. You know who you're talking to, but you can't take a message to a third party and claim, "hey, this person said this. See? This message is signed by his key."
This is the level of privacy generally expected in a conversation conducted within the same room in meatspace, and most people would be uncomfortable with any less than that.
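The deniability point can be illustrated with a toy sketch: messages are authenticated with a symmetric MAC under a shared session key, so the tag convinces the peer but proves nothing to outsiders (names and the fixed key are illustrative; a real protocol derives the key from an authenticated handshake):

```python
import hmac
import hashlib

# Assume Alice and Bob completed an authenticated key exchange and both
# hold the same session key. The constant here stands in for that step.
session_key = b"\x42" * 32

def tag(message: bytes) -> bytes:
    # Symmetric MAC: proves the message to the peer, not to outsiders.
    return hmac.new(session_key, message, hashlib.sha256).digest()

# Bob verifies Alice's message during the session...
t = tag(b"meet at noon")
assert hmac.compare_digest(t, tag(b"meet at noon"))

# ...but the tag convinces no third party: Bob holds the same key and
# could have forged an identical tag himself, so Alice can plausibly
# deny having sent it.
assert tag(b"meet at noon") == t
```

Contrast this with a digital signature, which only Alice's private key could have produced and which therefore remains convincing evidence in front of anyone.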
I wonder what legal changes they would come up. I am not optimistic.
Bullying has always been a problem. Sometimes it's physical, but it's always mental. The child who is ostracized is bullied too.
Now, in the virtual world, children find it follows them home too. Even if a child is offline after school, they hear all about what was said about them when they get back to school in the morning.
Where I live, there are good speakers who pass through all the schools, talking separately to the children and to the parents. Their descriptions and explanations made me realise that my mental model of what bullying is or how it works was inaccurate.
Enforcement doesn't happen linearly. And if you stop bullies from being discovered at all, there's no hope of changing them. They will still be able to bully someone; it just won't be a celebrity, I guess because celebrities are public.
People say the censoring is to keep others from committing suicide. I think it's simply wrong to censor a person who died by taking their own life. Outrage would be more prevalent if things weren't censored the way they are today. I can rightfully assume not all family members are even told when someone died by suicide, unlike when someone in the family is lost to cancer or another illness that results in death. It's easy to see why certain illnesses get more funding: more people are aware of them.
A lot of positives could come from everyone understanding not all people enjoy living because of whatever reason they suffer that leads to suicide. Progress comes with understanding. I don't think the mental health field is doing a good job at innovating like we see in the tech industry every few years.
IME, bullying was protected by the school because the rule was always "Anyone involved in a fight is punished, even if they didn't start it."
That meant that if someone hit you, you got punished for it. So you couldn't report it or you'd be punished, which basically meant that the majority of conflicts went unreported.
Nobody was holding the school responsible for it, and so they did nothing because it was easiest for them.
However, unlike in-person bullying, cyberbullying is always recorded. (Well, unless you're on voice chat or snapchat or something, I guess.) There's a paper trail for people to follow and determine what really happened.
Perhaps ultimately it shouldn't be the website that's responsible for that, but I know if I were running the website I'd feel ethically that I had to do something.
This. The sad reality, IMO, is that government policing of cyberbullying is never going to work out in practice due to its sheer scale. On the other hand, any legal framework that's going to be introduced would likely be abused by bad actors to suppress speech. So it's either going to be pointless at best, and outright harmful otherwise.
There was a lot of censorship during the Fukushima nuclear disaster by the government.
But admins can still choose to block instances in the future that I might have an interest in interacting with. Choosing an instance is like a gamble.
Making an instance is tedious, and once someone in charge finds out who you hangout with, your domain name gets blocked. Such is socializing.
Then have multiple accounts and abide by each instances' rules.
> Making an instance is tedious, and once someone in charge finds out who you hangout with, your domain name gets blocked. Such is socializing.
You can still hang out with the folks you were hanging out with. Their blocking you has no bearing on who you hang out with, unless you let them get under your skin.
Based on your responses here and elsewhere it sounds like you have a bone to pick with Mastodon because you can't find a solution where you get to be heard by everyone all the time, from the far-left to the far-right. That's not a right and that's not "free speech", that's trampling on others' freedom of association and their right to build communities as they see fit: Not every person is welcome in every community. Who are you and I to dictate what a "correct, healthy community" is?
I really wanted Mastodon to be where I can find everyone. To be free of censorship, ads and algorithm-induced bubbles. I am lucky to have the "right" mentality (in regard to the tech industry), so I am not often suppressed, but everyone is different.
I don't want to impose on someone a "correct, healthy community". Blocking an instance seems to do so.
However, it's very presumptuous to say:
"I really wanted Mastodon to be where I can find everyone."
That's Facebook and Twitter. And even then you can't find everyone.
People go to the Fediverse to build the community they want, not be subjected to "everyone". It's this clash of collective rights vs individualism that seems to drive so many of these ridiculous arguments. It's no different (or, in fact, it may be better now) than getting banned from one of the many phpBB forums of 20 years ago. Those communities thrived and the banned didn't even have an instance leftover to call a home: everything was gone when they got banned.
Just because you want to find everyone, doesn't mean everyone wants you to find them.
"Mastodon is a decentralized network! Remember, regardless of server choice you can talk to and follow anyone on Mastodon!"
I was perhaps misreading the developer's intentions.
So I wouldn't get too hung up on one developer (me included).
Mastodon should be the exact opposite of IRC imo. Hashtags or message threads can act as a simulated chatroom but generally speaking, the power of feed-based social networks is that they invert the traditional power structure, thus yielding more interesting content.
I had high hopes for Mastodon but whatever, this whole social network thing isn't worth the trouble. Now HN and Youtube are the only websites I visit for entertainment.
I understand that the question is: should the mods define who the community can connect with, or not?
Server blocks are just an extension of that function. If a spammer creates a new server under their own control and creates a million accounts to send spam from, do you expect moderators from other servers to just click "block" a million times? No, that's why the bulk option exists; there's no way it couldn't.
De-federation and Gab blocking are mostly for instance operators and app developers to avoid hosting illegal content and avoid being seen as sympathetic to extremists respectively.
What people fear is that either could be used as grounds to charge them as child molesters or as colluding with terrorists, or to revoke their Apple/Play Store developer accounts for life.
If you don't like listening to some people, that's fine, don't follow them, block them, whatever, but I won't join an instance where I can't listen to other people because the moderator doesn't like their message.
I'm happy to let those who want to run a community that has similar values to mine block obnoxious content for me. Far better that than Facebook or Google being the sole arbiters.
The change is that it empowers the users.
> one blocklist will get a lot of users, end up as a "recommended" setting
This is fine as long as the users select said list out of their own will.
> then all those hurt that their bigoted views aren't more popular will moaning about "free speech" again.
Have you considered the possibility that a lot of non-bigoted views are currently blocked due to trigger-happy admins?
Exactly who is on Gab who I want to listen to? If they're not a bigot and not ok with bigotry, why are they on there?
Fact of the matter is that I don't really want to talk to most people on the internet, and I don't want to see what they have to say about me every time I want to see what my friends are up to. I want to talk to my friends, maybe have our wider communities able to chime in, and occasionally discover new people through that. It's not my job to convince random assholes on the internet that I deserve to exist, and it's not useful in any way to see their messages. Blocking extremist free speech instances which promote harassment as a normal part of their operation is... a feature, not a bug.
It's incredibly unlikely that tomorrow, my instance pushes the needle so far that everyone blocks it immediately. More likely a series of changes in the moderation team gradually pushes things that way and I can change instance before things get bad enough that anyone would block it - and I'd do that because it wouldn't be a community I want to be part of any more, rather than any particular fear about being blocked.
Just need to find an instance that doesn't block...
Note that the majority of instances that are "blocked" are actually soft-blocked by most instances, meaning you can still talk to people on them if you follow them, you're just not going to find posts from their users otherwise.
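That "soft block" (silencing) behavior can be sketched as a simple visibility check. This is an illustrative model only, with made-up names, not Mastodon's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str    # e.g. "bob@pawoo.net"
    instance: str  # e.g. "pawoo.net"

@dataclass
class Viewer:
    following: set = field(default_factory=set)

def visible(post: Post, viewer: Viewer, silenced: set) -> bool:
    # Posts from silenced (soft-blocked) instances are hidden from
    # public/federated timelines, but followers still see them.
    if post.instance not in silenced:
        return True
    return post.author in viewer.following
```

So a follower of `bob@pawoo.net` still sees his posts even if `pawoo.net` is silenced, while everyone else simply never encounters them.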
See email for instance: if you're unhappy with your current provider you can move to a different one or even roll out a new server and you can still interact with the other users.
Mastodon uses the ActivityPub protocol, and there are other implementations that use this protocol (Pleroma, for example).
Pawoo is the go-to instance for (lolicon) artists banned from Twitter, and despite being the largest Mastodon instance is blocked by almost all instances.
The fear came from some European laws forbidding under-aged illustrations (typing this makes me die inside), so instances serving pictures from Pawoo may get into trouble.
My theory is that there's a set of curve parameters that map to "legality" in a sigmoid-like response, one that has less to do with depicted age, or even with whether the subject is human or animal, than with shape: a picture of desert hills can look pornographic, while sumo wrestlers charging evoke no sexual emotion at all. That yet-to-be-discovered human curve-scoring function looks, to me, like how legality is actually determined worldwide, on and off the internet, so calling those drawings "loli", or the people drawing those curves "lolicon", is inaccurate in my opinion.
I wrote a proprietary tool called Polearm that provides a Pleroma client as a filesystem.
You’re painting Elixir as this obscure language that no one knows and while it’s undeniable that it may not be as widespread as JS, PHP or Python, it is not so marginal.
I live in a European city that’s not a tech capital and a couple major startups use it, and I’ve seen it used in major apps across the web.
Never used it myself but read how it excels at concurrency and messaging so why not use it for this?
Ruby and PHP are also declining (well, relative to Stackoverflow questions), but Ruby still gets 10x more questions than Elixir and PHP 100X(!) more. Now sure, lots of PHP questions are from noobs who don't work in the industry. But if we start looking at jobs we're gonna see pretty similar results.
P.S. When we look at frameworks it's about 60x more adoption for Rails vs Phoenix https://insights.stackoverflow.com/trends?tags=phoenix-frame...
This is hardly "obscure" - these are applications used by billions of users all over the world.
Potential developer market for your business or OS project is a factor in engineering, sure. But it's certainly not the only one, and there's a utility threshold for how useful a large developer base can be: maybe I don't care that there are only 10,000 good Elixir developers in my region if I only need 3 or 4, and I can entice them with good conditions (salary, or a prestigious OS project).
World-class CTOs and engineers choose Erlang and Elixir for their working characteristics as programming languages, a point which you've chosen to completely ignore.
> Even worse, it's functional
I wouldn't consider myself a functional programmer (although if a language offers FP facilities I often use the hell out of them over imperative and OO constructs), but if I built something in FP because I thought it would be the best-suited paradigm for the task at hand, I'd happily weed out people who can't be arsed to learn the rudiments.
That sounds like the encryption isn't deniable. Personally I would prefer deniable encryption to ability to report wrongthink.
Or a platform used by children.
The alternative, which you're welcome to use, is a fully decentralized/unmoderated platform. That alternative doesn't work for a lot of people. For them, the ability to report is often critical, quite literally, for their physical safety.
May I ask why? If you are not willing to stand behind something that you said then do not say it at all.
Anyway, I do not think that deniable encryption is useful at all, after all potentially edited screenshots are taken as truth all the time. At least if you are using a non-deniable communication method you will be able to ask for proof that you wrote the post which they claim that you wrote.
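For context, the message-franking idea referenced in this thread (reportable yet end-to-end encrypted messages) boils down to a commitment scheme. A rough sketch, assuming an HMAC-based commitment; all names here are illustrative, and real deployments (e.g. Facebook's) involve more machinery:

```python
import hashlib
import hmac
import os

def send(message: bytes):
    # Sender picks a fresh franking key and commits to the plaintext.
    fk = os.urandom(32)
    commitment = hmac.new(fk, message, hashlib.sha256).digest()
    # (message, fk) travel inside the E2EE ciphertext; the server never
    # sees them, only the commitment, which it records at delivery time.
    return (message, fk), commitment

def verify_report(message: bytes, fk: bytes, commitment: bytes) -> bool:
    # On an abuse report, the recipient reveals (message, fk) and the
    # server checks them against the commitment it recorded earlier.
    expected = hmac.new(fk, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, commitment)
```

The platform can then trust a report without ever being able to read messages itself, at the cost of deniability: a recipient holding (message, fk) can prove what was sent.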
You can block the spammer yourself. I'm not sure if the feature is about only private communication between two users or in channel, but if it's in channel, there can be bot logging messages. That way the bot's owner still knows who posted what and can ban/moderate as needed.
>Do you think these things should be allowed to run rampant just because you believe that an admin's decision to not communicate with you is that terrible?
I have no idea what you are talking about. Are you reacting to what I wrote or to your own projections about my beliefs?
If Bob spams thousands of accounts he'd quickly get on multiple block lists.
This is nonsense. Do you really think everyone should have to deal with spam themselves? Do you disable spam filters on your email and deal with all of that on your own? Do you think, on a site like HN, we should have to filter spam ourselves too?
The internet would be completely unusable if it was expected that everyone deal with spam themselves. This is ludicrous.