Due to how easy it is to set up accounts and post on Bluesky, it’s likely many of the same operatives behind the propaganda and bot armies on Twitter are now pushing the same vitriolic content and triggering these reports. If they can negatively impact Bluesky at a critical moment, it’ll reduce the flow of new users, who will quickly surmise “oh, this is just like Twitter”.
This underestimates the effect of Bluesky’s culture of “block and move on”. There are curated block lists you can subscribe to. Individual communities do a pretty good job of shutting down toxicity they don’t want to engage with.
It shares the same problem Twitter had years ago, back when it supported API blocklists.
Everybody you're blocking is at the whims of the blocklist owner, and it didn't take long for those people to go insane and use their lists as a tool for their own personal unrelated crusades.
Bluesky is already starting to experience this, judging from a few lists I saw going around.
Being 'at the whims' of whoever maintains the blocklist isn't unique to this style of moderation - when it's Twitter, you're at the whims of the company - but at least this way you can switch to other blocklists if/when the good ones go to shit, or start a community-run blocklist.
Yes, your combined blocklist is point-in-time. If you unsubscribe from a blocklist, a user on that blocklist will be immediately unblocked, provided they're not also on another blocklist you subscribe to (and that you didn't block them directly).
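To make those semantics concrete, here's a tiny sketch of that point-in-time resolution (the names are made up for illustration, not Bluesky's actual API):

    // An account is blocked only while your own block or some
    // currently-subscribed list covers it.
    function isBlocked(
      did: string,
      personalBlocks: Set<string>,
      subscribedLists: Set<string>[],
    ): boolean {
      return personalBlocks.has(did) || subscribedLists.some((list) => list.has(did));
    }
    // Unsubscribing just removes a list from subscribedLists, so anyone who
    // was only on that list is unblocked on the very next check.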
If the autoblock subscription had a rule that a name appearing on 3 or more (configurable) subscribed moderation lists gets autoblocked, then users could stop following bad actors and change which moderation lists they use over time, with less impact on their experience. If you see messages from someone and they're on one of your block lists, you might reconsider the list. If they're on two, you might consider personally blocking them, and if they're on three or more, you'd never see them. Make blocks require a reason as well, which the user will see alongside their block.
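A rough sketch of that threshold rule, with made-up names (and assuming the client can see the raw lists):

    type Verdict = "show" | "reconsider-list" | "consider-block" | "autoblock";

    function verdictFor(
      did: string,
      subscribedLists: Set<string>[],
      threshold = 3, // the configurable cutoff
    ): Verdict {
      // Count how many of your subscribed moderation lists name this account.
      const hits = subscribedLists.filter((list) => list.has(did)).length;
      if (hits >= threshold) return "autoblock";  // you'd never see them
      if (hits === 2) return "consider-block";    // maybe block them personally
      if (hits === 1) return "reconsider-list";   // maybe audit that list
      return "show";
    }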
On Bluesky you have different algorithms/feeds, which solves this problem. You subscribe to multiple feeds, which show you different content using different moderation and block lists - sort of like how you read different news sites and watch different news channels. Whichever feed you enjoy the most is the one you spend the most time on.
You're right, they need to handle the bot problem well to really succeed.
But it won't be "just like twitter" unless the "Discover" tab ("For You" on X) is filled with the billionaire owner's non-stop, hyper-partisan political posts.
I don’t think you realistically can. I’d instead approach it by limiting the reach of new accounts until they’re proven to be good actors.
Or switch it back to invite only, as there’s a massive userbase now, and if you invite a problematic account it becomes a problem for your account too. Operate on a vouch system.
Aha... don't be naïve... what is the definition of "good" in 2024? Take the US population, for example... 50% will say your intentions are "good", the other half will not!
Moderation lists and labellers honestly already get you most of the way there. Labellers are very effective at flagging spam/botted content and accounts that continuously show up on labellers as spam/bot content get referred to moderation lists dedicated to specific types of spam and bot content.
So you can already start by using a labeller and hiding that content behind a warning (kind of like the NSFW wall), hiding it entirely, or just attaching a visual tag to it (based on your preferences). Then, to filter out the more consistent perpetrators, you can rely on mute/block lists.
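As a sketch of how those per-label preferences could be applied client-side (the label names and the Preference type here are invented for illustration):

    type Preference = "hide" | "warn" | "tag" | "show";

    const prefs: Record<string, Preference> = {
      spam: "hide",          // drop it from the feed entirely
      bot: "warn",           // collapse behind a click-through, like the NSFW wall
      "ai-generated": "tag", // show it, but with a visible badge
    };

    function renderDecision(labels: string[]): Preference {
      // Apply the strictest preference that any attached label asks for.
      const order: Preference[] = ["hide", "warn", "tag", "show"];
      const wanted = labels.map((l) => prefs[l] ?? "show");
      return order.find((p) => wanted.includes(p)) ?? "show";
    }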
No one's saying the quiet part out loud. Pay for an account. Even $1, one time, is enough to cut almost all those bot farms down.
Is it realistic? Yes. Is it viable? I'm not sure. People claim to care about privacy, but they'll choose ads and trackers over a subscription any day of the week. Anyone operating a website or app with a subscription knows this.
Considering that some bot operators [1] and spammers [2] on Twitter are willing to pay the $8 a month for fake verification, I don't expect a $1 sign-up fee to be very effective.
What about using TPM modules? I've been researching these modules lately, primarily for use in online video games. From my understanding, you can use TPMs to effectively ban players (TPM ban) based on their hardware. This would mean every time an account is banned, the bad actor would have to switch to a different TPM. Since a TPM costs real money, this places a limit on the scalability of a bad actor.
Cool, if you can require them for every possible interaction on a platform. But even that violates privacy if you have one universal value that ties it all together (the identifier of the specific TPM).
It's just the phone number/email issue but tied to hardware. If you think these things won't leak and allow bad actors to tie your accounts across services then I have some lovely real estate in Florida you may be interested in.
It also appears that resetting an fTPM works around this, since it fully resets the TPM. And even if it didn't, people buying used CPUs could find that they're banned from games they've never even played or installed on their system before.
> It also appears that resetting an fTPM works around this, since it fully resets the TPM. And even if it didn't, people buying used CPUs could find that they're banned from games they've never even played or installed on their system before.
It depends on how the TPM was applied in practice. The initial manufacturer key (the Endorsement Key) is hardcoded and unextractable. All the long-lived keys are derived from it and can be verified using the public part of the EK. Usually the EK (or a cert created from it) is used to anchor remote attestation.
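Assuming that flow, a hardware ban boils down to something like this sketch (server-side, after attestation has already proven the client's TPM holds that EK; nothing here is a specific vendor's API):

    import { createHash } from "node:crypto";

    // Hash the EK certificate into a stable hardware identifier.
    function banIdFromEkCert(ekCertDer: Buffer): string {
      return createHash("sha256").update(ekCertDer).digest("hex");
    }

    const bannedTpms = new Set<string>(); // persisted somewhere in practice

    function isHardwareBanned(ekCertDer: Buffer): boolean {
      return bannedTpms.has(banIdFromEkCert(ekCertDer));
    }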
> What about using TPM modules? I've been researching these modules lately, primarily for use in online video games. From my understanding, you can use TPMs to effectively ban players (TPM ban) based on their hardware. This would mean every time an account is banned, the bad actor would have to switch to a different TPM. Since a TPM costs real money, this places a limit on the scalability of a bad actor.
It is even worse for privacy than a phone number. You can never change it, and you can be linked across different services - soon automatically, if Google goes forward with its plans.
> I think if you can realistically solve that you'd be a millionaire already.
Please.
If I knew how to do that, or even just how to reduce bots further when SMS verification etc. is already in place, I'd be a multi-billionaire at least.
Making a Twitter clone is relatively easy. Making a community with a good vibe that's actually worth spending time in is the single problem that keeps the clones from standing out to normal users.
One idea I had (feel free to steal it for your own use) was a one-time crypto payment to create an account. Of course you can't prevent bots from paying it, but if the price is right, I think it might greatly limit the number of bots on the platform, and possibly the number of low-quality accounts too.
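As a sketch of what that gate might look like (lookupTransaction and DEPOSIT_ADDRESS are stand-ins for whatever chain or payment-processor API you'd actually integrate):

    interface Tx { amount: number; recipient: string; }

    // Assumed helper wrapping the actual payment/chain lookup.
    declare function lookupTransaction(txId: string): Promise<Tx | null>;

    const DEPOSIT_ADDRESS = "platform-deposit-address"; // placeholder
    const SIGNUP_PRICE = 100;            // e.g. $1.00, in cents
    const usedTxIds = new Set<string>(); // persisted in a real system

    async function canCreateAccount(txId: string): Promise<boolean> {
      if (usedTxIds.has(txId)) return false; // receipt already spent once
      const tx = await lookupTransaction(txId);
      if (!tx || tx.amount < SIGNUP_PRICE || tx.recipient !== DEPOSIT_ADDRESS) {
        return false;
      }
      usedTxIds.add(txId); // one account per payment
      return true;
    }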
But you don't know what you don't know, so I might be missing something that makes this pointless.
Bad actors were willing to pay for whatever Twitter's premium thingie was called at the time, shortly after Musk started changing what the blue tick meant.