This is the head of T&S sharing about the new challenges that have come with the Brazilian wave. I hope it’s clear that the increased CSAM is not left online; we take this very seriously. This is just what comes with the increased attention.
Do you feel that this is a transition towards the “moderation at scale is hard” unending series of battles, or do you think it’s just a blip that’s totally addressable?
Like every other app we’ll constantly battle with it. It’s the job. And then disagreements about how we handle it will create controversies, and then that’s where the protocol comes in, where other apps show up and provide viewpoint diversity and a check on the decisions we make
Great answer! Love how you're never afraid to jump in with clear answers to tough questions, gives me some hope we might actually be able to have something nice for once. Bluesky has really been a breath of fresh air compared to how things are going with "the everything app".
It's possible that the reason this particular item is attracting outsized attention is neither randomness nor peculiarity. The competition's hype machine never sleeps.
Well, either way it’s useful to be real about the kinds of challenges that come with social media. It’s not hypothetical. And we’re still at a very small scale compared to most networks (~9 million users).
As just a normal user, I can't speak to their motivations, but the sandcastle kickers are much more present now.
I've seen a surge of new users who are there just to spread racial slurs and take the opposite side of whatever argument is being raised in a given thread.
There are so many of these suddenly that it feels coordinated to me. But I suppose that all it takes is one /pol/ post.
I was an early Lemmy instance host, and particular communities on bigger servers were getting hit with CSAM bots using them as an attack vector. It caused the meme community on Lemmy.world to completely shut down. Not only that, but Lemmy didn't properly remove images from storage when the attacks happened. They weren't accessible on the frontend anymore, at least, but it required additional work on the command line to locate and remove the content from storage.
Wouldn't surprise me in the least if something similar were also happening here, partially masked by the increased attention.
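For anyone who ends up in the same spot: the cleanup basically amounts to finding the files written during the attack window and reviewing/deleting them by hand. A generic sketch of that idea (not the actual Lemmy/pict-rs procedure; the media path and dates are placeholders):

```python
# Generic cleanup sketch: list media files written during a known attack
# window so they can be reviewed and removed. Not Lemmy-specific; the media
# directory and timestamps below are placeholders.

import os
from datetime import datetime, timezone

MEDIA_DIR = "/var/lib/myapp/media"   # placeholder path, adjust to your setup
WINDOW_START = datetime(2023, 8, 27, tzinfo=timezone.utc).timestamp()
WINDOW_END = datetime(2023, 8, 29, tzinfo=timezone.utc).timestamp()

for root, _dirs, files in os.walk(MEDIA_DIR):
    for name in files:
        path = os.path.join(root, name)
        mtime = os.path.getmtime(path)
        if WINDOW_START <= mtime <= WINDOW_END:
            print(path)  # review first, then os.remove(path) once confirmed
```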
What is everyone using to scan for CSAM nowadays? As I understand it, the tech is typically some type of hash comparison, but the prominent thing I see is Thorn, which would cost something like 10x our whole S3 bill per month. Is there anything self-hostable or open source?
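To illustrate what "hash comparison" means here, a minimal sketch with made-up values; the matching itself is the easy part, and the real hurdle is getting access to a vetted hash list at all:

```python
# Minimal sketch of hash-list matching. The list entries here are placeholders;
# real lists come from clearinghouses, not from the platform itself. Production
# systems use perceptual hashes (PhotoDNA-style) so that re-encoded or resized
# copies still match; the Hamming-distance check below stands in for that idea.

import hashlib

KNOWN_BAD_SHA256 = {"<placeholder hex digest>"}   # hypothetical entries

def exact_match(image_bytes):
    """Cryptographic hashes only catch byte-identical files."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_SHA256

def hamming_distance(a, b):
    """Number of differing bits between two fixed-length perceptual hashes."""
    return bin(a ^ b).count("1")

def perceptual_match(phash, known_phashes, threshold=10):
    """A small Hamming distance means 'probably the same picture', even after edits."""
    return any(hamming_distance(phash, k) <= threshold for k in known_phashes)
```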
I’m surprised there isn’t a government offering this type of service for free.
I don't think either of these is available to a community/self-hosted platform. Both are advertised as being available only to "qualifying" organisations.
I'm glad they're being transparent like this, but problems like this are why I have big questions about their ability to keep going.
Social media Trust and Safety is hard work, and because it's adversarial, the job is never, ever done. The more eyeballs you have, the more your platform becomes a target for all sorts of miscreants. Centralized platforms need centralized response to that, and it's just not cheap.
Last I heard they didn't have a sustainable revenue model. Has that changed?
We’re doing fine so far. It’s a wild time but none of this is unexpected, or caught us unprepared. This is what investment is for, to help us build up the system while we work towards revenue.
You're becoming a domain registrar? Well I'm not sure that'll solve revenue but it's certainly interesting, I'd like to read more about the reasoning behind this if you have any posts or articles on it yet. Would also love to check out the code for a bsky-style registrar backend if that's open source :)
You could totally beat Namecheap on search and provide good value there. I like theirs better than most, but most are awful and theirs still isn't amazing. Let me find name options I wouldn't find on other registrars' searches and I'd happily pay a $5 premium.
Thanks! Well I love mucking about with DNS TXT records so it's not for me then :). But Namecheap is my go-to registrar so I'm sure it'll be solid enough, hope it works out!
Oh and obviously this is super early and you'd probably fix this on your own anyways but a quick note since I checked out the interface: you should disable autocorrect/autocomplete on the search form input. When I type a domain on my phone it inserts a space after the period and the search fails.
Not sure why people are downvoting this. I think it's a good question for any startup, but especially one with millions of users that are going to be real sad if it turns out to be unsustainable.
I get it, my phrasing is pointed, but I'd rather ask directly and get a discussion than be polite and not get an answer (to pfraze's credit, he would have answered honestly whether my phrasing was rude or not).
IMO reselling domains is not going to do it, but not a bad start to selling services
Personally I'd pay $5 a month if I was able to use my PDS as a backup for all my photos and media, like a Dropbox or OneDrive. That's a pretty commoditized service, but it would be nice to serve content without having to worry about the embed disappearing. I remember another Twitter clone that was getting traction had trouble onboarding musicians because they would try to embed YouTube links to their own music, but their label had disabled embeds!
Did I see something about bsky dipping their toes in bandcamp's market? I think that's all a lot of artists want, a host for their media that actually distributes posts to followers (unlike Facebook, which holds your own fans for ransom and doesn't show posts to all of them unless you pay to "boost" them).
I'm sympathetic, of course. I was part of a startup that iterated for a couple of years on a social thing looking for revenue, so I know the semi-delusional level of hope that any startup requires before you find product-market fit.
I agree that's one thing investment is used for. But another thing it's used for is to prop up things that never find a revenue model until the lack of money forces people to swallow some hard pills.
Bluesky's been going quite a while for not having a proven revenue model. And it's not like it's the only social media company ever; this is a space that has been well explored over decades. So the lack of it in any social media startup these days seems pretty glaring to me.
Twitter launched in 2006 and didn't turn a profit until 2018. They're not making money now either, and they don't show any signs of going away, so I agree it needs to happen at some point, but I'm not convinced it's an immediate existential threat.
You're conflating being sustainable with turning a profit. I don't much care if Bluesky is ever profitable, but it sounds like it's pretty far away from being sustainable.
I think Twitter is a bad example in a few other ways. At the time, Twitter was a pioneer, so one reason they were burning money was to figure out the landscape. That landscape is pretty well known now, and the experience of Bluesky is not wildly different from what Twitter's was a decade ago. Another is that Bluesky claims they aren't going for the Twitter- and Facebook-proven revenue model of ads, so we can't use Twitter's success to say much about Bluesky.
But the biggest reason it's a bad example is that Twitter shows what happens when you're not sustainable on your own. Their original owners, VCs, pushed them to monetize through ads. Later, different investors kept pushing for more revenue, causing a lot of flailing inside Twitter without much success. (E.g., Vine's rise and fall, or when Twitter tried streaming NFL games.) And most recently, of course, Twitter's new owner gives a good example of why one doesn't want to be too dependent on outside money over the long haul.
Fair points. Still--and I'm no financial expert and haven't looked at any data so maybe I'm wrong--I don't get the impression that they're in danger of going bankrupt within the next year, and a lot can happen in a year. I guess I just have more confidence that they are actively working on it and have the talent to figure this out before it's too late.
Oh, I certainly believe they're talented at creating software and products. But those talents aren't well correlated with building businesses. If anything, I think it's negatively correlated. I hope they make it, though!
That is a pretty glib response, and disappointing in that you aren't grappling with the negative externalities of this approach.
I also think the reflexive dismissiveness is a sign that Bluesky is not really taking the problem seriously, even internally. Which is exactly what I'd expect from the external behavior.
And I get it! Building products is way more fun than building businesses. But as somebody who has put a number of years into things that turned out to be unsustainable, I encourage all startup people to at least try to avoid the "me sowing/me reaping" ride.
I didn’t mean to come off as dismissive, really I was trying to voice that part which we’ve both been through, the endless stack of existential concerns that you have to get comfortable with in this job. I’m actually quite acutely concerned about revenue and spending a fair amount of my time preparing for our answer; I’m just not at liberty to share our plans yet, so I went for the shorter response.
I know you’re concerned. I hope we find a good answer!
You can either work smarter or harder with content moderation. Unfortunately, most companies choose harder and just outsource the eyeballs to TPCs and cheap overseas labor with predictable results.
I think you underestimate the extent to which companies try to do both.
I know Twitter and Facebook both spent big for years and years on increasingly sophisticated ways to automatically recognize problematic content. Even they couldn't afford to just throw labor at the problem; a few years back when I ran the numbers, Twitter's revenue was about $1/user/month, or about 1¢ per tweet. No matter where you outsource to, that doesn't buy much labor.
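Rough shape of that arithmetic, using round numbers that are assumptions (circa-2019 ballpark) rather than exact figures:

```python
# Back-of-the-envelope only: all three inputs are assumed round numbers.

annual_revenue = 3.5e9    # USD/year, assumed
monthly_users = 330e6     # monthly active users, assumed
tweets_per_day = 500e6    # assumed

per_user_month = annual_revenue / 12 / monthly_users
per_tweet = annual_revenue / (tweets_per_day * 365)

print(f"~${per_user_month:.2f} per user per month")  # roughly a dollar
print(f"~{per_tweet * 100:.1f} cents per tweet")      # a cent or two
```

Either way the point stands: it's pennies per post, which is far short of paying a human to look at each one.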
If you look at the manuals for the outsourced moderators [1], it's clear they were being asked to work at a pretty sophisticated level. That level certainly wasn't available from ML systems before a couple of years ago, and I'm skeptical that it's there now, given that many of the hard moderation decisions come down to contextual judgements, and given how good awful people are at innovating their way around automated detection systems.
The Integrity Ops moderation tools I am familiar with are often designed and built as an afterthought to the platform. Surfacing problematic content for review is only one piece of the puzzle and insufficient for effective moderation.
The moderation described in the Guardian article is far from sophisticated. It is, in fact, the most basic way to moderate a platform. Create a queue of flagged content and have people manually review it.
When your platform gets large enough you have to choose harder. Facebook tried to choose smarter in Myanmar, and look at how well it worked out. There’s simply no algorithmic solution to the problem where your users genuinely want to see the content you’re trying to moderate.
Even if you build flawless tools that ensure the algorithm won’t promote anyone talking about killing, your users will invent a new word “unaliving” and use that instead.
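To make that failure mode concrete, here's a toy version (the terms and the list are made up; real filters are fancier but fall to the same trick):

```python
# Toy illustration of why static keyword filtering ages badly: the blocklist
# is frozen, the slang is not. Terms here are examples only.

BLOCKLIST = {"kill", "killing"}

def is_flagged(post):
    words = post.lower().split()
    return any(term in words for term in BLOCKLIST)

print(is_flagged("stop killing"))    # True
print(is_flagged("stop unaliving"))  # False: same meaning, new word, filter misses it
```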
ContentMod is a war and not a battle. Tooling should be flexible and adaptable as the adversaries' tactics change and evolve. One Algorithm To Moderate It All™ is a fool's errand. Today's nonsensical word/emoji/ASCII art is tomorrow's euphemism for drugs or hate or ???.
The smart, and only lasting, solution is to build a team of Navy SEALs and give them advanced tools to quickly identify and neutralize the adversary before any significant damage is done.
I get it though. If you have money, then the easier solution is the one you can just throw money at to get instant results; mental health of your employees be damned.
Bluesky PBC's hosted AppView has its own moderation policies, much like mastodon.social does in the Mastodon/ActivityPub world. You can spin up your own instance if you want to do things your own way.
Bluesky/atproto has never claimed to be "censorship resistant" or similar, it's pretty conventional in that regard.
Exactly. A fundamental question for any platform is, "How much harm do we want to enable?" The answer can't in practice be "none". But "well fuck it" isn't a tenable answer either, because reasonable users don't want to be around the awful stuff and they'll migrate away, leaving you with an ever-increasing proportion of terrible users. One can't think effectively about free speech without thinking about freedom of association.
That is a common novice take on the problem, but it only makes sense if the only way harm happens in the world is by people accidentally seeing things they didn't want to see and that can be easily filtered out.
I think we just don't agree on what the problem is. I don't want a discussion platform protecting me from "dangerous" content that I am choosing to engage with.
Yes, I think most people feel the same way as you. The question then is what to do about the societal harms enabled by the platform.
I think that's a question for society, but also for the people working at the platform. For example, if you read interviews with 4chan founder Christopher "Moot" Poole, you'll see that he started out with a similar ethos to yours. He ended up reining in things some over time, still grew miserable with what he created, and eventually quit.
The problem is much larger for a platform that aims to be larger and not generally despised. If you look through the big platforms' rules, it's worth thinking about each item from the perspective of, "Am I happy devoting my life to enabling that?" E.g., the people who are into CSAM don't want protection from content they're choosing to engage with, just like you. But how many people want to go to work each day and say, "Yay, another day of supporting machinery to distribute child porn across the globe?"
A business needs to have the freedom to kick out shitheads so as to protect itself from becoming a place that non-shitheads don't want to do business with.
A grocery store doesn't let people walk around in their store saying just anything to their customers. They're (rightfully) not bound to as high a standard for what they allow to be said in their store as the government is for how much it will let you say before imprisoning you.
You could say the same thing about a telephone company or a postal service. After all, they are just like a grocery store, so no need for them to uphold free speech.
I've volunteered in crisis intervention and mental health coaching for a decade or so and this is a major part of certain communities.
When ED issues come up, especially in teens, the concerning topics usually include (but are not limited to):
- How to hide it better from your parents/friends
- How to be more efficient at it
- Selfies of teens showing dangerously low body fat as "goals"
- Shaming of people for eating (at all)
- Coaching people who are vulnerable due to other conditions into ED as a way to feel better
Bear in mind that in some cases we're talking about people who will be trying to hide that all they consume is half a small orange and one egg white a day (a relatively common amount), and as little water as possible.
It's not 'dieting'. It's a mental health condition that, untreated, leads to death.
To be clear, this is specifically about glorification of anorexia and bulimia. Are you familiar with these disorders?
No one is saying they are not permitted to speak. But if you want to post on a platform, you will also have to adhere to the rules and guidelines of that platform. When you don’t, your posts will be removed.
And if your platform of choice says to not glorify these disorders, don’t be surprised when posts glorifying the disorders get removed from the platform.
Of course. I’m intimately familiar with particular individuals that society has decided it’s ok to constantly berate, telling them they know what’s best for them, that they’re too thin, that they aren’t eating enough, etc. etc. Keep in mind none of these folks are medical doctors, they haven’t seen any blood tests, they have no idea how much this individual actually eats; they really have no reason to speak about any bit of their medical status whatsoever. They have absolutely zero context; they are like a child who walks into a movie in the middle and immediately starts criticizing it.
But somehow society has decided thin folks are an acceptable target for baseless harassment, and she has to deal with being a pariah anywhere except LA.
Do you think there is no ability to form a consensus on what is harmful content, or that the community has no right to discourage or remove content they have decided is harmful?
This doesn't really matter. We already know how this plays out because we've seen it on repeat. Bluesky gets a surge in users, then no one stays, no one uses the platform. They're missing too many features for Bluesky to be worth using right now. The most obvious one is video: no one wants to use a social platform without video.
>BlueSky isn’t going anywhere, especially with the clearly eventual demise of Twitter.
According to the media and on Reddit, since Musk bought Twitter, Mastodon, Bluesky and Threads have collectively gained 70 bazillion users on seventeen different occasions, and have caused Twitter to collapse and disappear eleven times.
Video support is in testing (iiuc you can try it for yourself if you build the right dev branches), and is rumored to be shipping in the coming days/weeks.
Twitter initially didn't support video, but you could embed YouTube videos within a tweet simply by posting a link to a YouTube video. Sure, you needed a separate account for uploading the video to YouTube, but most people don't post videos anyway.
So if Bluesky ships video support within the next two decades, they'll be fine. Or more likely, HN will just move on to the next thing to complain about.
I really don't think not having native video support is a deal breaker for many people. As I said, most people on X (Twitter) don't post videos anyway. Moreover, it's not clear whether the added moderation and server cost is worth it.
Hello, I am "no one". I am just fine with their feature set. The only question is whether my friends sign up and are active there. I miss pre-Musk Twitter and sharing text things via Twitter, links to stories and such. I've never looked at a platform and thought "gee, what I really want is a TikTok/Instagram wanna-be clone".
People who want content that's more substantive than gifs use it every day. Video is coming in the next update, but I'm not necessarily looking forward to it. The worst thing would be for Bluesky to turn into Twitter or Threads. But the fact that Bluesky has had time to grow slowly and organically and develop its own identity makes me optimistic it will avoid the worst case scenario of importing the whole eternal September from Twitter.
It's a problem everywhere, and here it's simply scaling up along with the rest of their traffic in general which spiked this week due to another Twitter subcommunity exodus event.
Perhaps they mean it's not a useful indicator of anything. If abuse material on Bluesky has increased but has decreased on the web overall (because, for example, Bluesky is better than other channels at policing content), then that would actually be good.
A media channel that increases in popularity will see increases in all types of content. Whether it increases beyond the levels seen on other media is probably the metric to watch.
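As a toy example of what that comparison would look like, with entirely made-up numbers (the point is to compare rates per user, not raw counts):

```python
# Hypothetical numbers only: raw report counts rise with traffic no matter
# what, so normalize by audience size before comparing platforms.

platforms = {
    # name: (abuse reports per month, monthly active users), both invented
    "Bluesky": (1_500, 9_000_000),
    "BigNetwork": (40_000, 300_000_000),
}

for name, (reports, mau) in platforms.items():
    rate = reports / mau * 100_000
    print(f"{name}: {rate:.1f} reports per 100k users per month")
```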