Blocking Threads won't be enough to protect privacy once they join the Fediverse (thenexus.today)
197 points by jdp23 on July 12, 2023 | 415 comments



If you have personal information you do not wish bad actors to see, do not publish it using an open protocol explicitly designed to allow anyone to read said information.

Defederating from Meta as a solution is stupid - Meta can (and will if they actually care enough) just rejoin undercover.

Furthermore, when it comes to the fediverse, Meta is actually one of the more trusted actors compared to whatever else is on there - at least they're a known legal entity instead of some random.

Finally, the fact that publishing private information publicly on the fediverse wasn't considered an issue before Meta came along shows just how irrelevant the whole thing is - the data has been public all this time, but the network is so irrelevant that not even bad actors cared enough to actually scrape it (or at least do anything with it).


Yeah I stopped reading the article once it mentioned people getting annoyed and wanting to control who sees their public content. You don't get to make that call. The community doesn't get to make that call. The protocol makes that call and the protocol allows anybody to view stuff you post publicly. You don't get to say "I publish this but Meta can't view it". Not even legally you don't. That's not privacy and no amount of "web3" is going to fix that.

Copyright cannot restrict who can view a work you created once you transfer or license that work to somebody else (e.g. by releasing it publicly on the fediverse). You don't get to say "here's my post but if you show it to Meta then that's illegal". We don't even have a common legal framework for dealing with content distribution that isn't copying (we got "lucky" that you have to copy content to view it digitally so copyright can be poorly jammed onto the digital content distribution model and everything doesn't burn down).

Where the heck did people get the idea that they get to dictate how culture spreads and evolves?


Your understanding is flawed. Copyright is 100% okay with transitively controlling production of content.

After all how else would you describe someone allowing Netflix to stream your content?

You can't restream Netflix, and Netflix is restricted in how it can stream to you. Both are rooted in the original copyright protection and the licenses to content given out by intermediaries.

Most posting platforms give themselves very broad rights with what they can do in a transitive fashion.

But that is "companies don't want to get sued for user uploaded content and are lazy" not "you don't have any rights over things that are published".


It's not that simple. If I sell you a book, I cannot place a restriction on that sale saying you can't resell that book (first sale doctrine). You're allowed to do that because you own the copy of the book. You can't copy the book and sell it, because you don't own the copyright, but you can sell the original copy, show it to whomever you want, etc. It's not that "copyright is okay with transitive licensing" but more that "copyright simply prevents you from copying a work you didn't create (or otherwise don't have copyrights over)". That's all it does.

If you want to place downstream restrictions on your content then you have to license it (and get someone to agree to your license). You have rights over how you license content. Yes. And users may agree that they're only "viewing" a copy and don't in fact own it when engaging with your licensed content, sure. But my main point was that you don't get to publish content to the public domain and then say oh wait no I didn't want Meta to see that oops #privacy #cancelmeta. And further that it's kinda silly to imagine a world where everyone licenses every little toot they make. At some point we're in a public forum and we just all need to understand what that means, including that someone you don't like might be listening.

Anyway, I don't think licensing content is a positive thing for society. It may benefit media conglomerates, but not individuals. Arguably, creators shouldn't be allowed to say "Netflix you can stream this content to users but not in Brazil". I don't think it's a sealed deal that downstream restrictions on content distribution are healthy or in any way in the public interest. Charging a royalty for a views/streams of some show is one thing. Saying "only on Tuesdays and not in Brazil" gives too much control to creators to dictate how their art should be interpreted. And nobody can prevent you from using Netflix to stream a show to your Brazilian friend on the couch next to you... nor should they ever be able to. That would be really really bad technology, were it to exist.

So short of attaching licenses to every post you make, no, there's not a socially healthy, let alone even viable, strategy to control content distribution in the "fediverse".


> So short of attaching licenses to every post you make

I think you have this backwards. Full copyright protection is the default for all content not licensed otherwise. Publishing does not put content into a "public domain" status. Social media platforms already have strong legal terms around every piece of content. If you violate that license, platforms may choose to fuck you up. E.g.: https://www.malwarebytes.com/blog/news/2023/01/untraceable-s...

It would be easy enough for particular Mastodon servers and/or accounts to be explicit about their downstream licensing.


I understand that generally you retain the copyright to your works implicitly and that you don't have to actively file for copyright in order to gain the protection of copyright law.

I'm arguing that it's pretty preposterous for a global decentralized federated social platform to assume some authority (enforced by whom, that'd require a central group of enforcers) over the licensing of posts on the network by invoking US copyright law. Conceptually it doesn't make any sense. So the only thing Mastodon can do is at a protocol/code level require that public content flowing through the pipes be licensed for public consumption (copying and distribution) on the network, the colloquial "public domain".

> It would be easy enough

I can't really imagine the matrix of which nodes can federate with which others based on the default content license they impose on posted content. "Oh you can only view my post if you use this server over here because it uses a compatible downstream licensing configuration." Would that really "work"?


It's important to remember that the tech must conform to the law, not the other way around. If they're not compatible, it's the tech that must bend.

That said, why is it strange to think that a license agreement must happen before a node can reshare a post?


How can the node copy the bytes to display the post to users and send it to other nodes, if it's obeying US copyright to a T, unless the node either has copyrights over the content or the content is licensed in a way that allows the node those privileges?


> I'm arguing that it's pretty preposterous for a global decentralized federated social platform to assume some authority (enforced by whom, that'd require a central group of enforcers) over the licensing

Why is that not-an-entity the only thing you're considering as able to enforce copyright or otherwise check a platform for doing something legally dubious?

> the colloquial "public domain".

That is not what public domain means.


Would you like to elaborate instead of just a snarky "that's not how that works" with no real contribution to the discussion?

1. Who would enforce copyright on a globally distributed content platform? Which country's "copyright" laws does a node follow? Who is responsible (who do you sue) when Meta federates with a node and receives a copy of content it allegedly isn't licensed to have or doesn't have copyrights over? (Hint: not Meta, they didn't make the copy.)

2. Why do you think I used "colloquial"? On a platform where people are free to publicly view, share, copy, remix, etc. the content then the content is said to be in the public domain. In the US specifically (post 1988) the terms of use of the platform would probably need to explicitly have you release your copyrights on the content before you post, to make it clear that the content is to be treated that way. But I'm saying, the spirit of a public social network where everyone is free to engage with the text of other people's posts, is that the content is "in the public domain". Unless you can provide an alternative context and definition of "public domain" where you're technically correct...

https://en.wikipedia.org/wiki/Public_domain


> That is not what public domain means.

FWIW, I've had a discussion with a British lawyer; IIRC, British law has no specific term for what is called "public domain" in US copyright law, and the term "public domain" in UK law just means "the public can access it".


Mastodon-the-software doesn't have to do anything. Mastodon instances have to choose whether to block Threads or not.

Of course there is no 'central enforcement', that would be at odds with the very principle of federated social media (namely, that instance admins are free to choose how they run their own instance).


This gets you right back to the issue of having to transmit licenses with all content. Which is fine - this can absolutely be a thing which exists within the protocol.

But then people are suddenly going to find they don't get the impact they want either: because the immediate outcome is simply going to be "license for public distribution or we'll reject your content" because everything else is too much hassle.
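
To make that concrete, here's a rough sketch (purely my own assumption; "license" is not a standard ActivityStreams property, so this imagines a schema.org-style extension that federating servers would have to agree on) of stamping an outgoing post with a license and the predictable "public distribution or reject" peer policy, in Python:

    # Hypothetical: stamp an outgoing ActivityPub Note with a license URI.
    ACCEPTED_LICENSES = {
        "https://creativecommons.org/publicdomain/zero/1.0/",
        "https://creativecommons.org/licenses/by/4.0/",
    }

    def outgoing_note(author: str, text: str, license_uri: str) -> dict:
        return {
            "@context": ["https://www.w3.org/ns/activitystreams",
                         {"license": "https://schema.org/license"}],
            "type": "Note",
            "attributedTo": author,
            "content": text,
            "license": license_uri,
        }

    def accept_incoming(note: dict) -> bool:
        # The likely peer policy: licensed for public distribution, or dropped.
        return note.get("license") in ACCEPTED_LICENSES

Anything carrying a more restrictive license just doesn't federate, which is exactly the "too much hassle" outcome.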


Blocking Threads doesn't prevent your content from getting to Meta. Some other instance that you federate with might not block it. See: TFA.


Blocking Threads is necessary but not sufficient. I should probably be clearer about that in the article.


>Not even legally you don't… Copyright cannot restrict who can view a work you created once you transfer or license that work to somebody else (e.g. by releasing it publicly on the fediverse).

This is literally the opposite of what you are saying now. You said you legally have no right to restrict, and now you're acting as if you'd only meant that, in some practical sense, it would not be possible to enforce a legal restriction.


I can see how my wording was confusing. Sorry. Just focus on this:

> So short of attaching licenses to every post you make, no, there's not a socially healthy, let alone even viable, strategy to control content distribution in the "fediverse".

I'm arguing that copyright in and of itself does not grant you the right to control how owned copies of your work are used. This is evident by the first sale doctrine.

You can choose to only distribute your work with an attached license that does not confer ownership rights. And because of the license I've agreed to I can't do anything with that work that would breach the license agreement. And because you didn't transfer ownership to me of an actual copy of the work, the first sale doctrine does not apply. And I'm not allowed to copy that work, because of copyright. The limits you've imposed on the work are because of the license not because of copyright.

There's no component of copyright that statutorily grants an author any control of the transitive flow of copyrighted content. It's all in the license.


I can't believe people who are into decentralization and federation now essentially want to reinvent DRM!


DRM could be cool if, rather than cryptographically preventing copying, it encoded complicated licensing agreements, especially if, when transmitting, it effectively rewrote the license for the recipient.

This could be guaranteed by laws rather than tech. Laws don't do much to protect big companies from the little people. That's why DRM is currently about technical solutions. But laws can have a huge impact on protecting little people from big companies. That's why big companies lobby!

I don't doubt that such 'license encoding DRM' as I envision will be abused by media publishers by linking it to cryptographic protection. But the underlying idea is pretty nice.


This is a very thoughtful take! I agree that an encoding of intent for enforcement by law rather than code would be a useful solution.

But let's come up with a different name for that than DRM :)


Why not? These are separate concepts.


It's a good question, so let me try to explain why this is dissonant to me, if I can.

I think maybe the easiest way to illustrate it is through the person of Cory Doctorow. I don't know for sure because I haven't been following him for a while, but I suspect he is a fan of the fediverse concept. He and the kinds of people that he influenced (like me) strike me as the kind of people for whom open federated social networks are appealing and satisfying as a solution in contrast to closed centralized corporate social networks.

But for a big part of my time following him, and just generally being in that milieu, one of the primary boogeymen for us was the DRM being pushed by corporate interests.

So to me, DRM seems anathema to this entire aesthetic of federation.


Licensing != DRM though ... ?

You can have a very strict license without actually using any cryptographic 'protection' on the content.

See, for example, the way IBM used to release software in the 00s (still does?). There was no copy protection, no license servers, no keys. But copyright was absolutely retained and you couldn't legally just give it to anyone.

If you were found to be running their stuff without the appropriate commercial license, their sales department would be along very shortly to work out a nice cost-effective plan for you, and failing that the lawyers would be involved.

Closed, licensed, but it's not DRM.


You could also look at the FOSS world. Licenses are actually important because if you license some software under GPL, you have legal grounds to sue if a corporation then uses that software in violation of the license. Most corporations just treat GPL as anathema because it's rarely worth the legal effort.

So far we haven't thought of social media content as licensed but it effectively is: the license is usually the EULA of the host giving them the ability to do whatever they want with it.

So to be responsible with social media content, we really should both get a license from our users and then provide a compatible license to federating peers. At that point they can decide not to accept the license, thereby refusing to federate that content. There are already filters like this for filtering certain kinds of content from peers, so it doesn't seem like it should be a huge reach.

Creative Commons might be sufficient here. If you license content as non-commercial, then would Facebook be barred from choosing to display it to the users of their commercial platform?
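
As a thought experiment (my own sketch, not an existing Mastodon feature; the per-post license field and the per-domain "commercial" flag are both assumptions), an outbound filter keyed on license could look a lot like today's per-domain content filters:

    # Hypothetical outbound filter: don't deliver NC-licensed posts to peers
    # flagged as commercial. Nothing here exists in ActivityPub/Mastodon today.
    COMMERCIAL_PEERS = {"threads.net"}
    NON_COMMERCIAL = "https://creativecommons.org/licenses/by-nc/4.0/"

    def should_deliver(post: dict, peer_domain: str) -> bool:
        if post.get("license") == NON_COMMERCIAL and peer_domain in COMMERCIAL_PEERS:
            return False  # the peer would need a separate commercial license
        return True

Whether a CC BY-NC license would actually bar an ad-supported platform from displaying the post is a legal question, though, not a technical one.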


It's important to note that Stallman considers the GPL a hack on the copyright system (a clever one, sure, but a hack nonetheless). He'd rather it not have to exist, because we shouldn't have to live in a world where companies can own the recipe to mix bytes around and achieve a certain software flavor.

I mean, you can't copyright a food recipe. It would so obviously be stupid to have designer meals and chefs who are the only people allowed to cook a certain dish. The argument is that software isn't any different. And you can't copyright the fastest route from point A to B on a map. So why can you, e.g., copyright the preferred method of sending bytes down an HDMI cable?


I'm sympathetic to that perspective, but that would entail pushing an ideal upstream to where change isn't going to happen because there's far too much financial interest in copyrighting arrangements of bytes. So if the system's unfixable, then hacking it seems like the right approach.

Besides which, ignoring GPL, Creative Commons seems like a good license because creative works are more than logical arrangements of bytes (there are only so many useful ways of sending bytes down an HDMI cable, but many more ways to write a novel) and is probably the more appropriate license for social media content (at least assuming the author wants to apply that license to their content).


Yeah fair. My reading was that the intention is a technical solution to lock down public data. I do think a legal solution makes more sense.


This is the classic problem of assuming that people who share a few opinions with you share all your other opinions as well.


Yes, totally. But that is common because it is useful for understanding how aggregates of people behave. It's an estimation heuristic. It's bad to get caught thinking estimates are truly reality, but it's not bad to try to estimate things.

I do think there is a throughline between the ... techno-hippies? of my youth for whom DRM was a big concern, and those of today, for whom corporate concentration of social media is a big concern. But I know that's just a heuristic. Nevertheless, I think it's interesting to me that there is now some subset of the newer group who is arguing for something that seems very analogous to DRM for social media content.

I think you're probably right that it's totally different people, not the same people being inconsistent.


Because people who create decentralized networks usually also subscribe to the concept of free flowing information, hence the decentralized network.

One of their issues with a centralized service is control. Their issue with copyrighted material is also control.

Of course not everyone holds the same beliefs


Hey well put! Your answer is both more succinct and better than mine :)


I'm such a negative person I can't tell if you are being sarcastic


Ha, sorry about that, not sarcastic at all.


Thanks for putting it succinctly.


> If you want to place downstream restrictions on your content then you have to license it (and get someone to agree to your license)

That's not how copyright works... at all. HN needs a license to display your comment right now (see here: https://www.ycombinator.com/legal/#tou). Copyright defaults (in the US) to the most restrictive terms. The first sale doctrine applies, but online when you distribute something you're not making just one copy.


I understand how copyright works. I'm arguing it's silly to try and apply it to a public forum in the way it's implemented. And all the centralized social platforms agree, as you've pointed out, because they require users to forfeit their copyright to posts made on their platform, or at least force them to license the content for redistribution on/within their network.

Like, by quoting me on HN you're actually violating the content license and terms of use agreement (you're only allowed to use your "user content" not mine). You'd argue fair use, and you'd probably win. But nevertheless you'd have to argue it.

I think we're in the weeds with copyright. What's the problem we're trying to solve? A federated public communication protocol/system. What do centralized platforms do in the face of copyright? Require that users essentially forfeit their copyright over the content they post (if they're nice, limited to the scope of the content traversing their network). Why? Because it doesn't work any other way. What is the expectation I have when reading a public post? I expect I am allowed to read it, quote it, remix it, etc. because it's been posted publicly. Nobody asks permission to quote each other even though they technically should in accordance with US copyright law. How would a discussion transpire without a shared understanding of how the content is allowed to be used on any network?

---

And I mean if you really want to get into the weeds, copyright prevents you from e.g. playing a digital music file on your computer since the bytes need to be copied over the network, into memory, onto output devices, etc. This was the original observation: copyright basically prevents you from doing anything digital with any digital media. That's why everyone needs a license and we're in license hell. In digital it's copying all the way down so hit it with the copyright hammer.

Imagine if we applied our digital understanding of copyright to traditional artwork. I have a painting. You visit. By shining light on the painting I'm reproducing the image on your retinas. You say copyright violation? I say preposterous! What if I take a picture of the painting that I own and show you? Uh oh.

Copyright just doesn't make sense when applied to a public social network. At all. And it doesn't make any sense because a social network isn't about artists trying to make money off their works, it's about anons trying to win internet points for the day.


> I have a painting. You visit. By shining light on the painting I'm reproducing the image on your retinas. You say copyright violation? I say preposterous! What if I take a picture of the painting that I own and show you? Uh oh.

Those are all fair use. The definition of fair use is intentionally somewhat vague, leaving a lot of room for courts to exercise discretion, but showing an artwork to a guest and taking a photo of a copyrighted work for noncommercial purposes are clearly protected by precedent.

Two of the key factors a court will look at in deciding whether a use is fair are the use's purpose and character and what effect (if any) the use will have on the market for the used work[0]. Threads is an ad-supported, commercial endeavor, so I don't foresee them winning on a fair use argument when they republish a post from another site in order to sell ads next to it. If you have your own Mastodon instance and you sell a single ad on it, then Threads would, by republishing your post, be depriving you of revenue for your creative work in order to increase the value of their own ad real estate via exhibition of your copyrighted content.

[0]: https://www.justia.com/intellectual-property/copyright/photo...


I feel like you're being needlessly obtuse when the other person is being objectively correct.


I'm not disagreeing with them. I'm just a little confused why people are bringing up that you don't need to explicitly attach a notice to content you create to retain the copyright (since 1988 in the US). Of course that's objectively correct. I never argued otherwise. And to add, I pointed out that by the letter of the law, quoting my post on HN is a violation because I never granted you a license to do that (you'd have to argue fair use before a court if I sued you). I'm not disagreeing I'm just continuing the discussion.

My argument is that it's not implicitly fair use for a social network to copy user content to which they don't have the copyright and distribute it across a network. And that that's crazy, because a (distributed, no less) social network doesn't work if there isn't a shared understanding that when users post publicly they're granting the public a license to copy, re-share, remix, etc. their content. Which is de facto releasing it to the public domain.


> but you can sell the original copy, show it to whomever you want, etc

The license associated with physical media does have restrictions. You can't mount a public display of the work without acquiring a separate license. E.g., you can't buy a DVD of a movie and then screen it for a group (outside of what's permitted under fair use). If you sell tickets to that screening or intersperse ads, thereby infringing copyright for commercial gain, you're in even hotter water.


There's no license attached to physical media, it's all statutory.


> Saying "only on Tuesdays and not in Brazil" gives too much control to creators to dictate how their art should be interpreted.

Do you feel the same way about restrictive open source software licenses like GPL3?


Yes all software should just be public domain. But because we have icky copyright, we have to have icky software licenses.


While your view here is perfectly valid, it would have resulted in a -very- different world than the one we have today.

There would basically be no software industry. There couldn't be because there would be no way to economically do it.

Software would thus only be made by hardware manufacturers- and would need specialised hardware for every task.

For evidence of this alternate reality I point to an era when this was the case. In the 60s and 70s there were a number of hardware companies who all wrote the software for their specific machines. You paid for the hardware, not the software.

This model flowed into the game consoles of the 70s (Atari, Intellivision and so on). PCs started with this model too, although software-only plays started appearing. They made money selling games, VisiCalc, etc. Microsoft famously licensed DOS to IBM, but non-exclusively. Suddenly hardware was a commodity and clones appeared.

All the big companies today are software companies. The cloud is proprietary software running on commodity hardware. The ability to grow, to make big bets, to develop is predicated on the existence of copyright.

But wait, Linux right? That surely negates my argument. Leaving aside the distinction that it's copyrighted, not public domain, Linux is a clone of a commercial system. That inspiration came from somewhere. It exists because Linus had a general-purpose computer, which arguably only existed because of copyrighted software.

Aside from all that, original Linux was pretty primitive. It grew in part on revenues supplied by other software companies. While it would be fun to imagine a world with only Linux, it's harder to imagine how that world is funded.

SQLite is in the public domain. It's an example of what can be done. It's an amazing, fantastic bit of software. For all that, it's built on the proprietary work of others (SQL). And it's far less capable than proprietary database engines of the 90s (Oracle, Informix, et al.).

So while I understand your viewpoint, it's also worth noting that copyright built the world we are in now, and the alternative would look very, very different.


If you don't believe in isolated software patents, then a world where you have to actually make a hardware system to patent your invention as hardware with supporting software doesn't seem so bad. At this point the iPhone patent would be just about to expire and we'd all be able to legally clone, sell, remix, improve, etc. the system Apple spent the last 14 years profiting off of, royalty-free no less.

I'm not so sure the world today where we don't own our iPhones because Apple retains an exclusive license to the software deployed on the hardware is really a good thing for the world. And I'm 100% sure society would have figured out how to make the iPhone in a world where all software was public domain, regardless.

As an example, we see more and more open source SaaS companies as time progresses. This directly validates the model that you can profit off of a well run service provider business on top of software which you allow any user to copy for free.


Apple’s patents aren’t the issue. The world can’t just figure out how to manufacture hardware at scale. The market is littered with failed hardware startups, and even companies like Sony and Microsoft can’t figure out how to manufacture in three years the number of consoles to match the number of phones Apple sells in less than a quarter.

Literally dozens of Android manufacturers can’t figure out how to compete with Apple at scale. Android is already open source


If you could run iOS on other hardware, you'd get an iPhone for $300. That would be competitive.


Why do you think running iOS on cheap commodity hardware would be any better than Android?

Apple designs its hardware and software in concert.


I have no idea, but at least we'd be able to try it out and see. Plus the iMessage blue bubble thing would be dead and that's a big part of Apple's social platform lock in today.


Is Facebook supposed to scan all of Twitter every time you post something to check that it's not violating the copyright of someone there? To know every paragraph of every single book? Or scan the entire internet, for that matter? That is not "lazy", that is just a ridiculous expectation; no (legal) social network would survive if that were necessary to operate.

Not to mention the millions of derivative works where the definition gets fuzzy: I applied a filter to someone else's photo and put an emoji over it, is it still the same work? Should FB's algorithm be smart enough to classify it as a copyright violation? It would be unsustainable economically to run that algorithm against the millions of posts, and even if they told the user "stop! that is copyrighted!" a huge chunk of users would just edit their post a bit and try again.


That's the point, they're not. Copyright is a dumb concept when applied to social network posts (let alone the internet) and we shouldn't be wanting to emulate it on a decentralized social network.


>Yeah I stopped reading the article once it mentioned people getting annoyed and wanting to control who sees their public content. You don't get to make that call.

A fair point, and one that's not really new either.

Don't want something to be public? Don't post it publicly. As I recall, when my employer at the time (mid 1990s) "federated" their email with the rest of the world, they sent out a memo which stated, in part, "don't put anything in an email that you wouldn't want to see on the front page of your local newspaper."

That was back when local newspapers were a thing, but I imagine you can see the parallels.

That said, I do get to control who sees my content -- by not making it public (i.e., I don't federate my AP instances and curate who can create accounts on them).

If you want something to be private, don't post it publicly. I'm not sure why that's such a foreign idea to some folks since, as I mentioned, it's not even close to being a new idea.


The article explicitly says that talking only about public posts ignores other privacy risks, and has an example related to a followers-only post. Knowing that, do you really think his point is fair?


>The article explicitly says that talking only about public posts ignores other privacy risks, and has an example related to a followers-only post. Knowing that, do you really think his point is fair?

I was responding to the idea that if you post something publicly, whether that's on a Mastodon instance, a bulletin board at the library, in a Craigslist ad or any other public forum, it's unreasonable to attempt to limit access to the information contained therein. That's intuitively obvious, no?

If you only want specific people to see that information, only give it to those folks. And if they're trustworthy, they won't share it with others but that's not guaranteed.

How does the old saw go? "Three can keep a secret, if two are dead."

Don't want something to be public? Don't post it in a public forum. Full stop.

I am unfamiliar with the example you mention and its potential privacy impacts. But even if (and maybe especially if) there are the risks/issues you allude to, that doesn't obviate (and perhaps even strengthens) my point.


I wasn't objecting to your point so much as you calling their summary a fair point. They stopped reading the article before they got to the example of non-public information in it, so mischaracterized it, which doesn't seem fair to me.

Agreed that if something's available unencrypted on the web with no login required then usually the only protections you can put on it are security-through-obscurity like hard-to-guess links that don't show up on profiles (YouTube's "unlisted videos") or advisory like "noindex". But, although it's not something I talked about in this article, there are design choices. Mastodon (etc) could evolve so that a lot of what's currently "public" isn't available on the web with no login required.


I think I see the disconnect here.

I didn't really pay attention to your username and am seeing now that not only are you the submitter (to HN), but also the author of the submitted post.

As I mentioned, I haven't read your piece so I won't (as I can't knowledgeably do so) comment on the validity of your complaint about GP's point.

I don't have any issue with you, your blog or anything surrounding those things.

In fact, I'm sure you make a bunch of valid, interesting and insightful points in your post.

That said, while you're responding to me as if I had read, digested and considered all the points you make in that post, that's not the case.

My comment was completely unrelated (except as it elicited a response from GP) to your blog post and addressed a completely different set of issues.

N.B., I actually did read your blog post, but didn't realize it was the submission for this discussion thread. I agree with much of what you wrote, especially as it relates to the greedy scum who run Meta and its ilk, as well as the very real risks to a wide swathe of folks who are disfavored by some (many?) in our societies. Especially when those greedy scum "out" folks to add to their sewer of filthy lucre, which is then seized upon by the intolerant jerks to harass innocent people.

All that said, I stand by my initial point: Don't want something read by the "public"? Don't post it in a public forum. Full stop.

All the rest is orthogonal to the argument I made in the comments to which you replied.


>I wasn't objecting to your point so much as you calling their summary a fair point. They stopped reading the article before they got to the example of non-public information in it, so mischaracterized it, which doesn't seem fair to me.

I'm confused by what you wrote. Are you claiming that people who post information in a public forum should be able to dictate who gets to see that information?

I didn't read TFA, so I have no idea what it says.

I specifically responded to what GP wrote, and only that. Full stop.

As to the rest of your comment, I don't see how that's relevant to public postings in public forums.

As in this very comment. It's on a publicly available website and is indexed by the major search engines. I have no expectation that anything I write here is "private," nor do I believe I should be able to dictate who is allowed to read this comment.

Contrast that with my Matrix server. Access to what I post there is limited to those who are invited to the specific "room" I post in. And I limit access to the site to those I wish to share such information with. What's more, I don't allow web crawlers to access that information.

Public posts are, well, public and can be viewed by anyone. Private posts are not. That some folks can't keep that straight astounds me.

As for any protocol/implementation shortcomings of Mastodon and/or ActivityPub, that's a different discussion. And one that I'm not engaging in.

As such, I think we're talking past each other. I hope I've clarified myself enough that you'll stop making assumptions that what I'm talking about is related to your (which may well be completely valid and deserving of discussion -- just not with me, at least not in this thread) concerns.

Edit: Fixed prose and typo.


Controlling who can see your content is actually a very reasonable feature; some things are not meant for everybody, but just for the people who actually know you. But it's not a use case that ActivityPub was designed for. Google+ had a really nice feature where you could easily control who could see your post. Diaspora has something similar, but considering it's federated, I'm not sure if you can really guarantee it.

I think the only way to guarantee this control in a federated system, is to encrypt everything that's not completely public. If everybody has a public key, you can use that to encrypt the secret key. It's a hassle, but I think this would work.
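
For what it's worth, here's a minimal sketch of that hybrid approach: encrypt the post once with a symmetric content key, then encrypt that key to each follower's public key. The specific choice of RSA-OAEP plus AES-GCM via the Python cryptography package is just an illustration of mine; a real fediverse scheme would also need key discovery, rotation, revocation, and so on:

    # Sketch of "encrypt the post once, wrap the content key per follower".
    # Library and algorithm choices are assumptions, not anything
    # ActivityPub or Mastodon does today.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_post(text: bytes, follower_pubkeys: dict) -> dict:
        content_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(content_key).encrypt(nonce, text, None)
        # One wrapped copy of the content key per allowed follower.
        wrapped = {
            name: pub.encrypt(
                content_key,
                padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                             algorithm=hashes.SHA256(), label=None))
            for name, pub in follower_pubkeys.items()
        }
        return {"nonce": nonce, "ciphertext": ciphertext, "keys": wrapped}

    # Example: only "alice" can decrypt; relaying servers see only ciphertext.
    alice_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    post = encrypt_post(b"followers-only post", {"alice": alice_priv.public_key()})

That's the hassle: per-recipient key wrapping and key management, and of course any recipient can still decrypt and repost.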


The funniest thing about people wanting this kind of transitive control is that the only technical way to achieve it is through a walled-garden model like ... Facebook. Indeed, Meta/FB is eager to train people to want this model 'cause they know you need at least one referee controlling everything if you have this transitive model.

> Diaspora has something similar, but considering it's federated, I'm not sure if you can really guarantee it.

Exactly, this level of access requires a hegemon. Who will be the hegemon?


Not necessarily. You can do it through encryption. But that creates overhead.


This still doesn't make sense though: the reality is that your content will be private right up till someone reposts it publicly. Or saves the image, recompresses, posts it to reddit, reddit posts it to ActivityPub etc. etc.


Of course. On Google+, you got a warning if you reshared something that wasn't shared publicly, but it was possible. It's also a matter of trusting the people you share it with.

In the end, every private, highly encrypted message can always be made public by the recipient. That doesn't mean encryption is worthless for privacy.


Right, but even then the basis of protection was still gated on punishment meted out by a higher power. That literally doesn't exist in a decentralized model - not only can anyone set up a new instance, instances which are blacklisted can also simply rebrand.

Beyond the tightest group of confidants who you can actually trust, your content can and will leak if you put it online - and the probability of this goes up as the legal entities involved become smaller and have less and less to lose.


It's entertaining from an outsider's perspective. On the one hand it's public content for everyone. On the other hand they desire the public content to be restricted, making it by definition not public anymore, i.e., private.


Of course I can say who can't use my work. It's my creation, I get to make the rules.

If I explicitly deny Meta to use my work, then that is my wish and I can sue them.


> It's my creation, I get to make the rules.

As has been explained multiple times, that's not how it works. If you sell me a copy of your work, you do not have any authority after that to dictate how I use your work.


There's a portion of the far left and the up-and-coming gen-zalpha that takes on a dictatorial "censorship is okay for things we dislike" attitude. They don't realize that in the 90's and 00's free speech was a place of refuge for liberals to escape from evangelical attacks on everything from LGBT rights to Pokemon.

They also don't appreciate that the "come one and all" nature of the internet back then led to many people crossing the fences and experiencing viewpoints they'd never seen or heard before. This is an atmosphere we desperately need to return to.

Present day censorship, "gotcha" moderation, and algorithmic manipulation of emotion have led to hyper polarization. We should 1) deescalate the intrusion of these systems and remove them from our day-to-day experience and 2) reinforce the fundamental rights we all deserve.

Social media networks with over 100,000,000 daily active users should not be considered as "private companies with a right to free speech through censorship". They are effectively public squares that we have all elected and chosen to share. Right and left alike.

Public companies tend to censor to protect profits, but small individuals (such as Reddit moderators and Fediverse instance maintainers) do it from either a position of laziness or political retribution. The latter is a form of disgust and hatred for fellow humans and should be called out as such, even if the other party is guilty of the same.

I've seen the free speech argument twisted into "right wing figures trying to force their views into everyone's feed", but that need not be the case. There are tools for individuals to block. And if we'd finally divorce ourselves from platforms and federation and escape to true p2p social networking, we'd all have maximum individual control: we could institute any blocking, boosting, ingestion, sharing, and ranking criteria we wanted. Many amongst the left obsess over what the right is doing (and vice versa), which tells me people enjoy rubbernecking rather than tuning out. It's a game of "neener-neener" high school football rivalry.

But back to the core point - you shouldn't get to choose who people talk to if you're not a first party in that conversation. You shouldn't get to choose who can publish openly or who can read public broadcasts. If you want to keep your words private, share them in private. Your choices should be limited to blocking what you personally dislike at your own consumption level, and it should be that way for everyone. Because that's fair.

The pendulums of politics will swing. One day liberals will need the free speech refuge again. Preserve it now even if you want to get rid of it. Question yourself if you find yourself wanting to mute or persecute others. If you're angry with my words right now, please ask yourself why you want the other party to shut up.

I want to emphasize that I do not agree with the far right. But I will fight with my last breath to preserve the right to free speech for us all. If we lose it, we will slide into tyrannical oppression from those in power.

I wish we could all just get along. I know that's not going to happen in my lifetime, but we should make best attempts at deescalation and maintaining open communication with one another. Conversation can be a bridge.


> Social media networks with over 100,000,000 daily active users should not be considered as "private companies with a right to free speech through censorship". They are effectively public squares that we have all elected and chosen to share. Right and left alike.

Shouldn't this also apply to TV channels? Chat apps like iMessage? Popular newspapers, blogs, and email newsletters? And indeed, why stop at 100M DAUs - why not 10M, or 1M? The problem I expect is this path leads to the death of freedom of the press.


It's not 100M readers that makes it a public square. It's 100M writers.

Like a shopping mall, the classic US example of a privately owned public square


To be fair though, freedom of the press should be limited to things that at the time of publication/broadcast were believed to be true by reasonable actors.

They shouldn't get carte blanche to claim wild conspiracy theories and to turn people into figurative monsters and then claim 1st amendment rights to protect them from the consequences of their public agitating slander.

On the other hand, reviewing and moderating anything other than their current carte blanche status quo would be an inhuman ordeal, so it's not like I have a more perfect idea to replace it with, just an understanding that there are bad actors using the rules of our American Social Contract as a weapon against the fundamental rights of "life, liberty, and the pursuit of happiness" that the self-same social contract they are exploiting has promised us for our abidance.


I understand where you're coming from, but I'm not sure it matters. If you don't like the bad actors spreading "disinformation" then speak louder. You're 100% right that it's untenable to moderate "the news". We have laws preventing bad faith actors from spreading demonstrably false information that damages somebody. Anything else is just... insignificant to your life, liberty, or pursuit of happiness unless you aren't mature enough to ignore opinions that make you unhappy.


> Social media networks with over 100,000,000 daily active users should not be considered as "private companies with a right to free speech through censorship". They are effectively public squares that we have all elected and chosen to share. Right and left alike.

I live in a country with less than that number of people in total.

Which country gets to treat the corporation as if the corporation was an arm of the government? Can't really be the servant of two masters.

And if the advertisers don't like what they see, should the government(s) subsidise the sites to make up the difference? That kinda already happens a bit with TV licences in some countries.


> algorithmic manipulation of emotion have led to hyper polarization.

I don't think algorithmic manipulation is really that big of a problem. Look at HN during the Reddit blackout. Tens of overemotional posts, lots of low quality pithy responses, and not an algorithm in sight. Flame wars are at least as old as Usenet itself. I think the truth is humans online, without the context of body language and the consequences of long term relationships, are just emotionally attracted to rage and strong emotions. Algorithms may exacerbate the problem, but plain old upvote behavior is plenty to surface it.


Hacker News is an algorithmically influenced discussion board; the comment voting system shared by both HN and Reddit influences the way people conduct themselves, and not always for the better.


Usenet had no upvotes and still had plenty of flamethreads.


That doesn't mean that voting doesn't make the problem worse. And before Eternal September, the internet was nerds, the "unpopular kids". When all the normies showed up, the incentives shifted.


> low quality pithy responses

pithy means the opposite of low quality: concise and meaningful

https://en.wiktionary.org/wiki/pithy


Liberals never actually left the free speech refuge - they just wanted to keep the far right out of it. There's a logic to this; you don't want people advocating for mass censorship to be able to use free speech as a weapon to cut your head off. In other words, it's OK to censor Hitler[0]. Not everyone will agree, I suspect you don't. I put that disagreement in the same category as, say, the BSD vs. GPL arguments back in the day.

The problem with true P2P social networking is that I don't want to have to make individual moderation decisions based on each identity that happens to try and send me a message. That is makework best delegated to someone at least marginally trustworthy so that I can spend my time actually using the network as intended. Moderation is a necessary precondition to any kind of online forum actually, y'know, working as a forum.

In order to actually have effective moderation, you need scarcity of identity. In a completely P2P network, you can just make new identities to spam people with, so blocking them has no effect. Centralized platforms have a certain magic to them in that they can make identities cost money without actually charging people for them. That's why everything wants a phone number now - because phone numbers cost money, and they can limit sign-ups to a few per number per year. The closest Mastodon has to this is domain names, which is why defederation is the first thing instance operators run to when dealing with rule breakers. It forces you to at least burn a domain name to continue harassing someone.

> Many amongst the left obsess over what the right is doing (and vice versa), which tells me people enjoy rubbernecking rather than tuning out. It's a game of "neener-neener" high school football rivalry.

No, this is just Twitter. Twitter is structured to encourage clout-chasing and toxic behavior.

[0] Or spammers. Actually I don't get why people who call for extreme free speech aren't also calling for getting rid of spam filters. Spam filters are censorship, but it's censorship we tolerate in exchange for a functional e-mail system.


> Liberals never actually left the free speech refuge - they just wanted to keep the far right out of it. There's a logic to this; you don't want people advocating for mass censorship to be able to use free speech as a weapon to cut your head off.

The political censorship on current social media goes far beyond “we don't want people advocating for mass censorship here”. Ironically, they do like advocating for mass censorship, as long as it's censorship of the “correct” side.


If you mean Twitter specifically they jumped from "we have a bunch of rules that accidentally censor right-wingers more because they break the rules more" to "we intentionally censor left-wingers for the sake of balance" the moment Musk jumped in.

I'm not entirely sure how that refutes my point, though - the liberals the grandparent post was talking about aren't the same class of people who actually have moderation privileges on Twitter. Their crimes here are... cheering when Twitter did a thing that might technically go outside one's core values?

The biggest thing I can fault liberals for is adopting the Comcast argument: i.e. that Twitter is allowed to censor because it's a private platform and only government action carries the Evil Bit that makes content removal into censorship. This was always a losing argument, and we knew it was a losing argument because we'd already shat on Comcast for trying to do the same thing back when Net Neutrality was still a thing. But it's less "leaving the refuge of free speech" and more "adopting obnoxiously technical arguments that only Daddy SCOTUS would love right before Daddy SCOTUS decides to start tearing down Roe v. Wade."


> we have a bunch of rules that accidentally censor right-wingers more because they break the rules more

“Accidentally”? When people got banned for hating on black people, but hating on white people was fine, for example?


see also 'Learn to code'

It was fine for journalists and others to say to blue collar workers, coal miners, truck drivers (ostensibly politically right wing).

The moment that it started to be 'weaponized' by the right, it became hate speech. https://en.m.wikipedia.org/wiki/Learn_to_Code


You're right about one thing; those who are liberals today never abandoned free speech. Many progressives did, but in so doing, they ceased to be liberal.


> Social media networks with over 100,000,000 daily active users should not be considered as "private companies"

Your solution to private companies not broadcasting what you want is for the government to take the companies over and then dictate what they broadcast? Then what? Anything goes? There are plenty of sites that show you what happens when there is no form of moderation at all and they turn into complete cesspools.


> There are plenty of sites that show you what happens when there is no form of moderation at all and they turn into complete cesspools.

Well, that's precisely because there's only a few of them. If the only site with free speech is some obscure forum, then all people with extremist opinions are going to concentrate there. If all popular social networks have free speech, extremist content is going to be diluted in normal content.


What are you basing that on? The word extremist is there for a reason: people spam and try to overwhelm any site without some bare minimum of moderation. Even sites that say they are going to be all about "free speech" end up immediately having to come up with a plan to weed out all sorts of horrible stuff. Voat, 4chan, 8chan, Parler, Zero Hedge, Slashdot with the filter score on minimum... How many examples do you need that are the opposite of your prediction?

Extremists don't want discussion, they want to blast something outrageous to as many people as possible, it is directly opposed to what you think will happen.


> Many amongst the left obsess over what the right is doing (and vice versa), which tells me people enjoy rubbernecking rather than tuning out. It's a game of "neener-neener" high school football rivalry.

You mean like rolling back Roe v. Wade?[1]

You mean like trying to overthrow an election?[2]

Denying access to gender-affirming care for adults?[3]

What exactly do you think people are being unfairly obsessed over? Because there's a reason people follow politics: it's because politics, and (since we're in a democracy) public opinion and discourse, have direct, kinetic effects on people's lives.

This is a real cheap opinion to have provided you're never weighed down with having to engage with the content and stick to vague insinuation. The fiction of "both sides".

[1] https://www.npr.org/2022/06/24/1102305878/supreme-court-abor...

[2] https://en.wikipedia.org/wiki/January_6_United_States_Capito...

[3] https://www.washingtonpost.com/nation/2023/02/28/anti-trans-...


While I oppose these things, I think people who agree with them should be allowed to express their opinion. Maybe if we allow them to explain why they support these things, we can better understand how to come to a mutual agreement.


The quote I was responding to in the top:

> Many amongst the left obsess over what the right is doing (and vice versa), which tells me people enjoy rubbernecking rather than tuning out. It's a game of "neener-neener" high school football rivalry.

This is a complete failure to understand that politics is real and has real consequences for real people. If you think that "actually none of this matters, why can't we all just get along?" then your world view and privilege are simply such that you have the leisure of ignoring it.


My wife is trans. By turning this into a game of sports and one-upsmanship, you're hurting us.

Listen to the left and to the right. Talk with them. Engage them. Humanize them.

Do this the Dr. King way, not the Malcolm X way. You won't win hearts and minds if you're baring your teeth all the time. People in the middle don't want to deal with that. People that could be convinced are instead turned away.

Maybe you grew up in a completely liberal city and don't realize that the opposition have lives filled with joy, happiness, pain, trauma, and feelings too. Maybe you didn't go to school with them and don't work with them. Maybe you're not in a diverse, politically purple city where you can see that we're all in this together.

By being so offended and not turning the other cheek, you're letting the bullies know that they hurt you. You're giving them all the more endurance to continue. When social media and the news media obsess over what Trump is doing, they give him power. It's the same thing.

I'm not saying don't campaign, don't vote, and don't be strong. Those things are essential. Policies of tolerance and equity can win this faster than division and hate. But don't turn the other side into monsters so that you can feel good about yourself - you're just giving them fuel to keep fighting this decades-long guerilla war and making us all miserable for longer.

You gain nothing in vilifying.

Friendliness, strength, and pragmatism.


> Do this the Dr. King way,

Martin Luther King had this to say about "the white moderate" [1]:

"First, I must confess that over the last few years I have been gravely disappointed with the white moderate. I have almost reached the regrettable conclusion that the Negro's great stumbling block in the stride toward freedom is not the White Citizen's Council-er or the Ku Klux Klanner, but the white moderate who is more devoted to "order" than to justice; who prefers a negative peace which is the absence of tension to a positive peace which is the presence of justice; who constantly says "I agree with you in the goal you seek, but I can't agree with your methods of direct action;" who paternalistically feels he can set the timetable for another man's freedom; who lives by the myth of time and who constantly advises the Negro to wait until a "more convenient season."

Shallow understanding from people of goodwill is more frustrating than absolute misunderstanding from people of ill will. Lukewarm acceptance is much more bewildering than outright rejection."

[1] http://www.hartford-hwp.com/archives/45a/060.html


Understand the difference between respecting your enemy and not doing anything to address problems. These are orthogonal.

Setting up censorship traps is slapping your enemy in their face.

You should read his "Loving Your Enemies" sermon [1]:

> Now let me hasten to say that Jesus was very serious when he gave this command; he wasn’t playing. He realized that it’s hard to love your enemies. He realized that it’s difficult to love those persons who seek to defeat you, those persons who say evil things about you. He realized that it was painfully hard, pressingly hard. But he wasn’t playing. And we cannot dismiss this passage as just another example of Oriental hyperbole, just a sort of exaggeration to get over the point. This is a basic philosophy of all that we hear coming from the lips of our Master. Because Jesus wasn’t playing; because he was serious. We have the Christian and moral responsibility to seek to discover the meaning of these words, and to discover how we can live out this command, and why we should live by this command.

[...]

> Another way that you love your enemy is this: When the opportunity presents itself for you to defeat your enemy, that is the time which you must not do it. There will come a time, in many instances, when the person who hates you most, the person who has misused you most, the person who has gossiped about you most, the person who has spread false rumors about you most, there will come a time when you will have an opportunity to defeat that person. It might be in terms of a recommendation for a job; it might be in terms of helping that person to make some move in life. That’s the time you must do it. That is the meaning of love. In the final analysis, love is not this sentimental something that we talk about. It’s not merely an emotional something. Love is creative, understanding goodwill for all men. It is the refusal to defeat any individual. When you rise to the level of love, of its great beauty and power, you seek only to defeat evil systems. Individuals who happen to be caught up in that system, you love, but you seek to defeat the system.

[...]

> Now there is a final reason I think that Jesus says, “Love your enemies.” It is this: that love has within it a redemptive power. And there is a power there that eventually transforms individuals. That’s why Jesus says, “Love your enemies.” Because if you hate your enemies, you have no way to redeem and to transform your enemies. But if you love your enemies, you will discover that at the very root of love is the power of redemption. You just keep loving people and keep loving them, even though they’re mistreating you. Here’s the person who is a neighbor, and this person is doing something wrong to you and all of that. Just keep being friendly to that person. Keep loving them. Don’t do anything to embarrass them. Just keep loving them, and they can’t stand it too long. Oh, they react in many ways in the beginning. They react with bitterness because they’re mad because you love them like that. They react with guilt feelings, and sometimes they’ll hate you a little more at that transition period, but just keep loving them. And by the power of your love they will break down under the load. That’s love, you see. It is redemptive, and this is why Jesus says love. There’s something about love that builds up and is creative. There is something about hate that tears down and is destructive. “love your enemies.”

[...]

> Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.

I regularly drive over a bridge with those words printed on it [2]. They're more powerful than hating your enemy.

[1] https://kinginstitute.stanford.edu/king-papers/documents/lov...

[2] https://www.google.com/maps/@33.7608328,-84.3678862,3a,75y,2...


>Do this the Dr. King way, not the Malcolm X way. You won't win hearts and minds if you're baring your teeth all the time.

You don't understand MLK if you think he was less radical than Malcolm X[0]. He absolutely did not engage with and humanize his oppressors. He didn't patiently and calmly listen to what racists had to say and try to find common ground and a way to tolerate them. Read his "Letter from a Birmingham Jail"[1]. His tolerance for white America was limited to those who were willing to do the work of fighting white supremacy. He had no love at all for the centrists of his day who preached what you're preaching here, or the "colorblind" politics that people have twisted out of his "I have a dream" speech.

You come off like the guy who'd be in 1930s Germany telling the Jews they shouldn't be so intolerant, that maybe the Nazis have some fair points to make and everyone should just hear them out, share a beer and a laugh, and surely everything will be fine if only they realized Nazis were people too. And then point out your wife is Jewish, like somehow that gets you a pass.

Your views are dangerously, almost maliciously naive.

[0]https://www.aljazeera.com/features/2021/1/18/martin-luther-k...

[1]https://www.africa.upenn.edu/Articles_Gen/Letter_Birmingham....


At this point I have come to the conclusion that you can find a King quote to support whatever argument you're trying to make. I'm not so ignorant as to subscribe to the rose-tinted Martin vs. Malcolm dichotomy. I understand they were contemporaries. I've read LfaBJ. Which is why I think it's a steaming pile to try and argue "Martin was just as radical as Malcolm".

Martin was a human being. He was frustrated with the speed of progress. He did not preach violence. And he rightfully criticized people who wouldn't lift a finger in support of his peoples' rights to share in the same liberty under the law as everyone else. That's justice.

There is a huge difference between allowing immoral, unjust, racist oppression to exist because you're unwilling to stand up and say "that's wrong", and demonizing your fellow countrymen because our scientific understanding of when a human life begins is ever-evolving, or because of differing views on whether tax money should be used to subsidize elective cosmetic surgery. (I'm not arguing one way or another, I'm just stating these examples rhetorically since they're what started this discussion.)

Point being, there's intolerable immoral oppression, and there's acceptable functional political "oppression" (speed limits are oppressive, for instance, but we agree they're generally a useful oppression, so we tolerate the oppression). Racism is the first category. The first world problems of today (save abortion, that one's fuzzy and complicated) fall squarely into the second. And regardless of which category of oppression you're facing, there's still wisdom in understanding that sharpening your edge will only cause the other side to follow and is rarely the way to change minds.


It's also worth noting there also definitely was a group in Nazi Germany who were colloquially called Jews For Hitler[1]. You can guess how it turned out, but they absolutely echoed similar sentiments. They were as wrong as you might expect.

[1] https://en.wikipedia.org/wiki/Association_of_German_National...


I forgive you for thinking that.


I'm not saying that politics don't have real consequences. But by pointing everyone who even slightly disagrees with you as literally the incarnation of evil, you're only making things worse. Do you think Fundamentalist Christians are less likely to support anti-abortion laws if you keep being toxic to them?


Why would fundamentalist Christians change their mind about a core belief? That's the point of being a fundamentalist.


I grew up in that kind of church and over 30 years watched the entire world around me change: I have zero connection or desire to be in that world, my family mellowed out, the tv preachers all died.

Recent social media trends have far-left liberals attacking conservative people directly on social media, and vice versa. There's no opportunity for de-escalation on either side. Every confrontation is personal.

Conservatives don't respond to the vitriol. They have the exact same response you do to their hate. They put up barriers and become protective of their beliefs rather than engage in direct, vulnerable dialogue. They tune you out entirely and focus simply on defending and attacking. That's the only discourse left.

Do you realize the rich spectrum of conservative beliefs? Or do you see it all as a single position? There's so much opportunity to change hearts and minds, but it's all being squandered (and perhaps permanently so) on these petty fights and turf wars.

The other side is human, just as you. If they'd had a different life journey, they might stand beside you now. If you'd had theirs, you might be in their shoes.

I assume you've experienced a lot of change in your personal life. Understand and appreciate that everyone can take new paths.

All I'm asking is that you don't engage in petty attacks, as it's pointless, embittering, and drags out the battle. You can fight for what you believe without belittling.


Attacking them for what? This isn't theoretical, I'm connected to a giant global database where the public availability of this information is well known. Attacking them for what?

If your moral compass is "someone was mean to me on the internet, now I don't support abortion rights for rape victims" then there was no causative effect, you're just doing what you were going to do anyway.

Because it makes the priority pretty clear: these aren't hard-to-find stories[1]. The victims aren't "the far left", they're just people[2]. It isn't hard to empathize with them, but apparently the conclusion you reach is that they deserve what's happening to them, because an unrelated group of people might've been mean to you on the internet.

"People are mean to me on the internet" is a lie used to deflect from the data. Because someone is always saying something shitty on the internet. Because someone, somewhere on the far-left might tell someone they're an asshole. But then the far-right will rock up to what they think is Obama's house[3]. But then here you are, telling me the only reason conservatives don't support my issues is because someone is mean to them. I don't know who that person is, I don't know why they were mean, but because they were that's all the justification needed to support denying healthcare to women.

[1] https://www.theguardian.com/commentisfree/2022/jul/16/rightw...

[2] https://www.npr.org/sections/health-shots/2022/11/15/1135882...

[3] https://www.theguardian.com/us-news/2023/jun/30/capitol-atta...


Defederating from Meta as a solution is stupid - Meta can (and will if they actually care enough) just rejoin undercover.

I agree with the rest of your post, but I suspect that in this instance Meta, as a large public company, might be leery of doing something that's potentially embarrassing.

But your basic point remains true. A wide variety of companies, many of them bad actors, are going to be scraping, processing, and connecting any data anyone puts online, and that will happen whether Meta joins the fediverse or not. If Meta wanted fediverse data badly enough, they'd likely just buy it.

Indeed, all the hand-wringing about Meta in particular in the article and here seems deeply confused - of course Meta isn't the only problematic actor out there. The anonymity problem with the walled gardens is that they explicitly attempt to stop online anonymity in various overt ways. But entities everywhere are trying to deanonymize people covertly, and these can be at least as bad.

The main thing is that anyone wanting anonymity needs to take active measures to achieve it. And these measures vary with how visible you are and with how capable and interested your adversaries are. Those talking about anonymity would do best educating people on this.


>I agree with the rest of your post, but I suspect that in this instance Meta, as a large public company, might be leery of doing something that's potentially embarrassing.

I wouldn't describe the company directly responsible for the state of online discourse today as being leery of doing anything.

https://www.theguardian.com/technology/2014/jun/29/facebook-...

  It has published details of a vast experiment in which it manipulated information posted on 689,000 users' home pages and found it could make people feel more positive or negative through a process of "emotional contagion".
  ...
  One test reduced users' exposure to their friends' "positive emotional content", resulting in fewer positive posts of their own. Another test reduced exposure to "negative emotional content" and the opposite happened.


>...the company directly responsible for the state of online discourse today...

The company, huh? Not Twitter, not "us", not the properties of online discourse, the polarization of American and world political ideologies...

Anyway, this "hating on" FB doesn't change the situation.


I should have been clearer about how this relates to general issues. I added a sentence:

"Of course, Meta's far from the only threat out there, but as I discuss in 'Threat modeling Meta, the fediverse, and privacy', looking at Meta-related threats also points to solutions that increase privacy and safety in the fediverse more generally."

Here's a link to the longer post (still a draft). https://privacy.thenexus.today/fediverse-threat-modeling-pri...

And agreed, it doesn't scale for Meta to infiltrate people into every single fediverse instance -- although threat actors who are targeting specific people or communities might well do this, so it's also something to take into account.


Meta, as a large public company, might be leery of doing something that's potentially embarrassing.

I don’t know, they did depend on Onavo analytics for a while: https://en.m.wikipedia.org/wiki/Onavo


Yeah, the (less important, subsidiary) point is that they'd be happier to use some other source than to engage in overt deception, and the results would be the same.


Meta at least has walled gardens with privacy settings, which can't be scraped.


I totally agree.

Using a protocol designed to distribute content with the expectation that you remain in control of it is asinine. If you really want to control it, either allowlist-federate with only instances you know are in the same circle (i.e. make your own smaller fediverse) or move to a private forum.

I argued with the Fediverse about this several months ago, and raised all of the same problems in the article with them. Only to be shouted at, doxed, abused, insulted and attacked. I still think that what I experienced would be enough to drive some people to self-harm. The level of toxic interaction on the fediverse was the worst I have ever experienced on the internet.

As such I find the fact that there is a freakout over what has been predicted multiple times hilarious. I welcome Meta joining. The sooner people on the fediverse grow up and realize the reality of what is coming, the better.

I really like ActivityPub. I really like the fediverse idea. However, when you visit you rapidly discover it's full of the really annoying people from Twitter, and people with "ex tumblr poster" in their bios.

I forget where this came from, but it sums it up perfectly.

    Fediverse: We are open for federation and you to join!
    Meta: Ok here is what we are doing with instagram.
    Fediverse: Not that kind of open!


> Defederating from Meta as a solution is stupid

One question I've had for fediverse people is how you prevent a federated system from centralizing. I am legitimately curious. Email is often given as an example, but imo that's a perfect example of a decentralized system BECOMING centralized. Sure, other players exist, but the vast majority of people are on gmail, apple, or outlook (which is much smaller than the other two). Things tend to follow power distributions because momentum is the critical force. In network systems (e.g. twitter, facebook, HN, email, ISPs, Walmart, etc.) the utility/value is not linearly proportional to the userbase, but super-linear (this was one of the big problems with cryptocurrencies too: "gotta have money to make money"). In these systems resources are "attractive."

So with this in mind, how is a decentralized paradigm any different than an attempt to just reshuffle the top players? (i.e. re-centralize but with a different group at the top) I just don't see the mechanism that prevents centralization.
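
To put a rough number on the super-linear point above, here's a toy sketch (made-up constants, not a claim about any real network) of why value concentrates around the biggest node:

    # Toy sketch: why network value is "attractive" (hypothetical constants).
    # A Metcalfe-style model values a network by possible connections (~n^2),
    # so a network 10x larger looks ~100x more valuable under the model,
    # which is what keeps pulling users toward whichever node is already biggest.
    def linear_value(n, k=1.0):
        return k * n                   # value proportional to user count

    def metcalfe_value(n, k=1.0):
        return k * n * (n - 1) / 2     # value proportional to possible pairs

    for users in (1_000, 10_000, 100_000):
        print(users, linear_value(users), metcalfe_value(users))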


It's certainly a challenge. Mastodon's development tends to prioritize mastodon.social (Eugen Rochko is BDFL of the software platform and also runs mastodon.social) -- for example, the mobile app now signs people up by default on mastodon.social, and functionality that people running smaller instances have implemented in forks hasn't been integrated back into the main line. So there's the weird dynamic that people generally have better experiences on small instances (as long as they're well-admined) but the vast majority of the current fediverse is on large Mastodon instances. So it'll be interesting to see what happens in response to Meta. There's likely to be a partition, and if .social winds up taking a Meta-friendly position, then the anti-Meta region may be much less centralized.

https://heat-shield.space/mastodon_two_camps.html looks at tensions between people who just want a "better twitter" (which tends to lead to centralization) and people who focus more on small communities (a more decentralized solution).


I think you don’t, which is why I think federated systems built on the existing stack are doomed to fail (by recentralizing). You’re right to point out email as an example of this failure.

You need urbit or something like it to fix it, the problems are deeper.

https://martiancomputing.substack.com/p/tlon-urbit-computing...


I am continually impressed by how much more relevant Urbit has become (contrary to people years ago arguing it would fade out into irrelevancy). Urbit was designed holistically to solve the problems the "web 2" internet experienced in a structural way, not just apply some fancy "web 3" bandaids to some of them. So of course it is a lofty project. But time and again it's proven that it took the right stance on socio-technological issues. Will it ever gain enough traction so that it replaces your text message app? Well that's really a social question, and one can wish. But it certainly solves all the problems people keep bumbling into when trying to "do web3". I wish more people would give it a serious look.


I work at Tlon (the main startup behind urbit, so disclaimer) - it's a lot easier to use urbit now than it used to be thanks to free hosting.

It needs to be a lot easier still (particularly the mobile experience isn't there yet without a fully formed app), but if it's been a while it's worth checking out again: https://tlon.io/

It'll be insanely hard to actually pull off, but it's the only attempt in this space that I think has a legitimate chance of a successful outcome. The others are dead on arrival because they don't actually fix the underlying issues. (Success being widespread adoption of software the users actually own and control.)

I personally self-host mine which has also gotten a lot easier too: https://martiancomputing.substack.com/p/product-review-nativ...

The UX needs to be just as good as a centralized service - I think urbit is the only design where that's really plausible (without recentralizing).


Thoughts on Nostr?


> the vast majority of people are on gmail, apple, or outlook

What do apple email addresses look like? I genuinely don't know anybody that uses an address that says to me "provided by Apple"... I do know a lot of people that use their corporate/organisational addresses for personal email though, which always surprised me.

Edit: just read that "me.com" email addresses are apple-provided - I have at least seen them used occasionally, though nowhere near as much as gmail and hotmail/live.com (outlook).


me.com is the old address, it's been icloud.com for years now.


I couldn't recall ever noticing anyone use an icloud.com address as their primary personal email address, though searching through my inbox it has cropped up once or twice. Actually my sister did use one briefly in 2018.


And mac.com previous to that. Plus you can use your own domain name.


Are there any stats on how many people do that? It seems to me the only way it could be true that Apple mail has far more users than outlook/live/hotmail is if the former tend to be associated with custom domains.


Apple mail the client has a lot more users than any Apple-offered mail.


just to muddy the waters a bit, I recently moved my personal domain to use iCloud as the mail provider, so you wouldn't see @me.com/@iCloud.com, but just my domain.


I've seen email being centralized mentioned a few times but I don't think it's true. I email with plenty of people who don't use gsuite... seems to work fine. I can see a power distribution, sure, but that's still a distribution not a central organized cabal that stops newcomers.

Still I agree -- decentralized paradigms that are successful seem like they'd end up with big players like you describe eventually. Still better than actually centralized like slack or something.


The reason there's this confusion between us is that we're using the term monopoly (and consequently centralization) to refer to different things. A lot of people don't use the word to refer to the pure sense but rather the practical.

A pure monopoly (a single seller) is rather uncommon and usually associated with government direction. Generally these don't exist without government intervention, because if anyone is able to create a company, in any form, and sell the same product/service (in any form, at any price), then it's technically not pure. So it doesn't happen unless competition is illegal or blocked by some other factor.

The reason the practical sense is used is that we need to look at why we discuss monopolies in the first place. Usually it is because they are bad (though they aren't always). The reason they can be bad is abuse of power: they are able to dictate the market and force prices that are unreasonable.

So with this understanding, we usually call things like Coke+Pepsi a duopoly despite a combined market share of 68% (48% + 20.5%). Or we talk about oil despite OPEC not even being 40% (followed by the Persian Gulf, OAPEC, then the US), and all of these are multiple players. Similarly we say the same about ISPs. The problem with these isn't single-player control, but rather significant market share and the leverage that comes with it, which includes the ability to reduce competition and effectively corner the market.

I want to give this explanation so we can discuss on the same page and understand what each other means. When I refer to email (context clues should tell you I don't mean a pure monopoly -- I mentioned multiple players), I'm referring to these companies' ability to force things onto others. Maybe a clearer example is browsers, because Chrome's pressure is frequently discussed here and Internet Explorer's history shows similar influence.

I'm not saying decentralized services will become 100% centralized by a single player (it would be incredibly naïve to interpret my words to mean that), but that people will collect into a small set of servers (likely seeing a power distribution) who then have a clear ability to use their leverage and dictate the format of smaller servers. Effectively this is not too different than the power that Salesforce has over Slack. While not absolute power, it is closer to that end of the spectrum than a fully decentralized system. If you make the mistake of creating two bins -- centralized vs decentralized -- you are missing the entire point of decentralized systems. At least the point that is philosophically argued.

Try to realize that when people are talking they have different priors than you. Language is messy and communication is about the exchange of ideas. So even if you disagree with my definitions (neither of ours are objective) you can still argue my points. Just claiming semantics is problematic because it derails the conversation or creates artificial contention. See 5.11 of the Simple Sabotage Field Manual.


The fight over whether or not to federate with threads might actually cause more decentralization, as people move off instances if they don't like the policies the admins have chosen, and different admins make different choices.


Even "effectively re-centralized federated" will still provide much better ways to keep all that browser fingerprinting stuff and the like away from Meta et al than a closed system. E.g. if for some reason you need to operate a Google email address you could run the day to day read and write behind a forwarding setup never connecting your regular browser or your imap client to Google servers, dealing with the occasional setup/maintenance from a browser properly isolated from your search history and the like.


It's not just that there is a risk of re-centralization from network effects.

Since you mentioned email, there is also a large risk that filtering unwanted content and impersonators from appearing on your instance becomes so expensive and/or time-consuming that only a few instances have the resources to do it, just like happened with email.


You can force decentralization in certain ways like only allowing one user per domain but nobody has the courage to try it.


This is similar to how urbit’s ID system works fwiw - the ID is tied to the computing node itself 1:1 and you can’t have an account that’s separate from a node.


What's the problem with most people being on Gmail? It doesn't prevent you or me from being on a different server and communicating with them. On the contrary, e-mail is a great example of how a decentralized protocol can be widely adopted.


For the moment it cannot become too centralized, since Mastodon, the main software, becomes prohibitively expensive to run at scale.


Prohibitively expensive implies more centralization not less


Let me rephrase. I don’t think the current iteration of Mastodon can scale well enough no matter what hardware you throw at it. It’s not designed for it. You would need to build a new system to achieve what was proposed.

I suspect that the above along with the cost is why it hasn’t happened. Although you do miss a lot of what’s going on by not being on one of the larger instances.


I think the point they're making is that Meta is in a special place, where by using their unusually vast amounts of personal data and photos, they are somehow (?) going to dox anonymous users by publishing their real names, based on matching their profiles in some way. At least that is the implication based on the quoted tweet right at the top of the post (which is lacking any real detail how exactly Meta got this person's real name matched up with an online alias).


>using their unusually vast amounts of personal data

Yikes. Hadn't considered that. Certainly seems plausible, not just via technical means but also content. I've got a reddit profile that I try very hard to keep "clean and anonymous", yet I've had people from halfway around the world message me to say "Hey, you're Jack from X". Post sufficient volume and you become identifiable no matter how hard you try.
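
A toy sketch of the content side of that (made-up posts, nothing like the real linking pipelines, but it shows why volume is the enemy of pseudonymity):

    # Toy sketch: comparing word-frequency "fingerprints" of two accounts.
    # With enough posts, writing style alone can link accounts that share
    # no metadata at all. The posts below are invented.
    from collections import Counter
    from math import sqrt

    def fingerprint(posts):
        return Counter(word.lower() for post in posts for word in post.split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    anon = fingerprint(["honestly the council vote was a farce",
                        "honestly, zoning policy again"])
    candidate = fingerprint(["honestly that restaurant was a farce",
                             "honestly, great game tonight"])
    print(cosine(anon, candidate))  # higher score = more likely the same author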


I think people can make that discovery but you could always deny that association, and it might be difficult for random strangers to gather enough proof to say that two accounts are linked.

Meta appears to be publicizing account associations that were previously difficult to confirm, in a way that defeats any previous efforts to maintain deniability.


When somebody confronts you with PII like that, the best move is probably radio silence. Neither confirm nor deny it, and don't even confirm that their message was received. Don't do free battle damage assessment for potential adversaries.


Meta (when they were Facebook) had a rule that members use their real names, which their users consented to as part of the terms of service. Apparently they used that as justification to find out the real name of someone who was using a pseudonym. Kinda stinks, but I could see how they could justify it (though in the cases I'm aware of they just demanded that people provide real names or verify that the name they were using was real).

But someone who uses a completely different site hasn't consented to the Threads terms of service, and if Threads randomly decided to dox people and alter their posts to add real names (or perhaps deadnames for trans folks), that's a very different matter, and I'm sure that their lawyers are going to tell them not to do that (or risk legal consequences). Since non-users haven't agreed to any terms of service, Meta would face real court, none of that binding arbitration stuff, and possibly a class action if they do it a lot.


What if Meta identifies a user on Threads and/or Instagram (who has therefore agreed to the TOS) using a pseudonym on a totally arbitrary Mastodon server? All Meta has to do is drop a little sidebar on that person's Instagram profile that says "This user also identifies as XYZ@server"

Person has been doxxed, the content is entirely on Threads or Instagram, they agreed to the TOS, and it seems very plausible Meta would do something like this.


The only entity that could realistically tell us how Meta linked the profile to that person's real name is Meta, and they aren't likely to share that information — so it's not exactly surprising that the tweet lacks detail on the matter.


Phone numbers and recovery email addresses?


> they are somehow (?) going to dox anonymous users by publishing their real names, based on matching their profiles in some way.

Aren't there laws against that? Are they even allowed to build internal links between FB/IG accounts without user consent?


I'm sure the US justice system will be glad to let you sue after you've been murdered by an online mob that found out where you live


If you get doxed, buy {a gun, multiple guns}, preferably a rifle with a red dot or EOTECH and a suppressor, if you can. Guns stop riots.


Ah yes, the American way of solving violence with more violence.


I get feelings about pacifists I have to suppress. You can say that there are people aligned closely enough to you that dying is not actually an existential threat in the same sense as your country or species getting destroyed is, but it's not a good precedent to let them roll you over without a reasonable application of force. I would personally start questioning it if I had to kill over a thousand people to save myself, but if I'm mostly avoiding collateral damage (i.e. I mostly avoid killing non-rioters/non-attackers), I would keep going. Give them a couple kilodeaths and the riot will stop.


Ah yes, I believe you're such a badass that you'd definitely be able to kill a thousand people.


I never said I would, I would probably be killed by a couple, at least without a lot of support. Look at the Black Hawk Down incident for what killing a thousand people requires. That actually failed because the Americans left, they should have stayed even if it required killing a thousand more.


What if the rioters have guns with red dots and a suppressor?


I think it would probably stop at some point, due to the chilling effect on riots due to a lot of rioters being killed and the probable presence of police and federal agents. It would probably be too late to save you, but it might be a cause to repeal the machine gun ban, so future people could mount machine guns to the back of their pickups, etc.


The terms of service on Facebook require that real names be used, and anonymous users aren't permitted. I know, many ignore that, but they occasionally demand a user's real name or proof that an unusual name is a real name.

Instagram used to have different rules, but they seem to be trying to integrate everything.


I think they’re talking about Meta doxing anonymous users on Mastodon. That is, by having Threads join the fediverse they can pull in information about anonymous users on other Mastodon servers and match it up with Facebook and Instagram accounts, linking them and potentially unmasking them.


They could do that just as easily without federation (and in fact they already do: they've been found to create "shadow users" for people who aren't even Facebook/Instagram users, using the tracking cookies they've gotten lots of sites to add, and they plug in all that info if that person later joins Facebook or Instagram).

I can't think of anything they can find out if federation is turned on that they can't find if federation is turned off. Even if there were some info that could only be obtained by being in the federation (and I can't think of anything but I might be wrong), that's easy enough: just create some small instances that don't identify as Meta or Threads and have the users of those instances follow people on all the large instances.


In the US, yeah. We're famously unwilling to do anything about that kind of stuff.


> Aren't there laws against that?

Law is a weak form of mitigation for risk of harm. If it can be done and there is motive to do it, expect that it will be done.


Have you read the terms of service?

Any rights that they are allowed to strip from you are gone as part of the ToS.

Unless there is a law saying what you said explicitly, then 100% for certain Meta gives themselves permission to do it when you sign up for either service.


I am assuming they used the same email address they use on Facebook.


> wasn't considered an issue before Meta came along

This is false, there's been frequent discussion around how indexing and search should be performed in a privacy preserving way. Meta is just the latest concern.

[1] https://www.anildash.com/2023/01/16/a-fediverse-search/

[2] https://blog.joinmastodon.org/2018/07/cage-the-mastodon/


> Meta is actually one of the more trusted actors compared to whatever else is on there - at least they're a known legal entity instead of some random.

But they're a known entity with a long track record, which is how I know they can't be trusted.


But you know the space they operate in. That space is limited by profits, by what their immense legal team allows, and by what their immense security team controls. The alternative is, literally, a complete stranger, with no track record, unknown motives, and (as the recent hacks showed) without the skillset to keep your information secure anyway.


This is true, but it's also a "the devil you know is better than the devil you don't" argument. Which is not to say it's invalid, of course, but it's not a strong argument.

Personally, while I certainly don't trust a random stranger, I trust Facebook even less.

Not that any of this matters, really. My opinion affects nothing.


Trust is more nuanced than that, though.

I trust my grandma to make a good cake, but I do not trust my dad to do the same. But I trust my dad to be there when I call him, whereas I wouldn't expect that of my grandma.

And that level of trust nuance comes from a long track record. To me, there is no other source of trust.


I agree entirely!

And the long track record I see of Facebook tells me that I can trust them to abuse people any time they can make a dime doing so. They have no benefit of the doubt, because there is no doubt.

A stranger might also abuse people any chance they get, but there is also a chance that they're better than that.


>The alternative is, literally, a complete stranger, with no track record, unknown motives, and (as the recent hacks showed) doesn't have the skillset to keep your information secure anyways.

That's not the only alternative. Another is, literally, me, someone I know very well, with a lifetime track record that I'm intimately familiar with, known motives and the skillset to keep my information secure.

It's called "hosting your own instance."

And it doesn't stop me from following users on other instances, nor does it require me to accept the TOS of other instances either.

That said, I'm not interested in juicing my "follower" count or building/enhancing my "brand," nor am I interested in doing so for others.

That's the alternative. And when someone comes up with an AP hub that can interact with other AP instances like an email client (my Thunderbird[0] can talk SMTP, POP3, IMAP, XMPP, Matrix, NNTP and more), that's a viable alternative for the hoi polloi. Until then, more technical folks like myself can just roll their own.

I don't really care what most other people say anyway, including (well, especially) "influencers", celebrities, politicians, advertisers and other scum of the earth.

So I'll just follow whoever I want to follow from my own AP instance. Or not, if I choose not to federate with other instances.

[0] https://www.thunderbird.net/

Edit: Added the missing link.


My understanding is that many popular lemmy instances use an allow list, which means your replies to comments will go to the void. Is this true? I'm having trouble finding a clear answer.


>My understanding is that many popular lemmy instances use an allow list, which means your replies to comments will go to the void. Is this true? I'm having trouble finding a clear answer.

I wish I could answer that question, but I haven't messed around with Lemmy.

I suggest setting up an instance and seeing for yourself what admin/moderation tools exist and how they impact interactions with the instance.

Sorry. Wish I could do better for you.


I feel like the fact that Meta (and anyone) can see posts and users regardless of whether or not they are defederated is a bit of a distraction.

However, I do see the point of considering ways for instances to somehow distance themselves from Meta's instance. If another instance/admin was publicly known to comparably engage in pervasive user tracking (both on and off their own websites) and algorithmic attention monopolization, would we not expect many other actors in the fediverse to defederate or otherwise distance themselves from those practices/that instance? Obviously, several instances have decided to do so by preemptively saying they will defederate. I'm just saying that I think it makes sense to at least consider it. E.g., compare the data-safety labels for the Threads Android app (https://play.google.com/store/apps/datasafety?id=com.instagr...) to those of the Mastodon one (https://play.google.com/store/apps/details?id=org.joinmastod...), and I at least see the contrast.

But perhaps this is just me being naively unaware of rampant community-sanctioned indiscriminate collection of user data in the fediverse (I'm not part of it, just curiously observing Meta's entry).


The threat here is not about Meta harvesting already-public information. The risk here is about them doing to people off their network what they do extensively within their own network already: harvest information on your interactions with their content and users. Meta doesn't just build a profile of you based on your posts, its secret sauce is building a profile of you based on what you like, repost, comment-on, hover-over, etc.

Yes, they can go out and spider ActivityPub content already, though they probably don't currently bother. But what they will gain by being part of the fediverse is the ability to associate your interest in their content back to your identity and build out a graph about you. Boost/like/share something off Threads, and you've helped them build a model about you in a way that they can't currently do with all the free content out there already.

I'm not sure why other people aren't making this connection. Again, the threat is that Meta could extend its ability to model how users from outside its own user database interact with its content and users. They're already known to do this in the form of tracking pixels on 3rd-party sites, but this is a richer source of information.

That said, I also think the Fediverse is so tiny that this particular market isn't of much interest to Meta in reality. I suspect this feature will never launch.
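
For a concrete sense of the mechanism: when you like or boost a federated post, your instance delivers an activity to the post's home server. Roughly the shape of such a Like, with made-up URLs (real payloads carry more fields, but the actor/object pairing is the point):

    # Roughly the shape of the ActivityStreams "Like" your instance delivers
    # to the post's home server. URLs below are made up for illustration.
    like_activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": "https://example.social/users/alice#likes/12345",
        "type": "Like",
        "actor": "https://example.social/users/alice",          # your fediverse identity
        "object": "https://threads.example/@someone/post/678",   # the liked Threads post
    }
    # If the post lives on Threads, federation delivers this to Meta's servers
    # by design -- that's how the author's instance learns about the interaction.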


You're overlooking the fact that Meta is collecting a whole lot more information than just what people post publicly, and they're not transparent about that.


There's a huge gap between sharing a post and having that be replicated and consumed or archived elsewhere by other users/hosts, and having that same post processed against a giant pile of other data specifically to de-anonymize it. The first one is, yes, just normal use of public information. The second is more like spying, and since we know Facebook does stuff like that, being apprehensive specifically about them is justified.


That was one of the problems with mewe. Sure, there's a good privacy policy. But what you're posting is more or less public, so the point is almost moot.

MeWe was nice though, really great communities on there back in the day


There are a few different concerns. One concern is about the reach of your posts changing dramatically, and Meta handing your content to people who will then harass you, share your posts into their network multiplying the effect, etc. (a common experience for trans people on some networks). Like, these people could've found your posts before, but they had no reason to search for you explicitly.


> If you have personal information you do not wish bad actors to see, do not publish it using an open protocol explicitly designed to allow anyone to read said information.

You seem to not even have read the first paragraph, or not understood what it implies.

The whole point of this article is that Meta has a precedent of aggregating and combining data from all kinds of sources. This includes data which is not supposed to be public, but e.g. was sold without your knowledge, awareness, or explicit consent. A situation you could argue the huge majority of people on the internet are in.

For example, consider this hypothetical scenario:

So they might take the intentionally public data of e.g. your anonymous political activism (let's say anti-corruption work in a very corrupt country).

Then take a public profile you created e.g. in your teens, which you never linked to your political profile or used the same email address with, and which should have no connection to it at all (you acted carefully).

But then Meta is like: oh, through the data we bought/own we know that that profile was using that (non-public) email address, and through other data we bought we know that that email is believed to be owned by the same person as that other email (e.g. you used it for forwarding or account recovery, also non-public), so we conclude they are the same and publish, *trivially accessible to the whole world*, that the anonymous political activist is you.

Or another scenario: they use AI body/face recognition to make the link, even though you never posted your face on your anonymous account without appropriate masking, or at all.

Or another scenario: location metadata leaked through your usage of social media creates the link.

Or another scenario: someone tags you in an image they took without your consent (and/or knowledge); it doesn't matter if they later delete it or make it visible only to their friends/followers.

Or in other words: unless you live as a complete hermit, have far-above-average tech knowledge, and are absurdly careful to the point of causing major annoyance in your life, stuff like that can totally happen to you.

This is why the GDPR was created: to make it illegal to aggregate information about people in surprising ways without their consent. But it's also where it has failed hardest to achieve its goals, you could say (though that's a different discussion altogether).
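
As a toy sketch of the first scenario (all records below are invented; the point is just how cheap the join is once one party holds both datasets):

    # Toy sketch with invented records: linking an "anonymous" account to a
    # real identity by joining on non-public recovery-email data.
    recovery_emails = {                 # e.g. bought from a broker or a leak
        "anon.tipster@mail.example": "jane.doe1990@mail.example",
    }
    old_public_profile = {"email": "jane.doe1990@mail.example", "name": "Jane Doe"}
    anon_account = {"handle": "@corruption_watch",
                    "recovery_email": "anon.tipster@mail.example"}

    linked = recovery_emails.get(anon_account["recovery_email"])
    if linked == old_public_profile["email"]:
        print(anon_account["handle"], "is probably", old_public_profile["name"])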


Yeah when I saw this headline I was really confused. What is the privacy concern with respect to publicly published data? It really doesn't compute for me...


Can Meta scrape twitter or reddit, and pair your messages with your facebook account? Sure they can. But it is probably not legal, and looks very scary.

Mastodon and Fediverse however explicitly offers your data to them.


Even though Threads isn't federated yet, you can find any user's public messages via https://threads.net/@username (which is identical to the scheme Mastodon and other fediverse sites use). Anyone's public postings can be found in the same way, and defederation doesn't prevent it. There is nothing to stop the Meta folks from looking at https://sfba.social/@not2b even if our admin blocks threads.net.

Mastodon and the fediverse explicitly offer public posts to everyone. Some sites use robots.txt to block search engines and web crawlers that follow the conventions. Others don't.
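
To make that concrete, here's a minimal sketch of pulling someone's public posts with no account and no federation involved, assuming a stock Mastodon instance exposing the standard WebFinger and ActivityPub endpoints and not running in "authorized fetch" mode (handle and domain are just the example above):

    # Minimal sketch: fetch public posts anonymously via standard WebFinger +
    # ActivityPub endpoints. Assumes a stock Mastodon instance without
    # authorized fetch enabled; handle/domain are placeholders.
    import requests

    domain, user = "sfba.social", "not2b"

    wf = requests.get(f"https://{domain}/.well-known/webfinger",
                      params={"resource": f"acct:{user}@{domain}"}).json()
    actor_url = next(link["href"] for link in wf["links"] if link["rel"] == "self")

    headers = {"Accept": "application/activity+json"}
    actor = requests.get(actor_url, headers=headers).json()
    outbox = requests.get(actor["outbox"], headers=headers).json()
    print(outbox.get("totalItems"), "public activities, no login, no federation needed")

And robots.txt only reaches crawlers that choose to honor it; nothing in the protocol enforces it.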


The problem I see with Threads isn't what Meta will do with fediverse data, it's the power they have with owning 97% of the entire fediverse network [1].

Embrace, Extend, Extinguish. Owning the vast majority of the fediverse userbase will give them a large amount of power to compel users or servers to do whatever they want. What do you do when Facebook implements a new feature and all of your followers complain that you're using a Mastodon server instead of joining Threads, which has this feature they want? You either go against your entire community or let Meta take over your account.

As such, the resolution is to not let anyone have this much power. It being Meta makes it easier to hate on them, but no single server should own the vast majority of the network, let alone (100M / (100M + 2M + 1M)) = 97% of it [1].

[1] Threads has 100M users and is rising fast, Mastodon was recently stated to have 2M active users, the rest of the fediverse can be estimated to be, say, 1M. As such, Threads has about 97% of the userbase.


Threads has 100m total users (that number is based on userid badges on Instagram afaik).

The fediverse has somewhere around 10-13m total users, about 8-10m of those are on the main Mastodon network, and around 2-4m MAU. It's hard to pin these down precisely because different counters disagree (it's hard), but if you're going to take the most optimistic number from Meta (the only one you'll ever see), you should take the most optimistic from the other "side" as well.

Threads doesn't have an MAU yet because it hasn't existed for a month, but it will not be anywhere near 100%. Most people I've seen on it seem to have bounced day one and user growth has stalled a lot (roughly halving every day).

Sources for fediverse/mastodon numbers:

- fedidb.org

- the-federation.info (includes some things that aren't activitypub based)

- https://mastodon.social/@mastodonusercount

Threads numbers (only total users, pulled from badges on Instagram)

- https://www.quiverquant.com/threadstracker/


This is indeed true and we will have to see how the numbers settle as we go along.

However I would be surprised if Meta doesn’t continue to possess well above a supermajority of the userbase until another large corporation embraces ActivityPub.


>However I would be surprised if Meta doesn’t continue to possess well above a supermajority of the userbase until another large corporation embraces ActivityPub.

That's likely true. But that doesn't force me to interact with that user base (I don't interact with them now and nothing of value is lost, so why should I start?) in the Fediverse.

If I had an Instagram/Threads account, I would be forced to do so. I don't, so I'm not.

So many folks here and elsewhere complain about the garbage in their "feeds" from the centralized "social media" sites, but (apparently) don't realize they don't have to see that garbage, or advertising or posts that are about/from topics/persons they don't care about.

And so the question I'll ask you is: Why should I care how many Threads users exist? I have no interest in interacting with them and am not forced to do so.


I think that's true, though I also think the fediverse (but not necessarily Mastodon specifically) will outlive threads.

But I think the really big question will be: in 3-6 months is meta putting out DAU and/or MAU numbers for threads separate from Instagram's?

Until then you can only guess how "big" it really is. I don't personally find the numbers so far all that impressive: it's a sub-10% conversion rate from Instagram daily active users, and I think, behind the celebratory face they're putting forward, that might not be what they were hoping for.

But mostly I see this trend everywhere where people give a lot of latitude to things like threads and Twitter and then give the most pessimistic read of the state of Mastodon.

If Mastodon were a startup and "centralized", its growth, bumpy as it is, would be the darling of the tech press. This is really obvious because every article about the fall of Twitter lists at least one and often several networks that have worse numbers and worse growth than Mastodon as if they're the next big thing.

Though maybe that'll change now that threads has bought its first 100m users.


The power imbalance when a semi-monopolist joins an open protocol is a really hard problem to solve.

Google all but killed XMPP by using it in GTalk/GChat/Gmail/whatever it's called now. They probably had no ill intent from the beginning, but their very presence gave everyone the need to quickly be, if not bug-compatible, then quirk-compatible.

By the time everyone came around, they suddenly defederated from everyone, with vague references to spam, which everyone knew was bunk. But the damage was done.


I think a potential difference here is that a substantial part of the existing Fediverse won't care if we break compatibility with Threads. Many will actively welcome it, so there's potentially less pressure to yield if they make changes people don't like.


The Mastodon corner of the fediverse is also ridiculously more well run and diverse than xmpp outside the big players ever was.

Like, when threads joins it's far far more likely to be a net contributor of spam and abuse towards the rest of the network because the people who run Mastodon instances generally actually care.

Even Mastodon.social (the biggest instance currently) routinely gets silenced or blocked temporarily by other instances when it lets spam get out of control, and that is generally considered a good thing by users.

Honestly that's gonna be the main reason threads gets defederated after the first round of ideological blocks: self-defence against abuse.


A lot of servers have preemptively blocked Meta.
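
For anyone curious what that looks like mechanically, here's a rough sketch using Mastodon's admin API (4.x, as I read the docs, so treat this as an assumption); the instance URL and token are placeholders, and you'd need an admin token with the domain-blocks scope:

    # Sketch of how an instance admin preemptively blocks a domain via
    # Mastodon's admin API. Instance URL and token below are placeholders.
    import requests

    INSTANCE = "https://example.social"
    TOKEN = "ADMIN_TOKEN_HERE"   # placeholder admin-scoped token

    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={"domain": "threads.net", "severity": "suspend"},
    )
    print(resp.status_code, resp.json())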


Threads is still way behind Twitter, though, which doesn't even federate with Mastodon and never did. If that's your complaint, why wasn't it doubly or triply so with the last corporate overlord? "Don't use that silly Mastodon thing, everyone is on Twitter" is, in fact, the way the world has worked for the whole lifetime of Mastodon.


> If that's your complaint, why wasn't it doubly or triply so with the last corporate overlord?

I'll interpret this to mean "If the problem is that Threads owns the majority of the userbase, why didn't you complain about Twitter owning the majority of the userbase?"

I'll reply to that as: Mastodon users did. That's why they used Mastodon in the first place, because they felt too much power was controlled in a single entity, so they complained and moved.

In terms of actions to take, what power was there with Twitter that Mastodon users did not exert? With Threads, Mastodon server owners have the power to defederate and block Threads trying to intermingle with their userbase. With Twitter, Mastodon users were the ones with the power to publicly disclose their Mastodon account and tell users to follow them on there.

In each instance, Mastodon users are doing what they can to reduce corporate overlords from having power over as many people as possible. Even if Threads is more centralized because of other instances defederating with it, the overall reach of Meta is reduced.


The fact that Threads has a much larger userbase than Mastodon already means that they won't ever really feel threatened by it. The stated reasons why GChat and Facebook Messenger eventually defederated is that it was hard to keep scaling the platform while speaking XMPP, but the unstated reasons were that Messenger and GChat at the time were still very much niche technologies that were jockeying for marketshare in a crowded space.

This time around Threads is already an order of magnitude larger than the existing Mastodon fediverse. Moreover, Meta now has a diverse array of social products, so there isn't as much pressure on any one product to succeed. If Threads ends up in a dominant position in the threaded-text social network world, that already nets them more users and more opportunities for ad revenue, which they can collect alongside their existing properties of Facebook, Instagram, and WhatsApp. On the other hand, interoperating with the Fediverse allows them to be opinionated about what kind of content they allow on their network (e.g. if you're posting from Threads, you can't post sexually explicit content), which keeps them advertiser-friendly, while offering a relief valve for the loud minority that will want content disallowed by Meta's content policies. It's a win-win really.


>Owning the vast majority of the fediverse userbase will cause them to have a large amount of power to compel users or servers to do whatever they want

Mastodon already did this to ActivityPub. Extending open protocols is important, or else people will stop using them in order to build what they want.


> As such, the resolution is to not let anyone have this much power.

That's not a resolution, that's a wish. But how could it be achieved? In 6 months Threads could well have hundreds of millions of active users, the vast majority of whom probably won't care about the fediverse.

> What do you do when Facebook implements a new feature and all of your followers complain that your using a Mastodon server instead of joining Threads that has this feature they want?

Mastodon (and others) either have to compete or their users will be like the people reading email in emacs or vi (no offence intended)


I don't think you can really compare total users (how many people have created an account) and active users (how many people actually use the platform).


People are right to be worried.

Take a look at Threads.net and Mastodon.social and tell me these two projects have anything in common. Everything is open on Mastodon.social (I can even use the search box) and everything is purposefully closed on Threads.net (I can't even see the basic metadata). It's all dark patterns since day 1. Meta is not giving up a sliver of control, that's for sure.


Maybe they are getting ahead of the Digital Markets Act by making Threads interoperable from the start?


What's the point of joining a decentralized federated platform if you don't want other instances or people to see what you post?

Meta scraping your name and doing other shenanigans is a different subject and obviously bad, but the rest is like joining a public torrent tracker and then complaining about leaking your IP address to its peers.


What's the point of joining a decentralized, open-source federated platform if you don't want Facebook to collect information about you and track you online, even though you aren't a Facebook user?


So you want the information to be free! But not free to those guys over there. Ever so slightly hypocritical.


That’s a bit of an overstatement. I can want an open neighborhood but still be creeped out when a neighbor puts up a camera facing my house.

Systemic data collection and casual access aren’t equal.

That said, on these protocols you can’t control it anyway, so it’s not like you can stop it.


But it’s Mastodon. It’s ALL posted online without a paywall.

They could have been scraping it for years (if they cared) and you’d never know.

Federating won’t give them anything new except DMs to their users since those aren’t encrypted.

All the existing stuff you’ve posted publicly is already public.


Not all; your IP address and e-mail aren't posted, for instance.

Meta will find a way to get their hands on that.


As others have noted they are doing it anyway. Gmail accepts emails from ProtonMail despite "ideological differences" and vice-versa, otherwise it's destined to doom.

If being separated from the mainstream internet is the reasoning, then yeah, sure, go ahead, but you also can't complain that no one besides fanatics is using the alternatives when they are not worth using for the mainstream audience.


There was a post here from Drew DeVault a while ago on how they’re rejecting all non-plaintext email.

I see the situation as similar


If you're at all concerned about privacy, the fediverse is not for you.

It is anti-privacy by design.

Once you've posted something to it, you have absolutely no control over who has that data and what they do with it. That's the fundamental design of the system.

Complaining about meta potentially ingesting all data from the fediverse comes off as a bit naive. Meta is the least of the privacy concerns on the fediverse. You at least know who they are and have legal recourse against them. Huge numbers of other consumers are not even known. Just look at the thousands of instances that have popped up. Many of which are just in joe bob's closet and god only knows how they protect the data.


> Once you've posted something to it, you have absolutely no control over who has that data and what they do with it. That's the fundamental design of the system.

Welcome to the Internet. It’s always been like that.


I mostly agree with you.

but if I'm posting to facebook/twitter, I have privacy settings that they legally have to adhere to. Of course this doesn't prevent all scraping/copying/whatnot/whathaveyou, but it's a lot harder to scrape a private profile than something on the fediverse. Most of these sites have a vested interest in preventing scraping and the like. I have some control even if it's not great.

With the Fediverse, I don't even have an idea of where my data is, much less what the security/privacy practices are.


I think this is the disconnect in most of these conversations, or at least what's keeping me from understanding them.

I think we might need to be better about communicating this to new users: once something leaves the server, it is out of control.


Which goes back to my fundamental bearishness about the mainstream viability of federation. Mainstream users don't care about how your thing is implemented. If you're finding yourself explaining this level of implementation detail to people, you have not made a mainstream product.


Mostly agreed, though this is more accurate about ActivityPub than federated stuff in general. There are privacy focused things, they just tend to lean closer to being fully decentralized (or at least feel like it).

ActivityPub, and Mastodon in particular, is very public-oriented though yeah. The feature-set and discovery stuff is pretty much inherently un-protectable.


They can collect all that info right now from that decentralized platform without integrating it in Threads


Threads doesnt need to exist for facebook to capture every post in the fediverse.


Yes, your exact question but unironically. If you want to join a platform that will keep your information secret, you can't also want to join a platform that is federated. Those two desires are fundamentally at odds. Federation implies copying information from one node to another...


And what's wrong with details about sex workers - people engaged in the oldest professional occupation known to humanity, no less - being scraped into Meta's systems as a replica? It would end up recording, representing, and normalizing birth control as well as commercial sex work, along with its current status and known issues.

It's just that the Mastodon movement, or whatever it calls itself, doesn't want to be associated with the shady corners of the lower classes or of human society - even though such classes and hidden areas shouldn't exist in the first place. Not by stigmatizing, denying, and nullifying the fact that we're dirty animals, but by constructively removing the negative aspects of life.


Because famously, shining light on a community of people that are hated only results in that community becoming accepted.

Please ignore the people who die and are harmed in the process.


yeah, segregation and gatekeeping wash everything clean, and it feels great doing it too.


This is not a problem specific to Federation/ActivityPub/Mastodon.


Why do you think they aren't doing that now? Meta could easily do it.


federated is not decentralized


If I take public transit to work should I be bothered by Uber posting my daily commute details?


A lot of privacy problems (including every single one raised in that article) will be solved by just not posting your personal business on social media, but people are somehow unwilling or unable to accept it.

If you post incriminating content on a Mastodon server it is still out there whether Facebook can officially connect to it or not. It is archived forever out of your control. The server owner can be subpoenaed. Anyone can scrape the website, take a screenshot, or share it in a hundred different ways. Regardless of what pseudonym you use it can be tied to your real identity with 5 minutes of internet sleuthing.

"Private" online social media is an oxymoron. If you put something out there in the world you don't get to control whose eyeballs land on it. Facebook isn't the problem, your expectations are.


It's even worse when you consider that others will put something about you out there. They willingly give up their contacts list (with you in it), when joining a network.

A "friend" may take a photo of you as part of a social/work event and directly post it publicly.

Even with no participation on your behalf, your real name, phone number, address and photo are out there.


Yeah I feel like maybe this is old fashioned of me but

If I don’t want other people to see something or to know about it, I don’t post it online. Or I don’t post it in a way that could be traced back to me. There’s absolutely no link from my HN account, for instance, back to me IRL.

Why open yourself up to the risk? What do you get in exchange?


I want it to be blocked for a different reason. The fediverse has always been small enough that the content is "underground" and interesting. Some of the people on there are weird or completely different than me and that's what makes them so interesting. That's not the case on something like Twitter and Instagram. Good and actually interesting content is drowned out between your average tweets and posts about nothing at all. Or all the content sucks and is there for the sake of exposure, likes, clicks etc. But I don't want Mastodon to be overrun by 100M users' uninteresting content! I don't even want them in the replies of posts. Mastodon has consistently been great before this whole Twitter fiasco. I wish it never happened, I don't want the space I have liked for years to change and be ruined. Maybe an apt analogy would be the difference between Marginalia and Google as search engines. Why would one want the interesting underground search engine to be filled with SEO spam and ads?


You know what, you've convinced me. I've been rooting for some kind of society-at-large network to succeed at federation (I was so optimistic I tried bitclout, and more recently bluesky. Both want to be a single global database to send money or to index hashtags and blocklists globally)

I really believed that discoverability is king, and I should just be able to search a global graph of user profiles and read everything that everyone has ever said, but you're right, there's a lot of conversations that don't happen in public, and not everyone wants to be "discoverable".

So I think there's a case to embrace the balkanization of social media, and go back to having separate identities to be a part of each phpBB we signed up to. Going to different domains to talk to different groups of people makes sense, and we can have the modicum of privacy offered by a semi-private chat server like Discord, so that your messages don't get indexed by Google and archived forever. (Obviously Discord retains all the message logs, DMs included, but at least it's not publicly searchable.)

And global social media is always going to suffer eternal september. Smaller, unfederated chat communities are probably a much healthier approach to social media than whatever it is we've been doing the last decade of meta-gramming


Spend a few days in some large Facebook groups or on any active Twitter thread and one comes to this conclusion very quickly: online semi-anonymous interaction does not scale. Cannot scale. Putting aside the concerns about the dark patterns brought by the companies seeking revenue from it, I just don't think human beings are capable of working in these kinds of mediums in a way that doesn't degrade to a lowest common denominator of shitty behaviour.

There's just no way to get enough moderation in place and get people to behave respectfully, and then it just gets ugly.

I remember BBSes back in the 80s degrading to uncomfortable useless places simply because of a few bad trolls or assholes getting in the mix. Now it's just ridiculous.

Small, comfortable social networks and a feed that I can fully control, that's what I want. Facebook and Twitter began their misdeeds the moment they rolled out algorithmic suggested content that I did not explicitly subscribe to.

I've mostly just retreated into a few discord channels for family and friends at this point, with forays into Mastodon here and there. Oh, and way too much time on this orange site.


I think this privacy thing is incomprehensible, but this makes total sense to me.


>Mastodon (and most other fediverse software) wasn't designed with privacy and user safety in mind

this is the real problem. Mastodon and lemmy share way more information than they actually need to (like lemmy shares a list of usernames who upvoted or downvoted a post, not just a count), and if you're using one of those services you should expect that all your data and interactions are public. that's the actual threat here, not the possibility that facebook might suck up that data. Blocking Threads from federating is just a short-term patch over mastodon's bad privacy controls.
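
To make that concrete, here's a rough sketch (Python, all URLs made up) of roughly what a federated vote looks like on the wire. The field names follow the ActivityStreams vocabulary, but the exact payload varies by implementation; the point is simply that every vote carries an "actor", so any server that receives it can attribute the vote to an individual account:

  # Roughly the shape of a federated "upvote" expressed as a Like activity.
  # URLs are invented; real payloads differ between implementations/versions.
  like_activity = {
      "@context": "https://www.w3.org/ns/activitystreams",
      "type": "Like",
      "actor": "https://example.social/users/alice",   # the voter, by name
      "object": "https://lemmy.example/post/123",      # what was voted on
  }

  # Any receiving server (or anyone who scrapes one) can trivially build a
  # per-user voting history from a stream of such activities:
  votes_by_user = {}
  for activity in [like_activity]:
      votes_by_user.setdefault(activity["actor"], []).append(activity["object"])
  print(votes_by_user)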


Not justifying the design decision (which is bad IMO), but the reason upvotes and downvotes are shared is the nature of sending discrete events. I've been working on a Reddit-like thing on top of Matrix and likewise I have to send upvote and downvote events, which means other clients that fetch events will fetch each upvote and downvote event, and a malicious client can then track what individuals upvote and downvote.

(I'm trying to play around with ways around this, like using a bot to instead publish aggregation events and making votes private, but it's an ongoing exploration.)
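
For what it's worth, the aggregation idea can be sketched independently of any particular Matrix SDK. A minimal, purely illustrative version (all names here are hypothetical): clients send their votes only to a trusted bot, and only the bot publishes a running tally to the room, so individual voters never appear in the public event stream:

  # Minimal sketch of vote aggregation via a trusted bot. No real Matrix SDK
  # calls here; plug send/receive into whatever client library you use.
  from collections import defaultdict

  class VoteAggregator:
      def __init__(self):
          self.scores = defaultdict(int)   # post_id -> net score
          self.last_vote = {}              # (post_id, voter) -> +1 / -1

      def record_vote(self, post_id, voter, value):
          """Called for each private vote the bot receives (value is +1 or -1)."""
          key = (post_id, voter)
          self.scores[post_id] -= self.last_vote.get(key, 0)   # undo a previous vote
          self.scores[post_id] += value
          self.last_vote[key] = value

      def aggregate_event(self, post_id):
          """The only thing published to the room: a count, no voter identities."""
          return {"type": "com.example.vote_tally",            # hypothetical event type
                  "content": {"post": post_id, "score": self.scores[post_id]}}

  agg = VoteAggregator()
  agg.record_vote("post-1", "@alice:example.org", +1)
  agg.record_vote("post-1", "@bob:example.org", -1)
  print(agg.aggregate_event("post-1"))   # net score 0, and nobody learns who voted

Of course this just moves the trust to the bot (and whatever server it runs on), which is the usual decentralization trade-off.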


We’re two people making a Reddit-thing on top of Matrix as well. Going open source in a few days. My email is in my GitHub profile if you wanna talk!


To be clear, Twitter also shares the list of users who like a post, and people generally seem to view this as a good feature rather than an invasive one, so it makes sense that Mastodon implemented it as well.


twitter shares that while making it immediately clear that the information is public. when you like a post you know your name is going to show up on the list of people who liked it, because the "like" button and the list are right beside each other. that's consent.

lemmy does not make it in any way clear that upvotes and downvotes are public information.


that's a UX problem more than a protocol problem. anonymous voting would be more easily gameable.


> Blocking Threads from federating is just a short-term patch over mastodon's bad privacy controls.

It's not a patch at all. Facebook (and literally anyone else) can still scrape or otherwise access that data in a hundred different ways. Blocking Threads is simply some server admins making an anti-Facebook statement, nothing more.


Not related to privacy, but the main point of blocking is to never host their content. Not listing it, not caching it, not hosting their interactions with your users' posts. It's more than a statement.


ActivityPub has been ignoring privacy since at least 2017 https://github.com/w3c/activitypub/issues/225


I see a good discussion of the different options and why they've chosen not to take them. That's not ignoring


One idea is to create a privacy-oriented server (client?) that lies; expresses things like upvotes but does so with generated identities.


it doesn't even appear that there's any need to lie. upvotes/downvotes in activitypub are expressed as a number, so a server could simply report -300 instead of a list of 300 different "-1"s to indicate that 300 people had downvoted. but that's not what the lemmy default behaviour is.


If ActivityPub and Mastodon were designed with privacy in mind, Facebook/Meta wouldn't touch it with a ten-foot pole.


https://news.yahoo.com/teen-mom-plead-guilty-abortion-230802...

>A Nebraska woman has pleaded guilty to helping her daughter have a medication abortion last year. The legal proceeding against her hinged on Facebook's decision to provide authorities with private messages between that mother and her 17-year-old daughter discussing the latter's plans to terminate her pregnancy.

If you have information you don't want others to know, then don't tell your secrets to a multi-billion dollar pseudo-governmental organization that has even less data collection protections than the governments it serves. There's more you should do, but that's a big one.


If you have secrets at all, don't send them through any ActivityPub conversation.

People on Mastodon make this mistake quite often, tagging someone they're talking about without realising that the person they tagged now receives a copy of their conversation.

This is a massive issue on top of the lack of end to end encryption. Both servers receive plaintext copies of the messages exchanged. I'm sure mastohub.ai is a safe server, but how can you be sure they'll never be bought out or hacked?

If you want to federate and share secrets, try something like Matrix or XMPP. They make it significantly more difficult to read your messages.


this is also what always bothered me about twitter. some friends of mine have absolutely private conversations on their public twitter feeds (nothing sensitive, but stuff like sharing shopping lists). my fear always was that if I joined twitter they would use it for private conversations with me instead of using email or something else that isn't public for everyone.


That's true -- and my more detailed threat modeling post has a big public service announcement saying "don't share information on the fediverse that you want to keep secret" -- but there's a lot of information that's not "secret" that people do want to share on social networks.

https://privacy.thenexus.today/fediverse-threat-modeling-pri...


> If you have information you don't want others to know, then don't tell your secrets to a multi-billion dollar pseudo-governmental organization that has even less data collection protections than the governments it serves.

that's such a naive, egoistic, apathetic world view that it baffles me

sometimes I wonder if people posting stuff like that just don't understand how humans and societies work, or just don't care because "they know better".


What a terrible article. While relevant to the discussion of data privacy, it completely misconstrues the case. This behavior would have been illegal under the Roe standard as well, given that the teen was more than 7 months pregnant and the two attempted to incinerate the body to destroy evidence.


I mean yeah, but also, the actual fault is on the side of people who literally voted for this. And campaigned for this. And spent years trying to put the right people on the Supreme Court so that this happens.


The abortion in that case was sufficiently late term that it would have been illegal before Roe was overturned. It also would have been illegal in most of Europe, most of South America, and most of Asia.


[flagged]


Please take your inflammatory language somewhere else.


This reminds me of the "embrace, extend, extinguish" strategies Microsoft used extensively with Linux and open source software in the 90s. From [1]: "a phrase that the U.S. Department of Justice found that was used internally by Microsoft to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences in order to strongly disadvantage its competitors."

[1] https://en.m.wikipedia.org/wiki/Embrace,_extend,_and_extingu...


There's a lot of discussion about that! Here's a very good article on the EEE threat. https://ploum.net/2023-06-23-how-to-kill-decentralised-netwo...

Personally I think it's more an "embrace, extend, and exploit" approach; a decentralized model could work well for Meta, for example if they do revenue-sharing on ads hosted by other instances (think Disney or LA Lakers).

Update: here's another good article looking at how Meta could embrace and extend -- again, not extinguish. https://darnell.day/heavy-meta-four-business-reasons-why-ins...


In my personal (subjective) opinion, XMPP died for an entirely different primary reason: by design, it had trouble working on mobile devices. Keeping the connection open was either battery-expensive or outright impossible, and using OS-native push notifications had significant barriers. At the very least, that's why I stopped.

It's not like Google had "extinguished" anything, it's more like the "largest server went uncooperative and removed themselves". Sucked for people who were able to chat before and got separated, but I disagree with painting this as some sort of fatal blow.

I don't think there's some statistics on reasons why people stopped using XMPP, but I don't believe Google is the reason for it. I'd speculate that it just coincided with the beginning of the smartphone era and this whole "Google killed XMPP" is a convenient myth.


Agreed that these other issues were a problem for XMPP. Christina Warren made this exact point on Mastodon a few hours ago -- in response to a post from Evan Prodromou that talked about the role that spam and harassment played and how he and others in the XMPP community didn't diversify the network. So, there are multiple factors. That said, I still think the post I linked to is very much worth reading.


It is a valid opinion, and the events described there took place, I just - personally - don't believe the outcomes were caused by the events described; they feel more like coincidences to me. Although Google dropped XMPP at least partially for the same reasons it died - trouble with an architecture that made it problematic on mobile.

And the comparison is not fair. XMPP was meant to be extended, so complaining about the second "E" in "EEE" is IMHO questionable. Google left a bunch of useful XEPs and even a Free Software codebase (libjingle) that others still use to this day, and I don't see anything wrong with this (and I'm surely no fan of Google, but that's not something I'd bash them for). This feels very different from what may possibly happen in the whole Meta/Threads/Fediverse/ActivityPub situation - I mean, it's not likely Meta starts contributing to the Mastodon project or something. In my understanding, EEE is more applicable to Microsoft and IE (where it surely happened, and a lot) than to Google and XMPP.

IMHO the article is a good read to at the very least be familiar with the events and understand the argument - but personally I find myself disagreeing with the presented arguments, thinking it's quite a stretch. Of course, that's my own, purely subjective opinion.


Yeah when there are multiple causes it's hard to know how much each contributed.

ActivityPub's also meant to be extended, there are FEPs, and it's likely that the working group will come up with a new version as well. That said, there certainly are differences between XMPP and ActivityPub; most people say the ActivityPub ecosystem is significantly farther along than XMPP was.

I could imagine Meta doing an open-source AP server (and with a fresh start it would be a cleaner base than Mastodon). I also wouldn't be surprised if they release an app-building toolkit / framework / whatever ... there isn't a good one now, they do that stuff well, and as they introduce proprietary AP extensions then the toolkit is a good way to get people to adopt them. But it's very hard to know at this point; it's also possible it's just PR spin and they won't really invest in it. We shall see.

Anyhow, good discussion, thanks much!


It's more that there is more to a complicated story than that, but the fact that Google dropped it when it did surely was important at the time. To put it the other way around, had Google continued to run a federated chat, Android would have had first-class support in no time. The fact that third-party real-time messaging never worked well on Android, and really badly with GApps, is related to this decision.


As many others have said before, this isn't very likely to happen for many of the reasons it never happened with the web or Linux.

- ActivityPub is an open protocol. If Meta goes all-in on it, they'll be implementing a transparent spec everyone knows. Modifying that would send obvious shockwaves through the network and signal their non-cooperation. There isn't a covert way for them to really try this.

- Mastodon itself is AGPL licensed, meaning any Meta fork (for whatever reason) would be subject to "provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version."[0]

- Meta has no reason to. If they decide the app is sufficiently popular without ActivityPub integration, then things return to the status-quo for Mastodon. Meta loses what little control they had over the direction of the standard/protocol/applications and nothing really changes.

[0] https://www.gnu.org/licenses/agpl-3.0.html


There's no reason for Meta to use Mastodon in order to federate, so I don't see why the license of Mastodon is relevant.


Then there's nothing for them to extend or extinguish. If they're not able to manipulate the client and they can only control the content on their own server, what leverage does Meta have to extinguish the fediverse?


They can potentially try to make changes to the protocol, and try to leverage their user numbers to force people to accept it. I think, though, that they'll find that a lot of us are stubborn and don't like them and will not react well to that.


Which is what Mastodon already does. It has extended the ActivityPub protocol in a few places. Everyone copies those extensions because you have to interoperate with Mastodon to really participate.
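
For context, extending ActivityPub is itself unremarkable plumbing: activities are JSON-LD, so an implementation adds a namespace to "@context" and attaches its own properties, which other servers are free to ignore. A hypothetical illustration (the namespace and property below are made up, not Mastodon's actual vocabulary):

  # Hypothetical example of how an ActivityPub implementation extends objects:
  # declare an extra namespace in @context and attach properties under it.
  extended_note = {
      "@context": [
          "https://www.w3.org/ns/activitystreams",
          {"ext": "https://example.com/ns#"},   # made-up extension namespace
      ],
      "type": "Note",
      "content": "Hello, fediverse",
      "ext:sponsored": False,                   # made-up extension property
  }
  # Servers that don't understand "ext:sponsored" are expected to ignore it,
  # which is exactly how extensions can creep in without breaking federation.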


Changes are fine when they make sense and are made by a party people aren't extremely suspicious of, and at times hostile to. Even sometimes if you don't fully agree with every aspect.


It doesn't have to be a technical strategy, but a UX path to EEE.

I've been thinking about this in terms of Lemmy (also built on ActivityPub), which I understand isn't currently on the table for interop (but if Facebook is after Twitter's lunch, why shouldn't they be after Reddit's). It could even be the same application - Kbin is another AP service which has separate tabs for "link aggregation" and "microblogging" (Reddit and Twitter, respectively).

With Lemmy, the way a large corp could come in and push it around is by simply creating its own version of the top 100 (or N, whatever) communities, and automatically subscribing users into them based on their interests (already known, due to existing accounts/profiles elsewhere). c/linux on lemmy.ml has ~6k subscribers, and is the largest Linux community on Lemmy, afaict. It's not unreasonable to think a large corp willing to pull in its existing userbase could increase that by an order of magnitude in very short order. Overnight, those communities become the place where conversations are happening on those topics (maybe even with some pre-seeded content) and the existing Lemmy communities stagnate.

Fast forward a while and one day BigCorp decides to pull the plug. Existing non-BigCorp Lemmy users are now separated from the communities they've been in and need to create BigCorp accounts. You could argue that those non-BigCorp Lemmy users are no worse off than they are pre-BigCorp-federation, but they're effectively migrating their communities all over again.

As far as why, I think it's pretty invaluable for Facebook to:

1) appear to be "playing ball" from a regulatory aspect
2) eat a competitor's lunch
3) control a (potentially!) up-and-coming federated service


Remember the glory days when both Google Chat and Facebook Chat used XMPP (Jabber?) and you could chat with people using any client you wanted... (ahh, I miss Pidgin). That lasted until they had 'converted' enough users to their systems to then close off all connections and make a walled garden.

I assume they will do the same with the ActivityPub compatibility. I don't see it as a permanent plan.


I wonder if all of this hoopla over Meta joining the Fediverse is even justified. If Meta wanted to suck up all of that data right now, they could do that without creating an entire social network to do so, by literally grabbing it from the source, where it is publicly available, and they can do this with basically no fear of ever getting called on it. By merely federating with and supporting ActivityPub, all they do is make it reciprocal, and opt-in, at least from our PoV.

The real risk here in my opinion is the influence that Threads could have over the Fediverse indirectly. What if they become an integral part of it and threaten to leave, or just leave? What if they become the de facto censor of what instances you can federate with, by virtue of cutting off anyone that doesn't defederate certain instances? Etc, etc.

The privacy concerns, while they hold some validity, are a little bit moot for people who weren't going to consider using Threads in the first place. Google hoovers up all of this data already if only indirectly, and nobody seems to bat an eye.


My prediction on how this goes down...

Meta has zero interest in ActivityPub or the Fediverse, a tiny speckle of users hostile to them. In less than a week, they've created an "instance" 50 times the size of all of Mastodon and the rest of the fediverse combined. The projection/goal is to grow towards 1B MAU, which would make it 500 times larger than all of the rest of the fediverse.

Why would Meta possibly care about this tiny group of misfits? The only reason I can think of is to give legislators the idea that they are "doing good".

Say it is done, and we have this Threads cosmos-sized instance. Tiny vocal Mastodon instances will defederate out of principle, and nobody cares. Because they are anti-growth anyway, they object to anything.

Larger Mastodon instances will consider federating but will then find out Threads will only do this under conditions. You have to serve ads, have to comply with a moderation policy, treat user data in a certain way. You effectively work for Meta now, but unpaid.

Then you turn the thing on and the flood gates open. The first thing you'll notice is your bankruptcy, as your few tens of thousands of users now have follow access to a billion users, including very active and popular ones, spiking your infra. 10x? 100x? Who knows? And what about storage? Yesterday I read how a mid-sized Mastodon instance (a few thousand users) was adding 1GB of media storage every 15 mins. Do that times 100 (or 1,000) as well (rough numbers sketched at the end of this comment). Your moderation inbox...well, good luck.

This entire thing isn't going to work, at all.
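
Taking the storage figure above at face value (1GB of media every 15 minutes for a mid-sized instance), here's the quick back-of-the-envelope version of the 100x and 1,000x amplification; the multipliers are guesses, not measurements:

  # Back-of-the-envelope: media growth if federated traffic scales 100x / 1000x.
  gb_per_day = 1 * (24 * 60 / 15)   # 1 GB every 15 minutes -> 96 GB/day
  for factor in (1, 100, 1000):
      print(f"{factor:>5}x  ~{gb_per_day * factor / 1000:.1f} TB/day")
  #     1x  ~0.1 TB/day
  #   100x  ~9.6 TB/day
  #  1000x  ~96.0 TB/day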


I wonder if they'll surprise us all a little and allow people to create - tightly controlled mind you - personalized fedi instances for things like "fan experience", but from the Threads app perspective it allows you to jump "portal to portal" if you will, without leaving the app, so it feels seamless. This would open other monetization verticals for Meta via platform creators etc. It'd also give you data carve outs that let Meta see what the most popular verticals are and they can sell specialized targeted ads against that, which would likely fetch a bigger premium and provide more useful analytics.

Also worth consideration: They could federate your Facebook feed in the future too.

Supporting the protocol may not be so much worth doing from the outside as from the "inside".

EDIT: I'm not talking about full blown customization here, just enough that allows creators to make their direct profile feed look different from the standard app, maybe have targeted links or a special background color etc. Simple but differentiating things.


I think it won't happen at all, for most of the reasons you gave and more. The feature will never launch. Either because negotiations with various Mastodon instances just falter, or because Meta just loses interest, or both.

Ultimately I think their play here is they want to build their social graph and interaction model around users that are not on their network right now. Every time you boost or share something that came out of Threads, they'll learn something about you -- a non-Meta user -- that they couldn't get just from mining public content. But I think they'll find they're actually not interested much in that data, that its value isn't high enough to justify the hassle.

I just think it's a tempest in a teacup and by the autumn when it's rumoured to be supposedly launching, we'll just stop hearing about it. Or there'll be a trial for a bit and then it'll just get unceremoniously dropped.


This is very akin to the dilemma of getting yourself a very secretive and secure email setup: all the privacy and security stop mattering the moment you send an email to somebody with a Gmail address; the entire thread is visible below (through a chain of quote blocks) and it's game over.

If you want true privacy, make a centralized self-hosted service where people have to be allowed in explicitly.

Don't see what the problem is in the OP, they are kind of expressing displeasure that a service that technically can be scraped by almost anyone is... you know, scheduled for scraping and exposed. And at the same time nobody actually bothered to prevent the scenario from happening.

And this also looks very much like the early internet: people didn't think others are malicious so security was minimal.

This kind of naivete really needs to get clubbed to death. We can't afford being as naive nowadays.


Don't have public accounts on a platform if you are concerned about privacy. Don't use Threads, don't use Mastodon.


Don't go outside if you're concerned about bullies.


Meta makes their money through advertising, they do everything in their power to profile and track you so they can serve up relevant ads.

The issues in the article are related to how ActivityPub/Mastodon work; if you are concerned by privacy issues, don't use Meta.

Meta is guaranteed to erode your privacy, being outside doesn't come with a guarantee of being bullied.


unironically


The sex worker real name reveal has to be bullshit.

I'm quite convinced that Meta actually does have the real name of most of us as well as the ability to link it to other accounts. But the idea that Meta would willingly reveal this without the user's consent means a planet-scale doxxing event. It could lead to actual deaths in the real world, and they would be legally crushed.

What is far more likely to have happened is that the user had an Instagram account with their real name and used that to log in/sign up to Threads. There is no stand-alone account on Threads currently.


My guess is that she was breaking Facebook's real name policy by using a fake name on her main profile. It seems plausible that Facebook would update someone's main profile to their real name.


They usually just suspend the account silently instead of updating your profile in the claimed way.

Meta is not known for being a good citizen, but unlike sending fake notifications or tracking which helps them achieve a business goal, updating user profiles without consent achieves nothing of that sort.


  It could lead to actual deaths in the real world
They've already been there - it's just that it wasn't white people.

Myanmar https://www.nytimes.com/2018/10/15/technology/myanmar-facebo... https://edition.cnn.com/2021/12/07/tech/facebook-myanmar-roh... and Dutertes drug war in the Philippines https://www.buzzfeednews.com/article/daveyalba/facebook-phil...


Claiming control over information that you’ve made publicly available is nothing but claiming control over other people.


If a message is public then I shouldn't care if Meta has access to it or not; they can access it if they want to. However, if I wish to send a private message to someone, even on a different server, it should not have to go through Meta to do so (unless the recipient is on Meta, although then the sender should be made aware of that before sending the message).

I don't use Fediverse nor Meta/Threads, but I write stuff that is public and anyone can view it, or private which only the recipient should read, like anything else, whether I post on Hacker News, or on Usenet, or on a public IRC channel, or whatever else it might be. (Some people don't like public IRC logs, but if it is a public channel then I would prefer that it does have logs; fortunately some IRC channels do.)


IME the problem with Meta in the fediverse aside from the privacy issues is that fedi is largely built by and run by people who want a community separate from the mainstream of social media, with their own rules and goals, without corporations, for them specifically and not for profits.


Well, it will be a problem if people are forced to use Meta, even those who do not wish to do so.


The unfortunate thing about the Fediverse, relative to a (hypothetical) walled garden, is that this sort of information leaking is inevitable.

Meta has the scale and scope to make it scary, but the point of the Fediverse is that it is federated, which implies some openness. If you're federated, you are publishing content to other people that they might do whatever they want with. That includes crawling it, storing it, indexing it, and building mass profiles. You can certainly protect yourself by blocking bad actors, but since the network is, well, a network, an aggressor that wants your published data need only find access to a node you do want to share with and copy from there.

So you either default-close your data and choose very, very carefully who you federate your node to or... You don't put that data in the fediverse at all.

(Contrasting to a walled garden, where monolithic control of the data storage and transfer means a single entity is responsible for where the data goes and can constrain at will. If someone's kicked off Facebook, they're off Facebook; they have a single attack surface they have to reenter to get to that data, not O(nodes) they could make an account on to reach the data of someone who'd rather not share it with them).


As the crypto industry discovered: the paradox of decentralization is that every downside it has can only be solved in a centralized way.

You can't have perfect privacy in a system that has the exact opposite goal: federation. It means your data spreads by design and enforcement of any privacy-preserving feature is optional per instance.

The very loud minority on Mastodon that obsesses over safety has picked the wrong software. They should have just created a Telegram group.


>Trans people at risk of being targeted by groups like Libs of Tik Tok

It's my understanding that Libs of Tik Tok simply reposts public videos. Say you manage to find a way to block them specifically from seeing your content - what's to stop another account from springing up and doing the same thing? The only option is to keep your content among friends. But then you have another trust model where your friend could be the next Libs of Tik Tok.


ActivityPub has a problem of laying all data out, nicely structured, just waiting to be scraped and mined and machine-processed, in perpetuity by default, as if it was something people inherently need when communicating. Is it, though?

It does look like something idealistically-minded early techies would justifiably find really cool.

It may indeed be desirable for, say, Dutch government (and perhaps any government that wants to be transparent).

However, I’d argue it may be from suboptimal to harmful for regular people.

Regular people may have to worry about future governments, which may or may not end up less transparent or even hostile towards them, as well as other powerful adversaries. Regular people may want to be careful and value features like transience, privacy, and plausible deniability.

Perhaps we can do better and come up with a protocol that combines openness and those values. Whether Facebook enters the Fediverse with its new product or not, ActivityPub in its current shape and implementation seems to be a liability.


shrug - not having an API didn't stop anyone before.

And "I want random people to see my social stuff (cos I yearn for attention) but not that particular person/corporation" is an unsolvable problem


Bug-free software is unsolvable, but it does not mean we should stop trying to avoid bugs, that’d be just silly.

If fully precluding public and private intelligence is infeasible, that does not mean we should be using a protocol that in many ways is optimised for public and private intelligence.

Privacy, like many things, is a spectrum.


I'm with you if you want to keep the APIs but put them behind stronger authorisation requirements, i.e. what "authorized fetch" seems to be for (rough sketch of the idea at the end of this comment).

I absolutely disagree if you want to keep the data public but make it "harder to scrape", i.e. remove all APIs and bury it in some annoying HTML/JavaScript mess.

That would absolutely punish the wrong players: Having an API which allows easy access to structured data allows all kinds of desirable usecases, such as being able to use whatever client you like.

In contrast, the big players who are interested in tracking the entire userbase already have enough experience in building robust scrapers - they won't be deterred by a closed-down API.
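
As a rough illustration of what "authorized fetch" amounts to (this is a hypothetical sketch, not Mastodon's actual code): the server refuses to hand out ActivityPub objects unless the request carries an HTTP Signature it can verify against the requester's published key, so anonymous bulk scraping of the structured API gets harder while signed, well-behaved servers and clients keep working:

  # Hypothetical sketch of "authorized fetch"-style gating using Flask.
  # Real implementations verify the signature against the requesting actor's
  # public key; that verification is stubbed out here.
  from flask import Flask, request, abort, jsonify

  app = Flask(__name__)

  def signature_is_valid(sig_header: str) -> bool:
      # Placeholder: fetch the signer's actor document, extract its public
      # key, and verify the signed request headers. Omitted for brevity.
      return bool(sig_header)

  @app.get("/users/<name>/outbox")
  def outbox(name):
      sig = request.headers.get("Signature")
      if not sig or not signature_is_valid(sig):
          abort(401)   # refuse anonymous structured fetches
      return jsonify({"type": "OrderedCollection", "totalItems": 0})

  if __name__ == "__main__":
      app.run()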


Regarding “mess”, it doesn’t have to be. Upon some research, there actually already seem to be protocols that try to address these issues in a reasonable way in spec and implementations (e.g., LitePub[0]).

Regarding dedicated “big players”, I will just repeat my point: they may still be able to do something but perhaps we shouldn’t make it easier for them, especially if it provides no benefit to an ordinary person (that is: excluding users such as corporate or government bodies, OSS projects, and so on).

If it becomes sufficiently difficult for them to gather intelligence, the effort required may actually be useful evidence in case of a lawsuit—it would show that one side cares about privacy and took measures to avoid being identified, while the other side circumvented those measures.

[0] https://litepub.social/overview


Indeed, and I extend this problem to any data of any value. The more semantically you describe it, the more pathways you create for abuse.


"Decentralized access for everyone, unless it gets popular enough that somebody actually wants to interop with it"

I understand the Meta hate, but joining a very explicitly public and intentionally republishable service and then being unhappy that your data is public and intentionally republishable is bizarre to me.


I think for a lot of mastodon users it's more about being part of a specific ideologically aligned in-group than it is about anything else (this post touches on a lot of stuff that makes overtures to that).

The irony to me is that any chance of relevance for a protocol is obviously going to need big players like meta to sign up (and that's a good thing for the protocol).

A weird set of circumstances might have aligned where meta sees an advantage in being part of a federated protocol to commoditize a threat to themselves (twitter, bluesky, etc.) and still hold a dominant position in quality of the end user clients (which is the only thing 99% of users care about).

It's a little funny a lot of the mastodon hosts are up in arms about this, but not that surprising when considering what they're actually getting out of being part of it (the identity stuff that comes along with being a mastodon user).

I'd guess similar stuff was said during the eternal september era of the web itself - simply being an internet user was no longer an identity that meant something culturally specific.


Threads users are today's AOL users. Better to keep them out of areas where real conversations happen.


Then I fail to see the issue since "real conversations" aren't happening on the fediverse either.


What do you mean by this? There's over a million active users including quite a few high profile people like Cory Doctorow.


There's dozens of real conversations happening. Dozens!


You are 100% right that Threads users are today's AOL users, and that's a good thing. If Threads actually federates, it will be another Eternal September [0], and we need that for ActivityPub to truly thrive.

[0]: https://en.wikipedia.org/wiki/Eternal_September


> If Threads actually federates, it will be another Eternal September [0], and we need that for ActivityPub to truly thrive.

Please define "thrive" in this context. Do you mean DAU/MAUs? I know, perhaps, a couple hundred people and respect the opinions/ideas of perhaps a few dozen others I don't know.

Those are the only people I care about interacting with and I can interact with them just fine without Meta's "offerings."

AFAICT, ActivityPub is thriving. It has an active user base, is under active development and doesn't "productize" its users.

I'm not trying to be a jerk here, I just don't understand how you're defining "thrive" in this context.


ActivityPub is doing fine, but the more people who are using it, the better. We saw this with the most recent migration from Twitter to Mastodon about 6 months ago. We got more users, and that brought more developers, bug fixes, enhancements, and new ActivityPub projects.


>ActivityPub is doing fine, but the more people who are using it, the better. We saw this with the most recent migration from Twitter to Mastodon about 6 months ago. We got more users, and that brought more developers, bug fixes, enhancements, and new ActivityPub projects.

You won't get any argument about that from me.

That said, what's needed is more instances, not a single instance with a hundred million users.

What's more, how likely is it that the Meta dev folks will want to share bug fixes/enhancements/etc? And even if they did, how many of them are there?

I'd rather see 100 new Fediverse instances of a wide variety, with a bunch of different folks contributing than a Meta monoculture.

That's not to say I object to Threads. AP is open source. Meta can use it or not, federate or not, as they choose. And if they actually do federate and bring new devs and instances along with them, that's great!

I just don't think we need Meta for AP to thrive. It's been doing just fine so far, and will continue to do so with or without Meta.


Yeah, I wouldn’t count on Meta contributing directly. I guess what I mean is, out of those 100M users, some will be programmers who get jazzed about ActivityPub after using Threads, and they’ll start up hobby projects or contribute to existing projects like Mastodon or Pixelfed.

I’m 100% with you that more diversity is needed, but if Threads actually federates, I’m optimistic that it will help feed the broader ecosystem.


Surely you, Foss User, are aware that FB et al don't just "republish" your voluntarily published info as per GP - they doggedly track you around the internet and the physical world, 24/7, without consent, storing, profiling, and reselling you to advertisers.

Example: download the supposedly privacy-focused app pCloud on your iPhone, start it up, and check what IPs it's hitting. That's right, it's hitting facebook tracking servers.

This is not a tribal ingroup club thing. It's a "fuck off, megacorps" thing.


Yeah, this article reads more like a critique of the way ActivityPub and Mastodon work. None of this is particular to Meta.


The odd thing is that even the biggest fedi promoters don't seem to get it.

I thought "mastodon.social" was based in Germany, heart of GDPR country but there is no consent theater, no harassment by cookie popups, certainly no controls over data.

I really don't mind, but there is some serious cognitive dissonance there.


Because almost none of that is actually required if you are not collecting data outside of the actual usage of the application.


People are documenting their own personal lives without any protections. You can delete your account and the system will circulate a polite notification that other servers should delete that information.

If you are a "sexworker" (sic) and you doxx yourself tough luck.

If you reveal your mental illness through the things you write about and the language patterns you use tough luck. (Pro tip: machine learning algorithms can read your social media posts and psychodiagnose you better than the psychiatric nurse practitioner you'll struggle to get an appointment with.)

People get these spam messages inviting them to play games that ask questions like "What was the name of your first pet?", trying to gather answers that can be used to break into their bank accounts. Even if you didn't have a tendency to be paranoid, maybe you should.


What does any of what you just said have to do with GDPR requirements?


If this is right

https://github.com/mastodon/mastodon/issues/7280#issuecommen...

#1 is "consent theater" which I am not noticing, maybe I missed it. Even if I give consent to one server am I really giving consent to any other server? Can any other server be bound to my agreement with the first server? #2 is a "polite request" and not a guaranteed property of the platform. #3 seems to be satisfied.


Fedi promoters are traumatized from the Mastodon explosion of the late 2010s that forced them into picking one of the revolutionist-terrorist, anime-loli, or trans-furry factions to support, of which the last one is the only less-than-seriously-considering-self-harm choice for most. It still must be full of pain and a giant source of self-contradiction.


There's no consent theatre because mastodon isn't doing anything that requires consent under the GDPR.


> joining a very explicitly public and intentionally republishable service and then being unhappy that your data is public

Comparing the sheer amount of data that Meta/Facebook vacuums up to the privacy practices of similar apps is instructive.

https://www.wired.com/story/meta-twitter-threads-bluesky-spi...


IMO the problem with Facebook is the private data they vacuum. If you publicly post the data on an open network, I see no problem with them taking it.

In practical terms, Facebook is actually quite tame compared to any other malicious actor who can get the same data. FB just wants it for ads and processes it in aggregate (most is never seen by a human), while other malicious actors might actually target you personally.

The main issue is that you shouldn't post anything on a public, unauthenticated network that you wouldn't want random, potentially-hostile actors to see.


This feels like another instance of seeking a technical solution to a legal problem


People seek mitigations to problems when solutions are not available.


This reaction has shades of nimbyism: Yes we care about climate change + housing affordability + are in favor of increasing immigration. No we can't have those evil developers building big apartment buildings in my neighborhood! How dare you insinuate these things are related in any way!


Just being pedantic, but none of that first sentence is requisite for NIMBYism. I know plenty of NIMBYs who disagree with the first sentence.


They're not unhappy about their data being intentionally republishable. C'mon. On the contrary, they are saying that when the product gets successful, they will pull the rug. Like facebook did before, and like twitter did with the API and now again.


Even if this happens it won't make any difference to how these instances operate, will it? It'll just reduce traffic to them, right?


People are complaining that Facebook is, intentionally or otherwise, going to suffocate the baby in the crib, before it has a chance to grow. If you provide a way to see Mastodon content without actually doing the work of joining Mastodon, nobody will join Mastodon except the people who have strong ideological reasons to join Mastodon, which accelerates the problem.

It's the same effect as any platform that tries "free speech" invariably becomes a nazi echo chamber, because the only people who WANT to use the less popular system are those that CAN'T or REALLY REALLY REALLY cling to their ideology.


The original idea of having many small-sized instances is already failing, even without Meta. Small instances are unreliable (they quit/shut down), have major sync issues (not seeing all replies, boosts, etc.) and have a tendency for too restrictive moderation and defederation.

So indeed, most people (normies) will naturally flow towards larger and more mainstream instances. It's already tilting in that direction and actively encouraged in the signup process.

As these instances grow, they will simply have more disagreeable posts (from the perspective of the ideological instances) leading to even more defederation, hence the split will become ever harder over time.


In the same sense that Google didn't largely kill off the RSS ecosystem with Google Reader and Microsoft didn't stagnate the browser ecosystem for a decade with Internet Explorer, sure.


The death of RSS preceded the death of Google Reader, no? (Although it may have hastened the last act of its death.) Not sure I understand the comparison to IE.


There are multiple factions and one is definitely concerned about the data sharing - they're the ones who are talking about defederating from any server which in turn is federated with Meta (i.e. mastodon.social etc.)


Pulling the rug is a daily event in the Fediverse itself.


"You didn't invent the perfect solution so you aren't allowed to complain about the faults of your imperfect reality"


I stopped reading the article as soon as it criticised LibsOfTikTok, arguably one of the best and funniest social media accounts.


Same - when they start comparing accounts like that to Nazis, I know the article is trash.


A million years ago there was P3P, Platform For Privacy Preferences, which pages/resources could use to declare what kind of sharing/privacy rights surrounded a document. https://www.w3.org/P3P/

The working group was closed after browser implementers failed to express interest. As I recall, Microsoft Internet Explorer was the only browser participating. https://learn.microsoft.com/en-us/openspecs/ie_standards/ms-...

I could be mistaken/misled about the purpose here. But this idea of having the data be able to advocate for itself what rights other people have seems semi-obvious. It won't prevent abuse, but the age of ultra-legalistic DMCA corporate hawkishness seems largely to have won above almost all others, in most arenas, and this idea of sticking a "you can't do that" label on stuff thus seems like a pretty obvious first-level defense. One that big data warehouses & big companies in particular probably couldn't violate & keep under wraps.
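
For the curious, P3P's most widely deployed piece was the "compact policy": a terse HTTP response header of policy tokens that IE would consult when deciding how to treat a site's cookies. A rough sketch of what setting one looked like (the specific tokens are illustrative, not a recommendation or a statement of any real site's practices):

  # Illustrative only: attaching a P3P compact policy header to a response.
  # The token string is an example; real sites declared their actual practices.
  from flask import Flask, Response

  app = Flask(__name__)

  @app.get("/")
  def index():
      resp = Response("hello")
      resp.headers["P3P"] = 'CP="NOI DSP COR NID"'   # compact policy tokens
      return resp

  if __name__ == "__main__":
      app.run()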


I know the main image in the article claims Meta scraped their posts and updated their profile, but is it not feasible they used the same email address or phone number they use on IG / FB and Meta just filled in the missing blanks using information they already have? Which, mind you - Facebook buying IG was under the premise that they would NOT merge IG and FB, but they've been doing that for a while now; they are arguably already joined at the hip.


1. The renaming thing happened to me, and it wasn't that Facebook had scraped something and made some super AI decision; it's that one of the things you can report people for on the platform is using an alias, because it's against T's and C's, and that form (last I checked) let you identify the actual user.

2. Said well elsewhere, but it's daft to want to hide public info from a subset of users on a protocol designed to distribute public data.


Meta should use an event loop to avoid blocking Threads


Whoever wrote this needs to be flogged. So melodramatic...


'I had a FB account as Mistress Matisse, but FB scraped my legal name from somewhere else and then changed my displayed NAME on my account without notice/consent.'

Who gave FB permission to conflate two different identities?


The legal right for a private entity to do most things to its private property.

The law says they don't need permission to do something like that, regardless of any morality or decency issues that causes. The law is often not aligned with morality.

Most people seem to not really get this, even when they objectively know it, or are otherwise unable to imagine what could go wrong, because doing so basically requires you to be the unhealthy kind of imaginative and paranoid. "What if facebook doxxes me and changes my display name" SEEMS like it should be an insane paranoia, because the human brain isn't equipped to handle extremes of scale and bureaucracy like this.


I didn't ask if it's legal.


Probably in their TOS, where they give themselves the right to do anything.


The best way to handle it is to make a "minimum viable" account and do absolutely nothing with it, ever, except login and logout annually. Set up a spam filter to trash every single notice from the company.


Can you help me understand why this is preferable to never joining in the first place? What is the goal? Controlling the entry for my email/auth of choice?


It is a mitigation for identity theft and slander.

https://news.ycombinator.com/item?id=26931894


Interesting. Thank you for the link.


"Privacy" and "fediverse" are like water and oil: they don't mix.

Meta would have no more (extra) access to Fedi posts than a large Mastodon instance like Mastodon.social would have.


> "Privacy" and "fediverse" are like water and oil: they don't mix.

I'm not sure I agree, privacy can be achieved via anonymity. Use a VPN, block cookies, change usernames, etc


Meh, I'm posting on a public forum, I don't consider any of it private. Anyway, they're not going to do the Fediverse, I'm 100% positive at this point. There is no benefit to them anymore. Nobody wants them there, and their target user doesn't want the complications inherent to the system. I love Mastodon and Lemmy specifically for these reasons. Go there. Forget this nonsense. It's a beautiful place to be.


I don't see why people think this is the issue it is: the community, at large, will refuse to federate with Threads, and then anything that Meta steals is just a simple DMCA away.

And guess what happens when Meta refuses those DMCAs? You go to the networks they peer with. You don't screw with tier 1 transit providers, not even Facebook has some magical power here.

This isn't, and was never, about privacy. It is 100% about Meta stealing data and displaying it so they can pretend people are using their product, so it looks active, and then they can tell their shareholders it was a successful launch. Nobody in the Fediverse wants to help Zuck profit from people's hard work that they didn't donate to him (ie, post on Facegram and thus license it to him).

The flip side of this is also, how will they moderate data that isn't theirs? They will have to unfederate from certain servers and users, thus solving the problem.... and if you've seen the fediverse, it's 20% trans catgirls who code in Rust, are part of a polycule, wear programmer socks, and have spicy opinions about niche Linux distros, all of which are persona non grata on the Instabook platform.

The catgirls are going to save us from Zuck slathering the Internet in Sweet Baby Rays. This wasn't the future I was expecting.


I think you're right that DMCA will be effective but my guess is that meta will actually be able to federate just fine. As for moderation, they're already the best at it. They can simply allow or block federated content as they choose, just as easily as their own content.


But the content doesn't cease to exist... it's just going to tell people "hey, join this better non-Meta-branded Fediverse server, and look at all the banned content you want! And now Meta can't ban you for having opinions about things!"

They have done the opposite of a moat... they built a bridge across the moat that is around the prison island everyone is trapped in.


It's like suggesting gmail would fail because they have spam filters. People actually want moderation. As long as it's mostly good or at least consistent, I expect Meta to find success. I also expect a wide variety of more lax/niche aggregators to also find success.

Meta's moat will be that they will be the largest home domain. They'll have the most user data and they'll have a secured user base even if the fediverse collapses.


In theory it shouldn't be hard to block Threads. If they're only using one domain for all their users it's trivial to block it (a rough sketch of such a check is at the end of this comment).

But privacy is not the issue with Threads. The issue with Threads is that they're going to attempt to destroy the Fediverse through standard Embrace, Extend, Destroy tactics.

You see this with Bluesky as well. The point is to interoperate when it's in your interests and then break interoperability when you have enough of the audience. Thus, thereby capturing the lion's share of the audience.

Just wait. Threads will soon have a 'new feature' that only works with Threads and that does not work on other Fediverse nodes. Then they'll try and poison the standards bodies working on ActivityPub. They could increase the velocity of new 'features' to ActivityPub so fast that unpaid OSS developers couldn't keep up. Like Google and that cartel do with browsers. Eventually Meta and maybe a couple other large players will control the standards, or at least make it obtuse enough to prevent new entrants. This playbook is tried and true.
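
Coming back to the first point about blocking by domain: a defederation check really is mechanically that simple, as in this rough sketch (the blocklist contents and helper name are mine, not any particular server's code):

  # Rough sketch of domain-level defederation: drop inbound activities whose
  # actor lives on a blocked domain. Domain names here are examples only.
  from urllib.parse import urlparse

  BLOCKED_DOMAINS = {"threads.net"}

  def is_blocked(actor_url: str) -> bool:
      host = urlparse(actor_url).hostname or ""
      return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

  print(is_blocked("https://threads.net/users/someone"))    # True
  print(is_blocked("https://example.social/users/alice"))   # False

The hard part isn't the check; it's the social question of which instances choose to apply it.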


This feels like irrational fear. The subversion of the original Web took decades to happen, along with a lot of complacency, a lack of reflexes, and the moral degeneration that allowed surveillance capitalism to become hugely profitable.

For sure Meta cannot be trusted to be up to anything kosher especially since social media tech is close to the money spinning core of the Death Star.

But what "stolen" audience are you worried about? The existing million or so fediverse users that will be lured back into the lethal embrace of the move-fast-and-break-things brigade? Future fediverse users that cant tell whether they are joining a surveillance apparatus or, e.g. their local community instance? Threads is currently cannibalising Instagram in the hope, pressumably, of grabbing some pieces from the decaying corpse of Twitter. All quite morbid affairs that dont have overlap with the migrants escaping to build a new life in the fediverse.

The issue of subverting the fediverse standards is more serious - in principle. But the tangible threat is not clear (to me at least). E.g., the protocols are a low-level, minimum interop standard; they specify nothing about how server platforms can (ab)use their users. This is all down to implementations.

In any case, if you don't want corporate control of a standard, make sure you don't take any corporate money, and if they insist on joining the fediverse party, give them one vote like every other solo fediverse pioneer.

The fediverse is being noticed. That's a good thing. Savvy PR by fediversians could spin Meta's "interest" in the project to open doors that they could not dream of. Granted, PR and marketing are not the fediverse's strong point. It's better this way, even if it makes the job of adoption harder. But let's not get scared by shadows.


I’m surprised that a followers-only post author's information would be available to the followers of the original author's followers. I would think that a non-public discussion started by one account should be nonexistent to anyone without access to follow that account.


If Mastodon blocks Threads, I don't know what to think anymore. Mastodon should be open to all; that's the whole unique selling point, not "no federation with US companies" and so on.

This will make Mastodon pretty much useless if it can just block networks...


Hunting down your personal details and publishing them is a crime¹, isn't it?

1 - I mean in the US, which is where Meta really cares. It's probably a crime in most countries where Meta has revenue, but that won't send anybody to jail.


> Hunting down your personal details and publishing them is a crime¹, isn't it?

It's a crime to obtain public data published on a public network built on a public protocol explicitly designed to share data? Isn't that the whole raison d'être of Mastodon?


> obtain public data

That's not what I said, and not what's in the article.


The story is almost certainly untrue or a misunderstanding. Facebook has no reason to scrape individuals' personal information and then forcefully update their Facebook profile.

There are various processes at Meta that do require identification to be submitted, and in some cases that information will be published. For example, to be verified on Instagram you must have your name published. Likewise, certain Facebook pages must publish their operators' identities.

Most likely the person in question submitted their identity documents to Facebook (perhaps their account got locked) and they didn’t realise they were agreeing to that information being put on their profile.

The concern is valid — Facebook has information users might not want public — but the cause isn’t nefarious. Facebook is not finding an anonymous sex worker's identity and then intentionally outing them.


>Facebook is not finding an anonymous sex worker's identity and then intentionally outing them.

They are?

https://www.thewrap.com/facebook-sex-workers-outed/


> Facebook has no reason to scrape individuals' personal information and then forcefully update their Facebook profile.

Collecting personal information is central to Facebook's business model. Facebook's policies mandate legal names.

> Most likely the person in question submitted their identity documents to Facebook

Most likely this person would remember that.

> The concern is valid — Facebook has information users might not want public — but the cause isn’t nefarious.

Surveillance capitalism and legal names policies are nefarious.


You are free to not use Facebook and Instagram. I have not heard of a case where Meta deanonymized non-Facebook/non-Insta users.


> Threads has rolled out the welcome mat to Nazi supporters, anti-LGBTQ extremists, and white supremacists, including groups like Libs of TikTok that harass trans people.

It's hard to take the post seriously when they make statements like that. From the linked article:

> Fuentes, who claims to have been banned from Meta's platforms, announced in a livestream on July 6, “I signed up for it last night. I made a fake Instagram. I got on a fake Thread.”

I'd guess that among the 100M users who have signed up, plenty of people who are banned, or who post content that gets banned on FB and Instagram, have managed to make accounts too. But I'm sure the same content moderation policies on FB/IG will apply once they start posting; some will get through, but that is far from being welcomed.


Beyond copyright, why wouldn't restrictive licensing fill that need? E.g., "by accessing content on this server, you agree to the terms of service..." blah blah?


So I was going to try Threads but can't figure out how. I'm not interested in using a phone. I'm on a computer. Is there no web version?


There is no web version. Threads can be viewed at a URL, but you can't join, post, or browse via web.

Amazing, eh?


If it uses the ActivityPub protocol, then couldn't you use any ActivityPub client to send/receive messages with it (or else, to deal with the JSON data directly)?


It doesn't use ActivityPub yet. (Or maybe ever)

Also, ActivityPub as I understand it has two separate forms -- a client-to-server mode, and a server-to-server federation mode. When/if they do implement ActivityPub, it will likely be the latter that gets implemented, not the former.
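For the curious, the server-to-server side is mostly plain JSON that you can already fetch yourself from any instance that implements it, via standard content negotiation. A rough sketch in Python (the instance and account are hypothetical, and Threads doesn't expose these endpoints yet):

    import requests

    headers = {"Accept": "application/activity+json"}

    # Fetch the actor document for a hypothetical account on a Mastodon-style instance.
    actor = requests.get("https://mastodon.example/users/someuser", headers=headers).json()
    print(actor.get("preferredUsername"), actor.get("inbox"), actor.get("outbox"))

    # If the outbox is public, it lists the account's activities as more JSON.
    outbox = requests.get(actor["outbox"], headers=headers).json()
    print(outbox.get("totalItems"))

In practice the client mode is rarely implemented anyway; Mastodon clients, for instance, talk to Mastodon's own REST API rather than ActivityPub client-to-server.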


Wow. And this is supposed to liberate us from twitter. Oh, my poor internet, what have they done to you?


"Website? e-mail? What's that? If you want to get in touch with me, or find out what school events are coming up or whether your kids bus is late, or whether there's a tornado that touched down near by, or whether your local ski hill is open ... use [Facebook|Twitter|Threads]"

Sigh.

https://qz.com/333313/milliions-of-facebook-users-have-no-id...

https://en.wikipedia.org/wiki/Internet.org

https://www.wired.com/2016/01/facebook-zuckerberg-internet-o...


My hope is that people will give up on all social media.


Looks like some Mastodon users wanted an interpersonal communication channel and didn't realize they were using a publishing platform.


How would Meta know they were the same person, though?

I'm glad I just routinely obfuscate my online presence by lying on every different platform.


Now that I know the term "garden path sentence" I can accurately spot them.

Blocking threads? Do we need a mutex here? Faster I/O?


You can't exactly keep your privacy if you choose to broadcast your message with a "loudspeaker".


I don't think Meta is ever going to federate in the first place.

What would they gain?


Every single comment on this story qualifies to be in "Shit HNers Say."


[flagged]


When the account "just reposted stuff" with inflammatory claims about children's hospitals, those hospitals were targeted by a deluge of online harassment and phoned-in threats. [1] If Twitter is the public square, then their account is on a massive soap box with over two million followers. What they say is going to have real-world consequences, and to pretend as if they have no blame is ridiculous.

[1] https://www.washingtonpost.com/technology/2022/09/02/lgbtq-t...


[flagged]


Going to call bullshit on that. Share the source of this so-called "illegal gender treatment". My guess is it was, at best, a hospital that had to quickly change practices due to one of those reactionary laws recently passed in Texas or similar. These laws violate people's human rights and should be overturned (and they will be).


Great, so we agree that Libs of TikTok "just reposting stuff" is done to achieve political goals, with success, that would not have happened without the huge spotlight they control. So when they post hateful content targeted at trans people, they have reason to be afraid.


[flagged]


I don't know why you're responding to me then. The post I'm replying to is pretending that "just reposting stuff" is totally harmless and has no real world consequences.


Here's a couple of excerpts highlighting why many LGBTQ+ people see Libs of TikTok as a threat:

"After gaining a large Twitter following in the spring as she baselessly accused LGBTQ teachers of being pedophiles and “groomers,” Raichik began criticizing children’s health facilities earlier this summer, targeting a hospital in Omaha in June and another in Pittsburgh in August. The attacks resulted in a flood of online harassment and phoned-in threats at both hospitals."

(From "Twitter account Libs of TikTok blamed for harassment of children’s hospitals" https://www.washingtonpost.com/technology/2022/09/02/lgbtq-t...)

...

"One former English teacher, Tyler Wrynn, told Lorenz for her piece that he had been harassed, sent death threats and eventually fired after one of his TikToks about supporting LGBT+ kids was posted by Raichik"

(From "How Libs of TikTok Became an Anti-LGBTQ+ Hate Machine" https://www.them.us/story/libs-of-tik-tok-twitter-facebook-i... )

"While the account doesn’t always explicitly encourage followers to do anything, its posts have sometimes led people to harass or physically threaten its subjects. In one instance, a group of five Proud Boys members disrupted a Drag Queen Story Hour at a public library, spewing homophobic and transphobic insults at attendees, which investigators believe was spurred by Libs of TikTok."

(from "Teacher targeted by Libs of TikTok sent death threats and lost his job" https://www.thepinknews.com/2022/04/20/libs-of-tiktok-teache... )


You're the only one making the absurd terrorist analogy. Reposting a person's content in another context with a whole crowd of people specifically there to mock and humiliate that person, many of whom will then go out of their way to personally harass that person, definitely counts as "targeting."


More than harassment, literal bomb threats https://www.washingtonpost.com/technology/2022/09/02/lgbtq-t...

> After Raichik falsely claimed on Aug. 11 that Boston Children’s Hospital performs hysterectomies on children, the hospital received a barrage of “hostile internet activity, phone calls, and harassing emails including threats of violence toward our clinicians and staff,” the hospital said in a statement. The hospital does provide hysterectomies to certain patients over 18.

> On Tuesday, police responded to an anonymous bomb threat at the hospital. No explosives were discovered, and hospital officials said they were cooperating with the police investigation of the incident. “We remain vigilant in our efforts to battle the spread of false information about the hospital and our caregivers,” the hospital said.


I’m starting to think there are lots of assholes using gender and gender issues as an excuse for why they keep getting labeled assholes. There are people genuinely dragged down by gender dysphoria, and then there are people who have nothing else to shift the blame onto except maybe their biological identity, so they latch onto it.


Some people think they are above everyone else and that others should not be able to scorn their behaviour.


What an ignorant take. Social media promotion of "hated" groups can be plausibly blamed for mass murder, including a literal genocide for which Facebook is being sued for £150bn right now:

https://en.wikipedia.org/wiki/Facebook_content_management_co...

https://www.nytimes.com/2018/10/15/technology/myanmar-facebo...

https://www.theguardian.com/technology/2021/dec/06/rohingya-...


That's completely different. Libs of TikTok only reposts very far-left takes to expose their absurdity.


They provide commentary too. They certainly have an agenda. They have targeted specific people and organizations, and also groups of people generally. The article writer needed an example of a well-known social media account that fit this description, and accounts that are worse in terms of explicitly encouraging harassment have already been banned.


Their only "agenda" is "exposing" and making fun of far-left excesses by simply reposting them. I don't think they have targeted anyone in particular. If the things they repost are damning, they were damning in themselves.


> I don't think they have targeted anyone in particular.

Then maybe research before commenting? They do this regularly.

> If the things they repost are damning, they were damning in themselves.

The whole context of the account is to "damn" the things they are posting. Sure, if you pick one of their posts at random it'll probably be something that 99% of people agree is wacky, but come on. They have inflammatory commentary, they target specific people, organizations, and groups, they know they influence politics and society and are proud of it. If you need me to, I can spend the time to prove all that, but it's all to say that yes, they are a good example of a social media account to use in OP's article.


The OP article makes it sound as if they are immorally harassing people, not what they are actually doing, at least mainly: exposing things which are damning in themselves.

An analogy: They raise awareness about far-left excesses in a similar way in which the media likes to raise awareness about far-right excesses.


[flagged]


[flagged]


Just convince other people about the falsehood of the opinion.


tl;dr:

Things you post publicly are public.


not a problem for 99.99 percent of people.


So... Still a problem for 700,000 people. Got it


The only problem is a lack of critical thinking and victimization. You're joining a centralized network; it's pretty obvious.


Isn't ActivityPub specifically about decentralisation?


It's a protocol to centralize decentralized activity. The blog starts with an issue about Facebook's centralization and then goes into an issue about centralization on this decentralized network. It's all very stupid imo. Edit: sorry for my spelling.


What an asinine concern. Don’t want your data on threads? Don’t use threads.


Even if you don't use Threads, when they eventually add ActivityPub support, Mastodon users' data will inevitably be harvested. Instance admins have been signing a pact[1] to defederate with Meta for this reason, in addition to the fact that they don't trust Meta to moderate their instance well enough for it to be safe to federate with.

[1] https://fedipact.online/


> when they eventually add ActivityPub support

What does that have to do with anything? Mastodon is explicitly set up to allow all user data to be harvested. What Threads supports or doesn't support in the end has no bearing on Mastodon having all user data public.


Just like digital entertainment is set up to allow all movies and music to be downloaded for free!

If you don't like it, just don't make movies or music.


This, but unironically. If you are uncomfortable with the idea of a zero-marginal-utility medium distributing your content without your consent, you probably shouldn't make and share digital copies of your work.


I think you would agree that harvesting that information is far easier if it's being literally POST'd to your servers (which ActivityPub does) than if you're going out to scrape it, no? It's the same principle with defederation: either they're going to scrape all the data, or the data is going to be literally sent to their platform.
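Roughly, a simplified sketch of that delivery step (URLs and IDs are hypothetical, and the HTTP Signature that real servers attach is omitted): the origin server POSTs a copy of the activity straight to each follower's inbox.

    import requests

    activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": "https://example.social/users/alice",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
        "object": {
            "type": "Note",
            "content": "Hello, fediverse!",
            "attributedTo": "https://example.social/users/alice",
        },
    }

    # If even one follower lives on a Threads domain, their home server receives a
    # full copy of the post as part of normal federation; no scraping required.
    requests.post(
        "https://threads.example/users/bob/inbox",  # hypothetical inbox URL
        json=activity,
        headers={"Content-Type": "application/activity+json"},
    )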

The point is, the idea that "don't use Threads" solves the problem being presented (your data being harvested) is wrong.


I find it hard to believe they are really going to join the Fedi. With 100 million users on Threads and maybe 2 million on the Fedi, how could Meta possibly benefit? Federating could bring them trouble but no benefit.


I could definitely see a benefit for them from a legislative perspective. By federating with other networks, they're able to signal to lawmakers that they're not _really_ a monopoly, that they're willing to play ball with others.

Also, isn't the 100M user figure disputed, because it's counting existing Instagram users or some such?


It is real sign ups but it is very easy to sign up. Just because you signed up doesn't mean you're going to use it regularly.


The EU digital markets act will require "gatekeepers" like meta to provide some form of interoperability or open access. Supporting activitypub would be a way to satisfy that requirement.


But Mastodon can't possibly comply with GDPR the way it is organized. I mean, Heavens, you can use it without clicking on a cookie popup; they probably owe $70 billion just for all the people who haven't seen a cookie popup already.


It doesn’t need a cookie popup because they aren’t using cookies for non-essential reasons, similar to how it complies with GDPR because they aren’t collecting any data beyond what is necessary for the service’s stated purpose.


I'm assuming this is the point. Facebook can get around things like requests for deletion under the GDPR by sending that data out to the fediverse, and then reading it back in from the fediverse after they've deleted it.

and when the EU complains, they get to throw their hands in the air and say "yeah, you made us do it"


Thanks. I was going to point out that this was an existing problem in the Fediverse (there are instances that are 'unsafe' because they are either explicitly _for_ hate speech or just don't do enough to moderate it) and that the standard approach is to not federate with those instances, nor with any instance that chooses to federate with them. It's not universally popular (some people don't like the idea of 'guilt by federation') but it's necessary if your goal is to prevent your users from coming into contact with nazis.


What stops them from harvesting Mastodon users' data after they're defederated?


I mean, I guess nothing; they could absolutely still scrape data, but that's much more likely to be noticed (rather than sucking in data as part of product functionality), and it's a higher barrier to entry.


The article shows that federation delivers data to Meta even if you personally don't use Threads, but I agree with your point.

If you want to control distribution of your data, don't join a federation designed to distribute data. Trying to blacklist nodes in a graph that you don't control is not a solution.

Information wants to be free; if you post something to a social graph, assume everyone in the graph can see it forever.


Isn't the exact concern here that people avoid Meta properties and for that reason chose Mastodon, but now Meta is sucking that data in?

To me that still seems fair play on a platform that's designed to be open and heralded that way. Not an opinion I hold strongly, though.


I know replying with "did you actually read the article?" is explicitly forbidden on HN but is there an exception for cases where the person who didn't read the article uses a word like "asinine" in their dismissive reply?


Specifically addressed in the article.

"Even if I only make followers-only posts, which aren't public and can't be boosted, if somebody who's following me replies, any of their followers on Threads will see my account name and instance" and also "If somebody on another instance who follows me boosts one of my public or unlisted posts, people on Threads who are following them may be able to see everything I've said in the post"


Isn't this a core way that ActivityPub works, though? Like, this isn't a Meta issue. It is the technical functionality of the protocol these federated services are built on. If you transmit data using the AP protocol, your content isn't private.


Correct.

A lot of the protections in ActivityPub are listed in the spec as "should", not "must". For example, edits and deletes.

I think it should be assumed that when you publish content over ActivityPub it is now public. Any expectation that it is private is asinine, since you are literally publishing it to other servers.
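To illustrate the "should" problem with deletes: when you delete a post, your server federates a Delete activity like the sketch below (IDs hypothetical), and the spec only says receiving servers should remove or tombstone their copy; nothing forces a remote server to actually comply.

    # Hypothetical Delete activity, as it would be POSTed to remote inboxes.
    delete_activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Delete",
        "actor": "https://example.social/users/alice",
        "object": "https://example.social/users/alice/statuses/123456",
    }
    # A well-behaved server removes its copy or replaces it with a Tombstone;
    # a misbehaving or archival server can simply ignore this.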


> Isnt this a core way that ActivityPub works though?

Yep but it counters the "If you don't want your data on Threads, don't use Threads" argument since you can be two hops away from Threads and still have your data appearing on Threads.


Did you even read the article? This is about data going to Threads from people who aren't on Threads.



