Pull not being a good model for two-way communication is going to be the major blocker here, in my opinion. It means people will only see comments and reactions on their posts from people they already subscribe to, because their RSS reader has no way of knowing whether anyone outside that list commented: you can't get notified of content sources you don't already know about, only poll the ones you do.

That's already bad enough. One of the big complaints from people with large followings on the fediverse is that they often can't see what people are saying in the replies to their posts, because those commenters' servers are blocked by their own. Hate, harassment, and one-sided conversations can fester, and often commenters can't even see each other's comments, so people exhaustingly say the same things over and over.

Worse, people with no followers would be effectively muted by default: no one polls their feed yet, so no one sees their comments or interactions, which makes interacting basically pointless. That sounds dispiriting and would probably keep anyone from wanting to use this type of social media. It also creates a catch-22: a major way to get followers in the first place is to interact directly with other people and bigger blogs, to make people aware of you and maybe get some of them interested in hearing more of what you have to say. But in this model you can't really interact until you already have a following, so your main means of getting a following is gated behind needing one!
In the olden days when bloggers walked the earth, emitting lengthy posts over RSS, they solved this problem in two ways:
Firstly, by appending forms to the end of the post where someone could type out a reply that was more likely to be a few sentences or a paragraph than a full-blown essay.
Secondly, by inventing "TrackBack", a standardized way for someone else's blog software to say "hey I wrote some stuff on my blog in response to this post of yours".
Both of these would get appended to the end of the blog post's page as "comments".
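For context, the TrackBack ping itself was tiny: a form-encoded POST of a handful of fields to the target post's trackback endpoint. A hedged sketch (the endpoint and field values here are made up, and a real implementation should also check the XML response for `<error>0</error>`):

```python
# Build a classic TrackBack ping: a form-encoded POST to the target
# post's trackback URL. Per the old TrackBack spec, only `url` is
# required; title/excerpt/blog_name are optional extras.
from urllib.parse import urlencode
from urllib.request import Request

def build_trackback_ping(trackback_url, post_url, title, excerpt, blog_name):
    payload = urlencode({
        "url": post_url,          # the responding blog post
        "title": title,
        "excerpt": excerpt,
        "blog_name": blog_name,
    }).encode("utf-8")
    return Request(
        trackback_url,
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

req = build_trackback_ping(
    "https://example.com/posts/42/trackback",   # hypothetical endpoint
    "https://myblog.example/re-your-post",
    "Re: your post",
    "I wrote some stuff on my blog in response",
    "My Blog",
)
```

The receiving blog would then render the submitted excerpt and link as a comment, which is exactly what made it such an easy spam vector.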
This very quickly enabled the new problem of "trackback/comment spam"; the enduring solution in the world of blogs has been WordPress's Akismet plugin, which is a very centralized piece of the otherwise mostly-distributed infrastructure of RSS-based blogs. I think it's something like $15 a year on top of the $60 or so I pay for my WordPress site on cheap hosting.
"Announced" may be too strong; the link is to an internal email leak discussing this possibility, but "Automattic may be experimenting with selling data to Midjourney/OpenAI" is still pretty close to "Automattic's selling user data". Hell, I can see the positive spin blog post announcing this: "They're also giving Automattic a bunch of free cycles on improving Akismet's spam/ham filters, in exchange for a look at every other comment anyone is sending through Akismet ever. We're aware this is a thorny ethical issue; click HERE for the archives of our mailing list dedicated to this project."
As for Palantir, my inner paranoid says that if the FBI/NSA/etc wants this data, they have some way of getting it, whether or not a deal with a public front like Palantir is involved.
I think it's safe to assume at this point that all public content and most private content on the internet is being fed to both AI and American intelligence services.
Note that RSS is an ill-defined polling protocol. The server emits an RSS file containing the top N pieces of content.

All you can do is poll it at a greater or lesser frequency and hope you don't underpoll or overpoll. (I can easily fetch the RSS feed for an independent blog 1000 times for every time I fetch an HTML page, but should I? What if I wanted to follow 1000 independent blogs?)
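One standard way to soften the overpolling cost (though it doesn't solve the scheduling problem) is HTTP conditional GET: the reader replays the ETag and Last-Modified values from its previous fetch, so an unchanged feed costs only a 304 with no body. A minimal sketch, with an illustrative feed URL:

```python
# Build a conditional GET for a feed: if the cached validators still
# match on the server, the response is a cheap 304 Not Modified.
from urllib.request import Request

def conditional_fetch_request(feed_url, cached):
    headers = {}
    if cached.get("etag"):
        headers["If-None-Match"] = cached["etag"]
    if cached.get("last_modified"):
        headers["If-Modified-Since"] = cached["last_modified"]
    return Request(feed_url, headers=headers)

req = conditional_fetch_request(
    "https://blog.example/feed.xml",
    {"etag": '"abc123"', "last_modified": "Tue, 02 Jan 2024 10:00:00 GMT"},
)
```

This makes a high poll frequency cheaper for both sides, but the reader still has to guess when to poll, which is the real problem.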
With ActivityPub on the other hand you can ask for all updates since the last time you checked so there is a well-defined strategy to keep synced.
There is RFC 5005 (Feed Paging and Archiving), but sadly the world of RSS tools has never been very specification-forward, mostly because the publishers of RSS feeds are even less interested.
ActivityStreams could be seen as a viable extension of RSS (aside from ActivityPub already being based on it), and it does support some simple pagination via its "Collection" vocabulary. Since ActivityStreams is ultimately based on JSON-LD, one could also add seamless querying support to an ActivityStreams endpoint via SPARQL for more advanced uses.
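As a sketch of what Collection-based catch-up could look like: walk the pages back through time until you hit items you've already seen. The `orderedItems`/`next` names are real ActivityStreams 2.0 properties, but the inline pages and the stopping logic here are purely illustrative; a real client would fetch each `next` URL over HTTP.

```python
# Walk an ActivityStreams-style OrderedCollection, newest first,
# collecting items until we reach already-synced history. This is the
# "give me everything since last check" that plain top-N RSS can't do.

def collect_new_items(first_page, fetch_page, seen_ids):
    new_items, page = [], first_page
    while page:
        for item in page.get("orderedItems", []):
            if item["id"] in seen_ids:
                return new_items  # reached content we already have
            new_items.append(item)
        nxt = page.get("next")
        page = fetch_page(nxt) if nxt else None
    return new_items

# Toy pages: "c" is newest, "a" was seen on the previous sync.
pages = {
    "p1": {"orderedItems": [{"id": "c"}, {"id": "b"}], "next": "p2"},
    "p2": {"orderedItems": [{"id": "a"}]},
}
new = collect_new_items(pages["p1"], pages.get, {"a"})
```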
Thanks for the advice and compliment! :D I usually write them out and then read them and use the edit function to insert paragraph breaks after the fact, but I forgot to do that this time lol
Yes, it is a big hurdle. However, I think content discovery is generally a big part of any content platform, way broader than discovering "who has reacted to my content". If you solve the problem of content discovery in this broader sense, then you have already fixed this particular shortcoming of the pull model as well: a service that can inform you about new posts with a particular hashtag can most probably also tell you about reactions to a particular post.
And yes, I do realise that such services will tend not to be really decentralised (similar to the relationship between websites and search engines). But that means the downside is not that you don't get such discovery, but that you'll be reliant on more centralised services for it, whereas in the fediverse you would be less reliant on such services for finding out who has commented on your post (though, as you've mentioned, that will still not be enough).
> Yes, it is a big hurdle. However, I think content discovery is generally a big part of any content platform, way broader than discovering "who has reacted to my content". If you solve the problem of content discovery in this broader sense, then you have already fixed this particular shortcoming of the pull model as well.
Right, but I don't think finding all RSS feeds on the internet that satisfy some criterion, like using a hashtag or responding to a particular post, is a problem that can actually be solved in a principled way. A fundamental limitation of the pull methodology is that you have to know the list of places you're checking beforehand; you can't get content from somewhere you didn't know about. The only way around this would be a crawling and indexing system that regularly crawls the entire internet looking for these expanded RSS feeds and categorizes them according to various criteria in order to poll them. That is both a very high technical investment and has plenty of limitations itself. So in the end you haven't really distributed the work of a social media system more equally after all; you've just inverted who does the work, going from a federated set of servers that push content everywhere to a federated set of servers that pull content from everywhere.
I do recognise the fact that such "aggregators" would be hugely centralised (if not outright monopolised, like the search engine space). However, maybe I'm wrong, but I don't see the federated model succeeding without such services either, so honestly I think of the "need for centralised content discovery" as an independent problem.
I see your negatives as mostly positive. Engagement and virality are inevitably cancerous to any social network, and comment velocity needs to be suppressed and controlled to reduce entropy and limit the degree to which that network can be used by people primarily interested in "getting a following." Any feature (or anti-feature) that makes a platform unattractive to capitalists and influencers and shitposting trolls is a good thing. Discouraging people from posting and commenting is a good thing. Making it difficult to network is a good thing. None of these things need to be impossible, but I do believe there needs to be enough friction to make low hanging fruit and opportunism not worth the effort.
Otherwise everything gets taken over by AI and bots and psychopaths and propagandists and turns to shit.
For me, the peak of decentralization efforts were Beaker Browser [1] and Stealth [2].
But one project didn't make enough money and the author of the other one got doxxed into oblivion, so I guess we can't have nice things.
A peer-to-peer browser has so much potential; I wish somebody else would give it a try. Imagine the possibilities when you can just share content with others, without needing a web server.
Does anybody know whether there's a decentralized (static/generated) blog for ipfs or similar? Maybe that would make a nice starting point.
I actually can't imagine what possibilities a personal web server gives me compared to GitHub Pages or Cloudflare Pages or tons of other super cheap web hosts.
Ease of use / nice design tools? You don't need a new protocol for it, create your web editor and add plugins for common providers. Your website would be accessible from any browser, nicely solving chicken-and-egg problems.
Free domain names? Plenty of existing systems exist, and many of them are actually compatible with existing browsers.
Availability? Any real server will be way more reliable than your home machine that will go to sleep when unused, get updates, etc. And if you say "IPFS", then you still need to sign up with some centralized service to pin your site, so might as well sign up for webhost instead.
Illegal content? P2P exposes your IP, so your local police has access to it. And if it's kind of the content that is not actually illegal, just heavily frowned upon, there would still be plenty of hosters willing to host you.
Unlimited media storage? That could be a legitimate reason, but most people would want to store their video using a dedicated system anyway (YouTube / PeerTube), and modern storage is so cheap that photos are not going to be a problem (Cloudflare's free plan, for example, gives you up to 20,000 files of up to 25 MB each).
> if you say "IPFS", then you still need to sign up with some centralized service
IPFS actually has some built-in caching, so visitors to your site also take part of the load and can serve your assets when your PC/internet is down. All without pinning, purely based on GC timing. So as long as you are online reasonably often and your content has readers, you will be fine. Another idea has been to make bookmarking work a bit like pinning, so that by using the decentralized web you actually archive it (sort of).
How much data can you reliably cache this way, though, especially with the bookmark approach? Even free website providers give you a gigabyte of space or more, and you are competing with them.
How do you handle outdated pages? P2P file sharing is one thing: a file is pretty static, but websites change very often. By the time your file got shared all the way across the planet, you might already be displaying something else. Do they all have to keep track of the original file?
I agree that pull is the best model for posts. For comments I still think the best model is to push them to the post author, and let the post author moderate. This way people who can behave civilly can get an initial audience by writing comments, and having a link to their blog in the profile, with no intermediaries involved. If someone doesn't like the moderation, they're always free to write a comment-as-post, where they can write anything they want but have to take care of distribution themselves.
What if someone posts a dangerous lie? The post author is going to delete any comments that expose the lie. How would comments-as-posts be sufficiently linked to the original post so the lie can properly be exposed to as much of the audience as possible?
What if somebody posts a dangerous lie on a centralized platform and blocks everybody who criticizes it? Even if the platform doesn't allow deleting comments, you can get effectively the same result with just a block.
A mute-only social media platform would be doable technically but it would likely involve tons of spammers replying to large accounts. Think about how unusable the replies to an American politician's tweets are on Twitter because the US courts have ruled that it is unconstitutional for politicians to block people on social media.
The best solution to this problem I've seen so far is Community Notes on Twitter where a crowdsourced fact check is directly pinned to the tweet allowing users to challenge liars without directly calling them out and getting blocked. More centralized approaches to fact checking don't work particularly well because you end up with biased fact checkers who clearly have an agenda and thus aren't trusted at all by the liar's audience.
the thing is, with no intermediaries involved, more specifically without any proper "search engines" (or "aggregators") involved, the network will suffer greatly from a content discovery problem regardless (as the fediverse currently does, IMO).

with such services present, the problem of comments (and reactions in general) can be solved too. if a poster is ok with engaging with potentially hostile content, they can get reactions to their posts from aggregators that aren't heavy-handed on moderation. if they don't want to bother with such interactions, they can choose safer aggregators. if they want, they can pull reactions only from feeds they are already subscribed to, similar to private posts on twitter.
I'm more thinking about the social side of things. When comments follow the pull model, you get the pingback problem of old: most "comments" will be links to blog posts, themselves stuffed with more promotional links and so on. The only way to avoid it and have comments look like a somewhat nice garden is to allow post authors to say: please write text comments and don't stuff them with links, or you won't pass moderation on my blog. In other words, the push model. In my experience that's the best solution to this particular problem.
I think that's an issue of tools at your disposal to create content (including comments) rather than their distribution model (e.g. pull vs push). If your main method of saying stuff is through a blogpost, ofc you'll end up in the situation you've described. If the tools at everyone's disposal are textboxes that just work, people will use that.
Yeah, this is precisely what I said as well, being able to comment on anything and have that seen even without a follower count is important for making initial connections on a social network.
I’ve also been thinking about this problem for a while. The push model and account portability are definitely the most important dimensions of this issue.
Can’t we just solve this problem with IPFS? And something like DHT tables for protocol-level awareness of state changes?
Your client app would just be responsible for pushing and pinning content to IPFS, scanning state tables for interesting updates, and then sending you push notifications.
Really simple to take your IPFS key with you and switch to a new app.
> In the pull-based system, more work in the end is required (when should Alice query Bob? Also Bob needs to respond to the query, though that's super easy as it is static responses), but the work is better distributed, lowering the maximum amount of work someone has to do (in this case, Bob).
I don't see how that follows. Yes, work is better distributed temporally (since consumers hopefully poll the feed in a randomized way independent of new posts appearing), but the baseline load of these polls will in the end be larger than that of having to do the push fanout per post – at least for people posting less frequently than the average poll rate.
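A quick back-of-envelope illustrating that point, with made-up numbers: pull costs are proportional to the poll rate whether or not anything was posted, while push costs are proportional to actual posts.

```python
# Compare daily request counts for one producer with F followers.
# Pull: every follower polls at a fixed rate regardless of activity.
# Push: the producer fans out one delivery per follower per post.
# All numbers are illustrative.

def daily_requests(followers, posts_per_day, polls_per_day):
    return {
        "pull": followers * polls_per_day,
        "push": followers * posts_per_day,
    }

# A blogger posting ~once a week, with followers polling hourly:
quiet_blogger = daily_requests(followers=500, posts_per_day=0.2,
                               polls_per_day=24)
```

For this quiet blogger, pull costs 12,000 requests a day against 100 push deliveries; the two only break even when the posting rate reaches the poll rate.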
Generally, the push-vs-pull discussion seems like a red herring: For every pull system, we'll want some push mechanism for efficiency reasons in the end anyway; for every push system, we'll need a pull way to catch up with posts potentially missed initially.
To me, the practically relevant differences between Mastodon (push) and e.g. Bluesky (pull-ish, with aggregators) seem to revolve around the actual ease of self-hosting: As the author notes, setting up a Mastodon server seems roughly as complicated as self-hosting email (i.e. possible, but practically almost nobody will do it), but I don't see this as a limitation of the protocol (ActivityPub), but rather its implementation.
Decoupling identity resolution from hosting an entire server would also be a smart move: Webfinger is way too complicated for this; DNS TXT records would be ideal.
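As a sketch of the DNS idea: a domain could publish one TXT record mapping a handle to an actor URL, and clients would resolve it with a single lookup instead of a WebFinger round trip. The record format below (`actor=...` under a `_fedi` name) is entirely hypothetical, not an existing standard:

```python
# Parse a hypothetical identity TXT record such as:
#   _fedi.example.com. TXT "actor=https://posts.example.com/users/alice"
# A real client would first fetch the record with a DNS library
# (e.g. dnspython); here we only show the parsing step.

def parse_identity_record(txt_value):
    for part in txt_value.split(";"):
        key, _, value = part.strip().partition("=")
        if key == "actor":
            return value
    return None

actor = parse_identity_record("actor=https://posts.example.com/users/alice")
```

The appeal is that DNS hosting is already decoupled from web hosting, so identity would survive moving your content between servers.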
yes it is a federated system by design. doesn't mean it is not a limitation though.
to give you an example: in case of email, though it is not that difficult to host your own server (and even if you are a small startup you'll most probably do so without much effort), in the end basically Google decides on "who is an accepted participant in the network". If Google deems you spam, you are spam. If Google deems your authentication emails "promotion", for most intents and purposes, you are "promotion" and your users will miss your emails.
that, I feel, is an inherent limitation of any federated system, specifically one whose design is really inspired by email.
> I don't see how that follows.
as you've mentioned right after.
> work is better distributed temporally (since consumers hopefully poll the feed in a randomized way independent of new posts appearing).
Still, what's the benefit of that? If peak load is a concern, a push-based system can stagger out individual post deliveries just as well, and push gives the producer much more control over load management. If that's not enough, several posts can be combined too.
In a pull system, you are at the mercy of your consumers' refresh rate setting, and for infrequent producers, you'll have lots of wasted cycles fetching nothing new on top of that.
I do agree that pull is simpler to implement (since subscription management is handled entirely on the consumer side, requiring no network protocol and server-side state for it), but in terms of network calls, it's strictly worse.
the system doesn't need to be completely pull-based though. I don't even think most modern RSS readers are fully pull-based, don't they support WebSub?
the main point is to separate publishing and distribution, making publishing far more accessible and decentralised. for that to happen, I guess, from a publisher's point of view, the system should be pull-based. of course we can have hubs and relays adding push-based mechanisms on top to ease the load on the system.
p.s. even in that case, strictly speaking, yes a pull-based system, or even a hybrid one, will always require more work than a fully push-based system.
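The WebSub mechanism mentioned above is essentially one form-encoded POST from the subscriber to the publisher's hub; the hub then pushes new entries to the callback URL as they appear. A rough sketch with illustrative URLs (a real hub will also verify intent by calling back with a `hub.challenge`):

```python
# Build a WebSub subscription request. Per the W3C WebSub spec, the
# subscriber POSTs hub.mode/hub.topic/hub.callback to the hub.
from urllib.parse import urlencode
from urllib.request import Request

def build_subscribe_request(hub_url, topic_url, callback_url):
    payload = urlencode({
        "hub.mode": "subscribe",       # or "unsubscribe"
        "hub.topic": topic_url,        # the feed being followed
        "hub.callback": callback_url,  # where the hub delivers updates
    }).encode("utf-8")
    return Request(hub_url, data=payload, method="POST")

req = build_subscribe_request(
    "https://hub.example.com/",
    "https://blog.example/feed.xml",
    "https://reader.example/callback/123",
)
```

Note the publisher stays almost static here: it only advertises the hub in its feed, and the stateful subscription bookkeeping lives on the hub.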
> the main point is to separate publishing and distribution, making publishing far more accessible and decentralised. for that to happen, I guess, from a publisher's point of view, the system should be pull based.
Oh, I completely agree with that assertion: Static content hosting is much easier than stateful subscription management, knowing which aggregators to post to etc.
I just think that this does inherently put more work on the subscribers, and there's no real way to do it efficiently in a relatively flat architecture (with subscribers directly polling publishers). WebSub helps with pure distribution, but not with aggregation (e.g. to allow keyword/hashtag search), for example.
That doesn't mean it's not worth still designing a system like that (in my view, the benefits are significant!), but I wouldn't call it a performance win.
but I suspect aggregating / indexing / etc. still is going to be the most resource consuming part of a push-based system if you want your content discovery to not be limited to two-way interactions, which means the gains of a push-based system, in terms of performance, shouldn't be that much (I suspect the main gains will be in realtimeyness instead).
Yes, we can easily get more decentralized than the Fediverse, and we have. Both Secure Scuttlebutt and Urbit are peer-to-peer social networking, with rather different takes on what that means. There might be more; those are the ones I'm aware of.
Urbit choosing to artificially limit their "address space" with an NFT sale was a bit of a gut punch tbh and I don't see how this will work out positively for the project in the long run.
Urbit's address space is 128 bits wide, so scarcity is physically impossible, unless there were a need for every atom in the observable universe to have many addresses.
The decision to make the bottom 32 bits valuable was a clever one, but it's led to some misunderstanding of how things actually work. Specifically, planets (a 32 bit address) only own an additional 2^32 addresses, called moons. That leaves 2^64 of the address space "wild"; these are called comets. There are plenty.
If you want a four-syllable address, they're loss leaders from hosting providers, currently. Two syllables you have to pay for, and one syllable is not usually on the market. A sixteen-syllable address is and will always be free.
How this is handled socially, in a hypothetical future where there are more than 4 billion active Urbit users, is a problem for that future to address.
Why does it have to be so obscure? Why planets, moons, etc? It doesn't serve any purpose other than confusing anyone not familiar with it and making those familiar with it seem like the "all knowing" in group.
Moons are the /0 subdomain you get with a /32 aka planet, and comets are the 64 bits of free address space left over in the 128 bits of address width. To create a comet, you make a public-private keypair where the public key hash doesn't collide with owned address space. If you miss, you spin up another keypair and keep going until you get one.
That's really the whole thing. These things need names, as evidence, I present the fact that in IPv4, they have names. To my taste "slash eight" is more obscure than "galaxy", your mileage may vary.
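The comet-generation loop described above can be sketched like this; random numbers stand in for real key fingerprints, and the 2^64 floor is a simplification of the actual ownership rules:

```python
# Toy illustration of comet "mining": keep deriving candidate 128-bit
# addresses until one lands outside the owned range. In real Urbit the
# candidate is a hash of a freshly generated public key; here we just
# draw random bits.
import secrets

COMET_FLOOR = 2 ** 64  # simplification: addresses below this are owned

def mine_comet_address(derive=lambda: secrets.randbits(128)):
    while True:
        candidate = derive()
        if candidate >= COMET_FLOOR:  # a "miss" means try another keypair
            return candidate

addr = mine_comet_address()
```

Since a random 128-bit value almost never falls in the owned 2^64 range, the loop effectively succeeds on the first try, which is why comets are free.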
We need better-than-bittorrent p2p social swarms that are fast, efficient, and massive.
I want, when someone posts an article, to have my local custom filters flag it for interest, schedule it for reading, grab the photos and videos, pull in relevant comments (again filtered, perhaps to my interest-graph peers and highly-ranked dissenting opinions), and never have to step foot on the corrupted, ad-ridden, algorithmically boosted web again.
News websites are trash. Reddit and socials are trash. I want complete unfettered control over the inbound stream. Everything first class from engineering principles. The protocol, the data structures, the ranking, the visualization, etc.
I want data I can easily copy into my notebook, easily bookmark, easily remix and respond to.
The web doesn't cut it, and it never has.
P2P social should be article and media centric. Sharing news, blogs, videos, etc. with first class threaded comments built atop it.
Everything is ephemeral and immutable unless you want to save it or publish a correction.
I echo your sentiment about bittorrent and p2p swarm protocols. Seeing the current fediverse emerge has felt almost anachronistic to me: the tech put to use is, at baseline, older and less resilient than the p2p protocols from the late 1990s and early 2000s. There may or may not be mileage in the Matrix protocol...
The problem with fediverse-style setups is that they tend to become highly centralized, email being a great example. It's often pointed to as a perfect example of federation, but that's naive, because once highly centralized it only slightly reduces the issues of full centralization. So the question is how you can be federated while staying highly decentralized.
This of course isn't important for everything. Centralization can even be good at times. But an example might be like browsers. Technically anyone can run their own. But since the ecosystem is so Chrom{e,ium} based, Google can exert a lot of control over what standards are set on webpages and how to process data. You might have heard about them trying to destroy the cookie. Which in part I feel is great, but at the same time it's not like they aren't still tracking people. Either way, the point is that a singular (or even a few) compan{y,ies} shouldn't get to dictate critical infrastructure. This can actually even lead to fragmentation, especially when we consider that more than one country exists.
So it has to be easy, near trivial, to run your own server, and must also be cheap, if you want a highly decentralized platform. If it isn't, there's always a strong pressure for centralization.
Sure, what's fundamentally difficult about setting up a federated instance? It's just a DB and a backend/frontend service. You can put it in a Docker container and run it in minutes.
Things tend to get centralized because users want to be where users are. Developing another platform doesn't change that. Even if that platform just straight up does not allow centralization, it doesn't stop people from using centralized platforms.
Try setting up a Matrix server[0]. Or Mastodon[1].
This isn't hard for people like you and me, or probably most of HN. But it is difficult for the public. It is hard to set up. It is hard to keep up. It's costly.
I think maybe a good comparison is to PGP. PGP's greatest flaw was that you could reply that you couldn't decrypt the message, and the sender would just resend it unencrypted. Encryption didn't make its way into the mainstream until companies like Signal showed it could be easy, and then it was picked up by WhatsApp and iMessage.
I'm with you that it isn't "hard", but you need to recognize that this is a comparative word and that not everyone is like us.
Those issues just seem like a matter of creating the tooling for it, no? Setting up a blog was difficult when you had to get your own computer hardware, set up the networking, install the software, and run it 24/7. Now it's just hosted somewhere with the click of a button.
Nothing about a federated instance is fundamentally any more challenging than setting up any other service.
Anyway, I think the bigger issue here is that most users simply don't have the need to. I was big into this whole "let's decentralize the internet, screw these big corps". I set up a Lemmy instance, then just forgot about it entirely. What the hell am I going to do with it?
I think partially yes. This was my mention of Signal and WhatsApp. If I've learned anything, it is that UI/UX matters just as much as the backend. Which kinda sucks, but it at least tells us how to drive adoption of better things and lets us compete with existing, objectively worse products. I hope UI/UX doesn't matter more, but it is a fear I have, because I think most people just don't get it: these things are too abstract, and our world is so complex that specialization is a necessity, yet we still delude ourselves into believing our knowledge has wide coverage. This is how I really think about interdisciplinary collaboration and diversity of thought.
On the no side, there are literal dollar costs to decentralization, in that you have to host the server and pay for that bandwidth. I'm conflicted about this because the cost is always being paid somewhere. There are examples of people refusing to buy a product if shipping is not free, even when the total price is less than that of the version with free shipping. So if you see your internet bill go up, you might ignore how other bills go down (healthcare might be a good real-life example of this complexity in evaluation?). I mean, isn't Microsoft distributing Windows updates through P2P now, shifting bandwidth costs to users? Are streaming services getting in on this?
> Anyway, I think the bigger issue here is that most users simply don't have the need to. I was big into this whole "let's decentralize the internet, screw these big corps". I set up a Lemmy instance, then just forgot about it entirely. What the hell am I going to do with it?
And I think this is actually coupled. There is the problem that anything with a network effect naturally shifts towards centralization and thus monopolization. Because the utility of the product is the centralization even if by centralization we mean "people are here." I think this makes a great case for why we would want decentralized services, because it enables more competition as we allow the switching of platforms to not cause such disruptions. But I think this part can't happen until it is essentially trivial to do the setup part, and until there is enough critical mass of existing servers. Maybe we could force running servers or default to that?
Somewhat connected: I was thinking over the weekend about how to actually push technology forward, and about how Google constantly kills products far too early. I wonder if the best move is to reframe our thinking around what Apple is doing with the Vision Pro. The product isn't great, but they push it because they know people will buy it for the name brand. They can't fix all the problems with the system until enough people are using it, but shipping out samples and trials is too expensive. So do we as consumers (especially those with disposable incomes, and especially those with excessive incomes) need to reframe our purchases as partly a vote for technological directions, not based entirely on utility? If you want to see VR/AR become a reality, is it worthwhile to buy a Vision Pro even if you never open the box? Because purchases are the signal companies use internally to gauge product viability, and that metric is often misaligned, e.g. when people actually want the product but the implementation just isn't there yet, so they don't buy it.
Sorry if this ended up being ranty, but I think it is potentially relevant here because I think people do want these decentralized services/platforms given how much negative attitudes there are to the centralized versions. Same with privacy and security. But I think a big part of this might be the economics and the path to viable product being difficult playing as big of a role as the literal technology itself. Maybe I'm off base?
even for me, as a long-time software engineer (who worked devops before cloud and docker times), it's not "hard", but it is "non-trivial" and can get "costly", while publishing my static blog is waay waaay easier (and hence doable).
I mean, look at email again: how many people, even technically proficient people, set up their own email server? we've mostly just accepted google's dominance in the area, and are ok with them even leveraging it to impose sometimes ridiculous rules about who can send email to whom and how it will be presented. why should the fediverse end up differently, if the basics are quite similar (if not a bit more complicated)?
To be honest, I don't really think there's much value in social networks for humanity. In theory, they can do something good and some people see only positive effects, but it seems that the combination of pseudo-anonymity with algorithms inevitably brings out the worst in people.
Social media is the most important counter measure to propaganda. Before social media, we had a one way stream of information from the media outlets without much opportunity to question, discuss, and debunk. The fediverse resolves the problem of centrally controlled algorithms and censorship that manipulate people's speech depending on the moderation rules of individual instances.
Your first sentence is the exact reason why propaganda thrives in social media. There's an understanding that fellow people are somehow more trustworthy than Big Media. The part that's not considered is that people on a whole are not able to understand every intricacy of problems out in the world - but they sure do love sharing opinions about every problem as if they do understand.
Given this, easy-to-digest messages are easily amplified through social media and every complex detail is withered to nothing. This is why I stopped using Lemmy. The larger communities (read: the only ones that get any posts) ended up being a worse echo-chamber than every other platform due to people repeating the same simple concepts ad nauseam.
All information can be considered propaganda as defined by Edward Bernays in his book Propaganda. Yes, groups game social media to propagate information. Yes, centralized algorithmically controlled platforms make it easier to manipulate conversation. Yes, you must weed through the chaff of useless opinions. Yes, echo chambers are often reinforced.
If topics are openly discussed, untrue narratives generally dissolve. What better tool do we have?
100% agree. I think Reddit is a huge propaganda machine in itself. The most ironic thread I’ve seen was a headline about how Republicans were more susceptible to propaganda, with every comment of course being 3-10 words amounting to “oh yeah so true, they’re terrible”. Anyone that actually read past the headline though could see the article was total nonsense.
I suppose it may be harder to control since a central authority can’t publish anything they want.
At one point I heard someone talking about how people have transitioned from reading articles to only reading the headlines. Your comment reminds me that many people do this for most of their news. And I'm guilty too. I usually read the comments on HN before the article, if I even read the article at all.
Very interesting shift in media consumption, and it may help explain the sensational/disconnected headlines.
On the contrary, social media is a huge target for propaganda
Special interest groups have been around forever (see Cryptome's article "The Gentleperson's Guide to Forum Spies", which outlines some useful tactics). These groups still exist (eg: the JIDF, or ShareBlue under whatever name it's calling itself these days, to name a couple)
Platforms themselves also engage in tactics such as shadowbanning their users (Reddit and Twitter were both in the news for this at least a few times), or using a curation algorithm to show you what it thinks is relevant (Facebook's wall, or YouTube comments/chat being shuffled rather than shown chronologically, to discourage discussion and more easily hide censored comments). The idea that social media platforms are not one-way is an illusion
In short, the people who you're talking to are not always real people. Some of the discussions you're trying to have might be getting silenced without you knowing. And the facts you read from a given source could have been invented. The Fediverse isn't exempt from these problems
This is a good point actually, despite my reservations about social media, if you want to get a big message out quickly there really is nothing better than something like twitter.
First, nobody cares what anybody on the Fediverse says. It's tiny. Its pinnacle "app", Mastodon, is losing MAU every single day and is now below 1M MAU. Besides being tiny, it's scattered, and discovery and search do not work. How can this mess possibly counter propaganda?
Second, whilst the Fediverse may not have sophisticated algorithms, it very much has censorship and typically way more than traditional social networks. It's basically a collection of far-left misfits that engage in constant defederation wars. You can't even post a photo of a meal because somebody will be "triggered".
A good discussion between two people can enhance their understandings. Size/MAU is irrelevant in this context. People in my network smash propaganda every day.
I can understand how you had this experience with the fediverse, but it has gotten better for me over the years. Many big instances are wokist with all kinds of rules such as banning hate speech, but you have many such as noauthority.social that are explicitly free speech. I have been running my own instance for years. People can block me, but nobody can kick me off.
Somehow amusing you’re posting this on a pseudo-anonymous social network with an upvote based algorithm.
Although I’d probably agree. Even tightly moderated communities who fancy themselves as “intellectual” and “rational” like this one are prone to bizarre group-think and emotional manipulation.
But that’s also just humans in general. What you might call bringing out “the worst” in people, might actually be the best we can do given our biology.
I tend to sort of agree with this. In my opinion, stuff that's more real-time, ephemeral, one-to-one, and focused on closed groups below a certain size, like IRC or Discord, or stuff that is one-to-many like modern social media but much less visible and networked, like the classic blogosphere, tends to be much healthier and, in the long run, more rewarding than microblogging social media like Twitter or Facebook or Instagram or whatever.
I don't think it's necessarily the anonymity, though. Or even the algorithms: the fediverse has no algorithms, and yet in my experience (having been a minor player in some big drama there before I left) it's getting just as toxic and judgy as Twitter, maybe even more. I think it's more that in microblogging social media, interactions and posts are automatically broadcast to a huge audience that doesn't necessarily share any values or social norms. They're immediately discoverable and visible even beyond the people who initially saw them, and they stay around in the zeitgeist more permanently than an instant message would, instead of being ephemeral, in the moment, and directed at one or a few people within a closed community. So every post you make and every interaction takes on a sort of grandstanding, performing-for-the-crowd, dare I say it virtue-signaling (I say this as a leftist lol, so you know I'm serious) tenor. It becomes automatically more adversarial and fake and just weird and distorted. Add on top of that the fact that posts and responses are highly asynchronous, so it's difficult to feel like you're really having a dialogue with a person instead of combatting disembodied words on a screen, and difficult to engage compassionately, quickly correct misunderstandings, and respond to feelings in the moment, and all of that grandstanding and performing for the crowd becomes that much more dysfunctional and detached from actual human social interaction.
Chronological feed + boosting is an algorithm, and it's about as toxic an algorithm as you could get without training on a data set of toxic vs. non-toxic posts.
It's difficult, if not impossible, to implement bulk-level social media algorithms on decentralized networks, because no single node has all the data and views/interactions are largely private.
This is a feature and is perhaps an even more compelling reason to go federated or decentralized than the other autonomy and privacy related reasons. Social media algorithms are cultural lobotomy machines.
A great example of technological limitations improving content is podcasts. Podcasts are one-way, still fairly simple in their distribution methods (RSS and a few major apps but no clear monopoly), and mostly non-interactive. This limits the ability of platforms and advertisers to ruin them, which is why podcasting is still a bastion of quality media online.
it was quite easy for a small team to crawl and index a good portion of the internet, enough to become the de facto gateway (talking about Google).
it was similarly possible for a relatively small team to crawl a good chunk of the available internet and train some of the most sophisticated "algorithms" we've seen on them (talking about Open AI).
if there is an incentive, this problem can be solved. if this was actually a hard problem, most current social media companies wouldn't put so much effort in restricting crawling to force everyone through restricted API access (look at Twitter, Reddit, Instagram or Facebook, as examples).
I make algorithms that filter RSS feeds and other social media content. You don't need a global view of the system at all to do this, you just need a point of view. That is, if you have a few thousand posts and a thumbs up/thumbs down judgement you can train an ML model that will predict those judgements.
With about two days looking at toots I could make a model that shows you nothing but angry toots about politics, or one that removes angry toots (I could take down that keyword filter that currently means I never hear if somebody is having trouble with the transmission in their truck, or that Transnistria got invaded.)
The main reason I haven't developed a social media (as opposed to product/service) sentiment model like this is that it would involve looking at a few thousand angry toots. (1) The reason I want it is that I don't want to read those toots, (2) it would cause me great suffering to look at those toots. Social media moderators at companies like Facebook have been traumatized, it's no joke.
If it was my social network, I'd use that filter to put a brake on selfish angry memes spreading so that the pain of one person reading angry toots gives relief to so many more.
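The kind of filter described above can be sketched with nothing fancier than word counts. This is a toy illustration (not the commenter's actual model), assuming posts carry a simple thumbs up/down judgement:

```python
import math
from collections import Counter

class ThumbFilter:
    """Naive Bayes over word counts, trained on thumbs up/down judgements."""
    def __init__(self):
        self.counts = {"up": Counter(), "down": Counter()}
        self.totals = {"up": 0, "down": 0}

    def train(self, text, label):
        words = text.lower().split()
        self.counts[label].update(words)
        self.totals[label] += len(words)

    def score(self, text):
        """Log-odds that the post deserves a thumbs up (>0 means keep it)."""
        vocab = len(set(self.counts["up"]) | set(self.counts["down"])) or 1
        odds = 0.0
        for w in text.lower().split():
            p_up = (self.counts["up"][w] + 1) / (self.totals["up"] + vocab)
            p_down = (self.counts["down"][w] + 1) / (self.totals["down"] + vocab)
            odds += math.log(p_up / p_down)
        return odds

f = ThumbFilter()
f.train("interesting paper on distributed systems", "up")
f.train("angry rant about politics again", "down")
assert f.score("new distributed systems paper") > 0
assert f.score("another angry politics rant") < 0
```

A real filter would use a proper classifier and far more training data, but the point stands: a single user's judgements are enough signal, with no global view of the network required.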
I have a model that predicts the probability of a headline getting a lot of comments relative to votes on hacker news: some high scoring headlines in my RSS reader right now are:
Why do women commit far less crime than men?
Study suggests anti-Black racism may account for conservatives' negative reactions to jobs requiring DEI statements
Checking a bag will cost you more on United Airlines, which is copying a similar move by American
Everyone seems to forget why GNOME and GNOME 3 and Unity happened (2022)
Three of those are clickbait; the last one is a good HN submission. A social media sentiment model can give a larger algorithm a "superego". There are ways to pursue engagement other than selfish angry memes.
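The labeling step for a comments-relative-to-votes model like this can be sketched simply; the threshold, field names, and counts below are all made up for illustration:

```python
def label_flamebait(stories, ratio_threshold=2.0):
    """Label a story 'flamebait' when it draws far more comments than votes."""
    labels = {}
    for s in stories:
        ratio = s["comments"] / max(s["votes"], 1)  # avoid division by zero
        labels[s["title"]] = "flamebait" if ratio > ratio_threshold else "normal"
    return labels

# Invented numbers, purely to show the labeling mechanics:
stories = [
    {"title": "Provocative culture-war headline", "comments": 300, "votes": 90},
    {"title": "Solid technical deep dive", "comments": 120, "votes": 200},
]
labels = label_flamebait(stories)
assert labels["Provocative culture-war headline"] == "flamebait"
assert labels["Solid technical deep dive"] == "normal"
```

Once headlines are labeled this way, the same word-count classifier idea can be trained on them to predict the ratio from the headline text alone.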
I've got to disagree on this point. I am a firm believer in "democratisation" of anything, including "publishing content that many other people will see".
this is what social media have mainly done, in my opinion. they have made it extremely easy to publish content. the "social" part is just to further lower the barrier: it is easier to quote or comment on something someone else has already said than to post something out of the blue, and features like "like" or "share" let you create content with the push of a button. they have also used other techniques that have no social aspect (Twitter's character limit, TikTok's music and video-length limits, Snapchat's stories, etc).
of course, that means posting and spreading the "worst in people" is also easier (as is spreading spam, etc). this aspect, I feel, has nothing to do with the "social" part of these platforms; any form of lowering the entry barrier would have caused more terrible things to be published and spread (maybe to different extents, but not essentially different).
> this is what social media have mainly done, in my opinion. they have made it extremely easy to publish content. the "social" part is just to further lower the barrier: it is easier to quote or comment on something someone else has already said than to post something out of the blue, and features like "like" or "share" let you create content with the push of a button. they have also used other techniques that have no social aspect (Twitter's character limit, TikTok's music and video-length limits, Snapchat's stories, etc).
It was very easy to publish things on the internet before modern social media.
how, though? I doubt we had an easier method than opening your phone and just typing a few sentences in a text box, or just "liking" or "reposting" something.
Good. The "worst" in people is a part of people. To pretend it doesn't exist and suppress it forever is an insane social engineering experiment.
And the Dionysian night of the Internet is a way better and safer place for it than the Apollonian day of real life.
If you're willing to surrender that outlet and the game theoretic ground of pseudo/anonymity because your feelings are too hurt, I sorely hope whatever totalitarian government is in your future punishes you for your weakness.
Yes, we can. Christine Lemmer-Webber, coauthor of ActivityPub, cofounded the Spritely Institute to work on the next generation of decentralized online communities. https://spritely.institute
yes, I think that'd be a great addition. any publisher can, optionally, introduce a hub for their feed, so people get real-time updates of their content, and even a social hub that other authors reacting to their content can use to notify them (and ease the aggregation of these "reactions"). I suspect that can lead to a situation where there are a few hubs and social hubs that anyone can use, of their choosing.
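For what it's worth, this is roughly what the WebSub protocol (formerly PubSubHubbub) already standardizes for feeds. A sketch of the publisher side, with hypothetical URLs; the actual HTTP POST is left commented out:

```python
from urllib.parse import urlencode

HUB = "https://hub.example.net/"            # hypothetical hub
TOPIC = "https://blog.example.org/feed.xml"  # hypothetical feed URL

# 1. The feed advertises its hub so subscribers know where to register.
feed_links = (
    f'<link rel="hub" href="{HUB}"/>\n'
    f'<link rel="self" href="{TOPIC}"/>'
)

# 2. On publish, the blog pings the hub, which then pushes the update
#    out to every registered subscriber.
ping_body = urlencode({"hub.mode": "publish", "hub.url": TOPIC})
# urllib.request.urlopen(HUB, data=ping_body.encode())  # real POST omitted

print(feed_links)
print(ping_body)
```

Subscribers that don't speak WebSub can still fall back to plain polling of the same feed, which is what makes the hub an optional addition rather than a new silo.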
I think world scale social networks as in an open "town square" are borderline impossible. Which is not that surprising as it's quite unnatural to talk and be seen by the entire world.
However, if you do try it, I still believe a central network is superior. I would opt for baseline moderation (content should be legal) after which any further moderation should be in the hands of the user, not the network. Bluesky has interesting ideas in this area. For example, you can say to always block sexual content, show a warning, or just show it. To each their own.
With that in place, a centralized network is superior: it tends to have excellent discovery and search, no weird syncing issues, it's centrally monetized, it doesn't depend on lots of volunteer effort, and its track record of keeping content online is far better than the alternatives'. That's a lot of benefits to consider.
The only real downside is the algorithms that dictate reach and how they are gamed.
Federated social media is a really bad idea. Having separate instances is fine if you want to carve out a community, it's the federation part that sucks.
It's very resource hungry and still can't manage to properly sync up content, likes and replies.
Discovery and search are very hard, which defeats the entire point of a social network: finding people and content.
Reliability is low as any volunteer may quit on a whim, taking down your content.
Instance moderation is very heavy-handed in that it not only dictates instance rules (which is fine), it also dictates federation, which means you have no say in which outside content or people you can see. The solution is the dreaded "switch instance". Social media users don't know what an instance is; you're not getting it.
The culture on the Fediverse is: connect nobody. Extreme safetyism.
The lack of some type of ranked algorithm is a problem, not a solution.
Reach is a problem. Bigger accounts, institutions, companies, news, sports...get no reach on the Fediverse.
Bottom line: fully centralized social media or fully isolated ones. Federated is the worst of both worlds.
I see your points (though I don't fully agree). Most of these relate to a "central search and discovery service". That doesn't need to be coupled with the rest of the network's functionality (posting content or interacting with content), which I personally believe is better fully distributed (for reasons similar to what you mentioned). We've done this for the internet as a whole (anyone can bring up a website on their own, and search engines will discover and index it, yet the search-engine space is as centralised as it gets), so I believe we can do it for more "social" content as well.
I think there is a huge issue with centralized social media. Why are unelected tycoons in the US dictating what the rest of the world should be allowed to see? What happens when they neglect languages and regions? It can go horribly wrong: https://www.amnesty.org/en/latest/news/2022/09/myanmar-faceb....
Decentralized social media gives us a chance to take back control.
Decentralized social media sounds better in theory, but in practice is less effective.
People are voting with their feet. Or phones, I suppose.
It's like all the FOSS types who yelled about how IRC is superior as millions migrated to Slack and Discord. Your bullet point list is nothing in the face of usability.
it's not about "usability" in general, though. do you think reddit is truly usable?
as a content consumer I would love to be somewhere where there is engaging content and content discovery. I go to mastodon, my feed is empty and the search doesn't work. I bounce.
as a content publisher, I would go wherever the audience is. I go to mastodon, users have a hard time finding me, no one gets my stuff. I bounce.
now I (personally), as a consumer, hate the fact that I need to follow people at least on 3 platforms to get their content proper. as a publisher, I (again, personally) hate the fact that I am forced to at least partition the discussion on my content. but I'd tolerate these pain points for content / audience, because the main role of a social media is to give me content / audience. being nice and not having all these pain points is secondary.
Compared to federated equivalents? Oh yeah, definitely.
That's not to say it doesn't have other problems, but casually following subreddits and commenting on them is super easy, and you don't have to worry about which subs federate with other ones, or where your Mastodon account actually is or anything.
> it's not about "usability" in general, though. do you think reddit is truly usable?
Um, apparently, yes?
Google searches were doing "+reddit" before reddit melted down. The content was super discoverable and searchable. Everything (including discord, slack, and the fediverse) is laughably bad on that front.
People switched from twitter and reddit to ... discord and slack. Mostly because they handle identity and phone apps; apparently searchability and discoverability aren't that important. I'm one of the olds, so I don't get why the fuck people want phone apps to pester them all the time, but apparently the youngs want this very, very badly.
So, yeah, apparently twitter and reddit were really quite usable for the vast majority of users.
I'm a big fan of the fact that reddit's meltdown caused a bunch of people to set up forums again. However, that's a lot of duplicated work for every single forum administrator.
did people migrate from twitter/reddit to discord/slack? that's an interesting phenomenon, if it has happened, that I wasn't aware of. do you have sources I can read up on?
anyway, my point was that "usability in general" is not the key (reddit has famously terrible interfaces, for example). what matters is being really usable for the core functionality it should provide (like having tons of content to discover, and getting your content to tons of other people, in the case of social media).
> did people migrate from twitter / reddit to discord / slack?
Six technical reddit groups that I monitor regularly all migrated. I was kind of surprised as that was 100% of the technical groups I regularly track on reddit. 3 are now on Discord, 1 went to Slack, 2 set up a Discourse.
In terms of "number of people", it's probably not that much. However, in terms of adding "+reddit" to Google searches, it's going to be a massive hit.
- all distributed solutions have too much overhead to perform well for casual users. some, like the now-abandoned ZeroNet, were performant enough for some personal blog hosting, but barely, and still with no "indexing" solution stable enough;
- classic decentralized social networks like Usenet seem to have been abandoned by most people (except for commercial piracy), so most people do not really care enough to take this route if they do not feel something at hand immediately...
Long story short: we "just" need IPv6 with a global address per host AND a "family" domain name (with all relevant subdomains) per home, plus a homeserver (like common domestic "routers", just a bit more open/powerful in hardware terms). That's the base to rebuild "the internet" on a connected-desktop model, where we do not need third parties or a VPS to traverse NAT, and where we do not have to deal with strange long IDs instead of something like phone.bill.myfriendfamily.tld to make an IP-to-IP audio/video call, and so on.
All other approaches, ESPECIALLY those that try to mimic big platforms, will fail.
Thanks, and yes it is, but technically we have had all the pieces for decades, so while it would be a revolution, it's not really something new to be created with uncertain results...
The real revolution is a paradigm shift from the current development model, focused on big IT interests, toward classic desktop-centered development. The needed groundwork is there and battle-tested; the rest is "just" to be done at scale. A simple example: we have email, and almost any human on the planet has an address, at least among those who use computers in one way or another. Today, for 99% of people, mail == webmail, something offered by some third party. We have some desktop MUAs, but they are stuck in the '90s at best. However, we have, for instance, OfflineIMAP, notmuch, and storage that's normally big enough, so all we need to have email on our own iron is a MUA friendly enough for end users, based on notmuch/mu (like notmuch-emacs, mu4e, or neomutt with the notmuch backend), that offers IMAP sync built in, so an end user has only to enter the relevant credentials and wait for the initial download. The rest follows naturally.
Similarly, you buy an IoT sensor? Your homeserver speaks MQTT, CoIoT, etc. All you need is to connect to the sensor via LAN, or via the AP offered by the sensor, and then configure your WLAN. You see the sensor pop up in HA or something similar, and that's it. No need for a third-party "cloud". And so on.
The homeserver is simply a machine that's on 24/7/365, like most home "routers", which are actually embedded appliances; we only need them to be more powerful and open to use by the end user, not only by the ISP. This "center point" offers sync/services to all the family's devices, and that's it. The "cloud" becomes a simple backup/mirror of the homeserver for redundancy, availability, and backup.
that's an interesting perspective. I am personally much in favour of reducing unnecessary interdependence in society, specifically on big corps: google can really screw me up just because they provide an email service for me. questions that come to mind, though:
- why would someone do that? it's not hard, and it can be made easier, but what would be the tangible, low-level gain for people? I can think of examples like safe storage of photos and documents (my old dad already does that; he buys hard disks and stores stuff), easy and safe sharing of stuff in the home (airdrop does this, tbh), and secure smart-home controls. but I can't generalise the pattern to properly understand the underlying benefit here.
- in communications, there is an issue of trust. how can I trust email coming from a new personal server not to spam or scam me? I think that can be solved separately, though; I just can't formulate the solution clearly in my mind atm.
IMVHO, because of material and monetary interests: so far Google has not let down too many people, but horror stories of people screwed by a Google ban are more and more common; "the cloud" is starting to get expensive not only for companies but also for individuals; and for companies, horror stories about "economic DDoS" are starting to be real (a Netlify customer was recently billed 104,000 USD for a demand spike on a single mp3 file, for instance). Once such things become widespread enough, and that will happen soon, people will start to look for alternatives. So far they feel they have no choice, but if the FLOSS world comes to the rescue, the choice will be there.
About email, the horizon is longer: we are starting to have digital IDs, and after decades the idea of a digital signature usable with a smart card is finally becoming the norm, at least in the EU (eIDAS); I think the USA is not much different, at most a few years behind. With "family domain names" we start to "know each other" well enough to filter our contacts safely, and bayesian spam filters plus wisely used mail aliases are more than sufficient for 99% of usages.
Another VERY important thing such a model can offer is automation, a thing 99% of users miss simply because there are no solutions designed for them. Now imagine a world with OFX/Open Banking access mandatorily available to all customers of any bank or financial institution, and imagine a set of FLOSS clients importing all those feeds from different banks and cards into a single place, with auto-categorization of most transactions. This alone is something most users today do not ask for, because they do not even know it's perfectly possible. But with a resurgent desktop model, such things would easily become common, and at that point the big-corp model would collapse under its own technical limits.
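As a sketch of the auto-categorization piece: a handful of keyword rules already covers most recurring transactions. Everything here (rules, category names, descriptions) is invented for illustration:

```python
# Hypothetical rule-based categorizer for imported bank transactions.
RULES = {
    "grocery": ["supermarket", "grocer"],
    "transport": ["fuel", "rail", "metro"],
    "utilities": ["electric", "water", "internet"],
}

def categorize(description):
    """Return the first category whose keywords match the description."""
    d = description.lower()
    for category, keywords in RULES.items():
        if any(k in d for k in keywords):
            return category
    return "uncategorized"

assert categorize("MONTHLY ELECTRIC BILL") == "utilities"
assert categorize("CITY SUPERMARKET 42") == "grocery"
assert categorize("unknown payee") == "uncategorized"
```

A real client would learn rules from the user's corrections over time, but even this static version runs entirely on the user's own machine, with no third party seeing the transactions.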
To me the only question is where to start. The classic answer was university labs, where students start outlining and implementing their own world; given the sorry state of unis these days, that can't happen there, but in the FLOSS world it can. If a not-so-small set of seasoned-enough devs start to say "dammit, I'm tired of the current crap; instead of trying to mimic big-corp platforms, let's make something the classic desktop way", in a few years such ideas and simple software would spread, following the same path by which FLOSS itself spread: nowadays most devs work in *nix FLOSS environments, even if many do so inside proprietary walled gardens, because such comfy and simple solutions grew enough to become unstoppable. The only missing piece is that initial set of devs who want to start the change; the rest is already there...
what do you mean by "content-centric-networking"? perhaps I have some reading to do.
the "allure of large companies hoarding all the data" might be a factor independently though. I personally believe a proper content discovery service is key to any form of decentralisation, and that is, as far as I can imagine, highly dependent on centralised entities amassing and then processing large amounts of data. I mean we sold our data to Google and in return got the current decentralised web where anyone can make a website and get traffic if the content is good enough (or SEO-hacky enough).
The wikipedia article is a decent high level overview: [1]
"Content-Centric Networking (CCN) diverges from the IP-based, host-oriented Internet architecture by prioritizing content, making it directly addressable and routable. In CCN, endpoints communicate based on named data rather than IP addresses. This approach is a part of information-centric networking (ICN) architecture and involves the exchange of content request messages (termed "Interests") and content return messages (termed "Content Objects")."
It seems well suited to a distributed p2p-style network in my mind, but it's definitely not anywhere near my area of expertise.
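A toy illustration of the named-data idea (nothing like real CCN code): content is addressed and matched by name, not by the host that serves it, so any node holding a copy can satisfy the request:

```python
# Toy content store: Interests are matched against named Content Objects,
# so *what* is requested matters, not *which host* serves it.
content_store = {
    "/blog/alice/post/42": b"post body...",
    "/blog/alice/feed": b"<feed/>",
}

def express_interest(name):
    """Return the Content Object for a name, or None.
    A real CCN node would forward unmatched Interests upstream."""
    return content_store.get(name)

assert express_interest("/blog/alice/feed") == b"<feed/>"
assert express_interest("/blog/bob/feed") is None  # would be forwarded
```

The interesting property for social feeds is that caching becomes trivial: a popular post can be served from any nearby cache that has it, without the publisher's server being involved.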
To be more decentralized you could use radio waves outside of the Internet like LoRa to broadcast whatever you need to... maybe it could be combined with the Internet if it's available to increase distance.
Not having any LoRa hardware, I've been imagining just using Bluetooth to gossip updates with people in the same elevator or at the same stoplight... I'd love to build something useful that could tolerate a transport layer like that.
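What makes a flaky transport like that tolerable is the data model: if updates form a grow-only set (the simplest CRDT), merges are commutative and idempotent, so two phones that meet for a few seconds can exchange state in any order, any number of times. A minimal sketch, with invented post identifiers:

```python
def gossip_merge(a, b):
    """Exchange updates between two peers. A grow-only set merge is
    commutative and idempotent, so brief, repeated contacts are safe."""
    merged = a | b
    return merged, merged  # both peers end up with the union

phone_a = {("alice", "post-1"), ("alice", "post-2")}
phone_b = {("bob", "post-9"), ("alice", "post-1")}

phone_a, phone_b = gossip_merge(phone_a, phone_b)
assert phone_a == phone_b == {
    ("alice", "post-1"), ("alice", "post-2"), ("bob", "post-9"),
}
```

Deletions and edits need fancier CRDTs, but for append-only feed updates this is essentially all the protocol has to guarantee.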
One thing that pisses me off is that Google never implemented WiFi ad-hoc networking on Android (reducing the probability of decentralized networks taking off).
But I would also love to see something like this going.
When I work on that project I often consider it as a weapon against the Googles of the world, so it's not surprising that they haven't made things easy. If we want to cut them out of the loop we're going to have to rebuild some of that loop.
Absolutely. We've pioneered portable identity (and links and data) in a decentralised social web in Peergos: https://peergos.org/posts/decentralized-social-media
This is also pull based interestingly, but also has visibility controls being E2EE.
I don't think society needs public, or at least public by default, social media. This is even more true now in the age of LLMs and mass data harvesting.
> I don't think society needs public, or at least public by default, social media. This is even more true now in the age of LLMs and mass data harvesting.
Depends on your definition of “social media.” Is a message board / forum considered social media? Because while they can be problematic, they’re still extremely useful. It would be interesting to think about decentralized Reddit and why lemmy is going nowhere.
That's a good question. I don't think I'm including reddit here because those are moderated spaces on a fixed topic. I consider social media more people posting top level things that others can choose to follow directly. So social media is more like people meeting and talking, whereas reddit is more like an interest-based society meeting and talking.
you don't need a CDN or even DNS to host your blog (though most probably you'd use them). you will still be at the mercy of registries to have an IP address other people can find, but I think, practically speaking, it is not far-fetched to consider this a "distributed system" (for most use-cases, anyway).
This article, and many others that promote a highly decentralized internet, consistently fail to address the vast majority of the population who are not developers or extremely tech savvy.
> This is in contrast to personal blogging, where every Bob can easily host their own (and they often do).
A key word in this sentence is "often". I would argue that hosting your own blog is exceptionally rare at the population level. Most people do not have the interest or skill to accomplish it, so they use services.
Similarly, this author proposes a system where every social interaction could be self-hosted. I fail to see who this benefits, beyond a vocal minority of smart technologists.
I've seen arguments that suggest the bar should be lowered to allow everyone to do more technical things if so desired (self-hosting, domain registration, etc). I just think this misses the point that for most people, these topics are utterly uninteresting and unimportant. Unless the suggestion is to build a decentralized network for a very select few who wish to participate, I don't feel like these types of ideas will ever be relevant.
> consistently fail to address the vast majority of the population who are not developers or extremely tech savvy.
My feeling, after a number of years in the field, is that we massively underestimate users.
We have read all these blog posts about what Amazon and Google do to make people click, and we think the same applies to us.
But we aren't trying to get the last few percent of the world's population to fall into our dark patterns.
We should be optimizing for the ones who actually use it. Enthusiasts. Early adopters. The already-then-50-year-old electrical engineer who showed me Ubuntu 15 or so years ago.
Firefox, Ubuntu, even Google used to have a massive unpaid sales force. I know. I was part of it.
They all decided to take existing users for granted and focus on new users. Firefox "polishing" ux and forgetting that the best UX is the one that we chose ourselves.
Ubuntu going all in on copying Mac: trashing the existing alt-tab solution, moving the window controls for no good reason.
Google prioritizing "intelligent results" and the number of results at the expense of relevancy at every point until not even the double quote and verbatim operator together can prevent them from second guessing what I really meant to search for.
I am tired. And I am voting with my wallet. It is Orion and LibreWolf for browsing, KDE for Linux distro and Kagi for search.
Apologies, I'm not tracking your argument. The first part of your post seems to suggest that focusing on early adopters will lead to mass adoption, but I'm not following the last part, where you indicate you've dumped popular services/software in exchange for relatively niche solutions.
I came to Firefox because it was extensible and customizable and it was a community project I could like.
I am leaving it because the only time Mozilla now thinks of us as a community is when they have fundraisers.
I came to Ubuntu because it was stable, smart and good looking. Innovative while being usable.
I left because they copied a desktop paradigm I didn't like, breaking everything in the process.
I came to Google because they were nice and had the best search results.
I left because I feel they are somewhere between hilarious and criminal and their search results had been tragically bad for a decade already when I finally found a better option.
thanks for taking the time to read through it regardless. I wrote this as a brain dump, and shared it mostly with the intent of learning more about the topic (which thanks to this thread, I already have), so I feel it is ok if it doesn't change the life of billions of people.
the last part of the post, however, is dedicated to assessing whether such an idea would have any real bearing on, as you've put it, "the population", or not. I've tried to list potential benefits to the average user, if such a decentralised platform was ever built. since you are interested more in such evaluations rather than technical contemplations, I'd like to hear your feedback specifically on that part.
I'm not sure I have strong feelings about it, but here are a few thoughts:
1. I think a more universal content aggregation / discovery tool would be beneficial for many people. Google might be the analog that comes to mind. This makes me wonder how such an aggregator/search tool would operate in a decentralized way. Would each user/node be responsible for maintaining its own index? My question to you is how does this differ from current centralized aggregators and search tools?
2. Regarding distribution vs. publication: it's an interesting point that you could separate the two, but I personally believe this benefit would be moot for most people since they would be unlikely to self-publish content. My view is that most individuals will continue to use centralized services, regardless of the underlying protocol, due to a combination of convenience, ease, and cost. Like you identified, unless there is some critical mass of people self-publishing and self-distributing, I'm not sure I see a tangible benefit here.
Also, mea culpa, I sometimes forget that the author might read my response and perhaps my language in my initial post was too strong, I hope I did not offend.
1. I suspect something like Google, indexing feeds instead of websites, and also mapping what is a reaction to what, perhaps. something that makes me a bit suspicious about the potential of this, though, is the fact that we've had nice rss readers for so many years and none have embarked on something like this, although I think there was a huge potential if they could properly index youtube channels / podcasts for example (all already on rss). would love to find someone on the feedly or inoreader teams to ask more.
2. it's not only about self-publication / self-distribution though. this is already affecting normal users, to the extent that it resulted in a few break-aways from twitter (none successful of course), and even the whole twitter management changing and attempting to capitalise on the desire. none of this is enough (or is ever going to be enough) to really force big social media to meaningfully change on its own, but in situations where a stable market dynamic constantly produces disgruntled customers who are still locked into the system without much choice, the change typically ends up happening through regulatory intervention (I mean, that's their job). so yeah, this point might be more relevant from a regulatory perspective than as a direct market force.
and no worries. if I wanted only nice comments I would've only shared the post with friends or on linkedin / threads. the value, for me, is in these discussions, even if the language sometimes gets a bit spicy.
Interesting idea that it might have more impact on regulation than as an intrinsic market force. I might not fully understand the idea or its implications. Seems like it might be worth dwelling on and fleshing out more fully as you continue to investigate this topic.
I'd suggest it's more of a journey, with convenience and usability at the start.
Witness many youtube creatives trying to jump ship over cuts to monetization, changes in monetization rules (e.g. for longform videos), shadowbanning, and so on.
These are secondary, perhaps more subtle, considerations that come to the fore after a while, and illustrate the dichotomy between platform reach and endorsing that platform.
Fundamentally I think we are still dealing with hard decentralisation problems; small-world networks are better understood from a tech perspective.
AT-protocol, zksnarks, DIDs, lattice types and the like are fertile ground for exploration that MAY yield new advantages in this problem space. UX can follow the function.
You are right I'm sorry. I wasn't notified that anything discussed on these forums should be exclusively about whatever "the real world" cares about at least one bit (preferably 8 bits or more I guess?).
No, don't take my comment in bad faith; I was being tongue-in-cheek sarcastic.
Obviously "we" here on HN care about the idea itself/tech ...
Was just saying these things will likely never even begin to rival the Twitters and Instagrams out there, contrary to what some utopists among us tend to think...
no worries, I have worked as a product manager for a few products that "the real world" seems to care about, and I fully understand your sentiment. that's even why I've dedicated the last segment of the post to basically "who cares?" (I suspect some "real people" might since they can get everything in one place instead of having to follow people in 6 different places, but not sure how strong of an incentive that is).
also having had the privilege of observing a few products that have found "real world" usage, I actually believe that "nerdy passion" is key, though of course without considering USABILITY and CONVENIENCE (as you've put it), it wouldn't get far.
They're slowly but surely reinventing OStatus (<https://indieweb.org/OStatus>), which ended with ActivityPub. Surely if they go the same path they will end up with the same conclusions.
- It's "just" JSON and WebSockets. From a developer's perspective, it's incredibly easy to get started and build interesting things with it. (Like the early days of Twitter.) Which leads to...
- Nostr culture is tolerant of bots and other automated workflows. (Unlike current management of Twitter)
- Nostr culture is tolerant of search engines. (Unlike fediverse, or at least the Mastodon part, though I appreciate the reasons for that distinction.)
- Built-in micropayment infrastructure with "zaps" and Lightning Network integration enables a ton of interesting new startup idea possibilities.
One thing I don't like about Nostr, the social network. (Not Nostr, the protocol. Those two things are very easily conflated right now):
- Too many Bitcoin bros. (Although personally I don't mind Bitcoin, a thriving platform should have more than one thing to talk about.)
looks quite interesting, I wasn't aware of that. it seems like a nice hybrid solution to the pull/push problem (relays can push if they choose to, clients just publish and pull).
I should say though, that upon first look, I wish it was built on top of some already existing standard (like rss), as it could get a great head start in terms of content already circulating in the network that way.
Nostr can get complex, but its core is deceptively small: JSON, WebSockets, digital signatures, and a simple Event data structure sent over the wire. Feels similar to how HTTP itself started.
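To make the "simple Event data structure" point concrete, here is a minimal sketch of a Nostr event per the NIP-01 spec, where the event id is the sha256 of a canonical JSON serialization. The pubkey value is a placeholder and the Schnorr signature step is omitted (it needs a secp256k1 library):

```python
import hashlib
import json

def nostr_event_id(pubkey, created_at, kind, tags, content):
    # NIP-01: id = sha256 of the canonical serialization
    # [0, pubkey, created_at, kind, tags, content], no whitespace
    payload = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

event = {
    "pubkey": "ab" * 32,   # placeholder 32-byte hex public key
    "created_at": 1700000000,
    "kind": 1,             # kind 1 = short text note
    "tags": [],
    "content": "hello nostr",
}
event["id"] = nostr_event_id(event["pubkey"], event["created_at"],
                             event["kind"], event["tags"], event["content"])
# "sig" would be a 64-byte Schnorr signature over the id (omitted here)
```

A client would then publish this event to relays over a WebSocket as `["EVENT", event]`, which is about the whole protocol surface needed to post a note.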
Also, it wouldn't be too hard to build bridges or bots that bring other stuff (like RSS feeds or ActivityPub content) into Nostr. One already exists: https://mostr.pub
Sure. Instead of servers, you have friends, and you subscribe to your friends' content, and discovery occurs through the friend-of-a-friend network and "discovery hubs" like the old Yahoo, but for niches (i.e. follow TechDude96 for hardware review aggregation, etc).
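Friend-of-a-friend discovery is essentially a bounded breadth-first walk over the follow graph. A toy sketch (the usernames and follow graph are made up for illustration):

```python
from collections import deque

# hypothetical follow graph: each user lists the feeds they subscribe to
follows = {
    "alice": ["bob", "carol"],
    "bob":   ["dave"],
    "carol": ["dave", "erin"],
    "dave":  [],
    "erin":  ["alice"],
}

def discover(start, max_hops=2):
    """Breadth-first friend-of-a-friend discovery, up to max_hops away."""
    seen = {start}
    frontier = deque([(start, 0)])
    found = []
    while frontier:
        user, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't expand beyond the hop limit
        for friend in follows.get(user, []):
            if friend not in seen:
                seen.add(friend)
                found.append(friend)
                frontier.append((friend, hops + 1))
    return found

discover("alice")  # direct follows first, then friends-of-friends
```

The appeal of this model is that discovery cost stays local (you only crawl a few hops out from yourself), but that is also its limit: anyone more than `max_hops` away, or outside your connected component, is invisible, which is exactly the cold-start problem discussed above.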
Sure. But even slightly less decentralization has mostly proven clunky and unattractive when it comes to two-way social networking. I mean, we all have accounts on bluesky/mastodon/whatever the network du jour is, but it's on Twitter that the posts are, despite its massive quality nosedive in recent times. The centralization of Twitter has massive benefits and almost zero downsides.
it has downsides though. I am constantly jumping between twitter, reddit, spotify (for podcasts) and youtube to see the content and latest updates from the same people. when I want to share something, I have to think about where to post it: is it more technical? then reddit and HN maybe, though on twitter I do have some technical people following me too. is it social / political? then it's twitter, and so on. then I find myself reading and discussing political posts on reddit which are screenshots of posts on twitter, or vice versa. I don't know how much pain this truly is, but the unnecessary walls between these communities are indeed a pain point, both as a consumer (I need to check on people in multiple places) and as a publisher (I have to partition the discussion, to say the least, if not the audience entirely).
as for mastodon / bluesky / whatever, the main issue to me seems to be lack of content and content discovery. which is weird, since every youtuber, podcaster, subreddit, all of HN and medium, etc. is technically on RSS, and even more can easily be put on RSS with cheap bridges. but instead of unifying and enhancing all these existing shared content streams, we've broken off with new protocols and created separate, isolated communities.
Personal instances of the US Post Office are also pretty rare, my dude. You gonna go deliver your mail yourself? If so, why write a letter?
You want total distributed decentralization? Use Gnutella. It's equivalent to giving your letter to your neighbor and asking them to deliver it, and they ask their neighbor, etc, until it reaches a back alley in Algeria, and then that Algerian calls you on the phone, and if you're home, you read the letter out loud live on the phone. Turns out that's a pretty crap way to deliver letters. But it's decentralized!!
People: Stop being so fucking obsessed with decentralization. It's not that great. I'm sure you're all having lots of fun inventing crap technology to solve an already solved problem, but you're just codesturbating.
People tend to forget that we had everything decentralized in the first place, then centralized them for the sake of convenience: Email -> GMail, IRC -> Slack, RSS -> Social Media, Blog -> Medium, Forums -> Reddit and so forth.
And basically, convenience won big time.
These are not mutually exclusive though. We need decentralized services to fight discrimination and censorship. But, I agree that there is also disproportionate obsession with decentralization at the moment.
I wouldn't characterise the history of the internet as having started with "everything decentralised". if I'm not missing something, it was completely the other way around.
the introduction of the general populace to the internet was through internet gatekeepers like aol or msn, with unimaginable convenience (and centralisation). that convenience lost, in the end, to the world wide web, where a ton of single individuals could set up a website and reach other people across the internet. some of the big central giants of today were born out of this transition (and some were instrumental to it, namely google).
something similar happened with smartphones. ios was intended to be a closed-off gatekeeper. people jailbroke it, and apple caved in and opened the system up, allowing anyone to provide services (apps) on it. similar to the www transition, we had a period where you could easily find popular apps made by a single programmer. this decentralisation also basically enabled most centralised social and messaging services of today.
so I see it more as a cycle: in stable times, convenience causes a drift towards further centralisation. when there is a platform shift, established players lose their defensive power, some essential operational costs drop drastically, resulting in a sudden decentralisation: tons of small (mostly individual) players can now enter the market and gain prominence through it. the market then stabilises, thriving players shrink in number and grow in size, and become the convenient, centralised corps of the future, only to be similarly shaken by market forces that allowed them to exist in the first place.
are we close to such a shift now? I think so. the current version of the internet is clearly starting to rot and drown in noise, while we are at the cusp of new heights of information processing and accessibility through LLMs, and also striving to push the boundaries of the information we circulate (AR/VR). but who knows.
I mean, yes, you're missing something: first and foremost, the Internet itself. :) Its sole purpose was to solve the information access problem in a decentralized way. That was the rationale behind the ARPANET project.
Then, DNS came, which is also a decentralized solution to the name resolution problem. BGP routing is also similarly decentralized. World Wide Web itself is decentralized. IRC is decentralized. These technologies predate almost everything you mentioned.
I agree with your sentiment that it's mostly a cycle. I just wanted to emphasize that decentralization has never been a novel idea. We've always had decentralization; it's not the revolutionary new thing it's presented as. That's my point.
tbf I mentioned the www era decentralisation mostly.
but I agree. as mentioned earlier, the fact that decentralisation has happened multiple times before, while convenience was kind of working against it, is actually why I think it will most definitely happen again. I'm even assuming it won't be far off, considering the similarities in some respects to the situation before two other large-scale decentralisations we've observed (www and apps), though of course that is the same as predicting when the next platform shift will occur, and that's not something anyone can claim with meaningful precision, IMO.
I wouldn't. I also wouldn't think about a decentralised email solution, or a decentralised messaging service. doesn't make sense.
as for publishing my own blog though? nah, I'd really rather have a decentralised system where I can host my own blog wherever.
and yes I do realise most people don't care about that, and yes I'm not sure whether they'd care about other benefits of decentralisation either, but it's not like there is nothing there that would affect them.
This schema relies on "third-party aggregators / crawlers / search services" for critical parts like comments and content discovery. They are going to be responsible for things like spam filtering, misinformation removal, and blocking accounts which violate TOS.
And those crawlers _have_ to be expensive - after all, you need to check every participating server to see comments. So instead of everyone running mastodon instance on $10/month machine, it's only going to be a few centralized services doing all the work.
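A back-of-the-envelope sketch of why pull-only comment discovery gets expensive: cost scales with the number of feeds the crawler knows about, not with how much activity there actually is. All the numbers here (feed count, poll interval, fetch time) are made-up assumptions for illustration:

```python
# hypothetical: a crawler that knows about 50,000 participating feeds
NUM_FEEDS = 50_000
POLL_INTERVAL_S = 15 * 60    # assume every feed is polled every 15 minutes
AVG_FETCH_S = 0.2            # assumed average wall time per HTTP fetch

# every feed must be polled every cycle, even the 99% with no new comments
fetches_per_day = NUM_FEEDS * (86_400 // POLL_INTERVAL_S)   # 4,800,000
crawler_seconds_per_day = fetches_per_day * AVG_FETCH_S      # 960,000 s

# ~960,000 fetch-seconds/day is roughly 11 machines fetching nonstop,
# just to notice new comments within 15 minutes of posting
machines_needed = crawler_seconds_per_day / 86_400
```

Under these assumptions the bill grows linearly with feeds known, which is why only a few well-funded centralized crawlers would realistically run it, exactly the point made above.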
fair. but I'm assuming you need such expensive (and as a result, centralised) services regardless. "who has reacted to my content" is only a small part of content discovery; take the case of tiktok, where a strong content discovery mechanism helped it soar through the ranks. I suspect that without such a mechanism the system won't succeed either way (as the fediverse is suffering rn), so it's a problem we will have nevertheless (same as the problem we have with search engines).