1. "Decentralized Web Summit 2018" is happening right now (July 31 - August 2, https://decentralizedweb.net/). I'm looking forward to watching the talks once they're released.
2. This year at JSConf EU 2018, Tara Vancil talked about Beaker (https://beakerbrowser.com), an experimental peer-to-peer browser. It was really fascinating to listen to.
Here is Tara's talk:
I was curious what the difference was between IPFS (which I had heard of) and Dat (which I had not) - they have a FAQ entry on this on their page: https://docs.datproject.org/faq#dat-vs
This is basically a Twitter clone (https://github.com/beakerbrowser/fritter/blob/master/README....), but the difference is that your profile and your posts (i.e. all your data) are stored in your own "archive" on Dat, and the APIs they have written simply load that archive over Dat to get all your details (caching it for performance). The WebDB source explains it: https://github.com/beakerbrowser/webdb
So you can go and edit/delete your posts and profile from the data on your computer, and it is updated in the Fritter application instantly. Posts you make from the app automatically appear in the files in your local archive too.
That is pretty cool!
In light of GDPR this is really interesting and potentially huge - you could imagine a lot of use cases where you are really in control of your data. I believe only you can edit your own archive, since only you have the private key - posts you make from Fritter are written to your own archive, so I think that is how they handle auth.
Question for anyone familiar with Dat: how do I properly secure access to my data? E.g. instead of the Fritter application, what if there was an ecommerce application? I put my home address, paypal/credit card etc data in my own archive that the ecommerce site uses.
How can I make sure that only the ecommerce site can access that? I know in the FAQ it is basically relying on security by obscurity, but is there anything more concrete than that?
It would be nice if asymmetric keys were somehow pinned to a Dat archive, so that I could still edit my archive in plain text and have the application transparently encrypt it with the ecommerce site's public key. Then I could be sure that only holders of the ecommerce site's private key could decrypt it.
Also, are there common "micro-formats" for common applications? E.g. looks like Fritter has its particular format that it uses - is there a defined format for this that would allow me to reuse the same archive on another service? You can imagine the ecommerce one being very useful - no more entering your name, address and credit card over and over!
No, there isn't a pattern for this kind of thing. There is a "session data" proposal that begins work in this direction.
However, there is an implicit attack vector here - if a phishing website were able to trick you into connecting to the address (and key, presumably) of your wallet - well, then they score all the good stuff. Beaker would need a keychain-type thing built in that stores that Dat address and private key.
I think Beaker has a great start and they are grabbing a bunch of great low-hanging fruit. This is the first time I've seen compelling "static" applications. And it marks a return of the read-write browser. (Yes you can figure out how to run a TiddlyWiki locally from Firefox, but Dat lets you create a website right from its start page.)
> Fritter uses Beaker's DatArchive APIs to access the Dat network. This makes it possible to store profiles locally on the user's computer, and to transfer profile information directly between user devices.
Yes yes yes! The user's own computer should be the store of record for their own data, and the data then synced out as appropriate.
E.g. if I have the archive on a computer/server I control and delete it, then bingo - the data is gone and the app can't access my data any more. However, what happens to nodes that are seeding my data? Presumably they'll hang onto it for as long as they want, and there would be no way to "force delete" the shared data, since that would break a lot of the other benefits. The same goes for any hypothetical "prevent seeding" option.
But I'd rather have the social networks of the future offer a DELETE call that at least honest nodes will follow, deleting the content.
We have a long way to go.
I find it a pity that datproject.org is so 'programming-focussed', to the neglect of the Dat protocol itself: standardization, documentation, advocacy and community-building.
Last time I checked (admittedly some time ago) all their libraries were NodeJS/NPM (some vanilla JS, but low-priority, unsupported), and the Dat protocol paper was out-of-date and incomplete.
There is good documentation if you want to build simple apps off of hypercore/hyperdrive, but it lacks depth/detail if you want to go further. Then you'll only find a rather small (though dedicated) community to help you.
Hope they succeed, and also that they start operating less under the radar, so as to find more community help. Maybe with Beaker gaining traction they'll get there.
We created a protocol working group, and the specs are being updated at https://www.datprotocol.com/deps/. But there's definitely still a need to improve docs all around! We have an upcoming grant to support this :).
- joe hand, a dat person
Here I hope that Mozilla's "Dweb" will bring something new to the table. I'll be following.
> This is the first post in a series. We’ll introduce projects that cover social communication, online identity, file sharing, new economic models, as well as high-level application platforms. All of this work is either decentralized or distributed, minimizing or entirely removing centralized control.
I do agree they're making up a term, but naming/phrasing can shift dialogs and the default perspective, and that actually matters. Whether it's a good use of their time is another thing.
Distrinet? Meshweb? Hmm..
"D-net" might have been better.
"Vlog" almost sounds cool in comparison.
But this is just jumping on a bandwagon that's already well on its way, and joining the landgrab over who gets to be seen as an important player in the realm by trying to be the one to slap a label on it. This tactic is a bit lacking in intellectual honesty, IMHO.
So I don't think they're claiming to invent it. I do find it odd, however, that it's pretty much a zero-information lead-in article; it doesn't seem like this approach will hook many people unless they already know what it's referring to in advance.
For the first one, we already have Archive.org.
For the second one, we already have flash drives.
For the third one, well, I'm not sure distributed sites will be a whole lot more stable than traditional ones. Someone still has to be there supporting it.
Archive.org might let you _see_ an old version of a website but it hardly lets you _use_ an old version of that site. Especially if the site in question is a dashboard-like service or single page application. (Imagine google docs, trello, or an amazon service's dashboard.) This is also true with apps. I had no way to go back to the old version of my feed reader app after they updated it and filled it with ads.
Flash drives aren't an option if any of the devices in question lacks a USB port. My phone has no way to accept a flash drive without some sort of adapter that I don't own.
And you're right that _someone_ still has to be there supporting the app. The goal of the distributed approach is that that someone can be any member of the distributed network. It no longer has to be a single central entity that can vanish without recourse.
Note that this isn't some random position they've taken; they also funded things like the Wireless Innovation for a Networked Society (WINS) challenge.
Please don't. Political discussion is needed to determine how we live together.
Technological solutions are just that. They are neutral and can be used for good or bad. If we don't discuss things, they will be used for bad.
Attacks against net neutrality are just a symptom of people who want too much power. If they can't have it through the net, they will try by other means. These people need to be fought.
If you stop hate speech you are effectively censoring ideas. And then you have the challenge of who defines “hate speech”.
If you prevent censorship, you’re allowing hate speech because there will always be people who abuse a free speech environment.
In the 'Dweb' (god I hate that word), the question isn't so much about censorship vs. hate speech, but how the system is robust (or not) to well organised/funded minorities pushing content throughout the whole network.
The web has become centralised meaning there are relatively few targets required to compromise to reach a large number of people. In theory a decentralised or distributed network makes this harder, but in practice...?
It has more to do with the targeting algorithm used. Once you start watching some kind of content they start proposing more of the same kind of content to you. So people see less and less contrary ideas.
And you can't select an option to "give me quality content from domains I've never touched". Instead of discovering random new content and concepts, you get pigeonholed into things you already know and agree with.
However, when you have multiple, smaller venues, it's less of a problem. If one venue filters the communication that occurs within it, people who don't like that can take their communication elsewhere. If another venue hosts hate-speech, people who don't like that can take their communication elsewhere. It's not a zero-sum game.
I think the only way to achieve the above is just that: create an area with like-minded people, who think and speak similar enough things, also derogatorily called a filter bubble.
Freedom is only freedom when it equally applies to people you like and people you don't like. Otherwise, it's a tyranny of your taste, however well-meaning it might be.
(Life is full of this; see more at "First noble truth". Alas.)
If this is not possible, I'm sure the government will find a way to legislate against it, and I'm not sure if I'd want to support it.
I say this as a devout proponent of decentralized technology being the future.
If everyone agrees it should not be visible, no one will be posting it; filters, censorship, and moderation address disagreements.
Further, even if there is a broad consensus that, say, “child pornography” should not be available, there are disagreements over what constitutes child pornography.
It's a very difficult problem. Most people agree that CP is horrible and should be removed.
However, what happens when the media turns a very large majority against certain ideas / content that should still be available, even if people disagree? Who decides what, and how do you enforce it? Who is the arbiter of visible content?
Until this problem is solved, or somehow sidestepped, I don't believe a decentralized web/social system is possible.
I really want to be wrong about this! If anyone reading this has an idea of a solution, please message me!
I wonder if there's a variant of Zooko's triangle for this. Moderated, anonymous, and decentralized?
Imagine that you empower users with the tools to create and manage their own content filters. You allow them to decide what types of content qualify as the "hate speech" they want to avoid, if any. Make it as easy as possible for them to share their filters and understand what is affected.
In such a design any "hate speech" can be defined differently by each group or individual, and "censorship" isn't imposed by a central authority.
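One way to picture the user-managed-filter idea: a filter is just data, so it can be authored, shared, and opted into per client. A minimal sketch (all names and terms here are made up for illustration):

```javascript
// A filter is plain data: a named list of terms its author chose to avoid.
// Because it's data rather than server policy, users can publish filter
// lists, and each client applies only the lists its user opted into.
const myFilters = [
  { name: 'no-spoilers', blockedTerms: ['spoiler'] },
  { name: 'friend-shared-list', blockedTerms: ['slur-example'] },
];

// A post is visible unless some opted-into filter matches it.
function isVisible(post, filters) {
  const text = post.text.toLowerCase();
  return !filters.some((f) =>
    f.blockedTerms.some((term) => text.includes(term)));
}

const feed = [
  { text: 'Big spoiler: the ship sinks' },
  { text: 'Nice weather today' },
];

const visible = feed.filter((p) => isVisible(p, myFilters));
console.log(visible.length); // 1
```

The point is architectural, not the string matching: filtering happens at the reading client, so no central party has to decide what everyone sees.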
In a distributed system this pattern is easier to defend. There is no one company that has to explain why the content is on their servers.
[EDIT]: To be clear, I am not arguing for censorship, but rather pointing out that the way most people see the problem of hate speech is not as one where they come in contact with it, but that it is allowed to exist at all and that others come in contact with it. Maybe the US won't legislate on this and being free from advertisers will be enough, but the EU (and Germany in particular) does legislate on these grounds, so you may find yourself in conflict with a bunch of governments if your solution is that users do their own filtering as they like.
It seems like that's precisely one of the issues the 'dweb' hopes to solve: To prevent certain chunks of society from imposing censorship on the other groups through centralized services.
 https://www.supremecourt.gov/opinions/16pdf/15-1293_1o13.pdf [pdf]
I think this is an overly simple response to a deep problem. We are all affected by what we take in, some more than others (and certainly when we are attacked). Getting people to be unaffected (or less affected) is likely to prove untenable or impossible.
Putting the onus on the recipient also leaves us blaming the bullied. "Maybe if they'd had tougher skin they wouldn't have committed suicide."
Start taking these less seriously. Start building better functional families. You will see these problems will slowly vanish. Happy kids from happy families have extraordinary bullshit avoiding capabilities.
If it is a physical assault we need to engage the legal system.
If not, treat these as mind games. The opponent expects you to behave in a predictable manner. If you react violently or weakly, they win. Censorship is a weak move. It is very much possible that your opponent (possibly, a front for some government agency or a political party) expects you to want more censorship (you can guess the reasons).
Words cannot ruin our religion/race. If they can, our religion/race does not deserve to exist. A religion/race that needs the crutches of censorship is a weak religion/race.
I am not proposing a simple solution here, but your blithe dismissal of online abuse is comical and cruel.
This is a misunderstanding of hate speech. If leaders of a group direct harassment campaigns against another group, that speech hurts whether you "let" it or not. For example when reddit had a "fat people hate" group, they started coordinated harassment campaigns. When the subreddit was deleted, they didn't have a support group telling them that harassment was ok, and they didn't have an obvious place to coordinate even if they wanted to.
You could harass someone with ideas they agree with by spamming or other means to create disturbance in social media or other forms.
Edit: note we're not necessarily taking about the state mandating the censorship, which potentially gets into first amendment territory. We're including service providers and ToS.
I should just ignore the thousand notifications with insults, I mean I can't block them can I? That would be some awful censorship.
Can't believe you actually compared this to a dog not barking at other dogs in another reply further down, how disingenuous.
If all the good people come together and kill all the bad people (maybe put them in special places and call them 'camps'; need to think a bit more about this), then the world will be a paradise without crime. (Like murder, for example.)
There are lots of incredibly racist people out there using email. Email does not need a solution to prevent racist people using it.
You could apply that kind of argument to literally any problem:
- gun violence vs gun control
- war vs peace
- crime vs police
It is too idealistic and ignores the presence of bad actors.
So right, society would just be perfect if we could all think the same way!
Terry Pratchett said it best, "Pulling together is the aim of despotism and tyranny. Free men pull in all kinds of directions."
"If we don't believe in freedom of expression for people we despise, we don't believe in it at all." ― Noam Chomsky
"I disapprove of what you say, but I will defend to the death your right to say it" - Evelyn Beatrice Hall
Since you persist in using HN primarily for ideological battle, which violates the site guidelines, I've banned this account. Would you please not create accounts to break HN's rules with?
Why not treat it as a threat under criminal code, and prosecute it as such? (That's what I think should be done.) What do you gain by censoring "hate speech"?
But that would also be censorship...
If they never have to defend their views then you admit their idea has merit by default.
And yeah sure there are people that are maybe wrong and will not talk with you at all. But in my experience they are a minority. Generally people actually believe what they say they believe and they’re eager to sit down and discuss it.
If you actually talk with people, more often than not their position is more nuanced than “kill all the Jews.” Isn’t that a good thing?
But every time you make even the discussion taboo, you just validate their beliefs. Because if something is beyond even mere discussion, then it must be something true that 'they' don't want you to know about. Or so it seems.
I know the Earth is not flat. Why would I avoid discussing it when truth is on my side?
Which by definition makes it a view they allow a reply to. Refer to my original, now-flagged comment. If they actually respond to your arguments and questions, that's not what I'm talking about, and it's more than you had the grace to offer.
More to your point: people with minority views cannot “allow” for any discussion to happen. The fact that we’re not discussing their issues and rather try to make them go away is the fault of the majority.
WE make the rules and it seems everyone is more interested in labeling people than hearing what they have to say.
And then what? Then they change their mind?
If someone in the street calls you a frog murderer, and you ask why they think you are one, and they say they can see it in the distance between your eyes, and keep trailing you and scream frog murderer, interrupting every other conversation you want to have for the rest of your life, where would you draw the line. And would you offer video evidence of everything you ever did to placate them? What if they just scoffed and said everybody knows frog murderers know how to fake video?
When you say "views", you simply don't understand the distinction I'm making. I have spent so much time discussing with bigots of all stripes in the last 1.5 decades. I don't regret it, it's never totally wasted, especially when it's not just trading insults -- but misunderstandings and ignorance are not the cause; that only applies to those on the fringe, not at the core of something like Nazism. "Show that person is an idiot" refers to someone who would be fazed by that, because their opinions come from their own person and thoughts, because they actually are opinions. (By the way, such a person often has their views challenged by at least one person anyway: themselves.)
I know and have dealt with those, but have you dealt with those where that isn't the case? Where the espoused belief is not a belief, but a cover for more, an endless abyss, and where the offered arguments hardly register with the person enumerating them? For you it may register when you say something and someone else refutes it. But for some it doesn't; they just register, amusedly, that you actually spend time and energy on what they can produce without end and at zero cost to themselves.
> Before mass leaders seize the power to fit reality to their lies, their propaganda is marked by its extreme contempt for facts as such, for in their opinion fact depends entirely on the power of man who can fabricate it. The assertion that the Moscow subway is the only one in the world is a lie only so long as the Bolsheviks have not the power to destroy all the others.
-- Hannah Arendt
Another way to look at it would be the differentiation between an individual person speaking, and a person channeling a mob. It doesn't have to be a racist mob; it can also be a "politically correct" mob, you know?
I remember when a girl strolled into the Myspace forums and said "hi guys, I'm a fascist, let's discuss". I was intrigued, then a bit shocked by her views, but I had to respect the person for being honest about them and open for discussion. But IIRC most people were just assholes to her, she was an asshole back, and got banned shortly after, no idea why. But I remember thinking it sucked, that it was a very poor performance on behalf of "the" group. In that case, the "right-minded people" were kind of acting as a mob, and she was a person speaking as a person.
I'm not arguing for any government banning something here, and unless I'm mistaken, neither is Mozilla. But even as private individuals, we simply should pay more attention and not just lump everything together as "something someone else doesn't like" and all that. Mob psychology and politics are no joke; neither are alienation and lack of perspective, shortening attention spans, inability to form coherent sequential thoughts. Networks that datamine people and then influence them for maximum bite-sized engagement, that's no joke. People funneling themselves into "communities" where they play meme bingo, that's no joke.
Being downvoted and shadowbanned on HN for comments people can't refute, now that is a genuine joke, and oh look, my comment got flagged already. Because replying to it is not enough, one simply has to assume I haven't thought about what I said, and punish me for one's assumption. And of course, your reply is kind of the least charitable interpretation of my comment possible, as if I never argued with someone who had opinions they didn't like, without even attempting to understand what I was hinting at, and as such against the guidelines, but hey.
> As citizens, we must prevent wrongdoing because the world in which we all live, wrong-doer, wrong sufferer and spectator, is at stake.
Sounds silly, right? Who dat ho anyway, huh? Well, this is not an intellectual climate to seriously elaborate on serious things, so I'll have to just leave it at the suggestion to not judge icebergs by tips while preaching about letting others speak. Thanks for the demonstration of hypocrisy, bye. People so weak and dishonest I genuinely prefer as enemies rather than allies.
1) those few companies are "the internet" for major parts of the population (actually it almost falls flat already here)
2) to make it worse, those companies have a long history of censoring not only what we all despise but also things they didn't like (business-wise), things their staff didn't see the value in (art, historic photos from wars), etc.
There - now you haven't been downvoted without an explanation. (I really don't like anonymous downvotes and think we should all do better most of the time.)
On mastodon, my instance doesn't allow hatespeech and I ban instances that propagate it.
But these instances aren't censored and federate with lots of other places.
As long as people are willing to federate with an instance, they can share, the code is open source so they can use their platform as before. It's just that my part of the platform won't speak with theirs.
The point of freedom of speech and the marketplace of ideas isn't that all ideas thrive regardless of merit, but that the coercive power of government isn't being deployed to determine which ideas survive.
I think that line is a very popular hot take - that all these platforms want extremist content because it's good for clicks & ad impressions - but it's very clearly not the case. All these platforms are quite incentivized to prevent all these problems; it's just that these are hard problems to begin with, and the definition of extremist content/hate speech/etc. is a political minefield.
Where do you read that? I thought the hot take was that they want people to be in a perpetual state of pearl clutching but not actual extremist content.
Maybe your definition of extremism does not include hate speech, but mine does.
So the way you reconcile the two is not by preventing the speech, but ensuring that it can have consequences. It's why we have laws of libel for instance: governments don't censor the press directly, but if you knowingly lie about someone in print, they will have the right to recourse through the courts against you.
For this to work, though, there needs to be some way to identify someone to hold to account. That takes you into other tricky areas around common carrier vs publisher, and anonymity of users.
It's that balance that really makes this so tricky, and is the real question, I think. If there are no consequences, you get lots of more-or-less problematic speech as we've seen. If there are mechanisms to ensure consequences, if you're not careful you can end up exposing people you'd want to protect (eg posters in oppressive regimes) to consequences you would think unjust.
That balance probably means having consequences at the publisher level rather than at the individual one. Meatspace publishers have historically shown willing to shoulder these burdens: journalism organisations have sometimes gone to extreme lengths to protect sources and staff working in oppressive regimes.
1. You deter it "upstream" through society-wide education and well-being.
2. You coerce with punishment.
3. And when someone engages in it, only then do you restrict liberties that it makes sense to grant to well-behaving citizens. Only judges are enabled to apply those restrictions.
The way to do away with hate is to shine upon it with the light of open discussion, not hiding it by labeling it “hate speech” and hoping it goes away. Hate hidden grows bolder.
What I described enables someone to publish lies that actively harm a second person, but then the first person gets fined / jailed for it. Do you think this is reasonable? Or that the sanctity of free speech should protect that first person, and impose on third people the effort and responsibility of determining whether what was said about the second person is fact or lie?
To me this phrase, to “say something illegal”, is absurd. There is no magic word that you can say or write that would harm anyone else, and as such any law that would make any kind of speech illegal must therefore be unjust.
The canonical example against that is of course of yelling fire! in a crowded theater. However I do not see it as an example of forbidden or illegal speech, as the actual idea of a fire is not the issue here, but rather it is the ACT of yelling the word that is the problem. After all, a painting or a movie of someone doing exactly that should not be illegal. Or in other words it is not an issue of speech but rather of speaking.
Now, we don’t of course live in a perfect world. There are other acts of speech that can have dire consequences and should be deterred.
Probably the most obvious topical example is a false rape accusation. In a perfect world such claims would be outright ignored without any evidence, however as it doesn’t seem to be the case there should be some recourse for the accused. At the same time you need to balance this with the chilling effect this will have on the speech.
To this extent this is not a black & white issue, as nothing ever is. Nevertheless all of this deals with liability, not censorship, and as such is transparent and can be limited.
There is no case for censorship under any circumstance whatsoever.
So as to act vs speech to me this is the distinction. Murder is illegal, but we don’t censor it. Accusing someone of something or yelling is similarly an active thing, however the actual information should never be limited. There is certain conflation here of speech in abstract and the act of expression of speech, however I think there is a distinction and it is important.
There is also in my mind no overlap with “hate speech.” Yelling fire is very context specific, and an accusation directed at someone similarly deals with actual person or persons, this is why such exceptions are justified.
Most examples of “hate speech” are often general statements that do not deal with specific persons, but rather with ideas. As such there is no “act” of speech, only speech.
You are free, but if something happens because of your freedom, then you pay.
Does it work?
I think however, that ridiculing often achieves the opposite effect, making you the enemy in the eyes of those susceptible to the hate speech.
How do you define those people? My view is that hate speech is a badly defined concept because it is predicated on the assumption that people like this exist.
But I don't think we should accept that such people exist. People should have freedom, and that freedom includes the responsibility to act in a moral way. By saying that somebody is "susceptible to (hate) speech", you're denying their agency.
I recently saw a critique of satire that mentioned white supremacists tend to be fans of movies like Apt Pupil which, while criticizing Nazism, still portray it as powerful and potent. But The Producers - which literally portrays an attempt at an in-universe pro-Hitler play (albeit for in-universe subversive reasons) - and similar parody content isn't well received by them, because it refuses to take them seriously.
So yes, while ridicule might make more enemies, it can also serve to make those enemies less powerful.
You don't. You instead crank up the censorship and -- more importantly -- the social stigma against bigotry. Change does not happen naturally and people don't get better, but if they get enough backlash from family and friends, people will eventually pretend to accept whatever it is you're trying to promote by censorship. This pretense will turn into genuine acceptance in future generations because they will grow up with not being able to be bigoted without clear and univocal backlash.
You don't realise it, but you have just expressed an opinion equally extreme and dangerous as fascism.
The only sane way to run a society is through the free exchange of ideas, good and bad, and the unfettered right to discuss and criticise those ideas.
I remember when we used to crank up the social stigma against censorship. The plan you've outlined will work, but it can be used to promote bigotry as much as suppress it, and bigotry can be wrangled to political power. It's better not to create tools with such potential for abuse.
You either have the majority performing their civic duties, holding each other responsible or you don't.
These are not technological, rather civic, political, even educational issues, but if you only have a hammer...
Look at Stack Overflow. Most active users vote down, vote up, and flag inappropriate content.
Then people become comfortable expressing their views, knowing they will not be sidestepped by a loud and inappropriate populist. Compare this to unmoderated communities, which turn into shouting matches until the most fervent get permanent bans.
You can't create a community with a ban hammer only.
Now I know what IP addresses concepts are originating from.
> net neutrality
I can now retrieve ostensibly blocked resources via my peers instead.
> data exploitation
Use a DNS whitelist for this one.
But I can say that decentralization does in some small way constitute an attempt to solve the net neutrality issue. If built in the right way, a decentralized internet that sits on top of the "regular" internet can make it difficult to track (and charge different prices for) certain kinds of traffic individually.
Yes, you'll probably just get charged the highest possible rate (if the ISP can't figure out if they can give you subsidies or not), but the shift to difficult-by-default tracking is a good step in the right direction IMO.
Such decentralized internet would be ripe for exploitation, only it would be even harder to tell apart those who are responsible.
Instead of all that, go contact your representatives and your ISP.
You either perform your civic duties or you don't. Despite what the 21st century made us believe, there is no app for that.
In a decentralized protocol, other name servers that had nothing to do with that domain could give you the records for the downed nodes. But how do we know it's authentic, or up to date? And now we go down the decentralization rabbit hole...
One might argue this is the same as the current web where we have proxies, load balancers, etc. but the difference is the intermediate nodes are users.
One could also argue this is the same as Tor, but the difference is the technology is more at an application level than at a network level, and additionally every user is effectively equivalent to someone running an exit node.
That we've had DNS names blacklisted from being served (no matter how distasteful the services provided by those DNS names may be) shows that there is insufficient decentralization of authority over DNS.
Kinda. Not quite.
ICANN decides who controls ".foo". That organisation decides who controls "bar.foo", and so on.
You can't just make up a domain name and claim it; someone else has to permit you to have it.
So perhaps reading the DNS is decentralised, but writing it is centralised.
It reminds me of a checklist that used to circulate on Slashdot in response to any suggestion to curb email spam. One of the items was: “you’re proposing a technical solution to a social problem”.
Other “not centralised” systems have met similar fates due to network effects; Usenet springs to mind, and Git to a lesser extent.
I think you mean this one, but it does not contain that phrase: https://craphound.com/spamsolutions.txt
A lot of apps and services don't really need to be centralized, eg. messaging, file sharing, online games...
But the problem is that writing peer-to-peer networking code is difficult, so people just set up a central server.
If there were a ready-to-use library that people could just use, a lot of devs would probably consider it. It would make scaling a lot cheaper...
(unfortunately everyone just talks about blockchains nowadays, and isn't interested in solving old problems)
Things don't seem so wild anymore :D
The only remaining equivalent now is probably Neocities? Would love to hear of others.
With slow upload speed between peers, you have to change your networking model to have a high level of fault tolerance.
There are many possible ideas for a decentralised internet, but should one build the software first, improve it, and only define a protocol later?
The problem is that you have to change almost everything if you want it to work well.
I've wished this about web browsers as much as about web sites, but Mozilla seems to be moving in the opposite direction there. I feel that this is not, in fact, a goal of the project at all, but merely an incidental feature of its current architecture.
Sounds a bit sad though.