Decentralisation: the next big step for the world wide web (theguardian.com)
532 points by carsonfarmer 13 days ago | 208 comments





It strikes me that, if we are ever to expand the Internet into space on a universal scale (a la Vint Cerf and "delay tolerant networking", as an example), the inherent physics problems involved with distances and connectivity in space would probably make decentralization an absolute requirement. I mean, it seems it would not be uncommon for there to be a "local net" and a wider peer-to-peer or mesh "universal net".

We're obviously a long way off from colonizing space and needing the Internet to spread, but we still have the physics problems here on Earth.

I'm not convinced that centralization in its current iteration (cloud operators controlling huge infrastructures) is the best long-run approach. As we saw with the recent Azure outage in South Central US, even huge infrastructure can have problems.

Secure decentralization has seemed like a panacea for a long time - for all things that resemble a public utility. Even things like the power grid.


> I'm not convinced that centralization in its current iteration (cloud operators controlling huge infrastructures) is the best long run

You might be interested in checking out the InterPlanetary File System [1], which attempts to tackle this among other issues.

I can't find it now but I remember the doc mentioning the need for a future space network to be decentralized, so there is that too.

[1] https://ipfs.io/#why


From a conference I attended years ago, the key motivator is that request/response networking is pretty much already broken when we get to the moon; once Mars comes into the picture, it'll be even more so. When each TCP packet requires an ACK, and an ACK takes minutes to arrive... things break down a bit.

So, the idea behind IPFS and others (SSB comes to mind, except, yacht-themed) is that it's largely a collection of offline networks, and when the planets align -- quite literally -- those networks will exchange all their new blocks.

It's a neat concept.


Each TCP packet does not require an ACK, at least not in a "receive TCP packet, send out ACK" sense. ACKs are cumulative (and, with SACK, cover ranges of recently received segments), paced by your transmission window, which TCP stacks already tune based on your RTT. I can't see how you'd get rid of that if you're looking for a reliable real-time transport medium.

Additionally you can layer in forward error-correction above TCP to reduce packet loss due to the physical medium.


None of that really matters if Mars is on the other side of the sun from Earth. You'd need relay satellites to direct the signal around the sun, and even at the speed of light that's going to take tens of minutes, one way.

As the parents suggest, even at Earth-Moon distances, we need to completely rethink things.


So who cares? Just adapt the TCP timeouts (resend timeouts) to 10 minutes.

We'd end up with the same TCP-based system, except it would behave somewhat differently in practice, since those time scales are not invariant for us humans.
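As a rough illustration of "just adapt the timeouts": the socket options below are Linux-specific and the values purely made up, and even with them the kernel's handshake and retransmission backoff still assume RTTs of seconds, not tens of minutes, which is the point being argued above.

    import socket

    # Hypothetical sketch: stretch one TCP connection's patience toward
    # interplanetary scales. Option availability is platform-dependent
    # (these are Linux-only); real deep-space links use delay-tolerant
    # protocols such as the Bundle Protocol rather than TCP.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Give up on unacknowledged data only after ~20 minutes (milliseconds).
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 20 * 60 * 1000)

    # Space keepalive probes ten minutes apart instead of the usual ~75 seconds.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 10 * 60)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10 * 60)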


My guess is we'll just have local datacenters on Mars for the big services (Google, Netflix, etc). And then more websites will use services like Cloudflare so they can get their website cached on Mars. AWS will eventually have a Mars datacenter. No need for IPFS.

This papers over the whole "relying on large cloud providers" issue in the first place. A decentralized system will ensure that websites won't need to rely on large, centralized powers beyond core infrastructure providers (which we have to fight to ensure are neutral parties, a la Net Neutrality) to avoid concentration of power in a select few.

While I agree with your assessment, it reminds me of the "flying horse carriage" view of the future. I wouldn't be surprised if multiplanetary hosting changed things more generally.

I just wanted to play counter strike with the mars people

You will have to attend community tournaments and compete locally for the chance to play against a Mars team over a high-bandwidth satellite array, possibly on a lunar base, possibly only during seasons of opposition between Earth and Mars.

Even then, you will be playing on a specially-modified version of the game that disables server-side anticheat systems, instead relying on human referees.


I'm not a networking expert, but IMO with extreme delays you would not want a real-time transport medium at all. The higher-level protocols would be built on top of a non-real-time substrate. You would want so much forward error-correction that you could remove ACKs completely (perhaps a higher-level protocol could still be used to request retransmission in the rare case of failure, e.g. one could request a web page be resent via HTTP, but the hypothetical TCP replacement under that would have no concept of ACK).
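To make the FEC-instead-of-ACK idea concrete, here's a toy sketch with a single XOR parity block (real links would use far stronger codes such as Reed-Solomon or LDPC, plus sequence numbers and framing; this only illustrates the principle that a receiver can repair loss without asking anything back):

    def add_parity(blocks):
        """Append one XOR parity block to a list of equal-length data blocks."""
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte
        return blocks + [bytes(parity)]

    def recover(received):
        """Rebuild at most one missing block (marked None), then drop the parity."""
        missing = [i for i, block in enumerate(received) if block is None]
        assert len(missing) <= 1, "this toy code only survives a single loss"
        if missing:
            present = [block for block in received if block is not None]
            rebuilt = bytearray(len(present[0]))
            for block in present:
                for i, byte in enumerate(block):
                    rebuilt[i] ^= byte
            received[missing[0]] = bytes(rebuilt)
        return received[:-1]

    blocks = add_parity([b"mars", b"says", b"hiya"])  # three data blocks + parity
    blocks[1] = None                                  # pretend one was lost in transit
    assert recover(blocks) == [b"mars", b"says", b"hiya"]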

It's called "delay-tolerant networking", and we've been doing that for a long time - think Usenet.

> SSB comes to mind, except, yacht-themed

I think this is a reference to Secure Scuttlebutt: http://scuttlebutt.nz/


The "Fire upon the Deep" describes an interstellar communication system which is very email-nntp-fidonet like. I'm afraid it's more realistic that we could imagine.

It was written at the peak of those systems (the Eternal September would happen a year after the book was published), so there's no surprise there.

But I think it's not particularly accurate, in that, while latency would be extreme, that doesn't necessarily translate to low bandwidth - and bandwidth constraints shaped Usenet and especially FidoNet as much as latency did.

I think a more likely primary mode of operation would be WWW-like, but only your local part is actually real-time; everything else is synced in bulk as and when possible, with some creative approaches to update conflicts for writable resources.


So basically the offline-first approach

On a related note, I'm wondering how distributed systems that rely on atomic clocks (e.g. Google Spanner) would work in the space era, given that relativity says that there's no such thing as a global clock.

They can still work, with a few changes and worse performance.

The clock is not used to say that the time is exactly the same on all nodes; it is used to guarantee that if two events have timestamps whose difference is larger than some threshold, they can be ordered reliably. You don't need an atomic clock to do it - for instance, CockroachDB only requires NTP - but of course, the smaller the error margin, the faster the system is.

That being said, given that speed will be limited by the distance traveled by information and the speed of light, I suppose those systems won't have much edge over purely causality-based ones. In other words, CRDTs will rule Space :)
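A minimal sketch of the ordering rule described above (numbers and names are illustrative; Spanner's TrueTime exposes an uncertainty interval in roughly this spirit):

    from collections import namedtuple

    # An event carries a timestamp plus a bound on how wrong the local clock may be.
    Event = namedtuple("Event", ["timestamp", "max_clock_error"])

    def definitely_before(a, b):
        """True only if a's uncertainty interval ends before b's begins."""
        return a.timestamp + a.max_clock_error < b.timestamp - b.max_clock_error

    # With atomic clocks the error bound is milliseconds, so most pairs of events
    # are orderable; with loosely synced clocks over interplanetary links the
    # bound is huge, so almost nothing is orderable by timestamp alone.
    earth = Event(timestamp=1000.000, max_clock_error=0.005)
    mars = Event(timestamp=1000.002, max_clock_error=0.005)
    print(definitely_before(earth, mars))  # False: the intervals overlap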


Not an expert by any means, but I would imagine you simply prefix your time with a specific "large-scale time zone" that you are moving in, e.g., it could be Earth, another planet or your spacecraft. Wouldn't solve it completely, but seems to be a pragmatic solution that could work alright for most cases.

However, a total ordering of events seems plausible only from the relative perspective of an observer, and we would need to figure out how things like transfer duration affect each observer's understanding of ordering.


Not a physicist but isn't it the case that any two observers can still compute at what time the other person perceived any given event, if they know each other's history of travel and the history of travel of the event?

So you would just have to agree that one observer's clock is the "master clock", and then everyone translates their local clock time into the corresponding master clock time (and all timestamps are written with respect to the 'time zone' of the master clock).


I suppose those continue working fine in very local systems (contained in a ball of a few light-seconds radius) whose components move at speeds where relativistic effects can be discarded. Drop those constraints and you also need to drop even system-local globality because of relativity.

You _could_ introduce the One True Lamport Clock, and as long as you're in its light cone you can get global synchronization, but that comes at the cost of having to learn a lot about patience.
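For reference, the Lamport clock itself is tiny; a minimal sketch (it orders events by causality alone, so it doesn't care how many light-minutes away the peer is, it just makes you wait):

    class LamportClock:
        def __init__(self):
            self.time = 0

        def tick(self):
            """Advance on every local event."""
            self.time += 1
            return self.time

        def send(self):
            """Timestamp to attach to an outgoing message."""
            return self.tick()

        def receive(self, remote_time):
            """Merge the timestamp carried by an incoming message."""
            self.time = max(self.time, remote_time) + 1
            return self.time

    earth, mars = LamportClock(), LamportClock()
    t = earth.send()      # Earth emits an event
    mars.receive(t)       # ...which arrives on Mars many minutes later
    assert mars.time > t  # the causal order survives the delay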

That would only work up until the point at which we start to travel at portions of c.

Couldn't you account for relativistic effects and adjust accordingly?


Quantum entanglement seems poised to provide real-time interstellar communication.

> Quantum entanglement seems poised to provide real-time interstellar communication.

That makes for good science fiction, because Quantum Mechanics is so poorly understood by most people, but it’s in no way possible or implied by the theory. Any entangled channel of communication would appear to be random noise without a Classical channel of communication, which effectively limits entanglement to light speed.

The first answer gives a nice explanation in depth. https://physics.stackexchange.com/questions/203831/ftl-commu...

The key point is this:

> Alice therefore still measures two overlapping bell curves, overall! Where are the interference patterns?! That is very simple: when Bob and Alice compare their measurements in the first case, Bob's 0-measurement can be used to "filter" Alice's patterns...

That comparison is what requires the Classical channel, and we’re back to light speed. If you try to use a Quantum channel to compare you just have two things to compare and a lot of noise.


I think the best way forward would be satellite-based Internet where people can tune their dish antennas to the sky and get Internet. That would be interruption-free and decentralised. I have very low faith in Bitcoin mining operations being repurposed for decentralised Internet; though in theory they have the compute nodes, nobody is free from the Internet service providers. A decentralised network built on the existing network can never be totally free or decentralised.

Satellites are decentralized?


This seems relevant to my industry

If your full name is Buckminster Fuller, I can see how you might think so! ;D

Beaker Browser (mentioned as one of many possibilities in this article) is the real deal. If it doesn't get you fired up about the possibilities, then - well let me explain I guess:

* The interface is dead simple - share this folder, done.

* It is a read-write browser. Netscape (and other browsers) used to be this way - they had some limited HTML creation tools. Beaker brings this back in the form of making an "editable copy" of a website. It's a choice in the address bar.

* Making an "editable copy" doesn't have to mean you're now editing raw HTML. An editable copy can direct how it is edited through JS. (See the recently released "dead-lite" for an example of this.)

All these attempts are exciting but I'm actually starting to use Beaker because it's so useful even without adoption.


I checked out Beaker Browser, and apparently it's based on the Dat project [1], which seems to be very similar to IPFS. Then apparently it follows that, just like IPFS, you can't throw random things onto the network and expect them to stick; you need to pay someone for hosting and bandwidth (that someone could be yourself) to have it pinned, and in order to have it available worldwide at all times you still need to pay for a CDN of sorts — the Linux box in your closet, or worse, your laptop that sometimes goes offline just won't cut it. Eventually it's just another protocol to copy stuff around, where stuff originates from various servers (your browser basically embeds a server, capable of serving stuff), with the possible benefit that popular stuff may be p2p'ed (but if you're a business you probably can't rely on that anyway). I fail to see how it's radically different.

(Also, I'm not even sure how you could p2p private user data, unless you expect everyone to carry around one or more yubikeys, or implant chips into fingers or something; plus all devices need to buy into that. But I haven't given that much thought.)

[1] https://datproject.org/


Some things in p2p hypermedia (dat) that aren't possible with http/s:

* You can generate domains freely using pubkeys and without coordinating with other devices, therefore enabling the browser to generate new sites at-will and to fork existing sites (see the sketch below)

* Integrity checks & signatures within the protocol which enables multiple untrusted peers to 'host'. This also means the protocol scales horizontally to meet demand.

* Versioned URLs

* Protocol methods to read site listings and the revision history

* Offline writes which sync to the network asynchronously

* Standard Web APIs for reading, writing, and watching the files on Websites from the browser. This means the dat network can be used as the primary data store for apps. It's a networked data store, so you can build multi-user applications with dat and client-side JS alone.

I'm probably forgetting some. You do still need devices which provide uptime, but they can be abstracted into the background and effectively act as dumb/thin CDNs. And, if you don't want to do that, it is still possible to use your home device as the primary host, which isn't very easy with HTTP.
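To illustrate the first two bullets, a hedged sketch: dat does use Ed25519 keypairs and hex-encoded public keys as addresses, but the encoding and signing below are simplified for illustration rather than wire-compatible, and it assumes the Python 'cryptography' package.

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Creating a "site" is just creating a keypair: no registrar, no server,
    # no coordination with any other device.
    private_key = Ed25519PrivateKey.generate()
    public_bytes = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    site_url = "dat://" + public_bytes.hex()

    # Publishing an update means signing it. Any untrusted peer can re-host the
    # bytes; readers verify against the pubkey already embedded in the URL.
    update = b"<html>my new homepage</html>"
    signature = private_key.sign(update)
    private_key.public_key().verify(signature, update)  # raises if tampered with
    print(site_url)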


This is a very interesting topic, thanks for working on it and answering questions here.

The first concern I had/have is about security. If everybody runs their own server, isn't this a security nightmare waiting to happen?

I understand from the presentation that these websites won't run PHP or other server-side scripts, which at least takes some concern away.

Tara also showed how easy it was to copy a website; while pretty cool, that is also a nightmare scenario for most companies. If your competitors can clone your websites and pretend to be you, how do users know whose data they are looking at?


Not OP, but I believe when it comes to website copies, you can identify which one you are actually using by the URL. So if someone makes a copy of dat://mylocalbank.com, their URL would be just the hash (e.g. dat://c6740...)

Thanks for the list.

> You can generate domains freely using pubkeys and without coordinating with other devices, therefore enabling the browser to generate new sites at-will and to fork existing sites.

Not entirely sure what you mean,

- We can generate HTTP sites at will (all you need is an IP address);

- We have existing protocols for mirroring sites (not implemented universally, but nor is dat://);

- When you talk about pubkeys with coordination, there are obvious problems like the last paragraph of my original comment, right? Again, I'm probably misinterpreting what you're saying.

> Integrity checks & signatures within the protocol which enables multiple untrusted peers to 'host'.

Basically subresource integrity? Granted, with this protocol you can in theory retrieve objects from any peers (provided that they actually want to cache/pin your objects), not just the ones behind a revproxy/load balancer, so that's a potential win from decentralization.

> Versioned URLs

We can have that over HTTP, but usually it's not economical to host old stuff. In this case, someone still needs to pin the old stuff, no? I can see that client side snapshots could be more standardized, but we do have WARC with HTTP.

(EDIT: on second thought, it's much easier to implement on the "server"-side too.)

> Protocol methods to read site listings and the revision history

> Standard Web APIs for reading, writing, and watching the files on Websites from the browser.

You can build that on top of HTTP too.

My takeaway is it's simply a higher-level protocol than HTTP, so it's unfair to compare it to HTTP. Are there potential benefits from being decentralized? Yes. But most of what you listed comes from being designed as a higher-level protocol.


> We can generate HTTP sites at will (all you need is an IP address);

That's not really so easy from a consumer device with a dynamic IP.

> - When you talk about pubkeys with coordination, there are obvious problems like the last paragraph of my original comment, right? Again, I'm probably misinterpreting what you're saying.

You do need to manage keys and pair devices, yeah.

> My takeaway is it's simply a higher-level protocol than HTTP, so it's unfair to compare it to HTTP. Are there potential benefits from being decentralized? Yes. But most of what you listed comes from being designed as a higher-level protocol.

The broader concept of Beaker is to improve on the Web, and we do that by making it possible to author sites without having to set up or manage a server.

Decentralization is a second-order effect. Any apps that use dat for the user profile & data will be storing & publishing that data via the user's device. Those apps will also be able to move some/all of their business logic client-side, because they're just using Web APIs to read & write. Add to that the forkability of sites, and you can see why this can be decentralizing: it moves more of the Web app stack into the client-side where hopefully it'll be easier for users to control.


> Decentralization is a second-order effect.

I see, I was looking at it backwards.


It's not a higher level of HTTP; it's more like "let's use torrents instead of HTTP because they are distributed and scale better". But the web is more than HTTP: it's DNS and email and logins and all of that stuff. It all scales poorly and can all be improved with distribution, so let's not just replace HTTP with torrents, let's replace it all with distributed stuff.

As an example you talk about needing a special device to manage keys, which presents problems. It centralises your identity to your yubikey (instead of email): lose your yubikey and you lose your identity; if it gets wet, crushed or corrupted, you're fucked. Instead we encrypt the key and distribute it across the net; if a copy is deleted or corrupted there are other copies, and it's available to you anywhere, anytime. Currently your identity is centralised to your email: if your email goes down you lose your identity, whereas if it's distributed and a copy goes down you just use it like normal.

Distribution solves pretty much all the problems centralisation creates; it's just really complicated, so we generally don't bother.
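A hedged sketch of the "encrypt the key and distribute it across the net" idea (the library and parameters are assumptions for illustration; note it doesn't remove the single secret, it just shrinks it down to a passphrase, which is the objection raised in the reply below):

    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.hashes import SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    identity_key = os.urandom(32)                 # the secret we want to survive device loss
    salt = os.urandom(16)
    passphrase = b"correct horse battery staple"  # the remaining single point of failure

    # Wrap the identity key with a passphrase-derived key...
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    wrapping_key = base64.urlsafe_b64encode(kdf.derive(passphrase))
    ciphertext = Fernet(wrapping_key).encrypt(identity_key)

    # ...then replicate 'ciphertext' (plus the salt) to any number of untrusted
    # peers. Losing some copies is fine, and the peers learn nothing without the
    # passphrase.
    recovered = Fernet(wrapping_key).decrypt(ciphertext)
    assert recovered == identity_key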


> It's not a higher level of HTTP

Of course it's not a higher level of HTTP, I never said that. I said higher level than HTTP. HTTP is just a stateless request/response protocol, so of course dat is higher level, and as I said, much of the benefits described can be built on top of HTTP (and have been, just not standardized or widespread).

> it all scales poorly, it can all be improved with distribution

Pretty sure it all does NOT scale poorly, as has been proven over the past thirty years. What's being solved here is not a problem of scale. "It can all be improved with distribution" is very hand wavy and doesn't really say anything. DNS and many other protocols are already distributed, btw.

> Instead we encrypt the key and distribute it across the net, ..., if it's distributed and a copy goes down you just use it like normal.

There are two kinds of crypto, symmetric key and public key. Symmetric key is easily out the window. For public-key crypto, you always need a secret key, and that has to be prior knowledge, not something negotiated on the fly; of course prior knowledge has to be kept somewhere and presumably synced if you need it elsewhere, and it definitely can be lost. "Distributed secret keys solving everything" sounds like nonsense to me; there's always a secret key that is the starting point (call it the master key, if that makes more sense) and can't be distributed.


The fundamental difference is source independence. It doesn't matter where the data is, as long as someone has it pinned, you'll be able to access it.

That indeed is a fundamental difference. But on second thought I got confused. Source independence and content addressability are nice and all, but we don't build static websites that always have the same hashes; "Ubuntu Server 18.04.1 ISO" could be ipfs://<static_hash>, but even "latest Ubuntu Server 18.04.x ISO" couldn't be that. You still need to query the origin server (or client, whatever you call it), the central authority, to get those addresses. So, frequently changing websites/webapps don't benefit from this; they may even be penalized by the overhead. Only aggressively cacheable objects could benefit, but the vast majority of those probably won't be popular enough to be cached/pinned by peers anyway, so you still end up getting whatever you need from the origin server (or paid CDNs).
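A tiny sketch of that distinction, with SHA-256 standing in for a real IPFS CID and made-up names:

    import hashlib

    def content_address(data):
        # Simplified: real IPFS uses multihash-encoded CIDs, not bare SHA-256 hex.
        return "ipfs://" + hashlib.sha256(data).hexdigest()

    iso_18041 = b"...bytes of ubuntu-18.04.1-live-server-amd64.iso..."
    iso_18042 = b"...bytes of ubuntu-18.04.2-live-server-amd64.iso..."

    # Immutable releases get stable addresses that any peer can serve.
    releases = {
        "18.04.1": content_address(iso_18041),
        "18.04.2": content_address(iso_18042),
    }

    # "latest" is the part that cannot be content-addressed: it's a mutable
    # pointer that only the publisher can legitimately move, so you still ask
    # them (via IPNS/DNSLink, a signed feed, or a plain web server) which hash
    # is current.
    latest = releases["18.04.2"]
    print(latest)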

Btw, I skimmed through Beaker docs, and it seems they resolve names through DNS (what else can they do) and even use HTTP for discovery.


I'd say that most websites are static enough to be pinned. With the others, the main problem is content determinism. If the same website renders differently on different platforms, it will have different hashes. The only reliable way to store them is in "unrendered" form, which prevents the inclusion of external resources, something that most single-page interactive websites rely on.

Naming is a consensus problem. The key here is having the freedom of choice between trusted providers. The central source could be provided by a single cryptographic key, by many keys, m-of-n schemes or other arbitrary contracts, even in P2P form.

I'm really interested in what kind of user interfaces the Beaker people come up with when it comes to their "editable cloned websites" (forks).


Most websites: maybe.

Most popular websites: unlikely. Even HN isn't static enough to be pinned, considering there is a new comment about once a minute or so.


What is pinned would be the content that doesn't change. It may mean the site architecture would need to be changed to accommodate one of these decentralized models.

Just want to point out that Beaker uses dat, not ipfs, so its sites are pubkey-addressed and therefore mutable.

You are right, a p2p web won't solve the barrier to entry, but web hosting costs $20 a year, so it's not much of a barrier.

The real cost is scale: $20 a year will cover a few thousand users, but if you want Google's scale it will cost you in bandwidth and complexity. P2P, like torrents, radically reduces the cost of bandwidth by distributing it, but more importantly it reduces complexity by standardising it.

Once the complexity is standardised, budget web hosting can provide Google scale dirt cheap, and there are millions of budget hosting companies - too many to shut them all down - giving you censorship resistance.


Tara gave a good intro talk at jsconfeu https://youtu.be/rJ_WvfF3FN8

I'll mention Amaya here.

https://www.w3.org/Amaya/

The original vision for the web was that editing/creation had the same status as viewing/consumption, and that websites were writable as well as readable. This is what Amaya implemented. It never gained wide adoption, but it served as a reference implementation of the W3C's vision of the web. (In my experience Amaya is not particularly usable because it regularly crashes, but that could be fixed.)

Is Beaker similar to Amaya extended to use transport layers beyond http, such as ipfs?


I'd hazard to say that the concept of "wiki" is what successfully implemented that early vision of editable WWW.

Wiki markup is different from HTML markup, but it represents many of the same (early) text-formatting and resource-linking concepts, while limiting the excessively powerful features of arbitrary layout, scripting, etc.


Have a look at the federated wiki [0], it allows someone to easily fork someone else's page, edit it, and host it. History is tracked at the paragraph level.

[0] http://fedwiki.org/view/welcome-visitors


I can't really answer this yet, because I've never heard of Amaya and I don't want to answer based on a cursory glance - but I really appreciate this link, as I love bits of Web history and I am going to do some homework on this. So, thankyou, femto!

Am I correct in understanding that no consideration is being made for server-executed code?

How is access control implemented?

It seems like this basically only applies to web content you want to give everyone access to and can have 100% of application logic run client-side.

That's a pretty narrow cross-section of the existing web...


So - there is no server-executed code - it all runs in the browser and the folder can only access itself anyway, which can't happen unless you have the private key.

Access control in Beaker is through that private key - you need it in order to edit the 'dat' (name for a synced folder). So, no, there aren't a lot of complex permissions available - but you can also separate an app into several dats and use a master one to manage the permissions of those. Not terribly complex, but it's actually surprising how much you can do. (It's tough to wrap your head around not having a server - but it's actually true.)

But help me out - I think a lot of the Web falls into this category:

* User logs in to edit their data (has private key to their dat).

* User shares their data (blog, photo feed, whatever) with others (who don't have the key).

* Those others merge all incoming feeds into a single master feed (sketched below).

You could replicate YouTube, Facebook, Twitter this way - usually there are not complex permissions in these apps, are there? (Not that you'd want to replicate them...)
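A rough sketch of that merge step (the field names are made up): each dat you follow is just a folder of posts, and the timeline is computed on the reader's machine by merging them; no server aggregates anything.

    import heapq

    alice_feed = [
        {"author": "alice", "ts": 1536800000, "text": "hello from my dat"},
        {"author": "alice", "ts": 1536803600, "text": "second post"},
    ]
    bob_feed = [
        {"author": "bob", "ts": 1536801800, "text": "bob's only post"},
    ]

    def merged_timeline(*feeds):
        """Merge per-author feeds (each already sorted by timestamp) into one."""
        return list(heapq.merge(*feeds, key=lambda post: post["ts"]))

    for post in merged_timeline(alice_feed, bob_feed):
        print(post["ts"], post["author"], post["text"])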


Umm, having everyone’s data as a flat directory as opposed to an aggregated database sounds terribly inefficient... You need to somehow build a distributed, decentralized database on top of that flat structure, right? Otherwise your Twitter is just a microblog publishing tool plus a direct crawl RSS reader...

Maybe Twitter is too specialized an example. What about any kind of search? You do need an index, and someone still has to own that index, and “donate” computing power to update that index. You own your self-hosted data, like many of us already do, but there will still be gatekeepers, e.g. Google for our current web.

EDIT: I realized that with a clever enough architecture and probably much more computing power than necessary in a trusted environment, no one needs to own the index. But it seems way more advanced than this protocol. (I’m completely new to this stuff so please excuse my naive skepticism.)


No, no - I understand on several fronts: first, there is just so much technology these days, it's tough to find anything that isn't just a fleeting thing; also, you're absolutely right that you can't just solve everything with a distributed filesystem.

I also am not sure what yourself (or newnewpdro) are looking for in the web or what appeals to you - for me, Google simply doesn't work for me - sure for technical issues it does, but it is basically Stack Overflow search in that department. If I'm looking for personal blogs, I can't just type "personal blog" into Google and find anything worthwhile - it's all clickbait of a fashion. The best way I've found of finding blogs is either to look through Pinboard tags or to click around on other blogs until I eventually get somewhere. It's horribly inefficient - but it's rewarding when I get there. I'm making a personally-edited blog directory to try to aid discovery - and yeah I actually think there's a lot we can do if we all did more grassroots search and directories. Anyway, that's my perspective - wondering what you're looking for in this thread. Have enjoyed your other questions above (below?)


> Access control in Beaker is through that private key - you need it in order to edit the 'dat' (name for a synced folder).

You're referring to write access, which is a small subset of access control.

How do you restrict read access to a group of specific people? Encrypt the data and distribute keys to the privileged parties? How does revocation work?


Can you explain what you can use beaker browser for, even without adoption?

Yeah, so in my case, I wrote a blogging thing that runs in Beaker. I can run it from any of my machines and it has no server. This is great because it is insanely portable - I can set up the software on a new machine by just going to the URL. It is just "software as a service" but there is no server. It's somewhat similar to TiddlyWiki but Beaker adds automatic synchronization. (I still manage the JS code itself in git, but the blog posts are managed by Beaker.) Thank you for the question, styfle.

Look up Tara Vancil's talk on "A Web Without Servers" if you need a crystal clear explanation - I don't know if I'm explaining it adequately. And my blog is at kickscondor.com if you're curious why I had to write my own blog warez. Also, there is a resurgence in blogging happening right now with the shakedown of social media. It's great.


For the blogging use case, I have a few questions:

1. A (current?) limitation I've noticed with Beaker is that you can only edit a site implementation (for a specific address) on a single machine. What would you do if you had multiple computers/locations that wanted to make updates (JS or user content)?

2. What about errors? Don't they persist in the address's history? What if there's something undesirable that got added by accident?

3. What about mobile? How could someone visit/browse on mobile without a non-distributed proxy HTTP address?


Yeah, cool - thank you for the questions, kingnight.

1. So this answer is a bit convoluted because I am still learning, wish I could keep it short. So my setup might be a bit 'naughty' because I'm currently saving the 'key' in my JS in a separate dat. Right now I have one dat that acts as the 'admin' and one that is the actual blog - but I am going to move to hyperdb (the new solution for multi-writer support). There are actually a couple of libraries cropping up for doing this sort of thing and I'm not up on all of them. So this is more of a 'need to make up my mind' thing than a capability thing with Beaker. There are a TON of libraries and a TON of possibilities - (there's an 'awesome dat' page that just goes on and on and on...) - but I am still researching a better way (and, who knows, maybe my way is fine). I want something that could be in place ten years from now - because I do think JS and HTML will be.

2. No, files can get replaced. Not sure if that is your question. Yes, the undesirable content will persist in the history, but you can overwrite it. I can prevent bad content, though, by checking it in my JS code and allowing a preview first.

3. Yeah, this is a problem - there is a Bunsen Browser in development, but I haven't tried it. I am honestly okay with everyone browsing through HTTPS, though - I like Beaker for the admin tool. Again, I have browser-based blog software without needing to run a server anywhere at all.


Thanks much for the replies. I’m very interested in this and am wrapping my head around how it works and these differences so it’s great to hear perspectives from people using it.

Also your website is amazing. Is that the blog you were referring to? Is there a write up or overview of how it’s all put together. Love it

No, I've only written a piece on how I came to decide upon its design. https://www.kickscondor.com/ticker-tape-parade

I'm not sure how I would talk about the CSS and JavaScript - which was the most work. I will think about 'if'/'how' I can describe that. I really appreciate the encouragement. Please send me a link to your blog, if you have one. I am collecting links to personal blogs. See you around, kingnight.


If you visit kickscondor.com be sure to turn down volume first. You may find out as I did that if you visit at night its bleeps and bloops can wake up a partner :O

Thankyou for saying something tomcam - I've turned down the volume on the sample - please apologize to your person in bed. I might also consider closing down in the evenings.

I love your blog @kickscondor. The article you have about using Twine with the kids and the way they used it for building linked stories reminded me how Twine is such a natural and intuitive tool for authoring linked chunks of content, much more so than typing in HREFs. If you were able to build a visual map and bridge to someone else's paragraph... that would make the distributed system easier to grasp.

Hey thankyou! This is very encouraging. It's true - kids really love Twine and it is one of the very few tools that comes automatically to them. It is better than code.org at what code.org is 'attempting' to do.

Really cool point about extending Twine! That had never occurred to me. Amazing.


Hi, I've been playing with making something similar myself -- although using IPFS rather than Dat. I notice you've married up Webmentions and Dat; how have you done that? The only real solution I've come up with is to run a bunch of regular services at an HTTP endpoint (for Webmentions, ActivityPub, Webfinger etc.) that then generate new builds of the static site, which gets added to IPFS.

Oh believe me - that's exactly what I'm doing :D Webmentions are very much made for HTTP - since they require an HTTP POST. A variation would have to be made for Dat.

(As an aside, I originally didn't like using Webmentions on a static site - I had planned on making a cron to periodically check for Webmentions and republish. But now I really appreciate that it goes hand-in-hand with moderating comments. I look over the incoming Webmentions, nuke any spam, and republish. No bad feelings about comments that sat in the queue for a day - they are still out on the web at their original URL.)
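For anyone curious, a minimal sketch of that kind of HTTP-side Webmention receiver (Flask here is an assumption for brevity, not necessarily what either site runs; the Webmention spec itself only requires accepting a form-encoded POST with 'source' and 'target'):

    import json
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/webmention", methods=["POST"])
    def receive_webmention():
        source = request.form.get("source")
        target = request.form.get("target")
        if not source or not target:
            return "source and target are required", 400
        # Queue the mention for later moderation; a separate manual step verifies
        # that the source really links to the target, nukes spam, and republishes
        # the static site.
        with open("webmention-queue.jsonl", "a") as f:
            f.write(json.dumps({"source": source, "target": target}) + "\n")
        return "", 202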


That's a great implementation. Simple and intuitive. Far better than asking people to set up an IPFS node or something.

When we think about decentralization, we sometimes think about uncontrolled, extreme decentralization. That's the ideal: no single point of failure, with everyone hosting their data where they want, and deciding who accesses what about them.

Practically, as pointed out in the article, there are laws to comply with, and those are IMHO the biggest barrier to decentralization.

The fair middle ground between extreme centralization à la Facebook/Twitter and total network anarchy is something based on federation, like emails and Mastodon. With federations, there are several providers for the same end-user application, with native data exchange and interoperability. The idea is to give the power to anyone with hosting capabilities to compete with the Giants, even if only a few domains will actually survive (like Gmail, Hotmail, etc because of network effects and funds, probably).

What we need is a framework, or a backbone, that allows people to easily create new federated-native apps ("dapps") without thinking about consensus issues, protocols versioning, and with native laws compliance.


So, you're essentially advocating rewinding the Internet to pre-Facebook times and continuing from there? I'm all for that. Damn, every time I stumble upon the Google Talk page, with its explanation of how participating in an open decentralised world is good for everyone involved, I feel tremendously sad about how things worked out.

Unfortunately, the world decided to go the centralised way. At some point I had to rework my outstanding papers, because any mention of peer-to-peer or even decentralisation meant immediate rejection. Internet service providers became greedier, so if you don't build your own global backbone to have some leverage, you need to pay someone who does, or you're hosed. Even the laws in place start to strongly reflect an expectation of an overpowered centralised platform beneath any communication.

Then, finally, what we ultimately need is to figure out the money flow. People want polished products and that costs money. The centralised platforms we have today have succeeded because they figured out funding. Achieving that in a decentralised world is the main problem we should be looking at. I'm afraid "just slap blockchain on it" is a highly detrimental approach, but I haven't seen anything more serious (not that I looked seriously).

Disclaimer: I'm at Google now, but this comment actually reflects my personal post-INRIA sentiments.


> What we need is a framework, or a backbone, that allows people to easily create new federated-native apps ("dapps") without thinking about consensus issues, protocols versioning, and with native laws compliance.

I agree that this is probably the way forward. The only downside is how your identity is tied to the service provider you choose. It was a PITA when Lavabit went down and I lost that email address.


> The only downside is how your identity is tied to the service provider you choose.

Fully agree with this. The link to identity is not often brought up. I run a university lab focussed on re-decentralisation of the Internet as a day job. We focus on identity & p2p + trust.

Beaker browser is impressive early work focussed on the raw bit transport. It re-uses DNS for global discovery; it's hard to do everything decentralised at once. How do you do global search or spam control on a decentralised Twitter?

The hard issue we need to solve in the coming decade is the governance of such systems. Ideally it would rule itself. A definition of a self-governance system: a distributed system in which autonomous individuals can collectively exercise all of the necessary functions of power without intervention from any authority which they cannot themselves alter.


Would be nice if all devices that wanted to be routable could get an IPv6 address, including mobile, and we wouldn't have to worry about TURN and STUN...

Practically, laws are forced to contend with the challenges of decentralization when it emerges, as in the case of numerous laws that became harder to enforce due to the internet.

Federation is what we have now and it has a tendency toward centralization, as we see with the WWW and the mega sites where users aggregate.


>What we need is a framework, or a backbone, that allows people to easily create new federated-native apps ("dapps") without thinking about consensus issues, protocols versioning, and with native laws compliance.

This is really needed. There are endless great tools for centralized apps that make it trivial. I could build a usable forum website in rails in a day. I have no idea how to do that in a decentralized and secure way.


This seems to miss the problem completely to me. The internet essentially solved the problem of _distribution_. That meant disintermediating away from publishers, and allowed anyone to publish.

The problem then moved on to being one of _curation_. Companies such as Google, Facebook, and Amazon are in the business of providing curation: i.e. taking away the leg-work of deciding what we should attend to.

A de-centralized web doesn't appear to decentralize the problem of curation at all, which means we are going to still end up with centralized curation and the same or similar monopolies on attention that we have now.


This kind of decentralisation -- taking existing platforms controlled by megacorps and making them P2P...

...feels, to me, like a huge mistake.

How would one eliminate hate speech and toxic content from it? Or illegal content? Or anything you put there and need removed to keep living your life freely? The technologists developing this tech hand-wave these concerns away citing "freedom of speech" -- but one's freedom ends where another's begins, and hate speech, toxic content, illegal content, not being able to have what you said or did forgotten online, all these things curtail someone's freedom.

And by making it decentralised, they're just making it harder for people who are the victims of these problems to hold the people responsible accountable and to stop them. These technologists want freedom of speech at the expense of everyone else's freedom.


> How would one eliminate

You simply make your own choices and don't follow/subscribe/view all that illegal, toxic, hate content. You know, the same way you do today by not visiting all those illegal, toxic, hate websites. They still exist, though, for those who don't share your views on policing content for other people.


Ignoring the problem doesn't make it go away for the victims.

The women whose boyfriends posted private sex pictures as revenge, or the minorities who will be the victims of hate groups organizing on social media, the children who were filmed while being raped and have their video circulating online, the victims of bullying whose bullies are empowered by other people seeing it and not doing anything to stop them...

You can choose to ignore this when you see it, but the victims can't, and it's their freedom that I'm concerned for.


But you are advocating exactly that - ignoring the problem by blocking content instead of dealing with people engaging in those behaviors.

Yeah, it's kind of ridiculous having e.g. the German government expect Facebook to judge which posts are hate speech and co. and then delete them. If that stuff is illegal, then the persons doing so should be held accountable by our legal system. But politicians love using those companies as an easy scapegoat. Anyway, p2p solutions usually try to make finding a person to hold accountable harder than current centralized services do.

As you say, you can ignore this content today. And yet the authorities still feel obliged to make access to child pornography etc illegal. There is no reason to believe that changing the platform would change this stance.

Just because it can't be dealt with by threatening the odd CEO or two, doesn't mean that it won't be dealt with some other way. Now, it may be the case that governments react to a change like this and just accept that they can't control terrorism, child porn etc etc. But it would be astonishingly naive to assume that that's the most likely outcome.


That is an important point, but kind of orthogonal to P2P. A popular P2P website, say DTube, could delegate its moderation to some group of "censors" and have some of its listings removed. Besides, unmoderated spam/trolling-infested sites are not going to become very popular anyway. Now, you surely can't completely remove content that has been distributed, but you can condemn it to be forgotten. 'Organic' forgetting may actually be easier in a distributed web because sharing of content is generally temporary - just like you can no longer find older movies seeded on torrent sites.

> toxic content

What is considered toxic for you might not be considered toxic by another person. That's personal choice. If you need to eliminate some sort of content, that inevitably will lead to this https://www.reuters.com/article/us-china-internet/china-laun...


I'm all for decentralized apps, but before venturing into that kind of complexity, how about getting back control over the Web? It was created as a decentralized net (if not in the blockchain sense) and is the work of an entire generation.

That's what this is about.

Current web tech is inherently centralizing. Say you want to create an experience like Instagram or Twitter, delivered via HTTP. You have to pay for bandwidth, CDNs, storage, app servers, DB servers, etc etc. At scale, it's millions a month. So only corporations can do it, and with a few exceptions (eg Craigslist, Stack Exchange) they end up monetizing and "growth hacking" in user hostile ways.

The big open question is: can we create an experience as compelling as Instagram or Twitter over the P2P web?

It's a hard technical challenge, and today the answer is no. But if we get there, then internet mass media can be delivered via open source projects over open protocols, with a bunch of competing clients to choose from. No central organization controls and monetizes the thing.

Like BitTorrent, but for applications more complex and interactive than just file sharing.

--

If you're interested, here are imo the most compelling projects in this space:

- Dat

- Beaker

- Augur

- OpenBazaar

- Patchwork / Secure Scuttlebutt

They are working on overlapping subsets of the same fundamental challenges, eg:

- How does a node choose what to download? The BitTorrent answer is "only things the user explicitly asked for". The blockchain answer is "the entire global dataset since the start of time". For something like a decentralized Twitter, both of those are unsatisfactory, you need something in between.

- How do you log in? Current systems either have no persistent identity at all (eg BitTorrent) or they just generate a local keypair, and it's your job to back it up and never lose it (eg SSB, Dat, all blockchain protocols). Both are unacceptable for wide-audience social media. People lose their devices, get new devices, forget their passwords, etc all the time. They expect and rely on password reset, etc.

So there are a lot of hard tech and UX problems left unsolved, but also a lot of recent projects making solid progress.


The hosting cost for things like Facebook and Twitter are a pittance compared to the cost of employing all of the engineers/designers/etc who enhance and maintain those services. That IMO is the biggest economic challenge facing decentralized applications.

You can make some nice proof-of-concepts with a group of volunteers, but the effort required to provide a UX comparable to centralized services is going to take more than a handful of people working evenings and weekends.

Decentralized services generally do not afford the same monetization opportunities as central services. Decentralized proponents consider this a feature rather than a bug, but it leaves open the question: Who is going to pay for all of this?


> The hosting cost for things like Facebook and Twitter are a pittance compared to the cost of employing all of the engineers/designers/etc who enhance and maintain those services.

Facebook had $20.4 billion in operating expenses in 2017. Less than 1/3 of that was the cost of its 25,000 employees (at the end of 2017). Facebook is spending more on its infrastructure than it is on all of its employees combined (and that much more when you reduce it to just engineers). Engineers are maybe 1/5 of its operating costs, including their all-in costs.

Both Facebook and Alphabet had roughly $15 billion in total capex for 2017. Data centers, networks, electricity, et al. cost a lot at that scale. It's not a pittance. Facebook spent ~$7 billion in 2017 on capital expenditures related to their network, data centers, etc.

Facebook's first Asia data center is a billion dollars to just start up.[1] When they put up new data centers in places like Henrico County VA, New Albany OH, or Newton County GA, it's similarly nearly a billion dollars a shot to start those up. Once you have dozens of those operating, it's billions of dollars per year to operate them all.

[1] https://money.cnn.com/2018/09/06/technology/facebook-singapo...


I wonder how much of that cost is toward user-facing improvements and how much is toward extracting additional profit out of the surveillance economics model? As Mastodon and Patchwork and other federated social media platforms continue to grow, it would be an interesting and useful effort to analyze the cost structure of these alternatives.

There's plenty of evidence to support the conclusion that companies like Facebook are far more bloated than they need to be for their core experiences. The reason is all about laws of diminishing returns, in every cross-cutting concern of the business.

Two engineers can't do twice as much as one engineer. Perfecting the ordering of the news feed is significantly less valuable to users than just having a news feed in the first place. Building a speech-to-text engine that works 99% of the time costs hundreds of millions of dollars more than one that works 95% of the time, but is it worth that much to users? Think of the number of engineers at Facebook or Twitter who just work on infrastructure, or supporting other engineers, or perfecting ad placement to improve CTR by 0.5%. All of these are tangential to the core experience, in many cases required or at least valuable only because Facebook is so big.

I can't just pick on Facebook here; this is why all companies will always get disrupted. Massive layers of scale behind the scenes to support products that are fundamentally simple, combined with advancing publicly available technologies helping newcomers.


If the open source ecosystem as a whole has taught us anything, it's that we can take things much further than simple proofs of concept and still remain open.

I think your point on UX/UI is important, though. Open source has a turbulent history with functional UX. We've done an incredible job helping the technical communities understand why open source is important, but because so many of us are technically focused, we've fallen somewhat short on helping UX- and design-focused communities understand, on a deep level, why open projects are important, in much the same way the technically focused understand.

If we're aiming for mass adoption across the spectrum, onboarding the UX/UI communities is as important as it was to get the technically focused to understand.

Also, there is another point worth considering, someone recently made a convincing argument to me that sometimes mass adoption may not be a good thing. Mass adoption leads to eternal September and depending on what the project is, eternal September may destroy a community. A project with a solid technical foundation but difficult UX/UI experience can be a good barrier to prevent eternal September.


> The big open question is: can we create an experience as compelling as Instagram or Twitter over the P2P web?

> It's a hard technical challenge, and today the answer is no.

This is why I completely dismiss almost every "distributed" solution. If you can show me a business model/design document for a distributed service that can scale to big-tech levels, deliver a user experience that matches current solutions, while also incentivizing developers enough with money to get them to build it, I will be swayed. However, every solution I have seen makes massive tradeoffs that negatively affect all 3 criteria compared to current centralized solutions.


Huh? Email >>>>> Twitter over like 40 years or more? You don't work as an engineer, right? If you did, you would know how low the quality of most centralized technology created by big corps is. Most of the good stuff is decentralized. But the problem is that decentralization also means that the edge has more responsibility. And that responsibility is what people don't want to have. So they'd rather use a centralized, bad system because it's "easier", and then every few years they can blame all their own laziness on the centralized service provider and even read about it in the news.

If you need another example, Google a tool called "git". It is used by almost all software developers nowadays and even by quite a few authors. You don't need to set up anything to use it. If you have ssh you can just share your text-based data with others by giving them access to your repository's directory. I bet it transfers more MB/day than Twitter. But nobody counts it because it's so distributed that it's hard to put an owner label on it. (Although the originators can be named quite clearly.)


Running your own email server is definitely not "easier" if you are actually trying to maintain any durability and availability SLAs with your average homeowner's computer/storage/network.

And sure, you can use git by itself and send the diffs to each other over email. However, then you have to deal with coordinating where the head is which is a pain across every node since it is constantly changing. Thus, most people don't do it that way and instead use a centralized service like github.


> you can use git by itself and send the diffs to each other over email

Why not send it via ssh? It's much easier. And if the data is not too big you don't send diffs but complete file states. Git usually calculates the diffs on the fly by comparing two file states.


I was just giving an example because I think I remember hearing that Linux exchanged a lot of their patches over email, which is partly what inspired Linus to make git the way it is. I'm not really sure of all the different ways you can use it because my teams have always used central repositories.

> You have to pay for bandwidth, CDNs, storage, app servers, DB servers, etc etc.

Decentralized serving requires just as much bandwidth, storage, and iron (if not more). Does it somehow make those resources cheaper?


It simply distributes those costs to users. From the user's perspective, it's free (zero additional cost), assuming they already own a computer and pay for monthly flat-rate unmetered internet.

For example, in 2016 a friend of mine and I made an electron app called WebTorrent Desktop. It has over a million downloads. The total bandwidth transfer so far is probably a lot--wild guess, maybe a few million dollars worth?

But it is free and open source and costs roughly $0 to run--just enough to keep the website up. That's the magic of decentralization--you're simply writing software. You're not running a service.

--

Consider the total monthly internet bill of all Twitter users combined. Extremely rough guess, 500 mil monthly actives * $40/mo for Comcast or something = $20b per month.

The crowd has more than enough bandwidth, disk space, and CPU cycles to run services at any scale. That's another magical aspect of dapps: the total resources available to the system automatically scale with the number of users. It's up to us to figure out how to harness it.


I'm currently sketching out/prototyping a key recovery system with biometrics- https://www.notion.so/Design-Spec-fa2b4e36d1b74d56bfca7a5062...

>how about getting back control over the Web?

That's the point.

Disclosure: I'm working in this field.


A good recent podcast about the decentralized web, with a technical focus, is JS Party #42, https://changelog.com/jsparty/42.

Featuring Mathias Buus and Paul Frazee from the Beaker project.


Thankyou for mentioning this! I hadn't seen it passed around I guess.

How long until it reverts back to some nodes having way more influence/power/data than the others?

This is not only a technology problem, it's (mostly, I'd say) a social one. Humans will always want more power and control, whether it's in real life or online.

Every single type of governance has fallen victim to human greed and ambition, as will any kind of Internet, I believe.

Fix the users - save the Internet! :)


I think a lot about this.

In A Thousand Plateaus, Deleuze and Guattari talk about the opposition between the state apparatus and the "war machine" (their term for a nomadic/decentralized structure). They talk about how it seems like nomadic societies are primitive, but actually a lot of nomadic societies have "collective mechanisms of inhibition" to ward off the formation of a state apparatus, by preventing power from accumulating within any one party and "evening it out" among everyone.

The applicability of D&G's ideas on the war machine to our current problem of platform power is immediately apparent. A centralized platform is exactly like a state apparatus. In our situation the collective mechanisms of inhibition might be something like stronger/more proactive antitrust laws to break up/nationalize entities that become infrastructural components of the society.

But as you've mentioned, I think this problem of "uneven development" is a feature of any marketplace-like structure. In sufficiently large numbers, a power law tends to assert itself with no other checks on power. This is why blockchains by themselves won't solve the problem. The debate, then, shifts to be about whether this is a feature or a bug, which is something that I'm never sure about.

To close, another quote from ATP comes to mind ("smooth space" is another term they use for nomadic spaces):

> Smooth spaces are not in themselves liberatory. But the struggle is changed or displaced in them, and life reconstitutes its stakes, confronts new obstacles, invents new paces, switches adversaries. Never believe that a smooth space will suffice to save us.


Awesome to see others on HN loving D&G. But perhaps also power is cyclical. When the web was first popularised, it had the same potential as what DWeb has now. TCP/IP was written to be inherently distributed and provide resilient routing. Then, as soon as it starts to threaten existing power structures, forces kick in to try and stabilise it through control, surveillance, and 'governance'. It becomes part of the rhizome, the rhizomatic system of power, that the new system (in this case TCP/IP / www) set out to challenge, creating an even more complex, ever-evolving rhizome of power (surveillance, paywalls, censorship). The same thing happened with other revolutions throughout history — the power base they set out to challenge transformed into a similar power structure as an unintended consequence.

Well, I think this cyclical pattern shows exactly why the thinking around the war machine is so important. Thinking about this very naively, to get closer to the kind of smooth space that D&G conceptualize, it is necessary to have some kind of homeostatic system that recognizes abstractly when power (and I'm using this term in a very naive, non-Foucauldian way) is being disproportionately concentrated in any one body, and corrects accordingly.

That said, as from my previous comment, I'm not totally confident that this kind of decentralization is even optimal, but that's a story for another time.


> forces kick in to try and stabilise it through control, surveillance, and ‘governance’

Which is ARIN/RIPE/APNIC/AFRINIC/LACNIC and the DNS root zones, and ICANN on top.

Not to say that's only bad, just trying to illustrate that in this case, D&G's point is actually pretty tangible.


It will always be a struggle, just like the offline world. It doesn't mean you give up; we haven't defected to a new world order offline, so why do it online?

It's not really technological or social, it's logistical.

We've had all kinds of redundant network topologies that used independent networks for decades. The internet is decentralized, and it works pretty well, all things considered. The web is fairly decentralized, too: DNS is independent of a registrar is independent of a network service provider and all are independent of ISP's, and even those are independent of backbones.

The only thing that isn't very decentralized is the client-server IP/TCP/HTTP model. You can provide decentralized versions of HTTP services, but those are the things that are the most costly and inconvenient to decentralize. It can be done, but it's a huge pain with very little benefit.


HTTP is client/server (session) based, but TCP/IP certainly isn't. It acts as a client/server protocol because we tell it to, but IP networking was written to be distributed/decentralised so as to provide better resilience, unlike other networking standards of that time (token ring, anyone?).

TCP is connection-oriented, and each connection is a session. IP is decentralized, but the way it's used now in consumer devices makes initiating connections to them difficult and dangerous. Any realistic hope of a successful new distributed web should address this problem, though probably the current solution is "have clients join a private network and route back through it", completely side-stepping firewall concerns. If you ignore the concerns, I guess these protocols aren't that big a stumbling block.
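
A rough sketch of that side-step (all names here are hypothetical placeholders, not a real service): the device behind NAT dials out to a rendezvous/relay host and answers requests over that single long-lived outbound connection, so no inbound firewall or port-forwarding rules are ever needed.

    import socket

    # Hypothetical relay endpoint -- a placeholder, not a real service.
    RELAY_HOST, RELAY_PORT = "relay.example.com", 9000

    def serve_via_relay():
        # One long-lived OUTBOUND TCP connection; consumer NAT/firewalls never
        # see an inbound connection attempt. The relay forwards peers' requests
        # down this same socket, one line per request in this toy protocol.
        with socket.create_connection((RELAY_HOST, RELAY_PORT)) as conn:
            conn.sendall(b"REGISTER my-home-node\n")
            reader = conn.makefile("rb")
            for line in reader:                       # each line = one forwarded request
                reply = b"hello from behind NAT: " + line.strip() + b"\n"
                conn.sendall(reply)                   # reply travels back over the same socket

    if __name__ == "__main__":
        serve_via_relay()

The obvious trade-off is that the relay is itself a small point of centralisation, which is exactly the tension the rest of this thread is circling around.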

The very notion of the present Internet is grounded in the network infrastructure formed by ISPs. Everyone needs a gateway and a 'bus' to reach the desired end-point. Most ISPs still limit upstream bandwidth for consumers and charge a premium for more, making it a friction point. In a way, centralization is just a logical outcome of such 'rationing'.

A distributed network would depend even more on ISPs, and ISPs are self-serving by nature.

Perhaps the 'decentralized' web should also address the very foundation of the network - the network infrastructure and access to it.

Does the Internet need to depend on ISPs?


It will probably look and feel a lot like how GitHub has effectively centralized git repos. At least in this world, if we don't like how somebody is centralizing something, it can be easily and quickly moved.

Imho it comes down to incentives and different factions keeping each other in check, the same way it's done in Bitcoin and, in some ways, in democracy. Although it's important to note that decentralised systems are different from distributed systems: in decentralised systems there exist parties with more influence than others, but none of them has enough influence to overpower all of the others.

You could see this play out every time any party has tried to take full control of Bitcoin. So far, everyone has failed.


And eventually a decentralized node will become the new central node for the next big thing.

It's all cyclical.


Large systems want to centralize for the sake of efficiency. I was among the first wave of P2P hackers in the 2000s and we learned the hard way that decentralization leaves a LOT to be desired.

Not to mention that back then we had a performance ratio of edge to node that was an order of magnitude better than the one we have today. A laptop today and one from 2010 don't have that much difference in performance; a data center from 2010 and one from today are night and day.

And who will pay for it? With experience, Xanadu seems like the only solution. The reason it's in development hell is that the problem it's trying to solve is so hard.


Remember when Audiogalaxy arrived in the P2P world? It was a (limited) form of centralization, but it made P2P MUCH more usable.

There is an easy contrarian view about decentralization: even if a decentralized protocol wins, in the end it is all about the UI/UX/aggregation, which clearly cannot be decentralized. For example, OpenBazaar can be great, but whoever develops the best UI and search engine will win.

There is a semi-centralized solution: make search run on a handful of central machines, and check if the results of (e.g.) two of them match.
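
A rough sketch of that cross-check, assuming a handful of hypothetical HTTP search endpoints that return a JSON list of result URLs (the hosts and the response shape are made up for illustration):

    import json
    from collections import Counter
    from urllib.parse import quote
    from urllib.request import urlopen

    # Hypothetical semi-centralized search nodes -- placeholders, not real services.
    SEARCH_NODES = [
        "https://search-a.example.org/q?term={}",
        "https://search-b.example.org/q?term={}",
        "https://search-c.example.org/q?term={}",
    ]

    def quorum_search(term, agreement=2):
        """Return only the result URLs reported by at least `agreement` nodes."""
        votes = Counter()
        for node in SEARCH_NODES:
            try:
                with urlopen(node.format(quote(term)), timeout=5) as resp:
                    results = json.load(resp)   # assumed: a JSON list of URL strings
            except (OSError, ValueError):
                continue                        # one node being down shouldn't block the query
            votes.update(set(results))          # each node gets one vote per URL
        return [url for url, count in votes.items() if count >= agreement]

This keeps the index on a few well-resourced machines while making it harder for any single one of them to quietly skew the results.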

This problem is essentially solved by YaCy: http://yacy.net

I don't think either of you understood what I said. It does not matter if you can find a decentralized solution to a problem, because in the end the user will access it through a UI/UX/app that, in your example, can choose how to rank the search results beyond what the protocol dictates.

In the past people used the simple "mail" command to read email, but now they choose Gmail or others because the UI/UX is better (or for any other reason) or they like it more. The SMTP (federated) protocol is hidden.


Repeat after me: The Internet is already decentralized. The web is already decentralized.

Blockchains do not have a monopoly on decentralization. People who assert this are trying to redefine the term to mean some kind of extreme P2P model that fits their narrative.


Repeat after me: It's effectively centralised. It's effectively centralised.

Almost all our traffic goes through Google, Amazon and Facebook. It's extremely centralised.

If Amazon servers go down, so does a significant portion of the internet. That's centralisation at work!

Blockchain tech doesn't claim to have a monopoly on the term 'decentralisation', it's just re-popularised the technology.


> Repeat after me: It's effectively centralised. It's effectively centralised.

So is bitcoin. Only 3 or 4 companies own the majority of mining.

Companies on the internet are centralized, but not the internet itself.


Yeah, Bitcoin-based consensus probably isn't the best choice for a project like this. I'd love to see a project that uses non-blockchain (but similarly publicly immutable) tech, like Nano or Iota.

Still some issues with centralisation (since consensus is achieved through vote delegates), but that's much easier to fix than redistributing hashpower.


> And one of those is speed. Because of the way the DWeb works differently from the current web, it should intrinsically be faster

This seems wrong to me.


I think the logic is that instead of everything going via a handful of servers at the big companies, it goes through many more thousands of individual PCs, but of course that depends on the technology used. Torrents should be faster, but only if lots of people seed and don't simply download and disconnect.

Torrents themselves are often faster, but they destroy the speed of anything else on the same pipe.


It depends what's meant by "distributed". A content-addressable network could very well be faster because of the pervasive caching, and it's far more resistant to (D)DoS attacks. Things that aren't accessed frequently by a lot of people would be much slower.
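
To make "content-addressable" concrete, here is a toy sketch (not any particular protocol's API): content is keyed by the hash of its own bytes, so any peer that has seen it can serve it, and the requester can verify integrity without trusting that peer.

    import hashlib

    local_cache = {}   # address (hex digest) -> bytes; in reality spread across many peers

    def put(content: bytes) -> str:
        """Store content under the hash of its own bytes and return that address."""
        address = hashlib.sha256(content).hexdigest()
        local_cache[address] = content
        return address

    def get(address: str, fetch_from_peer) -> bytes:
        """Serve from the local cache if possible, otherwise fetch from an untrusted peer and verify."""
        if address in local_cache:
            return local_cache[address]
        content = fetch_from_peer(address)
        if hashlib.sha256(content).hexdigest() != address:
            raise ValueError("peer returned bytes that don't match the requested address")
        local_cache[address] = content   # popular content ends up cached on many peers
        return content

Popular content naturally ends up replicated everywhere (hence the speed-up and the DDoS resistance), while a rarely requested address still has to be fetched from whichever few peers hold it (hence the slowdown mentioned above).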

Complete decentralization as a philosophical and absolute goal misses the point. There are great benefits to decentralization, for sure, just as there are benefits to centralization. Projects that aim to decentralize everything “for the sake of it” are doomed to failure. I want control over my privacy, my spending, my choice of content. But at the same time, I want a great user interface; I want curated content; I want performance. I want a service, and I’m willing to pay for it. I don’t have a fundamental problem that there are big companies out there who provide that service to me, and who make money doing so. Even, in some cases, a whole lot of money. Good for them.

It boils down to: what’s the best way to provide services that I want?

I’m working on a project to provide a decentralized marketplace for software and infrastructure services, competing with AWS and Azure. The marketplace itself is blockchain-based: partially decentralized, but with a permissioned blockchain that still allows governance, legal compliance, removal of bad actors, KYC compliance, etc. The kind of things that customers (corporations) need for them to use the marketplace.

I think we need to be pragmatic about it and figure out where technologies like blockchains can help build better services, instead of trying to cram decentralized systems into everything whether it makes sense or not.


There are inherent benefits to large, centralized platforms. Users, especially content creators, desire a large network of users and excellent UX, both of which have typically been properties of centralized systems or at least far easier to achieve with centralized systems. Every decentralized social network thus far has failed, and the modestly successful ones (Mastodon, for example) are in fact federated.

Many of the problems alluded to in this article, in particular the privacy risk of centralized data, are more effectively solved by policy changes and iterated technology (differential privacy as well as bread-and-butter cryptography) rather than furious hand-waving about blockchain protocols.


The next big step?

The fundamental technologies were designed with decentralization in mind

Mastodon is just peered IRC all over again

The ISPs have pooh-poohed running shared services from home connections.

DNS and the core protocols can run in decentralized ways no problem

It’s the social order that doesn’t enable it


Handshake.org is tackling this problem by decentralizing DNS and removing the need for (and vulnerabilities associated with) certificate authorities.

Disclosure: I founded Namebase which is a registrar for Handshake


I'm using the Namebase beta right now, it's great!

Awesome, I'm glad you like it! We're shipping a mobile-first redesign soon that's much better. Lmk if you'd like more testnet HNS when we do :)

Hasn't half of the decentralisation problem already been solved by the Tor network, which is encrypted and already decentralised? And yet they still regularly identify and stop Tor servers. What makes them so sure that the decentralised nodes on a Bitcoin network cannot be physically identified and stopped? A truly decentralised network cannot be built on a borrowed network. A truly decentralised, interruption-free, government-free Internet would probably be built on a satellite-based network, not this.

Important note: Tor does not encrypt your traffic all the way to the destination.

Tor simply hides where your web requests originate from - it's up to you to visit HTTPS sites and encrypt your communications.

Also, Tor is quite decentralised, but the existence of directory authorities undermines this, since they present a centralised component.


Sorry you are getting downvoted. This is very much correct, and folks simply put a lot of faith in the proxy transport as a means to an end. One vulnerability/bug (Tor has had many) can weaken that link. Tor is rarely installed correctly or in a secure manner (forcing all packets through it and dropping anything that leaks from the browser, for starters).

Do you have any links on how to install it properly, and on how to test that (maybe through Wireshark or something similar)? I admit I haven't used it in depth (although I've studied the protocol quite a bit).

I don't have one handy, though you might find one in the documentation for the Tails Linux OS.

At a high level: the client workstation must not be allowed to send any packets to anything other than the SOCKS port running on the Tor host. The workstation must have a static ARP entry for its gateway. The workstation should use a RAM-disk Linux distro and not persist anything to unencrypted disk. The Tor host must not allow anything inbound other than the Tor SOCKS port. The Tor node must only speak outbound on ports 80 and 443 (formerly known as the fascist firewall setup). Ideally, the Tor node should run on a cheap VPS, paid for with a burner card and accessed via a VPN so that Tor traffic from the home ISP is not evident. The VPS host should be cycled from time to time.

This is of course a lot of setup work, but most of it can be automated.
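
As a small application-level complement to those network rules, a client can be pointed at the Tor SOCKS port and then sanity-checked. A minimal sketch, assuming Tor's default SOCKS port 9050 on localhost and the Python "requests" library with its SOCKS extra installed; the Tor Project's check service is used for the sanity check:

    import requests   # needs: pip install requests[socks]

    # Route everything through the local Tor SOCKS port; socks5h means DNS
    # lookups also happen inside Tor instead of leaking to the local resolver.
    TOR_PROXY = {
        "http":  "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    def check_tor():
        resp = requests.get("https://check.torproject.org/api/ip",
                            proxies=TOR_PROXY, timeout=30)
        info = resp.json()
        print("exit IP:", info.get("IP"), "| via Tor:", info.get("IsTor"))

    if __name__ == "__main__":
        check_tor()

This only proves that one application went through Tor; the firewall rules described above are still what stops everything else on the box from leaking around it.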

[Edit] Speak of the devil. Here is a zero-day published on the Tor browser [1]

[1] - https://www.zdnet.com/article/exploit-vendor-drops-tor-brows...


With phones doubling performance every 1-2 years, and desktops/laptops largely stagnating, it looks like in a few years, a server rack will fit into your smartphone.

And most enterprise software barely needs more than a rack of current-gen servers (and almost no individual user needs even that).

So yeah, decentralization will be upon us soon enough.


Network performance + battery consumption make phones an impossible choice for the distributed web. The decline of the PC is a bad thing for the distributed web: it's the only box that can stay online with a stable connection 24/7. I believe we're not going to see distributed services like good old torrents or Kazaa until PCs (or at least home routers with a lot of storage) start dominating again.

> It's the only box that can stay online with a stable connection 24/7.

My router stays online 24/7. It already has a web server built in. I could hack it to make it serve a public website.

But there’s absolutely no way I’m going to do that. The security and maintenance requirements are just too much of a PITA.

It’s much easier, more secure and more reliable (and likely cheaper once you figure in depreciation and opportunity costs) to set up and maintain an instance in the cloud, or a serverless site.

And if you don’t like the big cloud providers, there are many smaller outfits that can do the basics - compute and object storage are all you really need for a small site.

Consumer hardware and software are not really well suited to running publicly facing websites.


> The security and maintenance requirements

That's why you need one of the sharing protocols, like IPFS, that make security everyone's responsibility, not just yours.

I don't get why you think the cloud solution is so much better. Glorified CDNs are a clumsy intermediate solution until internet connections get fast enough for everyone that running a sharing node will have negligible impact. E.g. no cloud provider can compete with Popcorn Time in speed, despite billions of dollars of effort.


> thats why you need one of the sharing protocols, like IPFS that make security everyone's responsibility, not just yours.

Everyone's responsibility = no one's responsibility.

If your data is lost by IPFS you don't have anyone to sue.


You could always pay a host to host your files for you and distribute them to IPFS. Nobody argues that (e.g.) IPFS replaces all the functions of the cloud, but it certainly decentralizes things, gives less power to The Man, and allows competitors to emerge.

I do like to imagine a future where the modem/router becomes a place people can host their own data. A formally verified Deno-like web-server on seL4. The actual modem/router software running in a separate VM.

I wouldn't say it's a "decline". Most people just don't need an upgrade. You can use a PC from ~7 years ago for most stuff today. Gamers are not the majority here. The best performance boost you can get for an older machine is an SSD.

Desktop sales were already declining 7 years ago. A lot of people just don't care to have a desktop PC any more, and when their old computers finally break or stop working, will replace them with laptops or tablets, which can't do the sorts of always-on distributed things that we might hope.

I would count laptops in the PC category. Same year span: have a laptop/PC from 2012 and you are good to go.

But the key difference remains: there aren't many personal computers that we leave plugged in and turned on 24/7 anymore.

This was never the case.

I presume you mean "never the case for many people". As in "most people never left their PC on 24/7". That may well be true.

I used to run a couple of minecraft servers for my kids. Judging by how amazing their friends thought that was... I don't think there were many other people in the town doing that.

On the other hand, I still have a raspberry pi running 24/7 to this day.


Yes, but in this case your PC is doing something. It has a purpose. 24/7 without doing anything is a waste of energy.

That was and still is the case; we have just forgotten about it. In the past it was SETI/Napster/torrents/Kazaa, etc. Someone else mentioned Minecraft servers. There are also lots of people running similar OpenSimulator servers at home. Unfortunately, laptops/tablets have thinned that crowd, and the possibility of such arrangements is smaller overall now.

Of course it was the case. What do you think things like Seti@Home were running on? People who owned computers had so many idle processor cycles that they were desperate to find something to do with them.

If 24/7 home computers were never a thing, what did BBSes run on?


Same answer: Yes, but in this case your PC is doing something. It has a purpose. 24/7 without doing anything is a waste of energy.

This has somehow turned into your opinion on energy use (yes, it's good that desktop PCs can now decrease their energy use significantly while idle, which was not the case in the 90s).

But you made a claim that "this was never the case", when clearly there was an era of home desktop computing when your desktop could act as a hobbyist server. This era gave rise to BBSes, then MUDs, then Minecraft servers (and many things in between).


>when clearly there was an era of home desktop computing when your desktop could act as a hobbyist server

Yes, but again, you have used your PC for something. This has shifted to smaller (Raspberry Pis) or dedicated servers (or "cloud" stuff), so you could run this stuff there.


Unfortunately people don't use them. The best replacement would be if someone made a popular home router that permanently runs an IPFS node or something.

Super late to the game here, but we (https://textile.photos/) are seeing pretty great results on mobile running a 'lite' IPFS peer. Battery consumption is not bad compared to other network-driven apps, and with intelligent swarm on/off optimizations, mobile peers can share and pin files quite nicely. It helps if you have a network of 'always on' peers to back them up, of course :)

This seems implausible. Moore's law is over and phones are limited by size and power. There's no reason to believe progress will continue until a phone is the same speed as a typical desktop machine, let alone a server rack.

Interestingly it's the centralized network that will grow radically more powerful, while the home devices continue to stagnate.

The total capacity of infrastructure entities like AWS will increase by 10x at a minimum over the next decade. By comparison, your phone or laptop will modestly nudge forward. Consumers are not going to buy 10x the number of laptops, desktops and smartphones that they do today, ten years out. Most likely, those figures will barely move (the smartphone industry is already stagnating). Most of the incremental spending and investment will go into the centralized infrastructure by the giants.

Network speeds will continue to increase relatively rapidly. We can easily go from routine 50-100 Mbps home lines to 1 Gbps over the next decade. We're not going to see a 10x increase in the power of the average laptop (lucky if it doubles in ten years). It's primarily going to be useful for streaming/consuming very large amounts of data from epic-scale central systems for gaming, 4K+, VR, etc. Decentralized systems owned by consumers will be far too weak to fill that role.

The AI future isn't going to be decentralized. The very expensive infrastructure that will demand, and its need to run 24/7, will be centralized and owned by extraordinarily large corporations.

It's precisely the typical consumer's home hardware that will act as the ultimate bottleneck guaranteeing that decentralization can never take off. This has always been obvious, but it won't prevent the fantasy from maintaining its allure, of course. That will perpetually draw headlines and hype in tech, for decades to come, with no mass-adoption breakthrough.


That sounds very plausible to me, but I still think decentralized server-side infrastructure has some potential. Sandstorm didn't take off and maybe Mastodon isn't it, either, but it seems like someone's going to have consumer-friendly, <$10/year, general-purpose server accounts for running apps. Maybe some game will make it popular?

Especially if you consider how much of a lead Apple has had over ARM and the custom designs of SoC vendors like Qualcomm, you will realize that the easy growth is already gone. Two years for a 50% increase in performance just to catch up with the strongest competitor is very far from doubling performance every year.

True. Companies are hitting a lot of friction getting node sizes down to 7nm when compared to sizes from years or decades ago.

True but you don't need much of a server to run many of the "server side" things a single user needs.

Until we find ways to add more bloat to the user requirements that will require a server-of-the-future to be more than a server-of-the-present.

I hadn't heard of the DWeb; the term appears to have been coined by Mozilla [0].

[0]: https://hacks.mozilla.org/2018/07/introducing-the-d-web/


Nah, long before that, by the dweb community and the Internet Archive, leading up to the first Decentralized Web Summit / DWeb Summit in 2016 :)

https://2016.decentralizedweb.net/


Can there be a truly decentralized web without decentralized physical layer? Mesh networks etc.

Which, of course, will not necessarily be connected... but that's a part of decentralization and freedom. "Diamond Age" and its virtual polities come to mind.


I have started a project as a way to explore how web applications can become more decentralized. In the blockchain world the starting point is dapps, so I'm calling mine ddapps. Would love any constructive criticism (not quite ready for ShowHN yet):

ddapp.org


Is there anywhere we can track the growth of these platforms, though? For entrepreneurs, we need to understand the opportunity, not to mention OS developers adding support to their browsers to make this mainstream.

Just the other day, Chrome declared their Intent-to-Implement-and-Ship for more URL schemes in registerProtocolHandler(), which is a small first step: https://groups.google.com/a/chromium.org/forum/#!topic/blink...

Firefox has been supportive of the effort for some time already, working on libdweb: https://github.com/mozilla/libdweb


"lose your password and you lose access to everything" - I'm sorry but this UX blunder just won't fly. If we are to have decentralized web we'll need services for key recovery.

Yeah, I have no clue what they were smoking when they said this.

Mitra @ the Internet Archive and I talked about this while integrating DWeb ( https://news.ycombinator.com/item?id=17685682 ).

I showed him a cryptographically secure method of having passwords (that keys are not derived from) that allows for password resets (without a server).

For a high-level conceptual explanation of this approach, see our 1 minute Cartoon Cryptography animated explainer series:

http://gun.js.org/explainers/data/security.html

This same method can be used for a Shamir Secret Sharing "recover your account via your 3 best friends" scheme, which I believe will be the best UX for most users.

This is an already solved problem.
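
For the "3 best friends" idea specifically, a bare-bones Shamir split/recover over a prime field looks roughly like this. This is a toy sketch for illustration (Python 3.8+ for the modular inverse), not GUN's actual implementation or a production library:

    import random

    PRIME = 2**127 - 1   # a Mersenne prime, comfortably larger than a 16-byte secret

    def split(secret: int, shares: int, threshold: int):
        """Split `secret` into `shares` points; any `threshold` of them recover it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, shares + 1)]

    def recover(points):
        """Lagrange interpolation at x = 0 recovers the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    # Give 5 friends one share each; any 3 of them together can restore the key.
    demo = split(123456789, shares=5, threshold=3)
    assert recover(demo[:3]) == 123456789

In a real deployment the shares would wrap a random key-encryption key rather than the password itself, and you'd use an audited library (and authenticated channels to the friends) instead of this sketch.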


Maybe a digital locker for your password that can only be decrypted by the password of 3 out of 5 friends.

Nah, what if your friends conspire against you? A better idea, imo, is that you have three physical keys. If you lose one, you can use the other two to revoke it and generate a new one.

There are already multisignature services being developed with a third party being one of the holders. I'd have to read up again on how it works but it makes sense when you hear it.

I think you mean "services for account exploitation and warrantless search by law enforcement"

I, and I suspect many other people, often have online profiles and existences and we simply don’t care about the government seeing it. Don’t get me wrong, I’d like a high security option for some things, but most of what I do online is frankly trivial nonsense. I’d be much more upset if I lost access to it forever than I would if some jackbooted thug decided to snoop around it. Why does everything need to be the digital equivalent of a Supermax prison? I want the full range from a guy on furlough to Hannibal Lector fullly restrained, mask and all.

If you’re only willing to offer me the “Lector” package, I’m going to pass.


I'm more concerned about criminals using such proposed credential recovery procedures to rob me. Thanks however for sharing your views about how we should all trust the government without question.

If public blockchains prevail, then everything is already public and traceable to a single individual. Search wouldn't even be necessary. As long as financial gateways are tied into the decentralized web, AML/KYC will prevent total anonymity, so there is always room for forensic services.

this is more of a "using passwords" problem than a "decentralized web" problem. password recovery is a band-aid fix over the real password management problem. i think key-based capability security is the future, but it isn't possible without first moving away from passwords.

i think the UX for a completely keychain-centric auth/authz framework can be much better than what we have today with password managers. a master password + device-entangled PINs protecting per-app/agent keys drastically reduces the possibility of getting locked out of your account AND provides for master password reset by unlocking and re-encrypting your keychain using the local device-entangled key.
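
One way to read the parent's scheme, sketched with standard-library primitives (the names, the "device secret", and the derivation layout are my own stand-ins, not a specific framework): a master password is stretched into a master key, a PIN is "entangled" with a secret held only by the device, and per-app keys are derived on demand from a keychain key, so resetting the master password only means re-wrapping one small keychain blob rather than regenerating every app key.

    import hashlib, hmac, os

    def master_key(password: str, salt: bytes) -> bytes:
        # Stretch the human-memorable password into a key (stdlib PBKDF2).
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    def device_key(pin: str, device_secret: bytes) -> bytes:
        # "Device entanglement": the PIN alone is useless without the secret
        # kept in the device's keystore (stand-in here: random bytes).
        return hashlib.pbkdf2_hmac("sha256", pin.encode(), device_secret, 200_000)

    def app_key(keychain_key: bytes, app_id: str) -> bytes:
        # Per-app/agent keys derived on demand; nothing app-specific to back up.
        return hmac.new(keychain_key, app_id.encode(), hashlib.sha256).digest()

    # Usage sketch: the keychain key is what actually protects per-app keys.
    # Conceptually it would be wrapped (encrypted) under BOTH the master key and
    # the device key, so a password reset = unwrap with the device key, re-wrap
    # under the new master key. The wrapping itself (an AEAD) is omitted here.
    salt, device_secret = os.urandom(16), os.urandom(32)
    keychain_key = os.urandom(32)
    mk = master_key("correct horse battery staple", salt)
    dk = device_key("4821", device_secret)
    print(app_key(keychain_key, "mail.example").hex())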


I think the web is still so new that people will really struggle to understand a decentralized web. Even 20 years on, .com still reigns supreme as the best TLD around.

I'm in tech and I'm interested in a decentralized web but I also feel that throwing the baby out with the bathwater isn't a great idea. The article says, "The decentralised web, or DWeb, could be a chance to take control of our data back from the big tech firms." To me it sounds like we're basically saying, "ok Facebook/Google/Twitter/Instagram... you're all too big to regulate so we're going to build A WHOLE NEW INTERNET". If they're smart enough to pollute the current system, they're smart enough to pollute a new system. In fact, these corporations are so big that you'll find out eventually that they've funded quite a bit of this decentralized web.

As a parent, I would feel at least a little better seeing some bankers, Pharma bros, tech execs, etc. actually go to jail and have their lives ruined for their blatant disregard of pretty much everything. I don't want to tell my kids, "well, we're too dumb to regulate the internet so we made another one.. and that one got messed up too... herp derp"


.com still reigns supreme, but ICANN only recently started letting people register new TLDs, and even then only 500 a year are registered. Handshake is helping to solve this by decentralizing DNS and letting anyone register new TLDs, so in the future .com may be way less popular than it is today.

Disclosure: I founded Namebase.io which is a registrar for Handshake


When you use a .com you expect the domain to just work.

When you pick a random TLD like .io, for example, you are not getting the reliability of a .com. .io had a few big issues last year (1/5 of DNS queries were failing; an ex-Google employee bought ns-a1.io and was able to take over all .io domains).

As more TLDs come from good- and bad-faith actors, people will flock to .com as a known, respected entity. Limiting to .com, .org, .net and country codes and slowly introducing new TLDs made more sense and gave time to establish trust / create brand awareness. 500 a year creates noise and forces distrust of any unusual or new TLD.


IMO, there are two key barriers to a decentralized internet (other factors are minor in comparison.)

(1) IPv4 (2) Bandwidth limits

IPv6 makes NAT unnecessary. With IP scarcity gone, IP addresses might become permanent like phone numbers.

ISPs are currently making money off fixed IP addresses. Market forces would change that eventually.


The need for a search engine seems, imho, to be the major argument against (full) decentralization. Another formulation: there is a (valid) need for (some kind of) centralization. (And from there, you go all the way back to full centralization: first because of search, then because of convenience, and finally because of laziness.)

P2P file-sharing services like LimeWire, Kazaa, torrents... had an acceptable search experience before being sued into oblivion.

Even if lawsuits hadn't killed P2P networks, virus/safety concerns would have. Imo, trust is a bigger issue than discovery, hence the need for curation, i.e. centralization.

The reputation system of thepiratebay makes it my primary torrent site.

Laziness and convenience are more of a trust issue than a search issue.

Many uploads are viruses/adware/ransomware masquerading as movies, books, games...

This necessitates multiple downloads, which is frustrating. I remember downloading several gigs of RAR archives and encrypted .avi movie files, only to be greeted with a message asking me to fill out a survey to get the password.

YIFY, a reputable source, eliminated this concern for movies.

If decentralization works out, I believe specialized search engines will emerge.

But note: for decentralization, trust is a bigger issue than search or content discovery. If it weren't, the iTunes Store, app stores and other walled gardens would have failed long ago.


I wonder if it is even possible to decentralize search. An authority needs to create an index, and letting the authority query that index is far faster than downloading gigabytes of data onto every single machine that wants to search the decentralized internet.
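
It doesn't necessarily require every machine to hold the whole index, though: DHT-style designs shard it, so each node only stores the posting lists for keywords whose hash falls in its bucket, and a querier only talks to the node responsible for its keyword. A toy sketch of that partitioning (purely illustrative; the node names and routing rule are made up, and real DHTs have to handle churn, replication and spam):

    import hashlib

    NODES = ["node-a", "node-b", "node-c", "node-d"]   # stand-ins for peers in a DHT

    def responsible_node(keyword: str) -> str:
        """Map a keyword to the peer whose bucket its hash falls into."""
        digest = int(hashlib.sha1(keyword.lower().encode()).hexdigest(), 16)
        return NODES[digest % len(NODES)]

    # Each node keeps only the posting lists routed to it.
    index = {node: {} for node in NODES}

    def publish(keyword: str, document_address: str):
        node = responsible_node(keyword)
        index[node].setdefault(keyword.lower(), set()).add(document_address)

    def search(keyword: str) -> set:
        # The querier contacts one node and downloads one posting list,
        # not the entire index.
        return index[responsible_node(keyword)].get(keyword.lower(), set())

    publish("decentralized", "content-address-of-some-document")
    print(search("decentralized"))

The genuinely hard parts (node churn, spam, ranking) are exactly what makes decentralized search an open problem; the sketch only shows that index storage and bandwidth needn't be all-or-nothing.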

The web has some parts that are decentralized, and I think it’s worth noting the bad aspects of what exists today.

Perhaps the most decentralized part of the internet today is BitTorrent. It’s a very efficient way of sharing files and has a lot of success. One can see how BitTorrent could become the backbone of a decentralized web. However:

1. BitTorrent "naturally" prioritises popular files over everything else. Niche items, which are hosted by fewer people, will be slower to download => BitTorrent creates a cultural echo chamber.

2. BitTorrent needs some kind of centralized search engine: it's not possible for everyone on the network to host a copy of the entire index of files on the network. The only way is to have a search engine, much like The Pirate Bay. In fact, one could say that Google was this in the first place.

3. Decentralized social media would be much more polluted with fake accounts, since no "authority" would be able to fix it.

People have been excited by the decentralized web for at least 5 years. The technology has existed for at least 10 years. If it were going to happen, I think it would have happened already...


Let's be realistic here, AMP is the next big step for the web :(

You're not wrong. But it's a step backwards and not forwards.

AMP is not any faster to load than the actual site not on Google's CDN. The only reason it appears faster in most situations is that Google is abusing its monopoly search position to pre-load and prioritize AMP results.

AMP gives Google more control, and that's why they push it so hard. This, plus their quest to hide and/or get rid of URLs so they can use AOL-style keywords within their AMP walled garden, is multiple steps back. All the way to the late 90s.


Just use WebRTC? It's built into most browsers (Chrome 28+, Firefox 22+, Edge 12+). The main hurdle is UDP hole punching of NAT: users behind it need a third party to help initiate the peer-to-peer connection.

https://en.wikipedia.org/wiki/WebRTC

https://en.wikipedia.org/wiki/UDP_hole_punching


Isn’t the founding concept of the Internet & by extension, WWW - decentralization?

Isn’t this the premise of HBOs Silicon Valley?

Where can I buy coins for that?

I thought pied piper solved it already

Maybe a bit philosophical, but I feel real decentralization can only be achieved if it's not created by humans. Like air: no one controls it and it's free to breathe. Blockchain started this way, aiming to be totally decentralized, but as we know, if someone gets control of 51% of the mining then it's game over.


