Funny choice of examples; personally, I would have pointed to Bitcoin and BitTorrent as two examples of distributed protocols with runaway success.
That the servers did inter-server replication and forwarding doesn’t really change that.
As a piracy/anti-censorship tool, it overwhelmingly relies on the generosity of a handful of power seeders... which could, in theory, be taken down with some effort.
Also to thank is the laxity of the Russian government (and maybe others) in fighting these seeding hubs.
I believe that for small sites, the main difference between them is how peer discovery/name resolution is done. For this, IPFS has the most distributed approach (using a DHT and IPNS), but it is also the most fragile and highest-latency one. DAT has plans to switch to a DHT in the future (hyperswarm) but relies on centralised DNS discovery servers plus local-network multicast right now. Secure Scuttlebutt takes a more social approach and organises its network in terms of pubs. There is no central discovery; you can only find users that you "meet" in a pub.
This means that DAT has the most website-like feel, because its solution provides low latency (only a DNS lookup) and is global. I also think that DAT is much more pragmatic than IPFS, so at the moment it is more stable and its usability is better (also because it focuses on stable core components, whereas IPFS is a bigger project trying to do lots of things at once). I like Secure Scuttlebutt's idea of focusing on communities, but that is a different approach from how the web works right now.
The caveat is: those users can’t meet _you_. I used SSB for a few months, joining pubs, commenting, trying to participate. No responses. (Not a huge surprise—you can get lost in the piles of people on Twitter, Reddit, etc.)
However, eventually I discovered that no one could see my contributions unless they added me—and SSB (or Patchwork, in this case) gave me no way of advertising my presence. This was pretty self-defeating. So now I don’t have to just build a ‘presence’ inside the network, I also have to build a ‘presence’ outside the network to announce that I’m somewhere inside the network. The SSB tools also give you no inkling that this is the case. So just know to bring friends!
SSB is more in the realm of ActivityPub, but more p2p where ActivityPub is federated.
I'm not sure what you mean by "isn't rooted in HTML/http concepts". IPFS lets you share files and fetch them in a content-addressed way, individually or grouped in directories, like a re-imagining of BitTorrent + magnet links built for the browser and website use case.

IPFS is usable through web gateways with regular browsers. An IPFS gateway can either be a public gateway (like gateway.ipfs.io), or it can be set up to serve a specific IPFS url for a given domain name. I can give mydomain.com a DNS record saying the site's content is available at a certain IPFS hash, then set up A/AAAA records pointing at a server which runs the standard ipfs daemon and serves only that IPFS hash. Regular browsers can then access my site normally, and people with the ipfs companion extension or future ipfs-compatible browsers will automatically fetch my site's content from the ipfs p2p cloud when they visit mydomain.com, which is useful if my server ever goes down and other people have my content pinned.

Cloudflare.com actually has a service where they'll run the ipfs daemon for your domain, so you can set the DNS ipfs record and just worry about keeping your content available in IPFS (possibly with a pinning service), without needing to keep your own web servers up.
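For the curious, a sketch of the DNS side of that setup might look something like this (the hash and IP are placeholders; the TXT record follows the DNSLink convention):

    ; hypothetical zone entries for mydomain.com
    mydomain.com.           IN A    203.0.113.10    ; server running the ipfs daemon/gateway
    _dnslink.mydomain.com.  IN TXT  "dnslink=/ipfs/QmYourSiteHashHere"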
As far as I can tell, Dat doesn't support, or at least doesn't emphasize, this web gateway use case. From reading about it, Dat seems to have a bunch of features like directory versioning, allowing clients to publish files within a page, and some browser WebRTC-like p2p swarm thing, which sound neat but also require its own browser, and it sounds like it's building its own separate browser ecosystem. To me, IPFS feels like it's doing one thing: it's easy for me to picture how to slot it into my current understanding of the web next to existing technologies, and it's easy to envision a future where it's incrementally adopted (in the graceful-fallback style that most web progress has followed) and maybe even becomes part of browsers.
(Though I feel IPFS still has a number of UX and reliability problems: unpredictability in whether a resource can be fetched, lack of information about fetching/pinning progress, and web gateways and an ipfs companion extension that are still a bit janky. There needs to be some nice front-end for pinning sites you look at, enforcing that your pins don't take too much bandwidth or disk space, and updating your pinned content as sites update. But it seems all of these things could be solved while it retains roughly the same interface.)
Yeah, sorry, what I meant was that it has a protocol spec (e.g. an RFC or equivalent) rather than just one canonical client and server implementation, and that it aligns with HTTP's concepts of a network entity, URL, or even ETag, etc., because that would be natural for my use case.
If a site has a domain that's set up with an IPFS DNSLink record, then you just go to its domain name and access it over HTTP(S). But if you have the IPFS companion extension, then your browser will access the site through IPFS peers instead of HTTP. So even if the site's webserver is down, you'll still be able to access the site if any other IPFS peers have its content pinned. You could then pin the site's content and help host it.
This all would work through a normal web browser, and gracefully degrades to a plain HTTP connection to a web server for people without the IPFS companion extension.
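(A quick way to sanity-check what the extension would resolve for a domain, assuming a local ipfs daemon; the output path here is a placeholder:)

    $ ipfs resolve -r /ipns/example.com
    /ipfs/QmPlaceholderHashOfTheSiteRoot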
If you have a domain name and are able to keep your site content available in IPFS (on your own machines that you keep up, on a pinning service, on a VPS, on some friends' machines who you convince to pin it, etc.), then you could have someone else, such as Cloudflare (https://www.cloudflare.com/distributed-web-gateway/), host an IPFS web gateway scoped to your domain. Then you don't need to keep your own web servers up, and the only maintenance you have to do for the IPFS web gateway on your domain is to keep your IPFS DNSLink record correct. Regular users will access your content through Cloudflare (who fetch it from IPFS and then aggressively cache it), and IPFS companion users will access it directly through IPFS.
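If I remember Cloudflare's docs right, their setup boils down to two DNS records, roughly like this (hypothetical hash; check their current docs for the specifics):

    www.mydomain.com.           IN CNAME  cloudflare-ipfs.com.
    _dnslink.www.mydomain.com.  IN TXT    "dnslink=/ipfs/QmYourSiteHashHere"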
There's also the concept of IPFS links (like ipfs://QmQB1L5PDwcEcMW6hWcLQrNMTKWY3wxX4aDumnKi385KPN/introduction/usage/), which you can access directly if you have the IPFS companion extension, or you can access through a public ipfs web gateway (like https://ipfs.io/ipfs/QmQB1L5PDwcEcMW6hWcLQrNMTKWY3wxX4aDumnK...). But you probably don't want to give links like either of those to your users unless you don't care for domain names. I think the IPFS documentation makes a mistake by emphasizing raw IPFS links so much as opposed to DNSLink records.
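One nice property of the raw hashes, though: anyone who wants to help keep that content alive can pin it directly with a local ipfs daemon:

    $ ipfs pin add QmQB1L5PDwcEcMW6hWcLQrNMTKWY3wxX4aDumnKi385KPN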
> So, there’s some good news: we will be moving to putting the CID/IPNS-key/HASH in a subdomain by default (so every site gets its own origin). That is, websites will be hosted from https://HASH.ipfs.dweb.link/... instead of https://ipfs.io/ipfs/HASH/.... The nice side effect is that absolute links will “just work”.
So every IPFS website is going to be https://HASH.ipfs.dweb.link/? I like that I can stick with the familiar dat://kickscondor.com. And it works today.
If someone sets up an IPFS DNSLink record for their domain, then it's just accessible directly from their domain. The user goes to https://example.com, and if they have a normal browser, they fetch it as normal, and if they have the ipfs extension, they fetch it over ipfs instead.
(There is also support for https://hash.ipfs.dweb.link/, but as I said, I don't think that's what people should give to users if they can avoid it. Though that does have the nice benefit over dat:// links in that it works for normal browsers even if your web server goes down.)
The gain is that you don't manage any webservers, and other users can help host your site content and can keep your site alive after you stop hosting it on ipfs.
I find the idea of being able to make a site that outlives my ability to host it (and hopefully outlives me), as long as people find it worth pinning, super interesting.
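The whole workflow is small, as a sketch (the CID here is a placeholder for whatever 'ipfs add' prints for your site root):

    # on your machine: add your site recursively (prints the root CID)
    $ ipfs add -r ./mysite
    # on anyone else's machine: help host it by pinning that CID
    $ ipfs pin add QmRootCidPrintedAbove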
Of course, it would be better if you ran your own node.
With Dat, they "solved" it by building their own browser. Which I think makes it easy for the developer, but adds a lot of friction to your users.
I'm hoping both projects meet somewhere in the middle.
I feel that dragging along all the problems of mutability-by-default that plague the current internet when rebuilding it in a more decentralized way is a big mistake. Yes, I know that there are some fringe efforts to enable more immutability with "hypercore-strong-link" etc., but they look like an afterthought, that won't be supported by most of the Dat ecosystem if it finds major adoption.
Would you want to see typos corrected, facts checked and other improvements on that content?
This would provide an amazing archive for anyone wanting to see, for example, what was on a "front page" on a certain date. It would also let you see if someone had altered a specific article for whatever reason.
Each atomic file change creates a new version.
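(If I remember right, Dat lets you address an archive at a specific version with a "+version" suffix in the URL, and the dat CLI has a log command to list the version history; very roughly, and going from memory:

    $ dat log dat://mysite-key-or-domain    # list the version history
    dat://mysite-key-or-domain+42/          # address hypothetical version 42

Treat the exact commands as a sketch, not gospel.)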
I didn't say "mutability needs to be completely eradicated". It's mutable-only (what we currently have) vs mutable-by-default vs immutable-by-default, and each of them has its own trade-offs.
> Yes, I know that there are some fringe efforts to enable more immutability with "hypercore-strong-link" etc., but they look like an afterthought, that won't be supported by most of the Dat ecosystem if it finds major adoption.
- Most developers don't read the spec and add e.g. validations in end-user applications where they think they're being clever
- Multiple fully fledged implementations of the core protocols are usually hard to come by. This means that rarely used features that are not supported in all implementations won't be used, and so they organically wither
I've come to see now though that both have valid uses.
This is a nuanced issue related to privacy and user safety.
Otherwise the problem becomes ensuring immutability against bad actors, who would change what they had published anyway.
Could it work? What kinds of use cases could it support? How would a bot that had a good model of which forums to post things to fare in comparison if it had 2 hours, 24 hours, or 1 week? What about if we put _lower_ bounds on the amount of time that had to be spent looking (this seems like a fun challenge for proof of 'real' work)?
While this is at a much higher level than the current dat protocol, it is in a sense quite similar to the query "Who has 353904391670d2803b34990e37f4d2e96f49351998e162d0e335b16812daf592e0f71470af7bee31f6a1da03744d03bcde659d73a0ebf56fd4a9fc6ef67edf60 that is 5 bytes long?"
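(To make the analogy concrete: that hash is 128 hex characters, i.e. SHA-512 sized, and a string like 'hello' happens to be 5 bytes, so the shape of the query can be reproduced with nothing but a hash tool:

    $ printf 'hello' | sha512sum    # prints the 128-hex-char digest of a 5-byte payload

I'm not claiming the hash above is actually the digest of 'hello'; it's just to show what the question looks like.)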
So, to verify I understand: the backend of this would be handled by grad students (working with some kind of 30/90 minute timer alarm), and would experience distributed consistency failures caused by differences in which departmental tradition they did their major area exams?
In it I do some analysis of its strengths and weaknesses.
Thanks for sharing this interesting thing I'd never heard of!
Also, if I need to learn a whole new stack to use a whole new product that has no-one using it yet, chances are I won't bother.
We may win on learning from the past and not having some of the tech debt the web has, but I've seen this industry repeating the same mistakes over and over again so I wouldn't bet on it too much.
You may not bother, but others will (and really you can say that about anything that tries to come up with something new; that is not a reason to stop trying to come up with new stuff).
It's really just a matter of constraining the novelty. We're trying to make some specific improvements on the applications & networking stack of the Web. Redoing the entire web platform is out of scope (for now).
IPFS with Filecoin sounds reasonable at first sight.
This bothers me.
P2P Reddit ( https://notabug.io/ ) has been doing this for over a year.
All content there updates in realtime too, fully decentralized. (WebRTC or daisy-chained socket relays)
I did not build NAB, but I work on the underlying P2P protocol that powers it ( https://github.com/amark/gun ), which competes with DAT - but I've always recommended DAT to people because I thought it already had working multi-writer apps on it.
Paul, isn't this already possible? You've shown me demos!