Since then, we've done a lot to improve the browser and build on the ideas. I've written about the P2P Web here, and there's a talk at the Web 1.0 conference that goes into more depth if you're still curious.
Happy to answer any questions. Architecture/scaling, privacy, FOSS on the Web, etc.
I know torrent sites may seem shortsighted, but that's just an example of websites under constant threat of being shut down. Journalism sites such as WikiLeaks can benefit hugely from decentralization as well.
When you say you use (among others) 'Kademlia Mainline Distributed Hash' for discovery, are you referring to the Mainline BitTorrent DHT, or just a different instance of it, operating on the same principles?
How are you resilient against sybil attacks?
Did you end up forking Electron?
> Did that effort get cancelled or am I just not finding the info?
I've pared the focus down to Dat, yes. IPFS is still supported, but Dat's my main focus. I may write a post later.
> the Mainline BitTorrent DHT, or just a different instance of it, operating on the same principles?
Yes, the Mainline DHT.
> How are you resilient against sybil attacks?
Dat sites are singly owned and signed by private keys. The DHT network has all the same issues that BitTorrent has always had. We use fallback networks that are less distributed (a custom DNS server), but we'll need to evolve that solution.
> Did you end up forking Electron?
No, thankfully! I've been working with GitHub via PRs. They added the feature most needed so far, proper process-level sandboxing, so hopefully a fork won't be necessary.
Do you have a link to the issue where this was fixed/discussed? I searched for it last week and couldn't find it.
Sounds like you need the IPFS daemon running for it to work.
Thanks to the beaker dev for abstracting the backend!
0 - https://github.com/joshuef/beaker
However, we are concerned about and thinking about how to improve reader privacy, which is the main anonymity need in Dat: https://datproject.org/blog/2016-12-18-p2p-reader-privacy.
Additionally, you have to explicitly opt in to re-hosting another site. You'll never upload data unless you tell Beaker to host a site.
Peerweb is able to handle fully dynamic sites (of course backend stuff still needs to reach your own server) and offer rich controls for optimally distributing popular-resource pools.
Unlike Beaker, our resource updating protocol is push-based (with active preload bundling analytics) and doesn't use polling, so we have much less client-side overhead. It also doesn't require a new browser, just a simple DNS change.
I guess this is science-communication stuff rather than talking through the technicals; my background is design.
pfraze, is there a recommended resource for communicating the players and ideas on the resilient web, placing Beaker in its respective place and laying out the benefits in a friendly way for laypeople?
(Sorry about that, I know it's a pain!)
It didn't use P2P, but it allowed hosting your own content.
It's been dead for a long time, though.