Actually, it's been ahead of the IPFS community for a long time, even when IPFS was all the hype. IPFS kind of stopped meaningful development when they started doing an ICO and focused on Filecoin. I know the founders of the DAT community have been very skeptical of ICOs and instead quietly focused on actually building the system and the community. The fact that this exists without any money incentive by default also makes it a superior decentralized technology, IMO.
IPFS on the other hand is developed by a well-funded VC-backed company that clearly has a vested interest in somehow "owning" the protocol.
Not saying that there is anything _morally_ wrong with IPFS, but it seems pretty clear that Hyper is better positioned to develop distributed tech.
I would love to see a benchmark of hyper vs ipfs. Anecdotally, hyper is faster and more user-friendly.
Hyper suffers from a branding problem (it's a direct evolution of dat; changing the name didn't help). But it has a strong case for being the best technical solution.
(The problem being: Bittorrent is super effective, but torrents are immutable. How do we make a swarm protocol that supports streams and other datasets that change over time?)
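Very roughly, hypercore's answer is a signed append-only log with a stable public key: the key is the address (an infohash that never changes, in torrent terms), and "mutation" is just appending entries that peers pull as they arrive. A minimal sketch, going from memory of the hypercore v9-era Node API - names and signatures have shifted between major versions, so treat it as illustrative rather than canonical:

    // Writer side: a log we own and can keep appending to, unlike a torrent.
    const hypercore = require('hypercore')

    const feed = hypercore('./my-log', { valueEncoding: 'utf-8' })

    feed.ready(() => {
      // This key is the stable address readers follow; appending never changes it.
      console.log('share this:', feed.key.toString('hex'))
      feed.append('entry 1')
      feed.append('entry 2') // readers already following the key see this arrive live
    })

    // Reader side (another machine): open the same key and tail it.
    // const clone = hypercore('./clone', writerKeyHex, { valueEncoding: 'utf-8' })
    // clone.createReadStream({ live: true }).on('data', console.log)
    // Replication is just a duplex stream piped over any transport:
    // socket.pipe(clone.replicate(false, { live: true })).pipe(socket)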
IPFS at this point is a write-off for me. It seemed like it was built up as a project that courted the decentralisation/p2p/etc. people, then did an ICO off the hype and essentially vaporized. The tech is reasonable, the abstractions and tooling were a really interesting approach and enabled a ton of powerful new things, and they did really well with making it easy to get started and productive. But I'd never build anything on it again because I fundamentally don't trust them now.
Dat/hyper etc. is a great project and ecosystem. Technologically it's incredibly impressive. The project and ecosystem are themselves decentralised (which is a profound demonstration that the people in this community are true to their stated values). Unfortunately this means it suffers from two major, related, problems (which can be framed as selling points, depending on your perspective):
- To build something with hyper you compose small modules (see the sketch after this list). This is an excellent strategy if you can quickly discover and understand which modules to use and how to compose them. There are thousands of tiny, often remarkably elegantly written modules that do useful stuff relevant to hyper and can be composed to do really amazing things - and it's impossible to find them quickly or learn how to compose them except by having hundreds of conversations with people in the community. So it's really not possible to be productive with it unless you're looking to just immerse yourself in it. If you want to build something on it as a dependency without participating in the community - good luck.
- Once you've got a project that depends on the ecosystem, it's almost impossible to keep it working and up to date. To find out what the current state of the ecosystem is (which libraries are the current ones, which dependencies should I use for what, what has replaced some previous dependency), again you have to have a lot of conversations or very actively follow other people's conversations. As a dependency, it's a lot of continued investment, and it's probably only worth it (or even possible) to do that if you have a huge amount of spare energy and time (or money to pay other people) to invest in it.
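To make the composition point concrete, here's roughly what the smallest useful thing looks like once you've worked out which modules are current - this is from memory of the 2020-era hyperdrive/hyperswarm APIs, so treat the module names and signatures as approximate:

    // Hypothetical glue code: a shared folder built by composing three small modules.
    const hyperdrive = require('hyperdrive')   // filesystem abstraction over hypercores
    const hyperswarm = require('hyperswarm')   // DHT-based peer discovery and connections
    const pump = require('pump')               // stream plumbing with error handling

    const drive = hyperdrive('./my-site')      // persists its underlying hypercores on disk

    drive.ready(() => {
      console.log('share this key:', drive.key.toString('hex'))

      const swarm = hyperswarm()
      // The discovery key is a hash of the public key, safe to announce on the DHT.
      swarm.join(drive.discoveryKey, { announce: true, lookup: true })
      swarm.on('connection', (socket) => {
        // Every peer connection gets its own live replication stream for the drive.
        pump(socket, drive.replicate(true, { live: true }), socket)
      })

      drive.writeFile('/index.html', '<h1>hello, hyper</h1>', (err) => {
        if (err) throw err
      })
    })

Knowing that it's hyperdrive + hyperswarm + pump today (and not corestore, or discovery-swarm, or whatever replaced them) is exactly the "hundreds of conversations" problem.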
In summary: with IPFS the problem is that it's vaporware, and I fundamentally don't believe the project is safe to build important decentralised projects on. With dat/hyper the problem is the opposite - it's a completely decentralised community, and it's very labour-intensive to onboard and then keep up. It's missing the meta layers needed. Neither one is currently the right choice if you want a reliable decentralised stack for something critical. But hyper is likely to become it, and is a great project.
I can't find any examples of community links or anything built with this
You can find a user list at https://userlist.beakerbrowser.com/
I ask because I'm writing a Git-like datastore and am reviewing p2p protocols. My implementation is in Rust, and while there is a beta client of Hypercore, I suspect there's a ton of functionality that I'd need to wait on (or implement myself).
How do you see multi-language ecosystems working with Hypercore/Hyperswarm/etc?
But I get that there are extreme technical hurdles here. IIRC Spotify started off with that and abandoned it. If so, the challenges have so far outweighed the benefits.
But it fits my sense of what the web should/could be. Asymmetric connections (slow upload) and hostile ISP TOS really cemented the server/client distinction ... but I love imagining that a different way is possible.
Congrats on getting Beaker to a real place.
Original Show HN: https://news.ycombinator.com/item?id=12270762
I really enjoy that it comes up on HN pretty consistently so I can follow up on progress.
HN allows reposts after a year or so, or if a story hasn't had much attention yet. In cases where it has had significant attention in the last year (or so), we mark the repost [dupe] and downweight it. That's not the case here (though I'm sort of playing the "or so" card with respect to May 2020).
I guess adding links to past stories (at least when not posted by you, dang) can signal different things; in this case, including the title might have changed my first impression.
You're right that when I post those I filter the list for only the most engaging discussions, because HN readers don't like clicking on things and being taken to something not-so-interesting. Unfortunately that makes it a bit harder to write software to come up with just the right ones.
I've spent several hundred hours just experimenting within it and testing out ideas or refining things that were not previously tractable. In many cases, simply having beaker with its unique possibilities and constraints as a playground has helped inspire new ideas.
Unfortunately that doesn't automatically translate to anyone getting paid, or widespread adoption, or many other things people use to measure value. But many of the most important things aren't the ones we measure or celebrate.
Looking forward to checking out your videos pfrazee.
Here's an answer to your question (what is one use case Beaker handles that existing solutions can't): Beaker is the easiest way to publish content without forfeiting control. No account to create, no giving away my data, no server to install and no sleepless nights maintaining it, no need to remain always up and running for my content to be accessible. The few solutions that exist in this space still rely on some form of hackish UI, if they even have one. With Beaker all you have to do is launch it and use the microblogging app.
On the contrary, Beaker may not be a product in the usual meaning of the word, but that's because the comparison isn't an honest one: Beaker is a platform to help build products easily, as can be seen from the microblogging "example".
My claim is that projects like Beaker neglect to ensure that even a handful of people (1-100) will ever have any desire to use it in their life over their next-best available alternatives, even for free.
When you point out that Beaker is a platform, that doesn't change my claim, it just makes my claim imply that no developer will have a desire to build on it.
I don't know how this fits in with IPFS now. dat:// seems to be yet another distributed storage protocol
E.g: when I publish my website, it gets a permanent address, like: hyper://b282d5efe484143816362d33b3f9b3ea45ecfb8a6ada97e278fdfdc6a725e22f/
When I change a file on the page, it is still accessible at that address, and so are the older versions. Peers connected to the address transparently receive the update and seed it as well.
And yes, it's totally separate from IPFS, but is built from many of the same ideas. Dat started back in like, 2011 or so? This is a good high-level overview of how the protocol works: https://hypercore-protocol.org/protocol/ - the fundamental unit of storage is the hypercore, an append-only log. IPFS's structure, the raw merkle tree, requires another layer on top to provide mutability, like IPNS or DNSLink, but I don't believe those provide the same guarantee of "auto-propagate updates without losing older versions."
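The versioning falls out of that structure: a hyperdrive's version is essentially the length of its underlying log, so old versions stay addressable after updates. A rough sketch with hyperdrive - API names from memory, so approximate:

    const hyperdrive = require('hyperdrive')

    const drive = hyperdrive('./my-site')      // drive.key is the permanent hyper:// address

    drive.ready(() => {
      drive.writeFile('/index.html', 'version one', () => {
        const oldVersion = drive.version       // roughly: current length of the append-only log

        drive.writeFile('/index.html', 'version two', () => {
          // The address never changed, and the old content is still reachable:
          const snapshot = drive.checkout(oldVersion)
          snapshot.readFile('/index.html', 'utf-8', (_err, data) => {
            console.log(data)                  // -> 'version one'
          })
        })
      })
    })

Beaker surfaces the same thing as versioned URLs (hyper://KEY+VERSION/path), if I remember the syntax right.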
Hyper*/Beaker also provides mechanisms to use a centralized authority like DNS for human-readable names, which simply point from the DNS entry to the hyper URL.
So, for an individual user to host a website, they need to keep their hosting laptop alive all the time?
EDIT: don't talk about up/down votes, my bad
https://hashbase.io/ is the one by the Beaker team.
Is this all about circumventing censorship? It would be cool if it were also backwards compatible with the current centralised web, i.e. once you "publish" your peer-to-peer site, it also gets uploaded to some global shared host (like the WP/blogger.com kind of model) so that people on non-Beaker (or uncensored) connections can access it - that would maybe make it more viral too.
Basically, you can add P2P support to your site with graceful degradation back to a centralized server.
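For a static site, the publish step can be as small as pushing the same build output into a hyperdrive alongside your normal host. Hypothetical script (the ./public and ./site-drive paths are made up, and the hyperdrive API names are from memory):

    // Mirror the static build into a hyperdrive so the site is reachable over
    // hyper:// for peers, while the regular web host keeps serving the same
    // files over https for everyone else.
    const fs = require('fs')
    const path = require('path')
    const hyperdrive = require('hyperdrive')

    const buildDir = './public'                // output of the usual static-site build (assumed flat)
    const drive = hyperdrive('./site-drive')

    drive.ready(() => {
      for (const name of fs.readdirSync(buildDir)) {
        const data = fs.readFileSync(path.join(buildDir, name))
        drive.writeFile('/' + name, data, (err) => {
          if (err) throw err
        })
      }
      console.log('p2p mirror:', 'hyper://' + drive.key.toString('hex') + '/')
    })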
I don’t think there’s anything stopping someone from building a service that mirrors DAT content to a shared hosting service, but to me it wouldn’t make sense for it to be built into the core platform.
Question: Can this host big websites (eg 55 TB SciHub), where most seeders are only willing to give <1GB partial?
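To make that concrete: hypercore does seem to have a sparse mode where a peer fetches and re-serves only a slice of the log, so I picture each small seeder doing something like this (sketch from memory of the docs, option names approximate) - I just don't know how it behaves at 55 TB:

    const hypercore = require('hypercore')

    // Placeholder for the big site's feed key.
    const remoteKey = Buffer.from('<hex key of the 55 TB feed>', 'hex')

    // sparse: true means nothing is downloaded unless explicitly requested.
    const feed = hypercore('./partial-copy', remoteKey, { sparse: true })

    feed.ready(() => {
      // Fetch, then keep re-seeding, only blocks 0..999; the rest stays remote.
      // (Networking omitted: you'd still join a swarm on feed.discoveryKey.)
      feed.download({ start: 0, end: 1000 }, () => {
        console.log('holding', feed.downloaded(), 'of', feed.length, 'blocks')
      })
    })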
People rarely have interest in alternative browsers other than for education, software politics, or buying drugs.
I believe their game plan is to work through a few challenges this way, like adding support for multi-writer archives, before returning to Beaker with a bucket of gold: improvements for its foundation.