Given how broadly "betray" is being used here, that seems unlikely. If edsu gets big it will have to pay for all that storage somehow. Like all things, edsu will one day come to an end, and like most websites/companies/endeavors it will probably come to an end before I do.
A few clicks in, the first line of https://edclave.com/ is:
"Edsu makes life better for both developers and users of Online Open Source Software (OOSS). Instead of the developer having to hold on to your data, you do. It gives you more control, and gives them less to worry about."
I'll need to think about this.
The main problem is garbage collection: to guarantee that you can sync across all devices for all time, your data structure must be append-only, so your document can only grow. A long-lived document will eventually grow very, very large. You could let the user decide when to collect garbage and establish a new baseline doc; say, every 5 years. But this means that if you edited the document on a laptop, didn't sync it with another client, and then closed the laptop for 5 years, you would lose the ability for the changes you made on that laptop to be resolved automatically.
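A toy sketch of that trade-off (not any particular CRDT - just an append-only op log with a manual "new baseline" compaction step; all names here are my own, for illustration):

```javascript
// Toy illustration: an append-only edit log grows forever, while a
// "baseline" compaction replaces the log with a snapshot and starts fresh.
function applyOps(baseline, ops) {
  // Each op inserts text at a position; the log itself is never mutated.
  let doc = baseline;
  for (const op of ops) {
    doc = doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
  }
  return doc;
}

function compact(baseline, ops) {
  // Establish a new baseline: materialize the doc and drop the log.
  // Any client still holding unsynced ops against the OLD baseline
  // can no longer merge automatically - the trade-off described above.
  return { baseline: applyOps(baseline, ops), ops: [] };
}

let state = { baseline: '', ops: [] };
state.ops.push({ pos: 0, text: 'hello' });
state.ops.push({ pos: 5, text: ' world' });

const before = applyOps(state.baseline, state.ops); // 'hello world'
state = compact(state.baseline, state.ops);
const after = applyOps(state.baseline, state.ops);  // same text, empty log
```

The document reads the same before and after compaction - what's lost is only the ability to merge old, unsynced logs.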
But not all! Quite a few are state-based, auto-compacting, space-saving, small & efficient!
More info in a discussion on this: https://twitter.com/marknadal/status/1008610024875122688
But you're right that I'm wrong when I say "[all] CRDTs are generally append only structures." I meant to say "text-revision CRDTs are generally append only data structures."
Maybe do the routing through a distributed hash table like we use for finding torrent peers (perhaps even the exact "mainline" DHT torrents use), which would mean that even with the server being down you could still sync with your other online clients.
However, Edsu is a federated protocol with an open source implementation. So it only disappears when the last person to care about it stops keeping it up to date with the compiler. That's about as good a guarantee as you can get, I think (e.g. telnet is still around after 49 years).
Email, similarly, unfortunately fell to convenience: Gmail, with Google selling your data.
The parent brings up a good point: how do we know Edsu won't betray its users? Or that it won't become dominated by a centralized provider?
I'd love to hear an answer.
For me, our team decided to take a route that cannot be compromised - your identity belongs to you, fully decentralized (https://gun.eco/docs/Todo-Dapp), yet can do 0-server password resets and other conveniences.
Bitcoin is theoretically fully decentralized, but when I sold mine off it took 2 days to sync the chain - I get why people use Coinbase. I'd still be running my own email server right now if it weren't for spam.
My point is that I think there are inflection points - when weaknesses in design or implementation become apparent - where centralization can get a foothold, and I don't think the course things take at those points is inevitable. HTTP hosting, for example, has some big players, but it's still very much a commodity.
I realize that this is a crowded field, with lots of contenders, like yours. I think it's an important enough problem that it warrants parallel attempts, so that at least one of them sticks. Edsu picks a very specific strategy, which is that it's a compromise - it's not like, say, IPFS in its level of decentralization. For an app platform, I think there are challenges enough at any level of decentralization, and Edsu tries to beeline straight there by being very orthodox and old-skool in nearly every other way. I thought it was a good bet, but only time will tell.
Props for the password resets, BTW. The importance of that feature is underappreciated :)
Don't let any counter argument ever stop you. Parallel experiments are critical for success.
With that said, can't we say that the federated experiment has already played out? Particularly in email?
What new innovation or changes do you think will make the story play out differently this time around, versus outright P2P/decentralization?
Take, for instance, statically-hosted HTTP: if I'm, say, hosting at AWS and I want to switch to Netlify, it's trivial - I just change the place I upload my files to and switch the DNS. To me that's a complete success in decentralization - there's no hassle, no compatibility problems, no one other than me even knows that I've made the change.
So I don't see it as federated being a lost cause, I see it as a protocol needing to be an HTTP not an email.
For Edsu, it's something that I've considered at every design choice. For instance, the data storage format is fully specified and trivial, and hopefully therefore trivially transferable. And there are features like transparent proxying/forwarding of usernames. But I'll absolutely admit I've not been able to completely mitigate it, aside from urging people to BYOD (Bring Your Own Domain).
However, a lot of it comes down to factors that aren't the protocol itself. I'm hoping that Edsu's userbase is mostly just the open source community, with its consolidation-hostile "herding cats" nature. When Facebook first showed up, it struck me as a home page for people who didn't want to deal with HTML. So in a way, it was a bifurcation of HTML users, with the technical and non-technical people each going their own way. And in the technical people's world, HTML stayed a commodity.
SSH is another successful federated protocol - but only in the technical community. My family members don't use SSH, and that's just fine with me. That's one big advantage relative to alternatives that focus on social networking - Edsu is useful even if only a minority of people use it. And even email had 22 years before Gmail happened - if we've got 2 decades before we need to come up with a better thing than Edsu because the eternal September boat finally docked, I think that's fine.
So for sure, the gestalt at the moment is that full decentralization is the way to go. Edsu is a hedge - a bet against that. Federated is the devil we know, and there are successful examples of it avoiding its biggest problem. On the other hand, I think full decentralization's challenges are still unknown, and so A) it's unclear if it's any more resistant to centralization (a la Coinbase), and B) it might have other emergent bugbears that are an even bigger problem.
I think only time will tell.
Well stated, kindly argued.
I really look forward to what you build now, 6 months, 2 years+ from now. :)
I like how low-level and simplified the protocol is, being limited to only 9 types of messages. Compared to Solid which carries baggage from RDF, this seems far easier for third parties to implement.
The note-taking app is really under-selling the protocol beneath it. What could help sell it better is doing something a centralized platform can't do as well.
And itgoon has it right - this was the most useful thing I could think to write that only took a single day (I had a self-imposed deadline to hit earlier this week).
Here's a discussion of more interesting projects and how Edsu could be used in each (and its advantages and disadvantages):
One place this shows up is that Edsu has a permissions model where other people can read and interact with what you've stored (within very tightly defined parameters), which is the basis for writing multi-user apps like, say, a distributed Reddit or Slack.
They both have the same model regarding the ownership of data though. The biggest difference there is that Edsu uses a Merkle tree as the storage, like IPFS and git, which has a lot of consequences in terms of how it gets used.
Compare https://litewrite.net, a RemoteStorage app presumably similar to NoteToMe. You can start writing immediately, and sign in with a RemoteStorage account only if you want that data to be available elsewhere as well.
The note-taking app seems more like a proof of concept than any kind of category-killer, though.
I'll be keeping an eye on that protocol, though.
And thanks for the kind words :)
The same is true of the text file I've been adding to for decades. After learning the hard way (several times) NEVER to trust someone else's prog/assume they'll be around for more than 6 months. Multiple backups sync'd religiously.
And ... no website needed! Not even a net connection! Just a 'personal computer'!
"You do have to sign up for an Edsu account...."
Whoops ... there it is.
I can't tell if it's possible.
I see that the scope is fixed, for now, into the name itself, like `prv.app.edsu-org.hello-world.storage`. So I assume a `token` for `prv.app.edsu-org.hello-world.storage` wouldn't also work for `pub.app.edsu-org.hello-world.storage` - or would it?
If an app wanted to be able to switch something from public to private or back, it'd get a token with write permissions for two names, one with the prv.* prefix, and one with the pub.* prefix. And that's a good thing: it makes it clear to the user that the app is requesting the ability to make the things that they write public.
It's still just one grant request, it's just that there'd be two line items instead of one. Also, names are simple pointers to blocks of data, so, for instance, if both names happen to be pointing to the same written piece, there's no duplication of data.
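As a sketch, that grant could look something like this (a hypothetical shape for illustration only - the field names are my invention, not the actual Edsu message format; only the two names come from the discussion above):

```javascript
// Hypothetical shape, NOT the actual Edsu wire format: one grant
// request with two line items, so the user can see that the app is
// asking for the ability to publish what it writes.
const grantRequest = {
  app: 'edsu-org.hello-world', // hypothetical identifier
  permissions: [
    { name: 'prv.app.edsu-org.hello-world.storage', access: 'write' },
    { name: 'pub.app.edsu-org.hello-world.storage', access: 'write' },
  ],
};

// Names are simple pointers to blocks of data, so both names can point
// at the same block hash: flipping a note from private to public is a
// pointer update, not a data copy.
const blockHash = 'abc123'; // placeholder, not a real hash
const names = {
  'prv.app.edsu-org.hello-world.storage': blockHash,
  'pub.app.edsu-org.hello-world.storage': blockHash,
};
```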
And I see that there is no multi-get request.
So how would you implement your notes app if you have multiple notes, not just one?
Would you have a document which stores the hashes of all notes and then fetch all of them in parallel (making N calls, 1 call per note)?
Essentially, you'd only want more than one name if there are different visibility requirements for different parts of your data, or you specifically want to disconnect the state of some of your data from the rest. All of your structure (e.g. lists, trees, lookup tables, etc.) you want to keep in the blocks.
So in your case of having multiple notes, you'd likely want the name block to keep track of the block hash of each one (so, yes, exactly as you say). And correct, if you wanted all of them you'd need to do N block-get calls. However, due to pipelining and chaining, that's not as costly as it might sound: in terms of latency it should be costless, and in terms of bandwidth the overhead is a fraction of a percent if you're using full blocks (i.e. what you'd be doing if you're concerned about bandwidth).
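A minimal sketch of that pattern, with `blockGet` mocked out as a stand-in for a real Edsu block-get call (the function names and index layout here are my own, for illustration):

```javascript
// Mock block store, standing in for an Edsu server.
const blockStore = {
  h1: 'first note',
  h2: 'second note',
  h3: 'third note',
};

// Stand-in for a real block-get call; in practice these N requests
// would be pipelined over a single connection, so the extra round
// trips don't add latency.
async function blockGet(hash) {
  return blockStore[hash];
}

// The name's block acts as an index: it tracks the block hash of
// each note, exactly as described above.
const nameBlock = { notes: ['h1', 'h2', 'h3'] };

async function fetchAllNotes() {
  // One block-get per note, issued in parallel.
  return Promise.all(nameBlock.notes.map(blockGet));
}
```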
A lot of this stuff is quite low level (think almost SCSI kind of low level) - it's meant to be abstracted away by libraries like basic-storage.js. In that case, having multiple notes is trivial - you put each note under a different storage key, and the library sorts out how to retrieve them efficiently and update them independently, even though they're all under the same name.
In case you haven't run into it yet, I talk a little bit more about why names are meant to be used sparingly at https://edsu.org/use-cases/storage/
It is indeed stored on an Edsu server. The data format is trivial and completely specified, so switching providers should be straightforward (an Edsu app to do the data transfer would be easy to write, and I plan on writing one). If you're using your own domain (which is encouraged), from there you'd just update an A record to complete the transition; if not, you'd set up a transparent redirect on your old host to keep all your old links/permissions alive.
It's pretty old school in that if your provider loses a server and they're not doing replication/backups then there can be data loss. If they are, then a failover should work fine, with minimal loss and no corruption (the important stuff is atomic).
With localStorage, how it's used is up to the app (generally it'd only be used for keeping a token and maybe a block cache), so I wouldn't expect a loss of that to matter in most cases.
So if you squint enough, Edsu is kind of an "APIized" version of your setup :) So in addition to text files, it can also be structured application data.
And sometimes it turns out that the SaaS product is simply a wrapper around a CLI tool... for example the product Mole, which is a wrapper around ssh to make it easier... but not really necessary...
If it's client-side, why would I need a service provider?
Different in that it's lower level - it's not a social networking protocol - it's something you can build a social networking protocol on.
“Safari cannot open the connection because the network connection was lost.”
A great thing about this is it makes a lot of caching problems simply go away. So while there's no explicit support for local caches disconnecting and then re-connecting later, the underlying protocol gives any library wanting to implement this feature a lot of support for it.
(not affiliated with the gun.js team, just think it's a powerful P2P platform).