You can also try PushPin for yourself:
likely outdated binaries are available here:
Also, we're currently in the midst of a new project called Cambria, exploring some of the consequences and data demands of a new Cambrian era of software, here: https://inkandswitch.github.io/cambria/
Plus, the value of the cloud app is not just your data, but the network effects. Like, if you've emailed links to a GDocs document, and 5 years later you decide to move to another service, those GDocs links will 404, regardless of whether you've transferred all of your data to the new service.
With local-first apps, the URL starts with you, not some-saas-provider.com.
Cloud computing (and SaaS even more so) is little more than another attempt to recreate access/information monopolies: essentially the same profit proposition as existed with closed-source software, while pretending to be one of the cool kids and using politically more acceptable (but in this context rather meaningless) terms like Open Source and Open Standards. It may be a different generation of companies, with slightly different cultures, but they are every bit as predatory as the old ones.
It's going to be a rude awakening when some of the bigger service providers will eventually fall over (which they will). Of course, when that happens, everyone will blame anything and everything but their own willful ignorance.
For some reason, "cloud computing" has become a bogeyman and therefore corporations paying for it are clueless "sheeple".
To help prevent the phrase "cloud computing" from distorting our thinking, we have to remember that companies have been paying others for off-premise computing without calling it "cloud" or "SaaS" for decades before Amazon AWS, Salesforce, etc existed.
In the 1960s, IBM's SABRE airline reservations system was the "cloud" for companies like American Airlines, Delta, etc.
In the 1980s, many companies used to process payroll with in-house accounting software and print paychecks on self-owned dot-matrix printers. But most companies eventually outsourced all that to specialized companies such as ADP.
Companies tried to manage employees' retirement benefits with in-house software, but most eventually outsourced that to companies like Fidelity. Likewise, even companies that self-fund their own healthcare benefits will still outsource the administration to a company like Cigna. Don't install a bunch of "healthcare management software" on your own on-premise servers. Just use the "cloud/SaaS" computers that Cigna has.
Some companies (really old ones) used to print their own stock ownership certificates and mail them out to grandma. Now, virtually every company outsources that to another company. Most companies that have Employee Stock Purchase Plans outsource the administration of it to a company like Computershare.
The major difference with "cloud" terminology taking hold is that services like AWS are offering generic compute (EC2) and non-industry-vertical solutions. Otherwise, the so-called "cloud" has been going on for decades. Amazon AWS made "cloud" really convenient by allocating off-premise resources via a web interface (dashboard or REST API) instead of a call to a salesperson from IBM/ADP/Fidelity/Cigna/etc.
That doesn't mean it's always correct to buy into everything the cloud offers. Pick and choose the tradeoffs that make financial sense. Netflix got rid of their datacenters and moved 100% of the customer billing to the cloud. But Dropbox did the opposite and migrated from AWS to their own datacenter. They're both correct for their situations.
> ...when some of the bigger service providers will eventually fall over (which they will).
The big established vendors like AWS, Azure, and GCP ... all have enough business that they will be around for decades. If anybody will exit, I'd guess it would be the smaller players like Oracle Cloud.
Disruptions to paychecks, retirement, healthcare, stock issuance, etc. were all annoying, but the business would not cease to make money, and those functions were all relatively interchangeable between providers or had offline alternatives.
The key difference with this current cloud embrace is that big billion dollar businesses are making their core business dependent on Amazon/Google/Microsoft cloud services. They don’t even trust power companies or ISPs to give them power and Internet that reliably.
A disaster recovery plan for a dependence on AWS lambda and co. is now just “Amazon shouldn’t fail, but if it does we’ll just have to rewrite all of that software at great expense and lost revenue to the business.”
There's still some history missing there. Before AWS/GCP/Azure, billion-dollar businesses were making their _core_ business dependent on datacenters owned by HP/IBM/Sungard/Origin/Exodus/etc. E.g. Google's early search engine (their core business) in 1999 ran on servers at an Exodus datacenter. Much of The Washington Post's core computing was provided by IBM's data center before Amazon AWS existed. In the 1990s, many businesses also ran SAP ERP for core business processes, managed by outsourced datacenters. (Both HP and IBM offered "managed SAP" hosting services.) When SAP went down, manufacturing lines got shut down, or no products got picked for shipping and trucks would sit idling. It's a similar disruption to AWS outages today.
Therefore, switching from lesser-known and older "clouds" such as Origin/Exodus/etc to AWS/Azure/GCP is not as radical or as dangerous as some believe. Before AWS, many companies already depended on and trusted other companies to provide reliable off-premise computing infrastructure. We just didn't call it "cloud" back in the 1990s.
Today's AWS/Azure/GCP with their modern multi-zone failover architectures are more reliable than the old HP/Origin datacenters that businesses used to depend on.
If you outgrow IBM/AWS/etc., you need to get connectivity and servers yourself and run every service you were relying on. That's a much bigger dependency.
You can have your own servers, but it comes at the cost of slower turnaround on hardware issues.
It’s not even clear Intuit could continue to exist if Amazon disappeared, which was an unheard of level of dependency until the last decade.
I agree that "cloud" is becoming a thought-terminating cliché (and I myself fall prey to reflexive reactions to that word all too often). I think the core issue is something different - it's about relationships. It's a point of view I've been exploring recently, and I find it to be illuminating. Some thoughts:
1. Individuals do not handle relationships as well as companies. That's the source of "subscription fatigue" - you end up having to mentally keep track of all the third parties you may or may not owe money to, that may or may not do something that impacts your life down the line. Companies have dedicated people and processes for handling this, most individuals do not (rich people can hire help).
2. Individuals do not necessarily want all these relationships. If I buy a toaster, I don't want any relationship with its maker - for the same reason that when I buy bread from a grocery store, I don't want to enter into a relationship with the seller, or the baker, or the farmer that provided the flour.
3. Power imbalance in relationships matters. Individuals are almost always irrelevant to the service provider, so for a regular person it's better to not have any relationship at all. For companies, it depends. A corporation like Microsoft or Google can be rather certain that Salesforce isn't going to pull a fast one over them, due to relative sizes and the amount of money changing hands. Smaller companies fall somewhere on the spectrum between individuals and megacorps.
4. The above colors risk calculation. If my relationship with the service provider is closer to that of peers, I can feel safer about depositing data with them or making myself dependent on their service, because they're incentivized to provide a good service to me, and I can hurt them if they don't (say, by switching to a competitor). Companies do the same calculus. A small studio had best think twice about depending on third-party services for anything that may outlive those services. A large company can derisk the relationship through a contract.
5. Part of the objection I and many others have to SaaS is that consumer-oriented SaaS (including "stuff as a service", aka turning products into services) tends to be exploitative by design. You wouldn't consider a person who abuses you mentally or tries to cheat you out of your money a friend, so why enter into a relationship with a company that does the same? Except when you have no other choice, which is why everything becoming a service is a worrying trend.
 - Beyond the one managed by consumer protection laws. But this kind of relationship is something you do not need to mentally keep track of - it doesn't come with any unexpected consequences, and the terms of relationship are shared (they're part of the law).
 - Again, beyond the implicit ones fully handled by consumer protection and health&safety laws.
I work for a small unit of a large organization now and we have a highly formal process for buying SaaS subscriptions. You fill out some paperwork explaining why you need it, and in 1 to 2 weeks you tend to hear it is approved or (best of all) that the large organization already has a site license they can put you on.
You have to have some brakes on subscriptions.
I worked at a small co where we had maybe 20-30 different SaaS tools that we stored "documents" in. Every week we had an all hands and somebody would bring up the problem that they can't find documents and that the answer is that we need to have a new place to put documents.
The group seemed impervious to the obvious objection that having more possible places to put documents would make it harder to find documents. So we wound up with a lot of SaaS subscriptions, a lot of talking, but very little listening.
The obvious business response to that problem is to build "one ring to rule them all"; e.g. a small instance of the thing the NSA uses to read your email, sans the cryptanalysis bit.
That's an astonishingly hard sell. On one hand that system is an existential threat to all SaaS vendors from Salesforce.com to the very smallest. So you know you'll get resistance.
Even though Google is great at making money from ads, its search is not impressive in 2020. (Look at the second page of results, then the third, and ask "why do they bother showing any of this crap?") Corporations still buy OpenText. Ask people what a scalable full-text search engine is and they say "Solr" (have you really seen how it executes queries?). The only full-text search I like these days is PubMed.
We can do better.
For example, as a small provider you'll kill all your velocity if you try to deliver on-prem to a number of large enterprises. They will all have different upgrade policies, testing and approval policies, different approved hardware, etc., and pretty soon you will suffer death by a thousand cuts and not be able to deliver anything.
For example, I once had a client tell me we had to replace Postgres in our stack with Oracle because that was their approved database, even though we had to support the stack. Another client delayed us by 6 months because they decided to order super bleeding-edge network cards (which I repeatedly said we didn't need); the cards took ages to arrive, and when they did, they didn't work with the "corporate approved" version of Linux they insisted on using, costing another couple of months.
With SaaS you don't have to deal with any of those things. You do the one-off (painful) third-party vendor approval process and go through all the infosec audits, but after that you own the stack and can run it the way you want.
If you want to change the hardware layout and it's on prem you have to go cap-in-hand to the client and get them to stump up cash, wait months while physical boxes get allocated, racked up etc. If you're in-cloud, you make a change to terraform, check it in and it gets pushed out through your CI/CD pipeline.
If you want to roll changes to all your customers that's really easy as a SaaS/cloud offering whereas it's very hard if they're each on-prem.
It's easy to be cynical, but there are very significant benefits to the vendor of this model. There can also be benefits to the customer too.
In addition, in my experience when you deal with a big enterprise customer you are contractually committed to providing "transition assistance" when the contract ends (even if your company goes bust) and returning data in mutually agreed open formats. So vendor lock-in doesn't really apply either.
Maybe this is true of some commercial cloud companies, but "cloud" computing is much larger in scope than you make it sound. There is a whole shadow PaaS/SaaS world used primarily by various research communities, for instance, often called "grid" instead of cloud, wherein nearly everything is publicly funded and the value proposition is web-based access to data stores, HPC clusters, etc, instead of every individual hacking their own data science environment on their laptop.
You are asking for a solution before agreeing that there exists a problem.
This is the "I'll listen to your problem if the solution is convenient" mentality. It's inside-the-box thinking. It holds you back.
The immediate costs are only part of the picture.
There have been times that the control plane blew up in one zone of one cloud provider.
I had a least-complexity system which lived in that zone so it kept functioning as if nothing happened.
Other people panicked or had their automation freak out for them, they tried to restore their multi-AZ systems, overloading the control plane for the whole region.
It's a stressful experience if you feel you can't do anything about it, and sooner or later you will feel it with cloud computing: periods of 8-12 hours where your environment is FUBAR and you are best having some faith they will stitch it back together and walk away from it for a while.
If my colo goes down, my customers are going to complain and think I am incompetent. If my cloud-hosted infrastructure goes down and everyone else is down too, customers are a lot more understanding.
“No one ever got fired for buying IBM”
It depends on what you want, but managing complex software can be expensive and has some serious economies of scale, which creates pressure to outsource, even expensively. You can't just brush it off with "that's easy"; it's not.
When Netflix previously owned and managed their own datacenters, they had a massive 3-day database outage that disrupted shipping DVDs to customers. Even though they are a Silicon Valley tech company and built their own datacenter competency with internal staff engineers, it didn't prevent that major failure.
Based on interviews with Reed Hastings and Adrian Cockcroft, they said the 2 big reasons they migrated to AWS was (1) the pain of that 3-day outage and (2) the future expansion plans into international regions. They didn't want to buy more datacenters and manage that extra complexity. Reed said, "they wanted to get out of the datacenter business".
Yes, AWS/GCP/Azure all have outages too but Netflix (and other customers) conclude that -- as a whole -- those cloud vendors will be more reliable than in-house solutions. The cloud vendors also iterate on new features faster than in-house staff -- especially for non-tech companies like The Guardian newspaper.
Some big companies, such as Facebook and Walmart, can employ an army of IT staff with the skills to maintain complex "private clouds". However, most non-tech companies found that internal clouds were an inefficient way to spend money on IT because it wasn't their core competency. Just because a raw racked server installed on-premise is cheaper than EC2 doesn't mean the TCO (Total Cost of Ownership) is cheaper.
I’ll let that just sit there.
But you are going to host your own project management software? Your own expense reporting software? Your own email server? Your own payroll processing? Salesforce equivalent? Your own git server? Dropbox equivalent?
Longer answer: doing things in-house that are outside your core competencies and/or value creation model is a poor use of scarce resources (both capital and human—predominantly management bandwidth) and increases risk carried.
To give a concrete example: imagine you need to host your source code repository. You can pay for something like Bitbucket for $6/month/user and not have to worry about it. It’s a price that scales linearly with your team size and is a tiny fraction of your total costs.
Doing it in-house: you have to pay for hardware, storage, worry about backups, have somebody support it, have somebody manage the person that supports it, deal with users, find a solution to remote access, and so on. But all these miss the big cost—risk—what happens if the server dies or your office burns down? Nobody used to get fired for buying IBM, nobody now gets fired for buying a popular SaaS product.
You aren’t Google; at some point scale changes the equation, but that’s a rare spot to be in.
Remote access followed production norms, so no extra work there (other than a lot more people need access to git than other production servers). Maybe a few hours one time to lock down permissions for git etc, probably less fuss than getting SSO setup for a git SaaS.
And what happens when that one server goes down or becomes overloaded?
I've got opinions on collaboration software (why not put text files in git), but ignoring that, I don't really want to run a wiki, so sure, maybe your email provider offers something anyway.
> And what happens when that one server goes down or becomes overloaded?
You fix it? Same as when production breaks: hopefully you have people who can fix production, and hopefully you monitor your important tools. What happens when it gets overloaded and it's outsourced? You hope your provider fixes it, and you call and yell at them.
Well, in my case, I just clicked on “minimum” and increased it by 1 in my autoscaling group, or I scaled vertically and changed the server from 2xlarge to 4xlarge.
Can you get another server shipped to your colo in 5 minutes, brought online, and roll over and go back to sleep? Like I did at my last job. Most of the time, by the time the notification alarm woke me up, scaling had already taken place.
But, depending on the service, part of “monitoring” is bringing up another server automatically when CPU usage spikes.
But I think my current employer knows just a little about managing servers/services at scale. As far as I know, my current employer manages more servers than anyone else on the planet and has more experts on staff than anyone.
Even that being said, we outsource a lot of the services that aren’t in our core competencies to other companies.
I couldn’t say that about my previous employer who had 50 people in all.
B) would it have the same reliability characteristics?
C) do you have a DR strategy?
D) after you spend all that money, did it help you either save money or make money? Did it give you a competitive advantage? Did it help you go to market faster?
This article also quickly moves past "local-first" software to conflict resolution which, in my opinion, is a distinctly different issue. It's certainly not reason enough to hold off offering users a local-first option.
At this point I believe that since it can be done it should be done. I'll even go so far as to say it's a necessity. At some point users will understand it's a necessity and demand it. All that really needs to happen to convince them is one big incident where they lose access to their data for an extended period of time, or worse yet, lose all their data forever, and it won't matter why or how.
Aside from that, as more app makers start offering local-first options and users begin to see the benefits of that they will begin to demand it. That could take some time, but I expect it's inevitable.
There are other benefits to a local-first approach for developers. Take a "Contacts" app for example. If we have a standard for saving contacts data on the client side that any app could access this would give users and developers options to create and use new apps and features that all use the same data.
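To make the Contacts idea concrete, here is a minimal sketch of what such a shared client-side record could look like. To be clear, no such standard exists today: the document shape, the id scheme, and the field names below are all invented for illustration.

```javascript
// A sketch only: the shape and id scheme are invented, not an existing
// standard. The idea is that every app on the machine agrees on one
// document shape and one local database, so any app can read and write
// the same contacts.
const contact = {
  _id: 'contact:alice@example.com', // deterministic id so apps converge on one record
  type: 'contact',                  // lets apps filter by document kind in a mixed database
  name: 'Alice Example',
  email: 'alice@example.com',
};

// Any CouchDB/PouchDB-speaking app could then store it with db.put(contact)
// and find it again later by its deterministic _id.
console.log(contact._id);
```

The deterministic `_id` is the important design choice here: two independent apps writing "the same" contact end up updating one shared record instead of creating duplicates.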
CouchDB & PouchDB.js provide a pretty solid and easy way to do this right now. Installed on the user's desktop PC, CouchDB provides the missing link to a robust client side web app runtime environment.
There may be other ways of achieving this right now, but I am not aware of them.
The spec gets periodically refreshed/resubmitted. It last happened a couple of months ago and is set to expire at the end of the year.
And I outline the features on the new site for the app at https://cherrypc.com/home.html
That site is still under construction but there is a link to a demo of the app there. It doesn't run on a local CouchDB though, it uses the browser's IndexedDB.
You only need to change one line of code to switch between the browser's IndexedDB, a cloud-based CouchDB, or a locally installed CouchDB.
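A sketch of what that one line can look like (the database name and URLs are illustrative placeholders, not from the actual app): the only thing that changes between backends is the string handed to the PouchDB constructor.

```javascript
// Hypothetical helper: the storage backend is selected by one string.
// 'contacts' is an invented database name; the URLs are placeholders.
function dbTarget(backend) {
  switch (backend) {
    case 'indexeddb':   return 'contacts';                        // browser IndexedDB
    case 'local-couch': return 'http://localhost:5984/contacts';  // CouchDB on the desktop
    case 'cloud-couch': return 'https://db.example.com/contacts'; // hosted CouchDB
    default: throw new Error('unknown backend: ' + backend);
  }
}

// Elsewhere the app would do, e.g.:
//   const db = new PouchDB(dbTarget('local-couch'));
// and everything else (queries, replication, change feeds) stays the same.
```

That is what makes the "one line of code" claim plausible: PouchDB treats a plain name as a local store and an http(s) URL as a remote CouchDB, behind the same API.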
I did make a very simple demo of a "Rich Text Editor" app that runs on a CouchDB installed on your desktop PC though. After you've installed CouchDB and created an "Admin User" and password, this page configures a user and a DB on your CouchDB:
After you created your user you're redirected to the app and prompted to login at this page:
After you log in you can CRUD & print rich text documents.
It's a very simple app and all the code to make it is included in the source of those two html pages.
This is the content-provider design that was heavily touted in the early-ish days of Android.
The way this would work is for the computer in the garage to have the ability to divide itself arbitrarily into VMs for each purpose, with an ecosystem of images designed for things like fridges and gaming consoles. It should be possible to add or upgrade compute to the device in a hot swapped fashion, and because it doesn't have to be in a thin tablet, it could be easily cooled.
Interestingly, the solution to cloud software data ownership seems to be to use a self-hosted alternative, rather than use a non-Cloud solution like I would have expected.
I once imagined that homomorphic encryption would allow people to store data in their personal/neighborhood clouds and have third party SaaS code operate on that data locally. But I've recently been made to understand that homomorphic encryption would also allow companies to fully close off any access to data beyond what a program/service wants to give out, and unfortunately I get the feeling that the market will prefer the latter over the former.
Edit: of course local-first does not mean merely "backup", but instead the (redundant) hardware serves as a primary data store. I would welcome that as well!
- nextcloud - a system with appliance-like apps that do these sorts of things
- proxmox - a vm system that allows you to deploy VMs and containers including appliance-like templates
While we can reasonably expect software elements in any proposed solution, the hardware and physical elements of distributed computing may provide a far simpler pathway and likely will permit much greater reuse of existing proven software approaches.
For example, all future multi-unit residences could come with a 'data center' alongside the boiler, or possibly the individual units will host this equipment along with their air-conditioning units. All your cloud apps can now point to this cloud. I don't see any fundamental reason why a 'data center' cannot become a modular utility unit, coming in domicile, commercial, and industrial grade flavors.
In my view, the pure-software approach to the 'modern information society' has implicit political dimensions. One of these is the concentrated private ownership and control over physical resources which are now a required substrate of modern society. I for one am not ready to accept that as 'acceptable'.
My landlord can barely run the water and A/C; no way they can run IT.
The improved quality and reliability is worth it for the trivial latency cost.
However, your implicit point regarding income level and the range in quality of building management is valid, and successful products in this space would address it.
I mean that the security of such utility units would be roughly what we already see with the current breed of IoT devices. To keep some kind of quality level in such devices there would have to be one big company producing them, which does not solve the problem. If you look at IoT device manufacturers now, there is so much crap floating around because there are so many of them.
I don't think it is "evil corporations concentrating power"; it's more "normal people have better things to do". If you are a plumber you want to spend time fixing pipes, not setting up your homepage. Putting an ad on Facebook is a perfect solution for a plumber.
IoT is designed to work in extreme edge conditions: low power; intermittent connectivity; constrained local storage; limitations on embedded code, etc.
Further, there is currently NO financial incentive for anyone to tackle the issues necessary to take these bits of technology and make them 'home' technology. We've done this for all sorts of things, including controlled combustion in the basement for heating.
You also have two strawmen here that you attack:
1 - I did not say anything about the "evil" of corporations. Simply that it is not acceptable.
2 - A "normal" person in a modern multi-residence is hardly bothered with "fixing the boiler", or "the network connection", or "the fire alarm system", any other utility tech. If you are asserting that this is "impossible" for "networking and hosting" (!!) please make the case.
The solution space is fairly permissive, with various business models to consider. It should definitely be explored.
If we go the other way around and, for example, have Synology and QNAP provide solutions for this, it will not be much different from FB and Google owning all the data. Those companies have the know-how to do "server appliances", but then they will basically own the software, hardware, and updates for them.
Which in the end achieves nothing, because what is the point of having local servers running proprietary hardware and software when you have the same thing today, only with the servers somewhere else?
Even if a whole slew of crap products lead the gen 1, the societal mindshift on its own is worth the effort, imo.
I get the tradeoffs. I'm not going back 10+ years.
(E.g., you can find example horror stories here on Hacker News: https://www.google.com/search?q=locked+out+of+gsuite+site%3A...)
It's a good habit to keep multiple interlocking personal email accounts from multiple providers, but being cloud-first is still obviously correct.
Just to be clear, regarding "being cloud-first is still obviously correct": your stance is that it's OK for your access to your life's work to be at the discretion of a company? (Presumably one you trust.) Not saying you're wrong here, just curious how people who are all-in on the cloud think about this.
These automatically sync to the cloud. Usage is lower, as most commercial applications prefer cloud lockin.
> [...] These automatically sync to the cloud. [...]
What could go wrong...
It’s not that hard to have a caching strategy. And then your native app feels like a native app.
That's a huge load of responsibility taken off the software supplier's shoulders and it eliminates the entire cost and complexity of building and maintaining Cloud based data management infrastructure.
And that's a pretty great option to have on the table for both users and software makers.
I'll also note that these apps consume almost no network bandwidth. With Service Workers implemented they barely speak to your server. The amount of bandwidth that can be potentially saved has got to be pretty huge.
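For anyone unfamiliar, the bandwidth saving comes from a cache-first fetch handler in the Service Worker. A minimal sketch, assuming an invented cache name and shell file list (not from the app above):

```javascript
// sw.js: cache-first Service Worker sketch. CACHE and SHELL are invented
// for illustration. In the page you'd register this file with
//   navigator.serviceWorker.register('/sw.js')
const CACHE = 'app-shell-v1';
const SHELL = ['/', '/app.js', '/app.css'];

// Pure helper kept separate so the routing rule is explicit:
// only same-origin GET requests are answered cache-first.
function cacheFirst(method, sameOrigin) {
  return method === 'GET' && sameOrigin;
}

// Guarded so the file can also be loaded outside a worker context.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    event.waitUntil(caches.open(CACHE).then((c) => c.addAll(SHELL)));
  });
  self.addEventListener('fetch', (event) => {
    const sameOrigin = new URL(event.request.url).origin === self.location.origin;
    if (!cacheFirst(event.request.method, sameOrigin)) return;
    // Serve from cache; hit the network only on a miss, which is why the
    // app "barely speaks to your server" after the first load.
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```

After the first visit, every shell asset is answered locally, so the only traffic left is the data the app actually syncs.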
But I got a mumble server up and running in no time flat, so it works that way, and the audio quality is amazing - far better than Zoom - as long as everybody's wearing properly-configured headsets.
Federation isn't needed because you don't need to have servers that communicate with each other, you just need some server that can host a given video call. So any open source solution works, no federation architecture is required.
You put a repeater on (at least) one side of every bottleneck to remove redundant traffic.
The irony that Etherpad is written in Node is not lost on me.
The first thing is to expect app devs to define an RPC API, so that whatever the app serves is transparent to the whole platform.
That way, users and other apps can reuse that API and integrate tools and other services without worrying about whether the node is online or not.
Given they are RPC calls (over TCP or IPC), they can be served by a peer node in a distributed computing flock, or the app's service process can serve the RPC requests not locally but over the network, if it wants or needs to.
But even in the case where it goes straight to the "cloud" for resources, on the node it will always go through the RPC API first. (How the application handles the RPC request is defined by the app developers.)
There are actually a lot of other important details: for instance, the UI SDK and window management are already there, the storage layer (files and a key-value DB) is distributed over p2p (torrent), and everything is accessible/bootstrapped through a DHT address reachable from anywhere.
But imagine if you had had this architecture before, and Twitter, Facebook, or Google search had to install an API-based application, where they needed to ask you for permission to index content or store your list of friends locally, and where other apps could later extract that information from your machine. For instance, you could change your search to DDG, or export your Facebook friends list and post it to some other social network (this is actually the primary reason almost no one can compete with a popular social network).
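A toy sketch of that idea (all names invented): apps register handlers with a local RPC layer, and callers never care whether a handler runs in-process or is forwarded over the network.

```javascript
// Minimal in-process stand-in for the RPC layer described above.
// A real platform would carry these calls over TCP/IPC and enforce
// user-granted permissions before dispatching.
const handlers = new Map();

function register(method, fn) {
  handlers.set(method, fn);
}

async function call(method, params) {
  const fn = handlers.get(method);
  if (!fn) throw new Error('no handler for ' + method);
  // The app's service process could instead forward this to a peer node
  // or to the cloud; callers can't tell the difference.
  return fn(params);
}

// A hypothetical social app exposing its data through the API:
register('contacts.list', () => ['alice', 'bob']);

// Any other locally installed (and permitted) app can now reuse it,
// e.g. to export your friends list to a different social network:
call('contacts.list').then((friends) => console.log(friends));
```

The point of the sketch is the indirection: because every consumer goes through `call()`, the data stays portable between apps regardless of where the handler ultimately runs.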
In the end, things that should be yours are yours and third-party apps will have to be installed and granted permission by you to manage that digital property.
But overall I'm very glad to see that this is starting to become something of a trend, that people are finally waking up, because beyond all the technological implications, there are also a lot of political and social benefits to this approach.
If it wasn't for GDPR, most of these companies wouldn't even offer exports of your data.
On the other side we have people who keep patting themselves on the back about their Emacs setup being able to run a web server from a Raspberry Pi.
We're at a place where it's fairly counterintuitive to keep arguing against cloud providers. I doubt Azure or AWS will go down with a probability that should worry any business (and even then it should be recoverable). The only danger with cloud providers is that if your engineering team is not exactly smart, you can rack up a million-dollar bill for a dog-walking app. But that's more about your ability to recruit disciplined engineers; no point blaming it on the cloud.
Look at Imgur. For the most part bootstrapped, scales extremely well, and if I'm not wrong, is very frugal, also runs off AWS.
If I remember correctly, they use S3 to store and distribute the master files, and other AWS services for the user interface and the like, but streaming itself is handled from their own infrastructure.
Were you asleep when S3 East failed to the point that half of the internet was offline or did you just not get a notification?
> The only danger with cloud providers is that if your engineering team is not exactly smart you can rack up a million dollar bill for a dog walking app
The public cloud has never been recession-tested, and these next few years may be the only way to do it. The model works as long as there are people willing to pay; Amazon and Azure will raise prices if the base load drops below a certain point.
There is a problem with the lack of enforcement, but the regulation itself is sane.
Regulation isn't a black-and-white, good-or-bad thing. It can be either.
But acting as if all regulation is terrible and we should let the massive corporations simply walk over us is also not realistic.
The US House just had a dog and pony show with the tech CEOs where one representative asked Zuckerberg about Twitter’s content policy.
And considering that a lot of the regulation passing in the EU is, I think, at the very least a step in the right direction, I would say it's going pretty well.
I am positive that market forces would sooner or later achieve the same effect even without GDPR. One thing that has to change, in my opinion, is the idea of 'exporting out'. People should be using such services to import what they have to share with others, instead of blindly relying on them for remote access.
Considering how many business practices across the board aren't very consumer friendly and are seemingly growing ever more anti-competitive in nature while skirting antitrust definitions in grey areas, I'm personally not nearly so sure we can rely on "market forces" to protect anything but the business in question.
In fact, the only things I've seen market forces achieve are self-preservation and self-interest, which may or may not align with what the rest of society wants or needs.