The cloud is a prison. Can the local-first software movement set us free? (wired.com)
273 points by samwillis on Aug 3, 2023 | 206 comments



I'm going all in on Local First; it addresses so many issues with the current "app" web: better UI responsiveness, working offline, private data, and lower server costs, just for a start.

Somewhat counter to the article, I would argue that local first isn't about having "no cloud", it's "low cloud". So you could go P2P for sync, but you can also have a much more lightweight sync service on a central server. Both are valid "local first", it comes down to use case.

The way to think of the term is like when we all moved to designing web UI "mobile first". It's a kind of shift from operating one way and adding on support for the other, to the other way around. Start local and layer on server and cloud support.

One of the most exciting things about local first though is that once you have solved all the sync and conflicting-edit resolution problems (CRDTs are key here), you are 90% of the way to solving "multiplayer real-time". The only missing bit is presence.
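
For a concrete taste, a minimal sketch of two replicas converging with the Yjs CRDT library (one CRDT implementation among several; assumes the `yjs` npm package):

    import * as Y from "yjs";

    const alice = new Y.Doc();
    const bob = new Y.Doc();

    // Each peer edits its own local replica, offline if need be.
    alice.getText("note").insert(0, "hello ");
    bob.getText("note").insert(0, "world");

    // Exchange updates in either direction (P2P or via a lightweight relay).
    Y.applyUpdate(bob, Y.encodeStateAsUpdate(alice));
    Y.applyUpdate(alice, Y.encodeStateAsUpdate(bob));

    // Both replicas now hold the same merged text, whatever the delivery order.
    console.log(alice.getText("note").toString() === bob.getText("note").toString()); // true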

The community around local first is also really buzzy, it's got that great feeling of being part of something new, something that's going to be bigger than the sum of its parts.

To learn more about local first, this is a great starting point: https://localfirstweb.dev/


> Local First

> "Join our Discord"


Yonz here, I run the community and can be blamed for the Discord choice that has drawn 20+ swipes. We started in February 2023, and it has been amazing to see the now ~1k developers coming together and building awesome collaborations like Tinybase + Cr-sqlite.

Yes, we need a community platform aligned with our mission, but it is already hard enough to get people together. Before the website or the Discord, it was just a bunch of people following each other on Twitter and a few disparate talks. Starting from there and asking new joiners to set up a new platform adds prohibitive friction. 'There is a platform for that' is not enough; we need most interested folks to also be on it.

A matrix bridge makes perfect sense, but I haven't had the time to set it up.


The effectiveness of using Discord for your community undermines the community's very purpose. You use Discord as a stepping stone because it works, but if you don't prioritize moving off of it to an effective replacement above any and all more costly efforts and issues, it's hard to believe in the authenticity of purpose.

I'm not implying here that you are not prioritizing moving off (how would I know), or that you aren't planning to use that migration as a case study to sell local/P2P-first to others, but if you all are not, that's going to be a permanent blot.


Aside, a related concept:

https://en.wikipedia.org/wiki/Prefigurative_politics

"modes of organization and social relationships that strive to reflect the future society being sought by the group. According to Carl Boggs, who coined the term, the desire is to embody "within the ongoing political practice of a movement [...] those forms of social relations, decision-making, culture, and human experience that are the ultimate goal". Besides this definition, Leach also gave light to the definition of the concept stating that the term "refers to a political orientation based on the premise that the ends a social movement achieves are fundamentally shaped by the means it employs, and that movement should therefore do their best to choose means that embody or prefigure the kind of society they want to bring about". Prefigurativism is the attempt to [act] [prefigurativly]."

P.S. FWIW, as far as it stands, Zulip is the least annoying Discord alternative (and eventually Matrix might become better/non-slow).


I'd like to suggest that running WordPress with BuddyPress may fit the bill... and if you need chat (instead of the chat-like group messaging which is baked in), you can also add Wise Chat to the mix.

This system is loads easier than matrix imho - although I love love love matrix and want it to continue to succeed and be used.

Blocking spam user signups and setting up security plugins and auto-updates is not baked in... but it's not hard and it's all open source - it can run on a droplet or VPS easily.


RocketChat does well with minimal resources. Red Hat used it internally for about 5 years in Consulting. Red Hat definitely fits the xkcd 1782 IRC meme, though.


Quite true, there is some irony to that. There has been a little talk of building a local first community platform; sadly it doesn't exist yet.


Discord's UX is miles ahead of Element's, and I set up and maintain communities that use both. For something like local-first, like a principled local makerspace, we are happy to deal with the friction and try to contribute back on Matrix/Element.

That said, infrastructure churn can be disruptive. Don't envy your position.


The problem with Discord is discoverability. Is your project known on the web? What about resources? In the end you might develop for a few other developers, but only those that happened to run into the correct invite links.

Everyone else will believe your project is dead.


Isn't that what email and mailing lists are for?


They are hosting monthly meet-ups - mini conferences - with video chat and presentations. No local first solution to that yet.



This is interesting. I've been putting off setting up a Discord <> Matrix bridge but will have to tackle this.


For the uploaded videos, perhaps PeerTube? For live presentations... could Jitsi suffice?


Kind of like how the so-called decentralized development of the Bitcoin protocol all happens on GitHub and discussion takes place (primarily) over Slack. And the team collectively settled on the incredibly tone-deaf (although, admittedly, accidentally honest) use of Bitcoin Core to refer to the project, thereby making themselves the "Core" team.


https://thenib.com/mister-gotcha/ vibes.

Discord is easy, and it is a good jumping off ground before you get a Matrix server up and running.

Use what ya gotta to get shit done.


I 100% agree.

At least in the Elixir space, Phoenix has some excellent lightweight methods to solve the presence piece! https://hexdocs.pm/phoenix/Phoenix.Presence.html
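
For non-Elixir folks: presence at its core is a map of who's online with heartbeat-based expiry. A hand-rolled single-node sketch in TypeScript (illustrative only; Phoenix.Presence adds cluster-wide replication and conflict handling on top of this idea):

    type Entry = { user: string; lastSeen: number };

    const online = new Map<string, Entry>();
    const TIMEOUT_MS = 30_000; // drop users after two missed 15s heartbeats

    // Call on every heartbeat/WebSocket ping from a client.
    function heartbeat(user: string): void {
      online.set(user, { user, lastSeen: Date.now() });
    }

    // Who's here right now? Reap stale entries as we go.
    function listPresent(now = Date.now()): string[] {
      for (const [user, entry] of online) {
        if (now - entry.lastSeen > TIMEOUT_MS) online.delete(user);
      }
      return [...online.keys()];
    }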


CleanShot X for Mac is a great example of this. It's basically a screenshotting and screen recording tool that you install. But you can also tell it to upload your images and videos to their cloud and get a link to share them with your friends or coworkers (especially helpful for screencasts for example).


Why not Dropbox or whatever place you use to share files normally? A screenshot program does not need a cloud.


I use Cleanshot’s competitor Dropshare for this reason. I’ve configured it to upload files I want to share to my own B2 bucket.


It's completely optional. And it's very convenient to record a screencast and have it uploaded to the cloud with one click, which also copies the share URL to your clipboard at the same time. I don't use Dropbox or a similar service, so this is real added value for me.


You add yet another account you need to manage, and it's very easy for your private information to be visible in screenshots/casts, in a cloud managed by the makers of a screenshot app.


Dropbox would do well to make a decent screenshot tool and make it share Dropbox links.


Once upon a time, Dropbox had a built-in screenshot tool that could upload to your public directory.

It was always a pain because I was forever having to determine which of OneDrive or Dropbox had taken Print Screen handling away from my preferred screenshot tool (at the time Greenshot, now ShareX).


The more important aspect of that story: your public directory. Once upon a time, you could actually link to content in your Dropbox directly. Like, not to some Dropbox interstitial, but to files themselves. dl.dropbox.com/u/[your user ID]/path/to/file.ext would let anyone access the file.ext under [Your Dropbox "Public" directory]/path/to/.

Insanely useful, which is why it both helped Dropbox gain massive userbase, and why it became one of the earliest victims of enshittification.


Since we are talking about local first, what about Gokapi?


Gokapi requires an externally-visible IP. It can be self-hosted, but is hardly local-first.

Also, why not nginx and a `~/Public` folder if we have an IP?


Because Dropbox links are completely different to just having a shared folder?


Depends on purpose.


Is this a webdev-only community? I see this being an issue in other development environments too.


Not at all, it's open to all.

There are many ways to build local first, both with web technologies and with "native" app toolkits.

Many of the CRDT tools started as JS/TS libs but have been rebuilt in Rust to be cross-platform, WASM, or native.

The appeal of using web tech is its accessibility and ease of distribution. Local first and PWAs with the new origin private file system API kind of go hand in hand, but it's not an exclusive relationship.
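
For reference, the origin private file system gives a PWA its own private, fast local storage. A rough sketch of the browser API (the file name is a placeholder):

    // Persist and reload app data locally; works offline, never leaves the device.
    async function saveNote(text: string): Promise<void> {
      const root = await navigator.storage.getDirectory(); // OPFS root directory
      const handle = await root.getFileHandle("notes.txt", { create: true });
      const writable = await handle.createWritable();
      await writable.write(text);
      await writable.close();
    }

    async function loadNote(): Promise<string> {
      const root = await navigator.storage.getDirectory();
      const handle = await root.getFileHandle("notes.txt");
      return (await handle.getFile()).text();
    }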


The cloud offloads a lot of maintenance worries, especially hardware related. E.g. no need to worry about replacing failed hard drives. Can local first really compete?


I'm happy for an article like this that I can invite people to read. I often tell people that the "cloud" is a lock-in for subscription-based everything, where you have literally no control of your data nor the slightest insight into who can access it, where all privacy boils down to trusting that mega-corporations don't lie, and so many other compromises and shortcomings.

Most of the responses are, "but it works for me", "it's convenient", "I don't want to run my own servers", and/or, "everyone else is doing it", as though not using the cloud means all those things go away.

Many of these people genuinely don't care about privacy and completely lack the imagination to envision what could be done with their home security cameras' feeds, their Ring doorbells' feeds, the settings on their Nest, and so on. Even when we have stories about assholes scaring the shit out of people via babycams, Amazon turning over Ring footage that the homeowner explicitly said not to share, or houses getting burglarized because criminals knew the house was empty because the thermostat was set for vacation, people still seem to imagine these are the kinds of things that are rare and won't happen to them.

It reminds me of the early days of online credit card fraud and online phishing - people had a hard time realizing that it was more likely than they believed.

While most people's eyes will glaze over during the discussion of CRDTs, and while I can't wait to see more local tools that negate the need for "cloud" services, there's a lot in this article that can be pointed to when people might benefit from learning about possible long-term issues with the cloud.


And this is why, as a hobbyist, I’m very skeptical of, for lack of a better term, the consumer cloud. We’re not talking about AWS here, we’re talking about companies’ servers (that are actually AWS on the back-end) that they use to offer services to consumers. They insist on calling that “cloud” because everything has to be confusing, I guess. And I think that distinction gets lost in the translation between hobbyists/power users and IT professionals/software engineers. I think those two groups often miscommunicate, because they use the same terms in different ways, because their worlds are so different.

It’s not about on-premises vs cloud for corporations, it’s about control over your data and what Apple’s marketing once called “your digital life” (before they went all-in on cloud) for consumers, for normal people, whose recourse when a company does something wrong isn’t “hire the best lawyers in the business and sue them” but “get screwed.” I don’t care what corporations do with their infrastructure, my concern is my own digital life, which is more important to me than the market cap of FAANG.


Before it was called "local first" it was known as "subsidiarity". Seeing the principle applied to food, politics, and software is exciting to me.

"Subsidiarity is an organizing principle that matters ought to be handled by the smallest, lowest or least centralized competent authority. Political decisions should be taken at a local level if possible, rather than by a central authority".[1]

[1] https://en.wikipedia.org/wiki/Subsidiarity_(Catholicism)


Interesting that the word comes directly from the Catholic Church, because I don't think they ever allowed it to exist.

When you see it happen in society, it's almost always learned from a military group or earned by some popular revolt.


It's interesting. I'm one of what I presume is a small number of people that have conversed extensively over my life about subsidiarity, mostly in the context of subsidiarity and sphere sovereignty, a rather niche area of theology.

Most Catholics, even highly engaged Western Catholics, are not aware of subsidiarity. It's a term and Catholic juxtaposition commonly known and cited by theological/historical non-conformists in church polity and in discussions around separation of church and state.


I worked on a local first version of an app I sell. I use CouchDB to store user data, so it was pretty easy to implement because CouchDB will sync a local CouchDB to the cloud-based CouchDB as soon as any changes are made.

That app is now 20 years old and over those years I've had users who quit using it call me years later to ask if I could give them access to their expired account or a copy of all their data.

CouchDB is easy to install and easy for users to configure for that purpose so I explained the benefits and provided simple step by step instructions.

I explained how this would make the app faster for them, provide a local backup of their data, and that it was possible our cloud-based CouchDB could be inaccessible for a variety of reasons beyond our control, but their local CouchDB would be there for them and would sync up any changes with the cloud-based CouchDB as soon as it was back online.

I also explained how this gave them a way to use the app for free after their yearly subscription expired, so I assumed most of the users would jump at the chance to set up the local first version of the app.

Not one of the app's users wanted to do that. They really could not have cared any less.

I still use it for my own account so it wasn't a complete waste of time but I was, and still am, a bit amazed that users didn't grasp the value in that.


Thanks for sharing your experience; what was the app? Personally, I don't think I would have gone through the trouble either. For example, I used to have a ton of Evernote entries and don't think I would set up a personal CouchDB just to get access to them.

A middle ground could be a file path to an SQLite file that can be copied, opened, or backed up in the cloud.


The app is at ezInvoice.com. CouchDB is really a very good fit for that app.

In the early versions of the app I used CGI.pm to store and get user data and that was very fast and efficient, and the format for storing data was similar to CouchDB's (JSON), so it felt more familiar for me to work with.

I had never worked with SQL before making that app. I had a partner back then create a new version that used an SQL db. I did this mostly because I felt like the app wasn't keeping up with web trends and he had been pestering me to use SQL in the next version for about a year.

What he ended up with was buggy to the point of users calling and screaming at me, and after a year he still couldn't get it right. Back then news of SQL dbs getting hacked was common, and after buying several books on SQL I realized that becoming proficient required years of study and experience.

CouchDB had just come out around then so I looked into it and for me that's been much easier to work with, and do so securely. It's practically designed for an app like ezInvoice. PouchDB.js has made it easy for me to work with.
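
For anyone curious, the PouchDB side of that setup is pleasantly small. A sketch (the URL and database names are placeholders):

    import PouchDB from "pouchdb";

    const local = new PouchDB("invoices");
    const remote = new PouchDB("https://couch.example.com/invoices");

    // Live, retrying, two-way replication: offline edits sync up automatically
    // once the cloud CouchDB is reachable again.
    local
      .sync(remote, { live: true, retry: true })
      .on("change", (info) => console.log("synced:", info.direction))
      .on("error", (err) => console.error("sync error:", err));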

If you're already proficient with SQL there's probably no compelling advantage to using CouchDB/PouchDB, but it's a really good tool set for building web apps.


NoSQL is definitely a breeze for data model use cases like that.


Seems like one problem is that you didn’t set it up automatically and expected them to do it on their own. Regular folks aren’t going to do that kind of thing without pressing need.

Harder when it’s primarily a web app, but customers can be convinced to try mobile/desktop apps at times. Maybe give a discount?


>> Seems like one problem is that you didn’t set it up automatically and expected them to do it on their own.

Downloading and configuring CouchDB on their PC isn't something I can do for them, but it's really simple and easy to do and I provided step by step instructions.

I didn't charge them anything at all to get and use the offline first version of the app. They already paid for the online version (that's a yearly subscription fee) so there was no extra charge at all.

They really were not at all concerned about a long term outage. I can understand that because widespread outages have been pretty rare, and the app has never been down for any significant amount of time for 20 years now. So I think it was really a lack of urgency.


> Downloading and configuring CouchDB on their PC isn't something I can do for them, but it's really simple and easy to do and I provided step by step instructions.

Innosetup is still a thing, and there are alternatives. Installers can take parameters, or be built on the fly. That's what we did at one place to install postgres and other apps for a customer, while making it as simple as possible. Next, next, finish.

Discount refers to reducing the cost if they went through with it; the lack of an extra charge, while nice, is not an incentive in itself.


Anyone who's worked inside a large organization knows the cloud is the solution to be able to get things done and deliver value to the business.

Why?

On-prem solutions require I work with the infrastructure group, who wants to cookie-cutter me into their limited stable of infrastructure options; I have to work with the systems group, who gives me very limited options and configurations for the operating systems; the network group, who are very fixed in their ways and make very few concessions; and the middleware group which is what you need to connect to anything else of importance. Four groups! All with their own workflows, their own SLAs, their own exemption policies, etc. It's a nightmare to get anything done and the business is frustrated that things take so long.

Now I'm on a team that does everything in AWS. We keep everything serverless so the infrastructure and systems teams can't butt in, and the cloud environment eliminates the need for interacting with the network group. I only have the middleware group to contend with. The result? We're delivering much-needed solutions far faster than any other group in the organization. They also have far fewer business impacts (read: support issues impacting operations) than the applications we have on-prem.

People seriously underestimate the time and energy required to keep a highly-available, DR-capable datacenter running. They also underestimate the complications of interacting with all the teams needed to keep that datacenter running. We have found that AWS is faster, cheaper, and better than hosting on-prem.


Forgot to mention - if you were doing anything new, plan to spend months getting that in on-prem. They're going to run an RFP, deliberate ad nauseam, try to build a business case for the tool, procure infrastructure to host it, do five-year usage projections so you can measure how you're going to capitalize - it just goes on, and on.

Our organization spent two years going through these gyrations for Elasticsearch. They failed. It never happened. No other solution was brought on board; they simply couldn't justify the cost. All told they spent hundreds of thousands of dollars to come up empty-handed. Meanwhile my new team had a project where Elasticsearch was the obvious component of the solution and we simply used that service on AWS. Done.

It's been a while since I've worked at a startup or a mid-cap company, but for any company in the Fortune 500? Cloud is a godsend!


Sure. Come back to us in twelve months.

Either the company will have lost control of their spending and be sending 6-7 figure checks to AWS. Or the ops team will have clamped down on the accounts, and now you have to go through the same process to get EC2 instances and other cloud resources.

Solving cultural problems with technology generally doesn't work.


> Solving cultural problems with technology generally doesn't work.

Solving cultural problems is indeed difficult but most of us are not going to be at a company long enough to give a shit. The name of the game is taking money off the table, and the best way to do that is to deliver value quickly and then move to the next table.


> The name of the game is taking money off the table, and the best way to do that is to deliver value quickly and then move to the next table.

I pity the fool who hired you. Or will hire you, for that matter.


It's been 4 years for our team. Maybe in 12 years things will be different, but this is where we are for now.


> Solving cultural problems with technology generally doesn't work.

That's a ridiculous statement! Technology is the prime driver of culture! If you want to effect lasting change, the last thing you should do is try to convince people you are right. Set up the technical infrastructure to support the culture you want!

In fact, it's the reverse — culture has practically no impact on technology once deployed.


Right. Society today is defined by technology. I like to call it "the smartphone society" instead of "the modern society" or whatever.


> Solving cultural problems with technology generally doesn't work.

It does by getting rid of the people.


Yes, it only pushes the inevitable lean battle further down the road and never solves anything. How do you eat an elephant? One bite at a time.


Yeah. The - possibly insoluble - problem is that if you don't go through all the approvals exercises, the people whose time would be taken up approving your thing are still there, drawing a salary. So there's no obvious cost/benefit analysis to do, or not one that's politic.

But I like the idea of just doing it using cloud. Once a cloud vendor is approved, all its services are implicitly approved as well. Nice.


> Once a cloud vendor is approved, all its services are implicitly approved as well. Nice.

Exactly! When we start a new project the AWS Console is just a big tool box we open to see what's available that we can use to deliver the solution.


> Four groups! All with their own workflows, their own SLAs, their own exemption policies, etc. It's a nightmare to get anything done and the business is frustrated that things take so long.

This seems like a wound self-inflicted by a company that can't seem to run itself efficiently. You're not describing attributes of on-prem and cloud--you're describing a dysfunctional organization.


You've circled back to their point about forcing the rest of the organization into becoming cookie cutters.


So your organisation is so inefficient that it’s easier to just engage a third party to syphon off any profitability?


> your organisation is so inefficient

Very few organizations are able to stay efficient in the long run as they scale. This is a simple matter of processes being stacked on top of processes and inter-department frictions. That complexity is inherent to having regulations and laws you need to comply with; they become hard requirements.

For a more concrete example, think of HIPAA compliance: during an audit you can have everything on-prem and hope that your IT department actually understands the security and privacy boundaries, or have everything in AWS and be compliant because they conform to the standard by default.

> syphon off any profitability

For most of those businesses software is not the profitable bit, it's a cost center like most other things that serve to support operations. In that context, migrating your code to AWS/GCP/Azure means very little cost inflation.


>Very few organizations are able to stay efficient on the long run as they scale

Simply using AWS doesn't do anything to help with this in that same long run. Give it a decade or two and those same organizations will suddenly have multiple bureaucratic departments to approve new AWS resources and spending.

All AWS is offering is a channel that a bureaucratic process hasn't been developed for _yet_, and the organization's hivemind response is to allow its usage freely because it doesn't fit in the existing form template of rules.


Good luck maintaining your own infra, server, system, middleware combo. It's a whole area in itself. Let the specialists do it and focus on your core business. Saying otherwise is just a lack of pragmatism.


>Give it a decade or two

A decade or two is someone else's problem.


Just because AWS is HIPAA compliant, doesn't mean what you do there is :).

But at least it solves the basics.


I think their point is that it gives you a relatively stable env, and allows you to be the one deploying and configuring infrastructure. Anything that can be miscommunicated to client on-prem deployment staff will be, in a lot of cases breaking guarantees like HIPAA compliance.


That's just nitpicking


Not really.

So much of HIPAA is actually getting the administration right.


I can literally right now, sitting here in my underwear, spin up 4 clustered servers in 3 different continents with globally replicated object storage and a master + 2 slaves setup of SQL databases in each location.

And all that for less than the price of _one_ competent sysadmin's monthly salary.

How would you suggest a company would match that with local servers and sysadmins?

One sysadmin per 8 hour shift = 3 people for 24 hour rotation, you need 1-2 extra so that if one is sick the others don't need to pull a double. That's $400k a year alone.

Then you need to get the 12 identical servers purchased and shipped around the globe to colocation. Now at least one, preferably two, of the admins needs to be a proficient DBA so they can get the replicating SQL running.

Now you need to build monitoring for all of that and runbooks on what to do if crap breaks outside of office hours.

..and now a server has kicked the bucket, you need to buy a new one from $vendor, get it delivered and set up in Australia - you're in Michigan.

And in some years the servers are underpowered because your business has grown, now you need to buy 12 new ones and get them installed.

I can do all that from home without putting on my pants. I might need to grab a shirt if the AWS people want to hop on Zoom for something. =)

This is why we use the cloud and managed services: usually it's either out of our hands anyway or already fixed when we manage to log in. During the last two biggest downtimes I've encountered, even Meta's crap was offline too (FB and Instagram, at least).


> I can literally right now, sitting here in my underwear, spin up 4 clustered servers in 3 different continents with globally replicated object storage and a master + 2 slaves setup of SQL databases in each location.

Hey, I can do that too in our on-prem environment because it's well managed. Hell, I can do that in my fricken homelab if you'll accept Postgres.

Cloud is neat and the major vendors know how to manage it. That doesn't mean you need to pay them to use a well managed platform, but you do need to pay _somebody_.


The premise was that one doesn’t already have a half a million dollar on-prem setup…

Like a bootstrapped one or two person startup. Or could I borrow your globally distributed home lab? =)

The difference here is that I can pay AWS by the hour (or even by the minute or second). I can’t hire a sysadmin like that.

I can try a few high GPU instances for 6 hours and shut them down. But if I need to buy 6 4090’s and the hardware to run them, I’m stuck with the choice and down a few tens of thousands.


> half a million dollar on-prem setup

Oh hey, that's my cloud spend.

EOM I have nothing, every month.


It depends on the size of your company. If it has 200k employees, you can mutualize the salaries of your DBAs. There are also intermediate steps between 'build your own DC' and 'run everything in AWS'. You will build monitoring tools anyway, and will most likely have sysadmins too if your fleet is large enough, etc.

Cloud is not always an obvious win, and depends a lot on where you work


At 200k employees and being profitable, you can definitely start investigating moving to on-prem. And most likely a lot sooner than that.

BUT it depends on what field you're in. If you're managing CCPA/GDPR/HIPAA data it's a huge bureaucratic mess to get everything certified. Cloud operators usually have theirs documented and certified already; you just need to figure out your own part.


The alternative to the cloud isn't running a server in the broom closet. It's running (racks of) dedicated hardware in a datacenter. Even "on premise" is often in a datacenter somewhere, and thanks to virtualized networking it's just as if the hardware is in the same building.


The datacenter hardware still needs people to purchase and manage the physical stuff. And those people aren’t exactly cheap.


If you want to be able to instantly spin up colocated servers, you just need to have spare capacity. There is nothing that restricts you from delaying expanding your capacity until you need it. That is what slows you down.


Try going "Hey boss, I need 20k for some servers I'm just going to let sit in their shipping boxes in case we need them later." (or you can install them but not use them for anything).

Come tell me how it went =)

I've yet to see a manager who wants to spend money up front for stuff they might possibly need in the future. YMMV


Capacity planning is the process in which you estimate how many servers you are going to need. If you overestimate it just means there is extra time before you will need more servers.

You would be spending that 20k regardless assuming the company is growing.


I sense that you're writing this as if software is an end in itself as opposed to a means. It's normal to engage with third parties, especially for activities that are not part of your core business.


Did you read the article? "Local" here means running on the user's computer. It is not talking about private data centers.


And there are good reasons why we abandoned that approach 20 years ago. It's a nightmare to support.


Yup, and computers today are probably just as divergent and unique as computers were 20 years ago, so no need to ever look into this issue again. I guess that's the reason there are basically 0 good desktop applications in today's modern computing world; I certainly cannot think of a single successful desktop application.

Good thinking!


It's just one local copy of many, and can run even when offline. Like git. No nightmares.


I still remember the time when "server" meant someone's old Dell tower we put in a closet somewhere =)

I thought we figured out it's a shitty idea and moved past it?


The only 'shitty ideas' in business are those that cost the business and shareholders money.

Cloud computing can either be a necessity for a business, or a money-hole from which is derived little to no benefit to the company.

Most small businesses could thrive with a -local- small dedicated server/workstation, and locally managed network resources.


Most small businesses should be focused on their business, and not their IT.


Exactly, this is why companies shouldn't host their own email either.

And, as a rule, gaming companies shouldn't build their own engines either. If you have the skills to do that, start selling the engine instead of the games you make with it. It's a much better business.


And what is the operating margin of the products your group produces? What was it prior to shifting everything to the cloud?

If you can't answer those questions, you may be architecting yourself out of a job. I watched more than one company that was operating on razor-thin margins, depreciating hardware over a decade, move to the cloud and promptly bankrupt themselves.

If you're in a business where you've got an additional 20% margin to piss away on someone else's gear, your logic makes a ton of sense. If you're in one that's making 5% on every widget that goes out the door, you should probably be working with your management to figure out how to remove the inefficiencies from your process rather than just moving your problem onto someone else's hardware.


We have those numbers. We reduced our costs by over 70%. We got the project approved claiming we'd reduce costs by 30%.

Obviously YMMV and the savings are going to be dependent on each project, but we've had enough solutions that we've had on-prem that we've shifted to AWS and show substantial cost savings that the business is comfortable with our taking new projects and just starting out in AWS.


I'm sorry, there's simply no world in which switching to the cloud, and serverless to boot, results in a 70% cost reduction unless you so grossly over-provisioned your on-prem infrastructure that whoever architected it should no longer be employed. Cloud infrastructure is more expensive across the board, full stop. Unless you're using hollywood math to claim millions in "efficiency gains" I don't buy it.


Millions, and 70% seems high to me, but let's look at some numbers.

What is the cheapest real actual server you can buy, new, with a support contract from a reputable vendor like HP/Dell/etc.? (Assume we're running both the db and web-server on one server.) And what's the cost of sitting it in a reputable colo? And then multiply that by, let's say, two, to get redundancy in a us-west and us-east region. Ideally, we'd want more regions to serve the entire world, but let's just go with two for now.

Let's say $10k for a server w/ support contract, x2, plus $400/mo for colo from Hurricane Electric (https://he.net/colocation.html) in us-west. It's a promo deal, and only in us-west, but let's just go with that price. $400*60 for 5 years (standard lifetime of computer equipment) = $24k. So $34k per server * 2 servers / 60 months ~= $1,130/month amortized. Thankfully, we're not in cloud, so that's the cost whether it's 1 request/second, or 1,000 requests/second.
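
Spelled out, that amortization is (same inputs as above):

    // Re-deriving the on-prem figure; all inputs come from this comment.
    const serverCost = 10_000;     // one server with support contract, USD
    const coloPerMonth = 400;      // colo, USD/month per location
    const months = 60;             // 5-year depreciation
    const perLocation = serverCost + coloPerMonth * months; // 10k + 24k = 34k
    const twoLocations = perLocation * 2;                   // us-west + us-east = 68k
    console.log(Math.round(twoLocations / months));         // ~1133 USD/month, flat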

A Raspberry Pi or old laptop in your basement with one consumer-grade ISP, and not factoring in the cost of electricity does not count here. Those are obviously going to be massively cheaper, but they're not remotely suitable for enterprise.

Meanwhile, Lambda gives you 1M requests and 3.2 million compute-seconds a month for free. This works out to be 0.4 rps (requests/second) or 22 requests / minute if spread out evenly. Which isn't, like, a lot, but you haven't paid a single cent to AWS for this yet. Let's say you want to get to 40 rps (to keep the math easy) or 100M requests over the month. According to the AWS calculator (https://calculator.aws/#/addService/Lambda), this'll run you $120/month, or $120*24 ~= $3k for 2 years.
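
Sanity-checking those request numbers (approximating a month as 30 days):

    const secondsPerMonth = 30 * 24 * 3600;          // 2,592,000
    const freeRequests = 1_000_000;                  // Lambda free tier per month
    console.log(freeRequests / secondsPerMonth);     // ~0.39 rps, i.e. ~22-23/minute
    console.log(40 * secondsPerMonth);               // 103,680,000 ~= 100M requests at 40 rps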

But that doesn't count hosting for your static files with CloudFront, nor does it count the RDS instance backing it, and we'd use AWS Cognito for user-auth since we've accepted AWS vendor lock-in. (Which isn't great, but that's the state of the industry.)

CloudFront: Free Tier 1TB, 10M requests/month. 10 TB/month will run you $1k/month https://calculator.aws/#/addService/CloudFront

RDS: This will vary greatly. A teeny tiny db.m6g.large instance w/ 2 CPUs and 8 GiB of RAM is going to run you $250/month, but a big fat juicy db.r5d.24xlarge will run you closer to $20k/month. https://calculator.aws/#/addService/RDSMySQL

Cognito: With 1,000 active users, this'll be $50/month; 10,000 users is $500/mo, and 100,000 users is $5k/month.

Between Lambda, CloudFront, RDS, and Cognito you could easily blow past that $1,130/month estimate for a proper colo'd pair of servers, if your service gets at all popular. But if you stick within the free tier for Lambda and Cloudfront, then you're only looking at RDS and Cognito costs, which can vary greatly. Anywhere from $500/month to $26k/month, or more.

Except to get the same service in a colo as a $26k/month AWS bill, you'd need to spend way more and have much more than the single server in a rack that I started with. Thankfully, with something like Equinix Smart Hands, no one has to drive to the colo to futz with a server when it develops a bad hard drive.

Graph it out for your particular use case which one is better, but cloud infrastructure has the benefit of opex vs capex, as well as the opportunity cost of time. AWS Amplify will let you stand up a whole platform hacking on a Saturday before Dell can even quote you a price on a server or Equinix can answer the phone.

I also did reject a raspberry pi or old laptop server early on, but that's not to be underestimated. If I've already got some server running at home for my house, running a service on that, fronted by Cloudflare can take you quite far.

It really just depends on your use case.


Your entire premise is based on a pair of servers - we're talking about fortune 500s. I'll ignore for a second the fact that $20k/month in server spend for a pair of servers to be database+web is grossly overpriced, even at LIST, which no fortune 500 ever comes close to paying.

All of the "and you don't get all these other services" is just a red herring because, again, we're talking about a fortune 500 that isn't building their services on a pair of servers. They buy racks at a time, at minimum, and unless they're grossly oversizing what they're buying, will have better performance per-dollar all-in, including the people cost and the surrounding infrastructure cost, to say nothing of the egress bandwidth because you know, their customers aren't just other AWS servers...


I'm not sure where you're getting $20k/month for on-prem from. The number in my post for on-prem for a pair of servers, which is grossly underestimated, was $1,130/mo, amortized. Still, $1M in servers, over 5 years is $200k/yr, or $16k/month, not including hosting costs for that size of a system. Equinix is the gold standard, but they're what one would call expensive.

Mind throwing out some numbers you think are realistic for "racks at a time"?


Additionally, you need software to run on-prem. Rare is the application that is simply an executable running on a server without any dependencies on things such as databases. That software requires licensing, and there's always an ongoing annual tail that will be charged. On AWS all that licensing and support is rolled into their pricing. That turns out to be a substantial savings, along with all the other things you've mentioned.


I have direct multi-year experience where an AWS heavy client (with a strong tie to Amazon) decided to shift course because AWS spend went nearly vertical despite the velocity of features delivered, and moved to migrate all logic/data into other enterprise systems where possible.

So for the first few months, it was moving all business logic to lambdas/RDS then the next year, moving it all back elsewhere - not necessarily on-prem, but anywhere except AWS.

So I'd expect your cloud team to end up looking very much like the infra group once cost realities become involved. It's possible your volumes/costs are much lower for the features you build - in that case, kudos for designing stuff that is perf-aware - but sometimes you don't control volume, and that's a cost risk.


Read this first: https://www.inkandswitch.com/local-first/

> Anyone who's worked inside a large organization knows the cloud is the solution to be able to get things done and deliver value to the business.

Local first is focused on end consumer experiences and is orthogonal to the on-prem vs cloud argument; ideally there wouldn't even be a server. @pvh and the i&s team have done a great job threading the needle in articulating the ideology. At a high level it is advocating for products that can provide the benefits of the cloud without being locked in or turning your phone into a paperweight if it doesn't have internet access.


The only local first we do these days is iOS apps. We develop those for our users who work outside the office in locations having spotty cell coverage.


On-prem thankfully is changing quickly. It used to be boutique, every shop kludging together their own plan, cobbling together an assorted mix of vendor and open source. There were few macro patterns.

The major change in the last 10 years is that there's now a very stable & general open-source cloud platform that covers a much wider range of the stack than what we had before, and which has achieved a critical mass of popularity.

On-prem is still hard. Having an organization that isn't crufty & slow, where there aren't forms in triplicate & long waits for every request, is still hard. But it's gotten a lot easier to bring up, and to let users do some self service. At small scale, having a small handful of clusters & letting users run semi-free, mostly using CI/CD, is quite attainable. Figuring out how to scale up to serving a mid or larger size org is still quite a challenge.


Yeah, I agree. I think cloud was a great forcing-function to make internal infra groups raise their game or be replaced.

However, I do worry that once people have fully migrated to the cloud, the vendors will have a captive audience and will start ramping up the costs.


> Anyone who's worked inside a large organization knows the cloud is the solution to be able to get things done and deliver value to the business.

Nah. Multiple F500s to include manufacturing, railroad, and aircraft. Pick an industry, been around it.

3 for 3 -- large cloud rollouts clobbered the IT team and blew OpEx out of the water. Saw VPs catch an axe for it, and at least one CTO get nailed with embezzlement accusations due to a "cloud consulting" company he had ties to, and who managed to get a no-bid contract for north of 400k.

Never been anywhere that it wasn't a kind-of-clusterfuck, and the only answers in reply I've seen were No True Scotsman-type "you're just not embracing the cloud and its changes to processes bla bla bla".


Curious -- in this scenario, who is doing security, governance, compliance, observability, etc...? You are probably masking a lot of benefits of a mature and competent IT team. ...or you are assuming A LOT of risk allowing velocity-driven software engineering teams to run amok.


My previous employer is transitioning or has already transitioned mostly to the cloud. The answer to your question is "the same teams that were doing it before, but with cloud guardrails instead of hacky bespoke solutions".

AWS IAM is baked into every single product natively. It isn't perfect and their JSON dialect is annoying at times, but having granular RBAC for storage, compute, ops, network in a single language is incredible for security.

And using IaC, you can put guardrails on specific tasks that IT does often. Manual reviews become automated.
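
As a taste of that single policy grammar (a made-up read-only example; the bucket name is a placeholder):

    // The same Effect/Action/Resource shape covers storage, compute, network, etc.
    const readOnlyBucketPolicy = {
      Version: "2012-10-17",
      Statement: [
        {
          Effect: "Allow",
          Action: ["s3:GetObject", "s3:ListBucket"],
          Resource: [
            "arn:aws:s3:::example-bucket",   // ListBucket applies to the bucket
            "arn:aws:s3:::example-bucket/*", // GetObject applies to its objects
          ],
        },
      ],
    };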

It is a ton of conversion and up front work, but there are upsides.

And then of course there is the instant global reliability, where a lot of formerly complicated sysops becomes automated as well.

Final thought: other than the hardware abstraction, everything I talked about re: IAM could be done with a local software stack, if it existed.


" the same teams that were doing it before, but with cloud guardrails instead of hacky bespoke solutions "

You do realize that "cloud guardrails" often started out life as "hacky bespoke solutions". You are assuming more business risk than is necessary.


As someone who did it for two years, I know not everything is perfect. But the tooling, monitoring, automation, orchestration, etc. becomes a lot easier when there are 4-5 toolsets vs. dozens.

It's like taking an ops support team that is using perl, java, php, python, bash, ksh running on RHEL5 and HP-UX and getting everyone on RHEL8, terraform and Go.


At least with on-prem, you can talk to the infrastructure team's management to negotiate adding currently non-supported features.

I was hired at Amazon 8 years ago to implement a couple of features. One of them was difficult because it required communication / buy-in from multiple two pizza teams. By the time I left it was stalled. Three years ago I randomly started at an AWS customer who needed that feature as they had followed the AWS conventional wisdom for setting things up and architected themselves into a corner. I was able to craft some workarounds but they're hacky.

AWS has been promising us this feature "next quarter" for the last 3 years.

If you look around, you can find a number of weird corner-cases where AWS does not make it easy to do the simple thing you want to do, but you can hack around it.

We went looking at GCP and Azure and there are similar issues, though in different parts of the infrastructure.

Not to mention each of them changes their interface frequently enough to be annoying. Thankfully, backwards-incompatible changes are not at all the norm.

My cloud infrastructure is replete with architectural warts to avoid quirks we didn't know existed until after we deployed. My on-prem components, though managed by a different team, do what we want them to do because the team who built them asked us what we wanted before building it.

AWS weirdness accounts for about 85% of my time. On-prem weirdness for 5%. But yes, local-first is obviously a non-starter.


Out of curiosity, which particular edge case was this?

I've hit multiple oddities with Cognito (only supporting custom claims in the id token) and Lambda (consuming SQS messages without invoking the function when run at low concurrency).

So there are strange edge cases. Today RDS and ECS are the services of choice; I wonder where I'll be in 12 months.


Our org decided to take the worst of both worlds; now we have "IS requests to (I shit you not) create a branch in a repo" level of nonsense. The cliques with their turfs still exist, the hoops are still there to jump through, and the "cloud tech" is just absorbed into the corporate ineptitude mindset.


The other nice thing about primarily cloud infrastructures is, as an infra/ops guy I can build tooling and abstractions over it to allow developer and BI teams to go much faster without my help at all. An on-prem solution can allow this, but the fact that so much of the complexity is managed by the cloud provider makes it a lot faster and easier. A feature that could take months to roll out is usually offered by AWS for a price tag that's easy for the business to understand.

My one irritation with stuff like AWS is that a lot of it is needlessly esoteric, to the point where I think they make their docs bad on purpose to get you to buy their enterprise support packages.


Yup. Most complaints about the cloud are from people who don't understand how to use it. They have been left behind and haven't reskilled. They call it "too complex". In reality, using AWS is 10000x better than some custom infrastructure.


You have to be able to beat the cloud locally to make doing so make sense. AWS et al. set the bar and it’s difficult to beat them with your local infrastructure.


If the thing you are trying to do isn't very complex, then beating the cloud is not that difficult. Many small companies just don't need it, and moving to it adds cost and complexity. For what?


They have 10000 engineers working 50 hours a week to keep their cloud running. The thing works on the back of very hard work from very smart people.


You don't need that much to have a cloud with several 9's of uptime.

If you've never done the work or been around it, the difficulty becomes a bit mythologized.


Sure, but to a certain extent, that’s survivorship bias.

It’s very easy to run something with several 8’s of uptime if you’re not careful.

AWS has good patterns to guide you into the pit of success.


It all comes down to responsibility. In your example, you're responsible for the full stack including infrastructure (I'm assuming, if not, I feel bad for your infrastructure team). This model works for some developer teams and not for others, either due to lack of expertise, free cycles, or organizational pressure to have developers focus on code only.

Define your responsibility model first, then your infrastructure consumption patterns.


You're correct, the team is taking on all that responsibility - which is another reason we use serverless architectures. We have no intention of managing servers, provisioning servers, patching servers, etc. The whole idea of compute has been completely abstracted.


At my last org, the exec reaction to the cloud was to say we should build our own, AWS-compatible, in our own DC's.

Yes, really.


I'm actually shocked that aside from S3 clones, no piece of software or open source project has attempted to replicate the software ecosystem of the cloud as it pertains to APIs and IAM.


Oh, one other thing: this was in 2019.


Cool, cool. Now it sounds like you're doing two jobs for the price of one.


I see you're sold so I won't try to change your mind, but be aware your opinion is far from universal. I'd rather eat broken glass than work with proprietary cloud technologies.


Yeah, in addition to the recurring revenue aspect a major drive of cloud adoption is escaping your IT department.

This suggests that the way corporations do IT is radically wrong and actively harmful.


I think it points to IT is much harder to do at scale than most people realize. There's a big difference between running a datacenter and running a server closet.

Companies utilize HR services, payroll services, and legal services, amongst many other services, because that's not in their wheelhouse or a part of their core competency, so why wouldn't they utilize cloud platforms as well? Like I said, managing and running multiple data centers with high availability and DR capability isn't easy, or cheap.


The entire structure is not value oriented.

The technical people want to solve technical problems and don't give a fuck about the company mission. The immediate management wants to improve the measurable recurrent tasks and doesn't even think about supporting anything different. The top management wants to cut jobs, minimize the need for competency, cut costs, minimize flexibility, and leave no time to think about non-recurrent tasks, because IT is a cost center.

The result is that all of those problems you point get solved. That's not the issue. But nothing of value is allowed to exist.


Probably 70% that and 30% cowboys who think security and policy rules are for people who make mistakes.


If you think your IT dept is bad, wait until you see how little Amazon or Azure gives a shit.


I have a hard time squaring the push for this with the rise of the embedded VS Code in GitHub to edit in your browser. Actually, really just the damned browser. It has been both a godsend for software delivery and by far the biggest mistake/roadblock in building local software.

I remember in college I commented that I used a web browser based email provider because it was easily accessible, and one of the old guard chided that telnet was more accessible. I honestly felt properly chided and didn't understand why I was more inclined to think the web would be available. Fast forward to today, and I don't know many people at all that don't use a browser based email system. We certainly exist, but not in meaningful numbers.

Then there is the sheer idiocy of keeping up with home storage options. It isn't that a terabyte drive is expensive. It's that keeping it indexed with the latest search-friendly media is. Why do that when you can offload the idea to someone else? (There are certainly reasons, but they are niche.)

Worse, we're a generation or three removed from people that built giant media collections that become liabilities more than anything else. I literally have a stack of records behind me with no record player. I have a few CDs that aren't available on streaming platforms behind me, and I don't really have a CD player anymore. Mountains of books that make me happy to have, but I'd be a liar if I claimed they were wise purchases. Most, if I'm honest, were hoarder indulgences that I should have kept closer tabs on by using a library. In the modern world, I can donate to authors easily enough.

So, I'd love for some of this story to be true. I just don't see it.


Is a physical media collection any more of a liability than a streaming service? If whoever you're paying wants to double their prices, throw in ads, pull your favourite series, replace your favourite book with one that's been "updated for modern audiences", ban you because they think your doorbell is racist, ban you because they automatically detected an infringing performance of 4'33" in your cloud storage or stop supporting your device because they want you to rent their own, they can say "We have altered the deal, pray we do not alter it any further." and there's nothing you can do.

Trusting your data or anything you care about to someone else's computer is the real liability.


For different definitions of "liability?" Losing access to media sorta sucks, but is honestly not that big of a deal for the vast majority of it folks have.

Consider that the vast majority of consumed media forever has been radio and broadcast/cable. And until very recently, you either saw the show live, or you missed it. Really isn't much different and you don't typically feel a loss that you can't watch last year's show.

Same goes for newspapers or any other physical thing you can get delivered. I have some magazines that I do keep in good condition as they made me happy to have them on a book shelf. I cannot justify that in any way that doesn't ultimately lead to hoarding, though. I think these should be available and preserved, but I see little benefit in having everyone preserve their own movie collection.

To get sad on it, go look at any estate sale and see how many VHS tapes many older generations collected.


Who is doing the preservation in your world if not 'everyone'? Every individual party has things they'd rather not exist, so the only way to ensure media is preserved is to just make lots of copies among lots of parties.


I appreciate the idea, but I assure you that the vast majority of media that people hold on to are tossed when they die.

To that end, I'm very much in favor of enabling and strengthening libraries.

Note that I'm also not against letting people build up these collections. I don't share in the romanticizing of building up giant collections of media that you still don't own. Yes, you can sell the media to someone else. No, you don't have any real ownership of the story or contents. And with how hard preserving digital media actually is, few are doing a good job preserving those.


This is how things worked 15 years ago, before the “cloud” hype took off. In my opinion, things worked great. You could get stuff done.

I’m intrigued that there’s a “movement”. I suppose that’s what you need to get news articles and promote change. What’s old is new.


What I mostly remember and encounter with on-premise datacenters are the following pains:

- Small selection of services (databases, queues, ...). If a service was not yet in house, brace yourself for a year of meetings, enterprise contracts, security audits, ...

- Overprovisioning of everything as it was a weekly to monthly process to acquire new resources

- Lots of half-assed documentation with many manual steps being missed. Sometimes the DEV environment was so different from the PROD environment you couldn't reliably trust deployments.

- Lots of outdated software and servers with tooling that promised to ease all of that while delivering none of it.

- Different teams for different components, all with their own procedures, zero APIs, and you hoping that the one guy who actually helps you is not on holiday.

- Very strange limitations because all the different vendors had their own set of restrictions and zero incentive to adopt another vendor's solution.


It's interesting how scarcity of work experience in a technology can drive up advertised job opportunities which in turn drives technology adoption as everyone seeks to boost their resume with the in demand skills.


Things did not work great; you should have seen the bad practices around CI/CD, security, etc.: devs SSHing as root into a Linux box, 1000 days of uptime, no patches...


I briefly worked for a medium-sized web company (millions of registered users, hundreds of employees, pretty large infrastructure). It was hosted on AWS. I would regularly SSH into production instances that had 1000 days of uptime, no patches, OS releases from 5+ years ago, etc.

Oh yeah, they were still running Python 2 for parts of the main application. It was literally like traveling back a decade. I found dependencies that hadn't been updated in 10 years.


Exactly the same things now happen in your cloud provider. Now you just don't see them.


You might see them. But it's far easier for a dev team to take control of the ops side in the cloud with IaC, APIs, and automated tooling like Terraform. It's still very hard to find companies that do on-prem, API-first.


You're missing the point. Your cloud provider can do things that you don't see and can't control. If you think all cloud providers' employees follow best security practices because they told you so, then good luck.


The cloud is DRM. Its popularity is primarily due to the ability to conveniently attach a recurring revenue model and decisively prevent piracy.

People expect local software to be free or very cheap, but in reality complex software applications with good UI/UX are extremely expensive to produce. The UI/UX aspects are often far more costly and time consuming than the core of the software. Basically computers are intrinsically hard to use and it takes tremendous effort to pound them into a shape that non-computer-experts can deal with without pain.

Lack of an economic model is why local software is nearly dead. Local is better in virtually every way except paying the developers.

You get what you pay for. If you won't pay for local software but will rent software in the cloud, you will only get cloud software.


The comment section seems to be going in the direction of On-Prem vs. Cloud, while the article is about the local machine of the user. As in: instead of using Google Docs to collaborate, we each use a document editor on our local machines and then sync without going through a middleman.


It's a bit orthogonal, but I think Derek Sivers' "tech independence" is relevant here.

https://sive.rs/ti


Btw, Lotus Notes did exactly this already in the 80s: apps based on local database replicas. CouchDB comes from an ex-Lotus dev (Damien Katz) who, inspired by this, created a peer-to-peer database with similar functionality. Thus I think it would still be a good fit for this kind of local-first application.


I've installed CouchDB and an app I made on both my local desktop PC (a Mac Mini) and my app server (on DigitalOcean), and it's pretty sweet.

I've also installed both the app and CouchDB on a Raspberry Pi in my office and set it up to sync with the DigitalOcean server. This lets me run the app on all the devices in my office even when the internet is not accessible.

In both scenarios the local app is faster and since syncing runs in the background there's no lag when using the app at all.
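For anyone curious, the sync setup really is only a few lines. A minimal sketch using PouchDB, which speaks CouchDB's replication protocol (the database name and server URL are placeholders):

    import PouchDB from 'pouchdb'; // npm install pouchdb

    // The local database lives on this device and works fully offline.
    const local = new PouchDB('notes');

    // A remote CouchDB instance, e.g. a DigitalOcean droplet (URL is a placeholder).
    const remote = new PouchDB('https://couch.example.com:5984/notes');

    // Continuous, bidirectional background sync that retries when the network returns.
    local.sync(remote, { live: true, retry: true })
      .on('change', (info) => console.log('synced a change:', info.direction))
      .on('error', (err) => console.error('sync error:', err));

    // Reads and writes always hit the local database first, so the UI never
    // blocks on the network.
    local.put({ _id: 'note:1', text: 'hello from the office' }).catch(console.error);

The same code runs against the Raspberry Pi, the droplet, or both; replication doesn't care which peer is which.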


I like the term local-first software and I think p2p collaboration (think: protocols other than HTTP, e.g. to prevent nation-wide censorship or decentralized / synchronized via local means during emergencies) and e2e encryption are amazing features. Possibly increased performance for local sync is a nice benefit as well.

That said, the article and most of the comments here somehow talk about or even compare to The Cloud and I don't think that is particularly relevant.

The cloud has nothing to do with companies owning all the community data (user relationships, crowdsourced information). Facebook, Twitter, Reddit, etc. are not "the cloud".

The cloud has little to do with data privacy concerns. Self-hosting exists and it is not a problem of trust in infrastructure. Self-hosting on AWS should be considered safe for all but the most extreme cases (legally forbidden content?)

The cloud has little to do with proprietary data formats. It is absolutely possible to build a SaaS with proper import/export protocols. Financial incentives are misaligned, not technical.

An example is excalidraw (https://excalidraw.com/). AFAIK they don't use CRDTs but they are what I would consider local-first.

Here, local-first adds the features mentioned above. In addition to that it is self-hostable from a small local raspberry pi to a proper server in someone else's data center / "The Cloud".

I would propose more use of the cloud for something like this.

If self-hosting became easier, then perhaps more people would care about open data (access & standardized formats). A managed, serverless, on-demand model seems perfect and could enable non-technical people to host their own collaborative whiteboards, photo galleries, or chatrooms for family & friends.


I don't care as long as I control my data and leverage the collaboration opportunities that the cloud gives us. That means I want real files with documented formats on my real disk on my real computer and to be able to work with them with other people. And I need to be able to work offline for a couple of weeks at a time.

At the moment the only real product on the market which comes close to this is O365. I can literally stop paying for it, disable OneDrive, and open all my documents with LibreOffice tomorrow if I desire. The same is not true for Google Docs and iCloud, which have complex exit strategies.


the cloud is a failure of the internet

originally we were all going to be able to host our own stuff in our closets, the 'intelligence' goes in the edges.

then the industrialists went full-cloud on this idea. and the 'intelligence' got concentrated in a few gigantic industrial warehouses.

but it sure feels more orderly this way. why spread the load when you can focus it and centralize it?


> originally we were all going to be able to host our own stuff in our closets, the 'intelligence' goes in the edges.

This would have required fiber to the home with symmetric upload and ipv6 and ISPs not blocking all the ports.

In the US, the vast majority of people still only have access to a crappy coaxial cable broadband connection, where they allocate a few Mb/s upload to a whole neighborhood.

The lack of that basic infrastructure and demand for always connected devices meant hosting at home was never an option for the broad populace, and so the market never had an opportunity to deliver those solutions.


I've lived in a country where a symmetric Ethernet connection has been the norm for ~15 years (yes, 15 years ago it was something like 10/10 Mbit/s, and now it is 100/100 Mbit/s or more), and a public, unfiltered, static IPv4 address costs about +$2/month as an option.

Yes, that is for a big city, but even small ones have something comparable, maybe slower, like "a big city 3 years ago".

The typical ISP firewall for these IPs amounts to "Port 25 is closed". That's all.

One problem: you typically cannot get a good PTR record; your reverse-resolve will be something like "ip12-45-67.pool34.isp.tld", which is a pity for SMTP but irrelevant for all other protocols.

Unfortunately, this country has started a bloody war, and now I'm living in a civilized first-world country with a crappy DOCSIS connection and without a static IP :(


> fiber to the home

I used to self-host while I lived in an apartment with a proper Ethernet connection. Now I have GPON, with all the implications vis-à-vis a crappy ISP-provided router and shitty service due to their monopoly position, so no real external access for me.

My stuff is now tunneled through a VPS; it works fine, but I’m still salty.


This is the norm in Sweden, aside from Port 25, which you shouldn't be using anyway.


Why shouldn't you be using port 25? You need that if you want to receive email on your own mail server.


25 is plaintext.

465 is TLS, and is not blocked.


The connection is, but nobody’s using it to host local servers.


*Nobody?*

ahem; https://git.drk.sc is hosted in my home office.


I'm terrified. I'm someone who used to learn skills but now I learn products.

I am not sure I can get to the end safely if everything in my professional toolbox is something I rent.


It's also a failure/the greed of ISPs, who cut upload speed from average consumers so they can sell it to datacenters and firms at a markup, making sure that a local server will always be slower than a cloud instance unless you pay extra.


If everything is in your closet, how do you intend to provide reliable auto-scaling, redundancy, backups, etc.? I have friends in Ukraine, where shit literally blows up on a daily basis. And if this example is too extreme for you, I also have friends in Germany, where overflowing rivers remove whole houses once a year. There are services that need to be reachable all the time, and you simply need replication in different geographical places to achieve that. For me, decentralization = resilience.


Someone else's closet.

Maybe you have an agreement with another friend and a set of scripts that do this.

Maybe you're using some kind of decentralized backup system where you have a big blob of encrypted, de-duplicated Other People's Backups in exchange for all your stuff being treated the same.

Or maybe you just don't provide all of this, maybe the entire network expects your server to be fragile and is running everything in a virtual machine to start with. You just have a box in the closet that runs a node in the mesh, with however much power and storage you want to throw at it, and this grants you x% of those numbers from ClosetMesh. It's pretty cheap and easy to have enough ClosetMesh storage to do something useful in a box about the size of the one your phone came in.

There is probably at least one SF novel's worth of backstory between where we are now and the world where everything runs on top of ClosetMesh. Do you want to be part of it? There's a bunch of projects trying to make this sort of thing work.


Services that need to be reachable all the time tend to not be connected to the internet.


i don't?

I just wanna have physical ownership of the photos I upload into the cloud

if my shit goes down, only me is to blame and only me has a problem

if my house gets washed away by a river, I have bigger problems than streaming music from my own personal server and backing up my own photos

in a more ideal world, I have some replication on 'friends' servers. but nobody runs their own shit.


Managed bare metal hosting?


Does that actually mitigate the issues people have with Cloud stuff though? It's still "someone else's computer" at the end of the day. You're still having to deal with the eccentricities of whatever management layer they have on top of the computer to be able to rent it out to you. The fact that you're not buying it from Google or Amazon doesn't really make it less "cloud" in the "it's not local-first" sense.


Sure.

Bittorrent-like tech has answers for you here.


Why not some of both?

... why does one exclude the other?

I used to host my own e-mail. I decided that I couldn't give it the time and effort it deserved, and moved myself and my users to a provider.

The best option is choice. Not forcing one or the other.


It's working as intended. Conway's Law applied to the global context.


What cloud? The $5 VPS with a basic linux on it? Or AWS with so many services that you will never be able to get out of without discarding all previous work?

It's also pretty sad that all the speed advances in the past 10 years have been squandered on ... running javascript.


The work Ink & Switch (unaffiliated) do has been an inspiration to me with regard to local-first and decentralized software: https://www.inkandswitch.com

They have a quasi-manifesto on local-first (https://www.inkandswitch.com/local-first/) and have published the best rich text CRDT around, Peritext: https://www.inkandswitch.com/peritext/

Lots of interesting work happening in this space.


The cloud may be a prison, but its infra services, specifically EC2 + Auto Scaling Groups, S3, SQS, and DDB, make it a Four-Seasons-like prison. So, what specifically can we do to make local-first at least equally productive for a mid-sized company like Uber, for at least the following simple (which doesn't mean easy) requirements:

- Anyone in the company can launch and configure an autoscaled VM cluster. It's surprisingly hard to create a system like EC2; Uber failed to do so even though they tried, and a number of multi-billion-dollar internet companies tried and also failed.

- Anyone can store any amount of data and scan it with practically unlimited bandwidth for analytics. In fact, object storage in the cloud has been so good that I have failed to see a thriving open-source community actively working on an alternative.

- A database that just works and can handle a wide range of workloads, including seemingly anti-patterns like using the database as a large-scale distributed queue. Yes, I'm talking about DDB.

- A simple queue that supports the competing-consumer pattern and scales practically infinitely. Per Tim Bray, SQS is a queue that "is almost perfect".


Honestly, I don't really care for software that is "local" to the machine as its primary path. What I like is software that provides a self-hostable backend that is as close to identical to the cloud-hosted backend as possible. Allow me to configure things to point to my local network (or, better yet, use SRV records to discover it) so I can still use the mobile app, desktop app, and browser app, with live collaboration and sync between them, but without relying on your cloud service. This isn't completely trivial to do, but it's also not that hard, as long as you design your backend architecture to run your own services on dumb IaaS instead of relying on PaaS components.
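The SRV lookup itself is nearly free to support, by the way. A sketch with Node's built-in resolver (the record name and domain are hypothetical):

    import { promises as dns } from 'node:dns';

    // Discover a self-hosted backend from a DNS SRV record instead of a
    // hardcoded cloud URL. The record is hypothetical; you'd publish it in
    // your own zone, e.g.:
    //   _myapp._tcp.home.example.com. 3600 IN SRV 10 5 8443 backend.home.example.com.
    async function discoverBackend(): Promise<string> {
      const records = await dns.resolveSrv('_myapp._tcp.home.example.com');
      // Lowest priority number wins; higher weight breaks ties.
      records.sort((a, b) => a.priority - b.priority || b.weight - a.weight);
      const { name, port } = records[0];
      return `https://${name}:${port}`;
    }

    discoverBackend()
      .then((url) => console.log('using self-hosted backend at', url))
      .catch(() => console.log('no SRV record; falling back to the hosted service'));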

It's one of the reasons I really like certain companies: they make it possible for me to still provide my family the ease of use that comes with centrally managed services and mobile apps, without the privacy and cost tradeoffs of abandoning control to the cloud. Something like OwnCloud, but for everything, would be really fantastic.


The difficulty with cloudless apps is that if you want to sync between devices, you still need somewhere to fetch your data from. In theory you can have a true end-to-end encrypted peer-to-peer network, but in practice it works much worse than a cloud app with a central server.

CRDTs are awesome, and offline-first applications are much more responsive than the ajaxified status quo. But CRDTs work much better with a hub-and-spokes model, where you know for sure the hub is always online and has sufficient bandwidth. All data can be stored locally by the client, with the cloud service serving only as a distribution channel for end-to-end encrypted data. That's a pragmatic middle road where you still have the convenience of the cloud but not the lock-in.
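To make the hub-and-spokes point concrete, here is roughly what the "dumb hub" model looks like with a CRDT library (Yjs, one popular option; the scenario is invented):

    import * as Y from 'yjs'; // npm install yjs

    // Two devices editing the same note while offline.
    const phone = new Y.Doc();
    const laptop = new Y.Doc();

    phone.getText('note').insert(0, 'milk, ');
    laptop.getText('note').insert(0, 'eggs, ');

    // Each device serializes its changes as an opaque binary update. The hub
    // only relays these blobs, so it could store them end-to-end encrypted
    // without ever understanding the document.
    const fromPhone = Y.encodeStateAsUpdate(phone);
    const fromLaptop = Y.encodeStateAsUpdate(laptop);

    // Updates merge deterministically, in any order, on any peer.
    Y.applyUpdate(phone, fromLaptop);
    Y.applyUpdate(laptop, fromPhone);

    // Both devices converge to the same text.
    console.log(phone.getText('note').toString() === laptop.getText('note').toString()); // true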


That's why I think Hybrid Cloud is where corporations are headed (meaning a mix of public and private clouds and on-prem servers).

You would use your on-prem servers for the core (and predictable) demand, as scaling it takes time and CAPEX.

You use the cloud for elastic demand and innovation.

Once you have the technology layer to orchestrate those things, it should be more effective for corporations, especially those with high computational demand, to have this kind of setup.




Why not use SQLite? It is easily made part of any app. Then all that remains is some synchronization with a database in the cloud.
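A low-tech sketch of what that could look like with better-sqlite3 (the pushToCloud function is hypothetical; you'd point it at whatever sync endpoint you run):

    import Database from 'better-sqlite3'; // npm install better-sqlite3

    const db = new Database('app.db');
    db.exec(`
      CREATE TABLE IF NOT EXISTS todos (
        id TEXT PRIMARY KEY,
        title TEXT NOT NULL,
        updated_at INTEGER NOT NULL,
        dirty INTEGER NOT NULL DEFAULT 1  -- 1 = not yet pushed to the cloud
      )
    `);

    // Writes are local and instant; each one just flags the row as needing sync.
    db.prepare(
      `INSERT INTO todos (id, title, updated_at, dirty) VALUES (?, ?, ?, 1)
       ON CONFLICT(id) DO UPDATE SET
         title = excluded.title, updated_at = excluded.updated_at, dirty = 1`
    ).run('todo-1', 'buy milk', Date.now());

    // Later, when a connection is available, drain the dirty rows.
    async function syncUp(pushToCloud: (rows: unknown[]) => Promise<void>) {
      const rows = db.prepare('SELECT * FROM todos WHERE dirty = 1').all();
      if (rows.length === 0) return;
      await pushToCloud(rows); // hypothetical: POST to your own endpoint
      // A real implementation would guard against rows rewritten mid-push,
      // e.g. by clearing the flag only where updated_at is unchanged.
      const clear = db.prepare('UPDATE todos SET dirty = 0 WHERE id = ?');
      for (const row of rows as { id: string }[]) clear.run(row.id);
    }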


Is this anything more than a research project? I've been monkeying around with p2p sensors streaming data over QUIC in my house, 'cause IoT is just SO BROKEN. Config goes the other way, with self-resolving data structures.

I hacked together a "wide area overlay" using manually configured IPv6 VPN tunnels, but am thinking of grafting BitTorrent on top of it to communicate config info for the VPNs (or QUIC or TLS tunnels, but they're sort of being used as poor man's VPNs, so I'll keep calling them that even though they're operating above layer 3).

Everything I've done is a hack. I should document it so people can taunt me about its hackishness or be inspired by its "elegant re-purposing of existing technologies."


I was reading through the comments after reading the article and realized some didn't take the time. So, TL;DR:

Local-first software is about doing the work on the device at hand, and securely, effortlessly, and serverlessly collaborating on that work using data structures designed to facilitate local-first collaboration. It is a platform shift in software development, and if you are "all-in" on cloud technology, it would behoove you to understand its tenets, if for no other reason than to know "who moved my cheese" in the future.


Best I can do is edge.


Nothing beats Google Photos in speed and search.

Nothing beats cloud for email hosting.

Other things are debatable, but in general nothing beats cloud on ease of use and setup.


Nothing beats them yet...


It is really hard to beat fill in form, push button, get service that is likely better than your on-site staff can provide, especially for small or medium businesses.


Unreadable... Popup after popup


Movement is an exaggeration


i ran a hosting company. a successful one that made money for years. it made me millions of dollars.

i knew it was time to sell when prospective customers literally couldn't believe our prices. they thought we were full of shit, and accused us of being amateurs, fly-by-night scammers. and our prices weren't low - they just weren't insanely inflated like public cloud. we didn't deliberately try to undercut anyone on our quotes. it just started happening one day.

of course we raised our quotes to test the market but then they became too expensive for what we offered. lol. lmao. okay. as you can imagine running infrastructure is something people do not take lightly, so this is an uphill battle all the way. once you have no service or price edge the game is up.

so anyway, long story short, i can take a hint. i'm not nearly as smart as jeff bezos so it was time to gtfo of dodge. luckily with those 0% interest rates we found a buyer and anyone stuck in public cloud hell due to their own incompetence can kiss my ass because we fuckin' told you so.


Sounds like customers were saying "the prices seem suspicious", but what they were really thinking (and maybe they didn't even realize it themselves) is "I don't trust this company because it isn't one of the 5 that run everything else in my life".


I don't get it. Surely there was a middle ground where prices weren't so low customers didn't think you were running a scam, but weren't so high they became "too expensive for what we offered"?

What you're describing doesn't seem to be anything unique to cloud; it's something every business deals with. Figuring out the right market segmentation to deliver cheaper prices to those who are more price-conscious, and more functionality to those who can pay.


> Figuring out the right market segmentation to deliver cheaper prices to those who are more price-conscious, and more functionality to those who can pay.

yeah smart guy, that's what we did for over 10 years. then it became impossible (because it's AMZN and MSFT and GOOG, remember???). then i sold. now i never have to worry about that shit ever again.


No need to be snarky/insulting here. It was just that your explanation purely on prices being either too low or too high seems incomplete. It's not clear why there wasn't a middle ground.

On the other hand, not being able to compete against the scale of major cloud providers makes sense. It's just not clear how that's related to you starting out with prices that were too low.


Can you give a ballpark figure by what factor AWS/Azure TCO exceeded yours?


to keep it simple, think of a 42u rack stuffed to the gills with servers and switches.

that would cost us maybe $10k a month on the margin, and we could resell and manage it for nearly $50k a month. i'm betting amazon could milk it for $200k a month, possibly 2-3x that if it's GPU's.


People that don't like the cloud are generally engineers who were left behind by the paradigm shift. They want development to go back to what they spent a decade learning. And so they complain about complexity or out-of-control cost. The truth is, they aren't effective engineers with the cloud because they haven't learned it in depth. It is clearly 100x better than on-premise, and the boom in tech is partially from productivity gains from the cloud. Infrastructure as Code is a major technological improvement.


The article (and the local-first software) has nothing to do with cloud vs. on-prem. It's about the priorities in developing and running/using your applications: "The word “local” in the name refers to your personal computer. “First” means your computer is prioritized over “someone else’s.”". Local-first apps can still use cloud resources for synchronization or for simplifying their initial configuration, but it's not a focal and critical point of the design and only serves as one of the nodes and can be made optional. You can still get all the benefits of being cloud-enabled, but you're also getting all the benefits of working without any cloud involvement at the cost of adding synchronization and conflict-resolution mechanisms.
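In code, "the cloud as just one optional node" can be as small as this (a sketch using Yjs with its IndexedDB and websocket providers; the sync URL is a placeholder):

    import * as Y from 'yjs';
    import { IndexeddbPersistence } from 'y-indexeddb'; // local persistence in the browser
    import { WebsocketProvider } from 'y-websocket';    // optional sync relay

    const doc = new Y.Doc();

    // First priority: the user's own machine. The app is fully usable with only this.
    new IndexeddbPersistence('my-document', doc);

    // Optional: a sync server, self-hosted or in the cloud; it's just another node.
    new WebsocketProvider('wss://sync.example.com', 'my-document', doc);

    // Edits land locally first and replicate whenever the relay is reachable.
    doc.getText('body').insert(0, 'works offline, syncs when it can');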


Infrastructure as code is indeed a major technological improvement, which is why I use it on-prem lol.


I’ll do you one better, I use IaC techniques to configure VMs at home, on an old workstation I’ve turned into a server.

It’s great for hobbyists, too, and doesn’t require the cloud although I will admit it’s easier to go all the way when resources are totally spin-up-as-needed. But I don’t need to go all the way, I’m just a simple hobbyist.


What flavour?

I'd love to be able to configure a Proxmox Linux VM through Pulumi (or TF if that's your thing). Ansible seems to be the best option, but I've not had enough experience to be competent as yet.
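For the Pulumi route, a minimal sketch of the kind of thing that should work, using @pulumi/command to run steps over SSH rather than a Proxmox-specific provider (host, user, and key path are placeholders):

    import * as command from '@pulumi/command';
    import * as fs from 'node:fs';

    // SSH connection to a VM that already exists in Proxmox (all values are placeholders).
    const connection = {
      host: '192.168.1.50',
      user: 'debian',
      privateKey: fs.readFileSync('/home/me/.ssh/id_ed25519', 'utf8'),
    };

    // Declaratively ensure a package is installed on the box. Pulumi records
    // this resource in state, so re-running `pulumi up` is a no-op until the
    // `create` command changes.
    const installCaddy = new command.remote.Command('install-caddy', {
      connection,
      create: 'sudo apt-get update && sudo apt-get install -y caddy',
    });

    export const output = installCaddy.stdout;

There is also a community Proxmox provider for Pulumi if you want the VMs themselves managed as resources, though I can't vouch for it.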


I don't know what this fluff is doing on HN. They make it sound like the Cloud Mafia forces everyone to use their server farms, instead of offering a product that people willingly choose to use. The rest is some hazy hype of CRDTs, as if their usage magically eliminated editing conflicts. Kleppmann is legit and his book is great, he deserves better than this.


They do force it. They force your non-technical CEO and your non-technical customers to value these stupid cloud servers far too highly. This forces many to embrace cloud tech purely for marketing reasons.


Companies embrace cloud tech because it turns large capital budgets into operating budgets, and prevents dental implant companies or soda distributors from having to figure out how to hire competent sysadmins.


This is the important thing. So many people in tech think of these solutions in terms of tech, the ease of use of the tech, and the cost of the tech.

But the ultimate cost in everything is the cost of labor. On-premise solutions require on-premise administrators, with full-time salaries (who are mostly sitting around waiting for work or problems to arise). It is far simpler and cheaper to contract the work out to a cloud provider and pay hourly for a tech's time. If you are a highly competent sysadmin, you probably aren't working at a dental implant company. So the kinds of people non-tech companies hire are usually going to be lower quality and get paid less, increasing security risks.


Competent sysadmins who don't let the production DNS expire for example [0]? =)

Companies use the cloud because then they don't have to hire a dozen sysadmins to manage their servers, they can focus on building the actual product.

If I can choose between having 12 developers who can manage AWS and having a rotation of 8 sysadmins wrangling in-house servers in three shifts and 4 developers, I'll always choose the one with more developers.

[0] https://news.ycombinator.com/item?id=29873306


Replace all your competent sysadmins with even more expensive cloud architects!


Cloud architectures can and should be relatively simple.

It's also a once-off (kind of), and there are very established patterns to follow.


Almost every company I’ve worked for in the last 10 years wouldn’t have even been founded in the first place without cloud computing.

The value of cloud computing is that it gets business logic in production fast with unlimited flexibility and a reduced dependence on cost center specialists in areas like operations, security, and database administration.

With cloud computing I can deploy my entire stack to a region in another part of the globe with ~10 lines of configuration, terraform apply and it’s done. I’m doing things by myself that I used to rely on multiple other people to handle (like setting up and maintaining databases, networking, firewalls, security tooling, etc).
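For instance, standing the same stack up in another region is just one more provider instance. Sketched here with Pulumi's AWS provider instead of Terraform, same idea (the region and AMI are placeholders):

    import * as aws from '@pulumi/aws';

    // A second provider instance pointed at another region; the same resource
    // definitions can be instantiated against it unchanged.
    const tokyo = new aws.Provider('tokyo', { region: 'ap-northeast-1' });

    // Placeholder AMI id; you'd look up a real one for the region.
    const server = new aws.ec2.Instance('app-server', {
      ami: 'ami-0123456789abcdef0',
      instanceType: 't3.micro',
    }, { provider: tokyo });

    export const publicIp = server.publicIp;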

I also used to waste a bunch of time maintaining vendor products that had nothing to do with the business. I used to have responsibilities like Jira administrator - ick!

The last time I worked for a company that had an on-premise data center I had a months-long email chain where I was told we were basically out of storage in a specific region so I had to wait for the VMs I requested.


While the article is certainly a little "fluffy" (more infotainment than information), the concepts behind local-first are sound; they are a strong and compelling way to build apps.

Quite right, CRDTs are not magic, but they do get you something like 80-90% of the way there for most CRUD-style apps. There is also a bunch of work going on to develop new CRDTs for different use cases.

What's important is that articles like this help us explain to less technical people what local-first is and the advantages it has. They help to sell the concept.

But ultimately Wired is entertainment, isn't it?


The cloud providers that also sold local software (Microsoft, Oracle) push people to the cloud by increasing the price of local software and discounting their cloud offerings (for a while at least).



