It was an interesting decision to make a SaaS out of this solution, though, as I think 99% of problems are not solved by having machines closer to their users.
Fly.io fits that latter category perfectly. Global distribution was generally Too Hard to consider for my projects. Now it isn't.
It’s also pretty common for people running large LAN parties to run a local Steam cache, so that when everyone in the building downloads a game from the tournament list it comes over the local network rather than pulling hundreds of copies from the Internet.
My guess is the cost differences are still there, but the increases in capacity and decreases in overall costs, combined with how hard it is for users to control traffic at that level, mean you would really only see this if you're buying a lot of transit. If you're just a residential customer, it's more likely to show up as congestion on your ISP's oceanic routes rather than an explicit charge.
I do occasionally see hosting operators that will optionally charge more for access to routes that are better but too costly to include in a bundled rate.
Most of the big operators have peering agreements in place, but that doesn't mean every participant has infinite bandwidth. Google Global Cache and Netflix Open Connect appliances go a long way toward reducing costs by avoiding interconnect where possible.
As for this particular post, it's great as always, but I would have liked to see more specifics on how applications might write to the main replica vs their local Redis instance.
You're right, we kind of glossed over how to do that. People usually just keep two Redis connections in their code, something like `regionalRedis` and `globalRedis`. It's cheap to keep Redis connections around.
I can't really think of a better way to handle it. It's kind of a weird problem because not _all_ writes need to go to a certain place, just the writes you deem global.
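A rough sketch of that two-connection pattern in Python (the `regionalRedis`/`globalRedis` idea from the comment above; the routing wrapper, class names, and fake clients here are my own invention, and in a real app both would be ordinary `redis.Redis()` connections):

```python
class RoutedWrites:
    """Keep two Redis-like clients around and pick one per write.

    `regional` and `global_client` can be any objects with a .set()
    method, e.g. two redis.Redis() connections in a real app.
    """

    def __init__(self, regional, global_client):
        self.regional = regional
        self.global_client = global_client

    def set(self, key, value, is_global=False):
        # Only writes the app deems global go to the primary;
        # everything else stays on the local connection.
        client = self.global_client if is_global else self.regional
        client.set(key, value)


# Stand-in clients so the sketch runs without a Redis server.
class FakeRedis(dict):
    def set(self, key, value):
        self[key] = value


regional, global_client = FakeRedis(), FakeRedis()
writes = RoutedWrites(regional, global_client)
writes.set("session:123", "local-only data")        # stays regional
writes.set("feature-flags", "on", is_global=True)   # goes to the primary
```

The nice part is the app decides per-write; there's no magic routing layer, just two cheap connections and an `if`.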
Said another way, I just write my web app code and fly.io handles literally everything else. (I don’t even mess with docker etc. Just my app code and be done)
This is especially important for DBs. What I'd really love is for us to work with DB "owners" to jointly provide managed DBs. One thing I hate about AWS is that they have a lopsided, parasitic relationship with the people who build interesting databases.
I recall in the early 2000s having a personal VPS account with Dreamhost and doing just that, since they managed the OS, database, and Apache/nginx.
It’s amazing how, in many ways, deploying code over the years has become radically harder rather than simpler.
The situation on "shared hosting" was IMO much better in terms of reliability, since customers didn't have root, but these servers were definitely still pets and not cattle.
Basically, these companies are a way to outsource sysadmin labor to sweaty cubicle farms rather than a way to actually reduce the amount of labor that is needed. Arguably the same is true for cloud, but I think in general the cloud paradigm is actually more labor-efficient. As a thought experiment, imagine if AWS tried to serve their current customer base with the techniques of Dreamhost-style hosting companies. They'd need to employ 1000x as many people! And it would still be worse!
Saying “I hit shared hosting limits, I would’ve been better off writing it in a different language and running on entirely different infrastructure” doesn’t really seem like the logical next step.
It’s like saying “I hit the limit of my barebones PostgreSQL server, and instead of getting a bigger instance I should’ve just built everything with NoSQL”.
I did what I’m describing very successfully 10 years ago, and I had very good reasons to make these decisions at the time.
I mentioned python because I ended up re-writing everything in python two or three years later. You’re right that it’s not related to this, other than that trying to run a python app under shared hosting was really difficult.
I moved to an unmanaged VM (this was 2008, first Slicehost and then Linode) since I’d been using Linux as my main home OS for 10 years anyway.
With my current stack I provision a VPS with Forge and deploy the repo with Envoy (zero downtime). It's pretty easy and feels very solid.
So kind of like, the "Netlify" of edge dynamic app-hosting?
Actually, I just looked back, and that was to solve the problem of only being able to write to a single postgres instance of the globally distributed cluster. For just caching, with every instance writeable, that probably wouldn't apply here.
I might have to launch a startup just to get a chance to use them.
You can build a CDN on top but if you just need basic CDN features then you should probably look at something else. Cloudflare allows purging all of the content, or individual URLs, for free through the UI or API. You only need enterprise for the more advanced tag-based purging. You can also look at Fastly which is another configurable CDN. BunnyCDN is also good. Start simple and then move as you need more.
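For what it's worth, single-URL purges on Cloudflare go through the zone `purge_cache` endpoint. A minimal sketch (the zone ID and token are placeholders, and a real script would want error handling; this just builds the request):

```python
import json
import urllib.request

# Placeholders; a real zone ID and API token come from the Cloudflare dashboard.
ZONE_ID = "your-zone-id"
API_TOKEN = "your-api-token"


def purge_urls(urls):
    """Build a purge-by-URL request for Cloudflare's cache purge API."""
    return urllib.request.Request(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/purge_cache",
        data=json.dumps({"files": urls}).encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = purge_urls(["https://example.com/style.css"])
# urllib.request.urlopen(req) would actually send it
```

Sending `{"purge_everything": true}` instead of `{"files": [...]}` clears the whole zone; the tag-based variant is the part gated behind Enterprise.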
Yes, we need more advanced purging functionality. Enterprise seems like the logical next step, but the price increase is hard to swallow. Using something like Bunny/Fastly etc can save us a bundle, but then we're kinda ditching all the other built-in features. I guess that's exactly the Cloudflare play to get you started cheap...
I wasn't aware fly evolved from a CDN. I just saw your comment and the docs mentioning speeding Heroku apps, running nginx proxy, openresty etc, so was curious if it's something worth looking into.
I'm not a Latin expert, but 'is' is not a Latin ending. The closest would be 'es', but then the plural would be Redia or Reda.
Or maybe it'd follow the -eris pattern and be Rederes or Redera.
I wouldn't be surprised if Neo-Latin hobbyists had rules for things like this.
Sorry I was just being pedantic on a Friday afternoon. The rest of the blog post was great and I agree with all your points. I've been advocating for immutable cache keys for years, for the exact reason you mention.
Any reason you run the apps on micro-vms? Why not directly on a container runtime?
> MicroVMs provide strong hardware-virtualization-based security and workload isolation, this allows us to safely run applications from different customers on shared hardware.
Cloudflare is too busy trying to build a unified stack for boiling the entire ocean.
(TLDR: mostly local in the US so far, with sparse presence globally)