
Full disclosure, I work at Fly.io now.

This exact setup is easier on Fly.io - our proxy layer runs in 20 regions worldwide with anycast, so your requests hit the nearest region and TLS is terminated there.

You can also run any Docker container and either pick the regions it runs in, or just set min/max counts and let us start and stop containers in whichever regions have demand, so your deployment follows the sun.
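
A minimal sketch of the flyctl side of that (app name is made up; check "flyctl help" for the current flags):

  # pin the app to specific regions
  flyctl regions set ord syd ams -a my-app

  # or give Fly a min/max and let it place containers where the demand is
  flyctl autoscale balanced min=2 max=10 -a my-app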




Correct me if I am wrong: Fly's anycast has some limitations compared to Global Accelerator (GA), though:

On occasion, it breaks UDP protocols that are "connection oriented" (like QUIC and WireGuard, though both have built-in capabilities to recover).

There is no way to pin traffic to VMs (route / client affinities) or shape traffic.

100+ locations with GA, and two Anycast IPs (in two distinct "zones").

---

Alternatives to Fly on AWS that I know of:

Anycast on AWS without Global Accelerator: S3 buckets with transfer acceleration; (edge-optimized) API Gateway to Lambda / Fargate; S3 + CloudFront.

AWS App Runner + Copilot (comparable to Fly + flyctl) can be geo-routed to the nearest instance via DNS-based load balancing with Route 53 (not anycast, strictly speaking).
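
A rough sketch of the Route 53 side, using latency-based routing (zone ID, domain, and endpoint are placeholders):

  # one latency-routed record per regional App Runner endpoint
  aws route53 change-resource-record-sets \
    --hosted-zone-id Z123EXAMPLE \
    --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
      "Name":"app.example.com","Type":"CNAME","TTL":60,
      "SetIdentifier":"use1","Region":"us-east-1",
      "ResourceRecords":[{"Value":"my-service.us-east-1.awsapprunner.com"}]}}]}'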

---

Fly's killer features (and why we are transitioning to it) are its cost-effectiveness and its almost 'zero-devops' setup.

- Super cheap bandwidth ($2 per 100GB!)

- Free deploys (AppRunner charges for deploys)

- Free monitoring (versus expensive but comprehensive CloudWatch)

- Free orchestration

- Free anycast transit (expensive on aws)

- Cheaper, zero-touch, global cross-region private networking across VMs in the same Fly org (no mucking with transit gateways, NAT gateways, internet gateways, VPC private endpoints, IAM policies...); see the sketch after this list.

- Super cheap and fast disks ($0.15 per GB!)

- Easier HA Postgres and HA Redis setups.

- GA's TCP proxy does not preserve client source IPs/ports (Fly does).
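
On the private-networking point above: within a Fly org, apps reach each other by name over the 6PN private network, with nothing to provision (names below are made up):

  # other apps in the org resolve as <appname>.internal
  fly ssh console -a my-api
  # then, from inside the VM:
  redis-cli -h my-redis.internal -p 6379 ping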


AWS engineer from the container services team here. One small point of clarification on "AppRunner charges for deploys": We only charge for deploys if you are using App Runner's built-in integration to watch your repo, and automatically build/rebuild your container image from source code.

This is not a required feature for App Runner to function, though. For example, if you are using Copilot with App Runner, you can drive the build and release from your local dev machine, so there is no extra deployment charge beyond what it costs in electricity to build your container on your own laptop. You only get charged for App Runner's deployment automation when you are using it as a GitHub Actions / Jenkins replacement to do the Docker image build on the server side.


For the most part, yes, everything makes sense. There are some things worth noting though:

> On occasion, it breaks UDP protocols that are "connection oriented" (like QUIC and WireGuard, though both have built-in capabilities to recover).

Yes and no: QUIC and WireGuard do work consistently; it's not that they break. But Fly doesn't currently offer UDP flow hashing, sessions, or pinning.

> There is no way to pin traffic to VMs (route / client affinities) or shape traffic.

No, but the system is built to obviate the need for this: you can choose which regions your app runs in and Fly will balance them for you based on the strategy you choose. I'm not sure what benefit is being missed by not having it; if there's a clear benefit that isn't achievable under the current design, we can make a case for building it.

> 100+ locations with GA, and two Anycast IPs (in two distinct "zones").

Fly lets you allocate and buy more Anycast IPs, so more than two should be possible. Regarding the 100+ locations, that's technically true but irrelevant: GA doesn't serve requests itself, so they still need to hit apps deployed in one of the AWS regions (usually provisioned and managed separately). With Fly, your app runs in the edge regions pretty much automatically.
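
For reference, extra Anycast IPs are allocated per app (a sketch; app name made up):

  flyctl ips list -a my-app
  flyctl ips allocate-v4 -a my-app
  flyctl ips allocate-v6 -a my-app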

The closest alternative to Fly on AWS would be one Global Accelerator pointing at an Application Load Balancer in each of ~20 regions, each backed by one or more Fargate containers, maybe? You'd also need Lambda triggers to turn traffic to a region off when it isn't worth running containers there, and back on again when it is.
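
Very roughly, the Global Accelerator half of that wiring would look something like this (all ARNs are placeholders, and the per-region ALB/Fargate stacks still have to exist separately):

  aws globalaccelerator create-accelerator --name my-edge --ip-address-type IPV4
  aws globalaccelerator create-listener \
    --accelerator-arn arn:aws:globalaccelerator::123456789012:accelerator/EXAMPLE \
    --protocol TCP --port-ranges FromPort=443,ToPort=443
  # one endpoint group per region, pointing at that region's ALB; repeat ~20 times
  aws globalaccelerator create-endpoint-group \
    --listener-arn arn:aws:globalaccelerator::123456789012:accelerator/EXAMPLE/listener/EXAMPLE \
    --endpoint-group-region eu-west-1 \
    --endpoint-configurations EndpointId=arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/my-alb/EXAMPLE,Weight=100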


Could one run Caddy and do their own TLS termination?


Yes, it’s possible to ask for the raw connection to be passed to the application and self-manage TLS.
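
Roughly, in fly.toml you expose the port with no handlers so the raw TCP stream reaches the app (Caddy, in this case), which then terminates TLS itself. A sketch, not copied from the docs:

  [[services]]
    internal_port = 443
    protocol = "tcp"

    # no "tls"/"http" handlers: the raw connection is passed straight through
    [[services.ports]]
      port = 443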


> Super cheap bandwidth ($2 per 100GB!)

That’s only “super cheap” if you’re comparing to AWS’s outright highway robbery bandwidth pricing.
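
Back-of-envelope, assuming AWS's standard ~$0.09/GB internet-egress tier:

  Fly:  $2 / 100 GB   = $0.02 per GB
  AWS:  ~$0.09 per GB = roughly 4-5x more per GB out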


Sure, there are lots of VPS and dedicated server providers that offer lots of bandwidth, but they're not playing the same game as the big cloud providers, or even fly.io, when it comes to auto-scaling, self-healing, multi-data-center deployments.


I really like Fly and would love to move some side-project workloads to it; the only thing holding me back is the Postgres product, which seems a little bit 'not ready for production'. I'm mostly referring to point-in-time recovery and ease of backup restoration.

The product looks too good to be true, and when you dig into it a little deeper, it seems like it isn't quite 100% there.

Amazon RDS is something that I really trust, but I didn't get the same vibe looking at Fly Postgres.


Our Postgres is not an RDS replacement. Lots of devs use RDS with Fly. In fact, Postgres on Fly is just a normal Fly app that you can run yourself: https://github.com/fly-apps/postgres-ha
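
If you do want to run it, the cluster comes up like any other Fly app (a sketch; flags may have changed, see "flyctl help postgres"):

  # provision a Postgres cluster app and attach it to an existing app
  flyctl postgres create --name my-pg --region ord
  flyctl postgres attach --postgres-app my-pg -a my-api   # sets DATABASE_URL on my-api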

Ultimately, we think devs are better off if managed database services come from companies that specialize in those DBs. First-party managed DBs trend toward mediocre; all the interesting Postgres features come from Heroku/Crunchy/Timescale/Supabase.

So we're "saving" managed Postgres for one of those folks. For the most part, they're more interested in giving AWS money because very large potential customers do. At some point, though, we'll be big enough to be attractive to DB companies.


I mentioned a very similar thing to them in this community post (May 18th): https://community.fly.io/t/fly-with-a-managed-database/1493

Their response was this:

> Our goal in life is to solve managed postgres in the next few months. Either us managing, or with someone like Crunchy Data running on top of Fly. So "hold tight" might also be good advice.


I have a few toy apps on Fly and while I do like the service, it has been flaky. E.g. 12 hours ago my app raised a number of errors as it got disconnected from the Postgres database.

This isn't a show stopper for me as they're toys, but I would be somewhat wary of moving critical production apps to it just yet. (Also, everything aside from PG has been rock solid for me.)


It's also way cheaper: if your container is efficient, you'll probably pay less than the cost of Global Accelerator alone, and bandwidth is much cheaper as well.


Do you have your own servers, or do you build your service on top of AWS/GCP/Azure?


We have our own servers.


If I'm not mistaken, you were using Equinix Metal (formerly Packet) at some point. Did that change?


Kind of! We still use Equinix Metal in some regions. Leasing servers and buying colo isn't all that different from Equinix Metal, so I'd class that as "our own hardware".


Fair enough. And using multiple upstream providers is of course a good idea.



