Ask HN: So you moved off Heroku, where did you go?
410 points by nomilk on Oct 4, 2022 | 316 comments
So you moved your apps off Heroku, where did you go to and how has it worked out?

Particularly interested in:

1. How much work it took to move apps (be honest)

2. How much experience you had at the time of the migration - e.g. at one extreme, your entire devops experience may consist of just Heroku (that's me); at the other, you may be a k8s guru (this helps others gauge how they'll fare)

3. How valuable were your learnings? E.g. replacing Heroku with an IaaS instead of another PaaS might take longer but give more fundamental learnings, and hence be worth it for some

4. Cost comparison

5. Summary/description of your apps (e.g. 20 tiny apps with a few hits per month, 5 medium with ~20k hits per month, 2 large with 1-2m hits per month type thing). Please give language/framework.

6. Anything else you want to add




Hey, Andras here, founder of Coolify (https://coolify.io). Coolify is an open-source & self-hostable Heroku / Netlify alternative.

Lots of features are added day by day, and I'm open to new ones. Visit our GitHub (https://github.com/coollabsio/coolify) or our feedback page (https://feedback.coolify.io/) for more info.

A few months ago, I quit my job and started to work on it full-time (best decision I've made).

It is not VC funded (I said no to 30+ investors; time will tell if it was a good decision or not), but funded by the awesome community! <3

Why am I mentioning this? Because I would like to focus on the functionality, the community, and the users, not on revenue and other boring metrics. I just want to make good software, enjoy the process, and make people's lives easier. I do not want to make millions of dollars from it. If my family and I can live happily, that is totally fine.

Let me know if you have any questions. I'm happy to answer them.

(Also I'm working on a cloud/managed hosting version of Coolify (https://beta.coolify.io/))


I am a happy Coolify user and I can say that it is a promising and quite active project with a vivid community. The fact that you can self-host dozens of services using a single VM is a breath of fresh air.

Furthermore, for just a few dollars you can get a really good VM from Hetzner and host everything there. At the moment I am using Coolify for small Node.js apps as well as a few more services:

- Vaultwarden
- MinIO as an object storage solution
- GlitchTip as a Sentry alternative

Essentially, if you care about privacy and want to avoid paying a dime to Jeff directly or sideways (most PaaS services are built on top of AWS), projects like Coolify, CapRover, Dokku, etc. are the way to go.


My whole purpose in using Heroku is to avoid self-hosting anything; that's the "platform as a service" part. Is that not the main attraction of Heroku? Maybe not for everyone, if the idea of a self-hosted product as an "alternative" to Heroku has traction?


Presumably this is why the cloud/hosted version OP is working on has a good chance of being profitable. Open source means it's hostable as a backup plan, but the cloud version is there for those who want the convenience.


Right. It's not even about the initial setup and provisioning. I can handle that. But what I don't want to handle is constant maintenance. I get weekly notifications of maintenance that Heroku is doing for me. I have no interest in doing these things, or hiring anyone to do them.


That will be Coolify Cloud :) (https://beta.coolify.io).


Ruby/Rails isn't listed among the supported apps. My, how Ruby has fallen. :(

https://docs.coollabs.io/coolify/applications/#supported-app...


Yeah, that's disappointing. I'm a Ruby fan and my latest startup is built on the Rails stack.

It's using buildpacks though, so shouldn't it be (easily) supported?


Yes, as lolinder mentioned, there will be a hosted version soon. :)


I think simplicity and not doing it yourself are the entire selling point of Heroku.

Coolify sounds cool, and I love the idea of being able to fall back to self-hosting, which it would allow, but I don't want to HAVE to self-host. I'd rather pay Coolify to host for me until they go out of business / change direction, at which point I can carry on with the self-hosted bit.


This (Coolify) looks very interesting. Since others may have the same question as I do: does it scale horizontally? We actually have our own racks in a datacenter; if I gave it 3 machines, would it handle that?

I saw you mention that K8s is coming soon: is that your scaling solution? Seems like that would be simpler.

Regardless I think we may look into this.


You can attach any number of servers (and remote Docker engines) to a single Coolify instance (Docker Swarm is not supported).

If you are using a remote server, the build process for an application happens on that remote server and not locally (on Coolify's server), so Coolify won't be affected by how many builds are running concurrently. The only limit is the CPU/memory of your servers. :)

The ultimate & standard scaling solution will be K8s.


Hi! While the project does look very interesting, I'm kind of put off by the code. While it is pretty organized and neat, I can see that all of the services offered are hard-coded, like the 330-line if/else chain in apps/api/src/lib/services/common.ts.

This way, every time you need to add a new service to Coolify, you need to ship a new version.

Why did you choose to go this way rather than a more generalized concept of a service? Wouldn't a generic service architecture be more maintainable and (IMO more importantly) add the capability to have a dynamic, global, customizable 'app store' of sorts?
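To illustrate the generic-service idea, here's a rough sketch of a data-driven registry replacing the if/else chain (the service names and fields are invented for illustration, not Coolify's actual schema):

```python
# Data-driven registry instead of an if/else chain: adding a service
# means adding an entry, not shipping new branching code.
# (Service names and fields here are made up for illustration.)
SERVICES = {
    "minio": {"image": "minio/minio", "port": 9000},
    "glitchtip": {"image": "glitchtip/glitchtip", "port": 8000},
}

def service_config(name: str) -> dict:
    """Look up a service definition, failing loudly on unknown names."""
    if name not in SERVICES:
        raise ValueError(f"unknown service: {name}")
    return SERVICES[name]

print(service_config("minio")["port"])  # -> 9000
```

The registry itself could then live in an external file (or a repository of files), which is what would make a community 'app store' possible.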


> i'm kind of put off by the code. While it is pretty organized and neat, I can see that all of the services offered are hard-coded. Like the 330 lines long if-else chain in apps/api/src/lib/services/common.ts.

I agree and disagree.

I agree that this kind of approach to coding is something that would totally put off anyone senior-ish, but I disagree at the same time because it's only 20-ish services, and the code is easily refactorable.

My assumption, after having read code for 30+ years, is that this was done on purpose: they want to stabilise the approach and refactor later. Though, as we all know, there's no such thing as "I'll fix this later" in programming. Still, in this case it's not something that would put me off.

Also, it's on github, so I could always refactor it for them if I'm not happy with it.


This isn't such a sin for a young project like this. Making code configurable adds overhead, and it's often right to avoid 'over-configuration' along with 'over-abstraction' and other over-optimization efforts. What is important is to isolate functions for orthogonality, which will allow the project to be refactored according to priority needs further down the line.


The reason I (founder of Coolify) chose that: it was faster.

I'm currently building a feature that will change this. All services will be defined with a simple YAML file instead of being hardcoded. They will sit in a repository, and anyone will be able to add a service with a simple YAML file.

It is definitely more maintainable than the current solution, but with the current solution, I could iterate faster. :)


Death to yaml. Thank you.


> Like the 330 lines long if-else chain in apps/api/src/lib/services/common.ts.

As far as I can see, there is no 330 line long if-else chain in that file. Nothing even close to it. Can you link to what you're talking about more specifically?


Looks really ...cool. When you say Netlify alternative, you're referring to the commit-to-deploy functionality for static files only, not a global CDN and serverless functions, right?


Yes, I mean that you can easily deploy static sites. It does not have a global CDN or serverless functions (yet), but you can easily set up a CDN (like bunny.net) in front of your sites and then they will be globally CDN'd. I'm using a similar setup for all my marketing sites: they are deployed by Coolify to a server, and Bunny CDN makes them globally available. So I do not need to care about any traffic bumps (like the one I'm having at the moment), and all my commits are deployed automatically.


When did it become acceptable for the official installation method to be running a random script on your server?

No one should be running that without personally verifying each line. What if I want to uninstall it? (I haven't looked at this script, but many don't clean up properly after uninstalling, as that's an afterthought.)

What is specifically wrong with the dozens of existing packaging solutions we already have?


It became acceptable along with containers, so that the very OS the script works on is already isolated and disposable. You also mention dozens of packaging solutions. Which package manager should a single developer target first? And which package manager offers enough safety assurances that a customer wouldn't need to "personally verify each line" of the package manager's script?


> No one should be running that without personally verifying each line.

Ok, if you feel so strongly about it, then before you paste the link into your terminal and hit enter to execute it, how about you uh... open the shell script it will execute in your browser and read the contents of it?

You push this idea that people are executing stuff they don't understand and blame the tools for it, but you don't think to read the script yourself?


> Ok, if you feel so strongly about it, then before you paste the link into your terminal and hit enter to execute it, how about you uh... open the shell script it will execute in your browser and read the contents of it?

As an FYI - unless the script is on a site like GitHub (which you can assume to be behaving truthfully), it's possible for a server to respond differently according to the user agent, allowing a malicious file to be served if it's downloaded via `curl` or `wget`.

https://www.onsecurity.io/blog/careless-with-curl-dont-be/
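One simple variant of the trick described in that post is branching on the User-Agent header (real attacks can also detect piping via timing, as the article covers); a toy sketch, with made-up script bodies:

```python
# Toy model of user-agent-based serving: the same URL returns a
# different script body depending on who's asking.
def script_for(user_agent: str) -> str:
    # CLI downloaders identify themselves as "curl/..." or "Wget/...".
    if user_agent.startswith(("curl", "Wget")):
        return "echo pwned"   # what the CLI download would get
    return "echo hello"       # what a browser would see

print(script_for("curl/7.88.1"))  # -> echo pwned
```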


As an FYI: the official command downloads the script to a file on your local system and then runs bash on that file, so the server has no way to tell whether you're piping directly into bash or just downloading. So change the command before executing, from

    wget -q https://get.coollabs.io/coolify/install.sh -O install.sh; sudo bash ./install.sh
to

    wget -q https://get.coollabs.io/coolify/install.sh -O install.sh; cat install.sh
Once the script is on your local file system, there is no way for the server to change its contents.


Yep. My response was specific to your comment about reading it in the browser, which is not a guarantee of what gets downloaded.


My apologies, I interpreted it as a "yeah but they can change it so you can't trust it!!!1!" response.


How is running a "random" script any different from running a "random" binary? With a script, at least you CAN look at each line. With a binary you have no clue what the fuck it is doing.


Running random binaries is also a terrible idea. They should be packages that integrate with your local package manager, and which are signed by some entity that you've decided to trust.


And who would that be? Are you suggesting that you ONLY install software signed by developers whose keys you have personally verified, or whom you know personally? Because if not, your argument is void: you're trusting completely unknown people who may actively be working to compromise your computer.

The fact that the software was signed doesn't mean anything if you don't actually verify each signature, and even then it only means that what they put up is what you downloaded. It doesn't mean it's not malware. It doesn't mean it doesn't have a back door. It doesn't mean it's not filled with security holes.


You don't need to know people personally to trust a signature, you just need to know that the organizations they're coming from are at least somewhat reputable. Ideally, signatures should all chain up to the root of trust in your package manager, which is presumably operated by some entity that you've decided to place some trust in.


> No one should be running that without personally verifying each line.

Do you also verify each line of software you install? If you trust the author of certain software, why do you mistrust their install script?


It's not only the original author, don't forget; it's any malicious actor that's managed to compromise that hosted script.

It should be viewed the same way as a package author on NPM or PyPI publishing a malicious package, either themselves or via a compromised account. It's not particularly common, but nor is it impossible, and it could present a good target.


> What is specifically wrong with the dozens of existing packaging solutions we already have?

Specifically? Specifically: they are even WORSE from a security standpoint, because you don't have the option of "personally verifying each line". Just because someone's stuck it in a nice easy-to-install package doesn't mean it's not a virus / malware / whatever.

Are you suggesting that you read the source code of every app / tool you download before you execute it? If not, hello... pot, meet kettle.


Presumably for the same reason that ML models seem to self-download to god knows where instead of giving you a URL to a download link and telling you where to place the file?

Or those awful programs that give you a stub when you try to download the app, only to download the actual payload to, again, god-knows-where.

Or even worse, packaging your app in a docker container!

Seriously, people: give a curl-compatible URL to your actual payload and then get the hell out of the way.


You shouldn't use the internet either, it's dangerous


Do you also verify each line of the software that the script is installing?


This is very, very cool! One question: you deploy to individual servers, right? Can you target a Kubernetes platform, like EKS? I couldn't find it in your docs.

Because one of the cool things about Heroku is scale-up. A cool project here is https://sst.dev/. Would love to see native EKS/GKE integration there.


At the moment, only individual servers, but Coolify will be able to deploy to any Kubernetes platform soon, natively.


There are a couple self-hosting services that rely on k8s or k3s. For example, TrueNAS SCALE and Kubesail. Do you envision that Coolify could deploy on top of one of these, or does it need to own the full machine?


This looks really cool, I can't wait to try it out!

From what I saw, it looks like it does a lot of stuff behind the scenes relating to domains and SSL.. does it work with the server behind Cloudflare (the dashboard and hosted sites), or is this something I would probably want to be accessing directly?


Personally, I do not use Cloudflare, but Coolify works with it!


No Rails support? :(


Not “natively”, but as lots of people are asking, I will add it soon. I'll try to fit it in this week! :)


Sounds great!

I feel like Ruby on Rails out of the box support would really help bring an influx of Heroku orphans.


Any chance of swift server?


I'm open to anything, but I've never used Swift on the server before, so I'm not sure if it can be integrated. I'll have to take a look.


I am using Coolify (https://coolify.io), an open source self-hosted PaaS which is a relatively new kid on the block compared to Dokku and CapRover. I tried both of those, and I just didn't like how they worked; there was always some problem or another.

In contrast, Coolify has a great GUI that abstracts away the most common things about PaaS hosting, like connecting to GitHub automatically for git push deploys, SSL certificates, reverse proxying and custom domain support, and best of all, support for Heroku-style buildpacks as well as Dockerfiles. I've been quite happy with it; the creator has a Discord and responds to issues very quickly.

With regards to non-self-hosted options, I did try out Render, Fly.io, and Railway, but I found that their free servers were too anemic. I was compiling a Rust backend and it simply could not compile on their free servers. On Hetzner, for 5 bucks I could get a 2 AMD vCPU / 2 GB RAM machine that was sufficient to compile my Rust apps in a way the non-self-hosted ones were not. I have a JS frontend app that would have worked fine on them, but I wanted to keep everything on the same VPS. Plus, I can run other kinds of self-hosted services on it too, like Plausible analytics and a Ghost blog; I'm not sure if those are allowed on the non-self-hosted options.

All in all, it costs me 5 bucks a month, and I never have to worry about sudden upcharges for traffic à la AWS; at the very worst, my VPS goes down for a while. I'm now running about 20 different services on this 5 dollar box, including databases, applications, and other services. Works just fine.


How come you compile your app on your deployment platform? I know Heroku does this, but wouldn't it be easier to compile your Rust app on a beefy CI server and then just ship a binary or container image to your deployment infra, where you can size the server according to runtime load?

E.g. I use Fly.io for one of my apps. I build a container image as part of CI on Github Actions and then just push that image to Fly as part of a deploy. Fly never deals with a build step (although it can do that as well).

$5 is very cheap for the Hetzner server you describe though.


Yeah, I could, but it's another hassle to figure out where to build and store those container images, as I believe neither the GitHub container registry nor Docker Hub is free for private images. I'm also not sure whether Coolify has Docker image support, so I've just been building and deploying on my server. It also does zero-downtime deployments and queues builds, so it doesn't bother me if builds take longer than strictly necessary, because I won't have downtime anyway.


Thank you for mentioning Coolify (founder here). :)


Was looking for a service just like this only days ago, and posted to Software Recommendations on Stack Exchange after failing to find anything via web search. If this works as advertised, then you've made something amazing. Haven't tried it yet, but I have linked it in my SE post (to which I received no other replies so far!).

https://softwarerecs.stackexchange.com/a/84179/7653


Wow, thank you!


It's funny how different people are: after using CapRover for years, I tried Coolify, but it felt like so much functionality was missing that I would not use it over CapRover in its current state. All the things you mentioned (GitHub webhooks, custom domains, SSL certs, reverse proxying) are supported by Cap. It also supports Docker Swarm.


When I looked at CapRover recently, the GitHub integration in the docs required manually setting up a GitHub Actions workflow, and I couldn't quite figure out which parts I needed to change, since the example was for a NodeJS app. I'm assuming it needs to be added or changed for each repo you want to track pushes from.

In contrast, with Coolify you simply log in with your GitHub account and it installs a GitHub app onto your account (one-time setup) that automatically sets up the webhooks. I didn't need to do any configuration after that; it just worked for all subsequent apps. That level of ease of use is the experience I had with Heroku, and I don't understand why more self-hosted PaaS don't replicate it.

Also, I like the Coolify GUI better than CapRover's; dark mode plus a modern non-Bootstrap design is nice.


There are a bunch of ways to deploy via GitHub; I would check out this action, it's the easiest and most straightforward. There should be no need to adapt it for different frameworks: https://github.com/marketplace/actions/caprover-deploy

You can also use the built-in webhook from inside the CapRover control panel, but it requires you to log in to Cap via your GitHub account (I created a secondary CI account for this).

Installing a GitHub app comes with challenges: it basically gives Coolify access to your account should they want it, since they control the app, not you. This is bad from a privacy perspective.

CapRover has dark mode. :-) But yes the design is more utilitarian in Cap and not as nice to look at.


> Installing a GitHub app comes with challenges, it basically gives Coolify access to your account should they want it

Depending on the permissions they ask for, this may be limited to modifying webhooks in the repositories you specify.


It will be at least full repo download access, because how else would Coolify download the repository?


That's fine by me, because I want it to clone, build, and run my apps anyway.


Why would someone care about the look of a GUI dashboard for a PaaS, and somehow base their choice on that? Bootstrap, MUI, made by a designer: why should I care when there are dozens of more important points?

And UX-wise, both have done a pretty good job at being plenty sufficient.


Hey, the founder of Coolify here. I'm constantly adding new features to Coolify.

Which ones are you missing (except Docker Swarm)?


The one thing I'm missing in CapRover, and that would make me switch instantly, is being able to feed it a docker-compose file to run a whole stack.

Now if a service has only a docker-compose file I have to pull it apart and start every service individually.


100% this. Docker Compose has all the information you need, please let me just deploy it.


That feature is currently WIP. :)


CapRover has a huge collection of "One Click Apps" that will deploy hundreds of fully functioning applications for you including dependencies (so for example for WordPress it will spin up both a web and a database container): https://caprover.com/docs/one-click-apps.html

Also, I think the way you create different kinds of resources is confusing in Coolify. What is a "Git Source" or a "Destination"? All I want is to run my container. It's very different from Heroku's/Cap's interface, and not in a good way.

Also, why can't I just deploy from an image? In CapRover I often spin up a container, like `redis-commander` or `wordpress`, and then just configure its env vars and volume mounts. In Coolify it seems you have to connect a GitHub repo, which is not a desirable workflow for me.


I've seen CapRover's one-click apps, and I'm thinking of adding a similar solution to Coolify.

Resources are defined this way because you can connect a lot of things to Coolify: GitHub, GitLab (hosted or self-hosted), later on Gitea and other git sources, and also different destinations: local Docker engine, remote Docker engine, and later on Kubernetes. I will improve the onboarding experience so that first use doesn't feel like a burden. :)

Git-based deployment is only required if you would like to deploy something custom that is not currently supported by Coolify. I'm working on a solution where you can use Docker Hub to deploy images (https://feedback.coolify.io/posts/6/deploy-from-docker-hub).


Dokku has a very easy solution for connecting databases and other services to an app with "dokku link". Currently with Coolify, I have to spin up a database, take the generated connection string, like postgres://user:pass@host/db, and paste it into my env file for my app. It would be cool for Coolify to automatically inject those environment variables into my app when I link a service and an app together, like Dokku does. It just needs to tell me which env variable to look for, like DATABASE_URL, and then I can use that variable in my app.
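Once the platform injects it, consuming the variable in the app is trivial; a sketch of parsing the conventional DATABASE_URL (the variable name follows the Heroku convention, and the URL here is a made-up example, not something Coolify guarantees):

```python
# Parse a conventional DATABASE_URL the way a platform would inject it.
# In a real app this would be os.environ["DATABASE_URL"].
from urllib.parse import urlparse

url = "postgres://user:pass@dbhost:5432/mydb"  # made-up example value
parts = urlparse(url)
db = {
    "host": parts.hostname,          # "dbhost"
    "port": parts.port,              # 5432
    "user": parts.username,
    "password": parts.password,
    "name": parts.path.lstrip("/"),  # "mydb"
}
print(db["host"], db["port"], db["name"])  # -> dbhost 5432 mydb
```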


I had a similar idea before (linking resources like this), thanks for reminding me.

I will add such a DX improvement, for sure!


I use CapRover, and my main criticism is the lack of zero-downtime deployments.

Also, when I update Docker, some apps don't restart properly, but that may just be my (basic Ubuntu dedicated) server.


Zero-downtime deployment is supported for apps without mounted volumes: https://github.com/caprover/caprover/issues/661#issuecomment...

Tangentially I was looking at Render.com the other day and they also don't support zero downtime deployments if you have a mounted volume, so it's not very uncommon even on cloud platforms.

As for restarts, I haven't had that problem yet. I believe CapRover adds `restart: always` to all running containers, so they should boot automatically. You might want to check the logs of the containers that don't restart, or just always hard-restart the server after a Docker update.


I guess most of my apps have persistent data.


Every useful app has persistent data. It sounds like you have a design problem, though.

If you're treating application servers like cattle, then none of them should have persistent data (except disposable caches). There are fire-drill days that require deployments, so downtime incurred when updating your application code is unacceptable.

To solve this, put your persistent data on a separate server and store it in a proper database (Postgres, for example). Postgres never needs emergency updates, so the downtime for those (infrequent) updates can be pushed to non-peak hours.


I would maybe do all that if it were for more than side projects that can tolerate 5-10 seconds of downtime when deployed :)


That's fine, I wouldn't tell you otherwise. You just complained about a missing feature that isn't actually missing if you use the service properly and treat servers like cattle.


Well, yes. Some containers only read the persistent data, so it would be great if those could have zero-downtime deploys, since there's no risk of data corruption, but that option is not offered to me.


Also loving Coolify after using CapRover. Very much enjoying the simplified interface.


I've had 4 apps running on fly.io for most of this year and can recommend them.

2 are Elixir apps that were created on Fly, 1 was an Elixir app I had running on Heroku, and 1 was a Rails app on Heroku I recently migrated. I used their Heroku migrator and it worked pretty much automatically. I had a few steps to follow to move the DB, but their docs covered it: https://fly.io/docs/rails/getting-started/migrate-from-herok...

I didn't want a self-hosted flavor of Heroku, and I value an opinionated and active community around the platform. I'll mirror another comment here:

> Everyone wants to be the new Heroku, but that should be table stakes. Fly is innovative in the same way Heroku was when it launched.


I can second fly.io, I moved a bunch of personal (but "production") projects from Heroku to it recently (Node, Rust, Rails) and the experience has been great. I can also second their Heroku migrator, worked pretty flawlessly.

Overall, my pricing has been about the same or 5-10% less per month.

It doesn't have the same "click this button to add a third-party service to your dynos, with billing included" ecosystem that Heroku has (had?), but my Heroku usage was pretty vanilla, so it wasn't an issue for me.


That's the one thing I haven't seen anywhere else. I think that, technically, for "running simple automatically built containers with little dev effort", there are many better solutions than Heroku. The marketplace is Heroku's only remaining differentiator, and I don't really understand why no one seems to be trying to recreate it.

I get that "marketplaces" are hard and there's a chicken/egg problem, but is anyone working on it? I think Fly has the rest of it sorted and is clearly better. Would love to see the additional services as well.


> I get "marketplaces" are hard and there's a chicken/egg problem, but is anyone working on it? I think fly has the rest of it sorted and is clearly better. Would love to see the additional services as well.

I don't think that particular type of marketplace suffers from the chicken/egg problem; to me, that problem means you must acquire two sets of marketplace participants, and it's hard to acquire one without the other.

But when it comes to integrations like this, you could easily write your own integrations as long as the platforms you're integrating have APIs. Or contract freelancers to build them. You don't need the platform owner to write the integration itself.

I guess I'd call it a "soft chicken/egg" problem rather than "hard".


Railway sort of offers this: https://railway.app/button.

I've seen it around a couple of times, but of course it hasn't hit the same momentum that Heroku had back in the day.


Another +1 for fly.io from me. I moved a 10 req/s API from Google Cloud Run to fly.io a couple weeks ago. It was a fast and easy transition. I'm now deployed at several edges reducing latency for far-away users, and it's cheaper than Google to boot.


Related to the thread: how practical is it to set up Fly's distributed routing yourself?

It seems you have to buy an anycast IP from somewhere, or become your own ISP/network provider and configure a bunch of BGP routers and routes, but that seems impractical for most. Stuff like Akamai, Cloudflare, and Route 53 have anycast DNS, but that seems to be specifically for DNS queries, not for responding to a query and routing it somewhere.

Is the problem just too big and difficult to really self-host / spit-and-duct-tape your own system?


You'll need to acquire a /24 (that'll cost somewhere around $15,000), and then sign a deal with an upstream network provider who will accept BGP announcements. The actual networking part of it isn't that hard. To make the /24 worth it, you'll want a bunch of different regions, so you'll be paying for hosting in each.

Check out https://docs.google.com/spreadsheets/d/1abmV_mXWWCsVxHLfouSi...


> Fly is innovative

Most definitely. For the most part, it is HashiCorp-as-a-service, but without HashiCorp!


There's a bunch of Hashi in there.


I also moved my Postgres/Django backend to fly.io using their automated tool, which worked flawlessly, and my frontend (SvelteKit) went to Vercel.

Both services have been flawless so far at a much better cost/value proposition.


> Fly is innovative in the same way Heroku was when it launched

I've not had a chance to play with it yet; what are they doing that's innovative?

(Genuine question, not being cynical.)


The big one, to a layman like me, is that they've invested in tooling for "edge" compute: similar to Cloudflare distributing your static assets worldwide for faster responses, Fly gives you the tools to do the same for your app servers.

If you've got a worldwide audience you can set up your app in Dallas, Paris, Chennai, and Tokyo, have it auto-scale (and re-balance) instances based on which regions are the most in demand (eg, minimum 2 app servers per region, max 20 app servers, and it'll auto drift them throughout the day as needed). It also puts all your servers on a common VPN for communication, and they've got some tooling in place to help manage things like db replicas, replaying write requests back to the primary db env, etc.

It's not necessarily trivial to port any old app over (if you're doing writes on every request for logging etc., that's something you'd want to change, e.g. by backgrounding the writes), but it really can make a huge impact on responsiveness once you're set up.
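"Backgrounding the writes" can be as simple as pushing log records onto an in-process queue that a worker thread drains, so the request path never waits on the primary database. A minimal sketch (the "slow write" is just a list append here, a stand-in for the real DB call):

```python
# Minimal sketch of backgrounding per-request writes: the request path
# enqueues and returns; a worker thread does the slow write later.
import queue
import threading

log_queue: "queue.Queue" = queue.Queue()
written = []  # stand-in for the slow write to the primary database

def writer() -> None:
    while True:
        item = log_queue.get()
        if item is None:  # shutdown sentinel
            break
        written.append(item)

worker = threading.Thread(target=writer)
worker.start()

for i in range(3):
    log_queue.put(f"request {i}")  # request handler returns immediately

log_queue.put(None)
worker.join()
print(written)  # -> ['request 0', 'request 1', 'request 2']
```

In a real app the worker would batch rows back to the primary; the point is just that the per-request latency cost disappears.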


I think their auto-scaling is somewhat broken currently. I set up auto-scaling with min=2 and max=6, and my VMs aren't scaling down from 3 to 2 even though there are 0 active connections on all the VMs.


That all sounds pretty cool. Nice to see people trying something new.


See the recent flurry of their sqlite replication blog posts:

- https://fly.io/blog/introducing-litefs/

- https://fly.io/blog/all-in-on-sqlite-litestream/

This allows for extremely cheap S3 database backups as well as a fast local serving layer.

And many more deep dives on their blog - https://fly.io/blog/


I'm on Gigalixir for my Elixir apps, and would switch to Fly.io if they had a "one-button" migration like the one they have for Heroku.


Price ?


Relatively cheap with some free tier? https://fly.io/docs/about/pricing/ One good thing is there is no DB-specific premium. It seems their PG is just an "app" on the platform and storage is priced accordingly.


I get an email each month forgiving my bill because it’s under $5. Got three small VMs running.


I moved to a Hetzner dedicated instance with Cloudflare in front. Rails, Postgres, and memcached all run on Docker.

Vs

Heroku with Cloudflare in front, Postgres from Heroku, and memcached via an add-on.

1. Very little. I put the app in docker a long time ago for local development but still used heroku for prod. I dumped my db and imported it into a docker run db on Hetzner. One thing to note: I did not expose the db to the internet. The app connects to the db over a docker network.

2. I do DevOps for a living.

3. It did not add to my learning.

4. $30/month for a large Hetzner dedicated instance and $12 for S3 (backups) vs $25 for the app (I added 2 more for 8 hours a day during busy season) and $25 for a job runner and whatever they were charging for DB. My total cost now is $42/month and on Heroku it was around $120 during busy season and $100 during non busy season.

5. Rails, Postgres, memcached, and Delayed Job. I had to expand my app tier by up to 3 instances during busy season (winter). In my load testing, the Hetzner instance handles much more than the 3 instances ever did.

6. I added db dump cron jobs along with restic backups to S3.
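The cron + dump + restic setup in (6) can be sketched like this; the bucket name, password file, container name, and DB name are all placeholders:

```shell
#!/bin/sh
# backup-db.sh -- run nightly from cron, e.g.:
#   0 3 * * * /usr/local/bin/backup-db.sh
set -eu

export RESTIC_REPOSITORY="s3:s3.amazonaws.com/my-backup-bucket"
export RESTIC_PASSWORD_FILE="/root/.restic-pass"

# dump from the dockerized Postgres (which is not exposed to the internet)
docker exec db pg_dump -U postgres appdb > /var/backups/appdb.sql

restic backup /var/backups/appdb.sql
restic forget --keep-daily 7 --keep-weekly 4 --prune   # rotate old snapshots
```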


I am curious why you chose S3 over B2?


The advantage B2 offers is lower pricing

The advantage S3 offers is seamless integration with other AWS services

IMHO, the ease of integration has significant value; it would take a very sizable S3 bill for me to consider not using S3.


I agree.

The reason I was asking is because the OP did not appear to be using any other AWS resources.


No real reason other than the fact that I already had an AWS account and I was saving quite a bit of money already.

I may look into B2 later. But again, I’m well below my original costs so I may not look for a while.


or over Cloudflare's R2


Controversial opinion, given the vast majority of answers here: cheapskates looking for free tiers wherever they can find them.

Heroku should’ve closed that free tier years ago! Now they added some very, very cheap plans as well, which is fantastic for them: it gets rid of some of the horrendous cost from platform abuse they dealt with every quarter.

Good for you Heroku, you were always underappreciated by the enterprise, with their clingy sysadmins and ops people fighting you like a plague, and at the same time you were taken advantage of by an army of penny pinchers!



Companies count on the "cheapskates" and the free plans to work as a "gateway drug" to corporate plans.

I sometimes feel these services "need" an affordable paid plan, so I don't have to "freeload" on their free plans; I don't need the expensive big plan.

I.e.:

I monitor a bunch of my hobby sites with StatusCake. They're great and I use the free plan but I'd like to pay a little for very simple monitoring. But the first paid plan is around $200 a year. But I don't need the premium plan, I just need the free plan, but paying, so I feel like I'm contributing to the business.

AWS and the others... you've got a free plan, but once you begin to deploy... ;)

DnsMadeEasy. They don't have a free plan, but I'm using their "Small business 10 domain plan". That was $30 a year for years. Past year was $40. Then they got acquired by DigiCert and since last month they've posted their new prices: ~$200 a year for a 5 domain plan.

It happens with a lot of providers, so I guess it's a regular VC tactic to acquire volume, clients, publicity, or whatever.


> I monitor a bunch of my hobby sites with StatusCake. They're great and I use the free plan but I'd like to pay a little for very simple monitoring. But the first paid plan is around $200 a year. But I don't need the premium plan, I just need the free plan, but paying, so I feel like I'm contributing to the business.

Odd mentality. You don't want to feel like a freeloader, but don't want to pay the asking price.


Hence why they said they want to "pay a little." If this is a common objection to the price of a good in a market, and if someone else comes by offering a simpler product at, for example, half the price, they could take the market. This is an example of a low-cost pricing strategy: https://stratechery.com/concept/strategy/low-cost-strategy/


I know the ending to this story! The "someone else" comes along, gets a lot of buzz, attracts users, gets funded, then the investors say "you should be focusing on the enterprise market and increasing prices!" Then they become like the company they were trying to undercut and sell for $100m to RandomCorp.


> Now they added some very very cheap plans as well which is fantastic for them

Can you share a link to their new plans? The pricing on their old page (https://www.heroku.com/pricing) still shows $7/mo as their cheapest option, and it's already overkill for my applications. In comparison, the cheapest option on fly.io is $1.94/mo (https://fly.io/docs/about/pricing/).

Note: I know $7 is the maximum cost, but by default their hobby tier app will not sleep, so I guess it still costs $7/mo to run in the end?

Thanks!


I think they aren't in effect yet, but were announced in this blog post from last week: https://blog.heroku.com/new-low-cost-plans

I am not sure it will meet your expectations though; it doesn't really provide anything much cheaper than $7/month for an always-on production service. As I see it, the "eco" dyno is mainly a way to more easily save money with transient dynos (used for things other than production services) -- it does address your concern about hobby dynos not sleeping, though!

There's also a new cheaper very limited postgres corresponding to the previous free postgres.


If you're running a project with a database, you need to take into account the cost of a Hobby dyno and a database add-on. I believe the cheapest Heroku Postgres plan is $9/mo.

So, if you had a light-traffic, or demo, or staging project that was using the free dyno & postgres offering, you went from $0/mo to $16/mo. That's a big jump, depending on your use case.

I'm all for well-restricted free offerings that let you try a platform; Heroku absolutely needed to change its plan. But if the cheapest web app with a db plan is $16/mo, people are going to look for something closer to $5/mo for their low-load deployments.

If I'm wrong about any of this, I'd love to know.


The new "cheap" plan is only two dollars less so I don't think it is an alternative for many. I will most likely move infrequently used apps to another platform.


The new “eco” dynos work the same way free ones do today — they draw down a shared pool of hours across multiple apps. So you could run tens (or hundreds) of infrequently used apps for a total of $5/month. That’s in contrast to the originally announced plan, which had Hobby, at $7 per dyno per month, as the lowest-cost option; that can get expensive with lots of rarely used apps.
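Rough arithmetic on that shared pool (the 1,000-hour figure is from Heroku's announcement at the time; treat it as an assumption and check the current docs):

```python
# Back-of-envelope math for the Eco plan's shared hour pool.
POOL_HOURS = 1000          # assumed monthly pool for $5 (per announcement)
HOURS_PER_MONTH = 24 * 30  # ~720 hours in a month

# One always-on dyno burns ~720 hours/month, so the pool covers ~1.4 of them:
print(f"always-on dynos covered: {POOL_HOURS / HOURS_PER_MONTH:.2f}")

# But rarely-used apps that sleep burn far less; at ~1 awake hour/day each,
# the same $5 pool covers dozens of apps:
print(f"apps awake 1h/day covered: {POOL_HOURS // 30}")
```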


Ah I missed the "shared across all of your eco dynos" part. Thanks for the info. That helps. I also like the new small Redis plan.


I'm gonna play the devil's advocate here

$5-$10 / month is peanuts for most western people. In other places of the world, that could be equivalent to a solid 10%-20% of their monthly paycheck (if the hosting services do not offer prices based on location).


Interesting claim/model. Can someone cite some real data or at least give an anecdote about this?

Are there IT workers who earn ~100 USD a month? (Okay, thinking about it there must be. Freelancers who are not really experienced, or working in very local very low productivity companies.)

Aren't there already small hosting providers regionally who price accordingly? (We tried to run one here in Hungary around 2009-2010, but there was no point. Free stuff provided by giant companies is a much better value, so the local market is composed of otherwise non-typical clients who actually want local hosting with something extra, all in all not a great business. Prices are very low, competition is fierce, etc.)


Moved backend services for my videogame from Heroku to Render. It's been super-easy, and the service is more responsive and significantly more stable now on Render, despite me paying much less.

1. Almost no work. The app was Node.js + Postgres, it stayed that way and only needed a few environment variable changes plus a database migration.

2. Almost none, I use PaaS because I concentrate on making games, and want to spend as little time touching infrastructure as possible.

3. Learned mostly that switching PaaS is not a big deal if you stick to common offerings (Postgres instead of a fancy exact-purpose-fit DB).

4. ~$100 to $27/mo

5. A Node.js app that gets rather low traffic but needs to be stable, in the 1000s of req/day. It only handles highscores/ranked mode seeds, so low requirements.

6. Would heartily recommend Render if you want to stay with a Heroku-like PaaS approach and they already support what you need.
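For reference, the database-migration step mentioned in (1) can be as simple as the following; this assumes the Heroku CLI and pg_restore, and the app name and Render connection-string variable are placeholders:

```shell
heroku pg:backups:capture --app my-heroku-app
heroku pg:backups:download --app my-heroku-app    # writes latest.dump
pg_restore --verbose --clean --no-acl --no-owner \
  -d "$RENDER_DATABASE_URL" latest.dump
```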


Thanks for the info - I have some similar use cases.

How has the Postgres DB worked out for you with Render? I've been wary of using a relational DB in general since it's a single point of failure and really disruptive if your instance goes down (even failing over to a backup DB can be problematic if data doesn't get replicated before a crash).

Also are you just using your Node.js app as a back-end server or are you also serving web pages from it?


No issues with the Postgres - it's rock-solid so far (100% uptime over 6ish months). Most DBs are going to be a SPOF, unless you're willing to go into a distributed/replicated DB setup (which for my use case, I'm definitely not - I'd rather be down for an hour once in a blue moon).

It's an API consumed by the game client, no user-facing webpages.


I have moved a Rails and Go app to https://northflank.com. I have my Rails app running in their free project (limited to two services and 2 jobs) and my Go app in a paid project. I find the pricing to be very reasonable and considerably cheaper than Heroku.

I was looking for somewhere I could run web services and cron jobs both in the same place.

They have a Heroku importer, though I think you need to ask to have it turned on. However, I found it easier to build Docker images for my apps and use a 'build service' instead. YMMV

I found their support to be very responsive and enjoy using their UI. The UI, builds and so on all feel very fast.

Northflank can run databases, however for my databases I've been running them on ElephantSQL for some time (https://www.elephantsql.com) - even when I was on Heroku.

Free for Dev's PaaS list is worth a review: https://free-for.dev/#/?id=paas


I'm just in the process of moving to Northflank from Heroku for my Django app after trying SO MANY other PaaS services. I really like how they organise things (apps have multiple services, the build stuff makes sense) and their support so far has been amazing. One of the few where I'm confident they'll work as I need without too much work from me and won't cause me stress in the future.


Do you see a way to delete your account in your account settings? Their privacy policy says it should be there but I can't find it.

I actually don't see a way to delete a project once created either.


Northflank co-founder here.

You can delete your project by navigating to the billing page inside the project. To delete your account you can send a support request and we can process it (as described in the privacy policy). We'd like to automate this more, but right now we want a formal opt-in via email from the account/team owner when deleting backups and stateful workloads.


Can anyone say why DB disk space is so expensive compared to just a volume? Tempted to just roll my own with these prices.


Northflank co-founder here.

Addon disk pricing is the same as our volume pricing for services.

The disks are SSDs. We've added a margin on top of GCP, AWS EC2, and Azure SSD pricing, so I wouldn't say they are expensive in comparison to other providers.

It's possible to configure HDD storage which is much cheaper, would be happy to enable that feature flag for you. SSD $0.30 per GB, HDD $0.15 per GB. We'll add HDD pricing to the site and start to enable it by default for everyone.


Hi, I looked at the pricing page and cannot see managed Postgres. Is that an option, or would I install it myself on an nf-compute-N?


Managed Postgres is a feature; we install it for you with your selected resource plan, replica count, version, storage, and other options.


thanks for this mate, was looking for a free tier paas to show off some proof of concept projects, northflank seems good.


Certainly the best option I've found for my needs. Good luck :)


(I'm assuming the OP is referring to production workloads.)

I think PaaS is awesome. I actually consider moving to one of the managed offerings from time to time... then I look at the pricing.

As a systems engineer specialising in automation, it's hard for me to justify the $75-100/month fee from a PaaS provider (not including network bandwidth) when I currently pay that for the same thing in AWS including network bandwidth (and I haven't even optimised it with CloudFront and static asset deployments to S3 yet).

I'm not saying PaaS should be a foregone conclusion based purely on my microscopic example and edge case, but if you can configure a CI/CD pipeline (I recommend GitLab CI or GitHub Actions), manage/configure an OS (I recommend Ansible), write Terraform to build out your infrastructure (loads and loads and loads of TF code libraries with example code you can just run with), and a few extra steps, then I'd recommend going IaaS.

You can keep the entire thing KISS, you'll know it inside out, you can grow it (or not) to any scale whilst keeping costs considerably lower than any managed PaaS available, you can hire someone else down the line to look after it when you've reached that level of scale/success/financial freedom, and more.

That being said, I can certainly see a huge value add in doing both: IaaS with a self-managed PaaS stack on top of it. That is, I think, the sweet spot.

Note: yep, maintaining your own infra' is undesirable to a lot of people. If you're one of those people, then stick to managed PaaS and pay the price (it's worth it in your case.)

Note: yep, your time does (technically) have a cost associated with it, but if you're building solutions based on IaaS everyday of your life, you've likely done a lot of the work already.


This is odd to me as well. The value of PaaS scales with developers. But Heroku et al target small shops.

If you have 100 dev teams, infra experience is a lot of “overhead.” If you have 5, you just make sure some of the devs know how to strace and tcpdump and can collab in an incident. Frankly, building infrastructure is easy. Maintaining it is the hard part. Preventative maintenance, security, recovering from incidents are all much harder than figuring out enough for it to run.

It was a brilliant idea. Let devs be devs. But the price has always been so high that, at scale, it wasn’t far off the HR cost.

We’ve gone 180 on that with k8s. Now you have to understand a datacenter with different vocabulary to effectively leverage it. You see low adoption at the “I have an app” level because it’s more complicated than it’s worth. Higher adoption above that. But almost zero adoption at “I need 100+ clusters.” And 90% of what I see assumes it lives in a single cluster, as if etcd never hangs itself.

We’ve built all of these abstractions, but ultimately you have to understand the thing below them (ie Linux and VMs) when performance sucks or something goes boom.

I don’t know how someone walks in and learns it all at once vs my life of learning with the growing system from Unix on physicals to now.

It’s like my life with Java. Most devs don’t understand where to start troubleshooting performance or the JVM, despite rich tooling (magical APM, JFRs, etc).


Um, k8s was built for hundreds of clusters. I doubt it's 0 adoption for 100+ clusters, we have hundreds across AWS and Azure.

I've read of smaller companies building their own PaaS over K8s, which is in fact how Kubernetes makes sense.


I used to believe there was a future in having a "heroku-like" PaaS on top of your AWS/GCP/Azure. Enough so that I tried to build a company that would be an offering in the space.

The reality is that each of the major cloud providers have gotten so good that it basically defeats the purpose of having your own PaaS on top of them. For 95% of cases, the default products from each cloud provider is good enough now. It's all very easy now.

So the remaining bucket from Heroku that still needs to be filled is for those who have small or side projects and aren't willing to pay (a lot) for hosting. If you're trying to build a business around these "customers", you will be in a world of hurt and just racing to the bottom dollar.


We have been running a Rails monolith in Heroku for over 5 years. Our application has pretty strict uptime requirements and in the last 24 months we've had several major outages that were entirely due to Heroku. We're in the range of 5-10 million requests per day.

Heroku's erratic performance over the last year or so combined with what we see as a steady decline in the responsiveness (and sometimes quality) of Heroku's support made it too painful to ignore. Their security breach was obviously also a big motivation, but by then we had already made the decision.

We are moving as much as we can into GCP.

- Web app: Cloud Run

- Background jobs: Compute Engine

- Heroku Postgres: Google Cloud SQL

- Heroku Redis: Redis Cloud (https://redis.com/redis-enterprise-cloud)

- Deploys: Custom scripts

- Provisioning: Terraform

It has taken about 6 months for me to do most of the work, with some help from the team on some of the heavy lifting. It's not full-time work though, so perhaps 3-4 months if you count person-hours.

I have decent linux sysadmin experience, but that's about it. Had to brush up on Terraform. Testing GCP services took a long time. Their docs are often good, but important details are sometimes missing or not easy to find when you're doing initial research. I don't trust the GCP console (web UI) much these days; the `gcloud` command shows your actual resources and not some half-baked abstraction. Sometimes resources aren't even visible in the UI.

The learnings were valuable for me personally. I've been interested in doing more ops-related things. For the company, the extra control we get over our own service levels is the major benefit. It has a cost in terms of developer experience and resources we need to allocate to ops as well.

Costs are about the same. Cloud SQL seems cheaper. Redis Cloud is almost exactly the same price (which feels like a lot tbh).

If you're interested in working with us (remotely) with ops, send me an email at other.water6357@fastmail.com


Why Redis cloud over Google Memorystore?


We use Redis for the source of truth for some of our data. More key-value store than cache.

Pros of Redis Cloud over Google Memorystore for Redis:

- Five nines uptime SLA, vs three in GCP.

- No sharding support in GCP, which limits scaling write throughput.

- GCP persistence is limited to hourly RDB snapshots whereas Redis Cloud supports append only file (AOF).

- During migration we needed to access our Redis instances from Heroku. We're not on Heroku's Private Spaces, so we can't set up a VPN. We'd need to set up a proxy like Twemproxy or Envoy to expose the Redis instance but with Redis Cloud we can just make sure the firewall is open.

- Redis Cloud's support is pretty awesome. Had an engineer available during our migration. GCPs support is okay at best.

Cons:

- Memorystore is quite a bit cheaper.

- No GCP IAM integration.


Great information, thanks.


We moved from Heroku to GCP after approximately two years of using Heroku. (This was three years ago so some information may have changed)

The move worked out incredibly smoothly and has saved us money and allowed us to "modernise" our infrastructure to take advantage of some of the newer trends in Infrastructure and Security.

To address your direct questions:

1. Not very long. We were running a NodeJS app with a web layer and several background workers. We were able to get this running on a Google Compute Engine VM in about 1 day using Packer. The whole migration process took about two weeks start to finish.

2. Our team is relatively experienced and had experience with all three major platforms and Kubernetes (although we chose not to use Kube in this case). We are definitely a team of developers, not sysadmins though. This means we had to learn some new things particularly about tuning NodeJS apps on raw linux.

3. I don't think we learnt too much (other than the undocumented rough edges of both platforms) but it was definitely worth it for financial and quality reasons.

4. It's a relatively hard metric to calculate when the company is growing user base and features quickly - but I would estimate it at around 50%.

5. 1 app with around 5000 requests per second. NodeJS / Typescript / Rust

6. If you have only ever used Heroku I think it would be worth getting comfortable with Containers (Docker basically) and making your app run in a container. From there you have tools like Railway (https://railway.app) or Cloud66 (https://www.cloud66.com) that can do most of the rest for you.


I'm curious about your reasons for deciding against Kubernetes and instead opting for plain VMs + Packer, especially when you're already familiar with the platform.


I moved a Go app with 20k users/mo from heroku to render.

1. It took about 1 week due to a lack of Render documentation for Docker and buildpacks. I posted on the Render help forum and contacted their support, which did respond but wasn't the most helpful. I ended up finding a single example buildpack file in a Render blog post which shed light on how to structure their build system. If I had not found it, I don't think I would have been able to get it to work.

2. I don’t have experience with 3rd party deployment frameworks.

3. Learned how to use Docker in order to utilize Render's blueprint feature, which would automatically link a Redis instance to my hosted app.

4. $0 to $0

Render has a similar interface to Heroku, but has some missing features and bugs, a few of which I've found: there's no newline/special-character support in environment variables, and applications cannot get their public hostname from the standard library call like hostname(). Unlike Heroku, where a publicly reachable host is returned, Render returns an internal hostname instead.


(Render hat on)

Thanks for the candid feedback! We've been working on our buildpacks-in-Docker support over the last month or so and it's greatly improved, IMO. (I'm biased--I wrote the changes.) So this should be smoother for newcomers.

v3 (or the to-be-released v4) of @renderinc/heroku-import should significantly grease the skids for folks looking to make a similar move, and the Render CLI we've got cooking also includes helpers for making ongoing development with buildpacks more reasonable.


How was it 0 to 0? Just curious.


One of the best alternatives I found is using Dokku with a cheap VPS. Dokku even has herokuish buildpacks and supports (automated) deployment of Docker images, etc. On top of that, Dokku is open source with a very helpful community. Total cost: $0. Not looking back at all.

https://dokku.com/
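The day-to-day Dokku flow looks roughly like this; the hostname and app names are placeholders, and the postgres commands assume the official postgres plugin is installed:

```shell
# On the server:
dokku apps:create myapp
dokku postgres:create myapp-db
dokku postgres:link myapp-db myapp

# On your machine: deploying is just a git push
git remote add dokku dokku@my-server.example.com:myapp
git push dokku main
```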


I tried Dokku out but I didn't like the lack of GUI, I wrote in another comment but I've been using Coolify (https://coolify.io) as a great alternative with a built in GUI, also free and open source.


There is dokku pro if you want a GUI. Personally I bought a copy but never used it. I don't really want a GUI :P


That's true, I used to use Dokku a while ago but after some time I just wanted to take it easy and click buttons haha


I moved to a $5 DigitalOcean droplet running Dokku (dokku.com).

To answer your questions:

1) 2 hours for 8 apps — most work was setup of new DNS and testing

2) I have very little experience in DevOps

3) I think it's totally worth it for the additional flexibility and the super cheap cost

4) Just $5 - can’t beat that

5) 8 Ruby Sinatra sites w/ a total of 20k hits/month


We have been using "Amazon Web Services". They should market their services more, they definitely have a bright future...


For personal stuff or business? I'm extremely familiar with AWS at the corporate level but I refuse to use it for personal stuff. Not only are the pricing schemes suspect, the results can be surprising. A single front page feature on HN, for example, could suddenly cost you $150,000 because you didn't wade through enough labyrinthine configuration pages to set the right budgets.


I use AWS for personal stuff and it's not labyrinthine at all; I found it very simple to set a budget alert.

But more important than that is of course to have a CloudFront cache in front of anything that exposes S3 storage.
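For example, a monthly cost budget with an email alert can be created via the AWS CLI; the account ID, amount, threshold, and address below are all placeholders:

```shell
aws budgets create-budget \
  --account-id 123456789012 \
  --budget '{"BudgetName":"personal-cap","BudgetLimit":{"Amount":"10","Unit":"USD"},"TimeUnit":"MONTHLY","BudgetType":"COST"}' \
  --notifications-with-subscribers '[{
    "Notification": {"NotificationType":"ACTUAL","ComparisonOperator":"GREATER_THAN","Threshold":80,"ThresholdType":"PERCENTAGE"},
    "Subscribers": [{"SubscriptionType":"EMAIL","Address":"me@example.com"}]
  }]'
```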


I hope you don't wake up 7 hours after a budget alert because you're in the wrong timezone. Please stop justifying the huge risks AWS knowingly carries. It would be easy to place a hard cap, but they refuse to do so.


AWS gives you all the tools you need to shoot your foot off, they also give you all the tools you need to avoid this. I can easily hook up a budget alert to a pushover alert on my phone that is just as likely to wake me up as any alert duty would.


Most people don't have alert duty. I don't want alert duty. I want safety and it's extremely easy to provide it.


You just said that there's a danger of not waking up to a budget alert, and then you say you don't want alert duty. Make up your mind.


So you're saying that if someone buys access to a kitchen, doesn't learn to cook or operate it safely, burns the place down trying to serve 100,000 people, they'll get a big insurance claim made against them and have a $150k bill? You're kidding me? Are you telling me people have to learn to USE these tools if they don't want a nasty surprise? :-P

(I'm being tongue-in-cheek cheeky here.)

I know you're mostly pushing boundaries with your $150k figure, although at this point it wouldn't surprise me, but AWS is a professional tool aimed at professional engineers. They created Lightsail for "personal stuff."

Right tool for the right job, I guess? Although you can actually combine the two at the network level (Lightsail and AWS, that is.)


Eek what a hot take.

He's talking about the pricing model, not the way to operate it. AWS bills for egress data; you can't operate it in any different way to stop that.


Sure about that? So if I have static images going through an ALB to the requesting client, I can't operate in another way to reduce those costs? ... you're sure?


Sure you can front static assets with the free Cloudflare tier...

[a] which is fine if you're happy with the inflexibility that free Cloudflare offers. And you live in a country where the free tier doesn't have horrible routing (eg use the Sydney AWS region, put Cloudflare in front of it and then watch your traffic to/from Sydney take a round trip via the US or Singapore)

[b] every single AWS service charges egress fees (ie Cloudfront doesn't help at all)

[c] this does nothing for non-static assets

Am I missing something?


> Am I missing something?

No, but you're not the OP above my comment, so my question still stands.

> (eg use the Sydney AWS region, put Cloudflare in front of it and then watch your traffic to/from Sydney take a round trip via the US or Singapore)

I don't understand? I don't have this issue (I'm in Brisbane; I use ap-southeast-2)

> [c] this does nothing for non-static assets

Non-static assets are going to be very tiny in most cases, and the problem then becomes about volume. If you've got volume and your business model doesn't suck, then you can afford the rate (my understanding is AWS' network egress charges are gross compared to other vendors.)


Nice strawman. I never said you can't operate more efficiently.

Every service on AWS charges egress fees; that's my comment. There are other cloud operators that do not. I can safely run some static compute / storage / network at a fixed cost; you can't do this on AWS.

If too many people come to my website it won't wipe out my credit card. The site might go offline but I'd rather take that than a huge bill.


> I can safely run some static compute / storage / network at a fixed cost, you can't do this on AWS.

No provider on the planet gives you truly unlimited, fixed cost networking throughput. None.

AWS provides Lightsail for a fixed-cost, static compute, storage, and networking solution. It's not a strawman argument just because you don't understand it.


I ended up with a $700 bill for a month with Route 53 due to bogus DNS requests (a normal month would be like $5 or something). And there is nothing a professional engineer could do anything about it - except pay $3000/month for AWS Shield Advanced.


Can you share the gory details so we can learn more about this? It would be interesting to study what happened, in detail. Perhaps there was a misconfiguration?


I made a video about it but it is not published yet. I hope to publish it soon here: https://www.youtube.com/channel/UCkc8xf5A7qCQydN6tG0BmmQ

There was no misconfiguration, just millions of DNS requests but not millions of actual users. I was in contact with AWS support multiple times. The only solution was to use AWS Shield Advanced. They did refund most of the charges, but it was too risky for me. Even after I moved DNS providers, there were still DNS requests to the R53 zones. I can highly recommend https://dnsimple.com though.


Yeah, the problem is, dnsimple.com isn't going to NOT charge you for the same thing. They have T&Cs too.

I'm guessing AWS refunded close to 100% of fees associated with provable bad DNS requests.


Why is it a problem that dnsimple is not going to charge for bogus DNS requests? (or any DNS requests for that matter).

AWS did do a refund, but it required me to monitor usage and do some investigation. I really don't want to spend time monitoring DNS requests.


AWS is IaaS not PaaS. With IaaS you need to hire competent infra admins. PaaS abstracts away much of that requirement.



BTW, EB (Elastic Beanstalk) is generally viewed as just for those getting started, but I've seen companies build really serious stuff (and at significant scale) on it, including an extremely high performance distributed IoT time series database (that was 10 years ago, back before there were good time series DBs to just use).

AWS doesn't push EB much, partly because it offers great value and doesn't make them nearly as much money as things like Fargate or AppSync. If you need something that just does what really needs doing, cheaply and well, though, give it another look.


AppRunner is great while being cheaper than Fargate, but it is not cheaper than its NewCloud counterparts like fly.io and render.com. Not sure if railway.app is, 'cause I never could wrap my head around their pricing model. Lightsail Containers are a credible alternative, too.

Whereas, Lambda / Lambda@Edge / CloudFront Functions are of course wayyyyy more expensive (even if more capable) than workers.dev.


Are you comparing Heroku to EC2 and other non-managed parts of AWS? Because for the most part, Heroku and AWS are two different offerings. There are two main reasons to use Heroku: they manage it, and they provide easy tools for configuration. Even if AWS provides similar managed services, I would assume it is like the rest of AWS: a pain to configure. I use many AWS services, and not only do you practically need a PhD in AWS to understand basic tasks, but the risk of doing something wrong is high. Take IAM, for example. Or just setting up a static website with S3, CloudFront, and Route 53 - what a mess. Not saying that AWS isn't great for many things, or for companies wanting more control (and with the resources to manage it), but comparing it to Heroku is for the most part not a valid comparison.


For another self-hosted alternative, which can be hosted on DigitalOcean, Linode, or Hetzner, check out Sailor[1].

Sailor[1] is a tiny PaaS to install on your own servers/VPS that uses git push to deploy micro-apps, micro-services, and sites with SSL.

[1] https://github.com/mardix/sailor


I found great advice on Twitter https://twitter.com/pauld_fgc/status/1574107791651905536 by Paul P.

API & Platform Results:

Rails: @Railway wins. Render wakeup boot extremely slow (~1m). Flyio cannot run Rails for free.

Crystal: @Render wins. 5-10s wakeup boot time. Railway & Fly keep growing memory infinitely.

Node: @flydotio + Railway both win. Very slow boot on Render.


These are free plans though? Not a good comparison if you are a professional looking at this.


For Crystal, I wrote the IdleGC shard https://github.com/compumike/idle-gc to combat slow memory growth. Runs garbage collection when not servicing requests. Running in production on https://totalrealreturns.com/


I moved my small free Python/Django app to Fly.io and am extremely happy. The move was seamless because they allow you to run your app from a Dockerfile, which was straightforward after going through the documentation. Looking forward to building more things on Fly.io.
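In case it helps anyone planning the same move, the Dockerfile for a small Django app really is short. This is only a hedged sketch: the project module ("mysite"), gunicorn, and the port are assumptions you'd adjust to your own app and fly.toml.

```dockerfile
# Minimal Django image sketch for Fly.io (names and port are placeholders)
FROM python:3.11-slim
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Collect static assets at build time so the container is self-contained
RUN python manage.py collectstatic --noinput
# Fly forwards traffic to the internal port declared in fly.toml
CMD ["gunicorn", "--bind", ":8080", "mysite.wsgi:application"]
```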


I'm planning the same migration for my Django side projects from Heroku to Fly. Fly has a special org on GH with example projects, but the one for Django is empty. I created a small test repo to see how Fly works hands-on. Thought I'd share.

https://github.com/tomwojcik/django-fly.io-example-project


Highly recommend fly.io as well; I started an elixir app for a client and decided to use fly.io and very impressed with how easy it is with the feature set so far.


Price?


Cloudflare Pages

https://pages.cloudflare.com/

I really didn't need much, it turns out. Most things were old experiments, testing, fun that I let die.

I really just salvaged an old personal page of mine. Setup was super fast, tied to github, very nice features. It's not heroku by any means, but it works for what it is and I really enjoy the simplicity of the setup.


1. Not a lot of work, probably a day to move things.

2. Almost none. I used Heroku and understood what it was doing on a very surface level. I knew I could push my repo and my project would automagically be "there", but since I was early on in my learning process, I didn't know for sure what was happening. So the migration was a great experience to understand how to set up my website, how to set up a reverse proxy, how to prepare the certificates, how to secure a Linux box and many many more (it's an ongoing process!)

3. Extremely. I am now a huge supporter of IAAS in general. I think all the things I learned are valuable and widely applicable. I know what lsblk does, I know what ufw allow/deny does and so on. Hell, at the most basic level I understand how to use nano to edit config files, how to search or edit files from a terminal. I really enjoy it and I went as far as to set up my own home server.

4-5. A lot cheaper now. I had one website running and another hobby project and that was $7 a month. I now have my website (a static page with Caddy as reverse proxy), a Pi-hole VPS, another VPS used to do backup checks, and a VPN server - all these are free, except the website, which runs on the cheapest Hetzner VPS and could probably host other things too.

6. I think everyone should spend some time deploying their apps on a remote server, securing it and accessing it. Even if you never use this again, it will help you understand how to manage all of these things and get a better idea of the information flow from start to finish. Learning how to poke around a command line is also very helpful.
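For anyone curious about the "caddy as reverse proxy" part of a setup like this: the whole config can be a few lines. Here's a hedged sketch of a Caddyfile (domain, web root, and upstream port are placeholders); Caddy obtains and renews the TLS certificates automatically.

```
example.com {
    # Serve the static site
    root * /var/www/site
    file_server
}

app.example.com {
    # Proxy to a hypothetical app listening on localhost:3000
    reverse_proxy localhost:3000
}
```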


> the migration was a great experience to understand how to setup my website, how to setup a reverse proxy, how to prepare the certificates, how to secure a linux box

If you have any resources on the topics you had to learn, please share!

> I went as far as to setup my own home server.

Congrats! How long did it take you to skill up from Heroku to home server?


I started with Digital Ocean (DO) and Linode's guides to get an idea of the setup in general - https://www.digitalocean.com/community/tutorials/initial-ser... (you can change distros if you're wanting the guide for something else) and https://www.linode.com/docs/guides/set-up-and-secure/

I used DO because I had gotten some free credits as a student and they were valid for a year, so it gave me time to experiment. Once I went through the guides, I started researching the topic further, each time expanding on anything I didn't know or understand.

To go from Heroku to home server, I spent about a year all in all doing the research, learning what needs to be done, what hardware, what security considerations and so on. Before buying the server hardware, I set up a VM with the Linux distro I wanted to use, mimicked the storage setup, and then did the whole setup from start to finish, documenting things along the way: installing, setting up users, changing login policies, adding drives, formatting them, hard drive error checking and automated notifications, setting up Docker containers, container backups, container updates, limiting access of containers, setting up network shares, LAN sync, multimedia playback, software backups and automated server backups.

Each new step would highlight a new learning opportunity and raise new questions. It's really interesting, and I went ahead and installed Pop!_OS (initially I used Zorin OS Pro) on all my devices, which gave me a place to test out small scripts to automate a system reinstall/setup. My next goal is to collate all of this information and organize it into a guide that explores this setup from start to finish, but that's a few months out.

After that I want to move away from a single Linux install to a Proxmox install on my home server, with a couple of VMs running my services and space set aside to spin up a new VM when I want to test something out. I also want to build my own router using OpenBSD. Hope this helps!


We wrote a blog post with a comparison of the prominent Heroku alternatives: https://blog.flycode.com/7-excellent-alternatives-to-heroku where we compare Render, Fly.io, Railway, Porter, Northflank, Koyeb and Qoddi.com.


Thanks for sharing. This introduced a few I missed back when I was evaluating alternatives. Nothing I have tried so far fits quite as well as Heroku but Porter looks like it has a chance…


I'm using two services, fly.io and encore.dev. Both are wonderful, but different worlds.

fly.io (as others have mentioned below) lets you build a Docker image and run that. I've found having them serve static files lets me not include Caddy in my app (simple Go-based website).

encore.dev is, in effect, a URL service. You write go code with comments that say which path goes to that function, and it takes care of routing for you. They have a lot of ancillary services (cron jobs, database, queue, etc.) that integrate very nicely. Different way to write code, but it seems to work very well (it watches a git repository and rebuilds when you push).


fly.io is a big deal.

It's not as easy as Heroku as there's no managed Postgres, but it's well-documented with a few paths.

However, the main reason Fly stands out is their focus on multi-region deployments. Ever wondered why facebook.com is fast no matter where you are? Fly provides app-level tooling to make this superpower available to everyone.

It's especially important if you're making heavy use of Web Sockets.

Everyone wants to be the new Heroku, but that should be table stakes. Fly is innovative in the same way Heroku was when it launched.


> Ever wondered why facebook.com is fast no matter where you are? They provide app-level tooling to make this superpower available to everyone.

They're doing that in the app? GeoDNS has been around for quite some time: https://en.wikipedia.org/wiki/GeoDNS or more thoroughly explained: https://jameshfisher.com/2017/02/08/how-does-geodns-work/


Fly's multi-region capability is BGP Anycast-based, which is vastly superior (and much more complex to deploy, outside of Fly) to DNS-based solutions like GeoDNS.

One example of app-level tooling is the fly-replay header, which permits your app to automatically replay requests in different regions. This works really well alongside the FLY-REGION environment variable that is available to each node. https://fly.io/docs/reference/fly-replay/
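To make that concrete: per Fly's docs, the app simply returns a response carrying the fly-replay header, and the edge proxy re-runs the whole request in the named region (the region code below is just an example):

```
HTTP/1.1 200 OK
fly-replay: region=sjc
```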

Another example of app-level tooling is the availability of NATS JetStream, a key/value store that is less feature-rich than Redis but supports multi-region active/active replication. This is a big deal, as Redis doesn't offer it at the free tier, and many others (KeyDB) who advertise it seem to have trouble delivering it when there are more than two regions in play.


Would have been a good comment if you dropped the first line.


So you disagree that it's superior? Please explain why.


The comment has been deleted to remove the unnecessary comment: you're seeing the improved version.


GeoDNS is a technology that allows you to point web users to a nearby server. That's great, but you still need to coordinate each server. For instance, if you have distributed app servers and a single database server, you still have long distance communication/slowness whenever you need to run database queries.

One solution to this problem that Fly provides app-level support for is to allow you to have a fast local read-only copy of your database on/near each server, so that only inserts/updates need to run remotely. Your regional copies basically fail whenever there's an update, and Fly's platform re-runs the request on the leading webserver (which replicates the update to the regional database servers).

How well does it work? Never used it. But it would save you the trouble of having to notice the problem, come up with a solution and then implement it.


Closed everything and now I grow potatoes.


Most based comment I've read this week.


I tried many PaaS providers and ended up settling on Render, mostly because of its minimal boilerplate and lack of proprietary tech or vendor lock-in. There is a custom infrastructure-as-code YAML file, but it is optional and pretty nice in its own right, anyway. I get the sense that the Render team cares about DX and doing things the right way for the long term. Moving there from Heroku is simple as the platforms have similar capabilities and there is a migration guide - a day or two unless you are deeply integrated into Heroku’s buildpacks, etc.

Heroku recently removed their free tier. Render has a free tier that is pretty generous and very usable. For paid plans, Render pricing is roughly equivalent to Heroku. However, I find Render’s pricing significantly easier to understand and thus more friendly. Heroku has pricing tiers with varying features and sometimes you have to upgrade to get a feature even if you don’t need more resources. Additionally, Heroku can be vague about some of their specs, particularly when it comes to how much CPU you are promised. Render, on the other hand, gives the same features to all paid plans and then the resources just scale up and out. I much prefer Render’s pricing model, even though for me the cost is about the same.

My one major gripe about Render is that they don’t have a CLI. I find a CLI to be a very important part of any PaaS. Heroku has a great CLI, Render doesn’t, and the feature request to create one has been open for three years. Clearly it was not a priority for them. That said, the ticket did just get marked as In Progress a few days ago, so I am optimistic, but we will have to see where that leads.

https://render.com/

https://feedback.render.com/features/p/render-cli


My main gripe with Render is they don't support adding Cloudflare CDN to cache static assets with far-future settings.

Rails apps have hashed assets that change when the asset content changes, so it's needlessly bad for performance to not cache them.

This is a feature that's been in the "Planned" stage for some years now, one of the most upvoted ones, but unfortunately still not there.


(Render hat on)

> My one major gripe about Render is that they don’t have a CLI. ... That said, the ticket did just get marked as In Progress a few days ago, so I am optimistic, but we will have to see where that leads.

Watch this space. Since coming onboard, the CLI became one of my biggest goals. But I'm new here and I forgot to mark it as In Progress on the feedback site. Sorry about that. ;)


I got static IPs on my fiber connection and use my older workstation as a server.

I use zfs for backup of the whole thing.

It is not mission critical so I can accept downtime. Even then, in the last year I had zero downtime.

I think it can be a viable (and fun) option for many small applications.


We moved off of Heroku in late-May/early-June of this year. It took a few weeks of half-time work for one person to make everything happen. We were already using Docker in CI, so we didn't need to spend much time making our app work in Docker for production. We paid the $500 for Porter's "white glove" migration service because we figured it would be useful to be able to get quick feedback about choices and changes.

We had some AWS experience from running our backing services (RDS, Elasticsearch, Memcached, Redis) on AWS despite doing compute on Heroku for a long time, but we'd never done EC2 or EKS for deployments on AWS. Despite having been on Porter now for months, I'd say we still don't know or really need to know a ton about k8s, but we are familiar with some basics around pod sizing, healthchecks, deployments, etc.

I think Porter does a lot to put themselves out there as a destination for people leaving Heroku. I'm not sure if it would have been more work to do something like Render, but I was very pleased with the timeframe, hours spent, and the ease of cutting over.

We have a monolith that handles ~50k/min traffic at peak use as well as a couple very tiny services that do some accessory stuff. All the apps are Rails apps.

One of the reasons that we chose Porter over other options is that we really liked their setup for Preview Environments, which are an important part of our workflow here. The experience of running preview apps on Porter is notably better than on Heroku, where we started to see a lot of unreliable behavior on app launch that resulted in the app being unusable, and the only fix was to close the PR and open a new one.

On the whole this was a really positive experience for us. We're seeing better performance, we're paying less (in Porter + AWS compute combined) than we did for Heroku Enterprise, and our ability to deploy mid-day when we're under load is better than it was before. I've spent most of my 12 years or so of Rails development working on Heroku (and we had been on Heroku for almost six years), so we had some fear around moving to unfamiliar tooling, but this has been a big win for us.


Moved from Heroku to Scalingo

1) Took a few hours to have everything set up. Transition was easy.

2) Not that much. I chose Heroku because I hate doing sysadmin stuff.

3) Can't say for this one, Scalingo is a PaaS.

4) Scalingo will probably be more expensive for starter plans, but as you grow it won't be like Heroku's horrible pricing.

5) Mostly apps with no traffic, to be honest.


I used Scalingo for Rails apps a while ago, and was quite impressed by the quality of the platform. Very Heroku-like. Maybe not as polished, but still doing the PaaS job pretty well (git push, and you're done).


Thank you for this! We've been looking for an EU-based Heroku alternative for a while.

Thanks to a couple of people suggesting scalingo here on HN, I gave it a try last night and I'm very impressed with the offering. It took me less than an hour to get our rails template running - with no code changes at all.

The prices are comparable to or lower than Heroku's, and unlike Heroku you're not forced into paying much more when your memory usage starts exceeding 512 MB.


Vultr.

I program in Go so I get a self-contained executable file at the end of the day that I just upload to the server. So whatever "Heroku" provides is completely irrelevant to me.


I went with render.com (no affiliation).

Found them fairly easy to use, really excellent support.


Every time I've tried to give render.com a proper go, I've run into some weird small issue. Ended up settling on fly.io and use that for everything now.


(Render hat on)

Interesting feedback--feel like sharing in detail? You can reach me directly at ed@render.com.


Hey! Thanks for that, I'll find out what the most recent issue I ran into was and shoot you an email. In general I like the idea and direction, seems like a much closer Heroku successor.


They also have a migration guide and have even built some tooling around that process: https://render.com/docs/migrate-from-heroku


Same here, also moved to Render and love their UX, UI and docs. Good customer support too (needed it to add VAT ID for my business, so nothing technical).


Same here. My use case was simple — a bunch of static pages — so render.com isn’t even charging me.


I'll chime in and say that I did the same. A lot of people like Fly but I wanted super low complexity. My hope has mostly panned out. I was able to move without too much difficulty and I'm happy with how it turned out.


Yes exactly. I tried fly.io first from recommendations on HN, then on setup realised I needed a full Docker workflow, which was not what I wanted coming out of Heroku.

I’ve found Render the closest to Heroku. I was able to set up a service for my Rails app, for Postgres and for Sidekiq.

Not as polished as Heroku or maybe Fly, but like I said they made up for any gaps with their bend-over-backwards live chat support.


Same here. Pretty much the exact same features, for a fraction of the price. Stability/availability has been perfect since we moved.


Old school approach: only old boring tech + one bare-metal server + free Cloudflare.

- No Salt/Puppet/Ansible, no Docker, Kubernetes, etc. Only standard Linux stuff.

- All done with makefiles/ssh/git

- Send email with Mailgun

I try to keep the dependencies as small as possible: external services, programs, libraries.

One server can serve an incredible amount of requests if you only do normal web stuff.

Login with ssh to get stats with top. Huge performance, snappy, no concern with cost.

You could do the same with a VPS.

It took me quite a bit of time, and it's not perfect. I still need to set up a VPS image as a failover in case the hardware breaks (low probability).


I have some personal apps that I use regularly that were being hosted on the free or $7 tier at Heroku; all three were Rails apps with Postgres, and one needed Redis. I spent about 20 hours trying out AWS, Digital Ocean, and Render.com. I eventually decided on Digital Ocean, as the ease of use + pricing was where I wanted it to be. I spend about $14/mo on hosting at Digital Ocean (db + apps) and grabbed a free instance on Redis Cloud. It's currently more expensive than Heroku, but it does give me a better experience, especially with my db setup.


1. Install CapRover on a fresh Linode instance. Create a captain-definition file for each app. Add instances as you need.

2. I'm a pretty knowledgeable DevOps guy.

3. Not really valuable but gives me a bit of Linux experience.

4. Flat instance pricing; I was able to get all of my stuff into a $5 instance... however I have been increasing it as I add projects from friends and myself.

5. Supports almost everything, it just runs your stuff inside a docker container.

6. I liked Heroku, it's where I started with Rails but it gets ridiculously expensive quickly... there's no reason adding a DB should cost $40.
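For reference, the captain-definition from step 1 is tiny. If the app already has a Dockerfile, it can be as small as this (CapRover also supports its built-in templates instead of a Dockerfile):

```json
{
  "schemaVersion": 2,
  "dockerfilePath": "./Dockerfile"
}
```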


DigitalOcean. Databases are much cheaper. No big deal in moving, as DigitalOcean App Platform supports buildpacks.


I'm considering the same for https://pwpush.com. Currently on paid Heroku ($59/month) with a lot of traffic.

DO App Platform is better than last year but still a bit quirky. I haven't found a better alternative that is 1) more mature, 2) more reliable, 3) paid/supported, and 4) featureful enough.


After clicking the link to pwpush.com I had a recollection of the splash page.. Seems we've met before! https://gist.github.com/stevecondylios/16a53b73f22621e3cde2e...

Awesome to hear Password Pusher is going very well :)


Ah that's great. I still use that Feedback form too :-)


I've been waiting to make this same move but holding back because DOAP isn't quite as featureful yet, e.g. not supporting scheduled jobs, which for some reason ALL my Heroku-based projects use in some way.


Scheduled jobs are a PITA right now; I expose HTTP endpoints which I call from an outside service on a schedule.
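If it helps, the "outside service" can simply be cron on any box you already control. A hedged sketch of a crontab entry (the endpoint and token are hypothetical):

```
# Hit a protected job endpoint every 15 minutes; -f makes curl treat HTTP errors as failures
*/15 * * * * curl -fsS -H "Authorization: Bearer $CRON_TOKEN" https://example.com/internal/run-cleanup
```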


How have you found reliability/uptime on DO?


An order of magnitude better than heroku


I moved to CapRover, and shortly after ended up moving to Portainer, mostly because CapRover does not have a very good collaborative environment (e.g. a single password for access).

With €20 a month and a VPS on DigitalOcean you can get quite far.

I also made a small project to spin up a PaaS like environment with docker swarm, Portainer and Traefik if you're interested: https://github.com/sergioisidoro/honey-swarm


If you want a better bang-for-the-buck VPS, check out Hetzner. Or Oracle's free tier.


I had a very bad experience with billing and Hetzner. My card failed on a personal project without me realising it. When I noticed it was already too late and they had wiped out all my servers and all my data.

Yes Hetzner is cheap, but I'm scarred from that event.


Sweet, this is my exact setup which I love for the simplicity to power ratio. Will check it out.


I was looking into that, but there's no other super easy Heroku-like platform. Something where I could just push code and it'd deal with the buildpacks. Most of them require Docker.

I heard Koyeb is decent; otherwise fly.io. Render and railway.app put inactive services to sleep or limit them per day.


I am working on a Heroku/Dokku-like CLI tool which manages and deploys apps to any Kubernetes cluster and abstracts all Kubernetes concepts away from you. The tool has simple commands like "config set KEY=VALUE" and so on. I also plan to implement a simple web UI.

The tool requires minimal dependencies inside the Kubernetes cluster and tries to follow Kubernetes best practices. Unfortunately the tool is not ready yet.

Would something like this be interesting for someone else other than me?


Yes! I think this is the right approach. Using Kubernetes is going to be much safer in the long term, because most software projects die and Kubernetes isn't going to die soon. It is also much more widely used and probably a lot more stable.


Yes, exactly, this is my point. I use Kubernetes as a framework for infrastructure management, and it has all the concepts I need.

Here is the github link https://github.com/alhafoudh/eien

I will push what I have today.


as an avid fan of dokku, def interested! are you able to drop a link to the repo (if it exists yet)?


Repo is here. I will push it today https://github.com/alhafoudh/eien


I got a VPS from Contabo and started using Docker Compose for deployments. Learning it all took me a day (I had been using Docker, but Compose was new) and the implementation took one more day, mainly setting up a GH workflow for automated deploys. There's just so much freedom now! Cost-wise, I am down from about $24 to $8 with much better performance. I can finally have a server near my users and not be restricted to just the US or Europe.
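For anyone weighing the same setup: the compose file can stay very small. A hedged sketch of a single-service docker-compose.yml (image name, ports, and env file are placeholders) that a GH workflow can refresh over SSH with `docker compose pull && docker compose up -d`:

```yaml
services:
  web:
    image: ghcr.io/you/app:latest   # hypothetical prebuilt image
    restart: unless-stopped
    ports:
      - "80:8000"                   # host:container
    env_file: .env                  # secrets kept out of the repo
```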



I moved 4 personal websites with a total of 7 services to railway.app (no affiliation) this weekend and was very impressed with how simple it was. Previously these websites were hosted using a combination of GitHub Pages + Heroku + Fly.io but it’s nice to have all of them hosted in a consistent way now.

I vastly preferred the railway.app migration to the fly.io migration, but fly.io feels much more production-ready than Railway. This isn't a big deal as none of the projects I moved have paying users, so downtime would be acceptable if it happens. I was able to deploy all of my applications from the UI without ever having to install any company-specific CLI, which I prefer. Railway, being such a small company, has lacking docs and is very difficult to google for specific questions about the provider - most "railwayapp" google results are for actual railways. Pretty much everything was intuitive and didn't require the docs though, which was much more pleasant than my experiences with fly.io.

To get the custom domains working I had to move my DNS to cloudflare for all sites which was something I had planned on doing anyways but felt annoying in the moment. I was previously using my domain registrar’s (namecheap) DNS but they don’t seem to support CNAME flattening which allows for CNAME apex domain entries which were required for getting custom domains on railway.

I moved all of the services in about a day and everything has been running pretty well. I do think railway is slower than fly but everything is still plenty fast so far. All projects have automatic github deployments which have been working well so far. Overall for a new project made by a small team I have really really enjoyed using railway for hosting. Railway is organized in a way that matches how I think about project hosting and most things seem well thought out with lots of good stuff still coming.

The websites moved were https://trentprynn.com https://habitapper.com https://weightwatchapp.com https://samueldebartolo.com

these are all open source and can be viewed from my github https://github.com/trentprynn


$10 per GB per month!


Yeah, Railway's pricing is super confusing -- it will be interesting to see what the total for my (very low traffic) websites will be at the end of the month. On the dashboard it currently estimates my total bill for all sites to be $2.82, which means I'll be under their $5/mo free credit.

At least they have a very convenient single page where I can see the total for all projects at one time.


I was asking myself exactly the same question today. I would especially be interested in providers that have data centers in Australia, as most of them don’t unfortunately.

What can you recommend as a Heroku alternative down under?


Dokku + DigitalOcean droplet and managed postgres.

1. The migration - from "I just learned about Dokku" to "deployed my app for testing" - took about two hours.

2. I'm an urban planner who dabbles in front-end development. DevOps is not my thing.

3. Not too valuable of learnings. Overall, the dokku commands are almost identical to heroku commands (by design). So the transition was smooth and comfortable.

4. Costs: I chose droplet and postgres services with DO that equaled our heroku costs. But we get about 4x the ram now. The site is much faster.

5. Rails/postgres. We don't track hits, but it's pretty popular in terms of, uh, maps for finding pinball machines.

6. I haven't figured out how to set up logging. I would like some kind of "last 5 days of logs, being piped to somewhere." It seems simple, but I don't quite get it yet. I also miss the "usage" metrics from heroku that showed ram and stuff throughout the day. But I guess I pretty much get that with DO and Scout APM.


Late reply, but you might want to check out our Vector integration for log shipping. You can ship your logs to whatever service you want.

Disclosure: I'm the Dokku maintainer.


1. It was right after the first MVP was hosted on Heroku so not that much work. I had to write Dockerfiles and set up a CI/CD myself (Github Actions thank you).

2. I'm a dev with a bit of DevOps experience but mostly as a user. I still have to run to my DevOps friends for help occasionally :)

3. I learned tons. Most usefully setting up the entire CI/CD myself and Dockerizing my setup.

4. Cost is one of the main reasons why I moved. Digital Ocean is much more cost efficient for CPU intensive instances (my backend runs audio/video and physics). For the web app, Digital Ocean Apps are a bit more pricey than just running a droplet but save me loads of time.

5. I run a spatial video meeting app (https://flat.social). Tech stack is Node, Next.js, Mediasoup and TypeScript. The traffic is rather spiky - anything from 50-60 users on a chill Sunday to over 25k website hits from Reddit during Xmas.
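The GitHub Actions part from point 1 doesn't need to be elaborate. A hedged sketch of a workflow that builds and pushes an image on every push to main (registry, image name, and the secret name are placeholders; the actual deploy step would follow):

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to the container registry
        run: echo "${{ secrets.REGISTRY_TOKEN }}" | docker login ghcr.io -u "${{ github.actor }}" --password-stdin
      - name: Build and push an image tagged with the commit SHA
        run: |
          docker build -t ghcr.io/${{ github.repository }}:${{ github.sha }} .
          docker push ghcr.io/${{ github.repository }}:${{ github.sha }}
```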


Tried Railway (unstable), boyab (unable to set up) and settled on fly.io (easy to set up, stable).

All free tiers, even with a free custom domain.


Railway founder here

Can you mention what particularly was unstable?


My service was not reachable after two days running. I guess the VM wakeup from an external request was too slow.


Dokku. ("Docker" + "Heroku".) The migration was done before I joined but looks pretty easy.


We went from Heroku to a self-hosted OpenShift cluster on AWS EC2 for compute and managed AWS services for state (RDS and Elasticache).

1. A few months. The actual changes to the apps were not fundamental, but we discovered a lot of things we didn't know about our own codebase, and they had to be fixed. The majority of changes were devops plumbing, not Ruby/Rails. The move itself, including all the necessary admin changes and supplemental work, took about 1.5 years from decision to cut-off date.

2. Me - about 30 years of total experience, 10+ years of mixed devops/SW experience. But a total Ruby noob (and still am; you don't need to know Ruby to move apps).

3. The move made me a multimillionaire, a couple of people billionaires, and another 100 or so millionaires or multimillionaires. I mean, this move opened the path to the IPO and we became one of the meme stocks of 2021. So even if I never touch AWS, OpenShift, Heroku or Ruby again, it was totally worth it. I didn't learn any fundamentally new things (after 30 years in the industry that rarely happens), but I solidified my skills in many departments. I had never run such a massive containerized workload before, for example.

4. Nobody cared actually. We grew like bamboo at that moment, so very difficult to compare apples to apples.

5. A single monolithic app of about 0.5M LOC and a few smaller ones. Hundreds of thousands of customers, tens of millions in revenue/month.


I moved to fly.io. My costs are about half. The hardest part was copying the database over. The data kept getting corrupted. Everything took about 8 hours.

Self-host redis, do not use their free beta app.

My app: Ruby on Rails, Sidekiq, Hasura, Redis, Postgres, 2M rows, 2-5 qps.


I'm working on an open-source framework for converting Jupyter Notebooks into interactive web applications. I was using free Heroku dynos for demo app deployments. I have 19 demo applications running (example: https://automated-pdf-reports.herokuapp.com/)

All the apps together use the 1000 free dyno hours in ~25 days, so at the end of the month Heroku switched them off.

I didn't find any free alternative for my users and me. I'm thinking about building my own service where users can upload notebooks and publish them as web applications.


Is there any context regarding moving apps off Heroku?

I don’t have any apps there myself but I’ve worked on projects hosted on Heroku and I remember being happy with that provider.

Did they raise prices, change their services, or something similar?


They killed their free tier.

Apparently the free tier benefited paying customers too, so its removal was perceived as a loss of value.


Generally they've been having an increasing number of embarrassing mistakes/problems/outages that they handle clunkily, and do not seem to be investing in the future of their service.


They removed their free tier plan.


Skimming HN over the past year shows many negatives and very few positives: https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

TL;DR

- Security issues

- Declining innovation (since being owned by Salesforce)

- Changes to pricing/plans


Railway.app

No affiliation; I actually prefer the experience. Heroku is showing its age now.


I started PikaPods (https://www.pikapods.com/) a few months before the announcement as a simple way to throw up some open source web apps. Part of the mission is also to provide an additional revenue source for FOSS authors with our revenue sharing program.

Since then the project has seen good traction and we run a few thousand apps in two regions, EU and US.

Anyone still looking for a Heroku alternative, do check us out! There is some free welcome credit to get started quickly.


Render. As easy as Heroku, and migration was easy thanks to GitHub-based deployment. Cost should be about the same, using $7 instances in both. 3 tiny apps with 10K hits/day, all using Node.


I'm currently working on an MVP and moved it over to railway.app

Overall the process was smooth, the React + Vite frontend app was up and running as soon as I connected GitHub and added the envars. But the dotnet backend took a little bit of trial and error to get running because it didn't like my initial Dockerfile.

Overall the process has been pretty smooth though I wish the docs were a little more robust and had more examples.


I'm building apps for a portfolio. I've dabbled with Heroku before and was always skeptical of their motivation: a convenient hosting solution to lock you in (when you can learn to host on your own for low cost and high ROI in terms of time and knowledge).

I have a Linode VPS with a WordPress blog up, but I want to learn Docker and host my apps on there for 5 dollars a month.


So glad to see this thread, reading everything! A question I've been wondering about for months, unsuccessfully seeking out people with experience on the topic: I'm going to be launching a socket.io app on Heroku soon, a multiplayer card game, and I have _no idea_ what kind of hosting costs to expect in terms of dyno usage. Has anyone done something similar?


This is a thread filled with great insight and experience, so thanks for starting it. If you're looking to move from Heroku/PaaS into your own cloud account on GCP or AWS, please take a look at Coherence (withcoherence.com) [I'm a cofounder]

Coherence configures managed services in your own cloud such as Cloud Run in GCP or Fargate/ECS in AWS, while adding in all the missing pieces these providers don't include out of the box, such as load balancer/SSL/DNS configuration, database/redis orchestration, CI/CD integration, Cloud IDEs for development, VPC/networking best practices, long-running workers and cron tasks - all with one simple configuration.

Coherence itself has a free tier that fits most projects, but you will need to use credits or pay for your usage of your cloud resources. We'd love feedback and are happy to answer any questions, feel free to ping hn@withcoherence.com!


For simple web apps? Netlify/Vercel for frontend + Serverless for backend (AWS, Lolo)

1. Maybe a day.

2. Not a lot.

3. You learn event-driven architecture, but it wasn't new to me.

4. It's free for me as long as you don't attach a company GitHub to Netlify or Vercel (then you move out of the free tier), and AWS has enough free credits. Lolo is free for your first two apps (although it's a bit of a hassle to have to re-deploy every 7 days if you don't want to pay 9€).

5. Primarily a web platform with maybe 10-30k hits per month, but also a few web apps. I may move the serverless API off AWS when traffic increases, to avoid incurring too much cost.


Not a big app, but I had a few small sites I ran on Ruby on Heroku. I moved them to a shared hosting provider using PHP. These apps are super basic, but I was paying Heroku and realized that PHP would save me 20 bucks a month and it wouldn't take long.

The real benefit is not having to worry about upgrading stacks at EOL and that sort of thing.


For years I paid for “unlimited” LAMP shared hosting from one of the many providers for about $150 a year. This was great for hosting personal stuff, side projects, proof of concepts, etc.

Recently I started rebuilding some stuff in React/Python and went with Heroku for my backend (before they announced they’re ending the free tier).

Professionally I've used both AWS and GCP for real/big stuff, but I'd love to pay $150 a year for a "personal" instance that has predictable billing at the expense of "pro" features/uptime.

Where are the PaaS flat rate providers for hobby stuff?


Founder of EvenNode here - https://www.evennode.com

We don't have big marketing budgets and are a lesser-known alternative, but we see an influx of new customers moving off of Heroku every day. It's mostly Node.js and Python apps, since we focus on that stack.

The migration is very simple - git push evennode main. This works for most of the apps without any extra steps required. It's really this simple.

Cost is cheaper than Heroku - 4.50 EUR / month. Each app also gets its own MongoDB database at no additional cost.

The biggest advantage I see, compared to VPS, Lightsail, etc., is you get a fully hosted and monitored platform. A server goes down? No problem, your app is migrated to a new one automatically.

I would say it's a pretty good alternative to self-hosting and to Heroku.
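To make the "git push evennode main" workflow concrete, here is a sketch of the setup step; the app name and remote URL are hypothetical (EvenNode gives you the real remote when you create an app):

```shell
# Hypothetical app name and remote URL, for illustration only.
git init demo-app
cd demo-app
git remote add evennode git@git.evennode.com:demo-app.git
git remote get-url evennode   # confirm the remote is configured
# Deploying is then a single push (requires a real EvenNode app):
# git push evennode main
```

The same remote-plus-push pattern is how Heroku's own `git push heroku main` flow works, which is why migrations like this tend to be a one-liner.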


Throwing Scalingo [1] in here because I feel they get way less exposure than they deserve. We [2] offer a SaaS for medical device companies. GDPR compliance made us look for an EU-based company, and the recent Heroku situations (disappearing free tier and GitHub OAuth fiasco) made us switch to Scalingo. Scalingo is based in France.

Their PaaS is rock-solid. The UI sometimes looks less polished but that shouldn't detract from the fact that their service quality is really, really good. No problems so far. When I had questions, I've been in touch with their support - you have real engineers responding to your message in a few hours. Really cool.

I've recently come to increasingly appreciate profitable, bootstrapped startups, because they have (much) less incentive to get acquired and subsequently shift their focus to "scale" and "enterprise sales" (wait, which company comes to mind here.. ah, Heroku). Scalingo is bootstrapped and profitable.

Why not render.com or fly.io? Both are based in the US, so GDPR compliance becomes more tricky. They're VC-funded, so I have no clue what will happen to them in the future. Maybe they'll be profitable and their investors will say "oh okay, it's fine to have a boring, profitable startup in our portfolio". Maybe not. Likely not.

[1] https://scalingo.com

[2] https://openregulatory.com


Thank you for this! We've been looking for an EU based heroku alternative for a while.

Thanks to a couple of people suggesting scalingo here on HN, I gave it a try last night and I'm very impressed with the offering. It took me less than an hour to get our rails template running - with no code changes at all.

The prices are comparable to or lower than Heroku's, and unlike Heroku you're not forced into paying much more when your memory usage starts exceeding 512 MB.


For Rails apps, I've moved over to Hatchbox.

https://hatchbox.io/

Now I can deploy multiple apps to a single server, not have to worry about provisioning. I pay for the single server instance.

It's been great for my hobby projects.


Same here. I manage all my projects via Hatchbox, use UpCloud for the VPS, and that's it... not free, but the closest thing to seamless deployments.


Linode has predictable costs and simple human support.


Love linode.


I used AWS because of PCI compliance, and it was HELL.

It took a whole week to set up a simple EKS cluster for one basic Rails app.

Everything is ridiculously complicated, and the docs are terrible. When the Log4j exploit occurred, I had to figure out how to search the logs, and I had to ask them to apply an update manually - it took them 3 days to do it!

Cloudwatch is terrible. EKS UX is terrible.

Unless you want to have a full-time job of wasting time with AWS, stay away!

Cost is horrible as well. Heroku was $7/month and AWS around $80.

Fortunately there are several good options. After seeing HN posts recently, fly.io seems popular.


Moved from Heroku to Railway:

> 5. Summary/description of your apps

Simple discord.js bot interface to fire Lambda functions, about roughly 5000 times a month

> 1. How much work it took to move apps (be honest)

Roughly 2 hours and it's running! Pretty good replacement.

> 2. How much experience you had at the time of the migration

Experienced with k8s, practically a newbie with heroku-like environment.

> 3. How valuable were your learnings?

Barely any learning, it was quite straightforward.

> 4. Cost comparison

My application is pretty low resource so it remains free with the Railway pricing model.

> 6. Anything else you want to add

Related to point 4. Cost: Heroku did have more extras built-in (logging system, alerting), these are missing in Railway.


I wrote an article on Dev.to about self-hosted alternatives to Heroku/PaaS.

Basically these three:

- caprover

- dokku

- k8s

https://dev.to/timhub/self-host-heroku-alternative-40l4


The way to go is either CapRover or plain docker-compose (deployed from a git repo) on the Oracle free tier.
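As a sketch, the git-tracked compose file for that setup can be as small as this (service name, ports and image are placeholders; deployment is then just pulling the repo and running `docker compose up -d` on the free-tier VM):

```yaml
# docker-compose.yml - minimal single-service sketch
services:
  web:
    build: .                  # Dockerfile lives in the same repo
    ports:
      - "80:3000"             # host port 80 -> app port 3000 (placeholder)
    restart: unless-stopped   # survive VM reboots without manual intervention
    environment:
      - NODE_ENV=production
```

The `restart: unless-stopped` policy is what substitutes for a PaaS supervisor here: Docker itself brings the container back after crashes or reboots.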


I've tried both render and fly.io, finally settled with fly. Love the DX!


Used to run all my bots (fewer than 5, each doing 30 min - 1 hr of computation daily) on Heroku using the free dynos; now I've moved them all to a Raspberry Pi I had from years ago that I dusted off for this.


What do the bots do?



Picked up a NUC and am moving my apps there (several backend-only things like bots, with frontends served off GH Pages, and one small app with some server processing that gets about 600 unique visitors a month). They were all served off the Heroku domain anyway, so I'll just move them to DynDNS. It will take several years to make it worth not just paying Heroku, but at least I won't have to play this game again when Render or Fly.io end up having to restrict their free tiers as well.


I had a comment system (Staticman) on Heroku. I moved to fly.io and the import (built by fly.io) was done in a few minutes; after adjusting my blog I was ready to go. All in all, 15 minutes.


Oh good call. I was trying to figure out what to do with my two staticman apps that I have on heroku. And I wanted to try fly.io!

Edit: though I don't want to pay for staticman stuff. Are you running them for free?


Update: I moved mine to fly.io using the Heroku migrator. Took 5 mins.


I use the m3o.com/app service (because I wrote it). It's a simple interface on top of Google Cloud Run that gives me back a *.m3o.app url.

I can run it like `m3o app run --name=helloworld --repo=https://github.com/m3o/m3o-app` and then go to https://helloworld.m3o.app


For my next project (pun intended) I will use Vercel to host and grab a Postgres instance somewhere (maybe Heroku or elsewhere) for the storage.


My condolences

moved out of vercel and couldn't be happier, what a buggy mess that is


I vouched your comment (was dead...), to see what you think the buggy mess is. I have not seen any bugs but I have only scratched the surface of it.


I've been using Render.com, but I experience very slow boot time with my Node application. I think I will be moving my project to Fly.io


Years ago I used Heroku but I've moved everything to Dokku running on Linode with no issues. Dokku is basically the same thing.


Railway has been on my mind as a possible candidate, but haven't made the move quite yet. https://railway.app/

Seems like a similar idea: you give them a Dockerfile, they deploy it somewhere and inject `$PORT`, and your app listens on that port.
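A minimal example of that contract, assuming a Node app (the base image and file names are illustrative, not Railway requirements):

```dockerfile
# Railway builds this image and injects PORT at run time;
# the app must bind to process.env.PORT rather than a fixed port.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```

Inside `server.js`, something like `app.listen(process.env.PORT || 3000)` covers both the platform-assigned port and local development, which is the same convention Heroku used.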


I moved to Railway. My little bot barely consumes any resources, so it's free :)


I've been using render.com


Vercel has been awesome for us - they've solved all these little pain points that really add up. Definitely a great starter service like what Heroku was. Also free for individuals.


Front-end:

- vercel.com

- nginx load-balanced Next.js apps running on Digital Ocean

Back-end:

- fly.io

- nginx load-balanced Python/Node apps running on Digital Ocean


I didn't, I actually moved more things into Heroku because it wasn't worth the effort to maintain the platforms myself.


I moved 1 thing to GitHub with GitHub Actions. The rest I kept on Heroku and paid.

I use it and they should be paid for it. Why not?


Digital Ocean + Dokku = Heroku ;)

Or... at least as close as you can get. I actually preferred it before the heroku thing.


I wrote about it shortly here in another Heroku topic: https://news.ycombinator.com/item?id=32585299

But to answer your questions: 1. It took around 1-2 months of experienced DevOps work (1-2 DevOps engineers); this included testing and the actual switchover to the new AWS infra.

2. We actually used outside contractors (actual k8s gurus) to do the work. But the decision was made in-house by myself. I'm no k8s guru, hell, I'm not even DevOps, but I have enough experience with the "big guys" that the choice was calculated from a technical perspective, considering our growth and technical needs for now and for the future.

3. I haven't had any reasons to regret the decision, but I knew that before. I knew AWS offered the things we need and how to use them and what it's roughly going to cost, so this wasn't really a surprise. I did my due diligence.

4. Heroku bills were around $5k/month; the AWS ones are in five digits, but that was expected because we started to use many more services and resources than we did (or could) in Heroku. We also expanded into two new countries (one of the main reasons the switchover was done), so the costs are sadly not directly comparable. I can say that the new costs are in the expected range and I'm happy with what we get for the money.

5. We run an eCommerce site with a monthly GMV of 1M€+ and around 700k monthly unique visitors. Our stack is (per country) 3 Node.js services, PostgreSQL, Redis and some frontend services. The main cost factors are the database(s) and CloudFront.

6. These decisions have to be made per use case, per project, taking into consideration your actual infrastructure needs and budget limits. I don't think there's anything AWS can't do, but is it the cheapest? Of course not. Hell, there have been plenty of threads on this very site about moving your infrastructure to your own bare metal and saving hundreds of thousands per month.

Think through the issues you are currently having (for us it was Heroku's stack limitations: couldn't get HTTP/2, couldn't do custom monitoring/alerting, no access to LBs, no scalable DB hosting, the need to easily roll out new countries without too much manual work, and the weirdly high cost of some services, like Redis for example). If your problems are in the wallet, you need to make that your first priority and find a provider that meets your technical needs at the best price. All the big guys have cost calculators available that will let you estimate what you would be paying. Take your time: this isn't an easy decision and it will affect you a lot in the future as well, so you don't want to get it wrong and be stuck with another set of issues.


Digital Ocean + Cloud66


We really recommend https://www.clever-cloud.com/. Their stack is awesome, with great uptime and great support.


Are you planning to make a disclosure regarding that statement, such as being co-founders with Quentin Adam, the founder of Clever Cloud, in other projects?


Ouch!

Stuff like this should always be disclosed. Something like "My co-founder / friend built ..." would be totally fine, but this comment looks like it intentionally obscures the connection. Not a good look.


I don't find it a big deal. Smart people will evaluate the offering for themselves and not make a decision based on a 1 line comment on Hacker News.

Plus, I doubt they would spend their lives building the project and not recommend it.


Other project meaning being contributor to open source together?


> Other project meaning being contributor to open source together?

Other project meaning actually founding that project together [1]:

> The Makers for Life collective and project were founded by three entrepreneurs in Nantes. Quentin Adam (CEO Clever Cloud), Baptiste Jamin and Valérian Saliou (Co-founders Crisp) [...]

You live in the same city. You founded a project with him. It's not like you have submitted patches to Linux from different parts of the world, without ever seeing each other.

I don't want to sound like I have something personal (I didn't even know your names until yesterday) but your assumption that the readers here on HN have no clue makes me sick.

If you know each other, that should be disclosed. I have no qualms about recommending a friend's startup, given that the friendship is disclosed.

[1] https://makair.life/2020/05/01/makers-for-life-the-making-of...


It was just to be clear that we do not have any financial involvement together.


I use Clever Cloud too, for application prototyping when I have to move fast, and for my side projects.

Very reliable, I've never had a problem with it.


I moved my side project (a single server handling ~3 requests/s) from Heroku to Clever Cloud about 3 years ago, and it has worked very reliably for me.

The truth is, at this scale Heroku worked well too. The reason to move, for me, was that they are an EU company, so there are fewer questions from my clients about GDPR compliance.

I'd say the UI is quite old-school, and I'm missing good built-in metrics. There are some paid add-ons providing metrics, but I'd really want to have them built-in.


Literally anywhere else


Almost, but not quite the audience for your question.

I was going to use Heroku, but the security requirements for my app made that a non-option. Instead, I went with simple Terraform that spins up CoreOS nodes, using cloud-init to spin up Docker containers. The CI process built the Docker images, and deploying was just a matter of pointing an ECR tag at the new image and then cycling instances. Not quite as simple as Heroku, but pretty close.
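The cloud-init piece of that setup might look roughly like the sketch below; the account ID, region, ports, and image tag are all placeholders, and ECR authentication is omitted for brevity:

```yaml
#cloud-config
# Sketch: install a systemd unit that runs the app container,
# pulling whatever image the "release" ECR tag currently points at.
write_files:
  - path: /etc/systemd/system/app.service
    content: |
      [Unit]
      Description=App container
      After=docker.service
      Requires=docker.service

      [Service]
      ExecStartPre=-/usr/bin/docker rm -f app
      ExecStart=/usr/bin/docker run --rm --name app -p 80:3000 \
        123456789012.dkr.ecr.us-east-1.amazonaws.com/app:release
      Restart=always

      [Install]
      WantedBy=multi-user.target
runcmd:
  - [systemctl, daemon-reload]
  - [systemctl, enable, --now, app.service]
```

Because the unit pulls by tag on boot, "deploy" really is just retagging the image in ECR and cycling instances, as described above.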

I moved onto AWS via this: https://registry.terraform.io/namespaces/GoCarrot

No k8s. No Docker. Beautifully clean system. Blue/green deploys with automatic rollback. Continuous deployment (there's a CircleCI orb as well). Tightly buttoned-down system configuration. Debian built from scratch with an eye towards supply chain security. (There's a root image factory and base image factory to handle the layering of image build processes involved.) Log aggregation and configuration management baked in. Security that can only be described as "insanely paranoid, yet oddly pragmatic." Cascaded image builds, so I can update the entirety of my infrastructure by pushing one button to kick things off then clicking a few buttons to approve deployment of the various services.

1. About a week, although I had the assistance of the author. I'm now rebuilding my personal infra on Omat and it's going much more quickly. Probably 3 days total, with no assistance.

2. More experience with DevOps stuff than most, but not a DevOps person by any stretch.

3. Very, very, very instructive.

4. Given what I was coming from, cost-neutral. Compared to Heroku? Notably cheaper.

5. At present, we're in alpha so traffic is negligible and back-end workload is fairly minimal (tens of thousands of jobs per day). The author of the tools, however, is CTO of a mid-tier SaaS that handles a quite significant (millions of transactions/day, IIRC) amount of traffic, and he is super aggressive about not being hobbled on performance needs -- but also being cost-efficient.

6. Avoiding the k8s iceberg, while having all the modern amenities in a system I actually have a hope of understanding top-to-bottom (modulo my hesitation around reading the systemd source) is nice. This system is an object lesson in "loosely coupled, highly cohesive" design. I haven't felt at any point that I may be stuck in a You Can't Get There From Here situation. Avoiding layering a second layer of software-defined networking (Docker/k8s) on top of the software-defined networking of AWS means I neatly avoid the single biggest source of chronic issues (and, via the workarounds needed, system complexity) that I've experienced with "modern" (Docker/k8s) DevOps approaches.


Fly.io


Fly!


Cloud66


For security-focused apps (HIPAA, SOC 2, ISO, GDPR, etc.) check out https://aptible.com

