What pain is Heroku saving you to justify being 5x more expensive than Lightsail/Digital Ocean/Linode?
All websites I maintain/deploy are either built as a Docker image and published by CI on git check-in, or deployed locally with a single rsync/supervisord bash script (or right-click Web Deploy for some older IIS/ASP.NET apps). I probably have over 50 sites I'm currently hosting, so anything that much more expensive won't enter into consideration.
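For the curious, the rsync/supervisord script really is a one-pager; a minimal sketch (host, paths and program name are all made up):

#!/usr/bin/env bash
# deploy.sh: sync the build output and restart the app under supervisord
set -euo pipefail

# push the local build to the server, removing stale files
rsync -az --delete ./publish/ deploy@example.com:/srv/mysite/

# restart the supervisord-managed program so it picks up the new files
ssh deploy@example.com 'supervisorctl restart mysite'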
But what could justify the extra cost, especially as it's recurring? If it's some kind of effortless/magical scalability, I'd rather put that additional cost towards more hardware and buy more headroom.
Heroku turns a strategic liability ("I have only one employee who understands what rsync is; if he leaves I'm screwed") into a fiscal one. Any company I've ever known will always choose the latter for anything outside of their core competency.
We use Heroku heavily, and we went from 2 full time devops engineers to 0. Everything is now buttons and sliders. There are no "security patches". The CEO could log in and scale if he needed to; it's a slider. We get monitoring for free (labour free, not cost, i.e. the "good" free). Memory usage, CPU usage, HTTP status codes, logging: it's all there. We spend no time thinking about rsync or devops or any of that: we just solve the problems we're good at.
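For what it's worth, the sliders have CLI equivalents too; scaling (assuming a standard app with a web process type) is a one-liner:

$ heroku ps:scale web=4
$ heroku ps   # confirm the new dyno count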
Of course, everything is a matter of scale. Legend has it, Deliveroo UK only moved off Heroku after they grew so large that Heroku wasn't willing to offer them more dynos on their account. That sounds like a reasonable time to go in-house. But any <100-person company... why bother? Focus on what you're good at, and let other people do devops.
This may just relate to the circles I move in, but I'd have a hard time finding a group of what I'd call skilled developers, where not a single one can manage rsync or find their way around a Linux server in general.
It's all well and good at the "we need Kubernetes" scale to say you need specialists, but a team that can't manage a VPS is strange to me.
Wait until your VPS is breached because your OS wasn't patched, or because the firewall wasn't correctly configured. Learning is good; however, at a certain point I think it's fair to say that you can't reduce DevOps expertise to just rsync-ing a jar or a Docker image.
Nothing about "just use a PaaS" changes this. Sacking ops and telling developers they should "just use EKS" or whatever is precisely why we keep seeing open S3 buckets and ES servers.
IMO the fear is being overstated: everything's written down in either a single deploy script or configured CI, i.e. there's not going to be some loss of know-how. It's the same as if the person in charge of Heroku leaves: someone else needs the login credentials and the know-how to set up Heroku, as they would with any CI.
I don't know what Heroku is offering, but Lightsail and ECS instances also have metrics and pretty graphs (though admittedly I rarely check them myself). Maybe it would save me some SSH sessions for manually applying security patches, but I was recently able to upgrade my Lightsail instance to the latest Ubuntu 18.04 LTS with just:
$ do-release-upgrade
> But any <100-person company... why bother?
Because it's 5x more expensive.
> focus on what you're good at, and let other people do devops.
But it already takes hardly any time/effort to keep doing what I'm already doing.
I guess it's for companies who see value-added benefits that justify the cost, but what's being proposed here is that everyone should use Heroku first, and it just boggles my mind why most people would make that their first option when it's so much more expensive. I already think the cloud is too expensive, so there's little chance I'll pay a recurring premium for something that's not going to save me any time over what I'm already doing.
If it replaces one engineer, that's like $150K+/year (when including taxes and overhead, that is not at all a high estimate). So it depends on what the 'x' is in '5x more expensive'. And will probably be more reliable than what you'd get paying one (more) engineer to do it in-house too.
If you're hiring a "devops engineer" whose total responsibility is cloud touching, sure. You're right.
But where is that actually the case and your app can run comfortably on Heroku?
At multiple jobs I've been the only person who could credibly claim to understand the entire stack used at the company, from the web frontend to the OS the backend database runs on and the person to whom teams would come to validate their designs for scaling and reliability. I didn't write product code in those roles. But I multiplied the effectiveness of the people who did.
Heroku is a wonderful tool that doesn't get you the actually hard parts of the job req.
At 5x the cost of $10k/year in infrastructure spend, Heroku is significantly cheaper than a dedicated DevOps team. However, if your product is backups-as-a-service and you'll need 100PB of storage, then Heroku is probably not the best option.
At 100x the traffic, the business would be at 5 million dollars a year in spend (100 × $50k), which exceeds Heroku's standard pricing model. The business can then
1) Negotiate with Heroku for an enterprise contract
2) Consider migrating to a more cost effective platform
3) Dedicate time to home-growing a solution.
Part of the reason Heroku charges so much is that their customers are typically small, but they'd likely rather find a price that keeps you on their platform than push you into home-growing a solution.
So now the non-engineer needs to understand snapshots, making sure snapshots don't break, SSH, rollbacks, etc. That's assuming the bad upgrade didn't leave any damage behind (like corrupted DB data). And assuming they didn't lose the post-it they wrote the CLI credentials on, since you can't easily reset those the way you can with Heroku.
Snapshots are typically part of the provider's web management interface and are basically the first thing anybody who makes changes to anything should learn how to use.
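And for anyone allergic to web consoles, the same thing is scriptable, e.g. on Lightsail (instance and snapshot names made up):

$ aws lightsail create-instance-snapshot \
    --instance-name my-site --instance-snapshot-name pre-upgrade

# if the upgrade goes bad, spin up a replacement from the snapshot
$ aws lightsail create-instances-from-snapshot \
    --instance-snapshot-name pre-upgrade --instance-names my-site-restored \
    --availability-zone us-east-1a --bundle-id nano_2_0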
Moreover, if you're having someone else manage your systems and they upgrade them, now what do you do when the new version causes problems?
I ran into an issue recently where newer systems default to a newer version of a protocol but the implementation of the newer protocol has a major bug the old one didn't. When that happens on your systems you roll back until you can solve the issue. When it happens on systems managed by someone else, better hope you can identify and solve the issue quickly because in the meantime your system is broken.
> Heroku turns a strategic liability ("I have only one employee who understands what rsync is; if he leaves I'm screwed") into a fiscal one
So, what's your plan in case Heroku shuts down, or gets bought and changed completely? Isn't that also a strategic liability, just a much larger and arguably less likely one?
Heroku got bought by Salesforce several years ago.
If you have 12-factor apps then you have a fighting chance of moving off it anyway.
Apart from Dokku, I'd say Cloud Foundry is the closest next environment that you can install and operate directly, though it's an 800-pound gorilla by design. But there are fully hosted services for it (e.g. Pivotal Web Services, IBM BlueMix, SwissCom Application Cloud). There are also semi-hosted options (Rackspace Managed Cloud Foundry) and IaaS-provided installer kits for AWS, Azure and I think GCP as well. You can also buy commercial distributions from Pivotal, IBM, Atos, SUSE and SAP.
Disclosure: I work for Pivotal, we sell Cloud Foundry and Kubernetes distributions (PAS and PKS).
It's a negligible liability compared to the risk of the devops guy leaving the company within a year, without leaving any documentation or any clue how the app was deployed or run. The same thing will happen next year with the replacement guy, if there ever is a replacement.
Any project that still fits Heroku has much cheaper options available with all the same reliability features: Elastic Beanstalk, OpsWorks, ECS, etc (I’m naming AWS options because I’m not knowledgeable enough about other cloud providers to suggest anything else, but I know all the major players have something in this realm.)
It really doesn’t take a devops engineer to run these if your app still fits on Heroku. A little bit of overhead goes into learning the service, much like you’d learn any new API or programming library.
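As an example of how small that overhead is, the Elastic Beanstalk CLI workflow (app and environment names made up) is roughly:

$ eb init my-app -p docker -r us-east-1
$ eb create my-env
# subsequent releases:
$ eb deploy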
> What pain is Heroku saving you to justify being 5x more expensive than Lightsail/Digital Ocean/Linode?
The pain of setting it up, applying security patches, making sure you set it up securely to begin with. The pain of having a mental model more complex than "the server is what I git push to."
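And that model really is the entire deploy workflow; roughly (app name made up):

$ heroku create my-app
$ git push heroku master   # build and release happen on push
$ heroku logs --tail       # watch it boot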
> I probably have over 50 sites I'm currently hosting, so anything that much more expensive won't enter into consideration.
No one would argue someone in your position should use Heroku. It's for people who are willing to pay to avoid sysadmin work... which is a lot of developers.
How does Heroku and other managed services perform updates that might contain breaking changes? Or do they only perform minor updates or security updates with no breaking changes?
My biggest fear with managed hosting and managed databases is being given too short of a window before they update.
tl;dr: You get a few choices of Ubuntu LTS releases, which they maintain for a long time (currently they still support 14.04, now nearly 5 years old). Or you can push Docker images, at which point the underlying OS is squarely back in your court — technically they must be applying kernel patches, but Linus is fairly religious about not breaking userland.
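The Docker path is a couple of CLI calls as well; a sketch, assuming a web process type:

$ heroku container:login
$ heroku container:push web      # builds and pushes the image from ./Dockerfile
$ heroku container:release web   # deploys the pushed image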
I agree. You can set up a Heroku-like setup pretty easily: push to GitLab to trigger a pipeline that builds your container, and now your $5 VPS with Docker Compose and Watchtower is updated.
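The VPS side is a few lines (the Watchtower image path and flag here follow its current upstream docs; adjust to taste):

$ docker compose up -d   # start the app from docker-compose.yml

# Watchtower watches the Docker socket and pulls/restarts containers
# whenever a newer image appears in the registry
$ docker run -d --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower --interval 300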
With your stack, how do you deploy new versions with zero downtime? Do you have a load balancer and start the new version, switch new traffic to the new version, drain connections to the old version, then stop the old version?
And how do you update the OS and the kernel without downtime? Do you setup a new machine, deploy to it, switch the traffic to the new machine, and decommission the old one?
I'm asking because these are the kind of things Heroku and other PaaS do for you.
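For concreteness, the hand-rolled equivalent of the first flow is something like this (container names, ports and health endpoint are all made up):

# start the new version alongside the old one
$ docker run -d --name app_green -p 8082:8080 myapp:v2

# wait for it to report healthy before switching
$ curl -fsS http://localhost:8082/health

# repoint the nginx upstream at the new port and reload;
# nginx lets in-flight requests on the old workers finish
$ sed -i 's/127.0.0.1:8081/127.0.0.1:8082/' /etc/nginx/conf.d/app.conf
$ nginx -s reload

# retire the old version
$ docker stop app_blue && docker rm app_blue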
1-person company: sure, do everything yourself, as cheap as possible.
Two people: "here, I'll teach you; take over DevOps so I have more time."
10-person team, everyone new, and your application runs in the cloud: the person who took over has by now left the company, extended the infrastructure without informing you, and the application has been reworked so it runs on the cloud.