Docker is the Heroku Killer (brightball.com)
139 points by joeyespo on Sept 26, 2014 | 68 comments



Docker is great, and I love Docker. I have a couple of specialized non-HTTP apps I'd love to deploy to a Docker cloud, and I'm waiting for a convincing Docker PaaS vendor to come along.

But:

1. A single, full-time sysadmin has a total employer cost of $200K per year in most tech markets.

2. A single sysadmin won't provide 24x365 coverage. They sleep, they take weekends off, they go on vacation.

3. I know people who run tech startups with 80+ employees and heavy server load who pay Heroku a few grand per month, and they just don't care. It works great, and it's a rounding error compared to their programmer salaries.

4. As I explained to one consulting client, if Heroku goes down (they do, every couple of years), you spend your time reloading https://status.heroku.com/ and apologizing to your clients on the phone. And eventually the problem goes away by itself. If you run your own servers, it's all fun and games until you lose 2 RAID drives and the RAID controller board in one night over Christmas vacation—one month after your sysadmin quit, leaving the backup system broken.

In practical business terms, you need to be either huge or completely broke before Heroku looks like a bad deal. $1000/month will buy you a lot of dynos, and for most actual businesses, $1000/month is a rounding error.


An excellent way of putting it, ekidd. Exactly what I came here to say. I work for a small startup and I'm the one wearing the sysadmin hat most of the time, and every single day I imagine disaster modes that could cost us almost triple what we would have paid Heroku and others (granted, we have technical reasons for using bare servers, nothing to do with money). Icing on the cake is that I am going on vacation this coming Tuesday. :D

Always reason about stuff like this in terms of disasters. They are far more lethal to a company than the occasional $1000 you'd be giving a dedicated hosting service.


> In practical business terms, you need to be either huge or completely broke before Heroku looks like a bad deal. $1000/month will buy you a lot of dynos, and for most actual businesses, $1000/month is a rounding error.

I'd dispute that, because of the following:

* Heroku is in no way equivalent to a single full-time sysadmin, not even close

* The choice is not between managing your own hardware and Heroku, it is typically between Heroku, VPS (managed hardware), AWS, and then possibly your own hardware. The same answer does not apply to everyone.

* Many small companies don't have a large wage bill (outside SV), and don't have lots of devs or a sysadmin, instead they tend to have devs who do the sysadmin too.

* Heroku can be significantly more expensive than other options like a VPS, which also doesn't require managing hardware (your RAID example). For some businesses not burning VC money, that makes a big difference. In your example of $1000, you could probably serve the same customers for $100 a month using VPS servers.

* Dynos are not the only cost; you'll be charged for storage too, and $1000/month is not a rounding error for many businesses starting out

* As your business scales, Heroku becomes more and more costly; there have been reports of clients being charged things like $20,000 a month when they start to hit scale.

* They've had more than a few major incidents this year alone: https://status.heroku.com/uptime - no service is perfect, but that's not once every couple of years

Heroku can look like a bad deal for all sorts of reasons, not least of which is cost, and it is not comparable to hiring your own sysadmin; it's more like having a freelance sysadmin on call who is also responsible for thousands of other sites, all of which are liable to be down at the same time. If you are a medium/high-traffic website with few other tech costs, Heroku can be very expensive compared to the alternatives. Additionally, as documented by some of their customers, they are an extra level of indirection between your app and the user, and that makes troubleshooting issues or optimising harder:

http://news.genius.com/James-somers-herokus-ugly-secret-anno...

I would disagree, though, that Docker (in its current form) is going to kill services like Heroku, because it requires significant expertise and setup time to get right, and doesn't fully address the same concerns.


> Many small companies don't have a large wage bill (outside SV), and don't have lots of devs or a sysadmin, instead they tend to have devs who do the sysadmin too.

It's exactly those kinds of companies that Heroku is aimed at. I can hack being a devops person. I don't want to:

— I don't know what I don't know, which has huge ramifications for both security and uptime

— I have plenty of opportunities available to me and am not interested in being on call all the time. Your business could not reasonably pay me enough to answer the phone at 3 am.

— One day of heads-down product development can easily create several thousand dollars in value (or more) for a company. When I am interrupted with server patches, setting up new servers, or debugging a failed deploy process, and you throw a meeting or two in the mix, I'm downgraded to shipping a few bug fixes that day.

Any competent executive wants his key developers heads-down shipping features and should not expect them to manage devops any more than he would expect them to interrupt their day to balance the accounting books. It's a context switch. Throwing money at Heroku makes perfect sense here.


What if you could automate your AWS infrastructure so there would be NoOps, like using Heroku: CI/CD, autoscaling, auto-healing, batteries included?

The point is that you have 100% control of the infra, but the orchestration service has you covered. And if the orchestration service goes down, your instances will continue to run. Not like Heroku, where their downtime is your downtime!
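For instance (a sketch with the plain AWS CLI; group and config names are made up), an auto scaling group keeps replacing failed instances entirely on its own, whether or not any orchestration layer above it is up:

    # self-healing pool of 2-10 instances; AWS replaces unhealthy ones
    aws autoscaling create-auto-scaling-group \
      --auto-scaling-group-name app-asg \
      --launch-configuration-name app-lc \
      --min-size 2 --max-size 10 \
      --availability-zones us-east-1a us-east-1b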


> They've had more than a few major incidents this year alone: https://status.heroku.com/uptime - no service is perfect, but that's not once every couple of years

This is a frequently misunderstood metric. While there are a number of incidents listed on status.heroku.com, the average platform uptime for a single app is still 99.995% [1]

[1] https://status.heroku.com/uptime


> In practical business terms, you need to be either huge or completely broke before Heroku looks like a bad deal. $1000/month will buy you a lot of dynos, and for most actual businesses, $1000/month is a rounding error.

Heroku is a great deal. Still, let's not forget the cost of production-tier Heroku add-ons, such as databases and logging.


People don't talk about it much, but it's very possible to mix Heroku with other services running on AWS. We use Heroku for our application tier and host our own DB on EC2.


> I'm waiting for a convincing Docker PaaS vendor to come along.

Have you looked at Stackato? That looks to be very similar to Heroku, but uses Docker containers.


(For us) Docker is not about security or scalability but about (good enough) isolation, separation of concerns and reproducibility. Let me elaborate.

* Isolation: Docker enables us to pack any software and let it run with close to no side effects. Different versions of packages, libs and gems needed by apps don't interfere. It's like bundler, but for all kinds of dependencies, not only gems.

* Separation of concerns: For our operations it doesn't matter what's inside a Docker container. We have mid-sized Ruby web apps, small Go daemons, NGINX with Lua support compiled in, and legacy PHP apps, all neatly packed in Docker containers. They have a well-defined interface:

a build script, which consistently captures dependencies and the build process, and `docker run` wrappers, which clearly state the interface of the running container, like exposed ports and mounted volumes (see the sketch after this list).

* Reproducibility: We are able to run the same containers in development, staging and production. A dozen containers will easily run on a developer's laptop.
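To make the second point concrete, here is the shape of such a `docker run` wrapper (a minimal sketch; names, ports and paths are illustrative, not our real setup):

    #!/bin/sh
    # run-webapp.sh -- the script doubles as documentation of the
    # container's interface: one port, one volume, one env var.
    exec docker run -d \
      --name webapp \
      -p 8080:8080 \
      -v /srv/webapp/uploads:/app/uploads \
      -e RAILS_ENV=production \
      example/webapp:1.4.2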

As a side effect, the Docker architecture makes us think harder about app architecture, like which services are stateless and which are not, and for what reason.

The fact that containers share a kernel, and thus are not 100% isolated or reproducible as with virtualization, hasn't been an issue for us (so far).

There are still issues and features we're missing. For example, private Docker repos are a PITA, and building instead of pulling from a repo means you might get fooled by the build cache. And we'd love to have built-in support (or at least a common standard or best practices) for orchestration. But all together, for our needs, it's already pretty useful.


> And we'd love to have built-in support (or at least a common standard or best practices) for orchestration.

Look into BOSH[0][1]. It's an IaaS orchestrator that works with multiple cloud backends: AWS, OpenStack, Warden and vSphere out of the box. I use it in my day job.

It's already been applied to working with Docker containers.[2]

[0] https://github.com/cloudfoundry/bosh

[1] http://docs.cloudfoundry.org/bosh/

[2] http://blog.pivotal.io/cloud-foundry-pivotal/products/managi...


My problem with every single one of the orchestration solutions I've seen is that they tend to be overcomplicated for small deployments, and make a lot of decisions that I often don't agree with.

Looking at BOSH, my immediate reaction is that they have a huge page just explaining terminology [1]. The fact that they need one (and they do, judging from what's on it) is already a red flag for me.

We run "only" about 150 containers at the moment, so maybe I'd appreciate it more if we had thousands, but to me it seems horribly overcomplicated. And I have an immediate dislike to anything that requires constantly running processes in every VM. That may be necessary for full VM's, but it's one of the reasons I don't think VM focused orchestration solutions are a good fit for container based systems. Our existing (homegrown, on top of OpenVz) system makes heavy use of the fact that every resource in the containers are accessible from the host, and that's one of the thing I like abut Docker too.

[1] http://docs.cloudfoundry.org/bosh/terminology.html
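To illustrate what host-side access looks like with Docker (a sketch; the container name is hypothetical): a container is just a process tree on the host, so you can reach into it directly:

    # find the container's init PID, then poke at it from the host
    PID=$(docker inspect -f '{{.State.Pid}}' webapp)
    ls /proc/$PID/root/var/log          # the container's filesystem
    nsenter -t $PID -m -u -i -n -p sh   # enter its namespaces (util-linux)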


It's complicated because orchestration is complicated. You're looking at essential complexity that can't be made to go away. What you can do is wrangle the complexity into a repeatable, declarative form. That's what BOSH and other orchestration tools do.

Everything on that terminology page is what has been found, in various systems of various sizes, to be necessary to ensure some semblance of robustness and operability as you scale from a single box to thousands of VMs and containers.

In my dayjob I've written BOSH release and deploy manifests that stand up a virtual machine containing 2 containers. I've used other deploy manifests that start a virtual machine with a small cloud of containers. My employers use BOSH to orchestrate multiple clouds on multiple backends with thousands of VMs.

Running a process inside the box is necessary to be agnostic of the substrate. BOSH can't assume that you'll run everything in Docker, or AWS, or vSphere, or any other such system. You can run heterogeneous combinations per your requirements. Again, some people absolutely require that capability.


Until shit hits the fan.

I really don't understand how all these "instead of Heroku just use X" people do not understand that one of the main benefits of Heroku is not managing servers. If your app on Heroku has an issue, Heroku will fix it (not your app, of course). If your app on Docker has an issue, who you gonna call?


You don't understand. A big barrier to entry for competing with Heroku for any other startup is creating a similar virtual environment that abstracts away from metal machines for people who just want to write apps. Docker provides that to startups. So we will see more startups competing in this space, you are going to use them and call them when shit hits the fan.

Google App Engine recently started allowing apps in any language to be deployed using Docker. So, with Docker, App Engine is suddenly competition for Heroku for Ruby programmers.


So I don't understand, and I can switch to other companies (startups to come) that will work like Heroku. Got it.


Why would you need something like Docker for Google App Engine?


To use App Engine's orchestration layer with any runtime you want to bring to it. https://www.google.com/events/io/io14videos/54bf5fec-50ec-e3...


I think they meant Google Compute Engine.


I can't deploy Kafka to Heroku. I can technically deploy Akka but 1X/2X instances aren't going to cut it and PX dynos are hilariously expensive, plus clustering isn't possible.

Yeah, if your app can fit on Heroku, leave it there. But Heroku is not a silver bullet for all types of apps. Containerizing them makes it much easier to add and remove instances, easily deploy new services, move instances around as underlying resources die, etc. Docker doesn't provide everything to make this happen but it's the key.

I've spent the past two weeks moving an Akka cluster into Docker with CoreOS. I'm about to deploy Kafka in the same way and so far it's been fantastic.


> I can't deploy Kafka to Heroku

But you couldn't before either, and yet Heroku still existed, so how does that make Docker a Heroku killer?


Fair enough. My point is that Docker makes packaging up the various services I need to deploy much easier and more predictable. Now, rather than spin up three EC2 nodes to be Kafka instances, I just use fleet to submit three Kafka units and it finds available resources among my existing CoreOS cluster. If I need more capacity, I spin up more generic CoreOS instances on EC2 and fleet starts using them.
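For the curious, a fleet unit for this looks roughly like the following (a sketch: the image name is made up and all Kafka configuration is elided). The Conflicts line is what tells fleet to spread the instances across machines:

    # kafka@.service -- hypothetical fleet template unit
    [Unit]
    Description=Kafka broker %i
    After=docker.service

    [Service]
    TimeoutStartSec=0
    ExecStartPre=-/usr/bin/docker rm -f kafka-%i
    ExecStart=/usr/bin/docker run --name kafka-%i example/kafka
    ExecStop=/usr/bin/docker stop kafka-%i

    [X-Fleet]
    Conflicts=kafka@*.service

Then `fleetctl submit kafka@.service` followed by `fleetctl start kafka@1.service kafka@2.service kafka@3.service` places the three brokers.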

The groundwork is being laid and I think we're going to see a lot of competition in Heroku's space. Given the price difference between Heroku and EC2 (much less DigitalOcean) I think there's a lot of profit margin for competitors to attack. That's Heroku's weakness: it's easier than ever to build the equivalent functionality.


Heroku has its own issues. Maybe they will fix them quickly, maybe not. At least on your own server you are in charge.


Being in charge is, to quote OP, only good until shit hits the fan.


Docker + {AWS, DigitalOcean, Rackspace} is substantially cheaper than Heroku.

Docker makes these services relatively comparable to Heroku in complexity, so the Heroku value prop has been diluted.


This may be true, but it also makes the common assumption that your time (or the time of your team) is somehow worth $0. But I imagine that you would be the first to say that your time is worth much more than $0.

Put another way: hardware stores sell paint and paint supplies all day. Buying those and painting a room yourself is certainly cheaper, but it assumes you have the extra time to paint and don't mind being the laborer. Hiring a painter costs more, but it saves you a ton of time and energy, and elevates the level of expertise you are bringing to the job.

Getting started is one thing. The ease of how you maintain and grow is another.


No, my assumption was that previously we had:

(Heroku Cost) <=> (EC2 Cost) + (EC2 DevOps Cost)

Now we have:

(Heroku Cost) <=> (EC2 Cost) + (EC2 DevOps Cost using Docker)

Where (EC2 DevOps Cost using Docker) < (EC2 DevOps Cost)

So the value proposition has shifted. It's not a "Heroku Killer", but it does dilute their value prop, as my prior post mentioned.


This. No one ever factors in Ops time when I hear them complaining about Heroku.


I'm not sure ops time is even the one to worry about. Most things I've learned I've learned the hard way. So a server going berserk isn't fun, but it's not like I just wasted either time or money on it. I learned something. I better understand my execution environment.

Downtime, on the other hand, is just lost revenue opportunity.


I certainly do, and I can tell you the amount of time I've spent managing servers over the past year has cost us less than running our entire production stack on Heroku would have done. So has the odd bit of downtime we've encountered because I've not got the same level of experience as Heroku's ops team.


I don't know, Docker + CoreOS is pretty dead simple.


If your Heroku costs significantly cut into your profit margins, you should certainly start paying attention to them.

For most well designed services, that is simply not the case. If my revenue from $1000 of heroku bills is $20,000, cutting $500/mo off of my server costs in exchange for any amount of hours or risk is simply not worthwhile.


Welcome to latency hell.


I'm going to call the provider that hosts my docker instances and get it sorted out.


The actual drawbacks (which the author has chosen to ignore):

- less secure

- you still have to update the system images and redeploy a lot (it's made for that, so it's fine). You can't just spin up an image and stop caring for it; libraries will update, potentially breaking your app, because, you know, performance, security fixes, the usual

- Docker itself is not very nicely made, hard to debug, etc. Hopefully that will get fixed over time.

What does it actually do better? It's faster... not 26:1, but it is faster, obviously. Mainly, you don't have to preprovision VMs since there is no boot time, so you can deploy fast. It also provides a much-needed API/glue for all the things.


We couldn't find exactly what we wanted to replace Heroku with in our setup, so we ended up building Longshoreman:

http://longshoreman.io

http://mikejholly.com/introducing-longshoreman/


Very interesting, thanks! I started outlining/working on a similar Docker orchestration tool a couple weeks ago here: https://github.com/pnegahdar/sporedock


Hi Adrian,

Longshoreman looks interesting. I have two questions I couldn't find an answer to. How does port allocation work? And can you specify rules such as "service A should be on all hosts"?


Port allocation is quite simple: we randomly select an unused port between 8000 and 8999.
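Roughly the idea, in bash terms (a sketch, not the actual implementation):

    # pick a random port in 8000-8999 that nothing is listening on yet
    while :; do
      PORT=$((8000 + RANDOM % 1000))
      netstat -ltn | grep -q ":$PORT " || break
    done
    echo "allocated $PORT"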

The most recent version has a --count flag which allows you to specify the number of application instances to launch. I believe the Heroku equivalent is --scale.


Just saw the spike in traffic this morning from my post being added here, thanks!

I'm very much a "DevOps" guy. I like running my own servers. When I first took over a Heroku-based application, it was extremely frustrating to me that I couldn't tweak nginx, Apache, or some type of on-dyno cache without adding network overhead to each call. I was also dealing with a system I took over after a buggy relaunch, and at that time Heroku seemed to have fairly regular outages, which was a problem for our customers, who were already very sensitive to the quality issues of the relaunch. That amplified the lack of control significantly, because I was constantly aware of what I couldn't do to fix the problems.

I'm still actively using Heroku, and the work they've done with their PostgreSQL offering has been really impressive. It has been significantly more stable too. From a development efficiency standpoint, it really is hard to beat. They've silently implemented "Shadowing," which is supposed to provide redundancy across availability zones (but not regions). The additional Dyno size options have been great too. It is a great company. I just wish Heroku had an option similar to RightScale to deploy within other data centers or at least AWS regions, but that becomes complicated because of their network of add-on providers.

What I was getting at with the article was exactly what arihant mentioned below. Docker open sources the core piece that makes Dyno-like functionality possible, which opens the door for disruption in the PaaS market.

For what it's worth, I also posted a followup a couple of days later called Tempering My Docker Enthusiasm (http://www.brightball.com/devops/tempering-my-docker-enthusi...).


Heroku has always seemed like more "sysadmin as a service" than just pure hosting; case in point: their response to Shellshock -> https://status.heroku.com/incidents/665


I'd say it's more that they aim to be more full-stack than just the upper sysadmin layer.


One of the biggest issues with Docker, more than anything else, is security. You can't really rely on container separation in a shared environment if someone plans on selling Docker containers.


I was surprised I didn't see it mentioned in the post. VM-style isolation is there for a reason. Dunno if Docker is the wave of the future, but for a Heroku (AWS?) killer, it would be nice to learn how sharing all this performance goodness on the same system is resolved isolation-wise.


Docker and Heroku use the exact same sandboxing technology under the hood (Linux namespaces and cgroups). This is not surprising, since Docker came out of dotCloud, a direct Heroku competitor. The pros and cons of Linux namespaces have been extensively discussed; it's no silver bullet, but when used properly it is quite robust, and it is rapidly gaining industry recognition.
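With Docker, those same primitives are exposed directly on the command line. For example (real flags, illustrative image name):

    # cgroup resource limits: cap memory at 512 MB, halve the CPU-share weight
    docker run -m 512m -c 512 example/webapp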


Yet Heroku relies on container separation, don't they?


Heroku can run containers as non-root.
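Docker can already do that half of it too: you can run the container's process as an unprivileged user (a sketch, image name illustrative); what's still pending is the user-namespace remapping discussed below:

    # run the containerized process as an unprivileged user
    docker run -u nobody example/webapp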


Eventually this will work with Docker as well, when distros enable user namespaces.

But that doesn't change the fact that you share the same kernel, which isn't going to be as good, security-wise, as running a kernel per container.


You can use SELinux to secure containers.


You have to be careful how you define "secure" here. You are correct that SELinux is a requirement if you are worried at all about cross-container interaction. But even with SELinux, you are still exposed, since you are sharing the kernel, and you would have to do work for top, ps, and others not to share what is going on across the system.


Counterpoint: Heroku Button is the Docker killer [https://news.ycombinator.com/item?id=8148794]. :)

It's cute to be bombastic, but no one's getting killed. The dotCloud people have always been my favourite PaaS people, and competition in deployment UX is sorely friggin' needed.


I know this is a dumb/basic question, but it's one I've wondered and don't fully understand. How is Docker that much easier/better than just writing some other kind of script that configures a machine for your app to run in and deploys it?


Well, it's sort of like that, except it's better.

Docker is like a package manager, except it's better than that: it's like a package manager that works across distributions. And there is already a full stack for just about everything. And we don't need to go through the normal package-management red tape.

Docker also provides a level of separation and isolation that makes it easier to set up configuration and also increases security.

It provides a type of building block with a standard interface for connecting to other pieces.

It allows me to easily customize a build off of an existing stack.

It provides a binary distribution of an application, so you know the whole system that you are deploying is exactly the same one you tested.
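As a concrete sketch of those last two points (everything here is illustrative: base image, packages, commands), "customizing a build off of an existing stack" is just a few lines of Dockerfile, and the resulting image is the binary artifact you test and then ship unchanged:

    # Dockerfile -- start from a published stack, layer your app on top
    FROM ruby:2.1
    RUN apt-get update && apt-get install -y imagemagick
    COPY . /app
    WORKDIR /app
    RUN bundle install
    CMD ["bundle", "exec", "puma"]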


Docker is an application runtime, basically a small package that is built to run software of your choice.

It's much faster (and smaller) to build such small images vs. building out the whole OS with Memcached installed (for example).

And you don't have to deal with OS configuration, etc.


Not to mention the fact that, because intermediate containers are imaged, you don't have to worry about making a mistake in your script.

Sometimes (okay, every day) I find myself trying a bunch of different things to get something to work. With Docker, you've got a series of "save points" along the way. You can continue building an image from the last good point, and end up with a really clean image that represents the shortest path from bare install to "what I need to get done."
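Concretely (a sketch; the placeholder is whatever image ID `docker build` printed for the last good step):

    # each successful Dockerfile step is cached as an image layer; if step 5
    # fails, fix it and rebuild -- steps 1-4 replay instantly from the cache
    docker build -t myapp .
    # debug interactively from the last good intermediate layer:
    docker run -it <last-good-image-id> /bin/bash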

I was playing around with a Perl script about a month ago that would strip all the RUN statements out of a Dockerfile, convert them into a shell script, and use that to bootstrap a Vagrant box. So what I'm saying is, if you don't know what you're doing (or just like to experiment with OSes), Docker is really interesting.
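The same idea in rough shell terms (a sketch, shell rather than Perl; it only handles single-line RUN steps):

    # turn a Dockerfile's RUN steps into a Vagrant provisioning script
    printf '#!/bin/sh\nset -e\n' > bootstrap.sh
    grep '^RUN ' Dockerfile | sed 's/^RUN //' >> bootstrap.sh
    chmod +x bootstrap.sh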


AWS or Azure: right-click, "save image snapshot".


If LXC were better, I don't think people would like Docker as much. Docker works because there's very little configuration or fiddling around needed.


Exactly. LXC is just too overwhelming.


I work on a sorta-kinda Heroku competitor (Cloud Foundry), so my views are suspect and of course are mine alone etc etc.

However, asking "which should I use for my app, Docker or #{PaaS}?" is a bit like asking "which should I use to build my house, a brick or a general contractor?"

They are different things, at different levels of abstraction.

People have already written Docker deployment systems for Cloud Foundry[0]. Plus you get all the other stuff you'd have to write yourself.

[0] http://www.cloudcredo.com/decker-docker-cloud-foundry/


Check out Dokku for deploying Docker containers. It's like a mini-Heroku in 100 lines of bash.

https://github.com/progrium/dokku
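Typical usage, for anyone who hasn't tried it (host and app names are placeholders):

    # after installing dokku on the server, from your app's repo:
    git remote add dokku dokku@my-server:myapp
    git push dokku master   # detects the app type, builds a container, deploys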


Also worth checking out is dokku-alt (https://github.com/dokku-alt/dokku-alt), a pretty active fork of dokku with added features.


I don't think Docker is accessible enough right now to kill Heroku. You have to put on your devops hat when you use it. With Heroku, you push and it works.

I'm not trying to bash Docker. In fact, I use it quite a bit, and I never use Heroku. I just think that compared to Heroku, it's much more of a low-level tool. With Heroku, you can just paste a few really simple commands and your webapp is now up.


The tooling isn't there yet, but it will come. Someone, I hope Heroku, will make something really good that developers rather than ops people can use to get something production-ready with Docker easily.


In addition to that, Heroku comes with an ecosystem of things that work with it.

E.g., with any new app, even a toy one, that I deploy to Heroku, I get a basic New Relic account for free.


For what it's worth, you do get that anywhere now. http://newrelic.com/application-monitoring/pricing


For those of us using JVM and .NET languages in AWS and Azure, the real value is the runtimes running on top of hypervisors without a needless OS layer.

I would rather see more investment into exokernels.


I agree with the article. However, I am also a fan of PaaS like Heroku, IBM's BlueMix, etc., because they save labor costs. I also always run a beefed-up VPS, and I have my own "Heroku-like" git push deployment set up.

Docker in particular, and containerization in general, are the future, especially for very large shops like Google, Facebook, etc.


> I'm waiting for a convincing Docker PaaS vendor to come along.

I believe Google Cloud services support that now.



