Docker really is the future (circleci.com)
99 points by mfenniak on June 19, 2015 | 61 comments



Docker does neat stuff, but if it's really the future then I am going to be disappointed. Using the Docker daemon as a high-level interface to clone(2) has been nice, but Dockerfiles are a weak format (why not use a general purpose programming language?), the pre-built binaries on DockerHub are just asking for exploitation, and unioning a bunch of disk images is a hack to deal with the imperative nature of how images are built. Projects like Nix and GNU Guix are what I want the future to be. With them I don't have to put trust into any single third party, I get nearly bit-for-bit reproducible builds, system-wide deduplication of packages, functional/declarative system configuration, atomic updates/rollback, quick setup of development environments (with or without a container), and more.
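
(For anyone who hasn't poked at what "a high-level interface to clone(2)" means: the kernel primitive is just an ordinary syscall with a few namespace flags. Here's a rough Go sketch of mine, not anything from Docker's source, that drops a shell into fresh UTS/PID/mount namespaces using only the standard library. Linux-only, needs root, and it does none of the image/layering work Docker adds on top.)

    // Spawn /bin/sh in new UTS, PID, and mount namespaces -- the kind of
    // clone(2) flags that container runtimes drive for you.
    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Inside that shell, changing the hostname or PID view doesn't touch the host; everything else Docker offers is layered on top of that primitive.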


I would be extremely interested in an article contrasting the approaches of Docker and Nix/Guix. Is there anything like this available? A cursory search returned mostly information on using the two together...


It's hard to compare them, because they're really two different things. I understand what the parent means, and I think it's fair, but comparing Docker and Guix is weird, because they aren't really competing technologies.

Docker is a somewhat hacky solution for lightweight containers on Linux. It gets tagged #virtualization, although it isn't really that. And, well, it comes with an even more hacky solution for configuring those containers.

Nix/Guix is a package manager for your Linux distro. It is a solution for the stuff you'd otherwise use apt-get for. More generally, it is the "done right" solution for system configuration, one that makes configuration reproducible. So it's hard to say what the difference is between Docker and Nix/Guix, because they serve different purposes, but if you compare the configuration language -- the approach -- you don't have to think twice to decide which is better. Nix isn't hacky, and Guix even less so, unlike Docker.


I should point out that Nix has container support built-in, without Docker, using systemd-nspawn. So, you get to use the same tools to manage systems on "metal", virtual machines, and containers. Pretty cool stuff!


> but comparing Docker and Guix is weird, because they aren't really competing technologies.

I think that's his point; he doesn't want the future to be Dockery, he wants it to be Guixy. A different direction; a different future.


I don't know of a good article about it. That would be quite useful. Maybe I should attempt to write it once Guix's container support lands in a release (I'm working on it). I use Docker by day, and hack on my own container implementation for Guix by night.


Does Guix help with the building of reproducible binaries, or is it focused on reproducible build environments/config?


Yes, it helps with that a lot. Binaries built with Guix are likely to be bit-identical. See the manual[0] for some more details about how we (and Nix) do that.

[0] https://gnu.org/software/guix/manual/html_node/Invoking-guix...


I think a Docker wrapper on top of Nix makes a lot of sense. There are going to be imperative things we do in Docker images that aren't going away -- like downloading HEAD from a git repo and building it -- and using Nix for every other part makes a lot of sense.


I'm pretty sure you could do that with nix and skip the docker part


> Up until now we’ve been deploying machines (the ops part of DevOps) separately from applications (the dev part). And we’ve even had two different teams administering these parts of the application stack. Which is ludicrous because the application relies on the machine and the OS as well as the code, and thinking of them separately makes no sense. Containers unify the OS and the app within the developer’s toolkit.

False. VM blue/green "phoenix" deployments were essentially "build a VM for each release to production; spin up the new VM; spin down the old one", which is what Docker enables, except in container form... which you could have done with OpenVZ or any other containerization solution that has existed to date, even on AWS.

> Up until now, we’ve been deploying heavy-weight virtualized servers in sizes that AWS provides. We couldn’t say “I want 0.1 of a CPU and 200MB of RAM”. We’ve been wasting both virtualization overhead as well as using more resources than our applications need. Containers can be deployed with much smaller requirements, and do a better job of sharing.

As someone who runs & leases 128MB RAM VMs for various purposes...wut?

You could have just as easily used OpenVZ to achieve this and literally everything else on your list:

https://openvz.org/Main_Page

Or any other container-based solution.

The only real thing you are saying with this article is:

"We like the Docker ecosystem and we feel its better than all other solutions."

Fair enough, but at least don't pretend Docker is the only way to solve these problems.


This is a good example of what I was talking about at the start. Nothing that Docker does is completely new, and people were doing all these things before Docker.

What Docker does is make them easier, pull them all into the same package, bring the ecosystem of tools around a single technology, and most of all: traction!


> Nothing that Docker does is completely new, and people were doing all these things before Docker.

You're contradicting yourself, because in the same article you wrote:

Into that world drops Docker: a new way of doing almost everything. It throws away old rules about operating systems, and deployment, and ops, and packaging, and firewalls, and PaaSes, and everything else.

Then all the hype about the "future".


It's the same stuff, but it does it differently. So for example, instead of using AMIs to prebake images, it uses a weird AUFS layer. And it deploys using Dockerhub or by pushing images directly to hosts. And instead of using Mesos it has Swarm. And Kubernetes: that really is quite different to what we're doing, but not that different, conceptually, from what Heroku is.


Yeah, I took your post as vezzy-fnord did.

But honestly, you have a very AWS-centric perception and the traction you talk about is really "Silicon Valley" specific. As long as you understand you are looking at a very, very narrow slice of the world when you use terms like that, sure, I can agree to that.

For personal stuff, I use Docker because it's easy w/o all the tools I have at work. But that isn't the same thing as "unique and new".


Your post goes wrong at one point. Of course Docker is great, of course new technologies are great. BUT the first point isn't so much of a joke: Docker is not the future, at least not for everyone. Right now Docker will not change the way apps are built; Docker will change nothing... at least not at the beginning.

When starting new development, people should (and never forget this) not care about microservices and Docker and anything else. They should just build a big fucking monolith. After they've done that, and they're getting more people for development or more people on their page / service / whatever, they can still start to split everything up.

Don't build a fucking unicorn, to put it bluntly. Start out boring. Do this every time, and don't listen to anybody who tells you how good microservices and Docker are. Deploy your app manually (okay, this step could be skipped), then use a tool LIKE Ansible or Puppet, and then, if you need more, look at the things that bigger companies use.

But never, ever over-architect your project / application / service.


As "the Docker guy" I don't want to enter the debate, I will simply try to explain how we approached this problem when designing Docker.

Docker was designed explicitly so you don't have to change the architecture or flow of your application on day one. Rather, we want to provide tools that make your life easier in small ways now, and make it possible to improve your architecture and workflow gradually, and on your terms.

This was a hard-learned lesson from building our previous product (Dotcloud, a Heroku competitor), which did require developers to change everything on day 1. As a result it was simply not possible for many developers to use it.


It's humorous that what you just said goes exactly opposite to the blog post, which talks about how docker is designed to do everything completely differently because current methods "don't scale".

Keep up the good work on docker though, it seems to be getting some good traction so far! I'm personally wondering if it's more fashion based traction though, and someone will be inventing the next "docker, but more buzzword" before long.


> It's humorous that what you just said goes exactly opposite to the blog post, which talks about how docker is designed to do everything completely differently because current methods "don't scale".

I don't think it's contradictory. We do want Docker to change, for the better, the way applications are built and run. And if you do want to throw away your existing stack and build your next application in the most portable and scalable way possible, then Docker can definitely help. It's just that it doesn't require you to, because most people don't upgrade everything at once: they improve their toolbox gradually.

For example, there is a meme that "if you run more than one process in Docker, you're doing it wrong". I actually disagree with that. I think if you want to transpose an existing VM into a container, and think of it as a mini-server that you ssh into, that is your prerogative and Docker should support that use case. Maybe later you will look into the benefits of breaking up your application into smaller, single-purpose containers (for example, you can then use the Docker API and ecosystem at a finer level of granularity). And when you do, Docker should support that use case too.

A small digression: I think it's unfortunate that the tech community feels the need to coalesce around polarizing "you're doing it wrong" statements. I find it particularly unfortunate that Docker, a tool I created partly to make the development world less polarized, was chosen as a battleground for ideological battles that I find frankly boring... Everything doesn't have to be a battle.

> Keep up the good work on docker though, it seems to be getting some good traction so far! I'm personally wondering if it's more fashion based traction though, and someone will be inventing the next "docker, but more buzzword" before long.

Thanks.

Obviously it will be hard for me to answer that in an unbiased way. I think the "fashion" aspect is a matter of perspective. From the point of view of heavy Hacker News and Twitter users, there is a lot of hype, both positive and negative. But the huge majority of Docker users don't hang out on Hacker News (if they even know what it is). They have a job to do, Docker helps them do that job, and they tell their friends and colleagues about it.

We've tried to invite as many real-world users of Docker as we could to next week's DockerCon, to talk about their experiences, both good and bad. Maybe watch a few of their presentations and decide for yourself if it feels like "fashion" :)


I hope I will see some interesting talks about scaling out SQL databases, running Docker behind a firewall, and running containers on customer hardware. Docker still has some rough edges, especially in "non-internet" environments where stuff is not moving that fast. It's also really hard to scale out when you're running on a single box without internet access and trying to add more. I hope that's something Docker can fix soon(tm).


> Keep up the good work on docker though, it seems to be getting some good traction so far! I'm personally wondering if it's more fashion based traction though, and someone will be inventing the next "docker, but more buzzword" before long.

That's how Docker got started. It began as LXC with a bunch of sugar on top. Someone else will come along making something more user-friendly, add more sugar, or both.


On the other hand, even if you follow YAGNI and build a monolith, you're still going to need to develop and test it locally and then deploy it somewhere. And even deploying a simple service, it's still common to have stuff break due to differences in production vs dev.

So if installing docker and writing a Dockerfile is easy enough, it might still make sense to use Docker even if you don't (yet?) care about microservices and cluster deployments.


You still need staging environments, so where is the part where I get any gains from docker?

There aren't any. Also Docker makes development at a small scale a real pain.


I've never actually used Docker in production; I have some reasonable experience with Chef, a bit with Vagrant, and many years of my own ad-hoc configuration and shell scripts. The reason I say this is so you don't immediately dismiss what I'm about to say on the basis of kool-aid consumption.

Aren't additional environments like staging exactly the place where Docker gives you marginal gains?


If you don't have problems keeping your different environments in sync then perhaps there isn't any benefit for you. For whatever reason using docker (after trying a couple of other workflows with varying degrees of success) has solved that problem for me without being much of a pain at all.


There's a big difference between over architecting something and not designing for growth at all.

Ultimately, Conway's law is what drives your architecture, because communication among people will always be reflected in the software.

If you have a cohesive team that works well together, build a unified component. If you have people working independently, those should be separate services. It doesn't require a lot of architectural astronautics to do this.

Whether one uses some kind of Docker platform or PaaS has more to do with whether you prefer doing your own undifferentiated heavy lifting with containers, schedulers, servers, networks, and storage... Or not.


> There's a big difference between over architecting something and not designing for growth at all.

Mostly I agree with you, but there is no difference between over-architecting and designing for growth.

Okay, some people will do everything right from the beginning, but as long as we are human, nobody will get it right. So mostly, keeping things really simple is way better than making everything extremely loosely coupled (so that you could swap out parts as you grow). Let's consider foreign keys and/or transactions as an example. Having them keeps you moving really, really fast, but as you grow, parts of the database need to be split, and then foreign keys don't work anymore, especially once you adopt the model where every service has its own data store. Okay, that example wasn't that good, but to go further: it's way easier to have a transaction in a single application than to have some kind of transaction span multiple microservices.

Also take filesystem access as an example: if you only have a "few" different places where you do it, you should just do it directly. That is not a design for growth but a design for keeping things simple; it's just not necessary to have some kind of interface and file-service abstraction solely to have an easy way to replace that kind of stuff.

Most of what I'm writing here is stuff I learned over the past years: some things work and some things don't. Back to the topic: Docker is great, I already said that, but it doesn't fit everybody's use case. And most of the things Docker is trying to solve are already solved (like application deployment, or keeping prod and dev nearly identical [whatever that means...] (Vagrant)); meanwhile, nobody has found a good solution for scaling simple SQL databases when running them on your own hardware.

Edit: Fixed some spelling mistakes, god it's hard to write that late.


lol, this reads like the way the world ended up with MongoDB.

There are some things you have to do right from the beginning or you're never going to get them right.


One of the things that rankles the greybeards is when people think over-hyped tools like Docker are original creations and they don't acknowledge that containerization has been around for a long time. It's not really that anyone makes this claim per se, but just a general impression fostered by the cool kids' relative youth and ignorance.

We all wish that software could be judged on objective merits, but the sad truth is that now more than ever the software development world is so big that UX and marketing for dev tools actually matter a lot more. Of course over time we still gravitate towards better things as lessons are learned, but in order to figure out what the actual best tool is, huge investments need to be made to get it to work. Until millions of man hours are invested, it's impossible to say whether something like Docker will in fact be better than what came before, or whether it will peter out at another local maximum due to a fundamentally flawed philosophy. If you can't generate some initial hype then it's hard to get enough developer mindshare to even test the premise of something as complex as Docker.


> One of the things that rankles the greybeards is when people think over-hyped tools like Docker are original creations and they don't acknowledge that containerization has been around for a long time.

That does rankle the greybeards, but a question: why does it matter? Why does it matter that they "acknowledge that containerization has been around for a long time"? I think it's a reaction to the invading of their space by younger, hipper folk, who don't know how to set up sendmail and don't know perl, etc, etc. That is, the "get off my lawn" crowd.

While I get that feeling a lot myself, esp as I get older, I try hard to push it down. What gives me the right to say that X is shit because the Xers don't know about the V and the W that came before?

Secondly, I'll challenge your assertion that containers have been around for a long time. I am of course familiar with chroots, Solaris Zones, FreeBSD jails, the LXC stuff that's more recent but still significantly predates Docker, etc.

However, Docker is more than prettier LXC. It's also a distribution format and a run-time for that format, and a suite of tools that works well with it. Look at Kubernetes and look at the state of the art in 2005. Not even close!


Bingo. More and more development around *nix seems to be coming from the desktop/website down, created with little to no time taken to stop and look at what is already present.

And even worse, when someone points out that their hot new thing breaks some age-old way of doing stuff, it is the old stuff that gets declared broken and/or archaic. Except that it is only "broken" in the presence of the new stuff, and it is the new stuff that is doing the complaining.


What are the older examples of containers? Virtualization and chroot jails have been around forever, but LXC didn't exist until 2008 (https://en.m.wikipedia.org/wiki/LXC). My beard is red, not gray, but I did a lot of years of coding and devops before I ever heard about containers.

What am I missing?


Solaris Zones? OpenVZ?


Cool! Thanks for the references; I had never heard of those. They don't seem to predate LXC by that much, though. Solaris Zones was released in '04, and OpenVZ was started around '00 and open-sourced in '06.

I guess I don't consider 10-15 years a "long time"; that's just approximately the amount of time necessary to turn a fundamentally new architecture concept into something you want to work with every day.


So Docker is a beautiful and wonderful thing, and it is the future, and anybody who doesn't like it has no credibility because they just don't like change, and they are Philistines.

I stopped reading after that.


I agree with this summary assessment of the article.

I like Docker and I believe its potential is huge, but I don't like this article. At all. It's based on a false dichotomy and it's intentionally divisive. Trying to get a rise out of people who disagree with you by trying to pin them as irrational "haters" is not discourse, it's propaganda.

"At the same time, most of the software industry makes its decisions like a high school teenager: they obsessively check for what’s cool in their clique, maybe look around at what’s on Instagram and Facebook, and then follow blindly where they are led"

I think my reaction to this is... this is definitely true, but how do I know you're not yet another lemming like the rest of us?


> how do I know you're not yet another lemming like the rest of us?

I'm pretty sure I am!


> We are always faced with a choice between staying still with the technologies we know, or taking a bit of a leap and trying the new thing, learning the lessons and adapting and iterating and improving the industry around us.

I think they forgot the third popular choice -- taking a bit of a leap, trying the new thing, and finding that it doesn't really involve adapting, iterating, or improving; it's pretty much just a reinvented wheel.

This might explain why the author apparently has trouble drawing a picture of the motivations behind curmudgeons who hate anything new.

If your tool really genuinely solves problems without creating a new layer of complexity, it's not going to have very many haters.


> If your tool really genuinely solves problems without creating a new layer of complexity, it's not going to have very many haters.

OP here. A thing I wanted to touch on, but left out. I don't think it's possible to solve problems without adding some complexity. For example, golang genuinely solves problems but you have to learn go and the new toolchain, etc. Once you learn them though (and the same is true for Docker here), you get to remove some of the complexity that existed with your previous tools.

So for example, in Docker, we'll be able to remove the host OS and move it into the hypervisor, and the complexity will drop lower than it was before we started. Or we'll start to use the Google Cloud Platform (which runs Kubernetes) and we won't think about anything other than our container image and the exact resources it needs.


> I don't think it's possible to solve problems without adding some complexity.

Sounds like you're with the curmudgeons, apparently. ;)

> you have to learn go and the new toolchain, etc. Once you learn them though (and the same is true for Docker here)

That's fine -- the overhead of learning something new isn't (inherently) "complexity" at all. This isn't to trivialize that overhead (or our frequent failure to minimize it), but complexity lies more in how much attention the abstractions offered for dealing with a problem end up demanding.

Good tools/abstractions categorically reduce the number of details users have to pay attention to, and correctly pick the prominent ones most relevant to the problems they're trying to solve. Details beneath the abstraction very rarely bubble up to break things or otherwise demand attention, and nobody pretends that when they do, it's anything other than a problem. The end result is less complexity.

The curmudgeon invoked in the article is probably just someone who's seen that a lot of what we produce doesn't live up to this standard. It's not impossible to produce these things. jQuery is one of my go-to examples: it did a great job of insulating developers from browser API differences and blew native APIs so far away in terms of convenience that people could mistake it for an application framework (or even a language). But that's relatively rare. Most abstractions are leakier and/or don't spare you from the details that are really bogging you down.

I'm not qualified to talk about which category Docker is in. I might be more qualified to talk about it if more pieces like this focused on the specific problems rather than speculative theories about haters and curmudgeons. :)

I do like that this piece did at least spend some time on a high-level overview of the kind of problem-orientation it wants Docker users to adopt, and by extension a very fuzzy introduction to how Docker can help.


How can you remove the host OS in Docker? I thought it was an interface to LXC and the virtual network stack in Linux. I didn't know it was possible to run it on a bare hypervisor like bhyve.


Both this and the previous tongue-in-cheek rant are correct. It depends on who you are.

The fact is that not everyone is going to need to scale to Google-like sizes, and not every app is going to need to scale in the same way.

For many developers trying to build products, getting bogged down in all this emerging Docker-centric complexity is a case of premature optimization. Build your thing, polish it, get users, get customers, etc., and if you manage to get so many that scaling becomes a problem then you now have a "good problem to have."

... and the problem with the Docker ecosystem is not that it is new. I love new stuff. The problem is that it's a lot like the web framework ecosystem, which is an example of the CADT development model:

http://www.jwz.org/doc/cadt.html

To a certain extent that's an artifact of all this Docker stuff being new and very much in its experimentation phase. I expect that the web will settle down a little someday, and this stuff will too, but for now it's a crazy wild west of people implementing Yet Another Everything. There's also a bit of a funding wave going through this area, which is causing a lot of me-too Docker startups to pop up and do the same things over and over.


I'd agree that containers will be huge in the future. But it might not be with Docker, that's all.


The same thing happened with "the cloud" and "nosql" etc etc. 2% of people crow "this changes everything!" and 2% yell back "this changes nothing you morons!" and both groups of people are wrong, and the other 94% of us just keep working and are grateful for all of the cool new tools that help us get to our goals faster.

10 years ago you couldn't build a Heroku unless you were a genius. Now you can build a Heroku clone in a month. That's progress.

Also, it's still just computers. Can we move on?


The previous blog post[1] is gold!

[1] http://blog.circleci.com/its-the-future/


The only use I see for Docker's potential is to distribute applications with data as combined appliances: so if you want a PostGIS server preloaded with maps, you can do `docker fetch some_postgis_server` and end up with something you can query. But then when you try to build such an appliance (containing a lot of data) and push it to Docker Hub, it ends up failing with a mysterious error code overnight, and you open a ticket, and nobody follows up on that ticket. Docker has to get better at that kind of thing before I can consider using it again.


I'm looking forward to using Docker (or another container format) for reproducible computational work. Now, it's still somewhat difficult to keep versions of programs in sync on clusters (even using environment modules). It will be really nice when we can store the application environment as a container and be able to pull that configuration off the shelf to repeat an experiment/analysis.

Unfortunately, that use-case usually happens on multi-tenant HPC clusters, where we can't use Docker yet until the security issues are figured out (or we can get a solid and standard micro VM for running containers). Job scheduling is also a non-trivial issue.


I think the future is something like prefabricated / disposable / immutable(ish) infrastructure. Stop managing servers, start managing your service.

Containers are a big part of that - they made the idea more palatable and usable. But if you're doing this BECAUSE of Docker and containers and PaaS being cool, rather than for the benefits of disposability and prefabrication in enabling stable / predictable scale-out, availability, and change management of your bits, you've probably already lost.

Netflix arguably started this cloud-native wave. They still use VMs.


Docker seems interesting. But it seems like there is room for something even better.

I imagine a Docker-like product where you can write a Dockerfile for your checkin test suite as easily as you can write .travis.yml right now. And then you can run a command to easily submit this "job" to AWS, Google Compute, or whatever other cloud provider you want.

Maybe I'm thinking about this right now because my Travis build has been stuck waiting to be scheduled for over 24 hours for no apparent reason. I'm at the mercy of the Travis people to take a look at this. What I imagine is a world where cloud execution is so commoditized that I can say "sorry Travis, too slow, you lose my business today." I can hit CTRL+C, change my command-line to --submit-to=aws.amazon.com, and run exactly the same test suite there instead.

Oh, to dream...


He made some good points, but there's a really good chance I'll never need to scale at a level that requires Docker. Hopefully I'm wrong, but Azure/AWS/Heroku are probably going to be good enough or overkill for my needs.


Docker is an opinionated way to use containers. You don't need to adopt it to get the benefits of containers. A lot of the messaging, hype and marketing conflates the two, and it suits the Docker ecosystem to do that, but it does not benefit informed discussion.

By eschewing plain containers in favour of Docker you are embracing some complexity, and it would help to have more discussion of the trade-offs and benefits of each approach, rather than just conflating containers with Docker.

The LXC project, in development since 2009 and the base Docker was originally built on, and now systemd-nspawn give you pretty advanced containers with mature management tools, multiple container OSes, full-stack Linux networking, storage options, cloning, snapshotting, etc. [1]

LXC, and soon systemd-nspawn (version 220), support unprivileged containers [2] that let non-root users run containers. That's a pretty big step forward for container security.

There is a lot of innovation happening outside the hype of Docker. But these projects are not opinionated and stop at giving you container technology as lightweight VMs, just as KVM, Xen and VMware stop at giving you virtualization.

Docker takes that as a base, restricts the container OS template to a single app, builds the container as layers using aufs, btrfs or device mapper, and enforces storage separation. This is not rocket science; you can do this yourself with overlayfs, aufs or btrfs, build single-app containers, etc. [3]
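
To make that concrete, here is a rough Go sketch (mine, not taken from the linked guide) of the kind of layered mount that Docker's storage drivers automate: a read-only lower directory stacked under a writable upper one via overlayfs. It assumes a Linux kernel with overlayfs, the four directories already created, and root; the /tmp paths are made up for illustration.

    // Mount an overlayfs "merged" view: reads fall through to the lower
    // layer, writes land in the upper layer -- the basic image-layer trick.
    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        opts := "lowerdir=/tmp/base,upperdir=/tmp/changes,workdir=/tmp/work"
        if err := syscall.Mount("overlay", "/tmp/merged", "overlay", 0, opts); err != nil {
            panic(err)
        }
        fmt.Println("layered view mounted at /tmp/merged")
    }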

By adopting the Docker way you are immediately giving up seamless migration of VM workloads and embracing some complexity. There are both upsides and downsides to this. For a lot of use cases the Docker approach may help; in others it may add unnecessary complexity. We have an in-depth look [4] at the differences between LXC and Docker for those who are interested.

Disclosure: I run flockport.com, which provides an app store for servers based on Linux containers.

[1] https://flockport.com/guides

[2] https://www.flockport.com/lxc-using-unprivileged-containers/

[3] https://www.flockport.com/experimenting-with-overlayfs

[4] https://www.flockport.com/lxc-vs-docker


I'd say my problem with the whole massive containerization hype circus is less that I'm a curmudgeon who hates anything new, and more that I'm a practitioner who hates marketing and social hype promoting a half solution to a narrow problem as a full solution to all problems.

Containerization isn't new. It just has a brand name now, and the "all the way" solution to this problem is unikernels.


Docker is a broken package manager with no checksumming or versioning, leveraging layered filesystems to introduce more bugs, with a chroot as a post-install hook.


This whole article seems completely confused with its definitions and train of thought, which I suppose is delightfully ironic.

Some quarrels:

Into that world drops Docker: a new way of doing almost everything. It throws away old rules about operating systems, and deployment, and ops, and packaging, and firewalls, and PaaSes, and everything else.

That's a dramatic overstatement if I ever saw one. The rules haven't been thrown away. They're still there, just with the subsystems partitioned into multiple namespaces under a single host.

But then something interesting happened. Web applications got large enough that they started to need to scale.

The whole portion of the essay about web applications and distributed systems operates under a broken causal chain and continuity. That assumptions break down and new use cases arise with scale is obvious, though here it's presented like some recently attained enlightenment, and moreover as if every J. Random Hacker should be thinking about distribution and high scalability right from the conception of their CRUD app. Not the case. Dumb setups work for the commons.

Instead of dealing with simple things like web frameworks, databases, and operating systems, we are now presented with tools like Swarm and Weave and Kubernetes and etcd, tools that don’t pretend that everything is simple, and that actually require us to step up our game to not only solve problems, but to understand deeply the problems that we are solving.

This paragraph makes no sense. The author is listing completely orthogonal tools.

------

On to the allegedly solved problems:

Which is ludicrous because the application relies on the machine and the OS as well as the code, and thinking of them separately makes no sense. Containers unify the OS and the app within the developer’s toolkit.

Depends on your domain. Plenty of applications are built to be self-contained. The unikernel/libOS approach is one that treats the OS as an implementation detail, ironically taking us straight back to the 1950s where all code had to independently initialize the machine, though in a good and reusable way.

Up until now, we’ve been running our service-oriented architectures on AWS and Heroku and other IaaSes and PaaSes that lack any real tools for managing service-oriented architectures. Kubernetes and Swarm manage and orchestrate these services.

Those are all different deployment strategies and application environments you're mixing up here. It may not be that you've lacked tools so much as you've had no need for them in your use case.

Up until now, we have used entire operating systems to deploy our applications, with all of the security footprint that they entail, rather than the absolute minimal thing which we could deploy. Containers allow you to expose a very minimal application, with only the ports you need, which can even be as small as a single static binary.

And it does so by cloning the various subsystems of the host OS into their own namespaces. You don't get around using the whole OS; you just work around it, because your host OS can't handle multi-tenancy properly and the dynamic linking quagmire has become a maintenance burden.

Up until now, we have been using languages and frameworks that are largely designed for single applications on a single machine. The equivalent of Rails’ routes for service-oriented architectures hasn’t really existed before. Now Kubernetes and Compose allow you to specify topologies that cross services.

This hasn't changed. You still need to bolt on lots of heterogeneous components. Seamless multi-node distribution is beyond the scope of nearly all language runtimes or frameworks, though then again there is no obligation for them to support it. At sufficient scale, you will be doing lots of homegrown integration work.

We couldn’t say “I want 0.1 of a CPU and 200MB of RAM”.

Pretty sure you could. I assume you're referring to the likes of Mesos, in which case I can name at least HTCondor, which is a cluster manager and scheduler not unlike Mesos, intended for HPC. It's been around since 1989. Then there are the much smaller-scale things you could always do to limit resource utilization. It's not like this was discovered yesterday.
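
For the small-scale case, a rough Go sketch of mine (nothing from the article) of exactly that "0.1 of a CPU and 200MB of RAM" request using plain cgroup v1 files, no Docker anywhere. It assumes the controllers are mounted under /sys/fs/cgroup in the usual layout of the time, requires root, and the "demo" group name is invented:

    // Create a cgroup capped at 0.1 CPU and 200MB of RAM, then put the
    // current process (and its children) under those limits.
    package main

    import (
        "os"
        "path/filepath"
        "strconv"
    )

    func write(path, val string) {
        if err := os.WriteFile(path, []byte(val), 0644); err != nil {
            panic(err)
        }
    }

    func main() {
        cpu := "/sys/fs/cgroup/cpu/demo"
        mem := "/sys/fs/cgroup/memory/demo"
        os.MkdirAll(cpu, 0755)
        os.MkdirAll(mem, 0755)

        // 0.1 CPU: 10ms of quota per 100ms period.
        write(filepath.Join(cpu, "cpu.cfs_period_us"), "100000")
        write(filepath.Join(cpu, "cpu.cfs_quota_us"), "10000")
        // 200MB of RAM.
        write(filepath.Join(mem, "memory.limit_in_bytes"), strconv.Itoa(200*1024*1024))

        pid := strconv.Itoa(os.Getpid())
        write(filepath.Join(cpu, "tasks"), pid)
        write(filepath.Join(mem, "tasks"), pid)
    }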

Up until now, we’ve been deploying applications and services using multi-tenant operating systems. Unix was built to have dozens of users running on it simultaneously, sharing binaries and databases and filesystems and services.

Author confuses multi-user with multi-tenant. Unix is the former.

As an example, how many protocols had to die before we got REST? ... Yet, we still haven’t got the same level of tooling for REST-based APIs that we had for SOAP a decade ago, and SOAP in particular has yet to fully die.

REST isn't even a clear protocol suite like SOAP or CORBA. It's more of a design philosophy than a formal definition.

And the same thing has been going on with programming languages since we escaped Java a decade ago.

We did?

If you’re looking for me, I’ll be in the future.

Damn, it looks an awful lot like the past.


> REST isn't even a clear protocol suite like SOAP or CORBA. It's more of a design philosophy than a formal definition.

It's also one of the most misunderstood architectural styles I have seen. If you see any read-write API that calls itself "RESTful", chances are it violates at least one compulsory constraint of REST (very often Uniform Interface), in effect meaning "HTTP-Based API that is not SOAP".


Agree all over. I especially love the bit about how cgroups are apparently never before seen magic new hotness.


Docker is the present - given that we don't have jails on Linux.

Docker COULD be the future, but I frankly hope it's not. Despite the idea being good, there are better designed alternatives.

Like rkt.


TL;DR: A frank retraction of the sentiment expressed sarcastically in the previous post is summarized under "Real problems solved". However, each of these points is dubious...

1. Up until now, we’ve been running our service-oriented architectures on AWS and Heroku and other IaaSes and PaaSes that lack any real tools for managing service-oriented architectures. Kubernetes and Swarm manage and orchestrate these services.

While some options for managing large groups of services running on one type of infrastructure do indeed now exist, and this is one step further in automation and therefore a good thing(tm), it is by no means the end-game. In fact, at this stage it may not even be desirable: it simply shifts the basic scope of service-oriented infrastructure comprehension and management from a single service to a group of services, and likewise makes the unit of deployment and management a cluster rather than a host, while making certain (and not safely universal) assumptions about how the service(s) will need to be managed in future.

2. Up until now, we have used entire operating systems to deploy our applications, with all of the security footprint that they entail, rather than the absolute minimal thing which we could deploy. Containers allow you to expose a very minimal application, with only the ports you need, which can even be as small as a single static binary.

Yes, but this rarely happens in practice. It's like saying "now that we use Linux, we get the benefits of NSA's SELinux". No, you don't. You have to put a lot of effort in to get that far, and it's highly unlikely to be used. So this is basically a moot point right now.

3. Up until now, we have been fiddling with machines after they went live, either using “configuration management” tools or by redeploying an application to the same machine multiple times. Since containers are scaled up and down by orchestration frameworks, only immutable images are started, and running machines are never reused, removing potential points of failure.

Yes, immutable infrastructure is good, but we have 100 ways to do this without Docker. Docker is like an overpriced gardener who comes to your door, knocks around the garden for half an hour, flashes a thousand-dollar smile - i.e. puts a cute process convention over the top of what's there already - and tells you all smells sweet in the rose garden (PS: here's your fat invoice). Never trust a workman with an invoice, and never trust abstraction to solve a fundamental problem.

4. Up until now, we have been using languages and frameworks that are largely designed for single applications on a single machine. The equivalent of Rails’ routes for service-oriented architectures hasn’t really existed before. Now Kubernetes and Compose allow you to specify topologies that cross services.

Well that's cute, but actually bullshit. We've had TCP/IP and DNS for decades. To "specify topologies that cross services" you just go host:port. What's more, the standard approach and protocols actually have deployment, documentation, and are known to work pretty well on real world infrastructure. Their drawbacks are known. Now, I'm not saying there's zero improvement to be made, but the way this is phrased is ridiculous.

5. Up until now, we’ve been deploying heavy-weight virtualized servers in sizes that AWS provides. We couldn’t say “I want 0.1 of a CPU and 200MB of RAM”. We’ve been wasting both virtualization overhead as well as using more resources than our applications need. Containers can be deployed with much smaller requirements, and do a better job of sharing.

Sure, we've known that container-based virtualization was far more efficient than paravirtualization for decades. Docker has not actually provided either, nor made it measurably easier to mix and match them as required, so this claim seems bogus.

6. Up until now, we’ve been deploying applications and services using multi-user operating systems. Unix was built to have dozens of users running on it simultaneously, sharing binaries and databases and filesystems and services. This is a complete mismatch for what we do when we build web services. Again, containers can hold just simple binaries instead of entire OSes, which results in a lot less to think about in your application or service.

What kool-aid is this? The implication is that Unix and its security model are going to go away as a basis for service deployment because... Docker. What? Frankly, I would assert that many application programmers can barely chmod their htdocs/ if pushed, let alone understand a process security model including socket properties, process state, threads, resource limits and so forth. Basically, the current system exists because it is simple enough to mostly work most of the time. While it may not be perfect, it's a whole lot better than throwing the baby out with the bathwater and attempting to rewrite every goddamn tool to use a new security model. The mystical single-binary services that Docker enthusiasts seem to hold up as their raison d'être are therefore likely either to be huge, complex, existing processes that allow almost anything (like scripting-language interpreter VMs) or to be nonexistent. By contrast, the "previous" Unix model of multi-process services with disparate per-process UIDs/GIDs, filesystem and resource limitations seems positively elegant.
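
Since that "previous" model keeps getting dismissed as archaic, a small Go sketch (mine, with made-up paths and IDs) of how little it takes to run a worker the classic way: chrooted, under its own unprivileged UID/GID, with a resource limit. Assumes Linux and root to set up the jail.

    // Launch a worker the old-fashioned Unix way: file-descriptor limit,
    // chroot jail, dedicated unprivileged UID/GID -- no containers involved.
    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Cap open files for this process and anything it spawns.
        lim := &syscall.Rlimit{Cur: 256, Max: 256}
        if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, lim); err != nil {
            panic(err)
        }

        cmd := exec.Command("/bin/service") // path resolved inside the chroot; illustrative
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Chroot:     "/srv/service-root",                      // illustrative jail directory
            Credential: &syscall.Credential{Uid: 1001, Gid: 1001}, // illustrative service account
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }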

All in all, this post's argument doesn't hold that much water in my view. However, I applaud CircleCI for working on workflow processes ... I think ultimately these are the bigger picture, and docker is merely one step in that direction.


Excellent article.


I want to like Docker. I just don't have the time to learn the tooling and keep up with all the changes. Maybe in a year or two when things stabilize.


Software has become like Matryoshka dolls - layer after layer of packaging. Even Python and JavaScript programs now need "building". (I saw a Makefile for a Python program last week. All it did was "test: python app.py test", but there was a Makefile.)

It's sometimes easier just to make a static executable in Go or Rust.



