However, I think the author trivializes the amount of work required to make different types of Python/PHP/NodeJS/whatever apps all work in a consistent way through configuration management, saying "I can just write a bash script or makefile" or "just download roles from Ansible Galaxy". This is so painfully ignorant and irresponsible that I fail to take the article seriously as a whole.
Even if it's just a Jenkins job doing a docker pull && docker run I've still seen massive improvements in the maintainability of configuration management code because there are so many fewer edge cases you have to take care of. No K8s required for that.
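To be concrete, the kind of job I mean is roughly this (a sketch; the registry, image name and port are made up):

    #!/bin/sh
    # Hypothetical deploy step: pull the new image and swap the running container.
    set -e
    docker pull registry.example.com/myapp:latest
    docker rm -f myapp || true   # ignore the error if nothing was running
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:latest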
The author, writing about languages like PHP and Python (and I assume NodeJS because it also works in a similar way, but he doesn't explicitly mention it for some reason):
> They arose from an old paradigm, which is not suited to a world of microservices and distributed computing in the cloud.
So just because they can't build a single static binary, they're not suitable for microservices and distributed computing. Got it...
> If the switch away from Ruby/Python/Perl to a newer language and eco-system allows you to achieve massive scale with less technologies and less moving parts, then you absolutely have a professional obligation to do so.
Wow it's that easy?! Be right back, re-writing all our code in Go over a weekend hackathon.
This article made me angry.
I can't stress this point enough. If you don't know what's up with a failing service, any SRE can go into the Dockerfile definition (actually, we're rarely dealing with Dockerfiles anymore, these days it's more the Kubernetes definition), look at which ports are exposed, what the environment variables are, what the service is talking to, etc.
You can harden security between those services much tighter than between binaries, by dropping capabilities, seccomp, network policies, read-only file-systems.
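As a rough sketch of what I mean (the image name is hypothetical, and network policies live at the Kubernetes layer rather than in this command):

    # Drop all capabilities, forbid privilege escalation, mount the root
    # filesystem read-only, and give the service a tmpfs as its only writable path.
    docker run -d --name myservice \
      --cap-drop=ALL \
      --security-opt no-new-privileges \
      --read-only \
      --tmpfs /tmp \
      myservice:latest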
Also, all the other advice about the drawbacks is pretty dated: While I've run into my share of super annoying Docker problems myself in the early days (and yes, especially on Ubuntu, goodbye Device Mapper...), I've yet to encounter a single Docker-related bug on Google-managed GKE on Container-Optimized OS since we switched all of our services there a year ago.
I haven't regretted our choice in a very long time.
Ding ding! Applications are only a part of the issue. It's the whole server stack, all internal communication, configuration, service composition, and networking setup.
Having these things in a declarative format is a godsend for production, a godsend for mixed language environments, and a godsend for whoever gets to kill those services and replace them with better things.
Working with sensitive data: there is also the whole shit-show of secrets management... It's not like this is impossible without containerization, but having formal security boundaries around them (both in processes and in systems) gives you guarantees instead of well-intentioned guesses.
Containers are run on VMs. Most of those VMs are run in a cloud operated by the same providers peddling containers as the new cloud. The AWS/GCP/Azure building block is not a container. It is a VM. VMs are not hard. VMs can be treated like containers. Use them. Tool around them. You would be amazed how much less crufty your stack would be when you don't pile abstraction on top of abstraction.
Containers gave rise to super flaky software architecture: the kind that theoretically should be horizontally scalable but is not.
Whenever people talk about how containers did the same for apps as they did for shipping, I tell them they are absolutely right. There's a pile of hard science(tm) used to properly load containers onto ships based on the contents of the individual containers and the trip they are going to take. That hard science is sorely missing from app-level container management.
Re: VMs vs Containers -- the implicit assumption in what you're describing, and in the article, is that you're building in AWS/GCP/Azure with engineers who are competent in that specific supplier's ecosystem and relevant server techs.
Working with hybrid cloud installations and "air-gapped" systems with international partners invalidates a lot of those assumptions... Having a stable cross-section for deployment and scaling across clouds, backed with holistic answers to configuration-heavy parts of the application stack, is a total game changer. Containers don't let you ignore VMs any more than VMs let you ignore hardware, but trying to recreate Kubernetes ad-hoc in-situ is a crap-fest.
As a software engineer I gotta disagree about abstractions creating cruft. That's what poor abstractions do. Good abstractions move complexity to its most reasonable point of management and the lowest point of pain. "All problems in computer science can be solved by another level of indirection" ;)
I have spent two years watching people move broken architectural designs to containers because the consensus was "containers would solve our problem". When the containers did nothing of the sort, the next level of solution was "K8s would solve the problems of the containers that we have". That also did not work. We are now waiting for a better version of service discovery/up/down wiring to come to K8s.
A container is an abstraction of shared hosting. An application that works well in containers is the same application that works well in a shared hosting environment, on bare metal, or on a VM. Do that. Make your application work well in a shared hosting environment. If it does, then it will work great in any kind of container, be that a chroot, a Docker container or a VM.
I absolutely agree that containers let one get started with real work sooner, but so can Rails app template generators or Express application builders in NodeJS. If this is the class of problem that people advocating containers try to address, then I'm absolutely, positively for containers everywhere.
> Having a stable cross-section for deployment and scaling across clouds, backed with holistic answers to configuration-heavy parts of the application stack, is a total game changer. Containers don't let you ignore VMs any more than VMs let you ignore hardware, but trying to recreate Kubernetes ad-hoc in-situ is a crap-fest.
The theoretical ability to run an application across clouds because it hooks into Kubernetes is a pie-in-the-sky promise. If your application is even marginally complex, there's no bloody way it runs on AWS and Google without an ocean of blood (load balancers work differently, VPNs work differently, firewalls work differently), sweat and tears from the SREs and old grumpy developers who don't care about container specialness, but can look at the logs and write some additional special proxy-wiring-of-this-to-that-on-a-container-using-this-kind-of-adapter code.
> Containers don't let you ignore VMs any more than VMs let you ignore hardware, but trying to recreate Kubernetes ad-hoc in-situ is a crap-fest.
That's not what the containerize-all-the-things people advocate. Their claim is that containers are a magic bullet that lets incompetent developers produce production code. And technical management is buying it because, you know... Docker.
Except for Triton (SmartOS).
He also trivializes the massive cost and difficulty of switching stacks. Even if a company can afford to upend their development processes to use a different language for new development, most of IT is legacy technology, and dynamic languages have been popular for long enough that legacy Python, Ruby and PHP applications are part of IT estates, just like legacy C++ and Java applications.
I can't take this article seriously, it's just some person refusing to adapt because "in my days we did it differently".
I think you paint the whole world with the bleeding edge brush. Many fortune 500 companies are a decade behind the technology stacks described in the article and in this discussion.
Hybrid cloud installations to leverage cloud computing resources are becoming more and more "the norm". In that context, having apps and services pre-packaged for deployment in heterogeneous environments is strategic power.
"The cloud" tends not to be cheap, but using cloud services to carry out jobs your dedicated site could never hope to touch is something with business value.
So how do Hetzner, OVH, Rackspace, and hundreds of other companies stay in business?
Hint: cloud services are in fact way more expensive on every single metric. But they are more flexible.
And in regard to scripts or Docker: no, you wouldn't need to rewrite everything.
Web development is full of engineering astronauts who think they need 8 levels of abstraction to solve relatively simple problems. Oh, and for that 0.1% that does actually switch, they soon realize their abstractions weren't as good as they thought they were and they have to reimplement every interface anyway.
In regard to setting up a new server for a new provider, with or without Docker, you have some boilerplate either way:
With Docker:
-> create a script to attach Docker to the provider
-> when you switch, create a new script to attach Docker to the new provider
Without Docker:
-> create a script to provision a server on the provider
-> run a generic script to configure it how I want
-> when you switch, create a new script to provision a server on the new provider
In this regard, moving from one provider to another is identical.
How do I abstract this unique idea of IOPS that AWS has?
And if you know all your servers have the same Python version and your code is pure Python, you can even forget that and do a .pex (https://github.com/pantsbuild/pex), which is the Python equivalent of a Java .war: a zip that embeds the whole program and its dependencies. It's light, fast, and basically problem-free.
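The workflow is roughly this (a sketch; the requirements file and entry point are placeholders, and the exact flags may differ between pex versions):

    # Build one self-contained .pex from a requirements file plus an entry point,
    # then ship that single file and run it with the system Python.
    pip install pex
    pex -r requirements.txt -e myapp.main -o myapp.pex
    scp myapp.pex server:/opt/myapp/
    ssh server /opt/myapp/myapp.pex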
No way. Just package your libraries with your executable:
/my_path/ld-linux-x86-64.so.2 --library-path /my_path /my_path/my_binary
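Collecting those libraries in the first place is roughly this (a sketch; paths are illustrative):

    # Copy the binary plus every shared library it links against into one directory,
    # then launch it through that directory's dynamic loader as above.
    mkdir -p /my_path
    cp my_binary /my_path/
    ldd my_binary | awk '/=> \//{print $3}' | xargs -I{} cp {} /my_path/
    cp /lib64/ld-linux-x86-64.so.2 /my_path/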
I’ve done what you mention with Docker and alpine builds (for alpine Python wheels, for example) but honestly it’s a lot more painful to ssh into running containers when figuring out the build script than it is to just throw up a clean VM.
I understood the point that it may be an overkill for static binary deployments, but how is that dangerous?
Docker is a unifying interface. It's great at what it does: make a standard facade for app deployment, setup and service management.
But it also comes at a cost. And as usual in the IT industry, the people using the tech sell it like a silver bullet, and avoid telling you that:
- docker is complex as a product and a concept
- knowing how to run docker != knowing how to use docker properly. Remember the NoSQL debacle, where people thought they didn't need to formalize their schema because the DB didn't enforce it? Geeks love the circle of life.
- docker is rarely alone; it will come with an ecosystem that will creep into your stack. Like you never really do reactjs alone, but soon find out you have 1000 plugins for a complex build system and your architecture depends on a flux implementation
- most things just don't need docker. Docker solves a technical and organizational scaling problem. If you don't have the problem, you don't need docker. Just like you probably don't need a blockchain for your new client reward policy.
- docker is a LOT of new things to learn. Not only how to use it, but also how to do all the sysadmin things you used to do right, again, but with docker. Network management used to be a fun one. Not running as root again. Persisting data.
- you will need something to install docker and the infra anyway. So you will not drop Ansible/Chef/Whatever. They will work together.
- people use docker as an excuse to not do their job properly. No packaging, little doc, etc. "Just use the docker image". This one leads to bad times.
- Not everyone uses docker. You don't live in a bubble.
- Docker images are big, huge binaries you download from the internet and run on your server. They're very painful to audit. And so easy to ignore.
- The paradox of choice. So many things in the docker world. And it changes so fast. It's really hard to know what to use, and to keep up.
I've seen many projects wasting time and resources just because of the docker setup. It's like those people setting up a distributed architecture for a job that takes 10 seconds to run as a bash script.
Bottom line, if you think you need docker, don't use it. People needing it don't think they need it. They know they need it. And it's useful for them because they use it right.
- No packaging, little doc, etc. "Just use the docker image".
I'll take a project with a dockerfile used for CI over a project with short setup instructions. The first one is less likely to go out of date, or be incorrect. (Ideally it would have both, and a good operations manual. But given only those two very common choices, I'll take the Dockerfile.)
- Docker images are big huge binary you download from internet and run on your server.
I agree this is how most people use it, but you're not forced to use it that way. You can deploy memcached based on a 200MB Ubuntu base, or a custom image containing the LSB layout + libc + a few devices, totalling ~8MB. The huge images are a documentation/community issue, not a technical one.
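For the extreme case of a single static binary, a sketch of what that looks like (names are illustrative):

    # An image containing nothing but the binary itself.
    cat > Dockerfile <<'EOF'
    FROM scratch
    COPY myservice /myservice
    ENTRYPOINT ["/myservice"]
    EOF
    docker build -t myservice:minimal .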
Well, we did it that way for 2 decades without Docker, and the world hasn't collapsed, so?
All of this is possible without tools like Docker, but they're a hell of a lot harder without some of the facilities brought by tools like Docker. It doesn't have to be docker, but it does need to be something offering isolation, a mechanism to manage images, and ways of managing data volumes. You can do that in bash if you want to, but the value in these tools is not to have to reinvent it from scratch.
I can't help thinking that Docker is building a massive legacy application headache further down the line when all these applications will not be developed anymore, the developers will have moved on, the technology stack moved on too to new shiny tools, but the underlying OS needs to be updated and these legacy apps still need to run. The compatible base image will probably not even be available anymore. Some poor guy is going to have a horrible time trying to support that mess.
That is largely what Docker is providing. It doesn't do it perfectly, but it does do it.
> I can't help thinking that Docker is building a massive legacy application headache further down the line
That "legacy application headache" existed before Docker too, and it was much worse, because you had to hunt down toolchains and underlying OS images that'd let you get the apps to compile or run, and you could often not rely on the build and installation instructions to be complete and precise enough, because you had no guarantee that nobody would manually interfere with the process. This is in my opinion the biggest thing Docker's focus on automated image builds have given us: if you have a remotely sane process, the Dockerfile serve as evidence that the build steps work unattended (yes, you can do this with CI to verify your build steps too... until they fail in production and someone applies a workaround and don't roll it back into the build scripts; it's possible to add all kinds of steps to avoid it, or you can just build to images - docker or otherwise - and prevent the problem entirely by overwriting all he static data on every deployment)
> The compatible base image will probably not even be available anymore.
If you have a running copy of the image, then Docker's layering means you have the compatible base image. It won't necessarily help you that much when you want to upgrade to a newer version, but at least it allows you to inspect what the dependencies are.
A docker with no coupling between the host and guest OS would make me happier. Then you have a binary you can just deploy 10 years after it has been compiled. And then you could extend this model to all kind of apps, including client/UI apps.
> But all you have done is to move the problem further away from the ops guys.
No, what you've done is ensured that you have the full chain available, so that you can do your upgrades, test them, and deploy them only when they're fully tested, instead of having to try to upgrade production machines without knowing if it'll work. It doesn't save you from untangling dependencies and version incompatibilities on upgrades, but it ensures that you can do that offline, and that when you're ready to upgrade, you have images you know work.
> Even if you have the source code the tools to compile it probably don't even compile on today's OS versions (if you can even get the right licenses).
You don't need to, as long as you retain the images of the versions the tools do run on. This is why I maintain most of my build environments in images too, so that I can retain not just the finished images, but the images used to build them.
> A docker with no coupling between the host and guest OS would make me happier. Then you have a binary you can just deploy 10 years after it has been compiled. And then you could extend this model to all kind of apps, including client/UI apps.
Sounds like you want a VM. Though Docker containers come close in that you're depending on very narrow interfaces. I've got Docker containers running that were made from OpenVz containers that were built from machine images back in 2008. They're still running, if only because the original machine images were poorly documented enough that rebuilding them would have been a pain. But packaging them first for OpenVz and then for Docker worked just fine with minimal effort. I'm not sure what more you want.
so you mean ... reinvent kubernetes?
Not the clusterfuck that is kubernetes.
What you would want is an interface that doesn't require that coupling.
POSIX is probably something closer to what I mean.
As the Joyent branded zone is the native instantiation of the OS sharing the same kernel, there is no hardware virtualization, or rather, hardware is virtualized only once and zones then share the abstraction, as they are just normal UNIX processes. It's a completely different approach than hardware virtualization. One global zone can have up to 8192 non-global zones all sharing the same kernel, and the imgadm / vmadm combination makes the Docker problem become a non-issue.
Note that Solaris Zones predate Linux cgroups by several years, and that Zones offer full process isolation and were engineered explicitly for isolation/"OS virtualization" purposes, whereas cgroups were initially an attempt to give the kernel information about how to share resources between processes, and still do not offer real security separation or process isolation. cgroups is also a moving target, as "cgroups v2" or "unified cgroups" is still just barely becoming usable (CPU controller just added in 4.15, which is only a few months old).
You can run Docker images directly on SmartOS, because Joyent implemented the Docker API, and they also implemented a Linux compatibility API. It's not always perfect, but the fact that it works at all is admirable.
SmartOS also contains the equipment for hardware virtualization, and Joyent has even ported KVM (and recently bhyve) to illumos.
You can, theoretically, manage OS and hardware-virtualized systems at once through SmartOS, similar to what libvirt tries to provide on Linux (though their LXC integration is deprecated iirc).
I am not a SmartOS fanboy and have not as yet deployed it into anything more than a lab setting, but it, along with anything else non-Linux-centric, is consistently ignored for no real reason.
Well, if it ain't broken, don't fix it.
Is "sort of worked" better than "only works until the next version of docker and docker's next new, new storage layer"?
If you need to add 100x the moving parts and hidden interactions to solve some hard problems then your hard problem just got harder. You've just kicked the can down to production.
To add: this is wrong from another perspective. There are more goals than "massive scale" in software development. Your main costs are wages, so anything that allows programmers to be much more productive (or cheap!) is good for your business. Therefore, settling on something like Python makes sense if you can leverage its ecosystem and the wide availability of the relevant skillsets. I won't be switching to Go until its ecosystem and popularity can compete with older languages.
For instance, it's not particularly hard to wrap everything from the app down to the ruby interpreter in a single directory that you can then drop into a .deb, but because deployment stories in the Ruby community, from at least Capistrano onwards, focused on getting the app code onto the server and then working from there, with all the problems that entails, the folk knowledge of how to do it just never spread that far. You had commonplace craziness like using RVM to install Ruby on the production servers because it was convenient, and once that mindset was embedded, the community as a whole (and I'm tarring with a very broad brush here, but it's not inaccurate) ignored simpler solutions that worked better, and didn't need tooling with as many points of failure. Bundler is another case in point: you just don't need it in production. Or at least, I never found it necessary past the build pipeline, and I think a lot of folk never found out whether they did or not.
The point is, a lot of the apparent deployment difficulty in interpreted languages is entirely incidental and self-inflicted. Now, there is a real argument that Docker removes the need to learn all the ecosystem-specific folk knowledge you need to get to a point where you can do painless, sensible deployments in a new environment, but I think it's just moving the problem around.
But that's not the only way to do it; it's in keeping with the "uberbinary" idea to have a single tarred up directory and a known entry-point, such that once that's copied to the server you don't need anything else, no post-install build steps to get running. I built something to do everything-but-the-interpreter a while ago: https://github.com/regularfry/au.git.
The tricks that let these things work almost always boil down to knowing about the existence of a very few environment variables, and when in the build process you need them set to which values. `au` relies on $GEM_HOME, $GEM_PATH, and $PATH, and knowing what to do with $DESTDIR and `ruby-install` ought to get you a ruby interpreter as well (although `ruby-install` is just a shortcut to `make` in the MRI source). Where you'd look to learn that stuff now, I don't know.
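For anyone trying to reconstruct that folk knowledge, the core trick is roughly this (a sketch, not what au itself does; paths and gem names are illustrative):

    # Build box: vendor the app's gems into a relocatable directory and tar it up.
    export GEM_HOME=/opt/myapp/gems
    gem install sinatra puma
    tar czf myapp.tar.gz /opt/myapp

    # Server: unpack, then point Ruby at the vendored gems before starting.
    tar xzf myapp.tar.gz -C /
    export GEM_HOME=/opt/myapp/gems GEM_PATH=/opt/myapp/gems
    export PATH="$GEM_HOME/bin:$PATH"
    ruby /opt/myapp/app.rb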
Read this again. It makes total sense.
Eventually, some path turns into running against the current, and then the better path is to move to something else.
And what about those millions of lines of code? It's the same as saying "what about all the money I've invested in that inefficient car that drinks gasoline like there's no tomorrow? I've put so much into it, but the new tech is far more efficient, and you tell me to leave that in the past?"
You put those millions of lines in the past and rebuild. If you have the correct mindset, the change will pay off GREATLY...
P.S.: I have done this for work several times in my life, with very drastic/radical changes.
In fact, the MORE RADICAL, the BETTER. "Small" changes are not worth it. The $$$ is not there. But when you make a hard change is when the benefits become most evident...
Just from an ops perspective, if I can use Ansible to provision infrastructure, configure bare metal, configure VMs and build containers (Docker/LXC/LXD)...all of a sudden life gets easier - especially for keeping the containers themselves up to date.
Containers get rid of all these problems: Your images are immutable, you can always build from scratch, and you have full control of what goes in the image. Because of this, a plain shell script or something like Dockerfile is much simpler and easier to reason about. I really don't see the point in Ansible Container.
Do you typically deploy all of these technologies into one VM in production?
I don't think I understand the scenario being suggested here. (I'm experienced with Ansible but not Docker or Kubernetes)
Look, this is from a Windows perspective but I totally get this pro of Docker. I am the last line of defense at my company when stuff goes wrong and the sheer amount of variables that customer environments introduce is frightening. In one case, some HP server management solution changed a benign environment variable to something strange, completely breaking MSBuild (which we used heavily at the time). It took weeks to escalate to me, and hours in a meeting and ILSpy figuring out what was going wrong. Oceans were boiled by burning money.
Docker is plug-and-play. We can mandate that customers can't futz with our image. Our software becomes an appliance that plugs into their network. We can upgrade from known states, with no HP BS throwing a spanner in the works. I am pushing it heavily for on-prem for this reason.
That answers the question cleanly: stupid stuff that happens on customer environments. Don't get me started on how much this lifts off of the ops team - our Docker stuff isn't yet in production but ops are salivating over it. There is no installer to run, no xcopy, you just ask Kube to give you an upgraded cluster.
I strongly suspect change aversion here. Nobody likes their cheese being moved.
Are you still upgrading from a known state when you don't know how long it's been since they upgraded Docker? What if they're on a different version of the Linux kernel than you, or a different filesystem, causing Docker to work differently? All kinds of things about the customer environment that can affect your software are still out of your control.
The problem I've encountered with Docker in my own experience is that, while it intends to abstract over fiddly details of the system environment it's running in, it leaks just as many fiddly details, at a different level. If your goal is a self-contained appliance, you still need something wrapped around Docker, like a VM, at which point a cynic can ask why you need Docker.
Now, even though I say this, the ops team at my company (who know a lot more than me) do put Docker inside the VMs instead of just running the code in them. There is presumably some value to the abstractions it provides.
One of the overlooked benefits of this approach is how technically aggressive it lets you be when deploying components into strange environments... More and more we're getting tools from our suppliers based on containers that use databases, proxies, heavy server tools, and such. We fire up in seconds tools that could never viably be installed in our environment and that we have 0 competence with.
Long running databases in containers is something you need to think about, but guaranteed known states for supplier-side configuration frees them up to be daring, and firing up local DBs for config data with no outgoing ports lets me not care about how other engineers see the world.
I think in a few cases it makes sense to move problems out of containers. A good amount of our customers would likely tell us what to do if we asked them to not worry about the SQL database in a container - they have their backup policies and so forth. This will change over time, but I'm pretty sure that there will always be scenarios where a VM or even metal is required.
That statement right there is, I feel, what Docker is all about. It converts your application from a set of files and configuration into a single blob. The conversion isn't free, but the benefits are clear and compelling.
What's great about my examples is that two of the applications were single binary Go applications that could be downloaded and run on almost any linux box. But they were distributed with docker ... for reasons?
Docker is great when I want to run a local Redis database in one command for development. But past that- ehhhh. Each new feature seems like Docker is just digging itself deeper.
When a technology democratizes access to something that was previously only accessible to those with more specialized skillsets, it will attract people who don't know enough to know how to use it right. But, well, they're still using it. Thus, left-pad.
It's uncovering something that was always there: incompetence is rampant in software engineering. Ask the average engineer to properly set up an Ubuntu box with CI/CD for deployment and they'll likely fail, even with all the help of the internet. Ask them to get a docker image onto a server and they can make it work.
Wider access to technology is... well, I don't know if it's a "good thing" or not. We as an industry pretty strongly believe in "fail fast"; we write tests to make sure our code works, we structure our startups to recognize and pivot around failure, and if that failure comes we stop and say "something's fucked, stop what we're doing". So I do get scared when a technology enters the stage which allows hard failure to be delayed as long as possible. Maybe we need more technology to be harder, have opinions, and be willing to say "you're doing this wrong, just stop and go get an adult." Because the alternative... Equifax? Yahoo? Ashley Madison?
It horrifies me that people run stuff in production not knowing how it's configured. But this happens, and it's why there is a booming market for DBaaS and other such products. Docker is an amazing tool, but it does make some things a bit too easy.
That's what I took away from the article. Production was set up and they don't know what kernel version it has, or what JVM version it's using, or which C library, etc. Or, now we want feature X, so we need to upgrade dependency Y. I already did it on my local machine, but Bob from DevOps doesn't like upgrading things on QA/integration/prod without a change control...
Granted, apps have thousands of dependencies now, so keeping track of it manually is impossible. Docker came along and promised that only whatever's in this one config file is what's installed. Easy!
So it eliminated lots of the server-specific configuration management but created other problems.
Where I work, we have a specific team dedicated to managing this tooling.
We support Python Django, Java, and Go. You can clone the base repositories which include vagrant scripts to setup everything. You develop in that and it gets sucked up into the pipeline. You want something newer? If it works locally in our environment, the SLA says it will work in production. Doesn't work locally, then tough shit.
In personal projects I do with others, I try and do the same thing. Python? 3.6 is our version target. Java is OpenJDK 1.8.0 latest. Rust & Go are compiled with the latest versions. It might not be incompetence in software engineering as a whole, but more of an issue in defining version and software targets.
As a developer, one thing I absolutely love is being able to run an image without having to install anything (other than Docker). If I want to play around with something new, it's easy to get started and easy to clean up.
But where is that state held? Maybe you're lucky and your data can be hosted in a single RDS instance. Maybe your partition scheme is simple.
However, managing distributed state is _hard_.
Also, the world you describe as "should" is not the world we live in. I should be able to exist in the world without danger or undue persecution.
Playing with something new should be as simple as unpacking a tarball and rm -rfing the directory when done.
Trying to recreate this system without using containers or VMs or something is much more difficult, because you can't for example install the MySQL 5.6 and MySQL 5.7 rpm packages on one host at the same time, you have to instead use a generic binary or build from source into custom /opt/ locations, as well as make sure everyone listens on a different port, uses unique config, etc. Oracle express gets extremely upset if things are not exactly the way it wants, and Oracle only installs from an rpm or deb package. The system used to work this way before and moving everything to docker just erased all the complexity and difficulty, it's now ridiculously easy to maintain, upgrade, and extend. Postgresql released the 10.x series some months ago, to add it alongside the 9.x versions all I had to do was add "10" to an ansible file and re-run the playbook to spin it up on the two CI workers. All of the code I use to build these servers is easily understandable and reusable by anyone familiar with Docker and Ansible.
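Concretely, the container version of "several MySQL versions on one host" is roughly this (ports, names and the password are illustrative):

    # Each version gets its own container, host port and data volume,
    # so nothing on the host conflicts with anything else.
    docker run -d --name mysql56 -p 3356:3306 -v mysql56-data:/var/lib/mysql \
      -e MYSQL_ROOT_PASSWORD=secret mysql:5.6
    docker run -d --name mysql57 -p 3357:3306 -v mysql57-data:/var/lib/mysql \
      -e MYSQL_ROOT_PASSWORD=secret mysql:5.7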
Besides that, I use docker containers for hosts where I have for example services running that need to use particular VPNs, using the containers so that each service has its own network environment. Again, I could instead get multiple OpenVPNs running all at once on the host and mess with the routing tables and hope that a new routing rule doesn't break one of the other services, but sticking everything in Docker containers again totally simplifies everything.
I seem to be good at finding use cases where Docker makes things much simpler and a lot less work. I'm also using Ansible to orchestrate everything so I suppose I shouldn't get the author of this post started on that :).
This person doesn't like that you need to spend time Dockerizing your app but then writes this:
> I would recommend that a company look for chances where it can consolidate the number of technologies that it uses, and perhaps use modern languages and eco-systems that can do this kind of uber binary natively.
So you want me to re-write my 50,000 line Rails app into Go so I can ship a binary instead of Docker?
That doesn't seem reasonable, especially not when it takes literally 10 minutes to Dockerize a Ruby / Python / Node / "just about any language" app once you understand how Docker works.
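Most of those 10 minutes is spent writing something like this (a rough Rails-flavoured sketch; file names and versions are illustrative):

    cat > Dockerfile <<'EOF'
    FROM ruby:2.5
    WORKDIR /app
    COPY Gemfile Gemfile.lock ./
    RUN bundle install
    COPY . .
    EXPOSE 3000
    CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
    EOF
    docker build -t myapp . && docker run -d -p 3000:3000 myapp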
That and most web applications aren't just your programming run-time or big fat binary. There's nginx, Postgres, Redis, Elasticsearch and whatever other service dependencies your application needs.
I've been developing and deploying web applications for 20 years and for me Docker solves a ton of real world problems and am very thankful that it exists. Nothing that I've tried prior to Docker has made it this easy to get an app up and running consistently across multiple environments.
It also makes it super easy for me to keep my development environment in order which is important because I'm always hacking away on new side projects and need to keep a bunch of freelance related projects in total isolation with their own set of dependencies. Docker is perfect for that.
Why can't you compile your Rails app into a binary? Genuine question, I haven't used Ruby in ages, but I don't see anything special about the language that would prevent this.
It depends I suppose.
I run Docker for Windows on a 4 year old computer with an SSD and those 50,000 line Rails apps have plenty of speed in development.
Rails picks up code changes in about 100ms. Basically it takes longer for me to move my eyes to my browser and hit reload than it does for a Dockerized Rails app to reload the code changes.
Flask, Phoenix and Node apps are similarly fast.
In production this is a non-issue because the code isn't mounted.
A MacOS ".app" is just a folder with executables and supporting artifacts/objects in it. (So, quite a bit like a container actually!)
I think you're being downvoted because you haven't asked "why" enough times. Why is it that you want a binary?
What is it that a container image doesn't give you that you would get some other way by making a binary? Because a container image is, at least usually when translated to its simplest form, a binary (tarball) with an entrypoint (cmd). So what are you hoping to get out of a "compiled" Ruby app that you won't be getting when it's simply Dockerized?
Is there a good reference for "dockerizing" a typical hobbyist/MVP node.js+express+babel+webpack application that would make sense to someone who understands docker and what it is used for but hasn't waded into dockerfiles and the more technical parts? Just to get a simple Ubuntu webserver set up with all the ports exposed correctly and the apps running.
Feel free to use the Dockerfile as inspiration for your Node app.
The source for that app template is available at https://github.com/nickjj/orats/tree/master/lib/orats/templa....
I think it's worth the effort to learn the Docker and Docker Compose specifics because those skills are what will let you create the files you want. Basically it comes down to how you would end up setting those things up without Docker, and then transforming those steps into a Dockerfile.
If you want to get up to speed with Docker and Docker Compose quickly (but not specifically webpack) I do have an online course available at https://diveintodocker.com/.
Now there are tons of issues in the way things are created and run with Docker, but they will all be sufficiently addressed over time. By adding enough things around the edges (better orchestration and resource management tools, better security scanners, etc.) the end goals and needs of "business" will be met. In the end you have a messy solution compensated for by a series of tools. The pros outweigh the cons.
Until it goes "Boom". In production. With real $$ on the line. And that's where the CTO/VP of Eng recognizes that his developers don't actually understand how their system works and hence can't bring stuff back up in minutes or even hours.
If the measure of success for a technology is that a lot of people are using it and working on it, then Docker is tremendously successful. If the measure of success is that the company building it is profitable, I have no idea. If the measure of success is that the founders can raise a lot of money and then take some off the table ... gonna bet successful here; they've raised $248m. Whether or not it's a good way to run applications is entirely irrelevant based on those definitions of success for the technology.
They were operating on a different definition of success.
As with any abstraction layer, lower-level complexity is abstracted and replaced with a new higher-level complexity. Is that good or bad? It depends. It's a boring vague answer, but it is the truth. Saying "Docker is terrible" doesn't help anyone.
Docker as in downloading random images off Docker Hub and plumbing them together: not so great; quality varies and most people don't actually check what they are really running. Often they are VMs packaged up as a container, badly.
As the IT world loses its freedom through the "productarisation" of everything and the wish for an unreasonable, lofty simplification, our jobs become de-skilled and paid less, and a bunch of inept people lead the way towards disaster.
The rub is that there is no single answer; each IT project has its own peculiarities and history. I mean, of course there are patterns, but Docker won't solve all problems, nor will Python or Go. It depends.
To unravel the complexity of the current problem at hand you need expert people, not just product and buzzword sellers/buyers.
I argue that sometimes, by hiring good people, you would probably save the money you spend on all the hassle, licenses and bugs that those "mature" products come with.
There are some wise companies around, but the reality I see is that they over-spend on their IT infrastructure as the management, who have no idea, follow the craze.
All this will come back as more and more of those companies go bust, but we need to wise up so we don't get hurt ourselves.
The rub is that to run a medium-to-large infrastructure, or to run a software company, you do need competent people, and there are not too many of them around, I am afraid.
> The rub is that there is no single answer; each IT project has its own peculiarities and history. I mean, of course there are patterns, but Docker won't solve all problems, nor will Python or Go. It depends.
Docker goes a long way to solving exactly that problem, though. It allows you cover any variety of peculiarities and history by tailoring each and every app's environment appropriately. You seem to dislike Docker because it is buzz-worthy. Sometimes the buzz is onto something.
There is too much confusion around and way too much marketing, it all seems new and it is not, which does not mean it is good, it depends.
"However, when you consider the hoops you have to jump through to use Docker from a Windows machine, anyone who tells you that Docker simplifies development on a Windows machine is clearly joking with you."
(By the way, can anyone give me the 20 second version of how to get a container to be able to talk to an existing remote database?)
On the other hand, his comments on language seem... schizophrenic. Some other languages are not "cloud ready", but Go is? Python has always had a concurrency thing, but jumping on Go or Elixir and expecting the magic fairies to come out of the walls and fix your stuff is delusional.
On the other hand, it's been clear to me almost from when I started using it that docker is great for development, but likely to cause huge headaches in production. It occasionally behaves in weird ways I can't explain, which is fine during development, but would potentially be a showstopper at scale.
2) Existing remote db: I haven't tried it, but I recently came across docker run --net="host" while finding host.docker.internal as the way to access services running on the host from inside the container. Would that do the trick?
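For a database that's genuinely remote (not running on the Docker host itself) I don't think you even need that; the default bridge network already allows outbound connections, so you mostly just pass the connection details in, e.g. (hostname and variables made up):

    docker run -d --name myapp \
      -e DB_HOST=db.example.com -e DB_PORT=5432 \
      myapp:latest
    # --net=host / host.docker.internal only matter when the DB runs on the Docker host itself.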
Python's packaging story is just remarkably awful. Every few years we hear about a new packaging format or tool or database that will fix everything, for real this time, and it turns out to be just one more failure on the pile. This is what the author means about not being "cloud ready": it's just way too hard to get a python-as-it's-actually-used (i.e. with a bunch of native C extensions that rely on system shared libraries for their functionality) service packaged up so that you can deploy it to some random server and be confident that it will work right. It's not that Go or Elixir have magical fairy dust, they've just managed not to screw it up.
Docker AND Kubernetes combined though is such a powerful pattern. They bring the best of orchestration and binary management. It becomes a single API to scale horizontally and vertically.
Kubernetes is really what will make containers work.
Docker as a pseudo-VM where creation is incremental and intermediate states between Dockerfile lines are cached is really nifty for development, but the end result is little different than tar-ing up a filesystem after doing a clean box deployment. I think this bit is the sugar that got a lot of people hooked on Docker, even though it's not actually the good bit in the end.
Building self-contained .exes that install with minimal dependencies has been a pretty easy task under Windows since the mid-90s. My entire company was founded on this premise, and has been developing and marketing embedded (as in, compiles directly into the .exe) database engines for Delphi/Free Pascal for 20 years. Our customers range from small shops to very large corporations, and they all need one thing: easy packaging, branding, and distribution. Some of our customers distribute their application to thousands of machines, and most of the time the entire thing is comprised of one (or just a few) executable(s). And these are machines that the vendor has zero control over, so the product has to work in some of the most "hostile" environments one can imagine.
Under Windows, it's simply:
1) Use a standard installer to package your application executables/DLLs.
2) Make sure that you install all binaries into \Program Files.
3) Store your configuration information under the user's application data directory (local or roaming, your choice) in an .ini, .json file, etc.
In most cases, that will get you an installation (and application) that works on any Windows machine back to Windows XP. The only exception is if you need a Vista+ or Windows 8+ API, but you can code your application to fail gracefully in such environments, such as falling back to a different API or just trying to dynamically load it and display a decent error message if the API isn't available.
So, after all that: why is there such a reluctance to do the same thing on Linux ? Why does everyone want to over-complicate things ?
1) Install dependencies via package manager.
2) Execute binary.
That's it. I don't need a "standard installer", I don't have to install application files to an arbitrary location blessed by the operating system, I can store configuration information in the home-directory or in the application directory or anywhere that makes sense for the application. This works in almost every linux system "going back over 15 years".
Now explain to me how I'd deploy an application to Windows with a dependency on, e.g., two specific versions of IIS, an MSSQL database and a Node application server to any Windows machine in the last 15 years.
Edit: also just found this, which does a way better job than I could of describing the issues:
Re: Standard installers: you aren't required to use an installer, but it makes things easier. You could just copy the .exe to a directory and run it. Most utilities work that way.
Re: \Program Files: you aren't required to install your application there, it's just good practice.
Re: installing other products with dependencies: you would install them just like any other product and would use their installer. It's up to them to make sure that they keep their dependencies in order. I, for one, certainly won't defend MS in terms of how they distribute their applications. I personally think they're a rat's nest of overly-complicated dependencies, but that is not determined/caused by Windows itself.
Obviously, anyone can screw up anything, so the fact that an application installation won't work on a particular OS instance/version can very well be an issue with the application, and not the OS. But, that's basically my point: it's okay if the application screws something up, but the OS should present consistent and backward-compatible APIs for application binaries, and any application-specific libraries should be bundled with the application and installed into application or user-specific locations.
You mention DLL hell: this really stopped being an issue in Windows XP because of two things:
1) MS made it so that you cannot very easily drop DLLs into system directories anymore, and strongly discouraged anyone from doing so going forward.
2) MS made an effort to add features like assemblies to allow versioning, etc. to be used in the case where you absolutely, positively needed to do the above:
However, almost no one besides MS uses assemblies (.NET uses them extensively) because they're complicated to manage and they're not necessary (this is the lesson that Docker advocates are not learning). Global, shared user libraries are a feature for a past that no longer exists where disk space was at a premium.
Linux distributions need to a) figure out what a standard Linux API consists of, and b) make the changes necessary to keep these standard APIs in place across all distributions (with backward-compatibility). The browser vendors were able to do this pretty well, and JS in the browser wouldn't work at all without it.
Finally, my motivation here isn't to bash Linux because "yay Windows !", rather it's my frustration of watching this go on year after year with Linux, each year hoping that this kind of thing would get resolved and that I might be able to start targeting Linux wholesale. I just cannot understand why this is not a priority...
0.1) Convince every single library author to open source their code.
0.2) Convince them to submit (and maintain) the package across all Linux distributions.
And many more..
Oh, and hope you don't depend on a specific package version, because you might run into dependency hell on Linux..
Everything is "easy" when you re-define the problems as non-problems for your particular use case.
>Ironically, this is a non-issue with docker since you can base your image off of whatever distro you need.
Maybe docker would help, but I do think suggesting docker in the comment thread of an article detailing how broken docker is is doubly ironical :^)
If there was an easy way to create a software project -> Build -> distribute the result to several OS -> Expect it to run the same way config wise, then there would not be a need for a tool like Docker.
But, as with all new technologies, there are always idiots who misuse the new power. Building Docker images for deploying a single Go binary is about as idiotic as it gets.
Many Docker use-cases can be solved by a basic script that sets up the software, runs it, and cleans up after.
*) That, and tens of millions of investor money/ad spend to implant the idea into people's heads that you need Docker
As a consequence, it's a nightmare to update a system without some kind of regressions, it's also a nightmare to make sure environments are close enough to be representative of production.
Docker kind of solves this issue by bundling every dependency (a bit like a big Java .war file, a Python virtualenv, Ruby bundler, or even some LD_LIBRARY_PATH trickery or static compilation with C/C++).
But this approach is wrong. Very soon, you have 69 frameworks in production each in 42 versions at least, with 13 installation patterns, so you cannot really scan the containers, and you start giving up on maintaining this huge matrix.
And at one point, you get some old (CentOS 6) containers failing to boot anyway because you updated the underlying OS (the last bit you're able to manage), and it disabled something this old container was relying upon (like its libc needing the old and somewhat dangerous vsyscall).
I see two things missing:
Systemd can already apply cgroups to a process. If it could also do a process firewall and/or use vnet interfaces for a process, that would be incredible.
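For the cgroups part, something like this already works today (unit name and limits are illustrative):

    # systemd-run puts the command into its own transient unit/cgroup
    # with CPU and memory limits applied.
    systemd-run --unit=myservice -p MemoryLimit=512M -p CPUQuota=50% /usr/local/bin/myservice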
The next thing is a cross-platform definition language for installing and initializing a process: "I need an LDAP server with this init script setup". Basically, bring IoC, DI, and the Hollywood Principle to IaC.
Please, for the love of FSM, don’t give Poettering any more ideas for responsibilities to stuff into systemd.
The ecosystem is still maturing, but many organizations are adopting containers as a catalyst for advancing their DevOps efforts. Some may want to wait longer than others for the ecosystem to mature, some may want to abstain altogether. It's not a radical concept though, so I disagree that most will "regret" containers being mainstream in 5 years just like most don't regret Cloud platforms being mainstream today.
It's funny. I work for a company that has a fully Dockerized pipeline, and a relatively small operations/devops team. We're looking to expand the devops team while staying rather steady for the product development team.
Maintaining services, even ones installed over the top of PaaS and IaaS requires manpower and time. And the larger we grow, the more apps we produce, the more external services we consume... the more expertise and time is needed in operations. What could at one time be handled by AWS and Google on our behalf doesn't scale as we grow.
Someone needs to understand the AWS API and all of its lovely discrepancies. Someone needs to own CloudFormation template standards and IAM roles and secrets management. Someone needs to set up and maintain security packages, track vulnerabilities, and prop up the pipelines. Someone needs to own monitoring and the logging pipeline and be on call 24x7 to respond to issues that aren't, and can't be, automated.
Certainly not what management expects or even wants (especially since it did, at one time, just work), but production is still not free, even in this day and age.
> 95% of the budgets of most software companies goes towards writing code, not dev ops.
As a company grows, expect this number to be slashed dramatically. Not because of operations, but for sales and marketing and administration and legal and HR and CS and infosec and...
The problem is docker on its own is a gloriously complicated chroot with a cgroup wrapper. If it was just that I think it'd be ok. However the horror around storage (overlayfs, the mangling of DM, and the avoidance of a real filesystem designed for snapshots) is annoying as hell ()
However, what people think of as Docker isn't Docker at all; it's the orchestration layer. The problem is, there is no one orchestration system that fits all. For example AWS Batch is a reasonable orchestration system if you just want to fire off a bunch of vaguely related tasks. However it's terribly limited compared to Pixar's Alfred or Tractor (1/2).
K8s is a mess of config, instantiation, orchestration and logging; it's a complex beast, which is difficult to tame cheaply (unless you use GKE, but then you still need to program _for_ it.)
I am currently trying to tame AWS Batch to perform actual batch jobs, and write the support tools needed to make it useful. Of all the things that docker does, the only _useful_ thing it provides is the chroot+tar wrapper. However that's not worth the massive penalty of using AWS's bollocks lvm+dm horror for storage.
Which leads me onto storage.
Docker's dm+lvm is the worst of both worlds: it gives you the appearance of thin provisioning, but doesn't actually give it. You have the penalty of IO redirection, but no gain. I now have to waste money by creating an AMI that uses ECS and ZFS.
There's a recent presentation on Kubernetes without Docker, Buildah and the rest of this work here:
I've worked with OpsWorks (Chef), Saltstack, Puppet, Ansible, and Capistrano, and Convox is a breath of fresh air. I hate Chef with a passion, since I've had to maintain a cookbook for a client that is using OpsWorks. It's ugly, painful, and extremely slow.
Docker is a great tool, but it's not a PaaS. You need a layer on top to make it useful for production. But I do love having a CI image that is identical to production.
Say, I have a JVM-based web application along with a database, like Play/Scala with some MySQL or H2. Is there any good reason to dockerize anything, or am I better off running them in bare form on a server?
The applications themselves (or the JVM as a platform) should already act as a good enough abstraction from the underlying platform, right?
Better behaved applications (statically compiled golang apps being one, I guess) will not be so dependent upon a specially configured environment.
I think it gained popularity originally because this is a big problem on development environments and docker was the first (possibly only) technology to really try to solve it. It doesn't solve the "works on my machine" problem and it doesn't do what it does particularly elegantly, but it gets part of the way there.
I think the reason people then ported it on to production was because they liked consistency between environments, not because it is especially well suited to production environments. The creators obviously encouraged people to use it in prod (so now it's "best practice") because there's more money in your prod environment than there is in your dev environment.
I still think the whole ecosystem is somewhat shoddily put together and the culture is cargo cultish, so I try to avoid it where possible.
If you are planning to bring more developers, or there is a risk that you will have to leave the project for long enough to get some form of dependency rot, then Docker will eventually make your life much easier. Though Docker or any other container technology requires a non-trivial time investment.
Although docker images are not exactly reproducible (you do apt-get update and you can get different results), it's good enough. For me, it boils down to isolation and reproducible systems.
I don't understand how the argument goes from docker unnecessary and a little bit silly, to docker being actively dangerous.
Supporting a legacy docker app (ie something that was shipped two years ago) is a massive nightmare, because the build system used to make it is almost certainly not going to work.
This means keeping the software patched becomes a massive massive pain. (mind you the same argument can be said for statically compiled binaries)
The assumptions, the wrapper, and the plain horror that is the storage system is just such an arse.
Although the same can be said for any package management libraries (npm, apt, etc), because Docker works at a server level, it opens up a whole new ballpark of exploitations on your app.
"I wish developers were more willing to consider the possibility that their favorite computer programming may not be ideal for a world of distributed computing in the cloud. Apparently I’m shouting into the wind on this issue."
Maybe understand someone's job before telling them what tools they should be using? Sure, simpler is usually better engineering. But results are important. Docker is helping a lot of developers get results, and the author is clearly not one of them.
And in conclusion:
"Docker strikes me as a direction that one day will be seen as a mistake."
Name one technical movement that wasn't! I'm not sure why it's such a shock that programmers are "fashion-conscious". It matters more that we put our collective effort behind something with a few years' vision, not necessarily what that something is.
(I speak as a member of the minority waiting for the phone industry to get over its obsession with black glass rectangles and get back to making proper phones for grown-ups that come with proper keyboards, rather than pandering to the teenage dem~~increasingly loud static hiss~~)
The author also seems to downplay the simplicity and ease that container orchestration brings to the deployment and management of distributed infrastructure.
I may be too dumb because I'm unable to infer for myself all the reasons Docker is bad that you haven't bothered to include in your article.
Or I may be too smart, because I found setting up Docker for Windows incredibly easy (I downloaded an installer and ran it), I find Dockerfiles easy to write (most fit on one screen), and I find running Kubernetes in production (on Azure AKS) to be straightforward (write YAML files, Google stuff occasionally).
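To give a sense of what "fits on one screen" means, the typical shape is roughly this (base image and paths are just illustrative):

  FROM python:3-slim
  WORKDIR /app
  # install dependencies first so this layer caches between builds
  COPY requirements.txt .
  RUN pip install --no-cache-dir -r requirements.txt
  # then copy the source and declare how the service starts
  COPY . .
  EXPOSE 8000
  CMD ["python", "app.py"]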
I honestly had to scroll back to the top of your post from halfway down because I thought maybe it was written three years ago.
I'm not this either.
I do disagree that "devops" is a job role and the sole responsibility of particular engineers on your team.
So what does Docker bring to my team? Well, it means repository maintainers have control over how their software is packaged, and our deployment systems don't have to worry about configuration management on the host systems we deploy to. That saves our business time and money. It has instead allowed us to focus on high-level A/B canary deployments.
Though Docker today is much better than Docker just a few years ago.
I've used those third-generation configuration management systems to deploy services across hundreds of machines in two data centers, usually without a hitch. It works great. But we were deploying a homogeneous suite of services that were all written in the same language ecosystem. Never had a problem.
This is why I think it's important to think of devops as an approach to structuring teams. When the people who run and operate the software are on the same team as the people who write it then the trade offs that make sense become much clearer.
If it were just about isolation you could always tarball up some chroots in a very similar fashion to Docker. But it's not just about application delivery - it's about the whole process from development to production, code to ops.
Beyond that, I don't think I'm going to "regret" Docker. Yes it's overly complicated, has a bad track record, and so forth. So? It's the best option we have right now. Something better will probably emerge and we'll move to it. I was banking on rkt myself.
But until then, I'd rather get the best we have even if I know it could be better.
I surely do feel old reading this. I actually remember being excited about Puppet as the new thing.
I recently interviewed a candidate. When discussing languages in use, I mentioned a language this person probably didn't like all that much, or didn't have that much proficiency in. His comment wasn't "why did you choose to use that" or something along those lines, but "when are you getting rid of it". The interview came to a close pretty quickly after that. There was little interest in why these choices were made, or why perhaps that language was deemed to be the best tool for that particular job. I really wish this was an outlier, but it is the prevalent attitude. Every day I work with developers who have little regard for what tool is best for the job at hand, and approach problems with the idea of "Well, I have this hammer here, and I really know how to use it well, I'm pretty sure I can ram this screw into the board with it" and subsequently proceed on a mission to convince everyone that the hammer is the only tool worth considering. I suspect that the deeper issue is either supreme overconfidence and arrogance, or a deep insecurity being masked.
Whichever one it is, the issue remains. I have had devs insist that "docker is the only way in which I can deploy my app, we need Docker RIGHT NOW". Container orchestration is being deployed at $work and we have published a set of guidelines to make sure that developers have an idea of how to ensure their apps will function properly in this environment - it is much along the lines of 12-factor apps, with some tweaks. The pushback I have received from some is that "with docker it shouldn't matter, and whatever I do on my PC you can just pick up and drop in place", with a complete lack of regard for the fact that their PC is not the same as our production environment. After discussion it becomes clear that this is rooted in a combination of ignorance ("I don't know how to do that") and laziness ("I made this app, it works on my PC, it is not my problem anymore").
Doing battle with the tech-fashionistas is a regular thing. No, we are not going to re-write all our applications in C# because we have an intern that can only use C# (an actual discussion). No, we cannot randomly deploy everything on Docker because that is the current hype du jour. And no, we are not going to back up petabytes of data to the cloud because everyone is doing it - laugh all you want about tape, but it fits our use-case better and is orders of magnitude more cost-effective.
One of the things that Lawrence mentions really resonates with me:
"The guiding rule should be “What is the simplest way to do what we need to do?” If the older technology gets the job done, and is the simpler approach, then it should be preferred. But if there is a new technology that allows us to simplify our systems, then we should use the new technology."
I wish we could all get behind that....
Yes. This misconception is the main promise made in Docker's marketing, so it's no surprise that people who've gobbled it up are unhappy when confronted with the fact that the pixie dust and the lands of eternal rainbow-sunshines are fantasies. (Sidenote: it's sad how non-admin-capable developers interpret the promise of demoting ops to mindless button-pushers as fantasy fulfillment, but that's an issue for another time).
Docker had a massive bonfire of VC money to perpetuate these false beliefs, and the message has been magnified by BigCos like Google, who've placed it at the center of their corporate strategy to retake the cloud.
Unfortunately, in most cases, the Direction-Setter is not going to own up to being misled and needing to double back and fix it. They just lob it over the wall to RealOps, courtesy of DevOps(TM), and expect them to be grateful for it.
Meanwhile, RealOps's job is much harder than before, because you have the ignoramus shouting "What are you talking about?! You just don't know how to use this revolutionary new technology from DOCKER and GOOGLE! Are you smarter than Google, Bob? I coded up a whole Dockerfile over lunch!"
We have a lot of naive, self-important dilettantes in tech these days, trying to act like the 4 minutes they spent skimming the Docker manual make them smarter than the greybeard who just doesn't have the time for every painfully empty fad on the block.
We should talk about how we can stop that infiltration.
As someone who doesn't like docker (the software) or particularly trust Docker (the company) to be good stewards of docker: I disagree. To me, the author sounds like someone with a severe case of NIH.
In fact, my experience has led me to an ideal: every environment should be identical aside from its scale. I'm well aware that docker doesn't accomplish this unless everyone's running Linux. However, the argument of just slapping together some ad hoc scripts for the dev environment and then again some new scripts for the production environment makes my skin crawl (and it was a bit of an ongoing battle with the ops folks at Megacorp, outside of their docker shenanigans). Having the same interface for a dev environment that I do in production means that I can practice and test things well before they hit customers. Change requests become less scary. And, of course, my ubergoal: ad hoc environments become possible.
People would actually learn to do things well instead of chasing the latest crap every year or so and only learning to use a technology superficially.
Honestly, if the developers are thinking at this level and management doesn't understand pushing back and separating things out, such that the app developer only has to worry about "redis" or "postgres" (or maybe both) handed to that part of the app as environment variables... the company isn't doing a good job of understanding its stack.
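Concretely: the app only reads something like DATABASE_URL and REDIS_URL, and whoever runs it decides what those point at (the variable names and image here are only an example):

  # dev and prod run the same image; only the injected endpoints differ
  docker run -e DATABASE_URL=postgres://db:5432/app \
             -e REDIS_URL=redis://redis:6379 \
             myapp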
"Let’s say I’m working on an existing code base that has been built in the old-style scripting paradigm using a scripting language like Ruby, PHP, or (god help us) node.js.
…I can just about see how we can package up all our existing code into docker containers, sprinkle some magic orchestration all over the top, and ship that.
I can also see, as per the article, an argument that we’d be much better off with [uber] binaries. But here’s the thing: You can dockerise a PHP app. How am I meant to make a [uber] binary out of one? And if your answer is “rewrite your entire codebase in golang”, then you clearly don’t understand the question; we don’t have the resources to do that, we wouldn’t want to spend them on a big bang rewrite even if we did, and in any case, we don’t really like golang."
And the reply was:
"In this example a company has had a PHP app for a long time, and now it needs to Dockerize that app. Why is this? What has changed such that the app needs to be Dockerized now? This feels like an artificially constrained example. How did the app work before you Dockerized it? What was the problem with the old approach, that you feel that Docker will solve?"
But I think that totally missed my point. Lawrence has written a compelling argument about how wonderful what he's now calling uber binaries are. I'm sold! I want them! But I cannot make an uber binary for a PHP app (as far as I'm aware). I can dockerise it, and sure, an uber binary is much much better than a container, but maybe a container is still slightly better than a traditional app. Dockerisation is possible (for most people); uber-binarification is impossible (for many people). If you're writing an article about how uber binaries are better than containers and you miss out on the biggest advantage containers have over uber binaries for many people, your article is not going to engage with people the way you're hoping.
And for the record: We don't think we need to Dockerize our app, we haven't dockerized it, and we're not dockerizing it. We have zero containers of any type in production; we rely on some Puppet templates and a small handful of hand-rolled shell scripts, and it works great. We're docker skeptics and agree 110% with everything Lawrence is saying here, including the bit where he talks about how great uber binaries make deployments.
"What was the problem with the old approach, that you feel that Docker will solve?"
Man, don't ask me, you're the one who wrote an entire blog post about it! (Admittedly, the post was about how uber binaries are even better than containers as solving these problems, but as above, I can't use uber binaries. I can use containers. And the more you talk about how uber binaries are like magical super-containers, the more it makes me wonder if we're missing out on something with our repo of Puppet templates and bash scripts.)
Moving to docker is not free, and the team should honestly evaluate whether those resources would be better spent fixing the underlying problem, instead of using docker to sweep it under the rug while getting credit for being modern and utilizing the best and latest tools available.
His final response was not to you, but to all the people dockerizing php apps without honestly tallying up the real cost/benefit due to a biased affinity and comfort with languages no longer suited to the changing world.
That's just my tl;dr of his argument, not an endorsement.
Damn docker is so good for local development at least. Was showcasing a script for putting metrics into InfluxDB. The guy looked at me like I was a freaking sorcerer when I had it running in seconds with one command: `docker run influxdb`
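(To actually reach it from the host you usually publish the port too, but it's still one line:)

  docker run -d -p 8086:8086 --name influxdb influxdb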
What are those hoops? I'm curious / slightly worried as to what I missed. Surely author didn't mean enabling Hyper V / virtualisation? Thanks.
Since the Docker for Windows client was released I've had no problems at all. (In the interest of full disclosure there were a couple of early beta issues that were quickly fixed, however that's to be expected).
It's been very smooth for me as well, especially after figuring out how to write docker compose files. Which is arguably not hard at all, once you get used to them. :)
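For anyone wondering, a typical local-dev compose file really is this short (service and image names are only an example):

  version: "3"
  services:
    app:
      build: .
      ports:
        - "8000:8000"
      environment:
        - DATABASE_URL=postgres://db:5432/app
      depends_on:
        - db
    db:
      image: postgres:11
      environment:
        - POSTGRES_PASSWORD=dev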
It’s not glamourous but it’s also not chasing fads or cool-kid flavours of the week.
Previously I was terrified of touching any server for fear of messing something up and then having to twiddle with tens of config files and apt-get. Now I just rebuild the image and deploy. It's 100 times easier.
I missed the ansible train, but my understanding is that ansible is fundamentally just doing the same stuff, just automated and better documented. If you make a mistake, you can mess up many servers and it will be a pain to fix.
Node.js, Docker, DevOps, TDD, Agile, blah blah blah. Yes, each of those can be quite useful in the right situation, but it is a symptom of lack of experience (particularly where management and tech leadership are concerned). A good many of the actual jobs and work environments are probably being held back or otherwise limited by the companies' need to have some of the more common popular tools on their list. But they don't know that, and there's no point arguing it. If the job is still attractive, one's best hope is to get in there, get some seniority, and quietly start moving things in the right direction. Or maybe you find out that their buzzword soup is actually seasoned right for their situation...
But to be fair, part of our problem industry-wide is just how many choices we have. You could argue that choice is good, but it creates fragmentation which eventually self-organizes into bandwagons. The huge question is, "In the end, are we better off?" I greatly suspect the answer is, "Not much..."
"In the cloud" has become such a thing these days that companies that explicitly avoid the cloud on the technical side end up writing job ads that say things like "help us transition to the cloud!" I don't know if this is now considered standard marketing/HR mumbo jumbo or what, but just like a resume, a job ad should not necessarily be taken at face value.
Here's what I tried: https://www.indeed.com/jobs?q=software+engineer&l=New+York%2...
Not that I want to manage solaris though...
All new technologies are inherently more complex than the last ones, solving some problems and introducing others...
I think that technology is driven mainly by waves of simplification, not added complexity.
Indeed, this is exactly why I don’t like docker: it introduces fresh, deep complexity of its own.
I only know Docker from a high level (the concepts and the problem it tries to solve), so I'm not in a position to give an opinion about its implementation, but I think a reasonable expectation from every technology is that it should be as simple to use as the use case requires.
Of course, sometimes the problem isn't the technology but the way it is used. You wouldn't use a commercial airliner as a quick means of transportation to your local convenience store. It's not even overkill - it's just the wrong tool for the job. The problems start when that's considered the cool thing to do. Unfortunately, technology isn't always about what's good, but many times about what is trendy; things will eventually converge to what works best, even if they have to take the longer path.
So there is the challenge - finding the tools which give enough advantage without imposing too many future burdens. I'm not one of these people, but there are people who still use C and Make and bash and build huge, impressive systems. Sometimes narrow and deep means capable and happy.
It's always seemed to me as nothing more than a way to package a (mostly) preconfigured piece of software.
Containers aren’t even encapsulated like virtual machines; they are tightly bound to the kernel of the host. Ugh, nightmare.
Then management of all this. Ugh.
Complex, complicated and has necessitated effectively building operating system infrastructure within an already complex operating system to facilitate all this complexity.
Which bit of this is simple?
I'm currently developing Sitecore websites and working with the CMS is pretty demoralising.
My use case: I want an easier way to deploy Postgres and python/.net core apps. That's it.
The world moving toward standardization is a BIG deal. Docker doesn't add complexity. Docker reveals the real complexity of the problem when you require a stable solution.
Let's take this simple example. Let's say you set up shop with a Virtual Server that has Ubuntu xx.xx and you write a simple piece of PHP code that sends an email:
mail( "firstname.lastname@example.org", "spam" );
-> It is not clear what is sending the email here.
-> It is not clear what PHP is using to send the email.
-> It is not clear how your Virtual Server is relaying the email.
-> It is not clear what will happen if it fails.
-> It is not easy to change after you have coded it.
That single line is very simple and works. But it becomes a mess as you keep progressing. The reason I keep investing in Docker is this: I have been burned to the core by complexity that just keeps popping up in my face as my day goes on.
Imagine you have this line of code instead:
mail_service->send( "email@example.com", "Still spamming" );
-> Consul managing your nodes.
-> Mail nodes that expose a mail_service
-> The Mail service needs configuration. Tricky. I can set up and configure an SMTP server, load up and build some template Dockerfile made by some other guy, or pull the MailChimp docker container and give it my API keys.
-> Maybe I'll have three instances. Prioritize the most efficient one and leave the others just in case.
-> Mailchimp goes down? Consul moves traffic to my other nodes.
-> Mail failure? Report to my logging node. Maybe we need to standardize this. So I just load my mail node that integrates with my logging node.
-> Moving from MailChimp to MailXYZ doesn't require any code modifications. No updates for the code. No downtime.
So does it work now? Uh, tough call. Yes, it is a gamble. It is a big gamble. But it is a big gamble that will make us all better off. So let's invest in it. Remember that the tech sector was mostly made out of technological gambles.
TL;DR: Docker is messy. But not because Docker or containerization sucks. It is because they expose the messiness of the real world. Your development WAMP hides a ton of this complexity that only reveals itself once you start scaling and going through practical tests.
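To make the "swap the mail backend without touching code" part concrete, in compose-ish terms it is roughly this (every name here is invented, and I'm following the MailChimp framing above):

  version: "3"
  services:
    app:
      image: myshop
      environment:
        # the app only knows "there is a mail service at this address"
        - MAIL_SERVICE_URL=http://mail:8025
    mail:
      # swap this image (an SMTP relay, a MailChimp bridge, MailXYZ, ...)
      # without changing a single line of application code
      image: some-smtp-relay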
Until then, static binaries seem like the least bad option, and for containerization you can just use the native OS features. And I don't understand how people take this to mean they have to write all their software in Go or Rust; any language can be compiled into a static binary.
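(For what it's worth, the Go version of that is a one-liner; other toolchains have their own equivalents:)

  # build a self-contained binary; no interpreter or shared libraries needed on the target
  CGO_ENABLED=0 go build -o myservice .
  ldd myservice   # prints "not a dynamic executable"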
Sure, if that's the only kind of workload you ever plan on running. No one is really doing this though so this is just useless outrage. This is like Fox News for developers.