10 Myths About Docker That Stop Developers Cold (dev.to)
18 points by rbanffy on June 28, 2017 | 4 comments



These all feel pretty dead-on to me. One thing that I think Docker has struggled with is successfully mapping the mental model of working with VMs to working with containers. I doubt I'm alone in having tried and failed to wrap my head around the concepts.

Is there any comprehensive guide on "this is how you do X with VMs, and here is how you do the same thing with containers"? Or are the use cases sufficiently disjoint that this wouldn't even be a reasonable thing?


It depends. Maybe you know the "pet vs. cattle" analogy; VMs are used on both sides of the spectrum, whereas containers are squarely on the "cattle" side of things.

VMs are often used in a pet-style workflow, where an individual VM is long-lived and serviced (e.g. you install security or feature updates and maybe reboot it once in a while). So the VM is a "pet" that you give a name and care for.

You can also deal with VMs in the "cattle" manner: when a VM is having trouble, you just kill it and spin up a new one. (Only when problems cannot be solved this way do you start investigating more closely.) When you want to apply a package update or new configuration, you build a new VM image, spin up a new VM from that image, and kill the old one once the new one is up and running. That approach requires that you can spin VMs up and down at very short notice, so you see it more often in cloud environments than in traditional hosting environments (where you, as the customer, book each VM individually, e.g. via a ticket).

And the whole toolchain around containers is built around this cattle-like workflow, since containers are easier (and quicker) to launch than VMs, and can be thrown away just as quickly.

So if you're managing your VMs with a cattle-like workflow, it should be possible to map the tasks that you're doing there to Docker commands (or, more likely, Kubernetes commands etc.). But if you're working in a pet-like workflow, that's not going to map as easily.
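
To make that mapping a bit more concrete, here is a rough sketch of the "replace instead of repair" step on the container side (the registry, image and deployment names are made up; on the VM side the equivalent would be baking a new VM image and rotating instances with whatever tooling your cloud offers):

  # build and push a new image instead of patching the running container
  $ docker build -t registry.example.corp/myapp:v2 .
  $ docker push registry.example.corp/myapp:v2
  # plain Docker: start a replacement, then throw the old one away
  $ docker run -d --name myapp-v2 registry.example.corp/myapp:v2
  $ docker rm -f myapp-v1
  # Kubernetes: point the deployment at the new image and let it roll
  # the old pods out for you
  $ kubectl set image deployment/myapp myapp=registry.example.corp/myapp:v2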

Note: All of this is about VMs/containers in production. During development, different tools may be used (in the same way that, for instance, you wouldn't use Vagrant to manage production VMs). I'm skeptical that there's enough of a unified approach to make a guide like "command X maps to command Y". (For example, I develop applications that are packaged in Docker for shipping to prod, but I don't use Docker at all during development.) If in doubt, I would look at the program in question and see whether it has instructions on how to use its Docker image in development.


> I can’t see anything in this container

This is only a myth because of the full OS base images, which not every container has. And frequently, the more minimal base images are missing a lot of useful tooling, so it's still an issue.
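
One workaround, just as a sketch (assuming an Alpine base; the app binary here is a made-up placeholder): bake a couple of debugging tools into the image for a few extra MB.

  FROM alpine:3.6
  # bind-tools provides dig/host/nslookup; curl is handy for poking at endpoints
  RUN apk add --no-cache bind-tools curl
  COPY myapp /usr/local/bin/myapp
  ENTRYPOINT ["/usr/local/bin/myapp"]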

> Docker hardly works on macOS and Windows

Docker for Mac is still not very reliable. Shared volumes (related to the IDE and DB "myths") frequently get "stuck", it still occasionally consumes a full CPU core, and finding a shared port (say, for remote debugging) is still a pain.
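
For context, the setup that tends to misbehave is nothing fancier than a bind-mounted source tree plus a published debugger port, e.g. (image name and port are made up):

  $ docker run --rm -v "$PWD":/app -p 5005:5005 myapp-dev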

It's a useful tool, but for reliable development I still prefer a normal local environment, going back to Docker once I'm done developing and am ready to do a release.


> And frequently, the more minimal base images are missing a lot of useful tooling, so it's still an issue.

Oh yeah, I know that problem. This happened to me a while ago:

Me: "Hm, that container is having trouble."

  $ kubectl logs my-container-24fa8
  ERROR: cannot connect to theotherthing.example.corp:443
Me: "Let's see if DNS is working from the container."

  $ kubectl exec -ti my-container-24fa8 bash
  kubectl: bash: command not found
  $ kubectl exec -ti my-container-24fa8 sh
  sh$ dig A theotherthing.example.corp
  sh: dig: command not found
  sh$ host theotherthing.example.corp
  sh: host: command not found
  sh$ nslookup theotherthing.example.corp
  sh: nslookup: command not found
  sh$ ls /bin /usr/bin
  ...
  ...
  sh$ openssl s_client -connect theotherthing.example.corp:443
But I still build my containers on an Alpine base, because the datacenter on the literal other side of the planet has such a shitty link that if the image is 100 MB (rather than 10 MB), it will take multiple minutes to download it on all nodes, even though we have a Squid proxy in place for that in every DC.
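
For anyone wondering how to get an image that small while still shipping the app: a multi-stage build is one way to do it. A minimal sketch, assuming a static Go binary and made-up names (only the final stage ends up in the image that gets pulled):

  # build stage: full Go toolchain, discarded after the build
  FROM golang:1.8 AS build
  WORKDIR /src
  COPY . .
  RUN CGO_ENABLED=0 go build -o /myapp .
  # final stage: just the static binary on top of Alpine, ~10 MB total
  FROM alpine:3.6
  COPY --from=build /myapp /usr/local/bin/myapp
  ENTRYPOINT ["/usr/local/bin/myapp"]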



