Not sure what the appeal is of downloading 5 megabytes and running them in a VM (in the OS X case) through a daemon's REST API for what could be a direct exec() of a 250 kB binary (in the case of jq).
Don't get me wrong, for running services without worrying about their dependencies I couldn't be happier than I am with Docker and the slew of tech that's built up around containers. This, however, is a very complicated solution to a really simple problem.
"All problems in computer science can be solved by another level of indirection, except of course for the problem of too many indirections."
Agreed it's a bit silly for things like jq, but I wanted to see how far I could stretch it. My long-term goal is to have Docker as the only thing installed on my laptop. ;)
This actually solves a too-many-layers-of-indirection problem: Python has virtualenv on top of pip, Ruby has bundler on top of gem, and both of those sit on top of apt + dpkg or yum/dnf + rpm.
Instead of two levels of package management (virtualenv + apt-get), you have one: a Docker image.
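To make the "one level instead of two" point concrete, here's a hedged sketch (not Whalebrew's own packaging; the image and tool names are just illustrations): the image carries a pip-installable CLI and all of its Python dependencies, so the only thing the host has to manage is the image itself.

```sh
# Illustrative only: "httpie" stands in for any pip-installable CLI.
cat > Dockerfile <<'EOF'
FROM python:3-slim
RUN pip install --no-cache-dir httpie
ENTRYPOINT ["http"]
EOF
docker build -t example/httpie .

# Run it like a local command; Docker is the only package manager involved.
docker run --rm example/httpie --version
```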
If you're referring to the VirtualBox dependency, that has been gone for a while now. Docker for Mac is noticeably lighter weight and faster (though still a VM, yes).
5MB + REST: oh heck yes. And far slower to start up (though fast enough for most purposes - it's nothing like what it was with VirtualBox). For problematic things like mysql though, I'm thrilled to pay that cost rather than fight with the config / linking / etc song and dance every single time.
The vbox dependency is gone, but Docker for Mac still runs in a virtual machine (because it has to); it's just now running in a HyperKit VM [1], presumably with a very lightweight Linux image. This is akin to switching from VirtualBox to KVM on Linux. It's faster, but it's still full virtualization, with all the pros and cons that brings (rather than simply kernel-enforced separation).
For services (like mysql), though, I agree with you fully. Docker is a wonderful solution in that space, because the savings justify the overhead. For wget and jq? Not so much.
Yep, agreed. I do like the sandboxing quality though, as most things I install aren't as widely used and trusted / audited as wget and jq. I might consider using this for well over half of the stuff I touch, simply for default-safety.
Anyway. It's pretty clear this is mostly an experiment to see how far it can go, not making any claim to what you should use it for. I think we can agree it's pretty neat, if a bit silly sometimes?
wget and jq are nicely behaved programs that can run fine from whichever directory they are installed in. What problem are we trying to solve with this? I am only a casual Docker user, so I probably don't get it.
EDIT:
README.md says: "It's like Homebrew, but everything you install is cleanly packaged up in a Docker image."
Homebrew already cleanly packages up programs and libraries in plain and simple Unix directories.
The other advantage is that the processes are not only isolated, they only have access to the current working directory at runtime and cannot access any other path unless you explicitly specify one.
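A minimal sketch of how that looks at the Docker level (the image name is illustrative, not necessarily what Whalebrew generates):

```sh
# Only the current directory is bind-mounted, so the tool can read
# ./package.json but cannot see anything else on the host filesystem.
docker run --rm \
  -v "$(pwd)":/workdir \
  -w /workdir \
  example/jq '.name' package.json
```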
Almost exclusively simpler build/install processes: they don't rely on you having, e.g., an up-to-date Xcode install, and they're likely to handle OS upgrades without any changes for anyone.
So when the next openssl advisory comes out (like today), then instead of installing 1 patch, you have to patch and rebuild $(ls /usr/local/bin|wc -l) docker images?
Only if you build the docker images yourself... in which case you have all of the dependencies on your system. If you're not building them, then you are at the whim of every maintainer to rebuild with a fixed openssl.
Most of the time, that rebuild is as simple as "just push a new version"; if the base image is something like Ubuntu 16.04, it will get the update automatically with the next clean build. You already trust the maintainer to write good code that does what you expect; this is just one more thing.
But what you described is literally "the worst case scenario is as bad as before (building my own images), but it would commonly be better". I'll take that any day.
Also, a lot of apps don't need a full Linux distro shipped with them. If your binary is statically linked (like, for example, anything written in Go), the Docker image can start from scratch, resulting in an image only a few megabytes large and with none of the issues you outline.
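As a sketch of that (a hypothetical hello.go, built with a multi-stage Dockerfile rather than any particular project's setup), the final image contains nothing but the static binary: no distro userland and no libssl to patch.

```sh
cat > Dockerfile <<'EOF'
# Build stage: compile a statically linked binary.
FROM golang AS build
WORKDIR /src
COPY hello.go .
RUN CGO_ENABLED=0 go build -o /hello hello.go

# Final image: empty except for the binary itself.
FROM scratch
COPY --from=build /hello /hello
ENTRYPOINT ["/hello"]
EOF
docker build -t example/hello .
```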
>But what you described is literally "the worst case scenario is as bad as before
No, it's not. Before, you might just have to rebuild openssl yourself. Now you need to rebuild openssl for every Docker image you use and then rebuild every one of those images.
Guess how many Docker images make or accept TLS connections. That's how many you will need to maintain your own build infrastructure for, in order to respond to openssl vulnerabilities, if you run any kind of business that requires good security.
Vendoring (the Go and container model of shipping software) is great when you work with perfect upstream devs. Otherwise, it effectively means you have to build it yourself with patched dependencies for fixes and hope it works.
Wait until you run into a compat issue between the code and a patched dependency and the upstream dev says "won't fix" because you're not deploying the container provided and the security risk isn't addressed because it "doesn't seem that bad".
Most images should not be affected by the advisory, especially by something like openssl.
Still, it would be good to have something in between a packaging system and containers, so the whole repo of images could be quickly rebuilt on top of a new base image with an updated library (mixed-in image).
Since base images are shared, it would be the best of both worlds.
Yes, but only for things that use openssl and that you connect to untrusted input (a substantial number, but far from all).
Generally though, this should only require patching images that provide openssl, which should be expected to stay up to date, not the utilities which use openssl. The utilities should be picking an openssl-providing parent image - assuming that's the case, you'd just rebuild. That would pull in the new openssl image, rebuild against that newer target, and that'd be the end.
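In practice (a sketch, assuming the utility's Dockerfile starts FROM a distro base that ships the patched openssl), picking up the fix is just a clean rebuild:

```sh
# --pull re-fetches the (now patched) parent image; --no-cache forces the
# install layers to be rebuilt on top of it.
docker build --pull --no-cache -t example/tool .
```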
Homebrew uses Bintray for precompiled packages, and I don't have Xcode installed either (only the command-line tools). I've never had problems with Homebrew because of an OS upgrade. These "issues" are all solved, IMHO.
OS upgrades nearly always require re-installing more than half of my packages, which can only happen after fixing various command-line tools, which can only happen after updating Xcode. And some things perpetually link to old versions, so the problems crop up over the next few weeks. And running a pre-release of OS X can cause looooots of complications. Generally though, for release-channel people, yes - Homebrew has done a remarkably good job.
Obviously YMMV, but in a Dockerized world the required steps would be: 1) upgrade the OS, 2) maybe update Docker
Less likely, no. Easier to fix, yes. There is only one thing that can break: Docker. And Docker is developed by a company whose job it is to fix their software. With Homebrew, we rely on each package maintainer to fix their software, and for that fix to be pushed to Homebrew.
I seem to be in the minority here, but here it is: I really like this solution. Homebrew is not perfect, and I've had lots of headaches and problems with it. To install a program, I have to download lots of dependencies with it. Sometimes these dependencies (especially Python) mess with their already-installed counterparts, and then I'm in hell.
Lots of Python versions installed everywhere, and something breaks somewhere. Then comes the day I need a package, and Homebrew gives me the finger. There is certainly no way I can debug the problem, since the debug log is 5,000 lines of incomprehensible gibberish.
This solution looks like it consumes lots of resources. Well, resources are cheap these days, and I expect the next MacBook to have a more powerful CPU and at least 32 GB of RAM. So consuming resources is the least of my worries.
My workflow today already consists of 8 Docker machines. Every product is a machine, and each machine has several containers. The dev deployment is a bash file that creates and runs these containers (see the sketch below). I don't even need a full backup anymore! Everything is backed up in GitHub and reproducible given enough internet bandwidth.
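For illustration only, a rough sketch of what such a dev-deployment script might look like (not my actual file; all the names are made up):

```sh
#!/usr/bin/env bash
set -e
# Recreate one product's network and containers from scratch.
docker network create product-net 2>/dev/null || true
docker run -d --name product-db --network product-net \
  -e MYSQL_ROOT_PASSWORD=dev mysql:5.7
docker run -d --name product-app --network product-net \
  -e DB_HOST=product-db example/product-app
```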
The next logical step is to have Whalebrew, and thus all my Unix/Linux utilities are contained in Docker too. My fresh macOS install will have: Chrome, MacVim, and Docker.
Now, if Safari were as good as Chrome and a Vim superior to MacVim could be installed through a Docker container, my fresh macOS install would only have Docker!
Packaging command-line tools which run perfectly well natively makes me cringe... the first things that come to mind are bloat and more complexity, especially around resource management (in the sense of spending resources and accessing e.g. CPU/RAM). I think it can be useful, though, if you have something which only runs on Linux and not macOS. <sarcasm>Waiting for the Electron version which runs this in WebAssembly.</sarcasm>
- This has the potential to use a lot of disk space by pulling many different base layers.
- Are UIDs inside the container forced to match the current user's UID? The container can run anything as root, and if it writes a file as root, the host will have to `chown` it before it can be used.
- It looks like only `pwd` is mounted. What if a command references a file outside of `pwd`, like `../file.txt`?
The concept is neat, but the issue with root is a huge security concern. What about instead providing a single image with proper permissions, then installing packages inside that container? You could even create different instances of the container. This would also solve the disk-space issue, since the base image would always be the same. And since you trust the image, you could mount `/` into it to allow commands outside `pwd`.
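One way to blunt the root problem today (a sketch, not something the tool does out of the box as far as I can tell): run the container as the host user, so anything it writes into the mounted directory is already owned by you.

```sh
docker run --rm \
  --user "$(id -u)":"$(id -g)" \
  -v "$(pwd)":/workdir -w /workdir \
  example/tool --help   # whatever arguments the tool takes
```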
This is a terrible idea in the short term. I want programs I run to be me. With the same access and permissions as I have. And to transfer that access between them.
Long run, I'm interested. Hope it heads somewhere.
Homebrew, while a huge leap forward, is not a proper package manager. I think this Docker approach is the right one, and I'll be checking it out for sure.
I wouldn't go that far. Docker very specifically relies on Linux containerization technology, and this will likely never change. Docker for Windows or OSX only work because they have a small Linux VM running 24/7 on your machine, and all Docker commands are piped through to it.
What you are saying is a common misconception (although it used to be true). Microsoft and Docker have working technology available right now that lets you run "native Windows" Docker containers: https://blog.docker.com/2016/09/build-your-first-docker-wind... It is only a matter of time before this is available in areas other than just servers.