
Homebrew, but with Docker images - juanpabloaj
https://github.com/bfirsh/whalebrew
======
andrewstuart2
Not sure what the appeal is of downloading 5 megabytes and running in a VM
(in the OSX case) through a daemon's REST API, for what could be a direct
exec() of a 250 kB binary (in the case of jq).
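
The two invocation paths look roughly like this (an illustrative sketch; the `whalebrew/jq` image name is an assumption, and timings will vary by machine):

```
# Direct exec: the shell forks and execs a small native binary.
time jq --version

# Image-per-tool: the CLI talks to the Docker daemon over its REST API,
# which (on macOS) forwards into a Linux VM before the binary even runs.
time docker run --rm whalebrew/jq --version
```

Same output either way, but the second path adds a daemon round-trip and a VM boundary to every single invocation.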

Don't get me wrong, for running services without worrying about their
dependencies I couldn't be happier than I am with Docker and the slew of tech
that's built up around containers. This, however, is a very complicated
solution to a really simple problem.

"All problems in computer science can be solved by another level of
indirection, except of course for the problem of too many indirections."

~~~
Groxx
If you're referring to the VirtualBox dependency, that has been gone for a
while now. Docker for Mac is noticeably lighter weight and faster (though
still a VM, yes).

5MB + REST: oh heck yes. And far slower to start up (though fast enough for
most purposes - it's _nothing_ like what it was with VirtualBox). For
problematic things like mysql though, I'm _thrilled_ to pay that cost rather
than fight with the config / linking / etc song and dance every single time.

~~~
andrewstuart2
The vbox dependency is gone, but Docker for Mac still runs in a virtual machine
(because it has to); it's just now running in a HyperKit VM [1], presumably
with a very lightweight Linux image. This is akin to switching from VirtualBox
to KVM on Linux. It's faster, but it's still full virtualization, with all
the pros/cons that brings, rather than simply kernel-enforced separation.

For services (like mysql), though, I agree with you fully. Docker is a
wonderful solution in that space, because the savings justify the overhead.
For wget and jq? Not so much.

[1] [https://docs.docker.com/docker-for-mac/#/what-to-know-before...](https://docs.docker.com/docker-for-mac/#/what-to-know-before-you-install)

~~~
Groxx
Yep, agreed. I do like the sandboxing quality though, as most things I install
aren't as widely used and trusted / audited as wget and jq. I might consider
using this for well over half of the stuff I touch, simply for default-safety.

Anyway. It's pretty clear this is mostly an experiment to see how far it can
go, not making any claim to what you should use it for. I think we can agree
it's pretty neat, if a bit silly sometimes?

------
sigjuice
wget and jq are nicely behaved programs that can run fine from whichever
directory they are installed in. What problem are we trying to solve with
this? I am only a casual Docker user, so I probably don't get it.

EDIT: README.md says _It's like Homebrew, but everything you install is
cleanly packaged up in a Docker image._

Homebrew already cleanly packages up programs and libraries in plain and
simple Unix directories.

What am I missing?

~~~
Groxx
Almost exclusively simpler build/install processes, because they don't e.g.
rely on you having an up-to-date Xcode install, and are likely to handle OS
upgrades without any changes for anyone.

~~~
0x0
So when the next openssl advisory comes out (like today), then instead of
installing 1 patch, you have to patch and rebuild $(ls /usr/local/bin|wc -l)
docker images?
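
The difference in patch effort can be sketched as (illustrative commands; the image names and the `whalebrew/` filter are assumptions):

```
# Native package manager: one shared library is patched once,
# and every binary linked against it picks up the fix.
brew upgrade openssl

# Image-per-tool: every affected image must be re-pulled individually,
# and each one only helps if its maintainer has actually rebuilt it.
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '^whalebrew/'); do
  docker pull "$img"
done
```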

~~~
010a
Honestly, that's so much easier and faster than dealing with native
dependencies sometimes.

~~~
hueving
Only if you build the docker images yourself... in which case you have all of
the dependencies on your system. If you're not building them, then you are at
the whim of every maintainer to rebuild with a fixed openssl.

~~~
010a
Most of the time, that rebuild is as simple as "just push a new version"; if
the base image is something like Ubuntu 16.04, it will get the update
automatically with the next clean build. We already trust the maintainer to
write good code that does what we expect; this is just one more thing.

But what you described is literally "the worst case scenario is as bad as
before (building my own images), but it would commonly be better". I'll take
that any day.

Also, a lot of apps don't need a full Linux distro shipped with them. If your
binary is statically linked (like, for example, anything written in Go), the
Docker image can start from scratch, resulting in an image only a few
megabytes large, without the issues you outline.
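
A minimal sketch of that pattern, assuming a hypothetical static Go tool called `mytool` (multi-stage build; the final image contains only the binary):

```dockerfile
# Build stage: compile a fully static binary (CGO disabled).
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /mytool .

# Final stage: start from the empty "scratch" image.
# No distro, no shell, no libc, no OpenSSL.
FROM scratch
COPY --from=build /mytool /mytool
ENTRYPOINT ["/mytool"]
```

With no shared libraries in the image, a libc or OpenSSL advisory leaves nothing to patch (assuming the Go toolchain used to build it is itself kept current).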

~~~
hueving
>But what you described is literally "the worst case scenario is as bad as
before

No, it's not. Before, you might just have to rebuild OpenSSL yourself. Now you
need to rebuild OpenSSL for every Docker image you use, and rebuild every one
of those images.

Guess how many Docker images make or accept TLS connections. That's how many
you'll need to maintain build infrastructure for, in order to respond to
OpenSSL vulnerabilities, if you run any kind of business that requires good
security.

Vendoring (the go and container model of shipping software) is great when you
work with perfect upstream devs. Otherwise it effectively means you have to
build it yourself with patched dependencies for fixes and hope it works.

Wait until you run into a compat issue between the code and a patched
dependency and the upstream dev says "won't fix" because you're not deploying
the container provided and the security risk isn't addressed because it
"doesn't seem that bad".

------
csomar
I seem to be in the minority here, but I really like this solution. Homebrew
is not perfect and I've had lots of headaches and problems with it. To
install a program, I have to download lots of dependencies along with it.
Sometimes these dependencies (especially Python) mess with their counterparts,
and then I have hell.

Lots of Python versions, installed everywhere, and something breaks somewhere.
Then comes the day I need a package, and Homebrew gives me the finger.
There is certainly no way I can debug the problem, since the debug log is
5,000 lines of incomprehensible gibberish.

This solution looks like it consumes lots of resources. Well, resources are
cheap these days, and I expect the next MacBook to have a more powerful CPU
and at least 32GB of RAM. So consuming resources is the least of my worries.

My workflow today already consists of 8 Docker machines. Every product is a
machine, and each machine has several containers. The dev deployment is a bash
file that creates and runs these containers. I don't even need a full backup
anymore! Everything is backed up in GitHub and reproducible given enough
internet bandwidth.
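
That kind of dev-deployment script can be sketched as (a sketch only; every name here is hypothetical):

```
#!/bin/sh
# One user-defined network per product; one container per service.
docker network create myproduct 2>/dev/null || true
docker run -d --name myproduct-db  --network myproduct mysql:8
docker run -d --name myproduct-api --network myproduct myproduct/api:dev
```

Since the script and the Dockerfiles live in the repo, a fresh machine needs only Docker and a `git clone` to reproduce the whole environment.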

The next logical step is to have whalebrew, so that all my unix/linux
utilities are contained in Docker too. My macOS fresh install will have:
Chrome, MacVim and Docker.

Now if Safari were as good as Chrome, and a superior Vim (to MacVim) could be
installed through a Docker container, my fresh macOS install would only have
Docker!

------
therealmarv
Packaging command-line tools which run perfectly natively makes me cringe...
the first thing which comes to my mind is: bloat and more complexity,
especially in resource management (in the sense of spending and
accessing e.g. CPU/RAM resources). I think it can be useful, though, if you
have something which only runs on Linux and not macOS. <sarcasm>waiting for
the Electron version which runs this in WebAssembly</sarcasm>

~~~
zeckalpha
What do you think of the NixOS approach as an alternative?

~~~
therealmarv
never looked closely at NixOS so I cannot say anything about it ;)

------
WillyOnWheels
reminds me of [https://blog.jessfraz.com/post/docker-containers-on-the-desk...](https://blog.jessfraz.com/post/docker-containers-on-the-desktop/)

~~~
irickt
HN discussion of that post:
[https://news.ycombinator.com/item?id=9086751](https://news.ycombinator.com/item?id=9086751)

~~~
WillyOnWheels
thanks! I was not aware.

------
caleblloyd
Neat idea. A few problems that I can see:

- This has the potential to use a lot of disk space by pulling many different
base layers

- Are UIDs inside the container forced to match the current user's UID? The
container can run anything as root, and if the container writes a file as
root, the host will have to `chown` it before use

- It looks like only `pwd` is mounted. What if a command references a file
outside of `pwd`, like ../file.txt?
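
The UID issue in the second point can be sketched as (illustrative; `some/image` is a placeholder):

```
# Default: processes in the container run as root, so a file written into
# the bind-mounted $(pwd) comes out owned by root on the host.
docker run --rm -v "$(pwd):/workdir" -w /workdir some/image touch out.txt
ls -l out.txt    # owned by root; you need sudo chown before you can use it

# Forcing the container to your own UID/GID avoids the chown dance:
docker run --rm --user "$(id -u):$(id -g)" \
  -v "$(pwd):/workdir" -w /workdir some/image touch out.txt

# And because only $(pwd) is mounted, ../file.txt simply doesn't exist
# inside the container.
```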

The concept is neat, but the issue with root is a huge security concern. What
about instead providing a single image with proper permissions, and then
installing packages inside that container? You could even create different
instances of the container. This would also solve the disk-space issue, since
the base image would always be the same. And since you trust the image, you
could mount `/` into it to allow commands outside `pwd`.

------
taeric
This is a terrible idea in the short term. I want programs I run to run as me,
with the same access and permissions I have, and to transfer that access
between them.

Long run, I'm interested. Hope it heads somewhere.

------
lloydde
Reminds me somewhat of:

'Cmd.io, or “Command IO” as we pronounce it. Cmd.io will let you use and share
commands for the terminal, as a service, over SSH.'
[http://gliderlabs.com/devlog/2016/announcing-cmd-io/](http://gliderlabs.com/devlog/2016/announcing-cmd-io/)

------
gigatexal
Homebrew, while a huge leap forward, is not a proper package manager. I think
this Docker approach is the right one, and I'll be checking it out for
sure.

------
jaequery
I like this; it seems we are getting closer and closer to the future of cross-
platform apps.

~~~
010a
I wouldn't go that far. Docker very specifically relies on Linux
containerization technology, and this will likely never change. Docker for
Windows or OSX only work because they have a small Linux VM running 24/7 on
your machine, and all Docker commands are piped through to it.

~~~
WinContainer911
What you are saying is a commonly held misconception (although it used to be
true). Microsoft and Docker have working technology available right now that
allows you to use "native Windows" Docker containers:
[https://blog.docker.com/2016/09/build-your-first-docker-windows-server-container/](https://blog.docker.com/2016/09/build-your-first-docker-windows-server-container/)
It is only a matter of time before this is available in
areas other than just servers.

------
diimdeep
too much yak shaving

