Deck-build – A powerful and tiny bash framework to build custom Docker images (github.com)
61 points by d3ck 36 days ago | 34 comments



I took over a Dockerfile that called out to ansible for all the heavy lifting, presumably because that's what the authors knew and liked.

Translating it to individual RUN lines removed hundreds of lines of code and vastly sped up the build process. In addition, the build should be more familiar to future Dockerfile devs, who can see the entire build in one file and don't need to learn an extra tool.

Regarding this project: how well does it use Docker's build cache? And is it really better than an idiomatic Dockerfile once you account for future developers?


Build cache is supported (https://bit.ly/2S8Z3oi); it's the user's job to structure their code wisely (same as with Docker RUN commands). And no, it's not "better", it's a different approach. Like most other community solutions, it will not replace the standard Dockerfile process :)
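
The principle is the same as in a plain Dockerfile (paths here are illustrative): the cache reuses a layer only while the instruction and its inputs are unchanged, so slow-changing steps should come first:

    # requirements change rarely: this layer stays cached across code edits
    COPY requirements.txt /app/
    RUN pip install -r /app/requirements.txt

    # application code changes often, so it goes last
    COPY . /app/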


Thanks for the link


FWIW, there was an "official" project for Ansible to generate container images, but it's constantly in limbo.

While I do think this is reasonable, I have an intuition that you might be trying to do too much in one image if you feel like you need Ansible. There are good reasons to sometimes have fat containers. Do you mind sharing what all you're installing with Ansible?


It's a Django backend: the Dockerfile uses apt-get and pip. Pretty simple.

Using ansible from a Dockerfile in the first place was a code smell, IMO.


This combines two of my least-favourite things.

If anything, Dockerfiles are already too powerful[^]. It's trivial to zoom into an exciting world of asset opacity and rotting bits.

Images are very useful, though. Insofar as they are liberated from the sins of Dockerfiles, there's a bright future ahead.

I don't see bash as such a liberator. Swapping one mess for a different mess still leaves me in a state of resentful squalor.

Disclosure: I work on Cloud Native Buildpacks (https://buildpacks.io), so I have a horse in this race. We're getting near to our first beta release.

[^] The original, pre-dang-editing title was something like "Dockerfiles not powerful enough?"


>> If anything, Dockerfiles are already too powerful

How? Why doesn't Docker allow me to arbitrarily copy files, create layers, and execute shell commands from the command line, as opposed to a Dockerfile?


Can't you, via `docker exec`? Or do I misunderstand the ask?


You can't `docker exec` unless the container is running, so you first need to `docker run` something (like an infinite sleep loop: `bash -c 'while true; do sleep 60; done'`).

You can create images via just `docker`, but it's a wee bit hard to do, IMO. IIRC, `docker commit` (the command that captures the resulting image) will by default also change the entrypoint to whatever the `run` was, which isn't helpful.
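
Roughly this dance, for the curious (container and image names are made up):

    # keep a throwaway container alive so we can exec into it
    docker run -d --name scratchpad ubuntu:18.04 \
        bash -c 'while true; do sleep 60; done'

    # arbitrary build steps and file copies
    docker exec scratchpad apt-get update
    docker exec scratchpad apt-get install -y python3
    docker cp ./app.py scratchpad:/opt/app.py

    # snapshot the filesystem; without --change, the committed image
    # would keep the sleep loop as its command
    docker commit --change 'CMD ["python3", "/opt/app.py"]' scratchpad myimage:latest
    docker rm -f scratchpad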


>> You can create images via just `docker`, but it's a wee bit hard to do, IMO.

Also, if you do that, then you can't edit the ENTRYPOINT without adding another layer, because Docker can't edit the respective JSON in the .tar file directly.

That is, the documented functionality to do so exists in the "POST /commit" API, but it never actually works; that parameter is simply ignored.


You can already run whatever commands you want when creating a Docker image, including your own shell scripts. It isn't clear to me why one would want to use this.


You are right, deck-build doesn't have any magic regarding the build process (this is part of the concept). But:

1.) It bundles a lot of useful functions (https://bit.ly/2N9mAEu) in the "kit". Have a look at this gist: https://bit.ly/2Nby0HZ - four lines to create a container with two users (foo and root), each with their own Python packages installed in their home directories (PIP_USER=1).

2.) It's very easy to expand the kit, and the plan/artefact concept helps to structure and reuse code (https://bit.ly/2BEIcnH) stored on your local disk or in repositories.


I think you may have inadvertently proven my point. This kind of abstraction is completely unnecessary. You could have added users and installed requirements in a similarly small number of Dockerfile commands.
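
Something like this, give or take (base image and package choices invented):

    FROM python:3.7-slim

    # PIP_USER=1 makes pip install into the current user's ~/.local
    ENV PIP_USER=1

    RUN useradd -m foo
    USER foo
    RUN pip install requests

    USER root
    RUN pip install flask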


Why would I use this tool for 1) instead of Make, or just throwing `RUN useradd -m foo` into my Dockerfile?


Yes, you are both right, of course you can RUN "useradd ...". But we have often found that we need to run more complex processes during Docker builds (with if...else conditions etc.), resulting in complex Dockerfile RUNs. Bundling and reusing this code in plans/artefacts is very useful. In the end we use deck-build to create our images; of course it's not intended as a substitute for the great (Docker Hub) image concept.


>> But we have often found that we need to run more complex processes during Docker builds (with if...else conditions etc.), resulting in complex Dockerfile RUNs

That's fair, I've come across those types of scenarios as well, but trying to solve them during docker build seems like you're giving yourself, and inheritors of your Docker image, a lot of unnecessary extra work. Shouldn't those processes be solved independently of the Docker daemon, and the results fed to Docker when it's ready for them?


I am not sure if I understand your comment correctly, but we all know that there is a very large range of image requirements. Some of them need to be solved during builds (e.g. sometimes you want to distinguish between dev and production setups), others will be processed during container setup.

You are right regarding inheritors. Creating a Docker Hub image has different requirements than creating images for your private environment.


I guess I'm not entirely understanding how the scenarios you bring up to me and others aren't sufficiently solved by the current Dockerfile implementation, along with other command-line tools that I can use immediately or retrieve trivially. A 'framework' for bash + Docker feels a bit anti-pattern-y, or at least the benefits just aren't obvious for the way my team uses Docker presently.

That said, if what you've put together has solved problems for your team, maybe it will for others, so don't let my incredulity spoil anyone else's curiosity who reads these comments later.


In the meantime I have realized that the term "framework" is very misleading... ;) But you point out something crucial. IMO, the need for many Show-HN solutions is often not immediately obvious. But when processes and requirements change over time, sometimes these solutions are remembered and can be a starting point for one's own ideas. That's one of the reasons why I read HN :) Thank you for your constructive comments!


>> when processes and requirements change over time, sometimes these solutions are remembered and can be a starting point for one's own ideas

You have a good point, here. Thanks for the responses and helping me understand your work a bit better!


Looks to be the type of thing where people didn’t want to learn a new technology so someone wrote a “framework” to make it “easier” for them.


Abstractions over abstractions over abstractions. Shaking my damn head.

I’ve only been in the industry 7 years or so, but I’ve already begun to prefer sticking with standard operating procedures - like using Dockerfiles for Docker, or Swift to build iOS apps instead of React Native - because in my experience, if something isn’t built with the officially supported tooling and you don’t constantly maintain it and stay on top of it, it’s going to be a lot more trouble when you eventually have to change something. Especially if that change is years down the road, and when you finally need to make it, your build system turns out to rest on some esoteric, unsupported third-party tool that people lost interest in two years ago and stopped developing. Not saying that’s what this is... just a warning.

It’s cool to make new stuff and all that, especially when you have a real, repetitive problem that isn’t solved elsewhere, but sometimes it seems like people are making problems up, or creating solutions to problems that aren’t really problems in the first place. I’m tired of new stuff. I don’t want flashy. I want reliable, stable, usable things with excellent tooling around them.

I don’t want something that downloads 300 packages from npm and breaks when I try to update it later because some unknown dependency that wasn’t hard-pinned got updated in my app. Why things aren’t hard-pinned by default blows my mind. It seems like every time I have to work on a JavaScript application I run into problems like this, which is part of why I hate that juvenile ecosystem.

And the lack of typing information doesn’t help. At least with typed languages, if something gets upgraded that shouldn’t have been, the IDE or compiler will usually catch the error. And this is all coming from a Python guy (historically). Don’t even get me started on pipenv.

/rant


No, it's not "easier", and it doesn't replace thinking in images and using images in the final step. The user really needs to understand the Docker concept. As I wrote in the other answers: it's only (another) approach to avoid complex (RUN ... if ... else ...) Dockerfiles. Hey, this is HN: we share solutions. Some you like, some you don't, but the ones you don't like will sometimes give you new ideas :)


There are three aspects of Docker that aren't too bad in themselves (and have obvious or logical reasons), but that taken together lead to close coupling and code duplication:

- There is no composition of images, only inheritance

- Shell scripts can't define environment variables that survive beyond the script itself

- Docker doesn't have any way to take or template values from other Dockerfiles

That means that you might be able to separate RUN logic out into shell scripts (like the method deck-build proposes), so that you can compose multiple tools into the same image. Still, any ENV variables your tools might need (if only a simple $PATH) have to be copied manually into the composing Dockerfile and maintained when changes occur.
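
A concrete illustration of the second point (script and paths invented):

    COPY install-tool.sh /tmp/
    # any `export PATH=...` inside the script dies with this RUN layer...
    RUN /tmp/install-tool.sh
    # ...so the composing Dockerfile has to repeat the value by hand
    # and keep it in sync with the script
    ENV PATH="/opt/tool/bin:${PATH}"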


Thx for your constructive summary. Not sure what you mean by "tools might need", but deck-build only needs some well-defined ENVs (https://bit.ly/2TSTImv). Regarding shell scripts used in RUN commands: why not COPY a conf file that is sourced by every script?
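
I.e. something like this (file names invented):

    COPY build-env.sh install-tool.sh /tmp/
    RUN . /tmp/build-env.sh && /tmp/install-tool.sh
    # runtime ENVs still have to be declared once for the final image
    ENV PATH="/opt/tool/bin:${PATH}"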


If you find yourself using complicated custom methods with Dockerfiles, you should probably be building Linux packages, publishing them to a local repo, and installing them from the Dockerfile. You gain the benefits of idempotent, immutable, versioned, system-integrated, dependency-mapped, cryptographically verifiable, remotely distributed, cacheable packages, and you don't have to adopt any new software or systems.

Your organization can follow any workflow/lifecycle it wants to build and publish the packages. Once they are published, developers can just install and use them with no learning curve whatsoever.
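
For illustration, consuming such a package from a Dockerfile could look like this (repo URL and package name invented):

    # point apt at the internal repo, then install like any other package
    RUN echo 'deb [trusted=yes] https://repo.internal.example/apt stable main' \
            > /etc/apt/sources.list.d/internal.list \
        && apt-get update \
        && apt-get install -y mytool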


And to create those packages, I've found fpm [1] a joy to use.

[1] https://github.com/jordansissel/fpm
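
A taste of it, packaging a ./build directory into a .deb (name and version made up):

    fpm -s dir -t deb \
        -n mytool -v 1.2.3 \
        --prefix /opt/mytool \
        -C ./build .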


Got your point, and this is a nice solution. But another (not uncommon) approach is to understand the Docker container itself as the application.


This reminds me a bit of the container image builder "Kubler" (https://github.com/edannenberg/kubler) which is also configured with Bash scripts.

Just creating a cross-reference for the interested here :)


Thank you, I didn't know and will have a look :)


Check out buildah: it builds container images using real shell commands, with no big fat Docker daemon necessary. So instead of a COPY command, you can just use cp.
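
A rough sketch of the workflow (the mount step needs root or `buildah unshare`; names invented):

    ctr=$(buildah from ubuntu:18.04)
    buildah run "$ctr" -- apt-get update
    buildah run "$ctr" -- apt-get install -y python3

    # mount the rootfs and use plain cp instead of COPY
    mnt=$(buildah mount "$ctr")
    cp ./app.py "$mnt/opt/app.py"
    buildah umount "$ctr"

    buildah config --entrypoint '["python3", "/opt/app.py"]' "$ctr"
    buildah commit "$ctr" myimage:latest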


Why not just use buildah?


Came here to mention this. I loved rkt for the fact that I didn't need a special file syntax; instead, just a shell script that built the image I needed. Honestly, when it comes to deployments, it was much better for us to just get everything into one big layer and move on.


I will be sad if I ever need to spend significant amounts of time configuring docker, or any deployment system.



