Translating to individual RUN lines removed hundreds of lines of code and vastly sped up the build process. In addition, the build process should be more familiar to future Dockerfile devs, who can see the entire build in one file and don't need to learn an extra tool.
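To make the before/after concrete, here's a rough sketch of the kind of translation being described (the base image, packages, user, and paths are illustrative, not taken from the project, and an app/ directory is assumed in the build context):

    # Before: one opaque step that shells out to a config-management tool
    #   RUN pip install ansible && ansible-playbook -c local provision.yml
    #
    # After: the same work as plain, cache-friendly instructions
    FROM ubuntu:20.04
    RUN apt-get update \
        && apt-get install -y --no-install-recommends curl ca-certificates \
        && rm -rf /var/lib/apt/lists/*
    RUN useradd --create-home app
    COPY app/ /opt/app/
    RUN chown -R app:app /opt/app
    USER app

Each instruction gets its own cached layer, so rebuilds skip everything above the first step that changed.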
Wrt this project, how well does it use Docker's build cache? And is it really better than an idiomatic Dockerfile when you account for future developers?
While I do think this is reasonable, I have an intuition that you might be trying to do too much in one image if you feel like you need Ansible. There are good reasons to sometimes have fat containers. Do you mind sharing what all you're installing with Ansible?
Using ansible from a Dockerfile in the first place was a code smell, IMO.
If anything, Dockerfiles are already too powerful[^]. It's trivial to zoom into an exciting world of asset opacity and rotting bits.
Images are very useful, though. Insofar as they are liberated from the sins of Dockerfiles, there's a bright future ahead.
I don't see bash as such a liberator. Swapping one mess for a different mess still leaves me in a state of resentful squalor.
Disclosure: I work on Cloud Native Buildpacks (https://buildpacks.io), so I have a horse in this race. We're getting near to our first beta release.
[^] The original, pre-dang-editing title was something like "Dockerfiles not powerful enough?"
How? Why doesn't docker let me arbitrarily copy files, create layers, and execute shell commands from the command line, as opposed to a Dockerfile?
You can create images with just `docker`, but it's a wee bit hard to do, IMO. IIRC, `docker commit` (the command that captures the resulting image) will by default also change the entrypoint to whatever the `run` command was, which isn't helpful.
Also, if you do that, you can't edit the ENTRYPOINT without adding another layer, because docker can't edit the relevant JSON in the .tar file directly.
As in: it has the documented functionality to do so in the "POST /commit" API, but that never actually works and the parameter is simply ignored.
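For anyone who hasn't tried the bare-CLI route, it looks roughly like this (the image, package, and file names here are just placeholders):

    # run the build steps in a throwaway container...
    docker run --name scratch-build ubuntu:20.04 \
        sh -c 'apt-get update && apt-get install -y curl'
    # ...copy files in (docker cp works on a stopped container)...
    docker cp ./myapp scratch-build:/usr/local/bin/myapp
    # ...then snapshot the container as an image and clean up
    docker commit scratch-build my-handmade-image:latest
    docker rm scratch-build

As the comments above note, the committed image's config (entrypoint/cmd) is exactly where this gets fiddly.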
1.) It bundles a lot of useful functions (https://bit.ly/2N9mAEu) in the "kit". Have a look at this gist: https://bit.ly/2Nby0HZ - four lines to create a container with two users (foo and root), each with their own Python packages installed in their home directories (PIP_USER=1).
2.) It's very easy to extend the kit, and the plan/artefact concept helps structure and reuse code (https://bit.ly/2BEIcnH) stored on your local disk or in repositories.
That's fair; I've come across those kinds of scenarios as well, but trying to solve them during docker build seems like you're giving yourself, and the inheritors of your Docker image, a lot of unnecessary extra work. Shouldn't those processes be solved independently of the Docker daemon, and the results fed to Docker when it's ready for them?
You are right regarding inheritors. Creating a Docker Hub image has different requirements than creating images for your private environment.
That said, if what you've put together has solved problems for your team, maybe it will for others too, so don't let my incredulity spoil the curiosity of anyone else who reads these comments later.
You have a good point, here. Thanks for the responses and helping me understand your work a bit better!
I've only been in the industry 7 years or so, but I've already begun to prefer sticking with standard operating procedures - like using Dockerfiles for Docker, or using Swift to build iOS apps instead of React Native - because in my experience, if you don't constantly maintain and stay on top of something that isn't built with the officially supported tooling, it's going to be a lot more trouble when you eventually have to change something. Especially if that change is years down the road and, when you finally need to make it, your build system turns out to be built on some esoteric, unsupported third-party tool that people lost interest in two years ago and stopped developing. Not saying that's what this is... just a warning.
- There is no composition of images, only inheritance
- Shell scripts can't define environment variables that survive beyond the script itself
- Docker doesn't have any way to take or template values from other Dockerfiles
That means you might be able to separate RUN logic out into a shell script (like the method deck-build proposes) so that you can compose multiple tools into the same image. Still, any ENV variables your tools need (even just a simple $PATH) have to be manually copied into the composing Dockerfile and maintained when changes occur - see the sketch below.
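A minimal sketch of what that maintenance burden looks like (the script name and install prefix are hypothetical):

    # tool-a/install.sh does the actual installation, but any
    # `export PATH=/opt/tool-a/bin:$PATH` inside it dies with the script
    FROM ubuntu:20.04
    COPY tool-a/install.sh /tmp/install-tool-a.sh
    RUN sh /tmp/install-tool-a.sh && rm /tmp/install-tool-a.sh
    # ...so the composing Dockerfile has to restate the environment by hand
    # and keep it in sync whenever the script changes
    ENV PATH="/opt/tool-a/bin:${PATH}"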
Your organization can follow whatever workflow/lifecycle it wants for building and publishing the packages. Once they are published, developers can just install and use them, with no learning curve whatsoever.
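For example, if the packages land in an internal apt repository, the consuming Dockerfile is just stock commands (the repository URL and package names are hypothetical; signing-key setup is omitted for brevity):

    FROM ubuntu:20.04
    RUN echo "deb [trusted=yes] http://apt.example.internal stable main" \
            > /etc/apt/sources.list.d/internal.list \
        && apt-get update \
        && apt-get install -y yourorg-runtime yourorg-cli \
        && rm -rf /var/lib/apt/lists/*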
Just creating a cross-reference for the interested here :)