I care about tracking down issues before they reach production, which means I want an environment that mirrors production as closely as possible, and that means heavyweight virtualization, not lightweight.
Our build scripts get exercised a dozen times a day, so we cannot tolerate half-assed, broken ones.
Our deployment pipeline (after verifying the docker image is good enough to be deployed) packs it into a machine image along with several other containers. The machine image is then deployed to staging, and if it passes staging, it goes to production. If an issue hits production exclusively (it has happened only a handful of times), recovery is simply a matter of rolling back to the previous machine image.
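For what it's worth, the flow looks roughly like this. This is a shell-style sketch, not our actual scripts: `run-image-tests.sh`, `deploy-machine-image`, and the Packer template name are all hypothetical stand-ins for whatever image-baking and deploy tooling you use.

```shell
# Build and verify the application container first.
docker build -t app:"$GIT_SHA" .
./run-image-tests.sh app:"$GIT_SHA"              # hypothetical: gate before baking

# Bake the container (plus the sidecar containers) into a machine image,
# e.g. with Packer; the template name is illustrative.
packer build -var "app_tag=$GIT_SHA" machine-image.pkr.hcl

# Promote the same immutable image through environments.
deploy-machine-image --env staging "$NEW_IMAGE_ID"     # hypothetical deploy tool
run-staging-checks && deploy-machine-image --env production "$NEW_IMAGE_ID"

# Rollback is just redeploying the previous image id, no rebuild involved:
#   deploy-machine-image --env production "$PREV_IMAGE_ID"
```

The point of the shape is that nothing is rebuilt during rollback; production only ever moves between already-baked, already-tested machine images.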
Well, it gets you a step closer to accurately mimicking production.
>It also doesn't "cover up" broken build scripts.
That seems to be exactly what the 'build once' rule is for. If your build process isn't risky, why the need to prohibit running it twice?
Just because I can install the operating system doesn't mean I want to do this on every deploy of an application.