From my perspective, containers just make explicit what's always been true: the library dependencies linked into the "release" of an "app" need to be stewarded by the app developers, rather than the OS distributor. System components are owned by the system maintainer, but apps are owned by their developers, and the SLAs are separate.
If Nginx, for example, has a security flaw, it's properly Nginx's job to make a new container. That flaw might be because of Nginx's dependency on OpenSSL, but it might also be because of a vulnerability in Nginx itself. Nginx (and every other "app" developer) needs the process in place to get new releases out in the latter case; if they have that process, there's no reason to not apply it also to the former case.
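Concretely (using Docker as the example, with an illustrative image name and tag), the same rebuild process covers both cases: passing --pull re-fetches the base image, so a patched base-layer OpenSSL comes along for free, while a fix in Nginx itself just changes what the formula builds.

    # Hedged sketch; "example/nginx" and its tag are placeholders, not real releases.
    # --pull forces re-fetching the base image, so a base-image OpenSSL fix is
    # picked up even when the Dockerfile itself hasn't changed.
    docker build --pull -t example/nginx:1.25.4-r2 .
    docker push example/nginx:1.25.4-r2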
Distro maintainers have been picking up the slack of creating "security releases" for app software for decades now, but they only do it because app developers have been unwilling to take on the responsibility of maintaining packages in several different package ecosystems. App developers are not nearly as timid when it comes to container-image releases, since it's just the one QA process rather than five. So, properly, the responsibility—and the SLA—is reverting to the app developer.
You can't expect, now, that paying Red Hat or Canonical for a support contract will mean that Red Hat/Canonical will be required to rush you patches for your app software. It never worked this way anywhere else; Microsoft and Apple aren't responsible for patching the vulnerabilities in the apps in their ecosystems. (Even the ones you buy through their stores!) The Linux world is just coming back to parity with that.
Now, on the other hand, system components—things the OS itself depends on—still need to be patched by the OS manufacturer. Microsoft still delivers IE patches; Apple still delivers WebKit patches, because those are fundamentally system components, used by the OS to do things like drawing notification dialogs.
Those components happen to come with apps, but the system component and the app are decoupled; there's no reason the version of WebKit the OS uses, and the version of WebKit Safari uses, need to be the same. And they're not: you can download a fresh WebKit from webkit.org and Safari will pick it up and use it. Apple, thus, only takes SLA-responsibility for the system-component WebKit—not the "app WebKit." The same is (soon to be) true for Linux distro-makers.
---
The near-term effect of all this, of course, isn't that app developers universally pick up the maintenance-contract stone and start carrying it. Most have spent too long in the idyllic world of "downstream support" to realize, right away, what this shift spells for them. Instead, in the near term, it'll be your responsibility, as a system administrator, to be a release-manager for these container-images. This was always true to some degree, because every stack tends to depend on some non-OS-provided components like Nginx-current or the non-system Ruby. But now it'll be pretty much everything.
An interesting incentive this creates is to move your release-management more toward monolithic synchronized releases. If you're managing the releases of all the components in your stack, it's much easier to think of yourself as releasing a whole "system" of co-verified components, rather than independently versioning each one. When you upgrade one component, you bump the version of the system as a whole. This sort of release-management is likely familiar to anyone who has managed an Erlang system. :)
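As a sketch of what that looks like in practice (every image, tag, and label name below is made up), here's a compose file where each component is pinned and the stack as a whole carries a single version:

    # docker-compose.yml -- illustrative "monolithic release" manifest.
    # Bumping any component's tag means cutting a new stack-level version.
    services:
      web:
        image: nginx:1.25.4                      # pinned component
        labels:
          com.example.stack-version: "2024.03.1"
      app:
        image: example/myapp:4.2.0               # hypothetical app image
        labels:
          com.example.stack-version: "2024.03.1"

Upgrading Nginx then means re-verifying the set together and cutting "2024.03.2", rather than tracking each component's version on its own.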
But don't you think that something like Nix gives us the best of both worlds?
I can have containers, but these containers are still manageable, not black boxes.
Furthermore, since packages come as declarative recipes, one can try to reproduce binaries (see the guix challenge command). Otherwise you are at the complete mercy of the packagers.
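For example (package chosen arbitrarily), guix challenge compares your locally built store item, if you have one, against what the substitute servers offer:

    # Mismatched hashes flag a non-reproducible (or tampered-with) build.
    guix challenge openssl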
For very large deployments, Docker-like containers are fine. But for desktop applications I don't think they're the way to go.
The thing you don't get with Nix is the ability to package software that was developed for some silly legacy distro. Containers embed the required bits of whatever OS the app was built for; Nix requires that the app be written for Nix (or written to be autotools-style flexible.) Pragmatically, in the case where the app developer isn't the app packager, containerization wins—I can just slap together a container-image formula that pulls in, say, the Ubuntu-PPA-only libs the software depends on, which might not be available in Nix's ecosystem.
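A hedged sketch of what such a formula might look like; the PPA, library, and app names below are placeholders, not a real project:

    # Dockerfile -- illustrative only.
    FROM ubuntu:22.04
    RUN apt-get update && \
        apt-get install -y software-properties-common && \
        add-apt-repository -y ppa:someteam/somelib && \
        apt-get update && \
        apt-get install -y somelib legacy-app
    CMD ["legacy-app"]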
If you're writing software from scratch, though, by all means, target Nix as your package-dep ecosystem, and make your own software into a Nix package. Nix is a great source for creating "community verified-build" container images, for any "legacy" container ecosystem. Where a Nix package exists, it's exceedingly simple to write a container-image formula sourcing it. Thus, having a Nix package makes the software much more accessible to many platforms.
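For instance, nixpkgs' dockerTools can wrap an existing Nix package directly into an image (GNU hello stands in here for "the software"):

    # image.nix -- minimal sketch: turn a Nix package into a Docker image.
    { pkgs ? import <nixpkgs> {} }:
    pkgs.dockerTools.buildImage {
      name = "hello";
      tag = "latest";
      config.Cmd = [ "${pkgs.hello}/bin/hello" ];
    }

Then nix-build image.nix && docker load < result gives you a loadable image whose contents come from the package's closure.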
It would also, as an aside, make perfect sense to enhance systemd with the ability to run Nix "units" that are just references to Nix packages that get resolved and downloaded+built at launch-time. (systemd would still be better off turning those Nix units into container-images, though—systemd relies on containers to provide runtime sandboxing, ephemeral container filesystems, and cross-image deduplication of base-files.)
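No such unit type exists today, but as a rough approximation of the idea (the package, and the nix-shell path of a multi-user Nix install, are assumptions), a plain service unit can already defer to Nix at start time:

    # hello-from-nix.service -- illustrative only; resolves and builds the
    # package at launch, roughly approximating the hypothetical "Nix unit".
    # Adjust the nix-shell path for your installation.
    [Unit]
    Description=Run a Nix-provided program (sketch)

    [Service]
    ExecStart=/nix/var/nix/profiles/default/bin/nix-shell -p hello --run hello
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target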
Smaller teams aren't usually so concerned with where their SLAs come from, as they are with just having software that works—or rather, software releases that are incentivized by some force or another to be both stable and fresh.
So, for small teams, I see a likely move more toward the model we see with "community-made verified-build Docker container images": a GitHub repo containing the container-image formula for the release, which many small teams depend on and submit PRs to when vulnerabilities occur.
While not ideal, this is far better than the Ubuntu PPA style of "some arbitrary person's build of the release." It doesn't give you anyone to blame or bill for downtime, but it does give you frequent releases and many-sets-of-eyes that hopefully make your downtime quite small.
It's a bit like the atomized "bazaar" equivalent to the "cathedral" of a distro package-management team, now that I think about it. Each verified-build formula repo is its own little distro with exactly one distributed package, where anyone with a stake in having good releases for the software can "join" that distro as one of its maintainers. :)