
Maybe this isn't obvious, but it's something I read a while back on this topic:

Running a container from Docker Hub is basically the same as curl piping into bash.




What? No.

Curl piping into bash can trivially steal all of your data at once.

Running a container from Docker Hub is much safer, provided you do not give it extra privileges via --privileged or by bind-mounting sensitive system files like the Docker control socket.

If your system is up to date and there are no active Docker 0-days, the worst "docker run --rm -it RANDOM-CONTAINER" can do is consume too many resources -- your local secrets would be safe.
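You can shrink that worst case further by clamping the container down explicitly. A rough sketch (untested, and RANDOM-CONTAINER is just a placeholder):

    # No network, no capabilities, read-only filesystem, hard resource caps
    docker run --rm -it \
      --network=none \
      --cap-drop=ALL \
      --security-opt=no-new-privileges \
      --read-only \
      --memory=512m --cpus=1 --pids-limit=256 \
      RANDOM-CONTAINER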


...unless said docker container is running an app server that has direct access to your database.


It is kind of disturbing that a huge number of people apparently ran these Docker containers and never noticed that they were using 100% CPU on all available cores, 24x7.
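For what it's worth, spotting that takes one command; a quick habit of checking per-container usage would have caught the miners immediately:

    # Live CPU/memory usage for every running container
    docker stats

    # One-shot snapshot, handy in scripts or cron
    docker stats --no-stream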


The fact that so many people are nitpicking the analogy instead of the argument is indicative of how true this is.


Even worse. A simple script can be stored, reviewed and then executed. Reviewing a whole image is practically impossible.
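The store-review-execute flow for a script is short enough to be routine (URL is a placeholder):

    # Fetch the installer without executing it
    curl -fsSL https://example.com/install.sh -o install.sh

    # Read it -- and note anything it fetches in turn
    less install.sh

    # Only then run it
    sh install.sh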


Reviewing images is relatively straightforward. For anyone using automated builds, you can just review the Dockerfile either on Docker Hub or GitHub.

For non-automated builds, just pull the image to a local machine and use something like Portainer to have a look around.
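Even without Portainer, plain docker commands get you most of the way (IMAGE being whatever you pulled):

    # How the image was built, layer by layer
    docker history --no-trunc IMAGE

    # Entrypoint, cmd, env vars, exposed ports, etc.
    docker inspect IMAGE

    # Dump the whole filesystem for offline review
    docker create --name review IMAGE
    docker export review -o rootfs.tar
    tar -tf rootfs.tar | less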


> For anyone using automated builds, you can just review the Dockerfile either on Docker Hub or GitHub.

And then review what it `FROM`s. And then review the core OS build that it relies on.

It's a lot of work. It is doable, but it is a lot of work.
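Concretely, each level of the hierarchy means repeating the same review; e.g. if the Dockerfile says FROM debian:bookworm-slim (just an example base), you pull that and start over:

    # Repeat the review for the base image...
    docker pull debian:bookworm-slim
    docker history --no-trunc debian:bookworm-slim
    # ...and then for whatever *its* base is, and so on down.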


Indeed, there's a hierarchy to follow up, so it can be painful -- but then it's no different from a shell script that pulls down more code as part of its run.

I just wanted to make the point that I don't think it's impossible :)


No, it isn't. It's like curl bashing in a chroot jail.

(Unless you explicitly expose ports or mount volumes or grant elevated kernel permissions.)

I can't think of a safer way of running someone else's code, can you?


qemu


Yep. Even full virtualization isn't truly sandboxed, but the sandbox is much tighter.

FreeBSD has jails and Solaris has zones, both of which were designed to be safe sandboxes for OS-level virtualization or "containerization" as it's called today. The consensus, as far as I can tell, is that these are pretty safe/strict, at least as far as "provide a safe environment to execute untrusted code" goes.

On Linux, resource control mechanisms like cgroups and namespaces have been co-opted to simulate secure sandboxes, but it's not the same as actually providing them.
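You can see how thin that layer is by assembling a "sandbox" from the raw primitives yourself -- this gives isolation, but no seccomp filter or cgroup limits (and it assumes unprivileged user namespaces are enabled, which some distros disable):

    # New user, PID, mount, and network namespaces around a shell
    unshare --user --map-root-user --pid --fork \
            --mount --mount-proc --net /bin/sh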


FWIW, AWS Fargate -- which uses Docker containers as the unit of virtualization -- is now HIPAA compliant.

I can't speak with authority on Docker security, but that's a data point, from the largest cloud provider in the world.


Sure, and there is nothing wrong with either one in most cases. Salesmen, bloggers, security people, and others like to disagree, but they do it out of bias, not because they care whether you get things done.

Edit: I'd like to be wrong about this. Maybe some brave downvoter could help out here?


Security people certainly "do it out of bias". Most are, rather understandably, biased against having systems they're tasked with managing get pwned from under them.


Piping curl to bash is equivalent to running a remote code execution exploit against yourself. Even if you implicitly trust the endpoint, do you trust that it will remain uncompromised forever? Also, it's especially silly because it's never the best or only way of accomplishing a given task, so it serves only to shoot yourself in the foot.
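If you must install from a URL, at least split the fetch from the execution and verify what you downloaded (assuming the publisher posts a checksum; URLs here are placeholders):

    # Download the script and its published checksum
    curl -fsSLO https://example.com/install.sh
    curl -fsSLO https://example.com/install.sh.sha256

    # Refuse to run anything that doesn't match
    sha256sum -c install.sh.sha256 && sh install.sh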



