
> to be doing everything and trust everything

Also, it's sort of weird how often people conflate these two things. There's this idea that home-rolling is naturally safer, and it's simply not true.

Everyone doing anything with software is relying on layers someone else built, and we should keep doing that. Layers I handle myself are layers that I know aren't malicious, but that doesn't mean they're secure. The risk of malice doesn't just trade off against convenience; it trades off against the risk of error. Using somebody else's programming language, compiler, crypto, and so on doesn't just save time; it avoids the inevitable disasters of having amateurs do those things.

We live in a world where top secret documents are regularly leaked by people accidentally making S3 buckets public. I'm not at all convinced that vulnerable containers are a bigger risk than what the same people would have put up without containers.
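As an aside, the S3 failure mode at least has a cheap guard rail now. A minimal sketch with boto3, assuming you own the bucket; put_public_access_block is the actual API call, the bucket name is a placeholder:

    # Sketch: refuse all public ACLs and bucket policies on an S3 bucket.
    # "my-example-bucket" is a placeholder; substitute your own bucket name.
    import boto3

    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket="my-example-bucket",
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )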




There's this idea that as long as everything is not rigorously proven secure, we might as well grab binaries off file-sharing sites and run them in production.

This argument tires me. Every time some smug developer asks me if I have personally vetted all of gcc, with the implicit understanding that if I haven't, we might as well run some pseudonymous binaries off of Docker Hub, I extend the same offer to them: get a piece of malware inside gcc and I will gladly donate a month's pay to a charity of their choice.

Sometimes I have to follow through on the argument by asking whether they will do the same if I get malware onto Docker Hub (or npm, or whatever), but the discussion is mostly over by then. Suffice it to say, so far nobody has taken me up on it.

The point is that there's a world of difference between some random guy on GitHub and institutions such as Red Hat, Debian, or the Linux kernel itself. Popular packages with well-functioning maintainers on Debian will be absolutely fine, but you probably shouldn't run some really obscure package just because some "helpful" guy on Stack Overflow pointed to it, and you certainly shouldn't base your production on some unheard-of distribution just because the new hire absolutely swears by it.
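Part of that difference is that the institutions publish signed checksums and the tooling checks them for you. If you do end up grabbing an artifact from the random-guy end of the spectrum, the least you can do is compare digests before running it; a hedged sketch, where both the path and the expected digest are placeholders:

    # Sketch: verify a downloaded artifact against the checksum the project
    # publishes (ideally obtained over a separate, trusted channel).
    # The path and the expected digest below are placeholders.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "<digest published by the project>"
    if sha256_of("downloaded-package.tar.gz") != expected:
        raise SystemExit("checksum mismatch: refusing to install")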


Right. All-or-nothing thinking is the bane of analysis, and philosophy in general.


The difference is that Docker has centered its momentum on the transclusion of untrusted/unverified images. It's true that executing random untrusted code has been a major problem since people got internet connections (although most HN denizens like to fancy themselves as too smart for that, so this story is undoubtedly uncomfortable for them), but when Docker makes it a core part of the value proposition, it's worth calling out.
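To be fair, the tooling does let you narrow this; it's just not the path of least resistance. A rough sketch of pinning an image by its immutable digest instead of a mutable tag like "latest"; the helper function and the digest are placeholders, only the image@digest pull syntax is the real feature:

    # Sketch: pull an image by immutable digest rather than a mutable tag,
    # so what you run is exactly the content you reviewed. The digest is a
    # placeholder you would take from a source you already trust.
    import subprocess

    def pull_pinned(image: str, digest: str) -> None:
        subprocess.run(["docker", "pull", f"{image}@{digest}"], check=True)

    # Example (placeholder digest):
    # pull_pinned("debian", "sha256:<digest verified out of band>")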


Very true, but doesn't that make this basically a cost-benefit calculation, with risk-of-malicious-code vs. risk-of-reinventing-the-wheel (badly)? I assume the critics would say that container tooling makes it easier for reckless amateurs to put things up when they otherwise might not have managed to deploy at all...


> basically a cost-benefit calculation

Absolutely. There are some famously settled issues - don't write your own crypto (you'll screw it up); do write your own user tracking (third parties will inevitably farm the data) - but generally there's a decision to be made. And it's not the same answer for everybody; there's a reason Google and Facebook build in-house authentication services, which everyone else then borrows.
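To make the crypto point concrete, "don't write your own" in practice means reaching for a vetted library's high-level recipe instead of composing primitives yourself. A minimal sketch using pyca/cryptography's Fernet (the payload is obviously a placeholder):

    # Sketch: authenticated symmetric encryption via an established library
    # (pyca/cryptography's Fernet recipe) instead of home-rolled AES.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this in a proper secret store
    f = Fernet(key)
    token = f.encrypt(b"some secret payload")
    assert f.decrypt(token) == b"some secret payload"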

I've seen the "containers let clueless people go live" claim before, but I'm not really convinced. Containerization offers most of its returns when we talk about scaling, rapid deployment, and multiple applications. If you just want to put up an insecure REST service with no authentication, it seems at least as easy to coax some .NET or Java servlet horror online without any containerization at all.

The examples in the article of containerized learning environments are a bit of a different case, granted. A space specifically designed to let strangers spin up new instances and run untrusted code would usually take a fair bit of thought, but containers have made it much easier to do without any foresight.
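If you do build one of those spaces, the knobs for running a stranger's code less dangerously all exist; they just aren't defaults. A rough sketch, where the image name and the specific limits are placeholders and the flags are standard docker run options:

    # Sketch: run an untrusted workload with no network, all capabilities
    # dropped, a read-only root filesystem, and memory/pid limits.
    import subprocess

    def run_untrusted(image: str, command: list) -> None:
        subprocess.run(
            [
                "docker", "run", "--rm",
                "--network", "none",            # no network access
                "--cap-drop", "ALL",            # drop all Linux capabilities
                "--read-only",                  # immutable root filesystem
                "--memory", "256m",
                "--pids-limit", "64",
                "--security-opt", "no-new-privileges",
                image, *command,
            ],
            check=True,
        )

    # run_untrusted("python:3-alpine", ["python", "-c", "print('hello')"])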


I don't think either offers much assurance unless there's good test coverage, mocking, stubbing, fuzzing, property testing, and so on to ensure the code is solid. Trust, but verify (automatically).
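The "verify automatically" part is cheaper than it sounds. A tiny property-test sketch with Hypothesis; the encode/decode pair is a toy stand-in for whatever round-trip invariant your own code has:

    # Sketch: a property test with Hypothesis. Rather than hand-picking cases,
    # assert an invariant (here, an encode/decode round trip) over generated input.
    from hypothesis import given, strategies as st

    def encode(items):                 # toy functions, purely illustrative
        return ",".join(str(i) for i in items)

    def decode(blob):
        return [int(x) for x in blob.split(",")] if blob else []

    @given(st.lists(st.integers()))
    def test_round_trip(items):
        assert decode(encode(items)) == items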



