
Backdoors aren't like most other bugs. Buffer overflows happen because someone mistypes or forgets a length check, etc.

Backdoors are unusual: they happen because someone writes code of the form addAccount("s3kr3t", "s3kr3t"), and that kind of code has to be written deliberately. You can typo and accidentally omit a bounds check, but you can't typo and accidentally end up with a valid SSH key pair and code that installs it.
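To make the contrast concrete, here is a rough sketch (every name below is made up for illustration, not taken from any real codebase): the first function is the kind of bug you get by accident, the second can only exist because someone typed it on purpose.

    #include <string.h>

    /* Accidental bug: the author forgot a length check, so a long src
       overflows the 16-byte buffer. */
    void copy_name(char dst[16], const char *src)
    {
        strcpy(dst, src);
    }

    /* Stand-in for whatever real account-creation API might exist. */
    static void add_account(const char *user, const char *pass)
    {
        (void)user; (void)pass;
    }

    /* Deliberate backdoor: nobody ends up with a working credential pair
       and the call that installs it by mistyping something. */
    void setup_accounts(void)
    {
        add_account("s3kr3t", "s3kr3t");
    }

    int main(void)
    {
        char name[16];
        copy_name(name, "short");   /* fine here; a longer string would overflow */
        setup_accounts();
        return 0;
    }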

It's possible to ship that SSH key pair and the code to customers as a bug, e.g. if someone writes that code on purpose, intending to add and deploy s3kr3t/s3kr3t but not intending that code to ever reach the branch that's deployed to regular customers, and then someone else mismerges. In that case serving it to customer X is due to a bug; it should only have gone to customer Y or test environment Z. What I'm saying is that shipping those backdoors at all must have been intentional.

(Personally I think shipping backdoors to test environments is fine, including test environments at customers. Risky, though.)




> Backdoors aren't like most other bugs. Buffer overflows happen because someone mistypes or forgets a length check, etc.

> Backdoors are unusual: they happen because someone writes code of the form addAccount("s3kr3t", "s3kr3t"), and that kind of code has to be written deliberately. You can typo and accidentally omit a bounds check, but you can't typo and accidentally end up with a valid SSH key pair and code that installs it.

Not sure if you're intentionally trolling, but a backdoor is simply some code that bypasses security and that a particular person knows about. It does not have to be obvious. The ones with plausible deniability are the better ones, since deniability is considered a feature: that way the company can say "oops, we made a mistake".

To say that a backdoor must be obvious is absolute nonsense, particularly for closed-source binaries, where disassembly or simple tools like https://en.wikipedia.org/wiki/Strings_(Unix) would reveal the presence of such a backdoor.
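As a toy illustration of that point (the password literal below is invented), an "obvious" backdoor is just a hardcoded credential, and the literal ends up sitting in the binary for anyone to find:

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        /* Deliberately hardcoded credential - the "obvious" kind of backdoor. */
        if (argc > 1 && strcmp(argv[1], "s3kr3t_backdoor_pw") == 0) {
            puts("access granted");
            return 0;
        }
        puts("access denied");
        return 1;
    }

Running strings over the compiled binary prints s3kr3t_backdoor_pw alongside every other printable literal, which is exactly why a backdoor built for deniability tends to look like an innocuous logic or bounds mistake rather than a magic password.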

> (Personally I think shipping backdoors to test environments is fine, including test environments at customers. Risky, though.)

It depends. Unless that "test build" specifically has an option that disables all "test-related backdoors", the answer is no. You cannot risk having something slip through to a production build.
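For what that option can look like, here is a minimal sketch (ENABLE_TEST_BACKDOOR and the testlab credentials are assumptions for illustration): the test-only credential is compiled in only when the build explicitly defines the flag, so a production build made without it simply does not contain the code.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for the real account check. */
    static bool check_real_accounts(const char *user, const char *pass)
    {
        (void)user; (void)pass;
        return false;
    }

    static bool check_login(const char *user, const char *pass)
    {
    #ifdef ENABLE_TEST_BACKDOOR
        /* Only present in builds made with -DENABLE_TEST_BACKDOOR. */
        if (strcmp(user, "testlab") == 0 && strcmp(pass, "testlab") == 0)
            return true;
    #endif
        return check_real_accounts(user, pass);
    }

    int main(void)
    {
        printf("testlab login: %s\n",
               check_login("testlab", "testlab") ? "accepted" : "rejected");
        return 0;
    }

The inverse (a flag that disables the backdoor) works too, but "not present unless explicitly enabled" is the safer default, since a forgotten flag then yields a clean production build.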

As the previous poster said:

> I've worked for a company that built OS images for distribution to customers. Putting my SSH key in development image builds would have been convenient, but there was too much of a risk of exactly this problem; instead we just made it easy enough to download an SSH key on a development build (and start up an sshd) once you've booted it and have physical access to a terminal.

Another very common solution is a template where the keys are generated or imported at build time by whatever build system is being used.

That way, if something unintended happens or the step is "forgotten about", the build simply won't have a key at all and therefore will not work, rather than shipping a set of keys that is the same across all production builds.
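A rough sketch of that approach (the macro name, key text, and paths are assumptions): the public key only exists in the image if the build system passes it in, e.g. cc -DBUILD_SSH_PUBKEY='"ssh-ed25519 AAAA... builder"' install_key.c, so a build that skips the step installs nothing at all.

    #include <stdio.h>

    /* Append the build-time-supplied public key to authorized_keys.
       Returns 0 on success, -1 if no key was configured or the write failed. */
    static int install_authorized_key(const char *path)
    {
    #ifdef BUILD_SSH_PUBKEY
        FILE *f = fopen(path, "a");
        if (!f)
            return -1;
        fprintf(f, "%s\n", BUILD_SSH_PUBKEY);
        fclose(f);
        return 0;
    #else
        /* No key supplied by the build system: fail safe, install nothing. */
        (void)path;
        return -1;
    #endif
    }

    int main(void)
    {
        if (install_authorized_key("/root/.ssh/authorized_keys") != 0)
            fprintf(stderr, "no build-time key configured; nothing installed\n");
        return 0;
    }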



