
But auditing the source is only useful if you can do reproducible builds to be sure you run the audited source.

This is rarely the case, unfortunately, and for most of the prebuilt open source software you use, you are relying on trust rather than on an audit.
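The check being described can be sketched in a few lines. This is a hypothetical illustration, not a real build pipeline: the file names are stand-ins for the outputs of two independent builds of the same audited source, and a real reproducibility check would compare artifacts produced on separate machines.

```python
# Sketch of a reproducibility check: two independent builds of the same
# audited source should be bit-for-bit identical. Paths are hypothetical.
import hashlib

def digest(path: str) -> str:
    """SHA-256 of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate two builds that happen to produce identical output.
for name in ("build_a.bin", "build_b.bin"):
    with open(name, "wb") as f:
        f.write(b"\x7fELF...same bytes...")

reproducible = digest("build_a.bin") == digest("build_b.bin")
print("reproducible" if reproducible else "NOT reproducible")
```

If the digests differ, auditing the source tells you nothing about the binary you actually run; that is the whole point of the reproducible-builds argument.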

That's not true. You could always, you know, run the version you compiled yourself.

We’ve known for 30 years that’s not enough (depending on your risk characteristics).

Trusting trust is one of the seminal talks in software.

We've known that in theory your compiler et al. can be backdoored, but in practice we can feel a lot safer compiling our own software than using proprietary binaries.
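The attack being referenced is from Thompson's "Reflections on Trusting Trust." Here is a toy sketch of the core idea, not his actual code: the function names and string-matching are invented for illustration, with the "compiler" reduced to a string transform.

```python
# Toy illustration of the trusting-trust attack (heavily simplified,
# hypothetical names). The malicious compiler recognizes two targets:
# the login program (inserts a backdoor) and the compiler itself
# (re-inserts this very logic), so the attack survives rebuilds of the
# compiler from perfectly clean, audited source.

def clean_compile(source: str) -> str:
    return f"BINARY({source})"

def evil_compile(source: str) -> str:
    if "login" in source:
        # Target 1: backdoor the login binary.
        return clean_compile(source + " + BACKDOOR")
    if "compile" in source:
        # Target 2: propagate into any compiler built with this one.
        return "EVIL_" + clean_compile(source)
    return clean_compile(source)

print(evil_compile("def login(): ..."))    # backdoored login binary
print(evil_compile("def compile(): ..."))  # evil compiler reproduces itself
```

Even compiling pristine compiler source with the compromised compiler yields another compromised compiler, which is why source audits alone don't close this hole.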

I think that might be where we disagree. Without a competent security audit I think you are falling back to trust.

Open source vs. closed source is not where I, or the security professionals I know, put the most emphasis when deciding what to trust. I would enthusiastically trust something closed from Google over a rando open source project.

But back to the original point, even the most basic audit steps are the same on an open source project vs closed one. Observe what the binary does & inspect it for standard patterns.
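A first-pass version of "inspect it for standard patterns" can be sketched as follows. This is a minimal, hypothetical example: it reimplements the core of the classic `strings` tool on an in-memory byte buffer and greps the results for patterns (embedded URLs, sensitive paths) that an auditor might flag; a real black-box audit would run this over the actual binary and go much deeper.

```python
# Minimal "first look" at a binary blob: extract printable ASCII runs
# (what the classic `strings` tool does) and flag suspicious patterns.
# The byte buffer and patterns below are illustrative stand-ins.
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Yield runs of printable ASCII at least min_len bytes long."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

def suspicious(candidates):
    """Keep strings that often merit a closer look in an audit."""
    pattern = re.compile(r"https?://|\.onion|/etc/passwd", re.I)
    return [s for s in candidates if pattern.search(s)]

data = b"\x00\x01garbage\x02https://example.com/beacon\x03more\x00"
found = suspicious(extract_strings(data))
print(found)  # ['https://example.com/beacon']
```

The same steps apply identically to an open source binary and a closed one, which is the point being made: the binary, not the repository, is what you actually run.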

Observation only goes so far. You won't find backdoors unless you start decompiling it.

I think having a trusted compiler is an important first step to trusting software, even if you have to analyze it in depth yourself.

2 actually:


But note you are now adding a lot of extra preconditions that are largely not available.

The counterargument is that reverse engineering and black-box audits are actually easier than getting the conditions right to trust a code audit. As a bonus, they work regardless of code availability.

So you can trust your disassembler and strace, but not your compiler? Your method is just vulnerable to another flavor of trusting-trust. What about the compiler that built your rev-eng and blackbox tools?

That is of course, not what I'm saying.

My original claim, which I stand behind, is that code auditability for security purposes is not a reason to prefer open source software. For all the reasons this thread points out, it is just as fraught as auditing closed source software. Further, a competent audit of the software would not look much different between open and closed source projects.

Absent a competent audit, there are lots of other factors that rank higher on my (and many more knowledgeable people's) lists of what matters for security and privacy than open vs. closed source: things such as documented and approved algorithms, the team involved, the amount of legal backing, the market incentives, etc.

That is not to say there aren't reasons to prefer non-Google APIs, or to prefer open source software on other grounds. Just that security auditability is a bad one.

I don't think that you've fully appreciated the rebuttals to your position. Consider this: you audit your build toolchain and thereafter trust it not to manipulate your binaries. With this axiom in place, is it not true that it's easier to audit open source software (assuming it's built on a trusted toolchain) than proprietary software?

I don’t think you’ve understood the original premise. Suggesting that closed source software isn’t auditable is laughable. No one who does software audits for a living supports that premise.

>Suggesting that closed source software isn’t auditable is laughable

I never said that. Come on, dude.

I think I see the confusion. The post they first replied to said that, quite specifically. You were not that poster, however.
