
“More eyeballs == Secure code”: But is it just a theory, or does it happen in practice? - rms_returns
It's one thing to say that "open source is secure because anyone can look at the source code"; it's quite another to expect lay users of Linux desktops to head over to the source repos of gnome, xfce, unity, etc. and scavenge the code like professional auditors.

It's the job of security auditors to ensure that this code is safe and doesn't contain any malware or adware; lay users don't have that kind of expertise. And yet I was astounded today when I came across a comment on reddit by a senior xfce dev who says that the gnome project doesn't perform any extensive code audits at all [1]. To quote the user Sidnioulz:

> > Has there ever been a code audit for any of the more popular Linux DEs?

> There might have been for KDE? GNOME does not audit anything as far as I'm concerned. They state they review Shell extensions but the involved process has been discussed on their mailing list and it's essentially a "does the code look nice" review rather than a security audit.

[1] https://www.reddit.com/r/xfce/comments/47eoji/does_xfce_have_a_security_advisory_team/d0d93dk

A senior xfce developer saying this is a matter of grave concern. What I take from this conversation is that some transparency is needed in the auditing process: a way to communicate to lay users that security audits are in fact taking place, and a mechanism for them to verify it. This must happen if we want to avoid the next fiasco like the Linux Mint one. Granted, the Mint fiasco doesn't relate directly to source audits of the gnome project, but the larger question here is the importance of security audits.
Had their forum software (phpBB) been audited properly, they would surely have avoided a huge loss of trust, PR and reputation.

Moral of the story: "Make security audits a part of the development process itself, not an afterthought."
======
dguido
To address the title: It does not happen in practice, and the idea that
"simply because code is open-source and viewable, it is more secure" needs to
die. It is an open-source fantasy, like Linux on the Desktop.

Code becomes secure when you train your developers, when specialists audit
your code, and when you consciously choose safe libraries and frameworks to
build upon. Tasks like these all cost time and money and organizations that
spend on security are ones with "skin in the game" who can afford to pay for
it.

------
caller9
More eyeballs may actually mean more secure code, but only if those eyeballs
know what secure code looks like and can be bothered to check out the source.

Security software really cannot be trusted unless it is open source and people
who know what they are doing look it over. Then you build it yourself on your
own system and checkpoint every change you merge in.
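The "checkpoint every change you merge in" idea can be sketched with nothing more than checksums: record a hash of the source you reviewed, and refuse to build if it has drifted since the audit. (A minimal sketch; the file names and messages here are made up for illustration, and a real setup would hash the whole tree or use signed git tags.)

```shell
set -e

# stand-in for a source tree you have just reviewed
mkdir -p demo-src
printf 'int main(void){return 0;}\n' > demo-src/main.c

# checkpoint: record the hash of exactly what was audited
sha256sum demo-src/main.c > reviewed.sha256

# ...later, before building, verify nothing changed since the audit;
# sha256sum -c exits nonzero if any file no longer matches
sha256sum -c reviewed.sha256 && echo "source matches reviewed checkpoint"
```

The point is simply that the trust established by a review is only as good as your ability to detect changes made after it.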

Almost nobody does this. So you end up with OpenSSL's Heartbleed and other
problems caused by plainly insecure coding combined with actual protocol
weaknesses.

It's hard to write secure software and algorithms. The guys writing GUI code
care more about performance than security. If your UI is running as root, it
is not secure.
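That last point can at least be checked at startup: a launcher can inspect its effective uid and refuse to bring up a UI with root privileges. A minimal POSIX sh sketch (the function name and messages are made up for illustration):

```shell
# guard: succeeds only when the given effective uid is not root's
check_not_root() {
  # the uid is passed in as an argument so the logic is easy to test
  [ "$1" -ne 0 ]
}

if check_not_root "$(id -u)"; then
  echo "ok: running unprivileged"
else
  echo "refusing to run as root" >&2
  # a real launcher would exit 1 here rather than continue
fi
```

This doesn't make the GUI code secure, of course; it only keeps a bug in it from being an instant root compromise.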

------
bjourne
> Moral of the story is: "Make security audits a part of the development
> process itself, not an afterthought."

Then maybe you should start auditing the code yourself instead of just telling
others what to do (with their free time)?

~~~
jack9
I don't think you understood the problem or the author's attempt at raising
awareness, in the context of a cliche. Never once did he tell others what to
do.

------
thwarted
"More eyeballs == Secure code" is a massive simplification to get a
tweet/soundbite, and a terrible one because it makes implicit so much about
what this actually means.

"More eyeballs" is meant to be in contrast to proprietary code, which is
assumed to have a limited number of eye balls looking at it.

As for "secure code", there is no such thing as 100% secure, just as there's
no such thing as 100% bug free.

It's all about mitigation and potential. There are various kinds of
mitigation, of various levels of strength, appropriateness, and utility.

Mitigation of security issues and bugs when it comes to proprietary, closed
code is accomplished by actively paying experienced programmers, quality
assurance, and security professionals to write, test, and review the code.
This is, however, expensive. Which partly explains why some proprietary,
closed code has a bad reputation (not the least of which is that, without
access to the code, you can't verify the (marketing) claims made by the
vendor).

For open source software, the code is _accessible_ and _available_ to be
reviewed by _able/experienced_ people. More eyeballs _can be_ on it. This
statement should not imply that more experienced eyeballs actually are on it
(though it often is used as such), nor should it imply that lay users are
expected to uncover and fix security issues/bugs. There's a lot of code on
github and sourceforge that no one has ever looked at other than the author,
but it's there, able to be looked at (and it _might be_ safe to assume that
code on github gets a modicum more attention on average than code on
sourceforge, ahem). "Open" doesn't imply actual review, in the same way that
"proprietary" doesn't imply that the code is actually written, tested, or
audited by (paid) professionals.

I think this statement is a bastardization of two other statements:

\- "System security should not depend on the secrecy of the implementation or
its components", aka, "security through obscurity" [0]

\- "given enough eyeballs, all bugs are shallow", aka Linus's Law [1]

"More eyeballs == secure code" (and Linus's Law) is a statement similar to the
Infinite Monkey Theorem[2], and has the same caveats, not the least of which
is requiring infinite time and monkeys (or in the case of LL, infinite
eyeballs). That doesn't necessarily make it less true, but it does make it
less pragmatic. So it's more theory than happens in practice, but it sounds
good, _seems_ intuitive, and has been used as a way to encourage/brow-beat
proprietary vendors to open source their code (I'm not sure that this has
actually worked for anyone, however, because people don't want to use code
that has a bad reputation, which actually discourages eyeballs from examining
it). Of course, then you get things like phpBB which is open source and
notoriously for bad security, which we could say we know because of so many
eyeballs using it (if not looking at it). Despite that reputation, it's
available and gets used anyway. If this reputation actively discouraged people
from using phpBB, then more eyeballs, in the form of users being exposed to
insecurity, might yield greater security by people actively avoiding the
known, traditionally problematic software in favor of something with a better
track record. It is unfortunate that in the case of phpBB, its ubiquity and
ease of use trumps that reputation.

Presumably you could combine the two: hire experienced developers, testers,
and security professionals to (continuously) audit open source code. This
would arguably be better than either paid audits of proprietary code or mere
openness of the source on its own.

There are no silver bullets. You can (attempt to) throw a massive number of
eyeballs at the problem, and/or a massive amount of money for the very best
eyeballs, but neither guarantees absolute 100% security. And any claims
otherwise should be suspect.

[0]
[https://en.wikipedia.org/wiki/Security_through_obscurity](https://en.wikipedia.org/wiki/Security_through_obscurity)

[1]
[https://en.wikipedia.org/wiki/Linus%27s_Law](https://en.wikipedia.org/wiki/Linus%27s_Law)

[2]
[https://en.wikipedia.org/wiki/Infinite_monkey_theorem](https://en.wikipedia.org/wiki/Infinite_monkey_theorem)

