

When is it ok not to open all source code? - treerock
https://gds.blog.gov.uk/2014/10/08/when-is-it-ok-not-to-open-all-source-code/

======
riskable
#1: Keeping certain configuration details secret (e.g. passwords, which
ports are open, versions of software, etc). Good idea. Best practice, even.

#2: Hide the code/software that performs security-enforcing functions. Um...
What? No. This is stupid. This is exactly the kind of thing you _want_ out in
the open so people can find flaws and let you know about them before "the bad
guys" get ahead of you. It's Wizard of Oz security: Pay no attention to the
software behind the curtain.

#3: Who cares? Why do they feel the need to justify delaying the release of
code? Of course you'll want time to prepare it "for public consumption." As
long as it gets opened up in a timely manner.

~~~
dalke
#3 appears to be a catch-all for "we haven't decided what our policy is yet,
publishing the source code early, while we're still considering the options,
forces us to make a premature public statement."

For example, suppose the government was going to establish a new video codec
for its own video releases (ie, suppose that Dirac doesn't exist). There are
several options: create a new and unencumbered license, negotiate a license
for an existing system, or purchase the patent right from the patent holder.

The final choice depends on the cost to develop a new codec, the success of
that codec, and the cost of buying or licensing the patent(s). The government
can explore all of these at the same time, eg, set up a group to develop a new
codec while negotiating with various rights-holders.

The new codec might be finished while the negotiating is ongoing. The
government knows how well it fits their goals, but the rights-holders do not.
This gives a negotiation advantage to the government.

On the other hand, if the completed codec were immediately published, that
advantage disappears. The rights-holders might decide to raise their prices
after seeing just how bad the new codec is, which makes the government goal of
more open access to their video archives more expensive.

Thus, the government could rightly "judge the public interest not to be served
by publishing [their] own software", even if, after the negotiations have
completed, the software will be published.

------
antsar
1. Security by obscurity

2. Security by obscurity

~~~
zAy0LfpBZLC8mAC
Also, the distinction is largely pointless - vulnerabilities are not limited
to some magic "security enforcing functions", so if the argument had any
value, you'd have to keep all of the code secret. The security of a system
tends to depend on all of the code that is in some way reachable from the
outside.

Also, they list antivirus software as a security measure - all the more reason
to publish it all so people with some actual expertise in IT security can have
a look at it.

~~~
riskable
Not only that, but the expectation that you can keep your AV vendor a secret
is ridiculous, especially with tens of thousands of employees having a system
tray icon in direct view and hundreds to thousands of techs expected to be
able to install/re-install said software.

Not to mention the fact that big contracts with AV/security vendors are
usually public knowledge and easily accessible. Heck, they probably have press
releases indexed by Duck Duck Go!

------
cjslep
I'm curious if someone more knowledgeable than me in the security world could
elaborate on where most people tend to sit on this spectrum of openness. I
figure "obscurity" reduces the number of potential attackers that have the
capability of cleverly attaining the information being kept obscure, but at
the cost of not having much of the public available to evaluate the software's
integrity.

My uninformed guess is that many people stick to the former (obscurity).

~~~
riskable
Finding vulnerabilities in closed source software is just as easy as finding
vulnerabilities in open source software. The key differences are:

* Transparency.

* The ability to fix it yourself.

Also consider this: No one submits patches to fix vulnerabilities in closed
source software because that's not possible without the source. This results
in all sorts of problems that just don't exist when the code is available:

* After reporting a vulnerability you now have to wait for the vendor to fix it. If the vulnerability isn't out in the open (yet) they might take their sweet time while you nervously twiddle your thumbs.

* If the vendor doesn't support the product anymore (or the vendor doesn't exist anymore) then there are really no options at all. You have to live with the vulnerability or get a new product.

* When the vendor releases a fix how do you know if they fixed the core vulnerability or just the edge case that was given to them? All too often vendors apply hacks to detect specific edge cases instead of fixing underlying problems. This is why attackers are able to take advantage of _old_ vulnerabilities by means of simple fuzzing (or just being clever).
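As a sketch of that last point - here's a hypothetical parser where the vendor patched only the one reported input, and a naive random fuzzer that rediscovers the underlying bug in moments (the parser, the bug, and the inputs are all invented for illustration):

```python
import random

def parse_record(data: bytes) -> bytes:
    """Hypothetical vendor parser. The reported crash input got a
    point 'fix', but the underlying bug (blindly trusting the
    attacker-controlled length byte) was never addressed."""
    if data == b"\x05oops":                        # vendor's edge-case patch
        raise ValueError("rejected known-bad input")
    n = data[0]                                    # claimed payload length
    return bytes(data[1 + i] for i in range(n))    # IndexError: the real bug

# A naive random fuzzer trips over the unfixed bug almost immediately.
random.seed(42)
crashing_input = None
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 6)))
    try:
        parse_record(blob)
    except ValueError:
        continue                                   # the patched case
    except IndexError:
        crashing_input = blob                      # a brand-new crash
        break

print(crashing_input is not None)                  # → True
```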

For example, I once reported a horrific information disclosure vulnerability
(SSNs in the clear) and their 'fix' was to base64-encode the data. If I hadn't
kept track and followed up with them (it took them four months to release that
'fix'), that vulnerability would likely still be present.
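To be clear why that 'fix' is no fix at all: base64 is an encoding, not encryption, and anyone who sees the value can reverse it. A quick sketch (the SSN here is made up):

```python
import base64

# Base64-"protecting" an SSN is trivially reversed by anyone who sees it.
encoded = base64.b64encode(b"123-45-6789").decode()
print(encoded)                              # MTIzLTQ1LTY3ODk=
print(base64.b64decode(encoded).decode())   # 123-45-6789
```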

There are actually a lot of other problems related to closed source
vulnerabilities and fixes, but I'm out of time...

~~~
cjslep
I appreciate the time you did spend writing this comment. I don't work in the
security world, so it's nice to learn some of the arguments for open-source
software as a way of reducing vulnerabilities in software.

------
powatom
Also:
https://identityassurance.blog.gov.uk/2014/10/09/how-we-use-open-source-code-on-the-identity-assurance-programme/

Left a comment there but it's awaiting moderation. The GDS do a good job for
the most part, but I very strongly disagree with how they're handling this
one.

