Software Freedom Doesn't Kill People, Security Through Obscurity Kills People (ebb.org)
192 points by ashitlerferad on Aug 14, 2016 | 124 comments

The author asks for evidence of GPLv3 software hurting people. This includes the Linux kernel, for which there are many examples of incidents. The example below was the first I found[1], and it has the benefit of involving multiple open-source projects under different licenses.

The idea that software is inherently safer because it's released under an open-source license is overly simplistic in the extreme. Just the most obvious reason for this is that opportunity for independent review doesn't guarantee that review will happen. The wisdom of the crowd doesn't guarantee a third party review will be better or more thorough than a solid first party system. Open-source provides the possibility of review, and that's all. Hypothetical eyes make no bugs shallow.

Edit - I'm not making a claim one way or the other about the relative safety of open vs proprietary software, I'm merely saying that the comparison is deeper and more nuanced than the way it is typically portrayed.

1 - http://www.bbc.com/news/technology-28867113

With open source software, you have the possibility of knowing how safe it is. With closed source, you don't (ok, it's a lot harder to read through binaries, but still vaguely possible).

There's no way you can understand how safe the neural network driving your car is. And this applies equally to open-source autopilot systems and closed ones. The article is just a rant that misses this fundamental point.

Sure you can. It's just not by verifying the algorithm, it's by analyzing real-world performance. This is the same way that we understand how safe the human driving the car is, and the insurance industry is very good at this.
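The insurance-style analysis mentioned above can be sketched numerically. The fleet figures below are made up purely for illustration, and the upper bound uses a rough normal approximation to a Poisson count rather than a rigorous interval:

```python
import math

def crashes_per_million_miles(crashes, miles):
    """Point estimate plus a rough 95% upper bound on the crash rate.
    Uses a normal approximation to the Poisson count, which is only
    reasonable when the crash count isn't tiny."""
    rate = crashes / miles * 1_000_000
    upper = (crashes + 1.96 * math.sqrt(crashes)) / miles * 1_000_000
    return rate, upper

# Illustrative numbers, not real fleet data.
auto_rate, auto_upper = crashes_per_million_miles(crashes=130, miles=100_000_000)
human_rate, _ = crashes_per_million_miles(crashes=1200, miles=500_000_000)
print(f"autopilot: {auto_rate:.2f} (<= {auto_upper:.2f}/M mi), human: {human_rate:.2f}/M mi")
```

Nothing in this calculation depends on whether the software under test is open or closed; only the observed miles and crashes matter.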

Analyzing real-world performance doesn't care if the source is open or proprietary, so you're kind of missing the point of this whole thread.

I cannot, but others can - I can even pay them to do so if necessary.

Can you pay someone to solve the halting problem?

I think we all agree that open source is not the holy grail but it shines with comparative advantages. It is like democracy: Far from perfect but still the best political system:

'Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.'


(OK, that was slightly OT!)

So the plutocrats will protect us from the majority? :)

Obviously we need constitutional rights and bureaucrats to implement the will of the people as expressed in the polls. But why do we need representatives? Do you really think eg a Tea Party representative will protect Muslims from the majority better than the first amendment and courts?

"So the plutocrats will protect us from the majority? :)"

No. The lesson, as usual, is that extremes rarely work out for us. The plutocrats will screw us if totally in control. So will politicians if they can get money from them or from special interest groups. So will democracy if something horrible is trending in popularity. As usual, the solution will be a compromise between various points in the design space, with more effective checks on the various interactions and risks each poses to the others. I can't tell you what that design will be, but looking for it is the best investment with these things.

"Do you really think eg a Tea Party representative will protect Muslims from the majority better than the first amendment and courts?"

The First Amendment is subject to interpretation by representatives and/or courts. The Fourth, Fifth, and Fourteenth have been seriously watered down. They're among the best examples of the risk. A politician can outlast the media fervor of the moment, get advice from well-informed people, and have teams check the side effects of a law. The current situation exists because apathetic democracy allows them to get away with not doing that. Also, passing laws in exchange for bribes. That could all be stopped with enough voter action followed by legislation or alternative models. The benefits of representational democracy remain.

I'm not saying I believe it's the best system. I'm just saying plutocrats and mobs cause lots of problems it can prevent with less effort than constantly outsmarting plutocrats or fighting mobs. So, it's a consideration.

The current situation exists because apathetic democracy allows them to get away with not doing that. Also, passing laws in exchange for bribes.

Or it exists because it is an emergent phenomenon that arises eventually. An increasingly polarized two-party system where each primary race is rigged against outsiders and voters are scared to vote third-party because each candidate is super scary to the opposing party. It's bound to happen eventually. In this case it happened in the US presidential race. And Congress has long had a 10% approval rating. One can make up reasons as to why it happened or accept that a representative democracy eventually reaches such states, and it's not clear how to get out of them with more representative democracy. Trying the same thing and expecting a different result.

I'm not saying I believe it's the best system.

Exactly, and I'm saying it's not. It's easier to fool all of the people some of the time (eg during elections) or heavily influence some of the people all of the time (as lobbyists for special interests do), but you can't fool all the people all the time. That is far more expensive.

Once again ask yourself: are eg Muslim and Mexican US citizens safer if Donald Trump gets elected and magnifies the desire of those who elected him, or if the population at large was polled for policy? Are the people of Gaza better off under Hamas because they were democratically elected once by those who showed up? Would the Nazis ever get into power if there were no representative positions to get into? Besides gerrymandering, poor voter turnout and other major problems, representative democracy eventually gives a giant hammer to some special interest or other.

The crowd would do a better job at predicting policy outcomes than experts: http://www.npr.org/sections/parallels/2014/04/02/297839429/-...

I'm going to have to think on these points some more. :)

No, but neither can anyone else.

I can't prove that the neural network always behaves correctly, but with testing and statistics you can verify the network up to a chosen certainty (below 100%, obviously). Or you can actively search for problems with fuzzing techniques. All without solving the halting problem.

Right, but with the code in the open, it's easier to find ONE easy hole than to patch ALL the easy holes.

No that's not really how ANN's work.


With artificial neural networks, opening the code won't tell you much if anything, it's simply beyond the human capability to review.

So you'll end up with ANN's reviewing other ANN's which are reviewed by other ANN's at the end.

You can test ANN's by simply fuzzing them and seeing the output; getting much insight into why an outcome was chosen for a given data set is near impossible.
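A minimal fuzzing harness in that spirit: throw random inputs at the model and check only that the outputs stay in the documented range, with no attempt to explain individual decisions. The `toy_model` here is a stand-in invented for the example, not a real network:

```python
import random

def toy_model(x):
    """Stand-in for a trained network: maps two features to a steering
    value clamped to [-1, 1]. A real model would replace this function."""
    return max(-1.0, min(1.0, 0.3 * x[0] - 0.7 * x[1]))

def fuzz(model, trials=10_000, seed=0):
    """Hammer the model with random inputs, flagging any output that
    escapes the documented range."""
    rng = random.Random(seed)
    violations = []
    for _ in range(trials):
        x = [rng.uniform(-100, 100), rng.uniform(-100, 100)]
        y = model(x)
        if not -1.0 <= y <= 1.0:
            violations.append((x, y))
    return violations

print(len(fuzz(toy_model)))  # 0: no out-of-range outputs found
```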

Lots of ways to review them:


These are also just the first steps. Not bad given what hardware and software verification used to look like a few decades ago. I think the verifiable ANN's will be more structured or constrained in their development or even use cases.

Have you actually read the paper you've linked? I haven't said that it's impossible to review ANN's, but you do not need to review their code to do it; in fact, reviewing their code is the wrong approach in this case. You do not want to be conducting code inspection on nondeterministic systems, as it would yield very little. This means that having the ANN being open sourced does not add additional security by having the code inspectable by everyone like with traditional software. This is what the whole argument is about, not whether ANN's can be reviewed.

You said:

"it's simply beyond the human capability to review. So you'll end up with ANN's reviewing other ANN's which are reviewed by other ANN's at the end."

That implied it was beyond human capability to review ANN's to the point we'd need ANN's to do it. In the paper, they give a number of deterministic methods to review ANN's noting some were successful by the teams using them. Countered both of your claims in that context.

"This means that having the ANN being open sourced does not add additional security by having the code inspectable by everyone like with traditional software."

I agree that most of the problems won't be caught here. More about the training set and safety monitors. However, the paper shows methods like rule extraction, visualization, and simulation that have aided in assessing and improving neural networks. These work on that internal state that's incomprehensible at first glance. It's also usually stored in HW logic, the SW code, or some data format. Having those probably facilitates analysis of ANN's performance. Assuming it's the kind that's analyzable at all. ;)

Note: Code-level errors, whether in hand-written or auto-generated engines, can also invalidate an ANN's behavior. The vast majority of software flaws occur right here. Obviously, some static analysis or safety transformations can increase assurance of correctness of the overall system. They should be combined with other V&V methods.

The full sentence was "...opening the code won't tell you much if anything, it's simply beyond the human capability to review." It was within a comment subthread about the value of traditional code inspections for reviewing ANN's; more specifically, will the availability of the code for everyone to inspect be as "valuable" for security assurance as with traditional software? Context is everything.

If you get the source code of any traditional software, reviewing the code can be done by humans regardless of how complex it is; with ANN's it's not really the case simply because of the nondeterministic nature of the system. You can validate the "correctness" of an ANN but it's not done in the same manner as reviewing traditional software.

"with ANN's it's not really the case simply because of the nondeterministic nature of the system. You can validate the "correctness" of an ANN but it's not done in the same manner as reviewing traditional software."

You have to do both: validate the non-deterministic aspects of the ANN logic and validate its implementation as code. Makes them a lot harder. One reason I avoid them.

I made money doing it one time. They told me I couldn't prove an arbitrary program would halt. I wrote a program that loaded an arbitrary algorithm into a process, counted down to zero from a starting constant, attempted to kill the process when the counter hit zero, and otherwise shut down the whole system. "sklogic" similarly showed how pointless the question was when he said he could just put a while loop around any algorithm, doing basically the same thing within the program.

So, not only can you prove a program halts using a verified halter, the techniques for doing so have prevented and detected many real-world problems. One of those situations where abstract math ignores simple, concrete solutions. :)
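The countdown watchdog described above can be sketched as a step-budget runner: execute an arbitrary algorithm one step at a time, decrement a counter, and forcibly stop at zero. The step-function protocol is invented for illustration; a production version would use process timeouts instead:

```python
def run_with_budget(step, state, budget):
    """Run a stepwise algorithm but guarantee termination: either the
    algorithm finishes within `budget` steps, or the watchdog kills it.
    `step` takes a state and returns (done, new_state)."""
    for _ in range(budget):
        done, state = step(state)
        if done:
            return ("halted", state)
    return ("killed", state)  # the watchdog fired

# A program that halts on its own...
countdown = lambda n: (n <= 1, n - 1)
print(run_with_budget(countdown, 5, budget=100))   # ('halted', 0)

# ...and one that never would, tamed anyway.
forever = lambda n: (False, n + 1)
print(run_with_budget(forever, 0, budget=100))     # ('killed', 100)
```

This doesn't decide the halting problem in the abstract sense; it replaces the question with one that always has an answer, which is the practical move both commenters describe.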

No, but for any given program I may be able to hire someone to tell me whether it halts.

Safety critical automotive software is going to tend to be an easier case, since it's quite likely to be extremely time bounded.

No one can. If anyone could certify a neural network's entire response mapping, then you could simply write an exact algorithm with the same responses. Good luck doing that to automatically drive a car. Strangely, every company in the world is focusing on NNs for autopilot, not on hand-written algorithms.

If anyone could certify a neural network's entire response mapping, then you could simply write an exact algorithm with the same responses.

You can, actually; the result would be extremely complex, though.

Nice strawman, given most systems use deterministic methods for that reason. Plus, simplest is often best in safety-critical work. That said, you could've gotten more answers on Google with the search terms "formal verification neural nets". A NASA paper with good methods was the first result for me.

Good methods for what? Certainly not to certify that each combination of inputs gives a certain output, as you can read in that paper. And that is the only thing that would have ensured that the guy in the Tesla didn't die. That methodology, furthermore, has nothing to do with open source.

"Good methods for what? Not certainly to certify that each combination of inputs gives a certain output, as you can read from that paper."

You're ignoring the biggest issue in your whole tangent: real-world engineers with sense won't use NN's in systems unless the problem itself is non-deterministic and other methods don't work. Personally, I push control theory, state machines, ladder logic, whatever for applications like this. You can deterministically analyze them. Whereas problems with too much uncertainty, or requiring approximate solutions, will always have an error margin. You personally have been pushing this thread to focus as much as possible on the NN case while ignoring the 99% of real-world use cases representing how the OP topic will usually play out. You also act like the inherent error in the tooling is bad when the problem domain itself has inherent error. So, I decided to address your tangent.

The first solution, safety monitors, goes way back in safety-critical use. That's to have a simple solution checking the ranges or other behavior of the NN, able to override a faulty NN with simpler logic. This can be done for GPS-guided drones that usually move in straight lines or smooth curves. The NN part is reset, diagnosed, or deactivated while the fail-safe system takes the plane on a simpler path or out of the area.
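A safety monitor of the kind described can be very small, which is what makes it verifiable. The names and limits below are illustrative, not taken from any real avionics system:

```python
FAIL_SAFE_HEADING = 0.0  # fly straight while the NN is reset or diagnosed

def monitored_heading(nn_output_deg, max_deviation_deg=15.0):
    """Envelope check in front of the network: accept the NN's heading
    command only if it stays inside a simple, separately verified bound;
    otherwise override with the fail-safe value."""
    if abs(nn_output_deg) <= max_deviation_deg:
        return nn_output_deg, "nn"
    return FAIL_SAFE_HEADING, "fail_safe"

print(monitored_heading(4.2))    # (4.2, 'nn')
print(monitored_heading(87.0))   # (0.0, 'fail_safe')
```

The monitor never needs to understand the network; it only enforces a bound that can itself be analyzed deterministically.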

The second part is checking the nets themselves. Again, given a non-deterministic problem domain, the nets will be non-deterministic, with the best one can do being to get the error margin way down or within fail-safe bounds. This other paper I found summarizes a lot of V&V methods for NN's:


Safety monitors. Training data or updates with constraints built in. Automated, test-case generation tweaking expected data. Rule extraction. System visualization. Simulation. All should contribute to safer NN's. They can't be perfectly correct, though, because vision, complex flying, thinking... none of this is 100% accurate no matter what method you use.

Says who? You may not be able to (I sure can't) but that doesn't mean there aren't those who can and do understand this sort of thing. And there are those who will literally sit down for months and years till they do understand it. Or they might rip it out and replace it.

They will understand how it works in principle, not why it made this or that specific decision, and they might not be able to predict exactly what it will do in an unforeseen situation.

The decision itself might be made using an amount of data completely overwhelming to human analysis, processed at inhuman speed.

Exactly, so having an open-source autopilot system would not have changed anything, and the guy in the Tesla watching Harry Potter would have died just the same.

Guys, are you really saying that you think you can certify every response of an NN for every input? Especially for an NN as complicated as an autopilot system? Are you aware that if that were true, you would be the richest men in the world, given that everyone would come to you to write a perfect algorithm to drive a car?

Certifiable reliability is not the only goal.

But even in safety-critical tasks I have seen NNs and MDPs used to derive a good-enough solution, then training turned off and the system certified (e.g. by running a large gold-standard collection of test cases).
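That freeze-then-certify workflow might look like the sketch below: training is off, the model is a fixed function, and acceptance is just replaying a gold-standard collection against a pass-rate threshold. Model, cases, and thresholds are all toy stand-ins:

```python
def certify(frozen_model, gold_cases, tolerance=0.05, required_pass_rate=0.999):
    """Replay a gold-standard test collection against a frozen model and
    report (certified?, observed pass rate). Thresholds are illustrative,
    not taken from any real standard."""
    passes = sum(
        1 for inputs, expected in gold_cases
        if abs(frozen_model(inputs) - expected) <= tolerance
    )
    rate = passes / len(gold_cases)
    return rate >= required_pass_rate, rate

# Toy frozen "model" and gold cases; a real suite would be far larger.
model = lambda x: 2.0 * x
gold = [(x, 2.0 * x) for x in range(1000)]
print(certify(model, gold))  # (True, 1.0)
```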

This isn't true at all. What's necessary is code written in a verifiable way, a qualified reviewer assessing it, trust in that reviewer/assessment, and a way to ensure you're using what was reviewed, in the evaluated configuration. It's how the main security certifications work if you do it at a high-assurance level. Lower levels are paper BS or simply inadequate, given that short-term, code-only review catches only so much.

I broke it down here with a quick revision just now:


That's a bug in OpenSSL, not the Linux kernel. OpenSSL doesn't use the GPL, it is dual licensed under its own license and the SSLeay license.

And good luck fixing heartbleed in a proprietary product in the way that occurred for OpenSSL - the entire codebase got looked at and we got BoringSSL and LibreSSL.

What would you prefer: code with an obscure bug from spaghetti code so bad it kills people [1], or code looked at by enthusiastic amateurs and professionals alike? Car people tend to enjoy pushing the envelope, and they are doing this with software already!

1. http://www.safetyresearch.net/blog/articles/toyota-unintende...

>Hypothetical eyes make no bugs shallow.


Beyond the "email me" example, as an author of free source projects for the past several years, I can say that if the project is used and if users have access to the source, they will find and fix bugs- not all, but more than would have been otherwise.

On the flipside, if few use your project, bugs can last for ages. I've had this experience also- distributing projects with free source that don't work, and not finding out until much later when I noticed it myself.

Making source free is not bad for security, because security is not about low visibility. Security problems are more frequent when the attack surface is larger, e.g. when your car can be stolen because nobody anticipated misuse of the entertainment system that was accessible via the carrier.

Related to that topic, Chrysler has recently been paying hackers to try to find security flaws: https://www.wired.com/2016/07/chrysler-launches-detroits-fir...

If the source is free or open, then people can find flaws more easily, so that they can be fixed. You want to be able to find bugs fast and fix them quickly. When you don't, that's when you get into serious trouble.

As a warning though, source being free and open and well-used doesn't mean that bugs can't go a very long time without being seen, after which point they are everywhere: https://en.wikipedia.org/wiki/Shellshock_(software_bug)

Anecdotally, I read through things occasionally. I once was curious about something in the xnu kernel and noticed a bug in the code, just from reading it. I sent Apple a bug report and eventually they fixed it (though I think it took a couple years).

You're arguing that the (probability) x (impact) of a claimed benefit is in question. But your argument merely mitigates the good that is provided by software transparency.

What are the bad effects to society for software transparency? Because if open software offers noisy positives with minor negatives, then we should prefer open as the default heuristic until circumstantial information says otherwise.

Some projects, whether open or closed, are more valued by institutions or businesses, and consequently some projects will justify more money and eyeballs. Also, what businesses and institutions value can be unintuitive. When there's a bug in Linux, that might immediately degrade a business product, whereas a security problem with SSH might not ever impact a business -- even if important customer data is stolen. I doubt Target experienced a real business problem after its large data theft incident with customer financial data.

Therefore, 3rd parties contributing to software will probably be far more prompt for bugs in projects that directly impact business success vs. OpenSSH, but at least the door for extra benefits is open, whereas for closed software the door is shut.

I am also for open source, however here are some arguments for closed source:

What are the bad effects to society for software transparency?

One is that companies are less willing to write code or go to market because their code IP wouldn't be competitive given existing players. A smaller new competitor's code would be quickly copied by larger existing players, with their larger teams, into larger projects that are already used by everyone.

Then there is the case of artistic one-off projects. Imagine a $50 video game releasing its source code immediately or as it is being made. There would be another company that just copies the code or the textures and distributes quickly before the last level is done.

at least the door for extra benefits is open, whereas for closed software the door is shut

This is true, but some doors that you don't want open are also shut. It takes a lot of effort to reverse engineer something. I bet all the OpenSSL bugs would be harder to find if it wasn't open source. Especially if binaries weren't available either: something like only an API layer existing. Maybe Heartbleed wouldn't have been found until OpenSSL was replaced by something else.

Finally, something like GNU parallel is very commonly used. On the other hand, some random utility on GitHub that isn't very popular probably isn't looked at by nearly as many people. Maybe 1% of the people who use some software look at its source code. So if you have 1000 people using your tool, only 10 of them look at the tool's source code, and 1 of those is a malicious actor looking for exploits. I have personally found several bugs in open-source software that could have been security exploits, but haven't reported them because I was lazy at the time.

Having said all this, most non-application-specific software could be open source and it would probably help it more than hurt it.

~ companies will be less willing to write code

~ artists will create fewer video games

To me these are arguments for making closed source illegal, especially the second. Among many benefits of first, fewer actors warping society in order to capture rents from software they control, eg tax software vendors. On second, destroy one incentive to create addicts.

Indeed, the DAO was GPL v3[1]. It was taken to the tune of $50M. Slapping a particular license on it doesn't make it more secure.

[1] https://github.com/slockit/DAO/blob/develop/LICENSE

"opportunity for independent review doesn't guarantee that review will happen."

Maybe not with the software most of the world doesn't care about but I'd bet that there will always be someone willing to look real hard at software being put in to popular automobiles.

The same would seemingly have been true of OpenSSL (software that protects secure transactions online), but the Heartbleed bug wasn't detected for over two years.

it was detected though, still better than if it were closed

The Linux kernel is GPLv2.

> The idea that software is inherently safer because it's released under an open-source license is overly simplistic in the extreme

Reminds me of the common confounding of elections and democracy.

> The below was the first I found[1]

Try again; that is BSD-licensed OpenSSL code, probably proprietarized, not GPL.

>The idea that software is inherently safer because it's released under an open-source license is overly simplistic in the extreme.

Well, no. No it isn't. A higher possibility of more people seeing the code translates into a higher probability of bugs getting fixed. Of course, there's no guarantees, but more eyes means more chances of bugs getting caught. And that means safer.

"Of course, there's no guarantees, but more eyes means more chances of bugs getting caught. And that means safer."

No, it doesn't. It means the potential to be safer, if all kinds of things line up. The vast majority of FOSS code is crap like any other code. More eyes didn't change that. What's needed is design/implementation work, qualified review, time, and trustworthiness of reviewers and distribution.


Fair enough. But I would, and indeed do, argue that potentially safer is better than not potentially safer.

In this order: provably safer is better than potentially safer; potentially safer is better than no potential. And still, as I showed, open or closed source at the consumer level has little to do with the potential. It's the amount of trusted development and/or review of the source, then distribution of that. Illustrated below, where greater than means more trustworthy, reliable, and/or effective:

1. Rigorously designed & reviewed open-source is greater than...

2. Rigorously designed & reviewed closed-source is greater than...

3. Typical open-source is greater than...

4. Typical closed-source is greater than...

5. Cloud infrastructure based on Concurrent DOS written in a MOL-360 language.

I... Completely agree. I honestly have no idea why we're arguing at this point.

Just took some discussion to clarify our views that's all. Glad we reached an agreement. :)

"Meanwhile, there has been not a single example yet about use of GPLv3 software that has harmed anyone."

What!? That's dishonest. Most people doing safety-critical development use robust tooling intended for it instead of FOSS, or are stuck with proprietary tools attached to their platforms. So FOSS will hardly ever cause harm, almost by definition, because there's so little use of it. Kind of like how Mac OS 9 and Amiga are "secure" because you see no huge botnets. ;)

As far as harm goes, it's done plenty. Most GPL software, just like proprietary, is of crap quality. I've lost years of data to an open-source solution. Many people were hacked due to FOSS solutions where the "many eyeballs" didn't even try to look for problems. Extrapolate that to cars, planes, and trains. Then think about whether you'd want to use one afterwards. I won't. Give me DO-178B Level B/A software over that any day.

"until you can prove that proprietary software assures safety in a way that FLOSS cannot"

Wait, has this author heard of DO-178B and related standards? They do exactly what he says. All are proprietary, too, although they could be FOSS'd. How about FOSS software people put their stuff through a rigorous prevention and 3rd-party evaluation process before expecting us to believe it's high quality or safe? Cuts both ways. I do know a handful that would make the cut. Need exceptions for OSS like that. Most won't, though.

It seems that most people commenting in this thread have not actually followed the complex political debate going on in the world of automotive-industry adoption of FLOSS.

Specifically, the automotive industry has made a series of arguments that proprietary software is ultimately safer than FLOSS, and all their goals are to lock down FLOSS in various ways to prevent the nefarious from "hacking" the vehicle.

The vehicles are all still hackable, and many have been modified by people for both reasonable and nefarious purposes. The FLOSS situation won't change anything, and there aren't even any examples yet of hacks where FLOSS made the situation worse. It's an assumption they make without full information because of their inherent pro-proprietary bias.

I don't see the author arguing that being open makes software inherently more secure, but rather that your software shouldn't rely on being 'secure' because it's closed source; instead, design the software to be secure even if the source code were to be open-sourced, which may bring additional eyes (and if not, it's no worse than if it were closed).

I don't know if anyone has a good argument against that, but I haven't seen one yet. It's not an argument that open source makes your software bulletproof, or even safer just by being open source; rather, if you designed your software with proper security as a goal, you shouldn't mind releasing your code.

Both arguments are hyperbolic.

Both proprietary and open-source software have likely killed people, and likely will do so again. No amount of eyeballs will help if no one is looking, nor will hiding all the bad code away so no one can see it. Bad engineering practice is bad engineering practice.

I say clearly in the blog post that there is danger when you have software handle any life-critical service. The criticism I'm making in the blog post is the arguments by the auto-industry and their providers that FLOSS is inherently more dangerous than proprietary software.

I don't think that one unprovable assertion ("GPLv3 software has never killed anyone") is the best way to combat someone else's unprovable assertion ("our cars would be less secure if everyone could see our code"). The only way to argue against security through obscurity is to argue against security through obscurity, although admittedly that can be tough depending on the audience. Fighting against the status quo is just going to be an uphill battle sometimes, it's the nature of the beast.

This sounds more like a rhetorical device, in which debater #1 has attempted to frame the debate in terms of debater #1 being correct unless debater #2 can prove that they're wrong. Debater #2 then tries to reframe the terms of the debate such that debater #2 is right until proven wrong by debater #1.

In both cases it's more a form of sophistry against an opponent than it is a way to make a case to the public. Using a dishonest rhetorical device to counteract someone else's dishonest rhetorical device rubs me the wrong way, because it feels like both sides are trying to twist facts in their favor, and are not trying in good faith to convince me that they're right and the other person is wrong. Debater #2 is responding to sophistry with more sophistry, rather than attacking debater #1's sophistry head-on.

>The time has come that I must speak out against the inappropriate rhetoric used by those who (ostensibly) advocate for FLOSS usage in automotive applications.

You mean used against?

The author appears to be speaking out against people who, while advocating the use of libre/open source code in automotive solutions, decry the use of GPLv3 (as opposed to v2).

The author suggests that the people opposed to the use of GPLv3 are making false arguments on the grounds of safety. The relevant difference being that GPLv3 requires not only the release of code, but the ability to use modified versions of code in situ. E.g., by replacing your car's firmware.

nonsensical rant is nonsensical....

I agree open-source software has more chance of being safer, but the article does not do a good job of arguing it. The anecdotal-evidence fallacy is not good science, e.g. "Meanwhile, there has been not a single example yet about use of GPLv3 software that has harmed anyone".

The author of this post is right, to a degree, but his arguments are really weak. Here it is, simply:

-With OSS, your bugs are more likely to be caught.

-Therefore, it's harder to maliciously hack the vehicle.

-Therefore, it's less likely the software will fail and kill someone.

-Therefore, not open-sourcing your code is a liability, because it makes it more likely that your product will kill someone.

Or the auto companies could just admit the real reason they don't want to OS their code: they're afraid of losing control of their product. Of people, even if they've voided the warranty, using their product in ways they didn't explicitly allow. Of losing their monopoly on support.

If you're going to be evil, at least give it to us straight...

Your first claim is not true. What makes bugs more likely to be caught is having lots of eyes on the source. If you have a popular OSS project there's probably a bunch of people looking at it. But making something OSS does not guarantee that anyone will look at it. Meanwhile, proprietary software will still be examined by everyone who's paid to work on it. So really, the value of OSS in catching bugs is related to how interesting/popular the software is. Boring software is rather unlikely to catch many bugs by being OSS.

But the probability of people outside your org looking at your source if it's proprietary is zero. Said probability is nonzero for OSS. So yes, even in a low-popularity OSS project, there's a higher probability of more people setting eyes on it than on proprietary software.

In this case, I was using eyes as a shorthand, but yes, you are correct.

I'm going to skip the main points everyone else is hitting (because I've been touting gplv3 for a while), and get straight to what I think is the most important part of the evolution of foss.

I think FOSS is the way to go, but I think the main problem is with the many-eyes theory. To cut to the chase, I think the future of code lies in LOC reduction and simplification of code, in order to combat the problem of too much complexity.

The Linux kernel is now at 10 million LOC+! I don't care if you are Red Hat, you don't have enough eyeballs to properly audit that shit!

That's what I think the future of software should be: simplification of code, maybe some refactoring, so that the barrier for the average user to look at and understand the code is lowered.

That, and perhaps an AI/machine-learning code-reviewing system that can perform the function for those not able to, but who still want the benefits of FOSS.

> The Linux kernel is now at 10 million LOC+! I don't care if you are Red Hat, you don't have enough eyeballs to properly audit that shit!

To be fair, most of that is in device drivers and other optionally compiled or non-critical pieces of code.

The common code path is extremely well tested and audited.

Exactly. I see so many people complaining about the Linux kernel SLOC versus other kernels; it's an unfair comparison because no other modern kernel I know of has such a wide selection of in-tree drivers.

For instance, when I install Windows on a computer, I will often have to install additional chipset drivers (odd USB3.0 controller, motherboard chipset stuff, fan control). When I install Linux, all of that stuff works out of the box, no drivers necessary: it's rolled right into the kernel.

Open source is better for some things, but security isn't one of them.

I say this as a developer of an open source platform. When it's just starting out, it has the same number of holes as anything else -- but everyone can see the code and find the holes. It takes years, and hundreds of man-years, to find and fix 99% of the holes. Until then, the source code is all in the open, making it an easier target for an attacker than a closed-source product. In the latter, you just have to close the "obvious" holes.

True, with very old or well-funded open-source products, this is not the case. But MOST open source software isn't like that.

Not having the source might slightly slow down a motivated and competent attacker. But it greatly slows down the discovery of bugs, especially security bugs, by non-hardcore security researchers.

There seem to be many different topics involved: first of all open vs. closed source, and then GPLv3 vs. other versions of open source licenses.

In general, I would say that open source is preferable to closed source, but there are situations, where this is not always possible for the producing companies.

Finally, there is the problem that, if the owner of the car is free to modify his car's software (which in the abstract sounds like something he should be able to do), some people who should not will modify it. One should not just tinker with the algorithms used to control the engine, braking, and stability control.

>there is the problem that some people who should not will modify the software of their car. One should not just tinker with the algorithms used to control the engine, braking, and stability control.

Why is that a problem that has to be solved? In purely mechanical cars, people already can and do tinker with the engine and brakes. But only a tiny fraction of people do, and those usually take care not to risk their lives. In the limited cases where modifications cause problems for other people, regulations are enforced through scheduled inspections and random stops of suspicious vehicles. This system has worked quite well for the last hundred years or so.

If software really is different and shouldn't be modifiable, I think this needs a little more justification.

First of all, I trust a somewhat skilled mechanic to get his car modifications reasonably safe more than I trust a random software modification. Then there is the problem that mechanical parts are easy to inspect - many improper modifications can be seen at a glance in a traffic control - while software modifications are pretty much impossible to detect. Short of checksumming the whole car software, it cannot be done.

> I trust a somewhat skilled mechanic to get his car modifications reasonably safe more than I trust a random software modification

Those are not comparable things. The correct comparison is between a mechanic and a programmer (after all, anyone who modifies his software is a programmer, perhaps an unskilled one), or between an automotive part and a piece of code. Using the correct comparison, we see that we already permit anyone to work on his own vehicle, and that he can put anything in it he wishes. Software should be no different.

> Then, there is the problem, that mechanic parts are easy to inspect

Some of them are. Some look just like the correct parts, but were manufactured to incorrect tolerances. These parts wouldn't be obvious in a visual inspection.

> software modifications are pretty impossible to detect. Short of checksumming the whole car software, it cannot be done.

Software hashes aren't rocket science: your post & mine were both hashed at least once. Indeed, software hashes make detecting changed software easier than detecting swapped physical parts.
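For illustration, here's roughly how cheap such a check is (the function names and the idea of a published reference digest are illustrative assumptions, not any real vehicle tooling):

```python
import hashlib

def firmware_digest(path: str) -> str:
    """SHA-256 hex digest of a firmware image file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large images don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_unmodified(path: str, reference_digest: str) -> bool:
    """Compare against a known-good digest, e.g. one published per firmware version."""
    return firmware_digest(path) == reference_digest
```

Any single flipped bit in the image changes the digest, which is exactly why a swapped binary is easier to spot than a swapped mechanical part.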

You need to have a basic set of mechanic skills to take a car apart and put it back together at all. Also, most mechanical work is not rocket science.

Software is an entirely different beast. It might be quite trivial to modify parameters, but verifying that the software still works reliably might require a whole testing department.

Yes, software can be checked for modifications via hash codes, but how do you expect a cop to run a hash sum on the vehicle software? You would have to read out the memory itself, because how could you trust any possible self-test of the software?

So a car could only allow signed software to be loaded, but that again seems to be incompatible with GPLv3.

> It might be quite trivial to modify parameters, but verifying that the software still works reliably might require a whole testing department.

That responsibility (for running full tests, or being responsible for damages caused by not running full tests) lies squarely on the modifier/owner. We don't need to erect a whole new set of laws because somebody could make his engine run poorly.

> Yes, software can be checked for modifications via hash codes, but how do you expect a cop to run a hash sum on the vehicle software?

I don't. Why should a cop care what software I'm running, any more than he cares what brand of brake light I buy?

It's the owner's car. If he modifies it to be unsafe, then he's responsible for the damages he causes. If he doesn't, he's not. Checksumming may be useful in a court case to prove modifications (although diffing would work just as well).

A malfunctioning car might endanger its passengers and other people. For that reason, car makers have to get certification for any new model they want to bring to market, and later modifications have to be done either with parts certified for that car, or new certification has to be obtained (at least here in Europe). Finally, it has to be made sure that, after the modifications, the car still complies with the emission standards.

So there are already popular modifications done to cars by their owners, which do get checked by cops for their certification, and often enough people do not have the necessary paperwork. Simple example: some people install custom wheels which are the wrong size for that car. This poses a danger as the drivability might suffer.

If you are modifying any software required for the safe operation of the car, there is quite a potential for causing harm to others, and that is where the officials have to take notice.

> how do you expect a cop to run a hash sum on the vehicle software

Any flash chip that holds vehicle software gets a tiny hardware component that once a minute hashes the entire memory and broadcasts the checksum over the car's message bus. Somewhere in the car there's a read-only hardware interface to the message bus that cops and mechanics can use.

If you don't trust the message bus you need a bit more silicon to provide a signed challenge-response protocol, but then we're still talking about cents per flash chip.
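A minimal sketch of that challenge-response idea (using a symmetric HMAC for brevity; a real design would more likely use asymmetric signatures, and the key and message format here are purely illustrative):

```python
import hashlib
import hmac
import os

# Hypothetical device secret, provisioned into the chip at manufacture.
DEVICE_KEY = b"illustrative-device-key"

def chip_respond(challenge: bytes, firmware: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Flash-chip side: bind the fresh challenge to a hash of the current firmware."""
    fw_hash = hashlib.sha256(firmware).digest()
    return hmac.new(key, challenge + fw_hash, hashlib.sha256).digest()

def inspector_verify(challenge: bytes, response: bytes,
                     expected_firmware: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Inspector side: recompute the expected response for known-good firmware."""
    fw_hash = hashlib.sha256(expected_firmware).digest()
    expected = hmac.new(key, challenge + fw_hash, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# The fresh random challenge is what stops a modified ECU from replaying
# an old, valid response captured from stock firmware.
challenge = os.urandom(16)
```

The per-query random challenge is the extra bit of silicon's job; everything else is just a hash over the flash contents.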

Ok, that would be possible: having an independent system checking the car for modifications. Though this system, in turn, could not be open for the user to modify.

And of course, this all would basically ban all cars with modified software from the roads. So you can modify your cars software, but not use the car in public traffic.

Having the checksumming system unmodifiable would be similar to disallowing modification of speedometer and odometer. Important measurement instruments were always kind of a special case.

You could use this to limit modification of some crucial systems (ABS, airbags, etc. Those will always be on separate microcontrollers anyway because of their hard real-time requirements).

Or as you said you could keep all modified cars out of public traffic, while still allowing experimentation on private roads. Both variations are more heavy-handed than I would prefer, but a lot better than the status quo.

By far not everyone modifying a car is a skilled mechanic. Many are, but by the same logic most people modifying car software will be skilled programmers.

>software modifications are pretty impossible to detect

I think there are two classes of software modifications: those that negatively impact the environment or other people and those that don't. The first is by its nature easy to detect from the behavior of the car, and I see no reason to regulate the second class.

Some people will introduce bugs, but I'm not convinced that's a problem that needs to be solved; self-preservation will make people careful not to introduce bugs in the brake system, and serious problems should be few and far between.

> I think there are two classes of software modifications: those that negatively impact the environment or other people and those that don't. The first is by its nature easy to detect from the behavior of the car, and I see no reason to regulate the second class.

Assuming they manifest themselves before a crash happens. When the software modification only impacts the ESP or ABS system in an emergency situation, it is too late. Only extremely thorough testing can make software fit for this kind of purpose. And experience shows that even then bugs stay unnoticed.

You are arguing things you do not fully know. If you want open source examples: goto fail; was in open-source code. Can you quantify how many MITMs happened? Stuxnet had open source for PLC infection. That could have wounded some people. Heartbleed was in open source. Guess how many times that was exploited by governments, with sensitive information disclosed?

The list of open source failures is long. It is not true in and of itself that the open source "more eyes" argument is correct. Do you audit the source code of everything you compile for security issues?

One could convincingly argue, just based upon DROWN, FREAK, Logjam, and POODLE that code in the open doesn't get inspected and those who do inspect it miss an extensive number of security issues.

I'm not sure how many people were harmed by this bug in MRI software, but the bug resulted in false positive rates of up to 70% http://www.theregister.co.uk/2016/07/03/mri_software_bugs_co...

Do open source developers actually want to take on the liability that comes from distributing software for use in safety-critical environments? One mistake could leave someone with a lifetime of medical bills, and they'll sue anyone with money involved in it.

You don't seem to realize that the software license means no warranty, but that does not preclude anyone from giving support and warranty on the product.

So yes, GPL'ed software may be used on critical applications the same way as every other piece of hardware and software: via special contracts.

I realise that. Hell, I've done that. As a licensed engineer, I would just be very concerned about contributing to an open source project that controlled critical systems in cars. That sounds like all the liability of my actual employment, but none of the pay.

Unauthorized use of your code or product for uncovered applications is a problem no matter if the source code is open or not. Companies exclude themselves from such liability as a matter of policy, demanding special contracts on these cases.

If you get any IC datasheet, or even an application-specific brochure such as (1), you will see a notice saying explicitly that.


The idea of "Security Through Obscurity Kills People" as a bad thing is a very catchy phrase and great rhetoric. However, it is also one of those statements which lowers critical thinking and allows people to talk without producing stats, facts, or demarcations of when an idea should or should not apply.

For example, we are told repeatedly to use obscure passwords, to lock our phones and tablets with 4-digit numbers and even swipe-the-screen gestures; bank account information is protected by numbers and passwords! And all this in most cases is an obscurity scheme, and that obscurity scheme is our single point of security failure before complete system access.

"Heartbleed" was present in OpenSSL for over a decade, out in the open, and nothing was detected. ...

p.s. slightly off-topic, but can you accurately count the number of ball passes? https://www.youtube.com/watch?v=47LCLoidJh4

You misunderstood the term "obscurity" in this context. Of course there is something that identifies a user (sth. he knows / sth. he has), but "security through obscurity" denotes the practice of hiding the way the authentication process is planned / implemented. This hiding usually at best makes no difference to the safety of the system, but often actually lowers it. One of the reasons is that it makes it harder for independent security researchers to review it.

Of course there are secrets in any access to a secure system. But they should be only those parts which are user specific (passwords, keys,...).

The 4 digit PIN is rate limited. How is that a single point of failure?

For example if someone skims the code.

Open security says everything except the key is known. But the actual key is private.

This ignores the issue of actually modifying the software.

Proprietary software doesn't stop people modifying software, it only slows them down.

And it doesn't slow people down anywhere near enough, hence all the talk about DRM, encryption, warranty voiding and anti-circumvention laws.

There is nothing wrong with allowing people to modify software, the issue is when people modify software to break laws (environmental law, in this case).

The solution is going to be some form of compliance certification/testing. The manufacturer provides the default software and gains a certification to prove it complies with the laws. If the user wants to modify the software, they are going to have to prove that the replacement software also complies with the laws and get a certificate.

During annual vehicle inspections or licensing, they can check that certified software is installed. Or maybe the ECU's bootloader only allows certified software to be installed, unless a developer mode is enabled (which will require the owner to keep logs to ensure compliance)

"Proprietary software doesn't stop people modifying software, it only slows them down."

See DMCA and licensing agreements. Modifying your stuff might be a felony someday. Best to be sure the right to modify or repair is right there in the license, irrevocably and transferring to the next buyer automatically. Not happening right now.

Making hacking of your own property into a felony has already been attempted under current law, after a fashion. Sony's lawyers claimed that George Hotz's PS3 exploit was a violation of the CFAA because he hacked "the PS3 system", and "the PS3 system" belonged to Sony (apparently equivocating between individual consoles and PS3 as a platform) [1]. I'm not sure whether a judge ever actually addressed this argument, since Hotz settled. And of course that's a different situation than actually being indicted under the CFAA, but you can see the gears turning.

I seem to recall that a similar argument was used against Jon Johansen at some point, but I don't feel like trying to dig up early-2000s history on the web today (IME going back to ca. 2003 is okay, but before that a lot of things have fallen down the memory hole).

[1] http://volokh.com/2011/01/13/todays-award-for-the-lawyer-who...

I am all in favor of safety-critical software being open source, but I dislike the argument of forcing the issue by claiming the right to tinker, or really any variation on "I bought it therefore I can do whatever I want to with it". It's not true when the hardware in question has safety implications. For example, when I buy a car, I do not have a right to do anything I want to with it. I can't drive it in ways that would be dangerous to others, and the fact that I am legally restricted as to where and how I may drive it does not feel like an inalienable right being taken away from me. That's clearly a strawman but bear with me.

In a similar vein, if I were to physically modify the engine in such a way as to make it faster but fail environmental or safety regulations, that could also have financial or legal repercussions for me (or Volkswagen, to cite a recent example :). The same argument could be made against modifying automotive software- vehicles are certified by government agencies around the world based on the idea that they will perform a certain way WRT safety and emissions, therefore software behavior has to be just as unchanging and predictable as hardware behavior. The fact that people can and do hack their cars all the time doesn't change the regulatory environment for car companies.

I do believe that embedded software should be as open, modifiable, and testable as possible, but I don't think people claiming the right to tinker as a basic human right are going to change anyone's minds about this. The biggest impediments to (for example) car companies making their ECUs open source and hackable have nothing to do with a dogmatic attachment to proprietary software and security-through-obscurity (although those concepts have their adherents, misguided as they may be), and much more to do with compliance with government regulations and minimizing their potential liability.

There is at least a perception that having open source, hackable ECU software would be all downside with no upside to the companies selling those vehicles. The way to change this perception is to show it as being demonstrably wrong, rather than to simply claim a right to tinker. The best way to get open source ECUs is to either make the consumer market care about this (not likely) or to update the regulatory environment in such a way that car companies have a non-abstract motivation to go open source.

This is not unlike the dreaded binary blobs required to use certain 802.11 chips on open source operating systems- they exist because of FCC requirements, not because the vendors love binary blobs. They are allowed to ship software-defined radios, but the only way they can guarantee that their SDRs don't broadcast on illegal frequencies is to lock down the software that controls the hardware. It's not an ideal solution, but it's the only solution they've got aside from spending even more money to ensure safety at the hardware level, which wouldn't do anyone any good.

"It's not true when the hardware in question has safety implications."

It's a sensible counterpoint. The modification would be, if it's safety-critical, that you have a right to get modifications by professionals who aren't the original seller. Or perform them yourself if you are such a professional. This covers the biggest use-cases of repairs and enhancements respectively.

EDIT to add:

" is to lock down the software that controls the hardware. It's not an ideal solution, but its the only solution they've got aside from spending even more money to ensure that safety at the hardware level"

Not true. They could do it right at the hardware level with simple bounds-checks burned into logic. The checks' settings set at manufacture or OEM configuration using something like anti-fuses. The kind of tech that's in 8-bit and 16-bit MCUs that retail for a few bucks. Costs almost nothing. Also more secure given simplicity. They aren't doing it because they don't care given the demand side of this issue. Plus, lock-in among oligopolies brings them long-term financial benefits. ;)

Making modifying stuff a felony still just slows down people's efforts, unless there's a reliable way of enforcing it, which is... not really possible.

Red herring. It will still be a felony and people will still go to jail over it. Most might get away with it. Might, given the increasing surveillance state with commercial and government partnerships. Regardless, buyers need to pick suppliers who voluntarily eliminate that risk with proper licensing, unless they want to risk jail time. They should not trust anything less in a license, given precedents so far.

GPLv3 is stricter about this than GPLv2 though.

So if a company tried to apply "Your code must meet emissions standards before you are allowed to install it" limitations via a license agreement, then it would conflict with both GPLv2 and v3. If a company only allowed you to install software after it was signed by the company's own testing laboratories, it would conflict with GPLv3's Tivoization clauses.

But if the government says "All software installed on car ECUs must meet emissions standards", then there is no conflict; in fact this is already 'implied' by the current laws.

The GPLv3 'should' also be fine with an "All car ECUs must only accept software that is signed and approved by a 3rd party testing laboratory" law. You have the exact same barrier to installing software on the ECU as the actual manufacturer of the car.

The main example where the GPLv3 might conflict with a government law is when a law mandates "the car's ECU must be locked down in a way that only the original manufacturer can change the software".

How can you certify a neural network? If it was feasible you wouldn't need a neural network in the first place, because if the problem could be solved exactly with a well known and certified algorithm then why add the "uncertain" response of a NN?

Same way you certify a person. You train them, then you test them in simulated environments.

It won't be a fixed certification procedure; one car manufacturer might hire an independent auditor to check over every line of code. The car manufacturer that stupidly uses a neural network in their emissions control system will have to hire an independent auditor who is comfortable putting their stamp of approval on the neural network.

Actually, it's way easier to certify a neural network than a human: you can simulate millions of situations. You could even certify, line by line, the code which ran the training environment.

As long as the government (or certifier) is happy with whatever method was used to create the report, a certificate is issued.

You need to certify every possible output for every possible combination of inputs. It sounds impossible to me. Even if it is not impossible, then at that point you can substitute your NN with an algorithm with the same response mapping. If you don't certify everything, then I can't really see how an open source certified autopilot would have saved the guy in the Tesla.

Certification is not a 100% solution. We can never be 100% sure that a neural network will produce the right result. Just like we can never be 100% sure that hand written software doesn't have bugs, certification and auditing can't do that either.

Even with the most stringent software development and testing procedures in the world, NASA is unable to produce bug-free software for their spacecraft (which is why they have procedures to recover from bugs and patch software).

All the certification is saying is: no bugs were found during testing; based on the amount of testing and the quality of the code and the testing procedures, the probability of a fatal bug is x% (where x is a really low number with lots of zeros), and we deem x% to be an acceptable risk. In the case of autopilots, we only really need x% to be below regular human driving.

No software changes could have saved that driver. Tesla have a class 2 autopilot (adaptive cruise control with lane following). It's not within the job description of an autopilot to avoid all possible crashes, it's the job of the driver to be alert and ready to take control at all times. Tesla's autopilot doesn't even use Neural Networks outside of the camera sensor.

To be a class 3 autopilot that would be expected to deal with this sort of incident, Tesla would need way more sensors and much smarter software, so the car would actually have hope of detecting the faulty data from one sensor and taking the correct answer (The truck was only within view of the camera sensor, the radar is calibrated to only detect things at bumper height).

What is it with you and neural networks? What's that got to do with anything?

In the linked article, the author says that the death of the person in the Tesla with Autopilot could have been avoided if the autopilot software were open source. That is simply false, because you cannot ever certify the entire response mapping for an NN so complex. And even if you could, then there was no need for an NN, because you could write an algorithm with exactly the same response mapping.

The author never says that.

> Most importantly, it ignores the fact that proprietary software in cars is at least equally, if not more, dangerous.

To me it seems that he is suggesting that proprietary software can be more dangerous than open source software. So from there you can infer that he believes open source software can be safer than proprietary software.

If death(proprietary) >= death(OSS) then !death(OSS) <= !death(proprietary)

Arguing over whether code being publicly readable makes the code more secure is obviously BS.

Case in point, take code which is closed source, make it open source, and it magically does not become more secure.

Security of code depends on the effort made to make it secure; aka more eyeballs, more secure.

The author is arguing the wrong thing, because automotive in particular is not about security, it is about safety. That said, security can influence safety, but the goal is completely different.

A system is considered functionally safe when any unexpected input leads to a situation where the system safely shuts down. So it does not matter whether we have real security or security by obscurity, because if something unexpected happens, the system shuts down and becomes unavailable. That is what we have in general automation. The same applies to a railway: if something happens, the system shuts down and an emergency brake is issued. That works because there is a higher-level safety designed in, in that the currently occupied rail is blocked and no other train can enter.

Then we have highly available systems, or mission-critical systems as we say. That is, for example, an airplane or a car. The engine of a car can shut down at any time; that is not considered harmful. But the brakes are mission critical. The brakes have to work under all circumstances.

As such, safety-related systems are highly regulated and require mandatory third-party review called certification. You need to certify with TÜV or another such organisation. Part of that mandatory review is to disclose the source. That means the review argument for GPLv3 is moot, as there is already mandatory review for safety-related systems. The license of the software does not matter.

The author then argues that the Tesla incident would not have happened with GPLv3 software. He argues it happened because of proprietary software, which is wrong: it would have happened just as well with GPLv3 software. This incident happened because of false claims by Tesla. This system was never designed to handle this kind of situation. MobilEye, the supplier behind this system, said this kind of situation should be handled by the 2018 version. Tesla gave a false impression about the completeness and safety features of its Autopilot. BTW, the same Tesla uses GPLv3 software in its MMI, which they have locked down.

Then the author argues with VW and their false claims regarding emissions. As proven, the ECU came from Bosch with proprietary software. The same ECU is used by many different OEMs, and Bosch does not disclose the inner workings, as this is the idea that Bosch sells. Even VW did not have the source, but used the ECU in a testing mode. So the problem here is also only partly related to the licensing of the software. It is more that you can use software for good causes and for bad causes. Either way, it is not related to licensing.

The argument that GPLv3 code is reviewed is also unproven and sometimes even wrong. In the past there are several examples where review didn't take place. It was more like "yeah, I think someone has reviewed it because it is open source, and I could too. But why waste time? Instead, build on top of it." So GPLv3 can lead you to false assumptions, unless you really do the review yourself. Here comes the question: how many have really done that in the past?

I have a rule, if you mention Therac 25 then you lose.

It's a cliche. If it was an example of a common problem then it wouldn't have to be mentioned every single time.

Hmm, just because it's mentioned frequently doesn't necessarily mean that it should be dismissed or deserve an automatic "you lose". It doesn't necessarily mean that other instances of the same problem aren't common or couldn't happen again, but it could mean that that particular instance is the most famous. Your so-called "rule" is a faulty mental shortcut.

> then you lose

You disengage and don't necessarily convince anyone of anything (i.e. "win").

> If it was an example of a common problem

It is arguably a common enough problem for reasonable definitions of common in its context. Hackable cars across the industry show that people still haven't learned their lessons. The fact that people haven't died as often in these situations shouldn't have a bearing on arguments against closed systems if flaws are only potentially fatal.

Password is the ultimate form of security through obscurity.

a) You apparently did not read the article.

b) Passwords are not "obscurity", they are "secrets". "Obscurity" in this context is about cryptographic algorithms/systems.

