The idea that software is inherently safer because it's released under an open-source license is far too simplistic. The most obvious reason is that the opportunity for independent review doesn't guarantee that review will happen. The wisdom of the crowd doesn't guarantee a third-party review will be better or more thorough than a solid first-party process. Open source provides the possibility of review, and that's all. Hypothetical eyes make no bugs shallow.
Edit - I'm not making a claim one way or the other about the relative safety of open vs proprietary software, I'm merely saying that the comparison is deeper and more nuanced than the way it is typically portrayed.
1 - http://www.bbc.com/news/technology-28867113
'Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.'
(OK, that was slightly OT!)
Obviously we need constitutional rights and bureaucrats to implement the will of the people as expressed in the polls. But why do we need representatives? Do you really think eg a Tea Party representative will protect Muslims from the majority better than the first amendment and courts?
No. The lesson, as usual, is that extremes rarely work out for us. The plutocrats will screw us if totally in control. So will politicians if they can get money from plutocrats or special-interest groups. So will democracy if something horrible is trending in popularity. As usual, the solution will be a compromise between various points in the design space, with more effective checks on the various interactions and the risks each poses to the others. I can't tell you what that design will be, but looking for it is the best investment with these things.
"Do you really think eg a Tea Party representative will protect Muslims from the majority better than the first amendment and courts?"
The First Amendment is subject to interpretation by representatives and/or courts. The Fourth, Fifth, and Fourteenth have been seriously watered down. They're among the best examples of the risk. A politician can outlast the media fervor of the moment, get advice from well-informed people, and have teams check the side effects of a law. The current situation exists because apathetic democracy allows them to get away with not doing that. Also, passing laws in exchange for bribes. That could all be stopped with enough voter action followed by legislation or alternative models. The benefits of representative democracy remain.
I'm not saying I believe it's the best system. I'm just saying plutocrats and mobs cause lots of problems it can prevent with less effort than constantly outsmarting plutocrats or fighting mobs. So, it's a consideration.
Or it exists because it is an emergent phenomenon that arises eventually. An increasingly polarized two-party system where each primary race is rigged against outsiders and voters are scared to vote third-party because each candidate is super scary to the opposing party. It's bound to happen eventually. In this case it happened in the US presidential race. And Congress has long had a 10% approval rating. One can make up reasons as to why it happened or accept that a representative democracy eventually reaches such states, and it's not clear how to get out of them with more representative democracy. Trying the same thing and expecting a different result.
I'm not saying I believe it's the best system.
Exactly, and I'm saying it's not. It's easier to fool all of the people some of the time (eg during elections) or heavily influence some of the people all of the time (as lobbyists for special interests do), but you can't fool all the people all the time. That is far more expensive.
Once again ask yourself: are eg Muslim and Mexican US citizens safer if Donald Trump gets elected and magnifies the desire of those who elected him, or if the population at large was polled for policy? Are the people of Gaza better off under Hamas because they were democratically elected once by those who showed up? Would the Nazis ever get into power if there were no representative positions to get into? Besides gerrymandering, poor voter turnout and other major problems, representative democracy eventually gives a giant hammer to some special interest or other.
The crowd would do a better job than experts at predicting policy outcomes: http://www.npr.org/sections/parallels/2014/04/02/297839429/-...
I can't prove that the neural network always behaves correctly, but with testing and statistics you can verify the network up to a chosen certainty (below 100%, obviously). Or you can actively search for problems with fuzzing techniques. All without solving the halting problem.
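That kind of statistical check can be sketched in a few lines. This is a hypothetical illustration, not any particular verification tool: `check_output` and `sample_input` are stand-ins for your own property checker and input generator, and the zero-failure case uses a standard one-sided confidence bound.

```python
import math
import random

def estimate_failure_bound(check_output, sample_input,
                           n_trials=10_000, confidence=0.99):
    """Black-box statistical testing sketch: draw random inputs, count
    property violations, and report an upper bound on the failure rate.
    check_output and sample_input are stand-ins for a real property
    checker and input generator."""
    failures = sum(1 for _ in range(n_trials)
                   if not check_output(sample_input()))
    if failures == 0:
        # One-sided bound: with the chosen confidence, the true failure
        # probability is at most -ln(1 - confidence) / n_trials
        bound = -math.log(1 - confidence) / n_trials
    else:
        bound = failures / n_trials  # simple point estimate
    return failures, bound

# Toy stand-in for a network that violates its property on ~0.1% of inputs
random.seed(0)
fails, bound = estimate_failure_bound(
    check_output=lambda x: x > 0.001,      # assumed safety property
    sample_input=lambda: random.random(),  # assumed input distribution
)
print(fails, round(bound, 5))
```

The point isn't the particular bound; it's that you can trade test effort for certainty without ever needing to decide anything undecidable.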
So you'll end up with ANN's reviewing other ANN's which are reviewed by other ANN's at the end.
You can test ANN's by simply fuzzing them and observing the output, but getting much insight into why an outcome was chosen for a given data set is near impossible.
These are also just the first steps. Not bad given what hardware and software verification used to look like a few decades ago. I think the verifiable ANN's will be more structured or constrained in their development or even use cases.
"it's simply beyond the human capability to review. So you'll end up with ANN's reviewing other ANN's which are reviewed by other ANN's at the end."
That implied it was beyond human capability to review ANN's to the point we'd need ANN's to do it. In the paper, they give a number of deterministic methods to review ANN's noting some were successful by the teams using them. Countered both of your claims in that context.
"This means that having the ANN being open sourced does not add additional security by having the code inspectable by everyone like with traditional software."
I agree that most of the problems won't be caught here. More about training set and safety monitors. However, the paper shows methods like rule extraction, visualization, and simulation that have aided in assessing and improving neural networks. These work on that internal state that's incomprehensible at first glance. It's also usually stored in HW logic, the SW code, or some data format. Having those probably facilitates analysis of an ANN's performance. Assuming it's the kind that's analyzable at all. ;)
Note: Code-level errors, either by hand-written or auto-generated engines, can also invalidate an ANN's behavior. Vast majority of software flaws occur right here. Obviously, some static analysis or safety transformations can increase assurance of correctness of overall system. Should be combined with other V&V methods.
If you get the source code of any traditional software, reviewing it can be done by humans regardless of how complex it is. With ANN's that's not really the case, simply because of the nondeterministic nature of the system.
You can validate the "correctness" of an ANN but it's not done in the same manner as reviewing traditional software.
You have to do both: validate the non-deterministic aspects of the ANN logic and validate its implementation as code. Makes them a lot harder. One reason I avoid them.
So, not only can you prove a program halts using a verified halter, the techniques for doing so have prevented and detected many real-world problems. One of those situations where abstract math ignores simple, concrete solutions. :)
Safety critical automotive software is going to tend to be an easier case, since it's quite likely to be extremely time bounded.
You can, actually, the result would be extremely complex, though.
You're ignoring the biggest issue in your whole tangent: real-world engineers with sense won't use NN's in systems unless the problem itself is non-deterministic and other methods don't work. Personally, I push control theory, state machines, ladder logic, whatever for applications like this. You can deterministically analyze them. Whereas problems with too much uncertainty or requiring approximate solutions will always have an error margin. You personally have been pushing this thread to focus as much as possible on the NN case while ignoring 99% of the real-world use cases representing how the OP topic will usually play out. You also act like the inherent error in the tooling is bad when the problem domain itself has inherent error. So, I decided to address your tangent.
The first solution, safety monitors, goes way back in safety-critical use. That's to have a simple system checking the ranges or other behavior of the NN that's able to override a faulty NN with simpler logic. This can be done for GPS-guided drones that usually move in straight lines or smooth curves. The NN part is reset, diagnosed, or deactivated while the fail-safe system takes the plane on a simpler path or out of the area.
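A minimal sketch of such a monitor, assuming a hypothetical scalar command and certified envelope (none of these names or numbers come from a real autopilot):

```python
import math

def safety_monitor(nn_command, envelope, fallback_command):
    """Hypothetical safety monitor: pass the NN's command through only
    if it is a finite number inside the certified envelope; otherwise
    substitute the simple fail-safe command and flag the NN as faulted
    so it can be reset, diagnosed, or deactivated."""
    lo, hi = envelope
    try:
        cmd = float(nn_command)
    except (TypeError, ValueError):
        return fallback_command, True
    if math.isfinite(cmd) and lo <= cmd <= hi:
        return cmd, False          # NN output accepted
    return fallback_command, True  # override engaged

# e.g. a steering command limited to +/- 15 degrees, fail-safe = straight ahead
cmd, faulted = safety_monitor(float('nan'), (-15.0, 15.0), 0.0)
print(cmd, faulted)  # 0.0 True
```

The monitor itself is deterministic and small enough to verify exhaustively, which is the whole trick: the hard-to-analyze component never gets the last word.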
Second part checks the nets themselves. Again, given a non-deterministic problem domain, the nets will be non-deterministic with the best one can do getting the error margin way down or within fail-safe bounds. This other paper I found summarizes a lot of V&V methods for NN's:
Safety monitors. Training data or updates with constraints built in. Automated, test-case generation tweaking expected data. Rule extraction. System visualization. Simulation. All should contribute to safer NN's. They can't be perfectly correct, though, because vision, complex flying, thinking... none of this is 100% accurate no matter what method you use.
The decision itself might be made using an amount of data completely overwhelming for human analysis, processed at inhuman speed.
But even in safety critical tasks I have seen NNs and MDPs used to derive a good enough solution, then training turned off and the system certified (e.g. by running a large gold standard collection of test cases)
I broke it down here with a quick revision just now:
And good luck fixing heartbleed in a proprietary product in the way that occurred for OpenSSL - the entire codebase got looked at and we got BoringSSL and LibreSSL.
What would you prefer: code with an obscure bug from spaghetti code so bad it kills people, or code looked at by enthusiastic amateurs and professionals alike? Car people tend to enjoy pushing the envelope, and they are doing this with software already!
On the flipside, if few use your project, bugs can last for ages. I've had this experience also: distributing projects with free source that don't work, and not finding out until much later, when I noticed it myself.
Making source free is not bad for security, because it's not about high visibility. Security problems are more frequent when the vector of attack is larger, e.g. when your car can be stolen because they didn't anticipate misuse of the entertainment system that was accessible via the carrier.
Related to that topic, Chrysler has recently been paying hackers to try to find security flaws: https://www.wired.com/2016/07/chrysler-launches-detroits-fir...
If the source is free or open, then people can find flaws more easily, so that they can be fixed. You want to be able to find bugs fast and fix them quickly. When you don't, that's when you get into serious trouble.
As a warning though, source being free and open and well-used doesn't mean that bugs can't go a very long time without being seen, after which point they are everywhere: https://en.wikipedia.org/wiki/Shellshock_(software_bug)
What are the bad effects to society for software transparency? Because if open software offers noisy positives with minor negatives, then we should prefer open as the default heuristic until circumstantial information says otherwise.
Some projects, whether open or closed, are more valued by institutions or businesses, and consequently some projects will justify more money and eyeballs. Also, what businesses and institutions value can be unintuitive. When there's a bug in Linux, that might immediately degrade a business product, whereas a security problem with SSH might not ever impact a business -- even if important customer data is stolen. I doubt Target experienced a real business problem after its large data theft incident with customer financial data.
Therefore, 3rd parties contributing to software will probably be far more prompt for bugs in projects that directly impact business success vs. OpenSSH, but at least the door for extra benefits is open, whereas for closed software the door is shut.
What are the bad effects to society for software transparency?
One is companies that are less willing to write code or go to market because their code IP wouldn't be competitive given existing players. Smaller new competitors' code would be quickly copied by larger existing players with their larger teams into larger projects that are already used by everyone.
Then there is the case of artistic one-off projects.
Imagine a $50 video game releasing its source code immediately or as it is being made. There would be another company that just copies the code or the textures and distributes quickly before the last level is done.
at least the door for extra benefits is open, whereas for closed software the door is shut
This is true, but some doors that you don't want to be open are also shut. It takes a lot of effort to reverse engineer something. I bet all the OpenSSL bugs would be harder to find if it wasn't open source. Especially if binaries weren't available either - something like only an API layer existing. Maybe Heartbleed wouldn't have been found until OpenSSL was replaced by something else.
Finally, something like GNU parallel is very commonly used. On the other hand, some random utility that is on Github and isn't very popular probably isn't looked at by nearly as many people. Out of 100 people who use some software, about 1% looks at source code. So then if you have 1000 people use your tool, only 10 people look at the tool's source code and 1 of them is a malicious actor looking for exploits. I have personally found several bugs in open source software that could have been security exploits, but haven't reported them because I was lazy at the time.
Having said all this, most non-application-specific software could be open source and it would probably help it more than hurt it.
~ artists will create fewer video games
To me these are arguments for making closed source illegal, especially the second. Among the many benefits of the first: fewer actors warping society to capture rents from software they control, eg tax software vendors. On the second: it destroys one incentive to create addicts.
Maybe not with the software most of the world doesn't care about but I'd bet that there will always be someone willing to look real hard at software being put in to popular automobiles.
Reminds me of the common confounding of elections and democracy.
Try again, that is bsd openssl code, probably proprietarized, not gpl.
Well, no. No it isn't. A higher possibility of more people seeing the code translates into a higher probability of bugs getting fixed. Of course, there's no guarantees, but more eyes means more chances of bugs getting caught. And that means safer.
That doesn't. It means potential to be safer if all kinds of things line up. Vast majority of FOSS code is crap like any other code. More eyes didn't change that. What's needed is design/implementation work, qualified review, time, and trustworthiness of reviewers & distribution.
1. Rigorously designed & reviewed open-source is greater than...
2. Rigorously designed & reviewed closed-source is greater than...
3. Typical, open-source is greater than...
4. Typical, closed-source is greater than...
5. Cloud infrastructure based on Concurrent DOS written in a MOL-360 language.
What!? That's dishonest. Most people doing safety-critical development use robust tooling intended for it instead of FOSS, or are stuck with proprietary tools attached to their platforms. So FOSS will hardly cause harm almost by definition, simply because it sees so little use there. Kind of like how Mac OS 9 and Amiga are "secure" because you see no huge botnets. ;)
Far as harm, it's done plenty. Most GPL software, just like proprietary, is of crap quality. I've lost years of data to an open-source solution. Many people were hacked due to FOSS solutions where "many eyeballs" didn't even try to look for problems. Extrapolate that to cars, planes, and trains. Then think if you want to use one afterwards. I won't. Give me DO-178B Level B/A software over that any day.
"until you can prove that proprietary software assures safety in a way that FLOSS cannot"
Wait, has this author heard of DO-178B and related standards? They do exactly what he says. All are proprietary, too, although they could be FOSS'd. How about FOSS software people put their stuff through a rigorous prevention and 3rd-party evaluation process before expecting us to believe it's high quality or safe? Cuts both ways. I do know a handful that would make the cut. Need exceptions for OSS like that. Most won't, though.
Specifically, the automotive industry has made a series of arguments that proprietary software is ultimately safer than FLOSS, and all their goals are to lock-down FLOSS in various ways to prevent the nefarious from "hacking" the vehicle.
The vehicles are all still hackable, and many have been modified by people for both reasonable and nefarious purposes. The FLOSS situation won't change anything, and there aren't even any examples yet of hacks where FLOSS made the situation worse. It's an assumption they make without full information because of their inherent pro-proprietary bias.
I don't know if anyone has a good argument against that, but I haven't seen one yet. The argument isn't that open source makes your software bulletproof or even safer just by being open source; rather, if you designed your software with proper security as a goal, you shouldn't mind releasing your code.
Both proprietary and open-source software have likely killed people, and likely will do so again. No amount of eyeballs will help if no one is looking, nor will hiding all the bad code away so no one can see it. Bad engineering practice is bad engineering practice.
This sounds more like a rhetorical device, in which debater #1 has attempted to frame the debate in terms of debater #1 being correct unless debater #2 can prove that they're wrong. Debater #2 then tries to reframe the terms of the debate such that debater #2 is right until proven wrong by debater #1.
In both cases it's more a form of sophistry vs. an opponent than it is a way to make a case to the public. Using a dishonest rhetorical device to counteract someone else's dishonest rhetorical device rubs me the wrong way, because it feels like both sides are trying to twist facts in their favor, and are not in good faith trying to convince me that they're right and the other person is wrong. Debater #2 is responding to sophistry with more sophistry, rather than attacking debater #1's sophistry head-on.
You mean used against?
The author suggests that the people opposed to the use of GPLv3 are making false arguments on the grounds of safety. The relevant difference being that GPLv3 requires not only the release of code, but the ability to use modified versions of code in situ. E.g., by replacing your car's firmware.
-With OSS, your bugs are more likely to be caught.
-Therefore, it's harder to maliciously hack the vehicle.
-Therefore, it's less likely the software will fail and kill someone.
-Therefore, not opensourcing your code is a liability, because not doing so makes your product killing someone more likely.
Or the auto companies could just admit the real reason why they don't want to OS their code: They're afraid of losing control of their product. Of people, even if they've voided the warranty, using their product in ways they didn't explicitly allow. Of losing their monopoly on support.
If you're going to be evil, at least give it to us straight...
I think FOSS is the way to go, but I think the main problem is with the many-eyes theory. To cut to the chase, I think the future of code lies in LOC reduction and simplification of code in order to combat the problem of too much complexity.
The Linux kernel is now at 10 million LOC+! I don't care if you are Red Hat, you don't have enough eyeballs to properly audit that shit!
That's what I think the future of software should be: simplification of code, maybe some refactoring, so that the barrier for the average user to look at and understand the code is lowered.
That and perhaps an AI/machine learning code reviewing system that can perform the function for those not able but that still want the benefits of foss.
To be fair, most of that is in device drivers and other optionally compiled or non-critical pieces of code.
The common code path is extremely well tested and audited.
For instance, when I install Windows on a computer, I will often have to install additional chipset drivers (odd USB3.0 controller, motherboard chipset stuff, fan control).
When I install Linux, all of that stuff works out of the box, no drivers necessary: it's rolled right into the kernel.
I say this as a developer of an open source platform. When it's just starting out, it has the same number of holes as anything else -- but everyone can see the code and find the holes. It takes years, and hundreds of man-years, to find and fix 99% of the holes. Until then, the source code is all in the open, making it an easier target for an attacker than a closed-source product. In the latter, you just have to close the "obvious" holes.
True, with very old or well-funded open-source products, this is not the case. But MOST open source software isn't like that.
In general, I would say that open source is preferable to closed source, but there are situations where this is not always possible for the producing companies.
Finally, there is the problem that if the owner of the car is free to modify his car's software (which in the abstract sounds like something he should be able to do), then people who should not modify their car's software will do so anyway. One should not just tinker with the algorithms used to control the engine, braking, and stability control.
Why is that a problem that has to be solved? In purely mechanical cars, people already can and do tinker with the engine and brakes. But only a tiny fraction of people do, and those usually take care not to risk their lives. In the limited cases where modifications cause problems for other people, regulations are enforced through scheduled inspections and random stops of suspicious vehicles. This system has worked quite well for the last hundred years or so.
If software really is different and shouldn't be modifiable, I think this needs a little more justification.
Those are not comparable things. The correct comparison is between a mechanic and a programmer (after all, anyone who modifies his software is a programmer, perhaps an unskilled one), or between an automotive part and a piece of code. Using the correct comparison, we see that we already permit anyone to work on his own vehicle, and that he can put anything in it he wishes. Software should be no different.
> Then, there is the problem, that mechanic parts are easy to inspect
Some of them are. Some look just like the correct parts, but were manufactured to incorrect tolerances. These parts wouldn't be obvious in a visual inspection.
> software modifications are pretty impossible to detect. Short of checksumming the whole car software, it cannot be done.
Software hashes aren't rocket science: your post and mine were both hashed at least once. Indeed, software hashes make detecting changed software easier than detecting swapped physical parts.
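A toy illustration of the point, using SHA-256 (the byte strings here are made-up stand-ins for real firmware images):

```python
import hashlib

def firmware_fingerprint(image_bytes):
    """Any change to the image, down to a single bit, changes the digest."""
    return hashlib.sha256(image_bytes).hexdigest()

# Illustrative contents standing in for real firmware images
stock = firmware_fingerprint(b"stock ECU image v1.0")
modified = firmware_fingerprint(b"stock ECU image v1.0 + tuner tweak")
print(stock != modified)  # True: the modification is trivially detectable
```

Compare that to inspecting a physical part machined to subtly wrong tolerances: the software case is the easy one.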
Software is an entirely different beast. It might be quite trivial to modify parameters, but verifying that the software still works reliably might require a whole testing department.
Yes, software can be checked for modifications via hash codes, but how do you expect a cop to run a hash sum on the vehicle software? You would have to read out the memory itself, because how could you trust any possible self-test of the software?
So, a car could only allow signed software to be loaded, but that again seems to be incompatible with the GPL 3
That responsibility (for running full tests, or being responsible for damages caused by not running full tests) lies squarely on the modifier/owner. We don't need to erect a whole new set of laws because somebody could make his engine run poorly.
> Yes, software can be checked for modifications via hash codes, but how do you expect a cop to run a hash sum on the vehicle software?
I don't. Why should a cop care what software I'm running, any more than he cares what brand of brake light I buy?
It's the owner's car. If he modifies it to be unsafe, then he's responsible for the damages he causes. If he doesn't, he's not. Checksumming may be useful in a court case to prove modifications (although diffing would work just as well).
So there are already popular modifications done to cars by their owners, which do get checked by cops for their certification, and often enough people do not have the necessary paperwork. Simple example: some people install custom wheels which are the wrong size for that car. This poses a danger as the drivability might suffer.
If you are modifying any software required for the safe operation of the car, there is quite a potential for causing harm to others, and that is where the officials have to take notice of it.
Any flash chip that holds vehicle software gets a tiny hardware component that once a minute hashes the entire memory and broadcasts the checksum over the car's message bus. Somewhere in the car there's a read-only hardware interface to the message bus that cops and mechanics can use.
If you don't trust the message bus you need a bit more silicon to provide a signed challenge-response protocol, but then we're still talking about cents per flash chip.
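A sketch of what that challenge-response could look like, using HMAC in software purely for illustration; the device key, message layout, and function names are all assumptions, and real hardware would implement this in dedicated logic rather than Python:

```python
import hashlib
import hmac
import os

DEVICE_KEY = b"per-chip secret burned in at manufacture"  # assumption

def chip_respond(challenge, firmware_image):
    """What the hypothetical flash chip computes: an HMAC over the
    inspector's fresh challenge plus the hash of its memory contents,
    so a replayed or forged checksum on the bus won't verify."""
    fw_hash = hashlib.sha256(firmware_image).digest()
    return hmac.new(DEVICE_KEY, challenge + fw_hash, hashlib.sha256).digest()

def inspector_verify(challenge, response, expected_fw_hash):
    """What the roadside or workshop tool checks against the certified hash."""
    expected = hmac.new(DEVICE_KEY, challenge + expected_fw_hash,
                        hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

challenge = os.urandom(16)             # fresh per check, prevents replay
firmware = b"certified ECU build 4.2"  # stand-in image
response = chip_respond(challenge, firmware)
ok = inspector_verify(challenge, response, hashlib.sha256(firmware).digest())
print(ok)  # True for unmodified firmware
```

The fresh random challenge is what makes an untrusted message bus tolerable: a compromised ECU can't just record and replay yesterday's valid checksum.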
And of course, this all would basically ban all cars with modified software from the roads. So you can modify your cars software, but not use the car in public traffic.
You could use this to limit modification of some crucial systems (ABS, airbags, etc. Those will always be on separate microcontrollers anyway because of their hard real-time requirements).
Or as you said you could keep all modified cars out of public traffic, while still allowing experimentation on private roads. Both variations are more heavy-handed than I would prefer, but a lot better than the status quo.
>software modifications are pretty impossible to detect
I think there are two classes of software modifications: those that negatively impact the environment or other people and those that don't. The first is by its nature easy to detect from the behavior of the car, and I see no reason to regulate the second class.
Some people will introduce bugs, but I'm not convinced that's a problem that needs to be solved; self-preservation will make people careful not to introduce bugs in the brake system, and serious problems should be few and far between.
Assuming they manifest themselves before a crash happens. When the software modification only impacts the ESP or ABS system in an emergency situation, it is too late. Only extremely thorough testing can make software fit for this kind of purpose. And experience shows that even then, bugs stay unnoticed.
The list of open source failures is long. It is not true in and of itself that the open source "more eyes" argument is correct. Do you audit the source code of everything you compile for security issues?
One could convincingly argue, just based upon DROWN, FREAK, Logjam, and POODLE that code in the open doesn't get inspected and those who do inspect it miss an extensive number of security issues.
I'm not sure how many people were harmed by this bug in MRI software, but the bug resulted in false positive rates of up to 70%
So yes, GPL'ed software may be used on critical applications the same way as every other piece of hardware and software: via special contracts.
If you get any IC datasheet, or even an application-specific brochure such as (1), you will see a notice saying explicitly that.
For example, we are told repeatedly to use obscure passwords, to lock our phones and tablets with 4-digit PINs and even screen-swiping gestures; bank account information is protected by numbers and passwords! All of this is, in most cases, an obscurity scheme, and that obscurity scheme is our single point of security failure before complete system access.
"Heartbleed" was present in OpenSSL for over two years, out in the open, and nothing was detected. ...
p.s. slightly off-topic, but can you accurately count the number of ball passes? https://www.youtube.com/watch?v=47LCLoidJh4
Of course there are secrets in any access to a secure system. But they should be only those parts which are user specific (passwords, keys,...).
And it doesn't slow people down anywhere near enough, hence all the talk about DRM, encryption, warranty voiding and anti-circumvention laws.
There is nothing wrong with allowing people to modify software, the issue is when people modify software to break laws (environmental law, in this case).
The solution is going to be some form of compliance certification/testing. The manufacturer provides the default software and obtains a certification to prove it complies with the laws. If the user wants to modify the software, they are going to have to prove that the replacement software also complies with the laws and get a certificate.
During annual vehicle inspections or licensing, they can check that certified software is installed. Or maybe the ECU's bootloader only allows certified software to be installed, unless a developer mode is enabled (which would require the owner to keep logs to ensure compliance).
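One way to sketch that bootloader check: a hash allowlist of certified builds plus a developer-mode escape hatch. Everything here is illustrative; a real system would use asymmetric signatures from the testing lab rather than a baked-in hash list:

```python
import hashlib

# Hashes of builds approved by a hypothetical 3rd-party testing lab
CERTIFIED_HASHES = {
    hashlib.sha256(b"approved build 1.0").hexdigest(),
    hashlib.sha256(b"approved build 1.1").hexdigest(),
}

def bootloader_accepts(image, developer_mode=False):
    """Refuse any image whose hash isn't on the certified list, unless
    the owner has enabled developer mode (with the compliance-logging
    obligation handled elsewhere)."""
    if developer_mode:
        return True
    return hashlib.sha256(image).hexdigest() in CERTIFIED_HASHES

print(bootloader_accepts(b"approved build 1.1"))    # True
print(bootloader_accepts(b"homebrew tune"))         # False
print(bootloader_accepts(b"homebrew tune", True))   # True
```

The developer-mode branch is what keeps this compatible with owner tinkering: the gate is about road certification, not about forbidding modification outright.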
See the DMCA and licensing agreements. Modifying your stuff might be a felony someday. Best to be sure the right to modify or repair is right there in the license, irrevocably and transferring to the next buyer automatically. Not happening right now.
I seem to recall that a similar argument was used against Jon Johansen at some point, but I don't feel like trying to dig up early-2000s history on the web today (IME going back to ca. 2003 is okay, but before that a lot of things have fallen down the memory hole).
In a similar vein, if I were to physically modify the engine in such a way as to make it faster but fail environmental or safety regulations, that could also have financial or legal repercussions for me (or Volkswagen, to cite a recent example :). The same argument could be made against modifying automotive software- vehicles are certified by government agencies around the world based on the idea that they will perform a certain way WRT safety and emissions, therefore software behavior has to be just as unchanging and predictable as hardware behavior. The fact that people can and do hack their cars all the time doesn't change the regulatory environment for car companies.
I do believe that embedded software should be as open, modifiable, and testable as possible, but I don't think people claiming the right to tinker as a basic human right are going to change anyone's minds about this. The biggest impediments to (for example) car companies making their ECUs open source and hackable have nothing to do with a dogmatic attachment to proprietary software and security-through-obscurity (although those concepts have their adherents, misguided as they may be), and much more to do with compliance with government regulations and minimizing their potential liability.
There is at least a perception that having open source, hackable ECU software would be all downside with no upside to the companies selling those vehicles. The way to change this perception is to show it as being demonstrably wrong, rather than to simply claim a right to tinker. The best way to get open source ECUs is to either make the consumer market care about this (not likely) or to update the regulatory environment in such a way that car companies have a non-abstract motivation to go open source.
This is not unlike the dreaded binary blobs required to use certain 802.11 chips on open source operating systems: they exist because of FCC requirements, not because the vendors love binary blobs. Vendors are allowed to ship software-defined radios, but the only way they can guarantee that their SDRs don't broadcast on illegal frequencies is to lock down the software that controls the hardware. It's not an ideal solution, but it's the only solution they've got aside from spending even more money to ensure safety at the hardware level, which wouldn't do anyone any good.
It's a sensible counterpoint. The modification I'd make, for safety-critical systems, is that you have a right to have modifications performed by professionals who aren't the original seller, or to perform them yourself if you are such a professional. That covers the two biggest use cases: repairs and enhancements, respectively.
EDIT to add:
" is to lock down the software that controls the hardware. It's not an ideal solution, but it's the only solution they've got aside from spending even more money to ensure safety at the hardware level"
Not true. They could do it right at the hardware level with simple bounds checks burned into logic, with the checks' limits set at manufacture or OEM configuration using something like anti-fuses. That's the kind of tech that's in 8-bit and 16-bit MCUs retailing for a few bucks; it would cost almost nothing, and it would be more secure given its simplicity. They aren't doing it because they don't care, given the demand side of this issue. Plus, lock-in among oligopolies brings them long-term financial benefits. ;)
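To make the claim concrete, here is a minimal sketch of the kind of check described above, modeled in Python. The class names, band limits, and anti-fuse model are all illustrative assumptions, not any vendor's actual design; the point is just how little logic the check requires.

```python
class OneTimeLimits:
    """Models anti-fuse storage: band limits can be burned exactly once at manufacture."""

    def __init__(self):
        self._burned = False
        self.min_khz = None
        self.max_khz = None

    def burn(self, min_khz, max_khz):
        # Anti-fuses are one-time programmable: a second burn attempt fails.
        if self._burned:
            raise RuntimeError("limits already burned at manufacture")
        self.min_khz, self.max_khz = min_khz, max_khz
        self._burned = True


def tune_allowed(limits, freq_khz):
    """The entire 'lock-down': a pair of comparisons that trivially map to hardware logic."""
    return limits.min_khz <= freq_khz <= limits.max_khz


limits = OneTimeLimits()
limits.burn(2_400_000, 2_500_000)   # hypothetical 2.4 GHz ISM band, in kHz

print(tune_allowed(limits, 2_450_000))  # True: inside the burned band
print(tune_allowed(limits, 5_200_000))  # False: rejected no matter what the driver software asks for
```

Because the limit lives below the software, the driver could be fully open source without any risk of illegal transmissions.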
But if the government says "All software installed on car ECUs must meet emissions standards", then there is no conflict; in fact, this is already implied by the current laws.
The GPLv3 'should' also be fine with an "All car ECUs must only accept software that is signed and approved by a 3rd-party testing laboratory" law. You would face the exact same barrier to installing software on the ECU as the actual manufacturer of the car.
The main example where the GPLv3 might conflict with a government law is when a law mandates that "the car's ECU must be locked down in a way that only the original manufacturer can change the software".
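The acceptance rule being described, where the ECU installs an image only if it carries a valid signature from the approved testing lab, can be sketched as follows. To keep the example self-contained, it uses HMAC-SHA256 as a stand-in for a real asymmetric signature scheme (a production ECU would instead verify an Ed25519 or RSA signature against the lab's public key); the key and firmware bytes are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared key standing in for the testing lab's signing key.
LAB_KEY = b"testing-lab-secret"


def lab_sign(firmware: bytes) -> bytes:
    """What the approved testing lab does after auditing a build."""
    return hmac.new(LAB_KEY, firmware, hashlib.sha256).digest()


def ecu_accepts(firmware: bytes, signature: bytes) -> bool:
    """What the ECU bootloader checks before flashing an image."""
    expected = hmac.new(LAB_KEY, firmware, hashlib.sha256).digest()
    # Constant-time comparison to avoid leaking the valid signature byte by byte.
    return hmac.compare_digest(expected, signature)


image = b"ecu firmware v1.2"
sig = lab_sign(image)
print(ecu_accepts(image, sig))                # True: lab-approved build
print(ecu_accepts(b"home-tuned build", sig))  # False: unapproved modification rejected
```

Note that nothing in this rule cares who wrote the firmware; a GPLv3 build audited by the lab passes the same gate as the manufacturer's own.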
It won't be a fixed certification procedure. One car manufacturer might hire an independent auditor to check over every line of code, while the manufacturer that (perhaps unwisely) uses a neural network in its emissions control system will have to hire an independent auditor who is comfortable putting their stamp of approval on the neural network.
Actually, it's way easier to certify a neural network than a human: you can simulate millions of situations. You could even certify, line by line, the code that ran the training environment.
As long as the government (or certifier) is happy with whatever method was used to create the report, a certificate is issued.
Even with the most stringent software development and testing procedures in the world, NASA is unable to produce bug-free software for their spacecraft (which is why they have procedures to recover from bugs and patch software).
All the certification is saying is: no bugs were found during testing; based on the amount of testing, the quality of the code, and the testing procedures, the probability of a fatal bug is x% (where x is a really low number with lots of zeros); and we deem x% to be an acceptable risk. In the case of autopilots, we only really need x% to be below the rate for regular human driving.
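The arithmetic behind that acceptance criterion can be sketched in a few lines. All the numbers below are made up for illustration; a real certifier would use a proper statistical confidence bound and a regulator-defined baseline, not this back-of-the-envelope version.

```python
def estimated_failure_rate(failures, trials):
    """Crude upper estimate of the per-trial failure probability from test results."""
    if failures == 0:
        # "Rule of three": with zero observed failures, 3/n is an
        # approximate 95% upper confidence bound on the true rate.
        return 3.0 / trials
    return failures / trials


# Hypothetical baseline: human fatality rate per mile driven (illustrative value).
HUMAN_FATALITY_RATE = 1.25e-8

# Suppose the autopilot logged a billion simulated miles with no fatal bug found.
rate = estimated_failure_rate(failures=0, trials=1_000_000_000)

# Certification passes only if the bounded rate beats the human baseline.
print(rate < HUMAN_FATALITY_RATE)  # True
```

The point is that "certified" never means "bug-free"; it means the bounded risk was deemed acceptable relative to the alternative.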
No software change could have saved that driver. Tesla has a Level 2 autopilot (adaptive cruise control with lane following). It's not within the job description of such an autopilot to avoid all possible crashes; it's the job of the driver to be alert and ready to take control at all times. Tesla's Autopilot doesn't even use neural networks outside of the camera sensor.
To have a Level 3 autopilot that would be expected to deal with this sort of incident, Tesla would need far more sensors and much smarter software, so the car would actually have some hope of detecting the faulty data from one sensor and taking the correct action. (The truck was only within view of the camera; the radar is calibrated to only detect things at bumper height.)
To me it seems that he is suggesting that proprietary software can be more dangerous than open-source software.
So from there you can infer that he believes open-source software can be safer than proprietary software.
If deaths(proprietary) >= deaths(OSS), then safety(OSS) >= safety(proprietary)
Case in point: take code that is closed source and make it open source; it does not magically become more secure.
The security of code depends on the effort made to make it secure; i.e. more eyeballs actually reviewing it, more security.
A system is considered functionally safe when any unexpected input leads to a situation where the system safely shuts down. So it does not matter whether we have real security or security by obscurity: if something unexpected happens, the system shuts down and becomes unavailable. That is what we have in automation in general. The same applies to a railway: if something happens, the system shuts down and an emergency brake is issued. That works because a higher level of safety is designed in, namely that the currently occupied track is blocked and no other train can enter it.
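The fail-safe pattern described above can be sketched very simply: any input the controller does not recognize drops the system into its safe state instead of guessing. The command names and safe state here are illustrative assumptions, not any real railway interface.

```python
# Safe state: the system prefers being unavailable over being unsafe.
SAFE_STATE = "EMERGENCY_BRAKE"

# The complete set of inputs the controller was designed and certified for.
VALID_COMMANDS = {"accelerate", "coast", "brake"}


def railway_controller(command):
    """Functional safety in miniature: unexpected input leads to a safe shutdown."""
    if command not in VALID_COMMANDS:
        # Unexpected input: issue the emergency brake rather than
        # attempt to interpret input the system was never designed for.
        return SAFE_STATE
    return command


print(railway_controller("coast"))    # normal operation: "coast"
print(railway_controller("garbage"))  # unexpected input: "EMERGENCY_BRAKE"
```

Note how this makes obscurity irrelevant: an attacker who learns every detail of the controller still can't push it outside its designed envelope, only into the safe state.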
Then we have highly available systems, or mission-critical systems as we say: for example an airplane or a car. The engine of a car can shut down at any time; that is not considered harmful. But the brakes are mission-critical. The brakes have to work under all circumstances.
As such, safety-related systems are highly regulated and require a mandatory third-party review called certification. You need to certify with TÜV or another such organisation, and part of that mandatory review is disclosing the source. That means the review argument for GPLv3 is moot, as there is already mandatory review for safety-related systems; the license of the software does not matter.
The author then argues that the Tesla incident would not have happened with GPLv3 software, i.e. that it happened because of proprietary software. That is wrong: it would have happened just as well with GPLv3 software. This incident happened because of false claims by Tesla. The system was never designed to handle this kind of situation; MobilEye, the supplier behind it, said this kind of situation should be handled by the 2018 version. Tesla gave a false impression of the completeness and safety features of its Autopilot. BTW, the same Tesla uses GPLv3 software in its MMI, which it has locked down.
Then the author argues with VW and its false claims regarding emissions. As proven, the ECU came from Bosch with proprietary software. The same ECU is used by many different OEMs, and Bosch does not disclose its inner workings, as that is the value Bosch sells. Even VW did not have the source, but used the ECU in a testing mode. So this problem is also only partly related to the licensing of the software; it is more that you can use software for good causes and for bad ones. Either way, it is not a matter of licensing.
The argument that GPLv3 code is reviewed is also unproven and sometimes simply wrong. There are several past examples where review didn't take place; it was more like "I assume someone has reviewed it because it is open source, and I could too, so why waste time? I'll just build on top of it." So GPLv3 can lead you to false assumptions, unless you really do the review yourself. Which raises the question: how many people have actually done that?
It's a cliche. If it were an example of a common problem, it wouldn't have to be mentioned every single time.
You disengage and don't necessarily convince anyone of anything (i.e. "win").
> If it was an example of a common problem
It is arguably a common enough problem, for reasonable definitions of common in this context. Hackable cars across the industry show that people still haven't learned their lesson. The fact that people haven't died as often in these situations shouldn't have a bearing on arguments against closed systems, if the flaws are only potentially fatal.
b) Passwords are not "obscurity"; they are "secrets". "Obscurity" in this context refers to cryptographic algorithms/systems.