Indeed, the first thing that came to mind when I saw this presentation was RMS' comment, as mentioned in this article.
The paper both are based on is at https://www.softwarefreedom.org/news/2010/jul/21/software-de...
edit: Also, similar points can be made about other embedded and safety-critical software. Security through obscurity in automobiles is scary; some materials on this are at http://she-devel.com
While the malicious can certainly figure out how to abuse a system without source code, the owner/author doesn't have to hand them the means of abuse on a silver platter.
Yes, end users (me being one) have an interest in identifying flaws in their physically internalized, lifespan-affecting software. The company making the product has an interest in limiting access to that source code. And there are plenty who would rather use the source code to harm the company than to benefit the users.
This is why we (in the USA) have the FDA: recognizing that the public has an interest in reviewing a product's engineering for safety purposes, yet respecting the company's need to ward off the malicious, representatives of the public are sent in to review the product for corrective improvement without exposing the company to malicious harm.
Yes, see arguments on "security through obscurity".
End of discussion.
No doctor will want to be associated with installation of a life-support device which the patient can reprogram at will. Every change, starting from creation of the first blank main.c file, MUST be considered, deliberated, implemented, reviewed, tested, verified, validated, and documented by certified personnel to the satisfaction of everyone from patient to surgeon to maintenance doctor to board of directors to sovereign jurisdictional government. We're not talking about "oops, reboot" or "get the backup" or "revert to the previous version" failures should a mistake be made; we're talking "patient will die in 30 seconds flat if you screw up".
Writing a FOSS version of the software keeping me alive right now sounds lofty, but I don't think the surgeon who installed my pacer would have done so if he thought I might actually replace the software outright, and the doc & techs who maintain it certainly wouldn't continue to.
The potential positive benefits of an open-sourced pacemaker are obvious and well-expressed. Oft missed are the litigative & metabolic nightmare scenarios. One dangling pointer and I'm dead.
Developing free replacement software for a pacemaker is kinda pointless without the ability for the consumer to reprogram it.
Because some people find surreptitious conspiratorial killing beneficial, be it involving large amounts of currency or mere amusement.
Because an "accidental" bump from a stranger, with suitable planning & equipment, can have "unrelated" terminal results seconds or days later.
Because that planning becomes a whole lot easier for someone who has the source code.
While he was having a shower, he got shocked; he assumed it was the electricity. After a few more shocks, he quickly realized it was his pacemaker. He took a number of additional shocks before he was sedated in a hospital.
Now, he's still got the pacemaker inside, but it's off. It turns out that the company making the pacemakers knew that a small percentage were defective and that things like this could happen (unfortunately, I don't know which company that is).
He's lucky he's still alive. He said that he's never going to have another (active) pacemaker - he'd rather die than go through such a painful experience again.
Would an open-source self-driving car be safer than a closed-source version?
Because a lot of testing is required to make something like that safe. For one, you need to have or build a car with all the required sensors, servos, and computers. Of course, this is not impossible, but it does limit the pool of available hackers to those who can afford to build such a car. Let's say, for the sake of argument, that this car costs US $50K to develop. If the hacker doesn't want to modify his/her own car, an additional US $15K to $25K gets added to that.
Yes, much code could be developed in a simulator, but as any micro-mouse competitor can tell you, there's a vast difference between a simulator and real life. Ultimately, to make something better, safer, faster, and more intelligent, real-life testing is imperative.
Any time you touch the real world, things get expensive and slow. That said, a dedicated small group of hackers around the world with at least $50K to burn and lots of free time could work on an open-source self-driving car. And the code may or may not be better than the closed-source version. Open source doesn't automatically mean "better", "safer", "faster", or "more intelligent".
Now, switching to the pacemaker question: what does it take to develop an open-source pacemaker that doctors can and will actually implant? I don't know; I am not in that field. My guess is that the cost is in the millions. Sure, one could screw around with code and ideas and pretend to be building a pacemaker. In the end, when all the smoke and bullshit clears out, you have to implant the thing in a living organism and test it. Multiple times. And you will kill some of your subjects. Or you are likely to.
How many hackers can do that? Probably not many.
Then there's the issue of regulatory testing and FDA approval process. That is a very expensive process. Who would undertake that?
Yes, there are a lot of open-source/open-hardware projects, but they are absolutely trivial in comparison to a pacemaker. Implanting something into a human being is a big deal.
Then you have to look at the patient's perspective:
Doctor: "Mr. Obama, you need a pacemaker. We have two choices. First is this pacemaker, developed by a big-bad-established-closed-source company that makes too much money. Or you can go with this model, developed by fifty guys all over the world; it is open source."
Obama: "Who are these guys?"
Doctor: "Hackers who believe that open source is a better way to build a pacemaker. The argument is that the code and design are completely open and, hence, bound to be scrutinized by other hackers to a degree that closed source can't even begin to approach."
Obama: "Yeah, but, what guarantees do they provide? Who is the responsible party? What recourse do I have if something goes wrong and I need serious help?"
Doctor: "Well, there's a company that sells them. So, that's your point of contact. No different from the big-bad medical company".
Doctor: "Nope, C++"
Obama: "I don't know. I'm kind of concerned. Let's go with the closed-source version. I don't know if I want to trust my life to a bunch of hackers around the world."
Dumb example, but you can't ignore the last issue with these things. RMS might be more than willing to experiment with his own life (I wonder). This is certainly not the case for Joe Average. Given the choice, they'll probably opt for what they perceive as safer and more reputable, rather than the hacker version. As much as folks in the tech community might understand the value of open source, the aforementioned non-coding backhoe operator who doesn't own a computer will have absolutely no intellectual connection to the concept of open source. All he will hear when given the choice is that a bunch of pimple-faced teenagers in their underwear developed this pacemaker in their bedrooms and experimented with their dogs. He'll opt for column A out of fear.
There exist many free software projects that were created by larger entities, even if the project was at first proprietary. At the very least, many corporations and research groups have contributed to existing free software projects --- the kernel, Linux, is one such example. OpenStack was released by Rackspace. Google created Chromium, Android, and ChromeOS. Etc.
This also demonstrates the problem of approaching this from the perspective of "open source". When considering this an ethical choice --- a choice of freedom --- it's not about creating a "better" piece of software. It's about creating software that respects our freedoms and, most importantly, understanding what the software that we have just put inside of our body is doing. Even if we do not have the resources to hack it, we can at least study it (Freedom #1). If we find a flaw and are unable to fix it, we can hire somebody who knows how to. We can then distribute those changes to our friends and neighbors who may have the same problems (Freedom #3).
This is the perspective RMS is adopting. Not "open source".
- Don't mess with your own implanted medical device. Even doctors go to other doctors when they think there's a problem
- Only someone living in a bubble thinks there are any significant number of doctors who can program
From start to finish I hated the whole premise. It's one thing to say you want to screw with open-source pacemakers, and a whole other thing to try to convince us (yourself?) that it's safer.
The suggestion is that the consequences of the source being open would be mostly beneficial (to the users of the devices), if not entirely so.