A Tesla has ~100,000,000 lines of code. Considering this post, do you think we are sufficiently educated in software security to produce secure self-driving cars?
Elon Musk: "I think one of the biggest risks for autonomous vehicles is somebody achieving a fleet wide hack".
By your logic, we should not fly any modern commercial or military aircraft or spacecraft, live within a certain radius of any power or hazardous chemical plant, place any dependency on any first world country's health care network, including life support, or invest in any company or stock.
Like most things in life it comes down to a security/convenience risk/benefit compromise.
Are you claiming that this could not have happened with Tesla? If so, please explain why.
> By your logic, we should not fly any modern commercial or military aircraft or spacecraft, live within a certain radius of any power or hazardous chemical plant, place any dependency on any first world country's health care network, including life support, or invest in any company or stock.
Up until now the benefits have clearly outweighed the risks, but that does not mean they will continue to do so.
Embedded software used to be low-level code we'd bang together using C or assembler. These days, even a relatively straightforward, albeit critical, task like throttle control is likely to use a sophisticated RTOS and tens of thousands of lines of code.
On the surface, this particular exploit on macOS seems to be caused by a totally unrelated subsystem (security vs. GUI).
In any case, in a desktop system you shove everything together and then try to modularize it with good design and weak tools like UNIX processes. In a car, the safety-critical systems are literally running on separate hardware with limited communication over a specialized data bus. Of course it's still possible for them to have bugs or even exploits, but the complexity of the infotainment system is irrelevant, aside from making it a potential jumping-off point for using an exploit in the safety-critical systems.
Counting the infotainment system here makes about as much sense as counting the number of lines in Windows when talking about a Mac vulnerability because a Windows machine could be used to launch an attack.
As for your subsystem argument, explain this: https://www.youtube.com/watch?v=OobLb1McxnI&feature=youtu.be...
That video is explained by using the infotainment systems as a bridge, exactly as I described in my previous comment.
But it's still wrong to focus on total complexity. You want to look at the complexity of the relevant system.
Which is my point. More code, more complexity, more bugs. If you are correct, the security boundary you are referring to got stretched out to accommodate a separate system.
As for your strategy of separating concerns and carefully crafting the important code – don't you think that's what they originally had in mind when they first designed it?
loginwindow is not doing the wrong thing because it's complex. It's doing the wrong thing because that was never its job.
> don't you think that's what they originally had in mind when they first designed it?
Probably, but they didn't fail because loginwindow itself was complex. They failed either for systemic reasons that would have happened with simple or complex code, or they failed because the actual secure part was too complex. That's why I think total complexity is the wrong thing to look at; it may or may not correlate with those two real causes.
The drive encryption password hint bug looked like a symptom of something like that. The utility was rewritten in a rush, and it's probably not a whole lot of lines of code. But it didn't have even basic testing.
Yes? Agreed? It's simple, and yet it fails, because of management problems. That kind of management issue can happen to simple code or complex code.
Are you asking me to name a failure that's simple and large/complex at the same time? I'm confused.
Let me restate my position:
You originally brought up total lines of code. mikeash made the point that it's not total lines of code, it's the code in the relevant system, like the ECU or the login service. I'm arguing in support of that point.
Complexity correlates with lines of code. But different subsystems can have different complexities. Especially when they're written by different teams.
The fact that the login service had special behavior wrt com.apple.loginwindow does not mean the complexity of com.apple.loginwindow is a notable factor. The com.apple.loginwindow component was almost certainly not designed to enforce the security guarantees that the login service is responsible for. Its role could be played by either very complex code, or very simple code.
If the login service failed because of complex code, it failed because it was made of complex code. Not the total across the entire system.
Go ahead and pretend I never mentioned "systemic reasons" if it makes my argument make more sense to you. That was only about a possible scenario where the problem wasn't complex code. If complex code is the problem, then "systemic reasons" are utterly irrelevant. You don't need to argue that "systemic reasons" are complexity in disguise. Just delete that phrase entirely from my argument.