
Please point out the discrepancy.

A Tesla has ~100,000,000 [1] lines of code. Considering this post, do you think we are sufficiently educated in software security to produce secure self-driving cars?

Elon Musk: "I think one of the biggest risks for autonomous vehicles is somebody achieving a fleet wide hack" [2].

[1] https://bit.ly/KIB_linescode

[2] https://www.youtube.com/watch?v=4G1Boh-URIM




These companies have completely different operating systems, network ACLs, software update policies and subsystems that affect certain mechanical features.

By your logic, we should not fly any modern commercial or military aircraft or spacecraft, live within a certain radius of any power or hazardous chemical plant, place any dependency on any first world country's health care network, including life support, or invest in any company or stock.

Like most things in life, it comes down to a risk/benefit compromise between security and convenience.


> These companies have completely different operating systems, network ACLs, software update policies and subsystems that affect certain mechanical features.

Are you claiming that this could not have happened with Tesla? If so, please explain why.

> By your logic, we should not fly any modern commercial or military aircraft or spacecraft, live within a certain radius of any power or hazardous chemical plant, place any dependency on any first world country's health care network, including life support, or invest in any company or stock.

Up until now the benefits have clearly outweighed the risks, but that does not mean they will continue to do so.


How much of that code is safety critical? I occasionally see misbehavior from my Tesla's center screen, like the network connection failing, or audio glitches, or even the occasional spontaneous reboot. This can be mildly annoying but it doesn't worry me because I know that the center screen is separate from the stuff where bugs can actually get me killed.


"On Thursday October 24, 2013, an Oklahoma court ruled against Toyota in a case of unintended acceleration that led to the death of one of the occupants. Central to the trial was the Engine Control Module's (ECM) firmware.

Embedded software used to be low-level code we'd bang together using C or assembler. These days, even a relatively straightforward, albeit critical, task like throttle control is likely to use a sophisticated RTOS and tens of thousands of lines of code." [1] [2]

[1] https://www.edn.com/design/automotive/4423428/Toyota-s-kille...

[2] https://www.embedded.com/electronics-blogs/barr-code/4214602...


Sure. My point is just that you can't bring up the total number of lines of code here, because most of those lines aren't in any way related to any safety-critical system. If you want to talk about how much code there is which puts you at risk, you need to look at that particular subset of code.


I think it is safe to assume that complexity is highly correlated with the number of lines of code. And with complexity come bugs.

On the surface, this particular exploit on macOS seems to be triggered by a totally unrelated subsystem (GUI vs. security).


I don't see any evidence that this exploit is related to the GUI at all. The GUI just happens to be the easiest way to do it. Other commenters have mentioned that you can use the exploit with `su`.

In any case, in a desktop system you shove everything together and then try to modularize it with good design and weak tools like UNIX processes. In a car, the safety-critical systems are literally running on separate hardware with limited communication over a specialized data bus. Of course it's still possible for them to have bugs or even exploits, but the complexity of the infotainment system is irrelevant, aside from making it a potential jumping-off point for using an exploit in the safety-critical systems.

Counting the infotainment system here makes about as much sense as counting the number of lines in Windows when talking about a Mac vulnerability because a Windows machine could be used to launch an attack.
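To make the separation concrete: in many vehicles the buses are joined only by a gateway that forwards a whitelist of message IDs, so arbitrary infotainment traffic never reaches the safety-critical bus. A toy sketch of that idea (the message IDs and names here are hypothetical, not any real vehicle's):

```python
# Toy sketch of a CAN-style gateway between an infotainment bus and a
# safety-critical bus: only whitelisted message IDs are forwarded.
# (IDs are illustrative, not taken from any real vehicle.)

SAFETY_WHITELIST = {0x1A0, 0x2B4}   # e.g. steering-wheel media keys, clock sync

def forward_to_safety_bus(frame_id, payload):
    """Return the frame to forward, or None if the gateway drops it."""
    if frame_id not in SAFETY_WHITELIST:
        return None                  # infotainment traffic stays on its own bus
    return (frame_id, payload)

print(forward_to_safety_bus(0x1A0, b"\x01"))  # forwarded
print(forward_to_safety_bus(0x7FF, b"\xff"))  # None: dropped by the gateway
```

Of course, a bug in the gateway itself (or an overly broad whitelist) is exactly the kind of "jumping-off point" mentioned above.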


"Apple root bug appears to be triggered only by logins coming from com.apple.loginwindow. Running "su" with a blank password won't get you a root shell." [1]

As for your subsystem argument, explain this: https://www.youtube.com/watch?v=OobLb1McxnI&feature=youtu.be...

[1] https://twitter.com/0xAmit/status/935609423485169664
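For what it's worth, later write-ups of the bug reportedly traced it to an "upgrade-on-verify" path: checking a password for an account with no stored credential accidentally set the supplied (empty) password, so a second identical attempt succeeded. A minimal Python sketch of that failure mode (all names hypothetical; this is not Apple's actual code):

```python
# Hypothetical sketch of an "upgrade-on-verify" bug: verifying a password
# for a disabled account with no stored credential accidentally *persists*
# the supplied password, so retrying the same (even empty) password succeeds.

class Account:
    def __init__(self, name, password=None, enabled=True):
        self.name = name
        self.password = password   # None means no credential stored
        self.enabled = enabled

def verify(account, supplied):
    if account.password is None:
        # Buggy "upgrade" path: store the supplied password instead of
        # simply rejecting the attempt.
        account.password = supplied
        account.enabled = True
        return False               # the first attempt still reports failure
    return account.enabled and account.password == supplied

root = Account("root", password=None, enabled=False)
print(verify(root, ""))   # first try: False (login rejected)
print(verify(root, ""))   # second try: True (root now enabled, empty password)
```

Which path reaches that buggy verify step (GUI login window, `su`, or something else) is then a separate question from where the bug itself lives.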


There are a bunch of replies to that tweet saying that it does work, you just have to do it a certain way.

That video is explained by using the infotainment systems as a bridge, exactly as I described in my previous comment.


> I think it is safe to assume that complexity is highly correlated with number of lines of code. And with complexity grows bugs.

But it's still wrong to focus on total complexity. You want to look at the complexity of the relevant system.


Not so sure: "Apple root bug appears to be triggered only by logins coming from com.apple.loginwindow. Running "su" with a blank password won't get you a root shell." https://twitter.com/0xAmit/status/935609423485169664


You can't blame com.apple.loginwindow for that. The security boundary is at the login processing. If it processes requests differently from certain systems then it's misbehaving. To make this secure, we would only need to craft that login processing carefully; even if com.apple.loginwindow were the most complex heap of bugs in the world, it wouldn't matter.
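To illustrate: if the verifier branches on the caller's identity, every special-cased caller becomes part of the trusted computing base. A toy sketch of the difference (names and credentials hypothetical, not macOS code):

```python
# Sketch: the security boundary belongs in the verifier, not the caller.
# If verification branches on *who* is asking, the special-cased path is
# extra attack surface. (All names and data here are hypothetical.)

CREDS = {"alice": "hunter2"}

def check(user, password):
    return CREDS.get(user) == password

def legacy_check(user, password):
    # Hypothetical weaker path kept around for one privileged caller:
    # it only checks that the account exists.
    return user in CREDS

def verify_bad(user, password, caller):
    if caller == "com.apple.loginwindow":
        return legacy_check(user, password)   # special case = hole in boundary
    return check(user, password)

def verify_good(user, password):
    return check(user, password)              # same rule for every caller

print(verify_bad("alice", "wrong", "com.apple.loginwindow"))  # True: breached
print(verify_good("alice", "wrong"))                          # False: rejected
```

On this view, the complexity of the caller is irrelevant; what matters is whether the verifier applies one rule uniformly.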


> The security boundary is at the login processing. If it processes requests differently from certain systems then it's misbehaving.

Which is my point. More code, more complexity, more bugs. If you are correct, the security boundary you are referring to got stretched out to accommodate a separate system.

As for your strategy of separating concerns and carefully crafting important code: don't you think that's what they originally had in mind when they first designed it?


I don't think the security boundary was actually expanded, I think it had a hole punched in it. I doubt the bulk of com.apple.loginwindow was coded to enforce that particular security at all.

loginwindow is not doing the wrong thing because it's complex. It's doing the wrong thing because that was never its job.

> don't you think that's what they originally had in mind when they first designed it?

Probably, but they didn't fail because loginwindow itself was complex. They failed either for systemic reasons that would have happened with simple or complex code, or they failed because the actual secure part was too complex. That's why I think total complexity is the wrong thing to look at; it may or may not correlate with those two real causes.


"Systemic reasons" sounds a lot like complexity to me. Do you have a counterexample of a simple system that blew up like this due to systemic reasons?


Every time a prototype has been pushed into production. Simple code, but not tested or polished or designed for performance. This happens constantly.

The drive encryption password hint bug looked like a symptom of something like that. The utility was rewritten in a rush, and it's probably not a whole lot of lines of code. But it didn't have even basic testing.


A prototype is considerably smaller than 100,000,000 LOC. I claimed that complexity is correlated with lines of code. I pose the same question: show me a simple system that failed due to systemic reasons.


> A prototype is considerably smaller than 100,000,000 LOC.

Yes? Agreed? It's simple, and yet it fails, because of management problems. That kind of management issue can happen to simple code or complex code.

Are you asking me to name a failure that's simple and large/complex at the same time? I'm confused.

Let me restate my position:

You originally brought up total lines of code. mikeash made the point that it's not the total lines of code that matter, but the code in the relevant system, like the ECU or the login service. I'm arguing in support of that point.

Complexity correlates with lines of code. But different subsystems can have different complexities. Especially when they're written by different teams.

The fact that the login service had special behavior with respect to com.apple.loginwindow does not mean the complexity of com.apple.loginwindow is a notable factor. It was almost certainly not designed to enforce the security guarantees that the login service is responsible for; its role could be played by either very complex or very simple code.

If the login service failed because of complex code, it failed because it was made of complex code. Not the total across the entire system.

Go ahead and pretend I never mentioned "systemic reasons" if it makes my argument make more sense to you. That was only about a possible scenario where the problem wasn't complex code. If complex code is the problem, then "systemic reasons" are utterly irrelevant. You don't need to argue that "systemic reasons" are complexity in disguise. Just delete that phrase entirely from my argument.


This is a much more interesting comment than your initial quip. Next time, consider leading with this rather than unnecessarily antagonizing people.



