Fiat Chrysler recalls 1.4M cars after Jeep hack (bbc.com)
251 points by vvanders on July 24, 2015 | hide | past | favorite | 289 comments


This is a terrible decision. On such a short timescale, they'll only be able to fix the particular bug in their infotainment system. The real problem, as everyone has pointed out, is that car control and infotainment should not share channels. There should be a physical gap between them and, if really necessary, a very tightly controlled message bridge.

The real fix will require much more intervention than just a firmware flash at the garage.

We'll see at Def Con how much Chrysler really screwed up.


> This is a terrible decision.

It's the only one available to them. They can't just refit all existing cars with a new design on a short timeline. Hardware has permanence.

With that in mind, how would you fix the issue for existing vehicles?


The infotainment system should never have access to the CAN[1] network in the first place... ever. At most, they could have a very simple listener on the CAN network that could send data (one way only) to the infotainment device via a different channel of sorts. Mixing the two is asking for exactly this kind of issue.

[1] https://en.wikipedia.org/wiki/CAN_bus


> The infotainment system should never have access to the CAN[1] network in the first place... ever.

The problem is access to the CAN bus is necessary to change various vehicle settings from the infotainment system. I know for a fact that changing the door lock settings in my car requires the infotainment system to access the CAN bus to apply settings to that system.

This is not an unsolvable problem; this is basic software engineering. Processes should be isolated from each other. I imagine everything in the infotainment system effectively runs as "root" because they consider the whole thing a singular black box.


Cars can have three separate networks for infotainment (radio, navigation, etc.), body controls (seats, lighting, locks) and mission-critical stuff (steering, brakes, engine). Some of them actually do.

There is little genuine need to send commands from low-importance networks to high-importance networks. Yes, you would have to live without a single uber-control touchscreen, but that's not really that big of a limitation: you simply need separate controls for low-importance and high-importance things.


Then HN would be full of people sneering at the terrible car UI and demanding the Apple of cars come along and disrupt cars.


I rent cars often, and I really dislike the ones with a giant touchscreen up front and nothing else. To me it's a gimmick and totally unsafe, since I have to look at the screen to do anything on it, sometimes repeatedly if it didn't recognize my touches. So they often end up duplicating all the controls on the steering wheel; how is that 'good design'?

Last car I had (Kia Forte) was actually very pleasurable... they had a very helpful digital display but all the adjustment could be done w/real buttons.


The parts of car UIs we're talking about are mostly awful anyway, and on several different levels. They have basic design flaws like using naff graphics, which neither look good nor supply information more effectively than simpler alternatives, and using non-tactile controls like touchscreens or fully circular rotating dials. Hopefully by now everyone knows the dangers introduced by most of the mobile communications systems in modern cars.

The direct control interfaces in a car that tend to be very well designed and implemented are the basic movement controls like the steering wheel and pedals. These are distinctive and immediately recognisable. They work predictably and after proper driver training they become very intuitive. Drivers in an emergency can often still manage to swerve or brake to avoid a collision, and that is the level of user friendliness you want when you're in control of multiple tonnes of fast-moving death machine.

Other parts of the interface that modern cars tend to do very well are all the subtle behaviours behind those controls: the power steering that adjusts so it "feels right" at any speed, the independent driving and braking of different wheels so the simple pedal controls don't unnecessarily spin the wheels if you try to move off with too much gas but do retain control without skidding unnecessarily if you brake and steer at the same time in an emergency.

And of course there are a raft of safety systems in modern cars, some fully automatic, but some giving subtle cues to the driver to help them predict and hopefully avoid dangerous situations earlier.

But these so-called infotainment systems, and radios, and communications systems, and satnavs, and fancy environmental controls... Most of these are just awful, and sneering at them is entirely fair and justified right now, so separating them from the essential controls that keep the vehicle operating properly and safely should really be no problem at all.


I agree with both you and the person above. I think the missing word we're looking for is: "unrestricted" (as in unrestricted access should not be permitted).

Yes, the info system needs SOME access to the CAN network. But it shouldn't be unrestricted access. Instead the info system should be treated as "untrusted" and only specific things should be permissible over a well documented set of APIs.

I'd almost be tempted to say that while the car is moving, ALL access to the CAN network should be disabled. Let's name some things the info system needs on the CAN network:

- Reconfigure car (e.g. daylight running lights, auto-locking door, etc).

- Start car remotely.

- Set AC remotely.

- Diagnostics.

The only one on that list which MIGHT be useful to have while the car is in motion is diagnostics. And that could be delivered via a read only interface.
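
To make that restricted-access idea concrete, here's a minimal sketch of such a gateway policy (the command names and function are invented for illustration; this is least-privilege logic, not any vendor's actual design):

```python
# Hypothetical gateway enforcing least privilege for the infotainment system:
# configuration writes allowed only while parked, diagnostics read-only always.

ALLOWED_WHEN_PARKED = {"set_daytime_lights", "set_auto_lock",
                       "remote_start", "set_ac"}
READ_ONLY_ALWAYS = {"read_diagnostics"}

def gateway_permits(command: str, vehicle_moving: bool) -> bool:
    """Return True only for whitelisted commands in the allowed state."""
    if command in READ_ONLY_ALWAYS:
        return True                  # read-only diagnostics: safe in motion
    if command in ALLOWED_WHEN_PARKED:
        return not vehicle_moving    # config writes only at a standstill
    return False                     # everything else is dropped

# While moving, only diagnostics get through.
assert gateway_permits("read_diagnostics", vehicle_moving=True)
assert not gateway_permits("remote_start", vehicle_moving=True)
# Parked, the configuration commands are allowed.
assert gateway_permits("set_auto_lock", vehicle_moving=False)
```

The point of the default `return False` is that anything not explicitly documented and permitted is rejected, which is the opposite of treating the infotainment system as trusted.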


Steering wheel audio controls are common and useful in motion.

It's not really about RO vs. RW; CAN is about sending commands. The head unit could send anything, say a message meant for the brakes, so that has to be protected against.

When there isn't a physical layer doing that protection (a separate bus, a filter, and so on), there are two other layers. One is that the sending node can refuse to put the message on the wire, but that takes resources that might not be there. The other is that the receiver does some sanity checking on it, like "you want me to go into P but I'm going 65, yeah, I don't think so." It's even become commonplace for messages involving the brakes to include a shared code passed back and forth.
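
A toy version of that receiver-side plausibility check (the function and the speed threshold are invented for illustration):

```python
def accept_shift_to_park(current_speed_mph: float) -> bool:
    # "You want me to go into P but I'm going 65 -- yeah, I don't think so."
    # A receiving controller can reject commands that are physically
    # implausible, regardless of which node on the bus sent them.
    return current_speed_mph < 2.0

assert not accept_shift_to_park(65.0)   # rejected at highway speed
assert accept_shift_to_park(0.0)        # accepted at a standstill
```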

Really I think this boils down to there being untrusted channels (wifi, cellular, DAB) that should be locked down better. I mean the DAB thing is likely a buffer overflow in the head unit. Even if that head unit has a CPU running firmware that makes sure the CAN messages it sends are sane, the exploit just makes it stop doing that. There's such pressure to save cost, and with so many OEMs involved, ease of integration motivates the rest.

Yeah it sucks, but it is what it is.


What you're describing sounds like a firewall for CAN. It's a complex solution that will inevitably be insecure.

"Yeah it sucks, but it is what it is."

This attitude might work with some other things, but the automotive industry will have to deal with security in a reasonable fashion. This is potentially life-and-death stuff, and consumers actually have choice here. Not everyone understands the full implications of someone tracking their cellphone or hacking their email, but everyone can imagine what will happen if their car gets out of their control.


FWIW some companies handle this better than others. In this case FCA took the Uconnect code from a different division and integrated it. People make mistakes; something was listening and passing on more than it should in that system, and now it is fixed. I'm really impressed with the researchers' work.

I've been trying to describe two things. One, the way that CAN bus operates, so that it's clear some of the technical solutions people have been proposing are simply impossible. People should imagine that CAN bus is something like RS-485 or the old systems that had some mix of ISA and PCI, not like switched Ethernet. The second is a bit of how these things happen in the business world, for good or bad, not that I agree with it or anything.


> Steering wheel audio controls are common and useful in motion.

But wouldn't those likely be one-way commands TO the infotainment system? They would still work even if it was receive only.


My goodness this is really not going well for me.

Basically there is no notion of to or from, really. When you push the vol+ button on the steering wheel, there is a controller that sends a message, trying over and over again until there is no collision. That message has an ID in it early on. In this case it will be, like, I dunno, $412, okay. That means audio controls or something. A bit later is some length and then the data; say the data is $C.

The infotainment unit is listening to everything just as the controller for the steering wheel controls is and everything else on the bus (it has to, because of how collision detection works). When the controller in the steering wheel sees that $412 ... $C it goes, well isn't that nice, me or someone else sent the vol+ message out finally, cool, I'll stop sending that now. When the radio sees that message with an ID of $412 it goes, oh that's an audio control message command, I should pay attention to that. Then it looks at the length and data and sees $C. It goes oh that means someone pressed vol+, I'll make it louder.

But here's the thing: there might be a knob for volume too, and there really is just one board doing the infotainment. It's not like the old days where it was a potentiometer; it's not even wired directly into the board that handles infotainment. All the IO pins that board has are already used up handling the screen, CAN bus, and other things. When you twist it, it also sends the same $412 ... $C over CAN from its controller! The radio does not know what sent it, and that's by design in CAN bus.

There might be mobile phone integration; it can do CAN too, say also a $412 with a different payload (and possibly length) that might mute on a call. Also there may be a touch screen, but that will not do a thing over CAN if you press the vol+ there, just its normal IO from screen to SoC on the board.

Am I doing a better job of explaining? The takeaway is that lots of things can send the same message and lots of things might be interested in that message, and that is how it is intended. To some extent you can mitigate this in hardware. You can make long runs or some shorter star-shaped topologies as long as you get the termination right. For the star-shaped topologies you can stick gateways in there. The controllers can be set up to filter on certain conditions, and the bus is the limit for filtering if you are using a programmable part. What I mean by "the bus is the limit" is things like there being no notion of to and from.
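
The vol+ walkthrough above can be sketched as a toy broadcast bus (the $412 ID and $C payload come from the example; the classes are invented for illustration and ignore real CAN details like arbitration, CRCs, and frame formats):

```python
# Toy model of CAN's broadcast semantics: every node sees every frame,
# filters by ID, and frames carry no source address.

AUDIO_CTRL_ID = 0x412   # "audio controls" ID from the example above
VOL_UP = 0x0C           # payload meaning "volume up"

class Radio:
    def __init__(self):
        self.volume = 5
    def on_frame(self, can_id, data):
        # The radio only cares about audio-control frames...
        if can_id == AUDIO_CTRL_ID and data == [VOL_UP]:
            self.volume += 1   # ...and has no idea which node sent them

class Bus:
    def __init__(self):
        self.nodes = []
    def send(self, can_id, data):
        for node in self.nodes:   # broadcast: every listener sees it
            node.on_frame(can_id, data)

bus = Bus()
radio = Radio()
bus.nodes.append(radio)

# Steering-wheel button, volume knob, or phone integration: all send the
# same frame, and the radio reacts identically to each.
bus.send(AUDIO_CTRL_ID, [VOL_UP])   # steering-wheel controller
bus.send(AUDIO_CTRL_ID, [VOL_UP])   # volume-knob controller
assert radio.volume == 7
```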

That's what you get in CAN bus cause that's how it works. You can thank Bosch for that.


Thanks. I bet, like most things in this world, this design grew organically.

It seems like a good design in a very noisy environment, and it does allow the car manufacturer to easily add new controls (volume up, for example) in different locations that do the same thing.


And thank you for the polite response, I appreciate it. One further nice thing about CAN: because of how it works electrically (basically, logical zero always wins), it is trivial to have messages (and message types, I only described the most common data type) of different priorities. So a low-numbered ID always wins. That also allows a trivial DoS: a bit blaster that repeats something like 000000100001...CRC... might not even trigger error detection. It really is quite a nifty thing, amazing it works so well too.
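
That lowest-ID-wins arbitration can be simulated bit by bit (a simplified sketch modeling only the ID field, assuming standard 11-bit IDs; real CAN interleaves this with the rest of the frame):

```python
def arbitration_winner(ids, width=11):
    """Bit-serial CAN arbitration from the MSB down: at each slot the bus
    carries the wired-AND of all active transmitters (dominant 0 beats
    recessive 1), and any node that sent a 1 but reads back 0 drops out."""
    active = list(ids)
    for bit in reversed(range(width)):
        bus_bit = min((i >> bit) & 1 for i in active)  # dominant 0 wins
        active = [i for i in active if (i >> bit) & 1 == bus_bit]
    # The survivor is always the numerically lowest ID.
    return active[0]

# A low-ID (high-priority) message beats the $412 audio-control message.
assert arbitration_winner([0x412, 0x100]) == 0x100
assert arbitration_winner([0x7FF, 0x412, 0x003]) == 0x003
```

This is also why the DoS above works: a node hammering a near-all-zero ID wins arbitration every single time, starving everything else on the bus.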


Thanks for the detailed explanation. So it's a completely trusted bus architecture. And what you absolutely shouldn't do in that scenario is expose it directly to Wi-Fi and 3G, which is exactly what it sounds like they did. I hope that this exploit gets manufacturers treating this issue with the importance it deserves, but my cynicism says they'll largely ignore it until people die.


That looks like an absurd amount of complexity/overengineering just to save some wire. I suppose "put everything on one bus" could be turning into some sort of anti-pattern.


Consider how much wire would be required to route every button directly. There would be tens of pounds of extra metal, plus impossible bundles of hundreds of wires to route around the car.

CAN is great for its purpose, but handling untrusted actors is not part of that purpose.

What should happen is a non-CAN hardware gateway that only passes valid commands to CAN buses.


It was a lot of wire saved. A 1985 Mercedes has a chassis wire harness that is almost three inches in diameter at its thickest point. They wouldn't have done it if it hadn't been a bargain.


It's CAN, so it's half-duplex.


Porsche and Audi (and likely others) have secondary nav displays [upcoming turn and distance] that originates in the info unit and is displayed somewhere on the dash.

Other cars have climate controls built into the info unit.

Then, there are cars like Tesla where nearly everything comes from the info unit...


The original 'researcher' (because they did something so horrendously unsafe on public roads) mentioned that there are at least 2 car manufacturers who have devices on the CAN bus to watch traffic and detect when a device is issuing commands that it shouldn't be, so the tampering can be detected (and presumably the infotainment system shut off).


[flagged]


Please follow the HN guidelines when commenting on this site. This comment would be fine without the first paragraph.

https://news.ycombinator.com/newsguidelines.html


And a stalled/wrecked/slowed vehicle on a highway sometimes results in death:

http://i.imgur.com/dgHm3E9.gifv [redundant warning: death]

http://i.imgur.com/V6aySGy.gifv

You can't control for other bad drivers' reactions to your vehicle becoming an obstacle on the highway. Swerve and change lanes? Sure, in an ideal world that always happens. In the real world, accidents can happen instead.


[flagged]


Let's say there is a low chance of death from doing this: 1:10,000.

Arguably that's about the same as sending someone to jail for 1:10,000th of their life, which is about 2 and 1/2 days, which is not 'horrific'. Still, I would hope sending a random person to prison for a few days to stage a story seems unacceptable.
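
The arithmetic behind the "2 and 1/2 days" figure, assuming (my assumption) roughly 70 years of remaining life:

```python
# 1:10,000th of ~70 remaining years, expressed in days.
remaining_days = 70 * 365.25
fraction_days = remaining_days / 10_000
assert 2 < fraction_days < 3   # about 2.5 days
```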

On the other hand vaccinations are both lower risks and a higher benefit so it’s not really comparable.


[flagged]


You are unqualified to determine the level of risk that was imposed on the other drivers. You have no idea how many zeros to add or remove to how likely an accident was to result from this. I'm not sure there exists a person that could give an accurate representation beyond generalities, so please don't present yourself as this person.

The point, the only point, really, that people have with their actions is that they endangered other people on purpose and without their consent. I wouldn't defend someone weaving between cars in traffic and leaving inches between bumpers doing so (I'm sure many of us have seen this) for the exact same reason. There are far too many variables to accurately account for, so they are raising the risk to all the people around them. Even an expert driver can't claim to know how every other driver on the road will react.

That the researchers did this for what I'm sure most of us believe is a good reason is irrelevant, given there were alternatives. They made a judgement call, and now we are all upset at their poor judgement.


> They made a judgement call, and now we are all upset at their poor judgement.

Yeah, unreasonably so. But, I get that you're upset.

> You are unqualified to determine the level of risk that was imposed on the other drivers.

I think anyone who has driven is at least somewhat qualified to determine the level of risk in common scenarios.

The Jeep didn't even apply its brakes.

> You have no idea how many zeros to add or remove to how likely an accident was to result from this.

It was a rough guess, but I did try to check it. 1:10,000 common interactions becoming fatal accidents would depopulate the earth rapidly.

> I'm not sure there exists a person that could give an accurate representation beyond generalities, so please don't present yourself as this person.

Oh I see, and when you'd told everyone else that they weren't experts you got around to me. Okay, well, sure. In that case.


> Add four or five more zeros and you'll be in the ballpark.

So the 1:10,000 should be between 1:100,000,000 and 1:1,000,000,000? You're saying that, on average, between one hundred million and one billion vehicles would need to pass a stopped vehicle on the highway before causing an accident? Sorry, but if your arguments previously strained credibility, this takes the cake.


That a mild slowing of the vehicle ahead would cause a fatal collision that wouldn't have happened otherwise, yeah.

This called for the same reaction as adjusting speed to match any car whose driver took their foot off the accelerator; the Jeep didn't even brake! Driving is a continual process of these slow interactions, and that's not the part that causes accidents; considering the relative speeds, they would also tend to be non-fatal accidents if they did happen.


> the Jeep didn't even brake!

That just makes it more dangerous - there was no brake light to clearly indicate the car was slowing. Since we know drivers rear-end cars quite often we know the risk of accident was increased here.

Increasing the risk of accident is not acceptable unless all participants have given informed consent (they didn't).


That's fine if you're behind the vehicle right when it starts slowing down. I agree with you that a fatal accident is highly improbable there. But it didn't just slow down a bit and then resume speed. If you watched the video you saw that the vehicle came to a complete stop. The vehicles that saw it mildly slowing down have already driven by, leaving only incoming vehicles going 70mph unprepared for a vehicle ahead at a total standstill. Hopefully those unprepared drivers are sufficiently conscious, alert, and otherwise not distracted to react in time to prevent a crash. As the clips I posted above demonstrate, I wouldn't bet my or anyone else's life that that is the case.


A fatal collision can kill more than one person. Also, more than one car was impacted by the slowdown, so you need to look at the overall odds per person, not per car, and then sum them to find the collective risk of death.

It's not easy to find the actual odds, but stopped cars on the freeway kill people every year, with much higher risk in high-speed, low-density traffic; traffic jams tend to be safe, it's unexpected stopped cars that are the problem.

Rough guess: there is probably a 1:100 chance per year that a car will stop in the middle of a freeway for no apparent reason. There are ~100,000,000 cars on the road, so ~1,000,000 cars stop per year, which is probably low, but let's say they cause 10 deaths out of the 20,000+ auto deaths per year. Well, that's ~1:10,000. Now, sure, you can play with the odds, but they're much higher than 1:1,000,000.


Given some of the edge case interactions in any complex system, they did not and could not know that the only impact would be slowing the car down.

You should listen to the QuviQ guys talk about finding software bugs in automotive control systems.


They sent known commands to a known vehicle, they weren't live-fuzzing an unknown system.

The experiment seems less dangerous than not following decent tire-rotation policy, etc.

It's not whether there's finite extra risk; it's what that risk is in relation to the baseline. Without that, these comments are just useless scaremongering.


The problem isn't what they did to the vehicle.

The problem is there were other drivers around who may not have been expecting the situation.

Further, since they weren't in the vehicle, if something happened ahead of it and the Wired journalist NEEDED to make a sudden maneuver to avoid an accident, he might not have been able to. Asking them over the radio to turn X back on could have taken too long, causing a serious accident that, without their intervention, could easily have been avoided.

It's not scaremongering. There is a reason you don't interfere with a driver on a public road at high speed. It was extremely irresponsible. There were plenty of ways they could have done the test in controlled circumstances (ask the cops for help, race track, auto test facility, large empty parking lot, etc.).

They took unnecessary risks with possibly fatal consequences. It was irresponsible.


But the unnecessary risk is so close to 0 that it's obviously just manufactured outrage. If you read about this and came here to comment and that's all you can talk about you might as well be trolling.

It's harmful. You're going to make some politician think that's where they should spend their time instead of figuring out why the car company sat on this for so long. Our whole society will lose.


Could you please stop referring to people who disagree with you as "anti-vaxxers" and "trolls"?


I didn't. I was quite distinct.


So look, it is not obvious to me that this is all "manufactured outrage", and I (a) know one of these researchers, (b) have spent my career mostly in vuln research, and (c) have wasted some brain cycles thinking about this issue.

I think I mostly agree with Robert Graham's take on this:

http://blog.erratasec.com/2015/07/infosecs-inability-to-quan...

Robert Graham thinks mostly the same thing you think, but because he doesn't invoke "anti-vaxx" and "trolling" or say things like "obviously manufactured", he's (1) persuasive and (2) not setting fire to a comment thread by picking fights with people.

You can write whatever you'd like to write. But if you keep writing like this, most people here won't care what you have to say, and in short order they won't be able to see it either, because you're going to get flagged off HN.

It would be helpful to have more people arguing the other side of the conventional wisdom on Miller and Valasek's demo --- more people, that is, arguing the thing you're trying to argue, that the risk was minimal and the upside significant. Please make that argument carefully, and don't caricature it.

Thanks for listening.


'fineman, you have my sympathies. I wonder if some people commenting on this issue have ever driven a vehicle. If they have, they must be the same yokels I see who never change lanes or even deviate from their lane ever, even on multilane roads: not to pass a cyclist, not because the car in front of them is stopping to turn right into a narrow drive, not to give the flag dude a little more room, never!!!


Interestingly enough, when my car is moving I cannot reconfigure it; those functions are disabled. However, I fully expect that if my car were hacked, they could easily bypass that restriction.


Perhaps a small proxy application that is fully vetted and tested code. This doesn't seem too dissimilar from "I need a small suid root app to do $root_thing for an unprivileged user". As you said, unrestricted access is the problem. This is just typical "principle of least privilege" kind of stuff.

Totally doable, but it is unbelievable to me the lack of forethought in things like this.


CAN has been around for a long time. I don't think we can blame Bosch that they didn't see from 1985 the pitfalls of someone possibly adding global internet connectivity, nor can we assume they would have advised interconnecting CAN systems with the internet.


> Perhaps a small proxy application that is fully vetted and tested code.

I think that's a good idea, but I think it would be better if it was a small hardware firewall than a program in the infotainment system.

I would think something like that could even be entirely formally verified.


I was referring to an application/hardware that sat directly on the CAN bus, not a part of the infotainment system whatsoever. Something totally separate that had only one small function: to authorize access to CAN communications for "unprivileged" things such as the infotainment system.

So we're in complete agreement.


Airframe manufacturers have solved this problem. The car makers need to simply hire some aircraft electronics systems engineers for consultation.

It's not just security - you can't have bugs in one system causing other systems to go down. You can't tolerate a bug in the radio causing the car to crash (didn't anyone learn anything from the demise of the Death Star?)

The Fukushima disaster and the Deepwater Horizon disaster, from what I read about them, all suffered from easily preventable "zipper effect" of cascading failures.

Every industry apparently has to relearn the hard way what the aviation industry learned decades ago (and what the Navy learned even earlier).


I guess my question would be: why on earth do we need to change vehicle settings through the infotainment system? If you have systems that need to talk to the ECM, you either need to just hook up a tool through the OBD2 port or have a dedicated interface. I also find the built-in cellular link, at best, mildly insane, but under no circumstances should there be any real interconnect between a device that can send and receive over the internet and the increasingly controlling and complex ECM, as the risks are enormous.


Changing your door lock setting doesn't require the infotainment system; the GUI for it currently does.

That's vitally important to understand. The systems are mingled but they don't have to be and in fact shouldn't be.

Also, not all isolation is the same. Consider the scope of isolation between multiple sandboxed apps (some accessing the raw internet) on a single computer and two separate computers connected via a specialized protocol-aware data diode.


Managing all my car settings requires a touchscreen -- should I really have two touch screens for this? No. One screen, one computer, but two processes that are isolated from each other.


In my Rover the K-bus devices do all the stuff that doesn't affect the integrity of the engine management system. It is definitely not connected to the CAN-bus. Why should the locks be connected to the CAN-bus? That's a stupid decision. Locks are the responsibility of the body controller. The K in K-bus means body, in German.


Since you seem to know what you're talking about - can you explain: if the system can be accessed and controlled remotely, why it can't be patched remotely?


Easy to say, hard to do. Really cars now need TWO CAN bus networks, the RED network and the BLUE network. All critical car functions on the RED isolated network, vanity functions on the BLUE network.

So often, though, a designer will say "We really want to help people who call in for roadside assistance; we just want to read the CAN bus, we don't need to write it." And some complex function will be created that only "reads", but then an overflow attack or some other exploit lets arbitrary code run, and since it's possible for that code to call a write function, your security is toast.

So what actually will happen (I hope) is that a bunch of things will no longer be possible from the infotainment system, because that system won't have access to the CAN bus at all. And any UI that does have access to the CAN bus won't have any wireless access at all. Which will make some car features less featureful.


They've largely had something like this already in most cars, a low speed bus that's got the non-critical things (seat position controls, window controls, etc.) and a high speed one that has all the things like engine data, speedometer updates, etc.

I definitely agree, though, that they need to take this to the level usually seen in avionics, with a read-only gap in front of a third, entertainment-system bus. Without that you really can't hope to secure it. The only real problem I can see is how you would let the roadside assistance stuff unlock the doors. Maybe an authenticated write-only channel to the low speed bus where you don't get to control the message, just "unlock all doors".


There are very few "non-critical" systems in a 2+ ton vehicle moving at highway-to-expressway speeds. For an example of what a "minor" system like seat position can do, consider the following:

http://www.lakecountrynow.com/opinion/blogs/communityblogs/1...

After more investigation they ended up suspecting a different failure mode, but it speaks to the danger of even pedestrian functionality. Most people are shit drivers who are unsafe to be around in the best of times; "minor" things like stereo controls, windshield washing, etc. are certainly enough to distract them, let alone physically moving them out of reach of the controls. Particularly in combination, or in rush hour traffic or bad weather.


If an individual does something stupid and gets himself killed, that's one thing. If I can send a broadcast message to cut the brakes, and that can potentially affect many vehicles at once, that's something else entirely and the greater risk to public safety.


I've done some CAN stuff (not on cars though), and that is the way that everyone I talked to advised to build your system, if suitable. However, we still pushed firmware from the master, so if the master were ever compromised, then all of the nodes would be potentially compromised as well. There is only sending and receiving data. All nodes receive any data from anything else on the bus. A node typically ignores data not addressed to itself but not always, that is up to the programmer. Any node can potentially impersonate the master, or another node. There is no way to prevent this, other than through the firmware.
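
A tiny illustration of that last point: since frames carry no source address, a receiver's acceptance logic can only look at the ID and payload, so an impersonated frame is indistinguishable from a legitimate one (the ID and payloads here are invented):

```python
# CAN frames carry an ID and data but no source address, so a receiver
# cannot tell a legitimate master from an impersonator -- any defense has
# to live in firmware-level checks on content, not on the sender.

FIRMWARE_UPDATE_ID = 0x050   # invented ID for illustration

def node_accepts(frame):
    can_id, data = frame
    return can_id == FIRMWARE_UPDATE_ID   # all the receiver can check

legit  = (FIRMWARE_UPDATE_ID, b"\x01real-master-payload")
forged = (FIRMWARE_UPDATE_ID, b"\x01attacker-payload")

# Both frames look identical to the node; the forgery is accepted too.
assert node_accepts(legit) and node_accepts(forged)
```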


One thing I've not seen addressed in this context: If someone wants to create mayhem on a highway, why would they not go low-tech / low effort? There's a number of mechanical and electrical attacks that I can think of just off the top of my head.

Analogy: is it worth the time making a pick-proof lock for the front door when someone can just break a window?

(I'm not saying we should let car manufacturers off the hook, just offering a perspective on the realism of the threat.)


I'm sure there are some who might want to "create mayhem on a highway" and who would not choose this route. But when your potential attackers are [everyone with an internet connection] this kind of exploit has huge potential. Imagine if the vulnerability were bundled with a popular mobile website or if someone else cracked how to mass-distribute it. Countries that are at war could use it against their enemy. Al-Abu-ISIS-HAMAS-Nidal-Haram-Boko could use it against anyone/everyone.

>Analogy: is it worth the time making a pick-proof lock for the front door when someone can just break a window?

Yes, because not everyone can throw a rock from their house to yours, but nearly everyone can be connected to the internet.


why would they not go low-tech / low effort?

Attribution. Mechanical attacks are easier to attribute than something that can be done from across the world over the Internet.


There's also inhibitions to talk about. It's one thing to go lay a spike strip down in the middle of the road and it's another to do something over the internet.


A lot of cars have audio controls on the steering wheel. They used to be on separate wires, but now that's on CAN, to save money. So that's one reason infotainment is on CAN.

It could be its own CAN bus, completely separated, but you probably also have the cruise control on the wheel too. So do you put the CC on the same one? But wait, there's the SRS there too, oh man, that plays ball with the self-diagnostics.

Hmm looks like we need a high priority CAN in the wheel, might as well use it for everything.

That's how it goes...


It's perfectly reasonable to use the same bus to carry driving and infotainment commands from the wheel - those are simply physical switches. Also, the steering wheel isn't connected to the cellular network.

It's the infosystem that shouldn't be able to broadcast on that bus. There should be a very limited set of messages that will be accepted from it and there should be a dedicated circuit that drops everything else.
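A filter of the kind described could be sketched like this (a toy Python sketch; the frame IDs and their meanings are invented for illustration, and a real gateway would do this in firmware or dedicated hardware rather than software like this):

```python
# Sketch of a one-direction gateway: only a fixed whitelist of frame IDs
# from the infotainment side is ever forwarded onto the drive bus.

ALLOWED_FROM_INFOTAINMENT = {
    0x3E0,  # hypothetical: "now playing" text for the instrument cluster
    0x3E1,  # hypothetical: volume level for the cluster display
}

def forward(frame, drive_bus):
    """Forward only explicitly whitelisted message IDs; drop the rest."""
    if frame["id"] in ALLOWED_FROM_INFOTAINMENT:
        drive_bus.append(frame)
    # else: silently dropped -- a brake or steering frame never crosses

drive_bus = []
forward({"id": 0x3E0, "data": b"WXYZ 101.5"}, drive_bus)  # forwarded
forward({"id": 0x220, "data": b"brake"}, drive_bus)       # dropped
print(len(drive_bus))  # 1
```

The design point is that the whitelist is enumerable and tiny, so the set of things a compromised infotainment unit can say to the drive bus is bounded by construction rather than by the head unit's own (untrusted) firmware.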


I'm not disagreeing with you; it's just the reality. For example, CAN has no notion of source; in the abstract it's a shared broadcast medium, just over cables.

So one reason the infotainment might want to send commands is so that station/song information appears in the instrument cluster.

Now, I agree it would be really smart if there were another module sitting between the radio and the IPC, with two sides, filtering both ways. That gateway could also have logic like: "I have power, but it's been less than a minute, and this packet has the right code, so sure, I will let this reflash happen." In fact such devices exist (except they are more trusting).

The thing is, though, that the radio tends to have the most impressive CPU in the whole car, so there is pressure to fold that logic in there (one manufacturer tried to merge the BCM, TCM, telematics, and DIC all into the head unit with a roughly 400MHz CPU, so that gives you an idea): logic like "yeah, maybe; in fact we won't be sending that." But once it is hacked, all bets are off; it can even DoS the bus. The other aspect is that you just want the crap from Delco to work with the stuff from TRW, and the gateway is getting in the way, and you don't want to expend the resources to figure out why.

When it was all a wired network, it did not really matter so much.


> For example CAN does not have a notion of source, it's really in the abstract a whole lot like a bus, just with cables.

I get that. And if it did, the attack would be spoofing that.

Which is why the solution needs to filter what goes onto the bus - at the only point the source is known.

> When it was all a wired network, it did not really matter so much.

Well, consider if I told you there was a way to "cut the brakes" (at a future date, when the car is at speed, etc) on any car you've been in, without any tools needed or evidence left. Untraceable murder.

It's only the scope of the current attack that makes that look minor.


I thought about this too, but they're replacing the firmware with their own custom firmware OTA. How do you know that your hypothetical "very simple listener" won't be bypassed or re-appropriated for other means?


This sounds like the equivalent of sending UDP over the Internet. If you're receiving UDP from a server, you're not going to get access to the server just because it spouts UDP. If that's the _only_ thing the server does, you're just not going to gain access to it.
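The one-way UDP analogy can be made concrete (a minimal localhost sketch; the "speed=55" payload is just an invented telemetry string, and the port is whatever the OS hands out):

```python
import socket

# One-way telemetry over UDP on localhost: the listener only reads
# datagrams and never sends, so there is no channel back into it.

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"speed=55", ("127.0.0.1", port))

data, _ = recv.recvfrom(1024)
print(data.decode())  # speed=55
recv.close(); send.close()
```

The receive side parses bytes and does nothing else; as long as the parser itself is sound, there's no request/response surface to attack.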


It's unfortunate, but the reality is that manufacturers are too cheap/lazy to segregate devices onto separate L2 segments, or to deploy firewalls or keyed AES, until forced to by the exposure of hack demos. In cars (and IoT), standards make everything cheaper, but also easier to hack without sensible security measures.


>> This is a terrible decision.

> It's the only one available to them.

No, it's the only easy option they have.

Take all the vulnerable vehicles off the road. That's another option they have.

> How would you fix the issue for existing vehicles?

Have dealers yank the cellular connection in every vehicle and refund the customers something for removing a feature the product was sold with.

Nothing else will fix it in a short timeline.


It's not just cellular. They also found it was the digital radio, and presumably anything else on that bus. Bluetooth? Key fobs? In-car wifi hotspot? Who knows what other entry vectors there are.


I think that disabling the entertainment center's access to the internet would be the responsible thing to do. Given Toyota's recent $1.2B settlement over the unintended acceleration defect, leaving this door open to hackers could be very costly for Chrysler.


Hell, I'm driving a 2015 Jeep Cherokee as a rental right now and I don't even want it connected to the Internet. But you've gotta have car apps, right? How else would car manufacturers be priming you to accept in-car advertising than with an Internet-connected screen that you can't turn off completely?


Assuming the network fixes are effective, the recall is just a PR campaign.


Perfect network security is a rather big assumption, though, isn't it?

We recommend layered security to protect people's cat pictures – doesn't it seem like a good idea for something which could literally result in casualties?


Sure. My point is more that a fix that eliminates the ability for people to just use the internet to manipulate the vehicles removes a great deal of the urgency for corrections on the vehicle side.


What does it mean to you to "eliminate the ability for ..."? To patch the known bug? Or do an audit and fix all the bugs they find? Or unplug the system?

If they were yanking the cellular connection that'd be pretty good... If they're just patching one or two obvious holes it'll likely be broken again by Blackhat.


It means I should have used fuzzier language (so as to not imply that I have a solid understanding of the bugs).

But:

https://twitter.com/0xcharlie/status/624608184485851136

So the vehicle connection is not as visible as it used to be.


Yeah, that is valid. The vehicle shouldn't be visible on the net.

But that's the least useful way to protect us. Literally. If the vehicles remained visible they'd have to fix the bugs - this way they'll simply pretend they did.


Apparently they've already addressed the recent vulnerabilities:

https://twitter.com/0xcharlie/status/623258479730552832


It does make it less of an emergency but it also means that anyone with malicious intent has a huge incentive to look for a way around the network block because there's a known win if they find one.


Would it really be hard to fix this OTA for all vehicles, reusing (and patching) their own bug? This is a technical question; the law might forbid doing it.


"Hardware has permanence"

That automobiles are recalled for reasons other than hardware replacement is a recent phenomenon.

However a software patch would keep Def Con busy while replacement hardware is developed.


> That automobiles are recalled for reasons other than hardware replacement is a recent phenomenon.

Sure, but how often are they doing major surgery vs. replacing a part / doing a patch? It sounds to me like properly fixing this issue would be a fairly sizable undertaking.


No, it's a stopgap. They have to be seen to do something to counter all the bad press. You can be sure they'll be looking at a more definitive and more robust solution in the longer term but in the shorter term they need to be able to say 'this particular hole was plugged'. That's just damage control on their part, nothing more or less.


The problem is that a fix for this particular problem doesn't actually make anyone secure, it only blocks the one specific attack the hacker told them about. (Hopefully the hackers kept another bug in reserve so they can trigger another recall/patch cycle as soon as this is over.)

We need to communicate that this isn't a decent half-measure, this is specifically a worthless measure intended to keep vulnerable cars on the street where they can kill people because a real fix is seen as being too expensive.


> The problem is that a fix for this particular problem doesn't actually make anyone secure, it only blocks the one specific attack the hacker told them about.

You are absolutely correct. Unfortunately, so is "jacquesm":

> > They have to be seen to do something to counter all the bad press.

To me, Fiat Chrysler is doing classic PR damage control that the automotive industry knows far too well. It wouldn't surprise me to find out that Fiat Chrysler has threat analysis reports regarding this and other vulnerabilities within their organization.

Sadly, this is not uncommon in the automotive industry[1]:

  GM has been heavily critiqued after the
  company admitted that engineers were aware of
  the issues that caused this recall as early as
  2004. Yet it took nearly 10 years later for GM
  to finally issue a recall ...
1 - http://www.bcoonlaw.com/general_motors_recall_lawsuit


There is simply no alternative at this point. The press coverage was so big (which I prefer) that they can't hope that 100% of users will do the firmware upgrade themselves.

This is a big fucking warning to all other car manufacturers for the future.


> The recall doesn’t actually require Chrysler owners to bring their cars, trucks and SUVs to a dealer. Instead, they’ll be sent a USB drive with a software update they can install through the port on their vehicle’s dashboard.

http://www.wired.com/2015/07/jeep-hack-chrysler-recalls-1-4m...

> Chrysler says it’s also taken steps to block the digital attack Miller and Valasek demonstrated with “network-level security measures”—presumably security tools that detect and block the attack on Sprint’s network, the cellular carrier that connect Chrysler’s vehicles to the Internet.

Oh great, they'll install an antivirus and think they've fixed the problem.


> Instead, they’ll be sent a USB drive with a software update they can install through the port on their vehicle’s dashboard.

So, if I'm in someone's Fiat Chryslermobile, and they're pumping gas, I can flash the vehicle's firmware from the passenger seat?

This seems a lot easier than rooting it over the air.


That's how I get updates on my car. A signed executable from Fiat. It hardly ever works right, and it takes longer than the time you need to fill up with gas. And the car has to remain stationary.


> Oh great, they'll install an antivirus and think they've fixed the problem.

This is a super strange thing to say after your first quote. They've blocked it on the cell network AND are sending out updates to car owners.

So how you ignore your own first quote to criticise them for "only" blocking it at the network level is bizarre; it's like you didn't read your own post...


The post isn't complaining they'll only block it at the network level, it's complaining that they're trying that at all. The issue is vulnerable cars - not the lack of a firewall between the net and those vulnerable cars.

Any network fix is useless smoke and mirrors.


> This is a terrible decision.

Regardless, it's the only decision they can make at this time, and the correct one. It's honestly not terrible at all. What would be terrible, and worthy of the comment "a terrible decision", would be doing nothing at all.


I think it's a great decision. Vulnerabilities should involve this much pain for the auto companies.

We all acknowledged that the issue is architectural, and that can't be fixed by flashing firmware.


Chrysler doesn't have much of a choice. They have to do a "voluntary" recall for a safety issue, or the NHTSA orders an involuntary recall. Since this has terrorism implications, they need to get this done.


Terrorism implications? Sure, but so does pretty much everything. If terrorists inside the US wanted to mess things up they would just need to blow up a few small targets in BFE, Iowa and the like. Worse, kidnap a few people and behead them. The US would be utterly paralyzed with fear.

Ignoring direct violence? DoS the PSAPs so that small things like reporting burglaries or helping heart-attack patients fail. Or shit, have you seen the cabling at many datacentres? Take a few of those out and the US Internet would be off for, what, weeks? (And that one you can even do remotely.)

My point is that you cannot harden the entire US infrastructure. Saying this car issue "has terrorism implications" is either a tautology or fearmongering.


That's a good use for formal software verification. While proving the safety of the entire codebase might currently be out of reach, it would be quite feasible to prove the safety of a message bridge.
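Short of a full formal proof, even exhaustively checking the bridge over its whole input space goes a long way, since standard CAN IDs are only 11 bits (a toy sketch; all IDs here are invented):

```python
# Brute-force "proof" over the full 11-bit CAN ID space that a whitelist
# bridge never forwards a dangerous frame ID.

ALLOWED = {0x3E0, 0x3E1}      # hypothetical infotainment-to-cluster IDs
DANGEROUS = {0x220, 0x0A4}    # hypothetical brake/steering IDs

def bridge_forwards(can_id):
    return can_id in ALLOWED

# 2**11 = 2048 standard CAN IDs: small enough to check every single one.
violations = [i for i in range(2**11)
              if bridge_forwards(i) and i in DANGEROUS]
print(violations)  # []
```

For a real bridge the property would also quantify over frame payloads and timing, which is where proper model checking or proof tools earn their keep, but the point stands: the bridge's state space is tiny compared to the whole codebase.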


> The company added that hacking its vehicles was a "criminal action".

I don't think that's the case, but I still commend them for doing a recall this quick.

Shooting the messenger seems to still be quite a strong reflex for corporations faced with bad news. The way to look at it should be that these guys did Fiat-Chrysler a service. After all, it's not only security researchers that have the ability to write code and that have prolonged access to a vehicle to test.

They seem to be mistaken about the time needed to write the code; after all, you can write and test the code on a different vehicle than the one you intend to crash.

Law enforcement typically won't analyze the firmware of all the computers in a car after a single vehicle accident (and it would probably be quite possible to erase the evidence once the car has been given a command sufficient to kill the occupants).


Shooting the messenger isn't just bad PR for corporations. It's deeply ingrained in most enterprise culture, and it's why problems like this occur in the first place.

You know that engineers on the ground were well aware of this vulnerability. You know they tried to warn. But what happened to the warnings? They didn't make it up to the executive levels necessary, because several layers in between feared for their jobs and careers if they said something like "This new feature you're demanding could be used to brick every single Fiat-Chrysler vehicle we make, or even murder people". So the executives were asking for features but flying blind on danger.

And ultimately, this is a failure of the executive structure and the corporate structure (and it's an inherent antipattern in large organizations). Since the nature of hierarchy is for subordinates to hide unpleasant truths from superiors, they should have been actively asking about the hazards. They could have hired outside security reviewers. But they didn't.


The language in the FCA press release (linked in another thread) is softer:

The recall aligns with an ongoing software distribution that insulates connected vehicles from remote manipulation, which, if unauthorized, constitutes criminal action.

So through the process of extensive journalism, unauthorized remote manipulation got turned into hacking vehicles.


It was hacking. They clearly proved they had control over dangerous parts of random vehicles.

> The recall aligns with an ongoing software distribution that insulates connected vehicles from remote manipulation[...]

Assuming that's not a lie, were they working on that because the "researchers" (quotes due to irresponsible "testing") had been sending FCA what they found, rather than from their own internal decision to improve things, and they knew this (specifically) was coming at some point?


It was hacking.

I'm not quibbling about that. I replied to this statement:

> The company added that hacking its vehicles was a "criminal action".

I don't think that's the case, but I still commend them for doing a recall this quick.

Which characterizes the statement in the article to be about hacking the vehicle at all. So my reply points out that the manufacturer was probably not putting forth that characterization when they talked about criminal action, they were likely talking about unauthorized remote manipulation (which should be and quite likely is criminal).

So my argument is that the manufacturer statement about criminality is narrower in scope than the BBC translation of it.

The software update release is obviously in response to the research:

https://twitter.com/0xcharlie/status/623492229714313216

Upgrading it to a recall is probably in response to bad press.


Yeah, bad press about the ability of someone to remotely kill you in their products.


>> The company added that hacking its vehicles was a "criminal action".

> I don't think that's the case, but I still commend them for doing a recall this quick.

It is illegal under existing laws. Basically, it falls under the same set of laws as cutting someone's brake line. You are, at a minimum, in the "reckless endangerment" category.

It doesn't take new laws, the old ones have seen enough people doing stupid things to other people's cars.


It's not illegal to hack your own belongings. It may be illegal if you trespass on someone else's network to do so, or if you manage to provably put the public in danger, but merely discovering a security vulnerability and exploiting it in something you own outright is not illegal. Chrysler's statement is overly broad IMO.


I think the statement is more for the general public.

"if you manage to provably put the public in danger"

Watching the video, they did it on a public highway with other cars. They killed his engine while he was on the highway. He's damn lucky he wasn't rear-ended and didn't get someone killed. Yes, that is illegal and definitely endangerment.

This is crap stunt journalism.


... and the car slowly drifted to a stop.

The only crap stunts here are by the car company pretending to fix the issue and knowingly leaving everyone vulnerable.


... and if the car behind them didn't? You act like there's some sort of protective bubble because of this fact.


...in the right lane of a highway, which, as mentioned in the video, left him no way to get off the road


This happens all the time as part of normal driving. It's not like the car stopped suddenly, or veered wildly, etc.

Some traffic may have backed up for a minute until he restarted the car.


When it happens, it is an accident, not the intention of a third party. Arguing it can happen accidentally isn't a defense against endangerment.


Then it would have crashed into any other car that slowed a little. Traffic speed changes with conditions and that changes constantly.

The issue is what the extra risk is compared to the baseline risk.


The extra risk is slowing or stopping when people don't expect it. Unexpected things cause collisions. There's also extra risk in creating an obstruction, which creates risk for other drivers.

There's no doubt that driving is inherently a terribly risky activity. But I also don't think that pretending that taking your foot off the gas on a highway is a risk-free activity is particularly helpful.


It's extra risk because the Jeep's engine was turned off by someone accessing the car's computer, on a public highway, in traffic, with the vehicle in the right lane.


What's the baseline risk? Did this triple anyone's risk, or increase it by 0.01%?

Then combine that with the recall it spurs. The value of a live highway hack is very high. It got all the media to pay attention and got a commitment to a fix. You're freaked out about one car slowing down in traffic. Imagine many of the 500,000 vulnerable cars simultaneously accelerating wildly all across the nation if this was actually exploited.

What's the risk of doing nothing? Or of wasting years more with ignorable private disclosure?


"You're freaked out about one car slowing down in traffic."

No, I'm pointing out that an idiot reporter endangered his fellow citizens to create hype instead of doing the reporting in a safe environment. You're the one who seems to be trying to justify endangering people in order to report that someone else is endangering people.

The demonstration could have been done safely and effectively on a rented race track. It happens all the time when you want to test something around motor vehicles. Instead he went for cheap and sensational.


>It's not illegal to hack your own belongings

Doesn't the DMCA make that illegal? Isn't that the big fuss about John Deere tractor fixing and the like?


There should be push back against the idea that it's inherently a crime to either R&D the operation of your own property, or modify it. There is a long-standing tradition of car modification in the U.S. especially, and I think it's unacceptable for this to be proscribed by making it a crime or even subject to civil action. Obviously any modification I make is my liability, but that's a totally different thing than saying it's disallowed.


The messenger did some stunt hacking on a highway to promote their talk, and didn't follow any guidelines of responsible disclosure that would let the company take less drastic steps to fix the problem, or have some time to do it in a way that isn't quite so expensive or rushed.

Should they be criminally charged, no, not for the hack itself, perhaps for the highway theatrics. But they deserve to sweat and hire some lawyers, and perhaps face a civil case for the irresponsible way they disclosed it.

The surest way to get legislative pressure to regulate the infosec industry, with red tape that hampers or outright outlaws security research, is to have guys like this be so irresponsible with research that affects people's lives that they give legislators cause to act.


The messenger reported the problem privately 9 months ago[0] and it was only fixed as they were preparing to go to the press. The day before the "stunt hacking" Jeep were claiming it was not a problem[1].

[0] https://twitter.com/0xcharlie/status/623492229714313216

[1] https://twitter.com/daviottenheimer/status/62351634451271270...


FCA deserved the press, but demoing it on a public road (at high speed) was horrendously dangerous and irresponsible.

They deserve to be charged/locked up for that portion of it.


>horrendously dangerous and irresponsible.

Not something that I would do, at least not on that stretch of road. But, nobody was actually harmed, nor was anyone even inconvenienced beyond perhaps having to make a lane-change. It could have caused a pile-up accident, but to give some perspective, I commuted into Houston on that same day and counted half a dozen disabled vehicles on my route, and two police officers trying to catch speeders; both of which add at least the same magnitude of risk to other motorists.

>They deserve to be charged/locked up for that portion of it.

"They" the two that did the risky road demo, or "they" the ones who waited 9 months to mitigate the safety defect in thousands of cars? Locking one of those groups up makes us more safe, and the other less.


> But, nobody was actually harmed

That's ends-justify-the-means reasoning. I'm not arguing that someone was hurt; I'm arguing that they put people in a situation with a higher-than-necessary risk.

> I commuted into Houston on that same day and counted half a dozen disabled vehicles on my route, and two police officers trying to catch speeders; both of which add at least the same magnitude of risk to other motorists.

You can't control what other drivers around you do to cause road hazards, but in this case that's exactly what the hackers did. By cutting the transmission they caused a possibly dangerous situation.

They could have done this test safely in numerous ways: on a rig designed for testing horsepower, in an abandoned parking lot, on a private track, or by asking some police to close a section of road.

Instead they did it in about the most dangerous way possible: live on uncontrolled roads with other traffic.

> "They" the two that did the risky road demo, or "they" the ones who waited 9 months to mitigate the safety defect in thousands of cars?

The 'researchers'/hackers. They directly put people's lives at risk to have a stunt to prove their point.

FCA screwed up big, and deserve some sort of penalty from the government. This shouldn't have been possible in the first place. But they didn't modify a running vehicle at 70mph surrounded by unsuspecting motorists.

If the hackers had done this reasonably safely, I'd have no issue at all. But they don't deserve the title 'researcher' after behaving like that.


The "ends justifying the means" is not a fallacy, unless you think Consequentialism is completely wrong[1]. In this case, it seems a lot of people do think the ends justify the means. In other words, the risk was morally justified given the results.

[1] https://en.wikipedia.org/wiki/Consequentialism


I don't think it's a good argument in this case, since they had much safer options.


>That's ends-justifies the means.

I don't like it either, but it's a common philosophy that we experience in our daily lives, whether we want it to be that way or not.

>Instead they did it in about the most dangerous way possible: live on uncontrolled roads with other traffic.

There are a number of ways they could have made it worse.

>The 'researchers'/hackers. They directly put people's lives at risk to have a stunt to prove their point.

We tolerate the same kinds of risks daily by allowing police to conduct traffic stops on busy roadways. We do so ostensibly for safety, but the reality is that it is mostly for some pretty dubious financial reasons. That's the ends justifying the means: in this case the ends are traffic fines collected, and the means are the lives of police officers and motorists, and man-decades of time lost in traffic every day.

>FCA screwed up big, and deserve some sort of penalty from the government.

But not jail, like for the evil hackers-not-researchers?

>This shouldn't have been possible in the first place.

Another uncomfortable reality for you. Software verification is a huge challenge which nobody has gotten right yet. There will be more of these vulnerabilities. We have to get this disclosure/update process right. The best thing that automakers can do to prevent disclosure stunts like this is to fix vulnerabilities ASAP when they're discovered.

>But they didn't modify a running vehicle at 70mph surrounded by unsuspecting motorists.

What they did/do (months worth of nothing) is far worse, and endangers far more lives. Imagine if someone/somegroup/somegov't had managed to bundle this vuln with a popular cell-phone app, or mobile website, and decided to activate it one day at rush hour.

>If the hackers had done this reasonably safely, I'd have no issue at all. But they don't deserve the title 'researcher' after behaving like that.

That's just petty, and nobody really cares what you call them. It does nothing to move the discussion in a fruitful direction; but it does make you appear a bit shallow in your reasoning.


> We tolerate the same kinds of risks daily by allowing police to conduct traffic stops on busy roadways.

We've chosen to accept that risk, and have control over it though government. People could choose to make it illegal.

Also note that when the police do it they take precautions such as having bright multi-colored lights on the car to draw attention.

It also bothers me that the Wired guy didn't know what was going to happen, so he couldn't prepare as well. Even that would have helped (though I still think the stunt was dangerous enough for someone to go to jail).

> But not jail, like for the evil hackers-not-researchers?

We can't put a company in jail and we don't usually do it with the CEO for much bigger crimes. I doubt the CEO had any idea this could happen.

I imagine this is one of those things where dozens of people in different departments (and even companies, due to auto-parts suppliers) all made small but not terribly bad decisions that added up to a huge problem. I doubt there is a smoking-gun email somewhere in which a manager says "I know someone could disable the car on the freeway, but it will save us millions!"

And while this is happening to FCA, there are other cars with cell-connected systems (VW, BMW, Audi, others). I imagine if we had enough time we'd find at least 2 or 3 other companies with vulnerable systems in other car brands' 2015 models.

I just don't see how we could jail anyone at FCA. That's why I didn't call for it. On the other hand, the hackers seem like a pretty cut and try case.

> That's just petty, and nobody really cares what you call them.

The words you use to describe something matter. After taking such a stupid risk, I don't see why they should be acknowledged on the same level as security professionals who don't put people in danger for headlines.

Frankly the number of people in these threads defending the overly dangerous demo scares me, as does the number of people who seem to tacitly encourage such behavior.


>We've chosen to accept that risk, and have control over it though government. People could choose to make it illegal.

I'll expect to see you down at the legislature lobbying for reform and threatening jail for the opposition.

>Also note that when the police do it they take precautions such as having bright multi-colored lights on the car to draw attention.

It's hard not to notice something so distracting! That's another argument against the practice, isn't it?

>We can't put a company in jail and we don't usually do it with the CEO for much bigger crimes. I doubt the CEO had any idea this could happen.

It's a funny thing. We can never seem to find anyone in a company who knows anything or has any responsibility. We just have to satisfy ourselves that if popular opinion moves against a company strongly enough, or the gov't gets shamed into prosecuting them, that maybe then they might address some problem, usually after it actually kills people, so long as nobody has to admit fault. It's almost like a huge stunt is needed to get peoples' attention sometimes!

>I imagine this is one of those things where dozens of people in different departments...

Yeah, we all know about how corporate structures insulate decision-makers from the consequences of their decisions.

>cut and try

"Cut and dry" So, because it's easy to prosecute these two, and hard to prosecute the others, justice should take the easy route, even though one may have endangered tens of people on one occasion, and the other endangers tens of thousands of people for months? I see a different value proposition here than you.

>Frankly the number of people in these threads defending the overly dangerous demo scares me

I guess it would surprise you to learn that I feel the same way about you after this conversation?


I'm gonna take this argument in a slightly different direction. While I agree with you that the way this experiment was conducted was needlessly unsafe and irresponsible, I don't believe these researchers deserve jail time, given that only positive outcomes resulted from this event. Pragmatically, society has nothing to gain from punishing these individuals; in the best-case scenario doing so costs us money and pointlessly devastates several lives. For that matter, I am also against sending people to jail even when their actions have negative effects, if it is not clear that isolating them from the public will reduce future harm.


My reason for suggesting jail is that I worried that if they're not punished (just a week would be perfect), other people will think such needlessly unsafe public demonstrations are acceptable.

I don't know another way to say "this is not acceptable". To literally just say it but not punish... I don't think that would be heard, and someone else would take a stupid risk.

They certainly don't need 5 years or something like that. Just a very noticeable slap on the hand.


[flagged]


> You're a bad person and you should feel bad.

I'm sure you know this kind of thing isn't allowed here. Please be civil in HN comments.


> What's wrong with you? [...] You're a bad person and you should feel bad.

That's unnecessary (and I get the reference).

I have no problem with them finding and disclosing this hole. I'm glad someone is proving that cars are woefully insecure.

I have a problem with the unnecessarily dangerous way it was demoed.

Why do I have to choose between "white hat hackers saving the world" and "evil computer terrorists"? Why can't I judge one part of the story (the discovery) separately from another (the demo)?


In this case, your judgement is poor. St. Louis area prosecutors would like nothing better than a showy media trial of suburban white dudes to distract from the well-deserved asskicking they've gotten over the last year. Fiat-Chrysler would like nothing better than pictures of these guys in handcuffs to distract from their fuckup. Those of us who hack shouldn't be so quick to turn on our own, and those of us who have ever driven on a public road know that sometimes vehicles slow down. Driving is dangerous, and as a society we've accepted that, because trade-offs. If you don't make your kids wear crash helmets while riding the school bus, you have no business calling the fury of the state down on these guys.


A lot of reasonable people would say that it is because driving is dangerous that we shouldn't be tolerant of people needlessly adding more danger to it.


> responsible disclosure to the company

When are you going to start discussing the criminal implications of the company in ignoring that disclosure?

> a way that isn't quite so expensive or rushed.

Since when is the company's expense our problem?

The fix is trivial. Yank the cellular connection. The company is refusing to do the simple, easy, and quick fix. They're the ones making it a painful process. They're the ones choosing to leave everyone vulnerable while they hide the problem with a bandaid.

> legislative pressure put forward to regulate the info sec industry

Your country's laws are your problem. That would be disastrously dumb, but it's your responsibility to forbid your politicians from shooting the country in the foot.

If you blame this on hackers you'll only ensure that a foreign hacker will start a campaign of trying to get you to pass broken laws.


I don't think they're necessarily asserting that the security researchers carried out a criminal act in demonstrating that the vehicle could be hacked (they almost certainly didn't). But they would be correct to assert that anybody who hacks someone's Jeep without their permission certainly WOULD be carrying out a criminal act. As such, I think they're reasonable in pointing out that this is not a safety issue but rather a protecting-their-customers-from-assholes issue.


> As such, I think they're reasonable in pointing out that this is not a safety

If your system ALLOWS other assholes to take over my car, it's DEFINITELY a safety issue.


I would add that shipping cars with such vulnerabilities should be a criminal action, if it isn't. It's certainly negligent (however unintentionally) and unsafe for consumers.


Not to go all law school on you, but criminal actions are defined by statutes. There are many actions that are ethically wrong, but perfectly legal.

Now, for a civil action, it is always fuzzy. Whether or not it is negligent is going to be based on many things, a big one being whether it was "reasonable". Shipping cars before this event may have been reasonable, but continuing to do so may be negligent. But at least in the US, that needs to be decided in court, not on the internet.

I'm pointing all this out because you said "Certainly". That word really has no place in the US legal system.


Given the complexity of the software components in a modern car, I think it would be a safe assumption that all of them are shipping with such vulnerabilities. If there is one area where we could benefit from some openness, it is the embedded world. These are life-critical systems in every sense of the word, and what I've seen of such code bases does not inspire confidence at all (rather the opposite).


The issue isn't vulnerabilities, it's connecting those vulnerabilities to the net. This isn't about software quality in general.


But that's exactly how these things happen. People program insecure stuff thinking 'well, at least this isn't connected to the internet', then one day someone else takes the blackbox the original guy (long departed) put together, hacks up a TCP/IP interface or does something else without looking through the codebase and boom you're wide open.

You can replay this story tens of times over the next couple of years, and let's hope it's only the nice guys finding them.


See also: SCADA


Don't get me started on that one. Yes, it's probably even worse than automotive, because these are hundreds of thousands of legacy systems, quite often without any security at all, connected to the net. Obscurity is the only thing that keeps these systems working.


Yeah, good luck with that. Law enforcers find it incredibly useful to be able to remotely take over a (suspected) criminal's vehicle and force it to stop, so they're hardly going to support a law that makes that impossible. And let's face it, the only way to truly make sure a hacker can't take over a vehicle remotely is to make it impossible.


Are there other areas where shipping with known vulnerabilities that can endanger lives is criminal? EG in medicine?


There is such a thing as "criminal negligence", but I don't know if this case or anything in medicine would qualify. My guess is no, based on some examples I'm looking at.


An argument could be made that putting the vehicle control system on the same bus as the internet-connected entertainment system was criminal negligence, but they would just hang the guy who probably put up a fight against it in the first place for not putting up a bigger fight or something, the company would never feel any pain. :/


I assume there was security, no matter how weak, that was bypassed. That's criminal in the US under the DMCA. Hell, even bypassing ROT13 "encryption" is criminal under the DMCA. Furthermore, even if the system was wide open, I have no doubt Chrysler can claim criminal action under the CFAA.

Ain't America's laws wonderful?


> even bypassing ROT13 "encryption" is criminal under the DMCA.

Is it? I thought the encryption has to be "effective", which would not be the case for ROT13. I would even argue that ROT13 is merely an encoding.
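For what it's worth, ROT13 really is just a keyless, self-inverse letter substitution — Python even ships it as a text transform in the standard library, which says something about how far it is from "encryption":

```python
import codecs

# ROT13 shifts each letter 13 places; applying it twice is the identity.
ciphertext = codecs.encode("Attack at dawn", "rot_13")
print(ciphertext)                            # Nggnpx ng qnja

# "Decrypting" is the exact same operation -- there is no key at all.
print(codecs.encode(ciphertext, "rot_13"))   # Attack at dawn
```

There's nothing to "circumvent" in any meaningful cryptographic sense, which is exactly why it's a good test case for what "effective" is supposed to mean.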


http://www.stoel.com/the-anti-circumvention-rules-of-the-dig...

No, the exemption is this:

Encryption Research. The DMCA exempts encryption research from its circumvention and trafficking bans. Circumvention of access controls by one who has lawfully obtained the encrypted copy is permitted if the circumvention is done in the course of "an act of good faith encryption research." The researcher must first have made a good faith effort to obtain authorization before the circumvention, and the circumvention itself must not constitute infringement. The researcher may also develop and employ tools to circumvent the access controls for the sole purpose of carrying out the research, and may share those tools with collaborative researchers. The DMCA lists several factors to be used in determining whether the exemption for good faith research should apply in a particular situation, including whether and how the research results were disseminated, whether the researcher is engaged in a course of study or is trained or experienced, and whether the researcher provides the copyright owner with the results of the research.


And even with this exemption, researchers have to tread very lightly to avoid prosecution.


"effective" as in "in effect", i.e. it exists and is used, not in that it's actually secure. The law would be rather pointless otherwise, as it would only make it illegal to bypass encryption which cannot be bypassed.


Right, it's like a password or a door lock. The strength of them isn't the issue, it's more that they communicate the owner's intent.

Effective would probably have more to do with covering all the routes of ingress rather than doing so strongly.


Something I'm big on in life is that the worst thing you can say to someone is that you don't understand them. To not understand someone is to deliberately not give the issue thought, because you're so deeply embedded in the idea that you have enough background on an issue to decidedly say that you are right and that the reasoning behind your stance is flawless, and thus that they are wrong. I believe that everyone's beliefs and actions are understandable. They might use flawed logic, or they might value things differently than you, but I believe 100% that everyone on this earth uses reasoning for every action and opinion we have.

But when it comes to politicians trying to ban encryption and automakers trying to ban me from editing bits on a memory chip that I got as a part of purchasing one of their cars, I really am unapologetic when I say I really don't understand them. I completely and utterly lack an ability to get into their heads. It would be fascinating to lose my knowledge of everything I know about computers for a day and give these issues thought. To have computers be mystical voodoo magic would be an amazingly different world.

If I had to give it a guess, that's probably what I'd say. Politicians and automakers and middle aged Edward Snowden haters all lack an idea of what is possible and what is unpractical when it comes to computers. They lack an appreciation for just how much commonality there is between the computer running a McDonald's register and the one making sure their car doesn't kill them.

Politicians think we can just "ban" encryption, as if this isn't some mathematical concept with freely-available professionally made implementations. They think Apple has gone to great lengths just to implement their end-to-end iMessage encryption... when in reality they almost certainly took the path of least resistance and merely stand on the shoulders of giants that collectively implemented encryption for them. PR reps for automakers think of code in such an abstract way that they think modifying it must be terribly difficult and thus inherently malicious, when in reality their programmers stood on the shoulders of giants and used the same common interfaces that every programmer uses. Hacking their car was probably done by a curious person decompiling firmware pulled off via JTAG or a test clip. They think Edward Snowden must have been sneaking around in underground tunnels with a ski mask and plugging his laptop into servers, when in reality this was just a drive that was mounted on a machine he used.

Tl;dr: Programmers all pretty much follow the path of least resistance. The general population's lack of background makes them think that things are much more difficult, and thus more deliberate, than they really are.


> Politicians and automakers and middle aged Edward Snowden haters all lack an idea of what is possible and what is unpractical when it comes to computers.

Plenty of the Edward Snowden haters are <= 30 and plenty of the people that applaud him are > 40 so let's not add age into that discussion.

Whether you are pro/against Snowden probably has more to do with your life experience to date and your general views on what a government should be able to get away with.


> Plenty of the Edward Snowden haters are <= 30 and plenty of the people that applaud him are > 40 so let's not add age into that discussion.

> Whether you are pro/against Snowden probably has more to do with your life experience to date and your general views on what a government should be able to get away with.

I only said, "middle aged" for exactly that reason: you might hate Edward Snowden and be a totally technically literate person. Age doesn't necessarily shape your view on Snowden, but age almost certainly correlates with technical literacy, and I think technical literacy definitely plays into your perception of how much effort Snowden went to in order to gain access to the files he leaked. If you think that he went to great lengths to gain access to the files, you might think his actions were more malicious / that he was looking for trouble.


That "criminal action" statement was repeated multiple times; obviously they're trying to send a message. We need whistle-blower protection laws so that the auto companies can't try to eliminate this research by sending the researchers to jail.

Law enforcement won't try to analyze the firmware, but class-action lawyers certainly could. Won't do any good if the bad actors have erased their tracks of course.


If we outlaw hacking then it will only be the criminals that have the hacks!


I see a lot of people thoughtlessly applying computer-security mindset here to vehicle-safety. They're really not the same thing, because they are handling very different risk models. Vehicle safety is about "how will this system perform under typical conditions when something goes wrong?". Computer security is about "how will this system perform if a smart asshole tries to abuse it?". Vehicle safety generally doesn't concern itself with deliberate sabotage. You won't see a product recall for a car because "under some circumstances, a criminal might cut the brake cables". What Chrysler are doing here is, though, effectively that, and why they have to do that for a computer security issue is interesting.

We're all used to the idea that if you put a computer on the internet, it will come under attack. People will try to snoop on the data it handles, or subvert it to use it for their own purposes. So why do we then move on to assume that, if such a system is attached to something safety critical, that those same people who will attack the computer to get at its data or processing power will now move on to attacking the brakes, or the engine, and try to kill people?

Most vehicular crime isn't homicide, it's acquisitive - people will attack vehicle security systems to steal the car, or get access to valuable contents. Sabotaging the vehicle to kill the driver is way down the list.

As a society we tend to assume that physical security is not the only thing that stops random strangers from trying to kill us. We do not all drive around in armored cars in case someone decides to shoot at us from an overpass. We don't all sweep under our car with a mirror for bombs before we get in and start the engine.

And it's certainly not a failing of Chrysler's engineers to adequately consider customer safety that they sell Jeeps which are not bulletproof and which have exposed frameworks on the underside where bombs can be attached.

So why is it that we're so quick to assume that because a safety-critical computer system is exposed to the internet, that this is the worst thing ever?

Is it that as far as physical security of your Jeep goes you only have to trust the people in your neighborhood, but for internet security we have to trust the whole world?


> So why is it that we're so quick to assume that because a safety-critical computer system is exposed to the internet, that this is the worst thing ever?

Because of the Greater Internet Fuckwad Theory (or, more nicely put, the "online disinhibition effect"[1]).

We don't worry as much about random strangers harming us in person because most people are generally well-behaved when they are face to face with someone in real physical space.

On the Internet, where all you see is a screen and all you do is click your mouse, "reality" gets a lot more tenuous. In that environment, people act worse.

If you were walking over an overpass and saw someone had left a cinder block up there with a note attached saying "Throw me!", how likely would you be to heave it up off the ground, carry it to the edge, and drop it onto a car you can see passing below, whose occupants are visible to you?

Now imagine you stumble onto a random web page with a button labeled "Click to drop cinder block off overpass". Tempting?

The way our behavior differs in these two circumstances is a big part of why Internet security is so different from physical security. (The other big difference is how data can be replicated for free. It takes 50x as much effort to steal 50 cars. It often takes no more effort to apply the same hack 50, or a million, times.)

[1]: https://en.wikipedia.org/wiki/Online_disinhibition_effect


In addition to this, there are differing scales at work. In real life, by your analogy, you might find a cinder block with a "throw me" note. With the internet, you might find a button that says "throw 1.4M cinder blocks." One maniac can cause much more trouble that way.


I think it's pretty clear that connecting a computer to the internet is a matter of trusting the whole world, which is a lot harder than trusting local circumstances.

Your assessment of risk has to change when the cost of scanning and attacking your machine from afar in a hard-to-trace manner is dirt cheap.


That doesn't sound like the same vulnerability at all. Cutting brake lines no longer results in someone's death [1], and they'd know at the time they first try to brake, when they'll probably be at low speed anyway.

The Chrysler exploit, by contrast, allows you to silently take control of the vehicle in ways that don't reveal your position until much later (if at all), due to the sound system not being firewalled from the brakes.

That seems fundamentally different from "hey, gangbangers might shoot at you while driving".

[1] http://www.quora.com/Is-cutting-someones-brake-line-prior-to...


Why is it fundamentally different? What is the fundamental difference?

I'm asking because I'm genuinely not sure. I agree it seems different. It does feel like Chrysler should be responsible for securing the system from remote exploitation. But are they? And why?


Because a lot of immature assholes on the internet who would be too cowardly to confront you physically wouldn't bat an eyelash at disabling your car remotely as a prank. See: swatting.


So you're okay with spy agencies (from all over the world) as well as drug cartels and other criminal organizations having the power to kill you in an almost untraceable way while you're on the highway?

Also, the US gov has been using these entertainment systems to spy on people for more than a decade...it's already been happening. Unfortunately, I can't find the link now, but it was a post from 2001 or 2003 on NYT and I think they were using Ford Sync to do it.


Don't think I said that I was okay with that, no.

But to be clear, drug cartels, spy agencies and criminal organizations have been able to do that for quite some time. They've just had to send a person to plant the bomb or the bug or the location tracker in person. And it's not generally regarded as the car manufacturer's problem to deal with that threat.

So yes, there's a question of scale, which makes a difference here. Traceability can maybe be handled at the network level - who knows what information Sprint captures about traffic to these car systems?

But the way most people are talking about this you'd think that as soon as the method for doing this hits the internet, script-kiddies are going to start randomly crashing Jeeps into bridge pylons.


>But to be clear, drug cartels, spy agencies and criminal organizations have been able to do that for quite some time. They've just had to send a person to plant the bomb or the bug or the location tracker in person. And it's not generally regarded as the car manufacturer's problem to deal with that threat.

But those sorts of methods offer orders of magnitude less plausible deniability.

When people hear on the news that some controversial political activist (in any country) died during an armed robbery, from a propane explosion, suicide or a car crash which one do you think they'll question the least?

You're a fool if you think intelligence agencies (around the world) haven't been weaponizing these sorts of vulnerabilities (and they're fools if they haven't been). The major hurdle I see is that the people they'd risk exposing this sort of capability on don't ride around in cars with the required features, or live somewhere where it's more sensible to get them some other way.


"live somewhere where it's more sensible to get them some other way"

Yes, the main remote exploit you're exposed to driving round Yemen in a Grand Cherokee is probably a Reaper-launched Maverick strike, rather than having your transmission remotely cut :)


> But the way most people are talking about this you'd think that as soon as the method for doing this hits the internet, script-kiddies are going to start randomly crashing Jeeps into bridge pylons.

You mean the same script kiddies who think it's hilarious to sic a SWAT team on someone's house? It's not like script kiddies everywhere would start doing this - but all it takes is 1 before you've got a problem, and I'm sure that if it was easy enough for any script kiddie to do, at least one of them would.

Say the car manufacturer made no attempt at security whatsoever - all you had to do to take control of the car's critical systems was know its IP address and guess its 8 character max admin password. Would that really not be on the manufacturer?


People today, in low-tech real-life, have been known to go and throw rocks off overpasses. People have died. People have also gone to prison.

It's not the car manufacturer's responsibility to protect their customers from that.

Make the same thing possible for someone to do from their basement, and sure: people will die; people will go to prison.

Look, I'm not actually trying to absolve Chrysler of responsibility here, I'm trying to get to the bottom of why when virtual meets physical, we act like the nature of the internet fundamentally changes things. I'm interested in what it is about this threat to car owners which is in a difference from existing threats.


It fundamentally changes things because it's so easy to do anonymously. If someone drops rocks off an overpass, it's pretty easy for police to track them down and arrest them. If someone attaches a bomb to the bottom of a car, sure it's harder to get caught than dropping rocks off an overpass, but you still need physical access to the car, and it's still relatively traceable. But if remotely hacking a car, it would be pretty easy to stay anonymous. Plus, in both those other cases it's obviously foul play, whereas if a hacked car runs into a wall it's probably not going to be so obvious.

Plus, the anonymous nature of the internet makes it much easier to become detached from the real-life consequences of your actions. Just look at all the examples of online harassment from people who would never say things like that in real life. Look at people who go and grief kids' minecraft servers, yet wouldn't go and kick over their sand castles in real life. Look at morons who swat people.

Actually, come to think of it, maybe it's not so different - if it was found that a big car manufacturer had a problem with their door locks and you could open it just by sticking a toothpick in, you can bet they would take the blame once they started getting stolen.

I'm not saying the responsibility is solely on the manufacturer, but they definitely bear a major part of it. When you buy a car, you expect a reasonable amount of security. I guess the question is where we draw the line as to what counts as reasonable.


> I guess the question is where we draw the line as to what counts as reasonable.

Yes, exactly. And I think a lot of people, including me, would say that anything that can be done entirely in software is reasonable.

Hmm. Does this mean that anyone doing safety-critical embedded software should be compelled to formally verify every line of their code? I'll have to think about that. That might be going a bit too far given the present state of verification technology. On the other hand, it would be a great thing.


>Is it that as far as physical security of your Jeep goes you only have to trust the people in your neighborhood, but for internet security we have to trust the whole world?

Yes?

If you connect a system to the internet, you have to consider attempts to attack that system (automated and not) to be part of typical operating conditions.


But why does the internet need to be considered a completely hostile environment? Because the internet differs from the real world in that... Actions are untraceable? Crime has no consequences? It crosses borders?

These are all true to a greater or lesser extent (often to a lesser extent than people think). But it makes for a pretty weird threat model, trying to protect your customers from high-tech murderers, anarchists, and three-letter government agencies. This isn't like trying to stop someone stealing a credit card number.


Going from the incidences of pure-computer attacks, the main issues are:

1. Ability of attackers to probe many systems for vulnerabilities safely. Someone walking down the street pulling handles to check for unlocked cars can only get at so many cars.

2. Physical distance of the attacker from the victim and their property. Specifically, they can be in a different legal jurisdiction, making it very hard to prosecute them, and therefore reducing the deterrent effect of law enforcement.

3. Abstract nature of the act from the criminal's perspective. The decision to commit a crime, and the processes that deter it, are not entirely rational, and have to do with things like social anxiety, perceived safety of the environment, etc. Just like trolls say things online that they would never say in person, some online attackers do things they would never have the nerve to do in person, even with the same level of actual risk.

As to motives, these are fairly well-studied, and some are very applicable to this class of vulnerabilities.

1. Direct acquisition of valuable goods/information. Doable with this vuln, but not for someone sitting in Russia. Strike it off the list.

2. Extortion. Most DDoS attacks are aimed at this. You can't get anything directly by causing someone harm, but you can (and many people do) perform a "demonstration" attack to show capability, then call you up and make demands. Very doable with these attacks.

3. Ideological motives. This tends to lead people to want to hurt others in particularly visible ways, so I can see the psychological appeal of using this kind of vulnerability for a terrorist attack. A bit out there in terms of probability, but possible.

4. Nation-state action. Not many consumers worry about this too much, but I think the appeal of this vulnerability to an intelligence agency is pretty clear.


Thanks, good response - this is what I'm trying to get at. Initial reaction to this threat is very much "OMG people are gonna die". And I'm going to be honest, I think people will, at some point, be killed because someone targets a vehicle computer system remotely. But it's not a law of nature that vulnerable systems will be compromised to cause the maximum casualties possible; it will take someone deliberately setting out to do it. This is a means; someone else has to supply a motive.


> Actions are untraceable? Crime has no consequences?

In a sense, yes. The risk to the attacker is reduced so significantly, and the consequences are so remote, that people on the internet will do something horrible just for fun. Basically, distance, anonymity, and lack of consequences seems to turn a lot of people into sociopaths.

People develop in a society, face to face, where your actions have consequences to you and to others around you, and ultimately to your relationships with people you interact with directly. I think the internet provides some evidence that if we didn't have that, a lot more people would act horribly to one another.

Granted, that's not Chrysler's fault. But providing a "crash my car over the internet" button is handing those people a very powerful tool, and that seems like negligence to me.

Let's put it this way: would you drive a car you know someone could hijack and crash over the internet at any time? Wouldn't you like a reasonable assurance that your car has been designed to prevent that?


It's interesting that that sounds more like a flaw in the Internet than a flaw in the Jeep :)


Arguably, it's a flaw in human nature.


It's an avoidable risk - there's no reason it has to be possible to remotely affect a vehicle like this from over the internet. It's also a risk that can grow exponentially once discovered in ways physical threats can't. How long after shellshock was announced did everyone start seeing exploits in their server logs? Not very long.

So maybe instead of using this to kill people, someone decides to cause small accidents for the insurance money. Or there's a way to use it to listen to people through the voice recognition software and people spy on their exes or employees with it. Or just load a trojan onto people's smartphones when they dock it into the onboard charger that gives them root access to the thing they use to check their bank statements, or who knows what? Don't think of it as just a car, rather think of it as an exploitable network with the added benefit of potential collision damage.


THERE's a proper threat model - I like it! Can I take control of the Jeep behind me in traffic, cut off its brakes and cause it to collide with me for the insurance money? Okay - now we're talking. A realistic threat we can work to counter.


> Because the internet differs from the real world in that... Actions are untraceable? Crime has no consequences?

Precisely. And you said one of the reasons it must remain that way:

> It crosses borders

The alternative is for any connection to the network to require a real-world identity, and to bear liability for information they transit if they can't identify who it came from. This is politicians' wet dream (more control/power), but it is utterly impractical as it simply can't scale, cross jurisdictional boundaries, or actually stop bad actors (who just steal someone else's credentials). Never mind the inevitable effects on free speech and cementing the idea that individuals cannot opt out being tracked and recorded.

It's a long-held design principle to assume that the Internet is full of malicious intelligences, and that your software should act accordingly. Even if everybody in the world were completely benevolent, this would still be a prudent assumption for robustness against weird coincidences between context domains. Putting one's fingers in their ears and then crying to influential friends about "hackers" doesn't absolve one of responsibility for adhering to this principle.


I think one thing to consider is the attacker's leverage. In the physical world, an attacker might attack one victim at a time. However in the digital world, the attacker could attack multiple vehicles in a similar amount of time.


Right. To what end? Who's the threat here? Someone looking to cause a mass casualty event? So terrorists and psychopaths, then. So what liability does a car manufacturer have in such an event?


Attacker: Nice car company you have here. Btw, I've got control over 300k of your vehicles. Want to pay up?


> So terrorists and psychopaths, then. So what liability does a car manufacturer have in such an event?

I'm not sure, but a large number of vehicles turning into bricks during rush hour would probably be a big enough problem for one of the many catch-all "things that undermine national security" (criminal) laws to be relevant.


Interesting questions. I think part of what's going on is that people don't like being exposed to new, unknown threats. Even if this turned out not to be a very big deal in practice -- and I am not sure it won't -- it's still a new risk that's being assumed.

> Sabotaging the vehicle to kill the driver is way down the list.

But if it can be done remotely and untraceably...?


Not too long ago, many capable hackers lost interest in causing damage and instead focused on stealing people's identities or credit card information for financial gain.

If Greedy Greg knew public CEO Huge McChecks was driving an exposed vehicle, Greedy Greg could short sell Huge McChecks' company and cause a multi-million dollar crash with Huge McChecks inside... all with a couple of strokes on a keyboard from thousands of miles away.


>applying computer-security mindset here to vehicle-safety

That might be a valid complaint if this attack required physical access, but it's a remote exploit. It is computer security, except the target is many times more interesting because it can kill people.


One has to love the bright minds who thought connecting any relevant part of a car's control mechanisms to the internet was a good idea.


It's not just internet. There's also a vector that uses Digital Audio Broadcast (radio).

source: http://www.bbc.com/news/technology-33622298


Doesn't Tesla send over-the-air updates for critical systems to its cars? From my understanding, that means there is a way for a similar attack with fake firmware. Am I wrong?


Yes, but I don't think skimmas is fully right. Updating firmware over the internet is not inherently a terrible idea. Public-key signature verification is well-established technology and simple enough for even big corporations to get right.

Of course there's some room for them to screw up, but I would argue that that risk is offset by the risk of buggy vehicle-control firmware killing people, especially with a new vehicle like the Tesla.
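To make the shape of that concrete, here's a minimal, toy verify-before-flash sketch in Python. The HMAC is a stand-in for a real asymmetric signature (actual OTA systems use RSA/Ed25519-style signatures so the vehicle only holds a public key), and the key name and firmware bytes are made up:

```python
import hashlib
import hmac

# Toy sketch only. Real systems use an asymmetric signature (RSA,
# Ed25519, ...) so the vehicle stores only a public key; HMAC here is a
# stdlib-only stand-in to show the verify-before-flash flow.
SIGNING_KEY = b"manufacturer-signing-key"  # hypothetical

def sign_image(image: bytes) -> bytes:
    # The signature covers the ENTIRE image, not just a header.
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def verify_and_flash(image: bytes, signature: bytes) -> bool:
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # reject: do not flash
    # flash_firmware(image) would happen here
    return True

firmware = b"\x7fELF...new ECU firmware..."
sig = sign_image(firmware)
assert verify_and_flash(firmware, sig)
# Any tampering with the image invalidates the signature:
assert not verify_and_flash(firmware + b"attacker payload", sig)
```

The important property is that the vehicle never trusts an image it can't verify, regardless of the channel it arrived over.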


There's also such a thing as key stealing. I imagine a firmware signing key would be quite a valuable target.


When you're signing a binary blob, protecting the private key is actually pretty easy, since it can be air-gapped/offline. Or heck, you can buy appliances that will perform specific functions using the private key but won't expose it without physical intervention.


If I were a mega-corporation protecting a firmware private key, your name would have to be Tom Cruise to get it. Though unfortunately responsible corporations seem to be as rare as real-life Tom Cruise characters, so I guess it's a valid concern you have.


The ability to send OTA updates is a much better option than having to physically touch every single vehicle with a recall...

I'd prefer it if my car didn't have any connection between a public network and its control systems, but if it does, I want it to be able to automatically install patches ;)


Pretty sure Tesla signs their OTA updates.


Not that that's perfectly secure


Yeah, it's only as secure as Tesla's ability to secure its private key.


And their ability to verify the signature on the car. Even if the private key is totally secure, you could either find a weakness in the signature verification, or find an exploit that lets you bypass it entirely.


That's an extremely rare vector for exploitation. I can think of one specific case where a vendor completely forgot to check the result of a signature check, but aside from that signature checking is well understood and rarely goes wrong.

But to be honest, people are taking their speculation and doomsday-ism to stupid extremes. Soon we'll be talking about "it's only as secure as the CPU; what if you find a CPU bug and bypass all security?!"


> signature checking is well understood and rarely goes wrong

I would argue otherwise, plenty of signature schemes give you enough rope to hang yourself. The Playstation 3 example springs to mind!


Signature verification for android applications has been a tough one: http://www.saurik.com/id/19


There's plenty room for bugs in mechanism like that. For example Motorola Droid (back in 2010) was locked and only accepted signed updates. There was a bug where you could bypass it by using authentic update and appending your payload at the end of that file.


Interesting, can you point me to a news article or link? Shouldn't that have been rejected as an improperly signed file, since (update + payload) should have a different signature than (update)? I want to know whether I'm misinterpreting something, misunderstanding something, or the signatures were misimplemented.


The original forum where it was posted doesn't exist anymore and there doesn't seem to be an archived copy.

Here's another page which is describing the steps: http://www.areacellphone.com/2009/12/motorola-droid-rooted-h...

Here is the commit with a bug fix: http://review.source.android.com/12807

and actual diff: https://android-review.googlesource.com/#/c/12807/1/verifier...


Only the header was signed – that was the issue. This was done to save computational power on the device.
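Assuming that description is accurate (the signature covering only a fixed-size header), the flaw can be sketched in a few lines of Python, with a bare hash standing in for the real signature and a made-up header length:

```python
import hashlib

HEADER_LEN = 64  # hypothetical size of the signed header

def digest(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Flawed scheme, roughly as described above: only the first
# HEADER_LEN bytes are covered by the "signature".
def verify_header_only(update: bytes, sig: bytes) -> bool:
    return digest(update[:HEADER_LEN]) == sig

# Correct scheme: the whole file is covered.
def verify_whole(update: bytes, sig: bytes) -> bool:
    return digest(update) == sig

update = b"H" * HEADER_LEN + b"legitimate update body"
header_sig = digest(update[:HEADER_LEN])
whole_sig = digest(update)

# Appending an attacker payload leaves the header-only check passing...
assert verify_header_only(update + b"evil payload", header_sig)
# ...while a whole-file check catches the tampering.
assert not verify_whole(update + b"evil payload", whole_sig)
```

Same lesson as the Android master-key bugs: it's not that the crypto primitive fails, it's that the verified bytes and the executed bytes aren't the same bytes.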


Indeed, and I remember the vast majority of the people getting super excited at the possibility of "getting an OTA update that can improve your acceleration by 0.1s" - without realizing what exactly that means in terms of security. In particular, that others could also control your engine and car the same way through updates.

The car manufacturers who do OTA updates for their cars are sitting on time-bombs. The clock is ticking for them until people get killed this way (regardless of them using HTTPS or signed updates - which some manufacturers don't even use now).


Yet it's an order of magnitude easier to just buy the parts for a real 'time bomb' than to crack an OTA update. Security is relative, after all, and evil geniuses have much better ways to kill you.


Like opening (not unlocking, opening) the doors while you drive?

http://www.theregister.co.uk/2014/07/21/chinese_uni_students...


With self-driving cars, this is definitely going to be the norm, and this really draws attention to how dangerous they could be, although human-driven cars are also pretty dangerous.


Seriously. That's the real criminal action here.


The issue is the stupid media center. People want that to have internet access. Then at the same time they want it to display all of the climate details and other system info that comes off the CAN bus.


> People want

Yeah, no. Auto manufacturers want to reduce the display count to make cars cheaper to make and the interiors simpler to build. I've never spoken to or heard from anyone who 'wants' or even kinda likes having their climate controls on the same screen as their maps, Pandora, etc. It's confusing, usually cluttered, and complicates things unnecessarily.


Have you ever been in a Tesla?


Then it should only read the CAN, or have a limited write capability

Customers want things, it's not their responsibility, it's the manufacturer's responsibility to not ship something unsafe


Many recent cars have at least two CAN buses: one for critical stuff like the engine and transmission, and another for less critical things like environmental controls. There are also LIN, FlexRay, MOST, and maybe another one or two that are less popular.

One problem is that nothing in the CAN protocol prevents a node from impersonating any other node. Another is the mutability of a node's firmware.
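A toy model of that impersonation problem: CAN frames carry an arbitration ID (which doubles as the priority) but no authenticated sender, so receivers filter purely on ID and can't tell a spoofed frame from a real one. The IDs below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class CanFrame:
    arbitration_id: int  # 11-bit standard ID; lower = higher priority
    data: bytes

def arbitrate(frames):
    # When several nodes transmit at once, the lowest ID (most dominant
    # bits) wins the bus; the others back off and retry.
    return min(frames, key=lambda f: f.arbitration_id)

def accepts(frame, id_filter=0x220):
    # Receivers match only on the ID; the frame has no "from" field.
    return frame.arbitration_id == id_filter

brake_real = CanFrame(0x220, b"\x01")     # hypothetical brake-module ID
brake_spoofed = CanFrame(0x220, b"\x00")  # same ID, sent by the head unit

# To every receiver on the bus, both frames are indistinguishable in origin:
assert accepts(brake_real) and accepts(brake_spoofed)
```

This is why "once you're on the bus, you can be anyone" keeps coming up in these threads: authenticity was simply never part of the protocol's threat model.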


But surely they can be isolated systems


They can be but they're not. Once you share a wire all bets are off.


By that logic there's no security on the internet at all. Everything "shares a wire" after all.

Security is about degrees and nuances. These kind of black & white statements are unhelpful and unrealistic.


> By that logic there's no security on the internet at all.

Have you /been/ on the internet, lately? ;)


Two systems that accept input from the same touchscreen can't be isolated in hardware. From the software side, any portion of each system's interface that overlaps at the touchscreen cannot be isolated. On top of that, if the touchscreen is reprogrammable, then it can be attacked directly and touchscreens are commodity devices not normally considered part of the security infrastructure.


I don't think anybody is upset that their climate control system is exposed to hackers. There are no accelerator/steering/braking controls on the touchscreen (I assume), so those could be isolated, but it would require two separate systems for critical and non-critical operations. The only reason not to do that is that it's more complicated, which translates to more expensive.


The climate control isn't mechanically independent of the engine and other mechanical systems. Even in primitive cars from before digital controls, the heater is dependent on the engine's thermostat (well, except for old Bugs and 911's, but then that heat never really worked). In modern cars, the AC may be connected to the engine control system so that power can be redeployed for fuel efficiency, for power in emergency situations, or to keep emissions within specification.

"Climate control system" is an abstraction over belt driven moving parts.


Exactly


It's all on the CAN bus anyway. That's how modern cars are built. And hey, if you don't like that, you can always fly on a plane... those surely aren't internet-connected, right? ... Right?

http://www.gpo.gov/fdsys/pkg/FR-2014-06-06/pdf/2014-13245.pd...

http://www.gao.gov/assets/670/669627.pdf#page=23


The Internet of Things has lots of bright minds working on it...


Just an FYI for everyone on the "segregated systems" bandwagon:

If a compromised device can talk on the CAN bus, it's game over, since (pretty much) everything listens on that bus. You can't pick and choose systems to segregate while maintaining wireless connectivity to those critical systems, at least not without a lot of time and effort.

Vehicle manufacturers get a huge data set sent back to them by vehicles. They use this for things like correlating part failures to operational conditions, determining which intermittent wiper setting people use, and improving the logic of critical systems (e.g. if my last inputs were $stuff, then don't upshift). I wouldn't be surprised if they sold the data as well. McDonald's would love to know where and when people start looking for food. Insurance companies would love more variables to correlate with risk, however trivial (e.g. $color cars with $trivial_feature get in accidents that cost $really_small_percent $more_or_less than $other_color cars).

To segregate systems you need to be able to pitch to the bean-counters that the cost/benefit of whatever degree of segregation you're proposing beats the cost/benefit of whatever plan the next guy is proposing. These data sets are incredibly valuable to many different parts of the company. The people doing marketing and customer-facing stuff would be at a severe competitive disadvantage if they had to wait months (until the first oil change) to get real-world data on feature usage after a redesign.

Sure you could download it at service time..."but we already have a system that does it in near real time, can't we just secure that?"...

TL;DR: Segregating systems involves more than having the engineers wait a few months to figure out if their new tune solved the problem.
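For what it's worth, the "tightly controlled bridge" the segregation camp keeps asking for can be sketched as a gateway between buses that forwards only a whitelist of frame IDs, per direction. All IDs here are hypothetical:

```python
# Sketch of a CAN gateway between the infotainment bus and the
# powertrain bus: forward only whitelisted IDs, per direction, and
# drop everything else. Every ID below is made up for illustration.
INFOTAINMENT_TO_POWERTRAIN = {
    0x3A0,  # door-lock preference setting
    0x3A1,  # climate setpoint
}
POWERTRAIN_TO_INFOTAINMENT = {
    0x100,  # engine RPM (display only)
    0x101,  # coolant temperature (display only)
}

def forward(frame_id: int, direction: str) -> bool:
    """Return True if the gateway should relay this frame."""
    allowed = (INFOTAINMENT_TO_POWERTRAIN if direction == "to_powertrain"
               else POWERTRAIN_TO_INFOTAINMENT)
    return frame_id in allowed

# An RPM readout crosses toward the display...
assert forward(0x100, "to_infotainment")
# ...but a spoofed brake frame (0x220) never reaches the powertrain bus.
assert not forward(0x220, "to_powertrain")
```

A real gateway would also rate-limit and sanity-check payloads, but even this crude ID filter blocks the "head unit sends arbitrary powertrain frames" class of attack, which is exactly what the wireless data-collection use case makes hard to sell internally.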


Which costs less in the long run, a potential class action lawsuit and loss of consumer confidence, or better system security?

If necessary, consider that security is a feature you can sell, when your competitors are following the path of least resistance and paying out their settlements.


Depends on the lawsuit. How much effort would you go through to secure software that runs on a decade old vehicle with a very particular set of options if the exploit requires a very particular set of conditions?

You might fix it just to have a similarly obscure zero-day be discovered (unknown to you) and exploited in a different place. Then not only were all those resources spent in vain, but you've got to deal with the opportunity cost of not having thrown those resources behind current or future safety and security tech.

People accept the risk of driving vehicles with legacy safety equipment, so why should software be any different from hardware, or from legacy software in non-embedded applications? At some point you have to let stuff go. Just ask Microsoft.


Depends on how much effort it would have taken to get it right the first time, which is what I'm suggesting be done.

And in this case, these vehicles were manufactured pretty recently, so even conceding your point, I don't think we've passed the 'let stuff go' point in this case.


Vehicle manufactures get a huge data set sent back to them by vehicles.

That is creepy. Is there a way to disable this phoning home (or know it even exists) if I ever buy a new vehicle? That's an unlikely situation for me, but maybe there are others who would like the new features but not the privacy aspects of it.

Even so, the manufacturers are only receiving data, so a one-way link from critical systems to others would be fine. That's how airplane avionics have been designed.


From the press release: http://www.media.chrysler.com/newsrelease.do?id=16849&mid=1

"No defect has been found. FCA US is conducting this campaign out of an abundance of caution."

What the hell?


Defect appears to have a particular meaning in relation to vehicle safety. I'm just basing that on the way it is used here:

http://www-odi.nhtsa.dot.gov/recalls/recallprocess.cfm

I guess it isn't that big a stretch to use such defensive language, there are a lot of things that can be tampered with on a vehicle that arise out of engineering trade offs.

I don't mean to dismiss the problem at hand, but it is only an issue if someone makes an effort to tamper with a vehicle, which is different than a critical part failing prematurely or whatever.


Unfortunately your argument makes sense.


That's probably the wording their lawyers asked them to use so they would not be liable for damages. If they admit fault, they're legally to blame for any accidents or damage.


Legally, a defect in a product means you are strictly liable for all damage it causes.

But also, the hackers haven't demoed the exploit on a car that hadn't been tampered with. I still wonder if you need physical access to do this.


If my understanding is correct, Uconnect/Bluetooth security is the weakest link, which isn't much comfort.


Preemptively admitting fault probably creates issues if there is future litigation.


The good that comes out if this is that somewhere in the management chain people will feel justified to increase security investment by saying, "remember the Fiat Chrysler recall?"


The recall aligns with an ongoing software distribution that insulates connected vehicles from remote manipulation, which, if unauthorized, constitutes criminal action.

The WIRED story's hackers presumably were authorized by the vehicle's owner or operator, so the demo did not "constitute criminal action."


> The WIRED story's hackers presumably were authorized by the vehicle's owner or operator, so the demo did not "constitute criminal action."

They were probably authorized by the driver, but car companies have argued that the driver does not "own" the software in the car, so I assume their claim of unauthorized access is predicated on the assumption that the car company still owns the software and that the car company did not authorize the access.

I do not agree with the car company's stance on ownership, but that may be the origin of their claim.


WIRED has another story about this question,

http://www.wired.com/2015/04/dmca-ownership-john-deere/

Supposedly the US Copyright Office will decide this month.


The car company isn't claiming that the Wired hack was unauthorized.

That paragraph is there to make it clear that the tampering is a significant action to take.


Did the WIRED story demonstration take place on a public road with other cars around?

[edit: viewing the video - yes they did - they should be charged with endangerment - someone could have been hurt rear ending the jeep]


If there are protection measures in place it could be a violation of DMCA, whether or not they had the owner's permission.


I'm pretty sure it was /their/ Jeep.




If researchers really want to underscore a point: hang out outside the IIHS testing facility, and when they're testing the vehicle in question, then mess with the systems.

Maybe IIHS needs to include "remote hackability" as a criterion in their testing?


> Maybe IIHS needs to include "remote hackability" as a criterion in their testing?

I think so, but just include hacking in general. Remote hacking is the worst, but if someone can get physical access to some part of your car for a brief period of time (maybe the doors are unlocked and they plug something in, or maybe they get under the hood and mess with the car's computer), you still have a major problem on your hands.

Granted, it's far harder to secure a device when someone has physical access to it, but they need to test for this and harden against it the best they can. In my opinion, anyway.


IIHS uses cars that aren't running and don't have fluids in them. They don't want to clean up that mess (or have the car destroyed by fire). They do test fuel-system integrity, though. I'm not sure if they use some other fluid (water + dye), something like pressurized air, or a leakdown test after the fact.

I guess you could engage in other antics but none of them would be all that effective since the car is towed by a cable until it's a few feet from whatever it's hitting.


Actually, you are right. I was thinking of NHTSA, which does braking tests on an actual track.


Wow connecting cars to the internet? Sounds insanely dangerous. Not everything should be connected to the internet or "smart".


Auto companies' lax attitudes towards systems security will change when insurance companies start considering such security vulnerabilities as safety issues and adjust their existing safety ratings appropriately.


Insurance companies already do. Insurance rates are actuarially based on actual accident frequency. Outlier risks are covered by reinsurance. Not only are insurance companies well funded, they employ smart people and work in an information network with a lot of depth.


Yeah, right. The only way insurance companies are competitive and pricing risk correctly is if there's a magic responsibility fairy that bestows wisdom and driving ability upon all at midnight on their 25th birthday.


If the band is 20-25, they charge everyone based on age 20, otherwise they would be subsidizing high risk drivers instead of generating more profit on 21-24 year olds. Actuaries are applied mathematicians, and typically of above average intelligence compared to the general populace.


Why would the band have to be 20-25 for every insurance company in the country? Why not be innovative and create an accurately priced 24-25 band and undercut your competition?

I'll tell you why: because insurance companies are a rent-seeking oligopoly with an enormous barrier to entry. They just sit there and make a fortune doing jack shit.


Based upon my own experience (in another industry), I have no doubt that there were knowledgeable people internally who warned them of this -- if they were not fully cowed by the bureaucracy.

I have zero sympathy for the manufacturers. I only hope that, if they decide to go on a witch hunt, they actually seek out and punish the morons in power who, most likely for self-serving purposes, let this slide.

This should also raise a ringing cry to rein in DMCA et al. uses that seek to outlaw such research. In this case, the manufacturer has forfeited its authority in the matter.


> The company added that hacking its vehicles was a "criminal action".

screw that attitude.

I hope government will make the equivalent of whistleblower protection for security researchers that report exploitable flaws, because it's the only way to increase security over time.

For instance, I'm scared as hell that planes are allegedly hackable, but researchers aren't really talking about it or testing it properly for fear of lawsuits.


In other news Apple just hired Doug Betts, former FCA 'quality' boss

http://blog.caranddriver.com/fiat-chrysler-quality-chief-res...

http://www.autonews.com/article/20141028/OEM02/141029851/bet...

http://www.techtimes.com/articles/70582/20150721/apple-hires...

because nothing screams quality like 'decided to leave one day after yet another drop in Consumer Reports rankings' and a 1.4M-car recall!


I've been spending a bit of time over the last couple of months reading up on CAN, OBD2, system architectures for automotive systems, attack vectors, various forms of CAN-attacks, building stuff that interfaces with CAN buses, writing software, figuring out how things work etc. And I have to say that many of the comments in this thread are frighteningly uninformed.

I know this is supposed to be The Magic Kingdom where people are only supposed to say positive things and eat happy pills all day, but would it kill people to at least try to read up about the things they so willingly share their "insights" on before posting here?

At the very least, try to understand how CAN works before spouting nonsense grounded in uninformed assumption. Uninformed opinions are not helpful. They just pollute the discussion.


At RSA I attended a car-hacking session, and the big takeaway I got is that some of these systems have non-upgradable firmware, and designs sent for manufacturing now aren't due until the 2017-2018 model years. So some of these vulns could be baked in, with expensive workarounds, because the car manufacturers have been feature-driven rather than security-conscious. It's the car equivalent of bloatware/crapware on phones: features that help sell to the customer. The cars that have OTA firmware updates (BMW was one example) are able to push out fixes faster, and with more complete coverage, than recalls, so it seems sane to me to mandate that such "smart cars" can be OTA-updated.


Do we really need our vehicles to have so much technology?

I know I'll get down voted, but it has to be asked.


Short answer, no.

But consumers expect an infotainment user experience at least as good as hanging an iPad in the car, and if your car doesn't provide that, they will rate it poorly in surveys.

Source / Disclaimer - I work for GM.


So despite the huge uproar at https://news.ycombinator.com/item?id=9921557, it turns out the end did justify the means.


Only if it can be shown that performing the test on a dangerous section of a public highway was instrumental in getting the recall to happen. I believe they could have performed the test on a closed auto track and it would have been just as effective.


That only holds if there were no better means that could have achieved the same ends.


It's possible, but can you cite another example of a news article causing a recall to be issued a couple of days later? It seems unlikely - it usually takes months, if anything happens at all.


Do you really think this article would be less successful if the driver was on a test track?


Yes I think that. This was on local news early in the morning, just to cite one source from which I rarely get hacking stories. Normally they just talk about weather and how the elderly are getting scammed. This test occurring on a public road created an effective visual, that pushed the story into many more media channels.


I think 'hackers turned off the brakes on a car with a reporter in it at 70mph' would have made headlines even if it took place on a test track. It's a juicy story with plenty of 'it could happen to you' scare factor built in.


Automakers will have to make sure that they can update their software remotely, or this is going to become really expensive very soon.


What's to say this wasn't precisely that? Maybe at the talk we will learn that Uconnect was accidentally listening for firmware-update messages from the Sprint network.

Often the board that handles the radio acts as a bridge in modern cars. It tends to be the beefiest computer hardware in the vehicle, so it sits on the high-speed CAN bus and is also a node on the more star-shaped low-speed bus.

When they updated the firmware, they removed the bit of code that prevented dangerous message IDs from leaving the controllers on the radio board, and possibly added code that could put arbitrary CAN messages on the buses, relayed from the Sprint network.

I am very eagerly awaiting the talk.


They probably already can, if they use the right exploit.


They claim that only US cars are affected. Is this because liability costs are much higher in the US or is there really a different software used in the rest of the world?


It could also be related to the cell network used.


Good point. I guess there could also be some other service involved that's US only, something like OnStar.


Makes me wonder if that doesn't have something to do with IPv4 address exhaustion and perhaps cars in Europe are behind CGNAT.


>"I wonder what is cheaper, designing secure cars or doing recalls?"

Ever the enduring question


If you have to update software in person, you shouldn't be in the software business.


Why does Jeep in particular have so many issues with quality assurance? The most memorable is the SUV rolling over in a simple test a couple of years ago, which has since been fixed, but it's quite worrying.

https://www.youtube.com/watch?v=zaYFLb8WMGM


First off, that test is somewhat of a joke for SUVs, since it basically rewards car-ish handling, which is at odds with the truck-ish performance demanded of larger SUVs.

In terms of comparing two different vehicles...

If the $year base model of $carA has a suspension closer in performance to the sport model of $year $carB, but for $year+1 they put a much softer suspension in the base model of $carA in response to customer feedback, then the results go out the window. Alternatively, if the OE tires are particularly expensive, you can bet that most owners will replace them with cheaper ones. Additionally, particular trims are often incredibly rare, or almost always have a particular option added, so testing a $trim without $package will mean little when the vast majority of $trim that make it to dealer lots will have $package. In either case the results go out the window because there's a handful of large uncontrolled variables. It's little better than the comparisons you see in car commercials.

If you actually want to grill Jeep over something I suggest you read up on 90s XJ gas tank fires.


This is not particular to Jeep, the Jeep Cherokee is simply the particular car they hacked. This basically affects all Fiat Chrysler vehicles.


A lot of SUVs were prone to rolling over before they got the traction control system right.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: