Well geez, it's a good thing that there's no class of bugs in which a certain amount of data, maybe more than the receiver was expecting, or terminated in an odd way, overwhelms the receiver in such a way as to cause the data to then be interpreted as commands which are run in place of the receiver's code...
English: "This random guy on the internet discovered real vulnerabilities and we're scrambling like hell to fix them. We hope this carefully worded statement written by lawyers will keep the public and the FAA off our back until we can fix the problems."
"Experts are still not sure of the root cause of the malfunction in the data unit, but subsequent software changes by Airbus mean any similar error in the future won’t lead to another terrifying nosedive."
Sounds like I guessed right.
1) There was erroneous data
2) The erroneous data caused the plane to suddenly nosedive
2 was fixed: it's known why the plane dived, and they prevented that from happening again. They still don't know what caused 1.
However, the rust/LLVM compiler pipeline is nowhere near mature enough for use in high-criticality environments.
I love Rust, but I wouldn't want it to fly a plane I'm on. For instance, here's a bug in LLVM that Rust developers happened to discover - before they disabled their use of the buggy features, Rust/LLVM was producing _numerically incorrect_ code. https://github.com/rust-lang/rust/issues/54878#issuecomment-...
Age doesn't automatically protect you from bugs.
I don't know about Rust, but LLVM isn't mature? Nearly every program running in the Apple ecosystem was compiled using LLVM. Swift is compiled using LLVM. Since Xcode 4.2, Clang has been the default compiler, so iOS and macOS apps are built with LLVM. I'd also wager that Apple uses Clang to compile key system code (Darwin, macOS, iOS, etc.) as well.
It is becoming more mature... But the argument about how to handle an empty infinite loop has been going on since 2015.
Because seriously these people need to be stopped.
Anyway, just test with ubsan and you'll be good.
* Some is inherited from LLVM, some is bugs, some is dumb corner cases like using kernel syscalls to overwrite the program's own memory with random bits.
(Unlike code running for avionics for example)
There was actually an incident posted on HN a while ago where IIRC an ipad did a software update or similar at a critical moment, and caused complications bad enough to get an incident report published on some aviation agency's website.
In a critical situation this is impossible, though.
I have heard about just flying with no G-load for a while and getting some basic instruments working. But I don't think you will get full nav functionality back.
Behold, LLVM and GCC both miscompile the same numerical code involving mildly tricky aliasing constraints.
I think I'd prefer that safety-critical code be written in languages that don't allow pointer arithmetic except in scenarios that can be proved via static analysis not to introduce multiple memory aliases. Expecting program writers and compiler authors to get these sorts of things right in C/C++ is just unreasonable.
I'd argue that you would probably never find a bug such as that in a certified compiler, not because of the certification, but because far, far fewer eyes look at its output. I have worked in safety critical software and used a certified compiler. It had enough bugs that I personally found four in my first year.
Agreed. They need to discover on their own that critical software requires coq.
</kidding> ... </i-think>
The thing that triggered me the most was that they got the engineers who wrote the code to test it, and report back that their own code was fine. From the sound of it they didn’t even test the vulnerability, they just did an external test, without specifically testing the segmentation controls or the components in question.
1. Relying on segmentation to protect vulnerable components is not a reasonable standard operating procedure. Segmentation is supposed to be an additional layer of protection, you’re also supposed to secure individual components.
2. It seems as though the researcher believed there may have been a way to bypass some of the segmentation (the segmentation between the medium-sensitive network and the highly-sensitive network). The article kinda implies that they didn’t test that layer of segmentation, that they only tested the most external layer (the segmentation between the non-sensitive network and the medium-sensitive network).
The whole response comes across as dismissive spin, that they hope will be consumed by people who don’t have a particularly sophisticated understanding of network/application security.
Call me when you bypass them, I have a job for you.
A data diode can be put either way, with different results:
* case 1, you allow traffic to only go out:
This way, nothing can come inside the system, but the system can export data.
Here basically, confidentiality is of secondary importance, but integrity is crucial.
It is the Biba model.
Command and control systems for critical industrial installations are an example. A power plant's C&C system must avoid being hacked, but exporting data such as power output and operational condition to other systems is generally required.
* case 2, you allow traffic to only go in:
This way, the system can ingest data from the outside, but nothing goes out.
Here basically, confidentiality is paramount, integrity a bit less so.
It's the Bell-LaPadula model.
It can be seen in Military intelligence systems for example. Here you collect pieces of information and you make decisions on them, and all that must be kept confidential.
* One way: you enforce integrity
* The other: you enforce confidentiality
As an ending note, data diodes are generally pretty simple: basically you take a fiber with TX and RX link, and you cut one. There are a few more tricks (UDP only, sending multiple times because you don't have ACKs, static ARP tables, tricking the NIC into thinking it's up without signal), but that's the core of it.
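The "no ACKs, so send multiple times" trick above can be sketched as a tiny sender. This is a minimal illustration, not any real diode product's protocol: the sequence-number framing, the repeat count, and the address are all my assumptions.

```python
import socket
import struct

REDUNDANCY = 5  # no ACKs across a diode, so blindly repeat each datagram


def diode_send(payload: bytes, seq: int, addr=("192.0.2.10", 5000), sock=None) -> None:
    """Send one message across a one-way UDP link.

    Lost packets can never be retransmitted on request, so each
    datagram is sent several times; the receiver deduplicates on
    the 4-byte sequence-number prefix.
    """
    own_sock = sock is None
    if own_sock:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    datagram = struct.pack("!I", seq) + payload
    for _ in range(REDUNDANCY):
        sock.sendto(datagram, addr)
    if own_sock:
        sock.close()
```

With the RX fiber cut, this is about all the sending side can do: repeat itself and hope enough copies arrive.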
It described in detail how the planes are updated with new firmware for the avionics, entertainment system, and the engines. I was shocked to learn that the 787 uses a lot of COTS kit internally, such as standard WiFi and Ethernet connections. There's an RJ-45 jack at the front landing gear accessible from the outside of the plane at any time!
It was by far the best technical document I have ever read, of any type, by at least a factor of ten. It was so good I read it like a novel. Twice. The security design was amazing. The PKI was amazing. The patch management was amazing. The network design was amazing. The documentation was amazing. My estimate was that the document alone would have cost multiple millions of dollars to write, not including any of the engineering work that went into the solution itself.
Boeing's engineers thought of everything. EVERYTHING. This scenario was catered for:
- The plane is rented, not owned.
- The IT department is outsourced.
- Aircraft maintenance is outsourced.
- The plane is currently on the ground in a country that is hostile.
- A critical update has been released, without which the plane is unsafe to fly.
The security is just nuts. Everything uses explicit, hardcoded whitelists. TLS is bidirectional (clients are verified by the servers too). Patches must be quadruple signed by Boeing, the parts manufacturer, the FAA, and the airline at a minimum to be acceptable. There are physical connection breakers and PIN codes on top of that. There are two nested VPNs on top of the already encrypted WiFi. It just goes on and on.
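The "quadruple signed at a minimum" rule boils down to an all-of-N check. Boeing's actual scheme isn't public; this sketch just illustrates requiring every listed party to have a valid signature, using HMACs as a stand-in for the real PKI certificate signatures, with signer names and key handling entirely made up.

```python
import hashlib
import hmac

# Hypothetical signer list; the real system would use asymmetric
# PKI certificates, not shared-secret HMACs.
REQUIRED_SIGNERS = ("boeing", "parts_manufacturer", "faa", "airline")


def patch_is_acceptable(patch: bytes, signatures: dict, keys: dict) -> bool:
    """Accept a patch only if every required party's signature verifies."""
    digest = hashlib.sha256(patch).digest()
    for signer in REQUIRED_SIGNERS:
        sig = signatures.get(signer)
        key = keys.get(signer)
        if sig is None or key is None:
            return False  # a missing signer vetoes the patch
        expected = hmac.new(key, digest, hashlib.sha256).digest()
        if not hmac.compare_digest(sig, expected):
            return False
    return True
```

The point is the failure mode: unlike a threshold scheme, any single missing or bad signature rejects the update outright.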
No part of it left me thinking they could have done better. I've used that document as a template for my own work, and it's the better for it.
Since then, I've insisted on flying 787s whenever possible, because I'm certain that the engineering effort that has gone into those things is about as good as humanly possible.
We have Teslas that don't have anywhere near this kind of security or redundancy "autopiloting" themselves right now on highways.
People seem to be ok with that because it's a car and not a plane. But the way I see it, there's thousands of those cars on the roads, and a software bug across the fleet could cause just as much damage as a plane crash.
Not just Teslas, modern vehicles in general. I would say that in terms of security Tesla is probably doing a more bang-up job than the other automakers; it was only a few years ago that Charlie Miller and Chris Valasek remotely killed a Grand Cherokee on the highway.
The Jeep hack (as well as their Toyota and Ford hacks) was extremely important, because it put public pressure on the less technologically capable OEMs to get with the times and implement a (somewhat) secure electronics architecture. As someone who shares the road with those shitty cars, I'm thankful for that. But even at the time of the hack, there were many OEMs whose cars were not anywhere close to that vulnerable, and the industry hasn't stood still since then.
And since you mention Tesla, I also have to point out that they are one of the worst at security. E.g. they have an RJ45 port behind the dash that you can just plug into. It used to be that this gave you complete access to everything, but people abused it. So Tesla made it a little bit harder, though not impossible, to get into their system. Tesla also has a lot of bugs in their smartphone integration that allow "fun" exploits like remote unlocking.
Do you have the knowledge required in order to judge that?
> I've insisted on flying 787s whenever possible, because I'm certain that the engineering effort that has gone into those things is about as good as humanly possible.
Have you compared it to the other Boeing airplanes? To Airbus airplanes? Bombardier? Embraer? The new Mitsubishi airplane?
I get that technical documentation can be incredibly good, but even picturing the most thorough and well-written documentation I can, what could possibly make it worth "millions of dollars to write" when writing an actual book can hardly top $100k even by going all out on expenses?
I'm sure the number of lines of code involved in a 787 is well into the tens of millions. You'd also need help from electrical/computer engineers, mechanical engineers, aerospace engineers, and who knows who else. This isn't a simple matter of plopping a tech writer in front of a computer to "punch out the doc".
I don't imagine the systems are simple enough that a single engineer or developer could cover all the relevant parts. I can easily see a team of well more than 50 being needed to produce such documentation, and I'd conclude that $1M USD is probably way on the low end for the cost of producing it, maybe even by an order of magnitude or more.
Well... some completely random stranger with no credibility and no supporting documentation said it on the internet, so it must be true.
The 787 is the greatest technical achievement of mankind, bar none!
I guess the question is how bad is it (from the article it's hard to tell exactly, but it sure doesn't sound great)? And another question is how many of our systems that we rely on, from bridges to airplanes to traffic lights, are just actually very insecure but either nobody notices or nobody exploits them?
That said, Boeing's abysmal PR and completely blanket "it's not our fault" statements make me assume the worst here. I have no idea how that company will ever earn back my trust. But maybe they have enough regulatory capture, much like Equifax, that they just don't care.
A bad actor could do anything from a DoS (pointing it in the wrong direction) to tampering with the bulbs (for example, swapping out all the greens with reds).
The reason society mostly doesn't collapse is that we assume most people are good actors. Unfortunately, once your device is hooked up to the internet you vastly increase the odds of dealing with bad actors, and have to spend more time and money securing against them.
It has been this way probably for longer than I'm alive (30 years). There is a huge traffic light for the central lanes with a visual timer (like all others in this avenue: green horizontal lines that fade one by one when the signal is closing soon) and a smaller one for the local south lanes on a given crossing.
After some road paving, pedestrian crossings were made accessible, but for some stupid reason the behavior of the local-lane lights was changed to a deadly combination. The speed limit is 60 km/h for both the central and local lanes (but people drive at anything between 50-100 km/h).
Previously for 30+ years: everything turned green/red at the very same time.
Since around some date before 1 January: the central lanes turn green first. The smaller traffic light for the local lanes turns green after 10 seconds. The local lanes turn red a few seconds after the central ones too. In some places, it might create an incentive for you to shift to the local lanes and back (while hitting the gas pedal) after 30s-1min if you see traffic ahead and that you can't make it in the central lanes - not sure if I consider this a feature or a safety risk.
The first time I passed by after the change I didn't stop (5 a.m. on New Year's Eve) because I was watching the central-lane light, and it was too late when I noticed they'd changed it. A second time, I had to hit the brakes.
During the first weeks after the changes, I saw a dozen or more cars either running the red light without stopping or after waiting for the [central] lane bright traffic lights go green.
Six months afterward, a reckless military driver killed a disabled woman in a wheelchair nearby. Probably unrelated, but I wouldn't be surprised if a road design fault played a role.
All municipal infrastructure tends to be. It's usually implemented to a cost and security considerations are completely absent.
You can bet that in any given city, all those street light control cabinets are keyed alike and the city has no true idea who has keys and who doesn't.
This exact problem applies to so many domains it's literally for lack of effort that they haven't been exploited yet.
Edit: Some googling for links led me to this video, which seems relevant:
I'll Let Myself In: Tactics of Physical Pen Testers
Traffic authorities use coloured bulbs where you are? Weird.
Where I am, as far as I'm aware the colours always come from a gel in front of the lamp. Furthermore, the red lamp holder is larger than the green and amber ones.
I think many of the lights here are also retrofits from incandescent traffic lights.
What country are you in that has rotatable traffic lights?
Anyways, even if lights are too big to climb up you can put on a yellow vest and get a flashing light on your van and people wouldn't really think twice.
Here's some photo evidence from Palo Alto, CA, USA: https://imgur.com/a/NxmphG6
Traffic lights here are nothing like that.
As gp said: The amount of such bad actors is low. And gains from an individual hack are low and there's a chance of getting caught.
This is like hacking servers. If you can get all the way to physical access with the device, of course it's exploitable. But that doesn't actually say a lot about how secure something is.
I imagine the quality of the lights around the world differ. If you can climb up and adjust it, these aren't the kinds of lights I'm thinking of.
So you'd need some equipment to do anything to them. And that equipment would have to basically shut down the intersection. So do it at 4am and hope that nobody drives by for the half hour you're there.
How hard is it to get a cherry picker? (Check your local tool/construction rental place) What about if you climb on top of a van, or a box truck?
Millions of ongoing safe flights? I dunno. I feel like they're getting savaged (which they deserve... to a point... but we will cross that point I am pretty sure, if we haven't already...)
The thousands (tens of thousands?) of safe flights per day don't make the news. Boeing has been a pioneer in the safest form of transportation in existence. Mentour Pilot (an active 737 pilot on YouTube) goes into detail about why he's not concerned about Boeing (any more than he's concerned about Airbus).
I can also share a story from my (late) father, who worked at Boeing for 30 years (and was working at Boeing during the MAX crashes). I asked him why Boeing let the 737-MAX debacle happen. These were a dying man's words (paraphrased): "Boeing wanted to ground the plane after the first 737-MAX crash but the FAA refused until after the second crash. Boeing did not have the authority to unilaterally ground the planes."
Take that for what it's worth.
1) Airbus A340 - No crashes
2) Boeing 777 - 5 crashes, 2 intentional (Malaysia x2), 1 engine-related (Rolls-Royce problem), 2 pilot-error (Asiana/Dubai) - [Thanks fishywang]
3) Boeing 747-8
4) Boeing 737-NG
5) Boeing 767
6) Airbus A320
7) Boeing 757
The popularity of A320 and 737-NG makes their safety records particularly impressive.
Luck and good fortune also played a huge role of course. What the pilots did was unconscionably neglectful. CFIT is often (usually?) fatal for all passengers.
*EDIT: It's similar to software that's written to be extremely robust that encounters an unexpected error (or set of errors). The software wasn't necessarily designed to handle it, but sometimes it can nonetheless.
That isn't really something engineers can design to. What happens is the aerodynamics group calculates the maximum loads on the airplane. This is increased by 50% and called the "ultimate load", and the parts are all designed to not break up to that load.
Parts stronger than that are overweight, and weight is the enemy of all airplanes.
After parts are designed, they go through an independent "stress" group which verifies that the parts meet the strength requirements.
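The sizing rule described above (limit load from the aerodynamics group, a 1.5 factor to get ultimate load, and an independent stress check) can be written as a trivial calculation. The numbers and the strict pass/fail criterion here are illustrative only, not any actual certification procedure:

```python
SAFETY_FACTOR = 1.5  # ultimate load = 1.5 x the maximum calculated load


def ultimate_load(limit_load_n: float) -> float:
    """Load (in newtons) a part must carry without breaking."""
    return SAFETY_FACTOR * limit_load_n


def passes_stress_check(part_strength_n: float, limit_load_n: float) -> bool:
    """The independent stress group's basic criterion: strength
    must meet or exceed the ultimate load. (Much stronger than
    that and the part is overweight.)"""
    return part_strength_n >= ultimate_load(limit_load_n)
```

So a part sized for a 100 kN limit load must survive 150 kN, and anything designed well beyond that margin is carrying dead weight.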
[x] and I don't mean that to doubt you but I hope you understand I can't be absolute on this without verification
*EDIT: This is probably a conversation that happens after every crash/major-incident and hindsight is 20/20.
I write software that is critical for public safety customers (think police/firefighters). Maybe this is just my perspective having left a defense company, but it is terribly insecure. The "secure" version of our product was obviously an afterthought; it was poorly executed and I don't think it's even widely used. And my company dominates this market, so the attack surface is huge.
I'd guess virtually all of them.
Part of this is just a poor understanding and pricing for software consultancy - along with some absolutely terrible actors in the HPC realm. Ideally your HPC will come in and spend a fraction of the time vetting software that the dev team built, but occasionally you get a fraud who works 8/5 for a month at 120/hr and delivers nothing but vapor in the end.
Maybe some security consultant industry group could set up a certification program, though all the times in memory I've seen software related certification it's been
1. Absolute BS in terms of skills evaluated.
2. A money grab by the certifier.
Keeping it closed seems like a full admission that "there are probably a bunch of bugs in here and we don't want people to see them"
Most executives care about profits, security is simply not important. Even if an engineer explains that he needs more time to properly secure something, he will be asked to cut corners. Then, when shit hits the fan the executive will make a "pikachu face" and engineer will get fired for not properly implementing security.
Open sourcing the codebase doesn't mean that all its vulnerabilities will be discovered, and it's certainly not the only way for a company to manage them. Out of all the options that are available, it's really one of the worst ones from the company's perspective.
Now, imagine they did open-source their code: I imagine those codebases are humongous and it would take months if not years for security issues to be found by the community. How do you make sure that a bad actor doesn't find a flaw before the community does and uses it?
So open-sourcing sounds totally unrealistic to me.
Ultimately your argument applies to all life or death code, even code we put inside our bodies, which as you mentioned, is also highly specific and specalized.
Arguing that because the bar is higher there should be less review is a contradiction.
APT (often not a State) conversations are pointless where plausible deniability is ignored as a desirable property.
No one really would benefit from open sourcing it without being able to (security) test it in realistic scenarios. Obviously security researchers would profit in discovering bugs which may have no relevance in reality, to increase their fame.
There are always bugs in software, some people depend on keeping them secret.
One problem with opening the source code is then Boeing would have to dedicate a team of engineers to deal with every armchair crank claiming the software is going to crash the airplane.
(ex-Boeing, OS+support code for embedded LRUs)
Because yes, it seems the problem is with incentives. If attacks on such things happen seldom enough, that will never trickle down into incentives for managers at all levels to prioritize security high enough.
Then, somebody could argue that if it happens seldom enough, that is reason not to prioritize it that highly. But I don't think the actual risk translates into actual incentives for managers in a very linear or fact-based manner, especially when the number of occurrences is very low but the consequences are catastrophic.
We definitely attempted to write the best code as we could given the circumstances, but we had issues doing so:
* airline margins are razor thin, so salaries are comparatively low, which means
* the best employees frequently left for other opportunities, causing
* management to institute an over-reliance on process and tech debt from poor engineers to build up like crazy, and then
* management's priority was always "keep the lights on" rather than repay any tech debt or start new ventures.
Eventually we were working on an unmaintainable codebase, spending way too long to ship each feature, and the situation was not improving.
It was not a wonderful environment to work in (hence my departure).
Excluding executive pay, of course. Oh and excluding stock buybacks (which increases shareholder value, consequently greatly increasing the value of executive compensation).
Razor thin margins which result in $200 million (give or take) in quarterly profits are not exactly sad stories.
In summary, the non-executive employees are paid as little as possible to keep the company operating. And by operating, I mean that the bottom line/shareholder value is all that matters. Safety is really just a bottom line consideration. If an accident or two happens, and an eventual death payout is made, as long as the bottom line is not greatly affected, there will be no change in corporate behavior with respect to paying people properly and not cutting corners.
Secondly, stock buybacks are similarly tiny compared to how much money a company would actually have to give to its employees. Run the calculation some time. You need to be looking at revenues, not profits.
Airframe software is a totally different ballgame than airline operations management software.
When I worked on the 757 on flight critical systems (stab trim) the engineers I worked with took great pride in making the designs as good as possible. Nobody wanted to sign off on a design that they'd get a phone call on years later as being the cause of a pile of dead bodies.
I personally am proud that none of the stuff I worked on or any of the guys I knew worked on has been a factor in any incidents I've ever heard of.
Not a matter of pride, but you don't want to help your competitors offer the same capabilities as you for $0 R&D costs
Would depend on licensing but in any event, once you start showing how the sausage is made others can find inspiration to develop their own code, at which point you can start getting into a costly legal battle over whose idea it originally was, whether certain algorithms are protected, etc...
My point is simply that there's no upside for Boeing to open source their code.
You wrote 'Airbus' as though Boeing's R&D in their software would somehow magically translate into an advantage for Airbus. But Airbus should also open source their code, and for exactly the same reason. In fact, airplane certification institutions such as the FAA and its counterparts could easily mandate the open sourcing of every last bit of software to create a level playing field.
The reason to have code open source would be to get public confidence perhaps, but I doubt that makes it a net positive in their eyes.
If you write code that takes an input and increments a number, and you're worried that someone might exploit your code, someone else should have written the software.
Attach a printer that shows the voter a paper receipt before the receipt goes in a box, and it should be able to prevent against most any electronic attack.
Edit: better link https://opensource.apple.com/
HN discussion: https://news.ycombinator.com/item?id=9942647
"Miller and Valasek’s full arsenal includes functions that at lower speeds fully kill the engine, abruptly engage the brakes, or disable them altogether. The most disturbing maneuver came when they cut the Jeep's brakes, leaving me frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch. The researchers say they're working on perfecting their steering control—for now they can only hijack the wheel when the Jeep is in reverse. Their hack enables surveillance too: They can track a targeted Jeep's GPS coordinates, measure its speed, and even drop pins on a map to trace its route."
The wheel control only working in reverse kind of makes sense. They're probably using some kind of self-park feature to control the wheel, and some engineer (sensibly) put in some kind of interlock to prevent the wheel from moving on its own when travelling at speed.
The wording of the article implies that these particular attacks only work when the car is travelling at low speed, but earlier in the article they did mention that they could (and did!) throw the transmission into neutral while the Jeep was driving on the highway. The driver was unable to recover without turning the car off and back on again.
In a followup a year later, they showed that they were able to do these attacks at any speed, including turning the steering wheel.
In any case, for bad dudes that are pursuing you when people aren’t around, being able to shut down your car is just as bad as being able to control it.
A Tesla is the perfect example. The fact that it has self-driving---I mean, "assisted cruise control"---naturally means a computer can take over the controls entirely.
They have been relaxed since then, but all current commercial jetliners would stay comfortably within both limits.
Incidentally, my GPS watch (Garmin) was working last time I was on a plane. It was bang on 700kph the whole time IIRC.
Would you agree that this logical boundary should be physically enforced? Such as an opto-isolator?
Secure: The network that connects and controls the airplane. Absolutely only essential things allowed on, and if possible isolated using mathematically proven secure vlan/isolation techniques.
"Employee": pilots, crew, etc. This is more just a distinct network for corporate operations security.
"Customer": Still try to keep this one secure so that virus and just don't spread, but this is the 'DMZ' area.
Communication from the secure network should be outbound only, and might best be done over a fixed rate serial data connection of some sort.
The modern version might be to configure a point to point network link on a CDMA based system and just disconnect the secure side's RX path entirely. Then you just export the data blind via UDP with like 10X redundancy.
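The receiving side of that blind export has to cope with the same no-ACK constraint: take whatever copies arrive and deduplicate. A minimal sketch, assuming a 4-byte sequence-number prefix on each datagram (my framing choice, not any real avionics protocol):

```python
import socket
import struct


def drain_diode(sock: socket.socket, max_msgs: int, timeout: float = 0.5) -> dict:
    """Read redundant datagrams off a receive-only UDP link and
    deduplicate on the 4-byte sequence-number prefix.

    When the socket times out we simply stop: the sender is blind,
    so there is no way to request a retransmission.
    """
    sock.settimeout(timeout)
    messages = {}
    try:
        while len(messages) < max_msgs:
            data, _ = sock.recvfrom(65535)
            seq = struct.unpack("!I", data[:4])[0]
            messages.setdefault(seq, data[4:])  # ignore repeated copies
    except socket.timeout:
        pass  # take what arrived; gaps in seq numbers mean lost data
    return messages
```

Gaps in the sequence numbers are the receiver's only signal that something was lost, which is exactly why the sender repeats everything.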
Instead we got MCAS which was messing with autotrim signals and escaped scrutiny.
A trivial use case which requires write access to the CAN bus is the navigation system informing cruise control of an upcoming hill.
Equally trivial would be the seat position memory or profiles being applied through the main touchscreen.
(I work for a company that develops infotainment systems)
They probably do something to that effect
I have seen things. Terrible things.
Also, industries have cultures, and Dunning-Kruger often applies outside their core domains.
For example: I did some work with Mercedes (no insult to them; I've happily owned several of their cars). They were "real" engineers: the "schnook" of the door when it shut. The brake-by-wire folks modeled everything in Matlab and developed multiple implementations. But the entertainment system? It's not "real" engineering, so it's typically subbed out to the canonical or cheapest bidder based on a PowerPoint bullet list. The result: a really, really safe vehicle, but the user's actual experience (outside the driving) is not encouraging.
Same issue with a BMW I had: ECU always kept the pollution within spec; the car always started instantly when I turned the key no matter the temperature or environment, but the seats and windows would move randomly.
And we all suffer from this: when I worked on power plants I was (and remain) horrified by the shitshow that is SCADA and the HMI infrastructure. As for many of the other "absurd" things I saw, well, some patient people would show me what an idiot I was and that there were very good reasons for doing things that looked to an outsider like headstands.
It's hard to implement strict one-way communications -- usually you at least need some kind of ACK for reliable transmission.
Put all those together with a vulnerability in the middle, and you have an attack.
Also, makes upgrades to new processors and better displays easier. Honestly, with an OLED screen, if there was a good mount on the back of the headrest, I'd rather watch movies on my screen.
Is that even legal? Will he ever be allowed to cross the US border after admitting this?
Generally no. There is a difference between being unprotected and being open to the public. While in some cases a person can claim not to have known, and proving mens rea for such a crime is much harder than if there was protection that had to be bypassed, it isn't impossible.
Such laws are selectively enforced, but being this is Boeing, you can expect it will be enforced on their behalf if they have any desire for it to be (given the current PR issues and the impact this might have, they might let this one go, at least for the time being).
Can't help but read this as: "We don't have a clue and depend on the manufacturer to tell us everything is 5 by 5."
If B says, "It's all good", and B is the SME, then FAA must agree and pass.
It sounds like the networks, while not air gapped, are being separated by some "high" security design or device... that happened to withstand the attack (hence the testing on Boeing's part). Fair enough?
AFDX (used for the network in the 787) uses unidirectional messages with no ACKs or anything, so you can reliably make a data diode by just cutting the right cables.
Is there a reason an individual would own a 787 for personal use? E.g., is it a plane people change the interior layout of for use as a private jet, or are these planes all tied up in commercial use?
If I owned one, I would lend it to the researcher as I would want to know the flaws and risks more clearly.
Previous owner was this guy https://www.insurancejournal.com/news/southcentral/2018/10/1... https://uk.flightaware.com/resources/registration/N912NB
I can't help but wonder what he planned to do with the plane, any insurance fraud scheme involving a passenger jet would probably invite interesting consequences.
There's the BBJ (Boeing Business Jet) edition of both the 787-8 and 787-9. There are over a dozen of them that have been built and delivered.
I think, though, that the sort of person who owns one doesn't tend to want to tinker with it. You just hire someone to fly it, and know that it's always available. The difference between owning and renting, at that level, is mostly financial. Besides, even if you own the aircraft, it's likely you don't own the engines, and they're kind of an important component of the overall system.
Likewise, I don't know anyone who owns a car who has loaned it to a researcher to analyze it for design flaws. A couple of people have done it, but for the vast majority of owners, you just use it normally, and if something breaks, you deal with the problem then. Airplanes are loaded with redundancies for critical systems, so a lot of things have to go wrong for one to crash.
Oh, tons of people! There's a whole industry around hacking cars, most of those people aren't the kind of researchers that do con talks though.
(I'm an automotive enthusiast myself, although I'm more into the "old school" non-computer-controlled stuff. Mostly because it's simpler and inherently resistant to remote hackers.)