• At 8 seconds prior to the crash, the Tesla was following a lead vehicle and was traveling about 65 mph.
• At 7 seconds prior to the crash, the Tesla began a left steering movement while following a lead vehicle.
• At 4 seconds prior to the crash, the Tesla was no longer following a lead vehicle.
• At 3 seconds prior to the crash and up to the time of impact with the crash attenuator, the Tesla’s speed increased from 62 to 70.8 mph, with no precrash braking or evasive steering movement detected.
This is the Tesla self-crashing car in action. Remember how it works. It visually recognizes the rear ends of cars using a black-and-white camera and Mobileye vision software (at least in early models). It also recognizes lane lines and tries to center between them. It has a low-resolution radar system that ranges moving metallic objects like cars but ignores stationary obstacles. And there are some side-mounted sonars for detecting vehicles a few meters away to the side, which are not relevant here.
The system performed as designed. The white lines of the gore (the painted wedge) leading to this very shallow off ramp become far enough apart that they look like a lane. If the vehicle ever got into the gore area, it would track as if in a lane, right into the crash barrier. It won't stop for the crash barrier, because it doesn't detect stationary obstacles. Here, it sped up, because there was no longer a car ahead. Then it lane-followed right into the crash barrier.
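To make that failure mode concrete, here's a minimal, purely illustrative sketch (my own toy Python, not anything resembling Tesla's actual code) of a lane-centering controller that steers toward the midpoint of whatever pair of lines it happens to be tracking. Once those lines are the two edges of a widening gore, the "center of the lane" and the point of the gore, where the attenuator sits, become the same place:

    # Toy lane-centering sketch (illustrative only; not Tesla's algorithm).
    # The controller steers proportionally toward the midpoint of the two
    # lines it is currently tracking. Names and numbers are assumptions.

    def steering_correction(left_x, right_x, car_x, gain=0.1):
        """Positive result = steer right, toward the midpoint of the tracked lines."""
        lane_center = (left_x + right_x) / 2.0
        return gain * (lane_center - car_x)

    # Normal 3.7 m lane: the car holds the middle and all is well.
    print(steering_correction(left_x=0.0, right_x=3.7, car_x=1.85))  # ~0.0

    # Widening gore: the "lane" edges are the two sides of the painted wedge,
    # so the midpoint being steered toward converges on the point of the gore,
    # i.e. the crash attenuator. Nothing in this loop ever checks for a barrier.
    print(steering_correction(left_x=0.0, right_x=8.0, car_x=1.85))  # steers right, toward the barrier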
That's the fundamental problem here. These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design. This is not an implementation bug or sensor failure. It follows directly from the decision to ship "Autopilot" with that sensor suite and set of capabilities.
This behavior is alien to human expectations. Humans intuitively expect an anti-collision system to avoid collisions with obstacles. This system does not do that. It only avoids rear-end collisions with other cars. The normal vehicle behavior of slowing down when it approaches the rear of another car trains users to expect that it will do that consistently. But it doesn't really work that way. Cars are special to the vision system.
How did the vehicle get into the gore area? We can only speculate at this point. The paint on the right edge of the gore marking, as seen in Google Maps, is worn near the point of the gore. That may have led the vehicle to track on the left edge of the gore marking, instead of the right. Then it would start centering normally on the wide gore area as if a lane. I expect that the NTSB will have more to say about that later. They may re-drive that area in another similarly equipped Tesla, or run tests on a track.
To me, that this behavior was added via an update makes it even harder to predict - your car can pass a particular section of road without incident one thousand times, but an OTA update makes that one thousand and first time deadly.
Humans are generally quite poor at responding to unexpected behavior changes such as this.
The saying has been beaten to death, but it bears repeating: Tesla is a prime case where the SV mindset of "move fast and break things" has resulted in "move fast and kill people". There's a reason that other vehicle manufacturers don't send out vehicle software updates willy-nilly, and it's not because they're technologically inferior.
Cars have been dependent on software for a long time (literally decades). This isn't something new. Even combustion engine cars have had software inside them that controls the operation of the engine, and this software is rigorously tested for safety issues (because most car manufacturers understand that a fault in such software could result in someone's death). Tesla seems to be the only major car manufacturer that has a problem with this.
>So what is the right way to handle these updates?
The way that other vehicle manufacturers (car, airplane, etc) have been doing it for decades is a pretty good way.
>You mentioned a clear flaw with OTA updates, but there are also numerous advantages. For example, the recent Tesla brake software issue was fixed with an OTA update. That immediately made cars safer.
There is no evidence that said OTA update made Tesla cars any safer. There is evidence that similar OTA updates have made Tesla cars more unsafe.
The brake OTA that you mentioned has actually potentially done more harm than good. Tesla owners have been reporting that the same update made unexpected changes to the way their cars handle/accelerate in addition to the change in braking distance. These were forced, unpredictable changes that were introduced without warning. When you're driving a 2 ton vehicle at 70mph, being able to know exactly how your car will react in all situations, including how fast it accelerates, how well it handles, how fast it brakes, and how the autopilot will act is crucial to maintaining safety. Tesla messing with those parameters without warning is a detriment to safety, not an advantage.
The TACC offered by most (if not all) manufacturers can't differentiate between the surroundings and stopped vehicles. I wouldn't be surprised if their Lane Keeping Assist (LKA) systems have similar problems.
When Pilot Assist follows another vehicle at speeds over approx. 30 km/h (20 mph) and changes target vehicle – from a moving vehicle to a stationary one – Pilot Assist will ignore the stationary vehicle and instead accelerate to the stored speed.
>The driver must then intervene and apply the brakes.
Despite this warning in the manual:
>Super Cruise is not a crash avoidance system and will not steer or brake to avoid a crash. Super Cruise does not steer to prevent a crash with stopped or slow-moving vehicles. You must supervise the driving task and may need to steer and brake to prevent a crash, especially in stop-and-go traffic or when a vehicle suddenly enters your lane. Always pay attention when using Super Cruise. Failure to do so could result in a crash involving serious injury or death.
 - https://www.cadillac.com/content/dam/cadillac/na/us/english/...
That's a nitpick. Your broader point about Tesla pressuring the market down an unfortunate path is spot on.
Ideally, yeah, every manufacturer would have to take all the puffery out of their marketing, or better yet, talk about all the negatives of their product/service first, but I doubt I'll ever see that.
This article portrayed Super Cruise as something qualitatively different, based on the maps of existing roadways. I'm not sure if they've also considered integrating the multiple systems involved in driver assistance. I'm curious if Tesla has either for that matter.
Software should not be driving a car into any of them. I think LIDAR would have seen the obstacle, but as I understand it, the crashed Tesla didn't have it.
I'd love to see LIDAR on consumer vehicles, but AFAIK it's prohibitively expensive. And to be fair, even Level 4 autonomous vehicles still crash into things and kill people.
Last but not least, every semi-autonomous system all the way back to Chrysler's "AUTO-PILOT" has had similar criticisms. People in the past even said similar things about high speed highways compared to other roads WRT attention.
Literally every car I have driven equipped with Cruise Control and Collision Avoidance (TACC) hits the brakes and slows down to 20-ish km/h if it senses ANYTHING moving slower (including stationary) in front of the car at possible collision path.
This really affects the nature of the situation. 20 years ago, cars contained microcontrollers with a tiny bit of code which was thoroughly reviewed and tested by skilled professionals. Today, all cars run so much code, even outside of the entertainment system, that the review and testing just can't be the same. (And there's way more programmers, so the range of skill and care is also much wider.)
We're in a new mountain-of-flaky-software world.
Compare this to classic engineering where you know the changes you've made, so you can rerun your unit tests, rerun your integration tests, check your change in the vehicle and be reasonably sure that what you changed is actually what you wanted.
The other approach to autonomous driving is to slowly and progressively engineer more and more autonomous systems where you can be reasonably sure to not have regressions. Or at least to contain your neural networks to very very specific tasks (object recognition, which they're good at), where you can always add more to your test data to be reasonably sure you don't have a regression.
I don't think we'll see too many cars being controlled by neural networks entirely, unless there's some huge advancement here. Most of the reason we see more neural networks now is that our computing power has reached the ability to train sufficiently complex NNs for useful tasks. Not because the math behind it advanced that much since the 60s.
That particular OTA update significantly shortened braking distances. [The update] cut the vehicle's 60 mph stopping distance a whole 19 feet, to 133 feet, about average for a luxury compact sedan. That's a safer condition, IMO, and I'm uncertain how to argue that it doesn't make the car safer.
 - https://www.wired.com/story/tesla-model3-braking-software-up...
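For a sense of scale, here's a rough back-of-the-envelope check (my own arithmetic, assuming constant deceleration from 60 mph and ignoring reaction time) of what trimming 19 feet off a 152 ft stop implies about the braking the update unlocked:

    # Average deceleration implied by a stopping distance: a = v^2 / (2*d).
    # Assumes constant deceleration and ignores driver reaction time.
    V = 88.0   # 60 mph in ft/s
    G = 32.2   # ft/s^2

    def decel_in_g(stopping_distance_ft):
        return (V ** 2) / (2 * stopping_distance_ft) / G

    print(round(decel_in_g(152), 2))  # ~0.79 g before the update (133 + 19 ft)
    print(round(decel_in_g(133), 2))  # ~0.90 g after the update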
> being able to know exactly how your car will react in all situations
If one depends on intimate knowledge of his own car for safety, then he's likely already driving outside the safety envelope of the code, which was written to provide enough safety margin for people driving bad cars from 40 years ago.
Additionally, the argument that we should continue to handle updates this way simply because we have done it this way for decades is the laziest possible reasoning. It is frankly surprising to see that argument on HN of all places.
As for the evidence that OTA updates can make things safer, this is from Consumer Reports:
>Consumer Reports now recommends the Tesla Model 3, after our testers found that a recent over-the-air (OTA) update improved the car’s braking distance by almost 20 feet. 
That update going out immediately OTA is going to save lives compared to if Tesla waited for the cars to be serviced like other manufacturers. I don't think you can legitimately argue against that fact.
 - https://www.consumerreports.org/car-safety/tesla-model-3-get...
There is again no evidence to support this claim. There is evidence that Tesla's OTA software updates have introduced safety issues with Tesla cars. That's a fact.
Better braking distance is of course a good thing but if anything, the fact that Teslas were on the road for so long with a sub-par braking distance is more evidence of a problem with Tesla than it is evidence of a benefit of OTA updates.
The other factor in that brake story is that it took mere days for Tesla to release an update to "fix" the brakes. This isn't a good thing. The fact that it was accomplished so quickly means that the OTA update was likely not tested very well. It also means that the issue was easy to fix, which calls into question why it wasn't fixed before. It also highlights the fact that Tesla, for some reason, failed to do the most basic testing on their own cars for braking distance. Comparing the braking distance of their cars should have been one of the very first things they did before even selling the cars, but apparently it took a third party to do that before Tesla was even aware of the issue. This doesn't inspire confidence in Tesla cars at all.
EDIT: The comment I was replying to was heavily edited after I responded. It originally said something along the lines of improving braking distance is good but there is no evidence that it would improve safety.
> if you are going to legitimately argue that shaving 20 feet off of braking distance will not make a car any safer.
Nobody is arguing that. We're arguing that there is no evidence the Tesla OTA update made the cars safer on net.
You're trying to set up some sort of "OTA updates are dangerous in general, but this one is clearly good, how do we balance it" conversation, but the problem is, this OTA update is not clearly good. OTA updates are dangerous in general, and also in this case in specific. You need to find a better example where there's actual difficult tradeoffs being made, and not just a manufacturer mishandling things.
If the car can’t see the obstacle, the braking distance simply does not matter.
As for your edit, you clearly misread the original comment, which is why I edited it for you. I said that there was no evidence that the OTA made the car safer. Please try to read with better comprehension instead of trying to misrepresent my comments.
You don't have enough information to come to that conclusion.
It's quite common to have to brake hard to avoid a collision. It's pretty uncommon to see the specific scenario triggering this crash behavior.
Avoid doing them in the first place? It's not like bit rot is - or should be - a problem for cars. It's a problem specific to the Internet-connected software ecosystem, which a car shouldn't be a part of.
So basically: develop software, test the shit out of it, then release. If you happen to find some critical problem later on that is fixable with software, by all means fix it, again test the shit out of it, and only then update.
If OTA updates on cars are frequent, it means someone preferred to get to market quickly instead of building the product right. Which, again, is fine for bullshit social apps, but not fine for life-critical systems.
Part of me wonders if there should be a very quick, unskippable, animated, easy-to-understand explanation of the patch notes before you can drive, whenever they make material changes to core driving functionality.
Tesla did make a mistake calling it Autopilot, but only because regular folk don't understand that aircraft autopilot is literally a heading, altitude, and speed, and will not make any correction for fault. Aircraft autopilot will fly you straight into a mountain if one happens to be in the way.
> Tesla did make a mistake calling it Autopilot, but only because regular folk don't understand that aircraft autopilot is literally a heading, altitude, and speed, and will not make any correction for fault. Aircraft autopilot will fly you straight into a mountain if one happens to be in the way.
Auto-TCAS and Auto-GCAS exist, and the public is aware of them: E.g. http://www.airbus.com/newsroom/press-releases/en/2009/08/eas.... http://aviationweek.com/air-combat-safety/auto-gcas-saves-un....
This is beyond broken; it's a fundamental misunderstanding of how physical products are supposed to work. Software people have gotten used to dismissing the principle of least astonishment because they know better (and no user got killed by a Gmail redesign), but this is a car: hardware with its user on board, a lot of kinetic energy, and behavior that relies on muscle memory.
Highways are not, nor should they ever be if at all possible, proving grounds.
The second thing is to require the owners to take some action as part of the installation procedure, so that it is hard for them to overlook the fact that it has happened.
The third thing is that changes with safety implications should not be bundled with 'convenience/usability' upgrades (including those that are more of a convenience for the manufacturer than for the user.) To be fair, I am not aware of Tesla doing that, but it is a common enough practice in the software business to justify being mentioned.
And it has to be done securely. Again, I am not aware of Tesla getting this wrong.
A problem with variable stopping distances is the sort of thing that should be blindingly obvious in the telemetry data from your testing procedures. Brake systems, and ABS controls in particular, are normally rigorously tested over the course of 12-18 months in different environments and conditions. That Tesla completely missed something like that suggests either that their testing procedures are drastically flawed (missing something that CR was able to easily and quickly verify in different cars), that their software development process isn't meshed with their hardware testing and validation, or a combination of the two. None of those options is a good one.
The fact that Tesla was able to shave 19 feet off their braking distances is horrifying. After months of testing different variations and changes to refine your braking systems, shaving off an extra 19 feet should be impossible. There shouldn't be any room to gain extra inches without making tradeoffs in performance in other conditions that you've already ruled out making. If there's an extra 19 feet to be found for free after a few days of dev time, you did something drastically wrong. And that's completely ignoring physical testing before pushing your new update. Code tests aren't sufficient; you're changing physical real-world behavior, and there's always a tradeoff when you're dealing with braking and traction.
Tesla is being praised by consumers and the media because, hey, who doesn't like the idea that problems can be fixed a couple of days after being identified? That's great. In this case, Tesla literally made people's cars better than they were just a few days before. But it trivializes a problem with very real consequences, and I hope that trivialization doesn't extend to Tesla's engineers. Instead of talking about a brake problem, people are talking about how great the fast OTA update for the problem is. Consumers find that comforting, as OTA updates can make what's otherwise a pain in the ass (recalls and dealer visits for software updates) effortless.
Hell, I'm a believer in release early, release often for software. Users benefit, as do developers. At the same time, the knowledge that you can quickly fix a bug and push out an update can be a bit insidious. It's a bit of a double-edged sword in that it gives you a sense of comfort that can bite you in the ass as it trivializes the consequences of a bug. And when bug reports for your product can literally come in the form of coroner's reports, that comfort isn't a good thing for developers.
I see a Tesla, and I try to get away from them as soon as possible.
Nope. I see tremendous numbers of distracted drivers who don't even realize there's a threat. I also see many utterly incompetent drivers who will not take any evasive action, including braking, because they simply don't understand basic vehicle dynamics or that one needs to react to unexpected circumstances.
Require updates to be sent to a government entity, which will test the code for X miles of real traffic, and then releases the updates to the cars. Of course, costs of this are to be paid by the company.
So, no filter, but government penalties and legal remedies should be available.
That's exactly the impression that I don't get from Tesla. Instead I see the following:
Get that thing to market as quickly as possible. If the software for safety-critical systems is sub-par, well, it can be fixed with OTA updates. That's fine for your dry cleaning app. For safety-critical software, that's borderline criminal.
Hype features far beyond their ability (Autopilot). Combine this with OTAs, which potentially change the handling of something that is not at all an autopilot, but actually some glorified adaptive cruise control. For good measure: throw your customers under the bus when inevitable and potentially deadly problems do pop up.
Treating safety issues merely as a PR problem and acting accordingly. Getting all huffy and insulted and accusing the press of fake news when such shit is pointed out.
I could go on. But such behavior to me is not a company signaling that safety is of paramount concern.
"That does mean complete transparency during investigations, a complete audit trail of every software function invoked prior to a crash."
Let's just say that Tesla's very selective handling and publication of crash data does not signal any inclination for transparency.
Government has a valid role to play, though, by requiring full disclosure of the contents of updates and "improvements," by setting and enforcing minimum requirements for various levels of vehicle autonomy, and by mandating and enforcing uniform highway marking standards. Local DOTs are a big part of the problem.
Cars have been made safe for us also by direct intervention by the government. From important things like mandating seat belts and crash safety to smaller things like forcing the recall of tens of millions of faulty air bag inflators.
These are just a few of the many things Uncle Sam has done to make things safer for us.
First, any change in design (or in configuration, in the case of repairs) is backed by PEs or A&P mechanics who sign off on the changes. Their career rides on the validity of their analysis so that's a better guarantee than some commit message by a systems programmer.
Second, the FAA basically says "show us what you are changing" after which they will absolutely require physical tests (static or dynamic tests, test flights, etc., as appropriate to the scope of change).
And I'd say flying is so safe mainly from the blameless post-mortem policy that the American industry instantiated decades ago and which is constantly reinforced by the pros in the NTSB. It's a wonderful model for improvement.
As an example, the crash of N121JM on a rejected takeoff was due (only in part) to a defective throttle quadrant/gust lock design that went undetected during design and certification, in part because it was argued to be a continuation of a conformant and previously certificated design. (Which is relevant to the current discussion in that if you decide to make certification costly and time-consuming, there will be business and engineering pressure to continue using previously certificated parts, with only "insignificant changes".)
PS: I 100% agree on the NTSB process' contribution to safety.
If an engineer signs off a change they sign that they have validated all the constraints and that for all they know the machine will work within the specs with no faults.
If a software engineer commits code we may run some tests over it, look a bit over it. That's fine. But if the software ends up killing anyone, the software engineer is not responsible.
And yes, to my knowledge, every change to an aircraft is tested before flight, or at least validated by an engineer who understands what was just changed.
See also: The FDA and drug approvals.
At the very least, you should be able to get some sort of magnitude/fuzzy understanding of how frequently the new code is disagreeing, and you can figure out where and go check out those conditions.
An ML solution stops "learning" after training and only reacts.
To illustrate the difference, have you driven on roads under construction lately? As humans, when you've driven the same road hundreds of times, you start to do the same thing as a machine learned implementation. You drive by rote.
When you get to that construction zone though, or the lines get messed up, your brain will generally realize something has changed, and you'll end up doing something "unpredictable", i.e. learning a new behavior. The machine learning AI's output (a neural net) can't do that. It can generify a pattern to a point, but its behavior in truly novel circumstances cannot be assured.
Besides which, the problem still stands that the system is coded to ignore straight ahead stationary objects. Terrible implementation. It should look for overly fast and uniform increase in angular field coverage combined with being near stationary in terms of relative motion as a trigger to brake. I.e. If a recognized shape gets bigger at the same rate on all "sides" whilst maintaining a weighted center at the same coordinate. It's one of the visual tricks pilots are taught to avoid mid-air collisions.
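For what it's worth, here's a rough sketch of that looming cue (thresholds and data layout are guesses on my part, purely to illustrate the idea): an object whose apparent size grows quickly while its bearing barely moves is very likely on a collision course.

    # Looming heuristic sketch: a tracked shape that swells rapidly while its
    # bearing stays put is on (or near) a collision course. Thresholds are
    # illustrative guesses, not values from any real system.

    def is_collision_threat(prev, curr, dt,
                            growth_thresh=0.2,    # rad/s of angular-size growth
                            bearing_thresh=0.02): # rad/s of bearing drift
        """prev/curr are (bearing_rad, angular_width_rad) for the same tracked object."""
        growth_rate = (curr[1] - prev[1]) / dt
        bearing_rate = abs(curr[0] - prev[0]) / dt
        return growth_rate > growth_thresh and bearing_rate < bearing_thresh

    # A barrier dead ahead: it swells in the image but its center barely moves.
    print(is_collision_threat((0.000, 0.05), (0.001, 0.09), dt=0.1))  # True
    # A car in the next lane: it drifts across the field of view instead.
    print(is_collision_threat((0.10, 0.05), (0.18, 0.06), dt=0.1))    # False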
Admittedly though, the human brain will likely remain WAY better at those types of tricks than a computer will be for a good long time.
You can do this sort of stuff when replacing a web service too, by the way. For example running two versions of Django and checking if the new version produces any difference for a week before making it the version the client actually sees.
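A minimal sketch of that pattern (hypothetical URLs; in practice you'd sample rather than double every request, and diff something more structured than raw bytes):

    # Serve every request from the old version; quietly replay it against the
    # new version and log any divergence. old_url/new_url are hypothetical.
    import logging
    import requests

    def shadow_compare(path, old_url, new_url):
        old = requests.get(old_url + path)
        try:
            new = requests.get(new_url + path, timeout=1)
            if new.content != old.content:
                logging.warning("shadow divergence on %s: %d vs %d bytes",
                                path, len(old.content), len(new.content))
        except requests.RequestException:
            logging.exception("shadow backend failed on %s", path)
        return old.content  # the client only ever sees the old version's answer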
You can look at your code and say "For all invalid XML, this, for all input spaces, that." You can formally prove your code in other words.
You CANNOT do that with Neural Nets. Any formal proof would simply prove that your neural network simulation is still running okay. Not that it is generating the correct results.
You can supervise the learning process, and you can practically guarantee all the cases within your training data set, and everyone in the research space is comfy enough to say "yeah, for the most part this will probably generify" but the spectre of overfitting never goes away.
With machine learning, I developed a rule of thumb for applicability: "Can a human being who devotes their life to the task learn to do it perfectly?"
If the answer is yes, it MAY be possible to create an expert system capable of performing the task reliably.
So lets apply the rule of thumb:
"Can a human being, devoting their life to the task of driving in arbitrary environmental conditions, perfectly safely drive? Can he safely coexist with other non-dedicated motorists?"
The answer to the first I think we could MAYBE pull off by constraining the scope of arbitrary conditions (I.e. specifically build dedicated self-driving only infrastructure).
The second is a big fat NOPE. In fact, studies have found that too many perfectly obedient drivers typically WORSEN traffic in terms of probability to create traffic jams. Start thinking about how people drive outside the United States and the first-world in general, and the task becomes exponentially more difficult.
The only things smarter than the engineers trying to get your car to drive itself are all the idiots who will invent hazard conditions that your car isn't trained to handle. Your brain is your number one safety device. Technology won't change that. You cannot, and should not outsource your own safety.
I mean what would have happened here if another Tesla or two were directly behind Huang, following his car's lead?!
Possibly nothing, I'd assume the stopping distance would be observed and the following cars would be able to stop/avoid, but I wouldn't like to bet either way. Perhaps, in some conditions, the sudden impact on the lead car would cause the second car to lose track of the rear end of the first? Would it then accelerate into it?
An attentive human would realize something horrible had happened and perhaps react accordingly. A disengaged or otherwise distracted one may not have the reaction time necessary to stop the system from plowing right into the situation and making it worse.
Because there are no closed facilities that you can use to actually perform any meaningful test. You could test "in-situ", but you would need an absolutely _huge_ testing area in order to accurately test and check all the different roadway configurations the vehicle is likely to encounter. You'll probably want more than one pass, some with pedestrians, some without, some in high light and some in low, etc..
It's worth noting that Americans drive more than 260 billion miles each _month_. It's just an enormous problem.
[ABC News reporter] Dan Noyes also spoke and texted with Walter Huang's brother, Will, today. He confirmed Walter was on the way to work at Apple when he died. He also makes a startling claim: that before the crash, Walter complained "seven-to-10 times the car would swivel toward that same exact barrier during auto-pilot. Walter took it into dealership addressing the issue, but they couldn't duplicate it there."
It is very believable that the car would swivel toward the same exact barrier on auto-pilot.
BTW - I'm running a nonprofit/public dataset project aimed at increasing safety of autonomous vehicles. If anyone here wants to contribute (with suggestions / pull requests / following it on twitter / etc) - you'd be most welcome. Its: https://www.safe-av.org/
But you're right, if you're really going to do a full roll out, may as well test it on a subsegment first - I'd hate for it to be used as a debugging tool though.
As I understand it, this is essentially what they were doing with the autopilot 'shadow mode' stuff. Running the system in the background and comparing its outputs with the human driver's responses, and (presumably) logging an incident when the two diverged by any significant margin?
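If that description is accurate, the core of it is tiny; something like this sketch (all names and the divergence threshold are my assumptions, not Tesla's): the candidate controller runs on the same inputs but never actuates anything, and only logs when it disagrees with the human.

    # Shadow-mode sketch: run the candidate controller passively and record
    # divergences from the human driver. Names/threshold are assumptions.
    import logging

    def shadow_step(sensor_frame, human_steering_rad, candidate_controller,
                    divergence_thresh=0.1):
        proposed = candidate_controller(sensor_frame)  # what the new code would do
        if abs(proposed - human_steering_rad) > divergence_thresh:
            logging.info("divergence: human=%.3f rad, candidate=%.3f rad",
                         human_steering_rad, proposed)
            # A real system would also snapshot the sensor frame for later review.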
I recently got downvoted for that exact line of reasoning.
Looks like some people don't like to hear that :-)
- known preventable deaths from, say, not staying in the lane aggressively enough;
- possible surprises and subsequent deaths.
There is an ethical conundrum (quite different from the trolley problem) between a known fix and an unknown possibility. If both are clearly established and you dismiss the former in favor of a simplistic take, yes, you would be down-voted, because you are making the debate less informed.
Without falling into solutionism: in that case, the remaining issue seems rather isolated to a handful of areas that look like lanes and could either be painted properly, or be something Tesla cars could be trained to avoid. The latter fix would have to be sent rapidly and could have surprising consequences -- although that seems decreasingly likely.
That learning pattern (resolving unintended surprises as they happen, decreasingly often) is common in software. This explains why this community prefers (to an extent) Tesla over other manufacturers. Others have preferred the surprise-free and PR-friendly option of not saving the tens of thousands of lives being lost on the road at the moment. There are ethical backgrounds to non-interventionism.
As the victim of car violence, I happen to think that their position is unethical. I'm actually at what is considered an extreme position: being in favour of Tesla (and Waymo) taking more risk than necessary and temporarily increasing the number of accidents on the road, because they have a far better track record of learning from those accidents (and the subsequent press coverage), and that would lower the overall number of deaths faster.
As it happens, they don't need to: even with less than half the accidents of their counterparts, they still get a spectacular learning rate.
> I think you got down-voted because you misrepresented what Tesla is actually doing, which is a difficult arbitrage between:
> - known preventable deaths from, say, not staying in the lane aggressively enough;
> - possible surprises and subsequent deaths.
I don't think I was misrepresenting anything (at least, I was trying not to). I just pointed out that behaviour-changing updates that may be harmless in, say, smartphone apps, are much more problematic in environments such as driving-assisted cars.
I think this is objectively true.
And I think we need to come up with mechanisms to solve these problems.
> That learning pattern (resolving unintended surprises as they happen decreasingly often) is common in software.
My argument is that changes in behaviour are (almost automatically) surprising, and thus inherently dangerous. Unless my car is truly autonomous and doesn't need my intervention, it must be predictable. Updates run the risk of breaking that predictability.
> Others have preferred the surprise-free and PR-friendly option of not saving the dozens of thousand of lives dying on the road at the moment.
My worry is that (potentially) people will still die, just different ones.
> being in favour of Tesla (and Waymo) taking more risk than necessary
If I'm taking you literally, that's an obviously unwise position to take ("more than necessary"). But I think I know what you meant to say: err on the side of faster learning, accepting the potential consequences. Perhaps like NASA in the 1960s.
But my argument was simply that there is a problem with frequent, gradual updates. Not that we shouldn't update (even though that's actually one option).
We ought to search for solutions to this problem. I can think of several that aren't "don't update".
But claiming that the problem doesn't exist, or that those that worry about it are unreasonable, is unhelpful.
In the accident, the driver double-tapped the Park button which activated the Autopark feature, exited the car and walked away. The car proceeded to move forward into an object. The driver claimed he merely put it in park and never activated the auto-park feature. Tesla responded with logs proving he double-tapped and there were audible and visual warnings.
Well, I looked more closely. Turns out Tesla pushed an OTA update that added this "double-tap Park to Autopark" shortcut. And it's one bad design decision after another.
First, let's note the most obvious design flaw here: The difference between instructing your car to immobilize itself (park) and instructing it to move forward even if the driver was absent (Autopark) was the difference between a single and double tap on the same button. A button that you might be in the habit of tapping a couple times to make sure you've parked. So it's terrible accident-prone design from the start.
Second issue is user awareness. Normally Tesla makes you confirm terms before activating new behavior, and technically they did here, but they did it in a horribly confusing way. They buried the mere mention of the Autopark shortcut under the dialog for "Require Continuous Press". So if you went to turn that off -- that's the setting for requiring you to hold down a FOB button during Summon -- and you didn't read that dialog closely, you would not know that you'd also get this handy-dandy double-tap Autopark shortcut.
Third is those warnings. Useless. They did not require any kind of confirmation. So if you "hit park" and then quickly exited your vehicle, you might never hear the warning or see it on the screen that, by the way, that shortcut that you didn't know existed just got triggered and is about to make the car start moving forward while you walk away.
So I think it's quite plausible that the driver was not at fault here -- or at least that it was a reasonable mistake facilitated by poor design. It's unfortunate that Tesla was able to convince so many with a "data dump" that the driver was clearly at fault.
I still recall that poor NYT journalist that Tesla "caught driving in circles" -- while looking for a hard-to-find charger at night. Now I hope we are developing a healthier skepticism to this (now standard) response from Tesla and look more deeply at potential root causes.
a) Clearly different road markings between the gore section and normal lanes
b) BIG brightly colored and lit sign above the barrier
c) the barrier itself has a shock absorber/deflection thing like 10 meters in front of the concrete
I know what you mean, once you truly start to trust someone else to do a job you simply stop giving it any attention. It's being handled. So you maybe hover a bit at first, keep an eye, generally interfere. Once that stage is over, you just do other things safe in the knowledge that someone else is competently handling the task. Until it blows up in your face.
This level of autopilot legitimately terrifies me. Not because it's bad, but because of the way it will make the humans who are supposed to still be responsible stop paying attention.
I am utterly, completely lacking in surprise that they didn't provide the relevant context, "... fifteen minutes prior."
This just looks... really bad for Tesla. It's more important to them to protect their poor software than drivers.
You think this one crash makes them evil?
Without cars, our level of productivity would be a fraction of what it is today as employment is confined to a tiny geography. Many more people would die from fires, disease, and crime as emergency services arrive on horse drawn carriage. Most people would never venture out of their hometowns.
Car companies, for all their faults, for all their fraud and corruption, create products that immeasurably benefit us every day. Before we call them evil, we must look at the impact they have on each of us. That impact is decidedly positive, as evidenced by the widespread ownership of cars.
Stepping back further, away from this accident, Tesla is also a leading player in moving away from destroying our planet. By that I mean they are pushing renewable energy. Again, not something I would call evil, let alone blatantly evil.
* every update to the software makes the record irrelevant, because one is no longer driving the same car which set the previous record. Lane centering for instance was introduced in an OTA update and it likely contributed to this accident.
* most of the safe miles driven by Teslas are not with autopilot on. The NTSB explicitly said they did not test autopilot.
* finally, some HN users did the math and it turns out that humans have overall a better safety record than autopilot Teslas.
To me it looks like Tesla's communication is only reflective of covering their asses. From blaming that journalist to the latest accident.
Note: I work for a Tesla competitor.
I disagree that their communication is only reflective of them covering their ass. I feel there is an expectation that self-driving is going to prevent more deaths than it causes, especially as the technology improves, but even to an extent now, if only by virtue of it not being a system people should be using without being ready to intervene.
Don't get me wrong here, you raise good points. I just don't think it's a case of blatant evil.
Also, if you have the link to the math, I'd love to read it.
You don't work in software, right?
An ad hominem attack does not trump evidence. My claim is true. The above article talks about an example of an update which improved the safety of the vehicle.
In addition to that, you're arguing against a strawman. I never disagreed that there was potential for the safety to be negatively impacted with every update. In fact, I explicitly agreed with this claim.
The point you are missing is that while an update might slightly enhance safety, the negative, unpredictable impact might be catastrophic, because one is planned and the other is not.
My acknowledgement only makes sense if I agree that there is some level of danger in each update. That is why you're addressing a straw man.
I feel like there is a language barrier:
Or maybe you're being uncharitable with me, because as you put it in our other thread you find the things I've said "stupid". So you are just guessing that I hold the stupidest possible belief you can ascribe me, even when I tell you otherwise.
No. You are not missing the potential. But you do not seem to get the difference in magnitude. One is an incremental, reviewed safety enhancement; the other is unpredictably catastrophic.
You only seem to grasp very superficial aspects of my comments, which is why I asked you to give them some thought before responding. So I think there is some kind of barrier. But it is not one of language; for lack of a better word, I think it is a lack of shared sensibilities.
Right now human driving is one of the leading causes of death. I believe that technology can eventually eliminate this as a leading cause of death. So I project a much greater potential upside. I also figure this is a matter of time and effort applied to the problem. Or in other words, there is a finite amount of time before an update brings the car to this point. This puts a ceiling on my mental tabulation of the amount of risk endured prior to achieving an extremely good end. So despite the severe risks, the limited nature of that risk allows me to rule in favor of taking the risk despite its presence.
You're assuming that I haven't pictured a sweeping update which adds the car murdering anyone who was unaware. I have! Your assumption is incorrect.
And if I were being superficial, I would have answered that yes, I'm a software developer. But it's a fallacious appeal to authority.
There is not enough data to do this "scaling up". So doing so would be incredibly misleading (but that doesn't stop Tesla's PR from doing the same).
>Tesla is also a leading player in moving away from destroying our planet.
The actions of this company and the people behind it somehow do not feel compatible with such a goal. I am sorry. I am just not buying it. It is more probable that this "saving the planet" narrative is something meant to differentiate from the competition and to attract investors. Do you think Elon Musk could have created a company that builds ICE cars and emerged as a major player? It is "save the planet" for Tesla and "save humanity by going to Mars" for SpaceX.
I mean, is this so hard to see?
There are tens of thousands of Tesla vehicles on the road, many of which have been driven for years. However, the case for Tesla vehicles' safety doesn't rest on Tesla vehicles alone. Tesla vehicles are a class of vehicle that implements driver assistance technologies. There are many other cars that do this. Independent analyses of these cars in aggregate have shown them to reduce accident frequency and severity.
> The actions of this company and the persons behind this somehow does not feel compatible with such a goal. I am sorry. I am just not buying it. It is more probable that this "saving the planet" narrative is something that is meant to differentiate from the competition and to attract investors.
Tesla is a leader in the renewable energy sector. There is a need for renewable energy as a consequence of climate change. Being a leading player in renewable energy means being a leading player in combating climate change. So Tesla is a leader in combating climate change. Combating climate change is an effort to save the planet. So Tesla is a leading player in the effort toward saving the planet.
At no point in the chain of logic is it necessary to call upon the motivations of Elon Musk. If someone were to kill another person, the motivation for the deed would not change whether or not they did in fact kill someone. In the same manner, the fact that Tesla is helping to solve the problem of climate change is a fact regardless of the motivation of its founder.
>At no point in the chain of logic is it necessary to call upon the motivations of Elon Musk.
We are interested in their motivation because we are thinking long term. When you are in need of a million bucks, and a person shows up with a million bucks that they are willing to give you, without asking for payback, will you accept it right away? Or will you try to infer the true motivation behind the act, which may turn out to be sinister? This is irrespective of the fact that the other person is giving you real money that can help you right now. Will you think, "we don't need to worry about their motivations as long as we are getting real money"? Will you?
Hope I am clear.
I brought up driver assistance technology as a way to continue discussing safety statistics. If you recall, I claimed Autopilot was safer and you ruled this out on the basis of not enough information. Now you are saying that you don't feel the broader class is relevant to the discussion. So we return to the point where there is not enough information to make a statistical claim about safety. As a consequence of returning to this point, your own claim about the system being half baked is without merit. It's a claim about the performance of the system which you have claimed we cannot characterize with the currently available statistics.
> We are interested in their motivation because we are thinking long term.
The thing I'm ultimately arguing against is the idea that Tesla is as you put it blatantly evil. Blatant means to be open and unashamed, completely lacking in subtlety and very obvious. The things Tesla is doing with regard to the environment are blatantly good. They say they are doing it because of care for the environment and their actions reflect that. If we think long-term, their actions are part of what allows the long term to exist in the first place. They are not just lacking in shame for that, they are proud of it. Brag about it. Exult in it. It is blatant that they care about the environment.
In your post you're saying that you speculate that their motivations might not be what they have claimed. This contradicts the idea of blatant evil. Blatant evil is obvious, lacking in shame, lacking in subtlety. The hiding of something is the definition of subtlety. The need to hide is reflective of a shame.
I claimed the feature they call "Autopilot" is unsafe because it has only limited capability (as per Tesla's documentation). But the naming of the feature and its marketing inspire false confidence in drivers, leading to accidents. This is a very simple fact, and it should have been apparent to people at Tesla, and the fact that they went ahead and did this kind of marketing makes them "blatantly evil" in my books. Because, as you said, it is open and they are unashamed about it. Other safety features that are widely available in similar cars from other companies are irrelevant here. I am not even sure why you dragged them into this.
>If we think long-term, their actions are part of what allows the long term to exist in the first place.
What kind of circular logic is that? If they are not really interested (their real motivation) in the "long term", then their actions cease to be part of "what allows long term to exist".
In citing their documentation, you acknowledge that their communication is enough to deduce the limits of their technology. In claiming that there is not enough data to make declarations about safety, you disavow the validity of your own proclamation of (a lack of) safety. In doing so, you've refuted many of the premises of your own argument.
> But the naming of the feature and its marketing inspires false confidence in the drivers, leading to accidents.
How is this different from any other name? Every word concept pair starts out without the word and the concept linked together. For example, the name given to our species is 'homo sapien' which means roughly 'wise human being'. But humans aren't always wise. So why isn't the person who coined the term 'homo sapien' blatantly evil for coining the term?
> If they are not really interested in the "long term", then their actions cease to be part of "what allows [the] long term to exist".
Maybe we're talking past each other but this is... an absurd idea. And wrong. So very wrong.
If someone wakes up in the morning and they say they got up because they wanted to see the face of their loved one, but really they got up because they wanted to pee, they still got up out of bed. The existence of imperfectly stated motivations doesn't cause a cessation of causal history.
Not deducing. By what they explicitly state in the manual. About the "need to keep hands on the wheel always". So again. I am not "deducing" it.
>So why isn't the person who coined the term 'homo sapien' blatantly evil for coining the term?
I don't know. Was the person who coined the term trying to sell human beings as being wise? Are people suffering because of this word? What is your goddamn point?
Tesla is evil because they use lies to SELL. use lies and project a false image to get INVESTMENT. Please keep this in mind when coming up with further examples.
>The existence of imperfectly stated motivations doesn't cause a cessation of causal history.
Ha. Now you are talking about "history" that does not exist yet. Are you really this misguided or just faking it?
> Tesla is evil because they use lies to SELL. use lies and project a false image to get INVESTMENT. Please keep this in mind when coming up with further examples.
You've utterly failed to establish that they are lying.
> Are you really this misguided or just faking it?
Tesla already has an established history. Therefore, it is not necessary to speculate about future history.
>You've also clearly haven't understood anything I've said during this entire conversation.
Oh I understood you just fine. I just find it stupid.
>You've utterly failed to establish that they are lying.
That is because you are overly generous with assumptions to justify their claims, which is typical of people who are apologetic of fraudulent entities such as Musk.
>Tesla already has an established history...
But they haven't saved the planet yet. Please give some thought to what you are writing before responding.
No, I actually conceded. I gave up the generous assumptions on safety, backed by data, because you claimed we couldn't generalize from that data and I agree that doing such a generalization would be in some ways misleading.
This is what I mean by a lack of understanding on your part. Even in the post where you are telling me that you understood me just fine, but find my ideas to be stupid, you don't actually address what I'm saying.
As a consequence, I'm not going to continue this conversation. Have a nice day.
Indeed; here's a close-up of the critical area. Note that the darker stripe at the bottom of the photo is NOT the crucial one; the one the car was supposed to follow is the much more faded one above it, which you can barely see:
(Note that I'm not blaming the faded paint; it's a totally normal situation on freeways that it's entirely the job of the self-driving car to handle correctly. But I think it was what triggered this fatal flaw.)
I don't even see any sort of road-bumps to warn drivers of this dangerous obstacle approaching.
Note that I'm not defending the safety of them; I'm just surprised to see someone call them out as a hazard as they're such a common sight.
To compare, here's a similar situation in The Netherlands:
Points of interest:
- big solid white arrow, line-fade is nearly impossible
- white-green sign indicating that the road is splitting
- big shoulder in the continuation of the division
- loads of grass instead of immediately using a metal barrier
- gently rising metal barrier, so driving straight into it will result in something like this: http://cdn.brandweersassenheim.nl/large/269dezilkvangrail104...
Same goal, but a completely different approach.
Here's an accurate U.S. analogue to your example: https://email@example.com,-102.1517159,3a,75y,...
Meanwhile, here's a Dutch example: https://firstname.lastname@example.org,4.4371182,3a,75y,66....
Turns out sometimes you need crash attenuators because space isn't infinite. Also, there's no grass in sight here.
Here's another example: https://email@example.com,4.4203482,3a,67.8y,2...
No sloping barriers, because they're not always safer.
First, your counter-example is, ironically, pretty apples-to-oranges, as it is literally in the middle of nowhere. Meanwhile, the municipality where my interchange is located has a population density 1.5x that of Mountain View.
About the A20: it was built around 1970, inspired by American designs. Something like this would probably not be built today. Meanwhile, the specific ramp where the accident occurred was constructed around 2006.
I do agree that safety measures should be adjusted according to their location, there is indeed no one-size-fits-all solution here.
Also, that gore is freshly painted. Here's the same gore with faded paint. Mind you, it wasn't as bad, but even there you get faded painting: https://firstname.lastname@example.org,5.1608083,3a,75y,109....
That fence wouldn't be fun to drive into at full speed, and the lane markings are worn, but it's still far less awful than a concrete wall.
In basically all cases where there's a lot of pavement that isn't a lane, there are diagonal lines of some sort that make it very clear that there isn't a lane there. A good chunk of the time there's a rumble strip of some sort.
In rural areas there's less signage, reflectors and barriers but the infield is usually grass, dirt or swampy depending on local climate.
Edit: and I'm wrong because why?
Such "pretends to work but actually doesn't", IMNSHO, would be far worse than "doesn't work there at all"
A few reflectors, a crumple barrier or some barrels and you've got the start of a highway divider! Certainly not as lengthy or as well marked as the Dutch example. This one I used to drive by in KC almost daily looks similar to the Tesla accident one (granted, this example does have some friendly arrows in the gore): https://email@example.com,-94.6774548,3a,75y,1...
I have also seen so many people crash into this one that they put up yellow hazard lights: https://firstname.lastname@example.org,-94.594365,3a,75y,17...
- The area one is not supposed to drive in doesn't appear to be marked. Where I live, it would be painted with yellow diagonal stripes.
- On a high-speed road, there would be grooves on the road to generate noise if you are driving too close to the edge of the lane.
- Paint on the road is rarely the only signal to the driver (because of snow or other conditions that may obscure road markings). There would be ample overhead signs.
- Unusual obstacles would always be clearly visible: painted with reflective paint or using actual warning lights.
- We rarely use concrete lane dividers here. Usually these areas consist of open space and a shallow ditch, so you don't necessarily crash hard if you end up driving in there. There's usually grass, bushes, etc. There are occasional lane dividers, of course, when there's no space to put in an open area. However, the dividers are made of metal and they are not hard obstacles and fold or turn your vehicle away if you hit them (and people rarely do because of the above).
I'm sure there are some dangerous roads here, too, but a fatal concrete obstacle like this, with highway speeds, with almost no warning signs whatsoever, is almost unheard of.
However, it's not consistent. Some end quite abruptly as well, and have a plain sign in the middle without any slowly-rising divider.
The problem with the attenuators is that they don't get replaced fast enough, which makes these accidents a lot deadlier.
You can see the difference between a used and unused attenuator in this article.
I could easily imagine a stressed human driver unfamiliar with the area following that unstriped gore area as a 'lane', too.
I don't think a road like this would be possible in most of the EU. Autopilot needs to be fixed, but this road is also super dangerous and probably would not be allowed in the EU.
How often do non-autopilot cars fatally crash here? This does look like a bit of death trap to me!
I've driven in almost all countries in the union and I'm sure that and worse is readily available in multiple EU countries. While it's true that the EU subsidizes a lot of road construction local conditions (materials quality, theft, sloppy contractors) have a huge impact on road and markings quality.
However, I've driven a significant number of hours in Spain, Switzerland, Germany, the UK, Ireland, Sweden, Denmark, Norway, Finland, and Estonia, and I wouldn't say that this kind of concrete divider is "readily available" as a normal part of a high-speed road, lacking the high-visibility markings I outlined in the post above, year after year as a normal fixture. In fact, I don't remember ever once seeing a concrete divider like this in the EU, even temporarily, but please prove me wrong (and maybe we can tell them to fix it!).
At highway speeds, lane dividers are only used when there is a lane traveling in the opposing direction right next to you. There is no point in concrete dividers if all the traffic is traveling in the same direction. At highway speeds, opposing lanes should be divided by a lot of open space and metal fences that don't kill you when you hit them.
A non-autopilot car crashed at this exact spot a week earlier. This crumpled a barrier that is intended to cushion cars going off the road here, and contributed to the death of the Tesla driver.
The fact that a human crashed at this exact spot confirms that it really is an unsafe death trap.
You can see that yellow line at the accident site.
Now, why didn't they start a new yellow line where the lane split? That would give drivers (and software) an important cue: if you are driving down a "lane" with a yellow line on the right, something is seriously wrong!
A little off topic, but I'm curious: I usually use "by design" to mean "an intentional result." How do other people use the term? In this case, the behavior is a result of the design (as opposed to the implementation), but is surely not intentional; I would call it a design flaw.
In fact, it is intentional! Meaning that the system has a performance specification that permits failure-to-recognize-lanes-correctly in some cases. This element of the design relies on the human operator to resolve. Once the human recognizes the problem, either they disengage the autopilot or engage the brakes/overpower the steering.
Now, you could argue that the design should be improved, and I would agree. But we should perhaps step back and consider some meta-problems here. As others have stated, the functionality cannot deviate significantly from previous expectations without, at a bare minimum, an operator alert, training pamphlet, or disclaimer form. Tesla's design verification should probably also be augmented to test real-world scenarios like this more comprehensively.
But the real core issue is that the design sits in an uncanny valley: it performs so close to parity with human drivers that human drivers let their guard down. IMO it's the same problem as the fatality in the Phoenix area with an Uber safety driver (a human driver). When GOOG's self-driving program first monitored their safety drivers, they found that the drivers didn't pay attention, or slept in the car. IIRC they added a second safety driver to try to mitigate that problem.
Working as intended: the system works in a basic, average-observer human sense.
Working as implemented: there were no errors in implementation, and the system is performing within the tolerances of that implementation (but the implementation itself may be flawed, or may follow a design that violates average human expectations).
You send a radar signal out, then it bounces off of stuff and comes back at a frequency that depends on your relative motion to the thing it is bouncing back from. Given all of the stationary stuff around, there is a tremendous amount of signal coming back from "stationary stuff all around us", so the very first processing step is to put a filter that causes any signal corresponding to "stationary" to cancel itself out.
This lets you focus on things that are moving relative to the world around them. But makes seeing things that are standing still very hard.
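For a rough sense of scale (standard radar physics, not something from the comment above): the two-way Doppler shift is

$$ f_d = \frac{2 v_r}{\lambda} $$

so at 77 GHz (wavelength roughly 3.9 mm) a radial closing speed of 30 m/s shifts the return by about 15 kHz. Note that to a radar moving at 30 m/s, a world-stationary object also closes at 30 m/s, so "stationary" here really means stationary once the ego vehicle's own speed is accounted for.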
Many animal brains play a similar trick, with the result that a dog can see you wave your hand half a mile off, but if the treat is lying on the ground five feet away, it might as well be invisible.
As far back as the 1970s, helicopter-mounted radars like Saiga were able to detect power lines and vertical masts. That one could do so while moving at up to 270 knots, and it weighed 53 kg.
The same radar set that worked these wonders while flying would have been absolutely useless for a ground based vehicle.
The easier it is to distinguish the signal you want from the signal you don't, the easier it is to make decisions. For radar, that is far, far easier with moving objects than stationary ones.
In this case, the radar system itself is moving... including moving relative to stationary objects. So I don't see how what you say makes sense.
Not saying you are wrong. Just saying I don't follow your explanation.
The real issue is false positives from things like soda cans on the road, signs next to the road, etc. Can't have the car randomly braking all the time for such false positives. As a result, they just filter out stationary (relative to the ground) objects, and focus on moving objects (which are assumed to be vehicles) together with lane markings. This is why that one Tesla ran right into a parked fire truck.
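A minimal sketch of that kind of object-level filter, with made-up field names and thresholds (an illustration of the idea described above, not Tesla's or any vendor's actual code): the radar reports each detection's speed relative to the car; adding the ego speed back in gives a ground-relative speed, and anything near zero is dropped as stationary clutter.

```python
from dataclasses import dataclass

@dataclass
class RadarObject:
    range_m: float           # distance to the detection
    radial_speed_mps: float  # speed relative to the radar; negative = closing

def moving_targets(detections, ego_speed_mps, threshold_mps=2.0):
    """Keep only detections moving relative to the ground.

    A world-stationary object straight ahead closes on the car at exactly
    the ego speed, so its radar-relative speed is -ego_speed; adding the
    ego speed back gives ~0 ground speed and it gets filtered out -- which
    is exactly why a parked fire truck "disappears".  (Bearing is ignored
    here; a real system projects the ego speed along the line of sight.)
    """
    kept = []
    for det in detections:
        ground_speed = det.radial_speed_mps + ego_speed_mps
        if abs(ground_speed) > threshold_mps:
            kept.append(det)
    return kept

ego = 30.0  # m/s, about 67 mph
detections = [
    RadarObject(range_m=120.0, radial_speed_mps=-30.0),  # stopped truck: dropped
    RadarObject(range_m=80.0,  radial_speed_mps=-5.0),   # slower lead car: kept
]
print(moving_targets(detections, ego))
```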
Interestingly, I've discovered one useful trick with my purely camera-based car (a Subaru equipped with EyeSight): if there is a stationary or nearly stationary vehicle up ahead that it wasn't previously following, it won't detect it and will treat it as a false positive (as it should, so it doesn't brake for things like adjacent parked cars). But if I tap the brake to disengage adaptive cruise control and then turn it back on, it will lock on to the stopped car up ahead.
Aside: the follow-another-car heuristic is dumb. It ultimately offloads the AI/decision-making work onto another agent, an unknown agent. You could probably have a train of Teslas following each other behind a dummy car that crashes into a wall, and they'd all do it. A car 'leading' in front that drifts into a metal pole becomes a stationary object and so becomes undetectable.
It's the Machine Learning equivalent of Zen navigation: http://dirkgently.wikia.com/wiki/Zen_navigation
I expect any system that lets me drive with my hands off the wheel for periods of time to deal with stationary obstacles.
What is being described here, if it's correct, is a literal "WTF" compared to how Autopilot was pitched.
I wouldn't be surprised if the US ultimately files charges against Tesla for wrongful death by faulty design and advertising.
IMO, maybe the roads need to be certified for self-driving. Dangerous areas would be properly painted to prevent recognition errors. Every self-driving system would need to query some database to see if a road is certified. If not, the self-driving system safely disengages.
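As a toy sketch of what that gating could look like (everything here is hypothetical: the registry, the segment IDs, and the function names are invented for illustration; no such database exists today):

```python
# Hypothetical road-certification check before allowing self-driving.
# The registry contents and segment naming scheme are invented.

CERTIFIED_SEGMENTS = {"US-101:mm-412", "US-101:mm-413"}  # stand-in for a shared registry

def may_engage_autonomy(segment_id: str) -> bool:
    """Allow self-driving mode only on road segments certified for it."""
    return segment_id in CERTIFIED_SEGMENTS

def next_mode(segment_id: str, currently_engaged: bool) -> str:
    """Decide what the system should do as the car enters a new segment."""
    if may_engage_autonomy(segment_id):
        return "engaged" if currently_engaged else "available"
    # Not certified: hand control back to the driver, with warning and lead time.
    return "safe-disengage" if currently_engaged else "unavailable"

print(next_mode("US-101:mm-414", currently_engaged=True))  # -> safe-disengage
```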
Cruise, which uses the 32-laser Velodynes, avoids highway/freeway speeds because their lidar doesn't have the range to reliably detect obstacles far enough ahead at those speeds.
My understanding is that Tesla's autopilot is pretty different from other, more mature self-driving car projects. I wouldn't read too much into it.
Does Uber use LIDAR?
And I agree with certified roads. But even then, if a truck drops cargo in the middle of the lane, you'd want to be in a car that can detect something sitting still in front of you.
I'm sure Tesla's engineers are qualified and it is certainly easy to second-guess them, but it is beyond me why they would even consider a BW camera in a context where warning colors (red signs, red cones, black and yellow stripes, etc.) are an essential element of the domain.
>On Wednesday, Mobileye revealed that it ended its relationship with Tesla because "it was pushing the envelope in terms of safety." Mobileye's CTO and co-founder Amnon Shashua told Reuters that the electric vehicle maker was using his company's machine vision sensor system in applications for which it had not been designed.
"No matter how you spin it, (Autopilot) is not designed for that. It is a driver assistance system and not a driverless system," Shashua said.
Props to Shashua for deciding to put safety above profit. We need people like this in leadership.
Really? Really?? I mean, if I were designing a self-driving system, pretty much the first capability I would build is detection of stuff in the way of the vehicle. How are you to avoid things like fallen trees, or stopped vehicles, or a person, or a closed gate, or any number of other possible obstacles? And how on earth would a deficiency like that get past the regulators?
But on the other hand, gores like that can also trick human drivers. Especially if tired, with poor visibility. In heavy rain or whiteout, I typically end up following taillights. But slowly.
Imagine an image smeared out like one from a 15-year-old Nokia picture phone, but with very high colour (velocity) precision. That is what a radar sees.
You can argue something between those two options, but ultimately it is just a semantic argument (e.g. "it just chose to ignore it" which is effectively the same as a non-detection, since the response is identical).
True, but not one into which the car should have merged. Although crossing a solid white line isn't illegal in California, a solid line does signal that crossing is discouraged for one reason or another.
I love seeing the advances in tech, but it’s disheartening to see issues that could have been avoided by an intro driver’s ed course.
While the anti-Tesla news is bad, Tesla needs to be clearer that this really is a failure of Autopilot and of their model - they can't expect a human to stay vigilant after getting used to a system that works perfectly most of the time; clearly his hands were close to the wheel in the very recent past (Tesla is famous for not reliably detecting hands on the wheel).
I'm hoping Google can deliver something a bit safer.
that's quite fucked up.
I simply love the inflated "dummy car".
"Unless the road marking is 105% perfect, it's never the fault of the autopilot, but look, autonomous driving!" is just pure marketing, without any substance to back it.
Wait, how can that be? I mean clearly it didn't detect the barrier in this case, but that wasn't by design was it?
Wonder if the barriers can be modified to look like another car to these systems, but still remain highly visible and unambiguous to human drivers?
Part of the overall problem is that these roads were not designed for autonomous driving. This is much like how the old paver roads were fine for horses but really bumpy and hard on car wheels and suspensions.
Over time, we adapted roads to the new tech. That needs to start happening now, too.
The fault here isn't with road design. The fault is with Tesla shipping Autopilot without any handling of stationary objects, AND their delusional (and what should be considered criminally reckless) decision not to use LIDAR.
A car without ABS or power braking is not legal to be sold today. We need to apply those standards here: anything more than cruise control (where the driver understands they need to pay attention) needs to have certain safety requirements.
At the end of the day we want to know if the autonomous systems are safe. Policy decisions will end up depending on that determination. That requires clear definitions for what constitutes failures and accurate gathering of data.
I wouldn't say it's by design or expected behaviour, because if a Tesla approached a stopped vehicle, the expectation would be that the car would stop.
One of the options for Autopilot is that you can tell it "Never go more than X MPH over the speed limit" with a common setting being a few miles an hour over.
The car never adjusts the cruise-control set speed by itself, with one exception: if the current road has no center divider, then it clamps the current speed to no more than the posted speed limit + 5 mph. The term "clamps" is in the programming sense: if cruise-control is already set below that speed, nothing changes.
The car never increases the cruise-control set speed. Only the driver can do that.
In other words, the driver had already set the cruise control to 75 mph and likely had the setting at speed limit + 10, which is aggressive. The reason the car accelerated is because it determined there were no cars ahead of it traveling at a speed lower than the set speed. Unfortunately, that conclusion was absolutely correct.
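A minimal sketch of that clamping behavior as described above (this reconstructs the commenter's description with made-up names, not Tesla's actual firmware logic):

```python
def effective_set_speed(driver_set_mph: float, posted_limit_mph: float,
                        has_center_divider: bool) -> float:
    """Cruise set speed actually used, per the behavior described above.

    The car never raises the driver's setting on its own.  On roads without
    a center divider it additionally clamps the setting to the posted limit
    + 5 mph; if the setting is already below that, nothing changes.
    """
    if has_center_divider:
        return driver_set_mph
    return min(driver_set_mph, posted_limit_mph + 5)

# Driver set 75 mph on a divided highway posted at 65: no clamp applies, so
# once no slower lead car is detected the car accelerates back toward 75.
print(effective_set_speed(75, 65, has_center_divider=True))  # 75
```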
There’s no way in hell Tesla would have gotten away with selling this for so long if their users were allowed to read the unobfuscated source.
The fact that people are given no control over these things that can kill them while the manufacturers can just mess around with it without any real oversight is absolutely insane. I really don’t think the average IRC lurker who could figure out how to compile the firmware could be any more dangerous than the “engineers” who wrote it in the first place.
All of that being said, I still think Tesla has mostly the right approach to their Autopilot system. There is an unacceptably high number of crashes caused by human error, and getting to autonomous driving as fast as possible will save lives. It is virtually impossible to build a self-driving system in a lab; with all currently known methods you must have a large population of vehicles training the system. The basic calculus of their approach is that the safety risk of not getting to autonomous driving sooner is greater than the risk of failures in the system along the way. It is admittedly a very fine line to walk, but I do see the logic in it.
I do think that Tesla could do more to educate the users who are using early versions of the software.