With three accidents on "autopilot" in a short period, the defects in Tesla's design are becoming clear.
1. Tesla's "hands on wheel" enforcement is much weaker than its competitors'. BMW, Volvo, and Mercedes have similar systems, but after a few seconds of hands-off-wheel driving, the vehicle will start to slow. Tesla allows minutes of hands-off time; one customer reports driving 50 miles without touching the wheel. Tesla is operating in what I've called the "deadly valley" - enough automation that the driver can zone out, but not enough to stay out of trouble.
The fundamental assumption that the driver can take over in an emergency may be bogus. The head of Google's self-driving car project recently announced that they tested with 140 drivers and rejected semi-automatic driving as unsafe. It takes seconds, not milliseconds, for a driver to recover situational awareness and take over.
2. Tesla's sensor suite is inadequate. They have one radar, at bumper height, one camera at windshield-top height, and some sonar sensors useful only during parking. Google's latest self-driving car has five 3D LIDAR scanners, plus radars and cameras. Volvo has multiple radars, one at windshield height, plus vision. A high-mounted radar would have prevented the collision with the semitrailer, and also would have prevented the parking accident where a Tesla in auto park hit beams projecting beyond the back of a truck.
Tesla is getting depth from motion vision, which is cheap but flaky. It cannot range a uniform surface.
3. Tesla's autopilot behavior after a crash is terrible. It doesn't stop. In the semitrailer crash, the vehicle continued under power for several hundred feet after the top of the car was sheared off as it went under the semitrailer. Only when it hit a telephone pole did the car stop.
The Pennsylvania Turnpike crash is disturbing, because it's the case the "autopilot" is supposed to handle - a divided, limited-access highway under good conditions. The vehicle hit a guard rail on the side of the road. That may have been a system failure. Too soon to tell.
The NTSB, the air crash investigation people, have a team investigating Tesla. They're not an enforcement agency; they do intensive technical analysis. Tesla's design decisions are about to go under the microscope used on air crashes.
Tesla's spin control is backfiring. They tried to blame the driver. They're being sued by the family of the dead driver, and being investigated by the NHTSA (the recall people), and the NTSB. In comparison, when a Google self-driving car bumped a bus at 2mph, Google admitted fault, took the blame, and Urmson gave a talk at SXSW showing the data from the sensors and discussing how the self-driving car misjudged the likely behavior of the bus.
> In comparison, when a Google self-driving car bumped a bus at 2mph, Google admitted fault, took the blame, and Urmson gave a talk at SXSW showing the data from the sensors and discussing how the self-driving car misjudged the likely behavior of the bus.
This is what we should be trumpeting. Responsibility, honesty, and transparency. Google's self driving car project reports [1] should be the standard. They're published monthly and they detail every crash, per requirement of CA state law (thanks schiffern).
Note: These reports are currently only required in California, and only for fully autonomous vehicles. So, Tesla and other driver-assist-capable car companies do not need to publicize accident reports in the same way. Tesla may be unlikely to start filing such reports until everyone is required to do so. We should demand this of all cars that are driver-assist during this "beta" phase.
Tesla owners and investors ought to be interested in a little more transparency at this point. It's a bit concerning that Tesla/Musk did not file an 8-K on the Florida accident. We learned about it from NHTSA almost two months after it happened.
>Google's self driving car project reports [1] should be the standard. They're published monthly and they detail every crash
Copying a reply from earlier...
Google isn't doing this out of the goodness of their heart. They have always been required by law to send autonomous vehicle accident reports to the DMV, which are made available to the public on the DMV's website.
So the transparency itself is mandatory. Google merely chose to control the message and get free PR by publishing an abbreviated summary on their website too. Judging by this comment, it's working!
The requirement does not seem to be there for driver-assist vehicles. Also, it's a state-level requirement. Looks like it's just in CA for now. The accidents detailed in the Google reports are all from California.
And, Tesla may be unwilling to start reporting driver-assist accident rates to the public unless the other driver-assist car companies are forced to do it too.
> The accidents detailed in the Google reports are all from California.
You could have opened the first report[1] on that page and noticed that this is wrong.
> June 6, 2016: A Google prototype autonomous vehicle (Google AV) was traveling southbound on Berkman Dr. in Austin, TX in autonomous mode and was involved in a minor collision north of E 51st St. The other vehicle was approaching the Google AV from behind in an adjacent right turn-only lane. The other vehicle then crossed into the lane occupied by the Google AV and made slight contact with the side of our vehicle. The Google AV sustained a small scrape to the front right fender and the other vehicle had a scrape on its left rear quarter panel. There were no injuries reported at the scene.
I don't envy his position, or any other CEO or leader of a large group. It's a huge responsibility. You can face jail time for not sharing information. Usually only the most egregious violators see a cell, but it's still a scary prospect if you somehow got caught up in some other executive's scheme. At the end of the day, the CEO signs everything.
Edit: I see the downvote. Am I wrong? Is being a CEO all rainbows and sunshine? Maybe I got some detail wrong. Feel free to correct me.
The guy comes forward and says "I'm going to make an autopilot for the masses". He/we know it's tough to do. The guy releases his autopilot. It's not quite ready for prime time. Double problem: the guy goes into a tough business alone, and the guy makes stuff that's not 100% ready.
I like it when people try to do hard stuff; that's fine and maybe heroic. But they have to be prepared: if testing the whole thing must take 10 years to be safe and a gazillion dollars, then so be it. If it's too much to shoulder, then they should not enter the business.
> if testing the whole thing must take 10 years to be safe and a gazillion dollars, then so be it. If it's too much to shoulder, then they should not enter the business
That sounds like a succinct explanation for why we've been stuck in LEO since December 19, 1972.
We've got 7 billion+ humans on the surface of our planet. We lose about 1.8/s to various tragedies.
Most of which are of less import than helping automate something that's responsible for around 32,000 deaths in the US every year. Move fast, break people for a just cause, escape risk-averse sociotechnological local minima, and build a future where fewer people die.
I don't think the issue is that Tesla's cars are dangerous. The issue people are raising is that they pretend, at least through implications, that their cars can safely drive for you.
Tesla is also not doing any kind of super special research into self driving cars. The system their cars use is (afaik) an OEM component (provided by MobileEye) that powers the driver assist features of many other brands, too.
Instead of actually improving the technology they have chosen to lower the safety parameters in order to make it seem like the system is more capable than it actually is.
Safe is like 'secure'. It's a relative concept, and the level of safety we are willing to accept from products is something that should be communally decided and required by law.
We shouldn't go off expecting that someone else's ideas of safety will jibe 100% with our own, and then blame them when someone else buys a product and is injured.
Should Tesla drive-assist deactivate and slow the car after 10 seconds without touching the wheel? Probably, but I won't blame them for not doing so if there is no regulatory framework to help them make a decision. It's certainly not obvious that 10 seconds is the limit and not 11, or 15, or 2 seconds.
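To make concrete how arbitrary that choice is, here is a toy sketch (in Python) of a hands-off escalation policy. The thresholds and action names are hypothetical illustrations, not Tesla's or any other manufacturer's actual values; the whole debate is about where these numbers should sit.

    # Hypothetical hands-off escalation policy. All thresholds and action
    # names are illustrative assumptions, not real parameters.
    from dataclasses import dataclass

    @dataclass
    class HandsOffPolicy:
        warn_after_s: float = 10.0        # first visual/audible warning
        decelerate_after_s: float = 15.0  # begin gentle deceleration, hazards on
        stop_after_s: float = 30.0        # controlled stop, lock out autosteer

        def action(self, hands_off_seconds: float) -> str:
            if hands_off_seconds >= self.stop_after_s:
                return "controlled_stop"
            if hands_off_seconds >= self.decelerate_after_s:
                return "decelerate_and_warn"
            if hands_off_seconds >= self.warn_after_s:
                return "warn"
            return "normal"

    # A policy that warns at 10 seconds behaves very differently from one
    # that tolerates minutes of hands-off driving, yet both count as "enforcement".
    print(HandsOffPolicy().action(12.0))   # -> "warn"
    print(HandsOffPolicy().action(40.0))   # -> "controlled_stop"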
Once again Tesla have made the discussion about the driver. They have put an auto-control system out in the wild that steers the car into hard obstacles at high speed. Beeps to warn the driver at some stage in the preceding interval do not make this control system's failure any more acceptable. It still misjudged the situation as acceptable to continue, and it still drove into obstacles. Those are significant failures, and a blanket statement of responsibility going to the driver isn't going to cut it.
I agree; I think it's despicable that Tesla is acting in such a defensive manner.
--
"No force was detected on the steering wheel for over two minutes after autosteer was engaged," said Tesla, which added that it can detect even a very small amount of force, such as one hand resting on the wheel.
"As road conditions became increasingly uncertain, the vehicle again alerted the driver to put his hands on the wheel," said Tesla. "He did not do so and shortly thereafter the vehicle collided with a post on the edge of the roadway."
--
Bruh, slow the car down then. If you're storing information that points to the fact that the driver is not paying attention, and you're not doing anything about it, that's on you 100%.
We are in the midst of a transition period between autonomous and semi-autonomous cars, and blaming the driver for the fact that the car, in auto-steer mode, steered into obstacles, is counter-productive in the long term. You need to make auto-steer not work under these conditions. The driver will quickly figure out the limits of your system as their car decelerates with an annoying beeping sound.
> No force was detected on the steering wheel for over two minutes after autosteer was engaged," said Tesla, which added that it can detect even a very small amount of force, such as one hand resting on the wheel.
That would seem to me to be reason to steer the car to the shoulder and disengage the auto-pilot.
Tesla is trying to take too big a step here and blaming the driver(s) of these cars is really really bad PR.
Isn't that one of the reasons there is still a driver? Cruise control will drive you right off the road until you crash, with no awareness of the shoulder or other traffic or anything. Yet it's in cars all over the place.
Right. When the AP detected road conditions that it couldn't handle, it should have reduced vehicle speed as much as necessary for handling them. Yes, it was dumb to trust AP at 50-60 mph on a two-lane canyon road. But it was also dumb to ship AP that can't fail gracefully.
I do admit, however, that slowing down properly can be hard. It can even be that hitting an inside post is the best option. Still, I doubt that consumer AP would have put the vehicle into such an extreme situation.
It's certainly a major fault of Tesla's - if you know it's not safe, you've got no excuse to keep going.
But the driver shouldn't be allowed to escape blame either. When your car is repeatedly telling you that you need to take control, then you should be taking control.
I've worked on highly safety related (classification ASIL D) automotive systems. There are rules about how to think about the driver.
The driver is not considered a reliable component of such a system, and must not be used to guide decisions.
Yes, the driver clearly was a fool, but the system design should not have taken his decision into account, and should have come to its own conclusion about a safe reaction (e.g., stopping the car).
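To illustrate that principle, here is a minimal sketch of a controller that treats the driver as an unreliable input: driver presence can only help, it is never the plan. The function, names, and thresholds are hypothetical, not taken from any real ASIL D system.

    # Minimal sketch: the system always picks its own safe reaction; a driver
    # takeover is a bonus, never something the design depends on.
    # All names and thresholds are hypothetical.
    def choose_reaction(confidence: float, driver_hands_on: bool) -> str:
        """Return the system's own fallback action for the current situation."""
        if confidence > 0.95:
            return "continue"                 # system is sure it can handle the road
        if driver_hands_on and confidence > 0.80:
            return "continue_with_warning"    # driver presence helps only marginally
        if confidence > 0.50:
            return "reduce_speed_and_warn"    # degrade gracefully instead of hoping
        return "minimum_risk_stop"            # stop the car; don't rely on a takeover

    # Note there is no branch where low confidence plus an ignored warning
    # leaves the car driving on at full speed.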
I don't disagree - like I say, none of my comment was intended to let Tesla off the hook in any way.
My point was simply that the drivers who use semi-automatic cars and choose to ignore warnings to retake control need to be responsible for their actions as well.
Exactly! In this case the driver may have been at fault.
What happens next time, when the driver has a stroke?
"Well, he didn't put his hand on the wheel when the car told him to, it's not our fault!"
Like you say, slow it down. Sure, the driver of a non-autonomous vehicle who has a stroke might be in a whole world of trouble, but isn't (semi-) autonomy meant to be a _better_ system?
Suppose I'm driving a "normal" car. I have my hands on the wheel, but I am reading a book. I hit a tree.
Is the auto-maker responsible? They advertised the car as having cruise control.
Call me old, but I remember similar hysteria when cruise control systems started appearing. People honestly thought it would be like autopilot or better - and ploughed into highway dividers, forests, other traffic, you name it, at 55mph.
Cruise control is supposed to control the speed of the car. If you set it for 55 and the car accelerated to 90 it would be considered dangerous and defective.
The steering control is supposed to control the steering of the car. If you set it for "road" and it decides to steer for "hard fixed objects" how should we regard it?
Why would the auto-maker be responsible for your distracted driving, and what does it have to do with cruise control?
Some of the uproar about these incidents is related to the fact that Tesla marketing is implying far more automation and safety here than the technology can actually provide. And apparently it can cost lives.
Exactly, they shouldn't be responsible for distracted driving. Autopilot can certainly get better, but you're being willfully ignorant if you don't believe that these Tesla drivers with their hands off the wheel don't fall under distracted driving. Could autopilot improve? Yes. Is the driver at fault here for not staying alert and responsible for the car they chose to drive? Absolutely.
Please show me a source for the claim that they're "implying far more automation" as everything I've ever seen from Tesla on this says hey dummy, don't stop paying attention or take your hands off the wheel.
It's normal to exceed the speed limit, e.g. when passing. No absolute limit can be arbitrarily set without tying the pilot's hands/limiting their options, which could make the roads more dangerous.
It may be common to exceed the speed limit when passing, but it is illegal. The setting of these speed limits is meant to be objective, and may sometimes be subjective, but it is wrong to call them arbitrary.
If the car autopilot software permits faster than legal speed driving, the manufacturer is taking on liability in the case of speed related accidents and damage.
It costs lives because folks are toying with this new thing. If they'd just use it as intended, they'd achieve the higher levels of safety demonstrated in testing.
Yeah, I just blamed the victims for misusing autopilot. Like anything else, its a tool for a certain purpose. Its not foolproof. Testing its limits can get you hurt, just like a power saw.
It is about the driver. If you turn on a test feature (it warns you it's a test feature) and ignore all the procedures you must follow to use this not-yet-production feature, it's your fault.
It bugs me how quick Tesla is to blame their drivers. That autopark crash, for example, was clearly the result of a software design flaw. Tesla pushed an update where to activate autopark all you had to do was double tap the park button and exit the vehicle. That's it. No other positive confirmation. No holding the key fob required. The vehicle would start moving without a driver inside.
You'd think someone would have realized that a double tap could accidentally happen while trying to single tap.
The reporting on this was terrible, but there's a YouTube video illustrating the design flaw.[1]
Thing is, someone at Tesla must know this is the real culprit. But there was no public correction of their local manager who blamed the driver.
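To make the design point concrete, here is a toy sketch of what a positive-confirmation gate might look like compared to a bare double-tap. Everything here (names, timing window, the fob-hold requirement) is hypothetical and not Tesla's actual logic.

    # Hypothetical activation gate: an accidental extra tap alone should not
    # be enough to make a driverless car start moving.
    DOUBLE_TAP_WINDOW_S = 0.5  # two taps of P this close together count as a double-tap

    def should_start_autopark(tap_times, fob_held):
        """Start only on an unambiguous double-tap AND an explicit fob hold."""
        double_tap = (
            len(tap_times) >= 2
            and tap_times[-1] - tap_times[-2] <= DOUBLE_TAP_WINDOW_S
        )
        return double_tap and fob_held  # the single-gesture shortcut drops this check

    print(should_start_autopark([10.0, 10.3], fob_held=False))  # False: mis-tap alone
    print(should_start_autopark([10.0, 10.3], fob_held=True))   # True: deliberate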
It's a bit harder than a double tap... You press and hold. Count to three. Now wait for the car to start blinking and make a shifting noise. Press either the front of the key or the back of the key to move forward. Definitely not easy to engage, so I can understand Tesla's view on this one... I do this every morning/night to pull my car out of the tiny garage since I can't get my door open inside the garage.
There are multiple methods of activating their autopark feature. At the time of the crash, the setting that allowed you to activate it using the keyfob as you describe also meant that merely double-tapping the P button in the car would activate autopark and cause the car to start moving forward a few seconds after getting out. This was added in a patch, so unless you paid attention to their release notes you may have missed it.
I'm not sure why it is, but a lot of 100 year old houses have garages that are just big enough to fit a car in and not be able to open the doors. Car dimensions really haven't grown all that much, the Ford model T was 66 inches wide and most modern sedans are ~70-75 inches.
Maybe they were never intended for cars?
Maybe it was a cost savings thing? Homes that are only 50-75 years old seem to have larger carports instead of garages.
* I answered my own questions. A Model T was 66 inches wide but had 8 inch running boards on either side, so the crew cabin is closer to 50 inches. The doors, if it had them, would be shorter as well. A Model S is 77 inches wide, so that's probably 27 inches difference in clearance.
False. Did you watch the video? It proves that you only had to double tap and exit.
Tesla pushed an update that enabled this as a "convenient" shortcut to the more elaborate, safer method you describe. In fact the video author praises it. Seems like a nice handy feature.. At first. Safety-critical design needs more care than this though.
It is true. You have to press in two places on the key. First you have to press and hold in the middle of the key. Next you have to press either the front of the key or the back of the key. Show me the video? Am I missing something - can I simply press the front twice?
>Tesla allows minutes of hands-off time; one customer reports driving 50 miles without touching the wheel.
>They tried to blame the driver.
In this incident, the driver was using autopilot in a fashion it should not be used: a twisting road at high speed. The driver IS at fault.
>They're being sued by the family of the dead driver
This is no proof of anything. You can find a lawyer to sue anybody over anything.
>and being investigated by the NHTSA (the recall people), and the NTSB
Well, of course they are. They're supposed to. Again, no proof of anything Tesla did wrong.
I'm not saying there isn't a problem with this. I'm saying your reasoning is wrong.
I'm also curious as to how many autopilot sessions take place every day. If there have been three crashes like this, where the driver is at fault, out of a million sessions, that's one thing not considered so far.
>In this incident, the driver was using autopilot in a fashion it should not be used: a twisting road at high speed. The driver IS at fault.
I disagree on this point. In any system that's supposed to be sophisticated enough to drive the car but also carries a giant caveat like "except in these common driving situations..." failure is not instantly down to "user error." Such a system should gracefully refuse to take over if it's not running on an interstate or another "simple enough" situation; otherwise, as many have noted, the threshold for "is the driver paying sufficient attention" should be set much, much lower.
That the system is lulling the users into bad decisions is not automatically the fault of said user. Some blame, maybe most of the blame, has to fall on the autopilot implementation. When lives are on the line, design should err on the side of overly-restrictive, not "we trust the user will know when not to use this feature and if they are wrong it is their fault."
It's not lulling anybody into anything! The guy was given multiple warnings by the car that what he was doing was unsafe. The last of which was extremely pertinent to the crash. To quote the article, "As road conditions became increasingly uncertain, the vehicle again alerted the driver to put his hands on the wheel". This is after the initial alerts telling him that he needs to remain vigilant while using this feature. This isn't some burden of knowledge on when to use and when not to use the system, it's plenty capable of knowing when it's not operating at peak performance and lets you know. At that point, I'm willing to shift blame to the driver.
I just want to see "the vehicle again alerted the driver to put his hands on the wheel" followed by "and Autopilot then pulled to the side of the road, turned on the hazard lights, and required the driver to take over."
That's stupid. Autopilot doesn't have the ability to determine if it is safe to pull over. It has a fallback mode; eventually it will come to a complete stop in the middle of the road and turn the hazards on. But since this is also dangerous it makes sense to give the driver time to stop it from happening.
It's stupid to come to a gradual slowdown with hazard lights, but it's not stupid to keep blazing away at speed, with an obviously inattentive (or perhaps completely disabled) driver? I'm confused - even you acknowledge it has that fallback, but it's 'stupid' to enact it after a couple of failed warnings? When should that threshold be crossed, then?
So far we have been lucky that the accidents have only impacted the driver. Is it going to take the first accident where an autopilot car kills another, innocent driver before people see the problem?
When autopilot kills more innocent drivers than other drivers you can point me to a problem. Time will tell if it is better or worse and the track record up until May/June was going pretty well.
I'd rather 10 innocent people die to freak autopilot incidents than 1,000 innocent people die because people in general are terrible at operating a 2,000-3,500lb death machine. Especially because everyone thinks of themselves as a good driver. It's fairly obvious not everyone is a good driver - and that there are more bad drivers than good.
Maybe I only see things that way because there have been four deaths in my family due to unaware drivers which could have easily been avoided by self-driving vehicles. All of them happened during the daytime, at speeds slower than 40mph, and with direct line of sight. Something sensors would have detected and braked to avoid.
Assuming we stick with the same conditions as in this case, I'd call an innocent life taken due to a driver disregarding all warnings from his system negligence. That also fits with what this driver was ticketed for, which was reckless endangerment. We don't even get heads-up display warnings about not driving into oncoming traffic, but nobody is going to blame a dumb car for the actions of its driver. I would say that inaction in this hypothetical case would be the driver's crime.
The Tesla warnings remind me of Charlie and the Chocolate Factory, where one of the brats was about to do something disastrous and Gene Wilder would, in a very neutral tone, protest "no, stop, you shouldn't do that".
From a risk management standpoint, Tesla's design doesn't strike me as sufficient. Warnings without consequences are quickly ignored.
I guess I just don't know how many flashing warnings somebody has to ignore for it to finally be their fault.. At some point, their unwillingness to change their natural inclinations in the face of overwhelming evidence that they need to should have a negative consequence. This isn't like "just click through to ToS". If you buried it in the ToS or something similar, I absolutely agree with your point of view, but when you make it painstakingly clear in unavoidable manners I think we've crossed into territory of user responsibility. If this feature needs to be removed from Tesla cars (and I accept that potential future) I would definitely put that "fault" on the users of Tesla cars rather than Tesla based on the implementation I've read about.
Then the classification would fail-safe (i.e. no autopilot for you) and that's good[1]. The alternative with the current technology is apparently depending on humans to decide (poorly). This is to minimise deaths caused by engaging autopilot at the wrong time while you gather more data.
[1] If the goal is to stay out of court. I understand the AI drivers are better on average argument.
Tesla's "autopilot" system and implementation has serious problems. I think it's becoming clear that they did something wrong. Whether it rises to the standard of negligence I'm not sure, but it probably will if they don't make some changes.
With that said, I agree that ultimately the driver is at fault. It's his car, and he's the one that is operating it on a public road, he's the one not paying attention.
One of the laws of ship and air transport is that the captain is the ultimate authority and is responsible for the operation of the vehicle. There are many automated systems, but that does not relieve the captain of that responsibility. If an airplane flies into a mountain on autopilot, it's the captain's fault because he wasn't properly monitoring the situation.
Sorry, but you don't get off that easy when your design is intended to make life and death decisions. I work on FDA regulated medical devices and we have to do our best to anticipate every silly thing a user can (will) do. In this case, the software must be able to recognize that it is operating out of spec and put a stop to it.
The Pennsylvania Turnpike crash is disturbing, because it's the case the "autopilot" is supposed to handle - a divided, limited-access highway under good conditions. The vehicle hit a guard rail on the side of the road. That may have been a system failure. Too soon to tell.
This is the one I'm very interested to hear more about. From what I've heard, Tesla's self-driving is very aggressive in swerving to avoid obstacles -- was hitting the guard rail the result of a deliberate swerve, and if yes then was there a logic error, a sensor error, or in fact a legitimate reason to swerve?
Reasoning about what's legitimate can be a really fine line. Oncoming car, raccoon, fawn, human, doe, coyote, elk: some of these obstacles might actually be net-safer to not swerve around [perhaps depending on speed].
I'd be really curious if any of the autopilots have a value system that tries to minimize damage/loss of life. And how does it value the exterior obstacle against the passengers in the car?
I'm pretty sure that hitting a full-grown elk is a poor choice, and often a fatal one. A large, full-grown elk can weigh 600 pounds and stand 5 feet tall at the shoulder, which means it will probably go over the crumple zone and directly strike the windshield.
This happens commonly with moose in New England, which run up to 800 pounds and 6.5 feet tall at the shoulder. A car will knock their legs out from under them but the body mass strikes much higher and does an alarming amount of damage to the passenger compartment. You can Google "moose car accident" and find some rather bloody photos.
The standard local advice is to try really hard to avoid hitting moose, even at the cost of swerving. I would prefer that an autopilot apply a similar rule to large elk. Even if you wind up striking another obstacle, you at least have a better chance of being able to use your crumple zone to reduce the impact.
In this situation, small-to-normal passenger cars are actually much safer than typical American trucks and SUVs. When the legs are taken, the mass of the body strikes the roof of the lower vehicle and bounces over the top. The larger vehicles direct the body through the windshield into the passenger compartment.
I'm not sure about moose, but with more nimble animals, swerving is no guarantee of avoiding collision. A deer can change direction quicker than an automobile, and if it had any sense, it wouldn't be running across the road in the first place. It's definitely safer to slow down and honk your horn than it is to try to guess which direction the damn thing will jump.
[EDIT:] When I was learning to drive, my father often recounted the story of a hapless acquaintance who, in swerving to avoid a snake, had precipitated a loss of control that killed her infant child. He's not a big fan of swerving for animals.
Moose or elk don't 'bounce over the top' of a small vehicle. They crush it, rip the roof off, or go through it. It may depend on antlers, angles, speed, and so on.
I've tried to pick the least graphic images possible:
I don't think there is much to be gained in terms of safety. Let's not forget that while a full-grown moose may be huge, younger ones can be much smaller and still damage smaller cars fairly easily.
There is lots of ambiguity here. Depends on how you hit it, etc.
Roof landings don't always end well -- my Civic was trashed when a deer basically bounced up and collapsed my roof on the passenger side. My head would have been trashed as well had I been a passenger!
In general, if you cannot stop, hitting something head on is the best bet. Modern cars are designed to handle that sort of collision with maximum passenger safety.
The cost of swerving is very, very minimal (especially in a modern car that won't go sideways no matter what you do).
Imagine a two lane road with a car traveling toward an obstruction in its lane. The distance required to get the car out of the obstructed lane via the steering wheel is much less than stopping it via the brakes.
There's a reason performance driving has the concept of brakes being an aid to your steering wheel as opposed to being operated as a standalone system.
Yes, I included it precisely because there's no reason to ever hit an elk. "some of these obstacles ..." -- it's one of the extrema that you might imagine as test cases for this kind of a feature.
Technically the swerving itself is not necessarily illegal, but it makes you liable for anything that could have been avoided by not swerving. If you swerve on a free and open road and don't cause any damage or injury to anything or anyone (including yourself and your car), it's not like you will still be dragged to court just because you didn't want to unnecessarily run over someone's pet.
> obstacles might actually be net-safer to not swerve around
In the UK it's illegal to swerve for these things, in the sense that if you subsequently cause an accident by swerving to avoid a cat, you are at fault.
Wait, it's illegal to swerve for a moose (TIL: moose are called elk in Europe, American elk are totally different) in the UK? Hitting a moose is a fast route to the grave.
Such a value system would be a nightmarish thing so I hope that it doesn't exist yet.
Just imagine your car calculating that it should let you die in order to save a group of school children... except that the school children were in fact a bunch of sheep that the system mis-identified.
The danger of AI isn't in paperclip optimizers (at least for now), it's in humans over-relying on machine learning systems.
I think such a value system is fine in as much as it preserves the safety of the driver.
I'd like my car to make value judgements about what's safer - braking and colliding with an object or swerving to avoid it. I'd rather it brakes and collides with a child than swerve into oncoming traffic, but I'd rather it swerved into oncoming traffic to avoid a truck that would otherwise T-bone me.
As long as the system only acts in the best interest of its passengers, not the general public, it's fine by me.
I'd rather it brakes and collides with a child than swerve into oncoming traffic
Wait, what? You'd rather hit and kill a child than hit another vehicle - a crash which at urban/suburban speeds would most likely be survivable, without serious injury, for all involved?
You cannot codify some of the "dark" choices that are typically made today in split second, fight or flight situations. If we go down this road, it opens to door to all sorts of other end justifies the means behaviors.
There's a threshold number of dead hypothetical children that's going to trigger a massive revolt against tone deaf Silicon Valley industry.
The moral choice is the opposite direction: the car should prefer to kill the occupant than a bystander. The occupant is the reason why the car is in use; if it fails, the penalty for that failure should be on the occupant, not bystanders.
Yeah this thread has really squicked me out. Of course dehumanization is an age-old tactic for enabling asshole behaviors of all sorts. I'd like to hope, however, that outside the current commuter-hell driving context, few people would even admit to themselves, let alone broadcast to the world, their considered preference for killing children. That is, when they no longer drive, perhaps people won't be quite so insane about auto travel. The twisted ethos displayed in this thread will be considered an example of Arendt's "banality of evil".
I don't even drive at all, and yet this thread irks me from the other direction. Dehumanization is an age-old tactic for enabling asshole behaviors, and bringing children into a discussion is an age old tactic for justifying irrational decisions ("Think of the children").
You claimed that any car may be driven slowly enough to avoid killing people on the road, with only mere inconvenience to others. That's patently false: firetruck. And then there are several other posters claiming that it's the driver's fault if any accident happens at all, due to the mere fact that the driver chooses to drive a car. If we removed every single vehicle in the US from the street tomorrow, a whole lot of people would die, directly or indirectly. I'd like to see anyone stating outright that "usage of cars in modern society has no benefit insofar as saving human lives is concerned".
In the end, I think both sides are fighting a straw man, or rather, imagining different scenarios in the discussion. I read the original poster as imagining a case where swerving on a freeway means potentially deadly consequences for at least 2 vehicles, along with a multi-vehicle pile-up. You're imagining suburban CA, where swerving means some inconvenience with the insurance company. I have little doubt that everyone would swerve to avoid the collision in the latter.
Also, since we're on HN and semantic nit-picking is a ritual around here, "avoid killing children in the road above all other priorities, including the lives of passengers" is NOT a good idea. As far as the car knows, I might have 3 kids in the back seat.
I believe you, that you don't drive at all. You call both situations strawmen, yet each is a common occurrence. Safe speed in an automobile is proportional to sightline distance. If drivers or robocars see a child (or any hazard) near the road, they should slow down. By the time they reach the child, they should be crawling. That's in an urban alley or on a rural interstate. If the child appeared before the driver or robocar could react, the car was traveling too fast for that particular situation.
In that "traveling too fast" failure mode, previous considerations of convenience or property damage no longer apply. The driver or robocar already fucked up, and no longer has standing to weigh any value over the safety of pedestrians. Yes it's sad that the three kids in the backseat will be woken from their naps, but their complaint is with the driver or robocar for the unsafe speeds, not with the pedestrian for her sudden appearance.
This is the way driving has always worked, and it's no wonder. We're talking about kids who must be protected from the car, but we could be talking about hazards from which the car itself must be protected. If you're buzzing along at 75 mph and a dumptruck pulls out in front of you, you might have the right-of-way, but you're dead anyway. You shouldn't have been traveling that fast, in a situation in which a dumptruck could suddenly occupy your lane. In that failure mode, you're just dead; you don't get to choose to sacrifice some children to your continued existence.
Fortunately, the people who actually design robocars know all this, so the fucked-up hypothetical preferences of solipsists who shouldn't ever control an automobile don't apply.
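For anyone who wants to see how the "stop within your sightline" rule cashes out numerically, here is a rough sketch. The reaction time and braking deceleration are assumed round numbers for illustration, not figures from any real vehicle or robocar.

    # Back-of-envelope "drive at a speed you can stop within your sightline".
    # Assumed figures: 1.5 s reaction time, 6 m/s^2 braking deceleration.
    import math

    def max_safe_speed(sight_distance_m, reaction_time_s=1.5, decel_mps2=6.0):
        """Largest v (m/s) satisfying v*t_r + v^2/(2a) <= sight distance."""
        a = 1.0 / (2.0 * decel_mps2)          # quadratic coefficient
        b = reaction_time_s
        c = -sight_distance_m
        return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

    for d in (15, 50, 150):                   # metres of clear sightline
        v = max_safe_speed(d)
        print(f"{d:4d} m sightline -> about {v * 3.6:5.1f} km/h")
    # Roughly 26 km/h for 15 m and 124 km/h for 150 m of sightline: the safe
    # speed grows much more slowly than the sightline does.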
Thanks for your response. That's certainly an interesting way to think about driving (in a good way). Your previous posts would have done better by elaborating that way.
Just a bit of clarification: I didn't call the situations rare or a strawman. I said the arguments were against a strawman, as you were thinking of a different scenario and arguing against that.
Out of curiosity: if swerving were potentially deadly with 3 kids in the car, what would the response be?
"You claimed that any car may be driven slowly enough to avoid killing people on the road with only mere convenient to others. That's patently false: firetruck."
As a firefighter I'd like to offer a few thoughts on this:
- fire engines, let alone ladder trucks are big but slow. Older ones have the acceleration of a slug and struggle to hit high speeds
- with very, very few exceptions, the difference of 10 seconds to 2 minutes that your arrival makes is likely to have no measurable difference (do the math, most departments are likely to have policies allowing only 10mph above posted limits at best, and if you're going to a call 3mi away...)
- again, as mentioned, most departments have a policy on how far the speed limit can be exceeded. It might feel faster when it goes by when you're pulled over, but realistically there won't be much difference
- "due caution and regard" - almost all states have laws saying that there's implied liability operating in "emergency mode" - that is, you can disobey road laws as part of emergency operations, but any incident that happens as a result thereof will be implied to be the fault of the emergency vehicle/operator until and unless proven otherwise
If I'm driving an engine, blazing around without the ability to respond to conditions as much as I would in a regular situation/vehicle, then emergency mode or not, I am in the wrong.
This is a ridiculous claim to make. Emergency vehicles will be the very last kind of vehicle to get autonomous driving (if ever), because they routinely break regular traffic rules, and routinely end up in places off-road in some way.
Hell, modern jumbo jets can fly themselves, from takeoff to landing, but humans are still there specifically to handle emergencies.
> As far as the car knows, I might have 3 kids in the backseats.
Strange that you envision having a car capable of specifically determining 'children' in the street, but nothing about the occupants. Especially given that we already have cars that complain if they detect an occupant not wearing a seatbelt, but no autonomous driving system that can specifically detect an underage human.
> You're imaging suburban CA where swerving means some inconvenience with the insurance company.
"I'd rather other people die than me" is not about imagining mere inconvenience with insurance companies.
For someone complaining about strawmen, you're introducing a lot of them.
You seem to have selectively cut out and either skimmed or not read my full post. Except for the last paragraph, my comment has little to do with autonomous driving and is just about driving/morality with regard to driving in general.
No, I do NOT trust the car to detect children either on the road or in the car, that's why I phrased my comment that way.
> "I'd rather other people die than me" is not about imagining mere inconvenience with insurance companies.
Yes, I specifically point out the scenario the poster of that quote might be thinking about to contrast with the inconvenience scenario.
You should take your own advice: read the post again before commenting.
Legally speaking children are found to be at fault all the time, and under certain circumstances even treated as adults. So no, they're not incapable of being at fault.
Let me guess, you haven't done much work in public relations? Much worse for a company than doing some evil reprehensible shit, is doing such shit in a fashion that's an easy, emotional "narrative hook" for the media. A grieving mother clutching a tear-stained auto repair bill itemized with "child's-head-sized dent, one" is the easiest hook imaginable.
Maybe so. But Google for "parents of dead child sent bill for airlift" - if you want to be pedantic, you can even argue that in some of those cases, there may not have been any choice, only an implied consent by law that says that "a reasonable person would have wanted to be flown to hospital".
You'll find examples. And for every waived bill you'll also hear "well, regardless of the tragic outcome, a service was rendered, and we have the right to be paid for our work".
Please note that I am carefully avoiding passing any judgment on the morality of any of the above.
Unless I really misunderstand, the invoices to which you refer are for services intended to preserve the life of a child, not services intended to relieve a third party of certain trifling inconveniences associated with a child's death?
Society will not tolerate a robocar that cheerfully kills our children to avoid some inconvenience to its occupants.
One might think that we're not talking about inconvenience, but we are, because after all any car may be driven slowly enough to avoid killing children in the road. A particular robocar that is not driven that slowly (which, frankly, would be all useful robocars), must avoid killing children in the road above all other priorities, including the lives of passengers. A VW-like situation in which this were discovered not to be the case, would involve far more dire consequences for the company (and, realistically, the young children and grandchildren of its executives) than VW faces for its pollution shenanigans.
At the moment our autonomous cars can't manage to avoid a stationary guard rail on the side of the road, so it's a bit premature to be worrying about our cars making moral decisions.
The first priority of an autonomous vehicle should be to cause no loss of life whatsoever. The second priority should be to preserve the life of the occupants.
Nobody buys an autonomous car that will make the decision to kill them.
You'd hope the system has sensor, video and audio logs and some kind of black box arrangement to at least get the maximum value in terms of information out of those crashes that do happen. If so it should be possible to get a real answer to your question.
Tesla's spin control is backfiring. They tried to blame the driver. They're being sued by the family of the dead driver, and being investigated by the NHTSA (the recall people), and the NTSB. In comparison, when a Google self-driving car bumped a bus at 2mph, Google admitted fault, took the blame, and Urmson gave a talk at SXSW showing the data from the sensors and discussing how the self-driving car misjudged the likely behavior of the bus.
It's a hell of a lot safer for Google to take fault for a 2mph bump into a bus than it is for a case where someone died. I'd guess Tesla will ultimately settle the lawsuits against them, but if they came out and admitted 100% fault, which it likely isn't, they'd have absolutely no ground to stand on and could get financially ruined at trial.
> It takes seconds, not milliseconds, for the driver to recover situational awareness and take over.
The really depressing thing is that this is first-year material for one of the most popular tertiary courses around: psychology. It is simply gobsmacking to think that Tesla thinks a human can retarget their attention so quickly. It's the kind of thing you'd give to first-year students and say "what is wrong with this situation", it's so obvious.
>3. Tesla's autopilot behavior after a crash is terrible. It doesn't stop. In the semitrailer crash, the vehicle continued under power for several hundred feet after the top of the car was sheared off as it went under the semitrailer. Only when it hit a telephone pole did the car stop.
This is actually the most disturbing part to me. In these situations, graceful failure or error handling is absolutely critical. If it cannot detect an accident, what the heck business does it have trying to prevent one?
If he had hit the accelerator, you can bet your bottom dollar that Tesla would have mentioned this detail in order to continue the "driver's fault" narrative.
I still say they'd best damn well rename the feature. The connotations of the word autopilot are too strong and too misleading about what it can actually do.
When the technology matures, then bring the name back, but until then call it something else.
I am still amused that most of these self-driving implementations only drive when people are least likely to need the assistance: good visibility conditions. They are the opposite of current safety features, which are designed to assist in bad driving conditions. I won't trust any self-driving car until it can drive in horrible conditions safely.
At most these services should be there to correct bad driving habits, not instill new ones
I can see how the name could be misleading to some people. However, I think the term is quite accurate. In an aircraft, Autopilot takes care of the basics of flying the plane while the pilot is fully responsible for what is happening, watching out for traffic, and ready to take over if necessary. This is exactly what it means for Tesla as well.
In any case, as somebody who has used it off and on for 6 months in all kinds of situations, it is clear that it has limitations, but it is also easy to know when to be worried. Its best use is in stop-and-go freeway traffic. Its worst use is on smaller, curvy surface streets. It is also really nice for distance driving on freeways.
To me this seems to limit what is possible by software updates.
Does that mean that existing Tesla owners will never get 'real' autopilot capabilities by software upgrades, although I assume most expect to get them?
I think Tesla should start some upgrade program for self driving features. One thing that holds me back from buying one of the current cars is that they could get obsolete quickly if the next version has better self driving sensors.
Tesla's autopilot behavior after a crash is terrible. It doesn't stop. In the semitrailer crash, the vehicle continued under power for several hundred feet
Are we sure it continued under power? The Tesla is a heavy vehicle, with weight centred close to the ground. Momentum could easily have carried it several hundred feet after a high speed collision.
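As a rough sanity check on the momentum idea, here is a back-of-envelope coast-distance calculation. The speed and deceleration figures are assumptions for illustration only, not data from the crash.

    # How far does a car coast from highway speed if nothing slows it but
    # rolling resistance, drag, or light braking? distance = v^2 / (2a).
    def coast_distance_m(speed_mps, decel_mps2):
        return speed_mps ** 2 / (2.0 * decel_mps2)

    v = 29.0  # ~65 mph in m/s (assumed speed)
    for a in (0.5, 1.0, 2.0):  # free roll, gentle drag, light braking (m/s^2)
        d = coast_distance_m(v, a)
        print(f"decel {a} m/s^2 -> {d:6.0f} m ({d * 3.28:6.0f} ft)")
    # Even with light braking the car can travel several hundred feet, so
    # distance alone doesn't settle whether it was still under power.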
There's a report on electrek.co somewhere showing a google map of the crash site and where the car ended up. The car ended up a quarter mile down the road and had passed through a fence or two and some grass. There's also a report somewhere of a driver who says he made the same turn as the truck and followed the car as it continued down the road. Seems like it was under power. NHTSA, NTSB or Tesla could confirm with the data.
When autopilot disengages, does it also disable the accelerator? Otherwise, a foot from a disabled driver could have caused the vehicle to continue under power. Obviously Tesla probably has all this telemetry already.
Autopilot is not autonomous driving. It is fancy cruise control. Just like cruise control can't handle every situation neither can autopilot. Just like autopilot in an aircraft it requires a human paying attention and ready to take over.
The system works as intended and advertised. Obviously it isn't perfect but with the fatality rate roughly equal to human drivers and the accident rate seemingly much lower it seems at least as good as a human.
This would be very reasonable if the feature were not experimental. You specifically turn on an experimental feature and then purposefully ignore the instructions for using that experimental feature. This is obviously the manufacturer's fault.
Yes. It is completely predictable that some percentage of people will ignore instructions, even for production-quality features. That's why the fail-safe comes first, even in an experimental feature. If you can't provide a fail-safe, it's not ready for prime time, including testing by the public.
Cool logic. I guess we should stop prescribing drugs because there is no failsafe when people ignore dosage and directions. We should stop selling microwaves because people might ignore the warning and put a cat in there... There is of course the alternative of people actually being responsible for doing stupid things and not blaming everyone and everything else for their own actions.
I agree their sensor suite needs improvement. I hope they are adding lidars and/or radars before they start shipping large volumes. A recall of large numbers of cars would kill Tesla.
Maybe when they develop a safe, reliable autopilot system, they'll offer it for sale to the public. Some firms seem to have gotten that backwards...
It has always seemed that Google view robocars as a long-term revolutionary concept, while car companies see them as another gaudy add-on like entertainment centers or heated seats. It's not yet clear which is the right stance to take as a firm, but Tesla's recent troubles might be an indication.
No, Tesla issued a somewhat deceptive press release to make people think that:
"Tesla received a message from the car on July 1st indicating a crash event, but logs were never transmitted. We have no data at this point to indicate that Autopilot was engaged or not engaged. This is consistent with the nature of the damage reported in the press, which can cause the antenna to fail."[1]
When that gets trimmed down to a sound bite, it can give the impression the vehicle was not on autopilot.
The driver told the state trooper he was on autopilot.[2] He was on a long, boring section of the Pennsylvania Turnpike; that's the most likely situation to be on autopilot.
Multiple people here have reported that their car has allowed 15 minutes or more without having to touch the wheel. If each warning reset that timer, by design or bug, then you could easily travel 50 miles plus.
Plus, I'm almost certain that Tesla's goal with the system isn't to "have an autonomous system that you have to interact with steadily", hence the 15 minute warning in the first place. They have perverse incentives - to allow it in the first place implies reliability that may or may not be there.
Why would you need "more than an hour" to do 50 miles? Especially where you'd expect autopilot to be most used (highways)? 'Round here it's about 40 min on the highway, and barely an hour (give or take) inter-city, and that's driving at posted speeds, which people don't necessarily do (and I expect Tesla does not enforce).
Naming their system "autopilot" goes beyond the marketing dept overstepping and into the territory of irresponsibility.
The "average" definition of autopilot per Wikipedia is "autopilot is a system used. . . WITHOUT CONSTANT 'HANDS-ON' control by a human operator being required."
And yet Tesla's disclaimer states:
"Tesla requires drivers to remain engaged and aware when Autosteer is enabled. Drivers must keep their hands on the steering wheel."(2)
The disconnect between the name and the actual function is needlessly confusing and probably irresponsible.
They should change the name to "driver assist" or something more accurate given that mis-interpreting the system's function can lead to death of the driver or others.
> They should change the name to "driver assist" or something more accurate given that mis-interpreting the system's function can lead to death of the driver or others.
While I definitely agree on the name change, it should be accompanied by a feature change. In its current mode, and especially if people know it has been renamed, the same stuff will happen.
When you take your hands off the wheel, it should turn on warning lights, brake increasingly hard (depending on what it detects behind it), and sound loud warnings in the car. If it's truly an assistive feature where you can't be inattentive, the only scenario in which you stop steering is when you've fallen asleep.
That's what it does - it alerts drivers, and when their hands are off for some amount of time the car will pull off to the shoulder and stop until your hands are back on the wheel (IIRC).
What's the interval between when you take your hands off and it decides to pull over? From what I've seen in discussions and videos, it's pretty long. Other car manufacturers seem to only allow a few seconds.
The article seems to imply that the warning is given if driving conditions become "uncertain" and the driver's hands are off the wheel. Perhaps the time interval between warning and the car automatically stopping should be lower. What's the point of the car continuing on autopilot if it's uncertain of where it's headed?
Based on that account, it seems the warning is triggered if the road becomes curvy. There may also be additional triggers.
It seems insufficient to me. IMO there should be an interval of at most 5 seconds before the first warning, and another 5 seconds before the second, and then the car should start decelerating.
> What's the point of the car continuing on autopilot if it's uncertain of where it's headed?
Haha agreed! I don't know exactly how the system works. My sense is Tesla's AP lets you be hands-off for a much longer time than other self-driving systems.
I think about three minutes. My VW Passat demands the hands back on the steering wheel after 15 seconds or so I believe. Same experience driving a recent Audi.
Is it even useful if it's merely assisting the driver, though? I haven't used it, but actively steering a car while someone else is also actively steering it sounds pretty terrible.
If you're not actively steering it but just waiting to take over if things go poorly, things also seem like they'd be difficult. Other than getting in an accident, how is a person supposed to know when autopilot is doing its thing correctly, and when it isn't?
> but actively steering a car while someone else is also actively steering it sounds pretty terrible.
My Passat has most assist systems and I like how it steers. It tries very hard to get my attention and get my hands back on the steering wheel. It barely lets me keep my hands off the wheel for 15 seconds, but this is perfect for what it's designed to do: keep you in the lane if you are being distracted by something (e.g. a baby).
Even more, it has various modes to get your attention back, including detecting tiredness and an inability to hold the steering wheel in the lane.
> keep you in the lane if you are being distracted by something (eg: baby).
If you can't drive responsibly with a baby on board lane assist isn't going to save your ass. If something happens to your baby which distracts you ask someone else in the car to take care of it, if there is nobody to help pull over at the earliest opportunity.
Lane assist will not save you from anything else but lane departure and there are a lot of other things that can go wrong while you allow yourself to be distracted because you rely on lane assist to keep you safe. That's not the intended use of this feature.
Getting distracted by a sudden cry or a little child throwing an item is precisely what the feature was designed for. It was not designed for watching TV or constantly taking your hands off the wheel to do something else.
Also I find it interesting how much you assume about a person or situation from a single word.
I can't imagine the gymnastics involved in this feat either. A baby doesn't belong on the front seat without some modifications to the car as far as I know.
I absolutely agree. There is absolutely no excuse for the driver to take their eyes off the road to assist a baby in the car. How would you assist a baby in the back seat while driving with a seat belt on anyway?
As I read about people using these features, I'm convinced they shouldn't exist. Yes, crazies will apply eyeliner while going sixty on a highway but I doubt driver assist is an adequate solution for such stupidity should a driver poke themselves in the eye.
Who said anything about a) a baby on the front seat (which is not permitted anyway) or b) assisting a baby?
If you have ever driven with a baby or child in a car, you will have learned that they can cry or throw a tantrum. Assist systems are intended to supplement and assist the driver in situations of increased stress. The idea that one would actively take their hands off the wheel to do something with a child is so preposterous that only commenters on HN could seriously suggest it.
At least in the United States, most state laws dictate that babies are to ride in the back seat (if available) in child seats. Some states also dictate when a child can be turned around to face forward. Varies state to state.
Given how many deaths happen due to car accidents, one would think that any car feature that decreases safety should be considered a bug (UX bugs included).
The sad thing is that Tesla has been repeatedly blaming the driver, instead of improving its technology in an openly earnest manner -- an approach that eventually made air travel the safest.
With so many sensors, car travel can eventually become safer than air travel. If Tesla's attitude improves, that future will arrive faster.
Has this feature decreased safety? Tesla's first fatality came after 130 million autopilot miles, whereas traditionally in the US we have one fatality every 90 million miles. If that continues to hold, this seems to be significantly safer, with its own set of caveats. Not altogether different from getting stuck in a burning car due to your seatbelt.
I actually worked out the math on this--Tesla is lying with statistics with this claim of 1 fatality in 130 million miles.
Here's what I did:
1 in 90 million rate: 1.1 x 10^-8 per mile
Probability of no accident in one mile: 1 - 1.1 x 10^-8 = ~0.999999989
Probability of no accident over 130 million miles: (1 - 1.1 x 10^-8) ^ 130e6 = ~0.24
Probability of AT LEAST ONE ACCIDENT in 130 million miles = 1 - 0.24 = 0.76
If you do the same for a rate of 1 in 130 million miles you get 0.63.
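A quick Python sketch of that arithmetic, for anyone who wants to check it (this assumes the obvious constant-rate-per-mile model; the only inputs are the two rates above):

```python
def p_at_least_one(rate_per_mile, miles):
    """Probability of at least one fatality over `miles` at a constant per-mile rate."""
    return 1 - (1 - rate_per_mile) ** miles

miles_on_autopilot = 130e6
print(p_at_least_one(1 / 90e6, miles_on_autopilot))    # ~0.76 at the US-average rate
print(p_at_least_one(1 / 130e6, miles_on_autopilot))   # ~0.63 at Tesla's implied rate
```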
So the true rate could really be worse than 1 in 90 million miles and they just got lucky that no incident presented itself, which would make it more dangerous than a regular car.
In other words, the exact rate is very uncertain when there's only one crash data point. They might be lucky (or unlucky). Well that's obvious, and I wouldn't call it lying at all.
I agree that the fatality rate does not allow for any comparisons. Autopilot could cause significantly more or fewer fatalities than humans.
On the other hand, one also needs to consider accident rate. From what I understand, based on US averages, we should have had 50+ accidents out of 100+ million miles driven on AP, but we had about three.
Also, are they counting all previous versions of the software instead of just the latest?
Moreover, I wonder how the human drivers in the statistics would fare if you correct for drunk driving, cellphone use, etc. In my opinion, any autopilot worth its name should be an order of magnitude better than the average human driver.
Averages are a bad way to look at the overall risk.
An example:
Suppose every time the autopilot encounters a truck turning in front of it, the result is a fatality.
Suppose 1 in 10 times a drunk driver fails to avoid the turning truck in time.
The overall average for the drunk driver could be 1 in 50 million miles and for the autopilot 1 in 90 million, but you are certain to die in a few specific situations with the autopilot, whereas the human could possibly save himself in the same situation. I imagine the Tesla has lots of these bugs that haven't been found yet.
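A toy calculation along those lines (every number below is invented purely to show how an overall average can mask a scenario-specific failure; none of it comes from real crash data):

```python
# Each entry: (fraction of miles driven, probability of a fatality per mile).
autopilot = {
    "ordinary driving":     (0.999999999, 1.01e-8),
    "truck turning across": (1e-9,        1.0),   # assume AP never recovers
}
drunk_driver = {
    "ordinary driving":     (0.999999999, 1.99e-8),
    "truck turning across": (1e-9,        0.1),   # human recovers 9 times in 10
}

def miles_per_fatality(profile):
    rate = sum(share * p_fatal for share, p_fatal in profile.values())
    return 1 / rate

print(miles_per_fatality(autopilot))     # ~90 million miles: "safer" on average
print(miles_per_fatality(drunk_driver))  # ~50 million miles: worse on average
```

Despite the better average, the autopilot row is a guaranteed fatality in the turning-truck scenario.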
I would trust a drunk driver over Tesla's autopilot at this point. Don't die for Elon Musk's dream--being one of the first auto pilot fatalities isn't that big of an accomplishment.
Why do you think the accident rate is lower among luxury cars as compared to more common vehicles? I would suspect the opposite. My hypothesis is that the people who seek out and buy cars that go 0 to 60 in 3.2 seconds probably drive faster and more aggressively than average. They therefore are more likely to have an accident.
> My hypothesis is that the people who seek out and buy cars that go 0 to 60 in 3.2 seconds probably drive faster and more aggressively than average
Highly unscientific anecdotal evidence and personal experience living in an area with an unusually high concentration of such cars shows me that most people that own really fast cars do drive fast and enthusiastically from time to time but do so in a responsible manner and typically respectfully go with the flow, whereas a lot of people with more regular cars (mostly MPVs around here) have a habit of driving a good deal too close and a good deal too fast every single day. Many even engage in some truly reckless behaviour as soon as they feel threatened/frustrated, a telltale sign of power/control issues.
Very anecdotal evidence from commuting in Toronto:
- more drivers in $80k+ cars leave 2-3 times the following distance of drivers in cheaper cars.
Not all, but very noticeable sometimes.
Several reasons, but again, it is just guessing that cannot replace actual data. That doesn't mean we should take the only data point available (90 million miles) and declare it settled.
Your point about people seeking to drive faster is a valid one.
The counterpoint is there are people who cannot afford luxury cars but also want to drive faster -- they'll end up buying cheaper cars that can drive fast within their price range. Those are likely to be more fatal than the average luxury car.
1) Hypothesis on car age. An accident on a 2002 non-luxury car model is supposedly more likely to be fatal than an accident with your average new car.
This matters because Teslas with autopilot are relatively new compared to overall car population.
2) Hypothesis that low cost cars are more fatal than an average luxury cars.
3) Demographic hypothesis. Because of price point, the buyer demographic of newer luxury cars will be different than the overall demographic. Age group (e.g. teenagers vs. young adults vs. older adults), education level, profession.
Luxury cars do tend to do pretty well among late-model cars in terms of fatality rates--haven't seen rates for accidents overall. But the biggest correlation is probably that bigger cars are safer and smaller cars less so.
But this is overanalyzing, if we use only the "1 fatality in 130 million miles" stat. That 1 fatality was a car driving under a truck at speed and losing its entire top half.
I don't think more expensive cars are less likely to have that kind of accident, nor would it be less fatal.
This. It's like they want marketing credit for the engineering feat of creating 'autopilot' while maintaining the narrow legal position that the system isn't actually automatic or a pilot.
Well, sadly the first casualty was a man whose video "Autopilot Saves Model S" got 3 million views on YouTube months before his fatal accident...
http://youtu.be/9I5rraWJq6E
At that time I don't recall that Tesla made any statement about how dangerous it was to deliberately push the limits of the system to entertain viewers.
This might have contributed to making him overconfident about autopilot.
> At that time I don't recall that Tesla made any statement about how dangerous it was to deliberately push the limits of the system to entertain viewers.
Yeah who knows. I think Musk retweeted his video. Hindsight is 20/20. Tesla can still move forward and learn from this.
It doesn't matter what they advocate for; what matters is the perception of the technology. Just because Tesla doesn't explicitly say it's a perfect system, and says that the driver has to be aware at all times, doesn't mean that actually happens, and with a lenient time frame without hands on the wheel it's not surprising that drivers would essentially give control to the car.
What I'm saying is I don't think it has anything to do with calling it autopilot, and even less to do with it not meeting the technical definition of autopilot.
People are using it in this way because they're able to, and they'll continue to do so until they're not able to. I think we agree here, actually. The name isn't really the issue.
I couldn't agree more. Autopilots of any kind have always required supervision. Tesla doesn't call its car self-driving and there are many warnings that it isn't. Ultimately, the system could be called Boris and people would still use it irresponsibly.
Do you have any evidence at all that any accidents have been caused by misinterpreting the function's name? Do you think these accidents wouldn't have happened if it were called something different? I see little reason to seriously believe that.
Aircraft (with over 9 passengers) are required to have TCAS[0]. TCAS issues a resolution advisory if conflicting traffic is detected, and if the approaching aircraft also maneuvers, TCAS can issue a "Reversal" if needed.
Airport runways typically have edge lighting, and if an aircraft is off-center during takeoff or landing, it can hit the lights and do damage[1]. Airliners land at 150mph+, so it doesn't take much deviation to hit the edge lights. Occasionally, this also happens during a gusty crosswind landing while coupled to the autopilot (auto-land), or immediately after touchdown following a coupled landing.
You're far less likely to suddenly encounter unexpected conditions when flying though; the sky has a tendency to be nothing but air, while the ground has a tendency to be filled with things you have to avoid, many of which are moving.
I would argue that the time frames involved are very dissimilar. In a Tesla, you need to be able to take complete control in under a second if the system fails. The closest I can think of in a plane is during an assisted landing, which is a relatively brief period where the pilot is fully engaged anyway.
Going beyond the plane analogy everyone is going with, all major transportation systems that utilize autopilot systems (planes, ships, trains) require training that teaches the pilots (or conductors for trains) to know how to stay alert and when to take over.
Perhaps it is true that the semi-intelligent autopilot is good enough for these trained activities, but the untrained public doesn't realize that autopilot today is not full automation.
Meaningful training is not required to get a driver's license in the U.S. In most states it is completely adequate for a parent to teach you to drive, with no minimum number of hours in a classroom, public or private.
Pilots also receive ongoing currency training. This doesn't exist for automobile drivers. Maybe some states require a written test every few years? But in the three states I've lived in, no written, oral, or in-car recurrency test has been required since the original license was issued when I was 15 or whenever.
I had to pass a 20 question multiple choice test, drive at 15mph on a short course and parallel park to get my license at 17 years old. That was my only certification process. Unless laws change, I will never have to go through a training or certification process again in my life.
Comparing pilot certification to a driver's is disingenuous.
No doubt, but both groups need to supervise their Autopilots because autopilots of any kind have always been supervised. That's the point the grandparent is making.
The supervision required for airplanes is not the same. If an airplane autopilot flies you straight into the ground and turns off half a second before hitting it, that won't count as acceptable behavior on its part.
The clouds are rarely full of rocks. The planes have pilot/copilot and cabin crew. The pilots are highly trained professionals. The systems on the planes pass extremely rigorous testing to be certified.
Chances of gaining control and awareness of the situation in a timely manner on an airplane are higher.
Autopilot for planes came from a culture where failures weren't denied because of profits. Public safety and being surer than sure that incidents never happened again were and are the utmost concern. Pilots are professionals who went through training and certification.
It's sad. I am a big fan of Tesla and had, and still have, very high hopes for them, but they're fading. It seems Tesla is hell-bent on committing suicide. This way, they don't need the fossil-fuel car lobby to destroy them. BMW, Volkswagen and the other dirty-gas-guzzler producers would be chuckling now.
Why the hell are they so bullish on autopilot? I cannot see any sense in it. They should just concentrate more on the battery, charging-station and efficiency/performance parts of the equation. Instead, I am seeing more effort being thrown, aggressively, into not-so-relevant areas.
Semi-automatic autopilot is an idea rejected even by Google. Tesla should learn fast and get back to its core business: battery-powered electric cars, period, not battery-powered automatic electric cars.
It really fits well with their image of an agile tech company producing cars from the future. Also, they need those software upgrades, as they buy time to focus on new car models rather than replacing existing ones.
Tesla is still a tiny player in the car business and at some point a big brand will enter its market in force. They need to be bullish to maintain their image.
That said, their PR has always been overreactive. Remember all the dirt they threw at people reviewing their cars, or how they came down hard on people whose Teslas caught fire. That was unnecessarily dickish. If Tesla has some actual blame to take, the tone set by their PR people is going to seriously backfire.
> BMW, Volkswagen and the other dirty-gas-guzzler producers would be chuckling now.
Funnily, all of those are investing heavily in automated electric cars already, or even have products for semi-automated electric cars on the market.
Just less hype around those.
If Tesla just continues, they'll be dead in a year, and Volkswagen, which also got further encouraged by the EPA to develop electric cars, will just take over their market share.
With Tesla in the picture, there is a lot of pressure on BMW, Volkswagen etc. to invest in EV technology. So they would be chuckling now because with Tesla out of the competition (or with Tesla becoming weaker), they would have an easier task pushing their dirty gas guzzlers and getting loads of easy profit.
With Tesla out, they could even squash their electric car efforts altogether and pocket the money thus "saved" as fat bonuses for the higher-ups (à la Volkswagen), money they would otherwise have to invest in EV research.
I may seem to be exaggerating, but after the recent Volkswagen pollution scandal many will agree with this. Tesla is better, Elon Musk is better in terms of innovation, and their direction seems more promising for the public good. They are disrupting the fossil fuel giants, who are hell-bent on garnering as much money as they can at whatever cost.
I know I may (turn out to) be foolishly naive, believing too much in Tesla and Musk, but I find myself helpless here.
Perhaps you live in a bubble; most car makers barely raise an eyebrow over Tesla as a competitor so far. Making ton-loads of batteries is not necessarily better than making efficient, safe and less polluting "gas guzzlers".
From the outside, it seems there is a push to revive the American car industry through EVs, supported by lenient, eager-to-see-results US regulators. Of course, time will tell, and in the process hopefully we will all get safer and more eco-friendly cars.
I don't understand why Tesla is allowed to release a safety critical system on to the roads when it requires buyers to acknowledge its not-finishedness. They're putting beta systems into an uncontrolled environment. No driver in the nearby lanes has been given that choice.
This a thousand times. Why is this allowed? And then going the extra mile and calling it autopilot which has all kinds of connotations about not needing to give a fuck about what the vehicle does. Bad bad bad. Recall and remove the feature until it's ready for actual use. Don't put everyone else on the road at risk so you can beta test.
Well, what I fail to understand (but do not find unbelievable) is how people dare leave their car's control to just-released software. I'm sorry for all the incidents, think Tesla is to blame here, and would like electric to replace gas, so no cynicism intended, but that's just madness to me: the openness of people to alpha-test something that may easily cost them their lives. It's called autopilot, so what? While I admit it's bad naming, one should be responsible for their own safety before anybody else is.
On the other hand, self-driving vehicles should be a different class of vehicles, properly regulated by governments. A road is a complex thing to be in for an AI-backed vehicle, especially considering the lack of the communication and organisation systems used by naval and aerial vehicles.
It's already something other cars on the roads have. Tesla just did it on the cheap and Tesla gets more press because it's not a 100 year old car company and its founder is Elon Musk.
Other cars require hands on the wheel at more frequent rates, as I understand it. There's no conspiracy. Everyone wants a good product out of Tesla. Sometimes that means additional pressure. 100 year old car companies have experienced plenty of pressure, and these days they anticipate the pushback. The difference between the companies is not how the press treats them, but how they handle and respond to adverse events. This article says it better [1]
> At Ford Motor Company, executives told me about all the pushback they received when they introduced features we now view as standard. People resisted and feared seat belts, airbags, automatic transmission, automatic steering, antilock brakes, automatic brakes, and trailer backup assist. Ford does extensive focus groups on new features that push boundaries, holding off on introducing ones that too many people don’t understand.
> Tesla doesn’t operate that way. When the company’s Autopilot software, which combines lane-keeping technology with traffic-aware cruise control, was ready last Fall, Tesla pushed it out to its customers overnight in a software update. Suddenly, Tesla cars could steer and drive themselves. The feature lives in a legal gray area.
I'm not sure of the legalities in the US, but in the UK I'd suggest that there are a few things that they'd fall foul of:
o advertising standards, it's not an autopilot
o trading standards, not making a safe product/warnings [0]
o death by dangerous driving [1]
o driving without due care and attention [2]
[0] There are layers of consumer protection that enforces things like no sharp edges, doesn't give off nasty vapours, does what the marketing says it does.
[1][2] these are crimes for the driver. If you are found to have been driving in a reckless way, not concentrating, or not having hands on the wheel, that's your fault as a driver: a very large fine and time in jail.
Especially as the Tesla has lots of sensors to prove what the driver was doing.
Am I the only one that thinks people should be held responsible for their own choices?
This argument is completely bogus. No driver in the nearby lanes is aware the other driver is drunk, a bad driver or emotionally unstable.
The entire idea of beta testing is to have your system in an "uncontrolled environment". The system has bugs, as any beta system does, and whoever uses it should be fully responsible for it. Yes, fully responsible.
Tell me, what's the difference between using a beta system and not caring (leaving hands off the wheel, etc.) and drunk driving? In both situations the driver is choosing to put everyone at risk, nobody knows about it until after an incident, and if there is an incident, whose fault is it? The beer company? The car company?
If the driver assistance made cars crash because it ignored a command from the driver or something like that, then sure... it's 100% Tesla's fault. But clearly on all of these accidents the drivers were careless to the extreme by leaving hands off the wheel for several minutes.
We can discuss if they should try to improve how they handle people that remove their hands from the wheel, but that doesn't change the fact that ultimately the choice to be careless was made by the driver, not the car manufacturer.
Comparing this to a drunk driver is not relevant, as driving drunk is already illegal, whereas autopilot is not.
> We can discuss if they should try to improve how they handle people that remove their hands from the wheel, but that doesn't change the fact that ultimately the choice to be careless was made by the driver, not the car manufacturer.
Yes, the driver is to blame for that, just as Tesla is to blame for allowing it to happen in the first place. They have sensors to detect whether the driver has his hands on the wheel, and yet they allow the vehicle to keep going for minutes in a situation it can't control.
"Tell me what's the difference between using a beta system and not caring (leaving hands off the wheel, etc) and drunk driving?"
The latter is a clearly defined criminal offence, whereas the former is currently unclear. While the driver is certainly liable for reckless driving, it is also a fact that the marketing seems to have suggested the feature is usable.
You're literally now comparing Autopilot to criminal offenses and asking if it should be allowed; it seems like you're suggesting Autopilot should be illegal.
Sure, drivers suck, this is a known bug in humans. I'd like to see Autopilot changed to counteract the human limitation rather than acting like humans should be perfect for a system to work.
This is how you get into bad UI design ("it is NOT my UI that sucks, the users just don't understand it! Read the manual!"). Except in this case people can literally die.
No, certainly. The responsibility is shared: Tesla is liable for a subpar product (that is, autopilot, not the whole car) and the driver for what boils down to reckless driving. Though I want to note that I'm not familiar with US traffic legislation.
For drunk driving and nearby drivers, the drunk driver didn't sign an acknowledgement that he's aware of the hazards of driving drunk and did it anyway.
Signing an acknowledgement means there's something to acknowledge, like "we're not totally done with this." Any driver on the road can reasonably expect that any nearby car was sold fully functional.
The driver of the truck that the Tesla crashed into was not (I believe) injured, although the truck's owner suffered property damage caused by the Tesla (my opinion). The truck driver could have been injured or killed. If it hadn't been a truck, it could have been a white minivan, and there almost certainly would have been injuries or deaths.
This thing belongs on a test track, not crawling up my bumper.
If you serve alcohol to a person and they're intoxicated enough to be a danger on the road, you can be held liable for failing to stop them from operating a vehicle if they harm themselves or others. It is your responsibility to cut them off and find them alternate transportation. It is also the fault of the intoxicated driver, but fault doesn't end there.
Similarly, if you provide and advertise an Autopilot feature that isn't autopilot, it is your responsibility to ensure the driver is attentive, has their hands on the wheel and that you pushed a not-broken implementation to them. Other manufacturers have done this, yet Tesla hasn't.
> "As road conditions became increasingly uncertain, the vehicle again alerted the driver to put his hands on the wheel," said Tesla. "He did not do so and shortly thereafter the vehicle collided with a post on the edge of the roadway."
If the autopilot's confidence was dropping that rapidly, why did it not slow the vehicle? If it hits zero confidence it should halt completely until the driver restarts the car in manual mode, and possibly disable itself until checked out.
The "Autopilot" (aka Lane Assist) of my VW does that. If you don't get your hands on the wheel after it wants you to do (which is about 10-15s) and after a first warning it will hit the brakes with a lot of power for the fraction of a second.
So it's not actually stopping but giving you a clear signal that you should do something and would wake you up if you fell asleep. I don't know if it will come to a stop if you also ignore this warning, because it's not something you want to test with other participants on the road behind you.
I think that's the important difference: this system actively discourages the driver from fully relying on it, while Tesla's implementation does not seem to.
Apologies for not googling this myself: what mechanisms do autopilot programs have for detecting traffic conditions behind them, in terms of cameras/radar/algorithms? Are they less robust than the mechanisms governing detection in front of the vehicle?
I guess the current lane-assist and adaptive cruise control systems don't do anything at all. Most cars only have ultrasonic parking sensors in the back and perhaps a parking camera, neither of which has much range. They probably rely on the fact that vehicles behind them should keep a normal safe following distance and that following cars are responsible for avoiding rear-end collisions.
Agreed, but Tesla might need to get more aggressive now ("Confidence lost, resume control, emergency braking") and have the tail lights go to 100% brightness before braking is applied (to give the driver behind them ample time to slow).
If you're instructing the driver to do something, and they don't, your last option is to fail as gracefully as possible (which is "what is the best path forward in the distance I need to stop at emergency braking rate + I must notify the vehicle behind me as soon as its clear I will take automated action, even if that's before I apply braking force").
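A minimal sketch of that fail-gracefully sequence, assuming a hypothetical control interface (the deceleration limits and the `signal_following_traffic` hook are invented for illustration, not Tesla's actual code):

```python
COMFORT_DECEL = 2.0   # m/s^2, gentle braking
MAX_DECEL = 6.0       # m/s^2, emergency braking

def stopping_distance(speed_mps, decel):
    """Distance needed to stop from speed_mps at a constant deceleration."""
    return speed_mps ** 2 / (2 * decel)

def handle_lost_confidence(speed_mps, trusted_path_m, warn_driver,
                           signal_following_traffic, set_decel):
    """Warn the driver, light up the rear of the car, then brake only as hard
    as the remaining trusted path length requires."""
    warn_driver("Confidence lost, resume control, emergency braking")
    signal_following_traffic()  # brake lights / hazards before braking force is applied
    if stopping_distance(speed_mps, COMFORT_DECEL) <= trusted_path_m:
        set_decel(COMFORT_DECEL)   # plenty of trusted road left: brake gently
    else:
        set_decel(MAX_DECEL)       # otherwise brake as hard as needed to stop in time
```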
Going off-road like it did seems safer than stopping on a highway. Sure there was damage, but far safer for the unconscious* occupants than being rear-ended.
(* I use "unconscious" because this guy was not paying attention, but also because it appeared to the car that the driver was dead)
As long as it's not too bright out and the object isn't highly reflective and there's no glare and it's not too high and...
Given that the sensors can't detect a semi completely blocking the road, I'm reluctant to make any statements about what the Tesla can and can't detect.
It can detect a car immediately behind it with the ultrasonic sensors. There is a reverse camera, but it's not hooked up to the autopilot system (just used for showing on the display).
So, no, it can't really detect cars more than a few feet behind it.
Actually, that's precisely how (real) autopilots work. If the autopilot can't reliably continue flying (e.g. in extreme turbulence, severe icing, or failure of some part of the system), it will immediately disengage with a loud warning sound, and the pilot has to take immediate control.
You're both right. That's exactly how aircraft autopilot systems work... and it doesn't work. There have been several plane crashes -- most notably AF447 -- which resulted from autopilot systems disengaging when the (human) pilots had insufficient situational awareness to safely take over control of the aircraft.
It was, at least partially. The pilot failed to notice (or account for) "Alternate Law" conditions (which don't provide the same safeguards as normal mode). Stall handling in particular is much different in Alternate Law.
The confusion between normal and alternate law on AF447 is not because the autopilot transferred control to the pilots when they weren't ready. The same thing could have happened if the transfer was made with full preparation. It was a signalling and training issue, not evidence that auto->manual transitions are bad.
If the question is "could a human be trained to use this?" then, yes, you can imagine a human reacting appropriately to the AF447 scenario.
But that's the wrong question. The right question is: does the system incorporate the success of human actors into its design? And the answer here is no. If the Airbus system works as designed, the human pilots will tend to be woefully unprepared to resolve failure modes.
Attempting to create an external training regimen to account for that design failure is a band-aid, not a solution.
Specifically because this is such a precarious transition of control, pilots have extensive training in staying alert and being prepared to take over, have co-pilots, and have rules about adequate rest times etc. And yet there are still air accidents involving improper use of the Auto Pilot.
We can expect none of these things from the average driver - they're more likely to be watching Harry Potter on their iPad
Furthermore, I imagine a car going down a road is far more likely to encounter a situation where a human needs to take action right now. Aircraft have systems to detect approaching objects, and things like sensor failures usually aren't going to require split-second action. (E.g. the Air France 447 crash involved multiple pilot errors over many minutes rather than a failure to take over quickly enough.)
Humans program computers. There's a reason standards like MISRA C exist: it is critical to minimize error in human abstractions (code) which run on systems that put people's safety in jeopardy. Humans aren't wired to emulate machines perfectly in their minds, yet computers will do exactly what we tell them to do. You are implicitly placing trust in the humans who programmed those computers.
We saw how human factors can complicate the correctness of code in the Toyota unintended acceleration case. The instructions fed to the system by its programmers were incorrect and not up to industry standards. Computers caused the accidents in that case, despite human intervention by drivers.
And I said I trust a computer more, not that the presence of a computer is automatically enough.
I'm not getting in a manually-controlled plane that's flying centimeters from a hard surface. I will get in some computer-controlled planes doing the same.
(At the very least I want a real-time OS and dedicated range sensors. And the confidence of a plane owner who would lose millions of dollars if anything went wrong.)
There are crop dusters that fly close to the ground and obstacles - do you want to get in a manual one or can I code up a computer controlled one and put you in it?
What you really mean is you want to see safety data and a track record.
As for the actual planes and auto-pilots, the real point is the kinds of engagements and disengagements that work in the air don't necessarily work on the ground and being so close to potential collisions is part of that.
It's their BAC and amount of sleep I don't trust. Also the prescription medication, the status of any relationships they might have, and the competency of their managers.
The implication of the hard barrier is that it's not safe to crash into it. If there are big wheels on enormous shock absorbers making it safe to hit the barrier, then a lot less precision is needed, and you can put almost anything in control, even a heavy clamp that keeps the controls skewed slightly toward the barrier. But that's not the scenario anyone else is talking about.
Even then, the pilots usually have more than a second to react. As fast as planes fly, they're high enough and there's plenty of empty space around them when they're cruising.
Any notable quotes or stats from the book? I'd guess some of Tesla's stock value is tied up in the idea that the auto-manual transition of control does work.
I lent out my copy, but one particularly relevant section compares results from the aviation industry's use of pure-autopilot and HUD-augmented landings. Pilots maintain their flying skills, and get better overall results by staying in the loop with an augmented vision system. Good pilots often avoid full-autopilot entirely, or are forced to abort only seconds from touchdown, resulting in poorer landings. Bad pilots over-rely on autopilot, and are therefore under-trained to deal with exceptional situations, resulting in incidents like the infamous Air France crash.
Cool. Tesla should review this book and talk. Paraphrasing a few quotes from your link ...
"The most difficult challenging tech problem is not full autonomy, but rather the interaction between automated systems and humans"
The subtitle of the book is also notable: "Robotics and the Myths of Autonomy"
The speaker almost seems to be arguing that full autonomy is impossible. If we start up a system, then its behavior has still been kicked off and wrapped by humans. Interesting perspective.
I'd like to read the book myself after listening to a bit of this talk. Thanks!
It's not so much that full autonomy is impossible. We "fire and forget" many types of systems with no possibility of human intervention--at least not in a timely manner. What you can't do is start an automated process, comforting yourself that an inattentive human can always take over in a split-second if the automation messes up.
> What you can't do is start an automated process, comforting yourself that an inattentive human can always take over in a split-second if the automation messes up.
I think that depends quite a bit on how much is automated. Traditional cruise control is an automation, and I'm sure there have been accidents that resulted from it, but generally it automates so little that you really can't become inattentive (unless you fall asleep). Variable cruise control has more automation, and I would guess the percentage of automation related accidents it played a part in compared to traditional cruise control is higher due to people trusting the system when it is misbehaving. Tesla's autopilot is much farther along this spectrum.
> We "fire and forget" many type of systems with no possibility of human intervention--at least not in a timely manner
Practically speaking, I agree with what you're saying. For autonomous systems as we think of them today, humans can set them and walk away.
Philosophically, if I throw a ball into space, is its continued movement autonomous? Kicking off a computer program is just a complex ball. We don't have true randomness in computers.
> What you can't do is start an automated process, comforting yourself that an inattentive human can always take over in a split-second if the automation messes up.
I 100% agree with this. I think that's a fundamental flaw in Tesla's current plan to achieve autonomous vehicles. It's also a huge liability risk, for which they are not insured, to be dishing out 15,000 systems like this every month. At least the other consumer-available self-driving car systems require hands on the wheel. Tesla doesn't even seem to do that.
Computers are deterministic. Maybe you can convince yourself that something in the outside world isn't, and generate numbers based off of that, like radioactive decay. Even that may be deterministic. I think it's a question for quantum-mechanics or philosophers.
Anyway, practically speaking, we tend to be happy with random numbers that other people would have a really difficult time copying.
I suppose you could declare that true randomness doesn't exist, but that doesn't make computers special...
Anyway, most of the transistors in a chip are barely kept in a range where they're deterministic. Just push some out of that range, and you get randomness that's as true as anything else.
Interesting. I didn't realize humans have already attempted so many different automated systems. In the talk you linked, he points out how, historically, systems are never fully automated because humans are not comfortable with that. He seems to predict that cars will not be the first systems that we allow to become fully autonomous, and admits he could be wrong.
It's surprising to me that he feels texting should be permissible while driving with a driver-assist system [1]. His whole argument here doesn't add up for me. I'll have to check out his book.
It sounds like he supports driver-assist mechanisms, though maybe without Tesla's beta release schedule of OTA updates. He seems unconvinced that full autonomy will ever be achievable.
I think I still believe in Google's plan, despite the shadow he casts on full autonomy. Zero deaths and full disclosure seems better than selective reports and the possibility for more fatal accidents.
Given there are now a decent number of investors on both sides, I imagine the debate will continue as to which is the best path: testing driver-assist in consumer vehicles, or testing full autonomy with company-controlled cars.
I'd like to hear what Mindell thinks would be the best hand-off from computer to driver in a driver-assist vehicle such as Tesla's. He answers a question about that generally here [2], but is mainly talking about airplanes where there is a chance for a smoother transition. How does one slow down time such that a hand off is possible in a vehicle during an adverse event that the car cannot handle itself? I feel that is the most pertinent question to today.
It seems like the car needs to see the adverse event coming, but since it is not ready to handle the event, the car is unlikely to be able to give the driver much extra time. By definition of adverse event, it would now seem to have been better if the driver had been paying complete attention the whole time.
When he talks of humans being richly involved in the environment, not sleeping in the trunk, I imagine an interface with several levels.
Example: you are driving down a highway that has full monitoring by overhead cameras. No obstacles are going to jump out at you, and the position of the car is mapped precisely in real time. The human controls via a joystick to select a lane position, and brakes are automatically applied if avoidance is needed.
Now the car leaves the highway, onto an isolated road. Before leaving the known environment, the interface switches. Now the lane position becomes a trajectory projection, and the driver must observe the road. The task of safely following the trajectory is automated. The interface adds random noise to the trajectory projection, to test that the driver is paying attention. Failure to demonstrate control results in the car pulling off to a rest stop.
Finally the car goes downtown. A well known road, but with many pedestrians and bicycles. The interface is based around identifying obstacles, with a joystick for max safe speed <-> stop. Trajectory is automated, with the driver choosing turns. The car scans for pedestrians, but if the driver does not acknowledge them, the car slows since the driver is not paying attention. If the car misses a pedestrian, the driver is aware and has been conditioned and trained to react in time.
All of these interfaces would require driver attention (minimal on the highway), but would significantly reduce fatigue associated with driving.
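A rough sketch of the trajectory-noise attention check described above (entirely hypothetical: the perturbation size, correction tolerance, response window and pull-over behaviour are invented for illustration):

```python
import random
import time

PERTURBATION_M = 0.2         # small lateral offset injected into the suggested trajectory
CORRECTION_TOLERANCE = 0.05  # metres of residual error that counts as "corrected"
MAX_RESPONSE_S = 3.0

def attention_check(residual_error, apply_perturbation, clear_perturbation, pull_over):
    """Inject one perturbation; if the driver doesn't steer it out in time, pull over."""
    apply_perturbation(random.choice([-PERTURBATION_M, PERTURBATION_M]))
    deadline = time.monotonic() + MAX_RESPONSE_S
    while time.monotonic() < deadline:
        if abs(residual_error()) < CORRECTION_TOLERANCE:  # driver steered it back out
            clear_perturbation()
            return True
        time.sleep(0.05)
    clear_perturbation()
    pull_over()  # failure to demonstrate control: car pulls off at a rest stop
    return False
```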
> All of these interfaces would require driver attention (minimal on the highway), but would significantly reduce fatigue associated with driving.
I see now, that makes sense. Thanks!
I did a little more hunting on Mindell and it seems his words are easily misinterpreted [1] [2]
It sounds like he is almost anti-Google in that article, though I imagine he's really trying to argue for something like you describe.
I think Google's on the right track towards such a system. Right now they're constraining themselves to having sensors inside the vehicle to see how far they can safely travel. If that does not work out, presumably they can invest in developing a living road like you describe to aid in tracking cars' movements.
If Google does start some sort of taxi network, presumably passengers will be able to enter destinations, or perhaps request to change lanes. Under Mindell's definition, perhaps that is not completely autonomous. I think it is full autonomy in the way Google thinks of it. Or maybe Mindell would call that autonomy and just admit he was wrong and that cars are the first human transporters that can be fully autonomous.
Thanks for your thoughts. I like your ideas about adding noise. You've obviously put some thought into this. I haven't seen others mention that elsewhere.
There is not a simple, one-size-fits-all solution to this. Ideally, if AP confidence drops to a critical level, what you want is for the AP to safely remove the car from the road. But how and when do you do that for every possible situation that can be encountered on the roads? At the end of the day the driver has to take responsibility and take over the controls when it asks.
But it is safer than ramming it into a lamp post or guard-rail.
OK, it is still a risk, and a bad accident may ensue. But most of the time, when there is a vehicle stopped on the road the traffic behind it simply stops as well, and in most places there is a hard shoulder you can park on.
It's not as if cars don't get stranded for other reasons (engine failure, driver incapacitation).
I'm saying it should gradually slow down as it gets more uncertain (just like you do in heavy rain as your visibility decreases), not that it should slam on the brakes with no warning.
If gentle deceleration "causes" an accident then the problem is with the attitude and behaviour of the following vehicle, not with the deceleration.
That's the problem though: if its lost confidence, you might be gradually slowing down into a concrete median. You must ensure that the autopilot has enough confidence (while said confidence is diminishing quickly) to continue in the path it chooses safely between current speed and 0.
Pretend we're in the car together. We're traveling at 70mph on a two lane road at night. I cover your eyes with my hands on a curvy road. Assure me we will stop in time without hitting something.
> [...] you might be gradually slowing down into a concrete median.
Which seems to be less of a problem than not gradually slowing down as you run into the concrete median.
Just as a human driver would, a road vehicle self-driving system needs to act sensibly to mitigate risk if its confidence in its ability to safely operate in the conditions it finds itself is reduced, until and unless a human driver has resumed active control.
I think we agree; it's simply a question of execution. Why didn't the vehicle slow down when it lost confidence? Was it software? Could the ultrasonic sensors and/or the Mobileye camera not see the concrete median at night?
This is going to keep happening until Tesla change their approach from a move-fast-and-break-things startup philosophy to sound engineering one that puts safety in design front-and-centre.
Traditional real-world engineers have been drilled in this for longer than living memory. When the software guys bring their software approaches to the real world, this is what happens.
2014 Model S got 5-star/82%[0] (where 100% is best) adult occupant protection on EuroNCAP. For comparison, the smaller, lighter, cheaper 2015 Honda Jazz got 5-star/93%[1]
Other cars are significantly better than this [2], e.g. the 2016 Alfa Romeo Giulia 5-star/98%, 2016 Toyota Prius 5-star/92%, 2015 Volvo XC90 5-star/97%, 2014 Porsche Macan 5-star/88%. I could go on, but the Model S is not the safest car ever tested - it has approximately the same level of crashworthiness as the 2014 Smart Fortwo (4-star/82%) [3] according to EuroNCAP when looking at adult occupant safety.
Cars have come a long way in the past decade or two, though - it's amazing to see "small" cars like the Jazz not even get a cracked windscreen when basically driving into a wall at 40mph. For comparison, check out the 2000 Citroen Saxo folding up like a tin can... http://www.euroncap.com/en/ratings-rewards/latest-safety-rat...
The differences between the Tesla and the Jazz are a bit strange. The main difference in score between the two appears to be the side impact from a pole - which is all 'good' for the Jazz, but it was 'weak' for the Tesla in the body section. But if you look at the two videos of the cars, after the pole crash, Tesla is almost completely intact (29 kph) [1] and the Jazz is half destroyed (32 kph) [2]. The Jazz impact was 10% faster - but the difference in damage is remarkable.
It's also worth noting that whereas the child rating is higher for the Jazz, the actual "Crash Test Performance" was rated Good for the Tesla but only Adequate for the Jazz. The lower test score for the Tesla is because it doesn't have Isofix installed.
The point of a crash test is not to show that the vehicle is super rigid and impervious to damage. It is to show that a human inside the car is safe. To that end, the car should "crumple." It shouldn't remain 100% rigid, nor should it fall apart like tissue paper. It should absorb as much kinetic energy as possible so that the human doesn't have to.
I do know that (to the extent that it's true). I also know that not driving head-first into a truck that's turning across in front of the driver, or into a stationary car when the one in front of you swerves around it, is even safer than having the highest safety rating.
Don't get me wrong - Tesla engineers are talented engineers. The culture of the company however is not suited to building mass-production cars. It's trying to put the Silicon Valley philosophy of rushing towards the future ahead of taking measured, reliably proven steps.
You don't build buildings or car structures via an agile methodology using customer feedback. Waterfall-style approaches are used because hammering out the long tail of risk is important. Car software technology must necessarily be no different.
Yet there's little evidence that any other car company is holding their car-control software to any high standard. They don't release it (for peer review) because they regard it as valuable IP. Then, when forced to, as Toyota was in the case of the Camry, it's found to be the biggest pile of irresponsible spaghetti code imaginable.
So no, Tesla is not materially different in process. Perhaps in talent - Tesla has actual experienced programmers involved, instead of solely relying on junior folks for the bread-and-butter coding.
No, only my cultural experience with foreign engineers. Anybody over 25 still doing engineering/coding is considered a failure; their career path includes management early. All the engineers I work with from Japan, Singapore, and China are young and inexperienced.
And they are only allowed to report success up the chain. Management is fed a constant diet of sweet crap.
I won't argue with your assumptions because they fall in line with my (completely non-experience based) assumptions about that dev culture (and the fact that my Honda onboard app is such shit), but I had thought that something like autopilot would be outsourced to a specialty firm?
> Toyota has brought onboard the entirety of the workforce at Jaybridge Robotics, an artificial intelligence software firm based in Cambridge, Mass.
Obviously, it's extremely possible for experienced engineers to have their work be for naught when working for an incompetent overseer. And my intuition is that automotive engineering requires such a high amount of coupling and integrated behavior that you can't just outsource your AI to an expert firm and then tack it onto the finished car. But those are causes of failures that are separate from the aspect of auto software engineer culture that you mention, which could be incompetent for their own devices while not being the primary driver of failure when it comes to autopilot features.
Nobody reads manuals and follows instructions. I feel Tesla is going to learn this lesson the hard way.
From the article:
"We specifically advise against its use at high speeds on undivided roads," it said. Tesla states clearly in its owner's manual that drivers should stay alert and keep their hands on the wheel to avoid accidents when the Autopilot feature is engaged.
People have been trained to recognize the discrepancy between a product's by-the-book usage (which is often impractically stringent, as a CYA for the company) and its real world usage.
Tesla could easily have the "autopilot" turn off if no hands are detected on the wheel for, say, 10 seconds. Putting a blurb in the instruction manual about keeping your hands on the wheel but then allowing users to use it hands-free is broadcasting, "Hey, you don't really have to do this"
I don't know about most people but there are some things (like cars), that I treat as super-reliable machines that will protect me even if I do some dumb things. And most car systems are made like that. They assume that the driver is going to fuck up at some point and they try to handle that. It seems Tesla's auto-drive feature is not like that. It's like when Chrome has a silly bug, I just accept it as it is and try to work my way around it.
> I don't know about most people but there are some things (like cars), that I treat as super-reliable machines that will protect me even if I do some dumb things.
It still boggles my mind that people think this way. YOU put yourself in control of a potentially deadly vehicle. You choose to do so. What will your super-reliable car do if you run into a pedestrian?
"It shouldn't be hard."
It is.
"We shouldn't expect people to read the manual."
We should.
If they don't, and injure someone or themselves, they are being criminally negligent.
On the medical devices I've worked on, the risk analysis took into account that the users may not read the instructions for use or might be lacking in training. I'm sure that automotive is similar, which has had me wondering what the risk analysis looks like for the Teslas.
> Tesla is going to learn this lesson the hard way.
Yes. I hear you. If I, as a non-Tesla driving, regular motorist got grievously injured by a Tesla running autosteer, I bet I'd be able to choose from the nation's finest injury lawyers.
There's an old saying: your freedom to Tesla autosteer your vehicle ends at the tip of my bumper. (Or something like that.)
> "We specifically advise against its use at high speeds on undivided roads"
You don't have to advise; you could mandate. The car knows how fast it's going and where it is, and with a little help from its Internet connection, can know when it's on an undivided road.
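A sketch of what that mandate could look like (the map lookup and the speed cap are hypothetical; the point is only that the gating is technically trivial):

```python
MAX_UNDIVIDED_SPEED_MPS = 20.0   # illustrative cap for undivided roads

def autopilot_permitted(speed_mps, lat, lon, lookup_road_type):
    """Allow engagement only within the advised envelope (divided roads, or low speed)."""
    road = lookup_road_type(lat, lon)  # e.g. from onboard maps plus the car's connection
    if road == "divided_limited_access":
        return True
    return speed_mps <= MAX_UNDIVIDED_SPEED_MPS
```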
CoPilot? Someone who can completely take over for you when you're feeling tired? Compared to an autopilot, which requires pilot or skipper supervision, I think "copilot" would be a far worse term, if it weren't for the fact that the name doesn't really matter too much.
If this is really Tesla's line of thinking, they may want to rebrand that feature. Autopilot definitely suggests something more than helping you steer. I mean, when do you fail to steer in the first place? It's not exactly rocket science; the only use case for this is when the driver falls asleep, in which case the "autopilot" can catch the mistake, slow the car, start some heavy metal music and call the police for a fine to deter the driver from further sleepy driving.
Since that's not what it's doing at all, I think this is not how Tesla intends the feature to currently work.
> The relief which is afforded to the pilot by the use of the Automatic Pilot permits him to devote the greater part of his time to navigation, map reading, and the operation of his radio installation, whereas with manual control, his whole time is occupied with the actual flying of the aircraft.
That was the original intent of Autopilot: offload the mechanics of flying so that the pilots could concentrate on finding their destination even at night or in bad weather: conditions in which the autopilot was also immensely superior to a fallible and easily confused human, not the other way about as we see with systems such as Tesla's.
They didn't have to keep their hands on the controls, they could trust that the machine could handle everything except the one-in-one-million scenario.
"... and then we post blog entries, and our CEO retweets videos of people using our Autopilot exactly in the manner by which we 'specifically advise'."
Similar with the garage door incident - "Well, no-one said it should be used without full supervision! It needs to be watched!" ... like in the Tesla videos where the owner summons the vehicle as he's walking out the front door, or hits auto-park... and walks away.
In order to enable autopilot you must agree to the terms right on the display screen. Again it is possible for people to not read them (although they are pretty brief).
Tesla needs (or is going to be forced) to issue a recall soon. Putting "beta" technology in a car isn't OK when it puts drivers and others on the road in danger. This is especially problematic because people have a preconceived notion of what "autopilot" means, and just because Tesla says it's not ready for prime time and therefore you should "keep your hands on the wheel" doesn't make it OK to put into production.
But OTA updates can disable the feature, which is the essential safety response right now. Even if they wanted to recall and add sensors, I don't think that would make this system safe enough - the bigger issue is the human factor involved in a not-quite-good-enough autonomous vehicle.
The most likely reason I see for them disabling the feature in OTA would be if countries pass legislation forcing them to. In that case, the car owner would likely not have recourse like in the PS/Linux case since they were legally required to remove it.
To be honest, I'm really surprised this hasn't happened in EU countries. If you'd asked me 10 years ago whether the EU would allow its entire road system to be used for a public beta test of half-baked semi-automatic driving systems, I'd have said there was no way. Yet Tesla just went ahead and pushed the feature out in a firmware update, and no one really seems to have challenged them on it. I expect with this recent coverage, we'll see that change.
Not sure exactly to what you're referring, but if it involves hacking your Tesla to re-enable it,
a) that will vastly reduce the level of usage, and
b) the OTA update could very well completely remove the code from storage, making the job of hacking a much more involved process than just enabling a flag.
Sony is settling a class-action suit with people who bought PS3s while relying on the "Other OS" feature. So, it's not about technical ability to re-enable the Autopilot feature; it's about being liable to their customers who relied on the feature being present when they shelled out $70k+.
Ah. Indeed. I suspect that if they do disable this, though, it'll be by NTSB mandate, so they won't have much of a choice - similar to the VW thing. They might then have to deal with the resulting lawsuits, but I can see many scenarios where they don't have a chance to avoid them. Tesla did, after all, sell a product with an advertised feature that isn't ready for prime time.
They do actually, remember the titanium protective plate retrofit a year or so back on the Model S? There have also been various drive train issues and such.
IIRC the Model X already had a problem with the headrest that resulted in a full recall.
Musk is losing his mind. Blaming the user won't work for long with life-critical systems. Unless he expects backing from the NRA, putting complex and dangerous devices that can't escape dangers themselves into the hands of the average Joe will create incidents. And even if he's legally not to blame, I don't think he's going to sell cars for long with such images in the public mind.
I hope he's not putting too much hubris into the Autopilot thing and trying to push it at any cost. Soon there'll be so many corner cases that AP will be "best used while parked with nobody inside or outside the car, preferably with the engine stopped too".
The reason I liked Tesla/Musk before is that he pushed quality and safety unconditionally. I'm starting to think it was mostly a nerd honeymoon phase. Either revamp the logic (sensor recall and software) or disable it.
> "As road conditions became increasingly uncertain, the vehicle again alerted the driver to put his hands on the wheel," said Tesla. "He did not do so and shortly thereafter the vehicle collided with a post on the edge of the roadway."
This is the real danger of autopilot: not that it is unreliable, but that humans are lazy and assume rare events (like dangerous situations that result in a crash without intervention) won't happen to them.
I'm not sure autopilot is a good middle ground between assistance and full automation exactly because of what these accidents show. It basically makes the typical ride so uneventful and mind-numbingly boring drivers stop paying attention and become unable to react appropriately when a dangerous situation does occur.
Cruise control had similar problems: it reduces speeding-related accidents (because drivers are less likely to micro-optimize their speed) but it increases the risk of rear-ending because drivers don't have to stay as alert for the majority of the driving (and then miss those situations in which they need to react quickly).
Humans are not good at staying attentive for extended periods of time, especially when nothing ever happens. Self-driving cars can fix this by removing the need for human intervention, but autopilot seems like one of those technologies that should be safer in theory but ultimately fails to consider the human factor.
> "No force was detected on the steering wheel for over two minutes after autosteer was engaged," said Tesla, which added that it can detect even a very small amount of force, such as one hand resting on the wheel."
Why doesn't the "autopilot" turn off if no hands are detected on the wheel?
Well the whole autopilot feature is silly if you need to keep your hands on the wheel. Then you might as well steer yourself, or am I missing something? (I don't know details about this feature, I just know it steers automatically on highways.)
Until full driver attention is no longer needed, enabling "autopilot" and saying in the manual "PS. you still need to stay as alert as you normally would and keep your hands on the wheel" (i.e. no benefits from having this feature) is just asking for accidents.
Imagine, as you suggest, it does turn off when there's no hand on the wheel. People are just going to rest a hand on it and still not pay attention. What's the next step, tracking people's eyes? Or shall we just not enable autopilot in the first place, if it can't do its only job?
The problem is that Autopilot isn't really a self-driving system, even if it's marketed or popularized as such, so it shouldn't be treated as that kind of system. There are other cars that have the same or analogous technology (Lane Assist) but have much smaller time frames before they require the driver to put their hands back on the wheel.
I think the value of "autopilot" for me would come in increased awareness when my attention is fading. When I'm coming home at 2AM from a gig and the road is long, straight, flat, and boring, my attention wanders. Sure, I start out strong, but soon my eyes just don't focus correctly anymore. I roll down the windows and turn up the radio to stay awake but sometimes I don't realize how I got from here to there.
That 30 minute trip could end in disaster or, using something like "autopilot", it could end with me pulling into my garage without issue. I tend to think that it will be very useful in these types of situations.
> Sure, I start out strong, but soon my eyes just don't focus correctly anymore. I roll down the windows and turn up the radio to stay awake but sometimes I don't realize how I got from here to there.
I understand what you're saying, and that it is unlikely that something will happen, but Tesla still recommends full attention in this scenario. You need to be fully attentive at all times when driving, even with an assisted driving system. Tesla states this clearly in their manual. If something were to come across the road that the car were not prepared for, it may beep at you, or it may not.
This seems to be what happened to the Florida driver. He likely did not think he was abusing the system. He probably crossed that road hundreds of times before on autopilot as part of a commute. And, neither the driver nor the system pushed the brake.
If I were you and got into an accident, I know afterwards my brain would search for a reason why this happened and how I can help it not happen again both to me and to others. The solution is to not drive when tired, and to use the system as designed. An additional thing we can do is put in a few measures to check that the driver is still alert, like requiring hands on the wheel at more frequent intervals.
You have to pass a medical examination to get a driving license. Driving while medically incapacitated is irresponsible and illegal the same as driving drunk or on drugs. Unless we have a real self driving car, which Tesla is clearly not.
> Driving while medically incapacitated is irresponsible
Sane people don't set out to drive while medically incapacitated. That doesn't mean it can't happen while on the road (heart attack, TIA, severe allergic reaction to e.g. a bug).
Autonomous vehicles will require engineering that's more like NASA and less like a weekend hackathon. Given Musk's association with SpaceX, I was hopeful we'd get the former, but it's starting to look like an autopilot solution was released half-baked.
The current SV engineering climate -- youth beats experience, move fast and break things, release early and often -- works great for photo sharing apps but not great when lives are at stake.
Actually I think SpaceX and NASA do think quite differently about things. NASA has to engineer toward maintaining their national mandate. Every launch is a PR game targeting the American public and Congress. SpaceX instead has to worry about keeping customers happy. I think SpaceX has a simpler job there. Sourcing, manufacturing, deployment, insurance, etc. all seem simpler for a private company. They've shown that they're not shy about breaking their toys once the customer's payload is off, if they think they have a good chance of learning something.
SpaceX hasn't had people in their vehicles yet. Maybe they'll be more careful when they do. Or maybe not. I can't find the quote right now, but I think it was Sagan who said that part of what makes exploration so captivating is that it is dangerous; maybe we shouldn't make safety the number one priority because it might slacken our awe.
> The driver in Montana was headed from Seattle to Yellowstone National Park when he crashed on a two-lane highway near Cardwell. ..."It's a winding road going through a canyon, with no shoulder"
From the Tesla Owner's Manual:
> Traffic-Aware Cruise Control is primarily intended for driving on dry, straight roads, such as highways and freeways.
...
> Autosteer is particularly unlikely to operate as intended in the following situations:
> • When driving on hills.
> • The road has sharp curves or is excessively rough.
Can autopilot not tell a straight road from a curvy road? If it can, why does it apply throttle when it detects that it is driving on a type of road it cannot handle?
A driver is still supposed to remain alert and keep their hands on the wheel; this allows Tesla to collect telemetry on autopilot so it CAN work on improving the software to handle these conditions better. I think the flaw here is that the hands-on requirement is much more lenient than it should be. If it allowed the driver to keep their hands off for two full minutes (it is supposed to slow down if a driver isn't keeping their hands on the wheel), then that's a very poor design choice.
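For comparison, here's a minimal sketch of the kind of hands-off escalation policy the stricter systems are described as using. Every threshold and action below is an assumption for illustration, not Tesla's (or any other manufacturer's) actual parameters:

    # Hypothetical hands-off escalation policy -- all thresholds are illustrative.
    WARN_AFTER_S = 10    # visual warning (roughly the timing reported for other lane-assist systems)
    CHIME_AFTER_S = 15   # audible alert
    SLOW_AFTER_S = 25    # start reducing the set speed
    STOP_AFTER_S = 60    # hazards on, controlled stop

    def escalation_action(seconds_hands_off: float) -> str:
        """Map time since the last detected steering-wheel torque to an action."""
        if seconds_hands_off < WARN_AFTER_S:
            return "none"
        if seconds_hands_off < CHIME_AFTER_S:
            return "visual_warning"
        if seconds_hands_off < SLOW_AFTER_S:
            return "audible_alert"
        if seconds_hands_off < STOP_AFTER_S:
            return "reduce_speed"
        return "controlled_stop"

    for t in (5, 12, 20, 40, 120):
        print(t, "s hands-off ->", escalation_action(t))

Under a policy like this, the two minutes of hands-off time reported here would have ended in a controlled stop well before the crash.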
I don't think the car tells the user whether it can handle the road or not. It just sends alerts when it needs help.
Humans are the guinea pig for Tesla. They want this data of their system driving outside its normal bounds. Otherwise yes Tesla would disable autopilot on unapproved roads.
From the description of "winding road going through a canyon with no shoulder" and near Cardwell, I would say this is MT-2 [0]. I live in this region and have driven this road a few times; it is not a simple road that one can just set cruise control on and drive through. There are a few curves that have advisory speeds of 45 mph. I am aware that it is possible to travel well above an advisory speed and be fine, but the fact remains: this is a curvy road that requires modulation of speed in order to travel in a comfortable manner.
Hindsight is 20/20, no pun intended. Why was this shipped at all? I mean the self-driving features. This has now done way too much long-term damage and impacted all self-driving initiatives across the board. Just like they opened up their patents to benefit everyone everywhere by ushering the age of low/zero-emission vehicles they should have done the same for the self-driving features. This should have baked in the lab for another 3-5 years.
"It's a winding road going through a canyon, with no shoulder," Shope told CNNMoney. The driver told Shope the car was in Autopilot mode, traveling between 55 and 60 mph"
Wow, really?
This is not Autosteer's fault; it's pure negligence.
If the autopilot couldn't safely handle going 55-60mph, it should have slowed down to speeds that it could handle. If it couldn't safely handle those conditions, it should have safely slowed to a stop. Failing ungracefully is completely autopilot's (and Tesla's) fault.
55-60 may be too fast for some of the curves in this area. There are curves with an advisory speed of 45-mph. From the description of road and the area, I would guess it is MT-2 between Cardwell and Three Forks (I live in this area).
Stopping in 'a winding canyon with no shoulder' seems like a colossally bad idea. Mind you, trusting the badly named autopilot in this situation seems like an even worse idea, especially if you consider the recent death that every Tesla driver must know about at this point. To then overconfidently trust the AP in an environment that's already dangerous for a human driver seems crazy.
> "We specifically advise against its use at high speeds on undivided roads," it said. Tesla states clearly in its owner's manual that drivers should stay alert and keep their hands on the wheel to avoid accidents when the Autopilot feature is engaged.
Most people don't read the manual unless what they're doing actively doesn't work. Especially with press surrounding Autopilot saying it's basically the best self driving car and is completely trustworthy (even if they don't explicitly state that, it's still implied), it's not unthinkable that someone would completely trust the car and not worry about sensors. The car should be much more proactive at getting the user's attention, either by actively slowing the car down to some safe speed with cautions on and/or significantly lowering the time limit of taking their hand off the wheel before any indicators appear. I'm not sure about the former, but the latter is done in every other car that has this kind of technology (e.g. cars with Lane Assist) and the time is very low, around 10 seconds.
The most glaring flaw here is the name - Autopilot. It gives the driver the impression this is a true self-driving system. I have seen videos of people sleeping in their Tesla while Autopilot is engaged. A more appropriate name might be "co-pilot"
Ironically, Tesla, a company willing to invest long-term in its products, chose short-term revenue, in the form of its marketing team boasting of "autopilot" in its vehicles, in exchange for violating long-term public trust and responsibility.
Tesla is getting hundreds of millions of miles logged for R&D on their autopilot feature. It makes a lot of sense, business-wise, to launch this tech when they did.
It's clear from their PR response with the semi-truck accident that they see fatalities as an acceptable consequence as long as they can say they have fewer accidents per mile than 'un-assisted' drivers.
The question is: do we share their belief that this is ethical?
If the autopilot calls for the driver to be alert and place their hands on the wheel and the driver doesn't, surely the autopilot should stop the car.
I really think Tesla needs to make autopilot's "hands-on" requirement less lenient. The driver had his hands off the wheel for a full two minutes and the car didn't bark at him? I think (maybe I'm wrong, and the other non-fatal one did) this is the first incident where autopilot was involved with an inattentive driver, and probably the one that is going to put the screws to Tesla, unfortunately.
Didn't the car bark at him, though? The article says that Tesla claims the car repeatedly alerted the driver to take control of the vehicle and he didn't.
I think you are wrong that this is the first incident where autopilot was involved with an inattentive driver. In the fatal crash, responders found a portable DVD player inside the car and seemed confident it was in use at the time of the crash.
The driver's response is moot. The larger issue here is that if autopilot doesn't provide for a safe stop under its own control when it encounters a situation it can't handle, any vehicle on autopilot becomes a lethal missile if the driver has some kind of medical problem.
A traditional car won't get more than a few hundred meters if the driver on the freeway has a stroke, say. A car on autopilot could travel hundreds of kilometers and end up in a schoolyard at freeway speeds.
I hate to play Devil's advocate here, but a traditional car on cruise control will absolutely become a lethal missile if its driver has a stroke. I don't think automation makes that particular use case more dangerous.
> I hate to play Devil's advocate here, but a traditional car on cruise control will absolutely become a lethal missile if its driver has a stroke.
You're talking worst case scenario. This is a case where the driver was alive and chose to ignore warnings.
> I don't think automation makes that particular use case more dangerous.
It's more dangerous when the autopilot driver can easily misuse the system. Add a few requirements like hands on the wheel every few seconds before shut-off and you'll see this happen less frequently. Someone may still circumvent that measure. Nothing's perfect. There will be fewer people who abuse it.
Autopilot absolutely should start slowing the car down if the driver takes his hands off the wheel for more than a few seconds. Anything else is reckless.
I was, however, responding to the specific scenario of 'driver has a stroke --> self-driving car becomes an unguided missile' by pointing out that it is the exact same scenario as 'driver has a stroke --> regular car with cruise control becomes an unguided missile.'
Also going to play devil's advocate here. Sure the car only goes a few hundred metres, but a slight knock of the wheel as they pass out or a small corner in the road could easily send said car into a crowded cafe.
What is the chance that a Tesla fails to continue safely and eventually pull over after the driver carks it? 0.1%? 1%? I doubt anywhere near 99% of strokes while driving end with the car just slowing down for a hundred metres and stopping safely in the middle of the road. I'd be surprised if the majority do. Not that it's a common occurrence, but if I picture a typical stroke while driving alone, I'm picturing a car veering slowly off the road and hitting something at maybe 50-75% of the speed it was originally travelling.
Tesla's self-driving would have to be abysmally bad to not beat that, and given we're only discussing 3 accidents so far, I don't believe that to be the case.
I agree with your analysis of how far each vehicle will get - in fact that's why I say the car on autopilot is more dangerous. A conventional car with cruise control enabled will 'fail early' and crash in fairly short order (most likely running off the road into a crash barrier or something) if the driver stops controlling it. A car on autopilot won't crash for some undefined but probably long time, which means that when things do go pear-shaped, it's much more likely to be on the road and in a significantly different environment from where it was driving.
The problem is that you need not only "hands on" the wheel but "brain on" as well, which will turn autopilot driving into a more unpleasant experience than regular driving.
The real issue is whether or not "Autopilot" is safer than a human. If the number of deaths per distance travelled under similar road conditions is lower, then Tesla should be allowed to continue the way they are doing things. It's obvious to me that the way other systems do this (by slowing down if the user doesn't put their hands on the wheel frequently) leads to people not using the feature because it is less useful, which may, paradoxically, put a worse driver at the wheel.
Just the stats, ma'am. Are they safer or not? If so, then it's fine by me. What we gain from self-driving cars in terms of real world usage data is extremely valuable.
instead of aiming for "the same number or slightly less" accidents than a human, why not have humans give oversight to prevent autopilot errors, and autopilot assist where humans might make a mistake?
actually try to let both computers and people excel where they do best, and massively decrease road accidents?
That sounds good too. I think 3pt14159's implicit question was "should Tesla be allowed to continue doing this the way they want?", not "is Tesla taking the best of all possible approaches to car automation?"
Tesla should contact the design team that creates the manuals for Ikea to produce a purely graphical owner's manual clearly describing what the autopilot can and cannot do. Admittedly, a lot of the Tesla marketing seems to indicate otherwise, but the autopilot is just a very sophisticated lane and distance/speed assistant. It seems to handle the typical freeway very well, but does not promise to handle most other driving situations.
For example, it does not handle crossing traffic, nor stationary obstacles such as debris on the road. Off the freeway it is entirely the responsibility of the driver to keep the car safe; the autopilot cannot (by design) handle most of those situations.
The discussion of having the hands on the wheel is a bit of a red herring. Having the hands on the wheel limits the amount of distracting things you can do while driving, but the key point is paying attention to what is going on. Hands off on an empty highway, why not? But when a truck appears on the horizon with some potential to cross - better be prepared to take action. The same applies to small country roads, and especially when the autopilot starts to warn the driver.
>"It's a winding road going through a canyon, with no shoulder," Shope told CNNMoney. The driver told Shope the car was in Autopilot mode, traveling between 55 and 60 mph ...
in an SUV (Model X)... are people trying to win a Darwin Award?
I just really don't understand the whole self-driving car craze. What is so attractive about it? Why do people want it so badly? Personally, I like driving, rarely even use cruise control. I would never fully trust these auto-pilots. Ever. So what's the point? If I can't crawl into the backseat and take a nap and wake up at my destination, then what's the use?
I agree with these sentiments, but for slightly different reasons.
One of the main selling points is that commuters can save time (the focus on self driving cars, not trucks/rail). However, long commutes are mostly a constraint imposed by suburban sprawl and associated planning failures. The same applies to automated delivery mechanisms from shops, again of limited use in sufficiently dense communities. In such a scenario, this feels a lot like optimizing 1% of execution time in a program.
For large scale transport of goods or people, the same situation applies for the economics - the labor of the driver should be a small fraction of the operating costs.
There could be some safety benefits, but this remains uncertain. I found https://www.bloomberg.com/view/articles/2015-05-28/cars-will... quite interesting in this respect.
Seeing what happened with the invention of the automobile, I won't be surprised if self driving cars become the norm somehow. In the most extreme case there might be an outright ban on non autonomous cars at some point in the future. This is something I certainly don't look forward to.
Because there's driving and then there's driving. I like driving too, but when you're on the highway there's nothing rewarding about it; it's not really driving if you only go straight. The future is to lean back and do nothing, like in the Google car. Or like riding a little train on the road.
IMHO, autopilot means "vehicle drives by itself without my involvement"...it's not autopilot if my hands have to be on the wheel. It seems that this technology is Cruise Control v2.
Having the hands on the wheel only gives the car manufacturers leverage in denying there is any problem on their end.
> Do we not all realize that, per-mile-driven, Tesla's auto pilot is much safer than human driving?
Good question. It's not safer. Tesla's quoted statistic of one death per 94 million miles driven in the US is not as useful as it could be. That includes motorcycle deaths and roads and conditions in which the Tesla AP is not likely to be engaged. When you limit it to divided roads, the rate is more like 1 death per 150 million miles driven [1], which makes the Tesla less safe than your average American vehicle, and that still includes motorcycles.
I think it is. Given the number of Autopilot miles driven, you'd only expect one death so far, so the fact that we have one doesn't really tell us anything.
However, injuries from car crashes are (fortunately!) much more common - roughly 60 times as likely. So you'd expect to have about 60 Autopilot incidents resulting in injury by now. I don't really keep up with Tesla news but, based on the reaction to this story (where the guy wasn't even injured), I'm going to assume the actual number of injuries is tiny.
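To make that base-rate argument concrete, here's the back-of-envelope arithmetic, using only figures quoted in this thread (~100 million Autopilot miles, one US death per 94 million miles, one per ~150 million on divided highways, injuries roughly 60x as common as deaths). These are rough inputs, not authoritative data:

    # Back-of-envelope expectations from the figures quoted in this thread.
    autopilot_miles = 100e6          # ~100 million Autopilot miles claimed
    us_miles_per_death = 94e6        # US average: 1 death per 94 million miles
    divided_miles_per_death = 150e6  # rough rate for divided highways only
    injuries_per_death = 60          # injury crashes ~60x as common as deaths

    expected_deaths_us = autopilot_miles / us_miles_per_death            # ~1.06
    expected_deaths_divided = autopilot_miles / divided_miles_per_death  # ~0.67
    expected_injuries = expected_deaths_us * injuries_per_death          # ~64

    print(f"Expected deaths at the US average rate:      {expected_deaths_us:.2f}")
    print(f"Expected deaths at the divided-highway rate: {expected_deaths_divided:.2f}")
    print(f"Expected injury crashes at the US rate:      {expected_injuries:.0f}")

So one death is roughly what the general US rate predicts; the injury count, if Tesla ever published it, would be the statistic that actually tells us something.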
I think it's not safer based on existing data. Musk was pretty quick to tout that autopilot is safer over 100 million miles driven. It turns out his statistic was not as useful as it could have been. There was no mention of conditions in which autopilot is not likely to be engaged such as adverse weather or non-divided roads. Plus some deaths are from motorcycles, which, it seems, should somehow be excluded from the comparison.
On top of that, it might make more sense to compare the Tesla models' safety records to cars in a similar pricing class. Comparing a Tesla to a beater from 1990 isn't so useful. So there are different degrees of comparison. Tesla's headline-ready numbers do not tell the full story.
I don't feel badly about drawing different conclusions from the same data Tesla used in its response to the incident. I think they overlooked a few things. You're certainly free to intuit what you like.
> I don't really keep up with Tesla news but, based on the reaction to this story (where the guy wasn't even injured), I'm going to assume the actual number of injuries is tiny.
Tesla does not make this information available, so I don't think it is worth speculating. Effort would be better spent encouraging regulators to demand more detailed reports from companies that sell vehicles equipped with driver-assistance systems. Until all companies are required to report such information, Tesla is unlikely to do so. We can turn the screws on Tesla too, and I think they would be well served by lobbying for tighter regulations to create a level and safe playing field for car companies and drivers alike, but I wouldn't count on them to spend much extra effort holding themselves accountable in such a fashion. They're already having a tough time meeting their production demands, and Musk is still setting aggressive targets for complete autonomy that no other car company, including Google, is even close to. I think Musk has said full autonomy is 2 years out, whereas Google says it's between 3 and 30 years. Mobileye has a 5-10 year plan.
It sounds like you're willing to speculate/entertain assumptions (e.g. regarding miles of "relatable" road usage) when it harms Tesla, but not when it might help them.
Must every critical argument I make be balanced with something positive in order to demonstrate I'm not 100% against Tesla?
Tesla has definitely done great things. They're just not front and center in the news at the moment.
I think it's best to look at the data or issues being discussed rather than some hidden motive. When I discuss politics, I don't just say "I'm voting for Billy", I try to say, "I like Billy's position on X because Y". Same thing here.
It's not about balance, it's about you willing to make guesses regarding the bad things Tesla has done, and not make guesses about the potential good things.
This isn't a "let's give 50% time to the creationists" kind of thing, this is a "let's not stretch only one side of the truth" kind of thing.
You said the equivalent of, "I'm voting for X because of my speculation about Y but I refuse to speculate about Z as it does not fit my current argument."
> It's not about balance, it's about you willing to make guesses regarding the bad things Tesla has done, and not make guesses about the potential good things.
You're being pedantic. I can wager guesses about good things too. They're just not pertinent to the focus of this article. Constructive criticism, for me, is a positive thing.
> You said the equivalent of, "I'm voting for X because of my speculation about Y but I refuse to speculate about Z as it does not fit my current argument."
I gave some raw analysis of the statistics drawn, further here [1], and all you can do is complain that I am somehow voting against Tesla. I never said any such thing. I said we should demand more transparency from them about accident data. I made some further speculation about Tesla's likelihood to do this on their own because it supports the argument that we should be part of this conversation.
To expand on that, I think we should ask this of all companies that enable driver-assist. Currently, California requires monthly reports on accidents involving fully autonomous vehicles. I think it makes sense to do that for every car [2].
If you think Tesla and other companies will become more transparent and you don't feel the need to do anything, great, that's your prerogative.
But is it safer than the BMW system that does the exact same thing?
I also seriously doubt it's much safer than driving a normal car in the same situation. Musk threw out a bullshit comparison with general vehicle fatality stats. But Autopilot is used in situations that are safer than average travel - highway driving in good weather. Most deaths happen at intersections, not on highways.
I think lawsuits and government intervention will kill self-driving cars for the next decade or two. Even if autonomous cars are 10x better, that's still hundreds of deaths per year that will be attributed to a half-dozen autonomous car makers' software.
For folks apologizing for Tesla: people ignore instructions and prompts from their computers or gadgets. That's not driver fault: it's a part of the design problem that engineers have to deal with. As an engineer, you take the world as you find it.
I had a professor in college who specialized in airplane cockpit design. Her philosophy was that everything is a design failure. If the pilot pushes the wrong button, the design must be fixed to keep that from happening. It would seem to me that principle applies with even more force when we're talking about consumer products operated by random people rather than trained pilots.
Google has a completely different strategy. They're aiming for 100% automation via a more gradual testing process before making any sort of automation available to consumers.
I believe their thought is that the closer we get to complete automation, the less likely a driver is to remain aware. We're only at roughly 10% automation right now and drivers are already taking their eyes off the road. When we get to 90%, humans will have an even tougher time retaining attentiveness. Watch their talks to be sure.
Google's self driving car group gives some awesome transparency reports every month, including details about every accident [1]. It's like they're ready to become their own company.
Fortune had a good article critiquing Tesla's strategy vs. Google's and other car companies' [2]. The author says Tesla is being defensive and resistant to public critique. Other companies expect pushback from the public and incorporate that into their product offerings.
>Google's self driving car group gives some awesome transparency reports every month, including details about every accident
Google isn't doing this out of the goodness of their heart. They have always been required by law to send autonomous vehicle accident reports to the DMV, which are made available to the public on the DMV's website.
So the transparency itself is mandatory. Google merely chose to control the message and get free PR by publishing an abbreviated summary on their website too. Judging by this comment, it's working!
> So the transparency itself is mandatory. Google merely chose to control the message and get free PR by publishing an abbreviated summary on their website too. Judging by your comment, it's working!
Hahaha I guess so! :-D
There should be a similar requirement for transparency of driver-assist vehicles. Clearly, the companies are not giving this up themselves until they're all forced to do it at once.
This is really getting to a point where it can only hurt Google. Just look at the word choice in the article:
>The crashes are calling the safety of such automatic driving features into question, just as they're being incorporated into more and more cars on the road.
Tesla's failure here harms the entire concept, not just their own brand. The average person isn't going to distinguish between Tesla's inappropriately named 'autopilot', which is not 100% automatic, and Google's, which is.
"It's a winding road going through a canyon, with no shoulder"
Doesn't sound like the recommended use case for autopilot? I thought you were only supposed to use it on a limited-access throughway?
Anyway, good to hear no one died this time, but both of these accidents do sound like there's a lot of user culpability involved. It seems like it would be reasonable on Tesla's end, since all Teslas know where they are, to limit autopilot to only roadways where it is reasonable to use it.
Does the autopilot routinely alert for short-duration events where it's confused but the situation passes quickly and it's not a problem, or is there a very low false-positive rate, such that an alert is pretty much a guarantee that you'll need to take over?
Basically, does the system train you to ignore most alerts because of a high false positive ratio?
"It's a winding road going through a canyon, with no shoulder," Shope told CNNMoney. The driver told Shope the car was in Autopilot mode, traveling between 55 and 60 mph when it veered to the right and hit a series of wooden stakes on the side of the road.
Thank goodness no one was hurt.
But I have to ask, what was the driver thinking? Autopilot late at night on a winding mountain road with no shoulder?
A question for Tesla: Why isn't Autopilot disabled when it detects dangerous conditions? TFA said it alerted the driver to put hands on wheel. If there is no response, what is the protocol? At what point will the vehicle attempt to pull over and stop (for instance, in the event of a medical emergency or conditions are simply too dangerous)?
What I don't understand is why Tesla doesn't whitelist roads where this feature can be used, since the manual sets very specific conditions. Especially given that the feature is currently enabled, they must know which roads work with high confidence values.
Or assume they wanted to do the right thing from the start, and they don't have this dataset. It's not as if we don't have OpenStreetMap (or as if they couldn't buy a commercial dataset) in which you can easily filter out straight highways.
With this it could probably even work without being attentive as a driver, which would be a killer feature over the current state of affairs (currently it has no benefits whatsoever, if you follow the instructions, which is why probably almost nobody does).
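To illustrate how low the bar is for that kind of whitelist, here's a rough sketch that pulls limited-access highway segments from OpenStreetMap via the public Overpass API. The bounding box (roughly around Cardwell, MT) and the use of the highway=motorway tag as the whitelist criterion are assumptions for the example; a real deployment would need a far more carefully validated dataset:

    # Sketch: fetch limited-access highway segments from OpenStreetMap via Overpass.
    # The tag choice (highway=motorway) and bounding box are illustrative only.
    import requests

    OVERPASS_URL = "https://overpass-api.de/api/interpreter"

    # Bounding box (south, west, north, east), roughly around Cardwell, MT.
    south, west, north, east = 45.7, -112.2, 46.0, -111.7

    query = f"""
    [out:json][timeout:60];
    way["highway"="motorway"]({south},{west},{north},{east});
    out geom;
    """

    resp = requests.post(OVERPASS_URL, data={"data": query})
    resp.raise_for_status()
    segments = resp.json().get("elements", [])

    print(f"Found {len(segments)} motorway segments in the box.")
    # A whitelist policy could then be: allow autosteer only while the car is
    # map-matched to one of these segments (plus curvature and speed checks).

Even a crude filter like this would presumably exclude a two-lane canyon road like MT-2 while keeping the nearby interstate.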
Why not have Autopilot running and monitoring the sensors to learn the road, but not have it actually take control of the vehicle? Even if the driver is manually in control the Autopilot system could still be learning.
Tesla may have begun by doing that. Recall that last October autopilot was introduced "over the air", so, customers who already had vehicles could use it immediately. Presumably, Tesla was collecting data up until that point.
I imagine Tesla is generating more sales with a usable autopilot. It generated quite a bit of media hype and they are having trouble keeping up with production demands.
So, Tesla chose to introduce this feature to increase sales and get more data on how drivers interact with the system. It seems excessively risky to me. I'm not building a self-driving system myself, nor running a huge company, so I can't say much about what it takes to do that. I can say I would not like to drive on the road next to a vehicle that can unexpectedly turn its own wheel, forcing its driver to correct it at a moment's notice.
That's not really why this is getting attention. The core issue here is that Tesla is shipping a product called "autopilot", which gives the expectation that it is in fact an autopilot. In the background, people are hearing that autonomous cars are "3 years away!!!" so the expectation is that the Tesla Autopilot feature is the real deal.
Thus you have people watching movies, falling asleep, or driving on dangerous roads using a technology that nowhere near delivers what the name promises. The result is people are dying and getting hurt by an immature technology being pushed too hard by an irresponsible company.
> The core issue here is that Tesla is shipping a product called "autopilot", which gives the expectation that it is in fact an autopilot.
An autopilot system is defined as a system that assists, but does not replace, the human operator of a vehicle. Therefore Tesla's system is, in fact, an autopilot.
From wiki:
"An autopilot is a system used to control the trajectory of a vehicle without constant 'hands-on' control by a human operator being required."
That's the clear opposite of what Tesla's manual suggests. If it requires hands on the wheel and constant attention, call it what it is - a driver-assistance system, not an autopilot.
The actual implementation only requires periodic, not constant, hands-on-wheel. Although it's in the driver's interest to pay attention constantly, as evident from this accident.
Tesla has gone out of their way to get more and more attention. For a company that sells only tens of thousands of cars it's pretty amazing. Until these recent crashes the majority of that press has been very positive.
The downside of this is that when you get bad publicity it's going to be much bigger as well.
It was exactly the same with previous crashes. Electric vehicles are dangerous etc. etc. People would read one or two reports and jump to the conclusion that the cars were not safe.
The autopilot feature seems like one that should be taken through rigorous testing approved by the appropriate regulatory structure before being allowed out on the roads. The fact that Tesla has been able to simply update firmware to include this experimental new feature on their vehicles as they please feels very "wild west" and as much as I don't tend to prefer regulation, this feels like the proper place for it. By the way, this has been my feeling on the matter before any autopilot crashes were reported.
The real problem is: how do you actually design a test that will catch these issues? If you have 3 crashes in millions of miles, you are pretty unlikely to find those errors during testing.
I wish Tesla was more open about the way they actually test their Autopilot software.
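To put a number on how hard that is: if the failure mode you care about shows up on the order of once per tens of millions of miles, the statistical "rule of three" says you need roughly three times that many failure-free test miles before you can claim, with ~95% confidence, that the rate is at least that low. A quick sketch, with the per-mile rates purely illustrative:

    # How many failure-free test miles are needed to bound a rare failure rate?
    # Rule of three: zero failures in n trials bounds the rate at ~3/n (95% conf.).
    # The rates below are illustrative, not measured.
    import math

    def miles_needed(miles_per_failure: float, confidence: float = 0.95) -> float:
        """Failure-free miles needed to bound the rate at 1 per miles_per_failure."""
        p = 1.0 / miles_per_failure
        return math.log(1.0 - confidence) / math.log(1.0 - p)

    for mpf in (1e6, 10e6, 94e6):
        print(f"1 failure per {mpf:.0e} miles -> ~{miles_needed(mpf):,.0f} test miles")

Which is why structured track testing and simulation matter: you can't brute-force confidence about rare events with road miles alone.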
While automated driving is a relatively new field, the science and data of traffic conditions and accidents is quite deep. You don't have to blindly wait until your beta users get into accidents. In the case of the first fatality, how is it possible that Tesla engineers were unaware of a situation in which a lightly-colored vehicle might show up on days in which the sky is bright?
Of course they were aware of that situation. They chose to deprioritize that to reduce the incidence of false positives that came from overhead highway signs. Nothing wrong with attempting to squash false positives, which can lead to deadly situations themselves. But there was nothing preventing Tesla from doing adequate testing to determine what unintended consequences their software modifications would entail. They don't have to wait for someone to get decapitated to realize that there's a trade off in reducing false positives from overhead signs.
We'll just have to see how liability laws and juries treat these technologies. We may well see a situation where thousands of lives could be saved from automation, but any accident leads to millions in lawsuits and the technology is killed. Hopefully this isn't the outcome.
Airbags occasionally kill or injure people also despite on balance saving lives. So there is precedent for this sort of math, but this is a very different kind of technology and not an ordinary engineering tradeoff.
Takata airbags have been in the news the last several years because at least two people have died and more than a hundred have been injured.
I've heard commenters opine that this "could mean the end of Takata as a company." But I haven't heard anyone say this "could mean the end of airbags as a feature in cars."
I guess what the Tesla autopilot needs is a lot of good publicity every time it avoids an accident which a human driver would otherwise have actually caused.
But where would anyone get the counterfactual for those cases? Does anyone even have information about that?
> I guess what the Tesla autopilot needs is a lot of good publicity every time it avoids an accident which a human driver would otherwise have actually caused.
Sounds like a plan! Let's start right now. Here's an example of the autopilot dodging a reckless driver and avoiding a collision that many human drivers wouldn't have avoided: https://youtu.be/9I5rraWJq6E
That's a video by the guy who died. Some have argued he could have been more aware of the incoming truck, given the short amount of time it had to make its move from left to right. Also, the camera was positioned farther forward than the driver's eyes.
> I guess what the Tesla autopilot needs is a lot of good publicity every time it avoids an accident which a human driver would otherwise have actually caused.
Even if that happened, people will cling to the negative news much more than the positive news, aka Negativity Bias. Even if Tesla has a stellar record overall, one bad incident, especially with a somewhat unknown technology, will be much more in people's minds than the countless positive incidents.
Even before Takata, airbags were dangerous. US regulations require airbags to arrest unbelted occupants. This means they are larger and more explosive than those that just take the edge off for an otherwise restrained occupant (e.g. in Europe).
When is Tesla going to stop blaming the drivers? Whether it's their fault or not, it looks bad.
To be honest I don't feel comfortable with that kind of data collection anyway. I'll never even think about getting a Tesla until they give us an option to turn their tracking systems off. It's creepy and unnecessary.
Looking at the recent crash reports (driver watching Harry Potter, driver refusing to take over in winding road), it seems that it is just a matter of time for a news headline about a Tesla crash report with nobody on the front seats and a couple making out on the back seats.
What I find just as disturbing is the highly conflicted M&A scam he is running using SolarCity. The SolarCity scam exposes him as a dishonest scamster.
That one is crazy. No way that goes through. His job as CEO of SolarCity is to shop around for the best price from other acquirers. Michael Dell did this when he combined two of his companies and still got fined for it.
Maybe I'm drinking the Fortune Kool-Aid. Their article seemed logical to me [1].
I agree completely. That you are gathering data that may help others in the future does not absolve you of deliberately setting up dangerous situations in order to get that data.
To maintain speed up a hill, cruise control applies gas to accelerate the car against the increased component of gravity. This is to maintain a steady speed, but in a geared car (i.e. no CVT), what sometimes happens is that the car starts to lose speed as the revs in top gear drop out of the power band, and then it downshifts and hits the gas hard to accelerate and recover the lost speed.
Meanwhile, many humans who are NOT using cruise control do the opposite: they don't push their foot enough to counteract gravity and force a downshift, and just accept that they lose speed going up a hill.
If the traffic is dense enough, what can happen is that all of a sudden your cruise control is downshifting and jumping forward just as you are overtaking a car that is itself losing speed. Potential for a rear-end collision at highway speed.
Of course this isn't an issue if cruise control is only used in very low densities of traffic. But unfortunately it seems to be used wherever people feel like their feet need a break.
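The "increased component of gravity" is easy to quantify, which is what makes the downshift-and-surge behavior so predictable. A rough sketch of the extra power cruise control has to summon on a grade (vehicle mass, speed, and grades are illustrative values, not any specific car):

    # Extra power needed to hold speed on a grade: P = m * g * v * sin(theta).
    # Mass, speed, and grades below are illustrative values.
    import math

    m = 1800.0              # vehicle mass, kg
    v = 110 * 1000 / 3600   # 110 km/h in m/s
    g = 9.81                # m/s^2

    for grade_pct in (0, 3, 6):
        theta = math.atan(grade_pct / 100.0)
        extra_kw = m * g * v * math.sin(theta) / 1000.0
        print(f"{grade_pct}% grade: ~{extra_kw:.0f} kW extra just to hold speed")

That's roughly 30 kW extra on a 6% grade, on top of rolling and air resistance - enough to pull a geared car out of its cruising gear, which is where the hard downshift and surge come from.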
Feet aren't on the pedals. Which is slightly more dangerous than feet on the pedals. Kinda like hands on the wheel is less dangerous than hands off the wheel (at least until automation becomes better at driving than humans).
> The problem I see is, Tesla is the only one actively testing autonomous driving.
And that is the problem right there. Tesla is "testing" a feature that is being shipped to consumers in a finished product. They are providing the most minimal of safety training regarding the feature, and not even enforcing simple technological measures to make the car safer in this mode (e.g. making sure your hand is on the wheel - they already have the sensors in place to detect this, yet they do nothing with that data).
So essentially, the users are beta testers. They are testing the limits of the sensors. Looks like they don't work on white colored trucks blocking the highway. Sorry you were decapitated, we'll fix that ASAP though!
This is what happens when you bring the "work fast and break things" mentality to an area where "break things" means "people die".
To be clear, I am very pro autonomous-car technology. I am a robotics engineer. I've worked on these systems. But this is exactly the wrong way to bring them to market. As more people die due to Tesla's autopilot not being a true autopilot, it will color the public's view and lead them to reject this technology before it's even mature enough to save lives.
> The problem I see is, Tesla is the only one actively testing autonomous driving.
No, they aren't.
They may be the only ones using consumer-operated vehicles as their test platform for the particular degree of automation that they are testing, but there may be very good, safety-related reasons why other people aren't doing that.
There are tens of thousands of people either walking or driving the streets of Mountain View and Sunnyvale where Google is testing its self-driving cars. So, the members of the public are at risk.
But, I understand that you mean behind the wheel. Which is funny, because the Google cars do not have a wheel. As I'm sure you may know, instead of a traditional steering wheel they have a horizontally mounted, valve-style wheel with a handle. It looks a little bizarre when you first see it, but I would imagine it could be a safer, smoother alternative to the current standard steering wheel.
Unfortunately, I couldn't quickly provide an image; I've seen it multiple times on my walk to and from work.
> There are tens of thousands of people either walking or driving the streets of Mountain View and Sunnyvale where Google is testing its self-driving cars. So, the members of the public are at risk.
There is a huge difference between Google operating a handful of cars in certain areas and Tesla selling 10-20 thousand vehicles per month that can be operated by untrained consumers anywhere in the US.
Tesla is using its customers as data-generating guinea pigs. In return, Tesla may become a guinea pig itself by showing other companies how not to progress towards autonomous vehicles.
Licensed drivers. Drivers who have a license that indicates that they will take responsibility for the actions of any vehicle they control and understand how to operate any features which may put themselves or others at risk.
> Drivers who have a license that indicates that they will take responsibility
The fact that customers are licensed does not mean the seller is free from any and all regulation. Gun owners are licensed, and guns are required to have safety switches. Cars have a long history of being regulated [1]
Basically, anything that can contribute to deaths is going to be monitored closely by consumer protection bureaus, and will probably be heavily regulated.
I didn't mean to imply otherwise. Tesla probably could do a much better job of informing its customers what they are getting into, because they certainly aren't in a position to figure it out themselves. And Tesla definitely can make their software better.
What I'm saying is, if your car injures someone while you sit in the driver seat, and you could have easily taken an easily foreseeable action to prevent it, then you are at least partially responsible.
> What I'm saying is, if your car injures someone while you sit in the driver seat, and you could have easily taken an easily foreseeable action to prevent it, then you are at least partially responsible.
Oh absolutely. No question there for me at this time.
There may yet be some class action or something that reveals some unjust action by the driver-assist companies. I agree that right now, all other things being perfect, if a driver of one of these cars is in an accident, then some person is responsible.
Also, Google doesn't drive its test cars at highway speeds with humans inside. The Google self-driving car's speed is limited to 25 mph, so even if it hit something, the potential for it to be lethal is very slim.
Google has two basic models of self-driving car. One is a modified standard car, and can drive at freeway speeds. If you drive around Mountain View, CA, you'll probably see some of them. The other is a little electric thing with no steering wheel limited to 25MPH. The Computer History Museum in Mountain View has one on display.
High-end cars from a few manufacturers have a significant fraction of Tesla's features, and they're getting plenty of test mileage. Tesla's feature set is still Level 2, though - at the high end of Level 2.
Who the hell expected zero crashes? Please, raise your hand and explain why you think any system that drives a car could ever be released and do so with a 100% success rate.
No one expected zero crashes, but any system like this should have dramatically fewer crashes than humans operating in similar situations (i.e. primarily highway driving) or it should not be deployed to the public.
Exactly. They are "testing autonomous driving" with human lives. The sickening part is Tesla advocates think this is alright in the name of "progress". I wonder how they would feel if they had a family member die to it. "Data points" right?
1. Tesla's "hands on wheel" enforcement is much weaker than their competitors. BMW, Volvo, and Mercedes have similar systems, but after a few seconds of hands-off-wheel, the vehicle will start to slow. Tesla allows minutes of hands-off time; one customer reports driving 50 miles without touching the wheel. Tesla is operating in what I've called the "deadly valley" - enough automation that the driver can zone out, but not enough to stay out of trouble.
The fundamental assumption that the driver can take over in an emergency may be bogus. Google's head of automatic driving recently announced that they tested with 140 drivers and rejected semi-automatic driving as unsafe. It takes seconds, not milliseconds, for the driver to recover situational awareness and take over.
2. Tesla's sensor suite is inadequate. They have one radar, at bumper height, one camera at windshield-top height, and some sonar sensors useful only during parking. Google's latest self-driving car has five 3D LIDAR scanners, plus radars and cameras. Volvo has multiple radars, one at windshield height, plus vision. A high-mounted radar would have prevented the collision with the semitrailer, and also would have prevented the parking accident where a Tesla in auto park hit beams projecting beyond the back of a truck.
Tesla is getting depth from motion vision, which is cheap but flaky. It cannot range a uniform surface.
3. Tesla's autopilot behavior after a crash is terrible. It doesn't stop. In the semitrailer crash, the vehicle continued under power for several hundred feet after sheering off the top of the Tesla driving under the semitrailer. Only when it hit a telephone pole did the car stop.
The Pennsylvania Turnpike crash is disturbing, because it's the case the "autopilot" is supposed to handle - divided limited-access highway under good conditions. The vehicle hit a guard rail on the side of the road. That may have been an system failure. Too soon to tell.
The NTSB, the air crash investigation people, have a team investigating Tesla. They're not an enforcement agency; they do intensive technical analysis. Tesla's design decisions are about to go under the microscope used on air crashes.
Tesla's spin control is backfiring. They tried to blame the driver. They're being sued by the family of the dead driver, and being investigated by the NHTSA (the recall people), and the NTSB. In comparison, when a Google self-driving car bumped a bus at 2mph, Google admitted fault, took the blame, and Urmson gave a talk at SXSW showing the data from the sensors and discussing how the self-driving car misjudged the likely behavior of the bus.