With three accidents on "autopilot" in a short period, the defects in Tesla's design are becoming clear.
1. Tesla's "hands on wheel" enforcement is much weaker than their competitors. BMW, Volvo, and Mercedes have similar systems, but after a few seconds of hands-off-wheel, the vehicle will start to slow. Tesla allows minutes of hands-off time; one customer reports driving 50 miles without touching the wheel. Tesla is operating in what I've called the "deadly valley" - enough automation that the driver can zone out, but not enough to stay out of trouble.
The fundamental assumption that the driver can take over in an emergency may be bogus. Google's head of self-driving cars recently announced that they tested with 140 drivers and rejected semi-automatic driving as unsafe. It takes seconds, not milliseconds, for the driver to recover situational awareness and take over.
2. Tesla's sensor suite is inadequate. They have one radar, at bumper height, one camera at windshield-top height, and some sonar sensors useful only during parking. Google's latest self-driving car has five 3D LIDAR scanners, plus radars and cameras. Volvo has multiple radars, one at windshield height, plus vision. A high-mounted radar would have prevented the collision with the semitrailer, and also would have prevented the parking accident where a Tesla in auto park hit beams projecting beyond the back of a truck.
Tesla is getting depth from motion vision, which is cheap but flaky. It cannot range a uniform surface.
3. Tesla's autopilot behavior after a crash is terrible. It doesn't stop. In the semitrailer crash, the vehicle continued under power for several hundred feet after the collision sheared off the top of the Tesla as it passed under the semitrailer. Only when it hit a telephone pole did the car stop.
The Pennsylvania Turnpike crash is disturbing, because it's the case the "autopilot" is supposed to handle - divided limited-access highway under good conditions. The vehicle hit a guard rail on the side of the road. That may have been a system failure. Too soon to tell.
The NTSB, the air crash investigation people, have a team investigating Tesla. They're not an enforcement agency; they do intensive technical analysis. Tesla's design decisions are about to go under the microscope used on air crashes.
Tesla's spin control is backfiring. They tried to blame the driver. They're being sued by the family of the dead driver, and being investigated by the NHTSA (the recall people), and the NTSB. In comparison, when a Google self-driving car bumped a bus at 2mph, Google admitted fault, took the blame, and Urmson gave a talk at SXSW showing the data from the sensors and discussing how the self-driving car misjudged the likely behavior of the bus.
> In comparison, when a Google self-driving car bumped a bus at 2mph, Google admitted fault, took the blame, and Urmson gave a talk at SXSW showing the data from the sensors and discussing how the self-driving car misjudged the likely behavior of the bus.
This is what we should be trumpeting. Responsibility, honesty, and transparency. Google's self driving car project reports [1] should be the standard. They're published monthly and they detail every crash (per requirement of CA state law; thanks schiffern).
Note: These reports are currently only required in California, and only for fully autonomous vehicles. So, Tesla and other driver-assist-capable car companies do not need to publicize accident reports in the same way. Tesla may be unlikely to start filing such reports until everyone is required to do so. We should demand this of all cars that are driver-assist during this "beta" phase.
Tesla owners and investors ought to be interested in a little more transparency at this point. It's a bit concerning that Tesla/Musk did not file an 8-K on the Florida accident. We learned about it from NHTSA almost two months after it happened.
>Google's self driving car project reports [1] should be the standard. They're published monthly and they detail every crash
Copying a reply from earlier...
Google isn't doing this out of the goodness of their heart. They have always been required by law to send autonomous vehicle accident reports to the DMV, which are made available to the public on the DMV's website.
So the transparency itself is mandatory. Google merely chose to control the message and get free PR by publishing an abbreviated summary on their website too. Judging by this comment, it's working!
The requirement does not seem to be there for driver-assist vehicles. Also, it's a state-level requirement. Looks like it's just in CA for now. The accidents detailed in the Google reports are all from California.
And, Tesla may be unwilling to start reporting driver-assist accident rates to the public unless the other driver-assist car companies are forced to do it too.
> The accidents detailed in the Google reports are all from California.
You could have opened the first report [1] on that page and noticed this is wrong.
> June 6, 2016: A Google prototype autonomous vehicle (Google AV) was traveling southbound on Berkman Dr. in Austin, TX in autonomous mode and was involved in a minor collision north of E 51st St. The other vehicle was approaching the Google AV from behind in an adjacent right turn-only lane. The other vehicle then crossed into the lane occupied by the Google AV and made slight contact with the side of our vehicle. The Google AV sustained a small scrape to the front right fender and the other vehicle had a scrape on its left rear quarter panel. There were no injuries reported at the scene.
I don't envy his position, or any other CEO or leader of a large group. It's a huge responsibility. You can face jail time for not sharing information. Usually only the most egregious violators see a cell, but it's still a scary prospect if you somehow got caught up in some other executive's scheme. At the end of the day, the CEO signs everything.
Edit: I see the downvote. Am I wrong? Is being a CEO all rainbows and sunshine? Maybe I got some detail wrong. Feel free to correct me.
The guy comes forward and says "I'm going to make an autopilot for the masses". He/we know it's tough to do. The guy releases his autopilot. It's not quite ready for prime time. Double problem: the guy goes into a tough business alone, and the guy makes stuff that's not 100% ready.
I like it when people try to do hard stuff, that's fine and maybe heroic. But they have to be prepared: if testing the whole thing must take 10 years to be safe and a gazillion dollars, then so be it. If it's too much to shoulder, then they should not enter the business.
> if testing the whole thing must take 10 years to be safe and a gazillion dollars, then so be it. If it's too much to shoulder, then they should not enter the business
That sounds like a succinct explanation for why we've been stuck in LEO since December 19, 1972.
We've got 7 billion+ humans on the surface of our planet. We lose about 1.8/s to various tragedies.
Most of which are of less import than helping automate something that's responsible for around 32,000 deaths in the US every year. Move fast, break people for a just cause, escape risk-averse sociotechnological local minima, and build a future where fewer people die.
I don't think the issue is that Tesla's cars are dangerous. The issue people are raising is that they pretend, at least through implications, that their cars can safely drive for you.
Tesla is also not doing any kind of super special research into self driving cars. The system their cars use is (afaik) an OEM component (provided by Mobileye) that powers the driver assist features of many other brands, too.
Instead of actually improving the technology they have chosen to lower the safety parameters in order to make it seem like the system is more capable than it actually is.
Safe is like 'secure'. It's a relative concept, and the level of safety we are willing to accept from products is something that should be communally decided and required by law.
We shouldn't go off expecting that someone else's idea of safety will jibe 100% with our own, and then blame them when someone else buys a product and is injured.
Should Tesla drive-assist deactivate and slow the car after 10 seconds without touching the wheel? Probably, but I won't blame them for not doing so if there is no regulatory framework to help them make a decision. It's certainly not obvious that 10 seconds is the limit and not 11, or 15, or 2 seconds.
Once again Tesla have made the discussion about the driver. They have put an auto-control system out in the wild that steers the car into hard obstacles at high speed. Beeps to warn the driver at some stage in the preceding interval do not make this control system's failure any more acceptable. It still misjudged the situation as acceptable to continue and it still drove into obstacles. Those are significant failures, and a blanket statement of responsibility going to the driver isn't going to cut it.
I agree; I think it's despicable that Tesla is acting in such a defensive manner.
--
"No force was detected on the steering wheel for over two minutes after autosteer was engaged," said Tesla, which added that it can detect even a very small amount of force, such as one hand resting on the wheel.
"As road conditions became increasingly uncertain, the vehicle again alerted the driver to put his hands on the wheel," said Tesla. "He did not do so and shortly thereafter the vehicle collided with a post on the edge of the roadway."
--
Bruh, slow the car down then. If you're storing information that points to the fact that the driver is not paying attention, and you're not doing anything about it, that's on you 100%.
We are in the midst of a transition period between autonomous and semi-autonomous cars, and blaming the driver for the fact that the car, in auto-steer mode, steered into obstacles, is counter-productive in the long term. You need to make auto-steer not work under these conditions. The driver will quickly figure out the limits of your system as their car decelerates with an annoying beeping sound.
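To make "decelerate instead of just beeping" concrete, here is a minimal sketch of the kind of escalation policy being argued for in this thread. The thresholds, names, and structure are my own invented illustration, not Tesla's (or anyone's) actual implementation; the only point is that warnings escalate into actions the driver cannot ignore.

```python
# Hypothetical hands-off-wheel escalation policy (illustrative only).
# Thresholds are arbitrary; the idea is that repeated ignored warnings
# escalate to deceleration and a controlled stop instead of repeating forever.

from dataclasses import dataclass

@dataclass
class EscalationPolicy:
    warn_after_s: float = 10.0        # first audible/visual warning
    decelerate_after_s: float = 20.0  # begin gradual slowdown, hazards on
    stop_after_s: float = 45.0        # controlled stop, lock out autosteer

    def action(self, hands_off_s: float) -> str:
        """Map continuous hands-off time to a required system action."""
        if hands_off_s < self.warn_after_s:
            return "normal"
        if hands_off_s < self.decelerate_after_s:
            return "warn"
        if hands_off_s < self.stop_after_s:
            return "decelerate_with_hazards"
        return "controlled_stop_and_lockout"

# A driver who ignores every warning still ends up slowing down.
policy = EscalationPolicy()
for t in (5, 15, 30, 60):
    print(t, policy.action(t))
```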
> "No force was detected on the steering wheel for over two minutes after autosteer was engaged," said Tesla, which added that it can detect even a very small amount of force, such as one hand resting on the wheel.
That would seem to me to be reason to steer the car to the shoulder and disengage the auto-pilot.
Tesla is trying to take too big a step here and blaming the driver(s) of these cars is really really bad PR.
Isn't that one of the reasons there is still a driver? Cruise control will drive you right off of the road until you crash with no awareness of the shoulder or other traffic or anything. But it is all over the place.
Right. When the AP detected road conditions that it couldn't handle, it should have reduced vehicle speed as much as necessary for handling them. Yes, it was dumb to trust AP at 50-60 mph on a two-lane canyon road. But it was also dumb to ship AP that can't fail gracefully.
I do admit, however, that slowing down properly can be hard. It can even be that hitting an inside post is the best option. Still, I doubt that consumer AP would have put the vehicle into such an extreme situation.
It's certainly a major fault of Tesla's - if you know it's not safe, you've got no excuse to keep going.
But the driver shouldn't be allowed to escape blame either. When your car is repeatedly telling you that you need to take control, then you should be taking control.
I've worked on highly safety related (classification ASIL D) automotive systems. There are rules about how to think about the driver.
The driver is not considered a reliable component of such a system, and must not be used to guide decisions.
Yes, the driver clearly was a fool, but system design should not have taken his decision into account, and come to its own conclusion for a safe reaction (e.g., stopping the car).
I don't disagree - like I say, none of my comment was intended to let Tesla off the hook in any way.
My point was simply that the drivers who use semi-automatic cars and choose to ignore warnings to retake control need to be responsible for their actions as well.
Exactly! In this case the driver may have been at fault.
What happens next time, when the driver has a stroke?
"Well, he didn't put his hand on the wheel when the car told him to, it's not our fault!"
Like you say, slow it down. Sure, the driver of a non-autonomous vehicle who has a stroke might be in a whole world of trouble, but isn't (semi-) autonomy meant to be a _better_ system?
Suppose I'm driving a "normal" car. I have my hands on the wheel, but I am reading a book. I hit a tree.
Is the auto-maker responsible? They advertised the car as having cruise control.
Call me old, but I remember similar hysteria when cruise control systems started appearing. People honestly thought it would be like autopilot or better - and ploughed into highway dividers, forests, other traffic, you name it, at 55mph.
Cruise control is supposed to control the speed of the car. If you set it for 55 and the car accelerated to 90 it would be considered dangerous and defective.
The steering control is supposed to control the steering of the car. If you set it for "road" and it decides to steer for "hard fixed objects" how should we regard it?
Why would the auto-maker be responsible for your distracted driving, and what does it have to do with cruise control?
Some of the uproar about these incidents is related to the fact that Tesla marketing is implying far more automation and safety here than the technology can actually provide. And apparently it can cost lives.
Exactly, they shouldn't be responsible for distracted driving. Autopilot can certainly get better, but you're being willfully ignorant if you don't believe that these Tesla drivers with their hands off the wheel fall under distracted driving. Could autopilot improve? Yes. Is the driver at fault here for not staying alert and responsible for the car they chose to drive? Absolutely.
Please show me a source for the claim that they're "implying far more automation" as everything I've ever seen from Tesla on this says hey dummy, don't stop paying attention or take your hands off the wheel.
It's normal to exceed the speed limit, e.g. when passing. No absolute limit can be arbitrarily set without tying the pilot's hands/limiting their options, which could make the roads more dangerous.
It may be common to exceed the speed limit when passing, but it is illegal. The setting of these speed limits is meant to be objective, and maybe is sometimes subjective, but it is wrong to call them arbitrary.
If the car autopilot software permits faster than legal speed driving, the manufacturer is taking on liability in the case of speed related accidents and damage.
It costs lives because folks are toying with this new thing. If they'd just use it as intended, they'd achieve the higher levels of safety demonstrated in testing.
Yeah, I just blamed the victims for misusing autopilot. Like anything else, it's a tool for a certain purpose. It's not foolproof. Testing its limits can get you hurt, just like a power saw.
It is about the driver: you turn on a test feature (it warns you it's a test feature) and ignore all the procedures you must follow to use this not-yet-production feature, it's your fault.
It bugs me how quick Tesla is to blame their drivers. That autopark crash, for example, was clearly the result of a software design flaw. Tesla pushed an update where to activate autopark all you had to do was double tap the park button and exit the vehicle. That's it. No other positive confirmation. No holding the key fob required. The vehicle would start moving without a driver inside.
You'd think someone would have realized that a double tap could accidentally happen while trying to single tap.
The reporting on this was terrible, but there's a YouTube video illustrating the design flaw.[1]
Thing is, someone at Tesla must know this is the real culprit. But there was no public correction of their local manager who blamed the driver.
It's a bit harder than a double tap... You press and hold. Count to three. Now wait for the car to start blinking and make a shifting noise. Press either the front of the key or the back of the key to move forward. Definitely not easy to engage, so I can understand Tesla's view on this one... I do this every morning/night to pull my car out of the tiny garage, since I can't get my door open inside the garage.
There are multiple methods of activating their autopark feature. At the time of the crash, the setting that allowed you to activate it using the keyfob as you describe also meant that merely double-tapping the P button in the car would activate autopark and cause the car to start moving forward a few seconds after getting out. This was added in a patch, so unless you paid attention to their release notes you may have missed it.
I'm not sure why it is, but a lot of 100-year-old houses have garages that are just big enough to fit a car in and not be able to open the doors. Car dimensions really haven't grown all that much; the Ford Model T was 66 inches wide and most modern sedans are ~70-75 inches.
Maybe they were never intended for cars?
Maybe it was a cost savings thing? Homes that are only 50-75 years old seem to have larger carports instead of garages.
* I answered my own questions. A Model T was 66 inches wide but had 8-inch running boards on either side, so the crew cabin is closer to 50 inches. The doors, if it had them, would be shorter as well. A Model S is 77 inches wide, so that's roughly 27 inches of difference in clearance.
False. Did you watch the video? It proves that you only had to double tap and exit.
Tesla pushed an update that enabled this as a "convenient" shortcut to the more elaborate, safer method you describe. In fact the video author praises it. Seems like a nice handy feature.. At first. Safety-critical design needs more care than this though.
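As a contrast to the double-tap shortcut, here is a sketch of what a more deliberate activation flow might look like: the car only moves while a "dead-man" confirmation is continuously held. This is purely illustrative; the function name, inputs, and steps are assumptions, not Tesla's actual interface.

```python
# Illustrative dead-man-style activation for a self-parking maneuver.
# The car only creeps forward while the confirmation input is actively held;
# releasing it (or detecting an obstacle) stops the car. Names are hypothetical.

def autopark_step(fob_button_held: bool, path_clear: bool) -> str:
    """Decide what the car does for one control cycle of auto park."""
    if not fob_button_held:
        return "stop"            # no continuous confirmation -> no motion
    if not path_clear:
        return "stop"            # ultrasonic/radar reports an obstacle
    return "creep_forward"

# A double tap followed by walking away never satisfies fob_button_held,
# so the car would not start moving on its own.
print(autopark_step(fob_button_held=False, path_clear=True))   # stop
print(autopark_step(fob_button_held=True,  path_clear=True))   # creep_forward
```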
It is true. You have to press in two places on the key. First you have to press and hold in the middle of the key. Next you have to press either the front of the key or the back of the key. Show me the video? Am I missing something? Can I simply press the front twice?
>Tesla allows minutes of hands-off time; one customer reports driving 50 miles without touching the wheel.
>They tried to blame the driver.
In this incident, the driver was using autopilot in a fashion it should not be used: a twisting road at high speed. The driver IS at fault.
>They're being sued by the family of the dead driver
This is no proof of anything. You can find a lawyer to sue anybody over anything.
>and being investigated by the NHTSA (the recall people), and the NTSB
Well, of course they are. They're supposed to. Again, no proof of anything Tesla did wrong.
I'm not saying there isn't a problem with this. I'm saying your reasoning is wrong.
I'm also curious as to how many autopilot sessions take place every day. If there have been three crashes like this one, where the driver is at fault, out of a million sessions, then that's a factor not considered so far.
>In this incident, the driver was using autopilot in a fashion it should not be used: a twisting road at high speed. The driver IS at fault.
I disagree on this point. In any system that's supposed to be sophisticated enough to drive the car but also carries a giant caveat like "except in these common driving situations..." failure is not instantly down to "user error." Such a system should gracefully refuse to take over if it's not running on an interstate or another "simple enough" situation; otherwise, as many have noted, the threshold for "is the driver paying sufficient attention" should be set much, much lower.
That the system is lulling the users into bad decisions is not automatically the fault of said user. Some blame, maybe most of the blame, has to fall on the autopilot implementation. When lives are on the line, design should err on the side of overly-restrictive, not "we trust the user will know when not to use this feature and if they are wrong it is their fault."
It's not lulling anybody into anything! The guy was given multiple warnings by the car that what he was doing was unsafe. The last of which was extremely pertinent to the crash. To quote the article, "As road conditions became increasingly uncertain, the vehicle again alerted the driver to put his hands on the wheel". This is after the initial alerts telling him that he needs to remain vigilant while using this feature. This isn't some burden of knowledge on when to use and when not to use the system, it's plenty capable of knowing when it's not operating at peak performance and lets you know. At that point, I'm willing to shift blame to the driver.
I just want to see "the vehicle again alerted the driver to put his hands on the wheel" followed by "and Autopilot then pulled to the side of the road, turned on the hazard lights, and required the driver to take over."
That's stupid. Autopilot doesn't have the ability to determine if it is safe to pull over. It has a fallback mode; eventually it will come to a complete stop in the middle of the road and turn the hazards on. But since this is also dangerous it makes sense to give the driver time to stop it from happening.
It's stupid to come to a gradual slowdown with hazard lights, but it's not stupid to keep blazing away at speed, with an obviously inattentive (or perhaps completely disabled) driver? I'm confused - even you acknowledge it has that fallback, but it's 'stupid' to enact it after a couple of failed warnings? When should that threshold be crossed, then?
So far we have been lucky that the accidents have only impacted the driver. Is it going to take the first accident where an autopilot car kills another innocent driver before people see the problem?
When autopilot kills more innocent drivers than other drivers you can point me to a problem. Time will tell if it is better or worse and the track record up until May/June was going pretty well.
I'd rather 10 innocent people die to freak autopilot incidents than 1,000 innocent people die because people in general are terrible at operating a 2,000-3,500lb death machine. Especially because everyone thinks of themselves as a good driver. It's fairly obvious not everyone is a good driver - and that there are more bad drivers than good.
Maybe I only see things that way because there have been four deaths in my family due to unaware drivers which could have easily been avoided by self-driving vehicles. All of them happened during the daytime, at speeds slower than 40mph, and with direct line of sight. Something sensors would have detected and braked to avoid.
Assuming we stick with the same conditions as in this case, I'd call an innocent life taken due to a driver disregarding all warnings from his system negligence. That also fits with what this driver was ticketed with which was reckless endangerment. We don't even get heads up display warnings about not driving into oncoming traffic, but nobody is going to blame a dumb-car for the actions of its driver. I would say that inaction in this hypothetical case would be the driver's crime.
The Tesla warnings remind me of Charlie and the Chocolate Factory, where one of the brats was about to do something disastrous and Gene Wilder would, in a very neutral tone, protest "no, stop, you shouldn't do that".
From a risk management standpoint, Tesla's design doesn't strike me as sufficient. Warnings without consequences are quickly ignored.
I guess I just don't know how many flashing warnings somebody has to ignore for it to finally be their fault.. At some point, their unwillingness to change their natural inclinations in the face of overwhelming evidence that they need to should have a negative consequence. This isn't like "just click through to ToS". If you buried it in the ToS or something similar, I absolutely agree with your point of view, but when you make it painstakingly clear in unavoidable manners I think we've crossed into territory of user responsibility. If this feature needs to be removed from Tesla cars (and I accept that potential future) I would definitely put that "fault" on the users of Tesla cars rather than Tesla based on the implementation I've read about.
Then the classification would fail-safe (i.e. no autopilot for you) and that's good[1]. The alternative with the current technology is apparently depending on humans to decide (poorly). This is to minimise deaths caused by engaging autopilot at the wrong time while you gather more data.
[1] If the goal is to stay out of court. I understand the AI drivers are better on average argument.
Tesla's "autopilot" system and implementation has serious problems. I think it's becoming clear that they did something wrong. Whether it rises to the standard of negligence I'm not sure, but it probably will if they don't make some changes.
With that said, I agree that ultimately the driver is at fault. It's his car, and he's the one that is operating it on a public road, he's the one not paying attention.
One of the laws of ship and air transport is that the captain is the ultimate authority and is responsible for the operation of the vehicle. There are many automated systems, but that does not relieve the captain of that responsibility. If an airplane flies into a mountain on autopilot, it's the captain's fault because he wasn't properly monitoring the situation.
Sorry, but you don't get off that easy when your design is intended to make life and death decisions. I work on FDA regulated medical devices and we have to do our best to anticipate every silly thing a user can (will) do. In this case, the software must be able to recognize that it is operating out of spec and put a stop to it.
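To make "recognize that it is operating out of spec and put a stop to it" concrete, here is a rough sketch of an operating-envelope check that refuses engagement (or triggers a fallback) when conditions fall outside what the system was validated for. Every threshold and signal name here is invented for illustration; it's a sketch of the design principle, not any real product's logic.

```python
# Hypothetical operating-envelope ("out of spec") check for a driver-assist
# system. Outside the validated envelope, the system refuses to engage or
# begins a controlled fallback instead of relying on the driver.

def within_operating_envelope(road_type: str,
                              lane_confidence: float,
                              speed_mph: float,
                              visibility_m: float) -> bool:
    return (road_type == "divided_limited_access"
            and lane_confidence >= 0.9     # lane-tracking confidence floor
            and speed_mph <= 75            # validated speed range
            and visibility_m >= 100)       # sensor visibility floor

def requested_mode(engaged: bool, **conditions) -> str:
    if not within_operating_envelope(**conditions):
        # Refuse engagement, or fall back if already engaged.
        return "fallback_controlled_slowdown" if engaged else "refuse_engagement"
    return "assist_active"

print(requested_mode(False, road_type="two_lane_canyon",
                     lane_confidence=0.95, speed_mph=55, visibility_m=200))
# -> refuse_engagement
```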
> The Pennsylvania Turnpike crash is disturbing, because it's the case the "autopilot" is supposed to handle - divided limited-access highway under good conditions. The vehicle hit a guard rail on the side of the road. That may have been a system failure. Too soon to tell.
This is the one I'm very interested to hear more about. From what I've heard, Tesla's self-driving is very aggressive in swerving to avoid obstacles -- was hitting the guard rail the result of a deliberate swerve, and if yes then was there a logic error, a sensor error, or in fact a legitimate reason to swerve?
Reasoning about what's legitimate can be a really fine line. Oncoming car, raccoon, fawn, human, doe, coyote, elk: some of these obstacles might actually be net-safer to not swerve around [perhaps depending on speed].
I'd be really curious if any of the autopilots have a value system that tries to minimize damage/loss of life. And how does it value the exterior obstacle against the passengers in the car?
I'm pretty sure that hitting a full-grown elk is a poor choice, and often a fatal one. A large, full-grown elk can weigh 600 pounds and stand 5 feet tall at the shoulder, which means it will probably go over the crumple zone and directly strike the windshield.
This happens commonly with moose in New England, which run up to 800 pounds and 6.5 feet tall at the shoulder. A car will knock their legs out from under them but the body mass strikes much higher and does an alarming amount of damage to the passenger compartment. You can Google "moose car accident" and find some rather bloody photos.
The standard local advice is to try really hard to avoid hitting moose, even at the cost of swerving. I would prefer that an autopilot apply a similar rule to large elk. Even if you wind up striking another obstacle, you at least have a better chance of being able to use your crumple zone to reduce the impact.
In this situation, small-to-normal passenger cars are actually much safer than typical American trucks and SUVs. When the legs are taken, the mass of the body strikes the roof of the lower vehicle and bounces over the top. The larger vehicles direct the body through the windshield into the passenger compartment.
I'm not sure about moose, but with more nimble animals, swerving is no guarantee of avoiding collision. A deer can change direction quicker than an automobile, and if it had any sense, it wouldn't be running across the road in the first place. It's definitely safer to slow down and honk your horn than it is to try to guess which direction the damn thing will jump.
[EDIT:] When I was learning to drive, my father often recounted the story of a hapless acquaintance who, in swerving to avoid a snake, had precipitated a loss of control that killed her infant child. He's not a big fan of swerving for animals.
Moose or elk don't 'bounce over the top' of a small vehicle. They crush it, rip the roof off, or go through it. It may depend on antlers, angles, speed, and so on.
I've tried to pick the least graphic images possible:
I don't think there is much to be gained in terms of safety. Let's not forget that while you may have large moose, there's a range where they can still be much smaller while growing and still damage smaller cars fairly easily.
There is lots of ambiguity here. Depends on how you hit it, etc.
Roof landings don't always end well -- my Civic was trashed when a deer basically bounced up and collapsed my roof on the passenger side. My head would have been trashed as well had I been a passenger!
In general, if you cannot stop, hitting something head on is the best bet. Modern cars are designed to handle that sort of collision with maximum passenger safety.
The cost of swerving is very, very minimal (especially in a modern car that won't go sideways no matter what you do).
Imagine a two lane road with a car traveling toward an obstruction in its lane. The distance required to get the car out of the obstructed lane via the steering wheel is much less than stopping it via the brakes.
There's a reason performance driving has the concept of brakes being an aid to your steering wheel as opposed to being operated as a standalone system.
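A rough back-of-the-envelope check of the claim that steering out of the lane needs less distance than stopping, under idealized assumptions (constant friction, constant lateral acceleration, zero reaction time). The specific numbers are illustrative, not measured.

```python
# Compare stopping distance with the forward distance used by a one-lane swerve.
# Idealized physics; numbers are illustrative only.
import math

v = 30.0            # m/s, roughly 67 mph
mu, g = 0.8, 9.81   # friction coefficient, gravity
braking_distance = v**2 / (2 * mu * g)          # ~57 m to stop

lane_width = 3.5    # m of lateral displacement needed
a_lat = 4.0         # m/s^2, a firm lateral acceleration
t_swerve = math.sqrt(2 * lane_width / a_lat)    # time to move one lane over
swerve_distance = v * t_swerve                  # forward distance used, ~40 m

print(f"brake: {braking_distance:.0f} m, swerve: {swerve_distance:.0f} m")
```

Under these assumptions the swerve clears the obstructed lane in noticeably less forward distance than a full stop requires, which is the point the parent comment is making.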
Yes, I included it precisely because there's no reason to ever hit an elk. "some of these obstacles ..." -- it's one of the extrema that you might imagine as test cases for this kind of a feature.
Technically the swerving itself is not necessarily illegal, but it makes you liable for anything that could have been avoided by not swerving. If you swerve on a free and open road and don't cause any damage or injury to anything or anyone (including yourself and your car), it's not like you will still be dragged to court just because you didn't want to unnecessarily run over someone's pet.
> obstacles might actually be net-safer to not swerve around
In the UK it's illegal to swerve for these things, in the sense that if you subsequently cause an accident by swerving to avoid a cat, you are at fault.
Wait, it's illegal to swerve for a moose (TIL: moose are called elk in Europe, American elk are totally different) in the UK? Hitting a moose is a fast route to the grave.
Such a value system would be a nightmarish thing so I hope that it doesn't exist yet.
Just imagine your car calculating that it should let you die in order to save a group of school children... except that the school children were in fact a bunch of sheep that the system mis-identified.
The danger of AI isn't in paperclip optimizers (at least for now), it's in humans over-relying on machine learning systems.
I think such a value system is fine in as much as it preserves the safety of the driver.
I'd like my car to make value judgements about what's safer - braking and colliding with an object or swerving to avoid it. I'd rather it brakes and collides with a child than swerve into oncoming traffic, but I'd rather it swerved into oncoming traffic to avoid a truck that would otherwise T-bone me.
As long as the system only acts in the best interest of its passengers, not the general public, it's fine by me.
> I'd rather it brakes and collides with a child than swerve into oncoming traffic
Wait, what? You'd rather hit and kill a child than hit another vehicle - a crash which at urban/suburban speeds would most likely be survivable, without serious injury, for all involved?
You cannot codify some of the "dark" choices that are typically made today in split second, fight or flight situations. If we go down this road, it opens to door to all sorts of other end justifies the means behaviors.
There's a threshold number of dead hypothetical children that's going to trigger a massive revolt against tone deaf Silicon Valley industry.
The moral choice is the opposite direction: the car should prefer to kill the occupant than a bystander. The occupant is the reason why the car is in use; if it fails, the penalty for that failure should be on the occupant, not bystanders.
Yeah this thread has really squicked me out. Of course dehumanization is an age-old tactic for enabling asshole behaviors of all sorts. I'd like to hope, however, that outside the current commuter-hell driving context, few people would even admit to themselves, let alone broadcast to the world, their considered preference for killing children. That is, when they no longer drive, perhaps people won't be quite so insane about auto travel. The twisted ethos displayed in this thread will be considered an example of Arendt's "banality of evil".
I don't even drive at all, and yet this thread irks me from the other direction. Dehumanization is an age-old tactic for enabling asshole behaviors, and bringing children into a discussion is an age old tactic for justifying irrational decisions ("Think of the children").
You claimed that any car may be driven slowly enough to avoid killing people on the road, with only mere inconvenience to others. That's patently false: firetruck. And then there are several other posters claiming that it's the driver's fault if any accident happens at all, due to the mere fact that the driver chooses to drive a car. If we removed every single vehicle in the US from the streets tomorrow, a whole lot of people would die, directly or indirectly. I'd like to see anyone stating outright that "usage of cars in modern society has no benefit insofar as saving human lives is concerned".
In the end, I think both sides are fighting a straw-man, or rather, imagining different scenarios in the discussion. I read the original poster as imagining a case where swerving on a freeway is potentially deadly for at least 2 vehicles, along with a multi-vehicle pile-up. You're imagining suburban CA, where swerving means some inconvenience with the insurance company. I have little doubt that everyone would swerve to avoid the collision in the latter.
Also since we're on HN and semantic nit-picking is a ritual around here, "avoid killing children in the road above all other priorities, including the lives of passengers" is NOT a good idea. As far as the car knows, I might have 3 kids in the backseats.
I believe you, that you don't drive at all. You call both situations strawmen, yet each is a common occurrence. Safe speed in an automobile is proportional to sightline distance. If drivers or robocars see a child (or any hazard) near the road, they should slow down. By the time they reach the child, they should be crawling. That's in an urban alley or on a rural interstate. If the child appeared before the driver or robocar could react, the car was traveling too fast for that particular situation.
In that "traveling too fast" failure mode, previous considerations of convenience or property damage no longer apply. The driver or robocar already fucked up, and no longer has standing to weigh any value over the safety of pedestrians. Yes it's sad that the three kids in the backseat will be woken from their naps, but their complaint is with the driver or robocar for the unsafe speeds, not with the pedestrian for her sudden appearance.
This is the way driving has always worked, and it's no wonder. We're talking about kids who must be protected from the car, but we could be talking about hazards from which the car itself must be protected. If you're buzzing along at 75 mph and a dumptruck pulls out in front of you, you might have the right-of-way, but you're dead anyway. You shouldn't have been traveling that fast, in a situation in which a dumptruck could suddenly occupy your lane. In that failure mode, you're just dead; you don't get to choose to sacrifice some children to your continued existence.
Fortunately, the people who actually design robocars know all this, so the fucked-up hypothetical preferences of solipsists who shouldn't ever control an automobile don't apply.
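The "safe speed is proportional to sightline" rule above can be written down: the car must be able to come to a stop within the distance it can currently see, including reaction (or perception) latency. A small worked sketch with invented numbers follows; the deceleration and reaction time are assumptions, not anyone's specification.

```python
# Max speed at which you can stop within your current sight distance,
# given reaction/perception latency and braking deceleration.
# Solves d_sight = v*t_react + v^2 / (2*a) for v. Illustrative numbers only.
import math

def max_safe_speed(sight_distance_m: float,
                   t_react_s: float = 1.5,
                   decel_mps2: float = 7.8) -> float:
    a, t = decel_mps2, t_react_s
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * sight_distance_m)

for d in (25, 50, 100, 200):
    v = max_safe_speed(d)                  # m/s
    print(f"sight {d:>3} m -> about {v * 2.237:.0f} mph")
```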
Thanks for your response. That's certainly an interesting way to think about driving (in a good way). Your previous posts would have done better by elaborating that way.
Just a bit of clarifying: I didn't call the situations rare or a strawman. I said the arguments were against a strawman, as you were thinking of a different scenario and arguing against that.
Out of curiosity, if the situation had 3 kids in the car and swerving were potentially deadly, what would the response be?
"You claimed that any car may be driven slowly enough to avoid killing people on the road with only mere convenient to others. That's patently false: firetruck."
As a firefighter I'd like to offer a few thoughts on this:
- fire engines, let alone ladder trucks are big but slow. Older ones have the acceleration of a slug and struggle to hit high speeds
- with very, very few exceptions, the difference of 10 seconds to 2 minutes that your arrival makes is likely to have no measurable difference (do the math, most departments are likely to have policies allowing only 10mph above posted limits at best, and if you're going to a call 3mi away...)
- again, as mentioned, most departments have a policy on how far the speed limit can be exceeded. It might feel faster when it goes by when you're pulled over, but realistically there won't be much difference
- "due caution and regard" - almost all states have laws saying that there's implied liability operating in "emergency mode" - that is, you can disobey road laws as part of emergency operations, but any incident that happens as a result thereof will be implied to be the fault of the emergency vehicle/operator until and unless proven otherwise
If I'm driving an engine, blazing around without the ability to respond to conditions as much as I would in a regular situation/vehicle, then emergency mode or not, I am in the wrong.
This is a ridiculous claim to make. Emergency vehicles will be the very last kind of vehicle to get autonomous driving (if ever), because they routinely break regular traffic rules, and routinely end up in places off-road in some way.
Hell, modern jumbo jets can fly themselves, from takeoff to landing, but humans are still there specifically to handle emergencies.
> As far as the car knows, I might have 3 kids in the backseats.
Strange that you envision having a car capable of specifically determining 'children' in the street, but nothing about the occupants. Especially given that we already have cars that complain if they detect an occupant not wearing a seatbelt, but no autonomous driving system that can specifically detect an underage human.
> You're imagining suburban CA, where swerving means some inconvenience with the insurance company.
"I'd rather other people die than me" is not about imagining mere inconvenience with insurance companies.
For someone complaining about strawmen, you're introducing a lot of them.
You seem to have selectively cut out pieces and either skimmed or didn't read my full post. Except for the last paragraph, my full comment has little to do with autonomous driving and is just about driving/morality with regard to driving in general.
No, I do NOT trust the car to detect children either on the road or in the car, that's why I phrased my comment that way.
> "I'd rather other people die than me" is not about imagining mere inconvenience with insurance companies.
Yes, I specifically pointed out the scenario the poster of that quote might be thinking about, to contrast it with the inconvenience scenario.
You should take your own advice: read the post again before commenting.
Legally speaking children are found to be at fault all the time, and under certain circumstances even treated as adults. So no, they're not incapable of being at fault.
Let me guess, you haven't done much work in public relations? Much worse for a company than doing some evil reprehensible shit, is doing such shit in a fashion that's an easy, emotional "narrative hook" for the media. A grieving mother clutching a tear-stained auto repair bill itemized with "child's-head-sized dent, one" is the easiest hook imaginable.
Maybe so. But Google for "parents of dead child sent bill for airlift" - if you want to be pedantic, you can even argue that in some of those cases, there may not have been any choice, only an implied consent by law that says that "a reasonable person would have wanted to be flown to hospital".
You'll find examples. And for every waived bill you'll also hear "well, regardless of the tragic outcome, a service was rendered, and we have the right to be paid for our work".
Please note that I am carefully avoiding passing any judgment on the morality of any of the above.
Unless I really misunderstand, the invoices to which you refer are for services intended to preserve the life of a child, not services intended to relieve a third party of certain trifling inconveniences associated with a child's death?
Society will not tolerate a robocar that cheerfully kills our children to avoid some inconvenience to its occupants.
One might think that we're not talking about inconvenience, but we are, because after all any car may be driven slowly enough to avoid killing children in the road. A particular robocar that is not driven that slowly (which, frankly, would be all useful robocars), must avoid killing children in the road above all other priorities, including the lives of passengers. A VW-like situation in which this were discovered not to be the case, would involve far more dire consequences for the company (and, realistically, the young children and grandchildren of its executives) than VW faces for its pollution shenanigans.
At the moment our autonomous cars can't manage to avoid a stationary guard rail on the side of the road, so it's a bit premature to be worrying about our cars making moral decisions.
The first priority of an autonomous vehicle should be to cause no loss of life whatsoever. The second priority should be to preserve the life of the occupants.
Nobody buys an autonomous car that will make the decision to kill them.
You'd hope the system has sensor, video and audio logs and some kind of black box arrangement to at least get the maximum value in terms of information out of those crashes that do happen. If so it should be possible to get a real answer to your question.
> Tesla's spin control is backfiring. They tried to blame the driver. They're being sued by the family of the dead driver, and being investigated by the NHTSA (the recall people), and the NTSB. In comparison, when a Google self-driving car bumped a bus at 2mph, Google admitted fault, took the blame, and Urmson gave a talk at SXSW showing the data from the sensors and discussing how the self-driving car misjudged the likely behavior of the bus.
It's a hell of a lot safer for Google to take the fault for a 2mph bump into a bus than it is for a case where someone died. I'd guess Tesla will ultimately settle the lawsuits against them, but if they came out and admitted 100% fault, which it likely isn't, they'd have absolutely no ground to stand on and could get financially ruined at trial.
> It takes seconds, not milliseconds, for the driver to recover situational awareness and take over.
The really depressing thing is that this is first-year material for one of the most popular tertiary courses around: psychology. It is simply gobsmacking to think that Tesla thinks a human can retarget their attention so quickly. It's the kind of thing you'd give to first-year students and say "what is wrong with this situation", it's so obvious.
>3. Tesla's autopilot behavior after a crash is terrible. It doesn't stop. In the semitrailer crash, the vehicle continued under power for several hundred feet after the collision sheared off the top of the Tesla as it passed under the semitrailer. Only when it hit a telephone pole did the car stop.
This is actually the most disturbing part to me. In these situations, graceful failure or error handling is absolutely critical. If it cannot detect an accident, what the heck business does it have trying to prevent them?
If he had hit the accelerator, you can bet your bottom dollar that Tesla would have mentioned this detail in order to continue the "driver's fault" narrative.
I still say they'd best damn well rename the feature. The connotations of the word autopilot are so strong as to be misleading about what it can actually do.
When the technology matures, then bring the name back, but until then call it something else.
I am still amused that most of these self driving implementations only drive when people are least likely to need the assistance, good visibility conditions. They are the opposite of current safety features which are designed to assist in bad driving conditions. I won't trust any self driving car until it can drive in horrible conditions safely.
At most these services should be there to correct bad driving habits, not instill new ones
I can see how the name could be misleading to some people. However, I think the term is quite accurate. In an aircraft, Autopilot takes care of the basics of flying the plane while the pilot is fully responsible for what is happening, watching out for traffic, and ready to take over if necessary. This is exactly what it means for Tesla as well.
In any case, as somebody who has used it off and on for 6 months in all kinds of situations, it is clear that it has limitations but it is also easy to know when to be worried. Its best use is in stop-and-go freeway traffic. Its worst use is on smaller curvy surface streets. It is also really nice for distance driving on freeways.
To me this seems to limit what is possible by software updates.
Does that mean that existing Tesla owners will never get 'real' autopilot capabilities by software upgrades, although I assume most expect to get them?
I think Tesla should start some upgrade program for self driving features. One thing that holds me back from buying one of the current cars is that they could get obsolete quickly if the next version has better self driving sensors.
> Tesla's autopilot behavior after a crash is terrible. It doesn't stop. In the semitrailer crash, the vehicle continued under power for several hundred feet
Are we sure it continued under power? The Tesla is a heavy vehicle, with weight centred close to the ground. Momentum could easily have carried it several hundred feet after a high speed collision.
There's a report on electrek.co somewhere showing a google map of the crash site and where the car ended up. The car ended up a quarter mile down the road and had passed through a fence or two and some grass. There's also a report somewhere of a driver who says he made the same turn as the truck and followed the car as it continued down the road. Seems like it was under power. NHTSA, NTSB or Tesla could confirm with the data.
Does autopilot disengaging also disable the accelerator? Otherwise, a foot from a disabled driver could have caused the vehicle to continue under power. Tesla probably has all this telemetry already.
Autopilot is not autonomous driving. It is fancy cruise control. Just like cruise control can't handle every situation neither can autopilot. Just like autopilot in an aircraft it requires a human paying attention and ready to take over.
The system works as intended and advertised. Obviously it isn't perfect but with the fatality rate roughly equal to human drivers and the accident rate seemingly much lower it seems at least as good as a human.
This would be very reasonable if the feature were not experimental. You specifically turn on an experimental feature and then purposefully ignore the instructions on using that experimental feature. This is obviously the manufacturer's fault.
Yes. It is completely predictable that some percentage of people will ignore instructions, even with production-quality features. That's why the fail-safe comes first, even in an experimental feature. If you can't provide a fail-safe, it's not ready for prime time, including testing by the public.
Cool logic. I guess we should stop prescribing drugs because there is no failsafe when people ignore dosage and directions. We should stop selling microwaves because people might ignore the warning and put a cat in there... There is of course an alternative: people actually being responsible for doing stupid things and not blaming everyone and everything for their own actions.
I agree their sensor suite needs improvement. I hope they are adding LIDAR and/or more radars before they start shipping large volumes. A recall of large numbers of cars would kill Tesla.
Maybe when they develop a safe, reliable autopilot system, they'll offer it for sale to the public. Some firms seem to have gotten that backwards...
It has always seemed that Google view robocars as a long-term revolutionary concept, while car companies see them as another gaudy add-on like entertainment centers or heated seats. It's not yet clear which is the right stance to take as a firm, but Tesla's recent troubles might be an indication.
No, Tesla issued a somewhat deceptive press release to make people think that:
"Tesla received a message from the car on July 1st indicating a crash event, but logs were never transmitted. We have no data at this point to indicate that Autopilot was engaged or not engaged. This is consistent with the nature of the damage reported in the press, which can cause the antenna to fail."[1]
When that gets trimmed down to a sound bite, it can give the impression the vehicle was not on autopilot.
The driver told the state trooper he was on autopilot.[2] He was on a long, boring section of the Pennsylvania Turnpike; that's the most likely situation to be on autopilot.
Multiple people here have reported that their car has allowed 15 minutes or more without having to touch the wheel. If each warning reset that timer, by design or bug, then you could easily travel 50 miles plus.
Plus, I'm almost certain that Tesla's goal with the system isn't to "have an autonomous system that you have to interact with steadily", hence the 15 minute warning in the first place. They have perverse incentives - to allow it in the first place implies reliability that may or may not be there.
Why would you need "more than an hour" to do 50 miles? Especially where you'd expect autopilot to be most used (highways)? 'Round here it's about 40 min on the highway, and barely an hour (give or take) inter-city, and that's driving at posted speeds, which people don't necessarily do (and I expect Tesla does not enforce).
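For what it's worth, the arithmetic: 50 miles at typical highway speeds is well under an hour, and at a 15-minute warning interval (assuming, as the parent comment does, that each warning resets the timer) that's only a handful of warnings. The speeds below are just illustrative values.

```python
# 50 miles at typical highway speeds, and how many 15-minute warning
# intervals that spans, assuming each warning resets the hands-off timer.
distance_miles = 50
for speed_mph in (55, 65, 75):
    minutes = distance_miles / speed_mph * 60
    warnings = int(minutes // 15)
    print(f"{speed_mph} mph: {minutes:.0f} min, ~{warnings} warnings")
```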
1. Tesla's "hands on wheel" enforcement is much weaker than their competitors. BMW, Volvo, and Mercedes have similar systems, but after a few seconds of hands-off-wheel, the vehicle will start to slow. Tesla allows minutes of hands-off time; one customer reports driving 50 miles without touching the wheel. Tesla is operating in what I've called the "deadly valley" - enough automation that the driver can zone out, but not enough to stay out of trouble.
The fundamental assumption that the driver can take over in an emergency may be bogus. Google's head of automatic driving recently announced that they tested with 140 drivers and rejected semi-automatic driving as unsafe. It takes seconds, not milliseconds, for the driver to recover situational awareness and take over.
2. Tesla's sensor suite is inadequate. They have one radar, at bumper height, one camera at windshield-top height, and some sonar sensors useful only during parking. Google's latest self-driving car has five 3D LIDAR scanners, plus radars and cameras. Volvo has multiple radars, one at windshield height, plus vision. A high-mounted radar would have prevented the collision with the semitrailer, and also would have prevented the parking accident where a Tesla in auto park hit beams projecting beyond the back of a truck.
Tesla is getting depth from motion vision, which is cheap but flaky. It cannot range a uniform surface.
3. Tesla's autopilot behavior after a crash is terrible. It doesn't stop. In the semitrailer crash, the vehicle continued under power for several hundred feet after sheering off the top of the Tesla driving under the semitrailer. Only when it hit a telephone pole did the car stop.
The Pennsylvania Turnpike crash is disturbing, because it's the case the "autopilot" is supposed to handle - divided limited-access highway under good conditions. The vehicle hit a guard rail on the side of the road. That may have been an system failure. Too soon to tell.
The NTSB, the air crash investigation people, have a team investigating Tesla. They're not an enforcement agency; they do intensive technical analysis. Tesla's design decisions are about to go under the microscope used on air crashes.
Tesla's spin control is backfiring. They tried to blame the driver. They're being sued by the family of the dead driver, and being investigated by the NHTSA (the recall people), and the NTSB. In comparison, when a Google self-driving car bumped a bus at 2mph, Google admitted fault, took the blame, and Urmson gave a talk at SXSW showing the data from the sensors and discussing how the self-driving car misjudged the likely behavior of the bus.