Tesla confirms “Autopilot” crash in Montana (ktvq.com)
267 points by kempbellt on July 12, 2016 | 512 comments



With three accidents on "autopilot" in a short period, the defects in Tesla's design are becoming clear.

1. Tesla's "hands on wheel" enforcement is much weaker than its competitors'. BMW, Volvo, and Mercedes have similar systems, but after a few seconds of hands-off-wheel, the vehicle will start to slow. Tesla allows minutes of hands-off time; one customer reports driving 50 miles without touching the wheel. Tesla is operating in what I've called the "deadly valley" - enough automation that the driver can zone out, but not enough to stay out of trouble.

The fundamental assumption that the driver can take over in an emergency may be bogus. The head of Google's self-driving car program recently announced that they tested with 140 drivers and rejected semi-automatic driving as unsafe. It takes seconds, not milliseconds, for the driver to recover situational awareness and take over.

2. Tesla's sensor suite is inadequate. They have one radar, at bumper height, one camera at windshield-top height, and some sonar sensors useful only during parking. Google's latest self-driving car has five 3D LIDAR scanners, plus radars and cameras. Volvo has multiple radars, one at windshield height, plus vision. A high-mounted radar would have prevented the collision with the semitrailer, and also would have prevented the parking accident where a Tesla in auto park hit beams projecting beyond the back of a truck.

Tesla is getting depth from motion vision, which is cheap but flaky. It cannot range a uniform surface.

3. Tesla's autopilot behavior after a crash is terrible. It doesn't stop. In the semitrailer crash, the vehicle continued under power for several hundred feet after its roof was sheared off as it went under the semitrailer. Only when it hit a telephone pole did the car stop.

The Pennsylvania Turnpike crash is disturbing, because it's the case the "autopilot" is supposed to handle - a divided limited-access highway under good conditions. The vehicle hit a guard rail on the side of the road. That may have been a system failure. Too soon to tell.

The NTSB, the air crash investigation people, have a team investigating Tesla. They're not an enforcement agency; they do intensive technical analysis. Tesla's design decisions are about to go under the microscope used on air crashes.

Tesla's spin control is backfiring. They tried to blame the driver. They're being sued by the family of the dead driver, and being investigated by the NHTSA (the recall people), and the NTSB. In comparison, when a Google self-driving car bumped a bus at 2mph, Google admitted fault, took the blame, and Urmson gave a talk at SXSW showing the data from the sensors and discussing how the self-driving car misjudged the likely behavior of the bus.


> In comparison, when a Google self-driving car bumped a bus at 2mph, Google admitted fault, took the blame, and Urmson gave a talk at SXSW showing the data from the sensors and discussing how the self-driving car misjudged the likely behavior of the bus.

This is what we should be trumpeting. Responsibility, honesty, and transparency. Google's self driving car project reports [1] should be the standard. They're published monthly and they detail every crash, as required by CA state law (thanks schiffern).

Note: These reports are currently only required in California, and only for fully autonomous vehicles. So Tesla and other driver-assist-capable car companies do not need to publicize accident reports in the same way. Tesla is unlikely to start filing such reports until everyone is required to do so. We should demand this of all driver-assist cars during this "beta" phase.

Tesla owners and investors ought to be interested in a little more transparency at this point. It's a bit concerning that Tesla/Musk did not file an 8-K on the Florida accident. We learned about it from the NHTSA almost two months after it happened.

[1] https://www.google.com/selfdrivingcar/reports/


>Google's self driving car project reports [1] should be the standard. They're published monthly and they detail every crash

Copying a reply from earlier...

Google isn't doing this out of the goodness of their heart. They have always been required by law to send autonomous vehicle accident reports to the DMV, which are made available to the public on the DMV's website.

https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testi...

So the transparency itself is mandatory. Google merely chose to control the message and get free PR by publishing an abbreviated summary on their website too. Judging by this comment, it's working!

For example, here's the full report on Google's accident in February, the first one where the autonomous car was the vehicle at fault: https://www.dmv.ca.gov/portal/wcm/connect/3946fbb8-e04e-4d52...

(see also studentrob's excellent reply here: https://news.ycombinator.com/item?id=12084444)


> So the transparency itself is mandatory

Then why on earth is Tesla being opaque and seemingly passing the buck with their PR?! I don't get it.


The requirement does not seem to be there for driver-assist vehicles. Also, it's a state-level requirement. Looks like it's just in CA for now. The accidents detailed in the Google reports are all from California.

And, Tesla may be unwilling to start reporting driver-assist accident rates to the public unless the other driver-assist car companies are forced to do it too.

I'll make an addendum to my above comment.


> The accidents detailed in the Google reports are all from California.

You could have opened the first report[1] on that page and noticed this is wrong.

> June 6, 2016:​ A Google prototype autonomous vehicle (Google AV) was traveling southbound on Berkman Dr. in Austin, TX in autonomous mode and was involved in a minor collision north of E 51st St. The other vehicle was approaching the Google AV from behind in an adjacent right turn-only lane. The other vehicle then crossed into the lane occupied by the Google AV and made slight contact with the side of our vehicle. The Google AV sustained a small scrape to the front right fender and the other vehicle had a scrape on its left rear quarter panel. There were no injuries reported at the scene.

[1] https://static.googleusercontent.com/media/www.google.com/en...


Blast! I can't get anything right. It's too late to edit my original comment.

Well, that's awesome that Google reports on accidents outside of California too.

Thank you for correcting me, delroth.


Musk used to be calmer and more open. Now he seems bratty. Maybe it's stress, or overconfidence.


I don't envy his position, or any other CEO or leader of a large group. It's a huge responsibility. You can face jail time for not sharing information. Usually only the most egregious violators see a cell, but it's still a scary prospect if you somehow got caught up in some other executive's scheme. At the end of the day, the CEO signs everything.

Edit: I see the downvote. Am I wrong? Is being a CEO all rainbows and sunshine? Maybe I got some detail wrong. Feel free to correct me.


The guy comes forward and says "I'm going to make an autopilot for the masses." He/we know it's tough to do. The guy releases his autopilot. It's not quite ready for prime time. Double problem: the guy goes into a tough business alone, and the guy makes stuff that's not 100% ready.

I like it when people try to do hard stuff, that's fine and maybe heroic. But they have to be prepared: if testing the whole thing must take 10 years to be safe and a gazillion dollars, then so be it. If it's too much to shoulder, then they should not enter the business.


> if testing the whole thing must take 10 years to be safe and a gazillion dollars, then so be it. If it's too much to shoulder, then they should not enter the business

That sounds like a succinct explanation for why we've been stuck in LEO since December 19, 1972.

We've got 7 billion+ humans on the surface of our planet. We lose about 1.8/s to various tragedies.

Most of which are of less import than helping automate something that's responsible for around 32,000 deaths in the US every year. Move fast, break people for a just cause, escape risk-averse sociotechnological local minima, and build a future where fewer people die.


I don't think the issue is that Tesla's cars are dangerous. The issue people are raising is that Tesla pretends, at least by implication, that their cars can safely drive for you.

Tesla is also not doing any kind of super special research into self-driving cars. The system their cars use is (afaik) an OEM component (provided by Mobileye) that powers the driver-assist features of many other brands, too.

Instead of actually improving the technology they have chosen to lower the safety parameters in order to make it seem like the system is more capable than it actually is.


Safe is like 'secure'. It's a relative concept, and the level of safety we are willing to accept from products is something that should be communally decided and required by law.

We shouldn't go off expecting that someone else's idea of safety will jibe 100% with our own, and then blame them when someone else buys a product and is injured.

Should Tesla drive-assist deactivate and slow the car after 10 seconds without touching the wheel? Probably, but I won't blame them for not doing so if there is no regulatory framework to help them make a decision. It's certainly not obvious that 10 seconds is the limit and not 11, 15, or 2 seconds.


It sounds like you're describing it as some predicament he's in by accident, through no fault of his own.


As outsiders, we can only know what the company, regulatory agencies, and reporters tell us.


> we can only know what the company ... [tells] us.

Tesla tells us "it's the driver's fault that he died while using our Autopilot".

Either Musk approved this message, or he didn't bother to read it before it went out, and didn't bother to apologize afterwards.

I think we can draw some conclusions from that.


There are pending investigations by the NHTSA and NTSB. We'll know more when they are ready.


Once again Tesla has made the discussion about the driver. They have put an auto-control system out in the wild that steers the car into hard obstacles at high speed. Beeps to warn the driver at some stage in the preceding interval do not make this control system's failure any more acceptable. It still misjudged the situation as acceptable to continue and it still drove into obstacles. Those are significant failures and a blanket statement of responsibility going to the driver isn't going to cut it.


I agree; I think it's despicable that Tesla is acting in such a defensive manner.

-- "No force was detected on the steering wheel for over two minutes after autosteer was engaged," said Tesla, which added that it can detect even a very small amount of force, such as one hand resting on the wheel.

"As road conditions became increasingly uncertain, the vehicle again alerted the driver to put his hands on the wheel," said Tesla. "He did not do so and shortly thereafter the vehicle collided with a post on the edge of the roadway." --

Bruh, slow the car down then. If you're storing information that points to the fact that the driver is not paying attention, and you're not doing anything about it, that's on you 100%.

We are in the midst of a transition period between autonomous and semi-autonomous cars, and blaming the driver for the fact that the car, in auto-steer mode, steered into obstacles, is counter-productive in the long term. You need to make auto-steer not work under these conditions. The driver will quickly figure out the limits of your system as their car decelerates with an annoying beeping sound.
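
To make the point concrete, the escalation being asked for here is not exotic. A minimal sketch of a warn-then-slow-then-stop policy (the thresholds and names are purely hypothetical and have nothing to do with Tesla's actual firmware):

  # Hypothetical driver-attention escalation for a driver-assist system.
  # Thresholds are illustrative only, not any manufacturer's real values.
  WARN_AFTER_S = 10    # first chime + dashboard alert
  SLOW_AFTER_S = 20    # begin gradual deceleration
  STOP_AFTER_S = 40    # hazards on, controlled stop

  def enforcement_action(hands_off_seconds):
      """Map continuous hands-off-wheel time to an escalating response."""
      if hands_off_seconds < WARN_AFTER_S:
          return "assist"   # keep steering normally
      if hands_off_seconds < SLOW_AFTER_S:
          return "warn"     # audible and visual warning
      if hands_off_seconds < STOP_AFTER_S:
          return "slow"     # shed speed steadily, keep warning
      return "stop"         # hazard lights, come to a controlled stop

The exact numbers are debatable (see the 10-vs-11-vs-15-second point elsewhere in this thread), but the shape of the policy is what matters: warnings that are never backed by a consequence get ignored.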


> "No force was detected on the steering wheel for over two minutes after autosteer was engaged," said Tesla, which added that it can detect even a very small amount of force, such as one hand resting on the wheel.

That would seem to me to be reason to steer the car to the shoulder and disengage the auto-pilot.

Tesla is trying to take too big a step here and blaming the driver(s) of these cars is really really bad PR.


> That would seem to me to be reason to steer the car to the shoulder and disengage the auto-pilot.

Agreed. Or, where a shoulder isn't available, engage hazard flashers and drop speed to 10 miles an hour.


How can autopilot know if it is safe to steer into the shoulder? It won't even change lanes without driver input.


> How can autopilot know if it is safe to steer into the shoulder?

I would suggest that if it can't, it's not ready for general use on consumer road vehicles.


Isn't that one of the reasons there is still a driver? Cruise control will drive you right off of the road until you crash with no awareness of the shoulder or other traffic or anything. But it is all over the place.


Except for that one case where there is no shoulder, just a 500 foot drop to death.


Restrict auto-steering to very specific roads which have sufficient information. That's basically the vast majority of roads.


Right. When the AP detected road conditions that it couldn't handle, it should have reduced vehicle speed as much as necessary for handling them. Yes, it was dumb to trust AP at 50-60 mph on a two-lane canyon road. But it was also dumb to ship AP that can't fail gracefully.

I do admit, however, that slowing down properly can be hard. It can even be that hitting an inside post is the best option. Still, I doubt that consumer AP would have put the vehicle into such an extreme situation.


I think it also needs an AP confidence indicator, so the driver can get ready to take the wheel.


It's certainly a major fault of Tesla's - if you know it's not safe, you've got no excuse to keep going.

But the driver shouldn't be allowed to escape blame either. When your car is repeatedly telling you that you need to take control, then you should be taking control.


I've worked on highly safety-related (ASIL D classification) automotive systems. There are rules about how to think about the driver.

The driver is not considered a reliable component of such a system, and must not be used to guide decisions.

Yes, the driver clearly was a fool, but the system design should not have relied on his decision; it should have come to its own conclusion about a safe reaction (e.g., stopping the car).


I agree with that, and as I say I'm not attempting to defend Tesla in the slightest.

My point was simply that the driver has to take responsibility for their actions in this situation as well.


If the robocar were to warn once, and then steadily slow, all drivers would learn to take over when warned.


I don't disagree - like I say, none of my comment was intended to let Tesla off the hook in any way.

My point was simply that the drivers who use semi-automatic cars and choose to ignore warnings to retake control need to be responsible for their actions as well.


Incapacitated? The software needs to fail more gracefully than this when the driver is a dependency and isn't responding.


Exactly! In this case the driver may have been at fault.

What happens next time, when the driver has a stroke?

"Well, he didn't put his hand on the wheel when the car told him to, it's not our fault!"

Like you say, slow it down. Sure, the driver of a non-autonomous vehicle who has a stroke might be in a whole world of trouble, but isn't (semi-) autonomy meant to be a _better_ system?


Suppose I'm driving a "normal" car. I have my hands on the wheel, but I am reading a book. I hit a tree.

Is the auto-maker responsible? They advertised the car as having cruise control.

Call me old, but I remember similar hysteria when cruise control systems started appearing. People honestly thought it would be like autopilot or better - and ploughed into highway dividers, forests, other traffic, you name it, at 55mph.


Cruise control is supposed to control the speed of the car. If you set it for 55 and the car accelerated to 90 it would be considered dangerous and defective.

The steering control is supposed to control the steering of the car. If you set it for "road" and it decides to steer for "hard fixed objects" how should we regard it?


>Is the auto-maker responsible?

Why would the auto-maker be responsible for your distracted driving, and what does it have to do with cruise control?

Some of the uproar about these incidents is related to the fact that Tesla marketing is implying far more automation and safety here than the technology can actually provide. And apparently it can cost lives.


Exactly, they shouldn't be responsible for distracted driving. Autopilot can certainly get better, but you're being willfully ignorant if you don't believe that these Tesla drivers with their hands off the wheel fall under distracted driving. Could autopilot improve? Yes. Is the driver at fault here for not staying alert and responsible for the car they chose to drive? Absolutely.

Please show me a source for the claim that they're "implying far more automation" as everything I've ever seen from Tesla on this says hey dummy, don't stop paying attention or take your hands off the wheel.


> Please show me a source for the claim that they're "implying far more automation"

It is literally called "autopilot."


Are you implying that the drivers involved were not even aware that they were using the feature negligently?

Even after they used it for miles, it having certainly audibly reminded them many times, over months, that they must have their hands on the wheel?

Maybe if someone is deaf, reads "autopilot", jumps in the car and crashes, the name is at fault. Otherwise, unlikely.


Sure, the driver is at fault, but Tesla is almost encouraging this by not having a shorter threshold for keeping hands off the wheel.


>Please show me a source for the claim that they're "implying far more automation"

The key word is "implying". Of course they say "keep your hands on the wheel" in the instruction manual.


What if the manufacturer is permitting conditions that incentivize distracted driving? Like not requiring hands on the wheel?

It's also a problem if it permits driving faster than the speed limit.


It's normal to exceed the speed limit, e.g. when passing. No absolute limit can be arbitrarily set without tying the pilot's hands/limiting their options, which could make the roads more dangerous.


It may be common to exceed the speed limit when passing, but it is illegal. The setting of these speed limits is meant to be objective, and is maybe sometimes subjective, but it is wrong to call them arbitrary.

If the car autopilot software permits faster than legal speed driving, the manufacturer is taking on liability in the case of speed related accidents and damage.


No, it's not actually illegal. Here's an example regulation from Washington state:

http://app.leg.wa.gov/rcw/default.aspx?cite=46.61.425


It costs lives because folks are toying with this new thing. If they'd just use it as intended, they'd achieve the higher levels of safety demonstrated in testing.

Yeah, I just blamed the victims for misusing autopilot. Like anything else, it's a tool for a certain purpose. It's not foolproof. Testing its limits can get you hurt, just like a power saw.


It's a completely different conversation. Cruise control maintains speed only. It still requires positive driver control.

Its controls are such that it disengages gracefully. You notice a problem, instinctively hit the brake, and disable cruise.


It is about the driver. If you turn on a test feature (it warns you it's a test feature) and ignore all the procedures you must follow to use this not-yet-production feature, it's your fault.


> Those are significant failures and a blanket statement of responsibility going to the driver isn't going to cut it.

That will be the communication pattern for the next two decades; we better get used to it.


I would never buy a Tesla for this reason.

You drive their cars, and you're always driving around with a third party with its own interests.


Like a faulty GM ignition switch?


It bugs me how quick Tesla is to blame their drivers. That autopark crash, for example, was clearly the result of a software design flaw. Tesla pushed an update where to activate autopark all you had to do was double tap the park button and exit the vehicle. That's it. No other positive confirmation. No holding the key fob required. The vehicle would start moving without a driver inside.

You'd think someone would have realized that a double tap could accidentally happen while trying to single tap.

The reporting on this was terrible, but there's a YouTube video illustrating the design flaw.[1]

Thing is, someone at Tesla must know this is the real culprit. But there was no public correction of their local manager who blamed the driver.

[1] https://www.youtube.com/watch?v=t-JoZL9edlA


It's a bit harder than a double tap... You press and hold. Count to three. Now wait for the car to start blinking and make a shifting noise. Press either the front of the key or the back of the key to move forward. Definitely not easy to engage, so I can understand Tesla's view on this one... I do this every morning/night to pull my car out of the tiny garage, since I can't get my door open inside the garage.


There are multiple methods of activating their autopark feature. At the time of the crash, the setting that allowed you to activate it using the keyfob as you describe also meant that merely double-tapping the P button in the car would activate autopark and cause the car to start moving forward a few seconds after getting out. This was added in a patch, so unless you paid attention to their release notes you may have missed it.


Wait, did you build your garage for the car? What were you doing before self-parking cars?


I'm not sure why it is, but a lot of 100-year-old houses have garages that are just big enough to fit a car in and not be able to open the doors. Car dimensions really haven't grown all that much: the Ford Model T was 66 inches wide and most modern sedans are ~70-75 inches.

Maybe they were never intended for cars?

Maybe it was a cost savings thing? Homes that are only 50-75 years old seem to have larger carports instead of garages.

* I answered my own questions. A Model T was 66 inches wide but had 8-inch running boards on either side, so the crew cabin is closer to 50 inches. The doors, if it had them, would be shorter as well. A Model S is 77 inches wide, so that's roughly 27 inches of difference in clearance.


We used to park in the driveway - I have to park inside now to reach the power plug - but I see three benefits

1 my car is nice and clean most of the time now

2 no more gas station dirty hands on the gas pump, I'm OCD umk

3 I get to watch my car pull out of the garage every morning in amazement :)


Not all cars are the same size.


False. Did you watch the video? It proves that you only had to double tap and exit.

Tesla pushed an update that enabled this as a "convenient" shortcut to the more elaborate, safer method you describe. In fact, the video author praises it. Seems like a nice, handy feature... at first. Safety-critical design needs more care than this, though.


It is true. You have to press in two places on the key. First you have to press and hold in the middle of the key. Next you have to press either the front of the key or the back of the key. Show me the video? Am I missing something? Can I simply press the front twice?


The video is linked in the original comment you responded to. You ought to watch it before responding.


>Tesla allows minutes of hands-off time; one customer reports driving 50 miles without touching the wheel.

>They tried to blame the driver.

In this incident, the driver was using autopilot in a fashion that it should not be used: a twisting road at high speed. The driver IS at fault.

>They're being sued by the family of the dead driver

This is no proof of anything. You can find a lawyer to sue anybody over anything.

>and being investigated by the NHTSA (the recall people), and the NTSB

Well, of course they are. They're supposed to. Again, no proof of anything Tesla did wrong.

I'm not saying there isn't a problem with this. I'm saying your reasoning is wrong.

I'm also curious as to how many autopilot sessions take place every day. If there have been three crashes like this, where the driver is at fault, out of a million sessions, then that's one thing not considered so far.


>In this incident, the driver was using autopilot in a fashion that it should not be used: a twisting road at high speed. The driver IS at fault.

I disagree on this point. In any system that's supposed to be sophisticated enough to drive the car but also carries a giant caveat like "except in these common driving situations...", failure is not instantly down to "user error." Such a system should gracefully refuse to take over if it's not running on an interstate or another "simple enough" situation; otherwise, as many have noted, the threshold for "is the driver paying sufficient attention" should be set much, much lower.

That the system is lulling the users into bad decisions is not automatically the fault of said user. Some blame, maybe most of the blame, has to fall on the autopilot implementation. When lives are on the line, design should err on the side of overly-restrictive, not "we trust the user will know when not to use this feature and if they are wrong it is their fault."


It's not lulling anybody into anything! The guy was given multiple warnings by the car that what he was doing was unsafe. The last of which was extremely pertinent to the crash. To quote the article, "As road conditions became increasingly uncertain, the vehicle again alerted the driver to put his hands on the wheel". This is after the initial alerts telling him that he needs to remain vigilant while using this feature. This isn't some burden of knowledge on when to use and when not to use the system, it's plenty capable of knowing when it's not operating at peak performance and lets you know. At that point, I'm willing to shift blame to the driver.


I just want to see "the vehicle again alerted the driver to put his hands on the wheel" followed by "and Autopilot then pulled to the side of the road, turned on the hazard lights, and required the driver to take over."


>and Autopilot then pulled to the side of the road

"It's a winding road going through a canyon, with no shoulder"

So going from a dangerous situation to another dangerous situation.


That's stupid. Autopilot doesn't have the ability to determine if it is safe to pull over. It has a fallback mode; eventually it will come to a complete stop in the middle of the road and turn the hazards on. But since this is also dangerous it makes sense to give the driver time to stop it from happening.


It's stupid to come to a gradual slowdown with hazard lights, but it's not stupid to keep blazing away at speed, with an obviously inattentive (or perhaps completely disabled) driver? I'm confused - even you acknowledge it has that fallback, but it's 'stupid' to enact it after a couple of failed warnings? When should that threshold be crossed, then?


That is exactly what it does; apparently the accident happened before it could complete its "slow down and stop with the hazard lights on" process.


So far we have been lucky that the accidents have only impacted the driver. Is it going to take the first accident where an autopilot car kills another, innocent driver before people see the problem?


When autopilot kills more innocent drivers than other drivers you can point me to a problem. Time will tell if it is better or worse and the track record up until May/June was going pretty well.

I'd rather 10 innocent people die to freak autopilot incidents than 1,000 innocent people die because people in general are terrible at operating a 2,000-3,500lb death machine. Especially because everyone thinks of themselves as a good driver. It's fairly obvious not everyone is a good driver - and that there are more bad drivers than good.

Maybe I only see things that way because there have been four deaths in my family due to unaware drivers which could have easily been avoided by self-driving vehicles. All of them happened during the daytime, at speeds slower than 40mph, and with direct line of sight. Something sensors would have detected and braked to avoid.


Assuming we stick with the same conditions as in this case, I'd call an innocent life taken, due to a driver disregarding all warnings from his system, negligence. That also fits with what this driver was ticketed with, which was reckless endangerment. We don't even get heads-up display warnings about not driving into oncoming traffic, but nobody is going to blame a dumb car for the actions of its driver. I would say that inaction in this hypothetical case would be the driver's crime.


The Tesla warnings remind me of Charlie and the Chocolate Factory, where one of the brats was about to do something disastrous and Gene Wilder would protest in a very neutral tone, "no, stop, you shouldn't do that".

From a risk management standpoint, Tesla's design doesn't strike me as sufficient. Warnings without consequences are quickly ignored.


The car crashed. Sounds like a pretty strong consequence to me.


[flagged]


I guess I just don't know how many flashing warnings somebody has to ignore for it to finally be their fault.. At some point, their unwillingness to change their natural inclinations in the face of overwhelming evidence that they need to should have a negative consequence. This isn't like "just click through to ToS". If you buried it in the ToS or something similar, I absolutely agree with your point of view, but when you make it painstakingly clear in unavoidable manners I think we've crossed into territory of user responsibility. If this feature needs to be removed from Tesla cars (and I accept that potential future) I would definitely put that "fault" on the users of Tesla cars rather than Tesla based on the implementation I've read about.


> In this incident, the driver was using autopilot in a fashion that it should not be used: a twisting road at high speed.

I don't understand why, if the car's GPS co-ordinates are known, the car even allows autopilot on anything but roads known to be within spec.

> The driver IS at fault.

I'm not sure legal liability is that clear cut.


I can't imagine that every road in the world can be determined to be "safe" or "not safe" via GPS coordinates alone.


Then the classification would fail-safe (i.e. no autopilot for you) and that's good[1]. The alternative with the current technology is apparently depending on humans to decide (poorly). This is to minimise deaths caused by engaging autopilot at the wrong time while you gather more data.

[1] If the goal is to stay out of court. I understand the AI drivers are better on average argument.
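
The fail-safe gate being described is also straightforward to express. A hypothetical sketch (the road classes and function are made up for illustration, not any real Tesla or mapping API):

  # Hypothetical fail-safe gate: autosteer only engages on road types known
  # to be within spec; unknown or unclassified roads default to "not allowed".
  APPROVED_ROAD_CLASSES = {"divided_limited_access_highway"}

  def autosteer_allowed(road_class):
      """Return True only for roads positively known to be in spec."""
      return road_class in APPROVED_ROAD_CLASSES

  # A winding two-lane canyon road would classify as something like
  # "rural_two_lane" (or be unknown), and the feature would refuse to engage.

The point is the default: when the map data is missing or ambiguous, the answer is "no autopilot", not "let the human decide".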


Certainly the software making the driving decision is capable of 'knowing'.


Tesla's "autopilot" system and implementation has serious problems. I think it's becoming clear that they did something wrong. Whether it rises to the standard of negligence I'm not sure, but it probably will if they don't make some changes.

With that said, I agree that ultimately the driver is at fault. It's his car, and he's the one that is operating it on a public road, he's the one not paying attention.

One of the laws of ship and air transport is that the captain is the ultimate authority and is responsible for the operation of the vehicle. There are many automated systems, but that does not relieve the captain of that responsibility. If an airplane flies into a mountain on autopilot, it's the captain's fault because he wasn't properly monitoring the situation.


>>In this incident, the driver was using autopilot in a fashion that it should not be used: a twisting road at high speed. The driver IS at fault.

It would be disastrous for the car to take control when it shouldn't. That itself is a giant red flag to me.

It's all or nothing when it comes to automation that involves human life.


Sorry, but you don't get off that easy when your design is intended to make life and death decisions. I work on FDA regulated medical devices and we have to do our best to anticipate every silly thing a user can (will) do. In this case, the software must be able to recognize that it is operating out of spec and put a stop to it.


The Pennsylvania Turnpike crash is disturbing, because it's the case the "autopilot" is supposed to handle - a divided limited-access highway under good conditions. The vehicle hit a guard rail on the side of the road. That may have been a system failure. Too soon to tell.

This is the one I'm very interested to hear more about. From what I've heard, Tesla's self-driving is very aggressive in swerving to avoid obstacles -- was hitting the guard rail the result of a deliberate swerve, and if yes then was there a logic error, a sensor error, or in fact a legitimate reason to swerve?


> or in fact a legitimate reason to swerve?

Reasoning about what's legitimate can be a really fine line. Oncoming car, raccoon, fawn, human, doe, coyote, elk: some of these obstacles might actually be net-safer to not swerve around [perhaps depending on speed].

I'd be really curious if any of the autopilots have a value system that tries to minimize damage/loss of life. And how does it value the exterior obstacle against the passengers in the car?


> elk

I'm pretty sure that hitting a full-grown elk is a poor choice, and often a fatal one. A large, full-grown elk can weigh 600 pounds and stand 5 feet tall at the shoulder, which means it will probably go over the crumple zone and directly strike the windshield.

This happens commonly with moose in New England, which run up to 800 pounds and 6.5 feet tall at the shoulder. A car will knock their legs out from under them but the body mass strikes much higher and does an alarming amount of damage to the passenger compartment. You can Google "moose car accident" and find some rather bloody photos.

The standard local advice is to try really hard to avoid hitting moose, even at the cost of swerving. I would prefer that an autopilot apply a similar rule to large elk. Even if you wind up striking another obstacle, you at least have a better chance of being able to use your crumple zone to reduce the impact.


In this situation, small-to-normal passenger cars are actually much safer than typical American trucks and SUVs. When the legs are taken, the mass of the body strikes the roof of the lower vehicle and bounces over the top. The larger vehicles direct the body through the windshield into the passenger compartment.

I'm not sure about moose, but with more nimble animals, swerving is no guarantee of avoiding collision. A deer can change direction quicker than an automobile, and if it had any sense, it wouldn't be running across the road in the first place. It's definitely safer to slow down and honk your horn than it is to try to guess which direction the damn thing will jump.

[EDIT:] When I was learning to drive, my father often recounted the story of a hapless acquaintance who, in swerving to avoid a snake, had precipitated a loss of control that killed her infant child. He's not a big fan of swerving for animals.


Moose or elk don't 'bounce over the top' of a small vehicle. They crush it, rip the roof off, or go through it. It may depend on antlers, angles, speed, and so on.

I've tried to pick the least graphic images possible:

http://img.radio-canada.ca/2015/08/09/635x357/150809_kr04b_v... from http://ici.radio-canada.ca/regions/saguenay-lac/2015/08/09/0... -- which ended up killing the two occupants of the car.

This one is interesting because you may very well hit the moose facing forwards: http://www.snopes.com/photos/accident/moose.asp

I don't think there is much to be gained in terms of safety. Let's not forget that while moose can be large, younger ones are much smaller and can still damage smaller cars fairly easily.


There is lots of ambiguity here. Depends on how you hit it, etc.

Roof landings don't always end well -- my Civic was trashed when a deer basically bounced up and collapsed my roof on the passenger side. My head would have been trashed as well had I been a passenger!

In general, if you cannot stop, hitting something head on is the best bet. Modern cars are designed to handle that sort of collision with maximum passenger safety.


The cost of swerving is very, very minimal (especially in a modern car that won't go sideways no matter what you do).

Imagine a two lane road with a car traveling toward an obstruction in its lane. The distance required to get the car out of the obstructed lane via the steering wheel is much less than stopping it via the brakes.

There's a reason performance driving has the concept of brakes being an aid to your steering wheel as opposed to being operated as a standalone system.


Yes, I included it precisely because there's no reason to ever hit an elk. "some of these obstacles ..." -- it's one of the extrema that you might imagine as test cases for this kind of a feature.


In some countries it's illegal to swerve to avoid an animal. I assume this means that they've judged it safer not to swerve.


Technically the swerving itself is not necessarily illegal, but it makes you liable for anything that could have been avoided by not swerving. If you swerve on a free and open road and don't cause any damage or injury to anything or anyone (including yourself and your car), it's not like you will still be dragged to court just because you didn't want to unnecessarily run over someone's pet.


It usually means if you swerve and cause an accident, it's your fault and you can't blame it on the animal jumping out in front of you.


Probably because it would be impossible to prove that there was an animal there at that time.


Many cars have dashcams now, especially Teslas.

But definitely the law could, and probably is, out of touch.


> obstacles might actually be net-safer to not swerve around

In the UK it's illegal to swerve for these things. In the sense that if you subsequently cause an accident by swerving to avoid a cat, you are at fault.


Wait, it's illegal to swerve for a moose (TIL: moose are called elk in Europe, American elk are totally different) in the UK? Hitting a moose is a fast route to the grave.


There are no moose in the UK. There's the occasional deer on the road, and parts of the country with unfenced sheep grazing, but no moose.

Common sense will be applied and liability is not strict nor is it a criminal offence per se. https://www.clarkewillmott.com/personal-injury-and-medical-n...


Hitting a deer or a sheep has a pretty good chance of getting you killed in a coupe or a sedan.


I guess they can let the courts set a precedent when the UK has its first moose-on-road collision...


How about cows?


Such a value system would be a nightmarish thing so I hope that it doesn't exist yet.

Just imagine your car calculating that it should let you die in order to save a group of school children... except that the school children were in fact a bunch of sheep that the system mis-identified.

The danger of AI isn't in paperclip optimizers (at least for now), it's in humans over-relying on machine learning systems.


I think such a value system is fine in as much as it preserves the safety of the driver.

I'd like my car to make value judgements about what's safer - braking and colliding with an object or swerving to avoid it. I'd rather it brakes and collides with a child than swerve into oncoming traffic, but I'd rather it swerved into oncoming traffic to avoid a truck that would otherwise T-bone me.

As long as the system only acts in the best interest of its passengers, not the general public, it's fine by me.



It's fine for you until you step outside the car.


I'd rather it brakes and collides with a child than swerve into oncoming traffic

Wait, what? You'd rather hit and kill a child than hit another vehicle - a crash which at urban/suburban speeds would most likely be survivable, without serious injury, for all involved?


This is where the current AI hype cycle will die.

You cannot codify some of the "dark" choices that are typically made today in split-second, fight-or-flight situations. If we go down this road, it opens the door to all sorts of other ends-justify-the-means behaviors.

There's a threshold number of dead hypothetical children that's going to trigger a massive revolt against a tone-deaf Silicon Valley industry.


The moral choice is the opposite direction: the car should prefer to kill the occupant than a bystander. The occupant is the reason why the car is in use; if it fails, the penalty for that failure should be on the occupant, not bystanders.


But in this case it isn't a bystander, it's out in the middle of the road in front of a moving vehicle.

If a child runs in front of my car then it is most definitely at fault and I'd honestly much rather it be dead than me, if those are the options.


You're using the word "it" to describe human children now?


Yeah this thread has really squicked me out. Of course dehumanization is an age-old tactic for enabling asshole behaviors of all sorts. I'd like to hope, however, that outside the current commuter-hell driving context, few people would even admit to themselves, let alone broadcast to the world, their considered preference for killing children. That is, when they no longer drive, perhaps people won't be quite so insane about auto travel. The twisted ethos displayed in this thread will be considered an example of Arendt's "banality of evil".


I don't even drive at all, and yet this thread irks me from the other direction. Dehumanization is an age-old tactic for enabling asshole behaviors, and bringing children into a discussion is an age old tactic for justifying irrational decisions ("Think of the children").

You claimed that any car may be driven slowly enough to avoid killing people on the road with only mere inconvenience to others. That's patently false: firetruck. And then there are several other posters claiming that it's the driver's fault if any accident happens at all, due to the mere fact that the driver chooses to drive a car. If we removed every single vehicle in the US from the streets tomorrow, a whole lot of people would die, directly or indirectly. I'd like to see anyone stating outright that "usage of cars in modern society has no benefit insofar as saving human lives is concerned".

In the end, I think both sides are fighting a straw man, or rather, imagining different scenarios in the discussion. I read the original poster as imagining a case where swerving on a freeway means potentially being deadly for at least 2 vehicles, along with a multi-vehicle pile-up. You're imagining suburban CA, where swerving means some inconvenience with the insurance company. I have little doubt that everyone would swerve to avoid the collision in the latter.

Also since we're on HN and semantic nit-picking is a ritual around here, "avoid killing children in the road above all other priorities, including the lives of passengers" is NOT a good idea. As far as the car knows, I might have 3 kids in the backseats.


I believe you, that you don't drive at all. You call both situations strawmen, yet each is a common occurrence. Safe speed in an automobile is proportional to sightline distance. If drivers or robocars see a child (or any hazard) near the road, they should slow down. By the time they reach the child, they should be crawling. That's in an urban alley or on a rural interstate. If the child appeared before the driver or robocar could react, the car was traveling too fast for that particular situation.

In that "traveling too fast" failure mode, previous considerations of convenience or property damage no longer apply. The driver or robocar already fucked up, and no longer has standing to weigh any value over the safety of pedestrians. Yes it's sad that the three kids in the backseat will be woken from their naps, but their complaint is with the driver or robocar for the unsafe speeds, not with the pedestrian for her sudden appearance.

This is the way driving has always worked, and it's no wonder. We're talking about kids who must be protected from the car, but we could be talking about hazards from which the car itself must be protected. If you're buzzing along at 75 mph and a dumptruck pulls out in front of you, you might have the right-of-way, but you're dead anyway. You shouldn't have been traveling that fast, in a situation in which a dumptruck could suddenly occupy your lane. In that failure mode, you're just dead; you don't get to choose to sacrifice some children to your continued existence.

Fortunately, the people who actually design robocars know all this, so the fucked-up hypothetical preferences of solipsists who shouldn't ever control an automobile don't apply.


Thanks for your response. That's certainly an interesting way to think about driving (in a good way). Your previous posts would have done better by elaborating that way.

Just a bit of clarifying: I didn't call the situations rare or a strawman. I said the arguments were against a strawman, as you were thinking of a different scenario and arguing on that.

Out of curiosity, if the situation had 3 kids in the car and swerving was potentially deadly, what would the response be?


"You claimed that any car may be driven slowly enough to avoid killing people on the road with only mere convenient to others. That's patently false: firetruck."

As a firefighter I'd like to offer a few thoughts on this:

- fire engines, let alone ladder trucks are big but slow. Older ones have the acceleration of a slug and struggle to hit high speeds

- with very, very few exceptions, the difference of 10 seconds to 2 minutes that your arrival makes is likely to have no measurable difference (do the math, most departments are likely to have policies allowing only 10mph above posted limits at best, and if you're going to a call 3mi away...)

- again, as mentioned, most departments have a policy on how far the speed limit can be exceeded. It might feel faster when it goes by when you're pulled over, but realistically there won't be much difference

- "due caution and regard" - almost all states have laws saying that there's implied liability operating in "emergency mode" - that is, you can disobey road laws as part of emergency operations, but any incident that happens as a result thereof will be implied to be the fault of the emergency vehicle/operator until and unless proven otherwise

If I'm driving an engine, blazing around without the ability to respond to conditions as much as I would in a regular situation/vehicle, then emergency mode or not, I am in the wrong.


> That's patently false: firetruck

This is a ridiculous claim to make. Emergency vehicles will be the very last kind of vehicle to get autonomous driving (if ever), because they routinely break regular traffic rules, and routinely end up in places off-road in some way.

Hell, modern jumbo jets can fly themselves, from takeoff to landing, but humans are still there specifically to handle emergencies.

> As far as the car knows, I might have 3 kids in the backseats.

Strange that you envision having a car capable of specifically determining 'children' in the street, but nothing about the occupants. Especially given that we already have cars that complain if they detect an occupant not wearing a seatbelt, but no autonomous driving system that can specifically detect an underage human.

> You're imaging suburban CA where swerving means some inconvenience with the insurance company.

"I'd rather other people die than me" is not about imagining mere inconvenience with insurance companies.

For someone complaining about strawmen, you're introducing a lot of them.


You seem to have selectively cut out parts and either skimmed or didn't read my full post. Except for the last paragraph, my full comment has little to do with autonomous driving and is just about driving/morality with regard to driving in general.

No, I do NOT trust the car to detect children either on the road or in the car, that's why I phrased my comment that way.

> "I'd rather other people die than me" is not about imagining mere inconvenience with insurance companies.

Yes, I specifically point out the scenario the poster of that quote might be thinking about to contrast with the inconvenience scenario.

You should take your own advice: read the post again before commenting.


You are still the one who elected to hurl 1500 kilograms of metal down the road.


Read the last sentence of the parent comment again.


You are not alone in that, but perhaps in a minority. A child cannot truly be 'at fault'.


Legally speaking children are found to be at fault all the time, and under certain circumstances even treated as adults. So no, they're not incapable of being at fault.


Hypothetical question: would you send the bill for bumper dent-repair to the parents of the child who died making that dent?


Would I? Of course not.

Would my insurance company? Probably.


Let me guess, you haven't done much work in public relations? Much worse for a company than doing some evil reprehensible shit, is doing such shit in a fashion that's an easy, emotional "narrative hook" for the media. A grieving mother clutching a tear-stained auto repair bill itemized with "child's-head-sized dent, one" is the easiest hook imaginable.


Maybe so. But Google for "parents of dead child sent bill for airlift" - if you want to be pedantic, you can even argue that in some of those cases, there may not have been any choice, only an implied consent by law that says that "a reasonable person would have wanted to be flown to hospital".

You'll find examples. And for every waived bill you'll also hear "well, regardless of the tragic outcome, a service was rendered, and we have the right to be paid for our work".

Please note that I am carefully avoiding passing any judgment on the morality of any of the above.


Unless I really misunderstand, the invoices to which you refer are for services intended to preserve the life of a child, not services intended to relieve a third party of certain trifling inconveniences associated with a child's death?


This is a great argument if your goal is no autonomous cars. Nobody buys a car that will choose to kill them.


Society will not tolerate a robocar that cheerfully kills our children to avoid some inconvenience to its occupants.

One might think that we're not talking about inconvenience, but we are, because after all any car may be driven slowly enough to avoid killing children in the road. A particular robocar that is not driven that slowly (which, frankly, would be all useful robocars) must avoid killing children in the road above all other priorities, including the lives of passengers. A VW-like situation in which this were discovered not to be the case would involve far more dire consequences for the company (and, realistically, the young children and grandchildren of its executives) than VW faces for its pollution shenanigans.


Why would I want to be in a society where every other driver has a car that would act in the interests of its occupants and not the general public?


At the moment our autonomous cars can't manage to avoid a stationary guard rail on the side of the road, so it's a bit premature to be worrying about our cars making moral decisions.


I'd rather it brakes and collides with a child...

Humanity: you're doing it wrong. That some humans have this attitude is another reason why humans shouldn't drive automobiles.


The first priority of an autonomous vehicle should be to cause no loss of life whatsoever. The second priority should be to preserve the life of the occupants.

Nobody buys an autonomous car that will make the decision to kill them.


You'd hope the system has sensor, video, and audio logs, and some kind of black-box arrangement, to at least get the maximum value in terms of information out of those crashes that do happen. If so, it should be possible to get a real answer to your question.


Tesla's spin control is backfiring. They tried to blame the driver. They're being sued by the family of the dead driver, and being investigated by the NHTSA (the recall people), and the NTSB. In comparison, when a Google self-driving car bumped a bus at 2mph, Google admitted fault, took the blame, and Urmson gave a talk at SXSW showing the data from the sensors and discussing how the self-driving car misjudged the likely behavior of the bus.

It's a hell of a lot safer for Google to take the fault for a 2mph bump into a bus than it is for a case where someone died. I'd guess Tesla will ultimately settle the lawsuits against them, but if they came out and admitted 100% fault, which it likely isn't, they'd have absolutely no ground to stand on and could get financially ruined at trial.


Damages from a single death have a relatively low cap unless the victim was making a lot of money. Lost sales are generally a much larger issue.


Tesla drivers do tend to be higher earners than average.


300k * 20x = 6 million. Not chump change, but not going to do much to Tesla.

A CEO making 15 million / year is another story.


The base Model S with no extras costs more than twice the average price of all new cars in 2016.


> It takes seconds, not milliseconds, for the driver to recover situational awareness and take over.

The really depressing thing is that this is first-year material for one of the most popular tertiary courses around: psychology. It is simply gobsmacking to think that Tesla thinks a human can retarget their attention so quickly. It's the kind of thing you'd give to first-year students and say "what is wrong with this situation", it's so obvious.


> It is simply gobsmacking to think that Tesla thinks a human can retarget their attention so quickly

They don't actually believe it, but they have to make the facts fit the narrative of autonomous driving being right there.


>3. Tesla's autopilot behavior after a crash is terrible. It doesn't stop. In the semitrailer crash, the vehicle continued under power for several hundred feet after its roof was sheared off as it went under the semitrailer. Only when it hit a telephone pole did the car stop.

This is actually the most disturbing part to me. In these situations, graceful failure or error handling is absolutely critical. If it cannot detect an accident, what the heck business does it have trying to prevent one?


I wonder if the dead driver hit the accelerator?


If he had hit the accelerator, you can bet your bottom dollar that Tesla would have mentioned this detail in order to continue the "driver's fault" narrative.


I still say they had best damn well rename the feature. The connotations of the word "autopilot" are so strong as to be misleading about what it can actually do.

When the technology matures, bring the name back, but until then call it something else.

I am still amused that most of these self-driving implementations only drive when people are least likely to need the assistance: good visibility conditions. They are the opposite of current safety features, which are designed to assist in bad driving conditions. I won't trust any self-driving car until it can drive in horrible conditions safely.

At most these services should be there to correct bad driving habits, not instill new ones


I can see how the name could be misleading to some people. However, I think the term is quite accurate. In an aircraft, Autopilot takes care of the basics of flying the plane while the pilot is fully responsible for what is happening, watching out for traffic, and ready to take over if necessary. This is exactly what it means for Tesla as well.

In any case, as somebody who has used it off and on for 6 months in all kinds of situations, it is clear that it has limitations, but it is also easy to know when to be worried. Its best use is in stop-and-go freeway traffic. Its worst use is on smaller, curvy surface streets. It is also really nice for distance driving on freeways.


"Tesla's sensor suite is inadequate."

To me this seems to limit what is possible by software updates.

Does that mean that existing Tesla owners will never get 'real' autopilot capabilities by software upgrades, although I assume most expect to get them?


I think it was pretty clear before that the sensors do not cover that.

Maybe autonomous highway driving in good conditions will be possible with these sensors. (With the last few days' information, even that seems unlikely.)


Isn't autonomous highway driving in good conditions the only thing AP is trying to do?


I think Tesla should start some upgrade program for self driving features. One thing that holds me back from buying one of the current cars is that they could get obsolete quickly if the next version has better self driving sensors.


It's amazing to me that anybody could read these stories and still want to be a beta-tester for Tesla's immature autopilot features.

I think I'm the only person around who actually enjoys driving my car.


Tesla's autopilot behavior after a crash is terrible. It doesn't stop. In the semitrailer crash, the vehicle continued under power for several hundred feet

Are we sure it continued under power? The Tesla is a heavy vehicle, with weight centred close to the ground. Momentum could easily have carried it several hundred feet after a high speed collision.


There's a report on electrek.co somewhere showing a google map of the crash site and where the car ended up. The car ended up a quarter mile down the road and had passed through a fence or two and some grass. There's also a report somewhere of a driver who says he made the same turn as the truck and followed the car as it continued down the road. Seems like it was under power. NHTSA, NTSB or Tesla could confirm with the data.


Does autopilot disengaging also disable the accelerator? Otherwise, a foot from a disabled driver could have caused the vehicle to continue under power. Obviously Tesla probably has all this telemetry already.


> a foot from a disabled driver could have caused the vehicle to continue under power

I imagine NHTSA, NTSB, or Tesla will report on that when ready.


Autopilot is not autonomous driving. It is fancy cruise control. Just like cruise control can't handle every situation neither can autopilot. Just like autopilot in an aircraft it requires a human paying attention and ready to take over.

The system works as intended and advertised. Obviously it isn't perfect but with the fatality rate roughly equal to human drivers and the accident rate seemingly much lower it seems at least as good as a human.


This would be very reasonable if the feature were not experimental. You specifically turn on an experimental feature and then purposefully ignore the instructions on using that experimental feature. This is obviously the manufacturer's fault.


Yes. It is completely predictable that some percentage of people will ignore instructions, even for production-quality features. That's why the fail-safe comes first, even in an experimental feature. If you can't provide a fail-safe, it's not ready for prime time, including testing by the public.


Cool logic. I guess we should stop prescribing drugs because there is no failsafe when people ignore dosage and directions. We should stop selling microwaves because people might ignore the warning and put a cat in there... There is of course an alternative of people actually being responsible for doing stupid things and not blaming everyone and everything for their own actions.


I agree their sensor suite needs improvement. I hope they are adding Lidars and/or radars before they start shipping large volumes. A recall of large numbers of cars would kill Tesla


>> Google admitted fault, took the blame

Let's wait until Google's car kills someone and then compare their reaction to that of Tesla's.


Still waiting...


My understanding is that most of Google's cars are limited to around 30mph. Seems unlikely to kill an occupant at least.


Still waiting to be able to purchase a Google self-driving car as well...


Maybe when they develop a safe, reliable autopilot system, they'll offer it for sale to the public. Some firms seem to have gotten that backwards...

It has always seemed that Google views robocars as a long-term revolutionary concept, while car companies see them as another gaudy add-on like entertainment centers or heated seats. It's not yet clear which is the right stance for a firm to take, but Tesla's recent troubles might be an indication.


Which is exactly why I think it's a rather unfair comparison. They are two different things that may have the same outcome.


The Pennsylvania turnpike case hasn't been shown to involve autopilot yet. Tesla's logs have evidence to the contrary.


No, Tesla issued a somewhat deceptive press release to make people think that:

"Tesla received a message from the car on July 1st indicating a crash event, but logs were never transmitted. We have no data at this point to indicate that Autopilot was engaged or not engaged. This is consistent with the nature of the damage reported in the press, which can cause the antenna to fail."[1]

When that gets trimmed down to a sound bite, it can give the impression the vehicle was not on autopilot.

The driver told the state trooper he was on autopilot.[2] He was on a long, boring section of the Pennsylvania Turnpike; that's the most likely situation to be on autopilot.

The NTSB will sort this out.

[1] http://www.philly.com/philly/blogs/real-time/Was-Tesla-invol... [2] http://www.bloomberg.com/news/articles/2016-07-11/pennsylvan...


Well I was just a little ahead of my time: https://mobile.twitter.com/elonmusk/status/75369986163612876... . As it turns out the Autopilot was indeed turned off in the Pennsylvania case.


> Tesla allows minutes of hands-off time; one customer reports driving 50 miles without touching the wheel.

How are these numbers compatible with each other? Was the car going at >1000 miles per hour?


Multiple people here have reported that their car has allowed 15 minutes or more without having to touch the wheel. If each warning reset that timer, by design or bug, then you could easily travel 50 miles plus.

Plus, I'm almost certain that Tesla's goal with the system isn't to "have an autonomous system that you have to interact with steadily", hence the 15 minute warning in the first place. They have perverse incentives - to allow it in the first place implies reliability that may or may not be there.
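A rough back-of-the-envelope Python sketch of how "minutes of hands-off time" and "50 miles" are compatible; the 65 mph cruising speed and the timer-reset behaviour are assumptions of mine, not documented Tesla behaviour:

  speed_mph = 65              # assumed highway cruising speed
  warning_interval_min = 15   # hands-off time reportedly allowed before a warning

  miles_per_interval = speed_mph * warning_interval_min / 60
  print(miles_per_interval)        # ~16 miles per warning interval

  # If touching the wheel (or the warning itself) resets the timer,
  # four such intervals already exceed 50 miles of mostly hands-off driving.
  print(4 * miles_per_interval)    # ~65 miles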


Meh.. I assumed it was a buggy case that allowed more than an hour of hands-off time.


Why would you need "more than an hour" to do 50 miles? Especially where you'd expect autopilot to be most used (highways)? 'Round here it's about 40 min on the highway, and barely an hour (give or take) inter-city, and that's driving at posted speeds, which people don't necessarily do (and I expect Tesla does not enforce).


I wrote this on a previously buried thread:

Naming their system "autopilot" goes beyond the marketing dept overstepping and into the territory of irresponsibility.

The "average" definition of autopilot per Wikipedia is "autopilot is a system used. . . WITHOUT CONSTANT 'HANDS-ON' control by a human operator being required."

And yet Tesla's disclaimer states:

"Tesla requires drivers to remain engaged and aware when Autosteer is enabled. Drivers must keep their hands on the steering wheel."(2)

The disconnect between the name and the actual function is needlessly confusing and probably irresponsible.

They should change the name to "driver assist" or something more accurate given that mis-interpreting the system's function can lead to death of the driver or others.

(1) https://en.wikipedia.org/wiki/Autopilot

(2) https://www.teslamotors.com/presskit/autopilot


> They should change the name to "driver assist" or something more accurate given that mis-interpreting the system's function can lead to death of the driver or others.

While I definitely agree on the name change, it should be accompanied by a feature change. In its current mode, and especially if people know it has been renamed, the same stuff will happen.

When you take your hands off the wheel, it should turn on warning lights, brake increasingly hard (depending on what it detects behind it), and sound loud warnings in the car. If it's truly an assistive feature where you can't be inattentive, the only scenario in which you stop steering is when you've fallen asleep.


That's what it does - it alerts drivers, and when their hands are off for some amount of time the car will pull off to the shoulder and stop until their hands are back on the wheel (IIRC).


What's the interval between when you take your hands off and it decides to pull over? From what I've seen in discussions and videos, it's pretty long. Other car manufacturers seem to only allow a few seconds.


The article seems to imply that the warning is given if driving conditions become "uncertain" and the driver's hands are off the wheel. Perhaps the time interval between warning and the car automatically stopping should be lower. What's the point of the car continuing on autopilot if it's uncertain of where it's headed?


More details here:

http://www.teslarati.com/what-happens-ignore-tesla-autopilot...

Based on that account, it seems the warning is triggered if the road becomes curvy. There may also be additional triggers.

It seems insufficient to me. IMO there should be an interval of at most 5 seconds before the first warning, and another 5 seconds before the second, and then the car should start decelerating.


> What's the point of the car continuing on autopilot if it's uncertain of where it's headed?

Haha agreed! I don't know exactly how the system works. My sense is Tesla's AP lets you be hands-off for a much longer time than other self-driving systems do.


I think about three minutes. My VW Passat demands the hands back on the steering wheel after 15 seconds or so I believe. Same experience driving a recent Audi.


Ah okay, I don't own a Tesla and this wasn't covered in news articles.

In that case, it sounds like it might actually work more or less as it should. It's just a bit useless in its current form.


Is it even useful if it's merely assisting the driver, though? I haven't used it, but actively steering a car while someone else is also actively steering it sounds pretty terrible.

If you're not actively steering it but just waiting to take over if things go poorly, things also seem like they'd be difficult. Other than getting in an accident, how is a person supposed to know when autopilot is doing its thing correctly, and when it isn't?


> but actively steering a car while someone else is also actively steering it sounds pretty terrible.

My Passat has most assist systems and I like how it steers. It tries very hard to get my attention and get my hands back on the steering wheel. It barely lets me keep my hands off the wheel for 15 seconds, but this is perfect for what it's designed to do: keep you in the lane if you are being distracted by something (eg: baby).

What's more, it has various modes to get your attention back, including detecting tiredness and an inability to keep the car in the lane.


> keep you in the lane if you are being distracted by something (eg: baby).

If you can't drive responsibly with a baby on board lane assist isn't going to save your ass. If something happens to your baby which distracts you ask someone else in the car to take care of it, if there is nobody to help pull over at the earliest opportunity.

Lane assist will not save you from anything else but lane departure and there are a lot of other things that can go wrong while you allow yourself to be distracted because you rely on lane assist to keep you safe. That's not the intended use of this feature.


Getting distracted by a sudden cry or a little child throwing an item is precisely what the feature was designed for. It was not designed for watching TV or constantly taking your hands off the wheel to do something else.

Also I find it interesting how much you assume about a person or situation from a single word.


Be honest, it's for texting, right? b^)


I can't imagine the gymnastics involved in this feat either. A baby doesn't belong on the front seat without some modifications to the car as far as I know.

I absolutely agree. There is absolutely no excuse for the driver to take their eyes off the road to assist a baby in the car. How would you assist a baby in the back seat while driving with a seat belt on anyway?

As I read about people using these features, I'm convinced they shouldn't exist. Yes, crazies will apply eyeliner while going sixty on a highway but I doubt driver assist is an adequate solution for such stupidity should a driver poke themselves in the eye.


Who said anything about a) a baby on the front seat (which is not permitted anyway) or b) assisting a baby?

If you have ever driven with a baby or child in a car, you will have learned that they can cry or throw a tantrum. Assist systems are intended to supplement and assist the driver in situations of increased stress. The idea that one would actively take their hands off the wheel to do something with a child is so preposterous that only commenters on HN could seriously suggest it.


I wouldn't worry about it. Some people just feel a constant need to get up on that high horse whenever possible.


Pretty much every modern car has a way to deactivate the passenger airbag specifically so you can place a baby seat in the front.


At least in the United States, most state laws dictate that babies are to ride in the back seat (if available) in child seats. Some states also dictate when a child can be turned around to face forward. Varies state to state.

Here's a state-by-state break-down:

http://www.ghsa.org/html/stateinfo/laws/childsafety_laws.htm...


If you can't drive responsibly with a baby on board lane assist isn't going to save your ass.

You seem really defensive about a pretty inoffensive remark. Maybe you need to reflect on why you are so angry about something so trivial.


Given how many deaths happen due to car accidents, one would think that any car feature that decreases safety should be considered a bug (UX bugs included).

The sad thing is that Tesla has been repeatedly blaming the driver, instead of improving its technology in an openly earnest manner -- an approach that eventually made air travel the safest.

With so many sensors, car travel can eventually become safer than air travel. If Tesla's attitude improves, that future will arrive faster.


Has this feature decreased safety? Tesla's first fatality came after 130 million autopilot miles, whereas traditionally in the US we have one fatality every 90 million miles. If that continues to hold, this seems to be significantly safer, with its own set of caveats. Not altogether different from getting stuck in a burning car due to your seatbelt.


I actually worked out the math on this--Tesla is lying with statistics in the comparison they made about 1 fatality in 130 million miles.

Here's what I did:

  1 in 90 million rate: ~1.1 x 10^-8 per mile
  Probability of no accident in one mile: 1 - 1.1 x 10^-8 = ~0.999999989
  Probability of no accident over 130 million miles: 0.999999989 ^ 130e6 = ~0.24
  Probability of AT LEAST ONE accident in 130 million miles: 1 - 0.24 = 0.76
If you do the same for a rate of 1 in 130 million miles you get 0.63.

So the true rate could really be worse than 1 in 90 million miles and they just got lucky that no incident presented itself, which would make it more dangerous than a regular car.
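For anyone who wants to check the arithmetic, here is a minimal Python sketch of the same calculation; the per-mile rates are the ones quoted above, and treating each mile as an independent trial is of course a simplification:

  # Probability of at least one fatality over N miles at a fixed per-mile rate.
  def p_at_least_one(rate_per_mile, miles):
      return 1.0 - (1.0 - rate_per_mile) ** miles

  miles_driven = 130e6  # Autopilot miles at the time of the first fatality

  print(round(p_at_least_one(1 / 90e6, miles_driven), 2))   # ~0.76
  print(round(p_at_least_one(1 / 130e6, miles_driven), 2))  # ~0.63

Either way, a single fatality in 130 million miles is statistically consistent with a true rate no better, and possibly worse, than the human average.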


Another guy said on divided roads the rate is only about one per 150 million miles [1]

And, that still includes motorcycle accident rates which shouldn't be included in a comparison to Teslas.

[1] https://www.reddit.com/r/technology/comments/4sgxv1/while_te...


In other words, the exact rate is very uncertain when there's only one crash data point. They might be lucky (or unlucky). Well that's obvious, and I wouldn't call it lying at all.


It is misleading though...


I agree that the fatality rate does not allow for any comparisons. Autopilot could cause significantly more or fewer fatalities than humans.

On the other hand, one also needs to consider accident rate. From what I understand, based on US averages, we should have had 50+ accidents out of 100+ million miles driven on AP, but we had about three.


Also, are they counting all previous versions of the software instead of just the latest?

Moreover, I wonder how the human drivers in the statistics fare if you correct for drunk driving, using cellphones, etc. In my opinion, any autopilot worth its name should be an order of magnitude better than the average human driver.


Averages are a bad way to look at the overall risk.

An example:

  Every time the autopilot encounters a truck turning in front, the result is a fatality.
  1 in 10 times, a drunk driver fails to avoid the truck turning in time.
The overall average for the drunk driver could be 1 in 50 million miles and for the autopilot 1 in 90 million, but you are certain to die in a few specific situations with the autopilot, whereas the human could possibly save himself in the same situation. I imagine the Tesla has lots of these bugs that haven't been found yet.
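A toy Python calculation, with entirely made-up per-scenario numbers, shows how the averages can come out matching the figures above even though the two "drivers" behave very differently when a truck turns across the lane:

  # Assumed: a truck turns across the lane once per 100 million miles; the
  # autopilot always dies in that scenario, the drunk driver dies 1 time in 10.
  p_truck   = 1e-8     # per-mile chance of the truck-turning scenario (assumed)
  ap_other  = 1.1e-9   # assumed autopilot fatality rate in all other miles
  drk_other = 1.9e-8   # assumed drunk-driver fatality rate in all other miles

  ap_avg  = p_truck * 1.0 + (1 - p_truck) * ap_other   # autopilot: certain death
  drk_avg = p_truck * 0.1 + (1 - p_truck) * drk_other  # human escapes 9 times in 10

  print(1 / ap_avg / 1e6)    # ~90 (million miles per fatality)
  print(1 / drk_avg / 1e6)   # ~50 (million miles per fatality)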

I would trust a drunk driver over Tesla's autopilot at this point. Don't die for Elon Musk's dream--being one of the first auto pilot fatalities isn't that big of an accomplishment.

Also, I'd give "Technopoly" a read.


That's 90 million miles among all cars.

I'd wager that if you compare it with cars at Tesla price point -- mostly luxury cars -- Tesla would look pretty bad.


Why do you think the accident rate is lower among luxury cars as compared to more common vehicles? I would suspect the opposite. My hypothesis is that the people who seek out and buy cars that go 0 to 60 in 3.2 seconds probably drive faster and more aggressively than average. They therefore are more likely to have an accident.


> My hypothesis is that the people who seek out and buy cars that go 0 to 60 in 3.2 seconds probably drive faster and more aggressively than average

Highly unscientific anecdotal evidence, and personal experience living in an area with an unusually high concentration of such cars, shows me that most people who own really fast cars do drive fast and enthusiastically from time to time, but do so in a responsible manner and typically, respectfully, go with the flow. A lot of people with more regular cars (mostly MPVs around here), on the other hand, have a habit of driving a good deal too close and a good deal too fast every single day. Many even engage in some truly reckless behaviour as soon as they feel threatened/frustrated, a telltale sign of power/control issues.


Very anecdotal evidence from commuting in Toronto: drivers in $80k+ cars often leave 2-3 times the following distance of drivers with cheaper cars. Not all, but it's very noticeable sometimes.


Several reasons, but again, it is just guessing that cannot replace actual data. That doesn't mean we should take the only data point available (90 million miles) and declare it settled.

Your point about people seeking to drive faster is a valid one.

The counterpoint is there are people who cannot afford luxury cars but also want to drive faster -- they'll end up buying cheaper cars that can drive fast within their price range. Those are likely to be more fatal than the average luxury car.

1) Hypothesis on car age. An accident on a 2002 non-luxury car model is supposedly more likely to be fatal than an accident with your average new car.

This matters because Teslas with autopilot are relatively new compared to overall car population.

2) Hypothesis that low cost cars are more fatal than an average luxury cars.

3) Demographic hypothesis. Because of price point, the buyer demographic of newer luxury cars will be different than the overall demographic. Age group (e.g. teenagers vs. young adults vs. older adults), education level, profession.


Luxury cars do tend to do pretty well among late model cars in terms of fatality rates--haven't seen rates for accidents overall. But the biggest correlation is probably that bigger cars are safer and smaller cars less so:

http://www.cbsnews.com/news/report-lists-cars-with-highest-a...

The fatality rates are probably also notably better on late model cars relative to clunkers because of safety features.


But this is overanalyzing, if we use only the "1 fatality in 130 million miles" stat. That 1 fatality was a car driving under a truck at speed and losing its entire top half.

I don't think more expensive cars are less likely to have that kind of accident, nor would it be less fatal.


I was responding to a specific comment. I agree that fatality statistics are utterly meaningless based on one event.


Most of them have a lot of safety features, like automatically braking if needed.


Really? Bringing guesses to a fact game. We can do better.


Of course they're going to blame the driver. Admitting anything else tends to lead to class action lawsuits. They're a business, not a lifestyle.


This. It's like they want marketing credit for the engineering feat of creating 'autopilot' while maintaining the narrow legal position that the system isn't actually automatic or a pilot.


I'd like to see a survey of how many Tesla drivers think Tesla advocates complete cession of control to Autopilot.

My suspicion is zero.


Well, sadly the first casualty was a man who got 3 million views on YouTube with a video called "Autopilot saves Model S" months before his fatal accident... http://youtu.be/9I5rraWJq6E

At that time, I don't recall Tesla making any statement about how dangerous it was to deliberately push the limits of the system to entertain viewers.

This might have contributed to making him overconfident about Autopilot.

RIP


> At that time, I don't recall Tesla making any statement about how dangerous it was to deliberately push the limits of the system to entertain viewers.

Yeah who knows. I think Musk retweeted his video. Hindsight is 20/20. Tesla can still move forward and learn from this.


That's going to hurt in the lawsuit, isn't it?


I'm not a lawyer, so, I don't know


It doesn't matter what they advocate; what matters is the perception of the technology. Just because Tesla doesn't explicitly say it's a perfect system, and says the driver has to be aware at all times, doesn't mean that actually happens, and with such a lenient allowance for hands-off time it's not surprising that drivers would essentially give control to the car.


What I'm saying is I don't think it has anything to do with calling it autopilot, and even less to do with it not meeting the technical definition of autopilot.

People are using it in this way because they're able to, and they'll continue to do so until they're not able to. I think we agree here actually. The name isn't really the issue.


I couldn't agree more. Autopilots of any kind have always required supervision. Tesla doesn't call its car self-driving, and there are many warnings that it isn't. Ultimately, the system could be called Boris and people would still use it irresponsibly.


Based on the times I've seen people staring down at their phones on the 280, it's definitely much higher than 0.


Do you have any evidence at all that any accidents have been caused by misinterpreting the feature's name? Do you think that if it were called something different these accidents wouldn't have happened? I see little reason to seriously believe that.


If it's a good enough name for airplanes, it's a good enough name for cars. Both require users to be ready to take control at any moment.


> If it's a good enough name for airplanes, it's a good enough names for cars

* The real autopilot doesn't require hands-on-the-yoke while it is engaged, but 'autopilot' does.

* Airplanes don't have to deal with complex traffic conditions requiring interaction with other airplanes

* Airplanes do not smash into lane barriers/trees if they veer a few feet off the expected course

* "Taking control" on an airplane could safely take tens of seconds, but for a car, that is far too long.

So no, autopilot isn't good enough for cars. Cars need something much better.


Aircraft (with over 9 passengers) are required to have TCAS[0]. TCAS issues a resolution advisory if conflicting traffic is detected, and if the approaching aircraft also maneuvers, TCAS can issue a "Reversal" if needed.

Airport runways typically have edge lighting, and an aircraft that is off-center during takeoff or landing can hit the lights and do damage[1]. Airliners land at 150mph+, so it doesn't take much deviation to hit the edge lights. Occasionally, this also happens during a gusty cross-wind landing while coupled to autopilot (auto-land), or immediately after touchdown following a coupled landing.

[0] https://en.wikipedia.org/wiki/Traffic_collision_avoidance_sy...

[1] http://www.emirates247.com/news/emirates/etihad-plane-hits-r...


You're far less likely to suddenly encounter unexpected conditions when flying though; the sky has a tendency to be nothing but air, while the ground has a tendency to be filled with things you have to avoid, many of which are moving.


I would argue that the time frames involved are very dissimilar. In a Tesla, you need to be able to take complete control in under a second if the system fails. The closest I can think of in a plane is during an assisted landing, which is a relatively brief period where the pilot is fully engaged anyway.


Airplane autopilots do not require millisecond reaction times from the pilots during the control hand off, while Tesla 'autopilot' apparently does.


Pilots (require certification and extensive training) are different from car drivers.


Going beyond the plane analogy everyone is going with, all major transportation systems that utilize autopilot systems (planes, ships, trains) require training that teaches the pilots (or conductors for trains) to know how to stay alert and when to take over.

Perhaps it is true that the semi-intelligent autopilot is good enough for these trained activities, but the untrained public doesn't realize that autopilot today is not full automation.


Flying a plane is more difficult, therefore requires more training. But driving a car requires licensing (certification) and training.


Meaningful training is not required to get a driver's license in the U.S. In most states it is completely adequate for a parent to teach you to drive, with no minimum number of hours in a classroom, public or private.

Pilots also receive ongoing currency training. This doesn't exist for automobile drivers. Maybe some states require a written test every few years? But in the three states I've lived in, no written, oral, or in-car test or recurrency training has been required since the original license was issued when I was 15 or whenever.


I had to pass a 20 question multiple choice test, drive at 15mph on a short course and parallel park to get my license at 17 years old. That was my only certification process. Unless laws change, I will never have to go through a training or certification process again in my life.

Comparing pilot certification to that of a driver's is disingenuous.


Thankfully, it's not that easy in all countries. For my test, 3 failures in any category was an instant fail. It's fairly hard to pass the first time.

Although if US drivers are so untrained, perhaps auto-pilot is even more relevant.


No doubt, but both groups need to supervise their Autopilots because autopilots of any kind have always been supervised. That's the point the grandparent is making.


The supervision required for airplanes is not the same. If an airplane autopilot flies you straight into the ground and turns off half a second before hitting it, that won't count as acceptable behavior on its part.


The clouds are rarely full of rocks. The planes have pilot/copilot and cabin crew. The pilots are highly trained professionals. The systems on the planes pass extremely rigorous testing to be certified.

Chances of gaining control and awareness of the situation in a timely manner on an airplane are higher.


Autopilot for planes came from a culture where failures weren't denied because of profits. Public safety and being surer than sure that incidents never happened again were and are the utmost concern. Pilots are professionals who went through training and certification.


Yes but one has a much higher barrier to entry.


It's sad. I am a big fan of Tesla and had, and still have, very high hopes for them, but they are fading. It seems Tesla is hell-bent on committing suicide. This way, they don't need the fossil-fuel car lobby to destroy them. BMW, Volkswagen, and the other producers of dirty gas guzzlers must be chuckling now.

Why the hell are they so bullish on autopilot? I cannot see any sense in it. They should concentrate more on the battery, charging-station, and efficiency/performance parts of the equation. Instead, I am seeing more effort being thrown at not-so-relevant areas, and aggressively at that.

Semi-automatic autopilot is an idea rejected even by Google. Tesla should learn fast and get back to its core business: battery-powered electric cars, period, not battery-powered automatic electric cars.

I'd be happy if they prove this wrong.

edit: many typos and grammatical mistakes, sorry.


It really fits well with their image of an agile tech company producing cars from the future. Also, they need those software upgrades, as they buy them some time to focus on new car models rather than replacing existing ones.

Tesla is still a tiny player in the car business and at some point a big brand will enter its market in force. They need to be bullish to maintain their image.

That said, their PR has always been overreactive. Remember all the dirt they threw at people reviewing their cars, or how they came down hard on people whose Teslas caught fire. That was unnecessarily "dickish". If Tesla has some actual blame to take, the tone set by their PR people is going to seriously backfire.


> It seems Tesla is hell-bent on committing suicide.

It's unfathomable why they don't just concentrate on their core competency (EV technology) and keep their AI experiments in R&D (where they belong).


> The BMW, Volkswgen and the likes of dirty-gas-guzzlers-producers would be chuckling now.

Funnily enough, all of those are already investing heavily in automated electric cars, or even have semi-automated electric cars on the market.

Just less hype around those.

If Tesla just continues like this, they'll be dead in a year, and Volkswagen, which has also been further encouraged by the EPA to develop electric cars, will simply take over their market share.


> The BMW, Volkswgen and the likes of dirty-gas-guzzlers-producers would be chuckling now.

Why? Autonomous cars =/= electric cars


With Tesla in the picture, there is a lot of pressure on BMW, Volkswagen, etc. to invest in EV technology. So they would be chuckling now because, with Tesla out of the competition (or weakened), they would have an easier time pushing their dirty gas guzzlers and reaping loads of easy profits.

With Tesla out, they could even squash their electric-car efforts altogether and plunder the money thus "saved" as fat bonuses for the higher-ups (à la Volkswagen), money they would otherwise have to invest in EV research.

I may seem to be exaggerating, but after the recent Volkswagen emissions scandal, many will agree with this. Tesla is better, Elon Musk is better in terms of innovation, and their direction seems more promising for the public good. They are disrupting the fossil-fuel giants, who are hell-bent on garnering as much money as they can at whatever cost.

I know I may (turn out to) be foolishly naive, believing too much in Tesla and Musk, but I find myself helpless here.


> Tesla is better, Elon Musk is better

Perhaps you live in a bubble, as most car makers have barely raised an eyebrow over Tesla as a competitor so far. Making ton-loads of batteries is not necessarily better than making efficient, safe, and less polluting "gas guzzlers".

From the outside, it seems there is a push to revive the American car industry through EVs, supported by lenient US regulators eager to see results. Of course, time will tell, and hopefully in the process we will all end up with safer, more eco-friendly cars.


I don't understand why Tesla is allowed to release a safety-critical system onto the roads when it requires buyers to acknowledge its not-finishedness. They're putting beta systems into an uncontrolled environment. No driver in the nearby lanes has been given that choice.


This, a thousand times. Why is this allowed? And then they go the extra mile and call it autopilot, which has all kinds of connotations about not needing to give a fuck about what the vehicle does. Bad, bad, bad. Recall and remove the feature until it's ready for actual use. Don't put everyone else on the road at risk so you can beta-test.


Well, what I fail to understand (but do not find unbelievable) is how people dare leave their car's control to software that has just been released. I'm sorry for all the incidents, think Tesla is to blame here, and would like electric to replace gas, so no cynicism intended, but it's just madness to me how open people are to alpha-testing something that may easily cost them their lives. It's called autopilot, so what? While I admit it's bad naming, one should be responsible for one's own safety before anybody else is.

On the other hand, self-driving vehicles should be a different class of vehicle, properly regulated by governments. A road is a complex environment for an AI-backed vehicle, especially considering the lack of the communication and organisation systems used for naval and aerial vehicles.


It's already something other cars on the roads have. Tesla just did it on the cheap and Tesla gets more press because it's not a 100 year old car company and its founder is Elon Musk.


Other cars require hands on the wheel at more frequent rates, as I understand it. There's no conspiracy. Everyone wants a good product out of Tesla. Sometimes that means additional pressure. 100 year old car companies have experienced plenty of pressure, and these days they anticipate the pushback. The difference between the companies is not how the press treats them, but how they handle and respond to adverse events. This article says it better [1]

> At Ford Motor Company, executives told me about all the pushback they received when they introduced features we now view as standard. People resisted and feared seat belts, airbags, automatic transmission, automatic steering, antilock brakes, automatic brakes, and trailer backup assist. Ford does extensive focus groups on new features that push boundaries, holding off on introducing ones that too many people don’t understand.

> Tesla doesn’t operate that way. When the company’s Autopilot software, which combines lane-keeping technology with traffic-aware cruise control, was ready last Fall, Tesla pushed it out to its customers overnight in a software update. Suddenly, Tesla cars could steer and drive themselves. The feature lives in a legal gray area.

[1] http://fortune.com/2016/07/11/elon-musk-tesla-self-driving-c...


"No driver in the nearby lanes has been given that choice."

Exactly!


Indeed.

I'm not sure of the legalities in the US, but in the UK I'd suggest that there are a few things that they'd fall foul of:

o advertising standards: it's not an autopilot
o trading standards: not making a safe product / missing warnings [0]
o death by dangerous driving [1]
o driving without due care and attention [2]

[0] There are layers of consumer protection that enforce things like: no sharp edges, doesn't give off nasty vapours, does what the marketing says it does.

[1][2] These are crimes for the driver. If you are found to have been driving recklessly, not concentrating, or not keeping your hands on the wheel, that's your fault as a driver: a very large fine and time in jail.

Especially as the Tesla has lots of sensors to prove what the driver was doing.


Am I the only one that thinks people should be held responsible for their own choices?

This argument is completely bogus. No driver in the nearby lanes is aware the other driver is drunk, a bad driver or emotionally unstable.

The entire idea of beta testing is to have your system on an "uncontrolled environment". The system has bugs, as any beta system does, and whoever uses it should be fully responsible for it. Yes, fully responsible.

Tell me what's the difference between using a beta system and not caring (leaving hands off the wheel, etc) and drunk driving? In both situations the driver is choosing to put everyone at risk, nobody knows about it until after an incident, and if there is an incident, whose fault is that? The beer company? The car company?

If the driver assistance made cars crash because it ignored a command from the driver or something like that, then sure... it's 100% Tesla's fault. But clearly in all of these accidents the drivers were careless in the extreme, leaving their hands off the wheel for several minutes.

We can discuss if they should try to improve how they handle people that remove their hands from the wheel, but that doesn't change the fact that ultimately the choice to be careless was made by the driver, not the car manufacturer.


Comparing this to a drunk driver is not relevant, as driving drunk is already illegal, whereas autopilot is not.

> We can discuss if they should try to improve how they handle people that remove their hands from the wheel, but that doesn't change the fact that ultimately the choice to be careless was made by the driver, not the car manufacturer.

Yes, the driver is to blame for that, just as Tesla is to blame for allowing that to happen in the first place. They have sensors to detect whether the driver has his hands on the wheel, and yet they allow the vehicle to keep going for minutes in a situation it can't control.

There's plenty of blame to go around.


"Tell me what's the difference between using a beta system and not caring (leaving hands off the wheel, etc) and drunk driving?"

The latter is a clearly defined criminal offence, whereas the former is currently unclear. While the driver is certainly liable for reckless driving, it is also a fact that the marketing seems to have suggested that the feature is usable.


And does that remove the driver's responsibility?


Does it remove Tesla's responsibility?

You're literally now comparing Autopilot to criminal offenses and asking if it should be allowed; it seems like you're suggesting Autopilot should be illegal.

Sure, drivers suck, this is a known bug in humans. I'd like to see Autopilot changed to counteract the human limitation rather than acting like humans should be perfect for a system to work.

This is how you get into bad UI design ("it is NOT my UI that sucks, the users just don't understand it! Read the manual!"). Except in this case people can literally die.


What's criminal is abuse of Autopilot, not Autopilot itself. Just like with alcohol: drink whatever you like, just don't get behind the wheel.


No, certainly not. The responsibility is shared: Tesla is liable for a subpar product (that is, Autopilot, not the whole car) and the driver for what boils down to reckless driving. Though I want to note that I'm not familiar with US traffic legislation.


As for drunk driving and nearby drivers: the drunk driver didn't sign an acknowledgement that he's aware of the hazards of driving drunk and did it anyway.

Signing an acknowledgement means there's something to acknowledge, like "we're not totally done with this." Any driver on the road can reasonably expect that any nearby car was sold fully functional.

Nobody in the truck that the Tesla crashed into was (I believe) injured, although the truck's owner experienced property damage caused by the Tesla (my opinion). The truck driver could have been injured or killed. If it hadn't been a truck, it could have been a white mini-van, and there almost certainly would have been injuries or deaths.

This thing belongs on a test track, not crawling up my bumper.


If you serve alcohol to a person and they're intoxicated enough to be a danger on the road, you can be held liable for failing to stop them from operating a vehicle if they harm themselves or others. It is your responsibility to cut them off and find them alternate transportation. It is also the fault of the intoxicated driver, but fault doesn't end there.

Similarly, if you provide and advertise an Autopilot feature that isn't autopilot, it is your responsibility to ensure the driver is attentive, has their hands on the wheel and that you pushed a not-broken implementation to them. Other manufacturers have done this, yet Tesla hasn't.

