1. Tesla's "hands on wheel" enforcement is much weaker than its competitors'. BMW, Volvo, and Mercedes have similar systems, but after a few seconds of hands-off-wheel driving, the vehicle will start to slow. Tesla allows minutes of hands-off time; one customer reports driving 50 miles without touching the wheel. Tesla is operating in what I've called the "deadly valley" - enough automation that the driver can zone out, but not enough to keep them out of trouble.
The fundamental assumption that the driver can take over in an emergency may be bogus. Google's head of self-driving cars recently announced that they tested with 140 drivers and rejected semi-automatic driving as unsafe. It takes seconds, not milliseconds, for the driver to recover situational awareness and take over.
2. Tesla's sensor suite is inadequate. They have one radar, at bumper height, one camera at windshield-top height, and some sonar sensors useful only during parking. Google's latest self-driving car has five 3D LIDAR scanners, plus radars and cameras. Volvo has multiple radars, one at windshield height, plus vision. A high-mounted radar would have prevented the collision with the semitrailer, and also would have prevented the parking accident where a Tesla in auto park hit beams projecting beyond the back of a truck.
Tesla is getting depth from motion vision (structure from motion), which is cheap but flaky: it cannot range a uniform, featureless surface, because there are no features to match between frames.
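A toy sketch of why motion vision fails on featureless surfaces: depth from motion (like stereo) recovers range from the pixel disparity of matched features, roughly Z = f·B/d. On a uniform surface every candidate match scores identically, so the disparity, and therefore the depth, is undefined. This block-matching function is purely illustrative, not Tesla's or Mobileye's actual pipeline:

```python
import numpy as np

def disparity(patch, scanline, max_shift):
    """Find the horizontal shift that best matches `patch` within `scanline`.
    Returns (best_shift, ties): ties > 1 means the match is ambiguous and
    no depth can be recovered from it."""
    costs = np.array([np.sum((scanline[s:s + len(patch)] - patch) ** 2)
                      for s in range(max_shift + 1)])
    best = int(np.argmin(costs))
    ties = int(np.sum(costs == costs.min()))
    return best, ties

# Textured surface: intensities vary, so the match is unique.
textured = np.array([1., 5., 2., 8., 3., 9., 4., 7.])
print(disparity(textured[2:6], textured, 4))  # (2, 1): unique match, depth OK

# Uniform surface (think: the white side of a trailer against a bright sky):
uniform = np.full(8, 6.0)
print(disparity(uniform[2:6], uniform, 4))    # (0, 5): every shift ties
```

A LIDAR or a high-mounted radar ranges such a surface directly and has no equivalent failure mode.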
3. Tesla's autopilot behavior after a crash is terrible. It doesn't stop. In the semitrailer crash, the vehicle continued under power for several hundred feet after its top was sheared off going under the semitrailer. Only when it hit a telephone pole did the car stop.
The Pennsylvania Turnpike crash is disturbing, because it's the case the "autopilot" is supposed to handle - a divided limited-access highway under good conditions. The vehicle hit a guard rail on the side of the road. That may have been a system failure. Too soon to tell.
The NTSB, the air crash investigation people, have a team investigating Tesla. They're not an enforcement agency; they do intensive technical analysis. Tesla's design decisions are about to go under the microscope used on air crashes.
Tesla's spin control is backfiring. They tried to blame the driver. They're being sued by the family of the dead driver, and being investigated by the NHTSA (the recall people), and the NTSB. In comparison, when a Google self-driving car bumped a bus at 2mph, Google admitted fault, took the blame, and Urmson gave a talk at SXSW showing the data from the sensors and discussing how the self-driving car misjudged the likely behavior of the bus.
This is what we should be trumpeting: responsibility, honesty, and transparency. Google's self-driving car project reports should be the standard. They're published monthly and detail every crash, as required by CA state law (thanks schiffern).
Note: These reports are currently only required in California, and only for fully autonomous vehicles. So Tesla and other driver-assist-capable car companies do not need to publicize accident reports in the same way, and Tesla may be unlikely to start filing such reports until everyone is required to do so. We should demand this of all driver-assist cars during this "beta" phase.
Tesla owners and investors ought to be interested in a little more transparency at this point. It's a bit concerning that Tesla/Musk did not file an 8-K on the Florida accident. We learned about it from NHTSA almost two months after it happened.
Copying a reply from earlier...
Google isn't doing this out of the goodness of their heart. They have always been required by law to send autonomous vehicle accident reports to the DMV, which are made available to the public on the DMV's website.
So the transparency itself is mandatory. Google merely chose to control the message and get free PR by publishing an abbreviated summary on their website too. Judging by this comment, it's working!
For example, here's the full report on Google's accident in February, the first one where the autonomous car was the vehicle at fault: https://www.dmv.ca.gov/portal/wcm/connect/3946fbb8-e04e-4d52...
(see also studentrob's excellent reply here: https://news.ycombinator.com/item?id=12084444)
Then why on earth is Tesla being opaque and seemingly passing the buck with their PR?! I don't get it.
And, Tesla may be unwilling to start reporting driver-assist accident rates to the public unless the other driver-assist car companies are forced to do it too.
I'll make an addendum to my above comment.
You could have opened the first report on that page and noticed this is wrong.
> June 6, 2016: A Google prototype autonomous vehicle (Google AV) was traveling southbound on Berkman Dr. in Austin, TX in autonomous mode and was involved in a minor collision north of E 51st St. The other vehicle was approaching the Google AV from behind in an adjacent right turn-only lane. The other vehicle then crossed into the lane occupied by the Google AV and made slight contact with the side of our vehicle. The Google AV sustained a small scrape to the front right fender and the other vehicle had a scrape on its left rear quarter panel. There were no injuries reported at the scene.
Well, that's awesome that Google reports on accidents outside of California too.
Thank you for correcting me, delroth.
Edit: I see the downvote. Am I wrong? Is being a CEO all rainbows and sunshine? Maybe I got some detail wrong. Feel free to correct me.
I like it when people try to do hard stuff; that's fine and maybe heroic. But they have to be prepared: if testing the whole thing must take 10 years and a gazillion dollars to be safe, then so be it. If that's too much to shoulder, then they should not enter the business.
That sounds like a succinct explanation for why we've been stuck in LEO since December 19, 1972.
We've got 7 billion+ humans on the surface of our planet. We lose about 1.8/s to various tragedies.
Most of which are of less import than helping automate something that's responsible for around 32,000 deaths in the US every year. Move fast, break people for a just cause, escape risk-averse sociotechnological local minima, and build a future where fewer people die.
Tesla is also not doing any kind of super-special research into self-driving cars. The system their cars use is (afaik) an OEM component (provided by Mobileye) that powers the driver-assist features of many other brands, too.
Instead of actually improving the technology they have chosen to lower the safety parameters in order to make it seem like the system is more capable than it actually is.
We shouldn't go off expecting that someone else's ideas of safety will jibe 100% with our own, and then blame them when someone else buys a product and is injured.
Should Tesla's driver assist deactivate and slow the car after 10 seconds without the driver touching the wheel? Probably, but I won't blame them for not doing so when there is no regulatory framework to help them make that decision. It's certainly not obvious that the limit should be 10 seconds and not 11, 15, or 2.
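The "slow the car after N seconds" policy itself is trivial to express as an escalation ladder; the hard part is choosing the thresholds, which is exactly where a regulatory framework would help. Every number in this sketch is invented for illustration, not anyone's actual calibration:

```python
from enum import Enum

class Action(Enum):
    NONE = 0
    VISUAL_WARN = 1
    AUDIBLE_WARN = 2
    SLOW_AND_DISENGAGE = 3

def escalation(hands_off_s: float,
               warn_at: float = 5.0,
               alarm_at: float = 10.0,
               disengage_at: float = 15.0) -> Action:
    """Map continuous hands-off time to an escalating response.
    The threshold values are made up; picking real ones is a
    human-factors and regulatory question, not a coding one."""
    if hands_off_s >= disengage_at:
        return Action.SLOW_AND_DISENGAGE  # decelerate, hazard flashers on
    if hands_off_s >= alarm_at:
        return Action.AUDIBLE_WARN
    if hands_off_s >= warn_at:
        return Action.VISUAL_WARN
    return Action.NONE
```

The code is the easy 1%; defending any particular `disengage_at` in court or to a regulator is the other 99%.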
Tesla tells us "it's the driver's fault that he died while using our Autopilot".
Either Musk approved this message, or he didn't bother to read it before it went out, and didn't bother to apologize afterwards.
I think we can draw some conclusions from that.
"No force was detected on the steering wheel for over two minutes after autosteer was engaged," said Tesla, which added that it can detect even a very small amount of force, such as one hand resting on the wheel.
"As road conditions became increasingly uncertain, the vehicle again alerted the driver to put his hands on the wheel," said Tesla. "He did not do so and shortly thereafter the vehicle collided with a post on the edge of the roadway."
Bruh - slow the car down then. If you're storing information that points to the fact that the driver is not paying attention, and you're not doing anything about it, that's on you 100%.
We are in the midst of a transition period between autonomous and semi-autonomous cars, and blaming the driver for the fact that the car, in auto-steer mode, steered into obstacles, is counter-productive in the long term. You need to make auto-steer not work under these conditions. The driver will quickly figure out the limits of your system as their car decelerates with an annoying beeping sound.
That would seem to me to be reason to steer the car to the shoulder and disengage the auto-pilot.
Tesla is trying to take too big a step here and blaming the driver(s) of these cars is really really bad PR.
Agreed. Or, where a shoulder isn't available, engage the hazard flashers and drop speed to 10 miles an hour.
I would suggest that if it can't, it's not ready for general use on consumer road vehicles.
I do admit, however, that slowing down properly can be hard. It can even be that hitting an inside post is the best option. Still, I doubt that consumer AP would have put the vehicle into such an extreme situation.
But the driver shouldn't be allowed to escape blame either. When your car is repeatedly telling you that you need to take control, then you should be taking control.
The driver is not considered a reliable component of such a system, and must not be used to guide decisions.
Yes, the driver clearly was a fool, but system design should not have taken his decision into account, and come to its own conclusion for a safe reaction (e.g., stopping the car).
My point was simply that the driver has to take responsibility for their actions in this situation as well.
My point was simply that the drivers who use semi-automatic cars and choose to ignore warnings to retake control need to be responsible for their actions as well.
What happens next time, when the driver has a stroke?
"Well, he didn't put his hand on the wheel when the car told him to, it's not our fault!"
Like you say, slow it down. Sure, the driver of a non-autonomous vehicle who has a stroke might be in a whole world of trouble, but isn't (semi-) autonomy meant to be a _better_ system?
Is the auto-maker responsible? They advertised the car as having cruise control.
Call me old, but I remember similar hysteria when cruise control systems started appearing. People honestly thought it would be like autopilot or better - and ploughed into highway dividers, forests, other traffic, you name it, at 55mph.
The steering control is supposed to control the steering of the car. If you set it for "road" and it decides to steer for "hard fixed objects" how should we regard it?
Why would the auto-maker be responsible for your distracted driving, and what does it have to do with cruise control?
Some of the uproar about these incidents is related to the fact that Tesla marketing is implying far more automation and safety here than the technology can actually provide. And apparently it can cost lives.
Please show me a source for the claim that they're "implying far more automation" as everything I've ever seen from Tesla on this says hey dummy, don't stop paying attention or take your hands off the wheel.
It is literally called "autopilot."
Even after they used it for miles, with the car having audibly reminded them many times, over months, that they must keep their hands on the wheel?
Maybe if someone is deaf, reads "autopilot", jumps in the car and crashes, the name is at fault. Otherwise, unlikely.
The key word is "implying". Of course they say "keep your hands on the wheel" in the instruction manual.
It's also a problem if it permits driving faster than the speed limit.
If the car autopilot software permits faster than legal speed driving, the manufacturer is taking on liability in the case of speed related accidents and damage.
Yeah, I just blamed the victims for misusing autopilot. Like anything else, its a tool for a certain purpose. Its not foolproof. Testing its limits can get you hurt, just like a power saw.
Its controls are such that it disengages gracefully. You notice a problem, instinctively hit the brake, and cruise control disables.
That will be the communication pattern for the next two decades; we better get used to it.
You drive their cars, and you're always driving around with a third party with its own interests.
You'd think someone would have realized that a double tap could accidentally happen while trying to single-tap.
The reporting on this was terrible, but there's a YouTube video illustrating the design flaw.
Thing is, someone at Tesla must know this is the real culprit. But there was no public correction of their local manager who blamed the driver.
Maybe they were never intended for cars?
Maybe it was a cost savings thing? Homes that are only 50-75 years old seem to have larger carports instead of garages.
* I answered my own questions. A model T was 66 inches wide but had 8 inch running boards on either side so the crew cabin is closer to 50 inches. The doors, if it had them, would be shorter as well. A model S is 77 inches wide so that's probably 24 inches difference in clearance.
1. My car is nice and clean most of the time now.
2. No more dirty gas-pump hands at the gas station (I'm OCD, umk).
3. I get to watch my car pull out of the garage every morning in amazement :)
Tesla pushed an update that enabled this as a "convenient" shortcut to the more elaborate, safer method you describe. In fact the video author praises it. Seems like a nice, handy feature... at first. Safety-critical design needs more care than this, though.
>They tried to blame the driver.
In this incident, the driver was using autopilot in a fashion it should not be used: a twisting road at high speed. The driver IS at fault.
>They're being sued by the family of the dead driver
This is no proof of anything. You can find a lawyer to sue anybody over anything.
>and being investigated by the NHTSA (the recall people), and the NTSB
Well, of course they are. They're supposed to. Again, no proof of anything Tesla did wrong.
I'm not saying there isn't a problem with this. I'm saying your reasoning is wrong.
I'm also curious how many autopilot sessions take place every day. If there have been three crashes like this one, where the driver is at fault, out of a million sessions, then that's something not considered so far.
I disagree on this point. In any system that's supposed to be sophisticated enough to drive the car but also carries a giant caveat like "except in these common driving situations...", failure is not instantly down to "user error." Such a system should gracefully refuse to take over if it's not running on an interstate or in another "simple enough" situation; otherwise, as many have noted, the threshold for "is the driver paying sufficient attention" should be set much, much lower.
That the system is lulling the users into bad decisions is not automatically the fault of said user. Some blame, maybe most of the blame, has to fall on the autopilot implementation. When lives are on the line, design should err on the side of overly-restrictive, not "we trust the user will know when not to use this feature and if they are wrong it is their fault."
"It's a winding road going through a canyon, with no shoulder"
So going from a dangerous situation to another dangerous situation.
I'd rather 10 innocent people die to freak autopilot incidents than 1,000 innocent people die because people in general are terrible at operating a 2,000-3,500lb death machine. Especially because everyone thinks of themselves as a good driver. It's fairly obvious not everyone is a good driver - and that there are more bad drivers than good.
Maybe I only see things that way because there have been four deaths in my family due to unaware drivers which could have easily been avoided by self-driving vehicles. All of them happened during the daytime, at speeds slower than 40mph, and with direct line of sight. Something sensors would have detected and braked to avoid.
From a risk management standpoint, Tesla's design doesn't strike me as sufficient. Warnings without consequences are quickly ignored.
I don't understand why, if the car's GPS co-ordinates are known, the car even allows autopilot on anything but roads known to be within spec.
> The driver IS at fault.
I'm not sure legal liability is that clear cut.
That matters if the goal is to stay out of court. I understand the "AI drivers are better on average" argument.
With that said, I agree that ultimately the driver is at fault. It's his car, and he's the one that is operating it on a public road, he's the one not paying attention.
One of the laws of ship and air transport is that the captain is the ultimate authority and is responsible for the operation of the vehicle. There are many automated systems, but that does not relieve the captain of that responsibility. If an airplane flies into a mountain on autopilot, it's the captain's fault because he wasn't properly monitoring the situation.
It would be disastrous for the car to take control when it shouldn't. That itself is a giant red flag to me.
Its all or nothing when it comes to automation that involves human life.
This is the one I'm very interested to hear more about. From what I've heard, Tesla's self-driving is very aggressive in swerving to avoid obstacles -- was hitting the guard rail the result of a deliberate swerve, and if yes then was there a logic error, a sensor error, or in fact a legitimate reason to swerve?
Reasoning about what's legitimate can be a really fine line. Oncoming car, raccoon, fawn, human, doe, coyote, elk: some of these obstacles might actually be net-safer to not swerve around [perhaps depending on speed].
I'd be really curious if any of the autopilots have a value system that tries to minimize damage/loss of life. And how does it value the exterior obstacle against the passengers in the car?
I'm pretty sure that hitting a full-grown elk is a poor choice, and often a fatal one. A large, full-grown elk can weigh 600 pounds and stand 5 feet tall at the shoulder, which means it will probably go over the crumple zone and directly strike the windshield.
This happens commonly with moose in New England, which run up to 800 pounds and 6.5 feet tall at the shoulder. A car will knock their legs out from under them but the body mass strikes much higher and does an alarming amount of damage to the passenger compartment. You can Google "moose car accident" and find some rather bloody photos.
The standard local advice is to try really hard to avoid hitting moose, even at the cost of swerving. I would prefer that an autopilot apply a similar rule to large elk. Even if you wind up striking another obstacle, you at least have a better chance of being able to use your crumple zone to reduce the impact.
I'm not sure about moose, but with more nimble animals, swerving is no guarantee of avoiding collision. A deer can change direction quicker than an automobile, and if it had any sense, it wouldn't be running across the road in the first place. It's definitely safer to slow down and honk your horn than it is to try to guess which direction the damn thing will jump.
[EDIT:] When I was learning to drive, my father often recounted the story of a hapless acquaintance who, in swerving to avoid a snake, had precipitated a loss of control that killed her infant child. He's not a big fan of swerving for animals.
I've tried to pick the least graphic images possible:
http://img.radio-canada.ca/2015/08/09/635x357/150809_kr04b_v... from http://ici.radio-canada.ca/regions/saguenay-lac/2015/08/09/0... -- which ended up killing the two occupants of the car.
This one is interesting because you may very well hit the moose facing forwards: http://www.snopes.com/photos/accident/moose.asp
I don't think there is much to be gained in terms of safety. Let's not forget that while moose can be very large, younger ones are much smaller and can still damage smaller cars fairly easily.
Roof landings don't always end well -- my Civic was trashed when a deer basically bounced up and collapsed my roof on the passenger side. My head would have been trashed as well had I been a passenger!
In general, if you cannot stop, hitting something head-on is the best bet. Modern cars are designed to handle that sort of collision with maximum passenger safety.
Imagine a two lane road with a car traveling toward an obstruction in its lane. The distance required to get the car out of the obstructed lane via the steering wheel is much less than stopping it via the brakes.
There's a reason performance driving has the concept of brakes being an aid to your steering wheel as opposed to being operated as a standalone system.
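Back-of-the-envelope kinematics for the parent's claim (all numbers are illustrative, not from any manufacturer): at highway speed, moving the car one lane-width sideways takes noticeably less longitudinal distance than braking to a stop.

```python
def stopping_distance(v: float, decel: float = 8.0) -> float:
    """Distance (m) to brake to a stop from speed v (m/s) at constant
    deceleration decel (m/s^2); 8 m/s^2 is roughly a hard stop on dry road."""
    return v ** 2 / (2 * decel)

def lane_change_distance(v: float, lateral: float = 3.0,
                         lat_accel: float = 5.0) -> float:
    """Longitudinal distance (m) covered while moving `lateral` metres
    sideways, using a bang-bang lateral maneuver: accelerate sideways for
    half the offset, then reverse to arrive with no lateral velocity."""
    t = 2 * (lateral / lat_accel) ** 0.5
    return v * t

v = 30.0  # ~108 km/h / 67 mph
print(round(stopping_distance(v), 1))     # 56.2 m to stop
print(round(lane_change_distance(v), 1))  # 46.5 m to clear the lane
```

And the gap widens with speed: stopping distance grows with v², while the steering maneuver grows only linearly in v, which is the intuition behind treating brakes as an aid to the steering wheel rather than the primary escape.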
But definitely the law could, and probably is, out of touch.
In the UK it's effectively illegal to swerve for these things, in the sense that if you subsequently cause an accident by swerving to avoid a cat, you are at fault.
Common sense will be applied and liability is not strict nor is it a criminal offence per se. https://www.clarkewillmott.com/personal-injury-and-medical-n...
Just imagine your car calculating that it should let you die in order to save a group of school children... except that the school children were in fact a bunch of sheep that the system mis-identified.
The danger of AI isn't in paperclip optimizers (at least for now), it's in humans over-relying on machine learning systems.
I'd like my car to make value judgements about what's safer - braking and colliding with an object or swerving to avoid it. I'd rather it brakes and collides with a child than swerve into oncoming traffic, but I'd rather it swerved into oncoming traffic to avoid a truck that would otherwise T-bone me.
As long as the system only acts in the best interest of its passengers, not the general public, it's fine by me.
Wait, what? You'd rather hit and kill a child than hit another vehicle - a crash which at urban/suburban speeds would most likely be survivable, without serious injury, for all involved?
You cannot codify some of the "dark" choices that are typically made today in split second, fight or flight situations. If we go down this road, it opens to door to all sorts of other end justifies the means behaviors.
There's a threshold number of dead hypothetical children that's going to trigger a massive revolt against tone deaf Silicon Valley industry.
If a child runs in front of my car then it is most definitely at fault and I'd honestly much rather it be dead than me, if those are the options.
You claimed that any car may be driven slowly enough to avoid killing people on the road, with only mere inconvenience to others. That's patently false: firetruck. And then there are several other posters claiming that it's the driver's fault if any accident happens at all, due to the mere fact that the driver chose to drive a car. If we removed every single vehicle in the US from the streets tomorrow, a whole lot of people would die, directly or indirectly. I'd like to see anyone state outright that "usage of cars in modern society has no benefit insofar as saving human lives is concerned".
In the end, I think both sides are fighting a straw-man, or rather, imagining different scenarios in the discussion. I read the original poster as imagining a case where swerving on a freeway is potentially deadly for at least 2 vehicles, along with a multi-vehicle pile-up. You're imagining suburban CA, where swerving means some inconvenience with the insurance company. I have little doubt that everyone would swerve to avoid the collision in the latter case.
Also since we're on HN and semantic nit-picking is a ritual around here, "avoid killing children in the road above all other priorities, including the lives of passengers" is NOT a good idea. As far as the car knows, I might have 3 kids in the backseats.
In that "traveling too fast" failure mode, previous considerations of convenience or property damage no longer apply. The driver or robocar already fucked up, and no longer has standing to weigh any value over the safety of pedestrians. Yes it's sad that the three kids in the backseat will be woken from their naps, but their complaint is with the driver or robocar for the unsafe speeds, not with the pedestrian for her sudden appearance.
This is the way driving has always worked, and it's no wonder. We're talking about kids who must be protected from the car, but we could be talking about hazards from which the car itself must be protected. If you're buzzing along at 75 mph and a dumptruck pulls out in front of you, you might have the right-of-way, but you're dead anyway. You shouldn't have been traveling that fast, in a situation in which a dumptruck could suddenly occupy your lane. In that failure mode, you're just dead; you don't get to choose to sacrifice some children to your continued existence.
Fortunately, the people who actually design robocars know all this, so the fucked-up hypothetical preferences of solipsists who shouldn't ever control an automobile don't apply.
Just to clarify a bit: I didn't call the situations rare or a strawman. I said the arguments were against a strawman, since you were thinking of a different scenario and arguing against that.
Out of curiosity: in the situation with 3 kids in the car, where swerving is potentially deadly, what would the response be?
As a firefighter I'd like to offer a few thoughts on this:
- fire engines, let alone ladder trucks, are big but slow. Older ones have the acceleration of a slug and struggle to hit high speeds
- with very, very few exceptions, the difference of 10 seconds to 2 minutes that your arrival makes is likely to have no measurable difference (do the math, most departments are likely to have policies allowing only 10mph above posted limits at best, and if you're going to a call 3mi away...)
- again, as mentioned, most departments have a policy on how far the speed limit can be exceeded. It might feel faster when it goes by when you're pulled over, but realistically there won't be much difference
- "due caution and regard" - almost all states have laws saying that there's implied liability operating in "emergency mode" - that is, you can disobey road laws as part of emergency operations, but any incident that happens as a result thereof will be implied to be the fault of the emergency vehicle/operator until and unless proven otherwise
If I'm driving an engine, blazing around without the ability to respond to conditions as much as I would in a regular situation/vehicle, then emergency mode or not, I am in the wrong.
This is a ridiculous claim to make. Emergency vehicles will be the very last kind of vehicle to get autonomous driving (if ever), because they routinely break regular traffic rules, and routinely end up in places off-road in some way.
Hell, modern jumbo jets can fly themselves, from takeoff to landing, but humans are still there specifically to handle emergencies.
> As far as the car knows, I might have 3 kids in the backseats.
Strange that you envision having a car capable of specifically determining 'children' in the street, but nothing about the occupants. Especially given that we already have cars that complain if they detect an occupant not wearing a seatbelt, but no autonomous driving system that can specifically detect an underage human.
> You're imaging suburban CA where swerving means some inconvenience with the insurance company.
"I'd rather other people die than me" is not about imagining mere inconvenience with insurance companies.
For someone complaining about strawmen, you're introducing a lot of them.
No, I do NOT trust the car to detect children either on the road or in the car, that's why I phrased my comment that way.
> "I'd rather other people die than me" is not about imagining mere inconvenience with insurance companies.
Yes, I specifically point out the scenario the poster of that quote might be thinking about to contrast with the inconvenience scenario.
You should take your own advice: read the post again before commenting.
Would my insurance company? Probably.
You'll find examples. And for every waived bill you'll also hear "well, regardless of the tragic outcome, a service was rendered, and we have the right to be paid for our work".
Please note that I am carefully avoiding passing any judgment on the morality of any of the above.
One might think that we're not talking about inconvenience, but we are, because after all any car may be driven slowly enough to avoid killing children in the road. A particular robocar that is not driven that slowly (which, frankly, would be all useful robocars), must avoid killing children in the road above all other priorities, including the lives of passengers. A VW-like situation in which this were discovered not to be the case, would involve far more dire consequences for the company (and, realistically, the young children and grandchildren of its executives) than VW faces for its pollution shenanigans.
Humanity: you're doing it wrong. That some humans have this attitude is another reason why humans shouldn't drive automobiles.
Nobody buys an autonomous car that will make the decision to kill them.
It's a hell of a lot safer for Google to take fault for a 2mph bump into a bus than it is for a case where someone died. I'd guess Tesla will ultimately settle the lawsuits against them, but if they came out and admitted 100% fault, which it likely isn't, they'd have absolutely no ground to stand on and could get financially ruined at trial.
A CEO making 15 million / year is another story.
The really depressing thing is that this is first-year material for one of the most popular tertiary courses around: psychology. It is simply gobsmacking to think that Tesla thinks a human can retarget their attention so quickly. It's the kind of thing you'd give to first-year students and say "what is wrong with this situation", it's so obvious.
They don't actually believe it, but they have to make the facts fit the narrative of autonomous driving being right there.
This is actually the most disturbing part to me. In these situations, graceful failure or error handling is absolutely critical. If it cannot detect an accident, what the heck business does it have trying to prevent one?
When the technology matures, bring the name back; until then, call it something else.
I am still amused that most of these self-driving implementations only drive when people are least likely to need the assistance: good-visibility conditions. They are the opposite of current safety features, which are designed to assist in bad driving conditions. I won't trust any self-driving car until it can drive safely in horrible conditions.
At most these services should be there to correct bad driving habits, not instill new ones
In any case, as somebody who has used it off and on for 6 months in all kinds of situations: it is clear that it has limitations, but it is also easy to know when to be worried. Its best use is in stop-and-go freeway traffic. Its worst use is on smaller curvy surface streets. It is also really nice for distance driving on freeways.
To me this seems to limit what is possible by software updates.
Does that mean that existing Tesla owners will never get 'real' autopilot capabilities by software upgrades, although I assume most expect to get them?
Maybe autonomous highway driving in good conditions will be possible with these sensors. (With the last days information, even that seems unlikely.)
I think I'm the only person around who actually enjoys driving my car.
Are we sure it continued under power? The Tesla is a heavy vehicle, with weight centred close to the ground. Momentum could easily have carried it several hundred feet after a high speed collision.
I imagine NHTSA, NTSB, or Tesla will report on that when ready.
The system works as intended and advertised. Obviously it isn't perfect but with the fatality rate roughly equal to human drivers and the accident rate seemingly much lower it seems at least as good as a human.
Let's wait until Google's car kills someone and then compare their reaction to that of Tesla's.
It has always seemed that Google view robocars as a long-term revolutionary concept, while car companies see them as another gaudy add-on like entertainment centers or heated seats. It's not yet clear which is the right stance to take as a firm, but Tesla's recent troubles might be an indication.
"Tesla received a message from the car on July 1st indicating a crash event, but logs were never transmitted. We have no data at this point to indicate that Autopilot was engaged or not engaged. This is consistent with the nature of the damage reported in the press, which can cause the antenna to fail."
When that gets trimmed down to a sound bite, it can give the impression the vehicle was not on autopilot.
The driver told the state trooper he was on autopilot. He was on a long, boring section of the Pennsylvania Turnpike; that's the most likely situation to be on autopilot.
The NTSB will sort this out.
How are these numbers compatible with each other? Was the car going at >1000 miles per hour?
Plus, I'm almost certain that Tesla's goal with the system isn't to "have an autonomous system that you have to interact with steadily"; hence the 15-minute warning in the first place. They have perverse incentives: allowing it in the first place implies a reliability that may or may not be there.
Naming their system "autopilot" goes beyond the marketing dept overstepping and into the territory of irresponsibility.
The "average" definition of autopilot per Wikipedia is "autopilot is a system used. . . WITHOUT CONSTANT 'HANDS-ON' control by a human operator being required."
And yet Tesla's disclaimer states:
"Tesla requires drivers to remain engaged and aware when Autosteer is enabled. Drivers must keep their hands on the steering wheel."(2)
The disconnect between the name and the actual function is needlessly confusing and probably irresponsible.
They should change the name to "driver assist" or something more accurate given that mis-interpreting the system's function can lead to death of the driver or others.
While I definitely agree on the name change, it should be accompanied by a feature change. In its current mode, and especially if people know it has been renamed, the same stuff will happen.
When you take your hands off the wheel, it should turn on warning lights, brake increasingly hard (depending on what it detects behind it), and sound loud warnings in the car. If it's truly an assistive feature where you can't be inattentive, the only scenario in which you stop steering is when you've fallen asleep.
Based on that account, it seems the warning is triggered if the road becomes curvy. There may also be additional triggers.
It seems insufficient to me. IMO there should be an interval of at most 5 seconds before the first warning, and another 5 seconds before the second, and then the car should start decelerating.
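A minimal sketch, in Python, of the escalation policy proposed above. The thresholds and action names here are illustrative assumptions, not anything Tesla actually implements:

```python
def escalation_action(hands_off_seconds: float) -> str:
    """Return the action the car should take for a given hands-off time.

    Hypothetical policy: first warning at 5 s, second warning at 10 s,
    then start decelerating. All values are illustrative.
    """
    if hands_off_seconds < 5:
        return "none"
    elif hands_off_seconds < 10:
        return "first_warning"
    elif hands_off_seconds < 15:
        return "second_warning"
    else:
        return "decelerate"
```

The point of the hard cutoff is that the car should never let hands-off time accumulate into the minutes, as the article describes.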
Haha, agreed! I don't know exactly how the system works. My sense is Tesla's AP lets you be hands-off for a much longer time than other self-driving systems.
In that case, it sounds like it might actually work more or less as it should. It's just a bit useless in its current form.
If you're not actively steering but just waiting to take over if things go poorly, staying attentive also seems difficult. Other than getting into an accident, how is a person supposed to know when Autopilot is doing its thing correctly and when it isn't?
My Passat has most assist systems and I like how it steers. It tries very hard to get my attention and get my hands back on the steering wheel. It barely lets me keep my hands off the wheel for 15 seconds, but this is perfect for what it's designed to do: keep you in the lane if you are being distracted by something (e.g., a baby).
On top of that, it has various modes to get your attention back, including detecting tiredness and an inability to keep the car in the lane.
If you can't drive responsibly with a baby on board, lane assist isn't going to save your ass. If something happens to your baby that distracts you, ask someone else in the car to take care of it; if there is nobody to help, pull over at the earliest opportunity.
Lane assist will not save you from anything but lane departure, and there are a lot of other things that can go wrong while you allow yourself to be distracted because you rely on lane assist to keep you safe. That's not the intended use of this feature.
Also I find it interesting how much you assume about a person or situation from a single word.
I absolutely agree. There is absolutely no excuse for the driver to take their eyes off the road to assist a baby in the car. How would you assist a baby in the back seat while driving with a seat belt on anyway?
As I read about people using these features, I'm convinced they shouldn't exist. Yes, crazies will apply eyeliner while going sixty on a highway but I doubt driver assist is an adequate solution for such stupidity should a driver poke themselves in the eye.
If you have ever driven with a baby or child in a car, you will have learned that they can cry or throw a tantrum. Assist systems are intended to supplement and assist the driver in situations of increased stress. The idea that one would actively take their hands off the wheel to do something with a child is so preposterous that only commenters on HN could seriously suggest it.
Here's a state-by-state break-down:
You seem really defensive about a pretty inoffensive remark. Maybe you need to reflect on why you are so angry about something so trivial.
The sad thing is that Tesla has been repeatedly blaming the driver instead of improving its technology in an openly earnest manner, the approach that eventually made air travel the safest.
With so many sensors, car travel can eventually become safer than air travel. If Tesla's attitude improves, that future will arrive faster.
Here's what I did:
Accident rate of 1 in 90 million miles: 1.1 x 10^-8 per mile
Probability of no accident in one mile: 1 - 1.1 x 10^-8 = ~0.999999989
Probability of no accident while driving 130 million miles: (0.999999989) ^ 130e6 = ~0.24
Probability of AT LEAST ONE ACCIDENT in 130 million miles: 1 - 0.24 = 0.76
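The arithmetic above can be checked in a few lines of Python (the 1-in-90-million rate and 130 million miles are the comment's own numbers):

```python
# If the true fatality rate were 1 in 90 million miles, how likely is
# at least one fatality over 130 million miles of Autopilot driving?
p_per_mile = 1 / 90e6                      # ~1.1e-8 per mile
miles = 130e6

# Independent per-mile trials: probability of zero accidents over all miles
p_no_accident = (1 - p_per_mile) ** miles  # ~0.24
p_at_least_one = 1 - p_no_accident         # ~0.76

print(round(p_no_accident, 2), round(p_at_least_one, 2))
```

So even at a true rate of 1 in 90 million miles, seeing at least one fatality in 130 million miles is the likely outcome, which is the commenter's point.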
So the true rate could really be worse than 1 in 90 million miles, and they just got lucky that no incident presented itself, which would make it more dangerous than a regular car.
And, that still includes motorcycle accident rates which shouldn't be included in a comparison to Teslas.
On the other hand, one also needs to consider accident rate. From what I understand, based on US averages, we should have had 50+ accidents out of 100+ million miles driven on AP, but we had about three.
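As a rough sanity check of that claim, here is the implied arithmetic in Python. The US-average rate of about 1 accident per 2 million miles is an assumption inferred from the comment's own "50+ accidents out of 100+ million miles", not an official figure:

```python
# Expected accident count at an assumed US-average rate of
# ~1 accident per 2 million miles (inferred from the comment's numbers).
us_accidents_per_mile = 1 / 2e6
autopilot_miles = 100e6

expected_accidents = us_accidents_per_mile * autopilot_miles
print(expected_accidents)  # 50.0 expected vs. ~3 reported
```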
Moreover, I wonder how the human drivers in the statistics would fare if you corrected for drunk driving, cellphone use, etc. In my opinion, any autopilot worth its name should be an order of magnitude better than the average human driver.
Every time the autopilot has encountered a truck turning in front, the result has been a fatality.
1 in 10 times a drunk driver fails to avoid the truck turning in time.
I would trust a drunk driver over Tesla's autopilot at this point. Don't die for Elon Musk's dream--being one of the first auto pilot fatalities isn't that big of an accomplishment.
Also, I'd give "Technopoly" a read.
I'd wager that if you compare it with cars at Tesla price point -- mostly luxury cars -- Tesla would look pretty bad.
Highly unscientific anecdotal evidence and personal experience living in an area with an unusually high concentration of such cars show me that most people who own really fast cars do drive fast and enthusiastically from time to time, but do so responsibly and typically go respectfully with the flow. Meanwhile, a lot of people with more regular cars (mostly MPVs around here) have a habit of driving a good deal too close and a good deal too fast every single day. Many even engage in some truly reckless behaviour as soon as they feel threatened or frustrated, a telltale sign of power/control issues.
Your point about people seeking to drive faster is a valid one.
The counterpoint is there are people who cannot afford luxury cars but also want to drive faster -- they'll end up buying cheaper cars that can drive fast within their price range. Those are likely to be more fatal than the average luxury car.
1) Hypothesis on car age. An accident on a 2002 non-luxury car model is supposedly more likely to be fatal than an accident with your average new car.
This matters because Teslas with autopilot are relatively new compared to overall car population.
2) Hypothesis that low-cost cars are more fatal than average luxury cars.
3) Demographic hypothesis. Because of price point, the buyer demographic of newer luxury cars will be different than the overall demographic. Age group (e.g. teenagers vs. young adults vs. older adults), education level, profession.
The fatality rates are probably also notably better on late model cars relative to clunkers because of safety features.
I don't think more expensive cars are less likely to have that kind of accident, nor would it be less fatal.
My suspicion is zero.
At the time, I don't recall Tesla making any statement about how dangerous it was to deliberately push the limits of the system to entertain viewers.
This might have contributed to making him overconfident about Autopilot.
Yeah who knows. I think Musk retweeted his video. Hindsight is 20/20. Tesla can still move forward and learn from this.
People are using it this way because they're able to, and they'll continue to do so until they're not able to. I think we agree here, actually. The name isn't really the issue.
* A real autopilot doesn't require hands on the yoke while it is engaged, but Tesla's "autopilot" does.
* Airplanes don't have to deal with complex traffic conditions requiring interaction with other airplanes
* Airplanes do not smash into lane barriers/trees if they veer a few feet off the expected course
* "Taking control" on an airplane could safely take tens of seconds, but for a car, that is far too long.
So no, autopilot isn't good enough for cars. Cars need something much better.
Airport runways typically have edge lighting, and if an aircraft is off-center during takeoff or landing, it can hit the lights and do damage. Airliners land at 150+ mph, so it doesn't take much deviation to hit the edge lights. Occasionally this also happens during a gusty crosswind landing while coupled to the autopilot (auto-land), or immediately after touchdown following a coupled landing.
Perhaps it is true that the semi-intelligent autopilot is good enough for these trained activities, but the untrained public doesn't realize that autopilot today is not full automation.
Pilots also receive ongoing currency training. This doesn't exist for automobile drivers. Maybe some states require a written test every few years? But in the three states I've lived in, no written, oral, or in-car recurrency test has been required since the original license was issued when I was 15 or whenever.
Comparing pilot certification to a driver's is disingenuous.
Although if US drivers are so untrained, perhaps auto-pilot is even more relevant.
Chances of gaining control and awareness of the situation in a timely manner on an airplane are higher.
Why the hell are they so bullish on autopilot? I cannot see any sense in it. They should concentrate more on the battery, charging stations, and the efficiency/performance part of the equation. Instead, I am seeing more effort being thrown, quite aggressively, into not-so-relevant areas.
Semi-automatic autopilot is an idea rejected even by Google. Tesla should learn fast and get back to its core business: battery-powered electric cars, period, not battery-powered automatic electric cars.
I'd be happy if they prove this wrong.
edit: many typos and grammatical mistakes, sorry.
Tesla is still a tiny player in the car business and at some point a big brand will enter its market in force. They need to be bullish to maintain their image.
That said, their PR has always been overreactive. Remember all the dirt they threw at people reviewing their cars, or how they came down hard on people whose Teslas burned. That was unnecessarily dickish. If Tesla has some actual blame to take, the tone set by their PR people is going to seriously backfire.
It's unfathomable why they don't just concentrate on their core competency (EV technology) and keep their AI experiments in R&D (where they belong).
Funnily, all of those are investing heavily in automated electric cars already, or even have products for semi-automated electric cars on the market.
Just less hype around those.
If Tesla just continues, they'll be dead in a year, and Volkswagen, which also got further encouraged by the EPA to develop electric cars, will just take over their market share.
Why? Autonomous cars =/= electric cars
With Tesla out, they can even squash their electric car efforts altogether and plunder the money thus "saved", which they would otherwise have to invest in EV research, in the form of fat bonuses for the higher-ups (Volkswagen style).
I may seem to be exaggerating, but after the recent Volkswagen pollution scandal, many will agree with this. Tesla is better; Elon Musk is better in terms of innovation, and their direction seems more promising for the public good. They are disrupting the fossil fuel giants, who are hellbent on garnering as much money as they can at whatever cost.
I know I may (turn out to) be a naive fool believing too much in Tesla and Musk, but I find myself helpless here.
Perhaps you live in a bubble; most carmakers have barely raised an eyebrow at Tesla as a competitor so far. Making tons of batteries is not necessarily better than making efficient, safe, and less polluting "gas guzzlers".
From the outside, it seems there is a push to revive the american car industry through EV, supported by lenient and eager to see results US regulators. Of course, time will tell, and in the process hopefully we will all have safer and eco-friendly cars.
On the other hand, self-driving vehicles should be a different class of vehicle, appropriately regulated by governments. A road is a complex thing to be in as an AI-backed vehicle, especially considering the lack of the communication and organization systems used by naval and aerial vehicles.
> At Ford Motor Company, executives told me about all the pushback they received when they introduced features we now view as standard. People resisted and feared seat belts, airbags, automatic transmission, automatic steering, antilock brakes, automatic brakes, and trailer backup assist. Ford does extensive focus groups on new features that push boundaries, holding off on introducing ones that too many people don’t understand.
> Tesla doesn’t operate that way. When the company’s Autopilot software, which combines lane-keeping technology with traffic-aware cruise control, was ready last Fall, Tesla pushed it out to its customers overnight in a software update. Suddenly, Tesla cars could steer and drive themselves. The feature lives in a legal gray area.
I'm not sure of the legalities in the US, but in the UK I'd suggest that there are a few things that they'd fall foul of:
o advertising standards: it's not an autopilot
o trading standards: not making a safe product / missing warnings
o death by dangerous driving
o driving without due care and attention
There are layers of consumer protection that enforce things like no sharp edges, not giving off nasty vapours, and doing what the marketing says it does.
The latter two are crimes for the driver. If you are found to have been driving recklessly, not concentrating, or without your hands on the wheel, that's your fault as a driver: a very large fine and time in jail.
Especially as the Tesla has lots of sensors to prove what the driver was doing.
This argument is completely bogus. No driver in the nearby lanes is aware the other driver is drunk, a bad driver or emotionally unstable.
The entire idea of beta testing is to have your system on an "uncontrolled environment". The system has bugs, as any beta system does, and whoever uses it should be fully responsible for it. Yes, fully responsible.
Tell me what's the difference between using a beta system and not caring (leaving hands off the wheel, etc) and drunk driving? On both situations the driver is choosing to put everyone at risk, nobody knows about the fact until after an incident and if there is an incident, who's fault is that? The beer company? The car company?
If the driver assistance made cars crash because it ignored a command from the driver or something like that, then sure... it's 100% Tesla's fault. But clearly on all of these accidents the drivers were careless to the extreme by leaving hands off the wheel for several minutes.
We can discuss if they should try to improve how they handle people that remove their hands from the wheel, but that doesn't change the fact that ultimately the choice to be careless was made by the driver, not the car manufacturer.
> We can discuss if they should try to improve how they handle people that remove their hands from the wheel, but that doesn't change the fact that ultimately the choice to be careless was made by the driver, not the car manufacturer.
Yes, the driver is to blame for that, just as Tesla is to blame for allowing it to happen in the first place. They have sensors to detect whether the driver has his hands on the wheel, and yet they allow the vehicle to keep going for minutes in a situation it can't control.
There's plenty of blame to go around.
The latter is a clearly defined criminal offence, whereas the former is currently unclear. While the driver is certainly liable for reckless driving, it is a fact that the marketing seems to have suggested the feature is usable.
You're literally now comparing Autopilot to criminal offenses and asking if it should be allowed, seems like you're suggesting Autopilot should be illegal.
Sure, drivers suck, this is a known bug in humans. I'd like to see Autopilot changed to counteract the human limitation rather than acting like humans should be perfect for a system to work.
This is how you get into bad UI design ("it is NOT my UI that sucks, the users just don't understand it! Read the manual!"). Except in this case people can literally die.
Signing an acknowledgement means there's something to acknowledge, like "we're not totally done with this." Any driver on the road can reasonably expect that any nearby car was sold fully functional.
No one in the truck that the Tesla crashed into was (I believe) injured, although the truck's owner suffered property damage caused by the Tesla (my opinion). The truck driver could have been injured or killed. If it hadn't been a truck, it could have been a white minivan, and there almost certainly would have been injuries or deaths.
This thing belongs on a test track, not crawling up my bumper.
Similarly, if you provide and advertise an "Autopilot" feature that isn't an autopilot, it is your responsibility to ensure the driver is attentive and has their hands on the wheel, and that you pushed a non-broken implementation to them. Other manufacturers have done this; Tesla hasn't.