The Tesla deaths-per-mile statistics are not that much better than the overall US statistics, and the overall figures cover all sorts of weather and road conditions that Autopilot does not handle. In addition, the overall death statistics include older and/or cheaper cars that lack recent safety features. A much better comparison would be against vehicles in the same price range as Teslas and their crash statistics.
> ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package... The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation.
This brings me to a crucial point. Nobody doubts that assistive technology saves lives -- not just automatic emergency braking, but even lane keeping assist. What Tesla does, however, is take lane keeping assist and give it more responsibility for actively driving the car. It's not clear if that is safe. Even if it's safe, it's not clear if it's safer. You shouldn't be comparing Autosteer to just a human, but to a human with lane keeping assist. The difference is between the human driving actively, and the car driving itself actively.
What are you basing this on? AEB was available before Autosteer, and it says in the report that the 40% improvement was after Autosteer installation.
> What Tesla does is however take lane keeping assist and give it more responsibility of actively driving the car.
Autosteer specifically is new, but the precedent for "active control" isn't. TACC takes just as active a role in driving a car as Autosteer, and most people feel it is net helpful (regardless of manufacturer, but especially in Tesla's case because it can see things via radar that a human cannot, e.g. a second car ahead slamming on the brakes). Heck, a lot of people think even standard cruise control is net helpful! It's not clear to me why longitudinal control is okay to do actively but lateral control isn't; both require attention and vigilance. My belief is that people are more skeptical of active lateral control because it is so new. That's understandable, but it's not an intellectually satisfying reason to hold back the technology.
Read the whole article
> Tesla’s oft-touted figure is flawed for another reason, experts say: With this data set, you can’t separate the role of Autopilot from that of automatic emergency braking, which Tesla began releasing just a few months before Autopilot.
I hold the same view of TACC. And cruise control doesn't require reducing attention. That is the prime difference: active (longitudinal or lateral) control that encourages users to reduce attention, but still relies on them and leaves them liable, is dangerous.
It's too bad that NHTSA didn't present the AEB-only cohort in the report (i.e. pre-AEB vs AEB vs AEB + Autosteer instead of just pre-Autosteer vs Autosteer).
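To make the cohort point concrete, here is a toy sketch of why the two-way comparison can't separate the two features. The crash counts and mileages below are entirely made-up placeholders, not figures from the NHTSA report (which publishes no AEB-only cohort); only the shape of the calculation matters.

```python
# Illustrative only: crashes and miles are hypothetical placeholders,
# NOT data from the NHTSA ODI report.

def crash_rate(crashes, miles):
    """Airbag-deployment crashes per million vehicle miles."""
    return crashes / (miles / 1_000_000)

# Hypothetical cohorts (crashes, miles driven):
pre_aeb       = crash_rate(130, 100_000_000)  # before AEB shipped
aeb_only      = crash_rate(100, 100_000_000)  # AEB, no Autosteer
aeb_autosteer = crash_rate(80,  100_000_000)  # AEB + Autosteer

# The report effectively gives only pre-Autosteer vs post-Autosteer,
# so the observed drop bundles both features together:
total_drop = 1 - aeb_autosteer / pre_aeb          # ~38%

# With an AEB-only cohort, the drop could be attributed separately:
aeb_share       = 1 - aeb_only / pre_aeb          # ~23% from AEB alone
autosteer_share = 1 - aeb_autosteer / aeb_only    # ~20% from Autosteer on top

print(f"total drop: {total_drop:.0%}")
print(f"AEB alone: {aeb_share:.0%}")
print(f"Autosteer on top of AEB: {autosteer_share:.0%}")
```

With these (invented) numbers, most of a headline "38% drop" could plausibly belong to AEB, which is exactly why the missing middle cohort matters.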
I respectfully disagree about TACC and Autosteer (or put another way: my experience with them leads me to the opposite hypothesis, and thus my burden of proof is flipped from yours). Hopefully between Elon's promised quarterly reports, NHTSA's current and future investigations, and Tesla putting more cars on the road, we'll get better data on this moving forward than we've gotten to date.
So, let me reiterate -- we agree that Autosteer is lane keeping assist given more responsibility. Whether that's helpful to the driver or dangerous because it distracts him is a debate we should have with better data, which we don't have. Unfortunately, I personally don't trust Tesla to give data without hidden, unstated caveats, given their behavior to date.
However, there is another topic we should talk about. Do you think improving Autosteer's lane detection with vision will make a car capable of driverless operation? Because that's precisely how Tesla markets their technology: a coast-to-coast drive, cross-country summon, driverless taxis. I personally don't believe a line-following robot is enough for self-driving. In fact, I prefer Google's approach, where the car drives only when the road is free of any object (3D-mapped by LIDAR).
Even then, I don't think that's enough either. For example, say a ball bounces in front of you. I stop, expecting a child or a dog to run after it. I think driving means interacting with the environment and responding appropriately to it, and that would require a general AI.
Yeah, I had read that previously. I understand NHTSA not wanting to be seen as a primary source of data that it did not collect, but I don't think Tesla handing over that data changes anything material here. I don't see how the source of the raw data is relevant to the report, barring a case of fraud or massive incompetence among those interpreting it.
> So, let me reiterate -- we agree that Autosteer is lane keeping assist given more responsibility. Whether that's helpful to the driver or dangerous because it distracts him is a debate we should have with better data, which we don't have. Unfortunately, I personally don't trust Tesla to give data without hidden, unstated caveats, given their behavior to date.
I hear you. I would distinguish, though, between Tesla citing data and crafting a story around it (e.g. what they did with the 1:340M miles figure), vs Tesla providing raw data upon request to a regulatory body like NHTSA or an advisory body like NTSB, and then having those bodies draw their own conclusions. So long as Tesla isn't committing fraud and the investigative methods are statistically sound, the source of the data shouldn't matter. I like this model because, by enabling Tesla to be the data source in a way that is trustworthy, we can get access to much more of it (and much more quickly) than if we had to know all possible queries in advance and force it to go through a third party. For all the recent friction between Tesla and NHTSA over the latest accident, both parties have a long track record of saying they work well together.
> However, there is another topic we should talk about. Do you think improving Autosteer's lane detection with vision will make a car capable of driverless operation? Because that's precisely how Tesla markets their technology: a coast-to-coast drive, cross-country summon, driverless taxis. I personally don't believe a line-following robot is enough for self-driving. In fact, I prefer Google's approach, where the car drives only when the road is free of any object (3D-mapped by LIDAR).
I agree that Tesla's neural network needs to do much more than line following, but I don't couple that with a LIDAR approach.
One reason I am so bullish on Tesla is that I think the conventional wisdom about self driving is grounded more in marketing and fundraising than in engineering. The great thing about LIDAR is that you can build a prototype-level self-driving car and put it on a real road relatively quickly. The problem with LIDAR is that its utility breaks down much faster than that of vision. If you want to drive in conditions where humans drive today -- heavy fog or rain, sleet, snow -- you can't rely on LIDAR; you have to do that with cameras and radar. Once you've achieved a system that can do that, LIDAR becomes largely redundant; in good conditions it's telling you what you already know, and in bad conditions it's telling you less than that. So I am a big fan of Tesla's philosophy here (and of Comma.ai's for the exact same reason). This isn't just projection, btw -- Nvidia has a neural network that's been outperforming humans at camera-based object detection in bad weather for over two years now. I'm not aware of anybody achieving that with LIDAR.
> Even then, I don't think that's enough either. For example, say a ball bounces on front of you. I stop expecting a child or a dog running after it. I think driving means interacting with the environment, and responding appropriately to it, and would require a general AI.
I think you're right that with self-driving cars, we will see new classes of accidents that we don't experience with human drivers. However, because humans are such terrible drivers in everyday situations, it is likely a huge net improvement to improve the general cases and have more failures around the edges. This is why examples like the beach ball, or even the Trolley Problem, don't move me. Those situations are exceedingly rare, and optimizing for them will only get us to a local maximum in the mission to save lives. The global maximum is to mitigate the consequences of e.g. texting and driving, or falling asleep at the wheel.
So I don't expect a huge improvement when going from lane keeping assist to Autosteer.
See my response to your sibling comment. AEB was available earlier than Autosteer, and it says in multiple places (both in the summary and in the graph) that the 40% figure is since the release of Autosteer. If I am misunderstanding or misreading the report, please tell me.
> There's no evidence that Autosteer has saved any lives, but there is evidence that it's taken at least 4.
That's not an accurate summary of the evidence. There are reports all over reddit, TMC, Twitter, Electrek, etc. of Autopilot preventing accidents. There are accounts of Autosteer in particular swerving when a truck drifts toward a car. If we're going to consider one-off anecdotes that go badly, we have to consider the ones that go well, too. And there's a lot of confirmation bias to fight to do that, because "Car Doesn't Crash And Erupt Into Flames" is not a headline you will ever read.