After 6 months of working fine, Tesla software update drives at barriers again (reddit.com)
339 points by SamuelAdams 30 days ago | 330 comments



I would be a lot more forgiving of these screwups if Tesla didn't constantly swear up and down that they've solved self-driving cars.

As far back as 2016 they were claiming they had full SDC capability above human driver safety [1], and their recent Model Y announcement suggests that the only thing holding it up is regulatory approval, and not failure to achieve the desired spec.

>Model Y will have Full Self-Driving capability, enabling automatic driving on city streets and highways pending regulatory approval, as well as the ability to come find you anywhere in a parking lot.

[1] https://www.tesla.com/blog/all-tesla-cars-being-produced-now...

[2] https://news.ycombinator.com/item?id=19397942 (linking HN because it's hard to find the text in the page with their UI)


I don't want to dismiss your whole point, because it is certainly valid, but that isn't really the issue here. It is entirely possible for bugs like this to exist in the self driving tech and for Tesla to be correct in their claims that Autopilot is on average safer than a human driver.

It is obviously troubling to see self driving cars run into solid and stationary objects, but human drivers do that all the time too. The question shouldn't be whether this technology is perfect, it should be whether this technology is safer than humans. You and I certainly don't have enough data to say one way or another on that. I would bet even Tesla doesn't have enough data to say definitively. However writing this tech off as unsafe just because it makes what seems like an obvious mistake is a great way to slow progress which will result in more human deaths long term.


I think you're missing the forest for the trees. We're not even at the point where we're comparing safety rates. Can a Model Y drive through DC? Can it ignore traffic signals in favor of following hand signals from a uniformed traffic officer, or follow the directions of a construction worker alternating traffic through a road they've decided to reduce to one lane? Can it deal with an unannounced presidential motorcade rerouting traffic? Because that happens literally every day in DC. If Tesla can't handle that, it's inaccurate to say it has "full SDC," even before you get into an analysis of safety.


It's because Tesla marketing material doesn't tell the whole truth, just the selected bit that sounds impressive. The conditions in which the Autopilot is safer than a human driver are so narrow and under so many restrictions that it's obvious it's less "self driving" and more "driver assist".

Indeed, I also find it disingenuous to claim the Autopilot is safer than a human driver when it can't function at all in 95% of situations on the road. I guess saying "possibly safer than a human driver at these 3 things and only under these very specific conditions" isn't so catchy. Definitely not thousands-of-dollars catchy.


It's because -in general- marketing material doesn't tell the whole truth, just the selected bit that sounds impressive.


But still, there's quite some difference in the fine print between a new MacBook being "two times faster than previous model#" and a Tesla that "comes with full self driving hardware&"

# only on specific workloads

& don't ever get distracted because it can literally kill you


This is the point I see getting lost. This isn't someone overstating how good a pair of pants makes me look. It's more like selling a flame thrower as a weeding tool.

I know that sometimes these lines are subjective, but acting like you can't tell okay and not okay apart because the line is gray in some cases just seems like bad faith.

If you are okay with Tesla misleadingly selling a prototype feature in the name of disruption, own your argument. If this level of risk is just okay to you because of the potential long-term upsides... I disagree, but I respect your willingness to say it. Just don't make a different argument because you are afraid of the social consequences of your actual point.


Don't you mean selling a weeding tool as a "flame thrower" ?


What you describe would be a disappointment. What he describes would be life-threatening.


Tesla did actually sell a propane torch that's a weeding tool (or asphalt surfacing tool, paint stripping tool etc) and they called it a flamethrower


Flamethrowers are regularly used as weeding tools though. The best way to kill weeds on the edge of a field or a ditch is to burn them.


Thank you for the assist.


Which, unfortunately, doesn't excuse them when users take Tesla at their word that their Autopilot feature is fully self-driving and end up in an accident and/or dead.


Is this hyperbole, or do you think Tesla actually said that?


Yeah, and the auto dealer industry is notoriously deceptive too. Tesla told us they'd be different. Well, they're not selling cars through dealers, that much is true, but it seems clear the deception is still in force.


That's probably very true but this isn't about the dealer as much as the manufacturer. When one says they have 4 crumple zones to absorb the energy of an impact and you later find out that the passengers were those crumple zones, someone goes to prison. When Tesla claims that their system is fully self driving at a level safer than a human driver they're doing the same. Because they know very well that system is just a glorified driver assist that can do any kind of driving only in the most narrow and perfect of conditions. Take it out of that narrow Autopilot "comfort zone" and you have plain old driver assist.


From what I understand, it can't even negotiate a traffic light.

Like here's a recent article swooning over it coming out sometime soon:

https://electrek.co/2018/12/09/tesla-autopilot-soon-traffic-...

The article there also says that the more capable version will require new hardware, which isn't something they have been admitting very readily over the years.


Furthermore, the demarcation between what it can handle and what it cannot is neither simple nor clear. As this case shows, you can be bowling down the sort of highway it is supposedly suitable for, yet suddenly come across something it cannot deal with - and maybe something it did negotiate successfully in the past.

As long as Tesla says that driver vigilance is required at all times, while simultaneously promoting it as if it were true automation, the risk to third parties, from Tesla autopilot users who don't understand what's going on here, is unacceptable.


As far as I'm aware, Tesla has never claimed their cars will be able to handle all those situations. "Self driving car" is a vague term that has different definitions depending on who you ask. You are considering "self driving" as level 5 autonomy and Tesla probably considers it somewhere between 3-4 autonomy. I don't think it makes sense to get angry because their definition of the phrase is not the same as your definition of the phrase.


Tesla skirts morality.

The feature is called 'Full self driving'. Their sales reps regularly told customers to take their hands off the steering wheel of their "Autopilot". Sure, the fine print says "always pay attention", but there's an entire marketing scheme going on here which is borderline dishonest.


> "Sure, the fine print says "always pay attention","

In fact the not-so-fine print in the user manual says to keep your hands on the wheel, but the promotional material tells a different story, as does Musk when he takes his hands off the wheel on national television.


Honestly, I'm surprised a false advertising case hasn't been pursued if this is the case. Marketing practices this inaccurate are well outside the puffery defense.


That’s just called “driving” and if the car can’t handle those situations it’s not a “self driving car.” It’s just a driver assistance feature.


> It is obviously troubling to see self driving cars run into solid and stationary objects, but human drivers do that all the time too. The question shouldn't be whether this technology is perfect, it should be whether this technology is safer than humans. You and I certainly don't have enough data to say one way or another on that. I would bet even Tesla doesn't have enough data to say definitively. However writing this tech off as unsafe just because it makes what seems like an obvious mistake is a great way to slow progress which will result in more human deaths long term.

And where is the research to show that? To prove it with any statistical significance, they would need millions and millions of miles to compare against human drivers. This is new technology, and making false claims about its safety is dangerous.

We as a society force drug companies to rigorously show that their drugs do what they say they do and all of their side effects are properly accounted for. This should be the case here too.


The last time anyone tried to make that case statistically, it turned out to be bogus [1].

Bizarrely, this report was from the NHTSA, raising the sort of regulatory capture issues that have surfaced between Boeing and the FAA.

Also, when such a study is performed, we must be careful that technologies are not conflated to claim more than properly can be. The effectiveness of less powerful technologies such as lane-keeping assist and automatic emergency braking tells us nothing about the safety of full-authority automated driving.

[1] https://arstechnica.com/cars/2019/02/in-2017-the-feds-said-t...


> see self driving cars run into solid and stationary objects, but human drivers do that all the time too.

Human drivers who are distracted do that all the time. AP is supposed to avoid that; it is supposed to be alert all the time. But when it sees a stationary object, the result of their algo is "it must be a sign we can somehow go through."

We need to make it very clear that self driving cars are better than driving drunk, or when you haven't slept in 24+ hours, but if you are a driver who pays attention, don't use this tech.

And it is not that I don't want the tech to take over the world. I wish I could just put my kids in a self driving car and have the car take them to the school that is 4 miles from home, but we are nowhere close to that, even with me in the driver's seat, if I only have seconds to take over before I end up in a ditch or worse.


There are currently no self driving cars which a drunk or exhausted person could operate with an acceptable level of safety.


It’s far from clear from the statistics that self driving cars today (without the use of a safety driver) are safer than driving drunk. IMO Tesla Autopilot fairly clearly is less safe than driving drunk if a human is not carefully monitoring it.


No, it's still the issue. If there's a bug in the software code for a programmable pacemaker, it doesn't matter whether the fundamental design is flawed. For governmental regulators, it's the actual implementation and real world performance that matters.

Sure, a bug fix for a routine being called is maybe easier to fix, but the real world performance is still the important metric to track.


Ok, but a lot of people are arguing something along the lines of "there is a bug in the pacemaker, let's take out everyone's pacemakers".

I am not saying we shouldn't be critical of Tesla or hold them accountable for their product. My point is simply that their product doesn't have to be infallible for it to still be an improvement over the current solution.


A pacemaker has objective improvements over the alternative, namely, death. So if a pacemaker has a bug, it's still overwhelmingly better than the baseline scenario of "no pacemakers."

A self-driving Tesla is not objectively better than a regular, human-driven car. The jury's still out on whether any self-driving car is even as good as the average human driver, so if one of them has a bug that causes serious accidents in reproducible situations, that's not better than the baseline situation it's being compared to.


> So if a pacemaker has a bug, it's still overwhelmingly better than the baseline scenario of "no pacemakers."

Unless it goes off when it's not needed and kills you.


Right, but the probability of you dying because you do have a pacemaker is (in the situations for which a pacemaker is prescribed) far less than the probability of you dying because you don't have a pacemaker.

The same cannot yet be said about "self-driving" cars.


> Right, but the probability of you dying because you do have a pacemaker is (in the situations for which a pacemaker is prescribed) far less than the probability of you dying because you don't have a pacemaker.

That assumes the pacemaker works more often than it doesn't. (Which is the case now.) It's an unstated assumption that doesn't always apply when generalizing your example.


No, it doesn't assume that. We don't install pacemakers in everyone at birth.

Even if a pacemaker fails 90% of the time, a 90% chance of death is better than the 100% chance without it.


You're assuming "fails" only accounts for false negatives. False positives are a thing. If there was a 90% chance of a pacemaker going off when it wasn't needed, they wouldn't be used as it'd cause nearly as meany deaths as it prevents, assuming no false negatives.


Another reason pacemakers are not comparable is that pacemakers do not present anything like the potential threat to third parties that self-driving cars do. Two reasons we can say that with confidence are that, with pacemakers, the scenarios are simpler and we have good statistical data.


>> Another reason pacemakers are not comparable is that pacemakers do not present anything like the potential threat to third parties that self-driving cars do.

Unless the person wearing the pacemaker is driving a non-self-driving car.


As I wrote, specifically because I knew someone would make this reply, in this case we have adequate statistics to make a good estimate of the minuscule risk.


I would not take those chances. If the self driving car is only slightly better than the average driver in fatal accidents, that's not good enough. Most accidents happen because of distractions (texting and so on), and if a self driving car is only slightly better than the average driver, I can do a lot better by just not using my smartphone while driving and not sitting behind the wheel tired.

So it's not enough to be better than the average driver. It needs to be as good as a "good" driver or better before I would trust it.


I wouldn't trust my life to proprietary software, because its makers' motives obviously don't align with my own. If my safety was really paramount, they'd open their software to scrutiny.

It's unethical to leave life-and-death decisions to a black-box algorithm, when I know it was written for the primary purpose of gathering more money. Safety is a constraint, not their goal.


You already do that right now and basically have done it every day of your life.

The power grid, railways, traffic lights, elevators, etc.: all these systems are critical and closed source, and you don't see them killing people on a wide scale, nor even on a small but regular basis.


Elevators I'll grant you, but the power grid, railway and traffic lights are all controlled by governments, councils and NGO-type organisations — they all have reliability as their primary concern, not churning out units for profit.

And it may be irrational, but wrongly-activating brakes feel like less of a risk than wrongly-activating steering or accelerators.

And anyway, “lots of things are untrustworthy” is not a good argument for trusting something else.


Yep. Add the code which controls ABS and ESP in modern cars.


You’re implying that agency is the element that is most desired. Ie if a human crashes and kills themselves/others, it is ok, because someone is at fault.

If the car is anywhere from slightly to quite a lot safer, but accidents that result in injuries/deaths occur, then it is not ok.

Psychologically you may feel this to be ‘right’, but I would prefer a world with fewer injuries and deaths all round. And one day the courts will too.


No, I'm just saying that average is not good enough.

Take lung cancer, for example. If we take the average, the risk is 6% across smokers and non-smokers combined, but if I decide not to smoke, mine is 0.2%. So in this case I can do better at lowering my chances than if I accepted some other solution that gave us all 5%, which might seem OK, but not to me.


You are setting up a false dichotomy. You don't have to choose between Autopilot and paying attention. The video above is an obvious example. It didn't result in an accident because the driver was paying attention and intervened.

Also you aren't considering that other drivers can be the cause of a potential accident. You can assume that the other drivers on the road are average drivers with all the distractions that come along with that. If you make the other drivers on average slightly safer, that improves your safety even if your behavior is completely unchanged.


> You are setting up a false dichotomy. You don't have to choose between Autopilot and paying attention. The video above is an obvious example. It didn't result in an accident because the driver was paying attention and intervened.

If you have to pay attention, then you might as well be driving.


Not only that, this is a new type of driving where you have to actively pay attention and fight the vehicle.


You definitely aren't doing your commute in a Tesla, then.


This is just a silly argument to me. Would anyone suggest that cruise control is worthless because you still have to steer the car?


The more the car drives for you, the harder it is to pay attention and the slower the context switch when suddenly the situation demands it.

This is basic human nature.


Cruise control is useful to me because it means I won’t accidentally speed. But having to hold the wheel means I need to pay attention. If the car is driving itself human nature means you might not pay attention. Also if the car does something weird you have to decide in a split second if it is because of a hazard you haven’t seen but the car has, or because the software went wrong.


You do have to hold the wheel, precisely because you might find yourself having to assess what the car is doing "in a split second", and possibly issue corrective actions. You can't do that unless you are holding the wheel and are paying attention to what the car is doing! But all that means is that the car is not really driving itself; it is however dealing with the boring, predictable parts of the job and leaving the rest up to you - this makes it easier to be attentive, not harder. The tasks where it's hard to pay continued attention are ones where you have to do something that seems extremely predictable, but also has very rare events where it isn't. Computer assistance can actually help a lot with such cases.


The boring predictable parts of driving (lane keeping and speed keeping) are already basically unconscious muscle memory. It’s watching others for surprises that takes the mental energy.


Exactly - though speed keeping is nice for long road-trips simply not to get foot cramps.

Anyway, I've yet to see a video of a Tesla doing emergency braking for a highway pileup where it's not a couple of seconds after a human driver could already see the row of red brake lights ahead.


That makes no sense. If I can’t pay attention to the boring predictable part of driving how am I going to pay attention when I’m not even the one driving?

As for the rare unpredictable stuff, I might not be able to predict, say, a ladder falling off a work truck, but I can recognize it could fall off, prepare for that type of event, probably even recognize it before the car does, and if I don’t die, I can learn from it.

I don’t really see how also having to worry about the car itself creating a rare event helps. Now I’ve got to spend additional time deciding whether to take over and if I do, the time I have to react is reduced.


Well if marketing says you get a CNC mill and what you really get is three power feeds it's not worthless but it's still not what you were promised.


I was talking about self driving cars, not really Tesla Autopilot. But do you really think this (paying attention and correcting autopilot mistakes) would not be a bigger problem, knowing full well that the no. 1 cause of accidents is not paying attention?

Yeah, other drivers are also part of this, but even then I would like something a lot better than average. Sure, there's a chance a distracted teenager is driving somewhere around you, but it's not better if there are now 10 slightly less distracted teenagers around you.

Average is really not a great measure here if we are talking about self driving cars.


It may be better than humans on average, but is it safer than a sober person who is driving responsibly in a similarly priced vehicle? Anecdotally, most accidents I've seen have been caused by at least one driver doing some stupid shit. Based on my own personal experiences, I'm just not comfortable surrendering control of my vehicle. Especially not when we could be comparing apples to oranges. The safety of a self driving system should be evaluated against currently available safety technologies like blind spot detection, etc. If you include older vehicles that lack the best available non-self-driving technology in your data, you're making a comparison that is inherently biased in favor of self driving cars.

All that being said, I do think self driving is what we will inevitably arrive at...we just need to have a higher degree of confidence in it before it's widely used in my opinion.


Hang on.

Unless something has changed, the Tesla self driving tech intentionally ignores stationary objects.

And Tesla is claiming this is a feature, not a bug.

Am I understanding this correctly?

I don't know anyone who isn't suicidal who intentionally ignores stationary objects while driving.


Do you have anything to back this up? That seems like a pretty big claim to make unsubstantiated. If Autopilot literally ignored stationary objects, you would see them crash much more often than they do.


It has been in the media.

https://www.google.com/search?q=tesla+ignores+stationary+obj...

Edit to add: apologies for terse response, a bit distracted at present


We shouldn't have to conjecture about this. We should be able to read and understand the algorithm.


> It is obviously troubling to see self driving cars run into solid and stationary objects, but human drivers do that all the time too. The question shouldn't be whether this technology is perfect, it should be whether this technology is safer than humans

I fail to understand this line of reasoning. Are you saying that because humans tend to run into solid objects, it is okay if self driving cars do it too, as long as they do it less often than humans?


I think this graph:

https://upload.wikimedia.org/wikipedia/commons/a/a5/Causes_o...

puts a lot of arguments (and young people) to rest.


Not at all. It says that car accidents are a problem. It does not say that the answer is to pretend your driver assistance tech is actually self-driving tech.


I was addressing the point in the parent comment:

> The question shouldn't be whether this technology is perfect, it should be whether this technology is safer than humans.


Is it, though? If every car that drove that route was a Tesla on autopilot, you'd probably have more traffic fatalities than the national total, and that doesn't even factor in every other similar traffic barrier this can occur at.


NN tech debt.


That’s one of those thorny AI-human problems, though — if I crash 1 in 10 drives, it’s my fault, and therefore much easier to rationalize to be, well, not my fault. If my Tesla crashes 1 in 100 drives, it’s a faulty machine and an obvious death trap.


If you crash 1 in 10 drives you have no business being on the road. Most people can drive for years without even having a single, minor accident.

If Tesla crashes 1 in 100 drives, it's absolutely a death trap since that's still an obscenely bad accident rate.


It is a classic trolley problem. Is it worth killing a group of innocent people if it will save a larger group of innocent people? I can certainly see why people might be pushing for caution against Tesla since we don't know the size of either of those groups.


It could theoretically be a trolley problem. In practice Tesla are probably not that safe if you compare them driving themselves on freeways to humans driving under the same conditions.


I was going to point you to a study showing that autosteer reduced accidents by 40%, but Google told me that this was recently shown to be wrong. In fact it increased accidents by 60%.

See https://arstechnica.com/cars/2019/02/in-2017-the-feds-said-t... for details and verification.


The paragraph directly after that 60% number:

>So does that mean that Autosteer actually makes crashes 59 percent more likely? Probably not. Those 5,714 vehicles represent only a small portion of Tesla's fleet, and there's no way to know if they're representative. And that's the point: it's reckless to try to draw conclusions from such flawed data. NHTSA should have either asked Tesla for more data or left that calculation out of its report entirely.

I will go back to my original statement at the start of this thread. No one in these comments has the data to say definitively whether Autopilot is safer than a human driver. I am skeptical that Tesla even has enough data for that. But I am also skeptical of people who take that unknown and the occasional anecdotal data point like the above video as proof that Autopilot is inherently less safe than humans.
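The exposure point in that quote is worth spelling out: raw crash counts only become comparable once they're normalized by miles driven. A tiny hypothetical example (numbers invented, not from the NHTSA report):

```python
def crashes_per_million_miles(crashes, miles):
    """Normalize a raw crash count by exposure (miles driven)."""
    return crashes / miles * 1_000_000

# Invented numbers: fewer crashes in absolute terms after a feature ships...
before = crashes_per_million_miles(crashes=100, miles=50_000_000)
after = crashes_per_million_miles(crashes=90, miles=30_000_000)

print(before, after)  # 2.0 3.0 -- the per-mile rate actually went UP
```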


Tesla turned over the best data set to the NHTSA. They had more data on Autopilot usage and chose not to turn it over. It's thus fair to assume that the data they did not provide would not have benefited the company. So if even the Tesla-provided data set shows Tesla is worse than a normal driver, then it's logical to assume that a full data set would show the same--or worse.


> Model Y will have Full Self-Driving capability, enabling automatic driving on city streets and highways pending regulatory approval, as well as the ability to come find you anywhere in a parking lot.

We've heard that before, for the Model X.


> As far back as 2016 they were claiming they had full SDC capability above human driver safety

One crash/incident doesn’t mean it’s less safe than a human driver on average, even if it’s something a human driver might have avoided.

We should expect different failure modes from a machine, but adopt it anyway if it avoids enough human failure modes to make up for it.

Source: my thermostat thought 33C was an acceptable temp once, but I still didn’t switch to full-manual HVAC control.


Don't Teslas fare significantly worse than cars in similar class in terms of driver fatalities?


Are you asking about Teslas or Teslas being driven by Autopilot? Autopilot so far has a fatality rate of roughly 0.25 per 100 million miles. Just for information's sake the overall rate in the US in 2017 was 1.16 per 100 million miles. Although there are plenty of caveats that should prevent you from comparing those numbers directly. The national number is for all cars, all drivers, all conditions, etc while those driving a Tesla in Autopilot are generally considered to be in a safer cars, safer conditions, and be safer drivers than average.
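For anyone wanting to sanity-check figures like these: the rate is just fatalities normalized per 100 million vehicle miles traveled, the convention US traffic statistics use. The mileage below is an invented example, not Tesla's actual disclosure:

```python
def fatalities_per_100m_miles(fatalities, miles):
    """US traffic safety statistics are usually quoted per 100 million VMT."""
    return fatalities / miles * 100_000_000

# e.g. 3 fatalities over a hypothetical 1.2 billion Autopilot miles:
print(round(fatalities_per_100m_miles(3, 1_200_000_000), 2))  # 0.25
```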


Tesla drivers have the luxury of turning autopilot off in suboptimal conditions. Human drivers don't.

Like you say, this number is really valuable only for Tesla marketing.


Tesla would have had 0 fatalities in total if the Autopilot disengaged just before hitting the barrier.


For all driving protocols P, and accidents A, P would have zero fatalities if it switched to a better protocol just before encountering the situation that led to A.


That's exactly what I am trying to say. Yet Tesla implies their tech is statistically superior to human drivers, even though it can disengage at any time.


Ah okay. Sorry, easy to misread people on this topic.


That stat of 1.16 includes motor vehicles, motorcycles, pedestrians, buses, bicyclists and trucks. How is this a comparison to luxury electric cars again?


Dunno, maybe. I do wonder which cars Tesla gets lumped in with.

There can be a difference between a $75k ICE driver and a $75k electric driver.

The point is, one stupid decision by a machine doesn’t prove much.


So... two planes falling out of the sky, killing almost 350 people, shouldn't be taken as proving anything?

Or how about 1 stupid decision by a radiotherapy machine? Ever heard of the Therac-25?

In each case, it was just 1 stupid decision by a machine.

The entire reason Engineering as a practice is a thing is because when you implement the capacity for a stupid decision into a system that is then mass produced, dire consequences can result.

I look down on any thought process that doesn't discriminate between 0 and 1.

If the system provably worked, that decision would not have happened (0). The stupid decision happened (1), however, which means it can happen again at a poorly understood confluence of circumstances.

To err is human, and we forgive each other every day for it.

To err as a machine is a condemnation to the refuse bucket, repair shop, or back to the drawing board.

To err so egregiously as a machine to cause an operator and those around them to lose their life is willful and moral negligence on the part of the system's designer. Slack is cut when good faith is demonstrated, but liability is unambiguous. The hazard would not be there if you hadn't put it there.


And people have died and been paralyzed as a direct result of the flu vaccine: deaths and paralysis that would not have occurred had they not received that vaccine.

Does that mean the flu vaccine should get dumped in the refuse bin?

Similarly, some other aircraft flying today/tomorrow has automation with an unknown bug/issue that will cause loss of life. Should we disable everything except the 6-pack and stick and rudder?

It would have saved the lives killed by automation, but we would have more aviation death overall.


Liability is still clear, and in the specific case of medical practice, a degree of "we can't foresee everything" is implicit in that our understanding of the governing principles of the human body is incomplete.

Aviation does not have that excuse. The 737 MAX 8 system description is enumerated from the ground up. Seeing as there was so much recertification effort that didn't need to be done, it makes the failure to properly handle the MCAS implementation all the more damning.

This wasn't some subtle bug. This was an outright terrible design choice. Anyone with any experience composing complex systems out of smaller functional building blocks should have been able to look at the outputs, look at the inputs, and realize there was the potential for catastrophic malfunction.

As I've said elsewhere, automation should make flying a plane easier when functional. When non-functional, however, the pilot should still be able to salvage the plane. That requires clear communication of what the automation does and what its failure modes are.


Does that mean you believe the decision to ground 737 max 8s is incorrect?


I don't have the fatality stats, but their safety ratings have been perfect.

[0] https://www.cnbc.com/2018/09/20/tesla-model-3-earns-perfect-...


They solved it, they just need 100x faster GPUs in cars to be reliable outside datacenters ;-)


The passage says “will” which implies the future.


I have been yelling about this for a long time: Tesla is not going to be able to deliver full self-driving as promised. They don't have the hardware, for one (ranging is terrible; they need stereo cameras), and second, their software strategy seems to be a dead end.

They need sensor fusion. The system needs to make maximum use of all the information available to it: Where is the road striping? Where are the other cars going? Where are the road signs and signals? (If there's one in your path, you certainly shouldn't drive into it!) Are there camera-visible obstructions? What were the interpretations and actions of previous Tesla trips along the same route?

In these problem cases, all data except the left and right lane striping seems to be completely ignored. There was even more information at the fatal offramp location (cross-striping over the lane separation zone), which the vehicle drove straight over. The system is not making maximum use of the information available to it, in fact it is using hardly any of it at all, and fixating on what it thinks is a single most salient piece of data.

Sensor fusion algorithms tend to behave the opposite way-- each additional piece of data informs the interpretation of all the other data. You can have very poor-quality data, but if it is even moderately over-constrained, your state estimate can be very good in spite of it. I think it would be completely reasonable to have a neural net in the loop of a sensor fusion algorithm, with fusion constraints informing the NN's interpretation, and the NN's estimates feeding back into the fusion algorithm as uncertain data.
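To make the "over-constrained data" point concrete, here's a toy sketch (made-up noise figures, nothing Tesla-specific): two individually poor sensors measuring the same lane offset, blended with inverse-variance weighting, which is the core of a Kalman update.

```python
import random

# Toy 1D fusion: two noisy sensors measure the same lane offset.
# Individually each is poor, but repeatedly fusing them (inverse-variance
# weighting, the heart of a Kalman update) yields a much better estimate.

def fuse(est, est_var, meas, meas_var):
    """One Kalman-style update: blend prior estimate with a measurement."""
    k = est_var / (est_var + meas_var)      # Kalman gain
    new_est = est + k * (meas - est)        # corrected estimate
    new_var = (1 - k) * est_var             # reduced uncertainty
    return new_est, new_var

random.seed(0)
true_offset = 1.0
est, var = 0.0, 10.0                        # vague prior
for _ in range(50):
    cam = true_offset + random.gauss(0, 0.5)    # noisy "camera" reading
    radar = true_offset + random.gauss(0, 0.8)  # noisier "radar" reading
    est, var = fuse(est, var, cam, 0.25)        # meas_var = sigma^2
    est, var = fuse(est, var, radar, 0.64)

print(round(est, 2))   # converges near the true offset of 1.0
```

The key property is that each measurement constrains the interpretation of the others: the estimate's variance shrinks with every update, so a momentary bad reading can't yank the state around the way a frame-by-frame interpretation can.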

IMO Tesla will do at least one of:

* Very expensively retract their promise of full self driving for delivered vehicles

* Completely overhaul/redesign their driving software and start again nearly from scratch

* Get into a regulatory/legal tangle with the NTSA/courts/DOJ over all the dead people their system is making.


They do have stereo cameras and sensor fusion, as well as detecting more than just the lines in the road. Here is what the camera sees: https://www.youtube.com/watch?v=rACZACXgreQ


What in that video suggests sensor and/or stereo fusion to you?

I notice that the temporal coherence is pretty bad-- Pedestrians pop out of recognition when they go behind trees; lane/exit boundaries wiggle all over the place and occasionally frame-pop into different configurations. A Kalman filter, for example, is a state estimator which maintains temporal coherence, and makes heavy use of previous estimates/sensor inference when computing the most updated estimate. It doesn't look to me like that kind of strategy is being used to maintain the vehicle's world model. IMO a good estimator wouldn't treat "a pedestrian popping out of existence" as the most likely estimate for any circumstance, let alone one where they were clearly present in the previous 50 frames. I don't doubt they're using a KF on the vehicle's inertial movement, but based on the failures and this video, it sure doesn't look like it's using a fusion technique for the world model.

There are left and right-looking cameras, but the FOV overlap between them is not very substantial, and there can't be stereopsis where there is no overlap. Per the Tesla website, there are three forward-looking cameras, and they each have a different FOV. The parallax baseline between them is only a few centimeters, too, so the depth sensitivity isn't going to be spectacular. It's certainly possible that there could be some narrow-baseline stereo fusion, but it could only really happen inside the narrowest field of view, where the coverage overlaps with more than one camera. That's the circumstance where having a narrow baseline would hurt the most. Based on that it doesn't really seem like the system is well set-up for stereopsis; if it's there it seems like an afterthought.
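To put rough numbers on why a few-centimeter baseline hurts (all figures below are assumptions for illustration, not Tesla's actual camera specs): depth from disparity is Z = f·B/d, so the depth error for a fixed disparity error grows as Z²/(f·B), i.e. inversely with baseline.

```python
# Rough stereo depth-resolution sketch (assumed numbers, not Tesla specs).
# Depth Z = f * B / d, so the depth error from one pixel of disparity
# error is approximately dZ = Z^2 / (f * B). A narrow baseline B makes
# dZ blow up at highway distances.

def depth_error(z_m, baseline_m, focal_px, disparity_err_px=1.0):
    """Approximate depth uncertainty (meters) at distance z_m."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

focal_px = 1000.0                     # assumed focal length in pixels
for baseline in (0.05, 0.30):         # 5 cm vs. 30 cm baseline
    err = depth_error(50.0, baseline, focal_px)
    print(f"baseline {baseline*100:.0f} cm -> ~{err:.1f} m error at 50 m")
```

With these assumed numbers, a 5 cm baseline gives on the order of tens of meters of depth uncertainty at 50 m range, which is why narrow-baseline stereopsis at highway distances looks like an afterthought rather than a ranging strategy.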

I could certainly be wrong, as I don't have access to the code. Are you going by some other secondary source/information?


To be fair, it could be that this is what the camera segmentation does before it is combined with other sensors, and before it is used to update the world model (which then has temporal information)


Certainly two cameras with different FOVs could be combined to give the same depth data that a stereo camera setup could give, right?


Not if they are on the same axis; then anything along that axis cannot have its depth determined.


It can through other signals. You can drive successfully with one eye.


I've got one eye at 20-20 vision, and the second legally blind without correction. My drivers license has a little note that it's not legal for me to drive without my glasses, which I never wear under any other circumstances.

So it's not so clear cut as you make it out to be.

(And you know what? Even if it were legal for me to drive without those glasses, I'd still drive with them. Because ranging is important!)


That's not stereopsis. And it's terribly inaccurate.


No it isn’t. You can perceive depth through motion. And one eyed people can drive a car legally: https://itstillruns.com/can-drive-blind-one-eye-5689258.html


Stereo cameras overlap. Tesla's cameras are intended for 360-degree coverage, not overlapping vision.


I think you are confusing what the software is capable of today with hardware limitations in the sensors deployed in the cars.


Where is your evidence for these claims?


Musk is more optimistic than you - says late 2020 for sleep while the car drives https://www.youtube.com/watch?v=Y8dEYm8hzLo&t=10m25s

Also quite interestingly he says there will be a big jump forward in quality when they switch to their own computing hardware (18m40 or so)


> late 2020 for sleep while the car drives

Musk is often overly optimistic and he keeps underestimating the problem at hand. I call BS on this; it won't be ready in 5 or even 10 years. And then there is regulatory approval.


Indeed though he's delivered some stuff. It'll be interesting to see how it goes.


Though the 2017 LA to NYC drive is behind schedule. (https://www.theverge.com/2016/10/19/13341100/tesla-self-driv...)


> Musk is more optimistic than you

Indeed.

What Musk says and what Tesla delivers are two completely separate things.


If my anecdote is useful for anything, my car drives about 90% of my 25mi commute today on its own, including leaving one highway and going over a ramp to then merge into another highway. Needless to say that I’m extremely happy with what has been delivered so far.


I think a careful interpretation shows how far that is. A self-driving system ought not be considered reliable until it can drive O(100 million miles) without disconnecting once, in order to match human reliability (that's about the distance between fatal accidents currently). A guaranteed disconnect within 25 miles is many zeros of missing reliability.

A Disney park engineer once relayed to me the philosophy for designing safe attractions in the parks: "If there's a one in a million chance of it happening, it'll happen multiple times per year," given attendance numbers which are in the millions.

A self-driving car needs to handle ordinary commute circumstances with 100.0% reliability, and one-in-a-million circumstances (which statistically you will have never personally encountered) with reliability literally above 99%.
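The arithmetic behind both points is quick to check (the attendance figure below is an assumed round number, and the 25-mile figure comes from the commute anecdote above):

```python
# The Disney-engineer point: a one-in-a-million-per-visit event, at tens
# of millions of visits per year, is expected to happen many times a year.
p = 1e-6                        # chance per guest visit
visits_per_year = 50_000_000    # assumed annual attendance
expected = p * visits_per_year
print(expected)                 # 50.0 expected occurrences per year

# And the reliability gap: human fatal-accident spacing vs. a system
# that disconnects within a single 25-mile commute.
human_miles_per_fatal = 100_000_000
ap_miles_per_disconnect = 25
print(human_miles_per_fatal // ap_miles_per_disconnect)  # 4000000
```

That last number is the "many zeros of missing reliability": roughly a factor of four million between a guaranteed disconnect per commute and human-level fatal-accident spacing.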


On the flip side, with a million cars on the road, that is a lot of edge cases they see before they remove the steering wheel of their cars.


I don't know many humans who drive 100 million miles without disconnecting. The disconnect thing is more analogous to pulling over to check the map, for which once every 5,000 miles might be OK.


If the computer needs to pull over and check itself into a motel and nap for a few hours, I'm not going to object. But right now the computer "takes a break" by simply disengaging in the middle of the road. Presumably when this happens, the computer isn't even confident in its ability to safely park the car on the shoulder, as a human driver would attempt in the event of an emergency (blown tire) or undrivable conditions (which for a human might be an intense blizzard or torrential downpour.)


Sure, and we are not there yet and nobody has claimed that we are.


This is the comment thread beginning with "Musk claims 2020 for sleep while the car drives". I think the point of this discussion is that Musk has been implying that.


I’m pretty sure it is still 2019.


And you're expecting reliability to improve by a factor of ten million before 2020?


I am expecting significant improvements as more and more vehicles start providing data for the entire system to learn and I don’t care at all if it takes longer than that. I don’t want the technology to fail just because Musk is overly optimistic.


Hope your car doesn't drive you into a wall then.


I’m sure you care.


And if your trip on autopilot is 99.9% reliable, how would you feel about that stat?

Do you find yourself paying attention to the road context the entire time? I'm curious if your mental acuity has dropped over time as the car has driven you.


You start trusting it more as you use it and learn its flaws, so you can anticipate when it may do something stupid. I pay attention the entire time, but it is a much more relaxed experience. I’d still pay attention the entire time even if it were supposedly 100%.

It is very sad that a lot of people in this forum are hoping this never works. It is one of the most exciting advancements in technology that can benefit us all, but people here seem to be more interested in seeing Musk and Tesla fail rather than hoping they achieve this and bring the whole industry forward one more time, affecting millions of lives.


That is not what is happening here. People are rightly skeptical of the claims. As Theranos showed, you can't "fake it till you make it" when people's lives are involved.


> It is very sad that a lot of people in this forum are hoping this never works.

Top level OP here. I am personally rooting for self-driving to succeed and catch on. I just don't think Tesla's current strategy is likely to work, and their cavalier, unjustified overconfidence is either going to sink them or kill people, or both, neither of which is good for the future of self-driving.


We’ll see how this post will age.


He's also said 2016. And 2018.


He likes even numbers.


2048?


Additional context: on March 23, 2018, a Tesla Model X, while using autopilot, drove into a barrier and killed the driver. This was fixed in a software update, but the issue seems to have resurfaced.

NTSB analysis of the March 23 2018 collision: https://www.ntsb.gov/investigations/AccidentReports/Pages/HW...

Tesla's statement of the March 23 2018 collision: https://www.tesla.com/blog/update-last-week%E2%80%99s-accide...


Well, it wasn't really "fixed" -- the driver is still dead.


Sure, if you construe the meaning of 'fixed' to mean 'roll back all events and time', which I don't think anyone expects. They probably fixed the software bug.


I think that it's important to note that "we'll fix bugs via live patches" is a deadly decision when it comes to self driving software.

Just discussing the incident in terms of a "software bug" does a disservice to the severity of the issue.


Remember, autopilot and autosteer are not self-driving. Tesla is very explicit that the driver must remain alert, and supervise, at all times.

That being said, drowsy driving is a thing, and it's very easy to fall asleep behind the wheel. The car really needs a better strategy to handle this situation.


Tesla's legal department is very explicit, the marketing department? Not so much...


No, it does not. While I agree that testing patches on users with no testing beforehand is horrible, 1) I doubt Tesla does this, and 2) the person died before they fixed it, which GP seemed to be confused about.


The day I took delivery of my 2016 Model X with AP 1.0, Tesla announced AP 2.0. A friend of mine immediately ordered a Model X with AP 2.0 and rubbed it in my face.

For the entire next year, my AP 1.0 (which is non-Tesla technology -- Mobileye rocks) had no trouble doing adaptive cruise control and lane assist. Meanwhile his AP 2.0 would brake suddenly and swerve all over the place. It took a full year of OTA updates before his AP 2.0 was finally on-par with the functionality that I had the whole time. Of course, by then Tesla pulled a "we're sorry, but the princess is in another castle" and came out with AP 2.5.

Now this kind of stuff doesn't matter to me. I got tired of that company's shit and have pulled out of the Teslasphere entirely. I'm now driving a non-Tesla EV, and I'll never look back. I'm also letting my government representatives know that they should support a common EV charging standard and keep Tesla so-called "self-driving" shit off public roads.


Eventually we need the NTSB to certify updates before they are pushed out over the air. Similar to what the FAA does.

These OTA updates are not ok for large machinery and endanger not only the Tesla driver but all others on the road.


Yeah, it's a pretty crazy world where a game developer pushing an update to a game on Xbox live seems to have a harsher approval process to go through than an automaker pushing OTA updates that literally drive cars.


Indeed. I have never understood the cheering for OTA of something that can literally kill you. Were they limited to infotainment and the useless easter eggs I would not care, however AP OTA is downright scary, especially pushed by vendor instead of being initiated by an informed user.


It is initiated by the user ... the update only happens after you say it's okay to do so.


According to the posters on Reddit, this update’s release notes were about “Dog Mode” and “Sentry Mode”.

Nothing warning that behavior of autopilot might change.


The AP can only kill you if you're distracted/asleep at the wheel and not paying attention. You're always in charge of driving the car appropriately, the "AP" can only provide tentative assistance of a very rough sort. It will try to guess at the control inputs you might want to provide (which does make things a bit more comfortable in the best case) but the actual control always rests with the human driver.


> You're always in charge of driving the car appropriately, the "AP" can only provide tentative assistance of a very rough sort.

It can "tentatively assist" you in killing yourself and potentially others, but we already know that from the disaster in Mountain View, where this exact same thing happened.

The problem with behavior changes in autopilot is that the car needs to react the same way every time.

If you're going around a corner on a road which you've driven on for years without issue, and the car all of a sudden does something unexpected, the panic and over-correction reaction that everyone instinctively has tends to cause more accidents than just holding your desired course does.

It doesn't matter if it's a skid, or your Tesla trying to accelerate you into a wall at 70 miles an hour. If the car does something the driver is not expecting it to do for any reason (from fatally defective software to ice on the ground) the driver performs sub-optimally as a result.

And since this is Tesla we're talking about (who bakes in features like your car not starting before you upgrade its software, which is another "feature" about these cars that's going to get someone killed sooner or later), I'm willing to bet that the car doesn't warn you that this might occur. It just works fine for 6 months, then an update gets pushed, the car tries to put you into a wall, and you cause an accident because you're trying to stop the car from killing you and didn't check your blind spot before that.

That is a UI and UX failure of the highest magnitude, and it is completely unacceptable, no matter how well it otherwise tends to work.


> The AP can only kill you if you're distracted/asleep at the wheel and not paying attention.

That's not true. There are many situations where the AP could make a sudden turn and kill you long before you had time to react, even if you were paying attention. And in fact, the situation in the video seems to be not so far away from that.


Make a sudden turn? It's just lane following, so I really doubt that. And if your hands are on the wheel (as they should be) you'll very quickly become aware that the car is not staying in the lane as it's supposed to, and be in the best position to deliver the right input. Even looking at this video, the car didn't seem to "turn" suddenly and unexpectedly; if anything, it failed to physically take a turn even though it should have. Which is quite bad, but it's not startling to the point where a driver would be in trouble.


> Make a sudden turn? It's just lane following

You're begging the question [1]. The autopilot has full control authority over the steering wheel, so if it fails, nothing constrains it from making a sudden turn. If it is "just lane following" then it hasn't failed (yet).

[1] https://en.wikipedia.org/wiki/Begging_the_question


Your hands. If you're just resting them on the wheel, that's not enough to satisfy the attention monitor; it actually has to measure some physical resistance. If you resist the automatic inputs too much, the AP cuts out.

You can test this yourself in a tesla by engaging cruise control, then hitting a turn signal. This would normally initiate an automatic lane change - but keep your hands tightly on the wheel as if you wanted to stay in the lane you're in. The wheel will attempt to turn, fail as you're preventing it from turning - and the AP disables.


> Your hands.

Sure, if you react fast enough.

Let's not lose the plot here. The original claim was:

"The AP can only kill you if you're distracted/asleep at the wheel and not paying attention."

(Emphasis added.)

And that's not true. It can kill you, quite simply, by producing the wrong control input in a situation where the available recovery time is less than your reaction time.

If you doubt this, then I challenge you to drive a car where the autopilot is under my control. (It will have to be remote control because no fucking way am I willing to be in the car with you when we do this experiment.)


What I'm trying to imply is that "not paying attention" is exactly what happened here.

The "attention checking" has a delay of a few seconds on it before it'll start warning you to grip the wheel. If your hands were on the wheel, and you were paying attention, there's no reaction time, since the erroneous control input would be overridden by your hands keeping you in your lane.

Put simply, if your grip on the wheel was loose enough to where the computer-generated move could physically move the wheel, you weren't in control of the vehicle and would be generating warnings.


No, it wouldn't. Having your hands on the wheel doesn't ensure that you're paying attention. And if you need to be paying attention 100% of the time anyway, what's the point of having an autopilot?


But having your hands off the wheel absolutely does mean you aren't paying attention. Which is why Tesla and most other self-driving systems I'm aware of check for it. It's a negative signal, not a positive one.

>And if you need to be paying attention 100% of the time anyway, what's the point of having an autopilot?

The same reason cruise control is a thing on every modern car. It cuts down on fatigue, which in turn should improve safety and comfort. You're still required to be in control of your speed, but the vehicle manages keeping you at the set speed.


This seems like a meaningless argument - we use fly-by-wire systems all the time and your point is true for most of them. Should we be suspicious of electronic throttle because it could theoretically hit the gas at a crosswalk when you tried to stop?


Should we be suspicious of any automation that can cause death and injury if it doesn't work right? Yes. Absolutely. These systems need to be very carefully vetted. When you don't do that, you end up with cars driving into cyclists [1] and barriers [2], and airplanes driving themselves into the ground [3]. Automation that puts civilian lives in danger is not something where you can "move fast and break things" because the things you end up breaking are innocent human beings.

[1] https://www.nytimes.com/2018/03/19/technology/uber-driverles...

[2] https://www.popularmechanics.com/technology/infrastructure/a...

[3] https://www.aviationcv.com/aviation-blog/2019/shocking-facts...


Here's autopilot swerving into oncoming traffic.

https://www.youtube.com/watch?v=ZBaolsFyD9I

Here's autopilot following a lane straight into a concrete barrier.

https://www.youtube.com/watch?v=-2ml6sjk_8c

You can't assume that autopilot won't screw up lane following and swerve into a large obstacle. In those situations it's not as simple as making sure the lane ahead of you is clear, you might only have a split second warning between Autopilot going into "casual murder mode" and a collision.


Consider how widely affected contemporary machine learning models are by adversarial examples [1]. I don't know the specific approach used by Tesla or their release process, but I would not be surprised at all if their software has similar shortcomings.

Granted, a ton of active research is going on trying to prevent these sorts of problems, but it definitely isn't a solved problem.

Basically, ask yourself this: how robust is this software? How are you measuring that?

[1] https://arxiv.org/pdf/1712.07107.pdf


It shouldn't be marketed as 'Autopilot' then. Less informed people will expect it to do more than it can. It's (IMHO, criminally) misleading to market something as being capable of doing something it cannot do.


The issue is that the way the feature is designed and marketed makes it very easy for the driver to be distracted and not paying attention.


OTA is fine. It's unregulated, unannounced OTA that's problematic.

I'd love for my VW to get updates OTA without taking it to the dealer. But, I don't want to receive those updates without knowledge that the update has been tested sufficiently (and given I don't trust the vendor, I'd like NTSB or similar government body to do this on my behalf).


Not eventually - now. This is putting everybody in danger, not just the owners of the cars.


> Eventually we need the NTSB to certify updates before they are pushed out over the air. Similar to what the FAA does.

You mean, similar to what the FAA might do if it didn't allow manufacturer self-certification instead, right?

Also, wouldn't NHTSA, which does safety regulation and standards for autos, be the natural agency, rather than the NTSB, which does accident investigation?


> Similar to what the FAA does.

The timing of this is precarious (given the recent 737 Max allegations)


I agree, but is there a way that could happen without slowing the process to a crawl? Depending on what’s involved it could easily push the gap between updates from months to years.


> I agree, but is there a way that could happen without slowing the process to a crawl?

I don't see why a more rigorous release process would slow progress down at all. All the iterations that lead to progress should be done on test vehicles, not customer vehicles.

"Move fast and break things" is a development model that should only be applied to low-importance, low-risk systems. Most software development work occurs on such systems, and I think that narrows the perspective of the software development community as a whole.


This is a feature, not a bug, and should be expected in life critical systems. Would you want Boeing to push updates out as frequently as Tesla does with the same sparse release notes Tesla provides (“bug fixes”) when safety system functionality is modified?

Disclaimer: I own a Model S


Imagine if Boeing got away with just pushing a software update to the 737 MAX8 and saying "it's fixed now".


Well, it might have prevented the second crash if they had treated the matter with a bit more urgency. Depends on whether the two incidents really had a common cause, which is looking like the case.

Of course it's also looking like they should have grounded the fleet after the first crash, given the history of the aircraft prior to its last flight.


This argument would be a bit different if we were losing 30,000 people a year in airliner crashes, instead of approximately zero.


You can still save lives with automotive safety systems and not have a dumpster fire of an SDLC process. There is a spectrum between one firmware update a year and "f* it we'll do it live".


I think a big step forward would be addressing something you mentioned earlier, with respect to documenting what changed in a given update. "Bug fixes" doesn't belong in a Spotify changelog, much less one for a product made by Boeing or Tesla.

It almost seems like a right-to-repair issue, where manufacturers are going out of their way to avoid documenting how their products actually work for fear of losing control over them or of disclosing details that a competitor or patent holder might find useful.

There definitely needs to be a strong regulatory response to that kind of behavior on the manufacturer's part when it comes to safety-critical updates, or even updates that might conceivably impact safety of life. Which, in the airplane business, is basically all of them.


You make excellent points, and I think the unfortunate answer is regulation will be required.


do you want faster updates or a machine that doesn't inadvertently kill you? because for life-critical systems you really can't do both


Autopilot can automatically record all incidents and then upload them to Tesla, with obfuscation and with user permission, to improve the autopilot software. A tight feedback loop is much better for AI, IMHO.


Having the NTSB certify updates isn't going to increase safety. The accepted approach to safety, as set out in ISO 26262, largely focuses on the processes via which the software and hardware are designed, created, and modified. The reason you wouldn't get regressions if you were interested in FuSa is that you'd have a process in place to ensure that software can't be distributed without being tested, and a process to ensure that fixed bugs are included in the test suite.

It's quite clear in this case that Tesla doesn't have an organisational structure that fulfills the requirements for functional safety.


> Having the NTSB certify updates isn't going to increase safety.

It will if they run it through regression tests that Tesla doesn't seem to have the discipline to.


I think Tesla eventually has a catastrophic accident and is sued and/or criminally prosecuted into oblivion. I feel sorry for the people who are going to have to die for this to happen.

This trope that humans are bad drivers is, in general, crap. Humans are very good drivers. The US has 7.3 deaths per billion km driven. This means if you drive 50km a day, every day, you are (essentially) guaranteed to die... after 7500 YEARS. You have less than a 1 in 10 million chance of dying on any given trip you take. That is NOT risky, and is NOT dangerous.
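The figures above check out (same inputs as stated in the paragraph):

```python
# Sanity-checking the "7500 years" claim with the comment's own numbers.
deaths_per_billion_km = 7.3
km_per_death = 1e9 / deaths_per_billion_km   # ~137 million km per fatality
km_per_year = 50 * 365                       # driving 50 km every day
years_to_expected_death = km_per_death / km_per_year
print(round(years_to_expected_death))        # ~7500 years
```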


> The US has 7.3 deaths per billion km driven.

This is a meaningless measurement. Segment those km by where they occur. Most of them are either:

1. "cruise-control compatible" kilometers—e.g. freeway straightaways—where you're surrounded by cars but they're all going the same speed in a straight line, and all you need to do to be safe is to go the same speed in a straight line as well.

2. "closed-course" kilometers—e.g. most rural roads, and most suburban roads any time other than rush-hour—where the road may curve and have intersections and such, but at any given time of day, it's probable that there aren't any other cars (or even pedestrians) on the road for you to collide with, no matter how bad your driving is. (Think "roads you'd have a teenager practice driving on." These roads are good for practice because there are effectively no accidents to get into.)

3. (a smaller segment, but still relevant because of the number of freight kilometers driven here:) "empty" kilometers—this has all the properties of segment 2, but also, the road is at grade, and there's nothing abutting the road (i.e. the road isn't a street), so even if you veer off the road, you're unlikely to hit anything. (Examples: the Nevada desert; Saskatchewan; most farmland.)

People point out that the safety-per-km stats for airplanes are a nonsense measurement, because what little crashing that airplanes do tends to mostly occur during the first and last 50km of the planned flight-path—so short flights are just as dangerous as long flights.

Well, the same goes for car accidents. Subtract all the "trivial" driving that humans and AIs can both do by just doing... nothing much at all, with no obstacles/hazards to evaluate, let alone react to. The kilometers that are left (freeway merging; city driving; suburban streets during rush-hour; parking in parking lots) are a lot more crash-y, and are the place where both human and AI competence is questionable.


I feel sorry for the people who are going to have to die for this to happen.

I try to explain this to friends who are far too optimistic on self-crashing car technology. Self-driving cars (SDC) ultimately trade one class of problems that result in death (human-attention deficit) for another class of growing issues (sensor malfunctions/incapabilities, software defects).

Ultimately SDC deaths end up as bugs/features on some random devteam's backlog, and I have no desire to have a JIRA ticket named in my honor.

In my opinion, by the time all the money and effort is spent making SDC's capable of successfully driving from point-A to point-B in the near infinite possible conditions they could encounter, it would have been 50x cheaper to simply build a fully modernized high-speed rail network over existing highways and roads.


> Self-driving cars (SDC) ultimately trade one class of problems that result in death (human-attention deficit) for another class of growing issues (sensor malfunctions/incapabilities, software defects).

One big difference between those classes is what you can do after an accident. With both, you can investigate why it happened and then make recommended changes to prevent it or reduce its chances in the future.

But with driver attention problems, such as drunk driving or driving while texting, it is easy for those recommendations to be ignored. We've been telling people not to drive drunk, not to text while driving, and so on for probably a century in the case of drunk driving, and for as long as texting has existed for texting...and people still do them frequently.

With a software defect, it is a lot easier to make sure that the fix actually gets deployed. Make it part of the annual registration renewal for cars that all safety updates have been applied to their self-driving systems.


You are saying that the cost to develop successful self-driving cars is 1-50 quadrillion dollars using an optimistic estimate of rail costs. That does not seem reasonable. Perhaps 200 billion across all companies have been invested in self-driving cars so far (I.e., Waymo is just a fraction of that.)


Agree, SWAG was heavily W. Updated. I still think we're 50 years off from near-flawless SDC's though, assuming current LOE.


> it would have been 50x cheaper to simply build a fully modernized high-speed rail network

Where do you get that number from?


> Ultimately SDC deaths end up as bugs/features on some random devteam's backlog, and I have no desire to have a JIRA ticket named in my honor.

Instead you can be the victim in a vehicular manslaughter case due to DUI/texting/etc.

As stated elsewhere on this thread, self-driving needs to be measurably better than a human. My state displays how many people have been killed thus far this year on the automatic traffic signs (used for amber alert, traffic info, etc as well). A 20% reduction in the 3000+ folks killed in 2018 would mean a whole lot for those saved.


You're looking at it strictly from a safety perspective which is a small part of the whole picture. If we looked at transportation from a safety perspective throughout history, we'd still be walking everywhere. What's the risk of crashing and dying while walking? Zero; it's fully optimized for safety. We rode horses and now drive cars to save time. Driving to work is unquestionably a better experience than riding a horse into work every day, but it could be better- we could be sleeping on our way into the office, or starting our work day as we leave our driveway, or eating a good breakfast and finishing our makeup (while not endangering everyone else).

Having commuting time available as what amounts to "free time" is an insane boost to daily life. A quick google says Americans spend 12.2 days per year in their cars. If 300 million people can save 12 days' time per year, you're freeing up ten million years of time every year.

I'm not advocating for throwing everyone in self driving cars untested and who cares how many people die, but if, on the journey to saving many millennia of time every year, a person is killed, why should the company be sued into oblivion? If we're going to sue everyone into oblivion whenever anything doesn't go quite right, why would any company ever try to take on difficult problems?
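The "ten million years" figure above checks out as rough arithmetic. A quick sketch, where both inputs are the parent comment's own assumptions rather than measured data:

```python
# Back-of-the-envelope check of the commute-time claim above.
# Both inputs are the parent comment's assumptions, not measured data.
PEOPLE = 300_000_000          # assumed number of people commuting by car
DAYS_IN_CAR_PER_YEAR = 12.2   # days per person per year (the quoted figure)

total_days = PEOPLE * DAYS_IN_CAR_PER_YEAR
total_person_years = total_days / 365.25

print(f"{total_person_years:,.0f} person-years freed per year")
```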


"What's the risk of crashing and dying while walking? Zero; it's fully optimized for safety."

Huh? Have you never seen or heard of anyone falling, hitting their head, and causing a concussion and/or death?


Cars have gotten so safe, though. There are all these measures put in place to ensure your survival _in case of a crash_. Wouldn't it be more relevant to calculate the chances of getting injured / ending up in a crash at all?

I mean, just being caught up in a traffic accident, even with no bodily harm done to you, can be a traumatic event.


You're not wrong. There are plenty of accidents caused by inattention and poor driving that aren't fatal, but that do result in injuries... a quick search didn't give me much information but crashes with injuries appear to be about 100x more common than fatal crashes. This doesn't take into account how serious the injuries are, however.

So if we assume that serious injuries are 10x more common than fatalities, that raises the odds to one incident per 750 years, or 1 incident per 75 years of any accident at all. You're still talking about things that happen to people once or twice in their entire lives. Of course there will be outliers (I've been in 5 accidents myself, only 1 with injuries) but the odds of being in a crash don't seem to be worth the risk of trying to automate mousetraps... we should concentrate on replacing car-based transportation instead of trying to use magical black boxes to make it theoretically safer.


As an aside, as of 2019, you are more likely to die of an opiate overdose in the US than as a result of a car accident.


Yes, this is a very important distinction. Humans have gotten pretty good at not getting killed in car accidents, mostly due to increasing safety engineering and assistance features in cars. Humans are not very good at avoiding collisions entirely. In 2016 there were 6.3 million motor vehicle accidents reported to police. And NHTSA estimates about 10 million accidents go unreported each year (mostly minor fender benders and people damaging their own vehicles on fixed objects). To me, that's pretty clear evidence that humans are poor drivers.


> Humans are not very good at avoiding collisions entirely.

So far, neither are Teslas... And the point of this reddit thread is that any progress Tesla does make on this front can be instantly reverted in a future update.


> This trope that humans are bad drivers

Humans are bad in general. At least self-driving cars will not be texting, using their phones while driving, or getting involved in road rage. The list is really long.

"Nearly 1.25 million people die in road crashes each year, on average 3,287 deaths a day. An additional 20-50 million are injured or disabled. More than half of all road traffic deaths occur among young adults ages 15-44"

https://www.asirt.org/safe-travel/road-safety-facts/


Certainly we can do better than this, though? Your bar for "very good driver" is pretty low. I'm sure everyone has at least one acquaintance who has died in a car crash. I can count 3 who have died in car crashes and 3 others who have been seriously injured. Humans aren't going to get any better at driving. Something has got to change.


30k deaths a year in US, 50 years of driving. That is 1.5M deaths over your lifetime. Let’s say there are 690M drivers throughout your lifetime. You have a quarter of a percent chance of driving being your cause of death.

Also, 2.45% of deaths worldwide are from road accidents: https://ourworldindata.org/grapher/share-of-deaths-by-cause-...
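The "quarter of a percent" claim follows from the comment's own rough inputs; a quick check:

```python
# Sanity check of the lifetime-risk arithmetic above, using the
# comment's own rough inputs (not official statistics).
deaths_per_year = 30_000
years_of_driving = 50
drivers_over_lifetime = 690_000_000

lifetime_deaths = deaths_per_year * years_of_driving  # 1.5 million
pct = 100 * lifetime_deaths / drivers_over_lifetime

print(f"{pct:.2f}% chance of driving being your cause of death")
```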


yet the leading cause of death in youths is automobile accidents:

https://upload.wikimedia.org/wikipedia/commons/a/a5/Causes_o...


Well, we don't need self driving cars to fix that, just a way to increase the fatalities from cancer, heart disease, and Alzheimers in the young.


I hope not. Yes, there will be deaths; nothing is perfect, but I will just consider them an acceptable loss.


Scary... at these robotic-car companies, software developers' mistakes/bugs aren't just going to bring down a business application (lose money), but kill their customers and innocent drivers.

Progress to where it’s safer is going to be a killer and we the drivers on the road are unwilling guinea pigs to billionaires’ dreams/goals.


Willing guinea pig here. I'm going to do my best not to die, but I'm excited about this tech and willing to deal with the drawbacks of being an early adopter.


It's great that you're willing to die for Tesla's profit, but you realize that autonomous vehicle-induced crashes affect everyone on the road? Even someone who doesn't consent to Tesla's TOS is still sharing the road with potentially dangerous software.

Granted, individual drivers are awful enough that it probably doesn't make that huge a difference in danger. But would you still feel the same way if a family member was killed in an accident where their human-controlled car was rear-ended by a Tesla?


I don't think he said he's willing to die. He said he's willing to test it, and presumably he'll continue concentrating and will manually take over if the car does something dangerous.


He came fairly close, honestly. "I'm going to do my best not to die, but I'm excited about this tech" is recognizing, and accepting that death is a possibility as a result of Tesla's self driving process.


There's risk of dying every time I get behind the wheel of any car. There's no benefit to pretending real risk doesn't exist in any given scenario. I'm confident that I can be attentive and cautious enough using this tech to keep the risk similar to what it would be just driving normally.


That's assuming he's always able to intervene before it kills him. The argument is that he may not always be able to, or may not be able to prevent AP from behaving erratically enough to kill someone else (even while he saves his own life).


Everyone puts their lives in the hands of other machines daily, for instance brakes or automated elevators or medical devices; driving is an inherently dangerous activity. Computer or human won't change that. If we delay computers getting better than humans, we will just have status quo which is 30k or more road deaths yearly in the US.


Elevators and medical devices go through extensive testing and certification processes before they ever go near being put into service. And when they are updated, they again go through extensive testing and certification.

Teslas, on the other hand, apparently change their handling and driving profile overnight at the whim of the software engineers at Tesla, without even telling the drivers, and introducing bugs like the OP that are liable to get someone killed.

They are not the same, and comparing them only highlights the issues that Tesla has around their OTA update practice.


Insane. Commits like this into production need to be government-regulated and scrutinized.


> apparently change their handling and driving profile overnight at the whim of the software engineers at Tesla, without even telling the drivers

OTA updates only happen after being confirmed by the user. Where did you hear that they happen without user intervention?


According to the linked Reddit poster: “Tesla's only release notes for this release were DOG MODE and SENTRY MODE. They don't tell you there is a massive change to AP and to reset your expectations.”


What part of this behavior do you think is covered by:

* Improved DOG MODE

* Improved SENTRY MODE

Which were the release notes for the update?


You're right. The owner/driver has to approve the update, if I recall. But putting responsibility to review and understand release notes on the car's owner seems kind of absurd. And that's assuming that you have accurate and descriptive release notes, which was most certainly not the case for the described instance. In any case, clicking "update" is such a rote behavior for users on computers, phones, and now their Teslas, I'd argue that it's effectively no different than an update happening without notice or user intervention.

There's only so much you can learn from even the best release notes, period. The ever-so-common "bug fixes," for example, is so broad that it effectively means nothing at all. At best, it's telling the end user "this little update just changes some stuff hidden under the hood. You won't notice anything, so don't give it any thought."


Disclosure seems like a red herring, if there is no real choice other than to accept the update. If I get an update to my car that says "this may cause your car to explode at random times", and I don't want to scrap it, the only thing I can do is look around and see if other people are ignoring the warning, and then rationalize that it won't happen to me.

You can't ever look at consent outside of the context of the best available alternative to agreeing to something.


But they agreed to the terms of service, what is everyone complaining about?


On the other hand, if enough people die because a company rushed self driving to market before it's ready there's a very real chance of knee jerk regulation setting the technology back even further.


> But would you still feel the same way if a family member was killed in an accident where their human-controlled car was rear-ended by a Tesla?

I don't have a dog in this fight, but appeals to emotion in order to drive irrational thinking do not make for constructive debate.


> appeals to emotion in order to drive irrational thinking do not make for constructive debate.

In a perfect world, sure. Real world, you will never have an inherently emotional situation (road safety) where the only voices heard are those of completely detached individuals.

As humans, we have to figure out ways to connect with them, and empathize with what they're feeling. Simply dismissing their concerns as driven by emotion isn't a winning strategy.


Understanding the emotional reactions of people to situations is important. "How would you feel if"-type statements do not do that. They do, and are often intended to, shut down conversation instead of foster it.


I disagree.

The gp said they're a "Willing guinea pig".

The parent pointed out it isn't just their lives on the line, but others, potentially including their family.

That isn't irrational at all. Or an appeal to emotion.


> we the drivers on the road are unwilling guinea pigs to billionaires’ dreams/goals.

We'll adapt, i.e. adjust our behavior to account for that new factor. Police, for example, have already learned how to pretty safely stop a Tesla on Autopilot with a driver sleeping behind the wheel and not reacting to any signals (because they're dead drunk, for example).


I like comments where you can't tell if the author is defending something, or absolutely condemning it.


Right now 30,000-40,000 people die in car crashes annually: https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in.... And those are just fatalities: hundreds of thousands more are injured. How do we deal with those presently?

We just accept that amateurs should be hauling around at high speeds in several thousand pounds of missile.

The most relevant question is whether Tesla AP is safer or less safe than typical amateur drivers per 1,000 vehicle miles driven. I don't know the answer to that question.


Unwilling? Don't buy a Tesla. Don't believe the hype. It's that simple. Granted, there is nothing stopping any ECU from killing you, but I'd trust a company like Honda way before Tesla.


"Unwilling? Don't buy a Tesla."

That doesn't protect me from being killed by a Tesla. I am pretty neutral on the topic, but I am getting the feeling that they are in danger of pushing out half-baked stuff like we tend to do in software. For most software this is OK, but maybe not for things that are moving at high speeds.


Reminder that it has been and still is the norm for the last half century for 30k people to die in car accidents each year. Many more injured and disabled.

Better yet, texting while driving increases the risk of an accident while driving 23x


Tesla crashes can injure people who don't drive a Tesla themselves.


So can Honda crashes.


Honda isn't experimenting on the public at large with unproven technology. One of their suppliers did, and ended up bankrupt as a result.


I generally have zero interest in cars and don't follow the new models, but my impression from articles I've seen in the last years is that Volvo is actually a top contender for driver assistant systems (when you don't fool yourself into thinking you have an autopilot, but you really want sensible safety augmentation features).

Is that impression accurate?


GM cruise is the best based on this article, when you take "keeping driver engaged" as one of the criteria: https://www.consumerreports.org/autonomous-driving/cadillac-...

Volvo's tech is last among the ones compared.


I have no idea. Volvo certainly has the culture to do something good out of it. But do they have the money and resources required? Today they are owned by Chinese Geely. I don't know what partnerships and capital they can work with to compete with the top contenders (who I assume have Silicon Valley capital behind them)


Geely has invested a lot into Volvo[1], and Volvo are innovating in interesting ways[2]. I would choose the electric Polestar 2 over a Tesla in the same price bracket due to Volvo's culture of safety. Hopefully the cheaper versions will be released soon.

1. https://www.bloomberg.com/news/features/2018-05-24/volvo-is-...

2. https://arstechnica.com/cars/2019/02/volvo-spinoff-polestar-...


Waymo and Cadillac Super Cruise are generally considered the market leaders.


I wonder how unit tests work with NNs (or if they're even a relevant concept at all).

You could replay some test video frames and make sure the objects are correctly identified, but I suppose that's already what training is about...

If an issue like that resurfaces, does it mean that the original frames leading to the 2018 accident aren't part of the training (or at least frames from someone driving in that kind of scenario)?
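For what it's worth, the common compromise is a frozen regression suite: frames from past incidents are held out of training, and every new model build must classify them correctly before release. A toy sketch of that gate, where `stub_predict` and the frame fixtures are hypothetical stand-ins for a real perception pipeline:

```python
# Frozen regression suite: (frame, label the model must produce).
# These fixtures are invented for illustration only.
REGRESSION_CASES = [
    ({"scene": "gore_point_barrier"}, "barrier"),
    ({"scene": "crossing_truck"}, "truck"),
]

def failing_cases(predict):
    """Run the suite against a model's predict function; return failures."""
    return [(frame, want) for frame, want in REGRESSION_CASES
            if predict(frame) != want]

# A stub model that misses barriers, to show the gate catching a regression.
def stub_predict(frame):
    return "truck" if frame["scene"] == "crossing_truck" else "unknown"

failures = failing_cases(stub_predict)
if failures:
    print(f"{len(failures)} regression(s); block the release")
else:
    print("suite passed")
```

The point is that the gate is independent of training: even if the incident frames also leak into the training set, the release check still has to pass on them.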


Yes, the problem is indeed that there's no real way to "look into" a neural network and understand how it has been trained. All you can do is observe that the given inputs generate the desired outputs.

Even if there was training based on the 2018 frames, that doesn't mean that you have verifiably fixed the problem. It's difficult to train a neural network selectively – every time you "train" the network with additional data, you are increasing the chance that you are also teaching it something you didn't intend which then can have a side-effect in some seemingly unrelated scenario.

You can see this in real life with image recognition networks. Teach them too much and they gradually become less effective at identifying anything.


The frames from the 2018 crash should not be part of the training set. They should be part of the test set, to prove that whatever training they do works.


Well, testing in the strict sense would be measuring accuracy on data not in the training set, and also accuracy on the training-set data (which isn't guaranteed to be 100%, I believe).


You don't unit test a NN. You can unit test certain functions, but fundamentally this is an integration test.

This is why serious scientific training is needed to understand these complex systems when health and safety are on the line.


"Autonomous vehicle integration test track" could make for a great setting in a spy thriller. The villain could own the megacorporation which makes the cars, and the heroes could find evidence of their evil plot among the sprawling acres of labs and potemkin streets. But then, in the distance, the sound of revving engines...

Seriously though, I wonder if that sort of physical test track will become popular. You would load your build onto an idle car, queue it up, and make sure that it didn't hit any of the silhouettes which spring up, unusual traffic and weather conditions, etc. They must already do that in some capacity, right?


> "Autonomous vehicle integration test track"

Just say "the 101", it's shorter.


Should be using integration tests, not unit tests. I.e. SITL simulations + having the car drive around test circuits / scenarios.


My first thought was, well someone is in store for a revamp of their regression test suite.


Unit tests are not sufficient for neural networks. Say the network takes 2D images of a typical 224x224 pixels with 3 channels (RGB) of 8-bit values; this input space has 256^(224x224x3) = 3.5×10^362507 possibilities. Billions of years to test them all. This is before we consider stereo vision, 3D images, and state over time. How do we know which subset of these inputs is necessary to give reasonable coverage? Right now I don't think we have very good answers to these things. Of course one can always add a regression test (with some K mutations) when someone crashes. It is better than nothing, but hardly good assurance that something like this will never happen again.

The entire area of safety and quality assurance with neural networks is still under active research, from multiple angles. For some examples of how chaotic neural networks can be, look up 'adversarial examples'.
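That 3.5×10^362507 figure is easy to verify with logarithms (materializing the integer itself would take ~360,000 digits):

```python
import math

# Input space of one 224x224 RGB frame with 8-bit channels:
# 256 ** (224*224*3). Work in log10 to avoid the huge integer.
log10_count = (224 * 224 * 3) * math.log10(256)
exponent = int(log10_count)                # 362507
mantissa = 10 ** (log10_count - exponent)  # ~3.5

print(f"~{mantissa:.1f}e{exponent} possible input frames")
```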


I have always wondered... Does the AP have some higher-level notion of object permanence, continuity (road behind a horizon or after a curve), and things like that? Does it track a pedestrian that is momentarily hidden behind an obstacle and will probably reemerge in a second or two on the other side? Does it expect that kids may run after that ball that just flew from behind a car? Does it continuously track and improve its classification of all objects in the field of vision, with their trajectories and speeds if they are moving? Personally I don't think it does; otherwise it would not erratically slam into clearly visible and marked large objects in its way, and it would be aware of a truck moving in a perpendicular direction, and so on. I am of the opinion that without such higher-level awareness it can never succeed. I hope to learn about the state of the self-driving art.


In these videos, "Autopilot" is mentioned as the culprit, which seems to be a subset of the features Tesla has.

It seems to me there are 3 layers:

- 0: Just Adaptive Cruise Control (use radar to adjust speed up to a max). Human still steers.

- 1: Adaptive Cruise Control (use radar to adjust speed up to a max) + Autosteer (cameras watch the lane markers and follow them). This is referred to as "Autopilot"

NOTE: This is where the accidents happen. The car isn't driving towards a barrier, it's following the lane markers and hits an error state. This is also NOT self driving.

- 2: "Nav on Autopilot". This is an additional function you turn on where the car has more intelligent (use this word loosely) capabilities on highways. The car will still do everything on level 1 combined with lane changes (using cameras to detect objects and trajectories differentiating cars from trucks from pedestrians from bikes from motorcycles etc). It will still follow lane lines, but with a lot of additional information (is there an object? am I merging? am I exiting? etc)

- 3: "Full Self Driving". This is an additional package that doesn't exist to the public. Internally I'm sure they're testing the functionality but this is using all of the sensors and algorithms and likely neural networks to decide what to do. A cool point though is that all Tesla's are likely running this code in "shadow mode" where data can be collected and assumptions can be tested without endangering any actual drivers. (see here for some cool data on this: https://electrek.co/2019/03/05/tesla-autopilot-detects-stop-...). "Hey, I think the car, if full self driving SHOULD take action X. *compare to what the driver actually does and log the data" over BILLIONS of miles

So when a Tesla hits the barrier or gets in an accident, we're actually running "#1" and people start freaking out. But when we get to the capabilities of #3 a lot of the "object permanence and continuity" stuff starts to come into play.

Full Disclosure:

- I drive a Model 3 every day, on Autopilot 75% of the time.

- I'm only 75% convinced full self-driving is achievable under the current Tesla software suite, but I bought the package anyway.


I don't think these neural networks are wired to "remember" things. In theory, they could be hooked up that way, but your typical convolutional neural network is looking at things frame by frame.

In theory, ANNs could have an output layer that passes data from one frame to another frame to assist things. But there's no real programming to "hardcode" something like object permanence into an ANN. You pretty much throw a bunch of data into the system and hope for the best.


NNs are just the first step in the pipeline. Their outputs (detected objects, segmentation, etc) will be piped into other software that builds higher level models.

Considering the path-planning requirements, I would be absolutely shocked if Autopilot wasn't building history models and estimated paths for the objects around the vehicle (other cars, etc.).


Agreed; I imagine they use neural networks to detect and classify objects which are then saved into a scene-graph for use in pathing.

I expect what happened was that they trained their NNs for improved detection in one area but unknowingly reduced it in another. Perhaps now it can detect tricycles 99% of the time, but road-barrier detection went down to only 30%. Having worked with NNs, it's very common to see gains in one domain come at the cost of reduced performance in another.


They must have known. I haven't worked with NNs for a few years, but I don't believe the methodology has changed to the point where you would stop testing over a variety of held-out data sets.

Barriers are a pretty big part of driving on roads and highways, and the only reason detection would have been unknowingly reduced is if they just weren't testing the NN against data containing them.


There are architectures that use CNNs on the image inputs, and LSTMs across frames of the video to maintain memory.

https://arxiv.org/pdf/1609.06377.pdf
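The core idea is easy to show even without a real LSTM: a recurrent cell mixes each frame's features with a hidden state carried over from earlier frames, so a briefly occluded object doesn't vanish from the model's state. A toy Elman-style update (scalar features and arbitrary weights, purely for illustration):

```python
import math

def step(feature, hidden, w_in=0.5, w_rec=0.9):
    # New hidden state blends the current frame with the previous state.
    return math.tanh(w_in * feature + w_rec * hidden)

# A pedestrian-detection signal that drops out for two frames (occlusion).
frames = [1.0, 1.0, 0.0, 0.0, 1.0]

hidden = 0.0
states = []
for f in frames:
    hidden = step(f, hidden)
    states.append(hidden)

# During the occluded frames the state decays but stays well above zero:
# the cell still "remembers" the pedestrian, unlike a per-frame CNN.
print([round(s, 3) for s in states])
```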


Thanks (to all in this subthread). I've skimmed through this doc, I have watched https://vimeo.com/274274744 linked from the original reddit discussion, and I have lost all remaining faith in AP as it is today. This should be a closed alpha, not anywhere near paying customers, and not marketed as FSD-ready/feature-complete, as it's none of that and won't be for many years. I expect there will be at least a generation of Tesla cars sold with the FSD-readiness package that will never see FSD in their lifetime.


