Here's a good example - a Tesla on autopilot crashing into a temporary road barrier which required a lane change. This is a view from the dashcam of the vehicle behind the Tesla. At 00:21, things look normal. At 00:22, the Tesla should just be starting to turn to follow the lane and avoid the barrier, but it isn't. By 00:23, it's hit the wall. By the time the driver could have detected that failure, it was too late.
Big, solid, obvious orange obstacle. Freeway on a clear day. Tesla's system didn't detect it. By the time it was clear that the driver needed to take over, it was too late. This is why, as the head of Google's self driving effort once said, partial self driving "assistance" is inherently unsafe. Lane following assistance without good automatic braking kills.
This is the Tesla self-crashing car in action. Tesla fails at the basic task of self-driving - not hitting obstacles. If it doesn't look like the rear end of a car or truck, it gets hit. So far, one street sweeper, one fire truck, one disabled car, one crossing tractor trailer, and two freeway barriers have been hit. Those are the ones that got press attention. There are probably more incidents.
Automatic driving the Waymo way seems to be working. Automatic driving the Tesla way leaves a trail of blood and death. That is not an accident. It follows directly from Musk's decision to cut costs by trying to do the job with inadequate sensors and processing.
I have been thinking that too. At what point do you decide that the autopilot is making a mistake and take over? That's an almost impossible task to perform within the available time.
I can understand why this wouldn't happen from a business perspective, and it's also presumably not as simple to implement as I'm implying, but I can't think of a better way to get around the uncertainty of whether the car's operating in an expected way or not.
The "confidence meter" would almost certainly be 100% right up until it crashes into an obvious obstacle
Yeah, that's basically what I'm asking for. Maybe a warning light at a certain threshold would be a better default, though personally I'd still find a number more trustworthy.
> The "confidence meter" would almost certainly be 100% right up until it crashes into an obvious obstacle
If image recognition algorithms have associated confidence levels, I'd be surprised if something more complicated like road navigation was 100% certain all the time.
Unless you design the network to specifically answer "how similar is this?" and pair it with a training set and the output of another executing neural net, a "confidence meter" isn't possible.
This is one of the trickier bits to wrap one's mind around with neural networks. The systems we train for specialized tasks have no concept of 'confidence' or being wrong. There is no meta-awareness that judges how well a task is being performed in real-time outside of the training environment.
Humans don't suffer from this issue (as much) due to the mind-bogglingly large and complex neural nets we have for dealing with the world. Every time you set yourself to practicing a task, you are putting a previously trained neural network through further training and evolution. You can recognize when you are doing it because the process takes conscious effort. You are not 'just doing <the task>', you are 'doing <the task>, and comparing your performance against an ideation or metric of how you WANT <the task> to be performed'. This process is basically your pre-frontal cortex and occipital lobe, for instance, tuning your hippocampus and your sensory and motor cortex to hone the perfect tennis swing.
When we train visual neural networks, we're talking levels of intelligence supplied by the occipital lobe and hippocampus alone. Imagine, every time you hear about a neural net, that a human was lobotomized until they could perform ONLY that task with any reliability.
Kinda changes the comfort level with letting one of these take the wheel, doesn't it?
Neural nets are REALLY neat. Don't get me wrong. I love them. Unfortunately, the real world is also INCREDIBLY hard to safely operate in. Many little things that humans 'just do' are the result of our own brains 'hacking' together older neural nets with a higher-level one.
Calibrating the probabilities of machine learning algorithms is an old problem. By nature of discriminative algorithms and increasing model capacities, yes, training typically pushes outputs to one extreme. But a ton of information is still maintained, and it can be properly calibrated for downstream ingestion; anyone actually trying to integrate these models into real applications should be doing that calibration.
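To make that concrete, here is a toy sketch of one standard post-hoc calibration trick, temperature scaling: fit a single scalar T on held-out data so that softmax(logits / T) matches reality better. Everything below (the names, the fake logits) is purely illustrative, not anything from a real driving stack:

    # Toy temperature scaling: rescale logits by a scalar T fit on held-out
    # data so that softmax confidences better match empirical accuracy.
    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)  # for numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def nll(T, logits, labels):
        # Negative log-likelihood of held-out labels at temperature T.
        p = softmax(logits / T)
        return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

    def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
        # Grid-search the T that minimizes held-out NLL.
        return min(grid, key=lambda T: nll(T, logits, labels))

    # Fake held-out set: very peaked logits only loosely tied to the truth.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 3, size=1000)
    logits = rng.normal(size=(1000, 3)) * 8.0
    logits[np.arange(1000), labels] += 2.0

    T = fit_temperature(logits, labels)
    print("fitted T:", T)  # T > 1 means raw confidences were too extreme

A fitted T well above 1 is exactly the "outputs pushed to one extreme" effect being undone after the fact.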
If, for instance, it failed to detect an obstacle, how could it tell you that it failed to detect an obstacle? Or that it got the road markings wrong?
Given that the consequences of making a mistake might be people dying, I certainly hope that's not how it works under the hood. Anything less than "I'm absolutely sure that I'm not currently speeding towards an obstacle" should automatically trigger a safety system that causes the autopilot to disengage and give control back to the driver, maybe engaging the emergency lights to let the other drivers know that something is amiss.
I wouldn't be surprised if in the moments preceding these crashes the autopilot algorithm was completely sure it was doing the right thing.
Of course these various crashes show that in fact the algorithm makes mistakes sometimes, but I seriously doubt that it's aware it's making them; it just messes up while it thinks everything is going as planned.
Personally, having tried Tesla AP1, it was painfully clear I can never fully trust any assisted-driving aid; they can fail at any point and I need to be ready to take over immediately. Having to wrestle the steering wheel from AP1 as it slightly resists is, on its own, reason enough to stop using it.
I think I'd find it a simpler metric to follow than the more qualitative information about car behaviour, as well as more useful to decide whether the car or I should be in control.
You're quite right about timing, I have no idea whether this metric would plummet quickly or gracefully drop so the driver would have time to take charge once a threshold was hit.
I want to stress that this is an underbaked idea and really just a thought experiment about what would make me personally feel more comfortable with driving a semi-autonomous vehicle.
This should be a law.
#1 job of any self driving system is to avoid obstacles. If you can’t do that reliably, it’s a fail!
It's just not good enough. Semi-autonomous cars can't be hitting stationary objects without reacting at all. If they fail at this basic task, then they shouldn't be allowed on the road.
You probably have 100ms to begin your action, 500ms to complete it.
Good luck with that if you're not perfectly alert.
The Queensland government estimates it takes drivers 1.5 seconds to react to an emergency (I wouldn't be surprised if that was a 99th-percentile figure, of course).
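Just to put numbers on what that reaction time means (the speeds below are my assumptions, not the Queensland figures):

    # Distance covered before the driver even begins to correct.
    for kmh in (60, 100, 110):
        v = kmh / 3.6                  # km/h -> m/s
        for t_react in (0.5, 1.5):     # very alert vs. the 1.5 s estimate
            print(f"{kmh} km/h, {t_react} s reaction: {v * t_react:.0f} m")

At 110 km/h with a 1.5 s reaction, that's roughly 45 m traveled before anything at all happens at the wheel.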
The only thing we can blame the victim for here is foolishly trusting Tesla's apparently crap software.
If you have to be constantly at this level of alertness, what is even the point of having an autopilot?
Your cognitive load is lowered by Autopilot enormously. You can pay better attention to what’s going on around you and anticipate issues (say, a car that drivers weirdly in front of you) because you’re not busy with the low-level driving.
Part of that is that you see a difficult road stretch coming up and disengage the assist IN ADVANCE - because you’re not a complete moron: you know the lane assist is for driving in highway lanes, and the thing you’re approaching is decidedly not one.
If you over-automate something you end up having to spend more time supervising and fixing it than if you'd just done the thing yourself in the first place.
Auto-pilot will make driving a lot more like flying is for pilots: 99% extremely dull, 1% terrifying.
With a funny feeling like that, I’m ready to disengage immediately, and then 1 s is plenty of time. But I wouldn’t use Autopilot near an obstacle like that to begin with - anticipate, take over for the weird stretches.
Regardless, you're saying that if you're not paying attention every single second of the trip, instantly ready to react to even the slightest deviation from your intended trajectory, then you're doing it wrong.
What, exactly, is the point of an auto-pilot system if you need to be in that state of absolute alertness?
Also, unless the system is certified on certain roads with clear markers, drivers should be required to keep their hands on the wheel at all times.
This is a great point. With regular cars, drivers can reliably predict where the car will be after 5 seconds. However, with Tesla, there is really no way to know what the auto-driver will do, and so the driver has to stay even more attentive to ensure that it does not do something wrong. It's like driving a car that is not under your control, and may go in random directions.
Of course, if users get too used to such differences then those can become 'features', which only makes progress harder.
That means the autopilot doesn't help at all since you would have to basically drive manually all the time anyway.
I would think it'd be seen as a "distraction", those suction cup mounts for example are illegal in a lot of states.
The purpose of the autopilot is to increase safety for drivers who are not attentive. However, it turns out that Tesla requires drivers to be even more attentive than regular vehicles do. That is not progress.
We can differ in opinions, but I strongly feel that the eventual utopia of self-driving cars will be much safer than the current world. And we are making progress on this daily. There will be a few unfortunate incidents, but in the long term a lot more lives will be saved.
Unless you're suggesting that Tesla is justified in actively making people less safe because it helps them develop self-driving technology faster?
In that case they should issue a recall, and not use real users as their test drivers. Once they get to their eventual goal, they can resume selling autopilot cars.
> We can differ in opinions, but I strongly feel that the eventual utopia of self-driving cars will be much safer than the current world. And we are making progress on this daily. There will be a few unfortunate incidents, but in the long term a lot more lives will be saved.
I totally agree with you that in the long term self-driving cars are much safer. What I do not agree with is Tesla's approach towards achieving that goal. They are selling an unsafe product while marketing it as much safer than the existing products. This is just plain fraud.
Sorry, but this is the exact wrong conclusion to make from the available data. Partially automated cars have been driven by average people on the public roads since 2006, when the Mercedes S class got active steering and ACC. There is no data that says that these systems, which are available on cars as affordable as e.g. a VW Polo, lead to more crashes. In fact the only data I have seen suggests that they reduce crashes and driver fatigue significantly.
The issue of such a feature systematically crashing and even killing its users is very specific to Tesla. Tesla's system has had a tendency to run into stationary objects for as long as it has been available to the public. Teslas have a significantly higher rate of causing insurance claims. There are countless videos showing it happening; the two (possibly three) fatalities so far are just the tip of the iceberg. Tesla blamed MobilEye (their previous supplier of camera-based object detection hardware), but it seems fairly evident that Tesla's system has the exact same systemic issue even with their new in-house object detection software.
My opinion as an engineer working on this exact kind of system for one of Tesla's competitors is that Tesla's system just isn't robust enough to be allowed to be used like all other manufacturers' autosteer/ACC systems. IMO that's because their basic approach (doing simple line-following on the lane markings) is super naive. A much more intricate approach is needed, one that does some level of reasoning about what is a driving surface and what isn't. This has been known in the automotive industry for over a decade. Sorry to be so absolute, but frankly it's just unacceptable that Tesla is gambling their customers' lives on such a ridiculously half-baked implementation.
> Lane following assistance without good automatic braking kills.
Correct. It also kills when it lies to the driver about how confident it is in a given situation. Many of the crashes, instances of cars veering off into oncoming lanes, etc. we have seen would not have happened if Tesla fell back to the driver in sketchy situations. Instead, it does not even notify the driver that the system is hanging on by the barest of threads. This works out fine quite often, and makes it seem like the car can detect the lanes successfully even in sketchy situations. Yet once the system (and the driver) runs out of luck, there is absolutely no chance to rectify the situation.
What the GP was referring to, and what I've seen it referred to as in Google presentations, is level 3 automation. That is what Autopilot is supposed to be: you can take your hands off the wheel and the car can drive itself, but it needs constant supervision.
What you're referring to is passive assistance, such as collision detection, which doesn't drive the car but provides alerts. There is nothing wrong with this technology; it works because the driver has to drive as they normally would, but it provides assistance when it spots something the driver doesn't. Here humans do the object detection and the computers stay alert as a backup.
There is a problem when the car drives itself but the driver has to stay alert. This is a problem because the human is much better at object detection and the car is much better at staying alert.
A level 2 system has to be monitored 100% of the time, because it can not be trusted to warn the driver and fall back to manual automatically in every situation. A level 3 system is robust enough to make this guarantee, which means that the driver can take his/her hands off the wheel until prompted to take back control.
Autopilot is a level 2 system, because you can not take your hands off the wheel of a Tesla and expect to not die. In fact the car itself will tell you to put your hands back on the wheel after a short while. After the fatal crashes started getting into the media, Tesla themselves have stated that it's only a level 2 system. Their CEO's weird claims of Autopilot being able to handle 90% of all driving tasks, there being a coast-to-coast Autopilot demonstration run in 2017, etc. are just marketing BS.
However I still disagree with your original comment:
> Sorry, but this is the exact wrong conclusion to make from the available data. Partially automated cars have been driven by average people on the public roads since 2006, when the Mercedes S class got active steering and ACC. There is no data that says that these systems, which are available on cars as affordable as e.g. a VW Polo, lead to more crashes. In fact the only data I have seen suggests that they reduce crashes and driver fatigue significantly.
Firstly, I don't see any examples of the VW Polo claiming to have steering assist; even the most recent 2017 edition only has level 1 features. Steering assist only came to the Mercedes S-Class in 2014.
What I (and the parent comment you initially replied to) was trying to claim was that whereas level 1 features (cruise control, automatic braking, and lane keeping) are great, level 2 (and level 3, according to Google/Waymo) is risky because humans cannot be trusted to be alert and ready to take over.
Tesla can easily fix this: disable Autopilot if the user does not have his hands on the wheel. Disable Autopilot at construction zones. Disable Autopilot sooner when the confidence level drops.
I test drove an “autopilot 2” Tesla a while back. Avoiding hitting trash cans and parked cars was much harder than it would have been if Autopilot had been off. When you’re on a road and the car decides to steer toward an obstacle, you have very little time to correct the car.
Apparently they lose the colour info and process in black and white because it is easier. https://electrek.co/2017/05/16/tesla-autopilot-2-0-can-see/
This seems to have been an issue in some of the crashes and maybe was a mistake. See also for example the trailer an earlier Tesla crashed into https://www.theregister.co.uk/2017/06/20/tesla_death_crash_a...
Kind of obvious in colour, but at the time it was probably a grey shape against a grey sky of similar brightness.
Also the big red fire truck http://autoweek.com/article/technology/ntsb-probe-autopilot-...
If that's true, it looks like a very big mistake to me, considering that we use distinguishable color a lot to signal risks.
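If the grayscale claim is accurate, it's easy to see why it would matter. Under a standard luma conversion (ITU-R BT.601), safety orange and a mid grey can land on almost the same grayscale value. The RGB values below are rough guesses, purely for illustration:

    # Luma per ITU-R BT.601: Y = 0.299 R + 0.587 G + 0.114 B
    def luma(r, g, b):
        return 0.299 * r + 0.587 * g + 0.114 * b

    safety_orange = (255, 121, 0)    # roughly a traffic-barrier orange
    overcast_grey = (147, 147, 147)  # roughly concrete or an overcast sky

    print(luma(*safety_orange))  # ~147.3
    print(luma(*overcast_grey))  # 147.0: nearly identical once colour is gone

The colour contrast that screams at a human eye simply isn't there in the single channel the system is reportedly working with.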
But knowing all that, and now knowing Tesla took that shortcut, there is no way on the face of this earth I am ever driving a Tesla, or, for that matter, driving a car anywhere in the USA that has them on the road.
You guys have a really good train network, right?
About the US train network, it's good for freight but passenger trains are slow.
Tesla's stance to blame the driver without admitting that their hardware is inadequate will back them into a corner in the long term. They cannot transition from highway to city driving without significant changes to the autopilot software and hardware and this means they effectively have to start over from scratch. They have to start working on a serious competitor to Uber or Waymo now otherwise Tesla will be too late to the market.
If memory serves, Waymo's way seems to be summarised by the sentence "driver assist which we can have right now is inherently dangerous so let's go for full autonomy which we can't have for many years still".
A more accurate comparison of the two companies then might be that they are both trying to sell technology that they don't yet have (autonomous driving) and that only one of them is at least trying to develop (Waymo).
However, it should be absolutely clear that autonomous driving is currently only a promise and that this is true for every product being developed.
Honestly the autopilot has changed my driving experience and I hate when I have to drive cars without it.
I do agree that you need to be aware of your surroundings and ready to take over, though.
I often wonder if auto-pilot is truly 100% auto-pilot when it requires you to keep an eye out for its "Hey, maybe I'll screw it up, so please keep a watch on me".
If your hands are on a steering wheel, you’re watching the road, and autopilot begins turning towards an obstacle, you should be aware enough to grip the wheel fully and prevent the incorrect turn.
End of story.
Edit: “Autopilot” is a great brand-name for the technology, but perhaps implies too much self-driving capability. Maybe “assisted steering” is a better term for this.
> End of story.
No, not true at all. If you’re in a car, paying complete attention, and holding the wheel, but keeping your arms loose enough that the car is fully controlling the steering, you somehow need to notice the car’s error and take over in something on the order of a second or perhaps much less. Keep in mind that, on many freeway ramps, there are places where you only miss an obstacle by a couple of feet when driving correctly. If the car suddenly stops following the lane correctly, you have very little time to fix it.
It seems to me that errors of the sort that Autopilot seems to make regularly are very difficult for even attentive drivers to recover from.
Is that supposed to make me feel better? If the car can go from fine to crashing into a barrier in only 6 seconds, that seems like a damnation of Autopilot more so than the driver.
It might not solve all these accidents but I think a HUD showing where the car plans to travel would work better. If you knew the car was aiming off the road before it turned, you could override the controls in time to correct. It still would require lots of attention and fast reaction time (and still may be less safe than manual driving), but it would at least be better than the situation we are in now.
If this was true, wouldn't Tesla vehicles have a higher rate of crashes, all things being equal? Is there any evidence that this is the case?
>the Model S had higher claim frequencies, higher claim severities and higher overall losses than other large luxury cars. Under collision coverage, for example, analysts estimated that the Model S's mileage-adjusted claim frequency was 37 percent higher than the comparison group, claim severity was 64 percent higher, and overall losses were 124 percent higher.
>>Is there any evidence that this is the case?
Porsche also has 3x the accident rate of Daewoo. That doesn't mean Daewoo cars are 3x as safe, it just means that people who are looking for a hot-rod buy a Porsche and not a Daewoo.
This is not the claim in the linked article. The linked article claims that Tesla cars have a higher claim frequency than comparable gasoline-powered cars (i.e. large luxury cars such as a Porsche), whereas for example the Nissan Leaf has a lower claim frequency than comparable gasoline-powered cars (namely the Nissan Versa).
Put another way, if your choice is between a Tesla and a gasoline-powered large luxury car, the Tesla is more dangerous. If your choice is between the Nissan Leaf and the Nissan Versa, the Leaf is less dangerous. There was no comparison between the relative danger from a Tesla and a Leaf.
I'm pointing out that all other things aren't equal, and you can't assume from overall crash data that you can tease out statistics about how safe a specific feature of the car is.
The comparison to the Fiat 500 is relevant because while the report didn't only compare Tesla vehicles to it, that's one of the comparisons.
Is a Tesla less safe than a Fiat 500 given that it's driven by the same sorts of drivers, in similar conditions and just as carefully as a Fiat 500? Maybe, but who knows? We don't have that data, since there's an up-front selection bias when you buy a high-performance luxury car.
I wasn't able to find the raw report mentioned in this article, but here's a similar older report they've published:
There you can see that the claim frequency of Tesla is indeed a bit higher than all the other cars it's compared to, but this doesn't hold when adjusted for claim severity or overall losses. There, cars like the BMW M6 and the Audi RS7 pull ahead of Tesla by far.
So at the very least you'd have to be making the claim that even if this data somehow showed how badly performing Autopilot was, that it was mainly causing things like minor scratches, not severe damage such as crashing into a freeway divider.
Just looking at these numbers there seems, to me anyway, to be a much stronger correlation between lack of safety and whether the buyer is a rich guy undergoing a mid-life crisis than any sort of Autopilot feature.
It isn't clear whether the demographics of Tesla drivers are more reckless than those of other luxury brands or not; as you point out, Porsche drivers might tend to be more interested in going fast than Daewoo drivers. For Tesla, on the one hand you attract people who are interested in helping the environment, who I expect to be more conscientious and maybe therefore better drivers. But on the other hand there is Ludicrous Mode.
But since Tesla could have had a lower or higher crash rate than other brands, and does have a higher rate, we have to update our beliefs in the direction of the car being more likely to crash, by conservation of expected evidence. Unless you'd argue that a lower crash rate would have meant that Tesla's safety features prevent lots of crashes.
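For anyone unfamiliar with conservation of expected evidence: your prior must equal your posterior averaged over the possible observations, so if one outcome would raise your belief, the other outcome must lower it. A toy numeric check (every probability here is invented):

    # Prior belief that a given car model is unusually crash-prone.
    p_h = 0.3
    # Likelihood of seeing a high claim rate, given crash-prone / not.
    p_e_h, p_e_nh = 0.8, 0.4

    p_e = p_e_h * p_h + p_e_nh * (1 - p_h)        # P(high claim rate)
    post_if_e  = p_e_h * p_h / p_e                # posterior if we see it
    post_if_ne = (1 - p_e_h) * p_h / (1 - p_e)    # posterior if we don't

    # Weighted average of the posteriors recovers the prior exactly:
    print(post_if_e * p_e + post_if_ne * (1 - p_e))  # 0.3

So you can't treat a high claim rate as uninformative while insisting a low one would have been evidence of safety; the updates have to balance.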
An example of the inverse of this concept is the roundabout or traffic circle, which has a higher rate of much-lower-severity accidents than traffic lights or stop signs.
- edit - Removed irrelevant edit.
Any references to support that statement?
First, how do you know it was Musk's decision? And second, when you imply that by using lower cost sensors (cameras), Tesla is cutting costs, how do you support that?
Their lower cost of sensors might be offset by having to spend more on software development for image processing, for example. When summed up, the overall cost per car may increase significantly.
At least to me, it is not clear at all whether Tesla's self-driving technology cost per car is less or more than it is e.g. for cars which use lidar sensors.
In any case, self driving cars will have accidents, not least because some accidents are unavoidable (random behaviour of other cars, animals or pedestrians, unexpected slippery roads, etc), but running head-on into a large visible static object should never happen. It’s a bug.
you are the one offering anecdotal experience, jfyi
We take great care in building our cars to save lives. Forty thousand Americans die on the roads each year. That's a statistic. But even a single death of a Tesla driver or passenger is a tragedy. This has affected everyone on our team deeply, and our hearts go out to the family and friends of Walter Huang.
We've recovered data that indicates Autopilot was engaged at the time of the accident. The vehicle drove straight into the barrier. In the five seconds leading up to the crash, neither Autopilot nor the driver took any evasive action.
Our engineers are investigating why the car failed to detect or avoid the obstacle. Any lessons we can take from this tragedy will be deployed across our entire fleet of vehicles. Saving other lives is the best we can hope to take away from an event like this.
In that same spirit, we would like to remind all Tesla drivers that Autopilot is not a fully-autonomous driving system. It's a tool to help attentive drivers avoid accidents that might have otherwise occurred. Just as with autopilots in aviation, while the tool does reduce workload, it's critical to always stay attentive. The car cannot drive itself. It can help, but you have to do your job.
We do realize, however, that a system like Autopilot can lure people into a false sense of security. That's one reason we are hard at work on the problem of fully autonomous driving. It will take a few years, but we look forward to some day making accidents like this a part of history.
This needs far more discussion. I just don't buy it. I don't believe that you can have a car engaged in auto-drive mode and remain attentive. I think our psychology won't allow it. When driving, I find that I must be engaged, and on long trips I don't even enable cruise control, because taking the accelerator input away from me is enough to cause my mind to wander. If I'm not in control of the accelerator and steering while simultaneously focused on threats, including friendly officers attempting to remind me of the speed limit, I space out fairly quickly. In observing how others drive, I don't think I'm alone. It's part of our nature. So then, how is it that you can have a car driving for you while simultaneously being attentive? I believe they are so mutually exclusive as to make it ridiculous to claim that such a thing is possible.
"The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat."
The result of this statement, and the functionality that matches it, is that it creates a reinforced false sense of security.
Does it matter whether the driver of the Model X whose Autopilot drove straight into a center divider had his hands on the wheel, if the outcome of applying Autopilot is that drivers focus less on the road? What is the point of two drivers, one machine and one human? You cannot compare a car's autopilot to an airplane's; they're not even in the same league. How often does a center divider just pop up at 20k ft?
Usually machinery either augments human capabilities by enhancing them, or entirely replaces them. This union, where both driver and car pilot the vehicle, has no point, especially when it's imperfect.
I'm not opposed to Tesla's sale of such functionality, sell whatever you want, but I am opposed to the marketing material selling this in a way that contradicts the legal language required to protect Tesla...
There are risks in everything you do, but don't market a car as having the hardware to do 2x your customers' driving capability and then have your legal material say: btw, don't take your hands off the steering wheel... especially when there's a several-minute video showing exactly that.
Tesla customers must have the ability to make informed choices in the risks they take.
Tesla have sold people that the hardware they buy now will be capable of this in the future, but not now.
First let me state that I agree with this 110%!
I'm not sure if this is what you are getting at but I'm seeing a difference between the engineers exact definition of what the system is, what it does, and how it can be properly marketed to convey that in the most accurate way. I'm also seeing the marketing team saying whatever they can, within their legal limits (I imagine), in order to attract potential customers to this state-of-the-art system and technology within an already state-of-the-art automobile.
If we are both at the same time taking these two statements verbatim, then which one wins out:
> Autopilot is not a fully-autonomous driving system. It's a tool to help attentive drivers avoid accidents that might have otherwise occurred. Just as with autopilots in aviation, while the tool does reduce workload, it's critical to always stay attentive. The car cannot drive itself. It can help, but you have to do your job.
> The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat.
If that's the crux of the issue that goes to court then who wins? The engineering, legal, marketing department, or do they all lose because the continuous system warnings that Autopilot requires attentive driving were ignored and a person who already knew and complained of the limits of that system decided to forego all qualms about it and fully trust in it this time around?
I feel like when I was first reading and discussing this topic I was way more in tune with the human aspect of the situation and story. I still feel a little peeved at myself for starting to evolve the way I'm thinking about this ordeal in a less human and more practical way.
If we allow innovation to be stifled for reasons such as these, will we ever see major growth in new technology sectors? That might be a little overblown, but does the fact that Tesla's additions to safety and standards have produced a markedly lower accident and auto-death rate mean nothing in context?
If Tesla is doing a generally good job, bringing up the averages on all sorts of safety standards while sprinting headlong towards even more marked improvements, are we suddenly supposed to forget everything we know about automobiles and auto accidents/deaths while examining individual cases?
Each human life is important. This man's death was not needed, and I'm sure nobody at Tesla, or anywhere for that matter, is anything besides torn up about having some hand in it. While profit is definitely a motive, Tesla knows that the means to the profit they seek is creating a superior product, and that includes superior features and superior safety standards. If Tesla is meeting and beating most of those goals, and we have a situation such as this, why do I feel (and I could be way wrong here) that Tesla is being examined as if they were an auto manufacturer with a history of lemons, deadly flipped-car accidents, persistent problems, irate customers, or anything of the like?
For whatever reason it kind of reminds me of criminal vs. civil court cases. In a criminal case, it's upon the State or Prosecution to prove their case; in a civil case, the burden is on the Defense to prove their innocence. For some reason I feel like Tesla is in a criminal case but having to act like it's a civil case, where if they don't prove themselves they will lose out big.
To me it feels like the proof is there. The data is there. The facts are known. The fact that every Tesla driver using Autopilot in that precise location doesn't suffer the same fate points toward something else going on, but the driver's actions also don't seem to match up with what is known about him and the story being presented on the other side. It's really a hairy situation and I feel like it warrants all sorts of tiptoeing around, but I also have the feeling that allowing that "feeling" aspect to dictate the arguments for either side of this case is just working backwards.
And for what it's worth I don't own a Tesla, I've never thought about purchasing one. I like the idea, my brother's friend has one and it's neat to zoom around in but I'm just trying to look at this objectively from all sides without pissing too many people off. Sorry if I did that to you, it wasn't my intent.
My concern is that it looks like Tesla is 90% of the way there to full autonomy, and the way the feature is marketed will lull even engineers who know more about how these systems work into a false sense of security, and they'll end up dying as a result -- they'll trust a system that shouldn't be trusted. There isn't a good system for detecting a lack of focus, especially when it won't take more than a few milliseconds to go from good to tragic.
The human toll is irrelevant to the conversation, what's relevant is whether risks taken are being taken knowingly - you cannot market a self driving vehicle whose functionality "is 2x better than any human being" while simultaneously stating in your legal language to protect yourself: don't take your hands off the wheel - that's bs.
The human toll is absolutely relevant to the conversation: this is about people dying now and in the future. It seems cruel to discuss it in an "I'll sacrifice X to save Y" manner, but it can reasonably be reduced to that.
I think it's safe to assume that this will drastically reduce driving related injuries and deaths.
Is the life taken by auto pilot worth less than the life taken by the aggressive driver who takes out an innocent driver? No.
I hope we eventually save lives, as in a net improvement in current death totals, by using these technologies, but the risks are not well communicated, the marketing is entirely out of sync with the risks, and the "martyrs" we thus create look to me like victims.
I think beliefs such as these are fueled by the extremely naive implication that each death will cause the learning algorithm to "improve itself", so every self-driving thing out there is safer owing to that death.
Some number of people, N, are willing to risk their lives to use autonomous vehicles, and they'll die as a result. The risks involved should be just as clear to the person using Autopilot, not obscured by marketing fluff that doesn't come close to reality. Martyrs, not victims.
This assumes that the self-driving tech will continue to increase in competence and will at some point surpass humans. I somehow find that extremely optimistic, bordering on naive.
Consider something like OCR or object recognition alone, where similar tech is applied. Even with decades of research behind it, it really cannot come anywhere close to a human in terms of reliability. And I am talking about stuff that can be trained endlessly without any sort of risk. Still, it does not show an ever-increasing capability.
Now, machine learning and AI is only part of the picture. The other part is the sensors. These again are nowhere near the sensors a human is equipped with.
What we have seen in the tech industry in recent years is that trust in a tech by the people, even intelligent ones such as those investing in it, is not based on logic (Theranos, uBeam, etc). I think such a climate is exactly what is enabling tests such as these. But unlike the others, these tests are actually putting unsuspecting lives on the line. And that should not be allowed.
Please note that I artfully omitted a due date on my assumption. There's so much money involved here and so much initial traction that it is indeed reasonable to think that tech can surpass a "normal" driver.
I'm also biased against human drivers, plenty of whom should not be behind the wheel.
I don't think it is reasonable at all to reach that conclusion based on the money involved... You just can't force progress/breakthroughs just by throwing money at a problem.
>I'm also biased against human drivers, plenty of whom should not be behind the wheel.
So I think it would be quite trivial to drastically increase the punishment of dangerous practices if caught. I mean, suspend license or ban for life if you are caught texting while driving or drunk driving.
You're also ignoring a key point: we have "self-driving" cars right now, but they're not good enough yet. Computer hardware is getting cheaper day by day, and right now the limiting factor appears to be the cost of sensors.
Both are not true. It does not take money for someone to have a great breakthrough idea. It is also not possible to guarantee generating a great idea by just throwing more and more money at researchers...
The best thing is to build a system to analyze your driving and figure out if you are in that 40% of people and then let it drive for you. Maybe drunk drivers, for example. It can do this per ride: “oh you’re driving recklessly, do you want me to take over?”
EVERYTHING ELSE SHOULD BE A STRICT IMPROVEMENT. Taking over driving and letting people stop paying attention is not a strict improvement.
The argument should NOT be about playing with people’s lives now so in the future some people can have a better system. That’s a ridiculous argument. Instead WHY DON’T THE COMPANIES COLLABORATE ON OPEN SOURCE SOFTWARE AND RESEARCH TO ALL BUILD ON EACH OTHER’S WORK? Capitalism and “intellectual property”, that’s why. In this case, a gift economy like SCIENCE or OPEN SOURCE is far far superior at saving lives. But we are so used to profit driven businesses, it’s not likely they will take such an approach to it.
What we have instead is companies like Waymo suing Uber and Uber having deadly accidents.
And what we SHOULD have is if an incremental improvement makes things safer, every car maker should be able to adopt it. There should be open source shops for this stuff like Linux that enjoy huge defensive patent portfolios.
Ain’t gonna happen I’m afraid.
Why not? That's how pioneers make progress, in new aircraft and spacecraft.
If people want to be on the bleeding edge, why not let them?
How can the cars improve if they are never allowed to drive?
The pioneers in this case are putting other people’s life at risk.
Waymo seems to demonstrate that improving self-driving cars without leaving a trail of bodies behind is in the realm of possibility, so let’s measure Tesla against that standard.
They don't just say it in the legal language. The car is continually reminding the driver of it, as the article makes clear.
This is the definition of a false dichotomy and it implicitly puts the onus on early adopters to risk their lives (!) in order to achieve full autonomy. Why not put the onus on the car manufacturer to invest sufficient capital to make their cars safe!? To rephrase what you said with this perspective:
> ...developing self-driving automobiles is so important that it's worth the implied cost of potentially tens of billions of investor dollars in order to perfect the technology, because that's what people do; make sacrifices to improve the world we live in so that future generations don't have to know the same problem.
This seems strictly better than the formulation you provided. How nuts is it that the assumption here is that people will have to die for this technology to be perfected. Why not pour 10x or 100x the current level of investment and build entire mock towns to test these cars in - with trained drivers emulating traffic scenarios? Why put profits ahead of people?
The problem is that Uber needs self driving cars in order to make money, and Tesla firmly believes that their system is safer than human drivers by themselves (even if a few people who wouldn't have otherwise died do, others who might have died won't and they believe those numbers make it worth it).
The problem is that you can only learn so much without actual practice.
I think we need to keep the human driver in control, but have the computers learning through that constant, immediate feedback.
And get rid of misleading marketing and fatal user experience design errors.
I don't know what is stopping them from simulating everything inside a computer.
Record the input from all the sensors when a car equipped sensors is driven through real roads by a human driver.
Replay the sensor input, with enough random variations and let the algorithms train on it.
Continue adding to the library of sensor data by driving the sensor car through more and more real-life roads and real-life situations. Keep feeding the ever-increasing library of sensor data to the algorithm, safely inside a computer.
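Roughly, the pipeline being proposed would look something like the sketch below; the shapes, names, and "augmentations" are all made up for illustration, not anyone's actual stack:

    # Log-and-replay sketch: record (sensor_frame, human_control) pairs on
    # real roads, then train offline with random perturbations of each frame.
    import numpy as np

    def augment(frame, rng):
        # Cheap variations on a logged frame: sensor noise, lighting shift.
        frame = frame + rng.normal(0, 0.02, frame.shape)
        frame = frame * rng.uniform(0.8, 1.2)
        return np.clip(frame, 0.0, 1.0)

    def train_on_logs(log, model_step, epochs=10, seed=0):
        # Replay the log many times, perturbed differently on each pass.
        rng = np.random.default_rng(seed)
        for _ in range(epochs):
            for frame, human_control in log:
                model_step(augment(frame, rng), human_control)

    # Toy log: 100 "camera frames" paired with the human's steering angle.
    rng = np.random.default_rng(1)
    log = [(rng.random((32, 32)), rng.uniform(-0.5, 0.5)) for _ in range(100)]
    train_on_logs(log, model_step=lambda x, y: None)  # plug in a real learner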
Obviously they've already tried that, and it doesn't work. The map is not the territory.
In theory, there is no difference between theory and practice. In practice, there is.
What I mean is this: do not "teach" the thing in real time. Instead, collect the sensor data from cars a human is driving (and also collect the human input), and train the thing on it, safely inside the lab.
You say they have done it already. But I am asking if they have done it enough. And if that is so, how come accidents such as these are possible, when the situation is straight out of a textbook on basic driving?
It also will not ensure exclusion of manual vehicles, so it won't create the exclusion necessary for the predictable driving environment.
Sounds like requiring exclusive access - I apologize if that was a misinterpretation.
If you have human and automated drivers in the same roads, the computers have to be able to cope with the vagaries of human drivers.
How can you then get away from '"beta testing" our self driving cars on roads with human operators' if that is their deployment environment?
I generally agree with this philosophy but this is very optimistic, at least in the United States. This is a country where we can't even ban assault rifles let alone people from driving their own vehicles. You're going to see people drive their own vehicles for a very long time even if self driving technology is perfected.
Compare the above to hop in my car, drive to the freeway, turn on self-driving, turn off self-driving once i get off the freeway, find parking near where i'm going and walk in.
As a society, we've done a lot more in the name of convenience.
And you need a fleet of rental cars so that people can actually get to their destination.
What's the relative cost of all that vs. pavement?
Commuter rail systems run at 2 minute headways or less. Long-distance trains mostly don't but that's largely due to excessive safety standards - for some reason we regulate trains to a much higher safety standard than cars. Even then, the higher top speeds of trains can make up for a certain amount of waiting and indirect routing. (Where I live, in London, trains are already faster than cars in the rush hour).
> What's the relative cost of all that vs. pavement?
When you include the land use and pollution? Cars can be cheaper for intercity distances when there's a lot of similarly-sized settlements, but within a city they waste too much space. And once you build cities for people rather than cars, cars lose a lot of their attraction for city-to-city travel as well, since you're in the same situation of having to change modes to get to your final destination.
That "some reason" is physics. According to a quick Google search, an average race car needs 400m of track length from 300 km/h to 0 km/h. A train will require something around 2500m, over 5x the distance, to brake from the same speed. Trains top out at -1.1m/s² deceleration, an ordinary car can get -10m/s² deceleration.
Part of the reason why is also that in a car, people are generally using their seatbelts - which means you can safely hit the brakes with full power. In a train, however, people will be walking around, standing, taking a dump on the loo - and no one will be using a belt. Unless you want to send people literally flying through the carriages, you can't go very much over that 1 m/s² barrier.
Because of this, you have the requirement of signalling blocks spaced such that a train at full speed can still come to a full stop before the next block signal. Also: a train can carry thousands of people. Have one train derail and crash into e.g. a bridge, or collide with another train, and you're looking at way, way, way more injuries and deaths than even a megacity's emergency services could support, much less a rural area's.
If the cars are electric, I'm less sure.
Though your train of cars would likely have such low passenger density that a series of buses would be just as good. Special lanes just for buses are already a thing.
The point is driving is a freedom and getting rid of it in this country will be hard. I'd imagine self driving vehicles having more prevalence in China where the government can control what destinations you have access to and monitor your trips.
Many states (red) won't ban them for a very long time.
the impact on freedom to travel will have to be secured and decentralized without any government kill switches.
Which states? Maybe a few in New England, but I don't see that happening anywhere else. Counties perhaps, but there are rural areas pretty much everywhere, and people are going to want the freedom to drive their own vehicle.
In comparison, the economic impact/benefit of banning assault rifles is negligible (and definitely not transformative) even if I personally think it is the morally right thing to do. (Maybe we can make the case later if school security and active shooter drills become prohibitively expensive and/or annoying)
So people will relocate to avoid traffic? Why doesn't this happen today? Suppose San Francisco decided to not enforce self driving laws to protect small businesses and preserve community infrastructure and culture. Now suppose Phoenix (only picked because they've been progressive with self driving technology) does enforce self driving laws, would you expect a mass exodus from San Francisco to Phoenix?
Right now Phoenix is not even on the map for most of us. If they did something like this (at the right time), then it might be.
Why would the less-risky drivers move to self-driving cars first? Wouldn't some of the higher-risk demographics (e.g. the elderly) make the move first since they have more incentive to do so?
> Second, cities that go to self driving only will have a huge advantage in infrastructure utilization and costs as roads are used more efficiently (with smoother traffic) and parking lots/garages become a thing of the past. Residents will just push for it if it means not being stuck in traffic anymore.
I think self-driving cars will be really cool and reduce traffic accidents once they're perfected, but a lot of these assumptions don't make sense. Unless a critical mass switches to car-sharing, autonomous cars and no parking will make rush hour worse because now each car will make the round trip to work twice a day instead of just once. Also, what happens to the real-estate where the parking lots are now? The financially sound thing to do will probably be converting these lots to more offices/condos/malls. So urban density will increase - increasing traffic.
Even if autonomous cars radically improve traffic flow, I suspect we'll just get induced demand . More people will take cars instead of public transit and urban density will increase until traffic sucks again.
Elderly aren't usually considered higher risk. The young kids are, enthusiasts are, people who drive red sports cars are.
> Unless a critical mass switches to car-sharing, autonomous cars and no parking will make rush hour worse because now each car will make the round trip to work twice a day instead of just once.
Autonomous cars should be mostly fleet vehicles (otherwise you have to park it at home).
Isn't that just like in most of the major world cities where taxis are the norm rather than the exception? It isn't weird for a taxi in Beijing to make 5-6 morning commute rounds. But even then, there are a lot of reverse commutes to consider.
> The financially sound thing to do will probably be converting these lots to more offices/condos/malls.
While density can increase, convenient affordable personal transportation also allows the opposite to occur. Parks, nice places, and niche destinations, are also possible.
Think of it this way, once traffic is mitigated, urban planning can apply more balance to eliminate uneven reverse commute problems. There will still be an incentive to not move, but movement in itself wouldn't be that expensive (only 40 kuai to get to work in Beijing ~15km, I'm sure given the negligible labor costs, autonomous cars can manage that in the states).
We are asking that self driving cars be ALLOWED if the user chooses, even IF the safety is in doubt. This is because of just how extremely important this issue is.
People, I’ve said it before, and I’ll repeat it till eventually the tide turns on HN and elsewhere.
You will not have full autonomy unless you control the road itself.
At which point you are better off just making it mass transit.
Also, people are terrible at detecting objects directly in front of them, and, just like computers, the human brain can be cheated, overloaded, inept, or inexperienced, leading to an accident.
Now we have cars with lane assist, smart braking, and autopilot features, and that's only in the past 5-10 years.
Of all the places where technology can save lives, it's definitely in vehicles/transportation.
How many optical illusions do you usually see in the roads while driving, that can result in an accident?
I am not even talking about the "people are terrible at detecting objects directly in front of them" part.
I mean, how can you be a human being and say this? If we were "terrible at detecting objects directly in front of us", we would have been predated out of existence a long time ago.
Sometimes you'll see multiple white lines, or lanes that appear to veer off due to dirt on the road. A bit of litter looks like a person; a kid looks like they might run out.
A lot of times I find I'm searching for something and can't see it but it was in my visual field. I think this worsens with age.
No. None of those qualify as brain being cheated.
That is the only thing I was responding in the start of this discussion. Essentially the person was saying human brain can be cheated just like a computer.
I am saying: no, not just like a computer. The human brain does not get cheated so easily as a computer. Claiming that is outrageous and shows you have no idea what you are talking about...
Check out this article, it is easy to never see a bike you're on a collision course with.
You may consider humans as bad drivers but Tesla's autopilot is even worse than that:
It can't even look in one direction!
I'm not claiming Tesla's system is currently better than a human, just that there is plenty of potential for a machine to outperform humans perceptually. As it is, Tesla's system isn't exactly the gold standard.
Last time I checked, I could move my eyes, up and down, side to side. I could also rotate my whole head, that also up and down and side to side.
And I am a human being.
Actually, one thing that I was curious about regarding this incident: they say that authorities had to wait for a team of Tesla's engineers to show up to clean up the burning mess of batteries. Luckily for everyone else trying to get somewhere on 101 that day, Tesla's HQ isn't too far away. What if next time one drives into a barrier it happens in the middle of Wyoming? Will the road stay closed until Tesla's engineers can hitch a ride on one of Musk's Falcons?
And $879B in USA per year.
It is thus very important.
For the 1.3 million people and their loved ones and to 20-50 million injured EVERY YEAR, yeah, it's really that important.
Is it ready today? No. We're in pretty violent agreement on that.
Will we get there? I don't see much reason to doubt that we will, eventually. It may require significant infrastructure changes.
It's pretty clear Waymo/Uber are pushing the envelope too hard, without adequate safeguards, but "only be acceptable if it were you and Mr. Musk...on Tesla's private proving grounds" is probably not pushing the envelope enough.
Even Waymo is "unleashing their stuff onto unsuspecting public" by driving them on public roads - lots of innocent bystanders potentially at risk there.
Both Waymo and even Uber do not pretend that their systems are ready for public use and at least allegedly have people who are paid to take over (granted, in Uber's case it's done as shadily as anything else Uber does). Tesla sells their half-baked stuff to everyone, with marketing that strongly implies that they can do self-driving now, if only not for those pesky validations and regulations. I think there's quite a bit of a difference.
A lot of deaths and injuries on the road happen in countries with bad infrastructure and a rather cavalier attitude to the rules of the road. Fixing those could save more people sooner than SDVs that they won't be able to afford any time soon. Not to mention that an SDV designed in the first world (well, the Bay Area's roads are closer to third world, but still...) isn't going to work too well when everyone around drives like a maniac on a dirt road.
Not to say that SDVs wouldn't be neat, when they actually work, but this is a very SV approach: throwing technology at the problem to create an overpriced solution to something that could be solved much more cheaply, but in a boring way that doesn't involve AI, ML, NN, and whatever other fashionable abbreviations.
Whose lives are we sacrificing? In the case of the Uber crash in Tempe and this Tesla crash in California, the people who died did not volunteer to risk their lives to advance research in autonomous vehicles.
I highly respect individuals who choose to risk their lives to better the world or make progress, like doctors fighting disease in Africa and astronauts going to space, but at the same time, I think this must always be a choice. Otherwise we could justify forcing prisoners to try new drugs as the first stage of clinical trials. Or worse things. Which is why there are extensive vetting before approval for clinical trials is given.
I do think that, once the safety of autonomous vehicles have been proven on a number of testbeds, but before they are ready for deployment, it is justifiable to drive them on public roads. Maybe without safety drivers. But until then, careful consideration should be given to their testing.
Uber should not have been able to run autonomous vehicles with safety drivers where the safety driver could be allowed to look away from the road for several seconds while the car was moving at >30mph. The car should automatically shut off if it is not clear whether the safety driver is paying attention. And there should be legislation that bans any company that fails to implement basic safeguards like this from testing again for at least a decade, with severe fines. Probably speeds should also be limited to ~30mph for the first few years of testing while the technology is still so immature, as it is today.
Similarly, Tesla should not be allowed to deploy their Autopilot software to consumers before they conduct studies to show that it is reasonably safe. Repeated accidents have shown that Level 1 and Level 2 autonomy, where the car drives itself but the driver must be ready to intervene, is a failed model unless the car actively monitors that the driver is paying attention.
Overall I think justifying the current state of things by saying that people must be sacrificed for this technology to work is ridiculous. Basic safeguards are not being used, and if we require them, maybe autonomous vehicles will take a few years longer to reach deployment, but thousands of lives lost could become tens.
Edit: I read in another comment that the Tesla car at least "alarms at you when you take your hands off the wheel". In that case I think what Tesla is doing is much more reasonable. (Not Uber, though.) Although I still feel like it is going to be hard to react to dangerous situations when the system operates correctly almost all the time (even if you are paying attention and have your hands on the wheel). But I'm not sure what the correct policy should be here, because I don't fully understand why people use this in the first place (since it sounds like Autopilot doesn't save you any work).
Cars should just be phased out in favor of mass transit everywhere.
Yes, you can live without the convenience of your car. No really, you can.
Now think about how you would enable that to happen. What local politicians are you willing to write to, or support, in order to enable a better mass transit option for you? And how would you enable more people to support those local politicians that make that decision?
This is the correct solution, since the AI solution of self-driving cars isn't going to happen. Their fatality rates are going to remain high.
Maybe, but unless you can change the laws of nature, you can't build a mass transit system that can serve everyone full-time with reasonable efficiency and cost-effectiveness, and that's just meeting the minimum requirement of getting from A to B, without getting into all the other downsides of public vs. private transportation in terms of health, privacy, security, etc.
Let's see what that imagination can craft.
Achieving 24/7 mass transit, available with reasonable frequency for journeys over both short and long distances, would certainly require everyone to live in big cities with very high population densities. Here in the UK, we only have a handful of cities with populations of over one million today. That is the sort of scale you're talking about for that sort of transportation system to be at all viable, although an order of magnitude larger would be more practical. All of those cities have long histories and relatively inefficient layouts, which would make it quite difficult to scale them up dramatically without causing other fundamental problems with infrastructure and logistics.
So, in order to solve the problem of providing viable mass transit for everyone to replace their personal vehicles, you would first need to build, starting from scratch or at least from much smaller urban areas, perhaps 20-30 new big cities to house a few tens of millions of people.
You would then need all of those people to move to those new cities. You'd be destroying all of their former communities in the process, of course, and for about 10,000,000 of them, they'd be giving up their entire rural way of life. Also, since no-one could live in rural areas any more, your farming had better be 100% automated, along with any other infrastructure or emergency facilities you need to support your mass transit away from the big cities.
The UK is currently in the middle of a housing crisis, with an acute lack of supply caused by decades of under-investment and failure to build anywhere close to enough new homes. Today, we're lucky if we build 200,000 per year, while the typical demand is for at least 300,000, which means the problem is getting worse every year. The difference between home-owners and those who are renting or otherwise living in supported accommodation is one of the defining inequalities of our generation, with all the tensions and social problems that follow.
But sure, we could get everyone off private transportation and onto mass transit. All we'd have to do is uproot about 3/4 of our population, destroy their communities and in many cases their whole way of life, build new houses at least an order of magnitude faster than we have managed for the last several decades, achieve total automation in our out-of-city farming and other infrastructure, replace infrastructure for an entire nation that has been centuries in development... and then build all these wonderful new mass transit systems, which would still almost inevitably be worse than private transportation in several fundamental ways.
And that's not taking into account the fact that a bicycle is a very viable way to move around in cities of fewer than 200,000 inhabitants.
I have actually never owned a car; I just rent one once in a while to go out somewhere regular transit doesn't take me. I have lived in Sweden, France and Spain, in 10 cities ranging from 25,000 to 12 million inhabitants. Never felt restricted. I actually feel much more restricted when I drive, because I have to worry about parking, which is horrible in both Paris and Stockholm. Many people I know, even in rural Sweden or France, don't own a car because it is just super costly and the benefit is not worth it. It's very much a generational thing though, because my friends are mostly around 26-32, whereas nearly everyone I know over 35 owns a car, even if they don't actually have that much money and sometimes complain about it.
To provide a viable transport network, operating full-time with competitive journey times, without making a prohibitive financial loss or being environmentally unfriendly, you need a critical mass of people using each service you run. That generally means you need a high enough population density over a large enough urban area that almost all routes become "main routes" and almost all times become "busy times".
I lived car-free in a small industrial UK city, but we couldn't manage that with kids (too expensive, for one).
Bus seats are awful. Why? Because they're made vandal-resistant (and hard-wearing). They're too small for a lot of people now as well. So you need to remodel buses, IMO; you're going to need to be harder on vandals, so change the approach of the courts. Things bifurcate across areas of society like that: supermarkets, houses, zoning, etc. are all designed with mass car ownership as a central tenet.
No, I can't. Don't presume to tell other people what they need to live their lives.
A laudable goal doesn't give anyone the right to kill people by taking unnecessary risks. The reason that Tesla and Uber do what they do the way they do it, instead of taking a more conservative approach, is an attempt to profit, not to save lives. If you don't have to spend lives to make progress, but choose to do so for economic expedience, there's a word for that: evil.
Racing drivers have also reported that when they are not driving at 100%, they are more prone to making mistakes or crashing. The most famous example is Ayrton Senna's crash at Monaco when he was leading the field by a LONG way. When asked why he crashed at a fairly innocuous slow corner, he said that his engineer had asked him over the radio to 'take it easy', as there was no chance he would be challenged for 1st place before the finish line, so he relaxed a fraction and started thinking about the victory celebrations. And crashed.
Basically this: https://xkcd.com/1138/
You're not alone. I find the act of modulating my speed is what keeps me focused on the task of driving safely. Steering alone isn't enough; I can stay in my lane without tracking the vehicles around me or fully comprehending road conditions.
Until a Level 5 autonomous car is ready to drive me point A to point B while I watch a movie I will remain firmly in command of the vehicle.
Public roads are not laboratories. It’s not just Tesla owners who are participating in this, it’s everyone on the road with them.
It really annoys me when I have to constantly look at the speedometer to avoid going above the limit, and it is very tiring on long drives.
This is similar to the problem for pilots, who can be distracted by mundane tasks due to the complexity of controls in modern aircraft. If these tasks are removed, the pilot can focus on what's more important.
According to NASA: "For the most part, crews handle concurrent task demands efficiently, yet crew preoccupation with one task to the detriment of other tasks is one of the more common forms of error in the cockpit."
The moment the back of my mind doesn't have to handle precise throttle control, I find my mind wandering and my spatial awareness is shot. I guess maintaining speed is the fidget spinner that keeps me focused on the task of driving.
Does anyone know of psychology studies that measure human reaction time and skill when something like autopilot is engaged most of the time? I remember taking part in a similar study at Georgia Tech that involved firing at a target using a joystick. It was also simultaneously a contest, because only the top scorer would get prize money. The study was conducted in two parts. In the 1st phase, the system had autotargeting engaged. All subjects had to do was press a button when the reticle was on the target in order to score. In the 2nd phase, which was a surprise, autotargeting was turned off. I won the contest and my score was miles ahead of anyone else's. I can't fully confirm it, but I feel this happened because I was still actively aiming for the target even when autotargeting was active.
Yes. That's been much studied in the aviation community. NASA has the Multi-Attribute Test Battery to explicitly study this. It runs on Windows with a joystick, and is available from NASA. The person being tested has several tasks, one of which is simply to keep a marker on target with the joystick as the marker drifts. This simulates the most basic flying task - flying straight and level. This task can be put on "autopilot", and when the marker drifts, the "autopilot" will simulate moving the joystick to correct the position.
But sometimes the "autopilot" fails, and the marker starts drifting. The person being tested is supposed to notice this and take over. How long that takes is measured. That's exactly the situation which applies with Tesla's "autopilot".
There are many studies using MATB. See the references. This is well explored territory in aviation.
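For anyone curious what measuring that looks like, here is a toy model of the tracking task (my own sketch, not NASA's actual MATB code; every constant is invented): an "autopilot" keeps a drifting marker centered, fails silently at a random tick, and we count how long until the drift would be big enough for a monitoring human to notice.

    import random

    NOTICE_THRESHOLD = 5.0   # drift a watching human would spot (invented)
    FAIL_PROB = 0.001        # per-tick chance of silent autopilot failure (invented)

    def trial(max_ticks=100_000):
        position, failed_at = 0.0, None
        for tick in range(max_ticks):
            position += random.gauss(0, 0.5)            # random drift
            if failed_at is None and random.random() < FAIL_PROB:
                failed_at = tick                        # silent failure begins
            if failed_at is None:
                position *= 0.5                         # autopilot corrects the drift
            elif abs(position) > NOTICE_THRESHOLD:
                return tick - failed_at                 # ticks until it's noticeable
        return None

    times = [t for t in (trial() for _ in range(200)) if t is not None]
    print("mean ticks from silent failure to noticeable drift:",
          sum(times) / len(times))

The point the real studies make is in that gap: between the silent failure and the moment the drift is humanly detectable, nothing looks wrong to the person monitoring.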
I don’t find that particularly challenging and in fact, when the autopilot is INOP, flights are slightly more mentally fatiguing because you have no offload and complex arrivals are much more work, but in cruise, you have to be paying attention either way. It’s not a time to read the newspaper, autopilot or not.
What I noticed was that when I was following the posted/safe speed limit, I quickly lost focus, my mind started wandering, and eventually I felt I was falling asleep.
I do not remember what made me speed up, but once I was going about 30% faster than the posted limit, and had reached a part of the way where the road was quite bad and a lot of road work was happening, I realized I was much more alert.
As soon as I slowed down to the posted speed limit, I began drifting away again.
If anything, my anecdote confirms your theory: as soon as we perceive something as safer, we pay way less attention. And Autopilot sounds like one of those safety things that makes drivers less attentive and potentially miss dangerous situations which would otherwise be caught by the driver's mind.
I wonder if there is a way to introduce autopilot help without actually giving the driver a sense of security. Granted, Tesla would lose a precious marketing angle, but if their autopilot worked somewhat like a variable power steering system in the background, without obviously taking over control of the car, wouldn't that be more beneficial in the long haul?
I find rough motorway surfaces in my current vehicle induce heavy drowsiness at motorway speed limits (slightly reduced at marginally higher speeds, when the pitch is higher).
It's not a question of zero deaths, it's a question of reducing the number, which means you need to look beyond individual events. Remember: the ~90 people who died yesterday in US car accidents without making the news are far more important than a few individuals.
No we don't. Tesla likes to compare their deaths per mile to the national average. The problem is that their autopilot is not fit to drive everywhere or in all conditions that go into that average. There is no data to support that autopilot is safer overall. It may not even be safer in highway conditions given that we've seen it broadside a semi and now deviate from the lane into a barrier - both in normal to good conditions.
And really, driving conditions are responsible for a relatively small percentage of vehicle fatalities. Most often it's people doing really dumb things like driving 100+ MPH.
The only thing we actually know is these cars are safer on average than similar cars without these systems. That's not looking at how much they are used, just the existence of said safety system and likely relates to them being used when the drivers are extremely drunk or tired which are both extremely dangerous independent of weather conditions.
The US just mandated all new cars have backup cameras, but it seems like mandating AEB would make a bigger difference.
What do you know that the rest of us don't? The only statistics I've seen on anyone's self-driving cars so far would barely support a hypothesis that they are as capable as an average driver in an average car while operating under highly favourable conditions.
I've been saying this for a while and it's interesting to see more people evolve to this point of view. There was a time when this idea was unpopular here, owing mostly to people claiming that autonomous cars are still safer than humans, so the risks were acceptable. I think there are philosophical and moral reasons why this is not good enough, but that goes off-topic a bit.
In any case, some automakers have now embraced the Level-5 only approach and I sincerely believe that goal will not be achieved until either:
1. We achieve AGI or
2. Our roads are inspected and standards are set to certify them for autonomous vehicles (e.g. lane marking requirements, temporary construction changes, etc.)
Otherwise, I don't believe we can simply unleash autonomous vehicles on any road in any conditions and expect them to perform perfectly. I also believe it's impossible to test for every scenario. The recent dashcam videos have convinced me further of this.
The fact that there are "known-unknowns" in the absence of complete test-ability is one major reason that "better than humans" is not an ethical standard. We simply can't release vehicles to the open roads when we know there are any situations in which a human would outperform them in potentially life-saving ways.
The solution might be a system where the driver drives at a higher level of abstraction, but ultimately still drives.
Driving should be like declarative programming.
For example, the driver still moves the steering wheel left and right, but the car handles the turn.
Or when the driver hits the brakes, which is now more of an on/off switch, the car handles the braking.
The role for the driver is to remain engaged, indicating their intention at every moment and for the car to work out the details.
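A minimal sketch of what that intent layer might look like (all names and numbers here are invented for illustration):

    from enum import Enum, auto

    class Intent(Enum):
        FOLLOW_LANE = auto()
        CHANGE_LANE_LEFT = auto()
        BRAKE = auto()

    def execute(intent, speed_mps):
        # The driver declares what they want; the car computes the details.
        if intent is Intent.BRAKE:
            # The pedal is an on/off switch; the car picks the actual rate.
            return {"brake_rate_mps2": min(0.3 * speed_mps, 8.0)}
        if intent is Intent.CHANGE_LANE_LEFT:
            return {"steer": "smooth left offset, gap-checked by the car"}
        return {"steer": "track lane center"}

    print(execute(Intent.BRAKE, 30.0))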
Edit: On second thought, that might end up being worse. I can think of situations where it might become ambiguous to the driver of what they are handling and what the car is handling. Maybe autopilot is all or nothing.
I can't even drive an automatic transmission car without getting way too distracted at times
Glad I'm not the only one doing it. When driving on a highway, I increase or decrease my car's speed by 10-15 km/h every 10 minutes or so, so that this variation can help me stay attentive to my surroundings.
Not saying it invalidates your argument, just that there are pretty wide-reaching consequences to that idea.
I think people that say that "autopilot" is a bad name for this feature don't really understand what an "autopilot" does.
> Full Self-Driving Hardware on All Cars
> All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
Whatever theory you have for Tesla's naming of their feature, it doesn't match with their marketing.
The page's title is "Autopilot | Tesla". It is the first result for "tesla autopilot" in search results. And "autopilot" appears 9 times on the page. So if that's not an intentional attempt to mislead consumers into conflating Autopilot with "full self-driving", then what would such an attempt look like, hypothetically?
That is damn crazy. You should consider your users to be complete idiots when improper use of the thing can endanger lives.
Why are we even debating this?
For this standard to hold, it would have to apply to every driver. Should drivers who do not google "Tesla autopilot", let alone ones that do and read on in a section about said autopilot feature, be punished with death in a two-ton metal trap?
It doesn’t; what it does say is perfectly clear to me.
It remains an answer to "what would such an attempt [to intentionally mislead consumers] look like, hypothetically?"
This is actually harder to do than just driving the car.
Personally, I do not think calling Tesla's system 'autopilot' is the issue, but your claim that it is accurate is based on misunderstandings about the use of autopilots in aviation. It is not the case that their proper use puts airplanes on the edge of disaster were it not for the constant vigilance of the pilots.
People who say that "Autopilot" is a bad name for this feature aren't basing it on an imperfect understanding of what autopilot does in airplanes. They're basing it on how they believe people in general will interpret the term.
The funny thing is that the same people who argue for self-driving tech by saying "Humans will do dumb shit" are the same ones who justify Tesla by saying "Humans should not do stupid things (like ignoring the car's warning)"..
They aren't going to literally fall asleep, but much of the time pilots are reading a book and not directly paying attention in the same way that a driver is.
Since then, air traffic control procedures have been improved to avoid these situations, but nowadays e.g. over the Baltic Sea, Russian military planes are routinely flying with their transponder turned off so that flight control does not know where they are. So, this risk is still there.
Imagine the possible points of failure in that chain, too. Bluetooth can fail, satellite can fail, cell can fail, WiFi can fail; there isn't a single piece of connectivity technology that would make me confident enough to delegate alerts to another device.
There is also an inherent failure of alarms in general, in that even very loud ones can be ignored if they give false positives even once or twice. There is a body of study trying to address this. Some of the most fatal industrial accidents occurred because alarms were either ignored or even fully switched off. We aren't good with alarms.
I think the meat of it, though, is that unless Autopilot works perfectly, you can't leave it alone. And if you can't leave it alone, then what's the point?
The sell for autonomous cars isn't that people are just so darn tired of turning the steering wheel that they would really rather not. It's that we could potentially be more productive if we could shift our full attention to work/study/relaxation while commuting.
They should have issued a single, very simple statement that they are investigating the crash and that any resulting improvements would be distributed to all Tesla vehicles, so that such accidents can no longer happen even when drivers are not paying sufficient attention to the road and ignore Autopilot warnings. Then double down on the idea that Autopilot is already safer than manual driving when properly supervised and that it constantly improves.
The specifics of the accident, victim blaming, whether or not the driver had his hands on the wheel or was aware of Autopilot's problems: that is something that should be discussed by lawyers behind closed doors. And of course, deny it media attention and kill it in the preliminary investigation, which I imagine they will have no problem doing; he drove straight into a steel barrier, for God's sake.
Silicon Valley needs to stop trying to make autonomous cars happen.
Right now, self-driving cars aren't viable in the real world due to their extremely high fatality rate, which is comparable to that of motorcycles.
Meanwhile, there are several car models with ZERO fatalities.
The name implies drivers don't need to drive and is most likely a big reason drivers are lured into a false sense of security.
I hate to be sanctimonious at people online, but this is how people get killed. Is it not illegal to do this where you live? In the UK you'd be halfway to losing your license if you got caught touching a phone while driving (and lose it instantly if within the first 2 years of becoming a qualified driver).
You just said it yourself, you can't trust it, so don't play with your phone while driving, lane assist or not.
Lane assist has also led me to be much better about always signaling lane changes lest the steering try to fight me.
That's exactly what the Tesla driver must have thought too. Right until the autopilot steered directly into a barrier. Volvo S's system may be better, but any lapse in attention can lead to the type of crash we are discussing.
See, I'm not sure if you know this... but most people are not pilots. (Disclaimer: I'm not only a programmer, but also hold an A&P and avionics license, as well as a few engine ratings.)
It is ABSOLUTELY on a manufacturer to make sure their potentially life-ending feature is not named in a way that can confuse the target audience. You know: NON-PILOT car drivers.
auto means "by itself, automatic"...
Of course “Autopilot” is intended to evoke the common meaning as a marketing tool, and not the nuanced, highly technical meaning understood by pilots. Understand though, that when someone argues against that point the pedantry is just a proxy for their fanaticism, and until the fanaticism dies, the excuses will be generated de novo. You’re bringing reason and logic to an emotional fight.
Forgive me but I've never seen any plane where, once in autopilot, the pilot/s are not checking and observing the conditions of the plane and making sure everything is alright.
I want you to go on Wikipedia (is that not mainstream enough?) and search for the term Autopilot. Read its ACTUAL definition and come back, please.
Autopilots used in modern commercial airplanes are autonomous. You don't have to watch them, they will do their job. The airplane is either controlled by the pilots or the autopilot. There is a protocol to transfer the control between pilots and the autopilot, such that it is clear who is in charge of controlling the plane (there's even a protocol to transfer this between pilots).
The autopilot will signal when it is no longer able to control the plane (because of, e.g., technical faults in the sensors).
Yes, there are also autopilots in smaller airplanes which are more or less just a cruise control. But everything in between, where it is unclear who is doing what and where the limits of the capabilities are, has been scrapped because people died.
> doesn't mean tesla has to have the burden of people misinterpreting what it says.
Because Tesla is so very clear in stating what their autopilot is able to do and what it is not.
The cars simply lack the software to enable a fully autonomous vehicle. The phrasing indicates that if/when the software becomes available, the car would be theoretically capable of driving itself.
It's just a typical misleading marketing blurb; nothing more.
They don't actually know that they have hardware for full autonomy till they have a fully working hardware/software autonomy system; what they have is hardware that they hope will support full autonomy, and a willingness to present hopes as facts for marketing purposes.
But that wasn't the line of argument I was making. The parent commenter said this about people misunderstanding the term "autopilot"
> Just because people don't know the proper term or have an erroneous idea of the term, doesn't mean tesla has to have the burden of people misinterpreting what it says.
Seems like people might be mistaken because the phrase "Full Self-Driving" is literally the first thing on the official Tesla Autopilot page.
It's a fatal choice of words.
That is exactly what the CEO says
He's a big talker and a bigger cult leader.
His fans will believe the news.
And it's not just this one, it's all the others that they'll suddenly be accepting liability for.
This sample statement makes it very clear that the user was misusing autopilot and trusting it beyond its intended function, but also shows sympathy for the family's situation.
Doesn't your statement admit that Tesla is at least partially at fault? Something their lawyers would probably never allow.
IANAL so take with a grain of salt. I once talked to a lawyer who used to work for a big hospital and handled the malpractice lawsuits against them. Three takeaways from the discussion:
1. Implying that a possibility exists that the hospital was at fault has no legal ramifications whatsoever.
2. Studies show that an apology and admission have a significant impact on the amount paid to the patient if there is a settlement (in the hospital's favor).
3. Despite knowing 1 and 2, he and other lawyers advise their clients to deny wrongdoing all the way to the end.
"Don't smoke, ever."
'Oh, really? How did you model the benefits to me? By what margin did the costs exceed the benefits?'
"There are no benefits to smoking."
'You mean, no health benefits?'
"What's the difference?"
Should a product that flawed really be deployed on an honor system basis?
Because testing stuff before throwing it on the market isn't a thing anymore?
I forwent commenting on their pre-market testing because I assumed that flokie already knew that the cars and ML models they use had been extensively tested on tracks and in simulation before the first Tesla was allowed on California roads.
And they would have to be complete idiots had they not done such testing; no investors would have funded that.
Reactions like flokie's were completely predictable the moment driver assistance techniques were thought of. The only acceptable response a company can have to such criticism is "we have tested this extensively and it is safer than driving manually".
Market forces aside, no car drives on roads in any US state without extensive testing and certification. All of the companies testing self-driving technology had to get special permits to do so.
Even just a reading of their actual statements offers some insight into the amount of testing and data collection they are still doing.
I think it’d be a tough sell to get the blessing of Tesla’s legal team. Given Musk’s position he could override that, of course, but it could still reduce the likelihood of it going out.
Overall as much as I prefer it, I think most companies wouldn’t release something this direct and honest. Although that’s changing at some companies as they find that the goodwill built through lack of bullshit can sometimes outweigh distasteful liability defense techniques.
It seems like Tesla is careful not to dispel the illusion that it can. Like how alcohol does not get you women, but all the adverts deftly imply it can, without actually saying so.
I don't know if paraphrasing Joseph Stalin is the right way to go about this.
I am a bit irked by the arrogance of this statement. The best we can hope for is to ensure the safety of the public.
And you, as a company, can be regulated out of business.
Pretend for a moment that the occurrence of the accident was a foregone conclusion. In such a scenario, the most positive thing that could be done would be to use the information to save the lives of others.
There's a smidgen of humility in the phrasing; that the accident might have been avoidable with the information that has been gained as a result of the accident. Of course, I presume that sort of sentiment would fall squarely under the "admission of guilt" umbrella that prevents companies from saying things like this.
Tesla is merely trying to take the next step in reducing that percentage.
Their strategy is sound and we so far have not come up with any alternatives that stand a remote chance of improving safety as much as self-driving. Even if they are largely unsuccessful, they are indeed trying to ensure the safety of the public.
Nope, and that's the whole difference. They will be killed by their actions, their choices, their inattention, or those of other drivers. They won't be killed by the machine.
With autopilot / pseudo-autopilot, they will be killed by the machine.
It is a huge difference, both in terms of regulations of people transportation safety, and in terms of human psychology, which makes a big difference between being in control and not being in control of a situation.
I can agree with the notion that the machine killed the person in all cases where the machine does not include any controls for the person.
As a society, we currently recognize that the causes of accidents and the probability of occupant death depend on multiple factors, one of which is the car and its safety features (or lack thereof). https://en.wikipedia.org/wiki/Traffic_collision#Causes
We also already have a significant level of automation in almost all cars, yet we are rarely tempted to say that having cruise-control, automatic transmissions, electronic throttle control, or computer controlled fuel injection means we are not in control and therefore the machine is totally at fault in every accident.
Operating a car was much harder to get right before these things existed and the difference can still be observed in comparison to small aircraft operations.
Then and now we still blame some accidents on "driver/pilot error" while others are blamed on "engine failure", "structural failure", or "environmental factors".
I think having steering assistance or even true autopilot will not change this. In airplanes, the pilots have to know when and how to use an autopilot if the plane has one.
If the pilot switches on the autopilot and it tries to crash the plane, the pilot is expected to override and re-program it, failure to do so would be considered pilot error.
Similarly, drivers will have to know when and how to use cruise control/steering assist and should be expected to override it when it doesn't do the right thing.
"A sincere diplomat is like dry water or wooden iron."
"We think that a powerful and vigorous movement is impossible without differences - 'true conformity' is possible only in the cemetery."
"Education is a weapon whose effects depend on who holds it in his hands and at whom it is aimed."
Just curious, what’s your writing background?
Sure it might look neat to someone on the outside but it wouldn't take long to see it's nothing like the real thing made by someone who knows what they're doing.
This statement is fine.
Do you honestly believe the statement, as written, aligns with the goals Tesla sets for themselves when releasing a statement?
If yes, then how wildly incompetent must Tesla be to release the statement they did.
If no, then this statement does not serve the function of a statement released by Tesla.
The driver had previously reported to Tesla 7 to 10 times that there was an issue with autopilot, but Tesla told him "no there is no issue". There is also video evidence of this same issue happening in other parts of the world, but with similar road conditions. But again, Tesla's response has been "there is no issue."
And now their response is "the driver knew there was an issue but he used autopilot anyway"? Seriously? Either there is an issue or there isn't, Tesla. At first you said there is no issue, and now you're saying there is an issue? And as the cherry on top, you're blaming the driver for continuing to use the feature even after you told him repeatedly that it's okay to use?
For shame, Tesla. For shame.
I understand this is not a popular opinion on news forums anywhere, nor do I write this to absolve Tesla of what I view as their poor response or their poor choice in naming the system 'autopilot' to start with.
But at what point does corporate or government responsibility end and personal responsibility start? If I knew something didn't work, and that something could kill me, and I had reported the issue 7 to 10 times, I would be watching like a hawk for the issue to recur, or more likely just not using it at all.
People have this insane idea that just because something isn't their fault they don't have to take corrective action in order to avoid the consequences. It is better to be alive than to be correct.
In this case, there is no evidence that the driver was misusing the car. But there is a carload of evidence that Autopilot failed.
At this point, the only thing Tesla is accomplishing with these public statements is adding more zeros to the eventual settlement/damage award.
And if they don't, what's the first thing you, as the customer, do? You stop using the product. If autopilot isn't working the way you think it should, you stop using autopilot. This driver did not do that. What kind of sense does that make?
Tesla's autopilot already slows down and then stops if the user doesn't touch the steering wheel for a period of time, but it gives plenty of warning to the user before that happens, as it should.
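From the outside, the escalation seems to behave roughly like this (a sketch of the behavior as described, not Tesla's code; the thresholds are made up):

    WARN_AFTER_S = 30    # invented: seconds hands-off before the nag
    STOP_AFTER_S = 60    # invented: seconds hands-off before slowing down

    class Car:
        def sound_warning(self):
            print("WARNING: put your hands on the wheel")
        def slow_to_stop_with_hazards(self):
            print("slowing to a stop, hazard lights on")

    def monitor(hands_off_elapsed_s, car):
        # Escalate: nag first; only if the nag is ignored, stop safely.
        if hands_off_elapsed_s > STOP_AFTER_S:
            car.slow_to_stop_with_hazards()
        elif hands_off_elapsed_s > WARN_AFTER_S:
            car.sound_warning()

    monitor(45, Car())   # -> warning
    monitor(75, Car())   # -> slow to stop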
Inconveniencing the driver and making it a learning experience vs crashing into concrete and killing him
Well. Maybe you are the naive one. Machinery shouldn't give warnings. We already learned those lessons in industrial automation settings.
The average mentality of "just user error, bro" seen on Hacker News is what scares me the most. That is not how you build secure machines, far from it, and at this point my only hope of not being killed by an autonomous car in the next 30 years is for the regulatory hammer to come crashing down full force on the wanton engineers who are treating loss of life as user error.
This is a paper from a hundred years ago about the subject; this is not rocket science: https://www.jstor.org/stable/1412972?seq=1#page_scan_tab_con...
The airbag recall affected millions and millions of cars. How many of those people were able to not drive their car until the problem was resolved?
You wouldn't have to stop using the car. Just stop using autopilot.
I think that ability to update is one of the most exciting things about Teslas compared to other cars, but it presents a huge organizational problem. How does Tesla communicate out those changes to their customers, mechanics, sales staff, and customer service people? I think the reporting of this specific issue and the original "there is no issue" response is simply from a company failing to communicate in multiple directions rather than any type of sinister motive that some people seem to be implying (Hanlon's razor and all).
Fast updates can be a great thing, but companies (in this case, Tesla) need to do a better job of figuring out when to apply this mentality of "move fast" and when to instead apply a mentality of "move slower because even the slightest error in this update could result in bodily harm to someone". There's a reason that most non-software industries (think aviation, medical, most car manufacturers not named Tesla) move excruciatingly slow when rolling out new features.
 - https://en.wikipedia.org/wiki/Trolley_problem
In this instance, it didn't get safer. It did the opposite, and it killed someone.
>I haven't seen anything to suggest Tesla is ignoring that data or being reckless in when they decide to roll out these changes.
You haven't? Because I'm sitting here looking at a story about a bug in Tesla's latest software that Tesla ignored and then it resulted in someone's death.
Why are you making assumptions about it, then?
>You are drawing those conclusions with only half of the equation.
No, I'm not. The facts are: there was a problem that was reported to Tesla, Tesla ignored that problem, and now someone is dead because of that problem. No further information is needed to recognize that Tesla acted poorly here.
You've stated this repeatedly without proof. It has not been established he is dead because of that problem. The investigation may yet reach that conclusion, but until then you have no support for this assertion.
The safety claims Tesla/Musk make are debatable. There is a lawsuit over getting the raw data NHTSA used to make the "autosteer reduces collisions by 40%" claim. AEB also reduces the collision rate by 40 percent, which is mentioned in the same NHTSA report. And, supposedly, AEB and autosteer were activated around the same time, so it might not be easy to tell which provided the benefit.
> I would bet that autopilot is still well in the positives in "lives saved" versus "lives ended"
You need a lot of data to show that. 1 death per 86 million miles driven is the average in the US, and that's across all types of vehicles, new and old, including motorcycles, bad weather (where autopilot is not likely to be engaged), and on all kinds of roads (autopilot is mostly engaged on divided highways).
There was also a death in China that looks like an Autopilot crash.
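To put a rough number on "a lot of data", here's a back-of-the-envelope Poisson calculation using only the 1-per-86-million-miles figure above (and ignoring all the mix-of-conditions caveats): even in the impossible best case of zero Autopilot deaths, you'd need on the order of a quarter of a billion miles before that record ruled out the ordinary human rate at 95% confidence.

    from math import exp

    baseline_rate = 1 / 86e6   # deaths per mile (US average, per the comment above)

    # P(0 deaths in `miles` | baseline rate) = exp(-baseline_rate * miles);
    # find the mileage where that probability drops below 0.05.
    miles = 0.0
    while exp(-baseline_rate * miles) > 0.05:
        miles += 1e6
    print(f"~{miles / 1e6:.0f} million zero-death miles needed")   # ~258

Any observed deaths push the requirement far higher, and none of this corrects for road type, weather, or vehicle age.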
Not to mention the market Tesla courts self-selects for safer driving in general (rich and informed).
To act like this is a clear-cut case of the trolley dilemma, and that Tesla is making a moral choice that saves lives, is misinformed bordering on disingenuous.
There is a huge amount of indication that Tesla is acting both irresponsibly (refusing to admit fault when it's clear fault exists, refusing to address concerns, only answering with deflections) and recklessly (misleading marketing about fully capable self-driving hardware, calling a system that requires constant, careful manual attention "autopilot", enabling it by default, absolutely refusing to back down even a little).
That specific note is available on pages 10-11 of the linked study.
 - https://www.scribd.com/embeds/337007075/content?start_page=1...
AEB is clearly unreliable, as it didn't automatically slow the car in any of these cases where the car was pointed directly at a significant-sized stationary solid object.
I did worry about this sort of thing back when they mentioned nuisance trips of the radar-based AEB system, and that to deal with them they were basically just geotagging where these nuisance trips happened and disabling the system near those points. I'd like to know if AEB was actually active at the location where the fatality occurred.
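If the geotagging approach is really as described, it would amount to something like this (entirely my guess at the mechanism; the coordinates, radius, and distance math are invented for illustration):

    from math import hypot

    SUPPRESS_RADIUS_M = 50.0                  # invented radius
    nuisance_sites = [(37.3861, -122.0839)]   # example coordinate

    def aeb_enabled(lat, lon):
        # Crude flat-earth distance; close enough at this scale.
        for slat, slon in nuisance_sites:
            dist = hypot((lat - slat) * 111_000, (lon - slon) * 88_000)
            if dist < SUPPRESS_RADIUS_M:
                return False   # AEB suppressed near a known false-positive site
        return True

    print(aeb_enabled(37.38605, -122.08390))   # near a flagged site -> False
    print(aeb_enabled(40.0, -75.0))            # elsewhere           -> True

The obvious failure mode: a real obstacle that turns up inside a suppressed zone gets no automatic braking at all, which is exactly the question about this crash site.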
Yes, AEB fails, but a driver + AEB is safer than a driver alone.
They do, and I'm largely glad they do, but I think you can go too far the other way: delaying advances that can save lives can result in more lives being lost because of the delay.
There has to be a middle point, you want new drugs (for example) reaching the market but with the risk minimised to a degree that is acceptable, you simply can't reduce the risk to 0.
All that said I'd be fascinated to read the assurance approaches that Tesla uses before pushing out updates to what is essentially a computer on wheels attached to batteries.
Is this a daily thing? Weekly? Forced-updating software is almost always bad, but it would be a nightmare in my car. "Good morning! The buttons on your steering wheel have been remapped, and the auto-nav now has a different set of quirks. Have a nice day, and we are not liable!"
Whether or not it's a full self-driving mode, or if it's an assisted auto-pilot, or even if it was just some lame GPS navigation, is completely irrelevant. The bottom line is: there was an issue that was reported, Tesla denied it, the man died because of that issue, and now Tesla is contradicting themselves by saying there is an issue, but are deflecting blame.
At this point, Tesla is eroding the ability for anyone to put trust in their cars. What if you have a Model X and one day it unsafely opens the doors while you are driving on the highway, so you take it to the dealer but they say "nope, can't replicate the issue, your car is safe to use". Are you going to believe them? Why, when they've just confirmed that they have a track record of being wrong?
Oh wait, that already happened: https://www.reddit.com/r/teslamotors/comments/6giiao/my_mode...
As for the relevance: sure I can. My post about Tesla's contradiction has no relevance on what kind of autopilot it was. Tesla denied there was an issue, contradicted themselves by then saying there is an issue, and now is casting blame on the user for that issue. That's not acceptable. It would be unacceptable if we were talking about windshield wipers, or tires, or door locks, or the fucking clock on the dashboard. It is all about Tesla's denial of the issue and contradiction of themselves, and the type of autopilot is completely irrelevant.
He went to them multiple times and they couldn't reproduce the problem. Sounds fairly standard. I'd be surprised if you can cite them saying there is definitely no problem.
Tesla doesn't set preconditions on where to use their autopilot. How many Teslas discover a new 'problematic' strip of road each day? Is every first Tesla to drive down a new road to become a crash-test dummy when it turns out the autopilot just can't handle the next bend in the road?
It's absurd to think so. The product is not fit for purpose. I'm confident the law would agree.
This doesn't hold up if you refer back to the comment that states he contacted them 7-10 times.
But if the car consistently swerves toward the barrier in certain circumstances, doesn't that increase the probability of a crash, because it programmatically increases the risk of an accident in an otherwise safe situation? And then isn't Tesla culpable for not treating the bug report appropriately?
As a beta user of bank software where clicking "transfer money" ended up sending twice as much, wouldn't it be my fault if I did it 7 more times when I had acknowledged its behavior, even if it wasn't as advertised?
I realize my examples are hard to compare to this specific situation, but the software is opt in and is littered with disclaimers because it’s not a level 3+ autonomous vehicle.
To quote Tesla's statement: "We empathize with Mr. Huang's family, (...), but the false impression that Autopilot is unsafe will cause harm to others on the road." And the last sentence is by far the worst: "The reason that other families are not on TV is because their loved ones are still alive."
Seriously, the most urgent thing for Tesla to do is maybe to fire the head of PR. How can anybody in their right mind have vetted this last sentence?
Agreed. Unfortunately, Elon himself probably wrote that.
Bad example. "Hey, this is broken, it did this." You say. They say "Nope, no problem/ fixed." You do it again. "Still broken!" "All good!". Again...
Tesla would argue that the proximate cause was that the driver was not paying attention.
> And now Tesla is saying that there is an issue, and that it's his fault for using the system.
Tesla is not saying that it's his fault for using the system. They are saying it is not Tesla's fault if the driver wasn't paying attention, despite being presented with repeated warnings closely preceding the impact.
If my toaster causes me serious injury because I used it in the bath, that is my problem.
A toaster is not normally used in the bath, so that does not create product liability issues. A car is normally used for driving...in fact, that's a car's primary use...so product liability would attach to its driving-related functional failure.
No, they don't, otherwise they could disclaim everything.
I am not a lawyer, but it is not clear to me just yet that the product failed. We don't know what the investigation will show.
It's not clear to me whether it was a design defect or a manufacturing defect, but it's pretty clear that the lane keeping system drove the vehicle into a barrier which was not, in fact, located in a traffic lane, killing the occupant, which is pretty obviously a failure.
Reflecting on your reply and gamblor956's post below, and after reviewing the published photos, I concede this looks like the most likely scenario, and if that is what actually occurred, it would unequivocally be a failure.
(That said, there's still a chance, however small, that whatever seems obvious now ends up being invalidated by the investigation. I just don't like feeling certain about things like this, especially from a distance.)
Everybody knows, and is alerted to the fact, that self-driving technology is not there yet. The driver knew that.
There's something fundamentally wrong in releasing that to the public, then. Your vehicle safety authority should be VERY concerned about this; people getting killed aren't "bugs to iron out".
Words have consequences and in this case deadly consequences.
If things worked the way you want them to, no car company would ever say anything to any customer other than bland statements drafted by lawyers.
If the driver had used it according to the instructions (with hands on the wheel, while driving the car, watching the road as a driver always should), the crash would not have occurred.
Tesla is between a rock and a hard place, they need to make it crystal clear that the car performed as expected. The message is not intended for the deceased driver’s family but for everyone listening, including everyone who owns a Tesla.
It’s not an “autopilot”, it’s a lane assist. I see this as a naming failure more than anything else.
That hasn't even been definitively established yet (regardless of what Tesla says), as the multiple videos of post-update Teslas steering directly toward crash barriers demonstrate.
Let me put it very clearly for you: it is not acceptable, in any shape or manner, to expect your users to have to "save themselves" from your product killing them every 5 seconds. If your product requires that, it is a bad product.
Furthermore, your entire argument of "it's just a level 2 system..." falls apart when you consider that Tesla jumps through several marketing hoops to deceive people and make them believe it isn't just a level 2 system. When you visit Tesla's webpage on autopilot, the first and only thing you see at the top of the page is the following words in big, bold text:
>Full Self-Driving Hardware on All Cars
>All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
It is not until you get to nearly the very bottom of the page that Tesla displays a disclaimer that "when we say 'full self-driving', we don't actually mean full self-driving, because that would be illegal and isn't available yet." There is only one single sentence, buried within a paragraph mid-page, that says drivers must remain alert and prepare to take over. The rest of the page is also filled with language indicating that autopilot is capable of turning itself, switching lanes itself, speeding itself up/slowing itself down, etc.
It is not unreasonable whatsoever for someone to read this webpage and assume that autopilot will magically take them wherever they need to go. This webpage is in stark contrast to the reality, where autopilot will actively steer you into a concrete barrier and kill you if you do not babysit it. And to make things worse, Tesla apparently considers this to be a "feature".
According to your Twitter, you own a Tesla, so it seems you have a little bit of fanboy-ist bias going on here. I'd suggest you take off the Tesla fanboy ballcap and consider how you would feel if this were a different product. If someone sold you a smartphone, but told you "this phone never requires charging because it has a mini nuclear reactor in it. Be careful though, because if you don't keep your eyes on the phone at all times and press the 'please don't explode and kill me' button every 5 minutes, you'll die!" would you be defending that phone in the same way you're defending Tesla?
Also, have you used any adaptive cruise control system? Because they break your rule.
Yes it is. Otherwise you're asking all car manufacturers to stop selling cars. "I have to constantly keep my car between the lines or I'll cause an accident" is an acceptable standard. That's how every car on the road works today. The liability is on the user to operate it properly.
You accuse the poster of fanboy-ism, but you don't realize how much Tesla tells the driver that they need to pay attention and keep their hands on the steering wheel while they're using autopilot. So maybe your lack of experience with the car makes you pull those baseless arguments.
While I agree Tesla is deceptive with their wording and the way they sell their autopilot ("have the hardware needed for full self-driving capability"), it's not legally wrong. They have the hardware for it (according to them), but at the moment, the software they have is a glorified lane assist. They specifically mention in that same page:
> Every driver is responsible for remaining alert and active when using Autopilot, and must be prepared to take action at any time.
No, it is not. No other car on the market today actively steers you toward an obstacle. Every non-autopilot car would require the driver (or another human actor, such as another driver) to first put themselves in a dangerous situation before requiring they get themselves out of it. Tesla autopilot, on the other hand, puts the driver in a dangerous situation on its own. That is the crucial, fundamental difference.
I can only name one.
Automatic cruise control is also not expected to be perfect at slamming on the brakes to avoid rear-enders. The driver is responsible.
But for the current kind of systems, what I would really like is something that takes evasive action, not something that drives for me. I know some non-autopilot cars have those systems in place, but I'm not sure what more a Tesla would give me in protection.
I guess I need to take one for a spin one of these days.
I believe Tesla should take responsibility for this. But I think they fear it could kill their entire company if they made a public announcement accepting fault on safety.
All in all, it's not right: not long ago they were claiming to have the highest safety ratings, and now they're in denial over a death.
What I hope now is that a fresh new enterprise comes along that prioritises range, and can produce serious electric alternatives to conventional buses and scooters (e.g. Vespa) too.
https://www.youtube.com/watch?v=WX0bR_EQ47E Tesla Model X with AP2.5, SW 2018.6.1
https://www.youtube.com/watch?v=6zK2Om8Q0IA MS 75, AP1, SW 2018.6.1
1. https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil.... "works for 6 months with zero issues. Then one Friday night you get an update. Everything works that weekend, and on your way to work on Monday. Then, 18 minutes into your commute home, it drives straight at a barrier at 60 MPH."
You can see the Tesla hugs the left lane divider. When the lanes split, the line doesn't; it continues into the opposite lane.
Regardless, that's a totally common occurrence on the road. It seems absurd that the car will trust a painted line over any type of radar or imagery showing a rapidly approaching wall...
EDIT: watch both videos again --
Notice in the video where the car DOESN'T head towards the divider, the Tesla autopilot dashboard shows that it can see the car ahead of it.
In the one where it DOES head towards the divider, it doesn't have a car in front of it to follow.
I am wondering if, in the first instance, the Tesla was confused by the lines but saw a car ahead which cleared up its confusion (maybe it weights trailing another car as better than following lines).
In the other instance, all it had to work with was the lines (and all the radar and other stuff it has that should avoid all this...).
Sorta ironic. In a sense autopilot is trusting/deferring to the judgment of the driver of the car in front of it for course determination.
There's a car ahead of us. It took Path A and didn't crash into a deadly wall.
Therefore, if we also take Path A, we won't crash into a deadly wall, regardless of what any paint on the ground says.
but damn, tighten this stuff up when people's lives are on the line, please.
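If that reading of the two videos is right, the decision logic would amount to something like this toy sketch (pure speculation on my part, not Tesla's code; the weight is invented):

    # Speculative sketch of the "trust the lead car over the paint" idea:
    # each path is a short list of lateral offsets (meters) ahead of the car.
    def choose_path(lane_line_path, lead_car_path):
        if lead_car_path is not None:
            w = 0.8   # invented: how much to trust the lead car vs. the lines
            return [w * lead + (1 - w) * line
                    for lead, line in zip(lead_car_path, lane_line_path)]
        return lane_line_path   # no lead car: painted lines are the only signal

    lines_into_barrier = [0.0, 0.2, 0.5]   # divider paint drifting left
    lead_car_straight  = [0.0, 0.0, 0.0]   # car ahead goes straight

    print(choose_path(lines_into_barrier, lead_car_straight))  # mostly straight
    print(choose_path(lines_into_barrier, None))               # follows the paint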
What I don't get is why the radar doesn't pick up the barrier itself. It should have been getting a nice, healthy return signal from it, lots of metal at all kinds of angles. What I'm saying is, this is not a stealthy barrier.
All of these radar systems seem to have problems with stationary objects https://www.wired.com/story/tesla-autopilot-why-crash-radar/
I'm having trouble understanding why that would be the case, or why anyone wouldn't want the radar active when going down the highway.
It is not as if it doesn't work at highway speed...
It is likely that they're using at least 2-D radar, so the system should be able to discriminate between an object directly in the path of travel, and one that is off to the side.
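A toy illustration of both points, the stationary-object filtering described in the Wired link above and the in-path discrimination (all numbers invented): a return whose closing speed matches the car's own speed looks like the stationary world (road, overhead signs, bridges), so it tends to get filtered out, while a 2-D radar can at least discard returns that are off to the side.

    EGO_SPEED_MPS = 30.0
    STATIONARY_TOL = 1.0     # m/s (invented tolerance)

    def keep_return(closing_speed_mps, bearing_deg):
        stationary = abs(closing_speed_mps - EGO_SPEED_MPS) < STATIONARY_TOL
        in_path = abs(bearing_deg) < 5.0   # 2-D radar: ignore off-path returns
        # Fatal trade-off: dropping stationary returns also drops a barrier
        # sitting dead ahead in the travel path.
        return in_path and not stationary

    print(keep_return(29.8, 0.0))   # stationary barrier dead ahead -> False (filtered!)
    print(keep_return(12.0, 1.5))   # slower-moving car ahead       -> True  (kept)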
Additional limitations are listed on page 96.
"Automatic Emergency Braking operates only when driving between approximately 7 mph (10 km/h) and 90 mph (150 km/h)." -- Model S Manual (you can find it online)
Edit: To clarify, in most of these crashes, it's usually a combination of Tesla's fault and the driver's fault. Which is to say, the driver could have avoided the crash, but Tesla could also have avoided it, and the driver's biggest fault lies in trusting the "autopilot" more than they should. In the case of this particular crash, I certainly agree that the driver should have been paying more attention to the road, especially because they already knew the autopilot had issues at that particular point, and if they were paying attention they could have avoided the crash. However, the fact that autopilot can't handle that barrier correctly is a problem especially because the driver reported that exact issue to Tesla multiple times in the past (heck, Tesla probably should have blacklisted that particular GPS location to force the user to take control when approaching that point, if they can't handle the barrier correctly in autopilot). Similarly, Tesla allows drivers the most freedom to ignore the road of any of the "autopilot"-like systems, and they continue to call their system "autopilot"¹, both of which only serves to make this sort of crash much more likely.
¹Yes I know it's technically correct, but it doesn't match what the general public thinks when they hear the term "autopilot". It gives the wrong impression that the system is smarter and more reliable than it actually is.
> Full Self-Driving Hardware on All Cars
> All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
Tesla thinks they'll have full self-driving with just cameras, but I'm not convinced. Regardless of how Level 4+ autonomy is achieved, there's still some serious work that needs to happen on the software side.
Given the behaviour of customers (putting down lump sums of money without knowing when they'll receive their car, camping out in front of stores), dedicated online communities (a 200k-user subreddit for a car company?) and so forth, I'm going to go with yes, and it seems to border on the cult-ish.
There is a cult of personality around Musk and Tesla that is simply irrational
Edit: Instead of downvoting me, can the downvoter please point out where the trolley problem involves a human in the loop? This is a level 2 system, where a human driver has to remain alert and ready to take over. That's not the trolley problem.
Sure, autodriving is entering the uncanny valley, and that's different. Lots of things are different about it. Instead of prosecuting one person for causing an accident, now a whole line of vehicles is 'at fault' and may get sidelined. The good news there is, they can all be fixed at once.
It'll end up more about the law and insurance implications, than what we feel about it.
"This speaker's hardware is ready for stereo sound. The speaker is not capable of playing stereo music in general."
"This smartphone's hardware is ready for multitouch input. The phone is not capable of responding to multitouch input in general."
I think there is a reason everyone else is using Lidar.
Blaming the user and pointing to the ToS is their fig leaf against liability. When autopilot is used as required by their ToS, it is useless. When it is used as advertised, it is dangerous. They know better, and they don't give a damn. First to market, and all that.
Really? Having an autopilot system with an incredibly fast reaction time, able to brake/swerve/avoid accidents faster than humans, is "useless"?
Promoting the system with the name 'Autopilot' is, to my mind, quite a bit less than extremely clear communication. This name provides Tesla's critics with a constant line of attack, which will continue right up until they reach level 5. I consider myself a hardcore Tesla fan but I think they've botched their messaging, and this latest statement isn't helping.
> ... but people keep getting confused about it, so they keep doubling down on their message.
Well, that's exactly the problem, isn't it: people keep getting confused. If your message isn't getting through, instead of just yelling it louder you could try eliminating all ambiguity, for instance by changing the name. If this were marketed as simply 'adaptive cruise control with steering', nobody could claim they were misled.
Is it, though? Autopilot has never in the history of those systems meant hands off controls, attention off driving. Pilots with their autopilots on in airplanes are still flying and watching gauges.
You are correct about the history of autopilot systems in planes, but when it comes to mass communication technically correct is not necessarily the best kind of correct. I wouldn't assume most people are aware of how autopilot systems work in planes.
Edit: just to make it clear, I don't believe this accident was caused by any confusion about the name. Rather, as pointed out by others in this discussion, I think the fact that the system works great for long stretches (some say they never experienced any problems) makes it exceedingly hard for drivers to remain alert. The brain's optimization tendencies are so strong that I think systems relying on people to stay alert when the action-effect-evaluation feedback loop is broken are inherently unsafe. My point is that with suboptimal messaging, Tesla is making it harder for themselves to keep the public on their side when tragedies like this occur.
It remains an issue, and a very well-explored one that is extremely relevant to car automation. https://www.newyorker.com/science/maria-konnikova/hazards-au...
No, not at all; the discussion I was responding to was specifically about Tesla's messaging (see my other comments elsewhere in this thread).
> Oct '16 video on https://www.tesla.com/autopilot says driver is only there for legal reasons
> Yes, safety should improve significantly due to autonomy features, even if regs disallow no driver present
-- 23 Jan 2017
That may come back to haunt them. For those not clicking it, the above link goes to a video showing the Autopilot in action. The white-on-black, full-screen text prefacing the demo is not doing them any favours in the debate about their messaging (all-caps are in the original):
"THE PERSON IN THE DRIVER'S SEAT IS ONLY THERE FOR LEGAL REASONS."
"HE IS NOT DOING ANYTHING. THE CAR IS DRIVING ITSELF."
Ads like those explicitly include 'do not attempt' disclaimers; this video does not. The statement it does include only emphasizes how advanced the system is.
> so when they say the driver wasn't necessary for the demo I don't personally extrapolate that to say anything about real life.
You do not, but the question is how many jurors would see it the same way when a highly skilled lawyer argues that this video is misleading, as part of his efforts to hurt Tesla in a liability suit.
As a safety matter, I am pretty confident that nobody gets through configuration and delivery (let alone a test drive) thinking this is what their car does when they turn on Autopilot.
To call the Tesla Model S an "autonomous vehicle" simply because it has a feature called autopilot is disingenuous.
But in my view, the name "autopilot" itself describes a device for pilots that provides a very similar level of partial assistance to Tesla's current feature. It is accurate. The problem arises when it's sold beyond its traditional meaning.
I strongly suspect that is a small minority of buyers.
A reasonable person would not take this to mean that the car is driverless.
So, which is it, Tesla? Is hands off wheel allowed, or not allowed? If it's not allowed, why do you program your sensors to tolerate the situation?
It used to allow you _fifteen minutes_...
This is something they really ought to do. They should also proactively blacklist places where other drivers commonly disengage Autopilot, because there might be something tricky there that it can't handle.
> a person harmed, injured, or killed as a result of a crime, accident, or other event or action.
Whether they call their system "Autopilot" is immaterial. You are the responsible party as the driver, and this is made clear both at purchase time and during vehicle use.
If you don’t agree, do not engage the safety system. As the operator, you have final authority.
A recent example is the ignition switch recall that old GM had. It took operator action to trigger the fault, but GM still repaired the faulty cars and paid liability claims.
Expect Tesla to pay a massive fine as a result of this. Assuming, of course, that they survive the lawsuit and the loss of buyers...
The fault existing is why they paid claims to the people that the fault killed, and why they did a recall.
And faults in safety systems can leave their manufacturers liable, just like faults in any other system. The fact that it is a safety system doesn't mean anything changes.
> If you don’t agree, do not engage the safety system. As the operator, you have final authority.
"Don't like, don't use" has never been an acceptable answer in discussions of product safety-- in neither the court of law nor the court of public opinion-- and that's not going to change now.
People suck at remaining alert and attentive when doing boring shit. So when autopilot was all over the road and screwing up every 30 seconds, it wasn't much of a problem. Now that it can navigate highways for an hour or more without needing a safety critical intervention, we have a problem.
Additionally, I'm not a professional driver and I don't have any autopilot on my car, but I've definitely briefly fallen asleep while driving late at night. What do you think rumble strips are for?
Such an extremely low number of (but extremely highly publicized) fatalities is not reason enough to say autopilot is a net negative. In fact, the extreme media focus that every fatality during autopilot causes should goad improvement of autopilot safety until it's far better than human drivers.
(And let's not pretend that this autopilot error doesn't also happen with human drivers... The only reason this failure was fatal is because less than two weeks earlier, a human driver crashed into the same barrier--likely due to the same confusion--and destroyed its effectiveness since the highway department was slow to reset it.)
There is no path from Autopilot as it is to full autonomy without doing a major shift in strategy. Google figured out very early on that the incremental approach is unfeasible. Most of the other big car companies at the outset were intending to develop autonomy incrementally as well, but then they thought about it for a few minutes.
As the technology improves, conditions can be relaxed for full self-driving.
The very fact that in this case, the driver had multiple problems with this same spot, and that other drivers also noticed problems in this area, points to an obvious mitigation strategy: flag problem areas geographically for autopilot, develop specific strategies that address the problem, and verify it has been fixed by examining how other Teslas on the road in "ghost" mode (i.e. with the human driving but the software pretending to self-drive) respond to the changes.
Tesla, due to its massive and well-connected fleet, has a lot of tools in the toolbox to address these problems, in some cases more than Google (although I agree Waymo's self-driving capability is currently more impressive).
This is what I have been thinking as well, except it appears that Tesla doesn't currently have a way to handle such scenarios.
Ideally, they should be recording when a driver makes a sharp swerve to correct autopilot's naïve lane marker following algorithm, then use that information to inform future drives using autopilot on that stretch by disabling autopilot as that patch of road is approached, while they work on a reliable fix.
Considering they have enough data to do realtime traffic analysis, they easily should have enough data to determine problem areas for the autopilot.
However, Tesla calling it "Autopilot" should no longer be allowed. It's clear that it is most certainly not that, and customers are not treating this feature in a careful and cautious fashion, and I do believe that's due to Tesla overselling Autopilot's abilities. In addition, its warning systems are obviously not enough to get drivers to take corrective action and ensure their hands are on the wheel. (Why you can take your hands off the wheel at all when engaging autopilot is a mystery to me.)
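To make the geofenced-disengagement idea from the comments above concrete, here's a minimal sketch. Everything in it is hypothetical: the flagged coordinates, the threshold radius, and the names are illustrative, not anything Tesla actually ships:

    import math

    # Hypothetical list of GPS points flagged because drivers made sharp
    # corrective swerves there while Autopilot was engaged (or reported the
    # spot as a known trouble area).
    PROBLEM_SPOTS = [(37.4107, -122.0774)]  # illustrative coordinates only

    def distance_m(a, b):
        """Approximate great-circle (haversine) distance in meters between
        two (lat, lon) points."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6_371_000 * math.asin(math.sqrt(h))

    def should_force_handoff(position, radius_m=300.0):
        """True when the car is approaching a flagged spot, in which case the
        system should warn the driver and disengage well before reaching it,
        instead of guessing at the lane markings."""
        return any(distance_m(position, spot) <= radius_m for spot in PROBLEM_SPOTS)

The interesting engineering would be in populating PROBLEM_SPOTS from fleet telemetry (sharp steering corrections during engagement are a strong signal) and in retiring entries once a fix has been verified in "ghost" mode.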
Blaming the victim certainly doesn't sit well with me.
They refuse to acknowledge a bug in THEIR software, which drove his car into a concrete barrier, and pin all the blame on him. They even go so far as to tease him and ask why he would even use their product in that area (which Tesla allows).
It must feel so, so, so sad to have your loved one die and then to read this heartless BS from Tesla.
I could use your exact same argument to say Ford isn't doing enough to force people to put on their seat belts. It's a silly argument. The car provides seat belts and alarms when you fail to put one on. At that point, any harm is user error. That's the exact same case here.
Yes. When the alternative is having a crash, they should find a way to stop. [edit: of course I don't mean to suddenly stop. But if the system can't trust that the driver is alert, it should disengage or stop. As we've seen now multiple times, if it continues, it will result in a crash.]
The current behavior is not only dangerous to the driver of a Tesla. It puts everyone on the road at risk, most of whom have no contract with Tesla and never agreed to be guinea pigs in their so-called "autonomous" experiment.
There are two possibilities:
1) The driver is actually incapacitated. Then slowing down and stopping is a million times safer than continuing to drive.
2) The driver was looking at his mobile phone. Then he'll probably quickly react and take control of the car again.
I don't see the scenario where continuing to drive is safe when the car knows the driver isn't paying attention.
This is not specific to Tesla.
Trivial example: A coworker has a car that has blind spot detection. She knows the manufacturer says "you should still check, it is not 100% accurate". And she admitted that despite knowing this, she does not check her blind spots and relies solely on blind spot detection.
More serious example:
I have a car that has adaptive cruise control. You can get on the highway, set the CC to 60 mph, but if traffic slows down, it automatically slows down and maintains a distance to the next car. It speeds up to 60 mph when the traffic picks up again.
I love it, but for the inexperienced driver, this is dangerous. It requires one to be more vigilant than regular CC. Every so often (rare, but has happened a few times), it will not detect that the vehicle in front of me is a vehicle, and will not slow down. The more you rely on it, the more vigilant you need to be. My wife has never used CC before, and this would be too much for her to handle, compared to simpler CC.
It's a general problem. Tesla is just a convenient target.
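For readers who haven't used one: the adaptive cruise control behavior described above boils down to a simple rule of tracking the slower of "what the driver asked for" and "what the gap to the car ahead allows." A toy sketch, with hypothetical names and thresholds (real ACC uses radar tracking and smooth control, not a step function):

    def acc_target_speed(set_speed_mph, lead_detected, lead_speed_mph, gap_m,
                         min_gap_m=40.0):
        """Toy adaptive-cruise-control policy, for illustration only.

        Returns the speed the car should aim for: the driver's set speed,
        unless a detected lead vehicle forces us to slow down to keep distance.
        """
        if not lead_detected:
            # The dangerous case described above: if the sensor fails to
            # classify the vehicle ahead as a vehicle, the car just holds
            # its set speed.
            return set_speed_mph
        if gap_m < min_gap_m:
            return min(set_speed_mph, lead_speed_mph)
        return set_speed_mph

The first branch is exactly why it demands more vigilance than plain cruise control: the failure mode is silent, and the car behaves as if the road were clear.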
I don't know anything about your wife, but that does sound perhaps a touch condescending. Besides that I think you make a very good point, one that many people seem to want to wish away.
That's probably because you don't know anything about my wife :-)
This isn't about my wife. I would never recommend that anyone learn CC on this type of system. They should learn CC on a regular car, and once they're comfortable with that, switch to something like this. You simply have to be more alert with a "safe" CC than a regular one.
No, only every ~ 1 minute.
>What the hell are they going to do if you take your hands off the wheel
Detect the barrier and apply automatic emergency braking, and disengage Autopilot like GM's Super Cruise does if it detects your face is not looking at the road.
Meh, autopilots on boats don't do anything more than steer a straight compass course.
Many marine autopilots integrate with AIS, sonar, and charts for accident avoidance and alerts. For example, on some larger ships the system can calculate whether, at the current speed, a collision would occur, and reduce speed to avoid it. Some can do "port to port" auto-steer, taking the vessel through narrow channels.
I would have never guessed "hands on wheel" is detected only if the driver yanks the wheel. I figured it was camera based or that the wheel had capacitive sensors.
Still very new and only on a single model so far, but it's at least a step in the right direction.
I think that Tesla's Autopilot function is comparable to autopilot in planes. That's probably how they will justify it. However, there might be a legitimate point in that people don't have that kind of understanding of what "Autopilot" means.
So even if the "capabilities" may be comparable between the two systems, the amount of supervision they require is not, so I don't think they should use the term. "Lane-keeping cruise control" may not be nearly as sexy, but it doesn't establish unrealistic expectations that get people killed.
For unmanned aircraft, autoland is a safety feature. After loss of link, an autopilot might be configured to land at a designated location rather than just circling until it runs out of fuel and drops out of the sky. Admittedly, though, crashing in a safe place is less serious when the vehicle has no people in it.
The Tesla autopilot needs much closer supervision than any of the autopilots I've worked with. That's more a result of its domain than of its sophistication, but it still makes me very uncomfortable with the name.
I agree, Tesla shouldn't call it autopilot, call it Highway Assistance or something.
At this point, I don't even see why it should exist if it fails in basic situations.
Additionally... autopilot in planes is hands-off. The pilots look at the gauges, but they're not actively maintaining their grip on the controls.
The first Cat III autoland aircraft was the Shorts Belfast freighter. The Sud Caravelle and HS Trident airliners started at lower categories and worked up to III.
"We are very sorry for the family's loss.
According to the family, Mr. Huang was well aware that Autopilot was not perfect and, specifically, he told them it was not reliable in that exact location, yet he nonetheless engaged Autopilot at that location. The crash happened on a clear day with several hundred feet of visibility ahead, which means that the only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, despite the car providing multiple warnings to do so.
The fundamental premise of both moral and legal liability is a broken promise, and there was none here. Tesla is extremely clear that Autopilot requires the driver to be alert and have hands on the wheel. This reminder is made every single time Autopilot is engaged. If the system detects that hands are not on, it provides visual and auditory alerts. This happened several times on Mr. Huang's drive that day.
We empathize with Mr. Huang's family, who are understandably facing loss and grief, but the false impression that Autopilot is unsafe will cause harm to others on the road. NHTSA found that even the early version of Tesla Autopilot resulted in 40% fewer crashes and it has improved substantially since then. The reason that other families are not on TV is because their loved ones are still alive."
I mean... clearly (hopefully) this isn't the attitude being exhibited inside the company. We all know there's a big strike team or whatnot that's been assembled to figure out exactly what happened, and how. We all know that it's probably a preventable failure. So that's what we need to see, Elon. Not this excuse making.
No, it doesn't just "sound like", it's actually
> extremely clear that Autopilot requires the driver to be alert and have hands on the wheel. This reminder is made every single time Autopilot is engaged. If the system detects that hands are not on, it provides visual and auditory alerts.
Clearly that's not the case, because if it "required" it the car would cease to function if you took your hands off the wheel. Instead, the Tesla continues to drive itself for some time and then after a few seconds warns you, but that's about it. If you want to see what a real requirement looks like, go explore any of the other car makers that have rolled out similar systems at this point. Those really, actually do require "the driver to be alert and have hands on the wheel."
Has everyone forgotten Tesla's marketing from when they first rolled this poorly implemented system out?
I'd think that Tesla is the very antithesis of a faceless corporation, for better or worse.
This is very misleading; he might have seen the barrier well in time, but the car could have steered towards it in the last second, like it does in several of the videos demonstrating this issue.
In the videos from others, the autopilot begins to move with the lane, but then crosses over into the barrier as it approaches. There's no way to anticipate that this is going to happen by simply looking ahead of you. There are no warning signs that something strange is about to occur. By the time you've processed that something is happening, it's well underway.
If you were to simulate this, I wonder how many people would be able to properly react? I think this would be an interesting experiment; sit people in driving simulators and have them drive around for 30 minutes or something and randomly introduce this. Even in the context of an experiment where people are likely to be more attentive due to the novelty of the experience, I'd expect to see quite slow reactions with fairly high frequency.
It sounds like Mobileye knows what it takes to make self-driving work given present technology, and Tesla didn't believe them and thought they were doing something so simple that their job could be automated.
Every AI system that works and makes important decisions has corner cases that are designed by humans. Turns out that computer programmers do a job that matters!
If the system detects that hands are not on the wheel, perhaps something else should be done to incentivize the driver to disengage Autopilot. Perhaps send warning alerts to law enforcement, insurance companies, or emergency contacts? I'm not sure what the fallback procedure is when Autopilot detects dangerous situations for extended periods of time; presumably disengaging completely would be worse, unless there's a safe way for Autopilot to pull over to the shoulder at the next available opportunity.
There's a very simple solution: Don't fucking include it in the car. Require the users to show up at the car dealership and sign a legal waiver to allow the activation of the new, insecure, and dangerous feature.
There are no cars on the road with airbags that sometimes work and sometimes cause 3rd degree burns. There are no seatbelts that only work if the road isn't shiny.
If you want to advertise the autopilot as a safety feature available to the general public, make sure it is one.
If it's not, and you advertise it as such, you're legally liable for all the deaths your lie will cause.
Personally, I'm technically knowledgeable enough, and I know enough about Tesla's history, to be wary of their products and never trust them. But what if you're the average consumer?
You would rightly expect any car automation system that comes pre-installed in your car and actually drives your god damn car to be safe enough to be there in the first place!
Tesla is just taking advantage of the lack of legislation and standards on autonomous driving. They're selling a defective and dangerous product. They should be held liable for it.
And I actually know a guy who fell asleep at the wheel, fell sideways into the passenger seat, then crashed into a tree. The steering wheel crushed the driver's seat where his chest had been. He would presumably have died if he had worn the seatbelt.
Of course, both seatbelts and airbags save many more lives than they take.
... What about the millions of cars being recalled right now for deadly faulty Takata airbags?
I believe that if that happens, it won't re-engage again until after the car is stopped, shut off, and re-started.
When you do this, it presents you with a window that outlines your liability, which you must accept.
Other car manufacturers use trained personnel for testing.
I mean this is silly and should be illegal.
> Tesla is extremely clear that Autopilot requires the driver to be alert and have hands on the wheel
> the false impression that Autopilot is unsafe will cause harm to others on the road
"Autopilot is not perfect and you need to continually monitor and second guess what it's doing.... but it's not unsafe!"
Sounds like some really mixed messages here.
I think anyone who's ever developed software knows how naive (to be charitable) that reasoning is.
I don't get why they continue down this path. Just rename the goddamn feature into something that actually makes sense.
Tesla key people must be in some kind of Elon Musk reality distortion field.
It should be called, ”I’m Feeling Lucky.”
“And really there’s a sweet spot you have to find where you use automation to relieve the pilot of some amount of workload.”
http://nationalpost.com/news/world/disaster-on-autopilot-how... Ten years ago airlines could have gone to 100% automation, but a hybrid approach has really proven safer.
Moreover, autopilot in planes is hands-free. The pilot is monitoring gauges but is not maintaining physical contact with the controls.
The reason airlines went with hybrid automation is that take-offs and landings are the most dangerous parts of the flight, and if there were any autopilot issues at those times there likely wouldn't be sufficient time for the pilots to take over and recover. Overseas, some airlines allow pilots to let the autopilot handle takeoff/landing in good weather.
Further, pilots are not alone in the cockpit. So even when one of the pilots is taking a nap, it still ends up as a safety benefit worth the cost of two or sometimes three pilots on a long trip.
And as the other commenter who brought up the sleeping pilots noted, in many cases both pilots fell asleep during the flight.
... and that sweetspot, for highway driving, looks to be around the 1998 level of having automatic transmission + cruise control.
As we've seen in two recent fatal accidents, a driver with too little to do is dangerous.
Tesla autopilot does seem to have net safety benefits, though the data is far from conclusive. Honestly, I expect insurance companies probably have the least biased data, but even then they may not have the same trade-offs in terms of damage vs. fatalities as we would like.
The basic principle of good usability is that whatever the average person/user is likely to do is what the system should be designed to handle.
As you say yourself, people somehow, super mysteriously, seem to think that the feature name Autopilot implies L5 driving. That's fine.
My point is that the only proper response to that kind of expectation-setting is to live up to L5 autonomous driving capabilities. As we both know, that is not yet available, so don't use that feature name until it actually is. Quite simple.
Even the flawed system seems to have a net safety benefit. It's one thing to say wait; it's another thing to say kill more people because I don't like the semantics being used.
If I take any 2018 model year car with front crash avoidance (outside of a Tesla :P) and try to drive it into a barrier like this, depending on speed, most avoid the collision, and all of them slow down (AFAICT; I went through the summaries by category on iihs.org).
None of them require you have your hands on the wheel to do that.
In most jurisdictions, to show a defective design it's enough to show the existence of an alternative design that would have worked (it doesn't usually even have to have been implemented), that it is technically and economically feasible, and that it would still perform the original function. Almost all of these cases live and die by the cost of that design and whether it's "too much".
Unfortunately for Tesla, there are tons of such designs here; they almost certainly all cost less than what Tesla is doing, and they all exist. So I'm positive any lawsuit will either settle, or they will show plenty of them would have worked by driving cars into an identically set-up barrier, and Tesla will lose.
(This assumes, of course, Tesla can't force them into a jurisdiction with weird defective design rules)
They are in complete denial that autosteering can seduce reasonable people into lowering their guard and mentally relaxing. Again, what other immediate user benefit would it provide? What would a reasonable person expect autopilot to do?
Isn’t it contradictory to offer an automation feature while claiming that legally it does ZERO automation for you?
Note: collision avoidance and emergency braking are on by default even without autopilot. “Autopilot” here really means the autosteering feature.
Additionally, I am very concerned about the OTA autopilot updates, which do not go through any government or third-party certification. In fact, I have read on Hacker News that Tesla will sometimes push code changes to cars on the same day.
Tell people explicitly not to use Autopilot. Tesla can still use the onboard cameras and everything else for training the AI.
But it’s nothing like a plane. Concrete walls don’t suddenly appear in the sky. The required reaction time to avoid an autopilot collision in some of these videos is seconds.
And in the car menu, "autopilot" is actually called "autosteering" as a feature to activate. And overall it is just a very fancy cruise control, which is able to keep the proper distance to cars in front of you and keep inside your lane.
I use Auto-Steer only when the road is empty and reasonably straight, and I am in the middle lane of a road with at least three lanes. Otherwise I don't use it. It might also be fine to use it in stop-and-go traffic on a divided highway, because there is no way you will die at those speeds even if you are in an accident.
But sadly, some people trust the car's automation with much more than it is capable of. This leads to accidents like these. And I would blame Tesla here. Tesla has clearly been misleading in marketing this feature. Not everyone is very savvy, and they sometimes take what a company says at face value. I have really smart friends who have seen the Autopilot page [https://www.tesla.com/autopilot] and think that Tesla already has these advanced features.
In the end, Tesla is just slightly more advanced than the autonomous features of other manufacturers. However, Tesla claims to be much more advanced than the others. Because it claims a lot more credit than it deserves, it's only fair that it also faces proportionate criticism when the system fails.
Normally, I am against unnecessary regulation. However, in this case I think it is important to bring in regulations around consistent messaging and third-party ratings for autonomous features from all manufacturers. We can't have "move fast and break things" kill people. It's not the same thing as a social app losing a few posts.
Tesla should rename its feature to "Drive Assist+" or whatever, because that is all it is.
On an ending note, I feel sad that the deceased continued to use Autopilot at this spot when he knew it had failed at the same spot multiple times. If someone well educated, tech savvy, and already aware of the problem can fall for this, I worry that a lot more people might do worse.
If future promises are made, they should come with renaming the product to highlight its actual capabilities.
Tesla's autonomous features have the brains of a two-year-old and the consistency of a machine, which is why there is some value to them.
From the recent accidents, it seems clear that Tesla's autopilot is just not that great in its current form. Even if it's following a line that it mistakes for a lane marker, how is it OK that it can't stop for a large obstacle in front of the car when given plenty of distance before a crash? How is this not a totally unacceptable type of failure for a system like this?
It's basically cruise control that has some lane-keeping abilities. If you put your (non-TACC) car in cruise control on the freeway and then ignored everything around you, you would rear-end someone, and it wouldn't be cruise control's fault. If you drove a lane-keeping assist car (a Tesla or others) and completely ignored the road, it would crash eventually. AutoPilot is cruise control with some lane-keeping abilities; it is not self-driving. You choose when to activate it (just like cruise control), and every single time you do, it tells you to pay attention and keep your hands on the wheel.
And this is exactly why calling it Autopilot is immoral and dangerous.
Wow, this sounds like a horrible, tasteless burn to end the statement with. And I am a Tesla fan.
Whoever was responsible for that line _really_ needs a little bit of empathy, especially when their product is partially at fault for causing this debacle in the first place.
I can't see why they couldn't say something like "We are proud of this record, but continue to strive to improve and save even more families from this pain" -- it still supports the product (which I'm not sure is the right path to take, but it's clearly the path they _are_ taking), but acknowledges the grief and avoids so directly/crassly comparing it to others' happiness.
It's also grammatically incorrect: "The reason ... is because". Just bizarre.
If I command my autopilot to fly me into terrain or into a thunderstorm, it will do so. If I command it to climb at a vertical speed that the engine power can’t support, it will fly the plane into an aerodynamic stall. If I’m flying on autopilot, I still have to pay close attention and still log pilot time as “sole manipulator of the controls”. If I crash on autopilot, it’s still overwhelmingly [human] “pilot error”, not “autopilot error”.
People hear autopilot and incorrectly assume it’s, “Hey George, fly me to LaGuardia and wake me upon landing.”
Unless you absolutely have to (as in the case of a doctor), you do not outsource life-threatening decisions to someone else. If you're tired and cannot drive, park and rest instead of watching movies while driving on Autopilot.
What I'm trying to say is that in the end it's your decision, and you have to live with the consequences.
Tesla markets their vehicles as self-driving cars that are safer than humans; this is dishonest.
It's because of the legal system and how easy it is to sue for stuff. It's easier to stick a label on than to fight a court case. The US leads the world in the number of court cases: https://tentmaker.org/Quotes/lawyers-per-capita.html
And why don't you do that, precisely? If you don't understand the physics of the thing, how are you supposed to know that using it on low power is different from bathing the cat in warm water, for example?
The rest of the world seems to get by okay here.
It’s not just your decision when those choices impact everyone who shares your environment. If you want to beta test features on a closed course, be my guest, but on public roads you need to think of people other than yourself.
The fact that the driver received disengagement warnings earlier in the drive is not very relevant to the accident per se. Might as well say the driver received multiple disengagement warnings during his ownership of the Tesla; it is not relevant to the actual circumstances of the accident. Note that they do not say "he received disengagement warnings right before the accident and chose not to respond". Tesla instead says that his hands were not detected on the wheel six seconds prior to the accident. Okay, so that's the case, but obviously the Autopilot system allows that to happen, as disengagement warnings were not active at that time; therefore THE SYSTEM WAS OPERATING WITHIN ITS ESTABLISHED BOUNDS.
This comes down to: the autopilot system drove into the barrier while the driver wasn't paying attention. He was operating the vehicle within the defined bounds of the semi-autonomous system, but violated the requirement to "pay attention to the road and be ready to take control". He did not receive any system warning before the accident; he simply chose not to correct the system's behavior before it happened. Everything else is just trying to kick up dust and obfuscate the facts of the situation, it seems.
The issue with Tesla's semi-autonomous system, and to a certain extent other systems, is that there is a poor boundary between system responsibilities and user responsibilities. This causes issues around the boundary where sometimes the autonomous system will accomplish something, but sometimes it can't and the user must. Because of this fuzzy boundary, I think it's almost more important to invest in disengagement logic, equipment, and programming than in pure self-driving technology. A good example of investment in this arena is GM's Super Cruise, which seems to have MUCH better user engagement detection and much sharper boundaries than systems like Tesla's.
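A sketch of what "sharper boundaries" can look like in practice: an escalation ladder from inattention to a controlled stop. The thresholds and names here are made up, and Super Cruise's actual (camera-based) logic is proprietary; this only shows the shape of the design:

    import enum

    class Response(enum.Enum):
        NONE = 0
        VISUAL_WARNING = 1
        AUDIBLE_WARNING = 2
        CONTROLLED_STOP = 3   # disengage and bring the car to a safe stop

    # Hypothetical thresholds: seconds of continuous driver inattention
    # (e.g. gaze off the road, as measured by a driver-facing camera).
    ESCALATION = [
        (4.0, Response.VISUAL_WARNING),
        (8.0, Response.AUDIBLE_WARNING),
        (15.0, Response.CONTROLLED_STOP),
    ]

    def respond(inattentive_seconds: float) -> Response:
        """Map how long the driver has been inattentive to an escalation step.
        The key property is the hard upper bound: the system never keeps
        driving indefinitely with an unengaged driver."""
        level = Response.NONE
        for threshold, response in ESCALATION:
            if inattentive_seconds >= threshold:
                level = response
        return level

The sharp boundary comes from the last rung: instead of continuing to drive past repeated ignored warnings, the system takes the ambiguity away from the driver entirely.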
The issue with their statements is that they have no problem throwing their customers under the bus. The proper release is simply "We are investigating the circumstances of this tragic event in cooperation with the NTSB. something something nice words about the family"
If pushed, refuse to discuss the issue while it's under investigation. This is a perfectly justifiable position, lots of groups don't make statements while an issue is under investigation; and you let the NTSB report throw the customer under the bus.
It's not nearly as definitive as people (and Tesla) seem to claim.
edit: The full analysis (from page 10 of the link):
>5.4 Crash rates. ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to [21] and after Autopilot installation. [22] Figure 11 shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles' crash rate dropped by almost 40 percent after Autosteer installation.
Further, the same NHTSA report notes that AEB reduces rear-end collisions by 40 percent. Supposedly, AEB and autosteer were enabled around the same time.
Poor people with bad cars, no education and unsafe driving practices probably have more accidents.
I don't think these stats mean much.
The statistic has nothing to do with people not driving a Tesla.
“...airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation.”
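For what it's worth, the arithmetic behind an "almost 40 percent" drop is simple. The counts below are invented purely to illustrate the method the report describes (airbag-deployment crashes per million miles, before vs. after Autosteer); they are not NHTSA's actual figures:

    # Invented numbers, for illustration only -- not NHTSA's actual data.
    before_crashes, before_miles = 100, 76_000_000
    after_crashes, after_miles = 80, 100_000_000

    rate_before = before_crashes / (before_miles / 1e6)  # ~1.32 per million miles
    rate_after = after_crashes / (after_miles / 1e6)     # 0.80 per million miles

    drop = 1 - rate_after / rate_before
    print(f"crash rate dropped {drop:.0%}")  # ~39%, i.e. "almost 40 percent"

Note what the before/after design does and doesn't control for: it compares the same cars (and largely the same drivers) over time, which answers the demographic objection above, but it can't separate Autosteer's effect from AEB's if both were enabled around the same time.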