Here's a good example - a Tesla on autopilot crashing into a temporary road barrier which required a lane change. This is a view from the dashcam of the vehicle behind the Tesla. At 00:21, things look normal. At 00:22, the Tesla should just be starting to turn to follow the lane and avoid the barrier, but it isn't. By 00:23, it's hit the wall. By the time the driver could have detected that failure, it was too late.
Big, solid, obvious orange obstacle. Freeway on a clear day. Tesla's system didn't detect it. By the time it was clear that the driver needed to take over, it was too late. This is why, as the head of Google's self-driving effort once said, partial self-driving "assistance" is inherently unsafe. Lane following assistance without good automatic braking kills.
This is the Tesla self-crashing car in action. Tesla fails at the basic task of self-driving - not hitting obstacles. If it doesn't look like the rear end of a car or truck, it gets hit. So far, one street sweeper, one fire truck, one disabled car, one crossing tractor trailer, and two freeway barriers have been hit. Those are the ones that got press attention. There are probably more incidents.
Automatic driving the Waymo way seems to be working. Automatic driving the Tesla way leaves a trail of blood and death. That is not an accident. It follows directly from Musk's decision to cut costs by trying to do the job with inadequate sensors and processing.
I have been thinking that too. At what point do you decide that the autopilot is making a mistake and take over? That's an almost impossible task to perform within the available time.
I can understand why this wouldn't happen from a business perspective, and it's also presumably not as simple to implement as I'm implying, but I can't think of a better way to get around the uncertainty of whether the car's operating in an expected way or not.
The "confidence meter" would almost certainly be 100% right up until it crashes into an obvious obstacle.
Yeah, that's basically what I'm asking for. Maybe a warning light at a certain threshold would be a better default; personally, I'd still find a number more trustworthy, though.
> The "confidence meter" would almost certainly be 100% right up until it crashes into an obvious obstacle
If image recognition algorithms have associated confidence levels, I'd be surprised if something more complicated, like road navigation, was 100% certain all the time.
Unless you design the network to specifically answer "how similar is this?" and pair it with a training set and the output of another executing neural net, a "confidence meter" isn't possible.
This is one of the trickier bits to wrap one's mind around with neural networks. The systems we train for specialized tasks have no concept of 'confidence' or being wrong. There is no meta-awareness that judges how well a task is being performed in real-time outside of the training environment.
Humans don't suffer from this issue (as much) due to the mind-bogglingly large and complex neural nets we have for dealing with the world. Every time you set yourself to practicing a task, you are putting a previously trained neural network through further training and evolution. You can recognize when you are doing it because the process takes conscious effort. You are not 'just doing <the task>'; you are 'doing <the task> and comparing your performance against an ideation or metric of how you WANT <the task> to be performed'. This process is, roughly, your prefrontal cortex and occipital lobe tuning your hippocampus, sensory cortex, and motor cortex to perfect that tennis swing.
When we train visual neural networks, we're talking about the level of intelligence supplied by the occipital lobe and hippocampus alone. Imagine, every time you hear about a neural net, a human who was lobotomized until they could perform ONLY that task with any reliability.
Kinda changes the comfort level with letting one of these take the wheel, doesn't it?
Neural nets are REALLY neat. Don't get me wrong. I love them. Unfortunately, the real world is also INCREDIBLY hard to safely operate in. Many little things that humans 'just do' are the results of our own brains 'hacking' together older neural nets with a higher level one.
Calibrating the probabilities of machine learning algorithms is an old problem. By the nature of discriminative algorithms and increasing model capacities, yes, training typically pushes outputs to one extreme. But a ton of information is still maintained, and it can be properly calibrated for downstream ingestion, which anyone actually trying to integrate these models into real applications should be doing.
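As a minimal, hypothetical sketch of what such calibration can look like, here is temperature scaling over made-up logits. The logits and the temperature value are invented for illustration; in practice the temperature is fitted on a held-out validation set.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with optional temperature scaling.
    T > 1 softens (de-saturates) the distribution; T = 1 is the raw model output."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from an over-confident classifier:
logits = [8.0, 1.0, 0.5]

raw = softmax(logits)            # near-saturated: top class ~99.9%
calibrated = softmax(logits, 4)  # softened: top class ~75%

print(max(raw), max(calibrated))
```

The raw output looks "almost certain" no matter what; only after calibration does the number start carrying usable information for a downstream consumer like a driver warning.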
If, for instance, it failed to detect an obstacle, how could it tell you that it failed to detect it? Or that it got the road markings wrong?
Given that the consequences for making a mistake might be people dying I certainly hope that's not how it works under the hood. Anything less than "I'm absolutely sure that I'm not currently speeding towards an obstacle" should automatically trigger a safety system that would cause the autopilot to disengage and give the control back to the driver, maybe engaging the emergency lights to let the other drivers know that something is amiss.
I wouldn't be surprised if in the moments preceding these crashes the autopilot algorithm was completely sure it was doing the right thing.
Of course these various crashes show that in fact the algorithm makes mistakes sometimes but I certainly hope that it's not aware that it's making mistakes, it just messes up while it thinks everything is going as planned.
Personally, having tried Tesla AP1, it was painfully clear that I can never fully trust any assisted-driving aid; they can fail at any point, and I need to be ready to take over immediately. In particular, having to wrestle the steering wheel from AP1 as it slightly resists is reason enough to stop using it.
I think I'd find it a simpler metric to follow than the more qualitative information about car behaviour, as well as more useful to decide whether the car or I should be in control.
You're quite right about timing, I have no idea whether this metric would plummet quickly or gracefully drop so the driver would have time to take charge once a threshold was hit.
I want to stress that this is an underbaked idea and really just a thought experiment about what would make me personally feel more comfortable with driving a semi-autonomous vehicle.
This should be a law.
#1 job of any self driving system is to avoid obstacles. If you can’t do that reliably, it’s a fail!
It's just not good enough. Semi-autonomous cars can't be hitting stationary objects without reacting at all. If they fail at this basic task, then they shouldn't be allowed on the road.
You probably have 100ms to begin your action, 500ms to complete it.
Good luck with that if you're not perfectly alert.
The Queensland government estimates it takes drivers 1.5 seconds to react to an emergency (I wouldn't be surprised if that was a 99th-percentile figure, of course).
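For scale, a quick back-of-the-envelope calculation of how far a car travels during that 1.5 seconds, before any braking even begins (the speeds are just illustrative):

```python
def reaction_distance_m(speed_kmh, reaction_s=1.5):
    """Distance covered during the driver's reaction time, in metres."""
    return speed_kmh / 3.6 * reaction_s  # km/h -> m/s, then times seconds

for speed in (60, 100, 110):
    print(f"{speed} km/h: {reaction_distance_m(speed):.0f} m before any braking starts")
```

At 100 km/h that's roughly 42 m covered before the driver even touches the pedal, which is why "take over when it's clearly failing" is often already too late.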
The only thing we can blame the victim for here is foolishly trusting Tesla's apparently crap software.
If you have to be constantly at this level of alertness, what is even the point of having an autopilot?
Your cognitive load is lowered by Autopilot enormously. You can pay better attention to what's going on around you and anticipate issues (say, a car that drives weirdly in front of you) because you're not busy with the low-level driving.
Part of that is that you see a difficult road stretch coming up and disengage the assist IN ADVANCE, because you're not a complete moron: you know the lane assist is for driving in highway lanes, and the thing you're approaching is decidedly not one.
If you over-automate something you end up having to spend more time supervising and fixing it than if you'd just done the thing yourself in the first place.
Auto-pilot will make driving a lot more like flying is for pilots: 99% extremely dull, 1% terrifying.
With a funny feeling like that, I'm ready to disengage immediately, and then 1s is plenty of time. But I wouldn't use Autopilot near an obstacle like that to begin with: anticipate, and take over for the weird stretches.
Regardless, you're saying that if you're not paying attention every single second of the trip, instantly ready to react to even the slightest deviation from your intended trajectory, then you're doing it wrong.
What, exactly, is the point of an auto-pilot system if you need to be in that state of absolute alertness?
Also, unless certified on certain roads with clear markers, drivers should be required to keep their hands on the wheel at all times.
This is a great point. With regular cars, drivers can reliably predict where the car will be after 5 seconds. However, with Tesla, there is really no way to know what the auto-driver will do, and so the driver has to stay even more attentive to ensure that it does not do something wrong. It's like driving a car that is not under your control, and may go in random directions.
Of course, if users get too used to such differences, then those can become 'features', which only makes progress harder.
That means the autopilot doesn't help at all since you would have to basically drive manually all the time anyway.
I would think it'd be seen as a "distraction", those suction cup mounts for example are illegal in a lot of states.
The purpose of the autopilot is to increase safety for drivers who are not attentive. However, it turns out that Tesla requires drivers to be even more attentive than regular vehicles do. This is not a sign of progress.
We can differ in opinions, but I strongly feel that the eventual utopia of self-driving cars will be much safer than the current world. And we are making progress on this daily. There will be a few unfortunate incidents, but in the long term a lot more lives will be saved.
Unless you're suggesting that Tesla is justified in actively making people less safe because it helps them develop self-driving technology faster?
In that case they should issue a recall, and not use real users as their test drivers. Once they get to their eventual goal, they can resume selling autopilot cars.
> We can differ in opinions, but I strongly feel that the eventual utopia of self-driving cars will be much safer than the current world. And we are making progress on this daily. There will be a few unfortunate incidents, but in the long term a lot more lives will be saved.
I totally agree with you that in the long term self-driving cars are much safer. What I do not agree with is Tesla's approach towards achieving that goal. They are selling an unsafe product while marketing it as much safer than the existing products. This is just plain fraud.
Sorry, but this is the exact wrong conclusion to make from the available data. Partially automated cars have been driven by average people on the public roads since 2006, when the Mercedes S class got active steering and ACC. There is no data that says that these systems, which are available on cars as affordable as e.g. a VW Polo, lead to more crashes. In fact the only data I have seen suggests that they reduce crashes and driver fatigue significantly.
The issue that such a feature is systematically crashing and even killing its users is very specific to Tesla. Tesla's system has had a tendency to run into stationary objects for as long as it has been available to the public. Teslas have a significantly higher rate of causing insurance claims. There are countless videos showing it happening, the two (possibly three) fatalities so far are just the tip of the iceberg. Tesla blamed MobilEye (their previous supplier of camera-based object detection hardware), but it seems fairly evident that Tesla's system has the exact same systemic issue even with their new in-house object detection software.
My opinion as an engineer working on this exact kind of system for one of Tesla's competitors is that Tesla's system just isn't robust enough to be allowed to be used like all other manufacturers' autosteer/ACC systems. IMO that's because their basic approach (doing simple line-following on the lane markings) is super naive. A much more intricate approach is needed, one that does some level of reasoning about what is a driving surface and what isn't. This has been known in the automotive industry for over a decade. Sorry to be so absolute, but frankly it's just unacceptable that Tesla is gambling their customers' lives on such a ridiculously half-baked implementation.
> Lane following assistance without good automatic braking kills.
Correct. It also kills when it lies to the driver about how confident it is in a given situation. Many of the crashes, instances of cars veering into oncoming lanes, etc. that we have seen would not have happened if Tesla fell back to the driver in sketchy situations. Instead, it does not even notify the driver that the system is hanging on by the barest of threads. This works out fine quite often and makes it seem like the car can detect the lanes successfully even in sketchy situations. Yet once the system (and the driver) runs out of luck, there is absolutely no chance to rectify the situation.
What the GP was referring to, and what I've seen it called in Google presentations, is level 3 automation. That is what Autopilot is supposed to be: you can take your hands off the wheel and the car drives itself, but it needs constant supervision.
What you're referring to is passive assistance, such as collision detection, which doesn't drive the car but provides alerts. There is nothing wrong with this technology; it works because the driver has to drive as they normally would, but it provides assistance when it spots something the driver doesn't. Here humans do the object detection and the computers stay alert as a backup.
There is a problem when the car drives itself but the driver has to stay alert. This is a problem because the human is much better at object detection and the car is much better at staying alert.
A level 2 system has to be monitored 100% of the time, because it can not be trusted to warn the driver and fall back to manual automatically in every situation. A level 3 system is robust enough to make this guarantee, which means that the driver can take his/her hands off the wheel until prompted to take back control.
Autopilot is a level 2 system, because you cannot take your hands off the wheel of a Tesla and expect not to die. In fact, the car itself will tell you to put your hands back on the wheel after a short while. After the fatal crashes started getting into the media, Tesla themselves stated that it's only a level 2 system. Their CEO's weird claims of Autopilot being able to handle 90% of all driving tasks, of there being a coast-to-coast Autopilot demonstration run in 2017, etc. are just marketing BS.
However I still disagree with your original comment:
> Sorry, but this is the exact wrong conclusion to make from the available data. Partially automated cars have been driven by average people on the public roads since 2006, when the Mercedes S class got active steering and ACC. There is no data that says that these systems, which are available on cars as affordable as e.g. a VW Polo, lead to more crashes. In fact the only data I have seen suggests that they reduce crashes and driver fatigue significantly.
Firstly, I don't see any examples of the VW Polo claiming to have a steering assist; even the most recent 2017 edition only has level 1 features. Steering assist only came to the Mercedes S-Class in 2014.
What I (and the parent comment you initially replied to) was trying to claim was that whereas level 1 features (cruise control, automatic braking, and lane keeping) are great, level 2 (and level 3, according to Google/Waymo) is risky, because humans cannot be trusted to stay alert and ready to take over.
Tesla can easily fix this: disable Autopilot if the user does not have their hands on the wheel, disable Autopilot at construction zones, and disable Autopilot sooner when the confidence level drops.
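Those three rules amount to a trivial supervisory check. As a sketch (every threshold and field name here is invented for illustration; a real system would also need to hand control back gradually and with ample warning):

```python
from dataclasses import dataclass

@dataclass
class DrivingContext:
    hands_on_wheel: bool
    in_construction_zone: bool
    confidence: float  # calibrated model confidence in [0.0, 1.0]

CONFIDENCE_FLOOR = 0.95  # invented threshold

def autopilot_allowed(ctx: DrivingContext) -> bool:
    """Apply the three disengagement rules from the comment above."""
    return (ctx.hands_on_wheel
            and not ctx.in_construction_zone
            and ctx.confidence >= CONFIDENCE_FLOOR)

print(autopilot_allowed(DrivingContext(True, False, 0.99)))  # True
print(autopilot_allowed(DrivingContext(True, True, 0.99)))   # False: construction zone
print(autopilot_allowed(DrivingContext(True, False, 0.80)))  # False: low confidence
```

The hard part, of course, is the `confidence` input: as discussed upthread, the raw model outputs are saturated, so a check like this is only meaningful if the confidence has actually been calibrated.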
I test drove an “autopilot 2” Tesla a while back. Avoiding hitting trash cans and parked cars was much harder than it would have been if Autopilot had been off. When you’re on a road and the car decides to steer toward an obstacle, you have very little time to correct the car.
Apparently they lose the colour info and process in black and white because it is easier. https://electrek.co/2017/05/16/tesla-autopilot-2-0-can-see/
This seems to have been an issue in some of the crashes and maybe was a mistake. See also for example the trailer an earlier Tesla crashed into https://www.theregister.co.uk/2017/06/20/tesla_death_crash_a...
Kind of obvious in colour, but at the time it was probably a grey shape against a grey sky of similar brightness.
Also the big red fire truck http://autoweek.com/article/technology/ntsb-probe-autopilot-...
If that's true, it looks like a very big mistake to me, considering that we use distinguishable color a lot to signal risks.
But knowing all that, and now knowing Tesla took that shortcut, there is no way on the face of this earth I am ever driving a Tesla, or for that matter, driving a car anywhere in the USA that has them on the road.
You guys have a really good train network, right?
About the US train network, it's good for freight but passenger trains are slow.
Tesla's stance to blame the driver without admitting that their hardware is inadequate will back them into a corner in the long term. They cannot transition from highway to city driving without significant changes to the autopilot software and hardware and this means they effectively have to start over from scratch. They have to start working on a serious competitor to Uber or Waymo now otherwise Tesla will be too late to the market.
If memory serves, Waymo's way seems to be summarised by the sentence "driver assist which we can have right now is inherently dangerous so let's go for full autonomy which we can't have for many years still".
A more accurate comparison of the two companies then might be that they are both trying to sell technology that they don't yet have (autonomous driving) and that only one of them is at least trying to develop (Waymo).
However, it should be absolutely clear that autonomous driving is currently only a promise and that this is true for every product being developed.
Honestly the autopilot has changed my driving experience and I hate when I have to drive cars without it.
I do agree that you need to be aware of your surroundings and ready to take over, though.
I often wonder if auto-pilot is truly 100% auto-pilot when it requires you to keep an eye out for its "hey, maybe I'll screw up, so please keep watch on me".
If your hands are on a steering wheel, you’re watching the road, and autopilot begins turning towards an obstacle, you should be aware enough to grip the wheel fully and prevent the incorrect turn.
End of story.
Edit: “Autopilot” is a great brand-name for the technology, but perhaps implies too much self-driving capability. Maybe “assisted steering” is a better term for this.
> End of story.
No, not true at all. If you’re in a car, paying complete attention, and holding the wheel, but keeping your arms loose enough that the car is fully controlling the steering, you somehow need to notice the car’s error and take over in something on the order of a second or perhaps much less. Keep in mind that, on many freeway ramps, there are places where you only miss an obstacle by a couple of feet when driving correctly. If the car suddenly stops following the lane correctly, you have very little time to fix it.
It seems to me that errors of the sort that Autopilot seems to make regularly are very difficult for even attentive drivers to recover from.
Is that supposed to make me feel better? If the car can go from fine to crashing into a barrier in only 6 seconds, that seems like a damnation of Autopilot more so than the driver.
It might not solve all these accidents but I think a HUD showing where the car plans to travel would work better. If you knew the car was aiming off the road before it turned, you could override the controls in time to correct. It still would require lots of attention and fast reaction time (and still may be less safe than manual driving), but it would at least be better than the situation we are in now.
If this was true, wouldn't Tesla vehicles have a higher rate of crashes, all things being equal? Is there any evidence that this is the case?
>the Model S had higher claim frequencies, higher claim severities and higher overall losses than other large luxury cars. Under collision coverage, for example, analysts estimated that the Model S's mileage-adjusted claim frequency was 37 percent higher than the comparison group, claim severity was 64 percent higher, and overall losses were 124 percent higher.
> Is there any evidence that this is the case?
Porsche also has 3x the accident rate of Daewoo. That doesn't mean Daewoo cars are 3x as safe, it just means that people who are looking for a hot-rod buy a Porsche and not a Daewoo.
This is not the claim in the linked article. The linked article claims that Tesla cars have a higher claim frequency than comparable gasoline-powered cars (i.e. large luxury cars such as a Porsche), whereas for example the Nissan Leaf has a lower claim frequency than comparable gasoline-powered cars (namely the Nissan Versa).
Put another way, if your choice is between a Tesla and a gasoline-powered large luxury car, the Tesla is more dangerous. If your choice is between the Nissan Leaf and the Nissan Versa, the Leaf is less dangerous. There was no comparison between the relative danger from a Tesla and a Leaf.
I'm pointing out that all other things aren't equal, and you can't assume from overall crash data that you can tease out statistics about how safe a specific feature of the car is.
The comparison to the Fiat 500 is relevant because while the report didn't only compare Tesla vehicles to it, that's one of the comparisons.
Is a Tesla less safe than a Fiat 500 given that it's driven by the same sorts of drivers, in similar conditions and just as carefully as a Fiat 500? Maybe, but who knows? We don't have that data, since there's an up-front selection bias when you buy a high-performance luxury car.
I wasn't able to find the raw report mentioned in this article, but here's a similar older report they've published:
There you can see that the claim frequency of Tesla is indeed a bit higher than that of all the other cars it's compared to, but this doesn't hold when adjusted for claim severity or overall losses. There, cars like the BMW M6 and the Audi RS7 pull far ahead of Tesla.
So at the very least you'd have to be making the claim that even if this data somehow showed how badly performing Autopilot was, that it was mainly causing things like minor scratches, not severe damage such as crashing into a freeway divider.
Just looking at these numbers there seems, to me anyway, to be a much stronger correlation between lack of safety and whether the buyer is a rich guy undergoing a mid-life crisis than any sort of Autopilot feature.
It isn't clear whether the demographics of Tesla drivers are more reckless than that of other luxury brands or not, as you point out Porsche drivers might tend to be more interested in going fast than Daewoo drivers. For Tesla on the one hand you attract people who are interested in helping the environment who I expect to be more conscientious and maybe therefore better drivers. But on the other hand there is Ludicrous Mode.
But since Tesla could have had a lower or higher crash rate than other brands and does have a higher rate we have to update our beliefs in the direction of the car being more likely to crash by conservation of expected evidence. Unless you'd argue that a lower crash rate means that Tesla's safety features prevent lots of crashes.
An example of the inverse of this concept is the roundabout, or traffic circle, which has a higher rate of much lower-severity accidents than traffic lights or stop signs.
Any references to support that statement?
First, how do you know it was Musk's decision? And second, when you imply that by using lower cost sensors (cameras), Tesla is cutting costs, how do you support that?
Their lower cost of sensors might be offset by having to spend more on software development for image processing, for example. When summed up, the overall cost per car may increase significantly.
At least to me, it is not clear at all whether Tesla's self-driving technology cost per car is less or more than it is e.g. for cars which use lidar sensors.
In any case, self-driving cars will have accidents, not least because some accidents are unavoidable (random behaviour of other cars, animals or pedestrians, unexpectedly slippery roads, etc.), but running head-on into a large, visible, static object should never happen. It's a bug.
You are the one relating anecdotal experience, just FYI.
We take great care in building our cars to save lives. Forty thousand Americans die on the roads each year. That's a statistic. But even a single death of a Tesla driver or passenger is a tragedy. This has affected everyone on our team deeply, and our hearts go out to the family and friends of Walter Huang.
We've recovered data that indicates Autopilot was engaged at the time of the accident. The vehicle drove straight into the barrier. In the five seconds leading up to the crash, neither Autopilot nor the driver took any evasive action.
Our engineers are investigating why the car failed to detect or avoid the obstacle. Any lessons we can take from this tragedy will be deployed across our entire fleet of vehicles. Saving other lives is the best we can hope to take away from an event like this.
In that same spirit, we would like to remind all Tesla drivers that Autopilot is not a fully-autonomous driving system. It's a tool to help attentive drivers avoid accidents that might have otherwise occurred. Just as with autopilots in aviation, while the tool does reduce workload, it's critical to always stay attentive. The car cannot drive itself. It can help, but you have to do your job.
We do realize, however, that a system like Autopilot can lure people into a false sense of security. That's one reason we are hard at work on the problem of fully autonomous driving. It will take a few years, but we look forward to some day making accidents like this a part of history.
This needs far more discussion. I just don't buy it. I don't believe that you can have a car engaged in auto-drive mode and remain attentive. I think our psychology won't allow it. When driving, I find that I must be engaged, and on long trips I don't even enable cruise control, because taking the accelerator input away from me is enough to cause my mind to wander. If I'm not in control of the accelerator and steering while simultaneously focused on threats, including friendly officers attempting to remind me of the speed limit, I space out fairly quickly. In observing how others drive, I don't think I'm alone. It's part of our nature. So then, how is it that you can have a car driving for you while simultaneously being attentive? I believe they are so mutually exclusive as to make it ridiculous to claim that such a thing is possible.
"The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat."
The result of this statement, and the functionality that matches it, is that it creates a reinforced false sense of security.
Does it matter whether the driver of the Model X whose Autopilot drove straight into a center divider had his hands on the wheel, if the outcome of applying Autopilot is that drivers focus less on the road? What is the point of two drivers, one machine and one human? You cannot compare a car's autopilot to an airplane's; they're not even in the same league. How often does a center divider just pop up at 20k ft?
Usually machinery either augments human capabilities by enhancing them, or entirely replaces them. This union caused by both driver and car piloting the vehicle has no point especially when it's imperfect.
I'm not opposed to Tesla's sale of such functionality, sell whatever you want, but I am opposed to the marketing material selling this in a way that contradicts the legal language required to protect Tesla...
There are risks in everything you do, but don't market a car as having the hardware to do 2x your customers' driving capability and then have your legal material say: btw, don't take your hands off the steering wheel... especially when there's a several-minute video showing exactly that.
Tesla customers must have the ability to make informed choices in the risks they take.
Tesla have sold people that the hardware they buy now will be capable of this in the future, but not now.
First let me state that I agree with this 110%!
I'm not sure if this is what you are getting at but I'm seeing a difference between the engineers exact definition of what the system is, what it does, and how it can be properly marketed to convey that in the most accurate way. I'm also seeing the marketing team saying whatever they can, within their legal limits (I imagine), in order to attract potential customers to this state-of-the-art system and technology within an already state-of-the-art automobile.
If we are both at the same time taking these two statements verbatim, then which one wins out:
> Autopilot is not a fully-autonomous driving system. It's a tool to help attentive drivers avoid accidents that might have otherwise occurred. Just as with autopilots in aviation, while the tool does reduce workload, it's critical to always stay attentive. The car cannot drive itself. It can help, but you have to do your job.
> The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat.
If that's the crux of the issue that goes to court then who wins? The engineering, legal, marketing department, or do they all lose because the continuous system warnings that Autopilot requires attentive driving were ignored and a person who already knew and complained of the limits of that system decided to forego all qualms about it and fully trust in it this time around?
I feel like when I was first reading and discussing this topic I was way more in tune with the human aspect of the situation and story. I still feel a little peeved at myself for starting to evolve the way I'm thinking about this ordeal in a less human and more practical way.
If we allow innovation to be extinguished for reasons such as these, will we ever see major growth in new technology sectors? That might be a little overblown, but does the fact that Tesla's additions to safety and standards have produced a markedly lower accident and auto-death rate mean nothing in context?
If Tesla is doing a generally good job and bringing up the averages on all sorts of safety standards while sprinting headlong towards even more marked improvements are we suddenly supposed to forget everything we know about automobiles and auto accidents / deaths while examining individual cases?
Each human life is important. This man's death was not needed, and I'm sure nobody at Tesla, or anywhere for that matter, is anything besides torn up about having some hand in it. While profit is definitely a motive, Tesla knows that the means to the profit they seek is creating a superior product, and that includes superior features and superior safety standards. If Tesla is meeting and beating most of those goals and we have a situation such as this, why do I feel (and I could be way wrong here) that Tesla is being examined as if they were an auto manufacturer with a history of lemons, deadly flipped-car accidents, persistent problems, irate customers, or anything of the like?
For whatever reason it kind of reminds me of criminal vs. civil court cases. In a criminal case it's upon the State or prosecution to prove their case; in a civil case the burden is effectively on the defense to prove their innocence. For some reason I feel like Tesla is in a criminal case but having to act like it's a civil case, where if they don't prove themselves they will lose out big.
To me it feels like the proof is there. The data is there. The facts are known. The fact that not every Tesla driver using Autopilot in that precise location suffers the same fate points toward something else going on, but the driver's actions also don't seem to match up with what is known about him and the story being presented on the other side. It's really a hairy situation, and I feel like it warrants all sorts of tiptoeing around, but I also have the feeling that allowing that "feeling" aspect to dictate the arguments for either side of this case is just working backwards.
And for what it's worth I don't own a Tesla, I've never thought about purchasing one. I like the idea, my brother's friend has one and it's neat to zoom around in but I'm just trying to look at this objectively from all sides without pissing too many people off. Sorry if I did that to you, it wasn't my intent.
My concern is that Tesla looks like it's 90% of the way to full autonomy, and the way the feature is marketed will lull even engineers who know how these systems work into a false sense of security; they'll trust a system that shouldn't be trusted and may die as a result. There isn't a good system for detecting a lack of focus, especially when it won't take more than a few milliseconds to go from good to tragic.
The human toll is irrelevant to the conversation; what's relevant is whether the risks are being taken knowingly. You cannot market a self-driving vehicle whose functionality "is 2x better than any human being" while simultaneously stating, in the legal language that protects you: don't take your hands off the wheel. That's BS.
The human toll is absolutely relevant to the conversation: this is about people dying now and in the future. It seems cruel to discuss it in an "I'll sacrifice X to save Y" manner, but it can reasonably be reduced to that.
I think it's safe to assume that this will drastically reduce driving related injuries and deaths.
Is the life taken by auto pilot worth less than the life taken by the aggressive driver who takes out an innocent driver? No.
I hope we eventually save lives (as in a net improvement over current death totals) by using these technologies, but the risks are not well communicated, the marketing is entirely out of sync with the risks, and the "martyrs" we create thus look to me like victims.
I think beliefs such as these are fueled by the extremely naive implication that each death will cause the learning algorithm to "improve itself", so that every self-driving thing out there is safer owing to that death.
Some number of people, N, are willing to risk their lives to use autonomous vehicles, and some of them will die as a result. The risks involved should be just as clear to the person using Autopilot; they shouldn't be misled with marketing fluff that doesn't come close to reality. Martyrs, not victims.
This assumes that self-driving tech will continue to increase in competence and will at some point surpass humans. I somehow find that extremely optimistic, bordering on naive.
Consider something like OCR or object recognition alone, where similar tech is applied. Even with decades of research behind it, it really cannot come anywhere close to a human in terms of reliability. I am talking about stuff that can be trained endlessly without any sort of risk. Still, it does not show an ever-increasing capability.
Now, machine learning and AI are only part of the picture. The other part is the sensors. These, again, are nowhere near the sensors a human is equipped with.
What we have seen in the tech industry in recent years is that trust in a tech by the people, even intelligent ones such as those investing in it, is not based on logic (Theranos, uBeam, etc.). I think such a climate is exactly what is enabling tests such as these. But unlike the others, these tests are actually putting unsuspecting lives on the line. And that should not be allowed.
Please note that I artfully omitted a due date on my assumption. There's so much money involved here and so much initial traction that it is indeed reasonable to think that tech can surpass a "normal" driver.
I'm also biased against human drivers, plenty of whom should not be behind the wheel.
I don't think it is reasonable at all to reach that conclusion based on the money involved. You just can't force progress or breakthroughs by throwing money at a problem.
>I'm also biased against human drivers, plenty of whom should not be behind the wheel.
So I think it would be quite trivial to drastically increase the punishment for dangerous practices: suspend the license of, or impose a lifetime ban on, anyone caught texting while driving or drunk driving.
You're also ignoring a key point: we have "self-driving" cars right now, but they're not good enough yet. Computer hardware is getting cheaper day by day, and right now the limiting factor appears to be the cost of sensors.
Neither is true. It does not take money for a man to have a great breakthrough idea. It is also not possible to guarantee the generation of a great idea by just throwing more and more money at researchers.
The best thing is to build a system to analyze your driving and figure out if you are in that 40% of people and then let it drive for you. Maybe drunk drivers, for example. It can do this per ride: “oh you’re driving recklessly, do you want me to take over?”
EVERYTHING ELSE SHOULD BE A STRICT IMPROVEMENT. Taking over driving and letting people stop paying attention is not a strict improvement.
The argument should NOT be about playing with people’s lives now so that in the future some people can have a better system. That’s a ridiculous argument. Instead, WHY DON’T THE COMPANIES COLLABORATE ON OPEN SOURCE SOFTWARE AND RESEARCH TO ALL BUILD ON EACH OTHER’S WORK? Capitalism and “intellectual property”, that’s why. In this case, a gift economy like SCIENCE or OPEN SOURCE is far, far superior at saving lives. But we are so used to profit-driven businesses that it’s not likely they will take such an approach.
What we have instead is companies like Waymo suing Uber and Uber having deadly accidents.
And what we SHOULD have is if an incremental improvement makes things safer, every car maker should be able to adopt it. There should be open source shops for this stuff like Linux that enjoy huge defensive patent portfolios.
Ain’t gonna happen I’m afraid.
Why not? That's how pioneers make progress, in new aircraft and spacecraft.
If people want to be on the bleeding edge, why not let them?
How can the cars improve if they are never allowed to drive?
The pioneers in this case are putting other people’s life at risk.
Waymo seems to demonstrate that improving self-driving cars without leaving a trail of bodies behind is within the realm of possibility, so let’s measure Tesla against that standard.
They don't just say it in the legal language. The car is continually reminding the driver of it, as the article makes clear.
The problem is that Uber needs self driving cars in order to make money, and Tesla firmly believes that their system is safer than human drivers by themselves (even if a few people who wouldn't have otherwise died do, others who might have died won't and they believe those numbers make it worth it).
The problem is that you can only learn so much without actual practice.
I think we need to keep the human driver in control, but have the computers learning through that constant, immediate feedback.
And get rid of misleading marketing and fatal user experience design errors.
I don't know what is stopping them from simulating everything inside a computer.
Record the input from all the sensors when a car equipped sensors is driven through real roads by a human driver.
Replay the sensor input, with enough random variations and let the algorithms train on it.
Continue adding to the library of sensor data by driving the sensor-equipped car through more and more real-life roads and real-life situations. Keep feeding the ever-growing library of sensor data to the algorithm, safely inside a computer.
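The record-and-replay loop described above can be sketched in a few lines. Everything here (the frame format, the noise model, the function name) is a hypothetical illustration, not any vendor's actual pipeline:

```python
import random

def replay_with_variations(frames, n_variants=3, noise_std=0.05):
    """Replay recorded drives as training samples, adding random
    perturbations to each sensor reading so the learner sees more
    situations than were literally recorded. The (reading, control)
    frame format is illustrative only."""
    for reading, control in frames:
        yield reading, control  # the original recording
        for _ in range(n_variants):
            noisy = [x + random.gauss(0.0, noise_std) for x in reading]
            yield noisy, control  # perturbed copy, same human response

# Hypothetical log: sensor vectors paired with the steering command
# the human driver actually issued at that moment.
log = [([0.1, 0.9, 0.3], -0.2), ([0.2, 0.8, 0.4], 0.0)]
samples = list(replay_with_variations(log))  # 2 originals + 6 variants
```

The catch, of course, is that such a library only ever contains variations on situations a human driver actually encountered.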
Obviously they've already tried that, and it doesn't work. The map is not the territory.
In theory, there is no difference between theory and practice. In practice, there is.
What I mean is this: do not "teach" the thing in real time. Instead, collect the sensor data from cars a human is driving (and collect the human input as well), and train the thing on it, safely inside the lab.
You say they have done it already. But I am asking if they have done it enough. And if so, how come accidents such as these are possible, when the situation is straight out of a basic driving textbook?
It also will not exclude manually driven vehicles, so it won't create the exclusivity necessary for a predictable driving environment.
Sounds like requiring exclusive access - I apologize if that was a misinterpretation.
If you have human and automated drivers in the same roads, the computers have to be able to cope with the vagaries of human drivers.
How can you then get away from '"beta testing" our self driving cars on roads with human operators' if that is their deployment environment?
This is the definition of a false dichotomy and it implicitly puts the onus on early adopters to risk their lives (!) in order to achieve full autonomy. Why not put the onus on the car manufacturer to invest sufficient capital to make their cars safe!? To rephrase what you said with this perspective:
> ...developing self-driving automobiles is so important that it's worth the implied cost of potentially tens of billions of investor dollars in order to perfect the technology, because that's what people do; make sacrifices to improve the world we live in so that future generations don't have to know the same problem.
This seems strictly better than the formulation you provided. How nuts is it that the assumption here is that people will have to die for this technology to be perfected. Why not pour 10x or 100x the current level of investment and build entire mock towns to test these cars in - with trained drivers emulating traffic scenarios? Why put profits ahead of people?
I generally agree with this philosophy but this is very optimistic, at least in the United States. This is a country where we can't even ban assault rifles let alone people from driving their own vehicles. You're going to see people drive their own vehicles for a very long time even if self driving technology is perfected.
Compare the above to hop in my car, drive to the freeway, turn on self-driving, turn off self-driving once i get off the freeway, find parking near where i'm going and walk in.
As a society, we've done a lot more in the name of convenience.
And you need a fleet of rental cars so that people can actually get to their destination.
What's the relative cost of all that vs. pavement?
Commuter rail systems run at 2 minute headways or less. Long-distance trains mostly don't but that's largely due to excessive safety standards - for some reason we regulate trains to a much higher safety standard than cars. Even then, the higher top speeds of trains can make up for a certain amount of waiting and indirect routing. (Where I live, in London, trains are already faster than cars in the rush hour).
> What's the relative cost of all that vs. pavement?
When you include the land use and pollution? Cars can be cheaper for intercity distances when there's a lot of similarly-sized settlements, but within a city they waste too much space. And once you build cities for people rather than cars, cars lose a lot of their attraction for city-to-city travel as well, since you're in the same situation of having to change modes to get to your final destination.
That "some reason" is physics. According to a quick Google search, an average race car needs 400m of track to brake from 300 km/h to 0 km/h. A train will require something around 2500m, over 5x the distance, to brake from the same speed. Trains top out at around -1.1 m/s² deceleration; an ordinary car can manage -10 m/s².
Part of the reason why is also that in a car, people are generally using their seatbelts - which means you can safely hit the brakes with full power. In a train, however, people will be walking around, standing, taking a dump on the loo - and no one will be using a belt. Unless you want to send people literally flying through the carriages, you can't go very much over that 1 m/s² barrier.
Because of this, you have the requirement of signalling blocks spaced so that a train at full speed can still come to a full stop before the next block signal. Also: a train can carry thousands of people. Have one train derail and crash into, e.g., a bridge, or collide with another train, and you're looking at way, way more injuries and dead people than even a megacity could cope with, much less a rural area.
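The figures quoted above follow from the constant-deceleration formula d = v²/(2a). A quick sanity check (idealized: constant deceleration, no reaction time or brake build-up):

```python
def braking_distance(speed_kmh: float, decel_ms2: float) -> float:
    """Stopping distance d = v^2 / (2a) under constant deceleration."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * v / (2.0 * decel_ms2)

car = braking_distance(300, 10.0)   # roughly 350 m at -10 m/s^2
train = braking_distance(300, 1.1)  # roughly 3200 m at -1.1 m/s^2
```

The idealized train figure actually comes out somewhat longer than the ~2500m quoted above, but either way it is many times the car's distance, which is why block signalling must be spaced for the full-speed stopping distance.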
If the cars are electric, I'm less sure.
Though your train of cars would likely have such low passenger density that a series of buses would be just as good. Special lanes just for buses are already a thing.
The point is driving is a freedom and getting rid of it in this country will be hard. I'd imagine self driving vehicles having more prevalence in China where the government can control what destinations you have access to and monitor your trips.
Many states (red) won't ban them for a very long time.
The impact on freedom to travel will have to be secured and decentralized, without any government kill switches.
Which states? Maybe a few in New England, but I don't see that happening anywhere else. Counties perhaps, but there are rural areas pretty much everywhere, and people are going to want the freedom to drive their own vehicle.
In comparison, the economic impact/benefit of banning assault rifles is negligible (and definitely not transformative) even if I personally think it is the morally right thing to do. (Maybe we can make the case later if school security and active shooter drills become prohibitively expensive and/or annoying)
So people will relocate to avoid traffic? Why doesn't this happen today? Suppose San Francisco decided to not enforce self driving laws to protect small businesses and preserve community infrastructure and culture. Now suppose Phoenix (only picked because they've been progressive with self driving technology) does enforce self driving laws, would you expect a mass exodus from San Francisco to Phoenix?
Right now Phoenix is not even on the map for most of us. If they did something like this (at the right time), then it might be.
Why would the less-risky drivers move to self-driving cars first? Wouldn't some of the higher-risk demographics (e.g. the elderly) make the move first since they have more incentive to do so?
> Second, cities that go to self driving only will have a huge advantage in infrastructure utilization and costs as roads are used more efficiently (with smoother traffic) and parking lots/garages become a thing of the past. Residents will just push for it if it means not being stuck in traffic anymore.
I think self-driving cars will be really cool and reduce traffic accidents once they're perfected, but a lot of these assumptions don't make sense. Unless a critical mass switches to car-sharing, autonomous cars and no parking will make rush hour worse because now each car will make the round trip to work twice a day instead of just once. Also, what happens to the real-estate where the parking lots are now? The financially sound thing to do will probably be converting these lots to more offices/condos/malls. So urban density will increase - increasing traffic.
Even if autonomous cars radically improve traffic flow, I suspect we'll just get induced demand . More people will take cars instead of public transit and urban density will increase until traffic sucks again.
The elderly aren't usually considered higher risk. Young kids are, enthusiasts are, people who drive red sports cars are.
> Unless a critical mass switches to car-sharing, autonomous cars and no parking will make rush hour worse because now each car will make the round trip to work twice a day instead of just once.
Autonomous cars should be mostly fleet vehicles (otherwise you have to park it at home).
Isn't that just like in most of the major world cities where taxis are the norm rather than the exception? It isn't weird for a taxi in Beijing to make 5-6 morning commute rounds. But even then, there are a lot of reverse commutes to consider.
> The financially sound thing to do will probably be converting these lots to more offices/condos/malls.
While density can increase, convenient affordable personal transportation also allows the opposite to occur. Parks, nice places, and niche destinations, are also possible.
Think of it this way, once traffic is mitigated, urban planning can apply more balance to eliminate uneven reverse commute problems. There will still be an incentive to not move, but movement in itself wouldn't be that expensive (only 40 kuai to get to work in Beijing ~15km, I'm sure given the negligible labor costs, autonomous cars can manage that in the states).
We are asking that self driving cars be ALLOWED if the user chooses, even IF the safety is in doubt. This is because of just how extremely important this issue is.
I’ve said it before, and I’ll repeat it till eventually the tide turns on HN and elsewhere.
You will not have full autonomy unless you control the road itself.
At which point you are better off just making it mass transit.
Also, people are terrible at detecting objects directly in front of them and just like computers, the human brain can be cheated, overloaded, inept or inexperienced leading to an accident.
Now we have cars with lane assist, smart braking, and autopilot features, and that's only in the past 5-10 years.
Of all the places where technology can save lives, it's definitely in vehicles/transportation.
How many optical illusions do you usually see in the roads while driving, that can result in an accident?
I am not even talking about the "people are terrible at detecting objects directly in front of them" part.
I mean, how can you be a human being and say this? If we were "terrible at detecting objects directly in front of us", we would have been predated out of existence a long time ago.
Sometimes you'll see multiple white lines, or lanes that appear to veer off due to dirt on the road. A bit of litter looks like a person, a kid looks like they might run out.
A lot of times I find I'm searching for something and can't see it but it was in my visual field. I think this worsens with age.
No. None of those qualify as brain being cheated.
That is the only thing I was responding in the start of this discussion. Essentially the person was saying human brain can be cheated just like a computer.
I am saying: no, not just like a computer. Human brains do not get cheated as easily as a computer. Claiming that is outrageous and shows you have no idea what you are talking about.
Check out this article; it is easy to never see a bike you're on a collision course with.
You may consider humans as bad drivers but Tesla's autopilot is even worse than that:
It can't even look in one direction!
I'm not claiming Tesla's system is currently better than a human, just that there is plenty of potential for a machine to outperform humans perceptually. As it is, Tesla's system isn't exactly the gold standard.
Last time I checked, I could move my eyes, up and down, side to side. I could also rotate my whole head, that also up and down and side to side.
And I am a human being.
Actually, one thing that I was curious about regarding this incident -- they say that authorities had to wait for a team of Tesla's engineers to show up to clean up burning mess of batteries. Luckily for everyone else trying to get somewhere on 101 that day, Tesla's HQ isn't too far away. What if next time one drives into a barrier it happens in a middle of Wyoming? Will the road stay closed until Tesla's engineers can hitch a ride on one of Musk's Falcons?
And $879B in USA per year.
It is thus very important.
For the 1.3 million people and their loved ones, and for the 20-50 million injured EVERY YEAR, yeah, it's really that important.
Is it ready today? No. We're in pretty violent agreement on that.
Will we get there? I don't see much reason to doubt that we will, eventually. It may require significant infrastructure changes.
It's pretty clear Waymo/Uber are pushing the envelope too hard, without adequate safeguards, but "only be acceptable if it were you and Mr. Musk...on Tesla's private proving grounds" is probably not pushing the envelope enough.
Even Waymo is "unleashing their stuff onto unsuspecting public" by driving them on public roads - lots of innocent bystanders potentially at risk there.
Both Waymo and even Uber do not pretend that their systems are ready for public use and at least allegedly have people who are paid to take over (granted, in Uber's case it's done as shadily as anything else Uber does). Tesla sells their half-baked stuff to everyone, with marketing that strongly implies that they can do self-driving now, if only not for those pesky validations and regulations. I think there's quite a bit of a difference.
A lot of deaths and injuries on the road happen in countries with bad infrastructure and a rather cavalier attitude to the rules of the road. Fixing those could save more people sooner than SDVs that they won't be able to afford any time soon. Not to mention that an SDV designed in the first world (well, the Bay Area's roads are closer to third world, but still...) isn't going to work too well when everyone around drives like a maniac on a dirt road.
Not to say that SDVs wouldn't be neat, when they actually work, but this is a very SV approach: throwing technology at the problem to create an overpriced solution to problems that could be solved much more cheaply, but in a boring way that doesn't involve AI, ML, NN, and whatever other fashionable abbreviations.
Whose lives are we sacrificing? In the case of the Uber crash in Tempe and this Tesla crash in California, the people who died did not volunteer to risk their lives to advance research in autonomous vehicles.
I highly respect individuals who choose to risk their lives to better the world or make progress, like doctors fighting disease in Africa and astronauts going to space, but at the same time, I think this must always be a choice. Otherwise we could justify forcing prisoners to try new drugs as the first stage of clinical trials. Or worse things. Which is why there are extensive vetting before approval for clinical trials is given.
I do think that, once the safety of autonomous vehicles have been proven on a number of testbeds, but before they are ready for deployment, it is justifiable to drive them on public roads. Maybe without safety drivers. But until then, careful consideration should be given to their testing.
Uber should not have been able to run autonomous vehicles with safety drivers where the safety driver could be allowed to look away from the road for several seconds while the car was moving at >30mph. The car should automatically shutoff if it is not clear whether the safety driver is paying attention. And there should be legislation that bans any company that fails to implement basic safeguards like this from testing again for at least a decade, with severe fines. Probably speeds should also be limited to ~30mph for the first few years of testing while the technology is still so immature, as it is today.
Similarly, Tesla should not be allowed to deploy their Autopilot software to consumers before they conduct studies to show that it is reasonably safe. Repeated accidents have shown that the Level 1 and Level 2 model, where the car drives autonomously but the driver must be ready to intervene, is a failed model unless the car actively monitors that the driver is paying attention.
Overall I think justifying the current state of things by saying that people must be sacrificed for this technology to work is ridiculous. Basic safeguards are not being used, and if we require them, maybe autonomous vehicles will take a few years longer to reach deployment, but thousands of deaths could become tens.
Edit: I read in another comment that the Tesla car at least "alarms at you when you take your hands off the wheel". In that case I think what Tesla is doing is much more reasonable. (Not Uber, though.) Although I still feel like it is going to be hard to react to dangerous situations when the system operates correctly almost all the time (even if you are paying attention and have your hands on the wheel). But I'm not sure what the correct policy should be here, because I don't fully understand why people use this in the first place (since it sounds like Autopilot doesn't save you any work).
Cars should just be phased out in favor of mass transit everywhere.
Yes, you can live without the convenience of your car. No really, you can.
Now think about how you would enable that to happen. What local politicians are you willing to write to, or support, in order to enable a better mass transit option for you? And how would you enable more people to support those local politicians that make that decision?
This is the correct solution, since the AI solution of self-driving cars isn't going to happen. Their high fatality rates are going to remain high.
Maybe, but unless you can change the laws of nature, you can't build a mass transit system that can serve everyone full-time with reasonable efficiency and cost-effectiveness, and that's just meeting the minimum requirement of getting from A to B, without getting into all the other downsides of public vs. private transportation in terms of health, privacy, security, etc.
Let's see what that imagination can craft.
Achieving 24/7 mass transit, available with reasonable frequency for journeys over both short and long distances, would certainly require everyone to live in big cities with very high population densities. Here in the UK, we only have a handful of cities with populations of over one million today. That is the sort of scale you're talking about for that sort of transportation system to be at all viable, although an order of magnitude larger would be more practical. All of those cities have long histories and relatively inefficient layouts, which would make it quite difficult to scale them up dramatically without causing other fundamental problems with infrastructure and logistics.
So, in order to solve the problem of providing viable mass transit for everyone to replace their personal vehicles, you would first need to build, starting from scratch or at least from much smaller urban areas, perhaps 20-30 new big cities to house a few tens of millions of people.
You would then need all of those people to move to those new cities. You'd be destroying all of their former communities in the process, of course, and for about 10,000,000 of them, they'd be giving up their entire rural way of life. Also, since no-one could live in rural areas any more, your farming had better be 100% automated, along with any other infrastructure or emergency facilities you need to support your mass transit away from the big cities.
The UK is currently in the middle of a housing crisis, with an acute lack of supply caused by decades of under-investment and failure to build anywhere close to enough new homes. Today, we're lucky if we build 200,000 per year, while the typical demand is for at least 300,000, which means the problem is getting worse every year. The difference between home-owners and those who are renting or otherwise living in supported accommodation is one of the defining inequalities of our generation, with all the tensions and social problems that follow.
But sure, we could get everyone off private transportation and onto mass transit. All we'd have to do is uproot about 3/4 of our population, destroy their communities and in many cases their whole way of life, build new houses at least an order of magnitude faster than we have managed for the last several decades, achieve total automation in our out-of-city farming and other infrastructure, replace infrastructure for an entire nation that has been centuries in development... and then build all these wonderful new mass transit systems, which would still almost inevitably be worse than private transportation in several fundamental ways.
And that's not taking into account the fact that the bicycle is a very viable way to move around in cities of fewer than 200,000 inhabitants.
I have actually never owned a car; I just rent one once in a while to go somewhere regular transport doesn't get me. I have lived in Sweden, France and Spain, in 10 cities from 25,000 to 12 million inhabitants. I never felt restricted. I actually feel much more restricted when I drive, because I have to worry about parking, which is horrible in both Paris and Stockholm. Many people I know, even in rural Sweden or France, don't own a car because it is just super costly and the benefit is not worth it. It's very much a generational thing though, because my friends are mostly around 26-32, whereas nearly everyone I know over 35 owns a car, even if they don't actually have that much money and sometimes complain about it.
To provide a viable transport network, operating full-time with competitive journey times, without making a prohibitive financial loss or being environmentally unfriendly, you need a critical mass of people using each service you run. That generally means you need a high enough population density over a large enough urban area that almost all routes become "main routes" and almost all times become "busy times".
I lived car free in a small industrial UK city, we couldn't manage that with kids (too expensive for one).
Bus seats are awful. Why? Because they're made vandal-resistant (and hard-wearing). They're too small for a lot of people now as well. So you need to remodel buses, IMO; you're going to need to be hotter on vandals, so change the approach of the courts. Things bifurcate across areas of society like that: supermarkets, houses, zoning, etc. are all designed with mass car ownership as a central tenet.
No, I can't. Don't presume to tell other people what they need to live their lives.
A laudable goal doesn’t give anyone the right to kill people by taking unnecessary risks. The reason that Tesla and Uber do what they do the way they do, instead of a more conservative approach is an attempt to profit, not save lives. If you don’t have to spend lives to make progress, but choose to do so for economic experience, there’s s word for that: evil.
Racing drivers have also reported that when they are not driving at 100%, they are more prone to make mistakes or crash. Most famously, Ayrton Senna's infamous crash at Monaco when he was leading the field by a LONG way. When he was asked why he crashed at a fairly innocuous slow corner, he said that his engineer had asked him over the radio to 'take it easy' as there was no chance he would be challenged for 1st place before the finish line, so he relaxed a fraction and started thinking about the victory celebrations. And crashed.
Basically this: https://xkcd.com/1138/
You're not alone. I find the act of modulating my speed is what keeps me focused on the task of driving safely. Steering alone isn't enough; I can stay in my lane without tracking the vehicles around me or fully comprehending road conditions.
Until a Level 5 autonomous car is ready to drive me point A to point B while I watch a movie I will remain firmly in command of the vehicle.
Public roads are not laboratories. It’s not just Tesla owners who are participating in this, it’s everyone on the road with them.
It really annoys me when I have to constantly check the speedometer to avoid going above the limit, and it is very tiring on long drives.
This is similar to the problem for pilots, who can be distracted by mundane tasks due to the complexity of controls in modern aircraft. If these tasks are removed, the pilot can focus on what's more important.
According to NASA: "For the most part, crews handle concurrent task demands efficiently, yet crew preoccupation with one task to the detriment of other tasks is one of the more common forms of error in the cockpit."
The moment the back of my mind doesn't have to handle precise throttle control, I find my mind wandering and my spatial awareness is shot. I guess maintaining speed is the fidget spinner that keeps me focused on the task of driving.
Does anyone know of psychology studies that measure human reaction time and skill when something like an autopilot is engaged most of the time? I remember taking part in a similar study at Georgia Tech that involved firing at a target using a joystick. It was also a contest, because only the top scorer would get prize money. The study was conducted in two parts. In the 1st phase, the system had autotargeting engaged: all subjects had to do was press a button when the reticle was on the target in order to score. In the 2nd phase, which was a surprise, autotargeting was turned off. I won the contest and my score was miles ahead of everyone else's. I can't fully confirm it, but I believe this happened because I was still actively aiming for the target even when autotargeting was active.
Yes. That's been much studied in the aviation community. NASA has the Multi-Attribute Test Battery to explicitly study this. It runs on Windows with a joystick, and is available from NASA. The person being tested has several tasks, one of which is simply to keep a marker on target with the joystick as the marker drifts. This simulates the most basic flying task - flying straight and level. This task can be put on "autopilot", and when the marker drifts, the "autopilot" will simulate moving the joystick to correct the position.
But sometimes the "autopilot" fails, and the marker starts drifting. The person being tested is supposed to notice this and take over. How long that takes is measured. That's exactly the situation which applies with Tesla's "autopilot".
There are many studies using MATB. See the references. This is well explored territory in aviation.
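The monitoring failure MATB measures can be illustrated with a toy model (this is just a sketch of the idea, not the actual MATB software): if the operator only samples the display every so often, a silent autopilot failure goes unnoticed until the next check, and the tracking error grows in the meantime.

```python
def simulate_detection(failure_time, check_interval, drift_rate=1.0):
    """Toy model of a MATB-style monitoring task.

    The 'autopilot' holds a marker on target until it silently fails
    at failure_time (seconds). The operator samples the display only
    every check_interval seconds, so the failure is noticed at the
    first check after it occurs. drift_rate is an assumed units/sec
    drift of the marker once the autopilot fails.
    """
    checks_before_failure = int(failure_time // check_interval)
    detection_time = (checks_before_failure + 1) * check_interval
    reaction_delay = detection_time - failure_time
    # Error accumulated while the failure went unnoticed.
    accumulated_error = drift_rate * reaction_delay
    return reaction_delay, accumulated_error
```

An engaged operator checking every half second catches a failure at t=10 within 0.5 s; a complacent one checking every 4 s lets the marker drift for 2 s. The model is crude, but it captures why reliance on automation stretches detection time.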
I don’t find that particularly challenging and in fact, when the autopilot is INOP, flights are slightly more mentally fatiguing because you have no offload and complex arrivals are much more work, but in cruise, you have to be paying attention either way. It’s not a time to read the newspaper, autopilot or not.
What I noticed was that when I was following the posted/safe speed limit, I quickly lost focus, my mind started wandering, and eventually I felt I was falling asleep.
I do not remember what made me speed up, but at one point I was about 30% over the posted speed limit, and once I reached a part of the way where the road was quite bad and a lot of road work was happening, I realized I was much more alert.
As soon as I slowed down to the posted speed limit, I began drifting away again.
If anything, my anecdote confirms your theory: as soon as we perceive something as safer, we pay far less attention. And Autopilot sounds like one of these safety things, which makes drivers less attentive and liable to miss dangerous situations that would otherwise be caught by the driver's mind.
I wonder if there is a way to introduce autopilot-style help without actually giving the driver a sense of security. Granted, Tesla would lose a precious marketing angle, but if their autopilot worked somewhat like a variable power steering system, in the background and without obviously taking over control of the car, wouldn't that be more beneficial in the long haul?
I find rough motorway surfaces in my current vehicle induce heavy drowsiness at motorway speed limits (slightly reduced at marginally higher speeds, when the pitch is higher).
It's not a question of zero deaths; it's a question of reducing the number, which means you need to look beyond individual events. Remember, the ~90 people who died yesterday in US car accidents without making the news are far more important than a few individuals.
No we don't. Tesla likes to compare their deaths per mile to the national average. The problem is that their autopilot is not fit to drive everywhere or in all conditions that go into that average. There is no data to support that autopilot is safer overall. It may not even be safer in highway conditions given that we've seen it broadside a semi and now deviate from the lane into a barrier - both in normal to good conditions.
And really, driving conditions are responsible for a relatively small percentage of vehicle fatalities. Most often it's people doing really dumb things like driving 100+ MPH.
The only thing we actually know is that these cars are safer on average than similar cars without these systems. That's not looking at how much the systems are used, just their existence, and it likely relates to them being in play when drivers are extremely drunk or tired, both of which are extremely dangerous independent of weather conditions.
The US just mandated all new cars have backup cameras, but it seems like mandating AEB would make a bigger difference.
What do you know that the rest of us don't? The only statistics I've seen on anyone's self-driving cars so far would barely support a hypothesis that they are as capable as an average driver in an average car while operating under highly favourable conditions.
I've been saying this for a while, and it's interesting to see more people evolve to this point of view. There was a time when this idea was unpopular here, owing mostly to people claiming that autonomous cars are still safer than humans, so the risks were acceptable. I think there are philosophical and moral reasons why this is not good enough, but that goes off-topic a bit.
In any case, some automakers have now embraced the Level-5 only approach and I sincerely believe that goal will not be achieved until either:
1. We achieve AGI or
2. Our roads are inspected and standards are set to certify them for autonomous vehicles (e.g. lane marking requirements, temporary construction changes, etc.)
Otherwise, I don't believe we can simply unleash autonomous vehicles on any road in any conditions and expect them to perform perfectly. I also believe it's impossible to test for every scenario. The recent dashcam videos have further convinced me of this.
The fact that there are "known unknowns" in the absence of complete testability is one major reason that "better than humans" is not an ethical standard. We simply can't release vehicles onto open roads when we know there are situations in which a human would outperform them in potentially life-saving ways.
The solution might be a system where the driver drives at a higher level of abstraction, but ultimately still drives.
Driving should be like declarative programming.
For example, the driver still moves the steering wheel left and right, but the car handles the turn.
Or when the driver hits the brakes, which is now more of an on/off switch, the car handles the braking.
The role for the driver is to remain engaged, indicating their intention at every moment and for the car to work out the details.
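The brake-as-intent idea could be sketched like this (a toy model with made-up numbers, not any real vehicle controller): the pedal becomes a binary signal, and the car works out the actual deceleration from basic kinematics.

```python
def plan_braking(brake_intent, speed_mps, obstacle_distance_m):
    """Toy sketch of 'declarative' braking.

    The driver's brake pedal is a binary intent; the car computes the
    deceleration needed to stop before the obstacle using v^2 = 2*a*d,
    preferring a comfortable rate and capping near the friction limit.
    The two limits below are illustrative assumptions.
    """
    if not brake_intent:
        return 0.0  # no braking requested
    COMFORTABLE = 3.0  # m/s^2, gentle everyday braking (assumed)
    MAXIMUM = 9.0      # m/s^2, near the tyre friction limit (assumed)
    required = speed_mps ** 2 / (2 * obstacle_distance_m)
    return min(max(required, COMFORTABLE), MAXIMUM)
```

At 20 m/s with 100 m to spare, only 2 m/s² is strictly needed, so the car brakes at the comfortable 3 m/s²; at 30 m/s with 50 m left, it has to go to the 9 m/s² cap. The driver still expresses intent at every moment; the car fills in the details.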
Edit: On second thought, that might end up being worse. I can think of situations where it might become ambiguous to the driver of what they are handling and what the car is handling. Maybe autopilot is all or nothing.
I can't even drive an automatic transmission car without getting way too distracted at times
Glad I'm not the only one doing it. When driving on a highway, I increase or decrease my car's speed by 10-15 km/h every 10 minutes or so, so that this variation helps me stay attentive to my surroundings.
Not saying it invalidates your argument, just that there are pretty wide-reaching consequences to that idea.
I think people that say that "autopilot" is a bad name for this feature don't really understand what an "autopilot" does.
> Full Self-Driving Hardware on All Cars
> All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
Whatever theory you have for Tesla's naming of their feature, it doesn't match with their marketing.
The page's title is "Autopilot | Tesla". It is the first result for "tesla autopilot" in search results. And "autopilot" appears 9 times on the page. So if that's not an intentional attempt to mislead consumers into conflating Autopilot with "full self-driving", then what would such an attempt look like, hypothetically?
That is damn crazy. You should consider your users to be complete idiots when improper use of the thing can endanger lives.
Why are we even debating this?
For this standard to work, it would have to apply to every driver. Should drivers who do not google "Tesla autopilot", let alone ones who do and read on in a section about said autopilot feature, be punished with death in a two-ton metal trap?
It doesn’t; what it does say is perfectly clear to me.
It remains an answer to "what would such an attempt [to intentionally mislead consumers] look like, hypothetically?"
This is actually harder to do than just driving the car.
Personally, I do not think calling Tesla's system 'autopilot' is the issue, but your claim that it is accurate is based on misunderstandings about the use of autopilots in aviation. It is not the case that their proper use puts airplanes on the edge of disaster were it not for the constant vigilance of the pilots.
People who say that "Autopilot" is a bad name for this feature aren't basing it on an imperfect understanding of what autopilot does in airplanes. They're basing it on how they believe people in general will interpret the term.
The funny thing is that the same people who argue for self-driving tech by saying "humans will do dumb shit" are the ones who justify Tesla by saying "humans should not do stupid things (like ignoring the car's warnings)".
They aren't going to literally fall asleep, but much of the time pilots are reading a book and not directly paying attention in the same way that a driver is.
Since then, air traffic control procedures have been improved to avoid these situations, but nowadays e.g. over the Baltic Sea, Russian military planes are routinely flying with their transponder turned off so that flight control does not know where they are. So, this risk is still there.
Imagine the possible points of failure in that chain too. Bluetooth can fail, satellite can fail, cell can fail, WiFi can fail, a USB cable can fail; there isn't a single piece of connectivity technology that would make me confident enough to delegate alerts to another device.
There is also an inherent failure mode of alarms in general: even very loud ones can be ignored if they give false positives even once or twice. There is a body of research trying to address this. Some of the most fatal industrial accidents occurred because alarms were either ignored or switched off entirely. We aren't good with alarms.
I think the meat of it, though, is that unless Autopilot works perfectly, you can't leave it alone. And if you can't leave it alone, then what's the point?
The sell for autonomous cars isn't that people are just so darn tired of turning the steering wheel that they would really rather not. It's that we could potentially be more productive if we could shift our full attention to work/study/relaxation while commuting.
They should have issued a single, very simple statement: that they are investigating the crash, and that any resulting improvements will be distributed to all Tesla vehicles, so that such accidents can no longer happen even when drivers are not paying sufficient attention to the road and ignore Autopilot warnings. Then double down on the idea that Autopilot is already safer than manual driving when properly supervised and that it constantly improves.
The specifics of the accident, victim blaming, and whether or not the driver had his hands on the wheel or was aware of Autopilot's problems are things that should be discussed by lawyers behind closed doors. And of course, deny it media attention and kill it in the preliminary investigation, which I imagine they will have no problem doing; he drove straight into a steel barrier, for God's sake.
Silicon Valley needs to stop trying to make autonomous cars happen.
Right now, self-driving cars aren't viable in the real world due to their extremely high fatality rate, which is comparable to that of motorcycles.
Meanwhile, there are several car models with ZERO fatalities.
The name implies drivers don't need to drive and is most likely a big reason drivers are lured into a false sense of security.
I hate to be sanctimonious at people online but this is how people get killed. Is it not illegal to do this where you live? In the UK you'd be half way to losing your license if you got caught touching a phone while driving (and lose it instantly if within the first 2 years of becoming a qualified driver).
You just said it yourself, you can't trust it, so don't play with your phone while driving, lane assist or not.
Lane assist has also led me to be much better about always signaling lane changes lest the steering try to fight me.
That's exactly what the Tesla driver must have thought too, right until the autopilot steered directly into a barrier. The Volvo S's system may be better, but any lapse in attention can lead to the type of crash we are discussing.
See, I'm not sure if you know this... but most people are not pilots. (Disclaimer: I'm not only a programmer but also hold an A&P and avionics license, as well as a few engine ratings.)
It is ABSOLUTELY on a manufacturer to make sure their potentially life-ending feature is not named in a way that can confuse the target audience. You know, NON-PILOT car drivers.
auto means "by itself, automatic"...
Of course “Autopilot” is intended to evoke the common meaning as a marketing tool, and not the nuanced, highly technical meaning understood by pilots. Understand though, that when someone argues against that point the pedantry is just a proxy for their fanaticism, and until the fanaticism dies, the excuses will be generated de novo. You’re bringing reason and logic to an emotional fight.
Forgive me but I've never seen any plane where, once in autopilot, the pilot/s are not checking and observing the conditions of the plane and making sure everything is alright.
I want you to go on Wikipedia (is that not mainstream enough?) and search for the term Autopilot. Read its ACTUAL definition and come back, please.
Autopilots used in modern commercial airplanes are autonomous. You don't have to watch them, they will do their job. The airplane is either controlled by the pilots or the autopilot. There is a protocol to transfer the control between pilots and the autopilot, such that it is clear who is in charge of controlling the plane (there's even a protocol to transfer this between pilots).
The autopilot will signal when it is no longer able to control the plane (because of, e.g., technical faults in the sensors).
Yes, there are also autopilots in smaller airplanes which are more or less just cruise control. But everything in between, where it is unclear who is doing what and where the limits of the capabilities are, has been scrapped because people died.
> doesn't mean tesla has to have the burden of people misinterpreting what it says.
Because Tesla is so very clear in stating what their autopilot is and isn't able to do.
The cars simply lack the software to enable a fully autonomous vehicle. The phrasing indicates that if/when the software becomes available, the car would be theoretically capable of driving itself.
It's just a typical misleading marketing blurb; nothing more.
They don't actually know that they have hardware for full autonomy till they have a fully working hardware/software autonomy system; what they have is hardware that they hope will support full autonomy, and a willingness to present hopes as facts for marketing purposes.
But that wasn't the line of argument I was making. The parent commenter said this about people misunderstanding the term "autopilot"
> Just because people don't know the proper term or have an erroneous idea of the term, doesn't mean tesla has to have the burden of people misinterpreting what it says.
Seems like people might be mistaken because the phrase "Full Self-Driving" is literally the first thing on the official Tesla Autopilot page.
It's a fatal choice of words.
That is exactly what the CEO says.
He's a big talker and a bigger cult leader.
His fans will believe the news.
And it's not just this one, it's all the others that they'll suddenly be accepting liability for.
This sample statement makes it very clear that the user was misusing autopilot and trusting it beyond its intended function, but also shows sympathy for the family's situation.
Doesn't your statement admit that Tesla is at least partially at fault? Something their lawyers would probably never allow.
IANAL so take with a grain of salt. I once talked to a lawyer who used to work for a big hospital and handled the malpractice lawsuits against them. Three takeaways from the discussion:
1. Implying that a possibility exists that the hospital was at fault has no legal ramifications whatsoever.
2. Studies show that an apology and admission have a significant impact on the amount paid to the patient if there is a settlement (in the hospital's favor).
3. Despite knowing 1 and 2, he and other lawyers advise their clients to deny wrongdoing all the way to the end.
"Don't smoke, ever."
'Oh, really? How did you model the benefits to me? By what margin did the costs exceed the benefits?'
"There are no benefits to smoking."
'You mean, no health benefits?'
"What's the difference?"
Should a product that flawed really be deployed on an honor system basis?
Because testing stuff before throwing it on the market isn't a thing anymore?
I forewent commenting on their pre-market testing because I assumed that flokie already knew that the cars and ML models they use had been extensively tested on tracks and in simulation before the first Tesla was allowed on California roads.
And they would have to be complete idiots had they not done such testing; no investors would have funded that.
Reactions like flokie's were completely predictable the moment driver assistance techniques were thought of. The only acceptable response a company can have to such criticism is "we have tested this extensively and it is safer than driving manually".
Market forces aside, no car drives on roads in any US state without extensive testing and certification. All of the companies testing self-driving technology had to get special permits to do so.
Even just a reading of their actual statements offers some insight in to the amount of testing and data collection they are still doing.
I think it’d be a tough sell to get the blessing of Tesla’s legal team. Given Musk’s position he could override that, of course, but it could still reduce the likelihood of it going out.
Overall as much as I prefer it, I think most companies wouldn’t release something this direct and honest. Although that’s changing at some companies as they find that the goodwill built through lack of bullshit can sometimes outweigh distasteful liability defense techniques.
It seems like Tesla is careful not to dispel the illusion that it can. Like how alcohol does not get you women, but all the adverts deftly imply it does, without actually saying so.
I am a bit irked by the arrogance of this statement. The best we can hope for is to ensure the safety of the public.
And you, as a company, can be regulated out of business.
Pretend for a moment that the occurrence of the accident was a foregone conclusion. In such a scenario, the most positive thing that could be done would be to use the information to save the lives of others.
There's a smidgen of humility in the phrasing; that the accident might have been avoidable with the information that has been gained as a result of the accident. Of course, I presume that sort of sentiment would fall squarely under the "admission of guilt" umbrella that prevents companies from saying things like this.
Tesla is merely trying to take the next step in reducing that percentage.
Their strategy is sound and we so far have not come up with any alternatives that stand a remote chance of improving safety as much as self-driving. Even if they are largely unsuccessful, they are indeed trying to ensure the safety of the public.
Nope, and that's the whole difference. They will be killed by their own actions, their choices, their inattention, or those of other drivers. They won't be killed by the machine.
With autopilot / pseudo-autopilot, they will be killed by the machine.
It is a huge difference, both in terms of regulations of people transportation safety, and in terms of human psychology, which makes a big difference between being in control and not being in control of a situation.
I can agree with the notion that the machine killed the person in all cases where the machine does not include any controls for the person.
As a society, we currently recognize that the causes of accidents and the probability of occupant death are dependent on multiple factors, one of which is the car and its safety features (or lack thereof). https://en.wikipedia.org/wiki/Traffic_collision#Causes
We also already have a significant level of automation in almost all cars, yet we are rarely tempted to say that having cruise-control, automatic transmissions, electronic throttle control, or computer controlled fuel injection means we are not in control and therefore the machine is totally at fault in every accident.
Operating a car was much harder to get right before these things existed and the difference can still be observed in comparison to small aircraft operations.
Then and now we still blame some accidents on "driver/pilot error" while others are blamed on "engine failure", "structural failure", or "environmental factors".
I think having steering assistance or even true autopilot will not change this. In airplanes, the pilots have to know when and how to use an autopilot if the plane has one.
If the pilot switches on the autopilot and it tries to crash the plane, the pilot is expected to override and re-program it, failure to do so would be considered pilot error.
Similarly, drivers will have to know when and how to use cruise control/steering assist, and should be expected to override it when it doesn't do the right thing.
I don't know if paraphrasing Joseph Stalin is the right way to go about this.
"A sincere diplomat is like dry water or wooden iron."
"We think that a powerful and vigorous movement is impossible without differences - 'true conformity' is possible only in the cemetery."
"Education is a weapon whose effects depend on who holds it in his hands and at whom it is aimed."
Just curious, what’s your writing background?
Sure it might look neat to someone on the outside but it wouldn't take long to see it's nothing like the real thing made by someone who knows what they're doing.
This statement is fine.
Do you honestly believe the statement, as written, aligns with the goals Tesla sets for themselves when releasing a statement?
If yes, then how wildly incompetent must Tesla be to release the statement they did.
If no, then this statement does not serve the function of a statement released by Tesla.