Tesla's system demands more from drivers than manual driving. The driver has to detect automation failures after they occur, but before a crash. This requires faster reaction time than merely avoiding obstacles.
Here's a good example - a Tesla on autopilot crashing into a temporary road barrier which required a lane change.[1] This is a view from the dashcam of the vehicle behind the Tesla. At 00:21, things look normal. At 00:22, the Tesla should just be starting to turn to follow the lane and avoid the barrier, but it isn't. By 00:23, it's hit the wall. By the time the driver could have detected that failure, it was too late.
Big, solid, obvious orange obstacle. Freeway on a clear day. Tesla's system didn't detect it. By the time it was clear that the driver needed to take over, it was too late. This is why, as the head of Google's self driving effort once said, partial self driving "assistance" is inherently unsafe. Lane following assistance without good automatic braking kills.
This is the Tesla self-crashing car in action. Tesla fails at the basic task of self-driving - not hitting obstacles. If it doesn't look like the rear end of a car or truck, it gets hit. So far, one street sweeper, one fire truck, one disabled car, one crossing tractor trailer, and two freeway barriers have been hit. Those are the ones that got press attention. There are probably more incidents.
Automatic driving the Waymo way seems to be working. Automatic driving the Tesla way leaves a trail of blood and death. That is not an accident. It follows directly from Musk's decision to cut costs by trying to do the job with inadequate sensors and processing.
" Tesla's system demands more from drivers than manual driving. The driver has to detect automation failures after they occur, but before a crash. This requires faster reaction time than merely avoiding obstacles."
I have been thinking that too. At what point do you decide that the autopilot is making a mistake and take over? That's an almost impossible task to perform within the available time.
If I were driving a car like that, I think I'd feel safer if there were a "confidence meter" available. I have no idea how these auto-drive systems work, but I'm guessing there's some metric for how confident the car is that it's going in the correct direction. Exposing that information as a basic percentage meter on the dash somewhere would make me feel more confident about using the auto-drive.
I can understand why this wouldn't happen from a business perspective, and it's also presumably not as simple to implement as I'm implying, but I can't think of a better way to get around the uncertainty of whether the car's operating in an expected way or not.
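Just to make the idea concrete, here's a toy sketch of what I mean (in Python) -- the "lane confidence" score and the threshold are entirely made up, and I have no idea whether real perception stacks expose anything like this:

    WARN_THRESHOLD = 0.80  # invented value

    def dash_readout(lane_confidence: float) -> str:
        """Render a hypothetical 0..1 confidence score for the driver."""
        pct = round(lane_confidence * 100)
        status = "OK" if lane_confidence >= WARN_THRESHOLD else "TAKE OVER"
        return f"Lane confidence: {pct:3d}%  [{status}]"

    print(dash_readout(0.97))  # Lane confidence:  97%  [OK]
    print(dash_readout(0.42))  # Lane confidence:  42%  [TAKE OVER]

Whether the underlying number would actually mean anything is exactly the open question, of course.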
> If they had the ability to measure confidence in the system, they would use it to issue warnings or disable autopilot
Yeah, that's basically what I'm asking for. Maybe a warning light at a certain threshold would be a better default; personally, I'd still find a number more trustworthy, though.
> The "confidence meter" would almost certainly be 100% right up until it crashes into an obvious obstacle
If image recognition algorithms have associated confidence levels, I'd be surprised if something more complicated like road navigation was 100% certain all the time.
The problem is, given a neural network, the decided course of action IS at 100% confidence.
Unless you design the network to specifically answer "how similar is this?" and pair it with a training set and the output of another executing neural net, a "confidence meter" isn't possible.
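For what it's worth, here's a tiny illustration of that point (numbers invented): a discriminatively trained classifier's softmax output tends to saturate near 1.0 for whichever class it picks, even when it's wrong, which is why the raw output isn't much of a "confidence meter" on its own:

    import numpy as np

    def softmax(logits):
        z = logits - logits.max()
        e = np.exp(z)
        return e / e.sum()

    # After training, logits are typically large and well separated, so the
    # softmax "confidence" is ~1.0 -- including on inputs the net gets wrong.
    print(softmax(np.array([9.0, 2.0, 1.0])))   # ~[0.999, 0.001, 0.000]
    print(softmax(np.array([12.0, 3.0, 2.0])))  # ~[0.9998, ...]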
This is one of the trickier bits to wrap one's mind around with neural networks. The systems we train for specialized tasks have no concept of 'confidence' or being wrong. There is no meta-awareness that judges how well a task is being performed in real-time outside of the training environment.
Humans don't suffer from this issue (as much) due to the mind-bogglingly large and complex neural nets we have for dealing with the world. Every time you set yourself to practicing a task, you are putting a previously trained neural network through further training and evolution. You can recognize when you are doing it because the process takes conscious effort. You are not 'just doing <the task>', you are 'doing <the task>, and comparing your performance against an ideation or metric of how you WANT <the task> to be performed'. This process is basically, for instance, your pre-frontal cortex and occipital lobe tuning your hippocampus, sensory and motor cortex to perfect the tennis swing.
When we train visual neural networks, we're talking about the level of intelligence supplied by the occipital lobe and hippocampus alone. Imagine every time you hear about a neural net that a human was lobotomized until they could perform ONLY that task with any reliability.
Kinda changes the comfort level with letting one of these take the wheel doesn't it?
Neural nets are REALLY neat. Don't get me wrong. I love them. Unfortunately, the real world is also INCREDIBLY hard to safely operate in. Many little things that humans 'just do' are the results of our own brains 'hacking' together older neural nets with a higher level one.
Have you actually worked on applying neural networks in the real world?
Calibrating the probabilities of machine learning algorithms is an old problem. By the nature of discriminative algorithms and increasing model capacities, yes, training typically pushes outputs to one extreme. But a ton of information is still maintained in those outputs, and it can be properly calibrated for downstream ingestion -- which is something anyone actually trying to integrate these models into real applications should be doing.
There is actually some sort of classification confidence attached to the outputs. It's not a binary yes or no for the output neurons, regardless of what abstraction you've observed in whichever TensorFlow or other high-level API is being used.
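For concreteness, here's a minimal sketch of one standard post-hoc calibration trick, temperature scaling fitted on held-out data -- not how any particular production driving stack does it, and the validation data below is fake:

    import numpy as np

    def softmax(logits, T=1.0):
        z = logits / T
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def nll(T, val_logits, val_labels):
        """Negative log-likelihood of held-out labels at temperature T."""
        probs = softmax(val_logits, T)
        return -np.mean(np.log(probs[np.arange(len(val_labels)), val_labels] + 1e-12))

    def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
        """Pick the temperature that makes held-out predictions best calibrated."""
        return min(grid, key=lambda T: nll(T, val_logits, val_labels))

    # Usage sketch with fake, deliberately overconfident validation logits:
    rng = np.random.default_rng(0)
    val_logits = rng.normal(size=(100, 3)) * 8
    val_labels = rng.integers(0, 3, size=100)
    T = fit_temperature(val_logits, val_labels)
    print(T, softmax(val_logits[:1], T))

The point is just that the raw softmax can be rescaled into something that behaves more like an honest probability for whatever consumes it downstream.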
>I'd be surprised if something more complicated like road navigation was 100% certain all the time.
If, for instance, it failed to detect an obstacle, how could it tell you that it failed to detect it? Or that it got the road markings wrong?
Given that the consequences for making a mistake might be people dying I certainly hope that's not how it works under the hood. Anything less than "I'm absolutely sure that I'm not currently speeding towards an obstacle" should automatically trigger a safety system that would cause the autopilot to disengage and give the control back to the driver, maybe engaging the emergency lights to let the other drivers know that something is amiss.
I wouldn't be surprised if in the moments preceding these crashes the autopilot algorithm was completely sure it was doing the right thing.
Of course these various crashes show that in fact the algorithm makes mistakes sometimes but I certainly hope that it's not aware that it's making mistakes, it just messes up while it thinks everything is going as planned.
Anything less than "I'm absolutely sure that I'm not currently speeding towards an obstacle" - that's an unattainable goal; there's no way to actually reach this, whether a human or a robot is driving. I'm overwhelmingly sure during the most part of my drives, and mostly sure for the rest - but absolutely? Never.
On top of all that is going on in traffic you would add interpreting one more abstracted symbol to your cognitive load? The situation can change in milliseconds during driving, you can never ever be entirely sure nothing unexpected could happen.
Personally, having tried Tesla AP1, it was painfully clear I can never fully trust any assisted driving aid: they can fail at any point and I need to be ready to take over immediately. Particularly having to wrestle the steering wheel from AP1 as it slightly resists is reason enough to stop using it.
> On top of all that is going on in traffic you would add interpreting one more abstracted symbol to your cognitive load?
I think I'd find it a simpler metric to follow than the more qualitative information about car behaviour, as well as more useful to decide whether the car or I should be in control.
You're quite right about timing, I have no idea whether this metric would plummet quickly or gracefully drop so the driver would have time to take charge once a threshold was hit.
I want to stress that this is an underbaked idea and really just a thought experiment about what would make me personally feel more comfortable with driving a semi-autonomous vehicle.
Couldn't they use that fancy LCD screen to show what the autopilot sees (or an approximation of it) for the next 150 feet or so? At least then you'd know if you were about to plow into a barrier.
Also what is the point of such an auto pilot? Seems really dangerous. Telling people you don’t have to drive but you have to pay attention ignores fundamental psychology.
If it’s a car that doesn’t require the human driver to have her/his hands on the steering wheel at all times, and it crashes, it’s the car’s fault - i.e. it collided with another semi-stationary object when it could have steered away or braked.
This should be a law.
#1 job of any self driving system is to avoid obstacles. If you can’t do that reliably, it’s a fail!
It is going to come down to the insurance companies. These first few crashes are going to be a nightmare of legal issues and set a lot of precedent. Likely, Tesla and the others are not going to be fined a lot, but they are going to be held accountable in some small way. As such, the insurance companies are going to set very stringent requirements for the 'new' auto-driving features or they are going to come up with some crazy clauses when using the features.
It's a lot like sitting in the passenger seat next to a driver who suddenly turns out to be having a stroke. At what point do you grab the wheel? Clearly, it's a very dangerous situation that you don't want to happen, and certainly not on a regular basis.
But if the driver is having a stroke, you could likely get visual or auditory cues from the driver, you wouldn't have to infer it from the vehicle behaviour.
Try it. It’s not impossible. I do it all the time. Have you ever noticed how passengers still pay attention to what’s going on and sometimes even turn their head to see where the car is turning? It’s like that, except you’re in the driver’s seat. It’s not more work than driving.
Ok, imagine if you are a passenger in a car, sitting next to the driver. You are heading towards a turn, with a solid barrier in your path if you don't turn. Your driver is awake, holding the wheel firmly with eyes wide open. Do you think there's any chance you could intervene between realizing that the driver is not turning and the car hitting the barrier? You could be paying 200% attention to the road, but there's just not enough time - by the time you even realize that the driver is not turning the wheel, there is like 1 second to impact, maybe two - could you grab the wheel and pull it to one side in that time? And even if you could, can you guarantee that you won't pull it too much and into the path of a fully loaded 18-wheeler which will promptly flatten your car?
It's just not good enough. Semi-autonomous cars can't be hitting stationary objects without reacting at all. If they fail at this basic task, then they shouldn't be allowed on the road.
That crash in the video is one you'd have very little time to avoid. Recognizing that the car isn't just going a bit wide but is on a direct collision course with the barrier takes time, and then you need to determine a course of action, reacting without over-reacting.
You probably have 100ms to begin your action, 500ms to complete it.
Good luck with that if you're not perfectly alert.
Trust me, when the car starts doing something you wouldn't do you feel it immediately IF you're paying attention. Remember the car doesn't do anything fancy -- it's just supposed to drive straight centered in the lane
Seems to me in the video there's no intervention needed at the 21 second mark, and the car's already hit the barrier at the 22 second mark. YouTube doesn't provide sub-second time marks, but I'd estimate it's 21.5 seconds before I'd expect the car to start turning, giving the driver 0.5 seconds to notice the failure, take control of the vehicle, and execute the swerve required.
The Queensland government estimates it takes drivers 1.5 seconds to react to an emergency [1] (I wouldn't be surprised if that was a 99th percentile figure, of course)
The only thing we can blame the victim for here is foolishly trusting Tesla's apparently crap software.
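Putting rough numbers on that window (the speed here is an assumption, not taken from the video):

    # Back-of-envelope: distance covered during the driver's reaction time,
    # assuming roughly 70 mph of freeway speed.
    speed_ms = 70 * 0.44704  # mph -> m/s, about 31 m/s
    for reaction_s in (0.5, 1.0, 1.5):
        print(f"{reaction_s:.1f} s reaction -> {speed_ms * reaction_s:.0f} m travelled before any steering input")
    # 0.5 s -> ~16 m, 1.0 s -> ~31 m, 1.5 s -> ~47 m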
Did you drive with such a system? If you did, you would know there’s a HUGE difference between having to manage minutiae (just keeping yourself in a lane is a surprising amount of work) and being free to pay attention to your surroundings.
Your cognitive load is lowered by Autopilot enormously. You can pay better attention to what’s going on around you and anticipate issues (say, a car that drives weirdly in front of you) because you’re not busy with the low-level driving.
Part of that is that you see a difficult road stretch coming up and disengage the assist IN ADVANCE - because you’re not a complete moron, you know the lane assist is for driving in highway lanes, and the thing you’re approaching is decidedly not one.
This is the automation paradox, and one that Tesla itself has been facing with their assembly lines.
If you over-automate something you end up having to spend more time supervising and fixing it than if you'd just done the thing yourself in the first place.
What video? I'm just talking about my actual experience driving the car. And yeah, people can react in less than one second. One second is a long time when reacting to an imminent collision.
If you've been driving on a very long, very straight, exceedingly boring stretch of road with zero traffic then your reaction times will suffer. It's very difficult to stay alert in those situations and there's a frighteningly high level of single vehicle collisions on those sorts of roads caused by a sudden change in conditions or even the angle of the road.
Auto-pilot will make driving a lot more like flying is for pilots: 99% extremely dull, 1% terrifying.
The way it diverted from the trajectory you would take. It got off the expected path, or drives a few km/h faster through a curve than you would; that sort of thing raises alarm in the driver right away.
With a funny feeling like that, I’m ready to disengage immediately, and then 1s is plenty of time. But I wouldn’t use Autopilot near an obstacle like that to begin with - anticipate, take over for the weird stretches.
1000ms is plenty of time, but in this particular crash you have 500ms total between the car veering out of the lane and impacting the wall.
Regardless, you're saying that if you're not paying attention every single second of the trip, instantly ready to react to even the slightest deviation from your intended trajectory, then you're doing it wrong.
What, exactly, is the point of an auto-pilot system if you need to be in that state of absolute alertness?
When I am a passenger I do not always pay attention, only sometimes. And if something happens it will be the driver's fault, not mine. It would be a very stressful ride if I always had to observe the driver and every problem I failed to intervene in were my responsibility.
I definitely think auto-pilot is a really bad name. It should be “advanced cruise control”. Because essentially that’s what it is. Follow the lane and maintain distance behind a car.
Also unless certified on certain roads with clear markers, drivers should be required to have hands on wheels at all times.
> Tesla's system demands more from drivers than manual driving.
This is a great point. With regular cars, drivers can reliably predict where the car will be after 5 seconds. However, with Tesla, there is really no way to know what the auto-driver will do, and so the driver has to stay even more attentive to ensure that it does not do something wrong. It's like driving a car that is not under your control, and may go in random directions.
There seems to be a valley between the way the user expects something to work and how it actually does. When users detect a difference they may compensate, if they survive long enough to learn.
Of course if users get too used to such differences then those can become 'features', which only makes progress harder.
The problem is that you don't know what the car is going to do. E.g. if you see a barrier you would mentally prepare to steer. But if you trust the car you don't prepare and then suddenly have to take over at the last moment when you realize the car is doing something wrong.
That means the autopilot doesn't help at all since you would have to basically drive manually all the time anyway.
Exactly - the only safe way to drive using autopilot assist is to keep your hands on the wheel and mime steering every corner... and at that point how is it assistance?
Maybe one solution would be to actually show you what the car is "thinking". E.g. it could project the outlines of what it's seeing and what it's planning to do on the windshield.
In this case I'd agree that the cons outweigh the pros. It's surprising that Tesla can legally market the feature or even offer it to the general public in its current form.
No - this is not a fair comparison. Tesla, without the autodrive feature, functions like a regular car. It is up to you if you want to turn on the autodrive or not.
Well, if you are worried about safety with autopilot on, don't turn it on. To expect a system to function 100% from day 1, or even on day 1000, is unrealistic. Look at aeroplanes - they used to be notorious for their safety records in the 50s and 60s. However, with each crash, problems were identified and improvements were made. This is what led to the current world where aeroplanes are one of the safest modes of traveling.
Sorry, comparison with airplanes is not fair in my opinion.
The purpose of the autopilot is to increase safety for drivers that are not attentive. However, it turns out that Tesla requires drivers to be even more attentive than regular vehicles. This is not a sign of progress.
Not true in my opinion. While for now the positioning of autopilot is to increase safety for non-attentive drivers, eventually autopilot's main goal is to have completely self-driven cars, and that is where the industry is heading (and Tesla wants to be at the forefront of it).
We can differ in opinions but I strongly feel that the eventual utopia of self-driving cars will be much safer than the current world. And we are making progress on this daily. There will be a few unfortunate incidents, but in the long term a lot more lives will be saved.
> While for now the positioning of autopilot is to increase safety for non-attentive drivers, eventually autopilot's main goal is to have completely self-driven cars, and that is where the industry is heading (and Tesla wants to be at the forefront of it).
In that case they should issue a recall, and not use real users as their test drivers. Once they get to their eventual goal, they can resume selling autopilot cars.
> We can differ in opinions but I strongly feel that the eventual utopia of self-driving cars will be much safer than the current world. And we are making progress on this daily. There will be a few unfortunate incidents, but in the long term a lot more lives will be saved.
I totally agree with you that in the long term self-driving cars are much safer. What I do not agree with is Tesla's approach towards achieving that goal. They are selling an unsafe product while marketing it as much safer than the existing products. This is just plain fraud.
> This is why, as the head of Google's self driving effort once said, partial self driving "assistance" is inherently unsafe.
Sorry, but this is the exact wrong conclusion to make from the available data. Partially automated cars have been driven by average people on the public roads since 2006, when the Mercedes S class got active steering and ACC. There is no data that says that these systems, which are available on cars as affordable as e.g. a VW Polo, lead to more crashes. In fact the only data I have seen suggests that they reduce crashes and driver fatigue significantly.
The issue that such a feature is systematically crashing and even killing its users is very specific to Tesla. Tesla's system has had a tendency to run into stationary objects for as long as it has been available to the public. Teslas have a significantly higher rate of causing insurance claims. There are countless videos showing it happening, the two (possibly three) fatalities so far are just the tip of the iceberg. Tesla blamed MobilEye (their previous supplier of camera-based object detection hardware), but it seems fairly evident that Tesla's system has the exact same systemic issue even with their new in-house object detection software.
My opinion as an engineer working on this exact kind of system for one of Tesla's competitors is that Tesla's system just isn't robust enough to be allowed to be used like all other manufacturers' autosteer/ACC systems. IMO that's because their basic approach (doing simple line-following on the lane markings) is super naive. A much more intricate approach is needed - one that does some level of reasoning on what is a driving surface and what isn't. This has been known in the automotive industry for over a decade. Sorry to be so absolute, but frankly it's just unacceptable that Tesla is gambling their customers' lives on such a ridiculously half-baked implementation.
> Lane following assistance without good automatic braking kills.
Correct. It also kills when it lies to the driver about how confident it is in a given situation. Many of the crashes, instances of cars veering off into oncoming lanes etc. that we have seen would not have happened if Tesla fell back to the driver in sketchy situations. What it does instead is not even notify the driver that the system is hanging on by the barest of threads. This works out fine quite often and makes it seem like the car can detect the lanes successfully even in sketchy situations. Yet once the system (and the driver) runs out of luck, there is absolutely no chance to rectify the situation.
> Partially automated cars have been driven by average people on the public roads since 2006,
What the GP was referring to, and what I've seen it referred to as in Google presentations, is level 3 automation. That is what autopilot is supposed to be, where you can take your hands off the wheel and the car can drive itself but needs constant supervision.
What you're referring to is passive assistance such as collision detection which don't drive the car but provide alerts. There is nothing wrong with this technology, it works because the driver has to drive as they normally would but it provides assistance when it spots something the driver doesn't. Here humans do the object detection and the computers stay alert as a backup.
There is a problem when the car drives itself but the driver has to stay alert. This is a problem because the human is much better at object detection and the car is much better at staying alert.
I'm talking about active level 2 systems, which the Autopilot is an example of.
A level 2 system has to be monitored 100% of the time, because it can not be trusted to warn the driver and fall back to manual automatically in every situation. A level 3 system is robust enough to make this guarantee, which means that the driver can take his/her hands off the wheel until prompted to take back control.
Autopilot is a level 2 system, because you cannot take your hands off the wheel of a Tesla and expect to not die. In fact the car itself will tell you to put your hands back on the wheel after a short while. After the fatal crashes started getting into the media, Tesla themselves have stated that it's only a level 2 system. Their CEO's weird claims of Autopilot being able to handle 90% of all driving tasks, there being a coast-to-coast Autopilot demonstration run in 2017, etc. are just marketing BS.
Yep, fair enough, I was wrong Autopilot is level 2.
However I still disagree with your original comment:
> Sorry, but this is the exact wrong conclusion to make from the available data. Partially automated cars have been driven by average people on the public roads since 2006, when the Mercedes S class got active steering and ACC. There is no data that says that these systems, which are available on cars as affordable as e.g. a VW Polo, lead to more crashes. In fact the only data I have seen suggests that they reduce crashes and driver fatigue significantly.
Firstly, I don't see any examples of the VW Polo claiming to have a steering assist; even the most recent 2017 edition [1] only has level 1 features [0]. Steering assist only came to the Mercedes S-Class in 2014 [2].
What I (and the parent comment you initially replied to) was trying to claim was that whereas level 1 features [0] (cruise control, automatic braking, and lane keeping) are great, level 2 (and level 3 according to Google/Waymo) is risky because humans cannot be trusted to be alert and ready to take over.
I don't think it's a problem specific to Tesla; the main difference is that Tesla's system instills a lot more confidence in drivers than tools like lane assist and adaptive cruise control (source: I have those; I don't trust them). It's a marketing and user experience problem; marketing because Tesla still insists on calling it autopilot, and user experience because it works pretty good most of the time - in normal conditions. It fails when there's a truck crossing the road or when it's confused by roadworks. The lane assist in my car also gets confused at roadworks, but it's much more likely to turn off and will start yelling at me with annoying tones within seconds of not detecting any steering input.
Tesla can easily fix this - disable autopilot if the user does not have his hands on the wheel. Disable autopilot at construction zones. Disable autopilot sooner when confidence level is lowered.
What if confidence is lowered when a crash is imminent?
I test drove an “autopilot 2” Tesla a while back. Avoiding hitting trash cans and parked cars was much harder than it would have been if Autopilot had been off. When you’re on a road and the car decides to steer toward an obstacle, you have very little time to correct the car.
I don't know much about cars, but I guess the point you're missing is that Tesla is the first (and only) company to market automated driving assistance as Autopilot. My first impression when I heard that was that the car knew how to go from point A to point B with no or minimal manual intervention. With other cars it's marketed as what it is: driving tech that helps avoid accidents. But YouTube is full of videos of people letting go of the steering wheel on highways and stuff in Teslas.
Note that it's not (only) about colors... Human vision has trouble picking out colors in the dark, yet it still allows us to process the world around us to a surprising degree. However, until the "brain" of the car is good enough, it is simply irresponsible of Tesla not to use all the data available, from as many different sensors as possible. Tesla should use color. And they should use Lidar.
Yeah I guess the perception has to be good enough to not crash in situations where humans wouldn't. It seems not quite there with tesla. Waymo seems to be better in that regard.
My understanding is that the eye doesn't have a great dynamic range, but our brain tricks us (as with a lot of vision). When looking at dark things, we can't make out bright detail and vice versa. As soon as you look at a bright area to make out detail, the eye adjusts, just like a camera.
I (and I'm sure many others) did this for my Ph.D. project (group robotics) because it was the only way to get anything close to an acceptable frame rate on real-time video using the ARM boards available. Colour processing is a big overhead, and the higher the definition of the camera, the worse it gets - so there's a definite trade-off between the extra information from colour and getting more frames/s to process. When you're extracting information from movement - i.e. the differences between frames - colour isn't what you're looking at in any case (a rough sketch of that kind of greyscale pipeline is below).
But knowing all that, and now knowing Tesla made that short cut, there is no way on the face of this earth I am ever driving a Tesla, or for that matter, driving a car anywhere in the USA that has them on the road.
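For anyone curious what that kind of greyscale, frame-differencing pipeline looks like, here's a minimal OpenCV sketch -- purely illustrative, nothing to do with Tesla's actual stack, and the camera index and threshold are arbitrary:

    import cv2

    cap = cv2.VideoCapture(0)          # any webcam will do for the demo
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # one channel instead of three
        diff = cv2.absdiff(gray, prev_gray)              # what changed since the last frame
        _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        prev_gray = gray
        cv2.imshow("motion", motion)
        if cv2.waitKey(1) == 27:                         # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()

Dropping colour up front cuts the data you have to process per frame to roughly a third, which is exactly the frame-rate trade-off described above.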
I've had my suspicions but this confirms it. Tesla's autopilot cannot detect non-moving obstacles. This thing is a killing machine! The problem is not caused by an opaque machine learning network running wild or anything of similar complexity. It's something as basic as a lack of sensor input. With the current hardware on Tesla cars full autonomy is simply not possible. It's basically a glorified lane keeping assistant with marketing that fools people into risking their lives.
Tesla's stance to blame the driver without admitting that their hardware is inadequate will back them into a corner in the long term. They cannot transition from highway to city driving without significant changes to the autopilot software and hardware and this means they effectively have to start over from scratch. They have to start working on a serious competitor to Uber or Waymo now otherwise Tesla will be too late to the market.
If that's true, that would be cutting the corners to a criminal degree. "Safety Red? Signal Orange? Traffic Yellow? Nah, we're good with Meaningless Grey, Meaningless Grey and Meaningless Grey."
>> Automatic driving the Waymo way seems to be working. Automatic driving the Tesla way leaves a trail of blood and death.
If memory serves, Waymo's way seems to be summarised by the sentence "driver assist which we can have right now is inherently dangerous so let's go for full autonomy which we can't have for many years still".
A more accurate comparison of the two companies then might be that they are both trying to sell technology that they don't yet have (autonomous driving) and that only one of them is at least trying to develop (Waymo).
However, it should be absolutely clear that autonomous driving is currently only a promise and that this is true for every product being developed.
This isn't true at all. I have driven my Model S almost every day with autopilot for the past couple years. After getting used to it, you can very easily "feel" when turning and braking is supposed to happen and have plenty of time to take over if it doesn't. At first when I was getting used to it I would disengage frequently. Now I hardly do, and when I need to I have plenty of time to react.
Honestly the autopilot has changed my driving experience and I hate when I have to drive cars without it.
I do agree that you need to be aware of your surroundings and ready to take over, though.
> Tesla's system demands more from drivers than manual driving.
I often wonder if auto-pilot is truly 100% auto-pilot if it requires you to keep an eye out for its "Hey maybe I'll screw it up, so please keep a watch on me"
This is what it's commonly called currently. Not super weird if you consider that there is an autopilot on planes - which only keeps direction and altitude. You also have to keep an eye on it and of course steer it to where you want it to go.
Autopilot on a modern commercial aircraft does much more than hold heading and altitude. It's quite possible for autopilot to fly the plane autonomously almost entirely from takeoff to landing.
Autopilots on aircraft are remarkably dumb in comparison to even the simplest forms of self driving. They are quite capable of flying an aircraft into the side of a mountain if that is what the pilot selects.
Almost, almost - iff on the happy path and in good enough weather and this and that... In other words, just what we have here: "it helps you fly/drive, unless it doesn't, but you're still in command and responsible for taking over if it becomes unhappy." With flying, there's rarely a situation such as "concrete wall at 12 o'clock" - this might be the distinction w/r/t driving.
Does the autopiloted steering wheel turn as if piloted by a human driver?
If your hands are on a steering wheel, you’re watching the road, and autopilot begins turning towards an obstacle, you should be aware enough to grip the wheel fully and prevent the incorrect turn.
End of story.
Edit: “Autopilot” is a great brand-name for the technology, but perhaps implies too much self-driving capability. Maybe “assisted steering” is a better term for this.
Yes. What's disconcerting, though, is that the autopilot will often take a different line through a turn than I would. Still within the lane, just not what I would do. For me that felt like I had to correct the autopilot even when I didn't, so I didn't trust it. That was on a test drive. I've never enabled autopilot on my own Tesla.
> If your hands are on a steering wheel, you’re watching the road, and autopilot begins turning towards an obstacle, you should be aware enough to grip the wheel fully and prevent the incorrect turn.
> End of story.
No, not true at all. If you’re in a car, paying complete attention, and holding the wheel, but keeping your arms loose enough that the car is fully controlling the steering, you somehow need to notice the car’s error and take over in something on the order of a second or perhaps much less. Keep in mind that, on many freeway ramps, there are places where you only miss an obstacle by a couple of feet when driving correctly. If the car suddenly stops following the lane correctly, you have very little time to fix it.
It seems to me that errors of the sort that Autopilot seems to make regularly are very difficult for even attentive drivers to recover from.
Yes, the wheel turns, and the driver can over-ride/take over from the Autopilot at any time by just turning, or resisting the auto-pilot's turning of the wheel.
Nope. The wheel does turn, in videos I've seen, but with this video the problem is that the vehicle moves straight ahead as the yellow wall pops up in front of it, in which case the wheel would not indicate anything either.
Related to this, Musk's latest statement says the driver who died did not have his hands on the wheel for "6 seconds".
Is that supposed to make me feel better? If the car can go from fine to crashing into a barrier in only 6 seconds, that seems like a damnation of Autopilot more so than the driver.
Yes, with Tesla you are not a driver, you are a driving teacher for Tesla autopilot. And, as anyone who ever tried to teach their own kid how to drive knows, being a teacher is inherently more difficult than simply driving.
Yes, the problem is that you don't know the car has done something stupid until after it's already happened. By then, it's likely too late because obstacles on the road are very nearby.
It might not solve all these accidents but I think a HUD showing where the car plans to travel would work better. If you knew the car was aiming off the road before it turned, you could override the controls in time to correct. It still would require lots of attention and fast reaction time (and still may be less safe than manual driving), but it would at least be better than the situation we are in now.
Another way of looking at that crash could be that Tesla needs to be clearer about the specific environments in which AP can be used. Those road works were clearly visible much earlier than 1 second before the crash. The driver should've switched to manual control ahead of time, but probably didn't think they needed to.
Once you have enough Waymo cars on the road you'll see that the Waymo way kills people too. I would put all these vehicles in quarantine mode until the industry or government develops proper regulation/tests. The car shouldn't be able to change direction unless it proves it can never fail to identify obstacles.
>the Model S had higher claim frequencies, higher claim severities and higher overall losses than other large luxury cars. Under collision coverage, for example, analysts estimated that the Model S's mileage-adjusted claim frequency was 37 percent higher than the comparison group, claim severity was 64 percent higher, and overall losses were 124 percent higher.
http://www.iihs.org/iihs/sr/statusreport/article/52/4/4
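As a rough consistency check on those three figures (assuming overall losses scale roughly as claim frequency times claim severity): 1.37 × 1.64 ≈ 2.25, i.e. about 125% higher, which lines up with the 124% overall-losses number.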
>>Is there any evidence that this is the case?
>Yes.
If there is evidence for it, this isn't it. Tesla cars having higher claim frequencies compared to say the puny Fiat 500 Electric (noted in the article) could just as well be explained by people driving it dangerously because it has "Ludicrous Mode".
Porsche also has 3x the accident rate of Daewoo. That doesn't mean Daewoo cars are 3x as safe, it just means that people who are looking for a hot-rod buy a Porsche and not a Daewoo[1].
> If there is evidence for it, this isn't it. Tesla cars having higher claim frequencies compared to say the puny Fiat 500 Electric (noted in the article) could just as well be explained by people driving it dangerously because it has "Ludicrous Mode".
This is not the claim in the linked article. The linked article claims that Tesla cars have a higher claim frequency than comparable gasoline-powered cars (i.e. large luxury cars such as a Porsche), whereas for example the Nissan Leaf has a lower claim frequency than comparable gasoline-powered cars (namely the Nissan Versa).
Put another way, if your choice is between a Tesla and a gasoline-powered large luxury car, the Tesla is more dangerous. If your choice is between the Nissan Leaf and the Nissan Versa, the Leaf is less dangerous. There was no comparison between the relative danger from a Tesla and a Leaf.
This thread started off with the GP comment by crabasa saying "Wouldn't Tesla vehicles have a higher rate of crashes [if Autopilot was so dangerous], all things being equal?".
I'm pointing out that all other things aren't equal, and you can't assume from overall crash data that you can tease out statistics about how safe a specific feature of the car is.
The comparison to the Fiat 500 is relevant because while the report didn't only compare Tesla vehicles to it, that's one of the comparisons.
Is a Tesla less safe than a Fiat 500 given that it's driven by the same sorts of drivers, in similar conditions and just as carefully as a Fiat 500? Maybe, but who knows? We don't have that data, since there's an up-front selection bias when you buy a high-performance luxury car.
I wasn't able to find the raw report mentioned in this article, but here's a similar older report they've published:
There you can see that the claim frequency of Tesla is indeed a bit higher than that of all the other cars they're compared to, but this doesn't hold when adjusted for claim severity or overall losses. There, cars like the BMW M6 and the Audi RS7 pull ahead of Tesla by far.
So at the very least you'd have to be making the claim that even if this data somehow showed how badly performing Autopilot was, that it was mainly causing things like minor scratches, not severe damage such as crashing into a freeway divider.
Just looking at these numbers there seems, to me anyway, to be a much stronger correlation between lack of safety and whether the buyer is a rich guy undergoing a mid-life crisis than any sort of Autopilot feature.
Evidence isn't the same as conclusive proof. Yes, the different demographics mean that we can't conclude that any difference is due to Tesla's autopilot. However, it is evidence that Tesla is more crash prone.
It isn't clear whether the demographics of Tesla drivers are more reckless than those of other luxury brands or not; as you point out, Porsche drivers might tend to be more interested in going fast than Daewoo drivers. For Tesla, on the one hand you attract people who are interested in helping the environment, whom I expect to be more conscientious and maybe therefore better drivers. But on the other hand there is Ludicrous Mode.
But since Tesla could have had a lower or higher crash rate than other brands and does have a higher rate we have to update our beliefs in the direction of the car being more likely to crash by conservation of expected evidence. Unless you'd argue that a lower crash rate means that Tesla's safety features prevent lots of crashes.
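To make the "conservation of expected evidence" point concrete, here's a toy Bayes update -- every number is invented purely for illustration, not an estimate of anything about Tesla:

    # P(crash prone | higher observed claim rate), with made-up inputs.
    prior_crash_prone = 0.5            # prior belief the car is unusually crash prone
    p_high_if_crash_prone = 0.8        # P(see higher claim rate | crash prone)
    p_high_if_not = 0.4                # P(see higher claim rate | not), e.g. via driver selection

    evidence = (prior_crash_prone * p_high_if_crash_prone
                + (1 - prior_crash_prone) * p_high_if_not)
    posterior = prior_crash_prone * p_high_if_crash_prone / evidence
    print(round(posterior, 2))         # 0.67 -- belief shifts up, but nowhere near certainty

The observation moves the needle; how far it moves depends entirely on how much of the effect you attribute to driver selection.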
No, because as noted in my sibling comment[1] that would assume that you've got the same demographic buying a Lexus as a BMW. That's been shown not to be the case; there's a selection bias where people who are going to drive unsafely buy higher performance vehicles.
Not necessarily. It would likely lead to a lower rate of accidents, but with much higher severity.
An example of the inverse of this concept is the roundabout or traffic circle, which has a higher rate of much lower severity accidents than traffic lights or stop signs
Is there any evidence it’s not? Does Tesla publish their crash rate? Also note that they are an extremely small segment of the vehicle population so it may not be a statistically valid sample yet.
> That is not an accident. It follows directly from Musk's decision to cut costs by trying to do the job with inadequate sensors and processing.
Any references to support that statement?
First, how do you know it was Musk's decision? And second, when you imply that by using lower cost sensors (cameras), Tesla is cutting costs, how do you support that?
Their lower cost of sensors might be offset by having to spend more on software development for image processing, for example. When summed up, the overall cost per car may increase significantly.
At least to me, it is not clear at all whether Tesla's self-driving technology cost per car is less or more than it is e.g. for cars which use lidar sensors.
Do you own a Tesla? I do. I’ve driven one for 3 years now. It definitely makes road trips easier and reduces fatigue. I believe it requires the same amount of attention but less physical exertion. Your polemic description of “trail of blood and death” is really based on anecdotal descriptions instead of reality. Car accidents happen and will continue to happen for a long time. The goal is to reduce the number of accidents.
I have never driven a (partially) self driving car but my experience of regular cars is that I struggle to keep my attention on the road (or staying awake) in long trips as a passenger while I have no such problems as a driver. So I could imagine how the driver of a self driving car, particularly after a long time with no incident, could have low reaction times while given even less time to react than if he was driving.
In any case, self driving cars will have accidents, not the least because some accidents are unavoidable (random behaviour of other cars, animals or pedestrians, unexpected slippery roads, etc), but running heads on into a large visible static object should never happen. It’s a bug.
I do since Dec'16. Any common car with adaptive cruise control should make road trips easier and reduce fatigue without any false sense of security. You own the older gen AP1 which should be way more stable and apparently doesn't have this particular "barrier" bug.
Tesla probably shouldn't be saying anything about this at all, even just to avoid giving it more news cycles. But if they were going to say something, here's what they should have said the first time.
----
We take great care in building our cars to save lives. Forty thousand Americans die on the roads each year. That's a statistic. But even a single death of a Tesla driver or passenger is a tragedy. This has affected everyone on our team deeply, and our hearts go out to the family and friends of Walter Huang.
We've recovered data that indicates Autopilot was engaged at the time of the accident. The vehicle drove straight into the barrier. In the five seconds leading up to the crash, neither Autopilot nor the driver took any evasive action.
Our engineers are investigating why the car failed to detect or avoid the obstacle. Any lessons we can take from this tragedy will be deployed across our entire fleet of vehicles. Saving other lives is the best we can hope to take away from an event like this.
In that same spirit, we would like to remind all Tesla drivers that Autopilot is not a fully-autonomous driving system. It's a tool to help attentive drivers avoid accidents that might have otherwise occurred. Just as with autopilots in aviation, while the tool does reduce workload, it's critical to always stay attentive. The car cannot drive itself. It can help, but you have to do your job.
We do realize, however, that a system like Autopilot can lure people into a false sense of security. That's one reason we are hard at work on the problem of fully autonomous driving. It will take a few years, but we look forward to some day making accidents like this a part of history.
>It's a tool to help attentive drivers avoid accidents that might have otherwise occurred.
This needs far more discussion. I just don't buy it. I don't believe that you can have a car engaged in auto-drive mode and remain attentive. I think our psychology won't allow it. When driving, I find that I must be engaged, and on long trips I don't even enable cruise control because taking the accelerator input away from me is enough to cause my mind to wander. If I'm not in control of the accelerator and steering while simultaneously focused on threats, including friendly officers attempting to remind me of the speed limit, I space out fairly quickly. In observing how others drive, I don't think I'm alone. It's part of our nature. So then, how is it that you can have a car driving for you while simultaneously being attentive? I believe they are so mutually exclusive as to make it ridiculous to claim that such a thing is possible.
I don't buy this either, nor should we; it's not how the feature is marketed.
"The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat."
The result of this statement, and of functionality that matches it, is that it creates a reinforced false sense of security.
Does it matter whether the driver of the Model X whose autopilot drove straight into a center divider had his hands on the wheel, if the outcome of using autopilot is that drivers focus less on the road? What is the point of two drivers, one machine and one human? You cannot compare a car's autopilot to an airplane's; they're not even in the same league. How often does a center divider just pop up at 20k ft?
Usually machinery either augments human capabilities by enhancing them, or entirely replaces them. This union caused by both driver and car piloting the vehicle has no point especially when it's imperfect.
I'm not opposed to Tesla's sale of such functionality, sell whatever you want, but I am opposed to the marketing material selling this in a way that contradicts the legal language required to protect Tesla...
There are risks in everything you do, but don't market a car as having the hardware to do 2x your customer's driving capability and then have your legal material say "btw, don't take your hands off the steering wheel"... especially when there's a several-minute video showing exactly that.
Tesla customers must have the ability to make informed choices in the risks they take.
Which is, by the way, part of why I love the marketing for Mobileye (at least in Israel, haven't seen e.g. American ads). It's marketed not as driving the car, but as stepping in when the human misses something. Including one adorable TV spot starring an argumentative couple who used to argue about who's a better driver, and now uses the frequency of Mobileye interventions as a scoring system. Kind of like autonomous car disengagement numbers :-P
There is a solution for this - if the driver shows any type of pattern of not using the feature safely, disable the feature. Autopilot and comparable functionality from other vehicles should be considered privileges that can be revoked.
Systems that are semi-autonomous, where there's some expectation of intervention, work well in those scenarios: keep the car within the lane markers on the highway, etc., and make sure the user's hands are on the wheel. But for fully autonomous driving, even if your hands are on the wheel, how does it know you're paying attention?
This is exactly what the Tesla does. It periodically "checks" that you are there by prompting you to hold the steering wheel (requiring a firm grip, not just hands on the wheel). If you don't, the car slows to a stop and disables autopilot for the remainder of the drive.
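Something like this nag-and-escalate loop, in spirit -- the timings and exact behaviour below are invented for illustration and are not Tesla's actual values:

    import time

    NAG_INTERVAL_S = 30    # how often to ask for torque on the wheel (made up)
    GRACE_S = 10           # how long the driver has to respond (made up)

    def monitor(torque_detected):
        """torque_detected: callable returning True if the driver is gripping the wheel."""
        while True:
            time.sleep(NAG_INTERVAL_S)
            print("Please hold the steering wheel")
            deadline = time.monotonic() + GRACE_S
            while not torque_detected():
                if time.monotonic() > deadline:
                    print("No response: slowing to a stop, autopilot disabled for the rest of the drive")
                    return
                time.sleep(0.1)

The weakness, as the parent comment points out, is that wheel torque is a proxy for presence, not for attention.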
> I'm not opposed to Tesla's sale of such functionality, sell whatever you want, but I am opposed to the marketing material selling this in a way that contradicts the legal language required to protect Tesla...
First let me state that I agree with this 110%!
I'm not sure if this is what you are getting at but I'm seeing a difference between the engineers exact definition of what the system is, what it does, and how it can be properly marketed to convey that in the most accurate way. I'm also seeing the marketing team saying whatever they can, within their legal limits (I imagine), in order to attract potential customers to this state-of-the-art system and technology within an already state-of-the-art automobile.
If we are both at the same time taking these two statements verbatim, then which one wins out:
> Autopilot is not a fully-autonomous driving system. It's a tool to help attentive drivers avoid accidents that might have otherwise occurred. Just as with autopilots in aviation, while the tool does reduce workload, it's critical to always stay attentive. The car cannot drive itself. It can help, but you have to do your job.
and
> The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat.
If that's the crux of the issue that goes to court then who wins? The engineering, legal, marketing department, or do they all lose because the continuous system warnings that Autopilot requires attentive driving were ignored and a person who already knew and complained of the limits of that system decided to forego all qualms about it and fully trust in it this time around?
I feel like when I was first reading and discussing this topic I was way more in tune with the human aspect of the situation and story. I still feel a little peeved at myself for starting to evolve the way I'm thinking about this ordeal in a less human and more practical way.
If we allow innovation to be extinguished for reasons such as these, will we ever see major growth in new technology sectors? That might be a little overblown, but does the fact that Tesla's additions to safety and standards have resulted in a markedly lower accident and auto death rate mean nothing in context?
If Tesla is doing a generally good job and bringing up the averages on all sorts of safety standards while sprinting headlong towards even more marked improvements are we suddenly supposed to forget everything we know about automobiles and auto accidents / deaths while examining individual cases?
Each human life is important. This man's death was not needed and I'm sure nobody at Tesla, or anywhere for that matter, is anything besides torn up about having some hand in it. While profit is definitely a motive, I think Tesla knows that the means to get to the profit they seek is to create a superior product, and that includes superior features and superior safety standards. If Tesla is meeting and beating most of those goals and we have a situation such as this, why do I feel (and I could be way wrong here) that Tesla is being examined as if they are an auto manufacturer with a history of lemons, deadly flipped car accidents, persistent problems, irate customers, or anything of the like in this situation?
For whatever reason it kind of reminds me of criminal vs. civil court cases. Criminal it's upon the State or Prosecution to prove their case. In the civil case the burden is on the Defense to prove their innocence. For some reason I feel like Tesla is in a criminal case but having to act like it's a civil case where if they don't prove themselves they will lose out big.
To me it feels like the proof is there. The data is there. The facts are known. The fact that every Tesla driver using Autopilot in that precise location doesn't suffer the same fate points toward something else going on but the driver's actions also don't seem to match up with what is known about him and the story being presented on the other side. It's really a hairy situation and I feel like it warrants all sorts of tip toeing around but I also have the feeling that allowing that "feeling" aspect to dictate the arguments for either side of this case are just working backwards.
And for what it's worth I don't own a Tesla, I've never thought about purchasing one. I like the idea, my brother's friend has one and it's neat to zoom around in but I'm just trying to look at this objectively from all sides without pissing too many people off. Sorry if I did that to you, it wasn't my intent.
No one wins when someone dies... I'm sure you're right that Tesla employees are torn up.
My concern is that it looks like Tesla is 90% of the way there to full autonomy, and the way the feature is marketed will lull even engineers who know more about how these systems work into a false sense of security, and they'll end up dying as a result -- they'll trust a system that shouldn't be trusted. There isn't a good system for detecting a lack of focus, especially when it won't take more than a few milliseconds to go from good to tragic.
I have to preface my post to say that I think developing self-driving automobiles is so important that it's worth the implied cost of potentially tens of thousands of lives in order to perfect the technology, because that's what people do; make sacrifices to improve the world we live in so that future generations don't have to know the same problem. But I think you're right. I think the "best" way to move forward until we have perfected the technology is not something that drives for you, but something that will completely take over the millisecond the car detects that something terrible is about to happen. People will be engaged because they have to be engaged, to drive the car. The machine can still gather all the data and ship it off to HQ to improve itself (and compare its own decisions to those of the human driver, which IMO is infinitely more valuable). But if there's one thing the average person is terrible at, it's reacting quickly to terrible situations. You're absolutely right that people can't be trusted to remain actively engaged when something else is doing the driving. Great example with the cruise control, too.
No one dies so someone 10 years from now can watch a full episode of the family guy unimpeded for the duration of their commute.
The human toll is irrelevant to the conversation, what's relevant is whether risks taken are being taken knowingly - you cannot market a self driving vehicle whose functionality "is 2x better than any human being" while simultaneously stating in your legal language to protect yourself: don't take your hands off the wheel - that's bs.
Plenty of people die right now because they just got a text or they had no other way home from the bar, etc.
The human toll is absolutely relevant to the conversation: this is about people dying now and in the future. It seems cruel to discuss it in a "I'll sacrifice X to save Y" later, but it can reasonably be reduced to that.
I think it's safe to assume that this will drastically reduce driving related injuries and deaths.
It's deceptive to assume autopilot saves lives when it too has taken them. The number of people with access to autopilot is far too small to statistically determine how many more center divider deaths we might have if everyone were its passenger.
Is the life taken by auto pilot worth less than the life taken by the aggressive driver who takes out an innocent driver? No.
I hope we eventually save lives, as in a net improvement in current death totals, by using these technologies, but the risks are not well communicated, the marketing is entirely out of sync with the risks, and the "martyrs" we create thus look, to me, like victims.
> Is the life taken by auto pilot worth less than the life taken by the aggressive driver who takes out an innocent driver? No.
I think beliefs such as these are fueled by the extremely naive implication that each death will cause the learning algorithm to "improve itself", so every self-driving thing out there is safer owing to that death.
That's not the thrust of my point... Talking about how many people have to die to perfect autonomous vehicles is pointless, some people are willing to jump out of airplanes & they fully understand the risks.
Some number of people, N, are willing to risk their lives to use autonomous vehicles, and some of them will die as a result. The risks should be just as clear to the person using Autopilot; they should not be misled with marketing fluff that doesn't come close to reality. Martyrs, not victims.
>I think it's safe to assume that this will drastically reduce driving related injuries and deaths.
This assumes that self-driving tech will continue to increase in competence and will at some point surpass humans. I find that extremely optimistic, bordering on naive.
Consider something like OCR or object recognition alone, where similar tech is applied. Even with decades of research behind it, it cannot come anywhere close to a human in terms of reliability. I am talking about stuff that can be trained endlessly without any sort of risk. Still, it does not show an ever-increasing capability.
Now, machine learning and AI is only part of the picture. The other part is the sensors. This again is not anywhere near the sensors a human is equipped with.
What we have seen in the tech industry in recent years is that people's trust in a technology, even among intelligent people such as those investing in it, is not based on logic (Theranos, uBeam, etc). I think such a climate is exactly what is enabling tests such as these. But unlike the others, these tests are actually putting unsuspecting lives on the line. And that should not be allowed.
It is optimistic. Is it naive? Only in the sense that I don't do development in that realm and I can only base my assessment on what's publicly discussed.
Please note that I artfully omitted a due date on my assumption. There's so much money involved here and so much initial traction that it is indeed reasonable to think that tech can surpass a "normal" driver.
I'm also biased against human drivers, plenty of whom should not be behind the wheel.
>There's so much money involved here and so much initial traction that it is indeed reasonable to think that tech can surpass a "normal" driver.
I don't think it is reasonable at all to reach that conclusion based on the money involved... You just can't force progress or breakthroughs by throwing money at a problem.
>I'm also biased against human drivers, plenty of whom should not be behind the wheel.
So I think it would be quite trivial to drastically increase the punishment for dangerous practices, if caught. I mean, suspend the license or impose a lifetime ban if you are caught texting while driving or drunk driving.
Money absolutely matters. If there's no money, there's no development. And vice versa. Funded development isn't a guarantee of success, but it raises the odds above zero.
You're also ignoring a key point: we have "self-driving" cars right now, but they're not good enough yet. Computer hardware is getting cheaper day by day, and right now the limiting factor appears to be the cost of sensors.
>Money absolutely matters. If there's no money, there's no development. And vice versa.
Neither of those is true. It does not take money for someone to have a great breakthrough idea, and it is not possible to guarantee the generation of a great idea by just throwing more and more money at researchers.
Here is the messy situation: maybe this system is better at avoiding accidents than 40% of the people 99.999% of the time.
The best thing is to build a system to analyze your driving and figure out if you are in that 40% of people and then let it drive for you. Maybe drunk drivers, for example. It can do this per ride: “oh you’re driving recklessly, do you want me to take over?”
EVERYTHING ELSE SHOULD BE A STRICT IMPROVEMENT. Taking over driving and letting people stop paying attention is not a strict improvement.
The argument should NOT be about playing with people’s lives now so in the future some people can have a better system. That’s a ridiculous argument. Instead WHY DON’T THE COMPANIES COLLABORATE ON OPEN SOURCE SOFTWARE AND RESEARCH TO ALL BUILD ON EACH OTHER’S WORK? Capitalism and “intellectual property”, that’s why. In this case, a gift economy like SCIENCE or OPEN SOURCE is far far superior at saving lives. But we are so used to profit-driven businesses, it’s not likely they will take such an approach to it.
What we have instead is companies like Waymo suing Uber and Uber having deadly accidents.
And what we SHOULD have is if an incremental improvement makes things safer, every car maker should be able to adopt it. There should be open source shops for this stuff like Linux that enjoy huge defensive patent portfolios.
Pioneers are usually people well aware that what they’re doing is risky. I doubt that the victims of the recent Tesla crashes and of the latest Uber crash regarded themselves as pioneers. They probably just wanted to arrive safely at their destination and relied on a feature marketed as being capable of bringing them there.
The pioneers in this case are putting other people’s life at risk.
Waymo seems to demonstrate that improving self-driving cars without leaving a trail of bodies behind is within the realm of possibility, so let’s measure Tesla against that standard.
The cars can improve by being pieces of soft foam emulating the aerodynamics of a car while atop a metal base with wheels and an engine inside the foam. They would be fully autonomous with no human driver, avoid collisions as much as possible and yet fluffy enough to not hurt anyone even at high speeds.
I disagree with your initial sentiment. In my opinion, we can have self driving cars without a large human toll. I just think we need to stop trying to merge self driving cars into a road system designed for human operators. Moreover, we should not be "beta testing" our self driving cars on roads with human operators. Accidents will happen, as ML models can and do go unstable from time to time. Instead, we should look to update our roads and infrastructure to be better suited to automated cars. Till then, I hope those martyred in the name of self driving technology are not near and dear to you (even if you'd feel it's worth it).
To me, beta testing should be a long period of time where the computer is running while humans are driving, with deviations between what the computer would do if it had control and what the human actually does being recorded for future training. The value add is that the computer can still be used to alert the driver to dangerous conditions, or potentially even override in certain circumstances (applying the brakes when lidar sees an obstacle at night that the human driver didn't see).
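A minimal sketch of what that kind of shadow-mode logging might look like. Everything here is hypothetical (sensor frames, a planner with a propose() method, a feed of the driver's actual inputs); it is not any manufacturer's actual API, just an illustration of "record the disagreement, never actuate":

    # Hypothetical shadow-mode logger: the autonomy stack runs but never actuates anything.
    # "sensors", "planner" and "human_inputs" are illustrative stand-ins, not a real API.
    import json, time

    STEER_DEVIATION_RAD = 0.05   # log if planned vs. actual steering differs by ~3 degrees
    BRAKE_DEVIATION = 0.2        # or if the brake command differs by 20% of pedal travel

    def shadow_loop(sensors, planner, human_inputs, log_path="shadow_log.jsonl"):
        with open(log_path, "a") as log:
            for frame in sensors:                      # camera/radar/IMU frames at some fixed rate
                plan = planner.propose(frame)          # what the computer *would* do
                human = human_inputs.latest()          # what the driver actually did
                if (abs(plan.steering - human.steering) > STEER_DEVIATION_RAD
                        or abs(plan.brake - human.brake) > BRAKE_DEVIATION):
                    log.write(json.dumps({             # disagreement -> candidate training example
                        "t": time.time(),
                        "frame_id": frame.id,
                        "planned": {"steering": plan.steering, "brake": plan.brake},
                        "actual": {"steering": human.steering, "brake": human.brake},
                    }) + "\n")

The interesting output is the log of disagreements: that is exactly the data you would want both for future training and for deciding when an alert (or an emergency-brake override) is justified.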
The problem is that Uber needs self-driving cars in order to make money, and Tesla firmly believes that their system is safer than human drivers alone (even if a few people die who otherwise wouldn't have, others who might have died won't, and they believe those numbers make it worth it).
It's surprising that this isn't the standard right now. I'm certain the people at Tesla/Uber/Waymo have considered this - I'm curious why this approach isn't more common.
How about all those martyred by prolonging our current manual driving system for the years and decades it will take to roll out separate infrastructure for vehicles no one owns because they can't drive them anywhere?
I think we need to keep the human driver in control, but have the computers learning through that constant, immediate feedback.
And get rid of misleading marketing and fatal user experience design errors.
>but have the computers learning through that constant, immediate feedback.
I don't know what is stopping them from simulating everything inside a computer.
Record the input from all the sensors when a car equipped with sensors is driven through real roads by a human driver.
Replay the sensor input, with enough random variations and let the algorithms train on it.
Continue adding to the library of sensor data by driving the sensor car through more and more real-life roads and real-life situations. Keep feeding the ever-increasing library of sensor data to the algorithm, safely inside a computer.
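Roughly, the loop being described might look like the sketch below. It is purely illustrative: load_drive_log and the model object are hypothetical stand-ins, and the augmentations are just examples of "random variations" applied to recorded camera frames.

    # Illustrative offline training over recorded drives (behavior cloning against the human's actions).
    # load_drive_log() and model are hypothetical; only the augmentation step is concrete.
    import random
    import numpy as np

    def augment(frame: np.ndarray) -> np.ndarray:
        """Add random brightness jitter and sensor noise to a recorded camera frame (values in [0, 1])."""
        frame = frame * np.random.uniform(0.8, 1.2)
        frame = frame + np.random.normal(0.0, 0.02, frame.shape)
        return np.clip(frame, 0.0, 1.0)

    def train_on_logs(log_paths, model, epochs=10):
        for _ in range(epochs):
            random.shuffle(log_paths)
            for path in log_paths:
                # each log entry pairs a recorded sensor frame with what the human driver actually did
                for frame, human_action in load_drive_log(path):
                    model.update(augment(frame), human_action)   # replay with random variations
        return model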
Not following you here. What do you mean by "The map is not the territory"?
What I mean is this: do not "teach" the thing in real time. Instead, collect the sensor data from cars a human is driving (and collect the human's inputs as well), and train the thing on it, safely inside the lab.
You say they have done it already. But I am asking whether they have done it enough. And if they have, how come accidents such as these are possible, when the situation is straight out of a textbook on basic driving?
I don't think it'll take years to update our infrastructure. For example, we could embed beacons into catseyes to make it easier to know where the road boundaries are etc. Also, we could make sections of the highway available with the new infrastructure piece by piece. It is just as progressive as your suggestion, but the problem becomes a whole lot easier to solve when you target change towards infrastructure as well as the car itself.
"I just think we need to stop trying to merge self driving cars into a road system designed for human operators. Moreover, we should not be "beta testing" our self driving cars on roads with human operators."
Sounds like requiring exclusive access - I apologize if that was a misinterpretation.
If you have human and automated drivers on the same roads, the computers have to be able to cope with the vagaries of human drivers.
How can you then get away from '"beta testing" our self driving cars on roads with human operators' if that is their deployment environment?
> I have to preface my post to say that I think developing self-driving automobiles is so important that it's worth the implied cost of potentially tens of thousands of lives in order to perfect the technology, because that's what people do; make sacrifices to improve the world we live in so that future generations don't have to know the same problem.
This is the definition of a false dichotomy and it implicitly puts the onus on early adopters to risk their lives (!) in order to achieve full autonomy. Why not put the onus on the car manufacturer to invest sufficient capital to make their cars safe!? To rephrase what you said with this perspective:
> ...developing self-driving automobiles is so important that it's worth the implied cost of potentially tens of billions of investor dollars in order to perfect the technology, because that's what people do; make sacrifices to improve the world we live in so that future generations don't have to know the same problem.
This seems strictly better than the formulation you provided. How nuts is it that the assumption here is that people will have to die for this technology to be perfected? Why not pour 10x or 100x the current level of investment in and build entire mock towns to test these cars in, with trained drivers emulating traffic scenarios? Why put profits ahead of people?
This reply is a classic straw man (and one of the main reasons I left Facebook behind). You are making an assumption here that's wrong and I hate that I have to speak to what you've said here because you are reading words that I didn't type. I personally would not choose to put profits ahead of people. But I didn't say anything about profit. I deliberately left profit and money completely out of my post for a reason. You also seem to be suggesting that throwing money at the problem is going to magically make it perfectly safe. You are looking for guarantees and I'm sorry to break it to you, but there are no guarantees in life. "Screws fall out all the time". People are going to die in the process of developing self-driving automobile technology. People are going to die in situations that have nothing to do with self-driving automobile technology. Deaths are inevitable and I am saying that it is worth a perceived significant loss of human life to close the gap using technology so that the number of people dying on the roads every year approaches zero.
> I have to preface my post to say that I think developing self-driving automobiles is so important that it's worth the implied cost of potentially tens of thousands of lives in order to perfect the technology, because that's what people do; make sacrifices to improve the world we live in so that future generations don't have to know the same problem.
I generally agree with this philosophy but this is very optimistic, at least in the United States. This is a country where we can't even ban assault rifles let alone people from driving their own vehicles. You're going to see people drive their own vehicles for a very long time even if self driving technology is perfected.
I think there is a key point that will result in the freedom to drive being stripped long before assault rifles. Imagine I create a private road from SF to LA and say that only self-driving cars can drive on it. The vehicles on this road are all inter-connected, allowing them to travel at speeds in excess of 150 MPH, and since the road I've created is completely flat, it's still a smooth ride. But if I allow cars whose actions can't be predicted (cars driven by humans), it becomes impossible to safely drive at these speeds. So I, as the road owner, ban cars driven by people on my private toll road. As this becomes more prevalent, I will no longer have the want or need to drive my own car, because then it takes me 6 hours to get to LA instead of 2. All the while there's no reason for me to get rid of my assault rifle, because I really enjoy firing it out the window of my self-driving car at 150 MPH on the way to LA.
With a train, I have to physically go to the train station, park my car, walk and find which train/subway to hop on, sit next to other people in a crowded, confined space, possibly get off and on another train going to a different destination, then get off the train and walk or get a rental car to where I actually want to go.
Compare the above to: hop in my car, drive to the freeway, turn on self-driving, turn off self-driving once I get off the freeway, find parking near where I'm going, and walk in.
As a society, we've done a lot more in the name of convenience.
> Are you going to having a train running every few minutes to really make the delays comparable?
Commuter rail systems run at 2 minute headways or less. Long-distance trains mostly don't but that's largely due to excessive safety standards - for some reason we regulate trains to a much higher safety standard than cars. Even then, the higher top speeds of trains can make up for a certain amount of waiting and indirect routing. (Where I live, in London, trains are already faster than cars in the rush hour).
> What's the relative cost of all that vs. pavement?
When you include the land use and pollution? Cars can be cheaper for intercity distances when there's a lot of similarly-sized settlements, but within a city they waste too much space. And once you build cities for people rather than cars, cars lose a lot of their attraction for city-to-city travel as well, since you're in the same situation of having to change modes to get to your final destination.
> for some reason we regulate trains to a much higher safety standard than cars
That "some reason" is physics. According to a quick Google search, an average race car needs 400m of track length from 300 km/h to 0 km/h. A train will require something around 2500m, over 5x the distance, to brake from the same speed. Trains top out at -1.1m/s² deceleration, an ordinary car can get -10m/s² deceleration.
Part of the reason why is also that in a car, people are generally using their seatbelts - which means you can safely hit the brakes with full power. In a train, however, people will be walking around, standing, taking a dump on the loo - and no one will be using a belt. Unless you want to send people literally flying through the carriages, you can't go very much over that 1 m/s² barrier.
Because of this, you have the requirement of signalling blocks spaced so that a train at full speed can still come to a full stop before the next block signal. Also: a train can carry thousands of people. Have one train derail and crash into, say, a bridge, or collide with another train, and you're looking at way, way more injuries and deaths than even a megacity's emergency services could handle, much less a rural area's.
My point is that the overall safety standard is wildly disproportionate even so. The rate of deaths/passenger/mile that we accept as completely normal for cars would be regarded as disastrous for a train system.
I mean, if there’s the demand, sure. Lots of commuter trains run at that sort of rate.
Though your train of cars would likely have such low passenger density that a series of buses would be just as good. Special lanes just for buses are already a thing.
So you exit the self driving private road to enter a public road where there are still local residents who insist on driving their own vehicle. Some of these residents have never been to location X and have no interest in it. They care about their neighborhood and getting around however they want.
The point is driving is a freedom and getting rid of it in this country will be hard. I'd imagine self driving vehicles having more prevalence in China where the government can control what destinations you have access to and monitor your trips.
You'll see municipalities and then states banning cars starting off using soft incentive based approaches, then harder approaches once enough people switch over.
Many states (red) won't ban them for a very long time.
The impact on freedom to travel will have to be secured and decentralized, without any government kill switches.
You'll see municipalities and then states banning cars
Which states? Maybe a few in New England, but I don't see that happening anywhere else. Counties perhaps, but there are rural areas pretty much everywhere, and people are going to want the freedom to drive their own vehicle.
Economic incentives and competition will eventually cause it to happen regardless of sentiment. First, insurance rates for manually driven cars will shoot through the roof as less risky drivers moving to self driving cars decimate that risk pool (like if gun owners were required insurance for misuse). Second, cities that go to self driving only will have a huge advantage in infrastructure utilization and costs as roads are used more efficiently (with smoother traffic) and parking lots/garages become a thing of the past. Residents will just push for it if it means not being stuck in traffic anymore. Or worse, people and companies will relocate to cities with exclusive self driving car policies, creating a huge penalty for cities that don’t or can’t do that.
In comparison, the economic impact/benefit of banning assault rifles is negligible (and definitely not transformative) even if I personally think it is the morally right thing to do. (Maybe we can make the case later if school security and active shooter drills become prohibitively expensive and/or annoying)
> Or worse, people and companies will relocate to cities with exclusive self driving car policies
So people will relocate to avoid traffic? Why doesn't this happen today? Suppose San Francisco decided to not enforce self driving laws to protect small businesses and preserve community infrastructure and culture. Now suppose Phoenix (only picked because they've been progressive with self driving technology) does enforce self driving laws, would you expect a mass exodus from San Francisco to Phoenix?
Additionally, it isn't really SF vs. Phoenix. Think global competition: if developing megacities in Asia adopt this before American cities do, they will be able to catch up with, and very likely exceed, their American counterparts economically in a short period of time.
> First, insurance rates for manually driven cars will shoot through the roof as less risky drivers moving to self driving cars decimate that risk pool (like if gun owners were required insurance for misuse).
Why would the less-risky drivers move to self-driving cars first? Wouldn't some of the higher-risk demographics (e.g. the elderly) make the move first since they have more incentive to do so?
> Second, cities that go to self driving only will have a huge advantage in infrastructure utilization and costs as roads are used more efficiently (with smoother traffic) and parking lots/garages become a thing of the past. Residents will just push for it if it means not being stuck in traffic anymore.
I think self-driving cars will be really cool and reduce traffic accidents once they're perfected, but a lot of these assumptions don't make sense. Unless a critical mass switches to car-sharing, autonomous cars and no parking will make rush hour worse because now each car will make the round trip to work twice a day instead of just once. Also, what happens to the real-estate where the parking lots are now? The financially sound thing to do will probably be converting these lots to more offices/condos/malls. So urban density will increase - increasing traffic.
Even if autonomous cars radically improve traffic flow, I suspect we'll just get induced demand [1]. More people will take cars instead of public transit and urban density will increase until traffic sucks again.
> Wouldn't some of the higher-risk demographics (e.g. the elderly)
The elderly aren't usually considered higher risk. Young kids are, enthusiasts are, people who drive red sports cars are.
> Unless a critical mass switches to car-sharing, autonomous cars and no parking will make rush hour worse because now each car will make the round trip to work twice a day instead of just once.
Autonomous cars should be mostly fleet vehicles (otherwise you have to park it at home).
Isn't that just like in most of the major world cities where taxis are the norm rather than the exception? It isn't weird for a taxi in Beijing to make 5-6 morning commute rounds. But even then, there are a lot of reverse commutes to consider.
> The financially sound thing to do will probably be converting these lots to more offices/condos/malls.
While density can increase, convenient, affordable personal transportation also allows the opposite to occur. Parks, nice places, and niche destinations are also possible.
Think of it this way, once traffic is mitigated, urban planning can apply more balance to eliminate uneven reverse commute problems. There will still be an incentive to not move, but movement in itself wouldn't be that expensive (only 40 kuai to get to work in Beijing ~15km, I'm sure given the negligible labor costs, autonomous cars can manage that in the states).
We are asking that self driving cars be ALLOWED if the user chooses, even IF the safety is in doubt. This is because of just how extremely important this issue is.
While I agree with your conclusion, the opening line strikes me as silly. Why is it "so important" to have self-driving cars? These cars that can't detect stationary objects directly in front of them are nowhere close to the self-driving pipe dream that's been around for a century. Maybe by 2118 we'll be making more progress.
Also, people are terrible at detecting objects directly in front of them and just like computers, the human brain can be cheated, overloaded, inept or inexperienced leading to an accident.
Now we have cars with lane assist, smart braking, and autopilot features, and that's only in the past 5-10 years.
Of all the places where technology can save lives, it's definitely in vehicles/transportation.
> Also, people are terrible at detecting objects directly in front of them and just like computers, the human brain can be cheated, overloaded, inept or inexperienced leading to an accident.
How many optical illusions do you usually see on the road while driving that could result in an accident?
I am not even talking about the "people are terrible at detecting objects directly in front of them" part.
I mean, how can you be a human being and say this? If we were "terrible at detecting objects directly in front of us", we would have been preyed out of existence a long time ago.
Dips aren't quite an optical illusion; nor are blind spots, or obscured vehicles (behind the frame of the car or behind another vehicle), but those are all quite common and are similar to illusions (you see imperfectly).
Sometimes you'll see multiple white lines, or lanes that appear to veer off due to dirt on the road. A bit of litter looks like a person; a kid looks like they might run out.
A lot of times I find I'm searching for something and can't see it but it was in my visual field. I think this worsens with age.
They're similar in the sense that you don't see what you need to see; in the limited locus of "ability to safely control a vehicle" I consider them similar.
An optical illusion is not the same as not being able to see an object behind an opaque object. And when you say "the brain can be cheated", that means an optical illusion.
That is the only thing I was responding to at the start of this discussion. Essentially, the person was saying the human brain can be cheated just like a computer.
I am saying: no, not just like a computer. The human brain does not get cheated as easily as a computer. Claiming that is outrageous and shows you have no idea what you are talking about...
We're not talking about complicated scenarios with multiple moving actors. Tesla's autopilot cannot even do something as basic as detect stationary obstacles that are directly in front of the car. It will crash into barriers even if the highway is completely devoid of other cars.
You may consider humans as bad drivers but Tesla's autopilot is even worse than that:
I'm talking about the pitfalls of human perception, and the low-hanging fruit of ways that self-driving systems can potentially outperform humans.
I'm not claiming Tesla's system is currently better than a human, just that there is plenty of potential for a machine to outperform humans perceptually. As it is, Tesla's system isn't exactly the gold standard.
I am not really sure if development of SDVs is really that important, but even if it were, your proposal would only be acceptable if it were you and Mr. Musk racing your Teslas on Tesla's private proving grounds. Somewhere in the Kalahari desert seems to be an acceptable location. The moment people "making a sacrifice" are unsuspecting customers, and eventually innocent bystanders you are veering very much into Dr. Mengele's territory.
Actually, one thing I was curious about regarding this incident -- they say that the authorities had to wait for a team of Tesla's engineers to show up to clean up the burning mess of batteries. Luckily for everyone else trying to get somewhere on 101 that day, Tesla's HQ isn't too far away. What if the next time one drives into a barrier, it happens in the middle of Wyoming? Will the road stay closed until Tesla's engineers can hitch a ride on one of Musk's Falcons?
So we should kill even more people, who had never signed up to be guinea pigs, so that maybe there will be a self-driving car at some point? Which most of those dying in those crashes will not be able to afford anytime soon anyway...
That's assuming the replacement actually is safer, which in the case of Autopilot is not the case now, and not necessarily the case ever. There is a reason Waymo isn't unleashing their stuff onto an unsuspecting public.
> "I am not really sure if development of SDVs is really that important"
For the 1.3 million people killed EVERY YEAR and their loved ones, and for the 20-50 million injured, yeah, it's really that important.
Is it ready today? No. We're in pretty violent agreement on that.
Will we get there? I don't see much reason to doubt that we will, eventually. It may require significant infrastructure changes.
It's pretty clear Waymo/Uber are pushing the envelope too hard, without adequate safeguards, but "only be acceptable if it were you and Mr. Musk...on Tesla's private proving grounds" is probably not pushing the envelope enough.
Even Waymo is "unleashing their stuff onto unsuspecting public" by driving them on public roads - lots of innocent bystanders potentially at risk there.
Both Waymo and even Uber do not pretend that their systems are ready for public use and at least allegedly have people who are paid to take over (granted, in Uber's case it's done as shadily as anything else Uber does). Tesla sells their half-baked stuff to everyone, with marketing that strongly implies that they can do self-driving now, if only not for those pesky validations and regulations. I think there's quite a bit of a difference.
A lot of deaths and injuries on the road happen in countries with bad infrastructure and a rather cavalier attitude toward the rules of the road. Fixing those could save more people sooner than SDVs that they won't be able to afford any time soon. Not to mention that an SDV designed in the first world (well, the Bay Area's roads are closer to third world, but still...) isn't going to work too well when everyone around is driving like a maniac on a dirt road.
Not to say that SDVs wouldn't be neat, when they actually work, but this is a very SV approach: throwing technology at the problem to create an overpriced solution to problems that could be solved much more cheaply, but in a boring way that doesn't involve AI, ML, NN, and whatever other fashionable abbreviations.
IIRC, it was also Volvo who a few years back said that they would gladly take on any liability issues for their self-driving cars. Only to backtrack on that a short while later after having learned what liability laws in the U.S. actually look like, saying that they wouldn't take on such liability until the laws are changed to be more in their favor. So there's that ...
> because that's what people do; make sacrifices to improve the world we live in so that future generations don't have to know the same problem.
Whose lives are we sacrificing? In the case of the Uber crash in Tempe and this Tesla crash in California, the people who died did not volunteer to risk their lives to advance research in autonomous vehicles.
I highly respect individuals who choose to risk their lives to better the world or make progress, like doctors fighting disease in Africa and astronauts going to space, but at the same time, I think this must always be a choice. Otherwise we could justify forcing prisoners to try new drugs as the first stage of clinical trials. Or worse things. Which is why there is extensive vetting before approval for clinical trials is given.
I do think that, once the safety of autonomous vehicles has been proven on a number of testbeds, but before they are ready for deployment, it is justifiable to drive them on public roads. Maybe without safety drivers. But until then, careful consideration should be given to their testing.
Uber should not have been able to run autonomous vehicles with safety drivers who could be allowed to look away from the road for several seconds while the car was moving at >30mph. The car should automatically shut off if it is not clear whether the safety driver is paying attention. And there should be legislation that bans any company that fails to implement basic safeguards like this from testing again for at least a decade, with severe fines. Speeds should probably also be limited to ~30mph for the first few years of testing while the technology is still so immature, as it is today.
Similarly, Tesla should not be allowed to deploy their Autopilot software to consumers before they conduct studies to show that it is reasonably safe. Repeated accidents have shown that Level 1 and Level 2 autonomy, where the car drives autonomously but the driver must be ready to intervene, is a failed model unless the car actively monitors that the driver is paying attention.
Overall, I think justifying the current state of things by saying that people must be sacrificed for this technology to work is ridiculous. Basic safeguards are not being used, and if we require them, maybe autonomous vehicles will take a few years longer to reach deployment, but thousands of lives lost could become tens.
Edit: I read in another comment that the Tesla car at least "alarms at you when you take your hands off the wheel". In that case I think what Tesla is doing is much more reasonable. (Not Uber, though.) Although I still feel like it is going to be hard to react to dangerous situations when the system operates correctly almost all the time (even if you are paying attention and have your hands on the wheel). But I'm not sure what the correct policy should be here, because I don't fully understand why people use this in the first place (since it sounds like Autopilot doesn't save you any work).
In that case Tesla's Autopilot is a red herring. It's not a fully autonomous system. If you're willing to sacrifice human lives, then please sacrifice them on systems that actually have a chance of working. Tesla's Autopilot isn't one of them; it's most likely never going to reduce the fatality rate below that of a sober human, because it's just a simple lane-keeping and cruise control assistant.
Cars should just be phased out in favor of mass transit everywhere.
Yes, you can live without the convenience of your car. No really, you can.
Now think about how you would enable that to happen. What local politicians are you willing to write to, or support, in order to enable a better mass transit option for you? And how would you enable more people to support those local politicians that make that decision?
This is the correct solution, since the AI solution of self-driving cars isn't going to happen. Their high fatality rates are going to remain high.
Yes, you can live without the convenience of your car. No really, you can.
Maybe, but unless you can change the laws of nature, you can't build a mass transit system that can serve everyone full-time with reasonable efficiency and cost-effectiveness, and that's just meeting the minimum requirement of getting from A to B, without getting into all the other downsides of public vs. private transportation in terms of health, privacy, security, etc.
There's no need to make anything up. Mass transit systems are relatively efficient if and only if they are used on routes popular enough to replace enough private vehicles to offset their greater size and operating costs (both physical and financial). That usually means big cities, or major routes in smaller cities at busier times.
Achieving 24/7 mass transit, available with reasonable frequency for journeys over both short and long distances, would certainly require everyone to live in big cities with very high population densities. Here in the UK, we only have a handful of cities with populations of over one million today. That is the sort of scale you're talking about for that sort of transportation system to be at all viable, although an order of magnitude larger would be more practical. All of those cities have long histories and relatively inefficient layouts, which would make it quite difficult to scale them up dramatically without causing other fundamental problems with infrastructure and logistics.
So, in order to solve the problem of providing viable mass transit for everyone to replace their personal vehicles, you would first need to build, starting from scratch or at least from much smaller urban areas, perhaps 20-30 new big cities to house a few tens of millions of people.
You would then need all of those people to move to those new cities. You'd be destroying all of their former communities in the process, of course, and for about 10,000,000 of them, they'd be giving up their entire rural way of life. Also, since no-one could live in rural areas any more, your farming had better be 100% automated, along with any other infrastructure or emergency facilities you need to support your mass transit away from the big cities.
The UK is currently in the middle of a housing crisis, with an acute lack of supply caused by decades of under-investment and failure to build anywhere close to enough new homes. Today, we're lucky if we build 200,000 per year, while the typical demand is for at least 300,000, which means the problem is getting worse every year. The difference between home-owners and those who are renting or otherwise living in supported accommodation is one of the defining inequalities of our generation, with all the tensions and social problems that follow.
But sure, we could get everyone off private transportation and onto mass transit. All we'd have to do is uproot about 3/4 of our population, destroy their communities and in many cases their whole way of life, build new houses at least an order of magnitude faster than we have managed for the last several decades, achieve total automation in our out-of-city farming and other infrastructure, replace infrastructure for an entire nation that has been centuries in development... and then build all these wonderful new mass transit systems, which would still almost inevitably be worse than private transportation in several fundamental ways.
Why so big, though? I lived in a town of 25,000 people in Sweden and did not need a car more than a few weekends per year. There were 5 bus lines for local transport, and long-distance buses and trains ran quite frequently.
And that's not taking into account the fact that a bicycle is a very viable way to get around in cities of fewer than 200,000 inhabitants.
I have actually never owned a car; I just rent one once in a while to go somewhere regular transport doesn't get me. I have lived in Sweden, France and Spain, in 10 cities from 25,000 to 12 million inhabitants. Never felt restricted. I actually feel much more restricted when I drive, because I have to worry about parking, which is horrible in both Paris and Stockholm. Many people I know, even in rural Sweden or France, don't own a car because it is just super costly and the benefit is not worth it. It's very much a generational thing though, because my friends are mostly around 26-32, whereas nearly all the people I know over 35 own a car, even if they don't actually have that much money and sometimes complain about it.
You've almost answered your own question, I think. Providing mass transit on popular routes at peak times is relatively easy. It's more difficult when you need to get someone from A to B that is 100 miles away, and then back again the same day. It's more difficult when you are getting someone from A to B at the start of the evening, but their shift finishes at 4am and then they need to get home again.
To provide a viable transport network, operating full-time with competitive journey times, without making a prohibitive financial loss or being environmentally unfriendly, you need a critical mass of people using each service you run. That generally means you need a high enough population density over a large enough urban area that almost all routes become "main routes" and almost all times become "busy times".
You're right. But you're going to have to change the whole of society to achieve that end - from the law, through planning and building, through entertainment, shopping and all, to farming, ... the whole kaboodle.
I lived car free in a small industrial UK city, we couldn't manage that with kids (too expensive for one).
Bus seats are awful. Why? Because they're made vandal-resistant (and hard-wearing). They're too small for a lot of people now as well. So you need to remodel buses, IMO; you're going to need to be hotter on vandals, so change the approach of the courts. Things bifurcate across areas of society like that: supermarkets, houses, zoning, etc. are all designed with mass car ownership as a central tenet.
This is certainly possible and I would welcome it but this is something that cannot be done overnight. It will take decades to convince politicians and more decades to upgrade the existing infrastructure.
If you’re willing to die for this, then by all means go ahead and sign up to be a dummy on a test track. If you know other people who feel the same way, sign up as a group. If you’re just talking about letting other people die so that someday, maybe we’ll have fully automated cars, that’s monstrous, especially when they’re not volunteers and don’t get to opt out!
A laudable goal doesn't give anyone the right to kill people by taking unnecessary risks. The reason that Tesla and Uber do what they do, the way they do it, instead of taking a more conservative approach, is an attempt to profit, not to save lives. If you don't have to spend lives to make progress, but choose to do so for economic expedience, there's a word for that: evil.
I agree with the psychology aspect of driving. I've seen it mentioned many times that a large majority of auto accidents occur a few minutes from the driver's home, and usually on their way home. Apparently, being close to their neighbourhood and in familiar surroundings, the driver's attention tends to wane as they get distracted by the other things they have to do when they get to their house.
Racing drivers have also reported that when they are not driving at 100%, they are more prone to make mistakes or crash. Most famously, Ayrton Senna's infamous crash at Monaco when he was leading the field by a LONG way. When he was asked why he crashed at a fairly innocuous slow corner, he said that his engineer had asked him over the radio to 'take it easy' as there was no chance he would be challenged for 1st place before the finish line, so he relaxed a fraction and started thinking about the victory celebrations. And crashed.
>I don't even enable cruise control because taking the accelerator input away from me is enough to cause my mind to wander
You're not alone. I find the act of modulating my speed is what keeps me focused on the task of driving safely. Steering alone isn't enough; I can stay in my lane without tracking the vehicles around me or fully comprehending road conditions.
Until a Level 5 autonomous car is ready to drive me point A to point B while I watch a movie I will remain firmly in command of the vehicle.
The problem as always with driving is that you can be as attentive, sober, and cautious as humanly possible... while the guy who jumps the median into your windscreen may not be. We need to be more concerned and proactive about stopping this running experiment with half-assed automation in which we all unwillingly participate. I want level 5 automation just like anyone, but I don't believe it's anywhere close, and I'm not interested in being part of Tesla's or Uber's attempt to get even richer.
Public roads are not laboratories. It’s not just Tesla owners who are participating in this, it’s everyone on the road with them.
Noticed this also: no need to monitor and adjust speed, which is a mundane task (in cruise-control-friendly traffic conditions). Eyes can be on the road instead.
This is similar to the problem for pilots, who can be distracted by mundane tasks due to the complexity of controls in modern aircraft. If these tasks are removed, the pilot can focus on what's more important.
According to NASA: "For the most part, crews handle concurrent task demands efficiently, yet crew preoccupation with one task to the detriment of other tasks is one of the more common forms of error in the cockpit."
I think growing up in a snowy climate where super precise throttle control is critical to not ending up in ditches plays a huge role in why I zone out when using cruise control. The fine motor skill of throttle control occupies the back of my mind while my conscious thoughts rotate through the mirrors, track other vehicles and watch for obstacles. I can maintain a speed within a couple km/hr for a very long time without needing to glance at the speedo at all.
The moment the back of my mind doesn't have to handle precise throttle control, I find my mind wandering and my spatial awareness is shot. I guess maintaining speed is the fidget spinner that keeps me focused on the task of driving.
I totally agree with this sentiment -- the only reason I drive a safe speed is that I use cruise control constantly. Then I don't have to think about speed, and I can focus on everything else.
Adaptive cruise control is a bit annoying in traffic, as the safety buffer causes cars who don’t care about tailgating to easily move into the gap causing my speed to jolt around a lot (and eventually get stuck behind some slow moving vehicle). It just doesn’t work well in heavy two lane traffic I guess.
> This needs far more discussion. I just don't buy it. I don't believe that you can have a car engaged in auto-drive mode and remain attentive. I think our psychology won't allow it.
Does anyone know of psychology studies that measure human reaction time and skill when something like autopilot is engaged most of the time? I remember taking part in a similar study at Georgia Tech that involved firing at a target using a joystick. It was also simultaneously a contest, because only the top scorer would get prize money. The study was conducted in two parts. In the first phase, the system had auto-targeting engaged. All subjects had to do was press a button when the reticle was on the target in order to score. In the second phase, which was a surprise, auto-targeting was turned off. I won the contest and my score was miles ahead of anyone else's. I can't fully confirm it, but I feel this happened because I was still actively aiming for the target even when auto-targeting was active.
Does anyone know of psychology studies that measure human reaction time and skill when something like autopilot is engaged most of the time?
Yes. That's been much studied in the aviation community.[1] NASA has the Multi-Attribute Test Battery to explicitly study this.[2] It runs on Windows with a joystick, and is available from NASA. The person being tested has several tasks, one of which is simply to keep a marker on target with the joystick as the marker drifts. This simulates the most basic flying task - flying straight and level. This task can be put on "autopilot", and when the marker drifts, the "autopilot" will simulate moving the joystick to correct the position.
But sometimes the "autopilot" fails, and the marker starts drifting. The person being tested is supposed to notice this and take over. How long that takes is measured. That's exactly the situation which applies with Tesla's "autopilot".
There are many studies using MATB. See the references. This is well explored territory in aviation.
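For anyone who wants an intuition for the paradigm without the actual MATB software, here is a toy simulation of the same idea: a marker drifts, a simulated "autopilot" normally corrects it, silently fails partway through, and we measure how long until a very crude model of the operator notices the drift and takes over. Every parameter (drift rate, detection threshold, reaction lag) is made up for illustration.

    # Toy model of the MATB-style tracking task: measure takeover delay after a silent autopilot failure.
    # All parameters are invented for illustration; this is not the NASA software.
    import random

    def run_trial(steps=1000, failure_step=500, dt=0.05,
                  detect_threshold=0.3, human_lag_steps=8):
        pos, detected_at, takeover_at = 0.0, None, None
        for t in range(steps):
            pos += random.gauss(0.0, 0.02)                  # the marker drifts randomly
            if t < failure_step:
                pos *= 0.8                                  # "autopilot" keeps pulling it to center
            else:                                           # autopilot has silently failed
                if detected_at is None and abs(pos) > detect_threshold:
                    detected_at = t                         # operator only notices a visible error...
                    takeover_at = t + human_lag_steps       # ...then needs time to respond
                if takeover_at is not None and t >= takeover_at:
                    pos *= 0.8                              # operator has taken over and corrects
        return None if takeover_at is None else (takeover_at - failure_step) * dt

    delays = [d for d in (run_trial() for _ in range(200)) if d is not None]
    print("mean simulated takeover delay: %.1f s" % (sum(delays) / len(delays)))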
As a user of autopilot in aviation context, I do remain engaged and connected to the flight while the autopilot handles the routine course following and altitude hold/tracking responsibilities.
I don’t find that particularly challenging and in fact, when the autopilot is INOP, flights are slightly more mentally fatiguing because you have no offload and complex arrivals are much more work, but in cruise, you have to be paying attention either way. It’s not a time to read the newspaper, autopilot or not.
Quite a few years ago I had to drive for 6 hours straight at night to get to a place where I needed to be on time (flying was not an option for me back then).
What I noticed was that when I was following the posted/safe speed limit, I quickly lost focus, my mind started wandering, and eventually I felt I was falling asleep.
I do not remember what made me speed up, but once I was going about 30% faster than the posted speed limit, and once I reached a part of the route where the road was quite bad and a lot of road work was happening, I realized I was much more alert.
As soon as I slowed down to the posted speed limit, I began drifting away again.
If anything, my anecdote confirms your theory: as soon as we perceive something as safer, we pay far less attention. And Autopilot sounds like one of these safety things that makes drivers less attentive and prone to missing dangerous situations which would otherwise be caught by the driver's mind.
I wonder if there is a way to introduce autopilot-style help without actually giving the driver a sense of security. Granted, Tesla would lose a precious marketing angle, but if their autopilot worked somewhat like a variable power steering system in the background, without obviously taking over control of the car, wouldn't that be more beneficial in the long haul?
Assuming you were in a vehicle with an ICE, speed relates to vibration/noise, which can, at the wrong pitch, easily cause drowsiness. This is why parents will take their children out in the car if a child is not sleeping well.
I find rough motorway surfaces in my current vehicle induce heavy drowsiness at motorway speed limits (slightly reduced at marginally higher speeds, when the pitch is higher).
Your belief is meaningless, we have hard data that shows a net benefit to these systems.
It's not a question of zero deaths; it's a question of reducing the number, which means you need to look beyond individual events. Remember that the ~90 people who died in US car accidents yesterday without making the news are far more important than a few individuals.
>> Your belief is meaningless, we have hard data that shows a net benefit to these systems.
No we don't. Tesla likes to compare their deaths per mile to the national average. The problem is that their autopilot is not fit to drive everywhere or in all conditions that go into that average. There is no data to support that autopilot is safer overall. It may not even be safer in highway conditions given that we've seen it broadside a semi and now deviate from the lane into a barrier - both in normal to good conditions.
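To make the base-rate problem concrete, here is a purely illustrative calculation. The fleet-wide rate is roughly the right order of magnitude for the US, but the "highways in good weather are 3x safer" factor is invented just to show the shape of the argument, not a real statistic:

    # Illustrative only: made-up safety factor to show why "matches the national average" can still be worse.
    overall_rate = 1.2            # approx. US fatalities per 100 million vehicle-miles, all roads/conditions
    highway_factor = 3.0          # HYPOTHETICAL: divided highways in good weather taken as 3x safer than average
    human_highway_rate = overall_rate / highway_factor          # what humans achieve in Autopilot-like conditions

    system_rate = 1.2             # a system that merely matches the overall national average
    print(system_rate / human_highway_rate)                     # -> 3.0: worse than humans in its own domain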
Specific failures are, again, meaningless. Computers don't fail the way people do; on the other hand, people also regularly fall asleep at the wheel and do similarly dumb things.
And really, driving conditions are responsible for a relatively small percentage of vehicle fatalities. Most often it's people doing really dumb things like driving 100+ MPH.
The only thing we actually know is that these cars are safer on average than similar cars without these systems. That's not looking at how much the systems are used, just their existence, and it likely relates to them being used when drivers are extremely drunk or tired, both of which are extremely dangerous independent of weather conditions.
So how about we adopt much cheaper and simpler solutions like drowsiness detection (Volvos have this), automatic emergency braking (I think every brand has this as an option now), breathalyzer locks, speed limiters, etc.?
The US just mandated all new cars have backup cameras, but it seems like mandating AEB would make a bigger difference.
Your belief is meaningless, we have hard data that shows a net benefit to these systems.
What do you know that the rest of us don't? The only statistics I've seen on anyone's self-driving cars so far would barely support a hypothesis that they are as capable as an average driver in an average car while operating under highly favourable conditions.
>I don't believe that you can have a car engaged in auto-drive mode and remain attentive
I've been saying this for a while and it's interesting to see more people evolve to this point of view. There was a time when this idea was unpopular here -- owing mostly to people claiming that autonomous cars are still safer than humans, so the risks were acceptable. I think there are philosophical and moral reasons why this is not good enough, but that goes off-topic a bit.
In any case, some automakers have now embraced the Level-5 only approach and I sincerely believe that goal will not be achieved until either:
1. We achieve AGI or
2. Our roads are inspected and standards are set to certify them for autonomous vehicles (e.g. lane marking requirements, temporary construction changes, etc.)
Otherwise, I don't believe we can simply unleash autonomous vehicles on any road in any conditions and expect them to perform perfectly. I also believe it's impossible to test for every scenario. The recent dashcam videos have convinced me further of this [0].
The fact that there are "known-unknowns" in the absence of complete test-ability is one major reason that "better than humans" is not an ethical standard. We simply can't release vehicles to the open roads when we know there are any situations in which a human would outperform them in potentially life-saving ways.
I suppose it's perhaps how they marketed the feature. We have a parking-assist feature in many cars; there's a reason it's not called auto-park. If it really were a feature to help attentive drivers avoid accidents, it probably would have been called driving-assist tech and not Autopilot.
I agree, I find the same of myself, and I beat myself up over any loss of concentration.
The solution might be a system where the driver drives at a higher level of abstraction, but ultimately still drives.
Driving should be like declarative programming.
For example, the driver still moves the steering wheel left and right, but the car handles the turn.
Or when the driver hits the brakes, which is now more of an on/off switch, the car handles the braking.
The role for the driver is to remain engaged, indicating their intention at every moment and for the car to work out the details.
Edit: On second thought, that might end up being worse. I can think of situations where it might become ambiguous to the driver of what they are handling and what the car is handling. Maybe autopilot is all or nothing.
> When driving, I find that I must be engaged and on long trips I don't even enable cruise control because taking the accelerator input away from me is enough to cause my mind to wander.
Glad I'm not the only one doing it. When driving on a highway I increase or decrease my car's speed by 10-15 km/h every 10 minutes or so, so that the variation helps me stay attentive to my surroundings.
This comparison really only works if you mean engaging autopilot with a bunch of other planes flying in close formation and if clouds were made of steel and concrete. Most of the time that autopilot is engaged on a flight there is next to zero risk of a collision. Commercial pilots also get a lot of training specifically related to what autopilot is and isn’t appropriate for.
Even as a private pilot you are taught to continuously scan your instruments and surroundings as you fly through the air with autopilot enabled. Flight training is much more extensive than driving tests (although still not too bad) and they really drive procedures into you.
Which is why pilots are still in control of the plane while autopilot is active. Even while it's active, there is still a "pilot flying", and pilots are still responsible for scanning gauges and readouts and verifying the autopilot is doing what they expect. They do not just turn on autopilot and goof off.
Just like Tesla's auto pilot? Drivers are supposed to be "flying", scanning gauges and the road ahead to ensure the autopilot is doing what they expect...
I think people that say that "autopilot" is a bad name for this feature don't really understand what an "autopilot" does.
The text at the top of the homepage for Tesla Autopilot is this:
> Full Self-Driving Hardware on All Cars
> All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
Whatever theory you have for Tesla's naming of their feature, it doesn't match with their marketing.
You are reading into that text more than you should. Autopilot and full self-driving are two separate features. Autopilot can be used today. Full self-driving can be purchased today but won't be activated until some unknown future date. Those features are separate configurable options when purchasing the car that come with their own prices. Tesla makes it clear enough that any owner should know those are two separate features. The text you highlighted is simply promising that any car purchased today can activate full self-driving down the line with no additional hardware costs.
> You are reading into that text more than you should.
The page's title is "Autopilot | Tesla". It is the first result for "tesla autopilot" in search results. And "autopilot" appears 9 times on the page. So if that's not an intentional attempt to mislead consumers into conflating Autopilot with "full self-driving", then what would such an attempt look like, hypothetically?
Is it crazy to hold a driver to a higher standard than simply Googling "Tesla autopilot" and only reading the first paragraph of the first result? If you read that entire page, the difference between autopilot and full self-driving is clear. If you read the car's manual, the difference is clear. If you look at the configurator for the car, the difference is clear when you have to pay $3,000 extra for full self-driving. I am not sure how any responsible Tesla owner could think that this is only a single feature.
> Is it crazy to hold a driver to a higher standard than simply Googling "Tesla autopilot" and only reading the first paragraph of the first result?
For this standard to mean anything, it would have to apply to every driver. Should drivers who never google "Tesla autopilot", let alone those who do but stop reading at the section on that feature, be punished with death in a two-ton metal trap?
I really don't see how this is different from other features of a car like cruise control. It is up to the driver to educate themselves about cruise control. It was not part of my driver education class. There were no questions about it during the tests to get my license. I didn't learn how it worked until I was in my 20s, when I first owned a car that had cruise control, and I learned by reading that car's manual. I don't think anyone would have blamed the manufacturer if I killed myself because I didn't understand how cruise control worked or used it improperly.
It isn’t crazy to ask that, but I think it is crazy to view failing to create two pages as an intentional attempt to deceive or as something that absolves drivers of their own responsibilities.
If autopilot crashes at a lower rate than the average driver, they are correct, but autopilot+attentive driver would still be better than either alone.
The current thread is about what Tesla means by use of "autopilot". The parent commenter was telling us that Tesla only intends for it to have the same meaning as it does in aviation. My response is pointing out how Tesla seems to imply "autopilot" involves "full self-driving".
At what point is an attentive driver expected to notice that autopilot has silently failed? The video linked at the top of the thread has a <1 second interval between the car failing to follow its lane, and plowing into a concrete barrier.
This is actually harder to do than just driving the car.
The fatal impact may not be seconds away but the event that sets in motion the series of actions that results in that fatal impact may take only seconds.
The issue is how long between the problem first manifesting itself and a crash becoming inevitable. Note that even as short a time as ten seconds is an order of magnitude longer than one second. There are at most only a few rare corner cases in aviation where proper use of the autopilot could take the airplane within a minute of an irretrievable situation that would not have occurred under manual control.
I'm not a pilot so I do not know the most dangerous situations when flying under autopilot. What I was trying to emphasize is that even under autopilot airplanes require constant attention. My understanding is that if the pilot or co-pilot leaves the cockpit for any reason the remaining pilot puts on an oxygen mask in case of decompression because the time frame before blackout is so tiny. The point is that autopilot in aviation is a tool that can be employed by pilots but cannot function safely on its own. From this viewpoint Tesla's Autopilot is accurately named although the public does not have the same understanding.
There are a lot of things in aviation that are done out of an abundance of caution (and rightly so) rather than because flights are routinely on the edge of disaster. Depressurization is not an autopilot issue, and putting on a mask is not the same as constant vigilance. Even when not using autopilot, pilots in cruise may be attending to matters like navigation and systems management that would be extremely dangerous if performed while driving.
Personally, I do not think calling Tesla's system 'autopilot' is the issue, but your claim that it is accurate is based on misunderstandings about the use of autopilots in aviation. It is not the case that their proper use puts airplanes on the edge of disaster were it not for the constant vigilance of the pilots.
If the pilots are not actually flying, the plane can be just a short time away from a crash, for instance when a pilot is not paying attention and, by the time the autopilot can no longer fly, doesn't have enough situational awareness to take over.
That is very much an outlier, and if it were at all relevant to the issue it would further weaken your case, as these three pilots had several minutes to sort things out. Questioning the assumptions underlying the assumed safety of airplane autopilot use can only weaken the claim that Tesla's 'autopilot' is safe.
This isn't a debate about dictionary definitions, it's a debate about human behavior.
People who say that "Autopilot" is a bad name for this feature aren't basing it on an imperfect understanding of what autopilot does in airplanes. They're basing it on how they believe people in general will interpret the term.
So you’re saying that Tesla drivers are only educated by marketing materials and ignore what the car says every time they engage the autopilot feature?
They are saying that Tesla drivers are not superhumans, just average, everyday, garden-variety human beings...
The funny thing is that the same people who argue for self-driving tech by saying "humans will do dumb shit" are the ones who justify Tesla by saying "humans should not do stupid things (like ignoring the car's warnings)"...
They aren't going to literally fall asleep, but much of the time pilots are reading a book and not directly paying attention in the same way that a driver is.
Planes don't suddenly crash into a mountain because you have looked away for two seconds; there is much more time to react in case the autopilot doesn't behave correctly.
They can very suddenly crash into another plane because you have looked away for two seconds. It has actually happened (though nowadays, technology has made it easier to avoid these accidents).
Planes are supposed to maintain a 5 mile separation distance. You aren't going to close that gap in two seconds. (Head on, with both planes traveling 600 MPH, you can do it in 15 seconds, but both pilots would have to be inattentive for that whole time.)
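A quick back-of-the-envelope check of that head-on figure, using only the 5 mile separation and 600 MPH numbers assumed above:

    # Closure time for two aircraft meeting head-on, using the figures above.
    separation_miles = 5.0
    closing_speed_mph = 2 * 600.0                      # head-on closure rate
    seconds_to_contact = separation_miles / closing_speed_mph * 3600
    print(seconds_to_contact)                          # 15.0 seconds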
They are supposed to, but if flight control doesn't help them maintain that separation, pilots have very little time to react to whatever they meet. This was demonstrated in the Hughes Airwest collision with an F-4 in 1971.
Since then, air traffic control procedures have been improved to avoid these situations, but nowadays e.g. over the Baltic Sea, Russian military planes are routinely flying with their transponder turned off so that flight control does not know where they are. So, this risk is still there.
I don't buy it either, and the airline business has lots of history to show that it is bunk. Autopilot must either be better than humans or not be present. I'm sure the car autopilot engineers have learned much from airplanes. And I'm pretty sure that Tesla management has overridden the engineers' concerns because they'd rather move fast and break things.
If a person uses a device while using autopilot (which seems highly likely, though I'm not sure in this instance), wouldn't it be advisable to have the alerts come to them directly on whichever device they are using? The alert would break them out of whatever task they are focusing on. If the alert is coming from the car, I can see how a lot of us could ignore it.
What a world. Where we can't even take enough responsibility to be present enough to hear the "You are about to die" bell chime.
Imagine all the possible points of failure in that chain, too. Bluetooth can fail, satellite can fail, cell can fail, WiFi can fail, a USB cable can fail; there isn't a single piece of connectivity technology that would make me confident enough to delegate alerts to another device.
There is also an inherent failure of alarms in general, in that even very loud ones can be ignored if they give false positives even once or twice. There is a body of study trying to address this. Some of the deadliest industrial accidents occurred because alarms were either ignored or switched off entirely. We aren't good with alarms.
I think the meat of it, though, is that unless Autopilot works perfectly, you can't leave it alone. And if you can't leave it alone, then what's the point?
The sell for autonomous cars isn't that people are just so darn tired of turning the steering wheel that they would really rather not. It's that we could potentially be more productive if we could shift our full attention to work/study/relaxation while commuting.
It seems like we are in an "in-between" state where we are using humans to assuage the fears of people who aren't sure neural networks can drive better than humans. The goal is eventually to be able to focus on something else. If it's just about making driving safer, I would think of it as an incremental innovation step, versus the breakthrough of being able to do something else while being driven entirely by a neural-network-controlled vehicle. The bridge to that breakthrough hopefully isn't hacked apart by naysayers. Every death should be met with empathy and a desire to strengthen this bridge and quicken the crossing.
I try to tell people that if you cannot take a nap behind the wheel of a self-driving car, then the automaker has failed to produce a self driving car. If you have to be attentive behind the wheel of a self driving car, then you might as well just steer it yourself.
The general sentiment is correct but the wording implies some degree of acceptance of liability: Autopilot was engaged... took no evasive action... engineers are investigating... failed to detect.
They should have issued a single, very simple statement that they are investigating the crash and that any resulting improvements would be distributed to all Tesla vehicles, so that such accidents can no longer happen even when drivers are not paying sufficient attention to the road and ignore Autopilot warnings. Then double down on the idea that Autopilot is already safer than manual driving when properly supervised and that it constantly improves.
The specifics of the accident, the victim blaming, and whether or not the driver had his hands on the wheel or was aware of Autopilot's problems are things that should be discussed by lawyers behind closed doors. And of course, deny it media attention and kill it in the preliminary investigation, which I imagine they will have no problem doing; he drove straight into a steel barrier, for God's sake.
Doesn't help that fucking Elon Musk and half of Silicon Valley keep saying AI technology will solve all driving problems, when they should know full well that autonomous cars are never going to happen without structural changes to the roads themselves.
Silicon Valley needs to stop trying to make autonomous cars happen.
What structural changes? Considering that self-driving cars are already running daily in many cities, those changes must be fairly minor since they've already been implemented in those cities.
Volvo's messaging is less reckless than Tesla's. They call their similar feature "Pilot Assist." It has also always been stricter about trying to make sure the driver is engaged when it's enabled. As a Volvo owner, I'll admit I find it annoying at times, but I think it's also helped drill into me that I shouldn't trust Pilot Assist not to drive me into a barrier. It's amazing at keeping me in my lane when I'm fiddling with my podcast feed though.
>It's amazing at keeping me in my lane when I'm fiddling with my podcast feed though.
I hate to be sanctimonious at people online, but this is how people get killed. Is it not illegal to do this where you live? In the UK you'd be halfway to losing your license if you got caught touching a phone while driving (and lose it instantly if within the first two years of becoming a qualified driver).
You just said it yourself, you can't trust it, so don't play with your phone while driving, lane assist or not.
He didn't mention a phone, however... it's 2018; people who can afford that type of car must have podcast-friendly onboard entertainment. Which is as dangerous as fiddling with the radio, but not illegal.
I have a Volvo as well and it is annoying when the dashboard goes nuts warning of impending doom when experience tells me that the vehicle/obstacle ahead isn't actually an issue. That said, it has saved my bacon at least once at an unfamiliar highway exit in rush hour when traffic went from 40mph to a dead stop almost immediately.
Lane assist has also led me to be much better about always signaling lane changes lest the steering try to fight me.
> It's amazing at keeping me in my lane when I'm fiddling with my podcast feed though.
That's exactly what the Tesla driver must have thought too, right up until the autopilot steered directly into a barrier. Volvo's system may be better, but any lapse in attention can lead to the type of crash we are discussing.
I wonder if Tesla decided they want to own the term 'autopilot' at a short-term expense, forcing other manufacturers to use less obvious names down the track. Because it seems strange they would stick with a term that could encourage lawsuits and frivolous behaviour by drivers.
I see this argument so often. Autopilot has never been a term for autonomous, just as it isn't in aviation. Just because people don't know the proper term or have an erroneous idea of the term doesn't mean Tesla has to bear the burden of people misinterpreting what it says.
See, I'm not sure if you know this... but most people are not pilots. (Disclaimer: I'm not only a programmer, but also hold an A&P and avionics license, as well as a few engine ratings.)
It is ABSOLUTELY on a manufacturer to make sure their potentially life-ending feature is not named in a way that can confuse the target audience. You know. NON-PILOT car drivers.
Arguing with Tesla/Musk fanatics now is like arguing with Facebook/Zuck fanatics ten years ago, or saying that Google was bad news in their "don't be evil" days. You're right, but only time and loads of evidence will convince some people that what they desperately want to believe isn't true.
Of course “Autopilot” is intended to evoke the common meaning as a marketing tool, and not the nuanced, highly technical meaning understood by pilots. Understand though, that when someone argues against that point the pedantry is just a proxy for their fanaticism, and until the fanaticism dies, the excuses will be generated de novo. You’re bringing reason and logic to an emotional fight.
I would really prefer not being called a fanatic just because I believe that the term "autopilot" isn't a proxy for "autonomous". I've always understood that autonomy is a goal of Tesla's, but they have always said their system is limited and that it requires vigilance.
Forgive me, but I've never seen any plane where, once in autopilot, the pilot(s) are not checking and observing the condition of the plane and making sure everything is alright.
And yet, I don't recall ever in any documentary or so, having seen the pilots get up and leave once the autopilot is on? They have humongous checklists to parse, do they not?
I want you to go on Wikipedia (is that not mainstream enough?) and search for the term Autopilot. Read its ACTUAL definition and come back, please.
> Autopilot has never been a term for autonomous, just as it isn't in aviation.
Autopilots used in modern commercial airplanes are autonomous. You don't have to watch them; they will do their job. The airplane is either controlled by the pilots or by the autopilot. There is a protocol to transfer control between pilots and the autopilot, so that it is clear who is in charge of controlling the plane (there's even a protocol to transfer control between the two pilots).
The autopilot will signal when it is no longer able to control the plane (because of, e.g., technical faults in the sensors).
Yes, there are also autopilots in smaller airplanes which are more or less just a cruise control. But everything in between, where it is unclear who is doing what or where the limits of the capabilities are, has been scrapped because people died.
> doesn't mean Tesla has to bear the burden of people misinterpreting what it says.
Because Tesla is oh-so-clear in stating what their autopilot is and isn't able to do.
Do you believe Tesla bears the burden of maintaining its own homepage? tesla.com/autopilot currently has "Full Self-Driving Hardware on All Cars" as its top headline and has had it for a while now.
Well, they do have the hardware, that isn't the issue.
The cars simply lack the software to enable a fully autonomous vehicle. The phrasing indicates that if/when the software becomes available, the car would be theoretically capable of driving itself.
It's just a typical misleading marketing blurb; nothing more.
They don't actually know that they have hardware for full autonomy until they have a fully working hardware/software autonomy system; what they have is hardware that they hope will support full autonomy, and a willingness to present hopes as facts for marketing purposes.
Yes, it is the issue because no one has achieved full self-driving yet so Tesla simply has no idea what hardware may be required to achieve that level of functionality in real-world situations.
But that wasn't the line of argument I was making. The parent commenter said this about people misunderstanding the term "autopilot":
> Just because people don't know the proper term or have an erroneous idea of the term doesn't mean Tesla has to bear the burden of people misinterpreting what it says.
Seems like people might be mistaken because the phrase "Full Self-Driving" is literally the first thing on the official Tesla Autopilot page.
In a sense though, without the software the hardware isn't self-driving, at least enough to be misleading. If you saw "Full Voice-Recognition Hardware on All Computers", you would expect it to actually recognise voices, not just come with a microphone.
The problem is that Tesla creates the misinterpretation by explicitly stating that Autopilot is a self-driving system rather than a driver-assist system.
But check out my Full Self Driving AP2 hardware, driving coast to coast! You can even ask your car to earn you money on Tesla Network! Tesla Autopilot twice as safe as humans in 2016! Sentient AI will kill humans!
You seem to understand the difference. Everyone in this thread seems to understand the difference. So why should anybody believe a failure to understand the term is a problem?
What a stark difference in tone this kind of statement would have made. Tesla's statement reeks of a desire to protect themselves from liability or a potential lawsuit. It's very sad to see them adopting such language in the face of such a tragedy.
Not Tesla, but they can afford to pay someone to write a thoughtful and sympathetic response to a tragedy in a way that also protects them from a lawsuit.
This sample statement makes it very clear that the user was misusing autopilot and trusting it beyond its intended function, but also shows sympathy for the family's situation.
They say they can't possibly afford to have cars download the maps over the wifi at service stations, yet they don't see any issue with leeching off Starbucks' free wifi. How anyone could have written that e-mail with a straight face is beyond me.
The problem with releasing statements like this is they can be used against Tesla in court. Anything that seems like an admission of guilt or responsibility will be used against them, which is why we see so little of it.
>Doesn't your statement admit that Tesla is at least partially at fault? Something their lawyers would probably never allow.
IANAL so take with a grain of salt. I once talked to a lawyer who used to work for a big hospital and handled the malpractice lawsuits against them. Three takeaways from the discussion:
1. Implying that a possibility exists that the hospital was at fault has no legal ramifications whatsoever.
2. The studies show an apology and admission has a significant impact on the amount paid to the patient if there is a settlement (in favor of the hospital).
3. Despite knowing 1 and 2, he and other lawyers advise their clients to deny wrongdoing all the way to the end.
Trust is the final layer of competition, trust comes with accountability, taking responsibility. If Tesla et al don't have this fear driving them to do right, then they will lose our respect and our hearts.
How should one deploy such a product at all?
Actual usage is really the only way anyone will know if the models are trained appropriately to handle most/all situations they will encounter in the real world.
> Actual usage is really the only way anyone will know if the models are trained appropriately to handle most/all situations they will encounter in the real world.
Because testing stuff before throwing it on the market isn't a thing anymore?
Surely you do not assume that Tesla had done no testing at all before selling these things.
I forewent commenting on their pre-market testing because I assumed that flokie already knew that the cars and ML models they use had been extensively tested on tracks and in simulation before the first Tesla was allowed on California roads.
And they would have been complete idiots had they not done such testing; no investors would have funded that.
Reactions like flokie's were completely predictable the moment driver assistance techniques were thought of. The only acceptable response a company can have to such criticism is "we have tested this extensively and it is safer than driving manually".
Market forces aside, no car drives on roads in any US state without extensive testing and certification. All of the companies testing self-driving technology had to get special permits to do so.
"And furthermore, here is why we don't see fit to stop using the name Autopilot, even as we recognize that false sense of security it implies to the average person: ..."
You don't think there's an implied difference between Autopilot and a name like Driver Assist? Even Co-pilot would be a better name, as it implies an expectation that the driver is still ultimately responsible.
I'm asking for data. The data doesn't care what you and I think. And the data will reflect what Tesla drivers think, as opposed to people who've only heard Tesla's marketing. Tesla drivers get reminded about the limitations of the autopilot system each time they enable it.
The literally hundreds of posts on HN about it being a problem are data that it is. And that's a tech-savvy crowd that theoretically would know the limitations of the system.
There are literally hundreds of posts on HN claiming that it's a problem. That's data for it being a commonly-held belief, not data for it being a problem with actual Tesla drivers.
The only posts I saw on HN are claiming that it is a problem. If anything, that's evidence that it's not actually a problem, because people are recognising that it _might_ be misleading, rather than _being_ misled.
Tesla has world-class PR, let's assume they're saying the right thing. What circumstances might they be facing to which this message is a correct response?
I personally like your style and tone. At a glance I would only replace his first name with Mr. to increase respect and depersonalize it.
I think it'd be a tough sell to get the blessing of Tesla's legal team. Given Musk's position he could override that, of course, but it could still reduce the likelihood of it going out.
Overall as much as I prefer it, I think most companies wouldn’t release something this direct and honest. Although that’s changing at some companies as they find that the goodwill built through lack of bullshit can sometimes outweigh distasteful liability defense techniques.
They’re really digging themselves a hole. Can you imagine explaining to a jury that your “autopilot” isn’t “fully-autonomous” and trying to justify that as not misleading by pointing to “autopilots in aviation?”
It's called a 'brand name' and people are surrounded by them. I can't eat my Apple MacBook, despite the fact it is misleadingly branded as fruit. Everyone in this thread seems perfectly capable of understanding that 'autopilot' doesn't mean 'fully autonomous', and they got to that understanding through the fairly mundane and routine method of thinking about it.
It seems like Tesla is careful not to dispel the illusion that it can. Like how alcohol does not get you women, but all the adverts deftly imply it can, without actually saying so.
Great copywriting. It seems in part this is a user interface issue in that the limitations of the system are not clearly enough surfaced so people’s expectations are off-base.
"from an event like this" is an important part of the statement.
Pretend for a moment that the occurrence of the accident was a foregone conclusion. In such a scenario, the most positive thing that could be done would be to use the information to save the lives of others.
There's a smidgen of humility in the phrasing; that the accident might have been avoidable with the information that has been gained as a result of the accident. Of course, I presume that sort of sentiment would fall squarely under the "admission of guilt" umbrella that prevents companies from saying things like this.
I think of it this way:
Most car manufacturers are releasing their products on the public year after year, knowing full well that a decent percentage of people that drive away from the dealership will be killed by the thing they just bought.
Tesla is merely trying to take the next step in reducing that percentage.
Their strategy is sound and we so far have not come up with any alternatives that stand a remote chance of improving safety as much as self-driving. Even if they are largely unsuccessful, they are indeed trying to ensure the safety of the public.
> Most car manufacturers are releasing their products on the public year after year, knowing full well that a decent percentage of people that drive away from the dealership will be killed by the thing they just bought.
Nope, and that's the whole difference. They will be killed by their own actions, their choices, their inattention, or those of other drivers. They won't be killed by the machine.
With autopilot / pseudo-autopilot, they will be killed by the machine.
It is a huge difference, both in terms of regulations of people transportation safety, and in terms of human psychology, which makes a big difference between being in control and not being in control of a situation.
I agree that it is a psychological difference, but our behavior as a society suggests a recognition that the machine and its makers have a role in whether or not people die in cars.
This is why we demand safer cars and sue car makers when their designs fail or do not meet our expectations in protecting the occupants.
I can agree with the notion that the machine killed the person in all cases where the machine does not include any controls for the person.
As a society, we currently recognize that the causes of accidents and the probability of occupant death depend on multiple factors, one of which is the car and its safety features (or lack thereof). https://en.wikipedia.org/wiki/Traffic_collision#Causes
We also already have a significant level of automation in almost all cars, yet we are rarely tempted to say that having cruise-control, automatic transmissions, electronic throttle control, or computer controlled fuel injection means we are not in control and therefore the machine is totally at fault in every accident.
Operating a car was much harder to get right before these things existed and the difference can still be observed in comparison to small aircraft operations.
Then and now we still blame some accidents on "driver/pilot error" while others are blamed on "engine failure", "structural failure", or "environmental factors".
I think having steering assistance or even true autopilot will not change this. In airplanes, the pilots have to know when and how to use an autopilot if the plane has one.
If the pilot switches on the autopilot and it tries to crash the plane, the pilot is expected to override and re-program it, failure to do so would be considered pilot error.
Similarly, drivers will have to know when and how to use cruise control/steering assist and should be expected to override it when it doesn't do the right thing.
Stalin probably didn't say it and the quote is usually used to highlight how the human brain cannot comprehend the devastation of a hundred deaths while it can easily feel grief for one death.
It'd be more like releasing a clone of Dropbox called "Dropbox for Backing Up Files" that made your files public and gave your credit card info to Russian hackers.
Sure it might look neat to someone on the outside but it wouldn't take long to see it's nothing like the real thing made by someone who knows what they're doing.
I used to be in awe of Teslas and I have always been a big fan of Musk, and even after the crash I still imagined myself one day buying a Tesla. But after this response, I've completely lost respect for the company, for Musk, and I don't think I'll ever buy one or advocate for someone buying one.
The driver had previously reported to Tesla 7 to 10 times that there was an issue with autopilot, but Tesla told him "no, there is no issue". There is also video evidence of this same issue happening in other parts of the world with similar road conditions. But again, Tesla's response has been "there is no issue."
And now their response is "the driver knew there was an issue but he used autopilot anyway"? Seriously? Either there is an issue or there isn't, Tesla. At first you said there is no issue, and now you're saying there is an issue? And as the cherry on top, you're blaming the driver for continuing to use the feature even after you told him repeatedly that it's okay to use?
> The driver had previously reported to Tesla 7 to 10 times
I understand this is not a popular opinion on news forums anywhere, nor do I write this to absolve Tesla, in my view, of their poor response or of their poor choice in naming the system 'autopilot' to start with.
But at what point does corporate or government responsibility end and personal responsibility start? If I knew something didn't work and that something could kill me, and had reported the issue 7 to 10 times, I would be watching like a hawk for the issue to recur, or more likely just not using it at all.
People have this insane idea that just because something isn't their fault they don't have to take corrective action in order to avoid the consequences. It is better to be alive than to be correct.
In the US, we have laws that require products to work as advertised. If they don't, generally strict product liability applies to the manufacturer, seller, and all middlemen (in this case, they're all Tesla), regardless of whether the customer-victim was misusing the product.
In this case, there is no evidence that the driver was misusing the car. But there is a carload of evidence that Autopilot failed.
At this point, the only thing Tesla is accomplishing with these public statements is adding more zeros to the eventual settlement/damage award.
> In the US, we have laws that require products to work as advertised.
And if they don't, what's the first thing you, as the customer, do? You stop using the product. If autopilot isn't working the way you think it should, you stop using autopilot. This driver did not do that. What kind of sense does that make?
That's a very naive comment. What if there's no emergency lane? Should it just crash towards whatever is on the right side of the car? :)
Tesla's autopilot already slows down and then stops if the user doesn't touch the steering wheel for a period of time, but it gives plenty of warning to the user before that happens, as it should.
Inconveniencing the driver and making it a learning experience vs crashing into concrete and killing him
Well, maybe you are the naive one. Machinery shouldn't rely on warnings alone; we already learned those lessons in industrial automation settings.
The prevailing mentality of "just user error, bro" seen on Hacker News is what scares me the most. That is not how you build safe machines, far from it, and at this point my only hope of not being killed by an autonomous car in the next 30 years is for the regulatory hammer to come crashing down full force on the wanton engineers who are treating loss of life as user error.
Teslas already detect whether people have their hands on the wheel, for starters. Then you can fairly easily see where the eyes are looking and, I guess with some more effort, whether the eyes are engaged or wandering.
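A minimal sketch of how those two signals could be combined; this is purely illustrative, and the thresholds, names, and escalation steps are invented rather than Tesla's actual logic:

    # Hypothetical driver-attention check combining wheel torque and gaze.
    # Every threshold and state name here is made up for illustration.
    def attention_state(seconds_since_wheel_torque, seconds_gaze_off_road):
        if seconds_since_wheel_torque < 10 and seconds_gaze_off_road < 2:
            return "ok"
        if seconds_since_wheel_torque < 30 and seconds_gaze_off_road < 5:
            return "visual_warning"      # nag on the instrument cluster
        if seconds_since_wheel_torque < 60:
            return "audible_warning"     # chime and require wheel torque
        return "slow_to_stop"            # disengage gracefully, hazards on

    print(attention_state(5, 1))     # "ok"
    print(attention_state(45, 8))    # "audible_warning"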
I agree completely with you and also with grandparent. I don't think these perspectives are in conflict at all. You have to take responsibility for your own safety. But at the same time, Tesla's response makes me lose respect for them.
At the heart of this issue is the fact Teslas are the first car that can have its functionality changed drastically from day to day (this "drive into a median" bug is believed to be a recent issue introduced within the last few months of autopilot updates).
I think that ability to update is one of the most exciting things about Teslas compared to other cars, but it presents a huge organizational problem. How does Tesla communicate out those changes to their customers, mechanics, sales staff, and customer service people? I think the reporting of this specific issue and the original "there is no issue" response is simply from a company failing to communicate in multiple directions rather than any type of sinister motive that some people seem to be implying (Hanlon's razor and all).
As someone elsewhere in this thread put it, this is a case where silicon valley's posterchild slogan "move fast and break things" has turned into a reality of "move fast and end lives".
Fast updates can be a great thing, but companies (in this case, Tesla) need to do a better job of figuring out when to apply this mentality of "move fast" and when to instead apply a mentality of "move slower because even the slightest error in this update could result in bodily harm to someone". There's a reason that most non-software industries (think aviation, medical, most car manufacturers not named Tesla) move excruciatingly slow when rolling out new features.
It is the classic trolley problem[1] and Tesla is flipping the switch on the tracks. I don't think there is anything to indicate that Tesla's approach is leading to more deaths. Considering there have only been two confirmed fatalities in which autopilot was partially at fault (AFAIK), I would bet that autopilot is still well in the positives in "lives saved" versus "lives ended". Whether that is the right ethical or moral decision is something that can be debated forever, but Musk, and in turn Tesla, have made it very clear where they stand on it.
But this isn't a question about no autopilot vs autopilot. This is a question about the previous version of autopilot vs the current version. If the previous version of autopilot (read: the one that didn't swerve into barriers and kill someone) was working, why did they update everyone to this new version which apparently has a life-threatening bug? Was it necessary to roll out this update so quickly? My guess is no, and that with less of a "move fast and break things" mentality, they might have tested the update more and fixed this issue before it went live.
The same principle applies. Autopilot is not a binary feature that either works or doesn't. It is an evolving system that should continuously get better and safer. Sure, Tesla could have delayed this specific update in favor of more testing. But how many preventable accidents would happen during that testing period that an earlier roll out could have prevented? Neither you or I know the answer to that. Tesla has access to all the autopilot data and therefore should be able to more accurately predict that number. I haven't seen anything to suggest Tesla is ignoring that data or being reckless in when they decide to roll out these changes.
>It is an evolving system that should continuously get better and safer.
In this instance, it didn't get safer. It did the opposite, and it killed someone.
>I haven't seen anything to suggest Tesla is ignoring that data or being reckless in when they decide to roll out these changes.
You haven't? Because I'm sitting here looking at a story about a bug in Tesla's latest software that Tesla ignored and then it resulted in someone's death.
You are drawing those conclusions with only half of the equation. Like I said in my previous comment, neither of us know how many accidents were prevented or lives saved by that update.
>Like I said in my previous comment, neither of us know how many accidents were prevented or lives saved by that update.
Then why are you making assumptions about it, then?
>You are drawing those conclusions with only half of the equation.
No, I'm not. The facts are: there was a problem that was reported to Tesla, Tesla ignored that problem, and now someone is dead because of that problem. No further information is needed to recognize that Tesla acted poorly here.
I said Tesla failed in my first comment. I am not defending them regarding this particular accident. I am simply explaining the potential logic behind their decisions and why this single failure doesn't mean their approach is inherently wrong. The only assumption I am making is that Tesla has good intentions with their aggressive autopilot releases.
> and now someone is dead because of that problem.
You've stated this repeatedly without proof. It has not been established he is dead because of that problem. The investigation may yet reach that conclusion, but until then you have no support for this assertion.
> I don't think there is anything to indicate that Tesla's approach is leading to more deaths.
The safety claims Tesla/Musk make are debatable. There is a lawsuit over getting the raw data NHTSA used to make the "autosteer reduces collisions by 40%" claim [1]. AEB also reduces the collision rate by 40 percent, which is mentioned in the same NHTSA report [2]. And supposedly AEB and autosteer were activated around the same time, so it might not be easy to tell which provided the benefit.
> I would bet that autopilot is still well in the positives in "lives saved" versus "lives ended"
You need a lot of data to show that. 1 death per 86 million miles driven is the average in the US, and that's across all types of vehicles, new and old, including motorcycles, bad weather (where autopilot is not likely to be engaged), and on all kinds of roads (autopilot is mostly engaged on divided highways).
There was also a death in China that looks like autopilot [3].
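To make the "you need a lot of data" point concrete, here is a rough illustration; the only input is the 1-per-86-million-mile US average cited above, and the event counts are arbitrary:

    # Miles of driving before you would even expect to observe N fatal crashes
    # at the US-average rate of roughly 1 per 86 million miles.
    baseline_rate = 1 / 86e6              # fatalities per mile
    for expected_events in (1, 10, 30):
        miles_needed = expected_events / baseline_rate
        print(f"{expected_events:>2} expected fatalities ~ {miles_needed / 1e6:,.0f} million miles")

Telling a genuine improvement apart from that baseline, across different road types and conditions, needs far more exposure than a handful of incidents provides.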
There is also no indication that it is _saving_ lives either; stats that cover easy, relatively safe situations, where humans take over when challenges occur, are not any kind of meaningful indicator.
Not to mention that the market Tesla courts self-selects for safer driving in general (rich and informed).
To act like this is a clear-cut case of the trolley dilemma and Tesla is making a moral choice that saves lives is misinformed bordering on disingenuous.
There is a huge amount of indication that Tesla is acting both irresponsibly (refusing to admit fault when it's clear fault exists, refusing to address concerns, only answering with deflections) and recklessly (misleading marketing about fully capable self-driving hardware, calling a system that requires constant, careful manual attention "autopilot", enabling it by default, and absolutely refusing to back down even a little).
As far as I know there has been only a single public study on the safety of Tesla's autopilot. It compared miles driven in Teslas without the autopilot hardware and safety features versus miles driven in Teslas with those features. It showed a 40% reduction in crashes. I think it is fair to assume that a 40% reduction in crashes resulted in numerous lives being saved. I don't see how coming to that conclusion is "misinformed bordering on disingenuous".
That specific note is available on pages 10-11 of the linked study. [1]
This isn't limited to no autopilot vs full autopilot. If autopilot is so good at correcting a drunk driver's accident, then let it run in a shadow mode only making corrective actions in emergencies. Why is AEB bundled in with Autosteer in all stats? AEB is undoubtedly good, Autosteer is debatable.
> AEB is undoubtedly good, Autosteer is debatable.
AEB is clearly unreliable, as it didn't automatically slow the car in any of these cases where the car was pointed directly at a significant-sized stationary solid object.
I did worry about this sort of thing back when they mentioned nuisance trips of the radar-based AEB system, and that to deal with them they were basically just geotagging where these nuisance trips happened and disabling the system near those points. I'd like to know if AEB was actually active at the location where the fatality occurred.
> There's a reason that most non-software industries (think aviation, medical, most car manufacturers not named Tesla) move excruciatingly slow when rolling out new features.
They do, and I'm largely glad they do, but I think you can go too far the other way; delaying advances that can save lives can result in more lives being lost because of the delay.
There has to be a middle point, you want new drugs (for example) reaching the market but with the risk minimised to a degree that is acceptable, you simply can't reduce the risk to 0.
All that said I'd be fascinated to read the assurance approaches that Tesla uses before pushing out updates to what is essentially a computer on wheels attached to batteries.
> functionality changed drastically from day to day
Is this a daily thing? Weekly? Forced-updating software is almost always bad, but it would be a nightmare in my car. "Good morning! The buttons on your steering wheel have been remapped, and the auto-nav now has a different set of quirks. Have a nice day, and we are not liable!"
If Tesla was shipping a Level 3 system, your analysis would be great. But they aren't. They're shipping a Level 2 system where you're reminded each time that you enable it that you've got to remain alert. And which dings at you if you don't keep your hands on the steering wheel.
This is irrelevant. Before his death, Tesla told the driver that there was no issue with the autopilot feature. This likely contributed to his continued use of the feature. That feature, because of the same issue (which Tesla denied existed) killed him. And now Tesla is saying that there is an issue, and that it's his fault for using the system.
Whether or not it's a full self-driving mode, or if it's an assisted auto-pilot, or even if it was just some lame GPS navigation, is completely irrelevant. The bottom line is: there was an issue that was reported, Tesla denied it, the man died because of that issue, and now Tesla is contradicting themselves by saying there is an issue, but are deflecting blame.
At this point, Tesla is eroding the ability for anyone to put trust in their cars. What if you have a Model X and one day it unsafely opens the doors while you are driving on the highway, so you take it to the dealer but they say "nope, can't replicate the issue, your car is safe to use". Are you going to believe them? Why, when they've just confirmed that they have a track record of being wrong?
You can't just blanket say everything is irrelevant because it doesn't fit your narrative. If he reported the system wasn't working, why would he continue using it especially in a known problematic area?
Because he was assured by Tesla that it was working, and there was no problem. They were clearly wrong.
As for the relevance: sure I can. My post about Tesla's contradiction has no relevance on what kind of autopilot it was. Tesla denied there was an issue, contradicted themselves by then saying there is an issue, and now is casting blame on the user for that issue. That's not acceptable. It would be unacceptable if we were talking about windshield wipers, or tires, or door locks, or the fucking clock on the dashboard. It is all about Tesla's denial of the issue and contradiction of themselves, and the type of autopilot is completely irrelevant.
There's a difference between "couldn't find a problem" and "no problem".
He went to them multiple times and they couldn't reproduce the problem. Sounds fairly standard. I'd be surprised if you can cite them saying there is definitely no problem.
Nobody was forcing him to use the feature if he knew it wasn't working as intended on that particular stretch of road. My issue with your statements is that you're shifting 110% of the blame to Tesla when that isn't remotely the case.
And my issue with your statements is that you're trying to have it both ways: you're claiming both that "there is no problem" and "Tesla told him about the problem so it's his fault". You have to pick one.
Nowhere do I say there is no problem so I’m not sure why you’re quoting that. I acknowledge there may be a problem since he shouldn’t have been using autopilot on that known problematic strip of road.
> he shouldn’t have been using autopilot on that known problematic strip of road.
Tesla doesn't set preconditions on where to use their autopilot. How many Teslas discover a new 'problematic' strip of road each day? Is every first Tesla to drive down a new road to become a crash-test dummy when it turns out the autopilot just can't handle that next bend in the road?
It's absurd to think so. The product is not fit for purpose. I'm confident the law would agree.
Sure they do; it isn't enabled 100% of the time. It can only be enabled when the car has the confidence it can perform its task. There are plenty of times when it isn't available to me for various reasons, whether road conditions or weather.
Because it's on his daily commuting path? If the autopilot generally functions well and gives him added security, why should he disable it?
But if the car consistently swerves toward the barrier in certain circumstances, doesn't that increase the probability of a crash, because it programmatically raises the risk of an accident in an otherwise safe situation? And then isn't Tesla culpable for not treating the bug reports appropriately?
Based on his complaints I don’t see how we can infer it gave him added security which would warrant continued use. If I knew a tool I used around the house wasn’t working completely as advertised I wouldn’t continue using the feature that wasn’t working for me.
As a beta user of bank software where clicking transfer money ended up sending twice as much, wouldn’t it be my fault if I did it 7 more times when I acknowledged its behavior even if it wasn’t as advertised?
I realize my examples are hard to compare to this specific situation, but the software is opt in and is littered with disclaimers because it’s not a level 3+ autonomous vehicle.
As I see it it’s a minor bug with horrible consequences. What is shocking is not the bug itself, nor the lack of a quick bugfix, but inability to acknowledge the problem and lack of basic and sincere human empathy.
To quote Tesla statement "We empathize with Mr. Huang's family,(...), but the false impression that Autopilot is unsafe will cause harm to others on the road." and the last sentence is by far the worst "The reason that other families are not on TV is because their loved ones are still alive."
Seriously, the most urgent thing for Tesla to do is maybe to fire the head of PR. How can anybody in their right mind have vetted that last sentence?
> As a beta user of bank software where clicking transfer money ended up sending twice as much, wouldn’t it be my fault if I did it 7 more times when I acknowledged its behavior even if it wasn’t as advertised?
Bad example. "Hey, this is broken, it did this," you say. They say, "Nope, no problem / fixed." You use it again. "Still broken!" "All good!" Again...
I think it’s a fine example when applied to 7 different times. That’s past a reasonable amount of times to place your trust in a system at that point. You’re just self inflicting pain if you do the same thing over and over again and expect different results.
I'm a new Model S owner, and it's hard for me to believe anyone would believe them. The autopilot makes egregious errors during any prolonged use. You should feel more alert and engaged when using it at this point.
Tesla would argue that the proximate cause was that the driver was not paying attention.
> And now Tesla is saying that there is an issue, and that it's his fault for using the system.
Tesla is not saying that it's his fault for using the system. They are saying it is not Tesla's fault if the driver wasn't paying attention, despite being presented with repeated warnings closely preceding the impact.
Failed under what conditions, though? The manufacturer gets to specify the conditions under which they consider their product to be safe and suitable for a particular purpose. The car was not used under those conditions and the result was injury through misuse of the product.
If my toaster causes me serious injury because I used it in the bath, that is my problem.
Product liability (in the US) attaches to the normal usage of a product, whether that is an intended usage of the product or not.
A toaster is not normally used in the bath, so that does not create product liability issues. A car is normally used for driving...in fact, that's a car's primary use...so product liability would attach to its driving-related functional failure.
> I am not a lawyer, but it is not clear to me just yet that the product failed.
It's not clear to me whether it was a design defect or a manufacturing defect, but it's pretty clear that the lane keeping system drove the vehicle into a barrier which was not, in fact, located in a traffic lane, killing the occupant, which is pretty obviously a failure.
> It's not clear to me whether it was a design defect or a manufacturing defect, but it's pretty clear that the lane keeping system drove the vehicle into a barrier which was not, in fact, located in a traffic lane, killing the occupant, which is pretty obviously a failure.
Reflecting on your reply and gamblor956's post below, and after reviewing the published photos, I concede this looks like the most likely scenario, and if that is what actually occurred, it would unequivocally be a failure.
(That said, there's still a chance, however small, that whatever seems obvious now ends up being invalidated by the investigation. I just don't like feeling certain about things like this, especially from a distance.)
Car steered itself into a concrete wall. That is, by definition, failure. It doesn't matter whether it was properly following its coding to follow lane lines.
Look, I'm old and cynical too, but we can't do character assassination in this way. You need to take issue with specific facts or your argument comes across as extremely weak.
The post was downvoted almost to oblivion, and blocked until the admins released it. Just read the comments. Most of the "character assassination" is commenters attacking the OP on Reddit.
Could you please point to the source where it says that Tesla said it wasn’t an issue? All I read describes that they couldn’t reproduce the issue. That is something different.
There's something fundamentally wrong in releasing that to the public, then. Your authority for vehicle safety should be VERY concerned about this; people getting killed aren't "bugs to iron out".
That makes absolutely no sense. You're basically saying that Tesla's statement made it OK for the driver not to pay attention to what was going on, which is required 100% of the time even when Autopilot is functioning correctly. (It's an L2 system, and it reminds the driver to continue paying attention each time it is turned on.)
If things worked the way you want them to, no car company would ever say anything to any customer other than bland statements drafted by lawyers.
I think the core of the issue lies in what “using autopilot” means.
If the driver had used it according to the instructions (with hands on the wheel, while driving the car, watching the road as a driver always should), the crash would not have occurred.
Tesla is between a rock and a hard place, they need to make it crystal clear that the car performed as expected. The message is not intended for the deceased driver’s family but for everyone listening, including everyone who owns a Tesla.
It’s not an “autopilot”, it’s a lane assist. I see this as a naming failure more than anything else.
> they need to make it crystal clear that the car performed as expected.
That hasn't even been definitively established yet (regardless of what Tesla says), as the multiple videos of post-update Teslas steering directly toward crash barriers demonstrate.
No, it is absolutely not. I'm not sure why you keep posting this as if it is definitive.
Let me put it very clearly for you: it is not acceptable, in any shape or manner, to expect your users to have to "save themselves" from your product killing them every 5 seconds. If your product requires that, it is a bad product.
Furthermore, your entire argument of "it's just a level 2 system..." falls apart when you consider that Tesla jumps through several marketing hoops to deceive people and make them believe it isn't just a level 2 system. When you visit Tesla's webpage on autopilot [1], the first and only thing you see at the top of the page are the following words in big, bold text:
>Full Self-Driving Hardware on All Cars
>All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
It is not until you get to nearly the very bottom of the page that Tesla displays a disclaimer that "when we say 'full self-driving', we don't actually mean full self-driving, because that would be illegal and isn't available yet." There is only one single sentence, buried within a paragraph mid-page, that says drivers must remain alert and prepare to take over. The rest of the page is also filled with language indicating that autopilot is capable of turning itself, switching lanes itself, speeding itself up/slowing itself down, etc.
It is not unreasonable whatsoever for someone to read this webpage and assume that autopilot will magically take them wherever they need to go. This webpage is in stark contrast to the reality, where autopilot will actively steer you into a concrete barrier and kill you if you do not babysit it. And to make things worse, Tesla apparently considers this to be a "feature".
According to your Twitter, you own a Tesla, so it seems you have a little bit of fanboy-ist bias going on here. I'd suggest you take off the Tesla fanboy ballcap and consider how you would feel if this were a different product. If someone sold you a smartphone, but told you "this phone never requires charging because it has a mini nuclear reactor in it. Be careful though, because if you don't keep your eyes on the phone at all times and press the 'please don't explode and kill me' button every 5 minutes, you'll die!" would you be defending that phone in the same way you're defending Tesla?
Great research! Having used autopilot in my girlfriend’s car, I’m aware of the notifications when autopilot is turned on. You should check it out, instead of assuming that drivers only see the marketing materials you quote.
Also, have you used any adaptive cruise control system? Because they break your rule.
>Let me put it very clearly for you: it is not acceptable, in any shape or manner, to expect your users to have to "save themselves" from your product killing them every 5 seconds. If your product requires that, it is a bad product.
Yes it is. Otherwise you're asking all car manufacturers to stop selling cars. "I have to constantly keep my car between the lines or I'll cause an accident" is an acceptable standard. That's how every car on the road works today. The liability is on the user to operate it properly.
You accuse the poster of fanboy-ism, but you don't realize how much Tesla tells the driver that they need to pay attention and keep their hands on the steering wheel while they're using autopilot. So maybe your lack of experience with the car makes you pull those baseless arguments.
While I agree Tesla is deceptive with their wording and the way they sell their autopilot ("have the hardware needed for full self-driving capability"), it's not legally wrong. They have the hardware for it (according to them), but at the moment, the software they have is a glorified lane assist. They specifically mention in that same page:
> Every driver is responsible for remaining alert and active when using Autopilot, and must be prepared to take action at any time.
>That's how every car on the road works today. The liability is on the user to operate it properly.
No, it is not. No other car on the market today actively steers you toward an obstacle. Every non-autopilot car would require the driver (or another human actor, such as another driver) to first put themselves in a dangerous situation before requiring they get themselves out of it. Tesla autopilot, on the other hand, puts the driver in a dangerous situation on its own. That is the crucial, fundamental difference.
There is a huge difference between not-avoiding an accident and causing an accident. It didn't just drift, it actively pointed itself toward the barrier. That shouldn't happen at level 2, or level 3, or level 0.
I've never driven a Tesla or any other kind of "autopilot" car.
But for the current kind of systems, what I would really like is something that takes evasive action, not something that drives for me. I know some non-autopilot cars have those systems in place, but I'm not sure how much more protection a Tesla would give me.
I guess I need to take one for a spin one of these days.
Some interesting research on “assist” functionality was mentioned as a part of this fascinating talk on self driving cars and their perception of the world.
I didn't know Tesla had an exchange with that person over previous occurrences of the problem.
I believe Tesla should take responsibility for this. But I think they fear it could kill their entire company if they made a public statement admitting a safety failure was their fault.
All in all, it's not right. Not long ago they were claiming to have the highest safety ratings; now they're in denial over a death.
Tesla was one of the best pieces of tech I've ever watched happen, up until they decided to introduce Autopilot. I guess they found out they couldn't easily improve on range, which was the real feature up until this branding and PR mess. They were the good, benevolent techies, but they turned into angry, arrogant rich guys in a flash at the slightest criticism, even when they were the ones to blame.
What I hope now is that a fresh new enterprise comes along that prioritises range, and can produce serious electric alternatives to conventional buses and scooters (e.g. Vespa) too.
This is gonna be interesting as apparently the "Barrier" [1] bug is only in newer AP2 models per latest A/B tests done by cpddan (who did the very first crash reproduction scenario in Chicago [2])
1. https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil.... "works for 6 months with zero issues. Then one Friday night you get an update. Everything works that weekend, and on your way to work on Monday. Then, 18 minutes into your commute home, it drives straight at a barrier at 60 MPH."
it definitely has to do with how the lane marker stripe on the road crosses into the other lane at the divider.
you can see the Tesla hugs the left line divider. When the lanes split, the line doesn't -- it continues into the opposite lane.
Regardless, that's a totally common occurrence on the road. It seems absurd that the car will trust a painted line over any type of radar or imagery showing a rapidly approaching wall...
EDIT: watch both videos again --
Notice in the video where the car DOESN'T head towards the divider, the Tesla autopilot dashboard shows that it can see the car ahead of it.
In the one where it DOES head towards the divider, it doesn't have a car in front of it to follow.
I am wondering if in the first instance, the Tesla was confused by the lines, but saw a car ahead which cleared up its confusion (maybe it weighs trailing another car as better than following lines).
In the other instance, all it had to work with was the lines (and all the radar and other stuff it has that should avoid all this...).
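To make that guess concrete, here is a toy sketch of what such an arbitration could look like. Everything in it is invented for illustration (the names, the 0.7 threshold, the structure); nobody outside Tesla knows how the real system weighs painted lines against a tracked lead vehicle.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LaneEstimate:
        center_offset_m: float   # lateral offset to the estimated lane center
        confidence: float        # 0.0 .. 1.0: how well the painted lines agree

    @dataclass
    class LeadVehicle:
        lateral_offset_m: float  # lateral position of the tracked car ahead

    def choose_steering_target(lane: LaneEstimate,
                               lead: Optional[LeadVehicle],
                               min_lane_conf: float = 0.7) -> float:
        """Hypothetical arbitration: trust the painted lines while they agree,
        otherwise fall back to shadowing the car ahead if one is tracked."""
        if lane.confidence >= min_lane_conf:
            return lane.center_offset_m      # normal case: follow the paint
        if lead is not None:
            return lead.lateral_offset_m     # degraded case: follow the lead car
        return lane.center_offset_m          # last resort: follow the paint anyway

If anything like this is going on, the failure in the second video falls out naturally: with no lead car and a line that runs straight at the gore point, "follow the paint anyway" is exactly the wrong answer.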
I've mentioned before on here, it's clear Tesla's autopilot isn't really any better than other lane assist from other car manufacturers. At this point it's just a neat party trick.
It doesn't work based on car-following, but it does switch to that mode momentarily in extreme cases, when the lanes completely disappear and there is a car in front. When it does, it makes the leading car fully blue.
I get that the vision system might have been fooled by how the lane widens before the split.
What I don't get is why the radar doesn't pick up the barrier itself. It should have been getting a nice, healthy return signal from it, lots of metal at all kinds of angles. What I'm saying is, this is not a stealthy barrier.
The thing is, there are a lot of stationary objects alongside/near major highways (signs, bridge supports, barriers, etc.). The radar systems are not 100% accurate about whether a particular stationary object is in the lane. In nearly all cases, stationary objects are not in your lane (if they were, the previous driver in the lane would have hit them). Thus the radars are tuned to detect non-stationary objects, as those are slowing cars, merging cars, etc. all of which are commonly in one's lane, and which require action from the Autopilot.
Radar on the Teslas, if it works like how I am familiar with, just detects "things". Things that have a decent radar return. This class of things does indeed include cars and other vehicles, but should also include walls and such.
It is likely that they're using at least 2-D radar, so the system should be able to discriminate between an object directly in the path of travel, and one that is off to the side.
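The usual description of that tuning is a simple velocity gate: a return whose closing speed matches your own ground speed is "stationary world" and gets discarded along with every sign and overpass. Here is a minimal sketch of the idea; the numbers and function name are purely illustrative, not any vendor's actual logic.

    def is_actionable_return(ego_speed_mps: float,
                             closing_speed_mps: float,
                             tolerance_mps: float = 1.0) -> bool:
        """Velocity gate: drop returns whose closing speed equals our own speed
        (stationary world: signs, bridges, barriers); keep the rest, which are
        moving or slowing vehicles."""
        return abs(closing_speed_mps - ego_speed_mps) > tolerance_mps

    # A concrete barrier dead ahead closes at exactly our own speed, so this
    # gate throws it away together with the overpasses and roadside signs:
    print(is_actionable_return(ego_speed_mps=27.0, closing_speed_mps=27.0))  # False
    print(is_actionable_return(ego_speed_mps=27.0, closing_speed_mps=12.0))  # True

In principle the angular information would let the system keep stationary returns that sit squarely in the travel path, which is exactly the discrimination being asked about here.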
Sorry for not having quoted the exact paragraph in my very first comment. You can find the reference just above; it stops working at around 90 mph (150 km/h) - Tesla Model S manual, Collision Avoidance Assist, page 95.
"Automatic Emergency Braking operates only when driving between approximately 7 mph (10 km/h) and 90 mph (150 km/h)." -- Model S Manual (you can find it online)
If road markings have priority over radar, then Tesla has a huge security hole on their hands. People could just paint brighter lines at a diagonal and crash them.
I have to wonder why Tesla keeps doubling down on victim-blaming. Is this a case of Elon Musk being insecure about criticism (since he's demonstrated a tendency in the past to react poorly to it)? Or is Tesla trying to actually lower their legal liability?
Edit: To clarify, in most of these crashes, it's usually a combination of Tesla's fault and the driver's fault. Which is to say, the driver could have avoided the crash, but Tesla could also have avoided it, and the driver's biggest fault lies in trusting the "autopilot" more than they should. In the case of this particular crash, I certainly agree that the driver should have been paying more attention to the road, especially because they already knew the autopilot had issues at that particular point, and if they were paying attention they could have avoided the crash. However, the fact that autopilot can't handle that barrier correctly is a problem, especially because the driver reported that exact issue to Tesla multiple times in the past (heck, Tesla probably should have blacklisted that particular GPS location to force the user to take control when approaching that point, if they can't handle the barrier correctly in autopilot). Similarly, Tesla allows drivers the most freedom to ignore the road of any of the "autopilot"-like systems, and they continue to call their system "autopilot"¹, both of which only serve to make this sort of crash much more likely.
¹Yes I know it's technically correct, but it doesn't match what the general public thinks when they hear the term "autopilot". It gives the wrong impression that the system is smarter and more reliable than it actually is.
>All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
That seems like an incredibly misleading way to market it. I could throw a LIDAR[0] and some cameras on my car and say "wow I have all the hardware necessary for autonomy". That's irrelevant if self-driving is dependent on some software updates that are coming at an unspecified point in the future (or gradually over time, with some surprise regressions like this one along the way).
[0] Tesla thinks they'll have full self driving with just cameras but I'm not convinced. Regardless of how level 4+ autonomy is achieved, there's still some serious work that needs to happen on the software side
You're touching on one of the things that Tesla does that frustrates me to no end. They have this attitude of software being the solution to everything. There's a reason Waymo has mounted lidar to the roofs of their cars: it isn't because they can't build good enough software or their ML models aren't accurate enough, it's because they've realised that you can't just throw a bunch of cameras on a car, add some computer vision algorithms, and market it to consumers as being fully self-driving capable.
Compare that with the message displayed each time the driver turns on Autopilot. Is there any evidence that Tesla drivers are more influenced by marketing or by the message that they see every day?
>Is there any evidence that Tesla drivers are more influenced by marketing or by the message that they see every day?
Given the behaviour of customers (putting down lump sums of money without knowing when they'll receive their car, camping out in front of stores), dedicated online communities (a 200k-user subreddit for a car company?), and so forth, I'm going to go with marketing; it seems to border on the cult-ish.
There is a cult of personality around Musk and Tesla that is simply irrational
I'm willing to bet that the NTSB report on this accident will explore that issue. The one number that Tesla has given included the effect of automatic emergency braking, so it's not a pure autopilot number.
Even if they do show the AP is safer than a human driver, the idea of a machine/software killing people is pretty unsettling to most of us. It's one thing to have a human driver make an error and kill himself and others, and another to have a machine do it on a somewhat regular basis, even if in the long run fewer people die overall. There's something about it that is really uncomfortable for society. The justification for it feels very cold and calculated, even if it is right. We've arrived at the trolley problem.
The trolley problem doesn't have a human in the loop.
Edit: Instead of downvoting me, can the downvoter please point out where the trolley problem involves a human in the loop? This is a level 2 system, where a human driver has to remain alert and ready to take over. That's not the trolley problem.
Oh we let machines kill people all the time. Trains, power transformers, oil rigs...
Sure, autodriving is entering the uncanny valley and that's different. Lots of things are different about it. Instead of prosecuting one person for causing an accident, now a whole line of vehicles is 'at fault' and maybe gets sidelined. The good news there is that they can all be fixed at once.
It'll end up more about the law and insurance implications, than what we feel about it.
While it might imply otherwise, the message is clear about the _hardware_ being ready for self-driving. That doesn't mean the car is capable of self-driving in general.
Sure. But this is the sort of thing that is in my view an instigator of escalating adversarialism. I recognize this kind of cagey language, but I also recognize that most people don't and I think it's absurd to demand that they do understand it. I put this in the category of false advertising and I think a jury of random people will side with the dead. Hence the defense's job will be to disqualify ordinary people who will go for connotation over denotation.
They double down on it because they are shipping a system that is unsafe by design. Their engineers know it, their lawyers know it, and Elon Musk knows it.
Blaming the user and pointing to the ToS is their fig leaf against liability. When autopilot is used as required by their ToS, it is useless. When it is used as advertised, it is dangerous. They know better, and they don't give a damn. First to market, and all that.
According to the TOS, and given Tesla's statements, the system needs to be supervised by a 100% attentive human 24/7. From what we know of human physiology, and what we have observed from previous self-driving crashes, this level of attentiveness is impossible to maintain... Without being the one actively driving the car.
But if the safety feature works almost all the time, and occasionally fails catastrophically, I would argue that it is actually more dangerous than having no safety feature, since people will depend on it working and not catch it when it fails.
I think they're trying to hammer home the point that autopilot is very much not level 5 self-driving. The media is constantly referring to "self-driving" Teslas, so of course it's easy for members of the public to get confused, but this is very different from e.g. the Uber self-driving fatality that occurred recently, where the car was supposed to be able to drive 100% autonomously. Tesla is extremely clear that these are only driver assistance features, but people keep getting confused about it, so they keep doubling down on their message.
> Tesla is extremely clear that these are only driver assistance features
Promoting the system with the name 'Autopilot' is, to my mind, quite a bit less than extremely clear communication. This name provides Tesla's critics with a constant line of attack, which will continue right up until they reach level 5. I consider myself a hardcore Tesla fan but I think they've botched their messaging, and this latest statement isn't helping.
> ... but people keep getting confused about it, so they keep doubling down on their message.
Well, that's exactly the problem, isn't it: people keep getting confused. If your message isn't getting through, instead of just yelling it louder you could try eliminating all ambiguity, for instance by changing the name. If this were marketed as simply 'adaptive cruise control with steering', nobody could claim they were misled.
>Promoting the system with the name 'Autopilot' is, to my mind, quite a bit less than extremely clear communication.
Is it, though? Autopilot has never in the history of those systems meant hands off controls, attention off driving. Pilots with their autopilots on in airplanes are still flying and watching gauges.
> Is it, though? Autopilot has never in the history of those systems meant hands off controls, attention off driving. Pilots with their autopilots on in airplanes are still flying and watching gauges.
You are correct about the history of autopilot systems in planes, but when it comes to mass communication technically correct is not necessarily the best kind of correct. I wouldn't assume most people are aware of how autopilot systems work in planes.
Edit: just to make it clear, I don't believe this accident was caused by any confusion about the name. Rather, as pointed out by others in this discussion, I think the fact that the system works great for long stretches (some say they never experienced any problems) makes it exceedingly hard for drivers to remain alert. The brain's optimization tendencies are so strong that I think systems relying on people to stay alert when the action-effect-evaluation feedback loop is broken are inherently unsafe. My point is that with suboptimal messaging, Tesla is making it harder for themselves to keep the public on their side when tragedies like this occur.
The comparison to autopilot is a good one, I agree, but the implication is just the same: it's potentially dangerous. The aviation industry had to address this issue back in the 80s and 90s, noting that simply failing to properly observe gauges or other indicators contributed to 31 of 37 airline disasters in the 80s: http://libraryonline.erau.edu/online-full-text/ntsb/safety-s...
That may come back to haunt them. For those not clicking it, the above link goes to a video showing the Autopilot in action. The white-on-black, full-screen text prefacing the demo is not doing them any favours in the debate about their messaging (all-caps are in the original):
"THE PERSON IN THE DRIVER'S SEAT IS ONLY THERE FOR LEGAL REASONS."
"HE IS NOT DOING ANYTHING. THE CAR IS DRIVING ITSELF."
Car demos are almost always "do not attempt" and violating this or that, so when they say the driver wasn't necessary for the demo I don't personally extrapolate that to say anything about real life.
> Car demos are almost always "do not attempt" and violating this or that
Ads like those explicitly include 'do not attempt' disclaimers; this video does not. The statement it does include only emphasizes how advanced the system is.
> so when they say the driver wasn't necessary for the demo I don't personally extrapolate that to say anything about real life.
You do not, but the question is how many jurors would see it the same way when a highly skilled lawyer argues that this video is misleading, as part of his efforts to hurt Tesla in a liability suit.
Yeah, there's a very fair criticism around what exactly Tesla is advertising here.
As a safety matter, I am pretty confident that nobody gets through configuration and delivery (let alone a test drive) thinking this is what their car does when they turn on Autopilot.
Anyone who knows what an autopilot [1] is understands that it does not make an aircraft autonomous. It's an assistive device - a stabilizer for wings and pitch.
To call the Tesla Model S an "autonomous vehicle" simply because it has a feature called autopilot is disingenuous.
Only licensed aviators know the difference... Also, note that:
In 1947 a US Air Force C-54 made a transatlantic flight, including takeoff and landing, completely under the control of an autopilot.
I don't know about legality, but imo there is an argument to be made that it's very unethical to feign ignorance as to what the vast majority associates "autopilot" with.
I do have a problem with some of their marketing material - which describes the car as having "Fully Self-Driving Capability." No, the Model S doesn't have the capability to drive itself fully right now, and it's unethical to lead consumers to believe it does.
But in my view, the name "autopilot" itself describes a device for pilots, providing a very similar level of partial assistance to Tesla's current feature. It is accurate. The problem arises when it's sold beyond its traditional meaning.
This fails the Reasonable Person test. Most people on the street are not aviators, and their assumption is that "autopilot = full autonomy", which is entirely understandable.
"Tesla is extremely clear that Autopilot requires the driver to be alert and have hands on the wheel. This reminder is made every single time Autopilot is engaged."
A reasonable person would not take this to mean that the car is driverless.
1. Tesla says it requires hands on wheel.
2. It has sensors to confirm this.
3. The sensors allow you a few seconds (a lot of seconds? https://www.youtube.com/watch?v=rfvkw1TIiM4) with hands off wheels before warning you.
So, which is it, Tesla? Is hands off wheel allowed, or not allowed? If it's not allowed, why do you program your sensors to tolerate the situation?
Not to mention the fact that the marketing around it strongly encourages that interpretation (showing people driving without touching the steering wheel, etc.)
This is the part that is upsetting to me, not the name of the feature. "Fully self-driving capability" suggests that it is one hundred percent self-driving (and therefore will not require a human to drive it). That part's a lie, and a dangerous one.
Might also be that they're afraid media might start pointing out that it's just fancy lane assist and not the "self driving" they've been subtly marketing. Don't get me wrong, I wish I had one in my driveway right now, but they really need to tone down the whole autopilot bit until they can make sure it doesn't kill people on very obvious obstacles.
> heck, Tesla probably should have blacklisted that particular GPS location to force the user to take control when approaching that point, if they can't handle the barrier correctly in autopilot
This is something they really ought to do. They should also proactively blacklist places where other drivers commonly disengage Autopilot, because there might be something tricky there that it can't handle.
Tesla is exerting and reinforcing that it is not liable for poor driver decisions. If you use any vehicle safety aid while not paying attention, and you die or kill someone while in control of the vehicle, that is your liability.
Whether they call their system "Autopilot" is immaterial. You are the responsible party as the driver, and this is made clear both at purchase time and during vehicle use.
Actually, if the fault isn't reasonable the manufacturer is usually held responsible.
A recent example is the ignition switch recall that old GM had. It took operator action to trigger the fault, but GM still repaired the faulty cars and paid liability claims.
Poor example, as GM knew of the defect for a decade and did not notify customers of the defect until a lawsuit generating significant internal document discovery led to the public disclosure.
And faults in safety systems can leave their manufacturers liable, just like faults in any other system. The fact that it is a safety system doesn't mean anything changes.
> If you don’t agree, do not engage the safety system. As the operator, you have final authority.
"Don't like, don't use" has never been an acceptable answer in discussions of product safety-- in neither the court of law nor the court of public opinion-- and that's not going to change now.
Boy, could you imagine what things would be like if the aviation industry reacted this way? Airline automation has played a huge role in so many major disasters, and only through careful reflection has the measured and careful use of automation led to ever-better outcomes for air travel safety.
What they call the system drives user expectations and trust. They call it Autopilot so that people buy it, with those expectations. It is consequential, and thus a legitimate subject of critique.
Calling it autopilot implies you have to pay attention. It implies it's NOT self-driving or autonomous any more than airplanes (which virtually all have some sort of autopilot) are.
The name is irrelevant. What's relevant is that 56% of commercial pilots have admitted to falling asleep while flying[1].
People suck at remaining alert and attentive when doing boring shit. So when autopilot was all over the road and screwing up every 30 seconds, it wasn't much of a problem. Now that it can navigate highways for an hour or more without needing a safety critical intervention, we have a problem.
What percent of commercial pilots also fly with a copilot?
Additionally, I'm not a professional driver, I don't have any autopilot on my car, but I've definitely briefly fallen asleep while driving late at night. What do you think rumble strips are for?
Such an extremely low number of (but extremely highly publicized) fatalities is not reason enough to say autopilot is a net negative. In fact, the extreme media focus that every fatality during autopilot causes should goad improvement of autopilot safety until it's far better than human drivers.
(And let's not pretend that this autopilot error doesn't also happen with human drivers... The only reason this failure was fatal is because less than two weeks earlier, a human driver crashed into the same barrier--likely due to the same confusion--and destroyed its effectiveness since the highway department was slow to reset it.)
In the article I posted it mentions that of the 56% of pilots who admitted falling asleep, 29% also admitted to having woken up to a sleeping copilot.
There is no path from Autopilot as it is to full autonomy without doing a major shift in strategy. Google figured out very early on that the incremental approach is unfeasible. Most of the other big car companies at the outset were intending to develop autonomy incrementally as well, but then they thought about it for a few minutes.
Why aren't major shifts in strategy possible? LIDAR may become very cheap (i.e. $1000, much less than drivers paid for the full self-driving package). Improvements in radar may also help. Requiring much lower speeds (driving like a Waymo grandma) and specific conditions (i.e. dry, well-lit, well-mapped) are also feasible.
As the technology improves, conditions can be relaxed for full self-driving.
The very fact that in this case, the driver had multiple problems with this same spot, and that other drivers also noticed problems in this area, points to an obvious mitigation strategy: flag problem areas geographically for autopilot, develop specific strategies that address the problem, and verify it has been fixed by examining how other Teslas on the road in "ghost" mode (i.e. with the human driving but the software pretending to self-drive) respond to the changes.
Tesla, due to its massive and well-connected fleet, has a lot of tools in the toolbox to address these problems, in some cases more than Google (although I agree Waymo's self-driving capability is currently more impressive).
> flag problem areas geographically for autopilot, develop specific strategies that address the problem, and verify it has been fixed by examining how other Teslas on the road in "ghost" mode
This is what I have been thinking as well, except it appears that Tesla doesn't currently have a way to handle such scenarios.
Ideally, they should be recording when a driver makes a sharp swerve to correct autopilot's naïve lane marker following algorithm, then use that information to inform future drives using autopilot on that stretch by disabling autopilot as that patch of road is approached, while they work on a reliable fix.
Tesla's new Waze-like real-time traffic system as part of its mapping software update could help handle this. If the autopilot system (engaged or in ghost mode) generates an error signal (either binary or some variable degree of error) that is pushed to the traffic system along with the traffic data, it could easily alert other drivers (and hopefully the autopilot system) of such problems.
Considering they have enough data to do realtime traffic analysis, they easily should have enough data to determine problem areas for the autopilot.
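The bookkeeping side of such a fleet-sourced blacklist is genuinely simple; the hard parts are in the data (map-matching, telling a corrective swerve from an ordinary lane change). The following is a toy illustration of the proposal above, not anything Tesla is known to ship; the threshold, segment IDs, and function names are all made up.

    from collections import defaultdict

    TAKEOVER_THRESHOLD = 5  # arbitrary: how many abrupt takeovers flag a segment

    segment_takeovers = defaultdict(int)

    def report_abrupt_takeover(segment_id: str) -> None:
        # Called whenever a driver sharply overrides Autopilot's steering.
        segment_takeovers[segment_id] += 1

    def autopilot_allowed(upcoming_segment_id: str) -> bool:
        # Refuse engagement (or hand back control) on flagged segments.
        return segment_takeovers[upcoming_segment_id] < TAKEOVER_THRESHOLD

    # After enough reports at a known trouble spot, approaching cars would get
    # a forced-handover warning instead of staying engaged:
    for _ in range(6):
        report_abrupt_takeover("US101-SR85-gore")
    print(autopilot_allowed("US101-SR85-gore"))  # False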
This is a very upsetting response. Yes, the driver could have paid more attention, and yes, he shouldn't have activated Autopilot in an area where he knew Tesla's autopilot was not working correctly.
However, Tesla calling it "Autopilot" should no longer be allowed. It's clear that it is most certainly not that, and customers are not treating this feature in a careful and cautious fashion, and I do believe that's due to Tesla overselling Autopilot's abilities. In addition, its warning systems are obviously not enough to get drivers to take corrective action and ensure their hands are on the wheel. (Why you can take your hands off the wheel at all when engaging autopilot is a mystery to me.)
It'll work 99% of the time and lull you into a false sense of security. The other 1% of the time you better be ready to take over with little notice or there's a chance you'll end up dead. And with automatic updates a section of road that has been fine for a year suddenly tries to kill you...
Blaming the victim certainly doesn't sit well with me.
I totally agree. I am a massive fan of Tesla but this response is absolutely atrocious and disrespectful.
They refuse to acknowledge a bug in THEIR software which drove his car into a concrete barrier, and pin all the blame on him. They even go so far as to tease him and ask why he would even use their product in that area (which Tesla allows).
It must feel so so so sad to have your loved one die and then to read this heartless BS from tesla.
I also always say this to others. It doesn't matter if it works 99.9% of the time. It took them maybe 10 years to get the software to this point. But it will take 50 years to solve the rest. And nobody is going to use the systems before that.
Autopilot is a marketing term here. The criticism would hold if "Autopilot" actually meant autonomous, but it doesn't. The thing alarms at you (visually and via sound) when you take your hands off the wheel. The fact that drivers ignore that isn't Tesla's fault, and saying it's "not enough" is kind of absurd. What the hell are they going to do if you take your hands off the wheel? Stop the car in the middle of a freeway?
I could use your exact same argument to say Ford isn't doing enough to force people to put on their seat belts. It's a silly argument. Ford provides seat belts and alarms when you fail to put one on. At that point, any harm is user error. That's the exact same case here.
Yes. When the alternative is having a crash they should find a way to stop. [edit: of course I don't mean to suddenly stop. But if the system can't trust the driver is alert it should disengage or stop. As we've seen now multiple times, if it continues it will result in a crash.]
The current behavior is not only dangerous to the driver of a tesla. It puts everyone in the road at risk, most of whom have no contract with tesla and never agreed to be guinea-pigs in their so-called "autonomous" experiment.
Stopping in the middle of the road is dangerous (people will crash into you), and it would also be hard to build a feature that pulls the car off the road safely. If the computer reads it wrong, you'll end up in a ditch from the car trying to pull you off a road with no shoulder. The shoulder of a road is dangerous as well, even when there is a proper one, because people often do not stay exactly in their lane. This is why officers often come around to the passenger side.
Just make the car turn on the warning lights and gently come to a stop when the driver lets go of the steering wheel or is otherwise not paying attention.
There are two possibilities:
1) The driver is actually incapacitated. Then slowing down and stopping is a million times safer than continuing to drive.
2) The driver was looking at his mobile phone. Then he'll probably quickly react and take control of the car again.
I don't see the scenario where continuing to drive is safe when the car knows the driver isn't paying attention.
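The escalation being asked for here is not exotic; other driver-assist systems already ship versions of it. A rough sketch of the ladder, with invented thresholds and wording, just to pin down what "gently come to a stop" would mean in practice:

    def handle_hands_off(seconds_hands_off: float) -> str:
        """Hypothetical escalation ladder: warn first, then assume the driver
        is incapacitated and slow to a stop in-lane with hazards on, rather
        than continuing to drive or swerving for the shoulder."""
        if seconds_hands_off < 10:
            return "visual warning"
        if seconds_hands_off < 20:
            return "audible warning"
        if seconds_hands_off < 30:
            return "hazards on, begin gentle deceleration"
        return "come to a controlled stop in lane"

Whether stopping in-lane is safer than carrying on is exactly the disagreement in the next few comments, but note that the ladder only reaches that last branch after the system has concluded nobody is driving at all.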
Big city highway traffic where the road is crowded and moving fast. Those people aren't paying attention either and will slam into you. I've seen cars stopped on the freeway a few times in Atlanta because they broke down and it causes a big wreck. Responders really rush to get them off the road because it's so dangerous.
The biggest issue here is 90% autonomous has been shown to not work. People invariably let down their guard and go "hey, this works" and promptly stop paying attention because there's nothing for them to do.
>People invariably let down their guard and go "hey, this works" and promptly stop paying attention because there's nothing for them to do.
This is not specific to Tesla.
Trivial example: A coworker has a car that has blind spot detection. She knows the manufacturer says "you should still check, it is not 100% accurate". And she admitted that despite knowing this, she does not check her blind spots and relies solely on blind spot detection.
More serious example:
I have a car that has adaptive cruise control. You can get on the highway, set the CC to 60 mph, but if traffic slows down, it automatically slows down and maintains a distance to the next car. It speeds up to 60 mph when the traffic picks up again.
I love it, but for the inexperienced driver, this is dangerous. It requires one to be more vigilant than regular CC. Every so often (rare, but has happened a few times), it will not detect that the vehicle in front of me is a vehicle, and will not slow down. The more you rely on it, the more vigilant you need to be. My wife has never used CC before, and this would be too much for her to handle, compared to simpler CC.
It's a general problem. Tesla is just a convenient target.
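For readers who haven't used one: the core control decision in adaptive cruise control is genuinely this small, which is also why the failure mode described above is so abrupt. A minimal sketch, illustrative only and not any manufacturer's control law:

    def acc_target_speed(set_speed_mps: float,
                         lead_detected: bool,
                         lead_speed_mps: float,
                         gap_m: float,
                         desired_gap_m: float = 40.0) -> float:
        """Hold the set speed unless a detected lead vehicle forces us to
        keep a following gap instead."""
        if not lead_detected:
            # This branch is the danger described above: if the car ahead is
            # not classified as a vehicle, we simply keep the set speed.
            return set_speed_mps
        if gap_m < desired_gap_m:
            return min(set_speed_mps, lead_speed_mps)  # match the slower lead
        return set_speed_mps

Everything interesting, and everything that occasionally fails, lives inside lead_detected.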
I very much agree with your opinion. Systems which take over most but not all responsibility seem to lull us humans into a false sense of security/reliability. The psychological aspect of this is actually quite intriguing, but it's also a very dangerous phenomenon.
I don't know anything about your wife, but that does sound perhaps a touch condescending. Besides that I think you make a very good point, one that many people seem to want to wish away.
>I don't know anything about your wife, but that does sound perhaps a touch condescending.
That's probably because you don't know anything about my wife :-)
This isn't about my wife. I would never recommend anyone to learn CC on this type of system. They should learn CC on a regular car, and once they're comfortable with that, switch to something like this. You simply have to be more alert for a "safe" CC than a regular one.
Many marine autopilots integrate with AIS, sonar, and charts for accident avoidance and alerts. For example, on some larger ships the system can calculate whether, at the current speed, a collision would occur, and reduce speed to avoid it. Some can do "port to port" auto-steer, taking the vessel through narrow channels.
We are both accurate. I was talking about boats, not ships. My point still stands - ships are required by Collision Regulations to keep a lookout at all times. And yet they are called "autopilots". Even the most crude of devices that I described (that is used on pleasure boats).
A big part of being allowed to take your hands off the wheel is probably "how do I detect the hands are on the wheel?". In current S/X with no interior camera the way autopilot works is you have to "yank" the wheel every now and then to prove you have your hands on. Requiring constant "yanking" wouldn't work, and there are often long stretches of roads on highways (where Autopilot is meant to work) that wouldn't require any movement of the wheel at all so they wouldn't know the difference between hands on or off unless you "yanked" it.
Every time I learn about how these cars are handling things I end up disappointed and concerned because the actual implementation is way less impressive and seemingly less foolproof than I - and I'm sure many other people - are assuming they are.
I would have never guessed "hands on wheel" is detected only if the driver yanks the wheel. I figured it was camera based or that the wheel had capacitive sensors.
Now compare this with Tesla's PR statement which makes it sound as though they have proof that the guy who was fatally steered into a barrier by their Autopilot had taken his hands off the wheel. They resort to this kind of sleazy, dishonest misrepresentation every time their software screws up.
The steering wheel detects torque. Generally if your hands are on the wheel, they exert enough torque through natural movements for the vehicle to detect them. In rare cases, where a road is straight and a driver's hands are perfectly balanced, the car will flash a warning, in effect telling the driver to provide some brief feedback. The slightest of forces on the wheel will quell the warning (much less force than is required to move the steering wheel).
Are you talking about vehicles in general or Teslas? I've had a Model X for about a year now and can say that just having hands on the wheel definitely does not generate enough torque to deter the warnings on Autopilot, it definitely requires a "yank".
Teslas. Maybe something changed but on my 2015 Model S I've never had to physically move the wheel in the slightest, just apply a little bit of pressure.
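Both of these reports are consistent with a torque-based scheme rather than capacitive sensing: a timer that is reset by any small sustained torque on the wheel, and a nag when the timer runs out. Here is a toy sketch of that mechanism with guessed thresholds; the real values apparently vary by model year and software version, which would explain the differing experiences above.

    class HandsOnMonitor:
        TORQUE_THRESHOLD_NM = 0.3   # guess: tiny counter-pressure is enough
        NAG_AFTER_S = 30.0          # guess: reportedly varies with speed and road

        def __init__(self) -> None:
            self.seconds_without_torque = 0.0

        def update(self, measured_torque_nm: float, dt_s: float) -> str:
            if abs(measured_torque_nm) >= self.TORQUE_THRESHOLD_NM:
                self.seconds_without_torque = 0.0
                return "ok"
            self.seconds_without_torque += dt_s
            if self.seconds_without_torque >= self.NAG_AFTER_S:
                return "flash warning"  # escalates to audible alerts if ignored
            return "ok"

The important point for the liability argument in this thread: a scheme like this can only ever prove the absence of torque, not the absence of hands.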
Cadillac Super Cruise is far superior to Tesla's autopilot and is a true Level 3 system. It does active attention monitoring of the kind you're expecting and uses GPS fences to only activate where GM has detailed maps.
Still very new and only on a single model so far, but it's at least the right direction.
I also don't hold the wheel hard enough; I always get the jiggle-the-wheel warning about every minute, as that's the only way for them to detect hands. Sometimes, if you're focused on watching cars in other lanes, it's easy to miss the slight flashing warning it starts with. So there is no evidence Walter wasn't paying attention just based on that, as most AP users get those warnings.
I see your point on calling it "Autopilot", but if you think about the application of the term in terms of aviation, I think it makes more sense. Autopilot in planes won't land the plane for you. It basically will just keep you at a certain elevation and direction.
I think that Tesla's Autopilot function is comparable to planes. That's probably how they will justify it. However there might be a legitimate point in that people don't have that kind of understanding of what "Autopilot" means.
Aviation autopilots handle flying duties in a giant expanse of open sky. It's a way, way easier job to do than navigating a highway from behind a steering wheel.
So even if the "capabilities" may be comparable between the two systems, the amount of supervision they require is not, so I don't think they should use the term. "Lane-keeping cruise control" may not be nearly as sexy, but it doesn't establish unrealistic expectations which get people killed.
> Autopilot in planes won't land the plane for you.
For unmanned aircraft, autoland is a safety feature. After loss of link, an autopilot might be configured to land at a designated location, rather than just circling until it runs out of fuel and drops out of the sky. Though admittedly, crashing in a safe place is less serious when the vehicle has no people in it.
The Tesla autopilot needs much closer supervision than any of the autopilots I've worked with. That's more a result of its domain than of its sophistication, but it still makes me very uncomfortable with the name.
Autoland is also a safety feature for manned aircraft because it allows you to land in bad visibility or avoid accidents like the one at SFO a couple of years ago.
No, did you read the accident report which talked about the pilots being unable to land the plane because they were used to the computer doing most of the work?
Autopilots can and do land planes; the first autopilot landing was decades ago! They simply don't as a matter of practice so that pilots can maintain their skills during landing, which is the most crucial and dangerous part of the flight.
Additionally...Autopilot in planes is hands off. The pilots look at the gauges but they're not actively maintaining their grip on the controls.
No, I'm damning Tesla because they sell the vehicle as self-driving when it cannot drive itself, and moreover "require" the driver to maintain superficial alertness without actually enforcing alertness in a reasonable manner like all of their major competitors do.
First Cat III autoland aircraft was the Shorts Belfast freighter. Sud Caravelle and HS Trident airliners started at lower categories and worked up to III.
>According to the family, Mr. Huang was well aware that Autopilot was not perfect and, specifically, he told them it was not reliable in that exact location, yet he nonetheless engaged Autopilot at that location. The crash happened on a clear day with several hundred feet of visibility ahead, which means that the only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, despite the car providing multiple warnings to do so.
>The fundamental premise of both moral and legal liability is a broken promise, and there was none here. Tesla is extremely clear that Autopilot requires the driver to be alert and have hands on the wheel. This reminder is made every single time Autopilot is engaged. If the system detects that hands are not on, it provides visual and auditory alerts. This happened several times on Mr. Huang's drive that day.
>We empathize with Mr. Huang's family, who are understandably facing loss and grief, but the false impression that Autopilot is unsafe will cause harm to others on the road. NHTSA found that even the early version of Tesla Autopilot resulted in 40% fewer crashes and it has improved substantially since then. The reason that other families are not on TV is because their loved ones are still alive.
I'm normally an automation booster, but this is the wrong PR take for sure. I mean, to take this at face value it sounds like Tesla is saying its autopilot doesn't actually work and that if you don't continue to drive your car you're going to die. Maybe that will work in court, but it's not going to sell a lot of cars.
I mean... clearly (hopefully) this isn't the attitude being exhibited inside the company. We all know there's a big strike team or whatnot that's been assembled to figure out exactly what happened, and how. We all know that it's probably a preventable failure. So that's what we need to see, Elon. Not this excuse making.
> it sounds like Tesla is saying its autopilot doesn't actually work and that if you don't continue to drive your car you're going to die
No, it doesn't just "sound like", it's actually
> extremely clear that Autopilot requires the driver to be alert and have hands on the wheel. This reminder is made every single time Autopilot is engaged. If the system detects that hands are not on, it provides visual and auditory alerts.
> Autopilot requires the driver to be alert and have hands on the wheel
Clearly that's not the case, because if it "required" it the car would cease to function if you took your hands off the wheel. Instead, the Tesla continues to drive itself for some time and then after a few seconds warns you, but that's about it. If you want to see what a real requirement looks like, go explore any of the other car makers that have rolled out similar systems at this point. Those really, actually do require "the driver to be alert and have hands on the wheel."
Has everyone forgotten Tesla's marketing from when they first rolled this poorly implemented system out?
This response from Tesla comes across as cold and shows no empathy to the victim's family. "We are very sorry for the family's loss" sounds so hollow after reading the rest of the statement. Does Tesla need to give such a strong public statement against the victim and his family? What is the point? Tesla is euphemistically saying, "the driver was stupid. Sorry, but not really." This statement is really hurtful for the family and friends affected by this tragedy. Grow some heart, Tesla. I am probably asking too much from a faceless corporation.
> The crash happened on a clear day with several hundred feet of visibility ahead, which means that the only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, despite the car providing multiple warnings to do so.
This is very misleading, he might have seen the barrier well in time but the car could have steered towards it in the last second, like it does in several of the videos demonstrating this issue.
In the videos from others, the autopilot begins to move with the lane, but then crosses over into the barrier as it approaches. There's no way to anticipate that this is going to happen by simply looking ahead of you. There are no warning signs that something strange is about to occur. By the time you've processed that something is happening, it's well underway.
If you were to simulate this, I wonder how many people would be able to properly react? I think this would be an interesting experiment; sit people in driving simulators and have them drive around for 30 minutes or something and randomly introduce this. Even in the context of an experiment where people are likely to be more attentive due to the novelty of the experience, I'd expect to see quite slow reactions with fairly high frequency.
I'm under the impression it was established that Mobileye's "superior" approach of hand-coding every possible situation would not cut it for full self-driving; they eschew deep learning in favor of having hundreds of people manually annotating images to uncover corner cases, plus a high reliance on maps, and were not happy with the level of automation Tesla wanted to push the system into.
Can you blame them for not being happy with that? Why put "superior" in quotes if they were doing a better job?
It sounds like Mobileye knows what it takes to make self-driving work given present technology, and Tesla didn't believe them and thought they were doing something so simple that their job could be automated.
Every AI system that works and makes important decisions has corner cases that are designed by humans. Turns out that computer programmers do a job that matters!
[1] https://www.youtube.com/watch?v=-2ml6sjk_8c