This makes it sound like the problem is that there are edge cases that aren’t handled. In reality, the test shows that Autopilot is only designed to handle the special case of lane keeping and following the car in front of you, but is too simplistic to actually handle the general case of driving the car. It’s a very advanced cruise control, not a self-driving system that still has kinks to work out.
If you think it’s a PR/marketing problem and the system operates as it should, then stop calling it Autopilot and stop advertising cars with full autopilot capabilities*
* “Theoretically the hardware should be enough to enable full auto pilot some time in the future. At the moment use this other Autopilot, which is not really autopilot. Btw we made this really cool autopilot video, but it’s not the Autopilot you have in your car.”
If you think Tesla Autopilot needs another name, stop operating on mystical beliefs like Reiki and the magical self-flying autopilot, and learn what any aircraft autopilot is actually capable of.
Autoland just needs the pilots to make a land / no-land decision at the specified altitude. Can they see the runway lights within the specified distance, and is the crosswind component within the acceptable range? If they choose to continue the landing, then they don't touch the controls.
Imagine you're coming in to a wet runway with 20kt crosswind and one engine inoperative. Sounds like a nightmare; but in that case autoland will do a better and more consistent landing than most pilots. Can we say that about car self-driving systems yet?
Note all the crosschecking between the human operators and the two autopilots. Mentour speaks almost exclusively about flying the Boeing 737, a very reliable aircraft. Even on this aircraft the human pilots are ready to take over from the autopilot at a moment's notice.
Here is an auto land using Cat III ILS in an Airbus 320, with the humans invigilating at all times, ready to go around if anything appears off nominal https://www.youtube.com/watch?v=mSNE3SmYA-8
So to a naive observer, it may appear that you are correct: planes can land themselves without intervention. But that is only true when the airport, aircraft, and conditions are qualified for Cat III approaches (i.e. ideal conditions) and remain that way throughout the procedure. If the airport, aircraft, or environment is not qualified for a Cat III approach, you don't get this ultra-precise auto land capability. If anything goes wrong with any equipment during the approach, you don't get the auto land capability either (and indeed a go-around is required).
If anything is outside spec (e.g. aircraft manoeuvring on the taxiway can distort the ILS beam, which is why Low Visibility Procedures restrict ground movements), things will go wrong and the pilot needs to intervene.
At no point is it safe for a pilot to let the plane do the flying and sit back to have a cup of tea. The captain and first officer have their assigned tasks: be ready to land the plane and be ready to go around (respectively). Intervention is required from time to time, so they need to be ready to intervene every time.
Cars do not have the option of "going around". Every action has the potential to cause fatality within seconds. There's even less room for inattentiveness when using a car on autopilot.
The OP suggested that autopilot should be able to drive the car without any human intervention, but they have confused autopilot with autonomous driving, probably because of the "auto" prefix, yet they've never been confused about an automobile requiring constant human attention.
It's clear that Tesla has a PR/marketing problem, when the population is happy to drive an auto(mobile) with an auto(matic), but can't differentiate between autopilot and autonomous.
Or, as in the Thatcham Research video, stuff that's hidden by a leading vehicle. I always make an effort to know what vehicles far ahead are doing. If necessary, I shift off lane occasionally to check for hidden stuff. And when I'm behind large vehicles, I don't follow so closely that I can't see what's ahead.
But if you're going to do all that, why use Autopilot?
I don't understand the value of Autopilot at all. What is the value in sitting there concentrating on driving while not even slightly turning the wheel to stay in the lane?
First, I think doing nothing but still being expected to pay attention is more annoying than having a role in driving. But even without that, what's so fucking hard about moving the wheel a little bit?
Sounds like my Grandma. For her safety, and everyone else's, she doesn't drive any more.
What's the point of having a self-driving system that doesn't do all that? The entire point of self-driving systems is that they're safer because they can see farther, anticipate more, and react faster than any human. If they can't, what's the point? Why even offer Autopilot?
Autopilot when used correctly can significantly reduce fatigue because you are not focussed on the minutiae of keeping pace with the car in front of you and keeping the car in the assigned lane. This frees your attention up for higher level operation such as observing the traffic further ahead, scanning for hazards, and enjoying the scenery to your left and right.
Discerning stationary hazards from stationary scenery is not part of Autopilot’s feature set right now, that is your job as the operator.
When these things pop up, I'd love to know the answers to some of these questions.
1. What is the rate of accidents for Tesla drivers with and without Autopilot on similar driving terrain?
2. How 'correctable' are these flaws in Autopilot? I doubt someone is programming in what to do in every situation. Is the solution to twiddle with some weights and pray your ML pipeline spits out a better version? I'm guessing the true answer lies in the middle, but I don't have the technical background to know for sure.
3. I see a lot of references to Waymo leading the pack of would-be self driving vehicles. What is it that Waymo has done that makes their self-driving tech so much better?
4. Are we okay with accepting the costs of self driving cars given their potential in the future? Most major transport-related revolutions have come with a significant human cost at the outset as the early adopters accept a large amount of risk to push the technology forward. With time and usage, things stabilize and become safer. Flying was quite dangerous early on, too.
1. Your question seems mostly answered in this recent article https://www.wired.com/story/tesla-autopilot-safety-statistic...
2. I don't know much about the architecture / models used by Tesla, and I don't have open access to the accident details, so it is hard for most people to answer your question publicly, based on facts. Sorry!
3. Again, I would say only a very limited number of people have a clear view of the current status. Some claim it is hard for Tesla to design and manufacture cars, solar panels, and batteries while also designing an autopilot, all at the same time. It requires a very broad diversity of skills that may be hard to steer and build from the ground up.
4. Fair enough, point taken. My personal complaint is not about the technology in general but about Tesla's marketing. I hope the first planes did not claim to be safer. The Titanic did, though :).
Autopilot is not autonomous. It needs human supervision, and the fact that some simplistic “autonomy scale” includes “requires human supervision” is the largest part of this problem.
If it requires any supervision, it is not autonomous.
Any discussion of autonomy that even mentions Autopilot is automatically null and void. I would consider it the autonomy equivalent of Godwin’s law: in any discussion about autonomy the chance of comparing the safety of an autonomous system to Autopilot or Cruise Control approaches 1 and the first person to mention the comparison automatically loses the argument.
Tesla claims that the hardware in the Model 3 is capable of self-driving once the software catches up. That allows them to charge 3K USD for the "FSD package".
These are two very different approaches to the same problem.
> When a car is moving at low speeds, slamming on the brakes isn't a big risk. A car traveling at 20mph can afford to wait until an object is quite close before slamming on the brakes, making unnecessary stops unlikely. Short stopping distances also mean that a car slamming on the brakes at 20mph is unlikely to get rear-ended.
> But the calculation changes for a car traveling at 70mph. In this case, preventing a crash requires slamming on the brakes while the car is still far away from a potential obstacle. That makes it more likely that the car will misunderstand the situation—for example, wrongly interpreting an object that's merely near the road as being in the road. Sudden braking at high speed can startle the driver, leading to erratic driving behavior. And it also creates a danger that the car behind won't stop in time, leading to a rear-end collision.
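A rough back-of-envelope makes the quoted trade-off concrete. The ~0.7 g braking figure and 1.5 s detection/decision delay below are illustrative assumptions of mine, not numbers from Tesla or the article:

    # Stopping distance ~= reaction distance + v^2 / (2 * deceleration),
    # assuming ~0.7 g braking on dry pavement and a 1.5 s detection delay.
    G = 9.81                       # m/s^2
    DECEL = 0.7 * G                # assumed braking deceleration
    REACTION_S = 1.5               # assumed detection/decision delay

    def stopping_distance_m(speed_mph):
        v = speed_mph * 0.44704    # mph -> m/s
        return v * REACTION_S + v * v / (2 * DECEL)

    for mph in (20, 70):
        d = stopping_distance_m(mph)
        print(f"{mph} mph: ~{d:.0f} m ({d * 3.281:.0f} ft)")

    # Prints roughly: 20 mph: ~19 m (63 ft); 70 mph: ~118 m (388 ft).
    # Braking distance grows with v^2, so at highway speed the system has to
    # classify objects correctly while they are still far away, and a false
    # positive means hard braking with traffic closing fast from behind.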
I suspect _that_ might _also_ "startle the driver"...
So of course the only sensible thing to do is release a product that happily accelerates into the back of a bright red emergency vehicle with flashing lights. Or into a stationary concrete barrier. Or into the side of a massive tractor-trailer.
I mean, who wants to bother with false positives?
"You deliver a project 85% correct at university, you get a high distinction. You deliver a project 15% wrong at work, you get fired."
Who are these people?
In the Thatcham Research video, however, swerving would have been far more workable than braking. But then, we don't want too much autonomous lane hopping, either.
Humans have one huge advantage over computers: we pattern-match instinctually, building mental models of our surroundings.
Tesla is taking advantage of the first fact (humans manage to drive using vision alone, without LIDAR), but by avoiding LIDAR they're falling victim to the second (human visual pattern-matching far exceeds anything computers can do today).
We humans don't have LIDAR, so it makes sense that a car without LIDAR should be able to match us, but we have brains and visual systems that far exceed anything available today or at any point in the near future when it comes to pattern-matching.
40,100 vehicle deaths in 2017 in the US, most of them the result of distracted driving. It's a shame that Tesla's system performs worst at exactly the point that humans need the most help.
LIDAR is higher resolution than radar, but often slower - counterintuitive to be sure, but such is the case - i.e. this large volume of data can lag reality by some small delta, and at 60MPH, that can add up.
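As a rough illustration of how that lag adds up at speed (the latency figures below are assumptions for the sake of arithmetic, not measured numbers for any particular LIDAR unit):

    # Distance travelled while the sensing/processing pipeline is stale.
    MPS_PER_MPH = 0.44704
    speed = 60 * MPS_PER_MPH                 # 60 mph ~= 26.8 m/s (~88 ft/s)
    for latency_ms in (50, 100, 250):
        drift = speed * latency_ms / 1000.0
        print(f"{latency_ms} ms lag -> ~{drift:.1f} m ({drift * 3.281:.0f} ft) of stale data")
    # ~1.3 m at 50 ms, ~2.7 m at 100 ms, ~6.7 m at 250 ms -- a few metres
    # matter when deciding whether a stopped object is actually in your lane.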
Uber does use LIDAR and RADAR; however, the informed external commentary I've seen (admittedly guesswork based on the NTSB reports) seems to indicate it was a fusion error behind the AZ fatality. Sensor fusion is a beastly complex problem on top of miserable calibration exercises :-(
More importantly, AI only exists in the minds of marketers, media, and the improperly informed. This is all just pattern recognition - we're a ways off from having a system understand that if an occluding obstacle moves out of the way, the newly exposed, previously unknown information needs to be checked as a high priority. I'm sure such checks will get hardwired into current systems; however, the system is then limited by whatever does get hardwired into it.
At the end of the day a visual system, no matter how sophisticated, is still only a visual system.
This seems to be the case -- if you watch the video, the Tesla is at a full stop about 10 feet after the car, implying it must have hit the brakes hard well beforehand.
In that case, I don't blame Tesla for crashing -- you are literally baiting it into the crash, like a bull with a flag. You make it follow one car, prevent it from changing lanes, and then have an obstacle parked in the middle of the lane.
I'm all for getting Tesla to be safer. The accident count has gotten way too high. But this seems to be a test that is rigged, and Tesla did brake.
This situation is pretty common on the road. And a lot of humans who aren't paying attention get caught by it.
But it's not impossible to avoid: you have to be watching ahead of the car in front of you. And if you can't see past the car in front of you, you gotta leave a lot more space so you can brake in time.
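A quick sanity check on how much more space that is (again assuming ~0.7 g braking and a 1.5 s reaction delay, purely as illustrative numbers):

    # Headway needed to stop from 70 mph if the car ahead swerves away and
    # exposes a stationary obstacle (illustrative assumptions only).
    v = 70 * 0.44704                          # m/s
    stop_dist = v * 1.5 + v * v / (2 * 0.7 * 9.81)
    print(f"~{stop_dist:.0f} m, i.e. ~{stop_dist / v:.1f} s of headway")
    # ~118 m, ~3.8 s -- roughly double the usual "2-second rule", which only
    # covers the case where the car ahead brakes rather than swerving clear.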
The test is only "rigged" in the way a unit test looking at a specific edge case is "rigged". Yeah, it's not something that happens every day, but there are very few human drivers who wouldn't have perfectly safely followed the Merc into the right-hand lane and easily missed that inflatable car. Most of us would have noticed the stationary car in front of the car we were following, and quite likely would have already changed lanes before the Merc did.
It's different takes edited together. Note the cameraman near the dummy car as the Tesla approaches (you can see him, and part of the video was shot by him).
> In that case, I don't blame Tesla for crashing
And in the case where it didn't brake? Would you blame Tesla then? Well... none of Tesla's driver-assistance features can handle that case; there's no way they would brake. The system will not brake for a stationary object (unless that object was previously moving).
If I did that, the police and my insurance would blame me, regardless of whether I was braking hard. I'd almost certainly end up with points on my licence, if I survived.
This is a very serious and consistent issue for Tesla's so-called "autopilot," as the article highlights with links to a variety of recent cases.
Problem is that the autopilot system makes it harder for the driver to pay attention.
The question is, does it reduce the amount of attention paid to a significant enough degree that it's dangerous? Consider subways -- most of them have automatic train control systems in place that could effectively drive the trains automatically. However, operators are still required to control the speed, largely because it keeps them engaged.
Give an operator nothing to do, and they won't pay attention -- this is basic human psychology (look up "Vigilance Tasks" if you're interested in further reading).
The drivers will instead get caught up in that antivaxxer Facebook debate on their phones.
Car                    Stopping distance (feet)
Model 3 (orig)         152
Model 3 (new)          133
Model X                127
Camry Hybrid           125
F-150 Lariat           119
Model S                118
Porsche Panamera GTS   110
Chrysler 300S          109
> In a recent test of various 2018 F-150s, Motor Trend recorded a 129-foot stopping distance for the 3.3-liter XL Supercab model, while an upscale Lariat trim made the stop in 10 fewer feet. For comparison, a Chrysler 300S tested by the same publication made the stop in 109 feet.