If this theory turns out to be correct then Tesla is in deep trouble because this would be a very elementary mistake. The system should have known that the lanes split at this point, noticed that the distance between what it thought was the diamond lane marker and the right lane line (which was clearly visible) was wrong, and at least sounded an alarm, if not actively braked until it had a better solution.
This is actually the most serious aspect of all of these crashes: the system does not seem to be aware when it is getting things wrong. I dubbed this property "cognizant failure" in my 1991 Ph.D. thesis on autonomous driving, but no one seems to have adopted it. It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed. Tesla seems to have done neither.
This is a very good point: just like a human driver should slow down if they can't observe the road ahead well enough, an AI should slow down when it's not confident enough of its surroundings. This is probably very difficult to do, and I'm skeptical about your claim that an AI can be engineered to always detect its own failures. Further, I naively believe that Tesla is already doing a lot to detect conflicting inputs and other kinds of failures. Maybe being careful enough would prevent autopilot from working at all, so compromises have been made?
I probably don't understand AIs well enough to understand how they can be engineered to (almost) always recognize their own failures. But if a simple explanation exists, I'd love to hear it.
Basically it's a matter of having multiple redundant sensors and sounding the alarm if they don't all agree on what is going on, and also checking if what they think is going on is in the realm of reasonable possibility based on some a priori model (e.g. if suddenly all of your sensors tell you that you are 100 miles from where you were one second ago, that's probably a mistake even if all the sensors agree). That's a serious oversimplification, but that's the gist.
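To make that concrete, here is a toy sketch of the two checks (made-up sensors, thresholds, and units; nobody's actual code):

```python
# Cross-check redundant position estimates against each other, and sanity-check
# the fused result against a simple a priori motion model.

def sensors_agree(estimates, tolerance_m=2.0):
    """True if all position estimates fall within tolerance of their mean."""
    mean = sum(estimates) / len(estimates)
    return all(abs(e - mean) <= tolerance_m for e in estimates)

def physically_plausible(prev_pos_m, new_pos_m, dt_s, max_speed_mps=60.0):
    """Reject jumps the vehicle could not physically have made."""
    return abs(new_pos_m - prev_pos_m) <= max_speed_mps * dt_s

def fused_position(estimates, prev_pos_m, dt_s):
    if not sensors_agree(estimates):
        raise RuntimeError("sensor disagreement: alert the driver, degrade gracefully")
    new_pos = sum(estimates) / len(estimates)
    if not physically_plausible(prev_pos_m, new_pos, dt_s):
        raise RuntimeError("implausible jump: alert the driver, degrade gracefully")
    return new_pos
```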
But it is more complicated than that when you're talking about algorithms and complex systems. If you had three of the same exact system, they'd likely all make the same mistake and thus you'd gain no safety improvement (except actual malfunctions, not logical limitations).
I would like to see auto-braking taken out of autopilot completely, so if autopilot drives you into a wall at least the brakes slow you.
I'm not an expert...but it seems like for airplanes there are standards that manufacturers need to abide by for avoidance systems...which would mean testing and certification by an independent association before it's deployed. With manufacturers doing OTA updates...there's really no guarantee it's been tested...you'd have to trust your life to the QA process of each manufacturer. Not only the one of the car you're driving...but the car next to you!
Regardless, it should be trivial to have different behavior for a moving object that enters the road versus a rigid object that has not been observed moving.
On top of that, a Kalman filter innately tracks the uncertainty of its combined estimate. So you can simply look at the Kalman uncertainty covariance and decide if it is too large for the speed you are going.
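For illustration, a toy 1-D version of that gating (the filter itself is textbook; the speed-dependent threshold and all the numbers are invented):

```python
import numpy as np

# Constant-velocity Kalman filter for lateral position. The point is the last
# function: the filter carries its own covariance P, so a planner can refuse
# to rely on the estimate when the uncertainty is too large for the speed.

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # we only measure lateral position
Q = np.eye(2) * 0.01                    # process noise
R = np.array([[0.5]])                   # measurement noise

x = np.zeros((2, 1))                    # state estimate
P = np.eye(2)                           # estimate covariance

def kalman_step(z):
    """One predict/update cycle for a lateral-position measurement z (metres)."""
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q
    y = np.array([[z]]) - H @ x         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

def too_uncertain(P, speed_mps, base_tolerance_m=0.3):
    """Made-up rule: tolerate less positional uncertainty at higher speed."""
    allowed_std = base_tolerance_m * (10.0 / max(speed_mps, 10.0))
    return np.sqrt(P[0, 0]) > allowed_std
```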
I really wonder if Tesla is doing that...
Sorta. There are ways to extract kinds of uncertainty and confidence from NNs: for example, Gal's dropout trick where you train with dropout and then at runtime you use an 'ensemble' of multiple dropout-ed versions of your model, and the set of predictions gives a quasi-Bayesian posterior distribution for the predictions. Small NNs can be trained directly via HMC, and there are arguments that constant-learning-rate SGD 'really' implements Bayesian inference and an ensemble of checkpoints yields an approximation of the posterior, etc. You can also train RL NNs which have an action for shortcutting computation and kicking the problem out to an oracle in exchange for a penalty, which trains them to specialize and 'know what they don't know' so they choose to call the oracle when they're insufficiently sure (this can be done for computational savings if the main NN is a small fast simple one and the oracle is a much bigger slower NN, or for safety if you imagine the oracle is a human or some fallback mechanism like halting).
I have some cites on these sorts of things in https://www.gwern.net/Tool-AI and you could also look at the relevant tags https://www.reddit.com/r/reinforcementlearning/search?q=flai... and https://www.reddit.com/r/reinforcementlearning/search?q=flai...
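For the dropout trick specifically, a minimal generic sketch (placeholder network and numbers, plain PyTorch, not tied to any system discussed here):

```python
import torch
import torch.nn as nn

# MC-dropout: keep dropout active at inference and treat repeated stochastic
# forward passes as an approximate posterior over the prediction.

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()                                   # keep dropout stochastic
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)      # predictive mean and spread

mean, spread = mc_dropout_predict(model, torch.randn(1, 16))
# A large spread is a cheap signal that the net is out of its depth and the
# system should fall back (alert the driver, slow down, call the oracle, etc.).
```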
In particular, it's possible to learn the variance of the return using TD-methods with the same computational complexity as learning the expected value (the value function).
See  for how to do it via the squared TD-error, or  for how to estimate it via the second moment of the return, and my own notes (soon to be published and expanded for my thesis) here .
It turns out that identifying states with high variance is a great way of locating model error-- most of the real-world environments are fairly deterministic, so states with high variance tend to be "aliased" combinations of different states with wildly different outcomes.
You can use this to improve your agent via either allocating more representation power to those states to differentiate between very similar ones, or have your agent account for variance when choosing its policy.
For example, Tesla could identify when variance spikes in its model and trigger an alert to the user that they may need to take over.
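To make that concrete, here is a toy tabular sketch of one route to it (a second-moment TD update in the style of Sobel's result; the papers above use related but more refined estimators, and the environment here is invented):

```python
import random

# Learn the value V(s) and the second moment M(s) of the return with TD
# updates of the same per-step cost, then read off Var(s) = M(s) - V(s)^2.
# States with large variance are candidates for "the model is aliasing very
# different situations here".

gamma, alpha, n_states = 0.95, 0.05, 5
V = [0.0] * n_states
M = [0.0] * n_states

def env_step(s):
    """Hypothetical environment: a small random walk with noisy reward."""
    s_next = (s + random.choice([-1, 1])) % n_states
    r = random.gauss(1.0 if s_next == 0 else 0.0, 0.1)
    return s_next, r

s = 0
for _ in range(50_000):
    s_next, r = env_step(s)
    # Standard TD(0) update for the value function.
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    # TD-style update for the second moment of the return:
    #   M(s) ~ E[ r^2 + 2*gamma*r*V(s') + gamma^2 * M(s') ]
    M[s] += alpha * (r * r + 2 * gamma * r * V[s_next]
                     + gamma * gamma * M[s_next] - M[s])
    s = s_next

variance = [m - v * v for m, v in zip(M, V)]
print(variance)   # flag states whose variance is anomalously high
```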
Additionally, there's work by Bellemare  for estimating the distribution of the return, which allows for all sorts of statistical techniques for quantifying confidence, risk, or uncertainty.
Older ideas are http://mlg.eng.cam.ac.uk/yarin/blog_2248.html or http://papers.nips.cc/paper/7219-simple-and-scalable-predict...
Basically, Bayesian neural networks can model confidence but are not practical in current real-world scenarios. Thus, lots of methods rely on approximating Bayesian inference.
However, in practice, this usually is just not a good indicator of confidence. NNs are notoriously overconfident.
I like that term. When I was involved in the medical instrumentation field, we had a similar concern: it was possible for an instrument to produce a measurement, e.g., the concentration of HIV in blood serum, that was completely incorrect, but plausible since it's within the expected clinical range. This is the worst-case scenario: a result that is wrong, but looks OK. This could lead to the patient receiving the wrong treatment or no treatment at all.
As much as possible, we had to be able to detect when we produced an incorrect measurement.
This meant that all the steps during the analyte's processing were monitored. If one of those steps went out of its expected range we could use that knowledge to inform the final result. So the clinician would get a test report that basically says, "here's the HIV level, but the incubation temperature went slightly out of range when I was processing it, so use caution interpreting it."
Like most software, the "happy path" was easy. The bulk of the work we did on those instruments was oriented towards the goal of detecting when something went wrong and either recovering in a clinically safe manner or refusing to continue.
With all the research into safety-critical systems over decades, it's hard to see how Tesla could not be aware of the standard practices.
there is no "slightly out of range". Its either in the range or outside. Valid or invalid, when it comes to critical tests like these.
if temperature deviation is outside of acceptable deviation range then thats a system fault and result should have been considered invalid.
Back in the days there was much less regulatory oversight and products like that could slip through the cracks, Resulting to deaths, missed diagnosis etc etc.
The same with Teslas AP: its either 100% confident in the situation or not. If the car is not confident in whats happening - it should shut down. If that happens too often then the AP feature should be removed / completely disabled.
How many more people have to get into accidents? I know, if Musk's mom (knock on wood) was a victim of this feature then things would be taken more seriously.
You do realize that I gave a much simplified view of the situation as this is a web forum discussing a related subject, not an actual design review of the instrument, right?
To any process I can set multiple "acceptable ranges" depending on what I want to accomplish. There can be a "reject" range, "ok, but warning" range, "perfect, no problem" range, or a "machine must be broken" range
Everything is context dependent; nothing is absolute.
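For example, something like this, with invented thresholds (a real instrument would take these from its validated spec):

```python
def classify_incubation_temp(temp_c):
    """Map a monitored process value to one of several 'acceptable ranges'."""
    if not 20.0 <= temp_c <= 60.0:
        return "machine_broken"     # hardware/sensor fault: refuse to report at all
    if 36.5 <= temp_c <= 37.5:
        return "ok"                 # perfect, no caveat
    if 35.0 <= temp_c <= 39.0:
        return "ok_with_warning"    # report the result, but flag it for the clinician
    return "reject"                 # result invalid, rerun the test
```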
That is an extremely surprising result. How is that possible? Are you really claiming that any control system can be engineered to detect that it has failed in any possible manner? What's an example of an actual real-world system like that?
No, of course not. But it is possible to reduce the probability of non-cognizant failure to arbitrarily low levels -- at the expense of cost and the possibility of having a system that is too conservative to do anything useful.
Also: the lane markings are confusing but GPS and inertial readings should have clearly indicated that that is not a lane. If two systems give conflicting data some kind of alarm should be given to the driver.
GPS is not accurate enough to reliably pinpoint your location to a particular lane. Even WAAS (https://en.wikipedia.org/wiki/Wide_Area_Augmentation_System) can have up to 25 feet of error. Basic GPS is less accurate than that.
In fact, it's possible that GPS error was a contributing factor here but there's no way to know that from the video.
I have seen self-driving test cars in Silicon Valley (frequently, especially in the last year or so) using these types of systems, so they are at least being tested. I've also heard discussion of putting RTK base stations on cell-phone towers to provide wide area coverage, but I'm not sure if much effort has been put into that. I do know vast areas of the agricultural midwest are covered in RTK networks -- its used heavily for auto-steering control in agriculture.
Now the cars are relying on cameras and lidar to figure things out. What happened to putting sensors in the road to broadcast what/where the road is? Is that out of the question now because of cost?
So even if the error can be large in practice it often works very well.
It would be very interesting to see the input data streams these self driving systems rely on in case of errors, and in the case of accidents there might even be a legal reason to obtain them (for liability purposes).
Well, yeah, it's not like autopilot is driving cars into barriers every day. But I don't think "often works very well" (and then it kills you) is good enough for most people. It's certainly not good enough for me.
> Previous autopilot versions didn't do this. 10.4 and 12 do this 90% of the time. I didn't post it until I got .12 and it did it a few times in case it was a bug in 10.4.
> Also, start paying attention as you drive. Centering yourself in the lane is not always a good idea. You'll run into parked cars all over, and lots of times before there is an obstacle the lane gets wider. In general, my feel is that you're better off to hug one of the markings.
It kind of does sound like that autopilot is steering vehicles into barriers everyday, or it would be if drivers weren't being extra vigilant: https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil...
Seems like we need a study to assess whether the kind of protections AP offers outweighs the extra challenges it adds to a daily drive.
Because if I interpret the video correctly either the car was partially in the wrong lane before the near accident or it was about to steer it into a situation like that. And that means it is time to hand off to the driver because the autopilot has either already made an error or is about to make one.
Either way, the autopilot should have realized the situation was beyond its abilities, and when GPS and camera inputs conflict (which I assume was the case) that's an excellent moment for a hand-off.
A good example of this I think is when driving in tunnels, there is no GPS info there, but the navigation shows you following the (curved) road.
Also, the better ones have inertial backup & flux gate for when GPS is unavailable. Inertial backup could also help to detect the signatures of lane changes and turns.
It's possible your highways are newer or less congested than ours?
I don't use Google Maps for driving but I know plenty of people that do, and having been in the car with them on occasion has made me even happier with my dedicated navigation system. The voice prompts are super confusing, with a lot of irrelevant information (and usually totally mispronounced), and now you are supplying even more data points suggesting that the actual navigation itself isn't as good as it could be.
I suspect this is a by-product of doing a lot of stuff rather than just doing one thing and doing that well.
Sometimes it'll shout "The speed limit is 35 miles an hour!" when I go under a bridge carrying a 35MPH road, even though the road I'm on has a limit of 65.
1. Base stations at known locations broadcast corrections for factors like the effects of the ionosphere and troposphere, and inaccurate satellite ephemerides. If you have multiple base stations, you can interpolate between them (Trimble VRS Now  does this, for example)
2. Precise measurements by combining code phase (~300m wavelength) and carrier phase on two frequencies (~20cm wavelength), plus the beat frequency of the two frequencies (~80cm)
With these combined, and good visibility of the sky, centimetre-level accuracy is possible.
Autonomous vehicles will often combine this with inertial measurement  which offers a faster data rate, and works in tunnels.
Many people also expect autonomous vehicles to also track lane markings, in combination with a detailed map to say whether lane markings are accurate.
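As a rough sketch of how such sources could be cross-checked (all names and numbers here are invented for illustration):

```python
# Compare the lane implied by the fused (RTK + IMU + map) position with the
# lane implied by the camera's lane-marking tracker; if they disagree by more
# than the expected localization error, treat the situation as unresolved
# rather than trusting either source blindly.

def lanes_consistent(map_offset_m, camera_offset_m,
                     lane_width_m=3.7, localization_error_m=0.2):
    map_lane = int(map_offset_m // lane_width_m)
    camera_lane = int(camera_offset_m // lane_width_m)
    if map_lane != camera_lane:
        return False
    return abs(map_offset_m - camera_offset_m) <= localization_error_m + lane_width_m / 2

if not lanes_consistent(map_offset_m=5.1, camera_offset_m=1.2):
    print("lane estimates conflict: hand off to the driver")
```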
My phone has GPS accurate to within 1 foot. I use it for mining location all the time. It uses differential GPS plus Inertial sensors.
 See https://www.ri.cmu.edu/pub_files/2009/6/aimag2009_urmson.pdf at 21-23.
This is not a field where 'move fast and break stuff' is a mantra worth anything at all, the stuff that is broken are people's lives and as such you'd expect a far more conservative attitude, akin to what we see in Aerospace and medical equipment software development practices.
But instead barely tested software gets released via over-the-air updates and it's anybody's guess what you'll be running the next time you turn the key.
I agree with you that the software should have been able to detect that something was wrong either way, either it was already halfway in the wrong lane or it was heading to be halfway in the wrong lane, a situation that should not have passed without the car at least alerting the driver.
And from what we have seen in the video it just gave one inferred situation priority over the rest, and that particular one happened to be dead wrong in its interpretation of the model.
One of my concerns with self-driving systems is that they don't have a good model of the world they are operating in. Yes, sure, they build up 3-D models using multiple sensor types, and react to that accordingly, oftentimes with faster reflexes than a human.
However, consider this situation with pedestrians A and B, while driving down a street, approaching an intersection.
Pedestrian A, on the edge of the sidewalk, is faced away from the street, looking at his phone. The chances of him suddenly stepping out in front of my car are exactly 0.0%.
Pedestrian B, on the edge of the sidewalk, is leaning slightly towards the street, and is looking anxiously down the street (the wrong way). The chances of B suddenly stepping out is high, and the chances of B not seeing my car are very high (because B is being stupid/careless, or else is used to UK traffic patterns).
I, as a human, will react to those situations differently, preemptively slowing down if I recognize a situation like with Pedestrian B. Autonomous driving systems will treat both situations exactly the same.
From everything I've read about so far, current driving systems have no way of distinguishing between those two specific situations, or dealing with the myriad of other similar issues that can occur on our streets.
The cynic in me thinks self-driving will require the same, the rules and conservative practices will only come after a bunch of highly publicized, avoidable failures that leave their own trail of blood.
That's what makes this example bizarre to me. I had thought that AutoPilot's ideal situation was having a moving vehicle in front of it. For example, AP does not have the ability to react to traffic lights, but can kind of hack it by following the pace of the vehicle ahead of it (assuming the vehicle doesn't run a red light):
I’m surprised self-driving systems don’t do this (do they?). One or more vehicles in front of you that have successfully navigated an area you are now entering is a powerful data point. I’m not saying follow a car off a cliff, but one would think the behavior of followed vehicles should be somehow fused into the car’s pathfinding.
Following this principle would probably result in a lot of people angry that their autopilot had gotten them a speeding ticket.
Aside from GPS's accuracy, as mentioned in other replies, also take into account the navigation system's map material. The individual lanes are probably not individually tracked on the map, but rather a single track per road with metadata specifying the number of lanes amongst other features. So even if GPS had provided very accurate position readings, the map source material might not match that level of detail.
This seems to me to be a clearly incorrect (and self-contradictory) claim. It entirely depends on your definition of failure.
Your analysis seems fine. The big problem is that the "autonomous" driver is using one signal (where are the lines on the edge of the road?) to the near exclusion of all others (is there a large stationary solid object in front of me?)
Maybe Tesla should have hired George Hotz (sp?) if only to write a lightweight sanity-check system that could argue with the main system about whether it was working.
I guess that means I was able to pull the wool over the eyes of all five members of my thesis committee because none of them thought so.
> It entirely depends on your definition of failure.
Well, yeah, of course. So? There is some subset of states of the world that you label "failure". The guarantee is not that the system never enters one of those states, the guarantee is that if the system enters one of those states it never (well, so extremely rarely that it's "never" for all practical purposes) fails to recognize that it has done so. Why do you find that so implausible?
Stated so, that is plausible. However, "it is possible to engineer a system that never fails to detect that it has failed" is not; I claim that any subset of states which is amenable to this level of detectability will exclude some other states that any normal person would also consider to be "failure".
They seem the same to me. In fact, in 27 years you are the first person to voice this concern.
> I claim that any subset of states which is amenable to this level of detectability will exclude some other states that any normal person would also consider to be "failure".
Could be, but that would be a publishable result. So... publish it and then we can discuss.
That is one possible failure mode. But you're right, it's not the only one.
There is an extensive literature on how to detect and correct sensor errors resulting from all kinds of different failure modes.
If the system has failed but doesn't report it, that's a missed detection (MD). OTOH, if you say you've failed when you haven't, that's a false alarm (FA).
In general, there is a trade-off between MD and FA. You can drive MD probability to near-zero, but typically at a cost in FA.
Again in general, you can’t claim that you can drive MD to zero (other than by also driving FA probability to one) without knowing more about the specifics of the problem. Here, that would be the sensors, etc.
In particular, for systems with noisy, continuous data inputs and continuous state spaces -- not even considering real-world messiness -- I would be surprised if you could drive MD probability to zero without a very high cost in FA probability.
As a humbling example, you cannot do this even for detection of whether a signal is present in Gaussian noise. (I.e., PD = 1 is only achievable with PFA = 1!) PD = 1 is a very high standard in any probabilistic system.
Discrete-input systems can behave differently.
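For anyone who wants to see the Gaussian example numerically, here is a small worked version (a simple threshold detector, signal amplitude 1, unit-variance noise, all assumed purely for illustration):

```python
from scipy.stats import norm

mu, sigma = 1.0, 1.0   # signal amplitude and noise standard deviation

def pd_pfa(threshold):
    pd = 1.0 - norm.cdf(threshold, loc=mu, scale=sigma)    # detect when signal is present
    pfa = 1.0 - norm.cdf(threshold, loc=0.0, scale=sigma)  # false alarm when it is absent
    return pd, pfa

for t in (2.0, 1.0, 0.0, -2.0, -5.0):
    pd, pfa = pd_pfa(t)
    print(f"threshold={t:+.1f}  PD={pd:.4f}  PFA={pfa:.4f}")
# PD only approaches 1 as the threshold drops so low that PFA approaches 1 too.
```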
Right, that whole claim about engineering a system that always knows when it’s not working sounds rock solid to me. After all, we can build a human, right?
Not yet. But there's no reason to believe we won't be able to eventually.
i.e: Always report "I'm wrong."
> my 1991 PH.D. thesis on autonomous driving
Going to hide in a corner and stay quiet on HN until I forget about this comment!
Just in case you're interested:
and the associated conference paper:
Most of the work was done on a Mac II with 8MB (that's megabytes, not gigabytes) of RAM.
The progress that has been made since those days boggles my mind.
Im going to go hide in the corner as well.
My phone is more powerful today than the most powerful desktop back in 1995. And what do I use my phone for? Check email and read HN.
Actually, we are. Back in the day, I would have given long odds against seeing autonomous cars on the road in my lifetime. Notwithstanding the odd mishap, today's autonomous vehicles actually work much better than I would have thought possible.
> And what do i use my phone for? check email and read HN.
What's wrong with that? Add wikipedia to that list and put a slightly different spin on it: today you can carry around the equivalent of an entire library in your pocket. That seems like progress to me. When I was a kid, you had to actually (imagine this) PHYSICALLY GO TO A LIBRARY in order to do any kind of research. If you didn't live close to a good library you were SOL. Today anyone can access more high-quality information than they can possibly consume from anywhere for a few hundred bucks. It's a revolution bigger than the invention of the printing press.
It's true that society doesn't seem to be making very effective use of this new super power. But that doesn't make it any less of a super power.
You are probably correct. In comparison with the past we are moving fast; it just feels like we could do so much more with what we have.
> It's true that society doesn't seem to be making very effective use of this new super power. But that doesn't make it any less of a super power.
This is the sad part. But on the flip side, this means there are so many opportunities for those who have the desire and drive to make things better.
The same is happening with chatbots - more and more businesses think they can put a chatbot on their site and assume it'll handle everything when in fact it's meant to assist you rather than take over things for you.
Apparently, we're not, as Waymo has shown.
> The term 'Autopilot' gives off the wrong idea that the driver can sit back and relax while the car brings you from point A to point B safely.
Agreed, the "but that's not how autopilots work in airplanes" canned response is irrelevant.
The only things Waymo has shown so far are a bunch of marketing videos and a few tightly controlled press rides.
Does Waymo drive at high speeds? All the videos I have seen are at low speeds.
Can you point me to a website or store where I can buy my fully autonomous waymo car? I doubt waymo is even at par with tesla, considering tesla is actually selling cars. Waymo is just vaporware at this point.
Not sure what you are saying; do self-driving cars only exist once you can buy one personally?
Let me know where I can buy an M1 Abrams; if you can't, I'll claim they don't exist!
But actually, you can get a ride in a Waymo self-driving car already, with nobody in the driver's seat.
No, but somebody should be able to buy one. Waymo is just vapor right now. It's an experiment, not a product.
I personally know people who have the option to hail a self driving car in Phoenix, AZ, just as they would an Uber.
But I guess they are just my imagination, right?
Although serious, this is working as designed. Level II self-driving doesn't have automation that makes guarantees about recognizing scenarios it cannot handle. At level III, the driver can safely close their eyes until the car sounds the alarm that it needs emergency help. Audi plans to release a level III car next year, so we'll see how liability for accidents actually shakes out.
Unfortunately level II is probably the most dangerous level of automation, even with drivers who understand the limitations of the system. They still need constant vigilance to notice failures like this and react quickly enough to avoid collisions. Just imagine poorly marked or abrupt exit ramps or intersections where drivers hardly have enough time to react even when they're the ones driving. Add in the delay needed to notice the computer is steering you into a wall and to yank the wheel, and some of these accident-prone areas become accident-likely areas.
I'll go further and say that level II is worse than useless. It's "autonomy theatre" to borrow a phrase from the security world. It gives the appearance that the car is safely driving itself when in fact it is not, but this isn't evident until you crash.
You can't possibly determine what hardware is required for Level 4 until you have proven a hardware/software combination, so that's just empty puffery, but even if it was true...
> So you’d think their level II would still be smart enough to detect these problems to some degree.
No, because the smartness of their system is about the software. They could have hardware sufficient to support Level 4 autonomy and better-than-human AI while running software that only supports a less-than-adequate version of Level 2 autonomy. What their hardware could support (even if it was knowable) gives you no basis for belief about what their current software on it supports, except that it won't exceed the limits set by the hardware.
I've been skeptical of autonomous driving since it started to become a possibility. I spend a fair amount of time making corrections while driving that have nothing to do with what's happening in my immediate vicinity. If it can't handle an obstruction in the road, how will it handle sensing a collision down the road, or a deer that was sensed by the side of the road, or even just backing away from an aggressive driver in a large grouping of vehicles in front of you? I've had to slow down and/or move off on to the shoulder on two lane country roads because someone mistimed it when passing a vehicle. I don't have much faith in how this system would handle that. Not to mention handling an actual emergency failure like a tire blowing out.
I'm sure they will get there eventually, but it looks like they have conquered all the low-hanging fruit and somehow think that's enough. I'm now officially adding "staying away from vehicles studded with cameras and sensors" to my list of driving rules.
That sounds highly dubious. Here's a hypothetical scenario:
there's a very drunk person on the sidewalk. As a human driver, you know he might act unexpectedly so you slow down and steer to the left. This will help you avoid a deadly collision as the person stumbles into the road.
Now let's take a self-driving car in the same scenario, where, since it doesn't have general intelligence, it fails to distinguish the drunk person from a normal pedestrian and keeps going at the same speed and distance from the sidewalk as normal. How, in this scenario, does the vehicle 100% know that it has failed (like you say is always possible)?
"Failure" must be defined with respect to a particular model. If you're driving in the United States, you're probably not worried about bazookas, and being hit by one is not a failure, it's just shit happening, which it sometimes does. (By way of contrast, if you're driving in Kabul then you may very well be concerned with bazookas.) Whether or not you want to worry about drunk pedestrians and avoid them at all possible costs is a design decision. But if you really want to, you can (at the possible cost of having to drive very, very slowly).
But no reasonable person could deny that avoiding collisions with stationary obstacles is a requirement for any reasonable autonomous driving system.
Let's not pretend that anticipating potentially dangerous behaviour from subtle clues is some once-in-a-lifetime corner case. People do this all the time when driving -- be it a drunk guy on the sidewalk, a small kid a tad bit too unstable when riding a bike by the roadside, kids playing catch next to the road and not paying attention, etc. Understanding these situations is crucial for self-driving if we want to beat the 1 fatality per 100M miles that we have with human drivers. For such scenarios, please explain how the AI can always know when it has failed to anticipate a problem that a normal human driver can.
You raised this scenario:
> there's a very drunk person on the sidewalk. As a human driver, you know he might act unexpectedly so you slow down...
I was just responding to that.
> Let's not pretend that anticipating potentially dangerous behaviour from subtle clues is some once-in-a-lifetime corner case.
I never said it was. All I said was that "failure must be defined with respect to some model." If you really want to anticipate every contingency then you have to take into account some very unlikely possibilities, like bazookas or (to choose a slightly more plausible example) having someone hiding behind the parked car that you are driving past and jumping out just at the wrong moment.
The kind of "failure" that I'm talking about is not a failure to anticipate all possible contingencies, but a failure to act correctly given your design goals and the information you have at your disposal. Hitting someone who jumps out at you from behind a parked car, or failing to avoid a bazooka attack, may or may not be a failure depending on your design criteria. But the situation in the OP video was not a corner case. Steering into a static barrier at freeway speeds is just very clearly not the right answer under any reasonable design criteria for an autonomous vehicle.
My claim is simply that given a set of design criteria, you cannot in general build a system that never fails according to those criteria, but you can build a system that, if it fails, knows that it has failed. I further claim that this is useful because you can then put a layer on top of this failure-detection mechanism that can recover from point failures, and so increase the overall system reliability. If you really want to know the details, go read the thesis or the paper.
These are not particularly deep or revolutionary claims. If you think they are, then you haven't understood them. These are really just codifications of some engineering common-sense. Back in 1991, applying this common sense to autonomous robots was new. In 2018 it should be standard practice, but apparently it's not.
That’s exactly my experience as a driver:
You learn to anticipate that the ‘autopilot’ will disengage or otherwise fail. I have been good enough at this, obviously, but it is sometimes frightening how close you get to a dangerous situation …
Odd how your "armchair diagnosis" matches perfectly with the top ranked comment in the reddit thread that was posted 1 day ago.
courtlandre, 815 points, 1 day ago:
It sees the white line on the left, the white line on the right and thinks its a big lane. Its trying to center the car in the lane.
>My guess is that the autopilot mistook these lines for the diamond lane marker and steered towards them thinking it was centering itself in the lane.
Wouldn't the system doing the checking be considered an autonomous system...that could also fail?
If you are referring to assigning confidences/probabilities to decisions, this is standard in ML.
There's a little more to it than that but yeah, pretty much.
> this is standard in ML.
Yes, I know. But not, apparently, standard in embedded autonomous systems.
It might be fixable in software. I'm a bit annoyed at Tesla for its over-reliance on painted lines. They fade, they can be covered, they can be outdated...
That old fail video of a SDV stuck inside a circle is not funny anymore.
Off topic: I never understood why this video was discussed that much, especially in order to blame SDVs. It's an example of pointless road markings and a perfectly behaving vehicle. It's like driving into a one way street that turns out to have no exit. The driver can't be blamed.
We'd expect at least 'cycle' detection ;)
> Yep, works for 6 months with zero issues. Then one Friday night you get an update. Everything works that weekend, and on your way to work on Monday. Then, 18 minutes into your commute home, it drives straight at a barrier at 60 MPH.
> It's important to remember that it's not like you got the update 5 minutes before this happened. Even worse, you may not know you got an update if you are in a multi-driver household and the other driver installed the update.
Very glad the driver had 100% of their attention on the road at that moment.
Remember Tesla's first press release on the crash and how it mentioned "Tesla owners have driven this same stretch of highway with Autopilot engaged roughly 85,000 times"? I imagine the number that have driven it successfully in that lane since that update was rolled out sometime in mid-March is rather smaller...
So, now regarding that previous crash: did that driver (or should I say occupant) get lulled into a false sense of security because he'd been through there many times in the past and it worked until that update happened and then it suddenly didn't?
Now that these other videos are showing up, and further details (the update) are emerging, that PR should bite them in the ass hard enough that they decide never to handle an incident that way again.
I don't want this to kill Tesla -- I sincerely hope they make their production goals and become a major automobile manufacturer -- but their handling of this should hurt them.
I'm also curious if any of the people at Tesla saying, "we call it autopilot even though we expect the human driver to be 100% attentive at all times" have studied any of the history of dead man's vigilance devices.
Tesla PR knows nothing about what updates the engineering team did. At least some people in Tesla PR probably don't even know the cars update their software regularly.
It's bad practice for them to speak out of turn, but I can absolutely see the PR team not having a good grasp of what really indicates safety and their job is to just put out the best numbers possible.
Their job is not to put out the best numbers possible, their job is to inform. Most likely they were more worried about the effect of their statement on their stock price than they were worried about public safety.
If they do put out numbers (such as 85K trips past that point) then it is clear that they have access to engineering data with a very high resolution, it would be very bad if they only used that access to search for numbers that bolster their image.
No the entire point of a PR department is to propagandize the public on behalf of the corporation.
Yes, and 100% is critical. That required pretty quick and smooth reactions. The car started to follow the right-hand lane, then suddenly re-centered on the barrier. The driver had about a second to see that and correct. That's just the sort of situation where even a slightly inattentive driver might panic and overcorrect, which could cause a spinout if the roads were wet or icy or cause a crash with parallel traffic.
People shouldn't use it.
Great measures have been taken in the past to ensure that other planets aren't contaminated before we've had a chance to understand their existing biology. Elon Musk is the kid who comes and knocks over some other kids' block tower for his own amusement.
Huh? Isn't it better that way, we might spread [simple] life to other planets. I'm completely serious, I don't understand what the concern is?
Look at the big picture. We risk denying these planets the ability to evolve in isolation. That is a decision that cannot be reversed. Do we really want to do that? Maybe so, but it ought to be a conversation. Great measures have been taken to reduce the risk of contaminating other planets with Earth based lifeforms. Then, this belligerent guy comes along and disregards all that.
You realize I was talking about your "contamination" (e.g. bacterial organisms, microscopic lifeforms, etc)?
> We risk denying these planets the ability to evolve in isolation.
Seems like a fairly limited risk. It is more likely these planets have no form of life at all and that we'd seed their only life (if it could sustain there).
> Great measures have been taken to reduce the risk of contaminating other planets with Earth based lifeforms.
You're conflating craft that were designed to land on other planets and look for life on them, with a space craft.
And if there had been a crash, Tesla probably would have said that the driver had an unobstructed view of the barrier for several seconds before impact (conveniently omitting at what point AP started to swerve).
He was likely prepared for it, which kinda makes it even scarier in a way. An inattentive driver would have totally botched this.
2) control over updates
3) everything to be completely open
Or to put it differently, I don't want to be driving on the same road as you and your rooted self-driving car. You can be a great sysadmin/coder, the Tesla guys may be too, but both of you changing random stuff without any communication with each other... I've seen enough servers.
Keep your rooting to your phones.
While, in principle, users could organize their own communal verification programs for open software, that does not happen in practice, even when the software is widely used and needs to be safe (or at least secure - OpenSSL...)
If you're supposed to keep your hands on the wheel, and given videos like this that show you really need to keep your hands on the wheel and pay attention, is automatic steering really that big of a deal?
Cruise control, now, that really is useful because it automates a trivial chore (maintaining a steady speed) and will do it well enough to improve your gas mileage. The main failure condition is "car keeps driving at full speed towards an obstacle" but an automatic emergency brake feature (again, reasonably straightforward, and standard in many new cars) can mitigate that pretty well.
It seems to me that autopilot falls into an uncanny valley, where it's not simple enough to be a reliable automation or a useful safety improvement, but it also isn't smart enough to reduce the need for an alert human driver. So what's the point?
If you're excited about self-driving cars because they'll reduce accidents, as many people here claim, what you should really be excited about is the more mundane incremental improvements like pedestrian airbags. Those will actually save lives right now.
My VW has Adaptive Cruise Control (ACC) which does the normal cruise control, plus basic distance keeping (with an alarm if the closing speed changes dangerously).
My parents' Subarus have ACC plus lane-keeping. The car will only do so much correction, plus alarms.
These seem like much better solutions, given the current state of driving "AI".
It can help keep me in a lane, either by beeping or nudging the steering wheel if I drift - I only turn on active lane assist on long highway stretches, it is more of a security blanket than anything else - just in case I space out for a sec, here's another layer of defense.
Adaptive cruise is also great. Between the two, in long road trips I can put the car in a travel lane and just go. I still have to attend to my surroundings, but it is a lot easier to focus on that when I'm not worried about accidentally creeping up to 90 mph because suddenly there's nobody in front of me.
I also had the auto brake feature activate once when a car in front of me stopped unexpectedly. I was in the middle of braking, but the brake pedal depressed further and there was a loud alarm beep.
None of these are autopilot, and honestly I wouldn't want autopilot until it is legit reliable. Instead, these are defense in depth features. The computer helps prevent certain mistakes as I make them, but never is in primary control of the vehicle.
Lane keep assist is very subtle; I describe it like wind blowing on the side of the car. You can trivially overpower it, and it will beep if you exit the lane (without the blinker on) with or without lane keep assist active.
The whole package of safety features is wonderful and impressive for something starting at around $22K.
Anyway, my point is these things are common now, and while getting the full package might cost a premium, the more important parts (like FEB) are certainly becoming fairly standard.
We have a 2017 Subaru with all of these and it's really excellent.
Lanekeeping is actually very nice. The system in Teslas and most (all?) others do require you to keep your hands on the wheel and pay attention, but having to constantly manually steer the car is much more fatiguing than you would think. It's really annoying to drive cars without lanekeeping now.
I don't find steering onerous, but it requires just enough attention to keep you alert.
(I've never used a lanekeeping system, though. Maybe I'd like it, I dunno)
Plus, lanekeeping can be really annoying if you rely on it all the time. The car sometimes tends to drift back and forth in a lane a little between corrections, instead of just going straight. So: you're still steering. It's just that you're steering less and have a defense in depth against loss of focus, and can drive without exhausting yourself.
Naturally, I tried the autopilot feature on a highway but I wasn't too impressed. There is a major "bump" (negative G-forces), and the car tried to swerve into the other lane (this was within the first 40 minutes of my drive), and that made me distrust the auto steering feature.
As I think you're onto, I DO feel that auto pilot is the future, but we're not there yet - let's improve the existing life saving features (and not disconnect them, looking at you, Uber).
Trying to maintain a steady speed and conserve gas is a fun challenge, but a bit pointless, because a) it's a distraction from more important tasks, and b) the computer can usually do it better than I can.
I did a 6 hours trip with a friend continuously putting his foot up and down on the throttle, and it was the most gruelling car trip I've had I think.
When I asked he said it was "to keep control on the car". I'm a patient friend.
Not entirely true. I've been to German driving school and my father really pushed the concept of smooth throttle control on me as a beginning driver. Be smooth always. However, as I was able to afford higher and higher performance vehicles my take on the smooth maintaining of speed without noticeable accelerator input has changed. While I'll drive my SUV very smoothly, when I drive my manual six speed German roadster, my style is entirely different. Because of the weight, size, and HP, it's really not possible to drive it smoothly outside of tossing it into 6th which isn't terribly "fun". In the gears where it's "fun", it's a very much "on" or "off" throttle experience simply because of the HP produced by the engine.
a) you don't want to drive them smoothly because that's no fun, even if just for the exhaust note, &
b) It's actually difficult to drive them smoothly because of the torque/HP.
It doesn't mean you are a bad driver, it simply means you've adjusted your driving style to match the car you are driving.
In her defense, she learned to drive in a very different environment (dense Chicago surface street traffic).
Anti-lock brakes were the first such system. Before, you had to pump the brakes in an emergency, a practice that was difficult to execute even without the shock of an impending accident. That system alone certainly saves thousands of lives every year.
That being said, it's always been while stopping for a light, not in a case when I had to swerve, so its never prevented a crash for me, but it really is comforting to have it.
The star usecase is to set cruise to be very near the speed limit, such that after acceleration events like overtaking, you coast back to highway speed.
It's a low-effort way of ensuring that one will be compliant with speed laws most of the time, yet maintaining a steady pace. I too prefer to be 'actively engaged' while driving, but in my opinion the reduction of constant acceleration input is a welcome convenience.
'Adaptive' cruise control, on the other hand, feels to me like riding on a tenuous rollercoaster. It's intended to let cruise control be usable in packed traffic, but it requires one to cede a lot of trust and control to the machine in ways that physically make me uneasy -- and it doesn't help that the exact behavior differs between models and manufacturers, so that trust doesn't automatically transplant into a different car.
Part of the problem is, again, with terminology. Ever since Adaptive Cruise Control proliferated as a term, it drew a parallel to classic 'Cruise Control', which I think is a mistake. Classic Cruise Control is a fire-and-forget, non-safety feature that's simple to reason about: do I want the car to gun it at a constant 70 mph, or no? You can run a quick mental judgement call and decide whether to engage it or leave it off.
'Adaptive' cruise control is fundamentally about maintaining following distance, i.e. tailgating restriction. It's a safety feature. It's a button to "proceed forward not exceeding target speed", but if it gets disengaged for any reason then you can easily overrun into the car ahead. It's a safety feature with the UI/UX of a non-safety feature, so it's always opt-in (!) -- which is simply horrific.
All safety features in vehicles should be either always-on, or opt-out, and NEVER opt-in. On a modern car, tailgate restriction should be on by default, with a button unlocking the car into free-throttle mode. Braking -- alone -- should never disable a safety feature.
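For illustration, the core of the following-distance behavior can be sketched like this (made-up gains and limits, nothing like a production controller):

```python
def acc_command(gap_m, ego_speed_mps, lead_speed_mps,
                headway_s=2.0, min_gap_m=5.0, kp=0.3, kd=0.8,
                max_accel=1.5, max_decel=-4.0):
    """Commanded acceleration from gap error and closing speed (toy values)."""
    target_gap = min_gap_m + headway_s * ego_speed_mps
    gap_error = gap_m - target_gap                  # positive: too far back, speed up
    closing_speed = lead_speed_mps - ego_speed_mps  # negative: we are gaining on the lead
    accel = kp * gap_error + kd * closing_speed
    return max(max_decel, min(max_accel, accel))

# Too close and closing at 5 m/s: expect a firm braking command.
print(acc_command(gap_m=20.0, ego_speed_mps=30.0, lead_speed_mps=25.0))
```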
On ceding trust - at least with the ICC system in Nissans, it's A) far back, which gives more reaction time B) quite easy to tell when it sees the car in front vs when it doesn't. You're ceding trust, sure, but you can also verify easily.
Your 'tailgate restriction' bit is effectively a more aggressive form of collision warning/forward emergency braking, and FCW+FEB as far as I know is available on all or at least most vehicles with ACC/ICC. Unfortunately, the realities of city driving mean that 'maintaining distance' is a goal in some cases (i.e. just got cut off, tight merges, etc.) rather than an absolute directive - frankly, something trying to force me to a certain distance away from the car in front of me would be more aggravating than useful.
Are there cars that have ACC and don't have AEB (Automatic Emergency Braking)?
The Subaru system will not let you let go of the wheel for more than 15 seconds (after that it will instantly disengage), so it's more to save your effort of continual minor corrections. It also disengages as soon as it's confused in the slightest (faded lines, lines at an angle etc)
Guess what computers excel at? Driving consistently on consistent highways.
The Tesla Autopilot is supposed to be the always aware and paying attention portion on these cases where a human driver would be very likely to start texting or dozing off. Now, it's not fully autonomous and may well decide it can't handle a situation (or apparently try to drive you into a barrier to see if you're still awake...). In this case the human driver who is somewhat zoned out needs to take control instantly and correct the situation, until they can safely re-engage autopilot (or pull over and make sure they're still alive, etc).
> Guess what computers excel at? Driving consistently on consistent highways.
Possibly on paper. In reality, as of right now, computers are clearly far from excelling at this specific task.
Now, I realise people in old-fashioned non-autopilot cars can and do doze off, and that's very dangerous. But it's not clear to me how the autopilot improves that situation. Relying on the autopilot actually encourages you to doze off.
We already have simple remedies like "pull over if you feel tired" and "never ever pick up your phone while driving (or you'll lose your licence)"
The first being that people are being put in harm's way by either a false sense of trust invoked by the name or the mixed messages from Tesla.
The second is that if the first is left unchecked, Tesla could single-handedly set back autonomous driving for everyone by souring public and government opinion.
It needs a new name that aligns better with what it can do. It could be a safety system which gently corrects the driver and takes over in an obvious emergency, internal or external. As it stands now it is just dangerous.
I don't know what the answer is but it feels like GM's super cruise does a more adequate job of acknowledging the realistic limits of the technology and explicitly white lists roads where the technology is available for use.
I personally think that without some sort of sensors or beacons in the road, autonomous driving via camera and LIDAR sensors is never going to be good enough to achieve level 5 autonomous operation.
It's the sophisticated behaviors necessary to safely drive through that world model that are the issue.
The success of emergency braking systems (which aren't advertised as "driving" assist) is pretty good evidence that the sensors can serve well as input to safe behaviors.
And what about beacon maintenance? Seems like most cities have a hard enough time keeping up with pot holes, lines, etc. as is.
Following the beacons safely would be a vastly easier problem than trying to completely replace a human driver in all situations, but it would still give you about 90% of the benefits.
The first thing I thought when I read beacons: Hackers are going to have a field day with them. Add malicious beacons to streets and cars will drive off road at high speeds.
I assume they’re more expensive to install than just painting a few lines, but they’re very robust and long-lasting, and they’re fantastic for human drivers. It’s not a stretch to imagine something similarly useful for computerized cars, that links to a standard road database.
Hacking is a risk, sure. I envisage you’d lock it down by having a cryptographically signed master map; if the observed beacons diverge from the map, the autopilot system would refuse to proceed. (OK, I guess that allows a DoS attack at least.)
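Roughly what I have in mind (a toy sketch; a real system would use public-key signatures rather than a shared HMAC key, and all names here are invented):

```python
import hashlib
import hmac

def map_is_authentic(map_bytes, signature, key):
    """Verify the master beacon map before trusting it at all."""
    expected = hmac.new(key, map_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def beacons_match_map(observed, master_map, tolerance_m=1.0):
    """Refuse to proceed if observed beacons diverge from the signed map."""
    for beacon_id, observed_pos in observed.items():
        expected_pos = master_map.get(beacon_id)
        if expected_pos is None:
            return False                    # unknown beacon: possibly malicious
        if abs(observed_pos - expected_pos) > tolerance_m:
            return False                    # beacon not where the map says it is
    return True
```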
Yes, you still need to pay attention. It is hard for me to believe that any Autopilot user doesn't know this because you learn it by experience almost immediately. People text and drive all the time in manual cars, but for some reason when they do it in a Tesla, we declare that Autopilot lulled them into it.
While I agree that you need to be ready to grab the wheel or hit the brake on short notice, I disagree about what that means. There is a big difference between having to be ready to do those things and actually having to do them every few seconds. This difference wasn't intuitive to me, but in practice I've found it to be extremely mentally liberating and true beyond question.
I've also found that Autopilot makes it easier to take in all of your surroundings and drive defensively against things you otherwise wouldn't see. One thing that has struck me, as I now see more drivers than just the one in front of me, is how many people are distracted while driving. If I glance continually at an arbitrary driver on my way to work, there is a greater likelihood than not that within 10 seconds they'll look at a phone. That is terrifying to me, but it is also good information to have as I drive -- I am now able to drive defensively against drivers around me, not just the one in front of me.
I've also found I'm more able to think or listen to music or podcasts than I was before Autopilot. I could never get much out of technical audiobooks, for instance, while driving manually. But Autopilot has changed that. I hesitate to say this because I worry that I will give the impression that I am focusing less on the road, but I don't think that's what's happening. My mental abilities feel much higher when I am not constantly turning a wheel or adjusting a pedal. I'm listening to music, podcasts, or audiobooks either way -- I just get a lot more out of them with Autopilot. I think it goes back to the lack of mental fatigue.
Whatever you make of my experience, I urge you to try it on a long drive if you ever get an opportunity. I have put over 60k miles on Autopilot, I have taken over on demand hundreds if not thousands of times, and I've never had a close call that was Autopilot's fault.
That's a very unfortunate choice of words.
> I have taken over on demand hundreds if not thousands of times, and I've never had a close call that was Autopilot's fault.
Maybe you are an exceptional driver, to be able to be vigilant at all times even when AP is active.
Even so, it would probably only take one instance where you weren't in time to make you change your mind on all this (assuming you'd survive), so until then it is a very literal case of survivorship bias.
Which makes me wonder how that person that died the other week felt about their autopilot right up to the fatal crash.
Solid lol, but I stand by it!
> Maybe you are an exceptional driver, to be able to be vigilant at all times even when AP is active.
I've had driving moments I'm not proud of. But it's because I was being dumb, not because Autopilot made me do it.
I think the relevant question is: does Autopilot make people less attentive? I have no data on this. My personal experience is that most drivers are already inattentive, and Autopilot (1) makes it easier to be attentive (for a driver who chooses to be); and (2) is better than the alternative in cases where a driver is already inattentive.
> Even so, it would probably only take one instance where you weren't in time to change your mind on all this (assuming you'd survive) so until then it is a very literal case of survivors bias.
I hope I'd be more thoughtful and independent than that, but maybe you're right. But I don't think my view in the face of a terrible accident should be what drives policy, either.
On the issue of survivorship bias, I would add that "Man Doesn't Text While Driving, Resulting in No Accident" isn't a headline that you're likely to read. I see a much greater quantity of bias in the failure cases that are reported and discussed than in the survivorship stories told (as evidenced by the proportions of comments and opinions here, vs in a user community like TMC or /r/teslamotors). I posted my experience because I think it brings more to this comment thread in particular than my survivorship bias detracts from it.
> Which makes me wonder how that person that died the other week felt about their autopilot right up to the fatal crash.
There are large bodies of knowledge about this gained from studies regarding trains and airline pilots and the conclusion seems to be uniformly that it is much harder to suddenly jump into an already problematic situation than it is to deal with that situation when you were engaged all along.
> On the issue of survivorship bias, I would add that "Man Doesn't Text While Driving, Resulting in No Accident" isn't a headline that you're likely to read.
It's one of the reasons I don't have a smartphone, I consider them distraction machines.
BUT...I feel like autopilot should only be for traffic jams on highways. It's downright dangerous the way it forces the driver to disengage. The adaptive cruise control is much better as at least you still have to pay attention but the car manages the throttle and following distances efficiently.
Adaptive cruise control also helps; if the system detects a car in front slowing, it'll slow at a roughly equivalent pace to avoid a collision.
This self-driving car craze would be in a very different place if Silicon Valley had halfway decent mass transit...