Another Tesla on autopilot steers towards a barrier (reddit.com)
433 points by walrus01 10 months ago | 465 comments



Here is my armchair diagnosis: right before the car veers towards the barrier it drives through a stretch of road where the only visible lane marker is on the left. Then the right lane marker comes into view at about the point where the lane starts to widen out for the lane split. The lines that will become, respectively, the right marker of the split-off left lane and the left marker of the split-off right lane are right next to the van in front of the Tesla, and at this point they resemble the diamond lane markings painted in the middle of the split lanes. My guess is that the autopilot mistook these lines for the diamond lane marker and steered towards them thinking it was centering itself in the lane.

If this theory turns out to be correct then Tesla is in deep trouble because this would be a very elementary mistake. The system should have known that the lanes split at this point, noticed that the distance between what it thought was the diamond lane marker and the right lane line (which was clearly visible) was wrong, and at least sounded an alarm, if not actively braked until it had a better solution.

This is actually the most serious aspect of all of these crashes: the system does not seem to be aware when it is getting things wrong. I dubbed this property "cognizant failure" in my 1991 Ph.D. thesis on autonomous driving, but no one seems to have adopted it. It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed. Tesla seems to have done neither.


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed.

This is a very good point: just like a human driver should slow down if they can't observe the road ahead well enough, an AI should slow down when it's not confident enough of its surroundings. This is probably very difficult to do, and I'm skeptical about your claim that an AI can be engineered to always detect its own failures. Further, I naively believe that Tesla is already doing a lot to detect conflicting inputs and other kinds of failures. Maybe being careful enough would prevent autopilot from working at all, so compromises have been made?

I probably don't understand AIs well enough to understand how they can be engineered to (almost) always recognize their own failures. But if a simple explanation exists, I'd love to hear it.


> if a simple explanation exists, I'd love to hear it

Basically it's a matter of having multiple redundant sensors and sounding the alarm if they don't all agree on what is going on, and also checking if what they think is going on is in the realm of reasonable possibility based on some a priori model (e.g. if suddenly all of your sensors tell you that you are 100 miles from where you were one second ago, that's probably a mistake even if all the sensors agree). That's a serious oversimplification, but that's the gist.
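To make that concrete, here's a toy sketch of both checks in Python (purely illustrative: the sensor names, the 2-of-3 voting rule, and the thresholds are made up, and a real system would fuse far richer state than a single 1-D position):

    def fuse_position(gps_m, camera_m, odometry_m, last_position_m,
                      dt_s, max_speed_mps=100.0, agreement_tol_m=2.0):
        """Return (estimate, ok). ok=False is a 'cognizant failure':
        the estimate should not be trusted and an alarm/handoff is needed."""
        readings = [gps_m, camera_m, odometry_m]

        # 1. Redundancy check: keep only readings that agree with at least
        #    one other sensor to within the tolerance.
        agreeing = [a for a in readings
                    if sum(abs(a - b) <= agreement_tol_m for b in readings) >= 2]
        if not agreeing:
            return last_position_m, False   # sensors disagree: flag failure

        estimate = sum(agreeing) / len(agreeing)

        # 2. A priori plausibility check: the car cannot teleport.
        if abs(estimate - last_position_m) > max_speed_mps * dt_s:
            return last_position_m, False   # the "100 miles in one second" case

        return estimate, True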


That's exactly how a lot of commercial aircraft systems work: they have three, and if two agree and one disagrees then that one is ignored.

But it is more complicated than that when you're talking about algorithms and complex systems. If you had three of the exact same system, they'd likely all make the same mistake and thus you'd gain no safety improvement (except against actual malfunctions, as opposed to logical limitations).

I would like to see auto-braking decoupled from autopilot completely, so that if autopilot drives you into a wall at least the brakes still slow you down.


https://en.wikipedia.org/wiki/Airborne_collision_avoidance_s...

I'm not an expert, but it seems like for airplanes there are standards that manufacturers need to abide by for avoidance systems, which would mean testing and certification by an independent association before they're deployed. With manufacturers doing OTA updates there's really no guarantee it's been tested; you'd have to trust your life to the QA process of each manufacturer. Not only the one of the car you're driving, but the car next to you!


This has been mentioned a lot of times lately (the auto-braking). When you think about it, you realize there are a million moments during normal driving where you have an obstacle ahead: other cars, lane dividers like these that you might _have_ to pass by closely or at least will have your car pointed at them for a few seconds, making tight curves on a walled road, high curbs, buildings that are on the edge of the road, etc. It's not that simple, and I bet a lot of this is already taken into account by the software.


My understanding is that auto-braking during autonomous driving doesn't normally react to stationary objects like barriers; otherwise it would brake for things like cardboard boxes, plastic bags, and other debris that often makes its way onto the road. Instead, the autonomous driving AI puts a lot of trust in its lane guidance.


In what world would you not want your car to brake for a cardboard box? You don’t know what is inside. It could be empty, or could be full of nails. Or a cat.

Regardless, it should be trivial to have a different behavior for a moving object that enters the road versus a rigid object that has not been observed moving.


Sometimes it's just a piece of paper or something similarly harmless that happens to be positioned on the road in such a way that it appears to be an obstacle.


If a system can't reliably differentiate between a flat object lying on the road, an object fluttering rapidly in the air, and a stationary object that is protruding from the road, it has no right to be in control of a car.


Will it run over an accident victim or an animal, or slow down and ask the human?


As far as I know things like the space shuttle had 3 different guidance computers developed by 3 independent teams. They were given the same requirements and came up with the hardware and software independently.


I'd say that Kalman filters are perfect for this-- they optimally blend estimates from multiple sources, including a model of the system dynamics (where you think you're going to be next based on where you were before, i.e. "I'm not going to suddenly jump 100 miles away"), and all the various sensor inputs ("GPS says I'm here, cameras say I'm here, wheel distance says I'm here").

On top of that, Kalman innately tracks the uncertainty of its combined estimate. So you can simply look at the Kalman uncertainty covariance and decide if it is too much for the speed you are going.
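As a rough illustration (this is not Tesla's actual system; the model, noise values, and the alert threshold below are invented), the covariance gating could look like:

    import numpy as np

    dt = 0.1
    F = np.array([[1, dt], [0, 1]])   # constant-velocity model: position, velocity
    H = np.array([[1, 0]])            # we only measure position
    Q = np.diag([0.01, 0.1])          # process noise (assumed)
    R = np.array([[4.0]])             # measurement noise, e.g. GPS variance (assumed)

    def kalman_step(x, P, z):
        # Predict: where the model says we should be, and how unsure we are.
        x, P = F @ x, F @ P @ F.T + Q
        # Update: blend in measurement z, weighted by relative uncertainty.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.array([[0.0], [20.0]]), np.eye(2)
    x, P = kalman_step(x, P, z=np.array([[2.1]]))
    if np.sqrt(P[0, 0]) > 1.5:        # threshold would scale with speed
        print("position uncertainty too high -- hand off to the driver")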

I really wonder if Tesla is doing that...


I feel that’s the problem. Techniques like SVM can provide reasonable definitions of confidence. Reinforcement Learning.. DNN... the mechanisms behind autonomous cars... are they able to do this??


> Reinforcement Learning.. DNN... the mechanisms behind autonomous cars... are they able to do this??

Sorta. There are ways to extract kinds of uncertainty and confidence from NNs: for example, Gal's dropout trick where you train with dropout and then at runtime you use an 'ensemble' of multiple dropout-ed versions of your model, and the set of predictions gives a quasi-Bayesian posterior distribution for the predictions. NNs can be trained directly via HMC for small NNs, and there are arguments that constant-learning-rate SGD 'really' implements Bayesian inference and an ensemble of checkpoints yields an approximation of the posterior, etc. You can also train RL NNs which have an action of shortcutting computation and kicking the problem out to an oracle in exchange for a penalty, which trains them to specialize and 'know what they don't know' so they choose to call the oracle when they're insufficiently sure (this can be done for computational savings if the main NN is a small fast simple one and the oracle is a much bigger slower NN, or for safety if you imagine the oracle is a human or some fallback mechanism like halting).
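For the dropout trick specifically, a minimal sketch looks something like this (the network size, dropout rate, sample count, and deferral threshold are arbitrary placeholders):

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
        nn.Linear(64, 1),
    )

    def predict_with_uncertainty(model, x, n_samples=30):
        model.train()                  # keep dropout active at inference time
        with torch.no_grad():
            samples = torch.stack([model(x) for _ in range(n_samples)])
        # Mean across dropout masks is the prediction; the spread is a
        # quasi-Bayesian measure of how unsure the model is.
        return samples.mean(dim=0), samples.std(dim=0)

    mean, std = predict_with_uncertainty(model, torch.randn(1, 16))
    if std.item() > 0.5:               # unsure: kick the problem to the oracle
        print("defer to fallback / human")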

I have some cites on these sorts of things in https://www.gwern.net/Tool-AI and you could also look at the relevant tags https://www.reddit.com/r/reinforcementlearning/search?q=flai... and https://www.reddit.com/r/reinforcementlearning/search?q=flai...


Yes.

In particular, it's possible to learn the variance of the return using TD-methods with the same computational complexity as learning the expected value (the value function). See [0] for how to do it via the squared TD-error, or [1] for how to estimate it via the second moment of the return, and my own notes (soon to be published and expanded for my thesis) here [2].

It turns out that identifying states with high variance is a great way of locating model error -- most real-world environments are fairly deterministic, so states with high variance tend to be "aliased" combinations of different states with wildly different outcomes. You can use this to improve your agent by either allocating more representation power to those states to differentiate between very similar ones, or having your agent account for variance when choosing its policy. For example, Tesla could identify when variance spikes in its model and trigger an alert to the user that they may need to take over.
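A toy tabular sketch of the squared-TD-error approach (in the spirit of [0]; the state space, step sizes, and alert threshold are all made up):

    import numpy as np

    n_states = 10
    V = np.zeros(n_states)   # value estimates (expected return)
    M = np.zeros(n_states)   # estimates of the variance of the return
    alpha, beta, gamma = 0.1, 0.1, 0.99

    def td_step(s, r, s_next, done):
        target = r + (0.0 if done else gamma * V[s_next])
        delta = target - V[s]              # ordinary TD error
        V[s] += alpha * delta
        # Second learner: the squared TD error acts as a meta-reward,
        # discounted by gamma**2; its value function tracks return variance.
        var_target = delta ** 2 + (0.0 if done else gamma ** 2 * M[s_next])
        M[s] += beta * (var_target - M[s])

    def should_alert(s, threshold=5.0):
        # High estimated variance flags aliased / poorly modelled states.
        return M[s] > threshold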

Additionally, there's work by Bellemare [3] for estimating the distribution of the return, which allows for all sorts of statistical techniques for quantifying confidence, risk, or uncertainty.

---

0. https://arxiv.org/abs/1801.08287

1. https://arxiv.org/abs/1607.00446

2. http://rl.ai/posts/fun-with-the-td-error-part-ii.html

3. https://arxiv.org/abs/1707.06887


Yes, there is an active branch of deep learning (Bayesian deep learning) trying to build uncertainty estimates into neural networks.

Older ideas are http://mlg.eng.cam.ac.uk/yarin/blog_2248.html or http://papers.nips.cc/paper/7219-simple-and-scalable-predict...

Basically, Bayesian neural networks are able to model confidence but are not practical in current real-world scenarios. Thus, lots of methods rely on approximating Bayesian inference.
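The second link above is the deep-ensembles approach, if I remember right; the gist is just a handful of independently initialised nets whose disagreement stands in for posterior uncertainty (sizes here are arbitrary):

    import torch
    import torch.nn as nn

    ensemble = [nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
                for _ in range(5)]       # each member gets its own random init

    def ensemble_predict(x):
        with torch.no_grad():
            preds = torch.stack([net(x) for net in ensemble])
        return preds.mean(dim=0), preds.std(dim=0)   # prediction, disagreement

    mean, disagreement = ensemble_predict(torch.randn(1, 16))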


not an expert, but aren't the output nodes thresholded to make a decision? How far you are from the threshold might be interpretable as confidence possibly?


For a single perceptron, not for the network as a whole.


Yep that's exactly what you would do.

However, in practice, this usually is just not a good indicator of confidence. NNs are notoriously overconfident.


I dubbed this property "cognizant failure"

I like that term. When I was involved in the medical instrumentation field, we had a similar concern: it was possible for an instrument to produce a measurement, e.g., the concentration of HIV in blood serum, that was completely incorrect, but plausible since it's within the expected clinical range. This is the worst-case scenario: a result that is wrong, but looks OK. This could lead to the patient receiving the wrong treatment or no treatment at all.

As much as possible, we had to be able to detect when we produced an incorrect measurement.

This meant that all the steps during the analyte's processing were monitored. If one of those steps went out of its expected range we could use that knowledge to inform the final result. So the clinician would get a test report that basically says, "here's the HIV level, but the incubation temperature went slightly out of range when I was processing it, so use caution interpreting it."

Like most software, the "happy path" was easy. The bulk of the work we did on those instruments was oriented towards the goal of detecting when something went wrong and either recovering in a clinically safe manner or refusing to continue.

With all the research into safety-critical systems over decades, it's hard to see how Tesla could not be aware of the standard practices.


"here's the HIV level, but the incubation temperature went slightly out of range when I was processing it, so use caution interpreting it."

There is no "slightly out of range". It's either in the range or outside. Valid or invalid, when it comes to critical tests like these.

If the temperature deviation is outside of the acceptable deviation range then that's a system fault and the result should have been considered invalid.

Back in the day there was much less regulatory oversight and products like that could slip through the cracks, resulting in deaths, missed diagnoses, etc.

The same goes for Tesla's AP: it's either 100% confident in the situation or not. If the car is not confident in what's happening, it should shut down. If that happens too often then the AP feature should be removed / completely disabled.

How many more people have to get into accidents? I know, if Musk's mom (knock on wood) were a victim of this feature then things would be taken more seriously.


It's so nice to see people critique products they know nothing about with absolutely no useful context!

You do realize that I gave a much simplified view of the situation as this is a web forum discussing a related subject, not an actual design review of the instrument, right?

For any process I can set multiple "acceptable ranges" depending on what I want to accomplish. There can be a "reject" range, an "ok, but warning" range, a "perfect, no problem" range, or a "machine must be broken" range.

Everything is context dependent; nothing is absolute.


> I like that term.

Thanks!


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed.

That is an extremely surprising result. How is that possible? Are you really claiming that any control system can be engineered to detect that it has failed in any possible manner? What's an example of an actual real-world system like that?


> Are you really claiming that any control system can be engineered to detect that is has failed in any possible manner?

No, of course not. But it is possible to reduce the probability of non-cognizant failure to arbitrarily low levels -- at the expense of cost and the possibility of having a system that is too conservative to do anything useful.


I totally agree with your analysis. What is more worrying is that even following the rest of the traffic would have solved the problem. If you are the only car doing something then there is a good chance you are doing something wrong.

Also: the lane markings are confusing but GPS and inertial readings should have clearly indicated that that is not a lane. If two systems give conflicting data some kind of alarm should be given to the driver.


> lane markings are confusing but GPS and inertial readings should have clearly indicated that that is not a lane

GPS is not accurate enough to reliably pinpoint your location to a particular lane. Even WAAS (https://en.wikipedia.org/wiki/Wide_Area_Augmentation_System) can have up to 25 feet of error. Basic GPS is less accurate than that.

In fact, it's possible that GPS error was a contributing factor here but there's no way to know that from the video.


RTK [0] systems definitely can have centimetre-level accuracy, though they require a fixed base station within 10-15 miles or so to broadcast corrections. It would also require roads to be mapped to a high level of accuracy.

I have seen self-driving test cars in Silicon Valley (frequently, especially in the last year or so) using these types of systems, so they are at least being tested. I've also heard discussion of putting RTK base stations on cell-phone towers to provide wide-area coverage, but I'm not sure if much effort has been put into that. I do know vast areas of the agricultural Midwest are covered in RTK networks -- it's used heavily for auto-steering control in agriculture.

[0] https://en.wikipedia.org/wiki/Real_Time_Kinematic


I've heard on and off for many years (15 years?) about ad-hoc wireless networks for cars. Whether it's car to car, or car to some devices on the ground (lane markings, stop lights), so it knows where it is in relation to the road and other cars -- and it would know if the car ahead is going to brake or slow down.

Now the cars are relying on cameras and lidar to figure things out. What has happened to putting sensors on the road to broadcast what/where the road is? Is that out of the question now because of cost?


I can see lots of potential problems with that, for instance you'd have to rely on the sensor data of that other car and you'd have to believe that other car telling you the truth. Lots of griefer potential there.


I drive a lot with my GPS (TomTom) on and it rarely gets the lane I'm in wrong, usually that only happens in construction zones.

So even if the error can be large in practice it often works very well.

It would be very interesting to see the input data streams these self driving systems rely on in case of errors, and in the case of accidents there might even be a legal reason to obtain them (for liability purposes).


> it often works very well

Well, yeah, it's not like autopilot is driving cars into barriers every day. But I don't think "often works very well" (and then it kills you) is good enough for most people. It's certainly not good enough for me.


According to the Reddit poster:

> Previous autopilot versions didn't do this. 10.4 and 12 do this 90% of the time. I didn't post it until I got .12 and it did it a few times in case it was a bug in 10.4.

> Also, start paying attention as you drive. Centering yourself in the lane is not always a good idea. You'll run into parked cars all over, and lots of times before there is an obstacle the lane gets wider. In general, my feel is that you're better off to hug one of the markings.

It kind of does sound like autopilot is steering vehicles into barriers every day, or it would be if drivers weren't being extra vigilant: https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil...

Seems like we need a study to assess whether the kind of protection AP offers outweighs the extra challenges it adds to a daily drive.


That's absolutely true; however, I did not mean for it to take the place of whatever they have running right now. I meant for it to be an input into a system that can detect whether there is a potential problem and the driver should be alerted.

Because if I interpret the video correctly, either the car was partially in the wrong lane before the near accident or it was about to steer itself into a situation like that. And that means it is time to hand off to the driver, because the autopilot has either already made an error or is about to make one.

Either way the autopilot should have realized the situation was beyond its abilities, and when GPS and camera inputs conflict (which I assume was the case) that's an excellent moment for a hand-off.


Don't navigation systems 'snap' to the most probable lane you are driving in based on general direction instead of your exact GPS coordinates?

A good example of this I think is when driving in tunnels, there is no GPS info there, but the navigation shows you following the (curved) road.


Good nav systems will fuse whatever inputs they can get including GPS, INS, compass, etc. in order to figure out and ‘snap’ to what road you’re on. Snapping to an individual lane is still AFAIK beyond these systems’ sensors’ capability. You will need an additional input like a camera for this that looks for lane markings.
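A toy version of that road-snapping step (the road data, weights, and scoring rule are invented for illustration):

    import math

    def point_to_segment_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def snap_to_road(fix, heading_deg, roads, heading_weight=0.5):
        # roads: list of (name, segment_start, segment_end, road_bearing_deg)
        def score(road):
            name, a, b, bearing = road
            err = abs(heading_deg - bearing) % 360
            heading_err = min(err, 360 - err)
            return point_to_segment_dist(fix, a, b) + heading_weight * heading_err
        return min(roads, key=score)

    roads = [("main road", (0, 0), (100, 0), 90.0),
             ("exit ramp", (0, 5), (100, 30), 75.0)]
    print(snap_to_road(fix=(50, 3), heading_deg=88.0, roads=roads)[0])  # "main road"

Lane-level snapping would need the same idea with per-lane geometry and much better position input than plain GPS provides.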


Yes, they do that, and they rarely get that wrong.

Also, the better ones have inertial backup & flux gate for when GPS is unavailable. Inertial backup could also help to detect the signatures of lane changes and turns.


Try driving near a place with a lot of roads that are close together, like an airport. You’ll quickly see that snapping doesn’t really work, especially if you deviate from the route the GPS is trying to take you on.


I do this all the time and it works remarkably well. Amsterdam and the roads around it are pretty dense, up to 6 lanes each way in quite a few places with all kinds of flyovers and multi-lane exits.


At what speeds? I'm in NYC, similar dense road systems. At a reasonable speed, snapping still works because it knows where you came from. In stop stop stop go traffic combined with lane weaving and ramp merging I've found it is not hard to trick the gps into thinking you're on a parallel access road or unrelated ramp for a little while.

It's possible your highways are newer or less congested than ours?


Counterpoint: my Lenovo tablet running Google Maps often seems to get confused about lanes. E.g. it will think I'm taking an exit ramp when I am still on the main road. Once the ramp and the road diverge enough it will recover, but not before going into its "rerouting" mode and sometimes even starting directions to get back on the road I'm already on.


That sounds pretty bad if that's the thing you rely on to get you where you're going. Here in NL interchanges are often complex enough that without the navigator picking the right lane you'd end up taking the wrong exit.

I don't use Google maps for driving but I know plenty of people that do and having been in the car with them on occasion has made me even happier with my dedicated navigation system. The voice prompts are super confusing with a lot of irrelevant information (and usually totally mis-pronounced) and now you are supplying even more data points that the actual navigation itself isn't as good as it could be.

I suspect this is a by-product of doing a lot of stuff rather than just doing one thing and doing that well.


Here in D.C. Google maps will regularly get confused about whether you're on the main road or a ramp; whether you're on an overhead highway or the street underneath it, whether you're on a main road or the access road next to it.


My FIATChrysler Uconnect system will sometimes think I'm on the road above when driving under a bridge.

Sometimes it'll shout "The speed limit is 35 miles an hour!" when I go under a bridge carrying a 35MPH road, even though the road I'm on has a limit of 65.


Is it using some other localization technique to complement the GPS? GPS at best can give accuracy within 3 meters (if I remember right). To make it more accurate, laser/radar beams are shot out to landmarks whose precise coordinates are known, and from those the pinpoint coordinates of the ego vehicle are determined.


The highest resolution GPS receivers have two changes:

1. Base stations at known locations broadcast corrections for factors like the effects of the ionosphere and troposphere, and inaccurate satellite ephemerides. If you have multiple base stations, you can interpolate between them (Trimble VRS Now [1] does this, for example)

2. Precise measurements by combining code phase (~300m wavelength) and carrier phase on two frequencies (~20cm wavelength), plus the beat frequency of the two frequencies (~80cm)

With these combined, and good visibility of the sky, centimetre-level accuracy is possible.
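A quick back-of-the-envelope check of those numbers, using the published GPS L1/L2 carrier frequencies and C/A chip rate:

    c = 299_792_458.0          # speed of light, m/s
    f_L1 = 1575.42e6           # GPS L1 carrier, Hz
    f_L2 = 1227.60e6           # GPS L2 carrier, Hz
    chip_rate = 1.023e6        # C/A code chipping rate, Hz

    print(c / chip_rate)       # ~293 m  -> the "~300 m" code phase
    print(c / f_L1)            # ~0.19 m -> the "~20 cm" carrier phase
    print(c / f_L2)            # ~0.24 m
    print(c / (f_L1 - f_L2))   # ~0.86 m -> the "~80 cm" wide-lane beat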

Autonomous vehicles will often combine this with inertial measurement [2] which offers a faster data rate, and works in tunnels.

Many people also expect autonomous vehicles to also track lane markings, in combination with a detailed map to say whether lane markings are accurate.

[1] http://www.trimble.com/positioning-services/vrs-now [2] http://www.oxts.com/products/rt3000/


Inertial, flux gate compass.


"GPS is not accurate enough to reliably pinpoint your location to a particular lane."

My phone has GPS accurate to within 1 foot. I use it for mining location all the time. It uses differential GPS plus Inertial sensors.


When I think of self-driving car technology, I think of all of these inputs (maps, cameras, GPS, etc.) being fed into a world model from which inferences are drawn.[1] Here, the car could have known from cameras and maps that there was a lane split, and should have been able to detect that the line following was giving a conflicting inference compared to those other sources. The question is: does Tesla actually do that? Because if not, it's not really self-driving car technology. It's not a glitch in a product that's 90% of the way there; it's a fundamentally more basic technology.

[1] See https://www.ri.cmu.edu/pub_files/2009/6/aimag2009_urmson.pdf at 21-23.


The whole problem, it seems to me, is that getting a 90% solution for a self-driving vehicle is now no longer hard. The remaining 10% is super hard, and the degree to which present-day solutions tend to get these wrong is worrisome enough that if the quality doesn't jump up very quickly the whole self-driving car thing could end up in an AI winter all of its own making. And that would be a real loss.

This is not a field where 'move fast and break stuff' is a mantra worth anything at all: the stuff that gets broken is people's lives, and as such you'd expect a far more conservative attitude, akin to what we see in aerospace and medical equipment software development practices.

But instead barely tested software gets released via over-the-air updates and it's anybody's guess what you'll be running the next time you turn the key.

I agree with you that the software should have been able to detect that something was wrong either way: either it was already halfway in the wrong lane or it was heading toward being halfway in the wrong lane, a situation that should not have passed without the car at least alerting the driver.

And from what we have seen in the video it just gave one inferred situation priority over the rest, and that particular one happened to be dead wrong in its interpretation of the model.


I agree with all you've said.

One of my concerns with self-driving systems is that they don't have a good model of the world they are operating in. Yes, sure, they build up 3-D models using multiple sensor types, and react to that accordingly, oftentimes with faster reflexes than a human.

However, consider this situation with pedestrians A and B, while driving down a street, approaching an intersection.

Pedestrian A, on the edge of the sidewalk, is faced away from the street, looking at his phone. The chances of him suddenly stepping out in front of my car are exactly 0.0%.

Pedestrian B, on the edge of the sidewalk, is leaning slightly towards the street, and is looking anxiously down the street (the wrong way). The chances of B suddenly stepping out is high, and the chances of B not seeing my car are very high (because B is being stupid/careless, or else is used to UK traffic patterns).

I, as a human, will react to those situations differently, preemptively slowing down if I recognize a situation like with Pedestrian B. Autonomous driving systems will treat both situations exactly the same.

From everything I've read about so far, current driving systems have no way of distinguishing between those two specific situations, or dealing with the myriad of other similar issues that can occur on our streets.


Another similar situation where autopilot may fail: it sees a ball rolling across the street. Will it slow down, because a child may be chasing the ball?


I used to contemplate similar situations. For example, a stationary person on a bike at the top of a steep driveway that's perpendicular to the vehicle's driving direction. But after seeing the video of the Arizona incident I stopped. It's too soon to consider these corner cases when some systems on the public road can't even slow down for a person in front of the fucking vehicle.


That's because Waymo is playing chess while Uber plays checkers. For Waymo, these corner cases probably matter. Uber is not actually pursuing self-driving tech; they are pretending to do so to defraud investors. Their troubles therefore aren't representative.


The saying in those fields is "the rules are written in blood". The Therac-25 (a medical device) is taught in practically every CS Ethics class for having changed software development practices for such things.

The cynic in me thinks self-driving will require the same, the rules and conservative practices will only come after a bunch of highly publicized, avoidable failures that leave their own trail of blood.


I wonder how the public would have responded if the person killed in that Uber accident a while ago had been a kid on a bicycle, and how the response will be when/if a Tesla on autopilot plows into a school bus (stationary objects partially in a lane seem to be a situation they have problems with).


If there's anything left to be conservative upon. Current climate of "drive fast and kill people" might wipe out the field via PR.


This is where my chief complaint with Musk comes in. His whole bravado as a salesman is probably having a longer-term impact on AI for cars. He keeps pushing this as autopilot when it's not.


Regardless of the lane recognition, it has to at least detect the obstacle in front of it; that's something a basic collision avoidance system can do.


> What is more worrying is that even following the rest of the traffic would have solved the problem.

That's what makes this example bizarre to me. I had thought that AutoPilot's ideal situation was having a moving vehicle in front of it. For example, AP does not have the ability to react to traffic lights, but can kind of hack it by following the pace of the vehicle ahead of it (assuming the vehicle doesn't run a red light):

https://www.youtube.com/watch?v=EXs4qZlDWbQ


That's a heck of an ass-u-mption. Running the red light as the n-th car exponentially increases the probability you'll get into a collision.


If you are the only car doing something then there is a good chance you are doing something wrong.

I’m surprised self-driving systems don’t do this (do they?). One or more vehicles in front of you that have successfully navigated an area you are now entering is a powerful data point. I’m not saying follow a car off a cliff, but one would think the behavior of followed vehicles should be somehow fused into the car’s pathfinding.


"Follow the leader" is a main ingredient in flocking behavior in birds, who rarely if ever collide in flight.


> If you are the only car doing something then there is a good chance you are doing something wrong.

Following this principle would probably result in a lot of people angry that their autopilot had gotten them a speeding ticket.


> lane markings are confusing but GPS and inertial readings should have clearly indicated that that is not a lane

Aside from GPS accuracy as mentioned in other replies, also take into account the navigation's map material. The individual lanes are probably not individually tracked on the map; rather there is a single track per road with metadata specifying the number of lanes, amongst other features. So even if GPS provided very accurate position readings, the map source material might not even match that level of detail.


Pretty sure I heard an audible warning in the vid.


No, that's the sound you hear when you override and take over from the autopilot.


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed

This seems to me to be a clearly incorrect (and self-contradictory) claim. It entirely depends on your definition of failure.

Your analysis seems fine. The big problem is that the "autonomous" driver is using one signal (where are the lines on the edge of the road?) to the near exclusion of all others (is there a large stationary solid object in front of me?)

Maybe Tesla should have hired George Hotz (sp?) if only to write a lightweight sanity-check system that could argue with the main system about whether it was working.


> This seems to me to be a clearly incorrect (and self-contradictory) claim.

I guess that means I was able to pull the wool over the eyes of all five members of my thesis committee because none of them thought so.

> It entirely depends on your definition of failure.

Well, yeah, of course. So? There is some subset of states of the world that you label "failure". The guarantee is not that the system never enters one of those states, the guarantee is that if the system enters one of those states it never (well, so extremely rarely that it's "never" for all practical purposes) fails to recognize that it has done so. Why do you find that so implausible?


> The guarantee is not that the system never enters one of those states, the guarantee is that if the system enters one of those states it never (well, so extremely rarely that it's "never" for all practical purposes) fails to recognize that it has done so. Why do you find that so implausible?

Stated so, that is plausible. However, "it is possible to engineer a system that never fails to detect that it has failed" is not; I claim that any subset of states which is amenable to this level of detectability will exclude some other states that any normal person would also consider to be "failure".


> Stated so, that is plausible. However, "it is possible to engineer a system that never fails to detect that it has failed" is not

They seem the same to me. In fact, in 27 years you are the first person to voice this concern.

> I claim that any subset of states which is amenable to this level of detectability will exclude some other states that any normal person would also consider to be "failure".

Could be, but that would be a publishable result. So... publish it and then we can discuss.


What happens if all your sensors fail, including the one that senses the failure of your sensors? What happens if the power source disconnects?


Having all your sensors fail is actually a very easy case. Imagine if all of your sensors failed: suddenly you could not see, hear, feel, smell, or taste... do you think it would be hard to tell that something was wrong?


Having your sensors fail doesn't mean they're not providing data. It means they're not providing accurate data. In humans, we would call this hallucinating, and humans in fact cannot generally tell that they are hallucinating.


> Having your sensors fail doesn't mean they're not providing data.

That is one possible failure mode. But you're right, it's not the only one.

There is an extensive literature on how to detect and correct sensor errors resulting from all kinds of different failure modes.


Arguing that it's possible to build an autonomous system that is sometimes wrong but always knows when it's wrong seems like a stretch to me.


A system that can sense when it is in a failure mode is analogous to a “detector” in the Neyman-Pearson sense. A detector that says it is OK when it is actually in a failure condition is said to have a missed detection (MD).

OTOH, if you say you’ve failed when you haven’t, that’s a false alarm (FA).

In general, there is a trade-off between MD and FA. You can drive MD probability to near-zero, but typically at a cost in FA.

Again in general, you can’t claim that you can drive MD to zero (other than by also driving FA probability to one) without knowing more about the specifics of the problem. Here, that would be the sensors, etc.

In particular, for systems with noisy, continuous data inputs and continuous state spaces -- not even considering real-world messiness -- I would be surprised if you could drive MD probability to zero without a very high cost in FA probability.

As a humbling example, you cannot do this even for detection of whether a signal is present in Gaussian noise. (I.e., PD = 1 is only achievable with PFA = 1!) PD = 1 is a very high standard in any probabilistic system.
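You can see that directly for the Gaussian case with a simple threshold detector (unit-variance noise; the signal amplitude of 1 is an arbitrary choice):

    from scipy.stats import norm

    mu = 1.0                                 # assumed signal amplitude
    for threshold in [3.0, 0.0, -3.0, -10.0]:
        pfa = 1 - norm.cdf(threshold)        # noise alone exceeds threshold
        pd = 1 - norm.cdf(threshold - mu)    # signal + noise exceeds threshold
        print(f"threshold={threshold:6.1f}  PFA={pfa:.6f}  PD={pd:.6f}")
    # PD approaches 1 only as the threshold goes to -infinity, where PFA -> 1 too.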

Discrete-input systems can behave differently.


You've done a great job of describing the problem. It is manifestly possible to drive missed detection rates very close to zero without too many false alarms because humans are capable of driving safely.


Ah, yes... you have convinced me - though humans can apply generalized intelligence to the problem, which I imagine is particularly useful in lots of special cases.


General intelligence is not needed. Illiterate people can drive. People who can't do math can drive.


Illiteracy is not incompatible with intelligence.


So we can get within epsilon for an undefined delta because humans can do something, although not always.

Right, that whole claim about engineering a system that always knows when it’s not working sounds rock solid to me. After all, we can build a human, right?


> we can build a human, right?

Not yet. But there's no reason to believe we won't be able to eventually.


You can if you dial the specificity down to 0.

i.e: Always report "I'm wrong."


Humans have moments like this too. I remember suddenly driving into dense fog on the freeway and not able to see more than 20 feet in front of me. I had a definite "failure mode" where I slowed down and freaked out because all my previous driving experience was "failing" me on how to avoid a potential accident.


> my armchair diagnosis

> my 1991 Ph.D. thesis on autonomous driving

Going to hide in a corner and stay quiet on HN until I forget about this comment!


:-)

Just in case you're interested:

https://vtechworks.lib.vt.edu/handle/10919/38880

and the associated conference paper:

http://www.flownet.com/gat/papers/aaai92.pdf

Most of the work was done on a Mac II with 8MB (that's megabytes, not gigabytes) of RAM.

The progress that has been made since those days boggles my mind.


Shameless plug! Haha, just kidding, and WOW! Autonomous driving so early on, JPL, Google...

I'm going to go hide in the corner as well.


The most impressive thing in retrospect is that all that work was done on 32-bit processors with ~16 MHz clocks and ~8MB of RAM. And those machines cost $5000 each in 1991 dollars. Today I can buy a machine with 100 times faster clock and 1000 times as much RAM for about 1% of the price. That's seven orders of magnitude price-performance increase in <30 years. It's truly mind-boggling.


So due to this significantly cheaper availability of computing power one would think that today we would be making enormous progress every day.

My phone is more powerful today than the most powerful desktop back in 1995. And what do I use my phone for? Check email and read HN.


> one would think that today we would be making enormous progress every day

Actually, we are. Back in the day, I would have given long odds against seeing autonomous cars on the road in my lifetime. Notwithstanding the odd mishap, today's autonomous vehicles actually work much better than I would have thought possible.

> And what do I use my phone for? Check email and read HN.

What's wrong with that? Add wikipedia to that list and put a slightly different spin on it: today you can carry around the equivalent of an entire library in your pocket. That seems like progress to me. When I was a kid, you had to actually (imagine this) PHYSICALLY GO TO A LIBRARY in order to do any kind of research. If you didn't live close to a good library you were SOL. Today anyone can access more high-quality information than they can possibly consume from anywhere for a few hundred bucks. It's a revolution bigger than the invention of the printing press.

It's true that society doesn't seem to be making very effective use of this new super power. But that doesn't make it any less of a super power.


> Actually, we are. Back in the day, I would have given long odds against seeing autonomous cars on the road in my lifetime. Notwithstanding the odd mishap, today's autonomous vehicles actually work much better than I would have thought possible.

You are probably correct. In comparison with the past we are moving fast; it just feels like we could do so much more with what we have.

> It's true that society doesn't seem to be making very effective use of this new super power. But that doesn't make it any less of a super power.

This is the sad part. But on the flip side this means there are so many opportunities for those who have the desire and drive to make things better.


I think more people need to embrace and understand the fact that we are a long way from having fully autonomous vehicles that you can just give the steering reins to from start to finish. The term 'Autopilot' gives off the wrong idea that the driver can sit back and relax while the car brings you from point A to point B safely.

The same is happening with chatbots - more and more businesses think they can put a chatbot on their site and assume it'll handle everything when in fact it's meant to assist you rather than take over things for you.


> I think more people need to embrace and understand the fact that we are a long way from having fully autonomous vehicles that you can just give the steering reins to from start to finish

Apparently, we're not, as Waymo has shown.

> The term 'Autopilot' gives off the wrong idea that the driver can sit back and relax while the car brings you from point A to point B safely.

Agreed, the "but that's not how autopilots work in airplanes" canned response is irrelevant.


Yep, it took Google/Waymo 8 years to get there, but Musk and Uber just want to move fast and crash into things. Musk wouldn't have been selling self-driving since Oct '16 if it weren't any day now, or would he?


> Apparently, we're not, as Waymo has shown.

The only things Waymo has shown so far are a bunch of marketing videos and a few tightly controlled press rides.


> Apparently, we're not, as Waymo has shown.

Does Waymo drive at high speeds? All the videos I have seen are at low speeds.


> Apparently, we're not, as Waymo has shown.

Can you point me to a website or store where I can buy my fully autonomous Waymo car? I doubt Waymo is even on par with Tesla, considering Tesla is actually selling cars. Waymo is just vaporware at this point.


> Can you point me to a website or store where I can buy my fully autonomous waymo car?

Not sure what you are saying: self-driving cars only exist when you can buy them personally?

Let me know where I can buy an M1 Abrams; if you can't, I'll claim they don't exist!

But actually, you can get a ride in a Waymo self-driving car already, with nobody in the driver's seat.


> Not sure what you are saying: self-driving cars only exist when you can buy them personally?

No, but somebody should be able to buy. Waymo is just vapor right now. It's an experiment, not a product.


> No, but somebody should be able to buy. Waymo is just vapor right now. It's an experiment, not a product.

I personally know people who have the option to hail a self driving car in Phoenix, AZ, just as they would an Uber.

But I guess they are just my imagination, right?


Waymo has shown no such thing. In controlled, limited circumstances, yes, but in complex, poorly mapped, degraded conditions, not yet.


>This is actually the most serious aspect of all of these crashes: the system does not seem to be aware when it is getting things wrong.

Although serious, this is working as designed. Level II self-driving doesn't have automation that makes guarantees about recognizing scenarios it cannot handle. At level III, the driver can safely close their eyes until the car sounds the alarm that it needs emergency help. Audi plans to release a level III car next year, so we'll see how liability for accidents actually shakes out.

Unfortunately level II is probably the most dangerous automation even with drivers that understand the limitations of the system. They still need constant vigilance to notice failures like this and react quickly enough to avoid collisions. Just imagine poorly marked or abrupt ramps or intersections where drivers hardly have enough time to react even when they're driving themselves. Add in the delay to notice the computer is steering you into a wall and to yank the wheel, and some of these accident-prone areas turn into accident-likely areas.


> level II is probably the most dangerous automation

I'll go further and say that level II is worse than useless. It's "autonomy theatre" to borrow a phrase from the security world. It gives the appearance that the car is safely driving itself when in fact it is not, but this isn't evident until you crash.


I disagree. It’s not as though level II is a hard and fast definition - Tesla wants level 4 and claims the cars possess the hardware for that already. So you’d think their level II would still be smart enough to detect these problems to some degree. It’s not a fixed system but one they keep upgrading. I would expect this nominally level II system to have more smarts than a system that is designed never to exceed level II.


> Tesla wants level 4 and claims the cars possess the hardware for that already.

You can't possibly determine what hardware is required for Level 4 until you have proven a hardware/software combination, so that's just empty puffery, but even if it was true...

> So you’d think their level II would still be smart enough to detect these problems to some degree.

No, because the smartness of their system is about the software. They could have hardware sufficient to support Level 4 autonomy and better-than-human AI while running software that only supports a less-than-adequate version of Level 2 autonomy. What their hardware could support (even if it was knowable) gives you no basis for belief about what their current software on it supports, except that it won't exceed the limits set by the hardware.


That's probably right, or very close to what is happening, but what I cannot understand is this: what about the damn black-and-yellow barrier board right in front of the car, closing in at 60 mph? Some sensor should have picked up on that.


This was my thought as well. Doesn't the system have the equivalent of reflexes that override the approved plan?

I've been skeptical of autonomous driving since it started to become a possibility. I spend a fair amount of time making corrections while driving that have nothing to do with what's happening in my immediate vicinity. If it can't handle an obstruction in the road, how will it handle sensing a collision down the road, or a deer that was sensed by the side of the road, or even just backing away from an aggressive driver in a large grouping of vehicles in front of you? I've had to slow down and/or move off on to the shoulder on two lane country roads because someone mistimed it when passing a vehicle. I don't have much faith in how this system would handle that. Not to mention handling an actual emergency failure like a tire blowing out.

I'm sure they will get there eventually, but it looks like they have conquered all the low-hanging fruit and somehow think that's enough. I'm now officially adding "staying away from vehicles studded with cameras and sensors" to my list of driving rules.


What if someone drives down the road with a lane-marking painter? I have witnessed that two or three times in my life; it used to just be funny, but now it can be deadly.


The car's will for survival should outweigh the lane-marker-following algorithm. Maintaining distance and trajectory relative to things with mass is more important than coloring within the lines.


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed.

That sounds highly dubious. Here's a hypothetical scenario: there's a very drunk person on the sidewalk. As a human driver, you know he might act unexpectedly so you slow down and steer to the left. This will help you avoid a deadly collision as the person stumbles into the road.

Now let's take a self-driving car in the same scenario where, since it doesn't have general intelligence, it fails to distinguish the drunk person from a normal pedestrian and keeps going at the same speed and distance from the sidewalk as normal. How, in this scenario, does the vehicle 100% know that it has failed (like you say is always possible)?


An even more extreme example: suppose someone on the sidewalk suddenly whips out a bazooka and shoots it at you. Does your failure to anticipate this contingency count as a failure?

"Failure" must be defined with respect to a particular model. If you're driving in the United States, you're probably not worried about bazookas, and being hit by one is not a failure, it's just shit happening, which it sometimes does. (By way of contrast, if you're driving in Kabul then you may very well be concerned with bazookas.) Whether or not you want to worry about drunk pedestrians and avoid them at all possible costs is a design decision. But if you really want to, you can (at the possible cost of having to drive very, very slowly).

But no reasonable person could deny that avoiding collisions with stationary obstacles is a requirement for any reasonable autonomous driving system.


Way to dodge the question. And how did we get from always knowing when you've failed to "just drive very, very slowly" when dealing with situations that human drivers deal with all the time?

Let's not pretend that anticipating potentially dangerous behaviour from subtle clues is some once-in-a-lifetime corner case. People do this all the time when driving -- be it a drunk guy on the sidewalk, a small kid a tad bit too unstable when riding a bike by the roadside, kids playing catch next to the road and not paying attention, etc. Understanding these situations is crucial in self-driving if we want to beat the 1 fatality per 100M miles that we have with human drivers. For such scenarios, please explain how the AI can always know when it failed to anticipate a problem that a normal human driver can.


> how did we get from always knowing when you've failed to "just drive very, very slowly" when dealing with situations that human drivers deal with all the time

You raised this scenario:

> there's a very drunk person on the sidewalk. As a human driver, you know he might act unexpectedly so you slow down...

I was just responding to that.

> Let's not pretend that anticipating potentially dangerous behaviour from subtle clues is some once-in-a-lifetime corner case.

I never said it was. All I said was that "failure must be defined with respect to some model." If you really want to anticipate every contingency then you have to take into account some very unlikely possibilities, like bazookas or (to choose a slightly more plausible example) having someone hiding behind the parked car that you are driving past and jumping out just at the wrong moment.

The kind of "failure" that I'm talking about is not a failure to anticipate all possible contingencies, but a failure to act correctly given your design goals and the information you have at your disposal. Hitting someone who jumps out at you from behind a parked car, or failing to avoid a bazooka attack, may or may not be a failure depending on your design criteria. But the situation in the OP video was not a corner case. Steering into a static barrier at freeway speeds is just very clearly not the right answer under any reasonable design criteria for an autonomous vehicle.

My claim is simply that given a set of design criteria, you cannot in general build a system that never fails according to those criteria, but you can build a system that, if it fails, knows that it has failed. I further claim that this is useful because you can then put a layer on top of this failure-detection mechanism that can recover from point failures, and so increase the overall system reliability. If you really want to know the details, go read the thesis or the paper.

These are not particularly deep or revolutionary claims. If you think they are, then you haven't understood them. These are really just codifications of some engineering common-sense. Back in 1991, applying this common sense to autonomous robots was new. In 2018 it should be standard practice, but apparently it's not.


You don't even need to go to the level of a drunk person. Imagine driving down a suburban street and a small child darts out onto the road chasing after a ball.


Some local knowledge: the blocked-off lane to the left side is the same lane used when the express lanes are in their opposite configuration, in the morning. In the morning, with the express lanes inbound towards the city, that lane is a left-hand exit from the main flow of Interstate 5 onto the dedicated express lanes. There should be a sufficient amount of GPS data gathered from hundreds of other Teslas, and camera data, that shows the same lane from the opposite perspective.


> This is actually the most serious aspect of all of these crashes: the system does not seem to be aware when it is getting things wrong.

That’s exactly my experience as a driver:

You learn to anticipate that the ‘autopilot’ will disengage or otherwise fail. I have been good enough at this, obviously, but it is sometimes frightening how close you get to a dangerous situation …


>Here is my armchair diagnosis:

Odd how your "armchair diagnosis" matches perfectly with the top ranked comment in the reddit thread that was posted 1 day ago.

courtlandreOwner 815 points 1 day ago

It sees the white line on the left, the white line on the right and thinks its a big lane. Its trying to center the car in the lane.


The comment above is saying the split looks like a diamond lane marker indicating an HOV/electric car lane, so the car is centering itself on the diamond. That's a completely different idea from the Reddit comment about the car centering itself between the right and left white lines. My guess is that both inputs contributed to the car "thinking" it was following a lane.


The conclusions are the exact same. The autopilot was trying to center the car between lines.

>My guess is that the autopilot mistook these lines for the diamond lane marker and steered towards them thinking it was centering itself in the lane.

It sees the white line on the left, the white line on the right and thinks its a big lane. Its trying to center the car in the lane.


Is it really odd that two people could reach the same conclusion when presented with the same video?


Of course not. But it is odd when the top rated comment directly below the video is essentially the armchair theory that was proposed.


Again, how is it odd that everyone agrees with the obvious explanation?


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed.

Wouldn't the system doing the checking be considered an autonomous system...that could also fail?


No, because the monitoring system doesn't interact with an environment, so its job is much simpler. This is actually the reason that cognizant failure works.


Thank you for the concise answer. I was just about to ask if you have a blog, but found your website in your HN profile. I know what I am doing for the rest of the night :P PS: Funny that we've both written a double ratchet implementation in JS :)


I don't mean to call the OP a liar, but does anyone know if there is any evidence whatsoever that this video was taken in a Tesla with autopilot engaged? I just want to be sure about it before I form a judgment against Tesla.


More basic lane assist in other cars already solves this problem. 100% Tesla's fault. My car will disable lane assist the moment the lane gets too wide.


Sounds like an easy fix then. After that we move on to the next edge case and so on ad infinitum.


If you are aware when you are wrong... you would not be wrong.

If you are referring to assigning confidences/probabilities to decisions, this is standard in ML.


> If you are referring to assigning confidences/probabilities to decisions,

There's a little more to it than that but yeah, pretty much.

> this is standard in ML.

Yes, I know. But not, apparently, standard in embedded autonomous systems.


I don’t know you, sir, but Tesla should hire you!


Estimating errors or confidence level reliably is still one of the biggest unsolved problems in AI.


In other words, an old ambiguous parsing problem.

It might be fixable in software. I'm a bit annoyed at Tesla for over-reliance on painted lines. They fade, they can be covered, they can be outdated...

That old fail video of an SDV stuck inside a circle is not funny anymore.


> SDV stuck inside a circle

Off topic: I never understood why this video was discussed that much, especially in order to blame SDVs. It's an example of pointless road markings and a perfectly behaving vehicle. It's like driving into a one way street that turns out to have no exit. The driver can't be blamed.


I know, and partly agree. Still, when you have something that hints at leaping into the future, a simplistic automaton behaviour like this brings it all back down to the ground.

We'd expect at least 'cycle' detection ;)


This comment from OP (beastpilot) is pretty frightening:

> Yep, works for 6 months with zero issues. Then one Friday night you get an update. Everything works that weekend, and on your way to work on Monday. Then, 18 minutes into your commute home, it drives straight at a barrier at 60 MPH.

> It's important to remember that it's not like you got the update 5 minutes before this happened. Even worse, you may not know you got an update if you are in a multi-driver household and the other driver installed the update.

Very glad the driver had 100% of their attention on the road at that moment.


Elsewhere from the comments: "Yes, the lane width detection and centering was introduced in 10.4. Our X does this now as well, and while it's a welcome introduction in most cases, every once in a while (in this instance, or when HOV lanes split) it is unwelcome." So basically, if I'm understanding this right, they introduced a new feature that positions your car a little more consistently in the lane, at the small cost of sometimes steering you at full speed head-on into a barrier.

Remember Tesla's first press release on the crash and how it mentioned "Tesla owners have driven this same stretch of highway with Autopilot engaged roughly 85,000 times"? I imagine the number that have driven it successfully in that lane since that update was rolled out sometime in mid-March is rather smaller...


And that's exactly what makes Tesla PR so disingenuous: they know better than anybody what updates they shipped and how often their updated software passed that point, and yet they will happily cite numbers they know to be wrong.

So, now regarding that previous crash: did that driver (or should I say occupant) get lulled into a false sense of security because he'd been through there many times in the past and it worked until that update happened and then it suddenly didn't?


I'm a huge fan of Tesla and I thought that bit of PR was horrifyingly bad. It came across as completely dismissive of any further investigation or accepting of even the possibility of responsibility.

Now that these other videos are showing up, and further details (the update) are emerging, that PR should bite them in the ass hard enough that they decide never to handle an incident that way again.

I don't want this to kill Tesla -- I sincerely hope they make their production goals and become a major automobile manufacturer -- but their handling of this should hurt them.

I'm also curious if any of the people at Tesla saying, "we call it autopilot even though we expect the human driver to be 100% attentive at all times" have studied any of the history of dead man's vigilance devices.


I've got to take issue with you there. Tesla Engineering knows better than anybody what updates they did and how often they update etc.

Tesla PR knows nothing about what updates the engineering team did. At least some people in Tesla PR probably don't even know the cars update their software regularly.

It's bad practice for them to speak out of turn, but I can absolutely see the PR team not having a good grasp of what really indicates safety and their job is to just put out the best numbers possible.


I'm sorry but Tesla PR == Tesla. If they don't have a good grasp on this they should STFU until they do. That would make Tesla even worse in my book.

Their job is not to put out the best numbers possible, their job is to inform. Most likely they were more worried about the effect of their statement on their stock price than they were worried about public safety.

If they do put out numbers (such as 85K trips past that point) then it is clear that they have access to engineering data with a very high resolution, it would be very bad if they only used that access to search for numbers that bolster their image.


Well, I had kind of come to hope that Tesla's PR was less BS than other companies' PR. But that memo basically shows they're no different from all other PR: just twisting the truth to avoid negative stories until the fuss dies down.


The entire point of a PR department is to gather that information before releasing it. Yes, they wrangle it to put the company in the best light, but that doesn't mean they should be operating in ignorance. They have access to Tesla employees that outside reporters do not (or at least, should not) have.


> The entire point of a PR department is to gather that information before releasing it.

No the entire point of a PR department is to propagandize the public on behalf of the corporation.


It seems like the collective fleet of Teslas could feed telemetry back to the mothership as regression tests for any new version. In other words, any change would have to re-drive (in simulation) a large sample of what has already been driven, especially the stretches where the human had to correct.
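A hedged sketch of that regression idea: replay logged drives through a candidate build and flag any frame where it steers markedly differently from what the human actually did. The log format and plan_steering() are hypothetical stand-ins, not any real Tesla interface.

  def regression_failures(logged_frames, plan_steering, tolerance_deg=2.0):
      """logged_frames: iterable of (sensor_snapshot, human_steering_deg) pairs
      recorded by the fleet; plan_steering: the candidate build under test."""
      failures = []
      for i, (snapshot, human_deg) in enumerate(logged_frames):
          candidate_deg = plan_steering(snapshot)
          if abs(candidate_deg - human_deg) > tolerance_deg:
              failures.append((i, candidate_deg, human_deg))
      return failures

A release gate might require the failure list to be empty (or reviewed case by case) before an update is pushed over the air.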


Life and death patch notes. I think I'm going to wait a while before using this feature.


> Very glad the driver had 100% of their attention on the road at that moment.

Yes, and 100% is critical. That required pretty quick and smooth reactions. The car started to follow the right-hand lane, then suddenly re-centered on the barrier. The driver had about a second to see that and correct. That's just the sort of situation where even a slightly inattentive driver might panic and overcorrect, which could cause a spinout if the roads were wet or icy or cause a crash with parallel traffic.


So you're basically paying (a lot of) cash to beta test their software for them... at your own (very personal) risk!


And endanger others who have decided to opt-out of the beta test. Keep in mind that in any accident all bets are off with respect to the traffic around you, even if you ram into a stationary barrier you could easily rebound into other traffic.


Also, if something goes wrong it's your fault anyway, because you're supposed to keep 100% of your attention on the road.


That's the best kind of small print. The kind that disclaims all liability no matter what. I really don't get why we accept this stuff. It's akin to driving without a license. After all, as long as a Tesla 'autopilot' can't take a driving test it has no business controlling cars.


Not to mention the fact that, if their software screws up and kills you, they'll rush to Twitter in an attempt to smear you using personal data and deceptive statements.


I don't know why people use this thing. The cars are nice, and justify their purchase on their own, without this autopilot feature. But I don't think you can do good autopilot without lidar.

People shouldn't use it.


I really don't think the cars are that nice... this is an aside from the main issue here, but the build quality is poor and the design is bland - both internally and externally (that massive iPad-like screen - yuck!).


The big YouTuber who rebuilds salvage Teslas made the point that almost every Tesla-owning YouTuber has a video showing their car getting hauled away on a tow truck. One of the possible reasons Tesla's service dept. doesn't provide maintenance history on salvage cars is that the amount of service would reflect poorly on them. It doesn't seem uncommon for a Model S with 80,000 miles to have had several door handle replacements, at least one whole drive unit replacement, a battery contactor replacement, and an infotainment system replacement, in addition to all the recalled/redesigned components that are replaced before failure. I still think Teslas are quite nice, but a bullet-proof Honda they are not.


Not just your own risk... There are other people on the road who share that risk. It makes me angry at Elon Musk himself. First he didn't sterilize that stupid thing he sent into interplanetary space. Now he's putting people's lives at risk.


Have to disagree with you there. It is in a fairly normal solar orbit. There are a shitload of dead third stages and satellites in solar orbit. They were not sterilized. National space agencies have been launching things to Earth escape velocity without sterile procedures for 40 years. If it were designed to land intact on Mars, that would be another story.


Well there's plenty of unsterilized stuff in space. The early space missions just dumped the bodily waste of the astronauts overboard.


Yeah, but it's coming back to Earth, where it came from. This is an object that could land on Mars or another planet.

Great measures have been taken in the past to ensure that other planets aren't contaminated before we've had a chance to understand their existing biology. Elon Musk is the kid who comes and knocks over some other kids' block tower for his own amusement.

https://en.wikipedia.org/wiki/Planetary_protection


> First he didn't sterilize that stupid thing

Huh? Isn't it better that way? We might spread [simple] life to other planets. I'm completely serious; I don't understand what the concern is.


There may already be life on other planets. Whether or not it's simple is a short-sighted human judgement.

Look at the big picture. We risk denying these planets the ability to evolve in isolation. That is a decision that cannot be reversed. Do we really want to do that? Maybe so, but it ought to be a conversation. Great measures have been taken to reduce the risk of contaminating other planets with Earth based lifeforms. Then, this belligerent guy comes along and disregards all that.


> Whether or not it's simple is a short-sighted human judgement.

You realize I was talking about your "contamination" (e.g. bacterial organisms, microscopic lifeforms, etc)?

> We risk denying these planets the ability to evolve in isolation.

Seems like a fairly limited risk. It is more likely these planets have no form of life at all and that we'd seed their only life (if it could survive there).

> Great measures have been taken to reduce the risk of contaminating other planets with Earth based lifeforms.

You're conflating craft that were designed to land on other planets and look for life on them with a spacecraft that was never meant to land anywhere.


> The driver had about a second to see that and correct.

And if there had been a crash, Tesla probably would have said that the driver had an unobstructed view of the barrier for several seconds before impact (conveniently omitting at what point AP started to swerve).


And don't forget pointing out that the driver forgot to use their blinker a week before the crash so clearly they just aren't a responsible driver.


From the OP's comments on Reddit - he said that it happens about 90% of the time on that stretch and that he has more footage of the same exact incident.

He was likely prepared for it, which kinda makes it even scarier in a way. An inattentive driver would have totally botched this.


After working in large codebases I discovered that there are all sorts of assumptions in the code, and that a small change in one function can break an assumption relied on far away elsewhere in the system. I came to the conclusion that any change basically invalidates all your tests. An update could introduce any possible problem that wasn't there before, so now all your experience with the system as a driver is effectively reset to zero.


This is just about what I expected it to look like when software engineering safety standards are let loose on the physical world.


And this is why if I ever get a self driving car I want:

1) root

2) control over updates

3) everything to be completely open


You do not want root on your self driving car.

Or to put it differently, I don't want to be driving on the same road as you with your rooted self driving car. You can be a great sysadmin/coder, and the Tesla guys may be too, but both of you changing random stuff without any communication with each other... I've seen enough servers.


I am trying to imagine the typical Arch Linux enthusiast modifying the lane detection software on their car, and I shudder at the thought.


This is a problem that will very quickly fix itself, as morbid as it sounds...


It's one thing to remove the warning stickers from a gas-fired oven; it's another when you're commanding several thousand pounds of unintentional killing machine.

Keep your rooting to your phones.


All I want in a modern car is to have nothing smart in it that I'd _need_ (or, want) to root...


Everything being completely open is really important, but I don't want you to have root or control over updates if you are using your car on a public road.


Only certified (by the government) updates should be allowed on self driving cars. It should be at least a misdemeanor to have your own code on the car.


Do these things really matter unless you are going to verify the software yourself? (and if you are, kudos to you!)

While, in principle, users could organize their own communal verification programs for open software, that does not happen in practice, even when the software is widely used and needs to be safe (or at least secure - OpenSSL...)


I just don't understand the willingness people have to put their lives in the hands of unproven technology like this. I mean, I don't even trust regular old cruise control in my cars and I keep my finger on the "cancel" button whenever I use it during long highway drives.


This kind of video makes me wonder: what is the "autopilot" feature actually for? Do people generally like it? (I don't own a Tesla and I've never driven one.)

If you're supposed to keep your hands on the wheel, and given videos like this that show you really need to keep your hands on the wheel and pay attention, is automatic steering really that big of a deal?

Cruise control, now, that really is useful because it automates a trivial chore (maintaining a steady speed) and will do it well enough to improve your gas mileage. The main failure condition is "car keeps driving at full speed towards an obstacle" but an automatic emergency brake feature (again, reasonably straightforward, and standard in many new cars) can mitigate that pretty well.

It seems to me that autopilot falls into an uncanny valley, where it's not simple enough to be a reliable automation or a useful safety improvement, but it also isn't smart enough to reduce the need for an alert human driver. So what's the point?

If you're excited about self-driving cars because they'll reduce accidents, as many people here claim, what you should really be excited about is the more mundane incremental improvements like pedestrian airbags. Those will actually save lives right now.
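The emergency-braking mitigation mentioned above is conceptually simple. Here is a minimal sketch based on time-to-collision, where the 1.5-second threshold is an illustrative value and not any vendor's actual tuning.

  def should_emergency_brake(gap_m: float, closing_speed_mps: float,
                             ttc_threshold_s: float = 1.5) -> bool:
      """Brake hard when we are closing on an obstacle and would reach it too soon."""
      if closing_speed_mps <= 0.0:       # not closing, or pulling away
          return False
      time_to_collision_s = gap_m / closing_speed_mps
      return time_to_collision_s < ttc_threshold_s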


To directly answer your question: people buy autopilot so they can check their phone while commuting down the freeway to work. I was talking to a group of people about the difficulties of commuting in the Santa Clara Valley, and one guy shrugged and said, "just get a Tesla, move to the left lane, and manage email until you get to work".


Yikes! I really hope that's a joke...


What else would people be using it for? It's a "drive my car for me while I send this text" mode that allows you to take your attention off the road every few seconds. Otherwise, the system makes no sense at all.


Once you're engrossed in your phone, you're not taking attention off the road for a 'few seconds'. You're essentially letting autopilot drive for you. Which Tesla explicitly says you're not supposed to do.


Agreed.


I see people attempting that on the roads around here without the Tesla. That is why I always honk at cell phone drivers. Sometimes I wish I had a fully-charged Marx bank in the trunk.


It should be illegal. It is in many countries.


It is in New York.


Proponents keep touting this as the real benefit of self-driving cars, so it's hardly surprising that some people have taken them at their word and not realised that the car is not, in fact, truly autonomous and may drop out at any moment.


or not drop out and drive you into a barrier


It's not a joke. I was in a bus crossing the Bay Bridge a few weeks ago. I was watching a Tesla driver play on his phone. He didn't look up once in the span where I had visibility (about half of the bridge).


I have seen pictures of iPads strapped to the steering wheel in Nissan Altimas, so yes, people really are this dumb. Airbag issues entirely aside.


Oh god. Imagine the last thing you see before dying is your candy crush score propelled at your face at 200 mph.


Years ago I saw someone reading a book while driving...


Agreed. I really hate that Tesla named the feature "autopilot" as it gives the impression of being fully automatic (yes, I realize boat and aircraft "autopilots" aren't fully automatic - that is likely lost on most consumers).

My VW has Adaptive Cruise Control (ACC) which does the normal cruise control, plus basic distance keeping (with an alarm if the closing speed changes dangerously).

My parents' Subarus have ACC plus lane-keeping. The car will only do so much correction, plus alarms.

These seem like much better solutions, given the current state of driving "AI".


Yeah, I have a recent Subaru with those features. It beeps when I'm backing out of the garage and there's someone driving or biking up the road before they even come into my sight lines.

It can help keep me in a lane, either by beeping or nudging the steering wheel if I drift - I only turn on active lane assist on long highway stretches, it is more of a security blanket than anything else - just in case I space out for a sec, here's another layer of defense.

Adaptive cruise is also great. Between the two, in long road trips I can put the car in a travel lane and just go. I still have to attend to my surroundings, but it is a lot easier to focus on that when I'm not worried about accidentally creeping up to 90 mph because suddenly there's nobody in front of me.

I also had the auto brake feature activate once when a car in front of me stopped unexpectedly. I was in the middle of braking, but the brake pedal depressed further and there was a loud alarm beep.

None of these are autopilot, and honestly I wouldn't want autopilot until it is legit reliable. Instead, these are defense in depth features. The computer helps prevent certain mistakes as I make them, but never is in primary control of the vehicle.


Subaru's system is simply fantastic and you can tell they thought a lot about the "human condition" (human weaknesses) while designing it.

Lane keep assist is very subtle; I describe it like wind blowing on the side of the car. You can trivially overpower it, and it will beep if you exit the lane (without the blinker on), with or without lane keep assist active.

The whole package of safety features is wonderful and impressive for something starting at around $22K.


I drive a Nissan myself, but from the sounds of it it's about the same. LDW (just warning in my case), FEB + pedestrian/moving object detection, blind spot detection, intelligent/adaptive cruise control. (There's also a lane keeping system option called ProPilot, but I skipped out on that - I don't think I'd trust it in the regular rain anyway)

Anyway, my point is these things are common now, and while getting the full package might cost a premium, the more important parts (like FEB) are certainly becoming fairly standard.


From what I understand, Subaru develops their own. They are the only ones where there are two cameras in the front that both see in color.

We have a 2017 Subaru with all of these and it's really excellent.


I have ProPilot. It's pretty great. In heavy rain it shuts itself off and tells you that it is unavailable in bad weather.


That's great news. ProPilot sounds ideal for dealing with a daily stop-and-go commute without the cost and risk of a Tesla.


I think Tesla is losing out here by calling what is little more than adaptive cruise control plus lanekeeping "autopilot." Adaptive cruise control + lanekeeping + collision avoidance is available on a ton of cars now, even economy cars, and it works great. I regularly make a ~200 mile drive on the Honda system and it works 100% of the way. But when you sell it as Autopilot instead of driver assist, people have much higher expectations and blame your car rather than the driver - though there very well could be serious problems with the Tesla implementation.

Lanekeeping is actually very nice. The system in Teslas and most (all?) others do require you to keep your hands on the wheel and pay attention, but having to constantly manually steer the car is much more fatiguing than you would think. It's really annoying to drive cars without lanekeeping now.


My concern is, if you have both cruise control and lanekeeping, what's left to keep your attention and stop you zoning out?

I don't find steering onerous, but it requires just enough attention to keep you alert.

(I've never used a lanekeeping system, though. Maybe I'd like it, I dunno)


I use lanekeeping on the freeway, usually with adaptive cruise control as well (subaru). I focus more on the Big Picture of all the cars around me, can see further and have a better sense of who is doing what. That's anecdotal, of course, and could well be placebo effect, but I still pay a lot of attention to the road.

Plus, lanekeeping can be really annoying if you rely on it all the time. The car sometimes tends to drift back and forth in a lane a little between corrections, instead of just going straight. So: you're still steering. It's just that you're steering less and have a defense in depth against loss of focus, and can drive without exhausting yourself.


I rented a Model S for a weekend and drove about 1,000 kilometers in it.

Naturally, I tried the Autopilot feature on a highway, but I wasn't too impressed. There was a major "bump" (negative G-forces), and the car tried to swerve into the other lane (this was within the first 40 minutes of my drive), and that made me distrust the auto steering feature.

As I think you're onto, I DO feel that auto pilot is the future, but we're not there yet - let's improve the existing life saving features (and not disconnect them, looking at you, Uber).


Personally I hate cruise control. I prefer to be actively engaged when driving. Either I am driving 100% or I am not driving at all; there is absolutely no middle ground, in my opinion.


I can understand that. I like cruise control because it lets me focus 100% on steering and watching the road around me.

Trying to maintain a steady speed and conserve gas is a fun challenge, but a bit pointless, because a) it's a distraction from more important tasks, and b) the computer can usually do it better than I can.


It's also more comfortable for your passengers.

I did a 6-hour trip with a friend who continuously put his foot up and down on the throttle, and it was the most gruelling car trip I've had, I think.

When I asked he said it was "to keep control on the car". I'm a patient friend.


You're a better friend than I am. That truly is torture, and is the reason I volunteer to drive on ALL trips. I don't care how long. People who can't maintain proper speed without either flooring it, or taking their foot off the pedal, shouldn't be driving.


>People who can't maintain proper speed without either flooring it, or taking their foot off the pedal, shouldn't be driving.

Not entirely true. I've been to German driving school and my father really pushed the concept of smooth throttle control on me as a beginning driver. Be smooth always. However, as I was able to afford higher and higher performance vehicles my take on the smooth maintaining of speed without noticeable accelerator input has changed. While I'll drive my SUV very smoothly, when I drive my manual six speed German roadster, my style is entirely different. Because of the weight, size, and HP, it's really not possible to drive it smoothly outside of tossing it into 6th which isn't terribly "fun". In the gears where it's "fun", it's a very much "on" or "off" throttle experience simply because of the HP produced by the engine.


I drive a manual transmission BMW. The shifting, and speed control can be as smooth as glass. It's the driver not the machine. Take consideration for your passenger, and everyone else on the road.


Yes, but it's not a 577 HP track-tuned roadster. I can drive my 320 HP manual sedan quite smoothly as well. There are some cars where:

a) you don't want to drive them smoothly because that's no fun - even just for the exhaust note, and b) it's actually difficult to drive them smoothly because of the torque/HP.

It doesn't mean you are a bad driver, it simply means you've adjusted your driving style to match the car you are driving.


I DOUBT OP was taking a six hour road trip with his friend in a 577 HP track car. Which is the point here. It's awesome that you have a race car, but comparing that to the rest of our daily drivers makes no sense.


If I didn't know better, I'd think you were describing my wife.

In her defense, she learned to drive in a very different environment (dense Chicago surface street traffic).


I've got a friend that drives like that too. Sooooo annoying. Speed up, slow down, speed up, slow down. On and on. Ugh.


Cruise control on my car actually gains me about 1 mpg. This is specifically because the car "knows" that the speed won't be changing much, so it locks the torque converter completely and drops the engine RPM by 100~200. Much of that is a direct result of the mediocre Ford implementation of the 2005 Five Hundred's CVT and the fact that it predates adaptive cruise control, but the gain has a physical reason and follows directly from assumptions the code can only make once the car is in that state.


We've actually moved quite a bit away from "100%" direct control.

Anti-lock brakes were the first such system. Before, you had to pump the brakes in an emergency, a practice that was difficult to execute even without the shock of an impending accident. That system alone certainly saves thousands of lives every year.


Ahem. That is purely semantics -- ABS makes the car perform more like what you are conditioned to expect (under normal driving conditions). That's the whole success of ABS; it saves lives precisely because it conveys the illusion of you still being 100% under direct control when in fact you would be careening out of control otherwise.


One of the things I've heard mentioned by professional driving instructors, without a good citation, is that many more accidents could be avoided if people trusted ABS more. Specifically, when you're braking hard/panic braking and the ABS kicks in, the shock of the rapid firing against your foot causes you to back off pressure on the brake pedal. I was specifically instructed to stand on it anyway to see what the car behaved like when truly panic stopping and maintaining pressure, but I definitely had an instinctive reaction to let off the brake when the ABS kicked in. If you're ever in that situation, press harder, don't let up!


That's why cars now have Brake Assist (BA), which will brake more than you're actually requesting with the pedal if you brake suddenly.

https://en.wikipedia.org/wiki/Emergency_brake_assist


Yes, my driving instructor (aka my father, employing the paternal prerogative of never citing anything ever) used to say the same thing.


The average driver will probably never actually experience ABS. It's a good failsafe that is rarely employed... if it is, you definitely realize it.


If you live in the US Midwest or New England, though, the likelihood of never having experienced ABS kicking in is close to zero. Braking on ice is not the most fun experience.


I live in the Midwest, and I have yet to experience ABS, across the three vehicles I've owned and 400-500k combined miles.


I have ABS kick in around once or twice a season. I mean, sure, if you live somewhere it doesn't snow you may never experience ABS, but if you live in a climate where it snows you definitely will.

That being said, it's always been while stopping for a light, not in a case where I had to swerve, so it's never prevented a crash for me, but it really is comforting to have.


Around Houston I was surprised to see many warnings about ice on the roads. Apparently even in Texas, on a cold night, a road (especially a bridge) can freeze, resulting in black ice.


It's important to practice once in a while. Find a clear road where you're not putting anyone else at risk and get used to the feeling, so you'll not release the brakes in an emergency.


Cruise control is great where there's long stretches of open road, light traffic (LOS A or B [1]), and little need for frequent braking.

The star use case is to set cruise to very near the speed limit, so that after acceleration events like overtaking, you coast back down to highway speed.

It's a low-effort way of ensuring that one will be compliant with speed laws most of the time, yet maintaining a steady pace. I too prefer to be 'actively engaged' while driving, but in my opinion the reduction of constant acceleration input is a welcome convenience.

'Adaptive' cruise control, on the other hand, feels to me like riding on a tenuous rollercoaster. It's intended to let cruise control be usable in packed traffic, but it requires one to cede a lot of trust and control to the machine in ways that physically make me uneasy -- and it doesn't help that the exact behavior differs between models and manufacturers, so that trust doesn't automatically transplant into a different car.

Part of the problem is, again, with terminology. Ever since Adaptive Cruise Control proliferated as a term, it drew a parallel to classic 'Cruise Control', which I think is a mistake. Classic Cruise Control is a fire-and-forget, non-safety feature that's simple to reason about: do I want the car to gun it at a constant 70 mph, or no? You can run a quick mental judgement call and decide whether to engage it or leave it off.

'Adaptive' cruise control is fundamentally about maintaining following distance, i.e. tailgating restriction. It's a safety feature. It's a button to "proceed forward not exceeding target speed", but if it gets disengaged for any reason then you can easily overrun into the car ahead. It's a safety feature with the UI/UX of a non-safety feature, so it's always opt-in (!) -- which is simply horrific.

All safety features in vehicles should be either always-on, or opt-out, and NEVER opt-in. On a modern car, tailgate restriction should be on by default, with a button unlocking the car into free-throttle mode. Braking -- alone -- should never disable a safety feature.

[1] https://en.wikipedia.org/wiki/Level_of_service
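To illustrate what "maintaining following distance" amounts to, here is a toy sketch: target a fixed time gap behind the lead car and adjust speed proportionally to the gap error. The gain and the 2-second gap are illustrative assumptions, not any production tuning.

  def acc_speed_command(own_speed_mps: float, gap_m: float,
                        set_speed_mps: float, time_gap_s: float = 2.0,
                        gain_per_s: float = 0.5) -> float:
      """Proportional controller on the gap error: back off when too close,
      never exceed the driver's set speed, never command a negative speed."""
      desired_gap_m = own_speed_mps * time_gap_s
      correction_mps = gain_per_s * (gap_m - desired_gap_m)  # negative if too close
      return min(set_speed_mps, max(0.0, own_speed_mps + correction_mps))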


ACC/ICC is not a safety feature in the general sense. It might be termed as a safety enhancement to cruise control, in which case it should be opt-out when entering cruise control (which it is, in my case) and which disengages when exiting cruise control (which is done by braking, and that is well known).

On ceding trust - at least with the ICC system in Nissans, it's A) far back, which gives more reaction time B) quite easy to tell when it sees the car in front vs when it doesn't. You're ceding trust, sure, but you can also verify easily.

Your 'tailgate restriction' bit is effectively a more aggressive form of collision warning/forward emergency braking, and FCW+FEB as far as I know is available on all or at least most vehicles with ACC/ICC. Unfortunately, the realities of city driving mean that 'maintaining distance' is a goal in some cases (i.e. just got cut off, tight merges, etc) rather than an absolute directive - frankly, something trying to force me to a certain distance away from the car in front of me would be more aggravating than useful.


> if it gets disengaged for any reason then you can easily overrun into the car ahead

Are there cars that have ACC and don't have AEB (Automatic Emergency Braking)?


Furthermore, if an ACC system is getting disengaged, you're going to coast to a stop. Not accelerate straight into the car in front of you.


I like it when driving through areas with strictly enforced speed limits. Otherwise I'd collect way too many tickets.


I've been wondering exactly the same thing over the last week, since all this came to light. It seems like the worst of both worlds. I would personally feel safer knowing either that I'm supposed to drive or that I'm not. This halfway system means the better the autopilot gets, the more reliant I would naturally become, and then I wouldn't be ready to react or be in the frame of mind where I can. How airline pilots stop themselves becoming overly reliant on their autopilots is also a concern. It's that classic situation where two people each think the other will take care of it, then no one does: "I thought you were going to do it."


I don't own a car so I drive very rarely (once a month or less). I recently went on a week-long roadtrip in a rental Subaru which had their EyeSight system which does adaptive cruise (follows the speed of the car in front) + lane keeping. I'd say at the end of the day I was noticeably less exhausted than I usually am after a day of driving.

The Subaru system will not let you let go of the wheel for more than 15 seconds (after that it will instantly disengage), so it's more to save your effort of continual minor corrections. It also disengages as soon as it's confused in the slightest (faded lines, lines at an angle etc)


I'm looking forward to trying adaptive cruise control on the new Subaru my dad got recently. I pretty much stopped using regular cruise control years ago because the roads where I mostly drive are busy enough I find that it encourages driving in a way that prioritizes maintaining a constant speed even when that's different from the flow of the traffic.


So human drivers are terrible at driving for any long period of time, especially on relatively consistent "boring" highways.

Guess what computers excel at? Driving consistently on consistent highways.

The Tesla Autopilot is supposed to be the always aware and paying attention portion on these cases where a human driver would be very likely to start texting or dozing off. Now, it's not fully autonomous and may well decide it can't handle a situation (or apparently try to drive you into a barrier to see if you're still awake...). In this case the human driver who is somewhat zoned out needs to take control instantly and correct the situation, until they can safely re-engage autopilot (or pull over and make sure they're still alive, etc).


> Guess what computers excel at? Driving consistently on consistent highways.

Watching the video, would you say the computer was excelling, or that the road was radically unusual? I wouldn't.


The location is the end of some tidal flow lanes rejoining the main freeway lanes with an unusual configuration of lane markings and barriers. I drive past the spot all the time and can totally see a computer messing up here.


In other words, "works great if it works, might actively try to kill you at random, there's no clear way of telling when it might try the latter, it's your fault in any case." Yeah, that's very much the safe driving paradise I'm being promised "in two years" for a decade now, by the technooptimists. It's almost as if the technology is at 50%-ready instead of the 90% that marketing types seem to peddle, who would have thunk?


pretty much


The computer decided that staying in the middle of a wide lane was more important than avoiding a large obstacle where there was sufficient room on either side. That's bog-standard target-fixation fallacy. Is that excelling?


It might not have been radically unusual, but that doesn't make it consistent either... (it looked to me like lanes were closed)


> "Guess what computers excel at? Driving consistently on consistent highways."

Possibly on paper. In reality, as of right now, computers are clearly far from excelling at this specific task.


Assuming perfectly spherical cars on perfectly consistent highways in a vacuum.


OK, but there are clearly cases where the autopilot gets into trouble, and the human needs to take control again quickly. The human driver absolutely cannot doze off.

Now, I realise people in old-fashioned non-autopilot cars can and do doze off, and that's very dangerous. But it's not clear to me how the autopilot improves that situation. Relying on the autopilot actually encourages you to doze off.

We already have simple remedies like "pull over if you feel tired" and "never ever pick up your phone while driving (or you'll lose your licence)"


Expecting "drivers" to be able to instantly switch from "somewhat zoned out" to having the situational awareness to resolve the problem that AP fails at is unrealistic.


There are two very real issues/threats here.

The first is that people are being put in harm's way, either by a false sense of trust invoked by the name or by the mixed messages from Tesla.

The second is that, if the first is left unchecked, Tesla could single-handedly set back autonomous driving for everyone by souring public and government opinion.

It needs a new name that aligns better with what it can do. It could be a safety system which gently corrects the driver and takes over in an obvious emergency, internal or external. As it stands now, it is just dangerous.


The problem is that Autopilot actively engages you even less than highway driving already does. I'd argue this exacerbates some of the problems that cause humans to check out.

I don't know what the answer is, but it feels like GM's Super Cruise does a more adequate job of acknowledging the realistic limits of the technology, and it explicitly whitelists roads where the technology is available for use.

I personally think that without some sort of sensors or beacons in the road, autonomous driving via camera and LIDAR sensors is never going to be good enough to achieve level 5 autonomous operation.


I think the world modeling is the part that autonomous cars can obviously be better at. They have all sorts of advantages over humans; access to much more detailed maps than the average person can recall, multiple types of sensors pointed in multiple directions, a system designed to integrate all that information in a way that is advantageous for driving (we have a system evolved for whatever it evolved for) and so on.

It's the sophisticated behaviors necessary to safely drive through that world model that are the issue.

The success of emergency braking systems (which aren't advertised as "driving" assist) is pretty good evidence that the sensors can serve well as input to safe behaviors.


If a human can navigate current roads, why shouldn't a computer be able to? We may need to develop new types of sensors for vehicles, but that seems like a better/easier plan than installing beacons on every road, everywhere.

And what about beacon maintenance? Seems like most cities have a hard enough time keeping up with pot holes, lines, etc. as is.


If a human can write a novel, why shouldn’t a computer be able to?


I have no reason to believe a computer cannot write a novel.


They probably need to be able to reproduce before they can write a compelling novel.


Beacons are a great idea -- have some standardized beacon system, and a public map of autopilot-approved roads.

Following the beacons safely would be a vastly easier problem than trying to completely replace a human driver in all situations, but it would still give you about 90% of the benefits.


Lines fade away because there is little money for maintenance but beacons that cost multiples are the solution?

The first thing I thought when I read beacons: Hackers are going to have a field day with them. Add malicious beacons to streets and cars will drive off road at high speeds.


Well, look at cat’s eyes (sometimes called Botts’ dots in the US, I think?)

I assume they’re more expensive to install than just painting a few lines, but they’re very robust and long-lasting, and they’re fantastic for human drivers. It’s not a stretch to imagine something similarly useful for computerized cars, that links to a standard road database.

Hacking is a risk, sure. I envisage you’d lock it down by having a cryptographically signed master map; if the observed beacons diverge from the map, the autopilot system would refuse to proceed. (OK, I guess that allows a DoS attack at least.)
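A rough sketch of that lockdown, purely to show the shape of the check: verify_signature() is a stand-in for whatever signature scheme the map publisher would actually use, and the divergence rule (observed beacons must exactly match the signed map's expectation for this stretch) is an assumption for illustration.

  def beacons_match_map(observed_ids: set, expected_ids: set) -> bool:
      """Refuse to proceed if what the car observes diverges from the signed map."""
      return observed_ids == expected_ids

  def may_engage_autopilot(map_blob: bytes, signature: bytes,
                           observed_ids: set, expected_ids: set,
                           verify_signature) -> bool:
      if not verify_signature(map_blob, signature):   # hypothetical verifier
          return False                                # tampered or stale map
      return beacons_match_map(observed_ids, expected_ids)

As noted, the worst an attacker could then do is deny service (make the car refuse to engage), not steer it off the road.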


I use Autopilot every day. I think it's amazing and borderline life-changing if you are like me and dread the monotony of a daily commute. I don't think most people realize just how mentally fatiguing the act of driving (constantly making all sorts of micro adjustments) is until they use Autopilot for a while and see the difference for themselves. I certainly didn't. I can now drive for many hours and feel just as alert as when I first got into the car. I was never able to do that before Autopilot.

Yes, you still need to pay attention. It is hard for me to believe that any Autopilot user doesn't know this because you learn it by experience almost immediately. People text and drive all the time in manual cars, but for some reason when they do it in a Tesla, we declare that Autopilot lulled them into it.

While I agree that you need to be ready to grab the wheel or hit the brake on short notice, I disagree about what that means. There is a big difference between having to be ready to do those things and actually having to do them every few seconds. This difference wasn't intuitive to me, but in practice I've found it to be extremely mentally liberating and true beyond question.

I've also found that Autopilot makes it easier to take in all of your surroundings and drive defensively against things you otherwise wouldn't see. One thing that has struck me, as I now see more drivers than just the one in front of me, is how many people are distracted while driving. If I glance continually at an arbitrary driver on my way to work, there is a greater likelihood than not that within 10 seconds they'll look at a phone. That is terrifying to me, but it is also good information to have as I drive -- I am now able to drive defensively against drivers around me, not just the one in front of me.

I've also found I'm more able to think or listen to music or podcasts than I was before Autopilot. I could never get much out of technical audiobooks, for instance, while driving manually. But Autopilot has changed that. I hesitate to say this because I worry that I will give the impression that I am focusing less on the road, but I don't think that's what's happening. My mental abilities feel much higher when I am not constantly turning a wheel or adjusting a pedal. I'm listening to music, podcasts, or audiobooks either way -- I just get a lot more out of them with Autopilot. I think it goes back to the lack of mental fatigue.

Whatever you make of my experience, I urge you to try it on a long drive if you ever get an opportunity. I have put over 60k miles on Autopilot, I have taken over on demand hundreds if not thousands of times, and I've never had a close call that was Autopilot's fault.


> I think it's amazing and borderline life-changing

That's a very unfortunate choice of words.

> I have taken over on demand hundreds if not thousands of times, and I've never had a close call that was Autopilot's fault.

Maybe you are an exceptional driver, to be able to be vigilant at all times even when AP is active.

Even so, it would probably only take one instance where you weren't in time to change your mind on all this (assuming you'd survive), so until then it is a very literal case of survivorship bias.

Which makes me wonder how that person that died the other week felt about their autopilot right up to the fatal crash.


> That's a very unfortunate choice of words.

Solid lol, but I stand by it!

> Maybe you are an exceptional driver, to be able to be vigilant at all times even when AP is active.

I've had driving moments I'm not proud of. But it's because I was being dumb, not because Autopilot made me do it.

I think the relevant question is: does Autopilot make people less attentive? I have no data on this. My personal experience is that most drivers are already inattentive, and Autopilot (1) makes it easier to be attentive (for a driver who chooses to be); and (2) is better than the alternative in cases where a driver is already inattentive.

> Even so, it would probably only take one instance where you weren't in time to change your mind on all this (assuming you'd survive) so until then it is a very literal case of survivors bias.

I hope I'd be more thoughtful and independent than that, but maybe you're right. But I don't think my view in the face of a terrible accident should be what drives policy, either.[1]

On the issue of survivorship bias, I would add that "Man Doesn't Text While Driving, Resulting in No Accident" isn't a headline that you're likely to read. I see a much greater quantity of bias in the failure cases that are reported and discussed than in the survivorship stories told (as evidenced by the proportions of comments and opinions here, vs in a user community like TMC or /r/teslamotors). I posted my experience because I think it brings more to this comment thread in particular than my survivorship bias detracts from it.

> Which makes me wonder how that person that died the other week felt about their autopilot right up to the fatal crash.

See: [1]


> I think the relevant question is: does Autopilot make people less attentive?

There are large bodies of knowledge about this gained from studies regarding trains and airline pilots and the conclusion seems to be uniformly that it is much harder to suddenly jump into an already problematic situation than it is to deal with that situation when you were engaged all along.

> On the issue of survivorship bias, I would add that "Man Doesn't Text While Driving, Resulting in No Accident" isn't a headline that you're likely to read.

It's one of the reasons I don't have a smartphone, I consider them distraction machines.


I rented one yesterday (an S); it was my first time in a Tesla. There are a lot of things I didn't like - the interior is ugly and the giant iPad-like console is stupid and hard to use while driving without any tactile switches. But it IS the future. And WHAT a car. Just f'n brilliant as a package.

BUT...I feel like autopilot should only be for traffic jams on highways. It's downright dangerous the way it forces the driver to disengage. The adaptive cruise control is much better as at least you still have to pay attention but the car manages the throttle and following distances efficiently.


> Cruise control, now, that really is useful because it automates a trivial chore (maintaining a steady speed) and will do it well enough to improve your gas mileage. The main failure condition is "car keeps driving at full speed towards an obstacle" but an automatic emergency brake feature (again, reasonably straightforward, and standard in many new cars) can mitigate that pretty well.

Adaptive cruise control also helps; if the system detects a car in front slowing, it'll slow at a roughly equivalent pace to avoid a collision.


I think right now it's only good for stop and go traffic conditions and that's about it. That's still really useful for a lot of people.


So, like the traffic every day along US-101?

This self-driving car craze would be in a very different place if Silicon Valley had halfway decent mass transit...


It’s good in traffic jams. You can relax a lot more than driving yourself or autopilot at high speed. The worst case scenario is a fender bender.
