Tesla Autopilot (tesla.com)
449 points by chetangole on June 17, 2017 | 340 comments



Watch this at 0.25x speed or slower to see what's going on. This is a carefully chosen environment. Every place it drives has very clear highway centerline markings. It seems to be highly dependent on those for guidance. Sometimes it can't quite identify the road edge, but the centerline provides a position reference.

The inputs seem to be road line recognition, optical flow for the road, and solid object recognition, all vision-driven. Object recognition is limited. It doesn't recognize traffic cones as obstacles, either on the road centerline or on the road edge. Nor does it seem to be aware of guard rails or bridge railings just outside the road edge. It probably can't drive around an obstacle; we never see it do that in the video.

This looks like lane following plus smart cruise control plus GPS-based route guidance. That's nice, but it's not good enough that you can go to sleep while it's driving.


It terrifies me to think of relying on image recognition software to correctly determine an upcoming crossing road (with cars zipping across) so it can properly slow down, rather than get broadsided at full speed. Or a number of other life-and-death situations (which are common during driving). It just seems fragile (what if the 'vision' is somehow impaired, f.ex. blinded by sunlight, or the road markings are wrong or obscured) when the failure mode is so catastrophic.

I suppose once statistics start to prove that these cars are safer than human-driven ones, we can chalk it up to an irrational fear, but for now it seems crazy to me to put my life in the hands of an AI, when a mistake means that I die or kill someone, rather than play the wrong song on Spotify.

I don't fully understand why more effort is not put into a hardware solution, where roads are simply marked up for self-driving vehicles, e.g. magnets lining the lanes or something like that. Of course a more expensive solution, but seems like it would make the vehicles themselves a whole lot simpler and safer. Begin with inner cities, where the area is limited and traffic is most complex.


> It terrifies me to think of relying on image recognition software to correctly determine (... snip)

Do you not drive using only your eyes? If you're not terrified of the sensors, is it the software that terrifies you? Turing's central belief was that the human brain was 'just' a computer.

Regarding doing things like embedding reflectors in roads and other ways to simplify lane holding, I completely agree. But we can't forgo the cameras etc. that deal with situations that don't contain reflectors.


The thing is, human "machine vision" has graceful failure modes. You don't switch from seeing cars to seeing nothing to seeing an elephant just because the input got a little noisy. The same cannot be said about current ML demonstrations - because they operate on just vision. Humans continuously reconcile visual input with their model of the world and with other inputs, to the point of overriding visual data if needed.

I suppose you could make ML avoid sharp changes in recognition, but I don't trust the current neural-network models to do so reliably.
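
Something like the "avoid sharp changes" idea could be sketched roughly as below, assuming a per-frame classifier that emits class probabilities (the class count, alpha value, and wrapper class are all made up for illustration, not anything Tesla has described):

```python
import numpy as np

class SmoothedDetector:
    """Exponentially smooths per-frame class probabilities so a single
    noisy frame can't flip the detection from 'car' to 'elephant'."""

    def __init__(self, num_classes, alpha=0.2):
        self.alpha = alpha  # weight given to the newest frame
        self.belief = np.full(num_classes, 1.0 / num_classes)  # running belief

    def update(self, frame_probs):
        frame_probs = np.asarray(frame_probs, dtype=float)
        # Blend the new frame's probabilities into the running belief.
        self.belief = (1 - self.alpha) * self.belief + self.alpha * frame_probs
        return int(np.argmax(self.belief))  # index of the most likely class
```

With a small alpha, one bad frame only nudges the belief, so the output class changes only if the new evidence persists across several frames.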


Another issue is detecting problems and fixing them. For example, if dirt gets into my eyes while driving, I immediately know and do what is necessary to clean my eyes before continuing on my drive. On a car, what if there's a small speck of dust on a camera that blocks some part of the view? Is the car smart enough to figure out that something is wrong and refuse to go?


The AP2 feature in the Model S currently alerts the user when vision is obstructed and refuses to go into autopilot mode. If you're in autopilot already it makes noise and flashes warnings to tell the driver to take over. I assume if the driver doesn't it will slow down and come to a stop, but the one time this happened to me I didn't give it a chance to do so!


You've never been temporarily blinded by a low sun? Never sneezed and had your eyes closed at the wrong moment?

This is the same thing.


Humans know when they can't see properly. The problem with adversarial examples is that the neural net thinks it's doing great. You have to manually specify every possible failure mode for the neural net beforehand -- if you miss one, you might have an accident.


Motorcyclists have a word for this phenomenon. It's called SMIDSY - "sorry mate, I didn't see you." Clear road, good vision, bike traveling in a straight line within the speed limit, car still somehow pulls out and hits the bike. Even worse, the times when the bike is stationary at a red light and the car just drives through it.


And it's not limited to cyclists. I think the general term used in cognitive science is "Looked but failed to see".


The real world is not filled with adversarial examples. Further, a NN often doesn't report high confidence when given off-the-wall inputs. And this is a car, with GPS, where simply stopping is a completely viable option 99% of the time.

The truth is people really, really suck at driving, but it's easy enough that the vast majority of mistakes don't result in any problems.


>simply stopping is a completely viable option 99% of the time.

For a naive definition of "viable" that does not take into account the other people who need to use the roadways. A car stopping, or even slowing drastically, in a lane will have the same negative effect on the transportation system as a minor accident.


You mean like we have every day already? Human drivers constantly make mistakes, and in many cases they aren't even aware of them or of their effect on the drivers around them. Most don't cause an accident, but the behavior doesn't get corrected, and that is what it eventually leads to.

Slowing for a turn then deciding to continue on, merging without checking blindspots, running red lights, speeding next to the short on-ramp lane, not following a yield to right turn sign, ignoring people in crosswalks, etc. All of these things happen all the time, I see at least one example every time I go out somewhere.


These cars are unlikely to stay stopped for long as there are going to be passengers in them.

Now, we could get into hypothetical situations, but the reality is that minor accidents are already really common. As with so many other things, machines can be better than people without being perfect.


I really want to see how cars deal with the massive code explosion as they have to account for things like tak tak gangs.

I posit now: "as the problems with automated cars become evident, solutions provided will tend towards Skynet as t increases."


> Humans know when they can't see properly

There are plenty of conditions that are more difficult for humans: cars painted the same color as the asphalt seem to disappear on highways. Also, fog.


Usually. Drunk and sleepy drivers blur this distinction, but you still make a good point.


Aren't generative adversarial networks designed to solve this?


It's not. When you're being blinded by sun, you don't suddenly see elephants or spaceships or comic books. You also don't think the world just disappeared - you know to just ignore the visual input.

Also, I'm more worried about temporary, unexpected configuration of things that suddenly make a NN see something weird.


That doesn't sound like an insurmountable problem.

Humans don't see elephants suddenly because a prior is telling them that an elephant is improbable in that context.

Something like that in a computer system shouldn't be above the current capabilities.
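
A minimal sketch of what "a prior telling you an elephant is improbable" could look like in code; the context prior, class names and probabilities below are all invented for illustration:

```python
import numpy as np

# Hypothetical per-context prior: how likely each class is on a highway.
HIGHWAY_PRIOR = {"car": 0.70, "truck": 0.25, "pedestrian": 0.049, "elephant": 0.001}

def posterior(likelihoods, prior=HIGHWAY_PRIOR):
    """Combine classifier scores P(image | class) with a context prior
    P(class | highway) via Bayes' rule (up to normalization)."""
    classes = list(prior)
    unnorm = np.array([likelihoods[c] * prior[c] for c in classes])
    return dict(zip(classes, unnorm / unnorm.sum()))

# Even if the net is fairly confident it sees an elephant, the prior
# pulls the posterior back toward the plausible classes.
print(posterior({"car": 0.2, "truck": 0.1, "pedestrian": 0.1, "elephant": 0.6}))
```

In this toy case the "elephant" posterior drops below 1% despite the net's 60% score, which is the kind of reconciliation with a world model the parent comment describes humans doing.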


I agree it shouldn't be above current capabilities. But I'm yet to see anyone doing something like this. And I'm definitely wary of taking the results of toying with "deep learning" and saying we can put that in a car and expect good results - which is an impression I sometimes get from articles on these topics.


Also, this will stop working the day an elephant escapes from a zoo and manages to get onto a highway. All the autopilots will declare the elephant's presence impossible, and will happily crash into it.


Not likely. Computers can do a double-take, just like humans.


I'm not sure the teslas are going to be trained to see elephants either!


You don't necessarily have to train it to see elephants, but neural networks fail in pretty interesting ways. Things that are very obvious to you and me can confuse a CNN [1].

[1] - http://www.evolvingai.org/fooling


These look like a digital Rorschach test. With the minor difference that a human "knows" the image is not really an object and is being asked to interpret it in some way, whereas the machine has no "knowledge" or "understanding" but has an imperative to match the input to something in its repertoire.

As a human, if you asked me to match these synthetic images to a most-likely real-world match, then my responses will also look "confused."


Julia Evans: How to trick a neural network into thinking a panda is a vulture

https://codewords.recurse.com/issues/five/why-do-neural-netw...
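
The trick in articles like that one generally boils down to a gradient-based perturbation (the "fast gradient sign method"): nudge each pixel a tiny step in the direction that increases the loss for the true label. A rough sketch, assuming a PyTorch image classifier `model` and a batched, correctly labelled input (both are placeholders, not any real self-driving stack):

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, epsilon=0.007):
    """Fast Gradient Sign Method: returns a perturbed copy of `image`
    (shape [1, C, H, W], values in [0, 1]) that looks unchanged to a
    human but can flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that raises the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The unsettling part is that epsilon can be small enough that the perturbation is invisible to people while the network's confidence in the wrong class is very high.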


The thing is that computers also rate them with 99% certainty.


Those aren't too bad (out of context of the Tesla article), I can see how the ai might misinterpret those images. The bagel one, for example, is pretty on point.


Those examples show that ML algorithms "see" things much differently from us. They also show they're very vulnerable to nonsense data. How can I be reasonably sure that some random play of shadows and light, which would never fool a human, won't trigger a failure mode in the neural network of a vision-only self-driving car?


Or worse - paint jobs created specifically to muck with image recognition.

Or when the graffiti becomes intentionally malicious (thinking of a variation of Wiley E. Coyote's tricks with paint and solid walls)?


Would these be the same graffiti artists that are currently maliciously painting over road warning and direction signs to fool human drivers? Say, whiting out a 'low bridge ahead' sign, so that trucks and double-decker buses crash into it...? Because, nobody is doing that now, and if they tried it, they would be prosecuted, probably for murder or attempted murder.

I don't understand where some people get the belief that automated vehicles will suddenly become targets of murderous or sociopathic pranksters?


> I don't understand where some people get the belief that automated vehicles will suddenly become targets

There is precedent when it comes to the automation of people's jobs (truckers, cab drivers, courier services) and fanatical condemnation of technological advancement; see everything from the Luddite's infamous wooden shoes to the spiking of trees to stop logging.

In the forests near my hometown, some environmental fanatics will string up wire across motorcycle and ATV tracks, or create blockages and attack riders with crowbars and pipes. I wish I were kidding.

As for current graffiti artists - they're taking down stop signs and road markers, defacing handicapped parking signs with stickers, tagging any and all road signs, and so forth. These are minor nuisances for human drivers, who can usually easily identify and compensate for such changes. Computers, even those running neural networks, can't compensate so easily.


Last year, there was a popular hoax about a graffiti of a Road Runner tunnel: http://www.snopes.com/road-runner-tunnel-crash-rumor/


I guess the CNN was trained only with real images. They should add some [random noise], [crap], [glitch] and [collage] images to the training set. This may reduce the efficiency of the NN in the real world, but it will make it more difficult to find funny examples of misclassification.
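
Something like the suggested augmentation, as a rough sketch (the noise types, mixing ratio, and the idea of a dedicated "reject" label are all assumptions, not anything a vendor has documented):

```python
import numpy as np

def augment_with_junk(images, junk_fraction=0.1, rng=np.random.default_rng(0)):
    """Build 'none of the above' training examples: pure noise frames and
    crude two-image collages, all intended to be labelled as a reject class."""
    n_junk = int(len(images) * junk_fraction)
    h, w, c = images[0].shape
    junk = []
    for _ in range(n_junk):
        noise = rng.random((h, w, c))                    # random noise image
        a, b = rng.choice(len(images), size=2, replace=False)
        collage = np.where(rng.random((h, w, 1)) < 0.5,  # patchwork of two images
                           images[a], images[b])
        junk.extend([noise, collage])
    return junk  # train these with a dedicated "unrecognized" label
```

The point is just to give the network examples where the right answer is "I don't know", rather than forcing every input onto a real class.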


They will need to be if they are to be able to recognise all the potential real world phenomena they might encounter!


That's just Bayesian priors. Easy peasy for ML.


Haven't seen much Bayes in current deep learning trends though.


> You don't switch from seeing cars to seeing nothing to seeing an elephant just because the input got a little noisy

yeah, instead you get hallucinations, muscle twitches, seizures, mental breaks and many more ways a brain can fail to do its job.


Sure, but a) we have thousands of years of experience with those, and b) none of those are caused by little random noise in the visual input. Brains are incredibly robust against noisy data.


Our vision and persistent world model are good, but all humans are overconfident in their ability to drive in poor conditions (or even normal conditions, for many people.)

A computer isn't going to look down at a text message on a phone, fall asleep at the wheel, or get in a car while intoxicated - all of which occur literally every day. People take driving for granted because (at least in the US) we have to. Our sprawling suburbs couldn't function without it, so the barrier to entry is simply being a teenager.

Would I completely trust Tesla's auto-driving? No, not at all. But I at least trust them as much as I do an average driver, which is still just barely. The difference with Tesla's system is that one failure leads to improvement of the entire fleet, whereas one human failure maybe improves that one human driver. That means self-driving cars will continuously improve, while humans will just continue to be terrible drivers.


Yes! Tesla will take any death due to a failure of self-driving quite seriously indeed and work to fix the failure mode.

Whereas humans get the same kinds of fatal crashes over, and over, and over...

The Australian government has a "Black Spot" [1] program to provide federal funds (often millions) to fix up locations with a history of crashes. Bet it's cheaper to make those fixes to software.

[1] http://investment.infrastructure.gov.au/funding/blackspots/


Robustness is a key property of the methods and models used in self driving cars.

For a human, the noise isn't bad recognition; it's corrupted sensor data via distraction, one of the most common causes of accidents. For the AI agent, an elephant will never appear unless it's been trained to be recognized; more likely it's recognized simply as an obstacle, which is OK. But keep in mind that years of Google's autonomous vehicle success make this pretty improbable, certainly less likely than with the average driver.

Much of the fear here is irrational and natural when humans lose control.


I'm not trying to express a fear that comes from humans losing control. I don't fear that - in fact, I can't wait until the day comes no humans are allowed to drive on public roads.

I was just answering the concerns about self-driving systems relying solely (or primarily) on visual data from cameras. It's true this is how humans drive, but the human visual system is much more complicated than what the current (published) state of the art in image processing seems to be. I do not trust visual-only systems today (especially if they employ deep learning shenanigans).

I know those problems aren't insurmountable, but I'd feel much safer if they threw in a LIDAR there too.


While I'm sure more sensors give a better chance at consensus, I don't think relying solely on visual data is a problem if it's done well (low false positive/false negative rates, comprehensively trained, fault tolerant).


a) Well, no - we only have about 100 years of dealing with these problems in the context of driving cars, and we're really bad at it. b) That seems like shifting goalposts: what we care about is whether the driver's picture of the world is correct and whether its actions are sensible with respect to that picture. It just so happens that driverless cars' picture is visual (or close to visual), but if you're judging them on the entirety of their input and side effects, it's only fair to judge our brains by the same standard.


Some adversarial methods work on black-box networks - has any work been done on applying those to human vision? We know there is a wide range of simple inputs that produce unexpected results (e.g. optical illusions).


> You don't switch from seeing cars to seeing nothing to seeing an elephant just because the input got a little noisy.

You'd be surprised. All kinds of conditions can do really weird things to your vision. Migraine auras probably being the most common.


We don't drive with just our eyes. We fuse vision, sound, touch and proprioception. We self-locate easily. Most important though, is we do sophisticated visual, theory of mind and physics based inference.

If you take the example of Chess, humans and machines play very different styles. Computers replace predictive understanding with excellent memory and fast search.

Similarly for cars, we want to replace the lack of a powerful predictive model with some compensatory mechanism, such as sophisticated sensors. The problem is that, even with faster reaction times, better sensors and fuller awareness, these cars' compensatory abilities against a lack of powerful predictive models are still far from sufficient.

At this point in time, a car with more sensors and failsafes to augment the human against collision is safer than a self-driving car that depends on an absence of lapses in attention; that depends on proper behavior from the customer. Even dividing tasks - with the human controlling the steering (but allowing the car to take over when it is highly certain of danger) and the car controlling acceleration - would be safer. This setup acknowledges the human propensity to mind-wander when attention is not engaged.

Until self-driving cars no longer require human rescue in difficult situations, one can't rightly say that self-driving software is a better driver than humans.


If the brain is just a computer, its machine learning algorithms for image recognition at a busy intersection are plausibly still more accurate than any image processing software running on any computer that Tesla puts in its cars.


There are ways in which current machine learning is better than your brain -- tracking many objects at the same time is one of them.

Sure, it's worse in many other ways.


The ability to track multiple objects doesn't necessarily mean it's any good at integrating a solution. My camera can track multiple faces and smiles - that doesn't mean it can take better photos.


We are far away from hard AI (a computer that can actually think); some people are starting to believe we might never reach it. What we are currently succeeding at is developing soft AI (solving specific problems). The problem is that such AI may do fine when things are clearly labeled, but may not know what to do in unexpected situations. With a car, stopping may often be correct, but in other situations not stopping is actually best, etc.

Another thing is that your eyes can last you 100+ years; technology is not that durable and breaks more often, camera lenses get dirty, etc.

The cameras typically aren't as good as the human eye - for example, there are reports of a Tesla nearly driving into the truck in front of it only because, to the camera, the back of the truck couldn't be differentiated from the sky.

Yes, relying on cameras only is perfectly fine when you play a PS4 game, but when life is on the line you want extra safety checks. Kind of like how cars have brakes on every wheel and then also a handbrake, or how every car has a minimum of three stop lights (how often do you see one or more lights out on other cars?).

The belief that we have hard AI (a belief indirectly created by companies like Tesla, Google, Amazon, IBM, etc.) leads to the conclusion that just two cameras should be enough. In reality we have not made much progress in that area and have instead concentrated on specialized AI capable of solving specialized problems - and that kind of approach absolutely works better with more sensors.


Is the "computer vision" on Tesla good enough to ignore this false "lane marker" and use the actual yellow lane marker, not the "fake" one caused by the sun/shadow/pavement interaction?

http://i.imgur.com/fk3OvEW.jpg

This situation generated constant lane departure warnings for the driver.


I can't speak for Tesla. But I have a newest-generation VW with lane assist, and lots of road construction sites in my area where the normal white lane markers are "overridden" by yellow lane markers, which are narrower and often also take different paths. Once I'm inside a yellow lane and driving straight, it halfway works and the lane assist keeps that lane. However, at the first point where the lanes diverge (e.g. the normal white lane goes straight, but the yellow lane turns left to change over to the other side of the road), the car has no real idea what to do and mostly follows the wrong lane --> without manual intervention a crash in these conditions would be guaranteed.


Here in the southeast, tons of roads have leftover sets of lane markings from when lanes were shifted over. Or no lane markings at all, because of the lag between resurfacing and re-marking lanes.


Not sure but that's pretty confusing to me too tbh without close inspection. The car in front seems to be treating it as a fog line too.


I really want to see one of these put on the road in India and see how it works. That's the real litmus test for me.


The human visual cortex is far more powerful than the computers we can put in cars.


"I don't fully understand why more effort is not put into a hardware solution, where roads are simply marked up for self-driving vehicles, e.g. magnets lining the lanes or something like that."

How would "magnets" be any improvement over visual markings? Is there some sensor that can track magnets at greater distance, or with greater reliability than visual markings?

Besides, how many millions of miles of road would need to be equipped with these magnets around the world? What would that cost? Would it be acceptable that autonomous cars could only drive on roads that have been properly equipped?

It's worth remembering that self-driving AI does not solely rely on lane markings to navigate. It also can see the road edge, follow other vehicles, and cross-reference with a (learned/crowd-sourced) navigation model of the laneway. Additional sensor types, like LIDAR and radar, are often used in addition to cameras.

You can see a lot of this at work, in a complex London traffic environment, in this demo:

https://www.youtube.com/watch?v=cfRqNAhAe6c


"I don't fully understand why more effort is not put into a hardware solution, where roads are simply marked up for self-driving vehicles, e.g. magnets lining the lanes or something like that."

Volvo is plugging for magnets in roads in Sweden, so they can find lanes in the snow. It might happen, because it can also be used for snowplow guidance. In heavy-snow areas, posts, poles, and even over-road arrows (Japan uses this in Hokkaido) are placed to provide guidance. It's not intended to replace vision and LIDAR, just as additional guidance hints for bad conditions.


A simple RF transmitter in new versions of the existing reflector discs would do it.

It wouldn't need to do anything but "ping", the battery could last for years. You could even have a mechanism to disable it when the lines are repainted, though normally roads that use the existing version of these reflectors would have them removed when moving lines.

The transmitter would have to be powerful enough to go through a small amount of snow in some states.

The question is can we get the car companies to agree on a standard for this? Would it make sense to put it in the frequency range allocated for vehicle radar systems?

What we get instead is an attempt to define the whole system de jure, including full V2V protocols and specifications.


But what is the actual advantage of this RF system? Why is this any better than just using a camera and image processing?

OK, RF would mean the lane geometry can be "seen" through all weather conditions (fog, snow). But so does precise-enough mapping/fleet learning. Neither would help in detecting traffic and transitory obstacles that might be hidden by that fog - you still need other sensors for that!

A "safe" autonomous vehicle should perhaps simply slow down in adverse weather conditions, just as a human driver would. Even without additional sensors, its ability to see through snow and fog is certainly no worse than a human's.


We need transmitters in licence plates and on street signs too.


Why? So they can be "seen" through rain, fog and snow?

Reading road signs optically is a solved problem. Mobileye and similar systems have been doing this for a decade or more.


Well, you answered your own question.

Further, it would help learning in limited visibility, although they can already do that by remembering the locations of signs (people do that - do autonomous cars store city/state/worldwide road data?).


The fear is irrational because even now you can always be hit by other drivers who are not as careful as you. In my home town, certain people drive like crazy: ignoring traffic lights, going too fast, calling or texting while driving, etcetera. With autonomous driving, these people can game, work, drink, etc. instead. The main benefit of AI is that it drives more predictably and follows the traffic rules. Furthermore, cars themselves have become safer. If you look at traffic fatalities per country, you can observe that modern countries have fewer fatalities - because of airbags, crumple zones, designs that are less dangerous to struck pedestrians, and also traffic rules. To conclude, even though sensors might fail, they are better than human senses.


Seriously? Do you want to spend an incredibly huge amount of money to put down magnets to delimit lanes all around the world? When there are so many roads without any clear painted lane markings, when the paint itself fades away in a handful of years, and when the magnets would last a couple of years at best ONLY if embedded in the asphalt - which means spending 100 times the amount of money above? And when in 2-3 years' time we will very likely have self-driving cars far safer than humans? IMHO it is pure nonsense. It would be a clusterfuck orders of magnitude worse than the British building thousands of km of canals when, after 20-30 years, rail came to dominate transportation. But at least they had the excuse that they didn't know what was coming, and 20-30 years is an order of magnitude longer than 2-3 years. Furthermore, they spent an infinitesimal fraction of what you propose to spend.


"worse than the British building thousands of km of canals when after 20-30 years the rails were dominating transportation."

I'm not sure that's a fair analogy. Without the canal network to cheaply transport goods and resources in bulk, it's hard to imagine how Britain could ever have industrialised to the point where building a railway network became possible in the first place!

Besides, railways did not immediately obsolete the canals as you seem to suggest. Some canals were still being built and extended until the early 1900s (70+ years after the railway boom), and the last commercial freight services lasted until as late as 1970 - suggesting it was really motorways that "killed off" commercial canal traffic.

One key point of difference is that (like road freight) pretty much anyone could operate a freight service on the canals, whereas the railways were in the hands of a few huge corporations. That meant a lot of niche, point-to-point freight was more viable to transport over water than rail.


They tried the magnet thing but it isn't flexible enough. For instance during roadworks you can't easily move the lanes, it only works when everything is perfect. There was a test track in NL with this technology a few years ago.


Start with highways - they are grade separated, there are no pedestrians, no traffic lights, no intersections (besides merging), driving is often long and boring.


I agree that 'properly' marking roads for self driving vehicles is better. But this is a chicken and egg problem. There is no incentive to create such markings until self driving vehicles are common. And limiting self driving vehicles to properly marked roads will make them useless for most people, resulting in less funding for further development.


These would also apply to Lane Departure Warnings present on more and more cars each model year.


It doesn't have to rely on vision.

They probably use GPS+map for a low resolution location and trajectory planning. Vision should only be used for fine positioning and obstacle detection.
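
A toy sketch of that division of labour, with entirely made-up numbers and function names: the coarse lane-level position comes from GPS plus a map, and vision only supplies a small lateral correction within the lane (nothing here is based on Tesla's actual implementation):

```python
def fuse_lateral_position(map_lane_center_m, vision_lane_offset_m, vision_confidence):
    """Coarse placement (which lane, roughly where) comes from GPS + map;
    vision refines the lateral position inside that lane.
    Offsets are in metres, confidence is in [0, 1] - all hypothetical."""
    if vision_confidence < 0.5:
        # Poor visual fix (glare, worn paint): fall back to the map estimate.
        return map_lane_center_m
    # Otherwise trust vision for the fine lateral correction.
    return map_lane_center_m + vision_confidence * vision_lane_offset_m
```

The obstacle-detection half of the job can't be handled by the map at all, which is why vision (or radar/lidar) is still doing the safety-critical work even in this scheme.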


When did all algorithms become AI?


Whether an algorithm is AI essentially depends on whether the objective is 1) human level difficult and/or 2) heuristic.

It should probably be both to avoid moving goal posts. That would make even chess algorithms still AI.

Self driving cars and image recognition are both definitely still AI.

"AI" does not typically mean strong or general AI.


This video is from 7 months ago and is a concept demo. They would be quite a lot further along now. Read further down: "Your Tesla will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed."


They _might_ be further along. Everything you've quoted is vapor until it is demonstrated with their current hardware.


> This is a carefully chosen environment.

This is Los Altos - Los Altos Hills - Palo Alto route to Tesla's HQ on Deer Creek Rd. A very expensive area to live and therefore well maintained.

https://goo.gl/s9RJxw


> .25x speed?

That's a bit of overkill.

Regardless, there were two points in the video, at roughly the 0:48 and 1:40 marks, where the car stopped AFTER turning... I thought that was really strange behavior. Then, when it passed the pedestrians, it almost came to a complete stop, instead of moving slightly to the left and mildly slowing down.

Overall, it's a great case study. I think your criticism is technical snobbery rather than a look at what this means for the average person. The future is here, get ready. When techies complain that someone is doing something with 'x tool' and not 'y tool', I think those people are just throwing mud around and not actually contributing to the conversation. Build something better if you're going to nitpick over the technical details. It would serve you better, and all of humanity.


The future is here, just not from Tesla. Waymo and Volvo have systems which actually work. Waymo has real situational awareness, watching other road users and predicting what they might do next. Tesla is mostly following the lines on the road and stopping for interfering traffic.

Here's Tesla's "Autopilot" killing someone.[1] This is the Mobileye "does not recognize obstacles protruding into left edge of lane" bug.

Here's another crash from that defect.[2]

Here's Tesla's "Autopilot" hitting a traffic barrier.[3] Again, it's an obstacle at the left edge of a lane.

This is what happens when you take Level 2 lane-keeping and automatic cruise control and hype it into automatic driving. Other carmakers have offered comparable systems, but with much stronger driver-must-have-hands-on-wheel enforcement. Tesla didn't do that, encouraging drivers to relax and let the computers drive.

Waymo and Volvo take the position that when the automatic system is in control, the manufacturer is responsible. Tesla tries to blame the driver.

(Been there, done that. Ran a DARPA Grand Challenge team 2003-2005. Too old now to work on this.)

[1] https://www.youtube.com/watch?v=fc0yYJ8-Dyo [2] https://www.youtube.com/watch?v=qQkx-4pFjus [3] http://video.dailymail.co.uk/video/mol/2017/03/02/5177969943...


It's also fun to watch the guy's hands. Little tells that he's suppressing the desire to grab the wheel (e.g. 0:44, 0:54). He seems to be fine at stops and intersections, but a bit more attentive when things get tight and curvy.

Also, what is the car doing at 1:33. It takes a right turn, then just stops in the lane, then proceeds.


My guess is that at 1:33 the car makes the turn a bit too wide and brakes because of the oncoming cars that come just a bit too close for comfort.


Speaking of which - what exactly happened at 0:54? It seemed like an abrupt slowdown. A human driver would not slow down that much for a couple of pedestrians walking next to the road. The correct behavior is to slow slightly, make sure the pedestrians will remain safe, and then go past them.


Isn't Tesla's official recommendation to keep your hands on the wheel at all times?


One day some joker with two pots of paint (one white, one gray) is going to have a field day causing cars to run off the road.


Something like that happened to a Tesla, here.[1] The Tesla is the car in front in the left lane, not the one taking the pictures with a dashcam. There's a construction barrier that's inconsistent with the lane lines. The Tesla plowed into the barrier.

This shows why the Tesla approach isn't good enough. The vehicle is on a freeway, a supported environment for the old "Autopilot". Everything is going just fine. Until there's something unexpected the system can't handle, appearing fast enough that the driver can't take over in time.

Tesla is somewhere between Level 2 and Level 3. Good enough that the driver is tempted to tune out, not good enough that they can.

[1] http://video.dailymail.co.uk/video/mol/2017/03/02/5177969943...


Holy crap, that looks like a very basic failure. This is exactly the sort of thing that keeps me up at night: getting into an accident because of some stupid software bug. Fortunately the car didn't spin, or he'd have sideswiped the vehicle in the next lane.

If you look across the roof of the Tesla, it totally failed to imitate the car in front of it too. That's a double fault: it had more than one indication that the lane was changing direction.

Here is an article with a bit more info about that crash:

https://www.digitaltrends.com/cars/tesla-driver-error-autopi...

There's more like these:

https://www.theguardian.com/technology/2015/oct/21/tesla-aut...


I drive through a construction zone once a week where the old lane shift lines and the new ones are close enough in quality to make following the lines instead of the traffic around you a very bad idea.


My Passat (2016) with lane assist falls into these traps all the time. Same with off-ramps. It can be a fight to keep it from veering off. It is really nice on long drives, though.



That's funny! One day I'll have another original thought... one day. But not today, apparently.


a'la Wiley E. Coyote? Yup. Going to be interesting.


Well, all one had to do before, to understand how staged previous demonstrations were, was to go to YouTube and type in "tesla autopilot failures". Some were downright spectacular, and as for the ones at night, well, just forget about it.

I would love to see some work toward configuring the HOV lanes that many interstates have (toll or otherwise) with not only clear visible markings but also in-road indicators, to facilitate easier autonomous driving in a more controlled environment.

It would have been so much safer for Tesla to have trademarked (not sure of the proper legal term) Autopilot for later use and called it CoPilot until it is safe on its own.


You had to reverse early cars up hills due to their gravity fed fuel systems.

Yet here we are, driving forwards up hills.


Compare Chris Urmson's SXSW talk from 2016 about Google's system.[1] Look carefully at the graphics which show what the sensors are detecting, and compare with Tesla's. Watch Urmson discuss hard cases, problems, and failures. Not seeing that with Tesla. Or Uber. Or Cruise. Or Otto.

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik


Tesla want a system that works all the time, even if that means it kills some pedestrians once in a while.

Waymo want a system that has a zero accident rate, even if that means it can't operate in the rain.


Otto is Uber now ;)


Cool, when is it being released?


Waymo's Early Rider Program is now running in Phoenix AZ.[1] They have soccer moms and their families in self-driving minivans.

[1] https://www.youtube.com/watch?v=6hbd8N3j7bg


self-driving minivans with human safety drivers


And yet, it's still an objectively better driver than you.


Not a better driver than "me + active driver assistance like collision detection, lane keeping, etc."


Indeed, and not only that, Elon Musk's example of Autopilot supposedly saving lives that he famously tweeted a while back was manual driving + active driver assistance in the form of automatic emergency braking (which is supposed to become mandatory on new cars at some point). It happened in conditions far outside what Autopilot was capable of actually driving in.


I think that we'll only turn over driving to a computer (for reals) when safety systems are good enough to prevent most accidents. I think we'll keep the safety system separate from the drive system.

Perfecting safety systems around human drivers first is the way to go, and will remove most of the risk of letting computers drive.


Is that what they're claiming here? It looks like they're just saying that the hardware is capable of it, and that it will beat human driving at some point in the future.


Correct. There are probably primates that have all the hardware required to drive a car too. But more is required.

Humans are still better at interpreting many things and probably dealing with weather and construction and when there is no data.


"All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver."


"Have the hardware needed". Note: not the capability; just the hardware. It's an unnatural way to write promo copy for a feature, and is written that way for a reason.


I think that with Tesla's wireless software update capability they'll continue to improve the software over time after you buy the car.


That is Tesla's existing track record, yes. How far they can improve on a given sensor platform is the unknown part.


So we're just waiting for Sufficiently Smart Car Software, is what you're saying?

That sounds speculative.


it certainly is, but that's my read on the situation.


It's not objectively better if I've caused 0 accidents to the autopilot's several, even if I've driven several million fewer miles.


This is quite interesting, I hadn't seen or heard about their intention to restrict like this, prior reading it tonight:

"Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year."


I don't think our legal system should allow car manufacturers to impose these kinds of restrictions on cars fully owned by their operators. If I have the title on my vehicle, I should be able to use it in any legal capacity. This restriction is an attack on the fundamental ideas of ownership and property.


The question is one of responsibility in the case of an accident. Is it the owner of the car? Or the driver? What happens when there is no driver? Well, my guess is that Tesla, the manufacturer of the self-driving software, is going to be liable in place of the driver, and as such, they have every right to say: nope, no using our autopilot for commercial activity unless it is on our own network, where we can self-regulate to make sure we are liable for as few problems as possible.


When Tesla says you can't drive passengers commercially, it is probably because of liability concerns. Insurance for taxis is much higher than insurance for personal vehicles.

Volvo CEO said already in 2015 that Volvo will accept full liability whenever one of its cars is in autonomous mode.

http://fortune.com/2015/10/07/volvo-liability-self-driving-c...


>what happens when there is no driver? well, my guess is that TSLA the manufacturer of the auto-driving software is going to be liable in place of the driver, and as such, they have every right to say, nope, no using our auto-pilot for commercial activity unless it is on our own network

Why does this argument hold true only for self-driving cars, and not current cars? As such, self-driving cars are just a point in a continuous evolution. Current cars are already quite different from what they were a few decades ago. Why is now a good time to draw a line? Every change in this evolution goes through government approval. If it is approved, it should be safe enough for everyone.


It sort of does hold for current cars. Generally a driver (in most cases, a human) must obtain a further license or some form of government permission/authorization to drive a vehicle for commercial purposes like a taxi or other passenger carrying service. This seems to be simply pointing out that the same restriction will apply to the software-based driver in a Tesla...?


There is no legislation about ride sharing/taxi services with self-driving cars. So whether or not an extra authorization is needed by the owner of the car is a separate issue. Here Tesla is asking you not to engage in ride sharing unless Tesla gets a cut. i.e., even if the government and insurance companies allow you to do so, Tesla won't unless you hand over their cut. Is there more to it than that? Tesla is getting approval to make a car road legal. An insurance company may underwrite insurance for various purposes, including taxi service. Why do I need Tesla's permission to use a road-legal car for taxi purposes?


Because Tesla is assuming responsibility (that is, guaranteeing fitness for purpose, and all the ramifications that come with that) for the car being able to drive you under the same circumstances as any other private person driving. They would be taking on much more (legal, insurance, etc.) risk if they were to allow the car to drive as a passenger carrying, fare accepting, commercial entity, therefore it seems fair that they require you to use their managed system to (I assume) mitigate and/or manage that risk. I would guess that it would also be possible, eventually, to license the autopilot software for your own arbitrary commercial use.


The line is "software provided by the manufacturer is driving" vs "person is driving".

When you cede control of the steering, braking, acceleration, and even watching the road to software, you are no longer driving.


This has been happening slowly in a lot of areas already. We went from manual transmission to automatic transmission without much fanfare. Even with the current self-driving trend, it is happening pretty slowly with various assisted technologies. I doubt things will change overnight.


How is this any different from Microsoft restricting Home versions of Office to non-business use? They've been getting away with that for decades now.


Because when you buy software, you know you are renting it - you technically can reverse engineer/decompile, copy, clone, resell - but you know you shouldn't.

If I buy a car and want to take the engine out, or a light and resell it - I would be very stupid, but, it is my property and I should be able to.

If Tesla want to lend instead of sell you something - fine, just be honest!


Why is there a difference between the software and the car? Didn't you also pay for the software?


A functioning commercial regulatory apparatus would not allow vendors to describe the practice of providing encumbered software licenses in exchange for money as "selling software". That we put up with this practice, which basically lives on consumer confusion, is a bug.

It's a lot like our allowing people to sell patent medicines as "supplements". In both cases, we let people distort the meaning of regular words for commercial gain.


Copyright is a very stupid concept. Just because we benefit from it doesn't mean that we should support it until the end of time. Perhaps we should start talking about getting rid of copyright.

Now, I'm not saying we get rid of all laws. The end of copyright would not allow me to put a maliciously crafted copy of Microsoft Windows or Google Chrome online in order to deceive users into thinking they are genuine. I still may not misrepresent what I copy. In fact, I think those laws need strengthening, which we can do if we ditch copyright.


He said the payment was more like rent than the full ownership you gain with a car.


In this case the software you're renting is the self driving capability.


Vehicle makers are already starting to make that argument, that because they own the software, you can't do anything to the car without their permission:

https://www.wired.com/2015/04/dmca-ownership-john-deere/


Well, for one, a car is a physical object; you can't near-instantaneously create an arbitrary number of exact duplicates of your car.


Why would that - whether it's physical or not - determine what kind of licenses you can attach to the product?


When you buy a car, the title is signed over to you, and you become its owner. Microsoft Office is information that Microsoft agrees to share with you subject to certain restrictions on your use of it. To actually purchase Microsoft Office the intellectual property, as opposed to a license for it, would cost you about 9 orders of magnitude more.

They're also restricting their customers' actions in meatspace (you can't drive people around in exchange for money) while the Office restrictions (you can't create business-related files) are IP-related.

A more accurate analogy would be if Microsoft banned you from using a Surface for business purposes (or for business purposes while running Windows, but it's impossible to install a different OS).

IANAL so I don't know if Tesla is on solid legal ground (I would guess that probably they are), but I think there's certainly a clear philosophical distinction.


Tesla is restricting the autopilot software, not the car itself.


Given that the car can not operate at all without the software enabled (and not just Autopilot, as Tesla has demonstrated with salvage title cars), this is a distinction without a difference.


Well it isn't currently clear that they do. Tesla can write whatever they want into the car version of a EULA, but it doesn't mean there is legal merit to any of it.

Assuming Tesla eventually "catches" someone using a Tesla for ride sharing profit outside of their network they could remotely disable the car, at which point they will surely be sued and the courts will decide.

And they will very likely decide against Tesla, IMO. The immediate reaction to this statement is bound to cause people who work purely in the software realm to protest, but to most courts software is still a very nebulous thing. Disabling someone's car (after selling it to them outright) because they did something with it that you didn't like is EASY to understand, anathema to the entire history of car ownership and I really don't think it'll fly.


If a court rules that the party who bears the risk can't control that risk, they will get themselves out of that situation. That will mean either changing the law, changing to a lease model and ceasing sales, or not selling self-driving at all.


Tesla: We can't place restrictions on fully owned vehicles? OK, from now on we don't sell cars - we rent them for 50 years. Now can we place restrictions?


I personally find this to be better and more honest. If we as customers don't own something, companies shouldn't use words like "your" and "ownership" to mislead people.


+1. Even if there's no difference in practice, at least a long lease signifies unusual restrictions. Makers of big heavy pieces of industrial equipment also sometimes offer only long leases and never sales.


Presumably you're allowed to destroy the Tesla you own - which wouldn't be true with a long term rental, you'd have to give that back.


Software-like licenses are making their way into the hardware world. I wonder how soon we'll start seeing "as is" and "no guarantees of any kind" clauses…


I mean... Autopilot is software, right?


If you consider that the hardware is entirely comprised of third-party commodity parts* then yes, whatever it is that makes Autopilot a distinct product/proposition is entirely software.

* It's all really just off-the-shelf cameras, sensors, processors etc sourced from various suppliers. Of course we should acknowledge that the validation and alignment of these parts plays an important role, but there's not really any secret sauce there.


I wouldn't say that. It's certainly not exclusively software, and I don't think it is honest to divide a system like that into "soft" and "hard" components, since neither one has value for the customer on its own.


Really? You don't see this as any different than iOS running on an iPhone? I can't use the iOS license on any other device, I can't run binaries on the device outside of what's available on the App Store. I'm wholly restricted to their rules on both the hardware and software sides of things.


I don't get what's the disagreement here. Apple sells you a whole system, iPhone + iOS. They don't sell you two things, like in PC world, for example, and that's different and that's okay as long as everyone is upfront about it.

What I am saying is that Apple sells you a product. You own it. You can do whatever you want with it. Feel free to alter any part of it; just don't expect it to keep working or to keep your warranty. The fact that the product has limitations out of the box is a different matter. (Kind of like how no one complains that a kitchen timer can't boil water, or doesn't have nanosecond precision - if you want those things, you buy a different thing or make your own.)

What Apple doesn't do is restrict how you use their product. There's no such thing as "you may only photograph your family non-commercially", or "never resell your iPhone", or "detaching the screen is illegal".

And, more importantly, there is nothing Apple can do to stop you from doing any of those things.


In a nutshell, the difference is "sell" vs "license", "own" vs "rent".

You own your iPhone and you rent your Tesla Autopilot it seems.


People try these shenanigans every generation or so. See: https://en.wikipedia.org/wiki/First-sale_doctrine


It's not going to work, things will change or we will have no self driving cars. There must be someone who's liable, and that person will be the corporation that wrote the software. That in turn will mean restrictions; the software must be updated on a timely basis, and the risk to the company must be controlled in an actuarial sense just like vehicle insurance.

Vehicle insurance like today can't be used because incentives aren't aligned. Fully self-driving cars will come with an insurance package, and the company will need to control its insurable risk, or there will be no self-driving cars at all. If it's against the law, the law will change, not the model.

I'm sorry, but society will demand it. You will not and cannot have full ownership control over a self-driving car.


It does seem a bit like having to pay extra for pencils if they're being used for commercial purposes.


I wonder if it's to prevent the following scenario in the future, where the self-driving capabilities are perfect and legal: rich people buy as many Model 3's as possible, and deploy their own taxi fleet. Each one pays itself off in 4-6 months (est ~$15/hr, can operate 24/7), and generate lots of passive income forever.
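
As a rough check on that payback estimate (all of these figures are assumptions): $15/hr around the clock is about 15 × 24 × 30 ≈ $10,800 a month, so a Model 3 at its announced ~$35,000 base price would gross its sticker in a bit over three months before charging, cleaning, insurance and maintenance costs - roughly consistent with the 4-6 month figure.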


This is not how markets work. Once the "self-driving capabilities are perfect and legal":

- investing in such a fleet would be like investing in any other kind of business: it's easier if you have the money, but if not you can go to any bank and get a loan.

- prices for a trip with a self-driving taxi will drop immensely, this is probably going to be a big liability if you would start the aforementioned business

- other effects that I can't think of right now

It is true that in a lot of places rich people have an edge on the less-wealthy (it's easier getting/staying rich if you are rich), but that's more of a generic problem than something that would be amplified by change to self-driving cars.


No one is forcing you to buy the car. If you don't like the terms of the license, don't buy it.


Do you work for Deere or something? That's exactly the opinion that most people are against when applied to hardware as evidenced by the support (among people aware of them) for "right to repair" movements.

The "you don't actually own the hardware you paid for" model is slowly becoming untenable for hardware sold to individuals.


Let's hope for someone to initiate GNU/Tesla hehe.


Unfortunately, such a thing will never exist, at least in a road-legal sense. No government is going to approve it if it can be modified by the user. At best, you can have "look but don't touch" openness backed by some company if it has to be road legal. We aren't even fully past the wifi router firmware fight yet.


Yet people work on their brakes in their driveway all the time. Hard to imagine a government allowing that.


Are we heading toward a future where it's illegal to drive your car on the road if you have modified its software?


They've been clear on this point for quite a while. When I purchased mine with full self driving capability, I read it in the fine print and thought it was very smart of them. It's a weird proposition to turn your luxury car into an autonomous Uber. I would never do it.


What is so smart about it ? Literally every company on the planet has been doing this for at least a few decades now. And if you are plonking $100-150k on a car, what is weird about expecting some sort of return on it ?


That leaves a city in uncertainty if it tries to buy 3 or 4 Teslas to serve as paid minibuses on its local territory - and the same goes for a company campus, a TV channel with a huge studio lot, an entertainment park wanting to drive its staff around, or an airport shuttle service that would like to offer limousine service.


At worst they could call Tesla and ask for permission and a separate license agreement. I'd expect Tesla to agree in any of those cases.


It seems like if you support Tesla, you support monopolistic efforts.


I think it kinda makes sense considering that Autopilot is going to need constant maintenance/updates/investigations, plus the whole legal liability thing.

Autopilot is a service.


This is scary. Or rather pointing to a scary future. Does it mean I won't be able to play with my future GF fake boobs because they are only allowed into the iClinic network participants?


Tesla sent out an email today.

Autopilot Updates: We just released the latest version of Autopilot. You can now experience Enhanced Autopilot features including Traffic-Aware Cruise Control, Autosteer, Auto Lane Change, Parallel + Perpendicular Autopark, and Summon. Automatic Emergency Braking, Forward + Side Collision Warning, and more advanced safety features are also active and standard.

All Tesla vehicles have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. And Tesla vehicles continue to improve with over-the-air software updates, introducing new features and improving existing functionality to make your vehicle safer and more capable over time.


I don't really believe that the hardware is enough for what they are trying to achieve. There is almost no redundancy at the sensor level, except for the front radar supporting the front cameras and ultrasound for close distances; if one of the rear-facing cameras fails due to a defect or dirt on the lens, the vehicle would have to stop because it wouldn't be able to detect oncoming cars. I am also not sure about the processing power - handling video streams and doing deep-learning inference for up to eight cameras in real time seems a little too much for a current GPU.


I'm interested how they plan to solve the problems that the cameras alone cannot solve (assuming they're just RGB cameras with no special dynamic-range abilities) - like white-out snow conditions or driving directly facing the sun (a contributory cause towards the Tesla+truck fatal crash last year) - I understand that only systems with active projection (e.g. Lidar) and beyond-visual-spectrum cameras can provide reliable data... and yet Tesla claims that their current hardware, without these more exotic sensors, is sufficient.


There are quite a few transforms (HSV, HSL, etc.) that you can apply to images for things like lane detection in different conditions (bright light or darkness). Also, you typically take input from three camera sources: left, right, and center.

On top of that, there are techniques like applying a shadow correction to an overly bright image.

When we combine some of these techniques, the task does become possible, even in adverse conditions.

Various techniques also complement each other. For example, we may derive a steering angle from a machine-learning system and cross-check it against lane detection, vehicle detection (collision avoidance), etc.

Come to think of it, when we drive ourselves we just use our eyes, so there is a view that cameras alone are sufficient. But radar and lidar definitely complement the cameras and make the driving system more robust. As lidar gets cheaper, we are bound to see it used more even in camera-first approaches.
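
As a rough illustration of the color-space idea, here's a minimal sketch using OpenCV (my choice of library; the threshold values are made up and would need tuning per camera and lighting) that isolates white and yellow lane paint in HSV and then pulls out candidate line segments:

    import cv2
    import numpy as np

    # Convert to HSV, which separates color from brightness and is less
    # sensitive to lighting changes than raw RGB.
    img = cv2.imread("road.jpg")
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Illustrative thresholds only: white paint = low saturation, high value;
    # yellow paint = a hue band around 25 (OpenCV hue runs 0-179).
    white_mask = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))
    yellow_mask = cv2.inRange(hsv, (15, 80, 100), (35, 255, 255))
    lane_mask = cv2.bitwise_or(white_mask, yellow_mask)

    # Edges + probabilistic Hough transform give candidate lane-line segments.
    edges = cv2.Canny(lane_mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=40, maxLineGap=20)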


The processing power "could" be enough, but lack of redundancy in sensors is a lot more troubling to me.


Then again, if one of the sensors breaks, there's only a small chance the vehicle will crash while trying to find a safe spot to stop.


>All tesla vehicles

As I understand it, the newer ones use a machine-learning-based approach and different hardware than the old ones. Does this mean Tesla is actively working in parallel on two completely different self-driving software stacks, one built on ML and one around a more traditional heuristic approach?


> You can now experience Enhanced Autopilot features including Traffic-Aware Cruise Control, Autosteer, Auto Lane Change, Parallel + Perpendicular Autopark, and Summon

Scary stuff, really. No consumer wants (or needs) autopilot "features" - at least not from a marketing standpoint. If you ask me, the car either drives itself or you are driving it.

It's a bit ridiculous to expect people to use these things safely. If anyone has to think about when the car is or isn't in control, I'm sorry, but you've already lost in my book. Humans just aren't that good at switching context or remembering what a product does (or what this particular version of a product does).


In my experience, having driven or been driven by the autopilot v1 hardware and software for thousands of kilometers, it is not ridiculous to use this safely. You configure how you use it once, you don't have to remember the state of every feature.

There are moments when you have to think about what state the car is in. But in my opinion that is by far offset by how much less tiring it is to drive a long way.


"Mode confusion" is a major cause of CFIT accidents in aviation. And these are trained operators with partners and checklists.


Having driven a long distance in the vicinity of a Tesla that was presumably on autopilot, I found it very unsafe-seeming, darting around within its lane, sometimes drifting onto the lane line. You may not realize if you aren't paying attention, but your car might not drive as well as you expect.


Having driven for years in the vicinity of human drivers, I'd say they do far worse than that. And that is before you add the DUI.


> presumably on autopilot

I don't know, that sounds like lots of human drivers I've encountered!


Having sat in the passenger seat of a Tesla, it's not.


Agree that lane weaving is annoying and frequently has no good explanation - there are no tight curves, the road markings are in perfect condition, there are no obstacles ahead and no one is changing lanes in front of the car.

I also wish that left-most and right-most lanes were special-cased, with a bit of preference towards left and right side respectively, instead of sticking to the center of the lane no matter what. Just seems safer overall, and with fairly large number of US roads not exceeding 2 lanes in one direction, seems like a pretty good rule of thumb.


In the USA are there not rules about which lanes you use? Various dash cam videos from the US seem to show drivers choosing any lane, though I could be reading them wrong.

In the UK you use the outside lane if you are a slower vehicle and use the inside lane(s) as overtaking lanes, with the innermost for the fastest speeds (within the limit). If you are using one of the inside lanes and are travelling slower than traffic in the outside lane, you can get stopped by the police and fined, and will certainly feel the ire of other drivers.


The dash cam videos would be correct, outside of high-occupancy lane (2+ people) there are no lane speed restrictions, and if someone feels like getting into the left-most lane and driving 5 miles below the speed limit, they most certainly act on that impulse.

As you can imagine, anybody intending to use the left-most lane for high-speed driving is now forced to use the brakes, execute a [rarely safe] lane merge to the right, pass the slower vehicle on the right, and then aggressively execute a merge back to the left.

Roughly 100 people die in car accidents on a daily basis: https://en.m.wikipedia.org/wiki/List_of_motor_vehicle_deaths... For further context, the World Trade Center attack on September 11, 2001 resulted in 3,000 deaths and led to a brief but complete shutdown of air traffic, a complete overhaul of homeland security, passage of the PATRIOT Act, at least one major war, and countless "military operations" that don't count as wars.


I only recently discovered that in the US many people driving in the left lane will not move to the middle lane to allow cars approaching from behind to overtake them.

That's incredibly selfish, not least because it's incredibly dangerous for the driver that has to weave into the middle lane and back again.

Seems bizarre. Then again, you Yanks also don't have kettles, so nothing surprises me ;)


> you Yanks also don't have kettles

Mind blown. What on earth do they boil water with?


We typically don't drink beverages that require boiling water, aside from coffee which is brewed in machines. Everyone that I know that drinks tea or uses a french press has an electric kettle, though.


> The dash cam videos would be correct, outside of high-occupancy lane (2+ people) there are no lane speed restrictions, and if someone feels like getting into the left-most lane and driving 5 miles below the speed limit, they most certainly act on that impulse.

Depending on state there are laws about using left lanes only for passing, which implies you must be travelling faster than the lane to your right. Those laws are rarely enforced though.


My driving instructor and parents all told me that driving in the left lane when not passing is illegal and that people who do it are dickheads. I see signs reminding people to stay in the right lane when not passing on a regular basis.

Is it just the state I live in (WA) or is the rule not as solid as I thought?


It seems like even in WA the "Slow Traffic Keep Right" is more of a suggestion than a hard rule. There's also subjective interpretation of the rule - someone driving 71 in a 70-mph speed limit zone might not qualify themselves as slow, while someone behind who's okay with going 80 mph might disagree.


I have no experience with Autopilot v2, but v1 is better than I am at lane keeping on a motorway except for special circumstances, like when there are no lane markers.


I supervise the car and pay attention. I am very aware of what the car is doing. It may be a big difference between v1 and v2 though.


Speak for yourself. I love the radar cruise control on my Mazda 3 Astina, and also value the AEB. I specifically bought this car because of these systems - at the time (mid 2015) it was the cheapest such car available on the market in Australia. It's a 2014 2.5 liter sedan, manual, ex demo. Love it.


Well, it yells at you if you let go of the steering wheel and will even disable itself if you keep doing it. It's not that scary. And the features are real. At least it gives owners the confidence that things are moving forward, and that if they buy a Tesla today they will have full self-driving some day.


I have about 18K miles on my Model S with the first-generation sensor package. I live and drive in New England.

The lane-holding feature (Autosteer) does expect you to keep slight pressure on the wheel. As far as I can discern, it has four modes: freeway following (another vehicle in front), freeway leading, non-freeway following, and non-freeway leading.

Only in the first case do you get five minutes before it starts pestering you to hold the wheel. In the other cases it's one minute.
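
A toy sketch of that observed behavior (purely my reading of it as an owner, not anything from Tesla; the mode names and numbers are just the ones above):

    # Illustrative only: seconds of hands-off driving tolerated before the
    # "hold the wheel" nag, per observed mode. Not Tesla's actual logic.
    NAG_TIMEOUT_S = {
        ("freeway", "following"): 5 * 60,   # another vehicle ahead on a freeway
        ("freeway", "leading"): 60,
        ("nonfreeway", "following"): 60,
        ("nonfreeway", "leading"): 60,
    }

    def should_nag(road_type: str, traffic_state: str, hands_off_seconds: float) -> bool:
        return hands_off_seconds >= NAG_TIMEOUT_S[(road_type, traffic_state)]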

Lane holding and adaptive cruise control in freeway following mode is a WONDERFUL feature. It drastically reduces the driver's workload in heavy traffic. And, it has speed control features that contribute to the melting of compression-wave traffic jams.

Lane holding can be a little squirrely when the road surface is dirty. I think the car sometimes mistakes tire marks for lane markings. There are parts of my regular route where I just take over from lane holding.

Snow or heavy rain? The lane-holding feature goes first, then the adaptive cruise control. Snow, rain, or frost on the windshield interferes with the forward-looking camera. Snow or ice on the nose interferes with the radar. Ice on the bumpers interferes with the near-field sonar proximity sensors.

Autonomous driving on the Stanford campus and up Sand Hill Road is one thing. Autonomous driving in a nor'easter snowstorm on winding Atlantic Coast roads is another. We shall see.

I admire and support people who tackle hard problems fearlessly. Tesla's doing that.


What they're saying is humans can't recontextualize themselves fast enough when the system fails and it starts yelling at you.


It's still pretty easy to tune out even if you're gripping the steering wheel…


where else would the training data come from?


This is probably one of the most ignorant and short-sighted comments I've ever read on HN, and it reeks of competitive shilling.

It's like saying we shouldn't have released seatbelts because people might not have used them. Or that we shouldn't have released vaccines because you still need booster shots.

Safety features, even incremental ones, make the world safer for everyone.


I would like to respectfully disagree with you here. While I believe that self-driving cars are infinitely better than human drivers, in this case I'd exercise caution. The issue here is that the vehicle is not fully self-driving; it still requires a human to be paying attention. The problem is that the system appears to be automatic and as such reduces the driver's focus; I've heard stories of people simply tuning out while this "autopilot" is on, not realizing that they must stay alert for any exceptional circumstances. In this case, the autopilot may actually make things slightly less safe, not because of its inherent design, but because of human psychology.


If the driver allows themselves to lose focus, they are just bad drivers. Autopilot won't change that. What it will do is reduce the number of collisions by people who are shitty drivers and are currently driving unprotected.

The data already demonstrates that Autopilot reduces collisions and fatalities. To argue that it doesn't puts you in league with climate change deniers.


> if the driver allows themselves to lose focus, they are just bad drivers

This needs empirical support. All of us are human, and are susceptible to false senses of security. Most of us are not good at assessing real-world risks and probabilities. If the autopilot software is good enough to drive the car successfully for months on end, drivers will start to trust it. It will be very hard to stay vigilant and aware, with eyes on the road at all times and an alertness level on par with manual driving.

In other words, an otherwise “good” driver can easily be lulled into complacency by a system that’s really good, but not perfect. It’s an uncanny valley that is worth discussing.


It's only worth discussing if you can prove it exists. You've just offered supposition, not fact.

The fact that safety features such as seatbelts and airbags save lives is undeniable. They didn't lead to uncanny valleys of complacency for anyone but drivers who chose to abuse those features with reckless driving. The data for AP already indicates this trend will continue with ML drivers.


I never claimed anything more than supposition. I think these autopilot features are probably a net safety gain. But that's just a gut feeling.


I think his concern is that it's unsafe because you might trust it when you aren't supposed to, since it's too complicated to figure out who's in control at any moment.

However, since the number of features is quite small, I'm sure owners will quickly get used to what the car does and doesn't do. Autopark and Summon are surely going to run over far fewer children than human drivers do in those situations. That leaves just adaptive cruise control and lane changing. Well, if you forget to change lanes because you trusted the car, you still won't crash, since the front crash detection (and your eyes) should protect you from that. So I don't see any safety issue there.


The difference between autopilot and the examples you mentioned is that autopilot can decrease safety. If it works correctly 99.9% of the time, you get used to it, and when the 0.1% arrives you will cause an accident because you weren't paying attention.


The airplanes of this world demonstrate this.


IMHO airplanes demonstrate the exact opposite. A plane can not only potentially fly the entire flight path on its own, it can also autonomously land. The reason why pilots still hand fly planes is 1) because the whole process is not automatic and you are still receiving directions from an ATC, and 2) to keep their proficiency in controlling the plane.

Numerous times I've seen pilots complain that their employer requires autopilot to be used wherever possible, while they would like to fly the plane by themselves occasionally.

If we polished off the autopilot to also handle take-off and taxi, and installed it in every single plane that exists, I'm pretty sure the entire aviation industry could be completely automated.


Airplanes are generally safer than driving. By quite a margin.

Without consulting the literature, it just seems easy to believe that machine-like consistency 99.9% of the time is more important to safety than confusion in the 0.1%.


It really depends on what 0.1%.

If the car fails to detect one in a thousand cars in front of you, or one in a thousand corners, I can see that being highly dangerous.

Let's say you change lanes 5 times while commuting. That's 50 times per week. Now, are you going to pay attention well enough that you catch it twice a year, just before it pulls out right into another car?

Perhaps likening it to having another person driving you would be good. If you were given a perfect chauffeur, except with the knowledge that they'll drive straight through one in a thousand red lights with no warning, are you confident you'll catch them?
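
To put rough numbers on that intuition (a back-of-the-envelope calculation, using the illustrative 1-in-1000 failure rate and 50 lane changes per week from above):

    # Back-of-the-envelope: how often would a 1-in-1000 failure rate bite?
    lane_changes_per_week = 50          # illustrative figure from above
    failure_rate = 1 / 1000             # assumed: one missed hazard per 1000 lane changes
    weeks_per_year = 52

    changes_per_year = lane_changes_per_week * weeks_per_year
    expected_failures = changes_per_year * failure_rate
    print(f"Expected failures per year: {expected_failures:.1f}")   # ~2.6

    # Chance of at least one such event in a year, assuming independence
    p_at_least_one = 1 - (1 - failure_rate) ** changes_per_year
    print(f"P(at least one in a year): {p_at_least_one:.2f}")       # ~0.93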


That may well not be a decrease in safety, considering how poorly most people drive.


The same can be said for the examples I gave. If you get used to being vaccinated but forget your boosters before traveling to a new part of the world, should you have forgone your vaccinations altogether? Of course not. If you have technology that makes it safer to drive 99.9% of the time, but you expect it to cover for you when you're not paying attention, should you disregard the benefits of that 99.9% of the time?


This is not a good analogy. Being vaccinated is strictly better than not being vaccinated, even if you don’t get your boosters. It’s not settled yet whether a three-nines level of automation is strictly better than zero automation. It’s entirely possible that this level of automation is good enough to convince most users of its infallibility, but not good enough to reduce the statistical accident or fatality rate.

Perfectly safe 99.9% and highly dangerous the other 0.1% may be overall more hazardous than human-level safe 100% of the time. Or it may not. I’m not making a claim either way. But it’s not a crazy notion that it may not be.

More data is needed. I’m cool with that data being collected in the real world with real drivers. Gotta crack some eggs...


More data IS needed. But some automation is indeed better than no automation; this has been settled time and time again. Computers are more accurate, more precise, and don't get tired, sick, or drunk.


Are you new to Naysayer News?


All the necessary hardware? Do Teslas have lidar?


Lidar doesn't work in bad weather conditions like rain. So autonomous driving systems can't rely on it, which rules it out as the main sensor.


Lidar is not "necessary" hardware. Humans drive just fine without lidar.


It's pretty obvious what the poster meant: why doesn't Tesla need lidar? Cars aren't humans, so that comparison isn't really helpful.


What the poster seems to have meant is that LIDAR is essential hardware. Which is not some universal truth, so it is they who should justify it.


Visual cameras can be blinded fairly easily - even high-end cameras - simply point such a camera at the sun and try to make out a cloud in the sky. If Tesla were using true high-dynamic-range cameras (e.g. Oversampled binary imagers) then I would be more confident - but Tesla isn't saying that they are - and if they really were they would definitely boast about it.

LIDAR also does work great in the rain - provided you have multiple LIDAR units (e.g. Ford's snow-proof sensor array: https://qz.com/637509/driverless-cars-have-a-new-way-to-navi... ).

What I like about LIDAR is that it will never give you false-negative data regarding object proximity (i.e. it will never tell you an obstruction in the road is not there) but visual-only cameras can be fooled very easily and definitely can give you false-negatives regarding road obstructions.

It seems inherently less safe to rely on a more homogeneous sensor array: conversely it makes sense to use as many different types of sensor as possible to ensure your design isn't susceptible to being brought down thanks to a weakness in your predominant sensor type.
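
As a toy illustration of that point (not any real stack, just a sketch of why dissimilar sensors fused conservatively are hard to fool all at once):

    from typing import Optional

    def fused_obstacle_range(camera_m: Optional[float],
                             radar_m: Optional[float],
                             lidar_m: Optional[float]) -> Optional[float]:
        # None means "this sensor reports nothing / is unavailable".
        readings = [r for r in (camera_m, radar_m, lidar_m) if r is not None]
        if not readings:
            return None
        # Take the closest reported obstacle: a false negative from one sensor
        # is overridden by any other sensor that does see something.
        return min(readings)

    # Camera blinded by the sun, radar and lidar still report the obstacle:
    print(fused_obstacle_range(None, 42.0, 40.5))   # -> 40.5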


Lidar is absolutely useless for level 5 autonomy, given that it doesn't work in "bad" weather. It may be useful for level 4 autonomy, but it certainly isn't necessary even then. Source: no production car being driven around the planet at the moment has any lidar, apart from testing purposes.


The human visual cortex is still far more advanced than a computer's. Maybe that will change some day, but asking about non-camera sensing equipment is a reasonable question until it does.


Sure, and it's not only cortex. Humans also know a lot about the world, and can predict things much better. There are areas in which humans are limited, however, such as reaction time, spectral sensitivity, the fact that humans can only see well in the center of where they're looking at and fake the rest of their visual field, etc. It's not at all clear to me that a human pair of eyeballs is better than 8 high quality, wide spectrum cameras feeding into the system at once.

That said, I think it's disingenuous of Tesla to call their system "autopilot" or imply autonomy of any kind when talking about their system. I will call something autopilot when it can drive me from door to door without me touching the wheel, in less than friendly weather conditions. Not drive in a straight line where it never rains.


You can have 1000 cameras and it still doesn't matter if you don't have a good computer to process the data. That is where machine learning, etc. comes in. Machine learning has really started to come into its own in the last few years, especially for image recognition, but it still sucks compared to humans. That doesn't mean self-driving based on cameras can't be good enough, but it needs to be proven, and we aren't there yet.


Dude, I work on high performance machine learning and machine vision 12 hours a day. You don't need to tell me it sucks, I know. But it's superhuman on some tasks already, and in a few short years, it'll be superhuman on a few more, and little by little it'll get there. All you get from that lidar is a depth map. You can do that without a lidar, using two or more cameras. If you also interpolate across a series of predictions, and have sensor fusion (which Tesla does, they also use radars and ultrasonic sensors) you can even make it robust. Truly, people who say it can't be done should not interrupt people who are doing it.
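
For what it's worth, here is roughly what "depth map from two cameras" looks like in practice; a minimal sketch with OpenCV's block matcher (my choice of tool, and the focal length and baseline values are made up for illustration):

    import cv2
    import numpy as np

    # A rectified stereo pair: the two images must already be aligned row-by-row.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block-matching stereo; numDisparities must be a multiple of 16.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    # Depth is inversely proportional to disparity: depth = focal_length * baseline / d
    focal_length_px = 700.0   # assumed focal length in pixels
    baseline_m = 0.3          # assumed distance between the two cameras in meters
    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = focal_length_px * baseline_m / disparity[valid]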


Not really, humans are horrible drivers with miserably slow reaction times.


Some humans are horrible drivers. Some humans are good drivers. The biggest difference is often how proactive the driver is.

It's true that a careful, experienced driver will typically recognise a rapidly emerging hazard as much as several tenths of a second faster than a novice, giving them significantly more time and space to react. However, a careful, experienced driver will also anticipate places where there are likely to be hazards and adjust their driving style to compensate.
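
For scale, a few tenths of a second is a lot of distance at motorway speed (back-of-the-envelope arithmetic, not a claim about any particular system):

    # Extra distance margin bought by reacting a few tenths of a second sooner at 70 mph
    speed_mph = 70
    speed_mps = speed_mph * 1609.344 / 3600          # ~31.3 m/s
    for extra_reaction_s in (0.2, 0.3, 0.5):
        margin_m = speed_mps * extra_reaction_s
        print(f"{extra_reaction_s:.1f} s sooner -> {margin_m:.1f} m more room")
    # roughly 6 m, 9 m, and 16 m respectively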

Does a self-driving car know that there's a park just round the corner and it's half an hour after the local kids came out of school, thus increasing the risk of a child chasing a ball into the road?

Does a self-driving car understand that the group of people standing quite near the road up ahead are outside a bar at 11:30pm and thus quite likely to be drunk and suddenly stagger into the road?

Does a self-driving car know about the pothole in the cycle lane that you had to avoid while riding into town yesterday, and anticipate that anyone riding in that cycle lane today may move out into the main traffic lane without warning to go around it?

Does a self-driving car know that the news last night reported on a local black spot for "accidents" caused by people wanting to make fake insurance claims, and decide to take another route that is a little slower but avoids that black spot?

Better sensors, fast data processing, and the ability to monitor all sensors all the time are big advantages, for sure, but these things mostly support reactive behaviour. I've seen nothing so far to suggest that the better reactions currently outweigh proactively avoiding or mitigating these kinds of hazards in the first place. Obviously that might change in any number of ways in the future, but we seem to be a long, long way from that point yet.


a) Most humans aren't aware of these things, either, so they're really non sequiturs at best. b) Even if you accept them as valid premises, it's much easier to disperse this kind of info to every car on the road than it is to disperse it to humans (every tourist in a city needs to know where every bar / park is? Or watch the local news?)


Those were all real examples. These kinds of things happen in my area every day, and drivers are actively taught to look for signs like these before they are allowed to qualify and drive independently. Obviously not everyone gets the message, and the best anticipation skills only develop with more experience, but nothing I described was unusual or exceptional (other than the last one, which was quite a specific example of a more general idea).

On your second point, the important thing here is that you don't need to disperse much of this information to humans. Humans automatically recognise situations based on all of their experience, not just their driving experience. Of course sometimes external information sources like the news might be helpful, but much of it is just down to understanding context. See fresh horse crap on a country road? Someone's probably riding horses nearby. Horses scare easily. So, slow down and try to avoid anything noisy that could startle the animals. How many of today's self-driving algorithms take into account this kind of implied knowledge?


Humans have the computation equivalent of 38 petaflops of processing power. Does a Tesla vehicle?

If you seriously want to play the inane game of "well if a human doesn't have it then a Tesla doesn't need it" then let's play that game and talk about the things humans have that the Tesla lacks.

What's interesting about the human vision system is that the human eyeball is, relatively speaking, poor. We have digital cameras far better than that already. It is what the human brain does with that raw data that makes us, as a species, thrive. Most of what we believe we "see" we never actually see; our brain fills in the gaps dynamically and infers information over time.

So this human processing ability, much of it automated rather than conscious, is totally relevant if you want to have this "Tesla Vs. human" debate. It is also why Lidar might be needed to make up the massive shortfall in a Tesla's processing ability relative to the human brain.

But hey, if you want to stick to "but HUMANS don't need it", then I ask: where are my 38 petaflops and 1 TB of memory...


You don't need 38 petaflops to drive a car. We are wasting our minds driving. Driving doesn't need creativity, it needs the 360 degrees of awareness without any lapse in concentration and the ability to react in milliseconds.


I would peg the human brain at closer to 1 flop. That's about how many floating point operations I can do in a second, and only very simple ones.


You do more than that when calculating the trajectory of a ball thrown that you have to catch. Just not with numbers.


Humans also lose focus, fall asleep and get tired.

Teslas have multiple radars for judging distance and multiple cameras that are used for stereo disparity. Also, 38 petaflops of human brain is not the same thing as Nvidia flops.

I am not saying Teslas are better than humans; I'm just saying a Tesla can drive the I-5 from Vancouver to Mexico better than I can.

Also, lidars are really, really expensive. I applaud Tesla and comma.ai for breaking major ground with just cameras. Convolutional neural nets have been doing phenomenal things in the past few years.


I'm curious where you get the number "38 petaflops" from.


IBM researchers: https://blogs.scientificamerican.com/news-blog/computers-hav...

You can find other figures, but many are in the petaflop range, well above what could be realistically installed in a vehicle.


How about 60 bits/s:

https://www.technologyreview.com/s/415041/new-measure-of-hum...

I don't think we know enough about how the human brain works yet to give a precise value, but just on caloric arguments I would say that the mean processing power of the brain is not significantly above what we have now in general-purpose computing devices.


In the same sense that your dog solves differential equations when he catches a Frisbee, I suppose.


It's memory + control driven with visual feedback, not much more. You don't have to solve anything if you already sorta know the solution, and can adjust it for the goal.


Tesla's test and validation framework must be something else. I've never written code that may kill someone if it fails. Not directly at least.

An in-depth report on Tesla's code and system qualification framework would be a very interesting read, I'm sure.


"All Tesla vehicles have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver."

In the Model S configuration page, "enhanced autopilot" and "full self-driving capability" are still optional extras that together cost $8,000, so I'm not sure what they're talking about here.

https://www.tesla.com/models/design


My understanding is that all new Tesla vehicles have all the necessary hardware, and the features are software-locked behind that $8,000 fee.

Apparently, these features can also be unlocked at a future time, for a higher fee of $10,000.


Just new? My understanding is that all applies to every single Model S ever produced.


Just new ones, since fall of 2016. See https://en.wikipedia.org/wiki/Tesla_Autopilot for more on the different hardware generations.


The 8 camera system with NVidia hardware has been built into all Model S since October 2016. Before they used a Mobileye system with one camera. Only the 8 camera system is promised to achieve full autonomy.


All Teslas come with the hardware needed. The $8000 enables the software.


No. Source: owner of a 2013 Model S


I should say, all Teslas currently being made.


If true, then it seems rather disingenuous to advertise that all Teslas are capable of self driving, if it costs eight grand to turn the sensors on. I wouldn't expect a car advertised to have 1000 horsepower to require a multi-thousand dollar software patch to be able to use more than a hundred hp.


They didn't advertise that all Teslas are capable of self-driving, they just say that the hardware has the capability. They're specifically speaking in computer terms, even - I wouldn't expect that buying a computer advertised as having the hardware to run <game> at 120fps would include a copy of that game.


But if it's a game-console that's advertised as being able to play a particular game very well... but it's also the only console capable of playing that particular game... and they're both made by the same company, seems a bit mean.

It's also equivalent to those high-end network switches with 48 ports that have soft-locks to disable ports on the switch until you fork-out $lots to unlock them: it's an artificial limitation.


I think the sensors are always on, in all cars. Just to gather data.


They have the hardware, but you gotta pay for that sweet, sweet software.

