
Previously I worked at Waymo for a year on the perception module of the self-driving car. Based on what I know about the state of the art in computer vision, I can pretty much guarantee that current Tesla cars will never be autonomous. This is probably a huge risk for Tesla's investors, because Tesla is currently selling a fully-autonomous option on its cars which will never happen with the current hardware. We need several breakthroughs in computer vision for no-fail image-based object detection, plus higher-resolution cameras and much, much more compute power to process all the images. It's hard to estimate when we will reach that level of advancement in computer vision; my most optimistic wild guess is 10-20 years. And then Tesla will need to upgrade all those cameras and install a dedicated TPU at least 10x (probably more) faster than the Nvidia chip they have installed in their cars, and they will have to do it for zero dollars because they have already sold the option. It is kinda amazing that Elon Musk is selling cars based on speculative future breakthroughs in technology. That must be a first.



Exactly. Claiming to be close to solving production problems that haven't been solved in robotics research, even in nice controlled environments, is nuts. Either people at Tesla are so smart they are making many huge breakthroughs in the technology (hint: they're not), or they are exaggerating and lying to sell cars despite the dangers of selling systems based on heuristics that could cost lives.


I was driving the other day and stopped behind a stop sign at a 4-way intersection. A police car was already stopped at the same intersection, to my left. Since he had the right of way I waited for him to proceed. But after a few seconds without moving he flashed his high beams, which I understood to mean that he was waiting for something and was yielding to me. Now that's not a standard signal for yielding in the California Vehicle Code but most humans can figure it out.

These are the types of odd little situations that come up all the time in real-world driving. I can't understand how anyone would expect level 4+ autonomous driving to work on a widespread basis without some tremendous breakthroughs in AGI.


Do you really need AGI to understand that some drivers don't respect the right-of-way rules at 4-way intersections? And even if you don't detect the high beams flashing, do you need AGI to know that you shouldn't lock yourself out of the intersection purely based on your place in the queue?


Regardless of the possible solutions for that one particular situation, my point was that odd unpredictable situations come up all the time in real world driving. We can't hope to code rules in advance for every possible situation. In the general case is it possible to handle unexpected situations without true AGI? That remains an unanswered question.


I like the analogy of the internet. In order to have the internet, we have packet switching, retransmissions, flow control with exponential backoff, distributed spanning tree algorithms, etc. If we accept what the AI proponents say, the internet is a jumbled mess of conflicting hand-crafted rules that has no chance of ever working. And yet here we are on the internet, and we don't even need a Go or chess solver running in each router.


You're making a strawman argument and I have no idea which AI proponents you're referring to. Internet routers mostly rely on deterministic non-conflicting rules with little or no AI involved. It generally works fine, although there are multiple major failures every year.

How much large scale network engineering have you done?


>Internet routers mostly rely on deterministic non-conflicting rules

Exponential backoff (which the previous poster mentioned) is a randomized, by-definition non-deterministic algorithm for resolving conflicting usage. It works very well, and is very simple. But deterministic and non-conflicting are really not qualities that the IP protocol is known for.
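
For anyone who hasn't seen it, the idea is roughly this (a toy sketch; the function names and constants are made up, not from any real stack):

    import random
    import time

    def send_with_backoff(transmit, max_attempts=8, base_delay=0.05):
        """Retry with randomized exponential backoff.

        transmit() should return True on success, False on collision/failure.
        Delays are illustrative, not taken from any particular protocol spec.
        """
        for attempt in range(max_attempts):
            if transmit():
                return True
            # Wait a random time in a window that doubles each attempt, so that
            # two colliding senders are unlikely to collide again.
            window = base_delay * (2 ** attempt)
            time.sleep(random.uniform(0, window))
        return False   # give up after max_attempts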


And sometimes networks go down, but nobody dies.


That is what my retort was going to be. The stakes are a lot higher when it's a human at 65 mph.


And packet collisions are not unusual. But then you simply wait a short time, and try again.


    10 Drive forwards a bit
    20 Stop
    30 Did we hit something?
    35 YES: Sleep for a short time
    40 GOTO 10


Reminds me of the development of Diablo. Once that sleep time became small enough the game took off.


Well Waymo has demonstrated their reliability enough to be able to have a fleet of driverless vehicles: https://www.dmv.ca.gov/portal/dmv/detail/pubs/newsrel/2018/2...


Yes, they have a fleet... that also fails in a number of ambiguous situations that average human drivers handle easily. I've observed failures in more than half my trips through Mountain View.

Two use cases happen repeatedly: first, it's indecisive for a long time on lane changes when there are vehicles in its targeted lane. If it cannot merge over safely due to traffic or rudeness, it will stop in its lane until a clearance occurs -- the concept of proceeding to the next left and making a U-turn seems incomprehensible. Second, in certain right-turn situations on a red light, it will never turn if there is traffic in far lanes, even if the nearest lane has a generous opening. I see this all the time on the right turn from eastbound Central onto southbound Castro St., for example.


>Second, in certain right-turn situations on a red light, it will never turn if there is traffic in far lanes, even if the nearest lane has a generous opening. I see this all the time on the right turn from eastbound Central onto southbound Castro St., for example.

To be fair, even I do that sometimes if I just don't trust oncoming traffic not to do something like changing lanes in the intersection or at the last second. To your point, though, it's all dependent upon the intersection, number of lanes, etc., and I'm not familiar with the intersection in question.


...continuously supervised by humans (either remotely, or in the vehicle) I believe.


They are supervised remotely no doubt, but they are driverless and driven autonomously.


Super simple solution is to dial in a remote human operator whenever something 'odd' happens.

'Odd' events like that are probably only 5 seconds out of every hour of driving, so in aggregate one operator could handle 720 cars. At that point, the operator is far cheaper than further work on the AI.
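
The back-of-the-envelope version, for what it's worth (every input is a guess, and it ignores overlapping events):

    # Rough capacity estimate for remote operators; every input here is a guess.
    odd_seconds_per_car_hour = 5                        # operator attention needed per car-hour
    cars_per_operator = 3600 / odd_seconds_per_car_hour
    print(cars_per_operator)                            # 720.0, ignoring simultaneous events
                                                        # (a real estimate needs queueing math)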


Sure, that wo-NO CARRIER

Side note: The police officer may have thought that you both arrived at the same time, in which case you would have right-of-way, since the right-most driver has priority at a four way stop (if one wasn’t clearly at the intersection first).

(If four cars arrive at the same time ... you just wing it, I guess.)


Nope. He was already stopped when I rolled up to the intersection, and I could see he remained stopped after I went through.


Never mind then. I've been in similarly frustrating situations, where I try to accommodate someone while driving and they just don't even recognize it:

- Someone will tailgate me like they want to pass ... but I'm safely in the right lane, and they can easily pass in one of the other two lanes.

- They want to drive a lot faster than me in a residential area, so I pull to the side and wave them to pass, but they just stop there.


These situations can result in the vehicle notifying the human driver, or, for fully-driverless operation, a remote operator taking control of the vehicle (at greater expense, of course).


Level 3 autonomy such as you describe is generally considered unsafe because the human driver may not have been paying attention and lacks the context to safely take control.

Remote operation is a total non-starter. Our existing cellular networks lack the bandwidth and redundancy for safety critical applications. What happens if the local tower is down because the cooling system failed or a construction crew accidentally cut the backhaul fiber?


You're describing an anecdotal and seemingly trivial corner case, which may very well have been solved by these secretive companies already (wish we saw more data). It's most likely the scenarios we haven't thought of, the truly unique/very hard scenarios, the ones humans would fail at as well: those are the truly hard edge cases. Not 2 cars sitting stopped at stop signs... are you seriously claiming 2 stopped cars requires AGI?


> [Tesla would need to] ... install a dedicated TPU at least 10x (probably more) faster than the Nvidia chip they have installed in their cars

That is literally what Tesla have announced they are doing next. I've got no opinion on your other points, but if you've missed that news, you're not really following Tesla very closely. It's a drop-in replacement chipset designed by Jim Keller.


I doubt Tesla has multiple (more like dozens of) major industry first sensor and AI breakthroughs, regardless of how closely OP follows them or not.


Elon Musk takes huge ambitious gambles. In this case the gamble is that good AI and the huge amount of training data from the Tesla fleet will win out over lidar. It is a gamble. But if you look at other gambles SpaceX and Tesla have taken, the context changes a bit. Everyone talks about innovation, but innovating is actually a gamble.


As part of that risk I'm sure Tesla would far rather burn the people who paid extra for some technology that wasn't ready yet than release something that is sub-par at a minimum and dangerous at worst.

Killing a bunch of people or jamming up traffic is a quick way to go out of business in the car industry, so there's far more incentive to wait for the real thing. It's not like an optional $2-3k add-on (to a $40-60k product) that turns out to be useless would be a game changer. People will just not trust them enough to pre-invest, and will wait to buy it when it's available.


> I'm sure Tesla would far rather burn the people who paid extra for some technology that wasn't ready yet than release something that is sub-par at a minimum and dangerous at worst.

Uhh... I love Tesla, but the current Autopilot that they have released is definitely not what I'd call "public-ready".[1] And they agree, which is why it's locked behind clickwrap disclaimers and warnings and described as a "beta".

Don't get me wrong, I want one. But I wouldn't want my mum to use Autopilot.

[1] It can't see stopped cars on the highway (!) and gets confused by freeway exits, sometimes driving directly at the gore point. Worse, this second issue was fixed and is now happening again, indicating a critical lack of regression testing. (!!)


Speaking from a code archeology ;) perspective, there's another explanation: the previous fix addressed the proximate problem but also unearthed a deeper problem, which may or may not be fixable.

I don't think it's that he saw LiDAR as technically inferior, just expensive. They needed an approach that they could put in place with minimal unit cost.


OP seems to be operating under the premise that such a system must be "no-fail". That is not what Tesla is going for.


It's not that it has to be no-fail; it's that the failures need to be a subset of the failures that humans make.

If the failures are in places that humans can easily and reliably handle (which is the case now), then people won't trust these systems. If the failures come from the software not being able to handle basic driving tasks and wouldn't normally happen with a person driving, then this is a huge problem with the system. A system that repeatedly drives at a lane divider is not something people should trust.

https://arstechnica.com/cars/2019/03/dashcam-video-shows-tes...


Yes! I think that’s the right way to measure whether you’ve solved self-driving cars: if it has similar (possibly worse) failure rates to humans in each environment where humans are expected to operate.

Example: say an SDC never had collisions except when a cop is directing traffic, and in that case it floors it at full speed toward the cop. I would not consider that to have solved self-driving cars, even though dealing with a cop directing traffic is so rare that its overall accident rate is lower.
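
To make that concrete with invented numbers:

    # Invented numbers: (share of driving, human failure rate, SDC failure rate), per hour.
    environments = {
        "normal traffic":        (1 - 1e-6, 1e-5, 1e-7),
        "cop directing traffic": (1e-6,     1e-5, 1.0),   # the SDC fails every single time here
    }

    human = sum(share * h for share, h, _ in environments.values())
    sdc   = sum(share * s for share, _, s in environments.values())
    print(f"human: {human:.1e}/hr  SDC: {sdc:.1e}/hr")
    # human: 1.0e-05/hr  SDC: 1.1e-06/hr -- the SDC wins on the aggregate rate,
    # yet nobody would call a car that always fails around traffic cops "solved".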


Considering that they’ve already killed a couple of people, that’s a given.


Humans kill over 100 people per day in traffic accidents in the US alone - I don’t think no-fail (0 deaths) is a reasonable requirement, just being safer than humans is enough.

I do agree with the OP's skepticism though - full autonomy is 10 years away.


> Humans kill over 100 people per day in traffic accidents in the US alone - I don’t think no-fail (0 deaths) is a reasonable requirement, just being safer than humans is enough.

This is a fallacy. People don't just look at safety statistics. The actual manner/setting in which something can kill you matters a ton too. There's a huge difference between a computer that crashes in even the most benign situations and one that only crashes in truly difficult situations, even if the first one's crashes aren't any more frequent than the second's.

Hypothetical example: you've stopped behind a red light and your car randomly starts moving, and you get hit by another car. Turns out your car has a habit of glitching like this every 165,000 miles or so (which seems to be the average accident rate) for no apparent reason. Would you be fine with this car? I don't think most people would even come close to finding this acceptable. They want to be safe when they haven't done anything dangerous -- they want to be in reasonable control of their destiny.
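
For a rough sense of scale (assuming ~13,500 miles/year, roughly the US average, and treating the glitch as a Poisson process):

    import math

    # Assumptions: ~13,500 miles/year over 15 years of ownership, with the
    # hypothetical glitch behaving like a Poisson process at the rate above.
    miles_per_glitch = 165_000
    lifetime_miles = 13_500 * 15
    p_at_least_one = 1 - math.exp(-lifetime_miles / miles_per_glitch)
    print(f"{p_at_least_one:.0%}")   # ~71% of owners would hit the glitch at least once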

P.S. People are also more forgiving of each other than of computers. E.g. a 1-second reaction time can be pretty acceptable for a human, but if the computer I'd trusted with my life had that kind of reaction time, it would get defenestrated pretty quickly.


Let's say there are two cars with two drivers. The first, with a human driver, is deadly under traditional human scenarios-- the driver could be drunk, texting or eating or distracted. They could make a slow decision, be looking the wrong way, stomp on the wrong pedal, etc.

The second driver is a machine. It sees all around it at all times and never gets drunk. But when it fails it does so in a way that to a human looks incredibly stupid. A way that is unthinkable for a human to screw up- making an obviously bad decision that would be trivial for a human to avoid.

Now let's say that statistically it can be proven that the first type of driver (human) is ten times more likely to be at fault for killing their passengers than the second in real-world driving. So if you die, and it's the machine's fault, it will be in an easily avoidable and probably embarrassingly stupid way. But it's far less likely to happen.

Which type of driver do you choose?


Here's the thing--they hand out driver's licenses to anyone. You need virtually no skills whatsoever to pass a road test here in the States.

If I asked everyone on HN what steers the rear of a motor vehicle, I'd guess only 10% would guess correctly, and we're talking about some of the smartest, most well-read people on the planet here. If I asked everyone on HN how many times they have practiced stopping their car from high speed in the shortest time possible, and whether they were competent braking at high speed, I'd round that guess to virtually 0. Let's talk about the wet: can you control the car when it fishtails? Can you stop quickly in the rain? No and no.

You simply cannot be competent in a life-and-death situation without training, nor without a basic understanding of vehicle dynamics. You just can't. Now I'm not saying everyone must be able to toss the car into a corner, get it sideways and clip the apex while whistling a happy tune; but for god's sake can we at least mandate 2 days of training at a closed course with an instructor who has a clue about how to drive? That would absolutely save lives... lots and lots of lives.

Which brings me to my favorite feature of some of these "self driving" cars: all of a sudden, with no warning whatsoever, the computer says hey, I'm fucked here -- you have to drive now and save us all from certain disaster. I probably could not do that, and I sure as hell can toss a street car sideways into a corner while whistling a happy tune.


> If I asked everyone on HN what steers the rear of a motor vehicle

What does this even mean? I'm guessing you're going for 'the throttle' but it's a pretty ambiguous question.

Totally agree on advanced driver training though. If you don't know the limits of your vehicle then you shouldn't be driving it.

As for the last point, I think we need to ditch the "level X" designations and describe automated vehicles in terms of time that they can operate autonomously without human intervention. A normal car is rated maybe 0.5s. Autopilot would be 0.5s - 1s. Waymo would be much more depending on how rarely they need a nudge from the remote operators.


> What does this even mean? I'm guessing you're going for 'the throttle' but it's a pretty ambiguous question.

I am going for the throttle (you know, to stop the car from rotating too much after I threw it into that corner), and yes, you are correct: it (the throttle) does "steer" the rear of the car. Plus 1 btw.

Ambiguous... maybe. Anyway; see you on the wet skidpad ;) .


Ooh I like wet skidpads! :D

Other possible interpretations that I thought of (for the record):

- The front wheels (under good traction conditions)

- The limit of traction for the rear wheels (when in a corner nearing said limit of traction... my favourite part, btw.)

- The front wheels (if you're already sliding but hoping to go mostly straight)

- The throttle (if you're sliding and planning on keeping it this way while the front wheels dictate angle of drift)

- Edit: The handbrake (if you're in a front wheel drive for some reason)

:D


> Other possible interpretations that I thought of (for the record):

I almost forgot how difficult it can be to explain the nuances of vehicle dynamics clearly and succinctly.

Let's start by using the classic Grand Prix Cornering Technique (rear wheel drive/rear engine car). We brake in a straight line, and the weight transfers forward so that now the front tires have more grip than the rear tires (as a rule of thumb, the more weight a tire has on it the more it grips, because it is being "pushed" down onto the road; you can of course "overwhelm" the tire by putting too much weight on it, causing it to start to lose adhesion). As we get to the turn-in area of the corner we (gently) release the brake and we (gently) apply throttle to move some of the weight of the car back towards the rear tires (if we didn't do this the back of the car would still have almost no grip and we would spin as soon as we initiated steering input to turn into the corner).

Now we are into the first 1/3 of the turn, and approaching the apex--we have all the necessary steering lock to make the corner, that is to say we will not move the steering wheel anymore until it is time to unwind it in the final third of our turn (also, we are on even throttle; we cannot accelerate until we are at the apex). So here we are--the front and rear slip angles are virtually equal, but we want to increase our rate of turn because we see we will not perfectly clip the apex... we breathe (lift a bit) off the throttle, but keep the steering locked at the same angle, and the car turns (a bit) more sharply. We have actually just steered the rear of the car with the throttle; yes, we have affected the front tires' slip angles as well, but if we viewed this from above we would see we have rotated the car on its own axis.

This works, to varying degrees, in every layout of vehicle--FWD, AWD, RWD. Technique and timing are critical, as are the speed, gearing, road camber, and so on. The fact remains, though, that the throttle steers the rear of the car.
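
For anyone who wants the physics in rough form, here's a crude point-mass sketch (all numbers made up; real cars add suspension geometry, tire load sensitivity, etc.):

    # Crude longitudinal weight-transfer model (point mass, static geometry).
    # All numbers are made up for illustration.
    G = 9.81               # m/s^2
    mass = 1400.0          # kg
    wheelbase = 2.6        # m
    cg_height = 0.5        # m
    static_front = 0.55    # fraction of weight on the front axle at rest

    def axle_loads(accel_g):
        """Positive accel = on the throttle (load moves rearward); negative = braking."""
        transfer = mass * G * accel_g * cg_height / wheelbase   # newtons shifted rearward
        front = mass * G * static_front - transfer
        rear = mass * G * (1 - static_front) + transfer
        return round(front), round(rear)

    print(axle_loads(-0.8))   # hard braking: front loads up, rear gets light
    print(axle_loads(0.2))    # gentle throttle: load (and grip) moves back to the rear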


The answer to this question really depends on whether your vehicle is FWD or RWD. It sounds like you have a RWD car and the people not answering it "correctly" don't.


"If I asked everyone on HN what steers the rear of a motor vehicle; I'd guess only 10% would guess correctly, and we're talking about some of the smartest most well read people on the Planet here."

https://en.wikipedia.org/wiki/Weissach_axle


In this hypothetical, I'd rather ride in a car driven by the human--I can see if my driver is drunk, texting, or otherwise distracted and yell at him or demand to get out of the car.

With the computer, I'm just completely in the dark and then maybe I die in some stupid way.


If those were the only options, we could choose the second.

But you have the option of assistive technologies. Have the human drive and the machine supervise. The mistakes made when the human falls asleep can be prevented. The mistakes that the machine makes, well, those won't be made at all, as the human is the active driver.


> Which type of driver do you choose?

Being able to explain the mistake (eating, texting) vs. (???) makes things qualitatively different.

I think any reasonable level of risk for the computers can be acceptable, if glitches are explainable, able to be recreated and provably remediable.


Yeah-- they are qualitatively different.

I didn't want to cloud the question with this point, but data from a machine driver's mistake can be used to train every other machine driver and make it better. While much can still be learned from the mistake made by a human driver, the error is not as likely to be minimized across the 'fleet' in the same way as it is for a machine driver, if that makes sense.

Also it's probably important to keep in mind-- if my understanding is correct-- companies like Tesla are only using neural nets for situational awareness-- to understand the car's position in space and in relation to the identified objects, paths, and obstacles around it. The actual logic related to the car's operation/behavior within that space is via traditional programming. So it's not quite a black box in terms of understanding why the car decided to act in a particular way-- it may be that something in the world was miscategorized or otherwise sensed incorrectly, which could be addressed (via better training/validation, etc.). Or it could be that it understood the world around it but took the wrong action, which could also be addressed (via traditional programming).

If I'm wrong about that, I'm sure someone will chime in. (please do!)


>While much can still be learned from the mistake made by a human driver, the error is not as likely to be minimized across the 'fleet' in the same way as it is for a machine driver, if that makes sense.

This is some serious whitewashing here. It's not "unlikely", it's simply not going to happen at all. People have been killing innocents by drunk driving for decades now, so they obviously still haven't learned. They continually make the same mistakes, over and over. No, human drivers do not learn at all from each other's mistakes in any significant fashion.

This could be changed, if we as a society wanted it to. We could mandate serious driver training (like what they do in Germany), and also periodic retraining. Putting people in simulators and test tracks and teaching them how to handle various situations, using the latest findings, would save a lot of lives. But we choose not to do this because it's expensive and we just don't care that much; we think that driving is some kind of inherent human right and it's very hard for people to lose that privilege in this country. And it doesn't help that not being able to drive equates to being very difficult to survive in much of the US thanks to a lack of public transit options.


>They continually make the same mistakes, over and over. No, human drivers do not learn at all from each other's mistakes in any significant fashion.

Why do you assume that drivers have to learn from each other's mistakes? A drunk driver learning from his own mistakes is already significantly ahead of a self-driving car, which potentially just repeats the same mistake over and over again. The correlated risk may even cause accidents in bursts. 10 self-driving cars all making the same mistake at the same time will cause even more damage than just a single one.


>Why do you assume that drivers have to learn from each other's mistakes?

Because that's what computers do: we can program them to avoid a mistake once we know about it, and then ALL cars will never make that mistake again. The same isn't true with humans: they keep making the same stupid mistakes.

>A drunk driver learning from his own mistakes is already significantly ahead of a self-driving car, which potentially just repeats the same mistake over and over again

Why do you think this? You're assuming the car's software will never be updated, which is completely nonsensical.

>10 self-driving cars all making the same mistake at the same time will cause even more damage than just a single one.

Only in the short term. As soon as they're updated to avoid that mistake, it never happens again.


I would choose the "machine" driver if it is two (not one) orders of magnitude safer.

By the way, your examples are not very good, as the drunk/texting driver makes a choice, and to some extent her passengers do too. No such choice is given when the car is autonomous.


Drunk drivers don't just kill their passengers; they kill other drivers and pedestrians. You have no control over that.


As opposed to...well. RIP EH.

The machine, of course


Do you think your choice is representative of the population?


I hope so. Even if it's not, at least I'm contributing to making it so.


I'd pick the distracted driver currently, and for two good reasons:

1) The computer has bugs that will kill the driver 100/100 times if they encounter that case: driving at the guard rail, the truck decapitation, and whatever bugs have yet to be discovered.

2) A distracted driver may encounter one of those situations and see and avoid it, even if drunk or looking down.

The likely case is that the current crop of self-driving cars is much more dangerous, and will remain so until some magical breakthrough happens, as mentioned above.


If that was actually the case, i.e. it had a lower accident rate than humans but its accidents were silly bugs, it wouldn't be a "huge difference" -- it would be a slight improvement that would become a major improvement when the bug gets fixed.

If the average person thinks they're better than the computer when the computer is better than the average person, the average person is incorrect.

> They want to be safe when they haven't done anything dangerous -- they want to be in reasonable control of their destiny.

If you don't trust the computer, does that mean you won't trust another driver, if the computer is better than the average driver? Then how do you drive at all, when the roads are full of average drivers who could hit you at any time?


> If the average person thinks they're better than the computer when the computer is better than the average person, the average person is incorrect.

Until you remember that to obtain statistically significant evidence that the latest version number of Tesla (or any other firm's) software is safer than the average driver entails hundreds of millions of miles of driving. And for that matter that the "average driver" accident rates are skewed upwards by the number of incidents involving people who you'd never, ever volunteer to be driven by [in their state of intoxication].
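
As a crude illustration of the scale involved (using the statistician's "rule of three" for an upper bound after zero events; the human baseline figure is approximate):

    # "Rule of three": after N event-free miles, the 95% upper confidence bound on the
    # fatality rate is roughly 3/N. The human baseline (~1 per 100M miles) is approximate.
    human_fatality_rate = 1 / 100_000_000
    miles_needed = 3 / human_fatality_rate
    print(f"{miles_needed:,.0f}")   # 300,000,000 fatality-free miles just to claim parity
                                    # at 95% confidence -- and far more if any fatalities occur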

In the mean time, the human heuristic that a car which tries to kill you by accelerating at lane dividers isn't safer than your own average level of driving skill in many circumstances is probably better than trusting exponential curves and the Elon Musk reality distortion field.


> And for that matter that the "average driver" accident rates are skewed upwards by the number of incidents involving people who you'd never, ever volunteer to be driven by [in their state of intoxication].

And they include driving in a huge range of conditions and roads that driving automation does not function in.

The conditions in which automated driving technology gets used (good weather, highway driving) must have far lower rates of accidents than average.


The accident rate comparison wasn't between miles driven by humans vs. miles driven by technology, it was between miles driven by cars before and after the technology was made available, and the accident rate went down. The only way that happens is if it does better than humans at the thing it's actually being used for. Which means that even if it's only used in clear conditions, it's doing better than humans did in clear conditions -- not just better than humans did on average.

It's plausible that it currently does worse with adverse weather than humans do with adverse weather, but I'm not aware of any data on that one way or the other, and I wouldn't expect any since it's not currently intended to be used in those conditions.


> Until you remember that to obtain statistically significant evidence that the latest version number of Tesla (or any other firm's) software is safer than the average driver entails hundreds of millions of miles of driving.

It is possible for a newer version to contain a new bug that would increase the accident rate significantly, but given the existence of realtime collision data, that seems like the sort of thing that would be caught and corrected rather quickly, before it would dramatically affect the long-term average. So you have a probability of being the unlucky first person to encounter a new bug, but unless the probability of that is significantly higher than the overall probability of being in a collision for some other reason, that's just background noise.

Moreover, it isn't an unreasonable expectation that newer versions should be safer than older versions in general, so using the risk data for the older versions would typically be overestimating the risk.

> And for that matter that the "average driver" accident rates are skewed upwards by the number of incidents involving people who you'd never, ever volunteer to be driven by [in their state of intoxication].

That doesn't help you much when the intoxicated driver is driving a vehicle that hits you rather than the vehicle you're a passenger in. You presumably would prefer that vehicle to be self-driving rather than operated by the aforementioned drunk driver.

> In the mean time, the human heuristic that a car which tries to kill you by accelerating at lane dividers isn't safer than your own average level of driving skill in many circumstances is probably better than trusting exponential curves and the Elon Musk reality distortion field.

The somewhat ironic thing about stories like this is that whenever they discover something like this, it automatically becomes the focus of engineering time, both because a specific problematic behavior has now been identified and because not fixing it is bad PR. But then your heuristic is stale as soon as they fix it, which is likely to happen long before any kind of true driverless operation is actually available.


> It is possible for a newer version to contain a new bug that would increase the accident rate significantly, but given the existence of realtime collision data, that seems like the sort of thing that would be caught and corrected rather quickly, before it would dramatically affect the long-term average.

This assumes they correctly diagnose the fault and know how to fix it, and are able to fix it without any adverse side effects on superficially similar situations requiring a different course of action. This, and the assumption of a monotonic decrease in bugs and other undesired behaviour, seems like assumptions which are inconsistent with real world development of complex software aimed at handling a near-infinite variety of possible scenarios. Any driver is going to encounter situations which are subtly different from those the car has been trained to handle on a constant basis, so the probability of being the first to encounter a new bug doesn't strike me as being particularly low. The gross human accident rate per million miles driven is very low (and a driver who is experienced, responsible and not intoxicated has good reason to believe their own probability of causing an accident is substantially lower)

> That doesn't help you much when the intoxicated driver is driving a vehicle that hits you rather than the vehicle you're a passenger in. You presumably would prefer that vehicle to be self-driving rather than operated by the aforementioned drunk driver.

I don't get to choose what vehicles other people use. I do get to choose whether to pay more attention to a car's actual erratic behaviour than a statistical claim that various previous iterations of the software have had fewer accidents than a set of humans whose accidents are heavily skewed towards people with less regard for road safety than me.

> The somewhat ironic thing about stories like this is that whenever they discover something like this, it automatically becomes the focus of engineering time, both because a specific problematic behavior has now been identified and because not fixing it is bad PR.

This argument works in theory, but videos of Teslas accelerating at lane dividers are neither a new phenomenon nor one which is reported to have been fixed. I'm sure plenty of engineer time has been devoted to studying them (despite Tesla's actual PR strategy being to deny the problem and deflect blame onto the driver rather than announce fixes) but the fixes aren't trivial or easily generalised and approaches to fixing them are bound to produce side effects of their own.


> This assumes they correctly diagnose the fault and know how to fix it, and are able to fix it without any adverse side effects on superficially similar situations requiring a different course of action.

We're talking about a regression that makes things worse than they were before. The worst case is that they have to put it back the way it was.

> This, and the assumption of a monotonic decrease in bugs and other undesired behaviour, seems like assumptions which are inconsistent with real world development of complex software aimed at handling a near-infinite variety of possible scenarios.

I think this is misunderstanding what happens with large software systems. What happens is that people have a certain level of tolerance for misbehavior, so the system gets optimized to keep the misbehavior at that threshold. Then every time a component improves to reduce its misbehavior, it allows them to trade off somewhere else, usually by increasing the complexity of something (i.e. adding a new feature), because they'd rather have the new feature which introduces new misbehavior than the net reduction in misbehavior.

That doesn't really play out the same way for safety-critical systems, because people highly value safety and it's not especially difficult to measure it statistically, which puts pressure on the companies to compete to have the best safety record and therefore not trade the reductions in misbehavior for additional complexity as much.

> Any driver is going to encounter situations which are subtly different from those the car has been trained to handle on a constant basis, so the probability of being the first to encounter a new bug doesn't strike me as being particularly low.

It's not just a matter of encountering a new situation with a subtle difference. The difference has to cause the system to misbehave, and the misbehavior has to be dangerous, and the danger has to be actually present that time.

And if it was really that common then why aren't their safety records worse than they actually are?

> The gross human accident rate per million miles driven is very low (and a driver who is experienced, responsible and not intoxicated has good reason to believe their own probability of causing an accident is substantially lower)

The rate for autonomous vehicles is also very low, and the average person is still average.

> I do get to choose whether to pay more attention to a car's actual erratic behaviour than a statistical claim that various previous iterations of the software have had fewer accidents than a set of humans whose accidents are heavily skewed towards people with less regard for road safety than me.

So who is forcing you to buy a car with this, or use that feature even if you do? Not everything is a dichotomy between being mandatory or prohibited. You can drive yourself and the drunk can let the software drive, both at the same time.

Though it wouldn't be all that surprising that computers will one day be able to beat even the best drivers the same way they can beat even the best chess players.

> This argument works in theory, but videos of Teslas accelerating at lane dividers are neither a new phenomenon nor one which is reported to have been fixed.

You're assuming they're the same problem rather than merely the same result.

And in this case it's purposely adversarial behavior. There are tons of things you can do to cause an accident if you're trying to do it on purpose, regardless of who or what is driving. The fact that software can be programmed to handle these types of situations is exactly their advantage. If you push a sofa off an overpass into the highway full of fast moving traffic, there may be a way for the humans to react to prevent that from turning into an multi-car pile up, but they probably won't. And they still won't even if you do it once a year for a lifetime because every time it's different humans without much opportunity to learn from the mistakes of those who came before.


> I think this is misunderstanding what happens with large software systems. What happens is that people have a certain level of tolerance for misbehavior, so the system gets optimized to keep the misbehavior at that threshold. Then every time a component improves to reduce its misbehavior, it allows them to trade off somewhere else, usually by increasing the complexity of something (i.e. adding a new feature), because they'd rather have the new feature which introduces new misbehavior than the net reduction in misbehavior.

I think this is misunderstanding the difference between a safety critical system which is designed to be as simple as possible, such as an airline system to maintain altitude or follow an ILS-signalled landing approach, and a safety critical system which cannot be simple and is difficult to even design to be tractable, such as an AI system designed to handle a vehicle in a variety of normal road conditions without human fallback.

> That doesn't really play out the same way for safety-critical systems, because people highly value safety and it's not especially difficult to measure it statistically

The benchmark maximum acceptable fatality rate for all kinds of traffic related fatality is a little over 1 per hundred million miles, based on that of human driver. Pretty damned difficult to measure safety performance of a vehicle type statistically when you're dealing with those orders of magnitude...

> It's not just a matter of encountering a new situation with a subtle difference. The difference has to cause the system to misbehave, and the misbehavior has to be dangerous, and the danger has to be actually present that time.

Well yes, the system will handle a significant proportion of unforeseen scenarios safely, or at least in a manner not unsafe enough to be fatal (much like most bad human driving is unpunished). Trouble is, there are a lot of unforeseen scenarios over a few tens of millions of miles, and a large proportion of these involve some danger to occupants or other road users in the event of incorrect [in]action. It's got to be capable of handling all unforeseen scenarios encountered in tens of millions of road miles without fatalities to be safer than the average driver.

> And if it was really that common then why aren't their safety records worse than they actually are?

They really haven't driven enough to produce adequate statistics to judge that, and invariably drive with a human fallback (occasionally a remote one). Still, the available data would suggest that with safety drivers and conservative disengagement protocols, purportedly fully autonomous vehicles are roughly an order of magnitude behind human drivers for deaths per mile. Tesla's fatality rate is also higher than that of other cars in the same price bracket (although there are obviously factors other than Autopilot at play here).

> The rate for autonomous vehicles is also very low, and the average person is still average.

You say this, but our best estimate for the known rate for autonomous vehicles isn't low relative to human drivers despite the safety driver rectifying most software errors. And if a disproportionate number of rare incidents are caused by "below average" drivers, then basic statistics implies that an autonomous driving system which actually achieved the same gross accident rate as human drivers would still have considerably less reliability at the wheel than the median driver.

> You're assuming they're the same problem rather than merely the same result

From the point of view of a driver, my car killing me by accelerating into lane dividers is the same problem. The fact that there are multiple discrete ways for my car to accelerate into lane dividers, and that fixing one does not affect the others (it may even increase the chances of them occurring), supports my argument, not yours. And even this instance, which unlike the others was an adversarial scenario, involved something as common and easily handled by human drivers as a white patch on a road.


> Turns out your car has a habit of glitching like this every 165,000 miles or so (which seems to be the average accident rate) for no apparent reason.

People do equally inexplicably stupid things as well, due to distraction, tiredness or just a plain brain fart.


> This is a fallacy.

Please do point out where the statement is incorrect.

I wasn't talking about perception; people have all kinds of ideas about machines (mostly unfounded). But typically safety records drive policy, not whether drivers think they are better than the machine.

I believe machine assisted driving is safer than unassisted already and will continue to improve such that in 10 or 20 years human drivers will be the exception, not the norm. That will happen because computers are already safer in most conditions - the switch will happen when they are demonstrably far safer in all conditions.

Remember they don't have to beat the best human driver, just the average.


> That will happen because computers are already safer in most conditions - the switch will happen when they are demonstrably far safer in all conditions.

Computers currently can disengage in any conditions for any reason - how did you come to the conclusion that computers are already safer?


Humans drive 8 billion miles per day in the US, through a variety of conditions, crazy weather, crazy traffic, crazy pedestrians, etc. “Just being safer than humans” is an extremely tall order.


Full autonomy has been 10 years away for half a century. If driving still exists in a century, I'd wager that full autonomy would still be 10 years away then.

   just being safer than humans is enough
"Enough" for a statistician... but not if you're the one killed in an "acceptable" edge case that most humans handle fine.


> Humans kill over 100 people per day in traffic accidents in the US alone

They also drive 8-9 billion miles per day. That's around 1 death per 90 million miles of driving. Given the number of AV miles that are driven annually right now, we actually would expect to see ~0 deaths per year if AVs were as safe as humans...
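
Rough numbers behind that (the annual driverless-AV mileage here is a guess for illustration, not an official figure):

    # All inputs approximate; the AV fleet mileage is a rough guess for driverless testing.
    human_deaths_per_mile = 1 / 90_000_000
    av_miles_per_year = 2_000_000
    expected_deaths = human_deaths_per_mile * av_miles_per_year
    print(f"{expected_deaths:.3f}")   # ~0.022 -- i.e. essentially zero deaths per year
                                      # expected if AVs merely matched human safety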


FWIW, Tesla claims to have logged 1.2 billion autopilot (primarily freeway only) miles as of last summer:

https://electrek.co/2018/07/17/tesla-autopilot-miles-shadow-...

It is probably best to categorize these as autopilot PLUS human supervision, but anyway, Wikipedia cites 3 autopilot-caused deaths worldwide over 3 years or so.

https://en.wikipedia.org/wiki/List_of_self-driving_car_fatal...


It's all worth subcategorizing. Highway driving is substantially safer than surface streets, and if you stick to awake, sober, daytime drivers in good weather, the safety of human drivers is even higher.

Given that autopilot can handle nighttime, but not the others, it's completely possible that 1/300 million miles is above the sober good weather highway human driver fatality rate.


Jim Keller quit. No chip yet. Doesn't bode well.


Keller finished designing HW3 before departing Tesla. In a recent tweet Musk claimed HW3 is nearing production; then, a while after media outlets rushed to report on the first tweet, Elon threw out the caveat that HW3 won't be going into cars until the software is ready, which could mean anything, and likely means they are nowhere near having their current software validated for HW3, let alone FSD.


Hmm, he says the upgrade will be offered in a few months: https://twitter.com/elonmusk/status/1117118581865476096

Always a good sign when the designer bails before production.


In Keller's case, that's his MO. He did exactly the same thing with Zen, which basically saved AMD. I'm fairly bearish on Tesla and don't read anything into it.


That's actually Jim Keller's M.O.; he doesn't hang around for long. He does the bit he's interested in and then moves on. In the last decade he's worked for Apple, AMD, Tesla and Intel.


Here's a recent interview he did about his new role at Intel. https://www.hpcwire.com/2019/03/21/interview-with-2019-perso...


I can appreciate being so good you can get away with that.


What's M.O?


Modus operandi - someone's habits of working, particularly in the context of business or criminal investigations.

https://en.m.wikipedia.org/wiki/Modus_operandi



Why did the first 2 replies to this comment use the phrasing that that is his 'MO'?


I posted mine and looked back at the thread, and someone else had posted at roughly the same time. Coincidence, I guess. MO is a good term to describe the situation.


Isn't software a bigger issue? I mean, it took Nervana and Nvidia a non-trivial number of years to optimize code. This is also the reason OpenCL is next to useless for deep learning, and AMD is creating a HIP transpiler instead of building its own equivalents of CUBLAS/CUDNN/....


I believe the chipset was designed with the current/future roadmap of the AI in mind. I think it's basically a chipset optimised for the particular neural net approach they are using.


The chip's supposed to be shipping soon. I think the design was done a while back.


>The chip's supposed to be shipping soon.

Oh? Where did you hear that? Can you post a source, or are you speculating?


"the Tesla autopilot AI computer is about to roll into production" - Elon Musk, Feb 20 or so https://www.youtube.com/watch?v=Y8dEYm8hzLo&t=16m20s


They probably needed the data collected by the current hardware in order to consider developing the next generation. Then they needed the hardware in order to develop the software. The only way to get the data is from a real fleet running hardware and software you control.


Tesla's vaunted "fleet data" is a red herring.



>but if you've missed that news, you're not really following Tesla very closely.

If you think this will actually happen, you're not really following Tesla very closely.

Outside of Elon's boasting, is there any evidence they have a chip that is going to be 10x better than the best in the industry? How do you imagine that happens? Where does the talent come from?


A 30-watt Nvidia GPU from 2016 is not "the best in the industry".

Edit: To expand, the GPU in current Teslas is a GP106. It has no neural net-specific compute abilities. For NN inference it's slower than last year's iPhone. A vastly faster chip wouldn't be hard to get. Even if their in-house silicon efforts fail, they could just buy a modern Nvidia chip and the inference speed would go up 100x. Those chips go for $1000, easily covered by Tesla's "full self driving" upgrade package price tag of $5000.

If they run into a problem with FSD, it's not going to be finding a way to run 10x faster compute than the current shipping hardware. They may have other problems, but not that.


1000 Tflops to make it usable is currently out of reach; even lab prototypes can't do that yet.


Not literally 10 times processor speed, but 10 times as many video frames processed per second.


Tesla is offering iterative improvements for free.

The above post is saying that they’ll need to offer groundbreaking cutting edge tech that doesn’t even exist yet ... for free.

All the while this tech is being invented, Tesla is selling more cars that will need this technology.


If the improvements happen in software, deploying them for free is obviously not an issue. The question is how much the necessary hardware modifications would cost. But they're charging thousands of dollars for the option, which gives them some leeway to at least break even there, especially if you consider the time value of money between when they sell the option and when they deliver it.
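
To put a rough number on the time-value point (the discount rate and delay are assumptions; the $5k option price is mentioned elsewhere in the thread):

    # Illustrative only: option price from elsewhere in the thread; the discount rate
    # and delivery delay are assumptions.
    option_price = 5000.0
    annual_rate = 0.05
    years_until_delivery = 5
    value_at_delivery = option_price * (1 + annual_rate) ** years_until_delivery
    print(f"{value_at_delivery:,.0f}")   # ~6,381 -- some headroom toward a hardware retrofit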

The real question is, what happens if "full self driving" doesn't arrive for another 30 years?

But if you think car companies have never promised something they weren't sure they could deliver, you're not aware of the incumbents' unfunded pension liabilities.


> But if you think car companies have never promised something they weren't sure they could deliver, you're not aware of the incumbents' unfunded pension liabilities.

Can we just get rid of whataboutism here entirely, please, HN?

One wrong does not excuse another.


This is the original post in this thread:

> It is kinda amazing that Elon Musk is selling cars based on speculative future breakthroughs in technology. That must be a first.

But promising things based on speculation about the future is not new, it's common practice. It may be a questionable idea, but the claim was that it's unusual. It's not.


Please do provide one example in which any car manufacturer promised features which were outright vaporware.

Cheating, i.e. Dieselgate, doesn't count. That was not a promised feature; it was outright fraud.


> Please do provide one example in which any car manufacturer promised features which were outright vaporware.

Volkswagen has been advertising their EV charging network (Electrify America), promising to expand availability in the future:

https://www.electrifyamerica.com/our-plan

If you buy one of their electric cars planning to use it, you're assuming they're actually going to build it, and if you live near one of the proposed sites, actually build that one in particular.

And this is hardly new. Ford Sync is more than a decade old, but when it came out they were advertising all the things you could theoretically do with it, many of which were contingent on third parties writing apps for it. Some of that actually happened, some of it didn't, but it wasn't all there on day one.


Fair enough.

Thanks.


Whataboutism is “X isn’t bad because Y is worse/also bad”.

Parent was saying “X will probably be bad because similar Y was bad in the same dimension (here, overpromising)”.

Maybe not a solid argument, but it isn’t whataboutism.

Edit: Never mind, it looks like I misread the parent's point, and he was saying that it's no big deal to promise something without being sure you can deliver because the Big Three did it with pensions.


You've used different phrasing to say the same thing.


On second thought, I misread the argument, and I agree it's whataboutism. See edit. (However, if my misreading had been correct, it wouldn't have been whataboutism, as it would be a prediction that Tesla would also be bad.)


Musk says the new processor is about 20x faster than the Nvidia one in frames per second, and that they can feed all the cameras into it at full resolution.


You make no mention of HW3, which will be installed for free on the cars that bought FSD, and which includes Tesla's own chip (I assume some kind of TPU) designed by Jim Keller. Tesla has described it as having 100x the FPS processing capability of the Nvidia PX2, the one they use now [1]. So they're more or less in full agreement with, and have been working on making happen, the things you say need to be done, for the past 3-4 years.

Tesla's head of AI is Andrej Karpathy, who many in this community hold in high esteem. I know this is "argument by authority", but we're working with a black box here, so it will have to do. Do you really think he is wasting his best years on a project that anyone in the field can "guarantee" will never happen? Or could it be that he knows something you don't?

By the way, it seems you also don't know that they hold most FSD revenue in reserve on their financials, it's not being spent. So if they need to return it, they can.

[1]: https://www.inverse.com/article/47683-tesla-elon-musk-just-d...


It's hard to argue when all of your arguments are based on Elon Musk's assertions. I remember watching Elon Musk in one of his product announcement events promising that development of full self-driving mode would be finished by the end of 2018 and rolled out through 2019. This is while they told the CA DMV they hadn't tested even a single mile in self-driving mode in 2018: https://www.dmv.ca.gov/portal/wcm/connect/96c89ec9-aca6-4910... You'd think they need at least one mile before rolling out the feature.

Regarding the TPU chip that is 100x faster than NVidia's chip, I also take that with a grain of salt. Note that 3rd generation Google TPUs are on par with latest NVidia GPUs in terms of performance according to Google. If Tesla has made a chip that is 100x faster, they should spin it off as a separate company that could be worth as much as 2x Tesla's market cap.


>Note that 3rd generation Google TPUs are on par with latest NVidia GPUs in terms of performance according to Google. If Tesla has made a chip that is 100x faster, they should spin it off

I don't think they've claimed that the FSD computer (HW3) is 100x faster than "the latest NVidia GPUs". He's said that it's about an order of magnitude improvement in the number of video frames the current Nvidia hardware in a Tesla can process (that is, from 150-200fps to ~2000fps), without needing to scale down any frames.

I think he may have said in one live interview it would be "2000%" better, but since he said previously it would do 2000 frames/s, that may have just been a mistake of saying "percent" instead of "frames".


Your TPU point makes no sense. If Google's TPUs are on par with NVIDIA GPUs for NN workloads (I assume that's what you mean by "on par"?) then they suck at making chips, which they don't. What's even the point of making them, and why has e.g. Deepmind been using them if they could have been using NVidia chips in the first place? I don't think they're budget constrained. It sounds to me like you're using a non-relevant definition of "on par".

About the stuff that's based on Elon's assertions:

First, yes, he is often wrong on timelines. Nobody doubts that. By the way, for other car companies (even Waymo!) who claim they'll have X milestone by Y date, everyone is understanding, since timelines slip. For Tesla, apparently it's a capital crime to say "I think we'll have it by then" and not have it. But your original points were not about timelines.

As for the miles they have registered with the DMV, Tesla's self-driving programme does not follow the same path as others. They are progressing from level 2 upwards, and deploying improvements to their fleet of cars in production. Other companies are working with tiny fleets and aiming directly at level 4+. So basically, you're looking in the wrong place. But even so, Elon's latest prediction is that they'll have "feature completeness" by end of year, and then they'll start working on regulatory approval. So I assume that's when you'll start seeing miles there, and you will very likely see lots of them, all at once.


As far as I know, Tesla's FSD feature set is level 2, which no manufacturer needs to report to the CA DMV.


Andrej Karpathy just finished his PhD and he was offered an executive position at a major public company. Also, he is not wasting his time. His department is building something that is actually used in cars today. It's just not the self-driving software that Elon is pitching.


> Andrej Karpathy just finished his PhD and he was offered an executive position at a major public company.

Since when is middle management with an org size of 80 an "executive position"?

> His department is building something that is actually used in cars today.

So do the departments at Cruise, Waymo, BMW, and even universities. Karpathy isn't special -- and neither is Tesla Autopilot's progress.


Ah, so they are going to equip all future cars with a chip that doesn't exist yet, for free, so it will shrink margins even more. Sounds smart.

>By the way, it seems you also don't know that they hold most FSD revenue in reserve on their financials; it's not being spent. So if they need to return it, they can.

Please point to the line and note in the financials where this is done, because I'm quite sure you're mistaken. Tesla hardly has any warranty reserves, let alone FSD refunds.


Hello, fellow "former Waymo dev." What you said is one of the many reasons that I replaced my Model X with an I-PACE. While I support the movement-away-from-gas part of their mission, I can no longer support Tesla as a business. I'm now focusing my energy toward advocating sustainable migration to general EV technology.


Sometimes when I go to hacker news, I feel really poor.


Nah, get on to Blind. That will make you feel poor.


People on blind are either straight up lying or have a very extreme case of selection bias.

Blind worries me a lot. Because if the general claims made on the website are true, then the people in top companies making $300k+ annually are some of the most immature and toxic people on the face of the earth.


It happens on this site too... if you believe HN everyone in the UK is making £100k+, even though the average salary here is more like 40k. I’ve seen people complain because they’re ‘only’ making as much per month as most people in the country make annually.


You can walk onto any city street and see the occasional person driving a nice car. Does that make you feel poor too?


I don't see these other people as fellow developers, but as golf-playing executives or their children. I'm aware that not everyone here is a dev, but we mostly have similar careers.


Not everyone here lives in the US. Making SV salaries here, where 99+% of people are making under $80k/year, would be obscene. If I had access to that kind of money, I'd be putting it into better housing (penthouse?) long before ever thinking of buying a car like that.


I live on about 30% of my salary. Unless you're in the bay area or have a family to support, that's plausible (and honestly pretty comfortable) for most developers. If you wanted that car, you could have it.


An I-Pace is only $60-70k. Assume you have 20k positive equity in your current vehicle; the loan cost is about $800/mo. The equivalent would be buying a new Subaru Outback or F150 and paying for gas and oil changes.


>An I-Pace is only $60-70k

Please take a step back and think about the average human being for a moment


"Assume you have 20k positive equity in your current vehicle" Yeah, that's definitely an assumption.


Perhaps an easier and quicker fix (although extremely costly) is to outfit all roadways with aids for autonomous driving. Otherwise, we will have to wait decades for the technology to be mature enough.


I wonder though if the roadway maintenance would be good enough to have that work reliably. The car would still have to react to misconfigured, misplaced, or defective aids.

Even if it’s not just negligence, if an accident were to move the markers, would it cascade accidents until these markers are fixed?

In the end, it seems we’d still need cars way cleverer than what we have now for it to be trustworthy.


The fatal Tesla crash into a highway barrier a while back was because of faded highway markings.

Many localities can barely keep the lines painted. Many more have all sorts of adverse weather conditions (e.g., snow and slush) that make road markings hard to see. The chances of this being a workable solution, at least in the US, are nil.


that crash wasn’t because of the markings. saying so, although maybe accurate, softens the issue. it was because tesla has a poor autonomous system that couldn’t plainly see a road splitting and instead hit a barrier at full speed with no braking. the markings were a contributor, since staying between the lines they can see seems to be mainly what teslas do.


No tech that requires a special type of road will ever get successful, whether it's aids for autonomous driving, electric roads for on-the-go charging, or similar. The world already has countless kilometers of roads, and all you need to drive a regular car is a relatively hard and flat piece of dirt. The cost and time required would be immense, and there will always be a need to use cars outside any built-up road network.

Sure, for commuting in large cities anything is possible, but then cars still need to be able to do both old and new navigation.


>No tech that requires a special type of road will ever get successful

I'm reading this from inside a train.


Trams are also very popular in Europe and cars can still drive on roads with tram tracks.


Notice how few miles of train tracks there are in comparison to roadways.


That's exactly why trains are not as successful as cars.


It doesn't have to be an all-or-nothing proposition. For example, just augmenting interstate highways to enable reliable autonomous driving would be a complete game changer for commerce, let alone personal use.

Let the driver drive the car manually for the last mile of travel. Most of the problems of autonomous driving are on the last mile (pedestrians, unmarked roads, poor lighting, school crossings, train crossings, etc.). The last mile is the most expensive to outfit and maintain, too.

I wouldn't be disappointed at all to find that in the future, cars drive by themselves 80% of the time, and humans take over for the tricky 20%. If the roadway isn't autonomous-ready the car shouldn't become a brick. You just drive it manually.


Who pays for that?

You would either have to add massive taxes to road usage (not just gas), heavily tax automobile ownership or use general purpose taxes for such very expensive upgrades.

As a non-owner of a car I would balk at the last option.


You're assuming the upgrades will be extraordinarily expensive. What if what is required is something as simple as radar-reflecting paint, and somehow encoding additional information in the markings? Or maybe all cars should have an RFID-like thing on their bumpers to make car detection more reliable. The gov't could distribute that bumper sticker along with the car registration sticker. You could enforce that in the annual inspection.

Sure, a car that doesn't need any aids is much cooler. But if you look at the history of technology, especially in the PC world, there were a lot of clever hacks and external aids that had to be used in order to make the tech feasible. Then with time, they were rendered unnecessary.


Roads are already refreshed regularly; maybe some specific color will be enough.


>Roads are already refreshed regularly

You don't live in the US, do you?


regularly doesn't mean yearly, but a 10-year interval is okay-ish for a slow infrastructure upgrade.


Here in DC, we don't update roads nearly that often, judging by all the potholes.


Road lines are repainted yearly or something like that on most streets


> No tech that requires a special type of road will ever get successful

Except, y'know, those newfangled automobile things that they had to build special roads for.


What if China does it first and the western world sees it as another Sputnik moment?


yea, just like we’ve been able to outfit all roads properly for aiding human drivers. you mentioned it’s costly, but this just won’t or can’t happen. we don’t even properly maintain our roads.


Just a random data point. I've used Tesla's autopilot on completely unmarked dirt roads in the past. The speed was low and I was monitoring it very carefully, but it did somewhat work on some of the dirt roads I tried it on.


> It is kinda amazing that Elon Musk is selling cars based on speculative future breakthroughs in technology. That must be a first.

It is. But it's not that unusual these days: it's basically a Kickstarter!

Well, not exactly. Kickstarter projects have no responsibility to ever deliver. They just have to make a good faith effort.

Tesla has to eventually deliver some version of full self driving to customers, but they made sure not to say when.

So it's hard for me to see the legal liability for them. How can someone sue for a product that was never guaranteed a delivery date?

Worst case would be a lot of customers demanding refunds: which I doubt Tesla would fight.

So I guess that's the liability on the books: some % of $5000 refunds.

But a lot of customers will be happy with whatever they deliver whenever they deliver it. It doesn't feel like an existential risk to me.

Part of why it doesn't faze me is Tesla has so many knobs they can dial here. They can decide where FSD is enabled: if it works only on roads where the autopilot is highly confident, and only during good weather conditions, that probably fulfills their legal obligations.

This is why I think self-driving bears are wrong.... They argue computers will never be able to handle every corner case, which is true. But it only needs to work for one slice of drivers in one location for there to be a market, and from there it's just an iterative process to grow the service area.

It's like arguing my ice cream shop will fail because many people are lactose intolerant, pale, vegan, etc. Sure, but to have a successful business I just need one customer, then two, then four. And I get to choose which customers I court.

Tesla is the same wrt FSD.


> my most optimistic wild guess is 10-20 years

Woah, that's a long time! What are the problems in computer vision that need to be solved?


https://youtu.be/tH0rASvVItk?t=176

if this is accurate, the current models give a hint of the issue. the car can't yet see very far and any minor obstruction confuses the hell out of it. there are too few pixels to make out the road at a distance, and the car doesn't use any of the other clues humans do to figure out what the road is doing next - i.e. we can guess a corner from vertical signs, guard rails, trees and even hillsides - see this example: in red, the pixels that hint a tesla at an upcoming corner; in blue, those that a human can also use: https://i.imgur.com/CvntZuZ.png

the problem is that a camera, especially if it's not at a high vantage point, will have very few pixels to represent distant features.
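
to put rough numbers on it, here's a quick pinhole-camera estimate in python (the 1280 px width and 50° horizontal FOV are made-up round numbers for illustration, not actual tesla camera specs):

    import math

    # rough pinhole-camera estimate of how many pixels a distant object covers
    def pixels_covered(object_width_m, distance_m, image_width_px=1280, hfov_deg=50.0):
        focal_px = (image_width_px / 2) / math.tan(math.radians(hfov_deg) / 2)
        return object_width_m * focal_px / distance_m

    for d in (50, 100, 200):
        print(f"{d} m: a 1.8 m wide car spans ~{pixels_covered(1.8, d):.0f} px")
    # ~49 px at 50 m, ~25 px at 100 m, ~12 px at 200 m

so at 200 m a whole car is a dozen pixels wide, and a lane line or guard rail is far less than that.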


The autopilot behaves like an illegal street racer in that video (extremely tight turns) and this happened while it was only a few seconds away from colliding with a motorcycle.


as a side note: not all of the video was in autopilot mode; the icon changes to show where there was manual intervention


Adversarial attacks. Extrapolating from incomplete data. Consistent performance in any/most lighting conditions: night, dusk, low-hanging sun over an icy road. Fog. Any combination of the above.

Humans, while not perfect, are capable of making split-second decisions on incomplete data under a surprising range of conditions, based on the fact that our brains are an unmatched pattern-matching beast feeding off of our experience.


Adversarial Attacks? Like removing a speed limit sign? Painting over lane markers? Dropping bricks off of overpasses? Throwing down a spike strip? Sugar in the gas tank?

Unless you're talking about traditional computer security, which it doesn't seem like you are, these types of threats have not stopped humans from driving, despite the fact that humans are very susceptible to "adversarial attacks" while driving too. Whether it's putting carefully crafted stickers over a stop sign to confuse a CNN or yanking it out of the ground to get a human driver killed... you're talking about interfering with the operator of a moving vehicle... so what's the critical difference here?


I think the difference is that software tends to be much more fragile - and more predictable - than humans. Paint fake lane markers, and an autopilot might drive full speed into a wall because it trusts them; the next 5 cars with autopilots will all do the same thing. An attacker can verify that an autopilot will do this ahead of time. A human on the other hand will be more likely to notice that things are amiss - they can pick up on contextual clues, like the fresh paint, and the fact they've driven that road hundreds of times before and instantly notice the change, and the pile of burning self-driving cars.


I'm not sure why hackers prefer to hack large scale computer systems rather than individual humans, but they do. So we have to protect neural networks against adversarial examples for the same reason we have to protect databases against sql injections.

> so what's the critical difference here?

If neural networks are deployed at scale in self driving cars, a single bug could trigger millions of accidents.

Printing an adversarial example on billboards would lead to crashes all around the country. Are we going to assume no one is going to try? (btw: real world adversarial examples are easy to craft [1]).

[1] https://arxiv.org/pdf/1707.07397.pdf
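
For a concrete sense of what "crafting" one involves, the classic fast gradient sign method fits in a few lines. This is a generic sketch against a hypothetical PyTorch classifier, not the technique from the linked paper:

    import torch
    import torch.nn.functional as F

    def fgsm(model, image, label, epsilon=0.03):
        # nudge each pixel by +/- epsilon in whichever direction increases the loss
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

    # `model`, `image` (shape 1xCxHxW, values in [0, 1]) and `label` are placeholders.

If I remember right, the linked paper goes further and optimizes the perturbation to survive changes in viewpoint and lighting, which is what makes printed, physical-world attacks practical.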


> Adversarial Attacks? Like

Like literally the article we’re commenting on. Image recognition systems in general are much more susceptible to errors in cases where humans wouldn’t even think twice.

And yes, “hey, a stop sign was here just yesterday” is also a situation for which humans are uniquely equipped, and computers aren’t.


Adversarial attacks with low friction for the attacker, e.g. malicious software updates.


Do you mean OTA or just any attack because I think non-autonomous vehicles would have the same concerns... or really any equipment with embedded computers.


I mean any attack where the attacker is not required to move away from the keyboard and can corrupt multiple vehicles in one go.


Could the keyboard be in a radio-equipped car?

https://jalopnik.com/its-scarily-easy-to-hack-a-traffic-ligh...


Boy, are you an optimist.

Humans are LOUSY in almost all of those conditions as well as much less challenging ones--we engineer the road in an attempt to deal with human imperfections. The I-5/I-805 intersection in California used to have a very slight S-curve to it--there would be an accident at that point every single day. Signs. Warning markers. Police enforcement. NOTHING worked. They eventually just had to straighten the road.

Humans SUCK at driving.

Most humans have a time and landmark-based memory of a path and they follow that. Any deviation from that memory and boom accident.

This is the problem I have with the current crop of self-driving cars. They are solving the wrong problem. Driving is two intertwined tasks--long-term pathing, which is mostly spatial memorization, and immediate response, which is mostly station keeping with occasional excursions into real-time changes.

Once they solve station-keeping, the pathing will come almost immediately afterward.


Compared to the current generation of “self-driving cars” I’d say humans excel at driving.


You make a succinct point. Can you elaborate on the difference between Station Keeping and Lane Keeping, which is what's generally available now as LKAS?


"Station Keeping" is maintaining your position relative to the other cars around around you--and it's what most people do when driving.

Ever notice how a bunch of stupid drivers playing with their phones tend to lock to the same speed and wind up abreast of one another? Ever notice how you feel compelled to start rolling forward at a light even when it is still red simply because the car next to you started moving? When in fog, you are paying attention to lane markers if you can see them, but you are also paying attention to what the tail lights ahead of you are doing.

All of that is "station keeping".

And it's normally extremely important to give it priority--generally even over external signals and markings (a green light is only relevant if the car in front of you moves). It's the kind of thing that prevents you from running into a barrier because everybody else is avoiding the barrier, too.

Of course, it's also what leads to 20 car pile ups, so it's not always good...
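
If it helps to see the distinction in code, station keeping is roughly a "track the car ahead" control loop rather than a "track the paint" one. A toy sketch (the gain and time gap are invented; real adaptive-cruise controllers are far more careful):

    def station_keeping_speed(own_speed, lead_speed, gap_m, time_gap_s=2.0, k=0.3):
        # match the lead car's speed, corrected by how far the actual gap
        # is from the desired time gap (units: m/s and metres)
        desired_gap = time_gap_s * own_speed
        return max(0.0, lead_speed + k * (gap_m - desired_gap))

    # cruising at 30 m/s behind a 28 m/s lead car with only a 40 m gap:
    # desired gap is 60 m, so we ease off toward ~22 m/s to drop back
    print(station_keeping_speed(30.0, 28.0, 40.0))

Lane keeping (LKAS), by contrast, closes the loop on your lateral offset from the painted lines and mostly ignores the traffic around you.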


It’s not a long time. Computer speech recognition, a far simpler problem, has barely advanced at all in 10-20 years. Siri is no better than Dragon Dictate was in the late 1990s. It’s possibly worse.


Yeah this is just completely wrong. Without getting into specific products, public test sets from the 1990s like Switchboard and WSJ are now at around human-level transcription accuracy rates; 20 years ago the state of the art was nowhere near that.

It's also not objectively a simpler problem. Humans are actually not particularly good at speech recognition, especially when talking to strangers and when they can't ask the speaker to repeat themselves. Consider how often you need subtitles to understand an unfamiliar accent, or reach for a lyrics sheet to understand what's being sung in a song. For certain tasks ASR may be approaching the noise floor of human speech as a communication channel.


I assume you're basing your claim on WER on data sets like Switchboard, which is a garbage metric: https://medium.com/descript/challenges-in-measuring-automati....

Humans may not be particularly great at speech transcription, but they're phenomenal at speech recognition, because they can fill in any gaps in transcription from context and memory. At 95% accuracy, you're talking about a dozen errors per printed page. Any secretary that made that many errors in dictation, or a court reporter that made that many errors in transcribing a trial, would quickly be fired. In reality, you'd be hard pressed to find one obvious error in dozens of pages in a transcript prepared by an experienced court reporter. It is not uncommon in the legal field to receive a "proof" of a deposition transcript, comprising hundreds of pages, and have only a handful of substantive errors that need to be corrected. That is to say, whether or not the result is exactly what was said, it's semantically indistinguishable from what was actually said. (And that is why WER is a garbage metric--what matters is semantically meaningful errors.)

The proof of the pudding is in the eating. If automatic speech recognition worked, people would use it. (After all, executives used to dictate letters to secretaries back in the day.) On the rare occasions you see people dictate something into Siri or Android, more often than not what you see is hilarious results.
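
For anyone who hasn't seen it spelled out, WER is just word-level edit distance divided by the number of reference words; a minimal sketch with a made-up example:

    def word_error_rate(reference: str, hypothesis: str) -> float:
        # (substitutions + deletions + insertions) / number of reference words
        ref, hyp = reference.split(), hypothesis.split()
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + sub)  # substitution
        return dp[len(ref)][len(hyp)] / len(ref)

    print(word_error_rate("please send the report today", "please send a report today"))  # 0.2

The complaint above is precisely that this count treats a harmless "the" vs "a" swap the same as an error that changes what the sentence means.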


That article is correct that WER has some problems, but it also correctly concludes that "Even WER’s critics begrudgingly admit its supremacy."

Yes, Switchboard has problems (I've mentioned many of them here) but it was something that 1990s systems could be tuned for. You would see even more dramatic improvements when using newer test sets. A speech recognition system from the 1990s will fall flat on its face when transcribing (say) a set of modern YouTube searches. Most systems in those days also didn't even make a real attempt at speaker-independence, which makes the problem vastly easier.

Executives don't dictate as much any more because most of them learned to touch type.


Oh come on. I remember playing with speech rec in the 90's and it was terrible.

Now it works great!


Siri, at least, is total garbage. Half the time I try to dictate a simple reminder, Siri botches it. (The other day, I tried to text my wife that both Maddie and Tae sing. Siri kept transcribing “sing” as “sick.”) Siri at least is no better than Dragon Naturally Speaking was in the 1990s. The Windows 10 speech recognizer is somewhat better, but it’s still not usable (when was the last time you saw anybody use it?).


I don't have experience with Dragon or Siri, but Google Assistant has been improving at a noticeable pace, and for me it seems to get at least 90% of words correct.


I think the biggest problem with speech recognition is that it annoys everyone around you. I would use it more often but I don't like being noisy...

Maybe if it could be made to work well while whispering....


> you will need higher resolution cameras

I'm a bit skeptical.

We could confirm or disprove this by getting human test subjects to drive cars using nothing but the video feed from exactly the same cameras.

If the humans perform significantly better, then that shows that there is a lot of work to be done independently of the camera resolution.

I could drive reasonably safely without my glasses (equivalent to a massive reduction in resolution); I just wouldn't be able to read direction signs and such. I'd have to drive right up to a parking sign to get the exact days and hours of the parking restrictions and such, but I wouldn't run over a pedestrian, or go the wrong way down some lane.


Curious about this, I had a look at some Tesla camera footage, the youtube video on this site https://www.teslarati.com/tesla-dashcam-autopilot-cameras-fi...

As a human driver I'd say you could drive OK with that, although looking into the distance the images are annoyingly blurry compared to what I'd see with the naked eye driving normally. Maybe comparable to driving in the rain with so-so windscreen wipers.


I'd say it's marginal if I'd be able to drive safely with vision like that.

quite a lot of the time, the vertical 'smear' prevents me seeing what something is or if it's moving towards me, and the dynamic range is too poor to make out black cars on a black background.


Do you have any opinion on the new HW3 which is a custom ASIC for CNNs or something like that? Would love to know how that stands compared to other possible ASICs at Waymo.


I can understand the wariness about Tesla, but now Mobileye is claiming they can drive with vision sensors only.

https://www.youtube.com/watch?v=yZwax1tb3vo


Of course, self-driving cars don't have to be perfect; they only have to be better than human drivers.

Then they'll go mainstream, and they'll keep getting better and better, while human drivers don't.


No, they have to be much better than human drivers, close to perfect. We all know and expect that people make mistakes, and if a driver kills themselves by mistake that's on them. But if you buy a machine to drive you and it kills you because the engineers made a mistake, that's a different thing. That's much less tolerated, and rightly so, even if it isn't 100% logical by your standard. It's not just about mean number of accidents per driven km, it's also about accountability and blame.


I understand that many people will view it that way, but there is another way to view it.

Every single time I get in a vehicle, the people coming towards me might be talking on their phone, drunk or whatever. There is a very real chance I will be injured or killed on a daily basis even if I drive perfectly, and nothing beyond my abilities happens.

In the same way that seat belts and airbags and automatic braking improved safety, imperfect self-driving cars will improve overall safety.

The important thing to note is that seat belts, airbags and automatic braking are far from perfect, and thousands of people a year still die even though they are using them. People still use them, because it is safer than the alternative - which imperfect self-driving cars will be too.


So what?

People will not accept a computer driver, even if it manages a significant improvement on the current rate of car accidents, because there is a real psychological barrier. Your counterargument of "but it's an improvement" would miss that point.

Humans are prone to all sorts of fallacies, especially surrounding "destiny" and our own influence over our lives - that's why ideas like "bad things happen to bad people" are so popular. That kind of problem is not surmountable by statistical fact. You cannot sway the majority of road users to trust a machine that way. They want to be in control, because it makes them feel something that the computer can't - safe in their own hands.


> People will not accept a computer driver, even if it manages a significant improvement on the current rate of car accidents, because there is a real psychological barrier

In your opinion.

We can't know what people will accept, because we've never tried something like this before.

I've pointed out my view, and you've pointed out yours. Time will be the only way to see what people are willing to accept, or not.


Are you also one of these 'people' that will not accept a computer driver? Those people are using a poor emotional argument.


>Those people are using a poor emotional argument.

Yes, they certainly are, but those people also vote. You can't just ignore them and their idiotic arguments, unless you live in a techno-authoritarian nation where things like this are forced on the populace against their will (even if their will is stupid). This is the whole problem here; you can't just convince everyone with a technical argument, because most people are non-technical, emotional, and frequently quite stupid, but they also have a say in decisions.


That's exactly what I'm saying. The poor emotional argument is very important, and you can't just attack it with statistics.


Sure, for some people, but certainly not me. I will even accept a computer driver if it's only slightly worse than a human driver.


I would rather compare an auto-driving failure to an airbag that failed to deploy or was otherwise rendered useless. An airbag that is inadequate for the amount of force applied might still save your life, and is still serving its intended purpose. Auto-driving incidents are so outrageous because they represent systematic failures.


What does it mean to be “better than human drivers?” One of the features of humans is that while they make mistakes, those mistakes tend to be pretty randomly distributed. In any given unusual scenario, most people will be fine most of the time. Sometimes we get tired and veer into the incoming lane, but you could get rid of the lane markings entirely and almost everyone would still be able to figure out what to do. A self driving car might never get tired and veer into oncoming traffic on a well marked highway on a sunny day. But we don’t know what phenomena will emerge if a particular stretch of weirdly painted lines causes every vehicle to veer into oncoming traffic during rush hour.

“Better than human drivers” is not an analytically useful criterion. The question is, can you handle all the corner cases human drivers manage to handle every day?


> What does it mean to be “better than human drivers?”

It's extremely simple, and it's the same criterion they use to determine whether driving is safer with safety devices (e.g. airbags or seat belts) than without them.

Injuries or deaths per x miles or x hours driven.

It's very important to note that in some cases airbags and seat belts actually result in more severe injuries or deaths than without them, but overall they are better because they reduce the fatalities per x miles driven.

Just because airbags sometimes kill people, it doesn't mean I'm going to choose a car that doesn't have them. Overall, I'm better off to have them.
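
As a purely illustrative calculation of that criterion (both rates below are invented, not real data):

    # hypothetical per-mile comparison, just to illustrate the metric
    human_deaths_per_100M_miles = 1.2    # assumed
    system_deaths_per_100M_miles = 0.9   # assumed
    miles_per_year = 3.2e12              # rough US annual vehicle miles

    delta = (human_deaths_per_100M_miles - system_deaths_per_100M_miles) * miles_per_year / 1e8
    print(f"~{delta:,.0f} fewer deaths/year if the whole fleet switched")
    # ~9,600 under these assumed rates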


That’s not an incorrect metric, but it’s not a very useful one. Or put differently, you’ll need to be a lot more “perfect” than you think in order to get to an ultimate injury/death rate lower than human drivers. You need to be able to handle all the corner cases humans handle routinely, otherwise you’re going to get catastrophic effects.

For example, right now Teslas cannot detect stationary obstacles. They slam right into them at highway speeds: https://www.caranddriver.com/features/a24511826/safety-featu.... This is not a matter of just tweaking the algorithms to get better and better error rates—it’s a fundamental problem with the system Tesla uses to detect obstacles.

In order to actually get close to the accident rate of human drivers on average, you have to be “perfect” in the sense you have to be able to handle every edge case a human driver is likely to ever run into in their entire lives.


> In order to actually get close to the accident rate of human drivers on average, you have to be “perfect” in the sense you have to be able to handle every edge case a human driver is likely to ever run into in their entire lives.

That's like saying for airbags to be overall better, they have to be better at every single edge case like kids in the front seat or unrestrained passengers. We know for a fact they are not better at those edge cases, yet overall airbags are safer.

Why? Because 99.99999% of driving is not edge cases (that's why they're called edge cases), and as long as you're better for that very vast majority of cases, then you're better overall.


Fun fact: if you can only handle 99.99999% of cases (say on a per-mile basis), your system will still blow up roughly 320,000 times per year in the US alone (about 3.2 trillion miles driven per year times a one-in-ten-million failure rate).

You need to handle the “long tail” of exceptional cases. While most driving is not exceptional cases (on a per mile basis), they arise quite often for any given driver, and more importantly, for drivers in the aggregate. A vehicle stopped in the middle of the road is an edge case. It’s also something that just happened to me today, and in the DC metro area happens hundreds of times per day. Encountering a traffic cop in the middle of the street directing traffic happens to thousands of cars per day in DC. Traffic detours happen thousands of times per day. Road construction, unplanned rerouting, screwed up lane markings, misleading signs, etc.

The basic problem you’re having is that you’re assuming that failure modes for self driving vehicles are basically random. That’s not the case. A particular human might plow into a stopped vehicle because she is not paying attention, but the vast majority of people will not. But a particular Tesla will plow into a stopped vehicle because it doesn’t know how to handle that edge case, and so will every other Tesla on the road. A human driver might blow through a construction worker signaling traffic by hand because they’re texting. But the vast vast majority will not. But all the Teslas on that stretch of highway will blow through that guy signaling traffic, because they don’t know how to handle that edge case. A human driver might hit a panhandler who jumps into the street because she isn’t paying attention to body language. Every Tesla will do it, every time. Humans can handle all the edge cases most of the time. That means self driving cars must be able to handle all the edge cases, because any edge case they can’t handle, they can’t handle it any of the time.


> Fun fact: if you can only handle 99.99999% of cases (say on a per-mile basis), your system will still blow up roughly 320,000 times per year in the US alone

We know how many fatalities there are in the US per year, but I wonder how many crashes there are? How many near misses, or how many times does someone "luck" out and miss death by inches while being completely oblivious to it?

> A vehicle stopped in the middle of the road is an edge case

No it's not. It happens all the time (vehicles waiting to turn across oncoming traffic) and I'm sure training models are already dealing with it.

> But all the Teslas on that stretch of highway will blow through that guy signaling traffic, because they don’t know how to handle that edge case

You really think self-driving cars won't be able to read the "stop" sign a construction worker holds out? I bet they can now.

> A human driver might hit a panhandler who jumps into the street because she isn’t paying attention to body language. Every Tesla will do it, every time.

Again, you really think self-driving cars won't automatically emergency stop when they detect something jump out into their lane? Again, I'd be willing to bet they'll have a much faster reaction time than your average driver.

> Humans can handle all the edge cases most of the time

The number of road deaths per day around the world makes me strongly disagree with that.

It sounds like you have a particular bent against "Tesla", and you're not seeing this for what it is.

They don't have to be perfect, but they do have to continually get better. And they are.


> No it's not. It happens all the time (vehicles waiting to turn across oncoming traffic) and I'm sure training models are already dealing with it.

Yet Tesla released a vehicle with an “auto pilot” that can’t handle that case. Makes me skeptical they’ll ever be able to handle the real edge cases.

> You really think self-driving cars won't be able to read the "stop" sign a construction worker holds out? I bet they can now.

Teslas can’t. And will they be able to read the hand signals of the Verizon worker who didn’t have a stop sign while directing traffic on my commute last week?

> Again, you really think self-driving cars won't automatically emergency stop when they detect something jump out into their lane? Again, I'd be willing to bet they'll have a much faster reaction time than your average driver.

For a human driver, it doesn’t come down to reaction time. The human driver will know to be careful from the panhandler’s body language long before they jump into traffic.

Also, being able to emergency stop isn’t the issue. Designing a system that can emergency stop while not generating false positives is the issue. That’s why that Uber killed the lady in Arizona. Uber had to disable the emergency braking because it generated too many false positives.

> Humans can handle all the edge cases most of the time

The number of road deaths per day around the world makes me strongly disagree with that.

Humans drive 3.2 trillion miles every year in the US, in every sort of condition. Statistically, people encounter a lifetime’s worth of edge cases without ever getting into a collision (there is one collision for about every 520,000 miles driven in the US). In order to reach human levels of safety, self driving cars must be able to handle every edge case a human is likely to encounter over a entire lifetime.

> It sounds like you have a particular bent against "Tesla", and you're not seeing this for what it is. They don't have to be perfect, but they do have to continually get better. And they are.

I have a bent against techno optimism. Engineering is really hard, and most technology doesn’t pan out. Technology gets “continually better” until you hit a wall, and then it stops, and where it stops may not be advanced enough to do what you need. That happened with aerospace, for example. I grew up during a period of techno optimism about aerospace, but by the time I actually got my degree in aerospace engineering, I realized that we had hit a plateau. In the 60 years between the 1900s and 1960s, we went from the Wright Flyer to putting a man in space. But we have hit a plateau since then. When the Boeing engineers were designing the 747 in the 1960s, I don’t think they realized that they were basically at the end of aviation history. That 50+ years later (nearly the same gap between the Wright Flyer and themselves), the Tokyo to LA flight would take basically the same time as it would in their 747.

The history of technology is the history of plateaus. We never got pervasive nuclear power. We never got supersonic airliners. Voice control of computers is still an absurd joke three generations after it was shown on Star Trek.

It’s 2019. The folks who pioneered automatic memory management and high level languages in their youth are now octogenarians, or dead. But the “sufficiently smart compiler” or garbage collector never happened. We still write systems software in what is fundamentally a 1960s-era language. The big new trend is languages (like Rust), that require even more human input about object lifetimes.

CPUs have hit a wall. You used to be able to saturate the fastest Ethernet link of the day with one core and an ordinary TCP stack. No longer. Because CPUs have hit a wall, we’re again trading human brain cells for gains: multi-core CPUs that require even more clever programming, vector instruction sets that often require hand rolled assembly, systems like DPDK that bypass the abstraction of sockets and force programmers to think at the packet level. This is all a symptom of the fact that we’ve hit a plateau in key areas of computing.

There is no reason to assume self driving tech will keep getting better until it gets good enough. It may, or it may not. This is real engineering, where the last 10% is 90% of the work, and where that last 10% often proves intractable.


How many of those airbag fatalities are due to drivers not buckling up?

It's called Supplemental Restraint System for a reason.


OK, so imagine someone paints these robotcar-fooling dots on the road, and a caravan of a hundred robot cars is cruising down the street. All 100 of them veer into oncoming traffic. That will destroy your safety statistics quite handily.


OMG, Imagine someone cuts all the brake lines in our current cars, or hacks the traffic system and makes all the lights green at the same time, or puts nails on the road, or turns into the boogie man and "gets" people.

Fear mongering is not the way to view problems, or to make our society better. We need to improve things, not live in fear of what "Bad" people could do to us.


> [we will reach] no-fail image based object detection level of advancement [...] my most optimistic wild guess is 10-20 years

You mean using cameras without LIDARs right? Presumably Waymo itself (cameras + LIDARs) is roughly at that level of advancement today.

In 10-20 years LIDARs should improve dramatically as well, in both price and performance. So I'm guessing cars will continue to rely on both technologies even when cameras become viable by themselves.

BTW, did you enjoy working at Waymo? Would you recommend the perception group as an employer?


I think the overreliance/overfocus on any given individual technology (especially computer vision) is a major issue

It should be a blended, combined approach: FLIR, LIDAR, RADAR, USONAR, CV, etc

There doesn't need to be The One True Technology To Rule Them All™


In your opinion, how much does that timeline change if policy/infrastructure also evolves to support the growth with things like dedicated autonomous lanes and roads, shared network data from vehicles in those lanes (e.g. a vehicle from Manufacturer A 10 cars up stops short and sends that event to the network so cars further back can start braking even earlier and less aggressively), etc.?
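
For what it's worth, the shared-event part of that is conceptually simple; a hypothetical sketch (no relation to any real V2V standard):

    from dataclasses import dataclass

    @dataclass
    class HardBrakingEvent:
        # hypothetical message a vehicle might broadcast to cars behind it
        lane_id: int
        position_m: float        # distance along the road segment
        deceleration_mps2: float
        timestamp_s: float

    def should_precautionary_brake(event, my_position_m, my_lane_id, lookahead_m=300.0):
        # ease off early if a hard-braking event is reported ahead in my lane
        ahead = event.position_m - my_position_m
        return event.lane_id == my_lane_id and 0 < ahead <= lookahead_m

The hard parts, of course, are trust, latency, and getting every manufacturer to agree on the format.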


Can you be more specific about the limitations of state of the art image based object detection? Is it skewed toward false positives or false negatives? What accuracy is considered state of the art? Do you mean classification of an object in an image, or merely recognising that there is an object there?


I think the real potential issue for self-driving cars is that they need to have an understanding of how people's minds work, to understand their gestures, intentions, and movements. This requires, not just visual processing, but something closer to AGI. (Not an expert, so feel free to correct me.)


Not really, you don’t really gauge traffic by looking at the drivers: firstly, you can’t see most of them; secondly, it’s dangerous to assume their intentions, which is why we have things like right of way and road signs.

The big issue is still in terms of processing speed and sensor fusion and Tesla isn’t leading the pack in either.

As far as wide-scale autonomous driving goes, for it to be good enough there needs to be an agreeable and verifiable decision-making model for all manufacturers to follow and for regulators to validate. This could be as simple as never performing an action that would cause a collision, but that’s not exactly an ideal model either, because you might then get a silly decision such as not engaging the brakes to slow down because you’ll end up hitting the car in front of you in either case.

Sadly I’m seeing too many people that tend to complicate things and chase red herrings when it comes to autonomous driving, such as “ethics”. While it’s an interesting philosophical exercise, in the real world it’s not the important part: when it comes to accidents, the vast majority of drivers act on basic instinct and don’t weigh the orphan vs the disabled veteran, simply because they don’t have that data nor the capacity to incorporate it into their decision-making process.

First we need to get sensors that don’t get confused by oddly placed traffic cones, birds and shadows; the rest is pretty much irrelevant at this point.


> Not really, you don’t really gauge traffic by looking at the drivers: firstly, you can’t see most of them; secondly, it’s dangerous to assume their intentions, which is why we have things like right of way and road signs.

Have you ever driven in a city? People don’t follow right of way rules and road signs. Pedestrians jump into traffic randomly. You absolutely have to be looking around trying to gauge peoples’ intentions in order to drive safely.


I think there are some situations like merging on a busy highway where something like a theory of mind is at least a little bit useful so you can negotiate entry. If you don’t, the car is either going to be too aggressive or too timid.


That’s something completely different, because you follow right-of-way laws and can gauge whether or not you can merge safely based on their speed; you’re sure as hell not watching what look they give you through your side mirror.

And the “theory of mind” comes down to assuming the other driver doesn’t want to cause an accident either and will slow down, in which case the estimation can be broken down into math, such as whether they would have to slow down given the average response time and their speed.

As a person you have no idea either whether they are drunk or paying attention; the only thing you can see is how fast they are going, and usually if they are going well above the speed limit you’ll assume they are jerks and won’t let you merge, which luckily is simple enough for the car to do as well.
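
That math is roughly just constant-deceleration kinematics; a simplistic sketch (the reaction time and comfort threshold are assumed, and it ignores plenty of real-world detail):

    def safe_to_merge(gap_m, their_speed, my_speed, reaction_time_s=1.5, comfort_decel=2.0):
        # can the approaching car close the gap without braking harder than
        # comfort_decel (m/s^2), given an assumed reaction time? speeds in m/s
        closing = their_speed - my_speed
        if closing <= 0:
            return True            # they aren't gaining on us
        remaining = gap_m - closing * reaction_time_s
        if remaining <= 0:
            return False
        return closing ** 2 / (2 * remaining) <= comfort_decel

    print(safe_to_merge(gap_m=60.0, their_speed=33.0, my_speed=25.0))  # True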


Facial expressions aren't required here - even reading the basic behavior of a car being driven by a human involves theory of mind. Some people are actual [insult]s and will not let you merge or otherwise not play a game theoretic optimal strategy.


Right you don’t need to see someone’s face to guess what they’re thinking any more than you need to see their brain.

You can infer motivations and intentions based on how they’re driving. If you turn on your blinker and the guy in the lane next to you speeds up to close your available space, you have a pretty good idea of what they’re trying to do.

I’m not sure if there’s an algorithm that will figure out if other people on the road are complete fucking assholes that you want to avoid.


Yep tons of focus on philosophical sideshows like the trolley problem in these discussions, when autonomous cars just plain _do not work_ in the general case.


People can’t solve a trolley problem either while driving (or at all).

If you have enough time to stop when, say, a kid jumps in front of you, you will stop. If you don’t, you don’t have enough time to process all the information around you and gauge the most optimal outcome; if you swerve, you swerve, but that is likely going to happen regardless of what jumps out in front of you, simply because that’s what your instincts are telling you to do.

The best thing an autonomous vehicle can offer is a better response time so you’ll be more likely to stop or to slow down to non/less than fatal velocity.


Nobody is saying people have solved it either, I am saying that the question is at best a distraction.


Yeah, it’s a complete distraction. Even if you were living in some dystopian world where each car could ping everyone around it to gauge their citizen score and select the lowest-score citizen to run over in this Kobayashi Maru scenario, your cars would be much safer if they didn’t waste CPU cycles on this nonsense but rather just focused on the road.

Drivers are drivers, it doesn’t matter if it’s humans or robots; distracting them with nonsense wouldn’t make them better or safer, but rather quite the opposite.


Right. The trolley problem assumes vehicles can make optimal split second decisions while right now Tesla can't even tell the difference between the road and a lane divider reliably.

https://arstechnica.com/cars/2019/03/dashcam-video-shows-tes...


The trolley problem is not relevant to driving, autonomous or not.


Human drivers do have to make split second ethical decisions though: You're driving down a country road and a small animal jumps out in front of you. Do you hit the animal, or do you swerve into the ditch? Hitting the animal would kill it, but not harm you. Swerving into the ditch might harm you.

There is no right answer to that question, because I didn't specify what the animal is. Whether the animal is a baby deer or a human child will influence your choice. This isn't contrived. When I hit a deer, I made a choice to stay on the road. Had it been a child, I would have swerved, endangering myself in the process.


I have not logged enough miles on US interstates, but on German Autobahns you surely need to read other drivers' intentions (if you want to prevent unpleasant surprises). And this is usually done by carefully observing their driving and the overall context.

- Are they zooming by, then suddenly moving to the right and slowing down? Probably got a phone call; be ready to overtake them. And watch out, they might get really slow.

- Is the car in front stuck behind a truck and slowly moving to the left? Driver probably plans to overtake the truck soon. Reduce speed to not crash into his rear end.

Looking forward to when these things will be included in the reasoning model of self-driving cars.
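
Those two heuristics are concrete enough to write down today; a toy sketch (the observations and thresholds are invented for illustration):

    def likely_intent(other, me):
        # `other` and `me` are hypothetical dicts of coarse observed state
        # was fast, now drifting right and decelerating: possibly distracted
        if (other["speed"] > me["speed"] + 10
                and other["drift"] == "right"
                and other["accel"] < -1.0):
            return "slowing unexpectedly, prepare to overtake"
        # stuck close behind a slow truck and edging left: about to pull out
        if other["gap_to_leader_m"] < 20 and other["drift"] == "left":
            return "about to overtake, back off"
        return "no strong signal"

The hard part is not encoding any one rule like this, but covering the open-ended set of them and knowing when they conflict.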


The general ability to understand others (and oneself) at the level of intentions etc. is often referred to as Theory of Mind. Seen from the task of autonomous driving, what is needed is a strong predictive model for what other cars/drivers are going to do, based on current observations. Considering that driving is a much narrower space than all possible human behavior, this is hopefully[0] solvable using something close to existing approaches.

0. Otherwise I don't think self-driving cars is happening soon.
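
The simplest such predictive model is the constant-velocity baseline that learned predictors are usually benchmarked against; a sketch (nothing like a full theory-of-mind model):

    def predict_positions(x, y, vx, vy, horizon_s=3.0, dt=0.5):
        # where will this car be over the next few seconds if it keeps
        # doing exactly what it's doing now?
        steps = int(horizon_s / dt)
        return [(x + vx * t * dt, y + vy * t * dt) for t in range(1, steps + 1)]

    print(predict_positions(0.0, 0.0, 20.0, 0.0))  # 10 m, 20 m, ... 60 m ahead

Learned models replace this with predictions conditioned on lane geometry, indicator lights, and the behaviour of surrounding traffic.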


This. Sharing a street with others requires human communication: a driver waves to a pedestrian wanting to use a crosswalk, or a pedestrian waves a driver through before doing so, or they both pause before one or the other goes first. Our streets aren't designed for robots, and while they may work on controlled-access highways in the not-too-distant future, I don't expect to see self-driving cars on city streets within the next few decades.


https://www.technologyreview.com/s/546066/googles-ai-masters...

are you willing to put money on your time estimate?


Processing an image and extracting understanding from it is exponentially harder than processing a Go board.


I think the point he wanted to make is that even an educated guess can be a decade off.


To illustrate this: https://xkcd.com/1425/


computer vision is clearly much harder than go. the point is that the forefront of AI is moving quite rapidly, and the consequences are bound to catch the casual observer (or even an expert) off guard.


Also Musk seems quite confident https://www.youtube.com/watch?v=Y8dEYm8hzLo&t=9m00s


Elon Musk also keeps fear mongering about AI (which is what made him invest in OpenAI). His success has made him overconfident, thinking he knows more about a field than the experts.


You haven't been keeping up; Tesla is doing exactly that. Look into AP3 hardware.


AP3 is not going to be nearly good enough. It's just the next step in keeping the "we sell full self-driving hardware as a feature but it's not ready yet" promise alive. It's not years away, it's decades away. And for a frame of reference, the computers (as in the actual processors, RAM, memory, etcetera) in cars a la Waymo and Cruise cost more than the cars, so it's not an insignificant cost. That's completely ignoring the lidar issue and the fact that state-of-the-art just isn't good enough even when you have an extra $100k in hardware, have spent billions training your models, and have remote operators to help you out of sticky situations.

But yeah, okay, Elon Musk said it's going to happen. He also said they were going to have it years ago and we're still waiting on that cross-country trip from New York to Los Angeles that never materialized.


We know how it ends though.

We'll see them roll out an error-prone stop-sign + traffic-light detection as the FSD beta, and everyone will act like that was what was promised. Just like lane keeping assist is "Autopilot".


> We know how it ends though.

I realize this is a tangent, but I feel like it ends with Tesla going bankrupt before they ever come close to building a self-driving car, or even before the narrative wears off for the general public. I believe a criminal investigation should be brought against Tesla for negligent manslaughter due to the fraudulent advertising and consequential deaths, but I have no expectation of that ever happening.

I just get really angry when people lie, then people die, and nobody does anything about it. As another commenter pointed out, Elon Musk himself is still demonstrating the technology as being completely autonomous just a few months ago.


I guess others don't agree, but I do. People are dying using autopilot, deaths that are likely preventable, and yet very little has been done about it. What's even worse is they're advertising safety claims based on false data, and you have to fight the courts just to have access to the document showing the redacted data.

https://www.wired.com/story/tesla-autopilot-safety-statistic...

But it seems nobody cares.


That article says:

"The upshot is that Autopilot might, in fact, be saving a ton of lives. Or maybe not. We just don’t know."

...which doesn't support the claim that banning Tesla's autopilot would be a net positive.

Maybe the effect is zero-sum, which means that banning Autopilot to prevent the deaths where it did wrong would also mean other people die, because Autopilot would no longer save them from situations they would otherwise have crashed in.

If Tesla's Autopilot is demonstrably net negative, then yes, it should be banned, and banning it would save lives.

But if it's currently zero-sum, we should absolutely allow it, we should absolutely allow it to be improved, because improvements to the system will probably tip it over into net positive.

It is absolutely concerning that the company is trying to spin the tech as already being net positive, without any clear evidence of that, yes, I agree. If the tech were net negative, and they were trying to cover that up, that would be even worse. But that's not the position we're in.


You are correct that the data is redacted, but I think it's reasonable to assume it's pretty bad for Tesla. If the data painted Tesla in a favorable light, they would be force-feeding it down our throats. They would do everything they could to let everybody know "here is the data proving we are safer."

Instead they are burying it behind legal procedures and doing everything they can to make sure nobody knows what the actual data says.

I cannot deny that I do not know anything with certainty. But it's more than just a guess that autopilot is not what it's advertised to be.


Wikipedia lists 3 Tesla autopilot deaths since Jan 2016. In that time about 4 million people have died in road accidents caused by humans. I care about the deaths but 4,000,000>3.


This is an absurd misinterpretation of the data. What matters is how many miles driven per death, not how many deaths. Did you know more people die driving each year than from having bullets implanted in their brains? And yet I'd still rather get in a car than shoot myself in the face.

If you'd actually care to educate yourself, you can start by reading the article I linked in my original comment.


The article seems to say that maybe the rate dropped 40% or 13%, but:

>Now NHTSA says that’s not exactly right—and there’s no clear evidence for how safe the pseudo-self-driving feature actually is.

Which doesn't seem so terrible. Personally I'm optimistic that as the systems get better they can roll them out to other cars and make a dent in the 1.3m/year global deaths.


That's not an unreasonable conclusion, but it's a lot messier than that.

The real issue is comparing miles driven to similar miles driven, and autopilot miles are only supposed to be on the highway in good conditions (which is when the fewest accidents occur...well, probably). But the breakdown of accidents into categories such as speed, weather, traffic, etcetera does not exist (or at least I am unaware of it). It's further complicated by demographics, where older more affluent drivers - the kind likely to buy a Tesla - are safer as well. Then it's also confounded by the fact that Tesla is not a trustworthy company, at least in my opinion, and they will give OTA updates without warning owners which can revitalize old bugs (https://www.techspot.com/news/79331-tesla-autopilot-steering...). A lack of regression testing for a safety critical system is just terrifying.

Now admittedly, you came back to me with a reasonable response and I am throwing you a litany of "yeah, but" rebuttals. Do I believe Tesla Autopilot has the potential, when used properly, to make driving under certain situations safer? Probably. The main problem is the human element, making sure they're actually monitoring the car, informing them correctly of what Autopilot can and cannot do, etcetera. There are also issues with how Tesla not only improves the technology, but validates it. It's the gross overpromising (honestly I believe it is probably fraud, but I cannot be sure) that makes me despise Tesla as a company. But I can admit they make a product a lot of people like. But I think a lot of people like them because they are misinformed.


Facebook: move fast and break things

Tesla: drive fast and break people.


> Tesla is doing exactly that

Not arguing that they're taking steps in that direction but the implication that they're anywhere reasonably close to the goal is unrealistic. It's like saying we can travel to space so we have all we need to get to the nearest star.


There is the energy limitation as well. All the equipment needed to do self driving today requires more energy than you can sustainably draw from typical cars.

Please, if you downvote, at least leave a comment explaining why you think I am wrong. This is a well-known fact in the autopilot community.


What equipment and how much energy?



Thanks, do you know how much power Tesla’s autopilot equipment consumes? I assume that almost all that power goes into the vision chips?

