> [Tesla would need to].. and install a dedicated TPU which should be at least 10x (probably more) faster than the Nvidia chip they have installed in their cars

That is literally what Tesla have announced they are doing next. I've got no opinion on your other points, but if you've missed that news, you're not really following Tesla very closely. It's a drop-in replacement chipset designed by Jim Keller.




I doubt Tesla has multiple (more like dozens of) major industry-first sensor and AI breakthroughs, regardless of how closely OP follows them.


Elon Musk takes huge ambitious gambles. In this case the gamble is that good AI and the huge amount of training data from the Tesla fleet will win out over lidar. It is a gamble. But if you look at other gambles SpaceX and Tesla have taken, the context changes a bit. Everyone talks about innovation, but innovating is actually a gamble.


As part of that risk, I'm sure Tesla would far rather burn the people who paid extra for some technology that wasn't ready yet than release something that is sub-par at a minimum and dangerous at worst.

Killing a bunch of people or jamming up traffic is a quick way to go out of business in the car industry, so there's far more incentive to wait for the real thing. It's not like an optional $2-3k add-on (to a $40-60k product) that turns out to be useless would be a game changer. People will just stop trusting them enough to pre-invest, and will wait to buy it when it's available.


> I'm sure Tesla would far rather burn the people who paid extra for some technology that wasn't ready yet than release something that is sub-par at a minimum and dangerous at worst.

Uhh... I love Tesla, but current Autopilot, as they have released it, is definitely not what I'd call "public-ready".[1] And they agree, which is why it's locked behind clickwrap disclaimers and warnings and described as a "beta".

Don't get me wrong, I want one. But I wouldn't want my mum to use Autopilot.

[1] It can't see stopped cars on the highway (!) and gets confused by freeway exits, sometimes driving directly at the gore point. Worse, this second issue was fixed and is now happening again, indicating a critical lack of regression testing. (!!)


Speaking from a code archeology ;) perspective, there's another explanation: the previous fix addressed the proximate problem, but also unearthed a deeper problem, which may or may not be fixable.


I don't think it's that he saw LiDAR as technically inferior, just expensive. They needed an approach that they could put in place with minimal unit cost.


OP seems to be operating under the premise that such a system must be "no-fail". That is not what Tesla is going for.


It's not that it has to be no-fail; it's that the failures need to be a subset of the failures that humans make.

If the failures are in places that humans can easily and reliably handle (which is the case now), then people won't trust these systems. If the failures come from the software not being able to handle basic driving tasks, and wouldn't normally happen with a person driving, then this is a huge problem with the system. A system that repeatedly drives at a lane divider is not something people should trust.

https://arstechnica.com/cars/2019/03/dashcam-video-shows-tes...


Yes! I think that’s the right way to measure whether you’ve solved self-driving cars: if it has similar (possibly worse) failure rates to humans in each environment where humans are expected to operate.

Example: say an SDC never has collisions except when a cop is directing traffic, and in that case it floors it at full speed toward the cop. I would not consider that to have solved self-driving cars, even though a cop directing traffic is rare enough that its overall accident rate is lower.


Considering that they’ve already killed a couple of people, that’s a given.


Humans kill over 100 people per day in traffic accidents in the US alone - I don’t think no-fail (0 deaths) is a reasonable requirement, just being safer than humans is enough.

I do agree with the OP's skepticism though - full autonomy is 10 years away.


> Humans kill over 100 people per day in traffic accidents in the US alone - I don’t think no-fail (0 deaths) is a reasonable requirement, just being safer than humans is enough.

This is a fallacy. People don't just look at safety statistics. The actual manner/setting in which something can kill you matters a ton too. There's a huge difference between a computer that crashes in even the most benign situations and one that only crashes in truly difficult situations, even if the first one's crashes aren't any more frequent than the second's.

Hypothetical example: you've stopped behind a red light and your car randomly starts moving, and you get hit by another car. Turns out your car has a habit of glitching like this every 165,000 miles or so (which seems to be the average accident rate) for no apparent reason. Would you be fine with this car? I don't think most people would even come close to finding this acceptable. They want to be safe when they haven't done anything dangerous -- they want to be in reasonable control of their destiny.

P.S. people are also more forgiving of each other than of computers. E.g. a 1-second reaction time can be pretty acceptable for a human, but if the computer I'd trusted with my life had that kind of reaction time it would get defenestrated pretty quickly.


Let's say there are two cars with two drivers. The first, with a human driver, is deadly under traditional human scenarios-- the driver could be drunk, texting or eating or distracted. They could make a slow decision, be looking the wrong way, stomp on the wrong pedal, etc.

The second driver is a machine. It sees all around it at all times and never gets drunk. But when it fails it does so in a way that to a human looks incredibly stupid. A way that is unthinkable for a human to screw up- making an obviously bad decision that would be trivial for a human to avoid.

Now let's say that statistically it can be proven that the first type of driver (human) is ten times more likely to be at fault for killing their passengers than the second in real-world driving. So if you die, and it's the machine's fault, it will be in an easily avoidable and probably embarrassingly stupid way. But it's far less likely to happen.

Which type of driver do you choose?


Here's the thing--they hand out drivers licenses to anyone. You need virtually no skills whatsoever to pass a road test here in the States.

If I asked everyone on HN what steers the rear of a motor vehicle; I'd guess only 10% would guess correctly, and we're talking about some of the smartest most well read people on the Planet here. If I asked everyone on HN how many times they have practiced stopping their car from high speed in the shortest time possible, and were competent braking at high speed; I'd round that guess to virtually 0. Let's talk about the wet; can you control the car when it fishtails? Can you stop quickly in the rain? No and no.

You simply cannot be competent in a life-and-death situation without training, nor without a basic understanding of vehicle dynamics. You just can't. Now I'm not saying everyone must be able to toss the car into a corner, get it sideways and clip the apex while whistling a happy tune; but for god's sake can we at least mandate 2 days of training at a closed course with an instructor who has a clue about how to drive? That would absolutely save lives... lots and lots of lives.

Which brings me to my favorite feature of some of these "self driving" cars: all of a sudden, with no warning whatsoever, the computer says hey, I'm fucked here -- you have to drive now and save us all from certain disaster. I probably could not do that, and I sure as hell can toss a street car sideways into the corner while whistling a happy tune.


> If I asked everyone on HN what steers the rear of a motor vehicle;

What does this even mean? I'm guessing you're going for 'the throttle' but it's a pretty ambiguous question.

Totally agree on advanced driver training though. If you don't know the limits of your vehicle then you shouldn't be driving it.

As for the last point, I think we need to ditch the "level X" designations and describe automated vehicles in terms of time that they can operate autonomously without human intervention. A normal car is rated maybe 0.5s. Autopilot would be 0.5s - 1s. Waymo would be much more depending on how rarely they need a nudge from the remote operators.
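
One way to make that concrete would be a mean-time-between-interventions number computed from disengagement logs. A minimal sketch (the function and the numbers here are made up for illustration, not any standard metric):

    # Hypothetical rating: mean time a vehicle operates autonomously between
    # required human interventions, from logged autonomous time and takeover counts.
    def mean_time_between_interventions(autonomous_seconds, interventions):
        if interventions == 0:
            return float("inf")  # no takeovers observed (not proof none are ever needed)
        return autonomous_seconds / interventions

    # e.g. a system that needed 40 takeovers over 10 hours of autonomous driving:
    print(mean_time_between_interventions(10 * 3600, 40))  # -> 900.0 seconds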


> What does this even mean? I'm guessing you're going for 'the throttle' but it's a pretty ambiguous question.

I am going for the throttle (you know, to stop the car from rotating too much after I threw it into that corner), and yes, you are correct: it (the throttle) does "steer" the rear of the car. Plus 1 btw.

Ambiguous... maybe. Anyway; see you on the wet skidpad ;) .


The answer to this question really depends on whether your vehicle is FWD or RWD. It sounds like you have a RWD car and the people not answering it "correctly" don't.


Ooh I like wet skidpads! :D

Other possible interpretations that I thought of (for the record):

- The front wheels (under good traction conditions)

- The limit of traction for the rear wheels (when in a corner nearing said limit of traction... my favourite part, btw.)

- The front wheels (if you're already sliding but hoping to go mostly straight)

- The throttle (if you're sliding and planning on keeping it this way while the front wheels dictate angle of drift)

- Edit: The handbrake (if you're in a front wheel drive for some reason)

:D


> Other possible interpretations that I thought of (for the record):

I almost forgot how difficult it can be to explain the nuances of vehicle dynamics clearly and succinctly.

Let's start by using the classic Grand Prix Cornering Technique (rear wheel drive/rear engine car). We brake in a straight line, and the weight transfers forward so that now the front tires have more grip than the rear tires (as a rule of thumb, the more weight a tire has on it the more it grips, because it is being "pushed" down onto the road; you can of course "overwhelm" a tire by putting too much weight on it, causing it to start to lose adhesion). As we get to the turn-in area of the corner we (gently) release the brake and (gently) apply throttle to move some of the weight of the car back towards the rear tires (if we didn't do this, the back of the car would still have almost no grip and we would spin as soon as we initiated steering input to turn into the corner).

Now we are into the first 1/3 of the turn, and approaching the apex--we have all the necessary steering lock to make the corner, that is to say we will not move the steering wheel anymore until it is time to unwind it in the final third of our turn (also, we are on even throttle; we cannot accelerate until we are at the apex). So here we are--the front and rear slip angles are virtually equal, but we want to increase our rate of turn because we see we will not perfectly clip the apex... we breathe (lift a bit) off the throttle, but keep the steering locked at the same angle, and the car turns (a bit) more sharply. We have actually just steered the rear of the car with the throttle; yes, we have affected the front tires' slip angles as well, but if we viewed this from above we would see we have rotated the car on its own axis.

This works, to varying degrees, in every layout of vehicle--FWD, AWD, RWD. Technique and timing are critical, as are the speed, gearing, road camber, and so on. The fact remains, though, that the throttle steers the rear of the car.


"If I asked everyone on HN what steers the rear of a motor vehicle; I'd guess only 10% would guess correctly, and we're talking about some of the smartest most well read people on the Planet here."

https://en.wikipedia.org/wiki/Weissach_axle


In this hypothetical, I'd rather ride in a car driven by the human--I can see if my driver is drunk, texting, or otherwise distracted and yell at him or demand to get out of the car.

With the computer, I'm just completely in the dark and then maybe I die in some stupid way.


If those were the only options, we could choose the second.

But you have the option of assistive technologies. Have the human drive and the machine supervise it. The mistakes made when the human falls asleep can be prevented. The mistakes that the machine makes, well, those won't be made at all, as the human is the active driver.


> Which type of driver do you choose?

Being able to explain the mistake (eating, texting) vs. (???) makes things qualitatively different.

I think any reasonable level of risk for the computers can be acceptable, if glitches are explainable, able to be recreated and provably remediable.


Yeah-- they are qualitatively different.

I didn't want to cloud the question with this point, but data from a machine driver's mistake can be used to train every other machine driver and make it better. While much can still be learned from the mistake made by a human driver, the error is not as likely to be minimized across the 'fleet' in the same way as it is for a machine driver, if that makes sense.

Also it's probably important to keep in mind-- if my understanding is correct-- companies like Tesla are only using neural nets for situational awareness-- to understand the car's position in space and in relation to the identified objects, paths, and obstacles around it. The actual logic related to the car's operation/behavior within that space is via traditional programming. So it's not quite a black box in terms of understanding why the car decided to act in a particular way-- it may be that something in the world was miscategorized or otherwise sensed incorrectly, which could be addressed (via better training/validation, etc.). Or it could be that it understood the world around it but took the wrong action, which could also be addressed (via traditional programming).

If I'm wrong about that, I'm sure someone will chime in. (please do!)


>While much can still be learned from the mistake made by a human driver, the error is not as likely to be minimized across the 'fleet' in the same way as it is for a machine driver, if that makes sense.

This is some serious whitewashing here. It's not "unlikely", it's simply not going to happen at all. People have been killing innocents by drunk driving for decades now, so they obviously still haven't learned. They continually make the same mistakes, over and over. No, human drivers do not learn at all from each other's mistakes in any significant fashion.

This could be changed, if we as a society wanted it to. We could mandate serious driver training (like what they do in Germany), and also periodic retraining. Putting people in simulators and on test tracks and teaching them how to handle various situations, using the latest findings, would save a lot of lives. But we choose not to do this because it's expensive and we just don't care that much; we think that driving is some kind of inherent human right, and it's very hard for people to lose that privilege in this country. And it doesn't help that not being able to drive makes it very difficult to get by in much of the US, thanks to a lack of public transit options.


>They continually make the same mistakes, over and over. No, human drivers do not learn at all from each other's mistakes in any significant fashion.

Why do you assume that drivers have to learn from each other's mistakes? A drunk driver learning from his own mistakes is already significantly ahead of what a self driving car does which potentially just repeats the same mistake over and over again. The correlated risk may even cause accidents in bursts. 10 self driving cars all making the same mistake at the same time will cause even more damage than just a single one.


>Why do you assume that drivers have to learn from each other's mistakes?

Because that's what computers do: we can program them to avoid a mistake once we know about it, and then ALL cars will never make that mistake again. The same isn't true with humans: they keep making the same stupid mistakes.

>A drunk driver learning from his own mistakes is already significantly ahead of what a self driving car does which potentially just repeats the same mistake over and over again

Why do you think this? You're assuming the car's software will never be updated, which is completely nonsensical.

>10 self driving cars all making the same mistake at the same time will cause even more damage than just a single one.

Only in the short term. As soon as they're updated to avoid that mistake, it never happens again.


I would choose the "machine" driver if it is two (not one) orders of magnitude safer.

By the way, your examples are not very good, as the drunk/texting driver makes a choice, and to some extent her passengers do too. No such choice is given when the car is autonomous.


Drunk drivers don't just kill their passengers; they kill other drivers and pedestrians. You have no control over that.


As opposed to...well. RIP EH.


The machine, of course


Do you think your choice is representative of the population?


I hope so. Even if it's not, at least I'm contributing to making it so.


I'd pick the distracted driver currently, and for two good reasons:

1) The computer has bugs that will kill the driver 100/100 times if they encounter that case: driving at the guard rail, the truck decapitation, and whatever bugs have yet to be discovered.

2) A distracted driver may encounter one of those situations and see and avoid it, even if drunk or looking down.

The likely case is that the current crop of self-driving cars is much more dangerous and will remain so until the kind of magical breakthrough mentioned above happens.


If that were actually the case, i.e. it had a lower accident rate than humans but its accidents were silly bugs, it wouldn't be a "huge difference" -- it would be a slight improvement that would become a major improvement when the bug gets fixed.

If the average person thinks they're better than the computer when the computer is better than the average person, the average person is incorrect.

> They want to be safe when they haven't done anything dangerous -- they want to be in reasonable control of their destiny.

If you don't trust the computer, does that mean you won't trust another driver, if the computer is better than the average driver? Then how do you drive at all, when the roads are full of average drivers who could hit you at any time?


> If the average person thinks they're better than the computer when the computer is better than the average person, the average person is incorrect.

Until you remember that obtaining statistically significant evidence that the latest version number of Tesla (or any other firm's) software is safer than the average driver entails hundreds of millions of miles of driving. And for that matter that the "average driver" accident rates are skewed upwards by the number of incidents involving people who you'd never, ever volunteer to be driven by [in their state of intoxication].
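
If you want to sanity check that figure, here's a rough Poisson back-of-the-envelope (the assumptions are mine and purely illustrative: a human baseline of about 1 fatality per 100 million miles, and wanting one-sided 95% confidence with 80% power to show the software is at least 2x safer):

    # Rough sample-size sketch: miles needed to show a fatality rate 2x better than
    # a ~1-per-100M-mile human baseline. Illustrative assumptions, not Tesla's data.
    z_alpha = 1.645              # one-sided 95% confidence
    z_beta = 0.842               # 80% power

    rate_human = 1 / 100e6       # baseline fatalities per mile
    rate_av = rate_human / 2     # hypothesized autonomous rate (2x safer)

    # Normal approximation to the one-sample Poisson rate test:
    miles = ((z_alpha * rate_human**0.5 + z_beta * rate_av**0.5)
             / (rate_human - rate_av)) ** 2
    print(f"~{miles / 1e9:.1f} billion miles needed")  # -> ~2.0 billion

Under those (fairly generous) assumptions it comes out in the billions of miles, so "hundreds of millions" is, if anything, an understatement.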

In the meantime, the human heuristic that a car which tries to kill you by accelerating at lane dividers isn't safer than your own average level of driving skill in many circumstances is probably better than trusting exponential curves and the Elon Musk reality distortion field.


> And for that matter that the "average driver" accident rates are skewed upwards by the number of incidents involving people who you'd never, ever volunteer to be driven by [in their state of intoxication].

And they include driving in a huge range of conditions and roads that driving automation does not function in.

The conditions in which automated driving technology gets used (good weather, highway driving) must have far lower rates of accidents than average.


The accident rate comparison wasn't between miles driven by humans vs. miles driven by technology; it was between miles driven by cars before and after the technology was made available, and the accident rate went down. The only way that happens is if it does better than humans at the thing it's actually being used for. Which means that even if it's only used in clear conditions, it's doing better than humans did in clear conditions -- not just better than humans did on average.

It's plausible that it currently does worse with adverse weather than humans do with adverse weather, but I'm not aware of any data on that one way or the other, and I wouldn't expect any since it's not currently intended to be used in those conditions.


> Until you remember that obtaining statistically significant evidence that the latest version number of Tesla (or any other firm's) software is safer than the average driver entails hundreds of millions of miles of driving.

It is possible for a newer version to contain a new bug that would increase the accident rate significantly, but given the existence of realtime collision data, that seems like the sort of thing that would be caught and corrected rather quickly, before it would dramatically affect the long-term average. So you have a probability of being the unlucky first person to encounter a new bug, but unless the probability of that is significantly higher than the overall probability of being in a collision for some other reason, that's just background noise.

Moreover, it isn't an unreasonable expectation that newer versions should be safer than older versions in general, so using the risk data for the older versions would typically be overestimating the risk.

> And for that matter that the "average driver" accident rates are skewed upwards by the number of incidents involving people who you'd never, ever volunteer to be driven by [in their state of intoxication].

That doesn't help you much when the intoxicated driver is driving a vehicle that hits you rather than the vehicle you're a passenger in. You presumably would prefer that vehicle to be self-driving rather than operated by the aforementioned drunk driver.

> In the meantime, the human heuristic that a car which tries to kill you by accelerating at lane dividers isn't safer than your own average level of driving skill in many circumstances is probably better than trusting exponential curves and the Elon Musk reality distortion field.

The somewhat ironic thing about stories like this is that whenever they discover something like this, it automatically becomes the focus of engineering time, both because a specific problematic behavior has now been identified and because not fixing it is bad PR. But then your heuristic is stale as soon as they fix it, which is likely to happen long before any kind of true driverless operation is actually available.


> It is possible for a newer version to contain a new bug that would increase the accident rate significantly, but given the existence of realtime collision data, that seems like the sort of thing that would be caught and corrected rather quickly, before it would dramatically affect the long-term average.

This assumes they correctly diagnose the fault and know how to fix it, and are able to fix it without any adverse side effects on superficially similar situations requiring a different course of action. This, and the assumption of a monotonic decrease in bugs and other undesired behaviour, seem like assumptions which are inconsistent with real world development of complex software aimed at handling a near-infinite variety of possible scenarios. Any driver is going to encounter situations which are subtly different from those the car has been trained to handle on a constant basis, so the probability of being the first to encounter a new bug doesn't strike me as being particularly low. The gross human accident rate per million miles driven is very low (and a driver who is experienced, responsible and not intoxicated has good reason to believe their own probability of causing an accident is substantially lower).

> That doesn't help you much when the intoxicated driver is driving a vehicle that hits you rather than the vehicle you're a passenger in. You presumably would prefer that vehicle to be self-driving rather than operated by the aforementioned drunk driver.

I don't get to choose what vehicles other people use. I do get to choose whether to pay more attention to a car's actual erratic behaviour than a statistical claim that various previous iterations of the software have had fewer accidents than a set of humans whose accidents are heavily skewed towards people with less regard for road safety than me.

> The somewhat ironic thing about stories like this is that whenever they discover something like this, it automatically becomes the focus of engineering time, both because a specific problematic behavior has now been identified and because not fixing it is bad PR.

This argument works in theory, but videos of Teslas accelerating at lane dividers are neither a new phenomenon nor one which is reported to have been fixed. I'm sure plenty of engineer time has been devoted to studying them (despite Tesla's actual PR strategy being to deny the problem and deflect blame onto the driver rather than announce fixes) but the fixes aren't trivial or easily generalised and approaches to fixing them are bound to produce side effects of their own.


> This assumes they correctly diagnose the fault and know how to fix it, and are able to fix it without any adverse side effects on superficially similar situations requiring a different course of action.

We're talking about a regression that makes things worse than they were before. The worst case is that they have to put it back the way it was.

> This, and the assumption of a monotonic decrease in bugs and other undesired behaviour, seem like assumptions which are inconsistent with real world development of complex software aimed at handling a near-infinite variety of possible scenarios.

I think this is misunderstanding what happens with large software systems. What happens is that people have a certain level of tolerance for misbehavior, so the system gets optimized to keep the misbehavior at that threshold. Then every time a component improves to reduce its misbehavior, it allows them to trade off somewhere else, usually by increasing the complexity of something (i.e. adding a new feature), because they'd rather have the new feature which introduces new misbehavior than the net reduction in misbehavior.

That doesn't really play out the same way for safety-critical systems, because people highly value safety and it's not especially difficult to measure it statistically, which puts pressure on the companies to compete to have the best safety record and therefore not trade the reductions in misbehavior for additional complexity as much.

> Any driver is going to encounter situations which are subtly different from those the car has been trained to handle on a constant basis, so the probability of being the first to encounter a new bug doesn't strike me as being particularly low.

It's not just a matter of encountering a new situation with a subtle difference. The difference has to cause the system to misbehave, and the misbehavior has to be dangerous, and the danger has to be actually present that time.

And if it was really that common then why aren't their safety records worse than they actually are?

> The gross human accident rate per million miles driven is very low (and a driver who is experienced, responsible and not intoxicated has good reason to believe their own probability of causing an accident is substantially lower)

The rate for autonomous vehicles is also very low, and the average person is still average.

> I do get to choose whether to pay more attention to a car's actual erratic behaviour than a statistical claim that various previous iterations of the software have had fewer accidents than a set of humans whose accidents are heavily skewed towards people with less regard for road safety than me.

So who is forcing you to buy a car with this, or use that feature even if you do? Not everything is a dichotomy between being mandatory or prohibited. You can drive yourself and the drunk can let the software drive both at the same time.

Though it wouldn't be all that surprising that computers will one day be able to beat even the best drivers the same way they can beat even the best chess players.

> This argument works in theory, but videos of Teslas accelerating at lane dividers are neither a new phenomenon nor one which is reported to have been fixed.

You're assuming they're the same problem rather than merely the same result.

And in this case it's purposely adversarial behavior. There are tons of things you can do to cause an accident if you're trying to do it on purpose, regardless of who or what is driving. The fact that software can be programmed to handle these types of situations is exactly its advantage. If you push a sofa off an overpass onto a highway full of fast-moving traffic, there may be a way for the humans to react to prevent that from turning into a multi-car pile-up, but they probably won't. And they still won't even if you do it once a year for a lifetime, because every time it's different humans without much opportunity to learn from the mistakes of those who came before.


> I think this is misunderstanding what happens with large software systems. What happens is that people have a certain level of tolerance for misbehavior, so the system gets optimized to keep the misbehavior at that threshold. Then every time a component improves to reduce its misbehavior, it allows them to trade off somewhere else, usually by increasing the complexity of something (i.e. adding a new feature), because they'd rather have the new feature which introduces new misbehavior than the net reduction in misbehavior.

I think this is misunderstanding the difference between a safety critical system which is designed to be as simple as possible, such as an airline system to maintain altitude or follow an ILS-signalled landing approach, and a safety critical system which cannot be simple and is difficult to even design to be tractable, such as an AI system designed to handle a vehicle in a variety of normal road conditions without human fallback.

> That doesn't really play out the same way for safety-critical systems, because people highly value safety and it's not especially difficult to measure it statistically

The benchmark maximum acceptable fatality rate for all kinds of traffic-related fatality is a little over 1 per hundred million miles, based on that of human drivers. Pretty damned difficult to measure the safety performance of a vehicle type statistically when you're dealing with those orders of magnitude...

> It's not just a matter of encountering a new situation with a subtle difference. The difference has to cause the system to misbehave, and the misbehavior has to be dangerous, and the danger has to be actually present that time.

Well yes, the system will handle a significant proportion of unforeseen scenarios safely, or at least in a manner not unsafe enough to be fatal (much like most bad human driving goes unpunished). Trouble is, there are a lot of unforeseen scenarios over a few tens of millions of miles, and a large proportion of these involve some danger to occupants or other road users in the event of incorrect [in]action. It's got to be capable of handling all unforeseen scenarios encountered in tens of millions of road miles without fatalities to be safer than the average driver.

> And if it was really that common then why aren't their safety records worse than they actually are?

They really haven't driven enough to produce adequate statistics to judge that, and invariably drive with a human fallback (occasionally a remote one). Still, the available data would suggest that with safety drivers and conservative disengagement protocols, purportedly fully autonomous vehicles are roughly an order of magnitude behind human drivers for deaths per mile. Tesla's fatality rate is also higher than that of other cars in the same price bracket (although there are obviously factors other than Autopilot at play here).

> The rate for autonomous vehicles is also very low, and the average person is still average.

You say this, but our best estimate for the known rate for autonomous vehicles isn't low relative to human drivers despite the safety driver rectifying most software errors. And if a disproportionate number of rare incidents are caused by "below average" drivers, then basic statistics implies that an autonomous driving system which actually achieved the same gross accident rate as human drivers would still have considerably less reliability at the wheel than the median driver.

> You're assuming they're the same problem rather than merely the same result

From the point of view of a driver, my car killing me by accelerating into lane dividers is the same problem. The fact that there are multiple discrete possibilities for my car to accelerate into lane dividers, and that fixing one does not affect the others (it may even increase the chances of them occurring), supports my argument, not yours. And even this instance, which unlike others was an adversarial scenario, involved something as common and easily handled by human drivers as a white patch on a road.


> Turns out your car has a habit of glitching like this every 165,000 miles or so (which seems to be the average accident rate) for no apparent reason.

People do equally inexplicably stupid things as well, due to distraction, tiredness or just a plain brain fart.


> This is a fallacy.

Please do point out where the statement is incorrect.

I wasn't talking about perception; people have all kinds of ideas about machines (mostly unfounded), but typically safety records drive policy, not whether drivers think they are better than the machine.

I believe machine assisted driving is safer than unassisted already and will continue to improve such that in 10 or 20 years human drivers will be the exception, not the norm. That will happen because computers are already safer in most conditions - the switch will happen when they are demonstrably far safer in all conditions.

Remember they don't have to beat the best human driver, just the average.


> That will happen because computers are already safer in most conditions - the switch will happen when they are demonstrably far safer in all conditions.

Computers currently can disengage in any conditions for any reason - how did you come to the conclusion that computers are already safer?


Humans drive 8 billion miles per day in the US, through a variety of conditions, crazy weather, crazy traffic, crazy pedestrians, etc. “Just being safer than humans” is an extremely tall order.


   just being safer than humans is enough
"Enough" for a statistician... but not if you're the one killed in an "acceptable" edge case that most humans handle fine.


> Humans kill over 100 people per day in traffic accidents in the US alone

They also drive 8-9 billion miles per day. That's around 1 death per 90 million miles of driving. Given the number of AV miles that are driven annually right now, we actually would expect to see ~0 deaths per year if AVs were as safe as humans...
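
Quick sanity check of that arithmetic (the AV mileage below is a made-up ballpark, just to illustrate the point):

    # ~100 US traffic deaths/day over ~8.5 billion vehicle miles/day (figures from above).
    deaths_per_day = 100
    miles_per_day = 8.5e9
    miles_per_death = miles_per_day / deaths_per_day   # -> 85 million miles per fatality

    av_miles_per_year = 20e6   # hypothetical: a few tens of millions of AV miles per year
    expected_deaths = av_miles_per_year / miles_per_death
    print(f"{miles_per_death / 1e6:.0f}M miles per death; "
          f"expected AV deaths/year at the human rate: {expected_deaths:.2f}")  # -> 0.24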


FWIW, Tesla claims to have logged 1.2 billion autopilot (primarily freeway only) miles as of last summer:

https://electrek.co/2018/07/17/tesla-autopilot-miles-shadow-...

It is probably best to categorize these as autopilot PLUS human supervision, but anyway, Wikipedia cites 3 autopilot-caused deaths worldwide over 3 years or so.

https://en.wikipedia.org/wiki/List_of_self-driving_car_fatal...


It's all worth subcategorizing. Highway driving is substantially safer than surface streets, and if you stick to awake, sober, daytime drivers in good weather, the safety of human drivers is even higher.

Given that autopilot can handle nighttime, but not the others, it's completely possible that 1/300 million miles is above the sober good weather highway human driver fatality rate.
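
For what it's worth, lining up the figures quoted in this subthread (taking the 1.2 billion supervised Autopilot miles and the 3 reported deaths at face value, which the caveats above suggest you shouldn't do uncritically):

    # The commenters' numbers, not vetted statistics; supervision and road mix differ.
    autopilot_miles = 1.2e9          # claimed Autopilot miles as of summer 2018
    autopilot_deaths = 3             # Wikipedia-cited Autopilot fatalities over ~3 years
    human_miles_per_death = 90e6     # overall US figure from earlier in the thread

    ap_miles_per_death = autopilot_miles / autopilot_deaths   # -> ~400 million
    print(f"Autopilot: ~{ap_miles_per_death / 1e6:.0f}M supervised miles per fatality "
          f"vs ~{human_miles_per_death / 1e6:.0f}M for all human driving")

That looks favorable on its face, but as noted above, the right human baseline for supervised, mostly-highway, good-weather driving is likely well above the all-driving average.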


Full autonomy has been 10 years away for half a century. If driving still exists in a century, I'd wager that full autonomy would still be 10 years away then.


Jim Keller quit. No chip yet. Doesn't bode well.


Keller finished designing HW3 before departing Tesla. In a recent tweet Musk claimed HW3 is nearing production; then, a while after media outlets rushed to report on the first tweet, Elon threw out the caveat that HW3 won't be going into cars until the software is ready, which could mean anything, and likely means they are nowhere near having their current software validated for HW3, let alone FSD.


Hmm, he says the upgrade will be offered in a few months: https://twitter.com/elonmusk/status/1117118581865476096


Always a good sign when the designer bails before production.


In Keller's case, that's his MO. He did exactly the same thing with Zen, which basically saved AMD. I'm fairly bearish on Tesla and don't read anything into it.


That's actually Jim Keller's M.O.; he doesn't hang around for long. He does the bit he's interested in and then moves on. In the last decade he's worked for Apple, AMD, Tesla and Intel.


Here's a recent interview he did about his new role at Intel: https://www.hpcwire.com/2019/03/21/interview-with-2019-perso...


I can appreciate being so good you can get away with that.


What's M.O?


Modus operandi - someone's habits of working, particularly in the context of business or criminal investigations.

https://en.m.wikipedia.org/wiki/Modus_operandi



Why did the first 2 replies to this comment use the phrasing that that is his 'MO'?


I posted mine and looked back at the thread, and someone else had posted at roughly the same time. Coincidence, I guess. MO is a good term to describe the situation.


Isn't software a bigger issue? I mean, it took Nervana and Nvidia a non-trivial number of years to optimize code. This is also the reason OpenCL is next to useless for deep learning, and why AMD is creating a HIP transpiler instead of building its own equivalents of cuBLAS/cuDNN/...


I believe the chipset was designed with the current/future roadmap of the AI in mind. I think it's basically a chipset optimised for the particular neural net approach they are using.


The chip's supposed to be shipping soon. I think the design was done a while back.


>The chip's supposed to be shipping soon.

Oh? Where did you hear that? Can you post a source, or are you speculating?


"the Tesla autopilot AI computer is about to roll into production" - Elon Musk, Feb 20 or so https://www.youtube.com/watch?v=Y8dEYm8hzLo&t=16m20s


They probably needed the data collected by the current hardware in order to consider developing the next generation. Then they needed the hardware in order to develop the software. The only way to get the data is from a real fleet running hardware and software you control.


Tesla's vaunted "fleet data" is a red herring.



>but if you've missed that news, you're not really following Tesla very closely.

If you think this will actually happen, you're not really following Tesla very closely.

Outside of Elon's boasting, is there any evidence they have a chip that is going to be 10x better than the best in the industry? How do you imagine that happens? Where does the talent come from?


A 30-watt Nvidia GPU from 2016 is not "the best in the industry".

Edit: To expand, the GPU in current Teslas is a GP106. It has no neural net-specific compute abilities. For NN inference it's slower than last year's iPhone. A vastly faster chip wouldn't be hard to get. Even if their in-house silicon efforts fail, they could just buy a modern Nvidia chip and the inference speed would go up 100x. Those chips go for $1000, easily covered by Tesla's "full self driving" upgrade package price tag of $5000.

If they run into a problem with FSD, it's not going to be finding a way to run 10x faster compute than the current shipping hardware. They may have other problems, but not that.


The 1000 Tflops needed to make it usable is currently out of reach; even lab prototypes can't do that yet.


Not literally 10 times processor speed, but 10 times as many video frames processed per second


Tesla is offering iterative improvements for free.

The above post is saying that they’ll need to offer groundbreaking cutting edge tech that doesn’t even exist yet ... for free.

All the while this tech is being invented Tesla is selling more cars that will need this technology.


If the improvements happen in software, deploying them for free is obviously not an issue. The question is how much the necessary hardware modifications would cost. But they're charging thousands of dollars for the option, which gives them some leeway to at least break even there, especially if you consider the time value of money between when they sell the option and when they deliver it.

The real question is, what happens if "full self driving" doesn't arrive for another 30 years?

But if you think car companies have never promised something they weren't sure they could deliver, you're not aware of the incumbents' unfunded pension liabilities.


> But if you think car companies have never promised something they weren't sure they could deliver, you're not aware of the incumbents' unfunded pension liabilities.

Can we just get rid of whataboutism here entirely, please, HN?

One wrong does not excuse another.


This is the original post in this thread:

> It is kinda amazing that Elon Musk is selling cars based on speculative future breakthroughs in technology. That must be a first.

But promising things based on speculation about the future is not new, it's common practice. It may be a questionable idea, but the claim was that it's unusual. It's not.


Please do provide one example in which any car manufacturer promised features which were outright vaporware.

Cheating, i.e. Dieselgate, doesn't count. That was not a promised feature, this was outright fraud.


> Please do provide one example in which any car manufacturer promised features which were outright vaporware.

Volkswagen has been advertising their EV charging network (Electrify America), promising to expand availability in the future:

https://www.electrifyamerica.com/our-plan

If you buy one of their electric cars planning to use it, you're assuming they're actually going to build it, and if you live near one of the proposed sites, actually build that one in particular.

And this is hardly new. Ford Sync is more than a decade old, but when it came out they were advertising all the things you could theoretically do with it, many of which were contingent on third parties writing apps for it. Some of that actually happened, some of it didn't, but it wasn't all there on day one.


Fair enough.

Thanks.


Whataboutism is “X isn’t bad because Y is worse/also bad”.

Parent was saying “X will probably be bad because similar Y was bad in the same dimension (here, overpromising)”.

Maybe not a solid argument, but it isn’t whataboutism.

Edit: Never mind, it looks like I misread the parent's point, and he was saying that it's no big deal to promise something without being sure you can deliver because the Big Three did it with pensions.


You've used different phrasing to say the same thing.


On second thought, I misread the argument, and I agree it's whataboutism. See edit. (However, if my misreading had been correct, it wouldn't have been whataboutism, as it would be a prediction that Tesla would also be bad.)


Musk says the new processor is about 20x faster than the Nvidia one in frames per second and they can run all the cameras at full resolution into it.



