These are the type of odd little situations that come up all the time in real world driving. I can't understand how anyone would expect level 4+ autonomous driving to work on a widespread basis without some tremendous breakthroughs in AGI.
How much large scale network engineering have you done?
Exponential backoff (which the previous poster mentioned) is randomized, and success is by definition non-deterministic: it's an algorithm for resolving conflicting usage. It works very well, and is very simple. But deterministic and conflict-free are really not qualities that the IP protocol is known for.
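For anyone unfamiliar, here's a minimal sketch of the idea -- randomized ("jittered") exponential backoff. The function name and parameters are illustrative, not from any particular networking stack:

```python
import random
import time

def send_with_backoff(send, max_attempts=8, base=0.05, cap=5.0):
    """Retry `send` with randomized exponential backoff.

    Non-deterministic by design: the random jitter is what breaks the
    symmetry when many senders collide at the same instant. Returns
    True on success, False after max_attempts failures.
    """
    for attempt in range(max_attempts):
        if send():
            return True
        # Full jitter: sleep anywhere in [0, min(cap, base * 2^attempt)],
        # so colliding senders are unlikely to retry in lockstep.
        time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
    return False
```

Note there's no coordination at all between senders -- each one independently randomizes, and the conflict resolves itself with high probability.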
10 Drive forwards a bit
30 Did we hit something?
35 YES: Sleep for a short time
40 GOTO 10
Two failure cases happen repeatedly: first, it's indecisive on lane changes when there are vehicles in its target lane for a long time. If it cannot merge over safely due to traffic or rudeness, it will stop in its lane until a gap opens up -- the concept of proceeding to the next left and making a U-turn seems incomprehensible. Second, in certain right-turn-on-red situations, it will never turn if there is traffic in the far lanes, even if the nearest lane has a generous opening. I see this all the time on the right turn from eastbound Central onto southbound Castro St., for example.
To be fair, even I do that sometimes if I just don't trust oncoming traffic not to do something like changing lanes in the intersection or at the last second. To your point, though, it's all dependent upon the intersection, number of lanes, etc., and I'm not familiar with the intersection in question.
'Odd' events like that are probably only 5 seconds out of every hour of driving, so in aggregate one operator could handle 720 cars. At that point, the operator is far cheaper than further work on the AI.
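The 720 figure follows directly from the estimate above. A back-of-envelope check (the 5-seconds-per-hour figure is the comment's guess, not measured data):

```python
# If 'odd' events need a remote operator's attention for roughly
# 5 seconds out of every driving hour, one fully utilized operator
# could in principle cover 3600 / 5 cars.
seconds_per_hour = 60 * 60
odd_seconds_per_car_hour = 5
cars_per_operator = seconds_per_hour // odd_seconds_per_car_hour
print(cars_per_operator)  # 720
```

Of course this assumes perfect scheduling -- in practice you'd need slack for events that overlap, so the real ratio would be lower.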
(If four cars arrive at the same time ... you just wing it, I guess.)
- Someone will tailgate me like they want to pass ... but I'm safely in the right lane, and they can easily pass in one of the other two lanes.
- They want to drive a lot faster than me in a residential area, so I pull to the side and wave them to pass, but they just stop there.
Remote operation is a total non-starter. Our existing cellular networks lack the bandwidth and redundancy for safety-critical applications. What happens if the local tower is down because the cooling system failed, or a construction crew accidentally cut the backhaul fiber?
That is literally what Tesla has announced they are doing next. I've got no opinion on your other points, but if you've missed that news, you're not really following Tesla very closely. It's a drop-in replacement chipset designed by Jim Keller.
Killing a bunch of people or jamming up traffic is a quick way to go out of business in the car industry, so there's far more incentive to wait for the real thing. It's not like an optional $2-3k add-on (to a $40-60k product) that turns out to be useless would be a game changer. People just won't trust them enough to pre-invest; they'll wait to buy it when it's available.
Uhh... I love Tesla but current Autopilot, that they have released, is definitely not what I'd call "public-ready". And they agree, which is why it's locked behind clickwrap disclaimers and warnings and described as a "beta".
Don't get me wrong, I want one. But I wouldn't want my mum to use Autopilot.
It can't see stopped cars on the highway (!) and gets confused by freeway exits, sometimes driving directly at the gore point. Worse, this second issue was fixed and is now happening again, indicating a critical lack of regression testing. (!!)
If the failures are in places that humans can easily and reliably handle (which is the case now), then people won't trust these systems. If the failures come from the software not being able to handle basic driving tasks -- failures that wouldn't normally happen with a person driving -- then this is a huge problem with the system. A system that repeatedly drives at a lane divider is not something people should trust.
Example: say an SDC never had collisions except when a cop is directing traffic, and in that case it floors it straight at the cop. I would not consider that to have solved self-driving cars, even though dealing with a cop directing traffic is so rare that the SDC's overall accident rate is lower.
I do agree with the OPs skepticism though - full autonomy is 10 years away.
This is a fallacy. People don't just look at safety statistics. The actual manner/setting in which something can kill you matters a ton too. There's a huge difference between a computer that crashes in even the most benign situations and one that only crashes in truly difficult situations, even if the first one's crashes aren't any more frequent than the second's.
Hypothetical example: you've stopped behind a red light and your car randomly starts moving, and you get hit by another car. Turns out your car has a habit of glitching like this every 165,000 miles or so (which seems to be the average accident rate) for no apparent reason. Would you be fine with this car? I don't think most people would even come close to finding this acceptable. They want to be safe when they haven't done anything dangerous -- they want to be in reasonable control of their destiny.
P.S. People are also more forgiving of each other than of computers. E.g. a 1-second reaction time can be pretty acceptable for a human, but if the computer I'd trusted with my life had that kind of reaction time, it would get defenestrated pretty quickly.
The second driver is a machine. It sees all around it at all times and never gets drunk. But when it fails it does so in a way that to a human looks incredibly stupid. A way that is unthinkable for a human to screw up- making an obviously bad decision that would be trivial for a human to avoid.
Now let's say that statistically it can be proven that the first type of driver (human) is ten times more likely to be at fault for killing their passengers than the second in real-world driving. So if you die, and it's the machine's fault, it will be in an easily avoidable and probably embarrassingly stupid way. But it's far less likely to happen.
Which type of driver do you choose?
If I asked everyone on HN what steers the rear of a motor vehicle, I'd guess only 10% would answer correctly, and we're talking about some of the smartest, most well-read people on the planet here. If I asked everyone on HN how many times they have practiced stopping their car from high speed in the shortest distance possible, and how many are competent braking at high speed, I'd round that guess to virtually zero. Let's talk about the wet: can you control the car when it fishtails? Can you stop quickly in the rain? No and no.
You simply cannot be competent in a life-and-death situation without training, nor without a basic understanding of vehicle dynamics. You just can't. Now, I'm not saying everyone must be able to toss the car into a corner, get it sideways, and clip the apex while whistling a happy tune; but for god's sake, can we at least mandate two days of training at a closed course with an instructor who has a clue about how to drive? That would absolutely save lives... lots and lots of lives.
Which brings me to my favorite feature of some of these "self driving" cars: all of a sudden, with no warning whatsoever, the computer says hey, I'm fucked here -- you have to drive now and save us all from certain disaster. I probably could not do that, and I sure as hell can toss a street car sideways into a corner while whistling a happy tune.
What does this even mean? I'm guessing you're going for 'the throttle' but it's a pretty ambiguous question.
Totally agree on advanced driver training though. If you don't know the limits of your vehicle then you shouldn't be driving it.
As for the last point, I think we need to ditch the "level X" designations and describe automated vehicles in terms of time that they can operate autonomously without human intervention. A normal car is rated maybe 0.5s. Autopilot would be 0.5s - 1s. Waymo would be much more depending on how rarely they need a nudge from the remote operators.
I am going for the throttle (you know, to stop the car from rotating too much after I threw it into that corner), and yes, you are correct: it (the throttle) does "steer" the rear of the car. Plus 1, btw.
Ambiguous... maybe. Anyway; see you on the wet skidpad ;) .
Other possible interpretations that I thought of (for the record):
- The front wheels (under good traction conditions)
- The limit of traction for the rear wheels (when in a corner nearing said limit of traction... my favourite part, btw.)
- The front wheels (if you're already sliding but hoping to go mostly straight)
- The throttle (if you're sliding and planning on keeping it this way while the front wheels dictate angle of drift)
- Edit: The handbrake (if you're in a front wheel drive for some reason)
I almost forgot how difficult it can be to explain the nuances of vehicle dynamics clearly and succinctly.
Let's start with the classic Grand Prix cornering technique (rear-wheel-drive/rear-engine car). We brake in a straight line, and the weight transfers forward so that now the front tires have more grip than the rear tires. (As a rule of thumb, the more weight a tire has on it, the more it grips, because it is being "pushed" down onto the road. You can of course "overwhelm" a tire by putting too much weight on it, causing it to start to lose adhesion.) As we get to the turn-in area of the corner, we (gently) release the brake and (gently) apply throttle to move some of the weight of the car back towards the rear tires. (If we didn't do this, the back of the car would still have almost no grip, and we would spin as soon as we initiated steering input to turn into the corner.)
Now we are into the first third of the turn and approaching the apex -- we have all the necessary steering lock to make the corner, that is to say, we will not move the steering wheel any more until it is time to unwind it in the final third of our turn (also, we are on even throttle; we cannot accelerate until we are at the apex). So here we are -- the front and rear slip angles are virtually equal, but we want to increase our rate of turn because we see we will not perfectly clip the apex... we breathe (lift a bit) off the throttle, but keep the steering locked at the same angle, and the car turns (a bit) more sharply. We have actually just steered the rear of the car with the throttle; yes, we have affected the front tires' slip angles as well, but if we viewed this from above, we would see we have rotated the car on its own axis.
This works, to varying degrees, in every layout of vehicle -- FWD, AWD, RWD. Technique and timing are critical, as are speed, gearing, road camber, and so on. The fact remains, though, that the throttle steers the rear of the car.
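For what it's worth, the weight-transfer effect underlying all of this has a standard back-of-envelope form (steady state, ignoring suspension dynamics and lateral transfer):

```latex
\Delta F_z = \frac{m \, a \, h}{L}
```

where $m$ is the car's mass, $a$ the longitudinal acceleration, $h$ the height of the center of gravity, and $L$ the wheelbase. Under braking, $\Delta F_z$ moves off the rear axle onto the front axle -- which is exactly why the fronts gain grip and the rears lose it, and why easing back onto the throttle (or breathing off it) shifts grip the other way.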
With the computer, I'm just completely in the dark and then maybe I die in some stupid way.
But you have the option of assistive technologies. Have the human drive and the machine supervise it. The mistakes made when the human falls asleep can be prevented. The mistakes that the machine makes, well those won't be made at all as human is the active driver.
Being able to explain the mistake (eating, texting) vs. (???) makes things qualitatively different.
I think any reasonable level of risk for the computers can be acceptable, if glitches are explainable, able to be recreated and provably remediable.
I didn't want to cloud the question with this point, but data from a machine driver's mistake can be used to train every other machine driver and make it better. While much can still be learned from the mistakes of a human driver, the error is not as likely to be minimized across the 'fleet' in the same way it is for a machine driver, if that makes sense.
Also, it's probably important to keep in mind -- if my understanding is correct -- that companies like Tesla are only using neural nets for situational awareness: to understand the car's position in space and in relation to the identified objects, paths, and obstacles around it. The actual logic governing the car's operation/behavior within that space is traditional programming. So it's not quite a black box in terms of understanding why the car decided to act in a particular way -- it may be that something in the world was miscategorized or otherwise sensed incorrectly, which could be addressed (via better training/validation, etc.). Or it could be that it understood the world around it but took the wrong action, which could also be addressed (via traditional programming).
If I'm wrong about that, I'm sure someone will chime in. (please do!)
This is some serious whitewashing here. It's not "unlikely", it's simply not going to happen at all. People have been killing innocents by drunk driving for decades now, so they obviously still haven't learned. They continually make the same mistakes, over and over. No, human drivers do not learn at all from each other's mistakes in any significant fashion.
This could be changed, if we as a society wanted it to. We could mandate serious driver training (like what they do in Germany), and also periodic retraining. Putting people in simulators and test tracks and teaching them how to handle various situations, using the latest findings, would save a lot of lives. But we choose not to do this because it's expensive and we just don't care that much; we think that driving is some kind of inherent human right and it's very hard for people to lose that privilege in this country. And it doesn't help that not being able to drive equates to being very difficult to survive in much of the US thanks to a lack of public transit options.
Why do you assume that drivers have to learn from each other's mistakes? A drunk driver learning from his own mistakes is already significantly ahead of a self-driving car, which potentially just repeats the same mistake over and over again. The correlated risk may even cause accidents in bursts: 10 self-driving cars all making the same mistake at the same time will cause even more damage than just a single one.
Because that's what computers do: we can program them to avoid a mistake once we know about it, and then ALL cars will never make that mistake again. The same isn't true with humans: they keep making the same stupid mistakes.
>A drunk driver learning from his own mistakes is already significantly ahead of what a self driving car does which potentially just repeats the same mistake over and over again
Why do you think this? You're assuming the car's software will never be updated, which is completely nonsensical.
>10 self driving cars all doing the same mistake at the same time will cause even more damage than just a single one.
Only in the short term. As soon as they're updated to avoid that mistake, it never happens again.
By the way, your examples are not very good, as the drunk/texting driver makes a choice, and to some extent her passengers do too. No such choice is given when the car is autonomous.
1) The computer has bugs that will kill the driver 100/100 times if they encounter that case: driving at the guard rail, the truck decapitation, and whatever bugs have yet to be discovered.
2) A distracted driver may encounter one of those situations and see and avoid it, even if drunk or looking down.
The likely case is that the current crop of self-driving cars is much more dangerous, and will remain so until some magical breakthrough happens, as mentioned above.
If the average person thinks they're better than the computer when the computer is better than the average person, the average person is incorrect.
> They want to be safe when they haven't done anything dangerous -- they want to be in reasonable control of their destiny.
If you don't trust the computer, does that mean you won't trust another driver, if the computer is better than the average driver? Then how do you drive at all, when the roads are full of average drivers who could hit you at any time?
Until you remember that obtaining statistically significant evidence that the latest version of Tesla's (or any other firm's) software is safer than the average driver entails hundreds of millions of miles of driving. And for that matter, that "average driver" accident rates are skewed upwards by the number of incidents involving people you'd never, ever volunteer to be driven by [in their state of intoxication].
In the mean time, the human heuristic that a car which tries to kill you by accelerating at lane dividers isn't safer than your own average level of driving skill in many circumstances is probably better than trusting exponential curves and the Elon Musk reality distortion field.
And they include driving in a huge range of conditions and roads that driving automation does not function in.
The conditions in which automated driving technology gets used (good weather, highway driving) must have far lower rates of accidents than average.
It's plausible that it currently does worse with adverse weather than humans do with adverse weather, but I'm not aware of any data on that one way or the other, and I wouldn't expect any since it's not currently intended to be used in those conditions.
It is possible for a newer version to contain a new bug that would increase the accident rate significantly, but given the existence of realtime collision data, that seems like the sort of thing that would be caught and corrected rather quickly, before it would dramatically affect the long-term average. So you have a probability of being the unlucky first person to encounter a new bug, but unless the probability of that is significantly higher than the overall probability of being in a collision for some other reason, that's just background noise.
Moreover, it isn't an unreasonable expectation that newer versions should be safer than older versions in general, so using the risk data for the older versions would typically be overestimating the risk.
> And for that matter that the "average driver" accident rates are skewed upwards by the number of incidents involving people who you'd never, ever volunteer to be driven by [in their state of intoxication].
That doesn't help you much when the intoxicated driver is driving a vehicle that hits you rather than the vehicle you're a passenger in. You presumably would prefer that vehicle to be self-driving rather than operated by the aforementioned drunk driver.
> In the mean time, the human heuristic that a car which tries to kill you by accelerating at lane dividers isn't safer than your own average level of driving skill in many circumstances is probably better than trusting exponential curves and the Elon Musk reality distortion field.
The somewhat ironic thing about stories like this is that whenever they discover something like this, it automatically becomes the focus of engineering time, both because a specific problematic behavior has now been identified and because not fixing it is bad PR. But then your heuristic is stale as soon as they fix it, which is likely to happen long before any kind of true driverless operation is actually available.
This assumes they correctly diagnose the fault, know how to fix it, and are able to fix it without any adverse side effects on superficially similar situations requiring a different course of action. This, and the assumption of a monotonic decrease in bugs and other undesired behaviour, seem like assumptions inconsistent with real-world development of complex software aimed at handling a near-infinite variety of possible scenarios. Any driver is going to constantly encounter situations which are subtly different from those the car has been trained to handle, so the probability of being the first to encounter a new bug doesn't strike me as particularly low. The gross human accident rate per million miles driven is very low (and a driver who is experienced, responsible and not intoxicated has good reason to believe their own probability of causing an accident is substantially lower).
> That doesn't help you much when the intoxicated driver is driving a vehicle that hits you rather than the vehicle you're a passenger in. You presumably would prefer that vehicle to be self-driving rather than operated by the aforementioned drunk driver.
I don't get to choose what vehicles other people use. I do get to choose whether to pay more attention to a car's actual erratic behaviour than a statistical claim that various previous iterations of the software have had fewer accidents than a set of humans whose accidents are heavily skewed towards people with less regard for road safety than me.
> The somewhat ironic thing about stories like this is that whenever they discover something like this, it automatically becomes the focus of engineering time, both because a specific problematic behavior has now been identified and because not fixing it is bad PR.
This argument works in theory, but videos of Teslas accelerating at lane dividers are neither a new phenomenon nor one which is reported to have been fixed. I'm sure plenty of engineer time has been devoted to studying them (despite Tesla's actual PR strategy being to deny the problem and deflect blame onto the driver rather than announce fixes) but the fixes aren't trivial or easily generalised and approaches to fixing them are bound to produce side effects of their own.
We're talking about a regression that makes things worse than they were before. The worst case is that they have to put it back the way it was.
> This, and the assumption of a monotonic decrease in bugs and other undesired behaviour, seems like assumptions which are inconsistent with real world development of complex software aimed at handling a near-infinite variety of possible scenarios.
I think this is misunderstanding what happens with large software systems. What happens is that people have a certain level of tolerance for misbehavior, so the system gets optimized to keep the misbehavior at that threshold. Then every time a component improves to reduce its misbehavior, it allows them to trade off somewhere else, usually by increasing the complexity of something (i.e. adding a new feature), because they'd rather have the new feature which introduces new misbehavior than the net reduction in misbehavior.
That doesn't really play out the same way for safety-critical systems, because people highly value safety and it's not especially difficult to measure it statistically, which puts pressure on the companies to compete to have the best safety record and therefore not trade the reductions in misbehavior for additional complexity as much.
> Any driver is going to encounter situations which are subtly different from those the car has been trained to handle on a constant basis, so the probability of being the first to encounter a new bug doesn't strike me as being particularly low.
It's not just a matter of encountering a new situation with a subtle difference. The difference has to cause the system to misbehave, and the misbehavior has to be dangerous, and the danger has to be actually present that time.
And if it was really that common then why aren't their safety records worse than they actually are?
> The gross human accident rate per million miles driven is very low (and a driver who is experienced, responsible and not intoxicated has good reason to believe their own probability of causing an accident is substantially lower)
The rate for autonomous vehicles is also very low, and the average person is still average.
> I do get to choose whether to pay more attention to a car's actual erratic behaviour than a statistical claim that various previous iterations of the software have had fewer accidents than a set of humans whose accidents are heavily skewed towards people with less regard for road safety than me.
So who is forcing you to buy a car with this, or use that feature even if you do? Not everything is a dichotomy between being mandatory or prohibited. You can drive yourself and the drunk can let the software drive both at the same time.
Though it wouldn't be all that surprising that computers will one day be able to beat even the best drivers the same way they can beat even the best chess players.
> This argument works in theory, but videos of Teslas accelerating at lane dividers are neither a new phenomenon nor one which is reported to have been fixed.
You're assuming they're the same problem rather than merely the same result.
And in this case it's purposely adversarial behavior. There are tons of things you can do to cause an accident if you're trying to do it on purpose, regardless of who or what is driving. The fact that software can be programmed to handle these types of situations is exactly their advantage. If you push a sofa off an overpass into a highway full of fast-moving traffic, there may be a way for the humans to react to prevent that from turning into a multi-car pile-up, but they probably won't. And they still won't even if you do it once a year for a lifetime, because every time it's different humans, without much opportunity to learn from the mistakes of those who came before.
I think this is misunderstanding the difference between a safety-critical system which is designed to be as simple as possible, such as an airline system to maintain altitude or follow an ILS-signalled landing approach, and a safety-critical system which cannot be simple and is difficult even to design tractably, such as an AI system designed to handle a vehicle in a variety of normal road conditions without a human fallback.
> That doesn't really play out the same way for safety-critical systems, because people highly value safety and it's not especially difficult to measure it statistically
The benchmark maximum acceptable fatality rate for all kinds of traffic-related fatality is a little over 1 per hundred million miles, based on that of human drivers. Pretty damned difficult to measure the safety performance of a vehicle type statistically when you're dealing with those orders of magnitude...
> It's not just a matter of encountering a new situation with a subtle difference. The difference has to cause the system to misbehave, and the misbehavior has to be dangerous, and the danger has to be actually present that time.
Well yes, the system will handle a significant proportion of unforeseen scenarios safely, or at least in a manner not unsafe enough to be fatal (much like most bad human driving is unpunished). Trouble is, there are a lot of unforeseen scenarios over a few tens of millions of miles, and a large proportion of these involve some danger to occupants or other road users in the event of incorrect [in]action. It's got to be capable of handling all unforeseen scenarios encountered in tens of millions of road miles without fatalities to be safer than the average driver.
> And if it was really that common then why aren't their safety records worse than they actually are?
They really haven't driven enough miles to produce adequate statistics to judge that, and they invariably drive with a human fallback (occasionally a remote one). Still, the available data would suggest that even with safety drivers and conservative disengagement protocols, purportedly fully autonomous vehicles are roughly an order of magnitude behind human drivers in deaths per mile. Tesla's fatality rate is also higher than that of other cars in the same price bracket (although there are obviously factors other than Autopilot at play here).
> The rate for autonomous vehicles is also very low, and the average person is still average.
You say this, but our best estimate for the known rate for autonomous vehicles isn't low relative to human drivers despite the safety driver rectifying most software errors. And if a disproportionate number of rare incidents are caused by "below average" drivers, then basic statistics implies that an autonomous driving system which actually achieved the same gross accident rate as human drivers would still have considerably less reliability at the wheel than the median driver.
> You're assuming they're the same problem rather than merely the same result
From the point of view of a driver, my car killing me by accelerating into lane dividers is the same problem. The fact that there are multiple discrete ways for my car to accelerate into lane dividers, and that fixing one does not affect the others (and may even increase the chances of them occurring), supports my argument, not yours. And even this instance, which unlike the others was an adversarial scenario, involved something as common and as easily handled by human drivers as a white patch on the road.
People do equally inexplicably stupid things as well, due to distraction, tiredness or just a plain brain fart.
Please do point out where the statement is incorrect.
I wasn't talking about perception, people have all kinds of ideas about machines (mostly unfounded), but typically safety records drive policy, not whether drivers think they are better than the machine.
I believe machine assisted driving is safer than unassisted already and will continue to improve such that in 10 or 20 years human drivers will be the exception, not the norm. That will happen because computers are already safer in most conditions - the switch will happen when they are demonstrably far safer in all conditions.
Remember they don't have to beat the best human driver, just the average.
Computers currently can disengage in any conditions for any reason -- how did you come to the conclusion that computers are already safer?
just being safer than humans is enough
They also drive 8-9 billion miles per day. That's around 1 death per 90 million miles of driving. Given the number of AV miles that are driven annually right now, we actually would expect to see ~0 deaths per year if AVs were as safe as humans...
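A quick sanity check on those figures -- the mileage and death rates are from the comment above; the annual AV fleet mileage is a placeholder assumption for illustration, not a sourced number:

```python
# ~8-9 billion miles per day (US) at roughly 1 death per 90 million
# miles implies on the order of 100 deaths per day. Running a
# hypothetical AV fleet's annual mileage against that baseline shows
# why we'd expect ~0 AV deaths per year if AVs merely matched humans.
miles_per_day = 8.5e9               # midpoint of the 8-9 billion estimate
deaths_per_day = 100                # approximate US traffic deaths per day
miles_per_death = miles_per_day / deaths_per_day
assumed_av_miles_per_year = 20e6    # hypothetical fleet mileage, not sourced
expected_av_deaths = assumed_av_miles_per_year / miles_per_death
print(round(miles_per_death / 1e6))  # ~85 million miles per death
print(round(expected_av_deaths, 2))  # ~0.24, i.e. about zero per year
```

So a single AV death in that mileage would already be evidence of a rate well above the human baseline -- which is the point being made.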
It is probably best to categorize these as autopilot PLUS human supervision, but anyway, Wikipedia cites 3 autopilot-caused deaths worldwide over 3 years or so.
Given that Autopilot can handle nighttime but not the other conditions, it's completely possible that 1 death per 300 million miles is above the sober, good-weather, highway human-driver fatality rate.
Oh? Where did you hear that? Can you post a source, or are you speculating?
If you think this will actually happen, you're not really following Tesla very closely.
Outside of Elon's boasting, is there any evidence they have a chip that is going to be 10x better than the best in the industry? How do you imagine that happens? Where does the talent come from?
Edit: To expand, the GPU in current Teslas is a GP106. It has no neural net-specific compute abilities. For NN inference it's slower than last year's iPhone. A vastly faster chip wouldn't be hard to get. Even if their in-house silicon efforts fail, they could just buy a modern Nvidia chip and the inference speed would go up 100x. Those chips go for $1000, easily covered by Tesla's "full self driving" upgrade package price tag
If they run into a problem with FSD, it's not going to be finding a way to run 10x faster compute than the current shipping hardware. They may have other problems, but not that.
The above post is saying that they’ll need to offer groundbreaking cutting edge tech that doesn’t even exist yet ... for free.
All the while this tech is being invented Tesla is selling more cars that will need this technology.
The real question is, what happens if "full self driving" doesn't arrive for another 30 years?
But if you think car companies have never promised something they weren't sure they could deliver, you're not aware of the incumbents' unfunded pension liabilities.
Can we just get rid of whataboutism here entirely, please, HN?
One wrong does not excuse another.
> It is kinda amazing that Elon Musk is selling cars based on speculative future breakthroughs in technology. That must be a first.
But promising things based on speculation about the future is not new, it's common practice. It may be a questionable idea, but the claim was that it's unusual. It's not.
Cheating, e.g. Dieselgate, doesn't count: that was not a promised feature, it was outright fraud.
Volkswagen has been advertising their EV charging network (Electrify America), promising to expand availability in the future:
If you buy one of their electric cars planning to use it, you're assuming they're actually going to build it, and if you live near one of the proposed sites, actually build that one in particular.
And this is hardly new. Ford Sync is more than a decade old, but when it came out they were advertising all the things you could theoretically do with it, many of which were contingent on third parties writing apps for it. Some of that actually happened, some of it didn't, but it wasn't all there on day one.
Parent was saying “X will probably be bad because similar Y was bad in the same dimension (here, overpromising)”.
Maybe not a solid argument, but it isn’t whataboutism.
Edit: Never mind, it looks like I misread the parent's point, and he was saying that it's no big deal to promise something without being sure you can deliver because the Big Three did it with pensions.
Tesla's head of AI is Andrej Karpathy, who many in this community hold in high esteem. I know this is "argument by authority", but we're working with a black box here, so it will have to do. Do you really think he is wasting his best years on a project that anyone in the field can "guarantee" will never happen? Or could it be that he knows something you don't?
By the way, it seems you also don't know that they hold most FSD revenue in reserve on their financials, it's not being spent. So if they need to return it, they can.
Regarding the claimed chip that is 100x faster than NVidia's, I also take that with a grain of salt. Note that according to Google, 3rd-generation Google TPUs are on par with the latest NVidia GPUs in terms of performance. If Tesla has really made a chip that is 100x faster, they should spin it off as a separate company that could be worth as much as 2x Tesla's market cap.
I don't think they've claimed that the FSD computer (hw3) is 100x faster than "the latest NVidia GPUs". He's said it's about an order of magnitude faster, in terms of the number of video frames it can process compared to the current Nvidia hardware in a Tesla (that is, from 150-200 fps to ~2000 fps), without needing to scale down any frames.
I think he may have said in one live interview it would be "2000%" better, but since he said previously it would do 2000 frames/s, that may have just been a mistake of saying "percent" instead of "frames".
About the stuff that's based on Elon's assertions:
First, yes, he is often wrong on timelines. Nobody doubts that. By the way, for other car companies (even Waymo!) who claim they'll have X milestone by Y date, everyone is understanding, since timelines slip. For Tesla, apparently it's a capital crime to say "I think we'll have it by then" and not have it. But your original points were not about timelines.
As for the miles they have registered with the DMV, Tesla's self-driving programme does not follow the same path as others. They are progressing from level 2 upwards, and deploying improvements to their fleet of cars in production. Other companies are working with tiny fleets and aiming directly at level 4+. So basically, you're looking in the wrong place. But even so, Elon's latest prediction is that they'll have "feature completeness" by end of year, and then they'll start working on regulatory approval. So I assume that's when you'll start seeing miles there, and you will very likely see lots of them, all at once.
Since when is middle management with an org size of 80 an "executive position"?
> His department is building something that is actually used in cars today.
So do the departments at Cruise, Waymo, BMW, and even universities. Karpathy isn't special -- and neither is Tesla Autopilot's progress.
>By the way, it seems you also don't know that they hold most FSD revenue in reserve on their financials, it's not being spent. So if they need to return it, they can.
Please point to the line and note in the financials where this is done, because I'm quite sure you're mistaken. Tesla hardly has any warranty reserves, let alone FSD refunds.
Blind worries me a lot. Because if the general claims made on the website are true, then the people in top companies making $300k+ annually are some of the most immature and toxic people on the face of the earth.
Please take a step back and think about the average human being for a moment.
Even if it’s not just negligence, if an accident were to move the markers, would it cascade into further accidents until the markers are fixed?
In the end, it seems we’d still need cars far cleverer than what we have now for it to be trustworthy.
Many localities can barely keep the lines painted. Many more have all sorts of adverse weather conditions (e.g., snow and slush) that make road markings hard to see. The chances of this being a workable solution, at least in the US, are nil.
Sure, for commuting in large cities anything is possible, but then cars still need to be able to do both old and new navigation.
I'm reading this from inside a train.
Let the driver drive the car manually for the last mile of travel. Most of the problems of autonomous driving are on the last mile (pedestrians, unmarked roads, poor lighting, school crossings, train crossings, etc.). The last mile is the most expensive to outfit and maintain, too.
I wouldn't be disappointed at all to find that in the future, cars drive by themselves 80% of the time, and humans take over for the tricky 20%. If the roadway isn't autonomous-ready the car shouldn't become a brick. You just drive it manually.
You would either have to add massive taxes to road usage (not just gas), heavily tax automobile ownership or use general purpose taxes for such very expensive upgrades.
As a non-owner of a car I would balk at the last option.
Sure, a car that doesn't need any aids is much cooler. But if you look at the history of technology, especially in the PC world, there were a lot of clever hacks and external aids that had to be used in order to make the tech feasible. Then with time, they were rendered unnecessary.
You don't live in the US, do you?
Except, y'know, those newfangled automobile things that they had to build special roads for.
It is. But it's not that unusual these days: it's basically a Kickstarter!
Well, not exactly. Kickstarter projects have no responsibility to ever deliver. They just have to make a good faith effort.
Tesla has to eventually deliver some version of full self driving to customers, but they made sure not to say when.
So it's hard for me to see the legal liability for them. How can someone sue for a product that was never guaranteed a delivery date?
Worst case would be a lot of customers demanding refunds: which I doubt Tesla would fight.
So I guess that's the liability on the books: some % of $5000 refunds.
But a lot of customers will be happy with whatever they deliver whenever they deliver it. It doesn't feel like an existential risk to me.
Part of why it doesn't faze me is that Tesla has so many knobs they can dial here. They can decide where FSD is enabled: if it works only on roads where the autopilot is highly confident, and only during good weather conditions, that probably fulfills their legal obligations.
This is why I think self-driving bears are wrong. They argue computers will never be able to handle every corner case, which is true. But it only needs to work for one slice of drivers in one location for there to be a market, and from there it's just an iterative process to grow the service area.
It's like arguing my ice cream shop will fail because many people are lactose intolerant, pale, vegan, etc. Sure, but to have a successful business I just need one customer, then two, then four. And I get to choose which customers I court.
Tesla is the same wrt FSD.
Woah, that's a long time! What are the problems in computer vision that need to be solved?
If this is accurate, the current models give a hint of the issue. The car can't yet see very far, and any minor obstruction confuses the hell out of it. There are too few pixels to make out the road at a distance, and the car doesn't use any of the other clues humans do to figure out what the road is doing next -- i.e., we can guess a corner from vertical signs, guard rails, trees, and even hillsides. See this example: in red, the pixels that hint a Tesla at an incoming corner; in blue, those that a human can also use: https://i.imgur.com/CvntZuZ.png
The problem is that a camera, especially if it's not at a high vantage point, will have very few pixels to represent distant features.
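A quick pinhole-camera calculation shows how fast pixel coverage shrinks with distance. The camera parameters below (1280 px wide, 60-degree horizontal field of view) and the 1.8 m car width are illustrative assumptions, not Tesla's actual sensor specs:

```python
import math

def pixels_on_target(object_width_m, distance_m, image_width_px=1280, hfov_deg=60):
    """Approximate horizontal pixels an object subtends for an ideal pinhole camera."""
    # Angle subtended by the object at the given distance (radians)
    angle = 2 * math.atan(object_width_m / (2 * distance_m))
    # Pixels per radian across the image, assuming uniform angular resolution
    px_per_rad = image_width_px / math.radians(hfov_deg)
    return angle * px_per_rad

# A 1.8 m wide car at increasing distances:
for d in (25, 50, 100, 200):
    print(f"{d:4d} m -> {pixels_on_target(1.8, d):5.1f} px")
```

At 200 m the whole car spans only about a dozen pixels, which is why distant features carry so little information for the vision system.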
Humans, while not perfect, are capable of making split-second decisions on incomplete data under a surprising range of conditions, because our brains are unmatched pattern-matching beasts feeding off our experience.
Unless you're talking traditional computer security, which it doesn't seem like you are, these types of threats have not prohibited human drivers despite the fact that humans are very susceptible to "adversarial attacks" while driving too. Whether it's putting carefully crafted stickers over a stop sign to confuse a CNN or yanking it out of the ground to get a human driver killed.. you're talking about interfering with the operator of a moving vehicle... so what's the critical difference here?
> so what's the critical difference here?
If neural networks are deployed at scale in self driving cars, a single bug could trigger millions of accidents.
Printing an adversarial example on billboards would lead to crashes all around the country. Are we going to assume no one is going to try? (BTW: real-world adversarial examples are easy to craft.)
Like literally the article we’re commenting on. Image recognition systems in general are much more susceptible to errors in cases where humans wouldn’t even think twice.
And yes, “hey, a stop sign was here just yesterday” is also a situation for which humans are uniquely equipped, and computers aren’t.
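The adversarial-example recipe alluded to above really is a one-liner in its simplest form: the fast gradient sign method perturbs the input by a small step in the direction of the sign of the loss gradient. A minimal sketch on a toy linear "model" with invented random weights (not a real vision network):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # fixed toy model weights
x = rng.normal(size=100)   # an input the model scores
label = 1.0                # assume x belongs to the positive class

def score(v):
    """Linear classifier score: positive means class +1."""
    return w @ v

# Gradient of a margin loss w.r.t. the input is -label * w for this model
grad = -label * w
eps = 0.1                  # perturbation budget per dimension
x_adv = x + eps * np.sign(grad)   # FGSM step

print(score(x), score(x_adv))     # the score drops after the perturbation
```

The perturbation changes each input dimension by at most 0.1, yet it pushes the score strongly toward misclassification; the same principle, scaled up, is what sticker attacks on stop signs exploit.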
Humans are LOUSY in almost all of those conditions as well as much less challenging ones--we engineer the road in an attempt to deal with human imperfections. The I-5/I-805 intersection in California used to have a very slight S-curve to it--there would be an accident at that point every single day. Signs. Warning markers. Police enforcement. NOTHING worked. They eventually just had to straighten the road.
Humans SUCK at driving.
Most humans have a time- and landmark-based memory of a path, and they follow that. Any deviation from that memory and boom, accident.
This is the problem I have with the current crop of self-driving cars. They are solving the wrong problem. Driving is two intertwined tasks--long-term pathing, which is mostly spatial memorization, and immediate response, which is mostly station keeping with occasional excursions into real-time changes.
Once they solve station-keeping, the pathing will come almost immediately afterward.
Ever notice how a bunch of stupid drivers playing with their phones tend to lock to the same speed and wind up abreast of one another? Ever notice how you feel compelled to start rolling forward at a light even when it is still red simply because the car next to you started moving? When in fog, you are paying attention to lane markers if you can see them, but you are also paying attention to what the tail lights ahead of you are doing.
All of that is "station keeping".
And it's normally extremely important to give it priority--generally even over external signals and markings (a green light is only relevant if the car in front of you moves). It's the kind of thing that prevents you from running into a barrier because everybody else is avoiding the barrier, too.
Of course, it's also what leads to 20 car pile ups, so it's not always good...
It's also not objectively a simpler problem. Humans are actually not particularly good at speech recognition, especially when talking to strangers and when they can't ask the speaker to repeat themselves. Consider how often you need subtitles to understand an unfamiliar accent, or reach for a lyrics sheet to understand what's being sung in a song. For certain tasks ASR may be approaching the noise floor of human speech as a communication channel.
Humans may not be particularly great at speech transcription, but they're phenomenal at speech recognition, because they can fill in any gaps in transcription from context and memory. At 95% accuracy, you're talking about a dozen errors per printed page. Any secretary that made that many errors in dictation, or a court reporter that made that many errors in transcribing a trial, would quickly be fired. In reality, you'd be hard pressed to find one obvious error in dozens of pages in a transcript prepared by an experienced court reporter. It is not uncommon in the legal field to receive a "proof" of a deposition transcript, comprising hundreds of pages, and have only a handful of substantive errors that need to be corrected. That is to say, whether or not the result is exactly what was said, it's semantically indistinguishable from what was actually said. (And that is why WER is a garbage metric--what matters is semantically meaningful errors.)
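To make the complaint about WER concrete: it is just word-level edit distance divided by reference length, so it weighs every substitution equally. In the hypothetical transcripts below, both errors score the same 20% WER, but only one flips the meaning:

```python
def wer(ref, hyp):
    """Word error rate: word-level Levenshtein distance over reference length."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i          # cost of deleting i reference words
    for j in range(len(h) + 1):
        d[0][j] = j          # cost of inserting j hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match/substitution
    return d[len(r)][len(h)] / len(r)

print(wer("do not stop the car", "do now stop the car"))  # meaning flipped
print(wer("do not stop the car", "do not stop a car"))    # meaning intact
```

Both hypotheses print 0.2, which is exactly the point: WER cannot distinguish a harmless article swap from a catastrophic negation error.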
The proof of the pudding is in the eating. If automatic speech recognition worked, people would use it. (After all, executives used to dictate letters to secretaries back in the day.) On the rare occasions you see people dictate something into Siri or Android, more often than not the results are hilarious.
Yes, Switchboard has problems (I've mentioned many of them here), but it was something that 1990s systems could be tuned for. You would see even more dramatic improvements when using newer test sets. A speech recognition system from the 1990s will fall flat on its face when transcribing (say) a set of modern YouTube searches. Most systems in those days also didn't even make a real attempt at speaker-independence, which makes the problem vastly easier.
Executives don't dictate as much any more because most of them learned to touch type.
Now it works great!
Maybe if it could be made to work well while whispering....
I'm skeptical a bit.
We could confirm or disprove this by getting human test subjects to drive cars using nothing but the video feed from exactly the same cameras.
If the humans perform significantly better, then that shows that there is a lot of work to be done independently of the camera resolution.
I could drive reasonably safely without my glasses (equivalent to a massive reduction in resolution); I just wouldn't be able to read direction signs and such. I'd have to drive right up to a parking sign to get the exact days and hours of the parking restrictions and such, but I wouldn't run over a pedestrian, or go the wrong way down some lane.
As a human driver, I'd say you could drive OK with that, although looking into the distance the images are annoyingly blurry compared to what I'd see with the naked eye driving normally. Maybe comparable to driving in the rain with so-so windscreen wipers.
Quite a lot of the time, the vertical 'smear' prevents me from seeing what something is or whether it's moving towards me, and the dynamic range is too poor to make out black cars on a black background.
Then they'll go mainstream, and they'll keep getting better and better, while human drivers don't.
Every single time I get in a vehicle, the people coming towards me might be talking on their phone, drunk or whatever. There is a very real chance I will be injured or killed on a daily basis even if I drive perfectly, and nothing beyond my abilities happens.
In the same way that seat belts and airbags and automatic braking improved safety, imperfect self-driving cars will improve overall safety.
The important thing to note is that seat belts, airbags, and automatic braking are far from perfect, and thousands of people a year still die even though they are using them. People still use them because it is safer than the alternative -- which imperfect self-driving cars will be too.
People will not accept a computer driver, even if it manages a significant improvement on the current rate of car accidents, because there is a real psychological barrier. Your counterargument of "but it's an improvement" would miss that point.
Humans are prone to all sorts of fallacies, especially surrounding "destiny" and our own influence over our lives - that's why ideas like "bad things happen to bad people" are so popular. That kind of problem is not surmountable by statistical fact. You cannot sway the majority of road users to trust a machine that way. They want to be in control, because it makes them feel something that the computer can't - safe in their own hands.
In your opinion.
We can't know what people will accept, because we've never tried something like this before.
I've pointed out my view, and you've pointed out yours. Time will be the only way to see what people are willing to accept, or not.
Yes, they certainly are, but those people also vote. You can't just ignore them and their idiotic arguments, unless you live in a techno-authoritarian nation where things like this are forced on the populace against their will (even if their will is stupid). This is the whole problem here; you can't just convince everyone with a technical argument, because most people are non-technical, emotional, and frequently quite stupid, but they also have a say in decisions.
“Better than human drivers” is not an analytically useful criterion. The question is, can you handle all the corner cases human drivers manage to handle every day?
It's extremely simple, and it's the same criterion they use to determine whether driving is safer with various safety devices (e.g., airbags or seat belts) than without them.
Injuries or deaths per x miles or x hours driven.
It's very important to note that in some cases airbags and seat belts actually result in more severe injuries or deaths than without them, but overall they are better because they reduce the fatalities per x miles driven.
Just because airbags sometimes kill people, it doesn't mean I'm going to choose a car that doesn't have them. Overall, I'm better off to have them.
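The metric described above is trivial to compute. A sketch with invented illustrative figures (not real crash data), normalizing fatalities to a per-100-million-miles rate so two systems can be compared directly:

```python
def fatalities_per_100m_miles(fatalities, miles):
    """Normalize a fatality count to a per-100-million-miles rate."""
    return fatalities / (miles / 1e8)

# Hypothetical numbers for illustration only:
human_rate = fatalities_per_100m_miles(fatalities=36_000, miles=3.2e12)
system_rate = fatalities_per_100m_miles(fatalities=5, miles=3.0e8)

print(f"human:  {human_rate:.2f} per 100M miles")
print(f"system: {system_rate:.2f} per 100M miles")
```

Note the denominator matters as much as the numerator: a system with very few total deaths can still have the worse rate if it has driven few miles.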
For example, right now Teslas cannot detect stationary obstacles. They slam right into them at highway speeds: https://www.caranddriver.com/features/a24511826/safety-featu.... This is not a matter of just tweaking the algorithms to get better and better error rates—it’s a fundamental problem with the system Tesla uses to detect obstacles.
In order to actually get close to the accident rate of human drivers on average, you have to be “perfect” in the sense you have to be able to handle every edge case a human driver is likely to ever run into in their entire lives.
That's like saying for airbags to be overall better, they have to be better at every single edge case like kids in the front seat or unrestrained passengers. We know for a fact they are not better at those edge cases, yet overall airbags are safer.
Why? Because 99.99999% of driving is not edge cases (that's why they're called edge cases), and as long as you're better for that very vast majority of cases, then you're better overall.
You need to handle the “long tail” of exceptional cases. While most driving is not exceptional cases (on a per mile basis), they arise quite often for any given driver, and more importantly, for drivers in the aggregate. A vehicle stopped in the middle of the road is an edge case. It’s also something that just happened to me today, and in the DC metro area happens hundreds of times per day. Encountering a traffic cop in the middle of the street directing traffic happens to thousands of cars per day in DC. Traffic detours happen thousands of times per day. Road construction, unplanned rerouting, screwed up lane markings, misleading signs, etc.
The basic problem you’re having is that you’re assuming that failure modes for self driving vehicles are basically random. That’s not the case. A particular human might plow into a stopped vehicle because she is not paying attention, but the vast majority of people will not. But a particular Tesla will plow into a stopped vehicle because it doesn’t know how to handle that edge case, and so will every other Tesla on the road. A human driver might blow through a construction worker signaling traffic by hand because they’re texting. But the vast vast majority will not. But all the Teslas on that stretch of highway will blow through that guy signaling traffic, because they don’t know how to handle that edge case. A human driver might hit a panhandler who jumps into the street because she isn’t paying attention to body language. Every Tesla will do it, every time. Humans can handle all the edge cases most of the time. That means self driving cars must be able to handle all the edge cases, because any edge case they can’t handle, they can’t handle it any of the time.
We know how many fatalities there are in the US per year, but I wonder how many crashes there are? How many near misses, or how many times does someone "luck" out and miss death by inches while being completely oblivious to it?
> A vehicle stopped in the middle of the road is an edge case
No it's not. It happens all the time (vehicles waiting to turn across oncoming traffic), and I'm sure training models already deal with it.
> But all the Teslas on that stretch of highway will blow through that guy signaling traffic, because they don’t know how to handle that edge case
You really think self-driving cars won't be able to read the "stop" sign a construction worker holds out? I bet they can now.
> A human driver might hit a panhandler who jumps into the street because she isn’t paying attention to body language. Every Tesla will do it, every time.
Again, you really think self-driving cars won't automatically emergency stop when they detect something jump out into their lane? Again, I'd be willing to bet they'll have a much faster reaction time than your average driver.
> Humans can handle all the edge cases most of the time
The number of road deaths per day around the world makes me strongly disagree with that.
It sounds like you have a particular bent against "Tesla", and you're not seeing this for what it is.
They don't have to be perfect, but they do have to continually get better. And they are.
Yet Tesla released a vehicle with an “auto pilot” that can’t handle that case. Makes me skeptical they’ll ever be able to handle the real edge cases.
> You really think self-driving cars won't be able to read the "stop" sign a construction worker holds out? I bet they can now.
Teslas can’t. And will they be able to read the hand signals of the Verizon worker who didn’t have a stop sign while directing traffic on my commute last week?
> Again, you really think self-driving cars won't automatically emergency stop when they detect something jump out into their lane? Again, I'd be willing to bet they'll have a much faster reaction time than your average driver.
For a human driver, it doesn’t come down to reaction time. The human driver will know to be careful from the pan handler’s body language long before they jump into traffic.
Also, being able to emergency stop isn’t the issue. Designing a system that can emergency stop while not generating false positives is the issue. That’s why that Uber killed the lady in Arizona: Uber had to disable the emergency braking because it generated too many false positives.
> Humans can handle all the edge cases most of the time
The number of road deaths per day around the world makes me strongly disagree with that.
Humans drive 3.2 trillion miles every year in the US, in every sort of condition. Statistically, people encounter a lifetime’s worth of edge cases without ever getting into a collision (there is one collision for about every 520,000 miles driven in the US). In order to reach human levels of safety, self driving cars must be able to handle every edge case a human is likely to encounter over an entire lifetime.
> It sounds like you have a particular bent against "Tesla", and you're not seeing this for what it is. They don't have to be perfect, but they do have to continually get better. And they are.
I have a bent against techno optimism. Engineering is really hard, and most technology doesn’t pan out. Technology gets “continually better” until you hit a wall, and then it stops, and where it stops may not be advanced enough to do what you need. That happened with aerospace, for example. I grew up during a period of techno optimism about aerospace, but by the time I actually got my degree in aerospace engineering, I realized that we had hit a plateau. In the 60 years between the 1900s and 1960s, we went from the Wright Flyer to putting a man in space. But we hit a plateau since then. When the Boeing engineers were designing the 747 in the 1960s, I don’t think they realized that they were basically at the end of aviation history. That 50+ years later (nearly the same gap between the Wright Flyer and themselves), the Tokyo to LA flight would take basically the same time as it would in their 747.
The history of technology is the history of plateaus. We never got pervasive nuclear power. We never got supersonic airliners. Voice control of computers is still an absurd joke three generations after it was shown on Star Trek.
It’s 2019. The folks who pioneered automatic memory management and high level languages in their youth are now octogenarians, or dead. But the “sufficiently smart compiler” or garbage collector never happened. We still write systems software in what is fundamentally a 1960s-era language. The big new trend is languages (like Rust), that require even more human input about object lifetimes.
CPUs have hit a wall. You used to be able to saturate the fastest Ethernet link of the day with one core and an ordinary TCP stack. No longer. Because CPUs have hit a wall, we’re again trading human brain cells for gains: multi-core CPUs that require even more clever programming, vector instruction sets that often require hand rolled assembly, systems like DPDK that bypass the abstraction of sockets and force programmers to think at the packet level. This is all a symptom of the fact that we’ve hit a plateau in key areas of computing.
There is no reason to assume self driving tech will keep getting better until it gets good enough. It may, or it may not. This is real engineering, where the last 10% is 90% of the work, and where that last 10% often proves intractable.
It's called Supplemental Restraint System for a reason.
Fear mongering is not the way to view problems, or to make our society better. We need to improve things, not live in fear of what "Bad" people could do to us.
You mean using cameras without LIDARs right? Presumably Waymo itself (cameras + LIDARs) is roughly at that level of advancement today.
In 10-20 years LIDARs should improve dramatically as well, in both price and performance. So I'm guessing cars will continue to rely on both technologies even when cameras become viable by themselves.
BTW, did you enjoy working at Waymo? Would you recommend the perception group as an employer?
It should be a blended, combined approach: FLIR, LIDAR, RADAR, USONAR, CV, etc
There doesn't need to be The One True Technology To Rule Them All™
The big issue is still in terms of processing speed and sensor fusion and Tesla isn’t leading the pack in either.
As far as wide-scale autonomous driving goes, for it to be good enough there needs to be an agreeable and verifiable decision-making model for all manufacturers to follow and for regulators to validate. This could be as simple as "never perform an action that would cause a collision", but that's not an ideal model either: you might then get a silly decision, such as not engaging the brakes to slow down because you'll end up hitting the car in front of you in either case.
Sadly, I’m seeing too many people who tend to complicate things and chase red herrings when it comes to autonomous driving, such as “ethics”. While it’s an interesting philosophical exercise, in the real world it’s not the important part: when it comes to accidents, the vast majority of drivers act on basic instinct and don’t weigh the orphan vs. the disabled veteran, simply because they don’t have that data nor the capacity to incorporate it into their decision-making process.
First we need to get sensors that don’t get confused by oddly placed traffic cones, birds, and shadows; the rest is pretty much irrelevant at this point.
Have you ever driven in a city? People don’t follow right of way rules and road signs. Pedestrians jump into traffic randomly. You absolutely have to be looking around trying to gauge peoples’ intentions in order to drive safely.
And the “theory of mind” mostly comes down to assuming the other driver doesn’t want to cause an accident either and will slow down, at which point the estimation can be broken down into math, such as whether they would have to slow down given the average response time and their speed.
As a person, you also have no idea whether they are drunk or paying attention. The only thing you can see is how fast they are going, and usually if they are going well above the speed limit you’ll assume they are jerks and won’t let you merge -- which luckily is simple enough for the car to do as well.
You can infer motivations and intentions based on how they’re driving. If you turn on your blinker and the guy in the lane next to you speeds up to close your available space, you have a pretty good idea of what they’re trying to do.
I’m not sure if there’s an algorithm that will figure out if other people on the road are complete fucking assholes that you want to avoid.
If you have enough time to stop when, say, a kid jumps in front of you, you will stop. If you don’t, you don’t have enough time to process all the information around you and gauge the most optimal outcome; if you swerve, you swerve, but that is likely to happen regardless of what jumps out in front of you, simply because that’s what your instincts tell you to do.
The best thing an autonomous vehicle can offer is a better response time, so you’ll be more likely to stop, or to slow down to a non-fatal (or at least less harmful) velocity.
Drivers are drivers, whether human or robot: distracting them with nonsense won’t make them better or safer, but rather quite the opposite.
There is no right answer to that question, because I didn't specify what the animal is. Whether the animal is a baby deer or a human child will influence your choice. This isn't contrived. When I hit a deer, I made a choice to stay on the road. Had it been a child, I would have swerved, endangering myself in the process.
- Are they zooming by, then suddenly moving to the right and slowing down? Probably got a phone call, make ready to overtake them. And watch out, they might get really slow.
- Is the car in front stuck behind a truck and slowly moving to the left? Driver probably plans to overtake the truck soon. Reduce speed to not crash into his rear end.
Looking forward to when these things will be included in the reasoning model of self-driving cars.
0. Otherwise I don't think self-driving cars is happening soon.
are you willing to put money on your time estimate?
But yeah, okay, Elon Musk said it's going to happen. He also said they were going to have it years ago and we're still waiting on that cross-country trip from New York to Los Angeles that never materialized.
We'll see them roll out an error-prone stop-sign and traffic-light detection as an FSD beta, and everyone will act like that was what was promised -- just like lane-keeping assist is "Autopilot".
I realize this is a tangent, but I feel like it ends with Tesla going bankrupt before they ever come close to building a self-driving car, or even for the narrative to wear off for the general public. I believe a criminal investigation should be brought against Tesla for negligent manslaughter due to the fraudulent advertising and consequential deaths, but I have no expectation of that ever happening.
I just get really angry when people lie, then people die, and nobody does anything about it. As another commenter pointed out, Elon Musk himself is still demonstrating the technology as being completely autonomous just a few months ago.
But it seems nobody cares.
"The upshot is that Autopilot might, in fact, be saving a ton of lives. Or maybe not. We just don’t know."
...which doesn't support the claim that banning Tesla's autopilot would be a net positive.
Maybe the effect is zero-sum, which means that preventing the deaths where Autopilot did wrong by banning Autopilot, means that other people would die, because Autopilot saved them from a situation they would have crashed in.
If Tesla's Autopilot is demonstrably net negative, then yes, it should be banned, and banning it would save lives.
But if it's currently zero-sum, we should absolutely allow it, we should absolutely allow it to be improved, because improvements to the system will probably tip it over into net positive.
It is absolutely concerning that the company is trying to spin the tech as already being net positive, without any clear evidence of that, yes, I agree. If the tech were net negative, and they were trying to cover that up, that would be even worse. But that's not the position we're in.
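The net-effect argument above can be made concrete with a little bookkeeping. This is a minimal sketch with invented numbers, not real crash data: whether a ban saves lives depends entirely on the sign of (deaths caused minus deaths prevented), which is exactly the quantity nobody has published.

```python
# Hypothetical accounting of the ban argument. All numbers are invented.
# Banning a driver-assist system removes the deaths it caused, but also
# restores the deaths it was preventing.

def lives_saved_by_ban(deaths_caused: int, deaths_prevented: int) -> int:
    """Net lives saved if the system were banned."""
    return deaths_caused - deaths_prevented

print(lives_saved_by_ban(60, 40))  # net-negative system: a ban saves 20
print(lives_saved_by_ban(50, 50))  # zero-sum system: a ban saves nothing
print(lives_saved_by_ban(40, 60))  # net-positive system: a ban costs 20
```

The three calls correspond to the three cases in the comment: demonstrably net negative (ban it), zero-sum (allow and improve it), and net positive.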
Instead they are burying it behind legal procedures and doing everything they can to make sure nobody knows what the actual data says.
I cannot deny that I don't know anything with certainty. But it's more than just a guess that Autopilot is not what it's advertised to be.
If you'd actually care to educate yourself, you can start by reading the article I linked in my original comment.
Maybe the rate dropped 40%, or maybe 13%, but:
>Now NHTSA says that’s not exactly right—and there’s no clear evidence for how safe the pseudo-self-driving feature actually is.
Which doesn't seem so terrible. Personally I'm optimistic that as the systems get better they can roll them out to other cars and make a dent in the 1.3m/year global deaths.
The real issue is comparing miles driven to similar miles driven: Autopilot miles are only supposed to be on the highway in good conditions, which is when the fewest accidents occur (well, probably). But a breakdown of accidents into categories such as speed, weather, and traffic does not exist, or at least I'm unaware of one. It's further complicated by demographics: older, more affluent drivers, the kind likely to buy a Tesla, are safer as well. And it's confounded by the fact that Tesla is not a trustworthy company, at least in my opinion; they push OTA updates without warning owners, which can reintroduce old bugs (https://www.techspot.com/news/79331-tesla-autopilot-steering...). A lack of regression testing for a safety-critical system is just terrifying.
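The mileage-mix problem above is essentially Simpson's paradox, and a toy calculation makes it vivid. All figures below are invented for illustration: the system is made *worse* than manual driving on every road type, yet looks much safer in the aggregate, purely because its miles are concentrated on low-risk highways.

```python
# Invented figures: (crashes, millions of miles) per road type and mode.
crashes = {
    "highway": {"autopilot": (11, 10.0), "manual": (2, 2.0)},
    "city":    {"autopilot": (5, 1.0),   "manual": (45, 10.0)},
}

# Per-road-type rates: autopilot is worse in BOTH strata.
for road, d in crashes.items():
    ap = d["autopilot"][0] / d["autopilot"][1]
    mn = d["manual"][0] / d["manual"][1]
    print(f"{road}: autopilot {ap:.1f} vs manual {mn:.1f} crashes/M miles")

# Aggregate rates ignore that the mileage mix differs between the groups,
# so autopilot comes out looking far safer overall.
ap_agg = sum(d["autopilot"][0] for d in crashes.values()) / \
         sum(d["autopilot"][1] for d in crashes.values())
mn_agg = sum(d["manual"][0] for d in crashes.values()) / \
         sum(d["manual"][1] for d in crashes.values())
print(f"aggregate: autopilot {ap_agg:.1f} vs manual {mn_agg:.1f}")
```

With these made-up numbers, autopilot loses 1.1 to 1.0 on highways and 5.0 to 4.5 in cities, yet wins roughly 1.5 to 3.9 overall, which is why a headline "crashes per Autopilot mile" figure tells you almost nothing without a stratified breakdown.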
Now admittedly, you came back to me with a reasonable response and I am throwing you a litany of "yeah, but" rebuttals. Do I believe Tesla Autopilot has the potential, when used properly, to make driving under certain situations safer? Probably. The main problem is the human element, making sure they're actually monitoring the car, informing them correctly of what Autopilot can and cannot do, etcetera. There are also issues with how Tesla not only improves the technology, but validates it.
It's the gross overpromising (honestly, I believe it is probably fraud, but I cannot be sure) that makes me despise Tesla as a company. I can admit they make a product a lot of people like, but I think a lot of people like it because they are misinformed.
Tesla: drive fast and break people.
Not arguing that they're taking steps in that direction but the implication that they're anywhere reasonably close to the goal is unrealistic. It's like saying we can travel to space so we have all we need to get to the nearest star.
Please if you downvote at least leave a comment why you think I am wrong. This is a well known fact in the autopilot community.