Honda's now selling the first production car with level 3 self-driving (thedrive.com)
335 points by nradov 39 days ago | 383 comments



"Just 100 cars will be made available with the technology in Japan, and they will cost the equivalent of $101,900. None of these Legends will make their way to the U.S."

I'm assuming if it won't come to the US it won't come to the UK, Canada or Australia either.

Is 100 units really a "production car"? I don't agree.

Also:

"It does all this with zero input from the driver, who Honda says can "watch television/DVD on the navigation screen or operate the navigation system to search for a destination address."

and yet:

"Please do not overestimate the capabilities of each Honda Sensing Elite function and drive safely while paying constant attention to your surroundings. Please remain in condition where you can respond to the handover request issued by the system, and immediately resume driving upon the handover request."


This mixed messaging of “you don’t need to be looking at the road!” versus “but you should be looking at the road” is going to lead to more deaths. As someone who works for one of the major global car companies, I’m very concerned about how breathlessly we (as an industry) talk about things like “autopilot” instead of “driver assist”, and a million other flashy marketing terms that make it difficult to just understand what the system is and is not capable of handling.


YipYipYip. This is the blue wart in the green sea [0]. The system drives right up until the moment you should have taken over, and you die. The situations that are detectable as faulty are typically not the lethal ones. As far as I remember, Volvo declined to implement Level 3 years ago [1], which is the only grown-up and responsible answer to this problem.

[0] https://www.sae.org/binaries/content/gallery/cm/articles/pre...

[1] https://auto2xtech.com/volvo-to-skip-level-3-autonomous-mode...


Volvo is absolutely making the right call here. There is too much irrational exuberance around SDC.

The obvious answer is to apply AI to the entire system, not just the car navigating the road:

* Is the driver paying attention? How much attention should they be paying versus the conditions?

* What is the environment like, and how should the car react? Refuse to drive? Go slowly? Honk?

We already have self-driving vehicles; they are called mules. Why not make artificial mules instead of artificial Pole Position? The car should be an active participant, more of a copilot than a blind automaton.

Self driving cars should be shown to have passed a repeatable rigorous adversarial gauntlet before being allowed on the road. And the individual cars should have to be re-certified every 6 months by retaking the test.


As a side note - I think humans should require retaking their test on a regular basis. Maybe every 5 years, and just a short 'refresher' test on the way home from work. It would help find those who simply can't drive safely anymore, and may dissuade others who don't think it's worth going through that process. It would also take a lot of stress and anxiety from the problem of families who know mildly-demented grandad really shouldn't be on the road because he can hardly see at night, but find it hard/impossible to actually do something about it.


There's a deeper problem here in the US: a large portion of the workforce needs to drive to work. A lot of these people are not very safe drivers, through some combination of incapability, unwillingness, or ignorance. It's hard to strike a good balance between keeping unsafe drivers off the road and not upending people's livelihoods.


Also the driving test is a joke in the first place so there are a ton of unsafe drivers.


I'm sure this varies state to state. I wouldn't say my driving test was a joke (in the sense of easy), but the hard parts had very little to do with driving safely. The whole thing took place in a parking lot at 5 mph. I failed the test my first two times because it was really difficult to perfectly parallel park my dad's massive truck.

I think it would be okay to make the initial test a little harder, and ideally more focused on real-world driving safety. The real problem is if you let someone have a license, they plan their life around it, and then you yank the rug out from under them because they couldn't back up 100 feet in a straight line ten years later.


In my state, the test never touches the highway or any busy roads, so you don't have to learn how to merge, stay in your lane, drive at speed, etc. Actually, most highways around here explicitly prohibit permit holders. I personally know more than one person whose first experience on the highway came after they had a license, and it was an absolute disaster. They were white-knuckled, driving 20-30 mph under the limit, and every merge was a close call. I think the only thing they actually test is whether you can parallel park or not.


In Indiana we went on the highway in driver's ed, which had a speed limit of 55. Then we crossed into Michigan for a day because their state limit was 65. Indiana has since raised the state limit...


Maybe getting to work should be an incentive for practicing safe driving. You can't get to work if you crash your car either.


Sure, that's not unreasonable. But with the current situation in the US, you would need to drastically expand welfare, public transit, or both if you really want to lean into the "driving is a privilege" idea.


Yeah I don’t think “drive safe or lose your job” makes for a good billboard.


Thanks for linking to good source material.

> When the feature requests, you must drive

Is there some kind of expected delay from the system requesting driver attention to the driver assuming control? Ie something like 3 seconds?


That's the problem: no such system can guarantee it gives you a minimum amount of time. Some situations appear and develop in less than 3 seconds (think small child jumping in front of your car from between parked cars). You might only have 1s to react and if you are not "plugged in" and fully attentive at that instant you missed the window to react.

The more you "disconnect" as a driver, the less likely you are to take over in a split second, and this has been proven again and again with humans. A system that works 90% of the time is probably the worst kind, because it's good enough to give you confidence but bad enough that the confidence is false. The driver gets to buy the advertised "you can disconnect" feature but can then only use it under explicit risk of harm to themselves and others. It actually increases the mental load, as drivers will try to both "disconnect" (type on the phone, watch a movie) and "pay attention to the road". Of course both can't reasonably be done at the same time, so both experiences are sacrificed.

It's like encouraging people to take a taxi after drinking but then expecting them to take over whenever the driver makes a mistake and holding them responsible and accountable for the outcome. You won't enjoy the drink, or the drive.

These half-way solutions are great for marketing and for people who are all about the hype. But not only are they bad compromises, they're also fueled by bad and incomplete data, and by marketing departments wanting to sell more. Almost no self-driving outfit provides data on how many times a human made the slightest correction while self-driving was enabled, or on what percentage of all possible driving conditions those self-driving miles actually covered.

My personal philosophy is that cars are either self driving or they're not. Meaning they can either match an average human driver in all conditions they're expected to encounter over the lifetime, or they're just assisting the driver. If it's self driving except when it's not then it's just like a student driver. You wouldn't confuse them with being perfect drivers just because of their perfect safety record, given the supervisor corrects their every mistake.


>situations appear and develop in less than 3 seconds (think small child jumping in front of your car from between parked cars)

I trust full self driving tech to handle this situation better than a human driver more often than not. The computer doesn't get distracted and has better reaction time.


> full self driving tech

Full self-driving tech is presumably (and by definition) fully able to handle things by itself. We're talking about "partial self-driving tech" (SAE L3), which relies on the driver taking over when it doesn't know what to do. The context switching for a regular human absolutely kills the reaction time in these situations, making this an "I'll mostly drive myself, but when I can't, you're almost guaranteed to cause a crash" type of issue.


The Tiger Woods driving experience as informal industry use case standard to focus on is as good as the informal two golf bags carrying capacity.


> Some situations appear and develop in less than 3 seconds (think small child jumping in front of your car from between parked cars). You might only have 1s to react and if you are not "plugged in" and fully attentive at that instant you missed the window to react.

Yeah, humans are weak at this task, which leads to numerous deaths on roads. Can we develop a tech for this? Something that will watch the road and assist the driver.


This sort of assistance is an excellent use of current technology, and a way to move forward to true autonomy. What needs to stop here is the two-faced, fatally ambiguous marketing, and if manufacturers cannot act responsibly, regulation will be needed (more than the current regulation for responsible driving, which only kicks in after the damage is done.)


That's exactly the sort of task I'd expect an automated vehicle to perform better on. If they can't beat humans at that they're not ready for the public road.

Of course hand-off could be for liability purposes, like

Car: "I'm sorry Dave, I'm about to plow into a school, this is your problem now"

Dave: [looks up briefly from his game of candy crush before dying in a crash]


The problem is, they can't do this today with the necessary classification quality. Elaine was killed because someone at Uber wanted these cars to drive so badly. It is also about culture and prediction: if the other participants behave differently from your prediction, it can get hazardous quickly at highway speeds. I can't find a better link [0], but maybe it's good enough to get the point across. The term is "warning dilemma" and means that you only have perfect information at the time of impact. Before that, the classification of whether there will be a problem or not gets worse the earlier you ask. But asking early is necessary for the human to have a chance to react.

[0] https://www.researchgate.net/publication/326568066_Towards_a...


From what I remember, Elaine was mostly killed because she stepped in front of a moving car in the middle of the night.

I’m not convinced that’s the best example, since the AI might have actually done better if it had full control of the car instead of defaulting to ‘please take over now’ mode.


Right. I could have stated it better. What I meant was that the (safety) driver was not paying (enough) attention to the road and was basically driving Level 3 (the car drives and it will beep whenever I need to take over, so just relax and browse the web). It was impossible to react on such short notice, and the warning was _not_ given 10 seconds in advance. I personally think the system had sensed her as a succession of standing targets and never predicted any velocity vector. Braking for static radar targets doesn't happen; with the clutter you get everywhere, you would not drive at all. So the misclassification and misprediction were not detectable for the system (in its world model, everything looked consistent) and it warned too late.


If I remember correctly, the car's built-in emergency braking system for this scenario was disabled. So any other car of that type would have avoided the accident. However, because the AI apparently knew better, it was disabled and someone died.

AI has the problem that not only does it need to observe the environment and determine what to do, it also has to predict the actions of non-rational actors (humans). We aren't always predictable. AI isn't always predictable either. The woman who died may have assumed that a car driven by a person would have stopped.


>That's exactly the sort of task I'd expect an automated vehicle to perform better on

If the AI could talk, it wouldn't be so much "I'm sorry Dave" as "I cannae change the laws of physics!"

An AI can think about the situation for several million instructions worth, if it is running at >GHz speeds, but it can't prevent a crash given a few milliseconds, because inertia. And for the same reason, it's futile to hand off to a human.


Press release: "While the self-driving feature of our car had been engaged earlier in the drive, the driver was in control at the time of the crash."


Humans already kill 3000 people every day while driving


Politics is emotional, and emotions do not always agree with Utilitarian ethics.

Self driving doesn’t just need to be safer, people also need to feel that it is.


Sorta. Self-driving cars cannot injure or kill someone in any situation where a human wouldn't have.

I feel this too. I would rather have a greater risk of death but be in control of my fate.


Americans drive more than 3 trillion miles per year.


Who would the car optimize for? Saving the kid or saving the driver from getting rear-ended or swerving into an object?


See kid => hit brakes. This is already way better than a driver who doesn't notice the kid and doesn't hit the brakes. The inside of the car is very well designed to keep the occupant alive in case of collision, the car should prioritise not hitting other people. I hope self-driving technology will at some point be regulated regarding what the car should attempt to do in those situations. With more vehicles having collision avoidance technology, the car behind you won't rear-end you because its AI will react fast enough while also refusing to drive dangerously close to the car in front.


I wouldn't have asked the question if it was that simple :) If you have no other cars around you, are traveling less than 25mph and have plenty of stopping distance, sure, 'see kid => hit brakes' should work to save everyone in that circumstance.

If your assumption is that you won't get rear-ended because everyone will be using automated collision avoidance technology... Well, we're at least 15 years away from that.

As someone who bikes and is used to seeing every other person on their phone while driving, fully self-driving vehicles can't come soon enough. But we need to have real discussions about who the self-driving vehicles will save in regular circumstances. I'm skeptical about them optimizing for, or even having the ability to optimize for saving people outside of the vehicle over protecting the occupants.


OK, now a bonus problem: the car has to swerve and either hit a kid or an old lady. Which do you hit, and why?


The systems probably already have some kind of an estimate of "surprise possibility in the next 3 s".

For example, on a narrow road with a building right at the corner, anything might happen as you go around it; you don't know what's behind the corner. So the system will either go slowly enough that there's ample warning time, or require driver attention before even attempting the turn, because the driver needs to be ready to react fast to whatever new information comes up.

In contrast, a wide highway is an easier environment to estimate, visibility is better, etc. So as long as other cars are far away and the velocity differences are not huge, the driver can stay inattentive.
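
To make that concrete, here is a toy back-of-the-envelope check (my own illustrative numbers, not any vendor's actual logic): compare how far ahead the system can see with the road consumed by a handover plus braking.

    # Toy sketch, not any vendor's logic: should the system demand driver
    # attention, given how far ahead it can "see"? The handover time and
    # deceleration figures below are illustrative assumptions.

    def needs_driver_attention(speed_kmh, sight_distance_m,
                               handover_time_s=10.0, decel_mps2=5.0):
        """True if stopping after a handover needs more road than we can see."""
        v = speed_kmh / 3.6                      # m/s
        reaction_distance = v * handover_time_s  # distance covered while the human takes over
        braking_distance = v * v / (2 * decel_mps2)
        return sight_distance_m < reaction_distance + braking_distance

    # Blind corner on a narrow street: 30 km/h, only 20 m of visibility.
    print(needs_driver_attention(30, 20))    # True  -> keep the driver engaged
    # Open highway: 100 km/h with 500 m of clear sight.
    print(needs_driver_attention(100, 500))  # False -> inattention tolerable in this toy model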


The challenge self driving cars face is that being "overall better" is probably not enough. Regressions in certain segments would hardly be acceptable for anyone. You can't tell people "FSD cars lowered overall deaths by 10% but we now kill 30% more toddlers because [technical reason]", or that "highway deaths are going up", or basically any segment that anyone could reasonably care about. Cars have to match and exceed human drivers in every category for them to be acceptable, and even 99% there may not be enough.


Another point is that there is a psychological difference between sitting behind the wheel in a vehicle knowing that around 10 in 100,000 people get themselves killed each year, and then sitting in a machine that kills 10 out of 100,000 people each year.

In the first situation, you feel in control, and at least think that you can do what is necessary to avoid that you end up in the statistics.

Personally I would be very skeptical about trusting my life to a machine that has a non-zero chance of getting me killed, even if the machine, on average, performs better than a human, because most of us think that we are better than the average driver.


Unless, you know, there’s a reflective truck passing by. Or faulty recognition of lines on the road steer you straight into a barrier.


This is the blue wart in the green sea.

The system gives you more power and you'll kill yourself. Similar situations with horse-drawn carriages are not as lethal. As far as I remember, Volvo declined to use horseless carriages years ago, which is the only grown-up and responsible answer to this problem.


I do not see how this gives more power to the driver. Decreasing responsibility is not equivalent.

Edit: sp


>YipYipYip...

Is this a Sesame Street reference?

https://youtu.be/KTc3PsW5ghQ?t=97


YipYipYip =¦-D

Com...pjutaaa.


I blame this largely on Tesla essentially driving the industry like a herd of cattle. This is/was great when it came to accelerating the development of electric cars, and I realise that the software development community really likes the "move fast and break stuff" mantra, but there are reasons why engineering fields with chartered engineers have processes like FMEA: people die if things fail.

A friend of mine who is an engineer at a premium car manufacturer, working on sensors for self-driving, tells me that the internal policy at the manufacturer is that the only competitor that matters is Tesla, so they only compare themselves to Tesla. The same friend also believes that real self-driving is still many years out; even sensors that can deal with general weather conditions do not exist yet.


How did your friend arrive at the conclusion that we don’t have good enough sensors?

IMHO AI to analyse the data is what’s missing - we have cameras that are just as good as human eyes, so we know for a fact that the current sensors are enough to drive in general weather conditions.


Cameras don't give you depth.

You have two choices:

* have a sensor that gives you depth directly (i.e. laser, radar, or a lenticular array)

* try to infer depth from stereo cameras

Lidars are expensive and power hungry. Radar is cheap and mature, but doesn't give you anywhere near as much resolution.

Lenticular only has a tiny range. There are other time-of-flight sensors, but they are either not production ready, expensive, or both.

What Tesla has chosen to do is kinda try to merge radar and monocular object detection to give a higher-frame-rate depth estimation of objects. However it's expensive to develop, unreliable, and terrible in corner cases (i.e., if it sees an unknown object, it can't place its depth). Humans can do this because we've had years of training; Tesla can't.

Tesla's fancy cruise control is dangerous. The autonomous driving stuff it's trying to develop is even more dangerous. Instead of finding the sensors and processing spec needed to drive safely, they are trying to use the sensors and GPU they already have. It's not going to work and is unsafe.
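
For reference, the "depth directly" option for stereo boils down to textbook triangulation; a minimal sketch, with an assumed focal length and baseline (real systems also need calibration, rectification and a matcher that actually finds the disparity):

    # Minimal sketch of stereo triangulation: depth = f * B / disparity.
    # Focal length (pixels) and baseline (metres) below are illustrative.

    def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.3):
        """Depth in metres for a matched feature with the given disparity."""
        if disparity_px <= 0:
            raise ValueError("no match / object effectively at infinity")
        return focal_px * baseline_m / disparity_px

    print(depth_from_disparity(30.0))  # 10.0 m
    print(depth_from_disparity(3.0))   # 100.0 m -- tiny disparity, so depth error blows up

The catch is that far objects produce tiny disparities, so the depth error grows roughly with the square of the distance.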


There are surprisingly sophisticated (and precise) machine learning algorithms for mono-camera depth estimation. It's not a problem anymore. Furthermore, most companies use lenses of various focal lengths. The real problem lies in sensor fusion: which sensor's input do you believe the most, especially if one or more is occluded or dirty?


> There are surprisingly sophisticated (and precise) machine learning algorithms for mono-camera depth estimation

There are, and I wouldn't trust them with life-critical stuff. The more accurate ones are not realtime. The hard part is that they are very noisy: unknown objects wobble about in depth considerably. You need to do lots of filtering to get useful results, which eats into time.

It is very much an unsolved problem. It's one that I'm partly working on now. However, for the device that I'm partly working with, monocular estimation is far too power hungry, noisy and generally shite.

> Furthermore, most companies use various focal length lenses.

No, Tesla _relies_ on having a wide, medium and zoom camera. However, they're blended into the same effective sensor to give a better chance at tracking objects. I bet you $10 that if you block one, the whole system turns to shit.

> The real problem lies in sensor fusion.

Sensor fusion is trivial. It's understanding what the sensors are telling you that's the hard part.

For example, when trying to turn across oncoming traffic, all the sensors will tell you that stuff is coming towards you; what they won't tell you is whether it's safe to cross. That's the hard part. Given that Tesla can't accurately place a car on the road yet, it can't safely cross traffic.

Sensor understanding is the problem that is really unsolved. Depth estimation with stereo cameras is 85% of the way there; monocular estimation is nowhere near that level, simply because robust object recognition isn't going to be a thing for at least another 6 years.
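
To illustrate the split being drawn here, this is roughly what the "trivial" fusion arithmetic looks like (a sketch with made-up sensor names and noise figures; the hard part above, deciding which sensor to trust or what the fused number means for a driving decision, is not in this function):

    # Inverse-variance blend of two independent range estimates.

    def fuse_ranges(r_radar, var_radar, r_camera, var_camera):
        """Inverse-variance weighted fusion of two independent range estimates."""
        w_radar, w_camera = 1.0 / var_radar, 1.0 / var_camera
        fused = (w_radar * r_radar + w_camera * r_camera) / (w_radar + w_camera)
        return fused, 1.0 / (w_radar + w_camera)   # fused estimate and its variance

    # Radar says 42.0 m (tight), monocular camera says 48.0 m (noisy):
    print(fuse_ranges(42.0, 0.5**2, 48.0, 5.0**2))  # ~42.1 m, dominated by the radar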


I have yet to see any ML algorithm that can reach 99.999% accuracy or better (we need better).


Why do we even need such accuracy? Humans are extremely bad at estimating distance and are nevertheless quite OK drivers. ML needs to make sure that the car is not bumping into things or people, which is a very different task.


Because the failure modes are different. Humans with basically 100% accuracy both correctly recognize depth and put it into the right bucket of near/far but are woefully bad at estimating the numeric distance. So you’re right that you don’t need to do better than extremely coarse estimates like close, a few car lengths, in the distance, etc..

If computer vision fails at all to recognize depth (semi truck with clouds painted on it isn’t recognized) or misclassifies whether something is near or far that’s much more dangerous.


Yes, so "can/cannot bump into" is a classification task which needs quite some accuracy.

A human on the other hand can recognize another human with near perfect accuracy.


> Now humans can do this, because we've had years of training...

Humans can mostly do this. Like 90% of UFO sightings are people seeing the moon and getting confused and thinking it is a nearby object following them.


A camera is a light sensor, a camera is not an object sensor. Cameras as light sensors definitely exist.

A camera feeding into phenomenally sophisticated AI trained on all sorts of weather conditions with windshield wiper smear and glare and everything else, which can turn those horrible images into a picture of what actually lies ahead, is an object sensor. Those aren't good enough yet.


Goodness gracious doesn't anyone in the auto industry ever even look at how the word "autopilot" has been used in the aviation and marine industries? It has _never_ meant that the vehicle operator gets to abandon their responsibilities.

However I agree that the mixed messaging is sad and irresponsible, and this article's author could have been more circumspect.


How the term "autopilot" is used by professionals in the aviation and marine industries isn't the main point here. What matters is what non-professionals think when they hear the term "autopilot". If you market something as "autopilot" don't be surprised that general consumers don't have the same nuanced understanding of the term as professional pilots.


You're right, but pilots are highly trained and must maintain their training/skills. On the other hand, it's possible to get a driver's license without any formal training (in many Western countries at least) and then buy a "self driving" car without any training on how to use it.

Because of that divide, I think there should at least be an online training curriculum and test before your car will unlock autopilot mode.


Right, and this is what Tesla Autopilot is. The no-attention dream Tesla wants to someday roll out is under FSD (Full Self-Driving).


> talk about things like “autopilot” instead of “driver assist”

I actually find driver assist technologies almost as damaging. For example, lane assist (car centers itself in lane) can cause the car to veer when lanes are mis-painted. After having it turned on for a few minutes without issues, you begin to relax, then the car decides to take an off-ramp or swerve towards the shoulder.

As a comparative example, let's look at automatic gearboxes vs manual. As far as I know, there are no automatic gearboxes that will occasionally require you to hit the clutch in order to successfully change gears. They're either completely automatic, completely manual, or automatic with manual override. Having something halfway between automatic and manual is just ASKING for problems; that being said, an automatic gearbox that is expected to work only 99% of the time also really doesn't exist, which is why self-driving is hard.

I'd much rather have better alerting and imminent-danger capabilities: more ubiquitous and accurate blind-spot sensors, lane-change cameras, brake alarms (or even auto-braking, if it's highly reliable). These technologies will allow the overall self-driving landscape to improve (because there's so much overlap; self-driving NEEDS all of these sensing techs, and needs them to be REALLY good in order to work) and mature over time, while road infrastructure, mapping, legislation, and public opinion catch up.


Humans mess up mispainted lane lines also. I've seen many times where there were 2-3 conflicting lane lines painted and people just kinda drove wherever. I think both humans and robots could benefit from federal laws about road construction and lane markings.


> Humans mess up mispainted lane lines also. I've seen many times where there were 2-3 conflicting lane lines painted and people just kinda drove wherever.

That's very true, I've seen it as well. I think the big difference is that humans have the context and experience to deal with those situations on the fly, making choices that self-driving systems just haven't matured enough to make; this is why I advocate for better alerting systems rather than anything that changes the trajectory of the car on its own.


I’m sure they will continue to sell alerting systems for many years. I hope I don’t have to use them. I quite enjoy autopilot on long trips.


Yes, it should be quite simple:

Self-driving: car comes with no steering wheel.

Lane/drive assist: car has steering wheel and you have to steer the whole time—with occasional nudges from the AI.


>Is going to lead to more deaths.

It'll lead to the same number of deaths and slightly more complicated lawsuits.

At a statistical level Nobody(TM) is heeding fine print warnings.


The Honda is level 3 only during traffic jams -- it's limited to 50km/h and only on the expressway. If it's completely incompetent the worst that will happen is some minor body damage. It's not going to cause any deaths.


> how breathlessly we (as an industry) talk about things like “autopilot” instead of “driver assist”

Is this true? I was just purchasing a car and did a lot of research. I was looking only at the European brands because I like the overall offering. There are different names, but none too flashy or promising too much.

What they call their driver-assistance systems:

- BMW: Driving Assist (Regular/Plus/Professional)

- Volvo: Pilot assist

- Mercedes-Benz: they just describe the functions, i.e. active cruise control, steering assist, etc.

- VW group (VW, Skoda, Seat): Travel assist


It's because they know that self-driving cars are the way to guarantee a revenue stream for the life of the vehicle.

Screw actual utility, this is about revenue, plain and simple.


From my observations, it seems the most boastful (Uber, Tesla) are behaving the most recklessly. Waymo's driver is probably 10x better (however you want to measure that), and they seem the most measured and careful in their communication.


More important than short-run deaths are longer-term road deaths. Their messaging is flawed, but hundreds of thousands die in car accidents yearly... The rollout of these cars needs to be smooth and quick to prevent longer-term deaths. It's incumbent upon early adopters to be responsible for the sake of society.


Regarding this, I strongly believe the correct approach is L3 capability but only expressed through L2 features.

We're not at a point where it's safe for drivers to move their attention from driving, but the technology is mature enough that it can and should intervene wherever possible to avoid collisions and other incidents.

The expectation that drivers can divert their attention from driving to perform other tasks, on the assumption that they can resume control at short notice, is extremely misguided. This should not prevent the same sensor and software tech that could enable L3 autonomy from enabling more effective accident-avoidance technology that intervenes in situations where the driver may drive in a careless or dangerous way.


Meh. It only needs to be better than the average driver. Several companies have already achieved that


Yes, in sunny California/Arizona weather on wide US roads. My experience with even driver assist on small twisty roads in the snow tells me things are nowhere close. The car was accelerating in situations (e.g. coming over the crest of a small incline with a corner at the end) where no driver would ever accelerate and which would have resulted in some bad situation without intervention. We have seen Teslas getting confused by forks in the road on the highway.


I don't think anybody's sanctioning the use of driver-assist features on small twisty roads in the snow yet, are they? If so, please share what you were driving - that would be interesting even if it didn't work well at the time. Most of these things, AFAIK, are in the LKAS, ACC, LDW range and meant for highways, and there are mutterings about that in the manual. I sometimes use ACC and/or LKAS on country roads but only with very low expectations.

Teslas at least get better over time. If it got confused over a fork in the road, there's a decent chance that after a near-future firmware update that same car will no longer be confused. A few folks have acknowledged that they're now smart enough to slow down for curves (even sometimes a bit generously), and they are among the few that work pretty well off-highway (most of the time).


It may not be marketed directly, but calling something "full self driving" greatly implies that self-driving is fully supported in all scenarios. I have not seen any Tesla marketing that says "full self driving, on big roads with clear markings in dry/slightly damp weather".

Words matter


The average driver got to that level after years of practice. With this system in place, the new average driver will have the skills of someone who just barely got their driver's license years ago and has never used it since. And that person will be expected to pilot the vehicle only in conditions so dire that the computer cannot handle them.

Perhaps the use of this system should be licensed separately from a standard drivers license, akin to an IFR rating. And to keep the rating, manual driving must be performed periodically and logged as well.


If we have evidence that this is an issue, I'm on board with that.

If it turns out this is not an issue and we unnecessarily made self driving cars less common, that would have the same effect as shooting a few hundred random people every day (3,700 people die every day from crashes)

We'll find out in a couple of years


> 3,700 people die every day from crashes

Can you provide your source please? Closest I could find is https://www.cdc.gov/injury/features/global-road-safety/index... which states that “[e]very day, almost 3,700 people are killed globally in crashes involving cars, buses, motorcycles, bicycles, trucks, or pedestrians. More than half of those killed are pedestrians, motorcyclists, or cyclists.” I didn’t find a number for just motor vehicle crashes, and since the rates are three times higher in developing countries, it seems like the cost of ownership is going to be a huge factor.


I think the bar needs to be much higher than "average".

Although beating the average is arguably "good enough" to deploy in a collective sense, we live in a society of individual actors, and if I'm an above average driver, it needs to be better than me, not better than average, for me to want to use it.

Assuming a society in which each person individually and rationally chooses whether or not to use it, if you want 99% of people to use it, your software needs to be better than the 99th percentile driver.


This assumes a lot about the correctness of people's perceptions of their own driving skills. My experience is 99% of drivers think they're above average, and that obviously can't be true.


In my experience there are two kinds of people who'll say they're good drivers, and they're both "good" but with wildly different definitions of "good".

You've got Jose, the 35-year-old MRI service technician who logs 100k/yr for work and has had so much time to get good that he can tell exactly what traffic is doing, is highly capable of predicting what is about to happen, and preemptively makes moves based on that. He knows his insurance company would crucify him if they saw how he flings an overloaded Transit Connect through an on-ramp or parallel parks by braille, so if you press the issue he'll tell you he's good at getting where he's going but that the bureaucrats who write the state driver's manual wouldn't like him.

And then you've got Karen, the elementary school teacher nearing retirement who logs 10k/yr, 5k of which are spent looking at her phone. She has spent a cumulative one hour of her life above the speed limit despite spending much more than that on highways where the traffic flow is well above the speed limit. She follows every rule in the book to the letter, gets honked at daily, and once a week she has a story about "some asshole" she got into a conflict with. She doesn't know how far down the gas pedal on her 4Runner goes, but oh boy does that brake pedal get a workout when "oh crap, almost missed my turn". She swears up and down that she's a good driver because, of the seventeen fender benders she's been in, she was only at fault in the three that were caught on camera or where witnesses stopped.

Which definition of "good" do you want near you when there's 3" of snow on the road?


> Which definition of "good" do you want near you when there's 3" of snow on the road?

When there is snow on the road you should not be near either of them.

You should be far enough behind that you can stop when the car in front gets into difficulties. The rule of thumb on a dry road is that you should be three seconds behind. At 100 km/h (about 60 mph) that's 83 m (about 270 ft), say 17 car lengths (for my Tesla S that is). If there is snow on the road perhaps it would be wise to allow more distance.

See, for instance, https://www.driveincontrol.org/drivingtips/the-three-second-...
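
For anyone who wants to check the numbers, a small sketch of the same arithmetic (the ~5 m car length is an assumption, roughly a Model S):

    # Gap implied by a time-based following rule at various speeds.

    def following_gap_m(speed_kmh, gap_seconds=3.0):
        return speed_kmh / 3.6 * gap_seconds

    for kmh in (50, 100, 130):
        gap = following_gap_m(kmh)
        print(f"{kmh} km/h -> {gap:.0f} m (~{gap / 5:.0f} car lengths)")
    # 100 km/h -> 83 m (~17 car lengths), matching the figures above.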


> My experience is 99% of drivers think they're above average, and that obviously can't be true.

It actually can be true, depending on the distribution of skills you assume. Say you have 100 drivers, 99 with the same skill level, and one driver which is worse. 99% will be better than the average. I'll see myself out ;)


If you really think you're better than a self driving car, then don't buy one. Or do you want the gov't to choose for you?

You should be happy that other people can buy them because they'll no longer crash into you ;)


There is also the case of "liability". An average driver can drive the car into a tree with no problems at all. No one except the driver is responsible there.

But if a company sells a product with the name "Autonomous Driver", then that company is liable if the product malfunctions (i.e. drives a car into a tree).


Let the courts decide who's liable. I'm sure that in practice this would be decided on a case by case basis with logs from the car's computer.


> would be decided on a case by case basis with logs from the car's computer.

If I was a car manufacturer and I knew that my log files would undoubtedly be used as evidence against me in a criminal negligence trial, I would think very hard about what I did and didn't put into those logs.


But if the car is in full auto mode the driver is not expected to pay attention or look at the road. Based on the situation the car thinks it can drive autonomously.

At least partial liability will have to fall on the OEM; that's why nobody is making any binding promises (including Tesla).


Actually, it only needs to be better than the average mammal.

And I am not sure any level of AI has reached that level yet.


It doesn't lead to more deaths. See Tesla for proof. This is a super bad take.


The issue isn’t whether having the technology saves more lives than not having the technology. The issue is whether, given the technology, the marketing creates a perception that results in unsafe usage of the technology, costing lives.


It is easy, almost trivial, to drive long hours on freeways where almost nothing happens. We don't have any sort of statistics on dangerous close encounters, or on whether these remain close encounters with Teslas.


We have all sorts of statistics; every carmaker keeps track of disengagements and events. What are you talking about?


Accidents per mile is not too informative. For example, do Teslas fare better than an average human driver when, say, someone runs a red light in front of the car? Or when there's a suddenly overturned car on the freeway? Do we have enough data to answer these questions?


I don’t understand this kind of question. Obviously you need some statistical measure to compare safety, and incidents per mile (or vehicle-years, or hours driven) is an accurate way to estimate accident rates. All we care about is that on average it crashes less than human drivers. If there are specific conditions where it fares worse, those would be pretty obvious to address, and if safety improves for the average case, it's a given that the crashes which do still happen will tend to be weirder than usual.

That aside, the answer to the first one is yes. Because of radar and the cameras, a Tesla can see traffic two or three cars ahead and will initiate braking way earlier than a human would in the case of someone else running a red light - there are a few videos of this exact situation available on YouTube. As for the second, probably not enough data, but in absolute numbers humans are responsible for nearly all of those cases so far; they happen weekly.

By the way, the overturned truck was not a fatal accident - the car triggered emergency braking and the driver came out without a scratch. The fatal one was a couple years ago when a Model S ran under a white truck making an unsafe u-turn.
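
As a concrete version of "incidents per mile", here's a sketch of how such a comparison is typically computed. All counts below are invented for illustration (not real Tesla or human data), and even a clean rate ratio says nothing about whether the miles were driven in comparable conditions, which is the objection raised elsewhere in this thread.

    import math

    def rate_ratio_ci(crashes_a, miles_a, crashes_b, miles_b, z=1.96):
        """Crash-rate ratio A/B with an approximate 95% CI (normal approximation on the log scale)."""
        ratio = (crashes_a / miles_a) / (crashes_b / miles_b)
        se_log = math.sqrt(1.0 / crashes_a + 1.0 / crashes_b)
        return ratio, ratio * math.exp(-z * se_log), ratio * math.exp(z * se_log)

    # Hypothetical: 40 crashes in 200M assisted miles vs 900 in 2,000M manual miles.
    print(rate_ratio_ci(40, 200e6, 900, 2000e6))  # ~0.44, CI roughly (0.32, 0.61)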


Incidents per mile is fine with a big enough data set -- but I disagree that it is enough in the case of a new technology that can potentially kill. Say Teslas are absolutely safe on the highway, much more so than human drivers, but have a tendency to hit pedestrians much more often than humans. It is possible they would still have a better incidents-per-mile record, even though no sane person would legalize them in that case.

> If there are specific conditions where it fares worse those would be pretty obvious to address

Like, if (inSpecificSituation) { payMoreAttention(); } ? This is the actually hard part of the problem, not braking when something is close and staying in the lane. They can't even create adequate test environments for the many, many special cases that can trivially happen in a city.

Also, how does it see traffic two or three cars ahead? It sounds like a marketing gimmick but correct me if I’m wrong.


> Incidents per miles is fine with a big enough data set

Only if you're comparing comparable conditions. If you compare autopilot in the sun to humans in the snow, infinite miles won't make it any more valid of an autopilot/human comparison.


Tesla's miles driven are self-selecting, because AP won't engage when conditions are too poor for it.

Humans don't get that luxury.

You can't take (deaths/miles driven on good-for-AP roads in acceptable conditions) for Tesla and (deaths/miles driven on all roads in all conditions) for humans and then act as if they are comparable or as if they prove "improved safety", because they aren't.


> All we care about is that on average it crashes less than human drivers.

All we care about is that on average it crashes less than human drivers in comparable conditions. If you compare a 40-year-old driving a Tesla on Autopilot on a sunny freeway to an 18-year-old driving a 1995 beater without modern safety features, no ADAS, etc., in a snowy busy intersection, Autopilot doesn't have to be that great to look better. If you compare Tesla on Autopilot to "average human miles", you're rolling some very invalid comparisons into the averages. You can only usefully compare averages if they're averaged over comparable contexts.

A less extreme comparison, if my memory serves, is Tesla rolling out ADAS alongside Autopilot and then doing a before/after comparison. They compared autopilot-available + ADAS-engaged to autopilot-unavailable + ADAS-unavailable, or something along those lines, if I remember correctly.


The stats I’ve seen quoted about Tesla are super misleading and not at all apples to apples comparisons.


Tesla's Autopilot miles have the advantage of being driven on inherently Autopilot-capable roads, with the system turning off when things get dicey for it.

Human drivers do not have those advantages.

So Tesla is comparing drivers in situations where AP doesn't even try to drive the vehicle because it couldn't.

This is amazingly misleading. And Tesla knows it.


I'm not sure why you are getting downvoted; this is true.



It should be noted that that site is run by a group of pseudonymous people who do not hide the fact that they have been shorting Tesla for years. They count every death involving a Tesla. If someone jumps in front of a manually piloted Tesla and is run over before the driver can react, it is added to the spreadsheet. They’re holding the company to an impossible standard. Moreover, they don't actually care about reducing deaths involving Tesla vehicles. They want Tesla to fail so that they can profit. It’s a purely selfish act.

The most prominent member of the TSLAQ people and (IIRC) the first to publicize a spreadsheet of Tesla deaths is Elon Bachman (a pseudonym). He has also been a charlatan on COVID. A year ago he said, “After 4 months of white hot Coronavirus panic, the disaster is visible everywhere except in the data. Total deaths, rounded to the nearest percent, remain 0% of annual flu deaths, and serious cases are falling”[1]

I tried to bet him up to $1,000 that US COVID deaths would exceed 25,000 by the end of 2020.[2] He ignored me and continued to deny the harm caused by the disease.

One simply cannot trust anything he’s involved in.

1. https://twitter.com/ElonBachman/status/1237030292340314112

2. https://twitter.com/ggreer/status/1237164317142736896


> They want Tesla to fail so that they can profit.

A more realistic take is: They think Tesla is massively overvalued and should be held accountable for fraudulent corporate behaviour.

> He has also been a charlatan on COVID.

Kind of ironic given that Elon himself was (is?) one of the first vocal Covid deniers.

Check out TC's Chartcast [1] in case you are interested in TSLAQ, beware it's a deep rabbit hole.

I've previously tried to sum up Tesla's red flags here [2].

[1] https://www.buzzsprout.com/758369

[2] https://news.ycombinator.com/item?id=26065075


Musk has never been a covid denier. He understood that covid wasn't as dangerous as many claimed, especially for younger and healthier people. The only thing close to covid denialism that I can find is Musk's decision to resume production at Tesla's Fremont plant in defiance of Alameda county lockdown orders. This was in May of 2020 after months of failed negotiations with government officials. It's important to note that unlike Bachman, Musk had skin in the game. He asked that if the government sent cops to shut down the factory, only he be arrested.[1] The factory resumed production and nobody was arrested. A week or so later, Alameda county changed its rules to allow the factory to operate legally.

1. https://twitter.com/elonmusk/status/1259945593805221891 "Tesla is restarting production today against Alameda County rules. I will be on the line with everyone else. If anyone is arrested, I ask that it only be me."


Here's a selection of tweets for you, by a celebrity many still trust and consider a "scientific genius":

@elonmusk Mar 6, 2020

The coronavirus panic is dumb

https://twitter.com/elonmusk/status/1236029449042198528

@elonmusk Mar 19, 2020

Based on current trends, probably close to zero new cases in US too by end of April

https://twitter.com/elonmusk/status/1240754657263144960

@elonmusk Mar 19, 2020

Kids are essentially immune, but elderly with existing conditions are vulnerable. Family gatherings with close contact between kids & grandparents probably most risky.

https://twitter.com/elonmusk/status/1240758710646878208

@elonmusk Jun 29, 2020

There are a ridiculous number of false positive C19 tests, in some cases ~50%. False positives scale linearly with # of tests. This is a big part of why C19 positive tests are going up while hospitalizations & mortality are declining. Anyone who tests positive should retest.

https://twitter.com/elonmusk/status/1277507826529660928


The only screw-up I see is his prediction that the US would conquer the spread by the end of April. All the other info is pretty accurate, especially considering how little we knew back in March. Compared to most authorities and experts, he did a pretty good job. Remember two weeks to flatten the curve? Remember when the official line was that masks don't work?[1][2][3] Remember when the WHO said that travel bans are a bad idea?[4]

Musk was calling the panic dumb at the same time as the WHO was saying, "Our greatest enemy right now is not the coronavirus itself. It’s fear, rumours and stigma."[5] If we judge him by the same standards as every major institution (media, governments, NGOs), Musk is far less deserving of criticism. And unlike those institutions, Musk never claimed to be an expert on the topic.

1. https://twitter.com/WHOWPRO/status/1243171683067777024

2. https://twitter.com/UNGeneva/status/1244661916535930886

3. https://web.archive.org/web/20200312104152if_/https://twitte...

4. https://twitter.com/WHO/status/1224734993966096387

5. https://twitter.com/WHO/status/1233418231261646849


> They think Tesla is massively overvalued

> fraudulent corporate behaviour.

These two things have nothing to do with each other.

> and should be held accountable for

Holding a company accountable should go through the courts, not personal profit.

I'd be happy if short selling weren't an option. If you believe something is overvalued, stay out. You're not wanted.

If you believe a company did something wrong, file a lawsuit.


You haven't made an argument against short selling.


Pessimism is counterproductive to the long-term development of humanity and rapid deployment of electric vehicles and clean energy. Armchair pessimists don't deserve a place in the stock market. If you think Tesla is doing something wrong, either (a) go work there and fix it or (b) start your own Tesla competitor. Short selling isn't constructive.


What's wrong with short selling? The only reason why people like Elon Musk don't like short selling is because they know that their companies are massively overvalued.


The problem is profiting from spreading misinformation. Tesla short-sellers run large operations that fabricate and spread lies. It's usually easier to spread misinformation than to debunk it.


And Tesla will happily release misinformation too.

After one of the last fatalities, Tesla was more than happy to push out a press release based on telemetry, saying "Autopilot wasn't at fault, the driver was inattentive - the vehicle even told him to put his hands on the steering wheel!".

They somehow neglected to mention that the steering wheel alert was triggered, ONCE, and FOURTEEN MINUTES before the crash.

Misinformation is not a good thing. But let's not pretend that Tesla is some downtrodden underdog just trying to make our lives better.

Also, if you have an accident in your Tesla, you'll have a lot of fun trying to get any telemetry information from them, even if Tesla isn't a named party and you're just dealing with the other involved person. You'll need multiple subpoenas and expect them to resist releasing any data as "proprietary".

But should your telemetry from an accident be able to be spun (correctly or otherwise) into a Get out of Jail Free card for Tesla, expect it to be released to the media without your consent or authorization (I'm sure it's buried in Section 48, Subsection 24, Paragraph 14c iii that you consent, but still).


And the companies don't spread misinformation? Not saying it's right in either case, but companies, Tesla included, aren't shining beacons of morality and truthfulness.


It's not in their interest to lie in the long term. Companies that lie are scams, and they're spotted pretty quickly (e.g. Theranos, Nikola). Elon has made some missteps, such as the 'funding secured @420' tweet, which he was punished for. Public companies have to be pretty careful in their communications. Making promises with too-optimistic schedules is not lying; it's just an error in forecasting, and is common in technology.


That couldn't be further from the truth. Theranos survived for 15 years. Wirecard 22 years. Enron 10+ years. Fraud gives you a massive competitive advantage.

Not too long ago I thought of Musk as a misunderstood genius; now I'm pretty certain he knows exactly what he is doing (for the most part). If you look at all the oddities surrounding Tesla, there are clear patterns emerging.

Plainsite has a good summary [1].

[1] https://www.plainsite.org/realitycheck/tsla.pdf


That report is a bunch of horseshit. Lots of words without any substance. He even cherry-picked some data to "prove" that Tesla's sales are declining. Everyone can see the actual progress that Tesla and SpaceX have made. Their cars are winning awards and they're innovating and building new factories as fast as they can. Who cares if they miss a couple of estimates? SpaceX can deliver payload to orbit at a much lower cost than competitors.

True, though, that those companies survived for too long.


I don't know what world you've experienced, but lying in business is incredibly common. I've seen it personally, and there are plenty of cases of tobacco companies, Dow, alcohol companies, and many, many more knowingly lying.

Of course they do, it's in their best interest to lie. To think that companies will only tell the truth "because it's in their best interest" is childish, and not at all historically accurate.


Have you noticed how sparse the "autopilot claimed" column is? And how even sparser the "verified autopilot" one is? Seems like the authors are trying very hard to inflate the total numbers.


Not to mention the spreadsheet admits Autopilot was released in 2015 but they are still including deaths from 2013.

This can't be more blatant.


Autopilot might not have been around then, but up until very recently Tesla refused to participate in a lot of auto safety testing, saying that they believed the testing regime was "flawed".

I have no issue with keeping track of auto deaths from a company who is claiming that their vehicles are safe while preventing anything but the legal minimum necessary tests from occurring.


That's an interesting table. I would say that it reinforces the case that Autopilot (actually labelled Autosteer in the UI of my 2015 Model S) is actually not dangerous.

I presume that the aim of the site is the opposite though.


Ok, reduce that to where autopilot is engaged.


The issue is, when there is a crash or a death, could that situation have been avoided if the driver had been looking ahead with two hands on the wheel, like you would in any other car without the marketing hype?

I've seen more than enough crashes and deaths in Teslas that were perfectly avoidable provided the driver wasn't watching their iPad.


Really? I've been driving for over forty years (about 15k km per year) and I have never seen a death involving any car let alone the relatively rare Tesla.

I've seen emergency vehicles attending a crash perhaps once or twice a year but never seen the crash occur. In fact the only ones I have personal first-hand experience of are the time I rear-ended a car, the two times that I was rear-ended, and the time I slid off the road on an icy corner. All three rear-ending events were relatively low-speed incidents (under 30 km/h) and would quite likely have been mitigated or even completely avoided if the cars in question had had automatic emergency braking. The icy corner was my own fault for not thinking ahead.

So, to me, your statement, without some more context, sounds like an exaggeration.


I too have been driving for some time now, but we have this thing called the internet, where you can see and share information.

Lemme help you out friend - https://www.youtube.com/results?search_query=tesla+autopilot...


So you didn't mean: "I've seen more then enough crashes and deaths in Telsas"

you meant: "I have heard about .." or "I have seen accounts of .."

Those are not quite the same thing.


It doesn't say you should be looking at the road, simply that you should be aware of your surroundings and remain ready to retake control should the vehicle notify you.

That's different.


How can you stay aware of your surroundings going 60+ km/h without paying attention to the road? Even more so if you are required to take control in an instant; you have to be completely enveloped by your spatial awareness.

I have raced go-karts, I've raced GT cars, there is absolutely no way to keep aware of your surroundings if your focus isn't constantly on the road and checking mirrors, full stop.


The language here is critical. The question is where your focus lies, and at level 3 autonomous driving, there are situations where your focus can be elsewhere besides the specific road conditions.

You are assuming “aware” means “ready to take over in an instant”. That is not what it means in this context.


> talk about things like “autopilot” instead of “driver assist”

As stated countless times before, autopilot in planes isn't even geared towards handling most flight scenarios or challenging conditions. Tesla is technically correct to call it this, even though the naming is confusing to consumers who think "plane flies itself" and assume it means their car can drive itself in all conditions and avoid at-fault incidents - which most likely will indeed lead to more deaths.


> As stated countless times in response, autopilot in planes isn't even geared towards handling most flight scenarios

As stated countless times in response, it doesn't matter what autopilot actually does when the concern is the public perception of the marketing.

If "autopilot" sounds to the general public like "it drives itself" (which is the simple etymology of the word), then it doesn't matter one flying fig what autopilot on planes actually does.


Let's not forget that Tesla is well aware the car doesn't "drive itself" and has classified AP/FSD for legal/liability reasons as a level 2/3 system.

From a marketing perspective however, robotaxis are just around the corner.


Why isn’t there the same level of hand wringing over Ford’s “co-pilot”? Surely a copilot is even more capable than an autopilot.


This assumes that no one is hand-wringing over Ford's marketing. I haven't seen as much of Ford's marketing as I have of Tesla's, so I personally haven't written as many words about it.

But I have the same concerns about all marketing that positions cars as being capable enough that people don't have to pay attention. Until we hit true Level 4 or Level 5 self driving cars, I'm extremely concerned about the public's perception that they don't have to operate a motor vehicle. *Especially* if the vehicle is capable of enough automated driving that the driver doesn't need to pay attention during most of the operation of the vehicle.

To your specific point though, I think in general the public would view something marketed as "autopilot" as more capable than something marketed as "co-pilot", despite the significant capability advantage that a "co-pilot" actually offers over an "autopilot" inside the cockpit of an airplane.


The difference between Ford and Tesla is that Ford is "one of the good ones".

Not interested in leading the pack; happy enough to follow, crush smaller competition by their bloat, declare bankruptcy and take in hundreds of billions in handouts because of their irresponsibility and ineptitude.


Maybe that's the difference for other people, but it's not the difference for me.

Legacy branding doesn't mean anything when it comes to implementing and marketing self driving vehicles.

I care about the quality and level of the implementation, and the marketing positioning.


Because unlike auto-pilot the implication of co-pilot is that you're still primarily in charge of flying/driving/piloting the vehicle.


Umm no. Here's how it works

me: "you have the controls"

co-pilot: "I have the controls"

me: "you have the controls"

I am no longer flying the aircraft or even paying much attention to it. I could even get up and walk away from the controls. That's why co-pilots exist


The response I had earlier in the thread is absolutely applicable to this.

It doesn't matter. At. All. How the operations work inside the cockpit of an airplane. What actually happens in an airplane is 0% important to the conversation.

What actually matters is the public at large's perception of the term "autopilot" and their perception of the term "co-pilot". If the public perceives "co-pilot" as being less functional than "autopilot" then it does not matter at all that in aviation a co-pilot is actually more capable than an autopilot.


It’s not, the co-pilot might fly the plane for most of the flight.


That would imply that people know aircraft-slang. Which they clearly don't.


Common perception of a co-pilot is that a co-pilot _assists_ the pilot.

It's another perception/reality thing. I realize that the co-pilot shares the load, and is capable of being the pilot-in-charge, or having control of the aircraft. But one implies self piloting (Tesla), the other implies an intelligent assistant pilot.


I think it's because of the general nature of the prefixes of the words.

Co-pilot says that the two of you are doing it together. Co-operatively.

Auto = automatic = you no longer have to do it.

Think of an automatic vs. stick shift car. In an automatic car you no longer have to shift.


Just like tobacco companies are forced to print "this leads to cancer" on their products, self driving cars should have labels like "car crashing into wall" and "car killing other human".


Please link to reports of a Tesla running over someone in AP.


Autopilot saves lives even in current state.


Based on what?

What could save lives is intelligent automatic braking - because that is something we are currently capable of. Humans as a species are terrible at paying attention to boring tasks and reacting quickly - so these gimmicky autopilot features are dangerous if anything.


Most standard cars already come with emergency automatic breaking.


Is that so? Now if they also could brake...


That's exactly what self driving cars do, silly.


They do that as well, on top of some other less safe features. Less is sometimes more. Braking when a pedestrian steps out in front of us in less than human reaction time is great - it doesn't have to be combined with full self driving, which is simply still a long way off.


It does that as well. Have you ever driven one?


The average consumer is not familiar enough with the limits of automated avionics to understand the limits of commercial jets' autopilot.

If the average consumer was also an airline pilot then it would make sense to use an industry term with nuanced meaning. Instead, the average consumer assumes a straightforward meaning of the term, especially with pop culture throwing out phrases like "planes practically fly themselves these days".

Claiming "autopilot is technically correct" would only approach being true if Tesla & others went into great detail to educate buyers about the limits of the avionic autopilot features referenced by the term, and difficulties of flying a plane in non-optimal conditions, so they actually understood the "technical" meaning and could apply it correctly to the car they're buying.


They tell you the limitations every time you engage it. Stop this FUD. It is better than humans being in control.


What are the rules to this "technically correct"?

It's not even flying the car!


You need the latest Tesla firmware upgrade to activate flight. Or an unfinished bridge combined with lane tracking and an inattentive driver.


Consumers are not tech-savvy like the Tesla or HN crowd. Every word used in marketing should be chosen carefully, or we will end up with consequences.


I can't set a target altitude or heading on this thing; it can't fly an ILS or follow VORs. It doesn't make sense to call it an autopilot.


The passengers have no idea there are no pilots around when things don’t go as expected


Remember how dangerous the roads were when the car companies producing cars without gear shifts started calling their cars "automatic" as opposed to manual?


This is ridiculous. A transmission failure and a self-driving failure have very different implications. Automatic transmissions don't have pedestrians walking through them, etc.


You say this is ridiculous, but your premise that these new "self-driving" cars are driving through pedestrians is even more ridiculous and not backed by any data from the real-world trials currently being undertaken.


I think the point was that automatic transmissions are far simpler systems that are harder to screw up. And if you do screw them up it’s bad but not driving into pedestrians bad.

No, there’s not many self driving cars that have driven into pedestrians but Uber ATG’s car notably did, and there is certainly a lot of room for failure.


Meanwhile, humans are driving into pedestrians every minute of every day, but yes, get upset about what Tesla called their system...


The bit about watching DVDs is for the traffic jam mode, where it is crawling forward locked to the car in front of you. Pretty much the same as automatic parking features. Thankfully they won't recommend having a snooze though, since you will need to take control at the end of the jam or at an intersection. The warning about overestimating capabilities is on the rest of the features, where failing to respond to a request to take control will assume you are asleep and do an emergency stop. It seems to be a bunch of apps that you have to select and turn on/off manually, rather than 'autonomous driving'. ie. it assists you driving, rather than you assisting the car to drive itself.


First question is, where do you buy a DVD? Second question, where do you put it in?


You put it in your Honda Legend of course!


> I'm assuming if it won't come to the US it won't come to the UK, Canada or Australia either.

Since Japan drives on the left side of the road[1], a car manufactured for the Japanese market would have its steering wheel on the opposite side of a US or Canadian vehicle. It would be OK for the UK or Australia, which also drive on the left, but the cars can only be leased for now[2], and the provisions of the lease would probably disallow taking the car to another country.

[1] https://en.wikipedia.org/wiki/Left-_and_right-hand_traffic#W...

[2] https://asia.nikkei.com/Business/Automobiles/Honda-launches-...


I'm not sure but I bet it's perfectly legal to drive a right hand drive in North America.


Yes, it is, but you can't sell a right-hand-drive car here. Individuals have to import it, which does happen sometimes.


Seems to be a lively market: http://www.texasjdm.com/

Jeep has a RHD Wrangler for 2021 and I’m not certain if they ever paused making them.


Your comment elides that Jeep exclusively markets these (afaik) to USPS for rural delivery drivers. Though they are widely available in the secondary market, are you sure they sell them directly to consumers?


Rural mail carriers are contractors who buy their own vehicles. Anyone can order that jeep or a RHD Subaru.


Great to know, thanks.


There are no laws against selling right hand drive cars in the US.


Possibly the self-driving system on a car sold in JP/UK is optimized for left side, and vice versa.


Mail carriers often have right-hand drive in NA.


I was more getting at that if they aren't interested in the US market they're unlikely interested in the other ones I mentioned either, rather than commenting on sidedness.

Yes, technically it's easier to bring this car to other RHD countries but I'm assuming Honda has plenty of manufacturing capacity to do that if they want to. It just seems like they don't want to.


It's rare, but I have occasionally seen left hand drive cars in Australia

https://www.qld.gov.au/transport/registration/register/left


> Is 100 units really a "production car"? I don't agree.

Considering you couldn't even get into Group B rally homologation with those numbers (200 were required) in the 80s, or Dakar race-spec production (2500) in the 90s-2000s, I don't think it is, either.

> It’s possible they are selling these at a loss to seed or whet the market. If it takes off then maybe it will reappear at a profitable price.

Exactly. It seems more like a mass produced prototype, or a proper limited edition model and if I'm honest this looks exactly like an Accord/Insight with all the JDM goodies we don't get in the West. But this is about marketing.

I wish them well. Honda (auto) has a ton of muck in their face due to their failure to, once again, make any inroads in Formula 1 after their epic success with McLaren in the 80s--this last time around was so hard to watch, and the mere utterance of 'GP2 engine' will live in infamy forever. F1's raison d'être is for the technology to trickle down to production cars, but with V6 turbos and KERS systems pretty much maxed out they need something like this to justify their R&D budgets.

I'm not sure how to feel about it, to be honest: had this been on a limited-edition EV Insight it would definitely be a step in the right direction. Whereas this will fall into obscurity as more manufacturers move to EVs.


> you couldn't even get into Group B rally homoligation with those numbers (200)

Unless your name is Lancia and you move 100 cars from one parking spot to the other while you're treating the officials to a lunch with loooots of wine.


> Unless your name is Lancia and you move 100 cars from one parking spot to the other while you're treating the officials to a lunch with loooots of wine.

I really wish Clarkson would do more of these kinds of stories; the trio would be the ideal team to make a docu-series about the Motorex and GTR fiasco!

It's such an insane story that I cannot believe they're on the 100th version of Fast and Furious but we have not seen this story be told.

Having lived through it myself, I was in the early drifting community in SoCal back in 2002 and a regular on Fresh Alloy/NICO since 2000. I even saw a few of those cars that got sold at the old meetups at Life Plaza before the canyon runs, the Bee*R rev limit kits on those R chassis [0] were so absurd back then when they first came out, they sounded like the meanest rally cars.

My memory is fuzzy after all these years, and I can't remember if it was big bird, or black bird but we saw it being tuned on the freeway in LA doing high speed wangan runs and the helicopters may or may not have showed up, good times.

I'm going to miss the ICE days because of this, but... they're dinosaurs and we can always tell the story from the glory days. I'm just glad I was born when I was, because I think that is a culture that has since peaked and is now on a very sad descent. Other than Tesla and Rimac I don't see anyone even trying to make EVs be anything more than a soulless utilitarian computer enclosure with wheels to get you from point A to B.

0: https://www.youtube.com/watch?v=7_pUReXK3zM


> It does all this with zero input from the driver

Isn't this different from things like Tesla's autopilot in that, in certain conditions, the Honda will take full responsibility for driving and the driver can concentrate on other things, but needs to be ready when the car tells the driver to take back control/responsibility? I'm guessing these conditions could be quite specific and limited - like traffic jams and freeway driving when the car is effectively in a 'convoy'.


It's super weird to only sell 100 of these in a single market. If it does what they say it does, they should be able to make a killing as the first mover. Just more reason to be skeptical.


I'm guessing it's more of the idea that Japan will tolerate 100 possibly dangerous vehicles to further their engineering goals.


That's still incredibly sketchy.

What any sane engineer would do is to keep testing these cars with a safety driver and gather enough information/miles to claim higher statistical safety than a human driver, while verifying that those miles were really in fact driven without intervention (or with predictable and managed intervention). Once they had those numbers, they would know for sure that the car is safe and they could release/manufacture hundreds of thousands of units, or an unlimited number, to the market. If they didn't - it means either the car really isn't tested or it doesn't do what they claim it does.

Releasing 100 vehicles just doesn't make sense, unless the claims are somehow shady or misrepresentative.


It’s possible they are selling these at a loss to seed or whet the market. If it takes off then maybe it will reappear at a profitable price.


Is a very careful initial launch for such a product weird? I don't think so.


> If it does what they say it does, they should be able to make a killing as the first mover

Of course, if it _doesn't_ do what they say it does, they will, ah, also make a killing. Or many killings. You can understand the caution.


I'm not sure "make a killing as the first mover" is the best choice of wording when the topic is self-driving cars.


No different to the Toyota Century, or the Nissan President, or (to a lesser degree) Alphard etc. The major Japanese car makers have quite a few vehicles (or options) that don't get exported, but whose features show up later in a Lexus or similar.


The Toyota Century/Crown/Alphard/etc. are cars designed specifically for Japan, but the Honda Legend isn't. It doesn't sell well there; I believe its primary market is the US or China (where it's sold as the Acura RLX).


Worth noting the Acura RLX has been discontinued with no successor planned. The Legend will continue to be sold in select markets, but as of right now, plans have only been announced for Japan.


It's just a marketing piece. Tesla's FSD is way more advanced than just a 'traffic jam pilot', but they don't market it as level 3. It's a low-hanging fruit for marketing.


I guess you are technically right, since Elon promised level 5 last year.


"Full self driving, coast to coast, this year, no hands on the steering wheel!"

He also promised it by the end of 2018.

Oh, and before that, 2016.


Who cares. A few years here and there don't matter. Here's a video of the current FSD beta driving from San Francisco to Los Angeles without driver intervention: https://www.youtube.com/watch?v=dQG2IynmRf8


I hear this repeatedly from people. "Who cares, it's marketing, he's cheerleading", as if Elon's quotes don't have a material impact on Tesla's stock price. They'll then cry and complain about "the shorts, the shorts", and how unfair it is that they're having an impact on stock prices.

Coast to coast means places like Iowa, Pittsburgh in winter, downpours, poor roads.

Not LA to SF on a perfect, cloudy (no sun glare) day on a well-maintained interstate. Or a 14-minute video from a Tesla fan site that says "Here's a brief video of an 8-hour drive where, we swear, there was no intervention".


I don't care about the stock price. I just observe what the car does compared to "first car with Level 3 autonomous driving".


> Is 100 units really a "production car"? I don't agree.

Sounds more like a beta test.


Tesla has been doing a much larger beta test and charges 9k for the sign up.


I felt like Tesla did a cash grab when they recently offered an upgrade from "auto pilot" (which I pre-ordered on the forthcoming Cyber Truck) to "full auto". This upgrade raised my estimated delivery price by $10k. I went ahead and did it, but WTF? My initial estimate already included the $9k for "auto pilot" so now I'm paying $19k for self driving?


I loled at this, not at all because I find joy in your situation, but because it's so difficult to picture (A) someone who buys a cybertruck being price sensitive, or (B) someone who is like "well, ok I guess" in the face of a $9k price hike on a truck.


It should only be $10K, but remember it's not for the life of the car, nor for your lifetime. It is for the time you own that particular car, and then you lose it even if they still haven't released FSD by then (which they probably won't have).


Autopilot is included in the base price, it has never cost 9k. The FSD price went up from 7k to 10k last year.


The second quote seems to insinuate that you should remain in a state where you can drive and can use the context of your surroundings. For example, if you see flashing lights, you will likely get a handoff request because there are emergency vehicles which suggests an unpredictable situation.

To me, it appears to mean "don't sleep or 'drive' intoxicated." The car can presumably predict when it needs to handoff with enough notice to allow you to regain the full focus needed to drive.


> The car can presumably predict when it needs to handoff with enough notice to allow you to regain the full focus needed to drive.

This is the critical assumption. And:

> suggests an unpredictable situation.

All driving situations are unpredictable. From the kid running into the street after their ball to the drunk guy in a truck careening into the same crossroad.

We can never reach this dream of "watching a movie while the car drives" without railroading our roads in some form or agreeing on a new distribution of risk.


Honda seems like a very conservative company. It's released a couple of low volume "production" cars as it's dipped its toes into electric cars as well. The Honda Fit EV was a real production car, available to the general public (and I did see a couple of them on the roads in the US). But it was only available to lease, and Honda only leased about 1000 of them IIRC. Seems like a hedge - it gets them some experience with running an electric vehicle program, and data on how their vehicles do in the wild, but in a limited enough number that there won't be any serious harm to their reputation if the vehicles are deficient in some way.

So where Tesla might want to put this sort of tech into the hands of as many customers as possible, as soon as possible, Honda is being Honda and entering the water very slowly.


Sounds a lot like Tesla's marketing.

"Summon your car while dealing with a fussy child."

( do not summon your vehicle while distracted, maintain constant visual contact with the vehicle)

"Full self driving capability"

" vehicle has all equipment capable to self drive, but may be limited by laws or regulations in your area"

and so on. Tesla is full of nudge nudge wink wink, and disclaimers that walk back their ledes, so what exactly is problematic about Honda's statement about limitations?


WRC has (or had) a rule for car homologation which required at least 100 road-going copies to be made for some higher-end tiers (this is why we have the Lancer Evo & Impreza WRX on the roads). There may be another rule/law in the automotive industry where 100 units are considered "serial production".


It seems to make sense. You can be distracted, but should be able to take back control when the car asks you to.


That doesn't really work. It will take seconds for you to orient yourself to what is going on if you're not paying attention. At 70 MPH that's about 200-300 ft of travel.
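
A rough sanity check of that figure (my own back-of-the-envelope sketch; the 2-3 second re-orientation time is the assumption here):

    # distance covered while an inattentive driver re-orients (assumed 2-3 s)
    speed_mph = 70
    ft_per_s = speed_mph * 5280 / 3600          # ~102.7 ft/s
    for t in (2, 3):
        print(f"{t} s at {speed_mph} mph ≈ {t * ft_per_s:.0f} ft")
    # prints ~205 ft and ~308 ft, i.e. the 200-300 ft quoted above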


Level 3 (eyes off) can perform emergency actions and doesn't require drivers to pay attention. It will handle all immediate response situations. Tesla is level 2 (hands off) because it will run into things like stationary fire trucks.


I think you have answered the question of why they are only releasing 100 cars. Tesla has a million cars on the road, yet everyone talks about the car that runs into a truck or fire engine.

Considering the odds, it is likely you would see no accidents if you looked at just 100 Tesla cars.

This would be more impressive if they put 100,000 cars on the road and then we saw the stats.


> everyone talks about the car that runs into a truck or fire-engine.

Not to mention that plenty of non-self driving cars run into stationary vehicles on both motorways and ordinary roads.

A fireman I spoke to says it happens all the time and that the fire engine is parked upstream of the incident that they are dealing with so that errant vehicles run into it rather than the fire crew.

It is a matter of lively debate in the UK right now with regard to 'Smart Motorways' which have no hard shoulder, several people have been killed in collisions with stationary vehicles.

In fact even on motorways with hard shoulders a number of people have died because they stopped on the hard shoulder because of a breakdown and another vehicle strayed onto the hard shoulder and rear ended them at 70 mph.


I would prefer roads that are not the test lab of some dystopian experiment for some rich people.


You might be disappointed to learn that human drivers run into firetrucks way more often than Teslas.

The current rate as of October is:

Tesla Autopilot Accidents: 1 out of 4,530,000 Miles; US Average: 1 out of 479,000 Miles
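
Taking those two published figures at face value, the naive ratio is roughly 9-10x (a quick sketch; whether the comparison is apples-to-apples is exactly what the replies dispute):

    # naive ratio of the two per-mile accident rates as published
    autopilot_miles_per_accident = 4_530_000
    us_average_miles_per_accident = 479_000
    print(autopilot_miles_per_accident / us_average_miles_per_accident)  # ≈ 9.5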


Most accidents happen where Tesla autopilot isn't being used, like city driving and exiting and entering the freeway. Sitting in one lane on the freeway is the safest thing you can be doing, and any car with lane centering and adaptive cruise control can do it. That's why those numbers are a joke. I use autopilot every day on a 45-mile commute, and my only accident was entering the freeway, where my car and another car merged into the same lane.


Nothing better than A/B testing within the same population, right?

> In the 4th quarter, we registered one accident for every 3.45 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 2.05 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 1.27 million miles driven.

Even if you assume most accidents happen in that last group, city driving with AP off, that ratio (3.45:1.27) is better than the overall estimated proportion of city vs highway accidents for all vehicles, at 2.3:1. At a minimum, AP is making highway driving 10-20% safer, and obviously not causing any new city accidents when it’s off.
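
The arithmetic behind that 10-20% estimate, assuming the 2.3:1 city-to-highway accident ratio is the right baseline to discount by (my reading of the argument, not an official figure):

    # Tesla's published per-mile rates: AP engaged vs. fully manual
    ap_advantage = 3.45 / 1.27                # ≈ 2.72x fewer accidents per mile with AP on
    city_vs_highway = 2.3                     # assumed baseline ratio for all vehicles
    print(ap_advantage / city_vs_highway - 1) # ≈ 0.18, i.e. roughly 18% safer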


AP doesn't switch lanes, so you turn it off during those maneuvers. It's not an A/B test, since AP is used in a very specific scenario, while a person drives in the situations AP is incapable of handling, like dealing with a stalled vehicle on the road.


How confident are we in the first number? Accidents usually don't just happen out of nowhere on a straight road; they happen when something unexpected occurs. How many unexpected cases were recorded in the first number vs the second? Simply because of the number of Teslas on the roads, I doubt many.


We're absolutely not, but Tesla fans love to repeat this quote, no matter how many times it is explained to them.

You cannot at all equate "subset of miles where AP is a possibility, be it good conditions/road/weather", where AP will turn off or not be available because it can't work, with "all the miles driven by humans in all conditions", and say in any way with a straight face, "Look, safer!". But that's what Elon, and a large number of people do.


That’s where the “per mile” part comes in.


Not sure how this factors in but autopilot primarily only works on the highway.

It’s certainly possible that city driving is safer and also more complex to be done autonomously but it’s something that needs to be considered.


The ratio of city vs highway accidents is 2.3 to 1. At worst the Tesla number is cut in half.


As an owner of a Tesla with FSD: my car would have run into a stalled car in the carpool lane and has tried to change lanes into a concrete barrier. Tesla is level 2 and needs your attention to be safe.


Solving self-driving in dense, slow-moving traffic on narrow city roads is a simpler problem than on high-speed freeway-style roads, mainly because the risks are much lower - you won't get injured at 20 mph in modern cars. Dings and dents can be compensated relatively cheaply.

Much of urban Europe with pre-industrial-revolution cities and part of urban Asia is slow moving traffic. Slow, compared to California/Atlanta/Houston non-rush-hour freeways.


Surely not. Motorway driving is very simple compared to urban driving. The consequences of a collision are probably less serious in slow-moving traffic, but the risk of them happening is much higher. And the density of traffic and of pedestrians makes the job much more complicated.


I wonder if cars will be able to be put into an "overly cautious mode" in any of the following situations: inclement weather, children inside, inebriated passengers who may not be in any condition to respond to situations. Plus a "HOSPITAL NOW" voice function, a "POLICE NOW" voice function, as well as other one-word voice commands: "Car, Home", "Car, Work", etc.


They really are making it seem like it's level "2.5" self-driving and it was rounded out to 3.


I can think of a few reasons to be japan only:

- probably easier to have a japan-only dataset

- probably easier to get engineers to work on this at headquarters

- japan highway markings might be more predictable than say 50 states all with subtly different rules and state of repair

- us liability law


Perhaps it’s no coincidence both examples involve the navigation system? Makes it easier to flash an alert and return the driver’s attention to the road.


I would have thought some sort of display on or in the windscreen, but maybe that is too distracting and causes panic?


Sound is way better, especially if you aren't already looking out the windscreen.


This is where Tesla is underestimated, I think. There is a huge difference between a high-volume, sellable product and a highly priced concept designed for PR. You could argue Tesla has both, but at least they are attempting to place this technology into the hands of consumers.

The unavoidable penalty for this is a longer period of driver responsibility and perhaps never reaching level 3. It's a trade-off Tesla buyers seem willing to accept.


Wasn't the TV watching specifically when you're in traffic jam mode?


I really think this is an example where the litigious nature of the US is going to drag on self-driving efforts. There is simply no way to deploy this tech without learning from crashes, and there will be crashes, people will get killed, and the US market is the last place where you'd want to be a manufacturer if that happens.


People have already been killed. I just don’t see it as that big of an impediment.


This isn’t an issue. Congress can create safe harbours if it becomes a problem.


That's like saying Congress can reform our liability laws and turn it into more of a Japanese system.

Well, OK, sure, it's a possibility, but the reason why that won't happen is because we as a nation are much more litigious and expect to have a right to sue. We love to blame big business and make them pay.

People have been clamoring for reform of medical liability for decades - still no progress on that front, either.

Safe harbours are not politically popular here, and you tend to get them only in very niche areas that are below the radar of most people. Auto accidents and traffic injury law, not so much.


The US is like one of the most pro-company countries I can imagine. The amount of lobbying that only benefits Big Co is ridiculous from a European point of view - I don't see it as a negative to even increase the liability of companies. They should very well be responsible for everything they do, and it should not cost/risk innocent lives to further a private company's profits without public benefit!


Several US states have actually imposed caps on medical malpractice damages.


Do you know how many lawyers are in Congress or donate to politicians? Trial lawyers as a group are big donors to the Democratic Party. There is no way Congress is going to do anything to make it harder for lawyers to sue people.


Probably true but I’d think Google, GM and every other company working on this tech that is probably going to be a 100B+ industry would have at least some influence here.


Haven't there already been self-driving car accidents that killed people? And also - is the US behind in the self-driving race?


Fewer than with cars driven by people, and no.


Rocket launchers kill far fewer people than guns, but I doubt it is a reasonable assumption that the former is less dangerous/deadly.


What is notable is level 3. Not the features, but the legal aspect of claiming drivers can now be distracted in this mode.

Tesla "auto"pilot does more, while pretending to enforce an always-alert driver. This has resulted in a couple of wrongful deaths no one at Tesla has been prosecuted for.

Honda meanwhile is taking advantage of the Japanese government's legal changes which make level 3 legally tenable. Drivers need not watch the road, but they must respond when the car asks them to. This means Honda is promising your Honda will not silently drive you into a wall as the Tesla autopilot is known to do.


> while pretending to enforce an always alert driver.

To be fair amongst automakers, not many other mass-market cars are different (right now) - they only auto-steer if the user applies torque to the wheel periodically and beep at the driver if torque isn't applied every 10/30 seconds (30 seconds only applies on interstates, in my experience). Only sometime this year will the Ford Mustang Mach-E get an optional $600 upgrade to enable hands-off-the-wheel driving, which uses hardware (already installed) in the steering wheel to monitor the driver's road attentiveness[0].

0: https://www.slashgear.com/ford-active-drive-assist-mustang-m....

e: no other -> not many other - apparently Cadillac has eye tracking


I agree - I don't know how it used to work, but today Teslas are pretty pushy about keeping you alert. I can't imagine blaming my car for crashing when I'm using autopilot.


Cadillac supercruise has eye tracking that makes sure you are watching the road.


Doug Demuro has a nice video about this: https://youtu.be/AhthZ5rxQJs


Since the last Passat facelift, all* VWs have a capacitive steering wheel, where it is enough to just have a hand on it. I also think other carmakers have moved to this in the past ~2 years.


How does it determine torque application if you're on a straight away? Or does it detect hands-on-wheel? (Preferably both)


With Honda you must move the wheel a bit, even if it's not necessary for correcting the car's direction. It does not detect if your hands are merely on the wheel, that would be nicer though.


My Hyundai applies a bit of resistance to the steering wheel, so I need to keep the opposite torque.

It won't drive to a ditch if I let go, but will start howling after 20-30 seconds of not touching the wheel.


It won't silently drive them into a wall, but at 65 mph it might only give them just enough time to look up and panic as they hit the wall, internal alarms blaring for their attention. Lane keeping combined with construction combined with curves in the road won't be a good mix for distracted drivers in these cars.


How far ahead do the sensors work? Maybe there is lots of time at 100 km/h. They certainly need to look far enough ahead to detect a stationary object in the road and come to a complete stop, as does a human. A car coming towards you in the wrong lane at 100 km/h is a different kettle of fish though (as it is for a human).

Over here, the speed limit with road works is 40km/h


In the US on highways the speed limit near construction generally drops from the usual 65mph/105kmh down to 50 or 55mph/85kmh. I think the federal reference is 45mph to 55mph, but in my experience it's usually at the highest level, 55mph, unless there's really active work with crew on the ground. If you're accustomed to 40kmh, I can only imagine how insane these speeds probably seem to you. (And I'm not saying you'd be wrong in that)

Overall, the problem of course comes down to reaction speed. An attentive human with hands on the wheel might respond in 250ms. A computer obviously can respond faster. But some problems won't give you any more than that 250ms to respond. Situations where faster sensors or cpu are meaningless in giving the driver more warning because there is literally only 250ms from onset of stimulus to the time needed for a reaction if catastrophe is to be avoided. Meaning that if the computer can't respond without human intervention, an inattentive driver is screwed because their 250ms reaction speed was based on them already paying attention. To break it down in rough estimates:

1) The car detects, makes its own assessment, and raises an alarm. 1ms? 10ms? It almost doesn't matter.

2) The human receives & registers the alert. This is the initial 250ms reaction time.

3) Only then can the real work begin: The human, from a cold start, needs to take in the entire situation and understand what's going on. This is more than just reaction speed. This is task switching, which is cognitively more costly, especially for un-practiced tasks, and the task here is rapid analysis & synthesis of a panic inducing situation that most people never experience. 750ms is probably about right, based on research that shows the cognitive burden of task switching can be even higher [0]

So, a full 1000ms, a full second.
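
As a rough sketch of that budget (all three numbers are the estimates above, not measured data):

    # handover-time budget from the rough estimates above
    detect_ms   = 10     # car detects, assesses, raises an alarm
    register_ms = 250    # driver registers the alert (baseline reaction time)
    reorient_ms = 750    # driver rebuilds situational awareness (task switch)
    print(detect_ms + register_ms + reorient_ms)  # ~1010 ms, vs. the ~250 ms an attentive driver needs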

This is way, way too slow. As I said at the beginning, there are moments driving where that 250ms is required to avoid catastrophe. When those situations arise, there is simply not enough time for the car to get the help of an inattentive driver to respond. It's simply not possible. Either the car makes the decision, or the human has just enough time to go wide-eyed before it's over.

Basically the only situations where the car's notifications can facilitate meaningful human response are those that can be foreseen at least a full 1000ms in advance. Certainly this encompasses a lot of difficult scenarios, but not many of the most dangerous ones. Any time a split-second decision saved the life of a driver? That same driver, if inattentive when the car's alert comes in, is now dead.

There is simply no self-driving car that can safely allow human inattention until the car can make a whole lot more of its own decisions.

In the meantime, if overzealous marketing & breathless hyperbole don't continue to oversell these capabilities, then computer-assisted driving by attentive drivers should greatly improve safety.

[0] https://www.semanticscholar.org/paper/Task-switch-costs-subs...


Of course, since they’re only releasing 100 of them, that’s a low bar. Tesla has sold over a million vehicles.


> This means Honda is promising your Honda will not silently drive you into a wall as the Tesla autopilot is known to do.

If Tesla only sold 100 vehicles, I assume it could also have avoided them driving into walls...


Stats matter, Tesla has a million cars on the road. Choose a random 100 and odds are very favorable you see no accidents.

If Honda had a million cars on the road, I would not be surprised if the accident rate was well over that of even humans drivers.


Is that the case? Most people get into an accident at least once in their lifetime and most people don’t drive for 100 years.

It certainly wouldn’t be statistically significant but over the course of a year or two you’d except a couple accidents for 100 cars.


Maybe, but in Tesla's case it's a known bug/design choice that can probably be consistently reproduced and not just a random edge case.


Your last sentence should read:

... as previous versions of Tesla autopilot have done.

Talking about any one of these systems via their branding as if they are a singular fixed thing (and not a network of software versions and models under continual update) leads to bad mental models.


Here's a Tesla hitting a highway wall from last week.

https://youtu.be/XgzpAN4qsmg


Interesting, I think this is the spot: https://www.google.com/maps/@34.0357838,-118.169596,3a,75y,2...

Seems like when the Street View pic was taken this was not a lane. I'm shocked it was converted into one. Seems very dangerous, for human or AI.


I’d probably hit that wall too. I wonder how many non-teslas have hit the same spot.


One frame is that Teslas will always hit walls; another is that these incidents mean Teslas will never hit such walls again. Absence of collisions isn't evidence the systems work; it may be, but it could also be that other systems aren't deployed widely enough to hit the tail of the distribution and see these scenarios. However, known incidents like this one do lend evidence to the claim that said system won't keep making the same kinds of mistakes on the region of the input space most like it (which is hard to know as a human observer).


The Tesla should have caught that.

But that is the stupidest way of temporarily? (I hope) relaning a highway ever. What were they thinking?

One of those situations where Musk's insistence on not using LIDAR is clearly wrong.


The car has radar which can see that wall. It’s most likely a problem of what to do, which humans face just the same - do you swerve into the right lane potentially causing a much worse accident?


I believe Teslas rely 100% on cameras for this type of maneuver. Regardless, it didn't need to swerve into the next lane, just move to the right 6 inches in the same lane.


Can radar see concrete? I thought radar was only reliable for metal obstacles.


It could check its side cameras first.


LIDAR would not have made any difference if the car is not coded to handle the situation - and how in the hell is that lane layout even legal? We can only assess what the car saw if the driver were lucky enough to have the screen data as seen in the car, if not all the telemetry from its recordings.

I own a TM3 and I do not have the "FSD beta" that is in limited testing. I will say that the car display portion of the UI really does not impress me in what it shows and does not show.

As in, it is damn happy to show cones and garbage cans, but I have had instances where it showed me cones in front of and behind a vehicle but did not draw the vehicle in between. Same with a floating garbage can, where sometimes the loader was visible but the truck itself would wink in and out.

I really wish people would quit ascribing supernatural abilities to LIDAR, let alone superiority. Plus I really want to see studies done which show the impact having hundreds of LIDAR-equipped vehicles would have on people, animals, and even insect life. (Then again, how many vehicles in traffic does it take to cause an issue for them?)


The only reason I mentioned LIDAR is because I strongly suspect the reason the Tesla didn’t avoid that is because the angle of the sunlight made that pillar difficult to see.

I fully agree with you that that spot on the road is pretty horrible, which is why I mentioned it. But dealing with crappy roads is the inherent problem any self driving car has. It’s the car’s job to not hit things.

I own a Model Y and I feel like Tesla’s autopilot gets a worse rap than it deserves. But there is an inherent disconnect between what the car is capable of and what some people presume it is capable of. Tesla doesn’t help this with their branding.

Why did the driver just let his car plow into the pillar instead of taking control? Were they even looking up at the time? Would you have let your car drive into a pillar like that? I'd like to think I would have taken over, but perhaps not.


How can any AI make sense from such a low-quality video? Why isn't there some kind of lens hood that blocks the sun glare?


Yes, but that's not the old version. The latest versions are failing in new and interesting ways. /s


Here is your "but this time it is fine" Autopilot attempting to murder yet another driver just 9 months ago: https://www.youtube.com/watch?v=LfmAG4dk-rU


Here is a human driving into 36 cars: https://m.youtube.com/watch?v=u5KgLVh-4Mg

Here is another doing a burnout and crashing into a store, a month ago: https://m.youtube.com/watch?v=tJu5TZ6rwXg

Human-driven truck runs straight into a 100-car pile up, 1 month ago: https://m.youtube.com/watch?v=VqNy7v5YekM

Beat that, Tesla!


This isn't how statistics work. Teslas are only about .1% of all cars in the US, and only a fraction of those have autopilot on. If autopilot is as safe as a human driver, we'd expect to see about 10000 cases of human stupidity for every case of autopilot stupidity.
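
Roughly the base-rate arithmetic (the 10% Autopilot-engaged share is my placeholder assumption, not a published number):

    # expected ratio of human incidents to Autopilot incidents if both were equally safe
    tesla_share_of_cars = 0.001     # ~0.1% of US cars are Teslas (figure above)
    ap_share_of_miles   = 0.10      # assumed fraction of Tesla miles with AP engaged
    print(1 / (tesla_share_of_cars * ap_share_of_miles))  # 10,000 to 1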


That’s exactly my point. There are millions of Teslas on the road, a single incident nine months ago means very little. You will find thousands of these for humans in the past month alone.

Teslas per-mile incident rate is currently about 1/10th that of human drivers.


> There are millions of Teslas on the road,

It's only 1.4 million ever sold worldwide.

> Teslas per-mile incident rate is currently about 1/10th that of human drivers.

But the reason for that is obvious, isn't it? Drivers are strongly encouraged to activate autopilot in trivial situations (e.g. lane assist on interstates), whereas the most dangerous conditions (rain, freezing, bad roads, road works and so on), with their higher share of incidents, fall almost completely in the human-driver column.


These cars have driven about 4 billion miles with AP on, only six confirmed AP fatalities total since 2016. If it was such a hazard you’d expect to see hundreds or thousands of serious incidents.


It's at 15 deaths for the currently 3.3 billion miles driven. The number 6 comes from Tesla itself, which isn't exactly in a neutral position. That's 4.5 deaths per billion miles driven.

No idea about the US, but in Denmark the death rate per billion miles of highway driven is 0.7. In other words: on a European freeway equivalent you are over 6x more likely to die with Tesla's Autopilot than if you drove yourself.
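
The arithmetic, taking those numbers at face value (both the 15-death count and the Danish rate are the figures quoted above, which I can't independently verify):

    # deaths per billion miles, AP vs. the quoted Danish highway baseline
    ap_rate = 15 / 3.3                # ≈ 4.5 deaths per billion AP miles
    dk_highway_rate = 0.7             # quoted Danish highway figure
    print(ap_rate / dk_highway_rate)  # ≈ 6.5x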


It looks like the car at least tried to stop. I'm sure some software engineer at Tesla saw that and said "Yes! A real-world UAT on the latest don't hit stationary objects point release! Now I can close out that story in Jira."


I suppose we need more captcha images of overturned trucks rather than a brain behind the wheel.


What is really amazing about that video is the people who must obviously be witnessing a really bad crash but just keep on driving.


What a bunch of zombies holy moly.

