San Francisco raises Tesla 'self-driving' safety concerns as public test nears (reuters.com)
141 points by codechicago277 on Sept 26, 2021 | 383 comments




The problem with level 2 "FSD" is that the better it gets, the more dangerous it is because the driver gets complacent. Autopilot is already being investigated because drivers don't monitor it enough and hit emergency vehicles, and all autopilot does is go in a straight line. FSD is making far more decisions, and gives the driver far less time to react to it behaving badly.

Tesla FSD has several situations where the car regularly performs maneuvers that, if the driver didn't intervene, would likely kill the driver and other people on the road. It also has many types of routes where it can operate without any disengagements for a long period of time. I think that is a very dangerous combination.

If they really wanted to reduce deaths due to driving, they could use this FSD software to add better collision avoidance technology. Once you have that, introducing self-driving should be safer, even if it doesn't always make the right decisions it will be caught by the collision avoidance tech.


> and all autopilot does is go in a straight line

This isn't true, and there's any number of YouTube videos demonstrating that. At the very least, it knows how to navigate highway interchanges, how to change lanes, how to avoid other cars that would otherwise collide with it, and to avoid/slow down for pedestrians and cyclists.

The last feature is perhaps overly sensitive, in my personal experience. It used to slow down dramatically whenever it was passing a cyclist; that has improved in recent versions.


To clarify, I wasn't saying that it literally just goes in a straight line without any additional collision avoidance logic, just that from the driver's perspective all they have to do is make sure it continues in its lane, as opposed to FSD where there is an order of magnitude more decisions the driver has to monitor. There is a huge difference between the focus required to make sure your car driving on the highway stays in its lane and doesn't run into stopped vehicles, vs the focus required to make sure your car doesn't turn in front of other cars or take a turn too wide and hit something.

I wasn't including Navigate on Autopilot as a part of that but even with Navigate on Autopilot, the driver has far less to monitor than with FSD, since it just changes lanes on the highway occasionally rather than having to turn across traffic, frequently switch lanes in the city, or manage stop signs and lights.


Perhaps someone can build a Tesla-detector, so when it sees one the owner can brace for impact ...

Sadly, only half-joking here.


Valets driving manual seems to be a bigger risk, in my experience.


You seem to overlook the fact that Tesla basic autopilot is already saving lives and preventing accidents.


Source?


This is yet another instance of the "what is the acceptable risk for introducing a new technology" problem.

New technologies tend to be rather unsafe when introduced: steam engines, cars, trains, airplanes, electricity, gas heating - all those things tended to kill the early adopters and some bystanders as well. Then they improve, and the next generations get used to them and to their inherent risks, which diminish over time but rarely get all the way down to zero.

We are much more risk-aware than our ancestors, though, because we no longer live in a world of ubiquitous premature death. With an average life expectancy exceeding 80 in many countries, we tend to be much more careful than our ancestors in times when life expectancy was 47. (As in the USA in the year 1900.)

I can understand that - I do not want to die under wheels of a robot gone wild any more than anyone else - but we should still find a balance, unless we want to stagnate indefinitely.

The risks of self driving cars are nothing compared to the risks of biotech, which is just coming of age. But biotech holds a lot of promises, too.

PS.: Frankly, my worst fear about self-driving cars is malware and ransomware, not honest mistakes in the code. Computer security is basically an oxymoron and some people are smart and evil at the same time.


This is less about the introduction of self driving cars as a new technology, and more about a company introducing a level 2 driving assistance feature and calling it “self driving”, when in technical terms, it never takes over driving responsibilities from the human driver.

The danger in this case is entirely manufactured. Level 2 driving assistance features are safe when they are not oversold.


The Tesla FSD Beta program requires drivers to understand this is a level 2 system. The human driver has to be in the driver's seat paying attention 100% of the time. If they move to the back seat they will be banned from the system.


Safety critical systems should never be labelled with conflicting statements regarding safe use. Disclaimers aren't a safe way to mitigate this. When given conflicting statements, people tend to psychologically cling to whichever statement is strongest, most prominent, or most reinforcing of their predisposed views. Also, the driver of a car is not necessarily the owner.

There is a long history of people dying because they were confused by conflicting information about safety systems meant to protect them. Safety engineering starts with consistent and clear messaging.

It's not 2012 anymore, and there are now a number of vehicles with level 2 driving assistance features on the market. Yet, Tesla is the only manufacturer consistently in the headlines when one of their users is peer-pressured into showing off features their car doesn't actually have.


I agree that the media unfairly singles out Tesla. I guess there are a lot of people looking for articles that reinforce their view that Teslas are unsafe.


So the Full in full self driving is like the Full in full speed USB? Actually it means not full, but mostly empty?


Beta as in it isn't done yet, but when it is done it will be safer than most drivers.


The SFCTA isn't raising concerns about hypothetical technologies that Tesla might release at some point.

They are raising concerns about what is actually being delivered in reality.


You don’t have to move to the back seat to forgo paying attention in a vehicle.


I'm not sure what you're arguing in terms of acceptable risk. Biotech is incredibly regulated, specifically because the risks are so high, effectively there is very little acceptable risk. In biotech, a patient dying due to your drug is a Big Problem that will at best cause you to put a disclaimer on the package (see Black Box Warning) and at worst immediately end your drug's prospects. We can argue about trade-offs (if you've got terminal cancer, maybe a rare heart event is a worthwhile risk, probably less so if you have a rash), but this is exactly the way it should be.

Self driving cars are a nice luxury, especially in city driving, not something that radically improves our world. You get to read your phone instead of paying attention, and the trade-off is someone might get killed. It's like treating a rash with a drug that could give you a heart attack. That's a far cry from, "with this technology something that took days and $$$$ now takes hours and $" as was the case with all the older examples you listed.

If self driving cars were more like airplanes, I'd have a little more faith. Tesla's marketing BS doesn't inspire me with lots of faith.

On black boxes: https://health.clevelandclinic.org/what-does-it-mean-if-my-m...


> Self driving cars are a nice luxury, especially in city driving, not something that radically improves our world. You get to read your phone instead of paying attention, and the trade-off is someone might get killed. It's like treating a rash with a drug that could give you a heart attack. That's a far cry from, "with this technology something that took days and $$$$ now takes hours and $" as was the case with all the older examples you listed.

This is the opposite of the premise and the conclusion is the total opposite of the goal of self driving cars. A core premise of self driving cars is that they will be far safer than human-driven vehicles. 1.35 million people are killed on roadways every year globally. Saving over a million lives a year means a lot. The technology isn't quite there yet, but it is likely that it will be, and the promise is quite real. It's not like Teslas with Autopilot are killing people at a significantly higher rate than regular drivers - at least, that does not seem to be the case [1].

1. https://www.tesla.com/VehicleSafetyReport


There are plenty of resources that demonstrate why those statistics can be misleading. Chief among them, not all miles are created equal. It’s like claiming autopilot in airplanes is significantly safer than ape-controlled aircraft. It’s partly true because apes control the hard parts (takeoff and landing) and leave the easier parts to software.


> It’s like claiming autopilot in airplanes is significantly safer than ape-controlled aircraft. It’s partly true because apes control the hard parts (takeoff and landing) and leave the easier parts to software

That's because the term "autopilot" is badly named, both in cars and in aircraft. People just think autopilot in aircraft means "plane flies itself" because they get on a jetliner and never see the pilot actually control it, leaving the impression that some combination of magic and electricity-infused rocks got them there instead of a human with assistive software.


Those statistics are pretty relevant when it comes to fatalities, which are more likely to occur at highway speeds. Total accidents, yes, self driving cars aren't generally operating in city traffic yet. Those same resources will note that the most common accident for Teslas is being rear-ended by another vehicle, which is also relevant.

Let's not lose sight of the fact that this technology is under active development, with a theoretical target being eliminating the vast majority of car-related deaths. Nobody is arguing that it's already superior, though it may already be close in certain circumstances.


That’s the thing though. Those statistics aren’t showing that level of rigor. They aren’t even saying “miles driven at highway speeds”, just “miles with autopilot engaged”.

They don’t control for things like vehicle age. They don’t control for safety features, or even for driver-assist software control. They aren’t comparing driving conditions. They aren’t comparing driving duration. They aren’t comparing driving speed. They aren’t even from the same datasets. Etc etc.

It’s not a rigorous study, but it’s used as if it were evidence when it’s only slightly better than anecdote.


It'll probably be best to stick to U.S. statistics, because they show the difference between manual driving in a modern car and automated driving in a modern car. "Globally" includes countries whose cars aren't designed to the same safety standards and that will lag at least a decade behind the US in receiving level 3+ ADAS when it becomes available.


Self driving cars will be sold globally though.

Either way, 40k US deaths per year plus many more injuries is still a lot, even if we only care about the US.


> You get to read your phone instead of paying attention, and the trade-off is someone might get killed.

You get to drive, and the trade-off is someone might get killed. Your comment almost makes me think you haven't driven a car before, because you would remember the dull terror of seeing your life flash before your eyes for the 40th time this year because some moron ran a red and slammed the brakes in the middle of the intersection you were about to cross.

Until recently motor vehicle accidents were a leading cause of death in the US. Saying that self driving would just be a luxury feature is truly a luxury position compared to those that have lost loved ones to drunk driving, speeding, snow, rain, new drivers, old drivers, blind drivers, and any other of the myriad of ways to get yourself killed on a road. All of which would disappear with level 5 self driving.

> That's a far cry from, "with this technology something that took days and $$$$ now takes hours and $"

Extrapolate the future and realize that once self driving is solved for one vehicle it's solved for all of them, and truck/bus/taxi driving as a profession will go bust. Without having to pay human drivers that also need breaks, pensions, health insurance etc. all these services can offer lower prices.


>All of which would disappear with level 5 self driving.

I think the post was about managing the risk that occurs before level 5 is reached. Assuming that it’s either on the immediate horizon or a foregone conclusion seems to be dismissive of those nascent risks


I drive a lot, thanks. If you can show me level-5 or even very good level-4 autonomous driving, and show that a computational driver makes radically fewer fatal mistakes than a human, then I'm with you. In other words, if you can satisfy a good regulatory regime like the ones for, say, airplanes or drugs, then great.

Short of that, it's a luxury and a danger.


The thing is that we're not going to get there if we disallow anything in between 2 and 5 just because it can be a danger when used incorrectly. Level 3 by definition[0] is where the driver can look at their phone until the car beeps at or taps them to start driving again, and we know such a system won't be able to tell when it needs to request human intervention perfectly 100% of the time, yet we need level 3 systems before anything above them.

0: https://www.nhtsa.gov/technology-innovation/automated-vehicl...


Even autopilot in its current form has proven to be such an attractive nuisance for abuse that there is a market for those stupid steering wheel weights. How can you possibly not appreciate this problem?


> because you would remember the dull terror of seeing your life flash before your eyes for the 40th time this year

Is this really something you go through?

In ~35 years of driving, I've never experienced this.

If you are having near-death driving experiences 40 times per year, something is wrong.


Not the OP, and well, it's been a year and a half, but I regularly encountered unguided automotive cruise missiles in my morning drive down 237 in the bay area, with Tesla drivers being _particularly_ bad about sitting there playing with their phones. This is not a "life flashing before your eyes" situation, but it is a "this is very concerning" situation.


Perhaps I misunderstood the OP, but I didn't think they were talking about Tesla drivers paying no attention to the road and being absorbed on the phone, just their own driving experience of getting into near-accidents all the time.


"Self driving cars are a nice luxury, especially in city driving, not something that radically improves our world."

With truly autonomous vehicles, you can have a radically different logistics for goods, delivery services etc. You can also have specialized "sleeper cars" that get you to your destination overnight, fresh and ready.

Self driving cars can also park themselves somewhere out of sight and stop clogging inner cities.


I'd agree if it works as advertised. Level 5 or very close to it is the key. No system has shown that, much less Tesla's. In the meantime, doing a live experiment with 3,000+ pound machines moving at 30+ mph seems like a bad idea.


There seems to be a misunderstanding on what they're referring to. You with "self driving cars are a nice luxury" refers to the current iteration, while the main GP is talking about biotech and self driving cars in terms of the potential they hold, perhaps 15-30 years in the future.

We can't go straight from level 2 to level 4 without some real effort, and it's not exactly helpful when new level 2 systems can't even handle curves[0] or will continue to drive for you when you take your seatbelt off[1].

0: https://youtu.be/GCRNYP5Qg34?t=321

1: https://twitter.com/MinimalDuck/status/1388557772921344005?s...


> You can also have specialized "sleeper cars" that get you to your destination overnight, fresh and ready.

Trains have had this capability for decades, and that is a very mature technology, with the upside of also carrying far more people at a time than a car-based system would.


I have taken them several times. Prague-Warsaw, Prague-Frankfurt, Vienna-Venezia, Prague-Tatras, Prague-Krakow.

First of all, trains shake a lot. I could at best take hourly naps, it was better than nothing, but far from optimal. The loud proclamations of station speakers whenever you stop somewhere do not help your sleep either.

Second, night trains are a paradise for opportunistic thieves. Yes, it is a solvable problem, but I haven't seen it solved.


I have no problem driving or biking among the LIDAR-based systems from Waymo and others in the SF Bay Area. I’ve seen them do stupid stuff, but it was always on the side of safety. On the other hand I’ve seen hands-free Tesla drivers reading books, using their phone, etc, which is both stupid and dangerous.


Tesla definitely should have a camera watching the driver to detect if they're being an ass.

  if user.is_an_ass: disable_autopilot(days=7)
That should teach them pretty quickly.
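
Only half-joking, here is a slightly fuller sketch of what a camera-based attention gate could look like. Every name in it (gaze_off_road_seconds, warn_driver, disable) is hypothetical and not Tesla's actual API; it's just an illustration of the idea above.

  # Hypothetical cabin-camera attention check, polled once per second.
  import time

  OFF_ROAD_LIMIT_S = 5      # how long the driver may look away from the road
  STRIKES_BEFORE_BAN = 3    # warnings before the feature is pulled

  def monitor_driver(camera, autopilot):
      strikes = 0
      while autopilot.engaged():
          if camera.gaze_off_road_seconds() > OFF_ROAD_LIMIT_S:
              strikes += 1
              autopilot.warn_driver()
          if strikes >= STRIKES_BEFORE_BAN:
              autopilot.disable(days=7)   # same penalty as the joke above
              break
          time.sleep(1)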


Wait, were these 5 years of hype all for a self driving car that I have to constantly be driving?


They literally do have that.


If they have that, why then do we see videos and news of people reading newspapers and sleeping behind the wheel of a Tesla?

The only feature I've heard of is that if you don't engage with the AP by applying torque to the steering wheel, it'll disable itself until you come to a full stop and park the car.


>> This is yet another instance of the "what is the acceptable risk for introducing a new technology" problem.

That's a strawman. Just make the company fully liable for any injuries or deaths from accidents involving their cars. The market will figure it out from there. But that's not how it's being handled at all.


At level 5 that makes sense - there is nobody in the vehicle responsible for the safety of the vehicle.


Tesla in legal disclaimers: "It's all on you, driver."

Tesla in FSD advertising of current state, not future goal: "The driver is only there for legal reasons. The car is driving itself". To be clear, that's a word for word quote from previous Tesla marketing (because I'm sure someone will chime in to say "Tesla doesn't do any advertising!").


FYI that video is still the main feature on tesla.com/autopilot


Not sure the math works out here. Even if you have 100 deaths/y nationally assuming wide scale rollout (tens of millions of cars, billions of trips), which would be an astonishing achievement, that’s still $ billions of payouts every year (assuming ~$10M payout per death). Yeah… not gonna happen.
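
For what it's worth, the back-of-envelope under those same assumptions (both inputs are the figures above, not real data) comes out to about a billion dollars a year before injury claims and legal costs are added on top:

  deaths_per_year = 100             # assumed post-rollout fatality count
  payout_per_death = 10_000_000     # assumed ~$10M per wrongful-death payout
  print(f"${deaths_per_year * payout_per_death:,} per year")   # $1,000,000,000 per year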


No it's not? It's about whether a specific company who has an extensive history of irresponsibility and killing people should be allowed to test. Tesla is far behind the competition and taking extensive risks to catch up. There are companies who are much further along and take safety seriously.


Exactly. This is a problem with Tesla, not autonomous vehicles. There are lots of other autonomous vehicles being tested in San Francisco. In fact, it is a pretty popular place to test autonomous vehicles.

Waymo is even offering a self-driving taxi service in San Francisco: https://arstechnica.com/gadgets/2021/08/waymo-expands-to-san...


> life expectancy was 47

Life expectancy for a child. A white person reaching the age of twenty would have an average life expectancy into their 60s. Which is still lower than today, of course, but I'm not convinced that the difference supports a conclusion about differing attitudes toward risk.


Unless they were killed on the battlefield ... this kind of risk seems to have gone down too in developed states, and that is pretty significant. Europe in the first half of the 20th century, with exception of a few lucky countries, was one big war graveyard.


It just seems like this whole endeavor is solving the wrong problem. Building a self driving car is like making a faster horse. Individual cars on roads is incredibly inefficient and even EVs have a huge negative impact on health and air quality from tire and brake dust.


I think it should be obvious that the FSD thing is and has always been an attempt to boost Tesla's appeal and stock valuation. Right now it really seems like the company is playing chicken with regulators so that they have a plausible target of blame and an excuse for not releasing a feature they claimed was almost ready _years ago_ and sold to people which is quite obviously not even close to ready. It's a pretty common strategy in scams.


But it’s a problem that can be addressed at the individual level, rather than requiring large societal shifts like a move to more public transportation would. As much as I love the idea of easy-to-use public transport, the changes in US society needed to make that happen are unlikely to come in the next 5-10 years.


Self-driving cars can wait. Nothing of importance depends on it. But if it's regulated properly, and liability is where it should be, then introduction can come, in due time. Biotech is indeed a whole different can of worms.

> not honest mistakes in the code

It's not mistakes in the code, but oversight in the design that worries me.


"Nothing of importance"

I would argue that the total amount of time spent driving is huge and that as humans, with limited lifespans, we could mostly use that time for better purpose. (Not necessarily for work. Even Netflix would be better.)

I personally live in a country where being carless is feasible, and given that I hate driving, I am indeed carless. But I feel sorry for anyone in my situation who really does not have much choice and must spend X hours weekly behind the wheel.


Oh you mean a thing you could solve by public transport and proper street planning?

Or reducing commute due to remote working in non physical labour?

I feel that there are a lot of simpler ways to address the issues at hand.


Yeah, transformations like that literally take decades and require public approval. And there's absolutely no reason to believe the US is moving in that direction.

So sure, if you want to sit around and think 'I wish I lived in the Netherlands,' that's fine, but people actually have to live where they do.

And even if you started the most massive program of public transit and bike infrastructure, it would still be worthwhile to develop this technology.


Transformations like self-driving cars have been taking over a hundred years.

> 'I wish I lived in Netherlands'

Don't have to.


Is rebuilding the U.S. of 2021 to a totally different modal split really simpler than self driving cars?

Even California's high speed rail seems to be languishing in limbo.


Simpler in concept maybe. In practice, I am not so sure.

Just one example: In SV we voted for a tax increase in 2000 for a BART extension to San Jose. If we're (really) lucky, we might get it in 2030 after spending ~$10B. The total distance is ~20mi with less than 10 stations.

https://en.wikipedia.org/wiki/Silicon_Valley_BART_extension


> Self-driving cars can wait.

Absolutely not!

> Nothing of importance depends on it.

Lives of 35000 people a year does.


A question for you, and all self-driving safety proponents: if Tesla's FSD cars are so safe, why are they not taking on liability for the cars? Why is there only regular car insurance, punted on to the owners? When manufacturers of self-driving cars take on liability for all owners, or run taxi services with a monitoring driver who bears no liability, that is when self-driving is actually available. Till then it is vaporware.


> why are they not taking on liability for the cars?

Because that's still incredibly costly. We as a society care greatly about a decrease in net risk, but that doesn't mean that net risk is small enough for one party to bear.


That makes no sense. When they say "it's safer than a human", that should definitely translate to a smaller insurance cost than a human-driven car.


And it does. It just doesn't mean that one company can suddenly afford to be an insurer of tens of thousands of drivers.


If they make driving safer, they could insure cheaper and make more profit than existing insurance companies. Or they could collaborate with insurers to reduce the insurance cost for Tesla drivers, e.g. by underwriting the costs of deadly accidents, and splitting the profit. 35000 deaths, that's a lot of dough. A money making machine like Tesla must surely have thought of that, if it is remotely true.
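
The actuarial logic being gestured at here is easy to sketch. The crash probabilities and claim costs below are made-up illustrations, not real Tesla or insurance-industry figures; the point is just that if the system genuinely halves crash risk, the insurer can charge less at the same margin.

  def annual_premium(crash_prob, avg_claim_cost, load=1.25):
      # load covers the insurer's expenses and profit on top of expected claims
      return crash_prob * avg_claim_cost * load

  print(annual_premium(0.04, 15_000))   # "average" risk: $750/year
  print(annual_premium(0.02, 15_000))   # risk halved by the system: $375/year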


Tesla still don't have a public FSD. All they have is traffic aware cruise control with lane assist + stop signs. And some other bits and pieces.


They may not have public FSD, but they do have public FSD capable cars! Hopefully that’s clear to all the consumers involved.


all our cars' hardware supports some unspecified version of self driving software that doesn't exist; to use our version of self driving software you have to add this $10k piece, and while using this version of self driving software you'll have to be in control of the vehicle all the time.


So we should do absolutely nothing to improve the safety of cars (both for people inside and outside the vehicles) until magic self-driving cars provide the solution for all our problems?

There's plenty that can be done to make driving safer. Traffic calming measures. Stricter enforcement of traffic laws. Redesigning dangerous intersections. There's no need to wait for self-driving cars, and I suspect that you'll save far more lives by actually doing that stuff instead of hoping self-driving cars might solve the problem for you someday in the future.


Yep, and on another level of importance but still important within that level, untold hours of human idleness and boredom sitting behind the wheel. I think drudgery is not too strong a word.


Radar and AI does not fix the problem of cars being a dangerous way to travel.


This is, to a certain extent, a post advocating for regulation. Take the aviation example. It was certainly dangerous for early adopters when it was a nascent technology. But one of the reasons why the current level of aviation is relatively safe (they are one of the very few examples of five sigma quality) is because they are heavily regulated. Everything from maintenance to licensure to duty cycles are regulated.

I think one of the risks of the current paradigm is the inability of the US government to effectively introduce new regulatory legislation. The new method seems to be to put the onus on industry to regulate itself in lieu of creating actual legislation, which can lead to perverse incentives. In the instances where this isn’t the case, we have to deal with regulatory capture or the revolving door of industry/government, which creates its own skewed incentives.


The real risk with self-driving cars is managers rushing automatic software updates when bugs are discovered.


Schedule and cost risk are always going to be present. I don’t know how you’ll fully mitigate that given humans struggle with adequately gauging risk and asymmetrical incentives in the face of low-probability events.


> but we should still find a balance, unless we want to stagnate indefinitely.

I don't think there is any risk of stagnation. Maybe we should work on why we need cars in the first place. We can simultaneously work on public transport, or virtual presence, etc.


"Self driving cars" are just a subset of "autonomous vehicles". Public transport would benefit greatly from driverless buses.

Bus drivers are a scarce resource. Where I live, there is a shortage of people willing to rise at 4 a.m., take responsibility for 40-50 lives at a time and, at the same time, have unpleasant interactions with members of the public (meeting a few asshole passengers everyday belongs to the job, unfortunately).


You're right. I guess I'm thinking that mass transport systems with dedicated/fixed routes can be designed in ways that allow for much safer operation, versus an open-ended system like FSD. e.g. no risk of pedestrians on train lines.


We could also introduce safety features that use the FSD hardware and software, but only to avoid accidents rather than drive the car. Once you have a system that you know does a better job than humans at avoiding accidents, it is safe to introduce a system that actually drives the car.


i wonder about the commercial feasibility of restricting autopilot to specified zones, maybe even specified expressways similar to turnpikes. would this allow self driving technology the room to develop with a socially acceptable level of risk, or would it condemn it to go the way of the Concorde?


I've said this before, so apologies to those who've heard it already, but I believe testing big fast robots in public is irresponsible. These machines have already killed people. We know that human drivers kill and maim tens of thousands of people every year, so it's pretty obvious that robotic drivers will do so too, unless and until they become better drivers than humans. If somebody wanted to let their killer robots move around in public with limited supervision it would be obviously illegal, but somehow dressing them up as cars falls into a weird blind spot.

I've encountered at least two major objections:

> Human drivers kill people so it's okay if robot drivers do too.

But is it really okay for human drivers to kill people? Our complacency in the face of automotive mayhem is the result of a deliberate campaign of social engineering: "The Real Reason Jaywalking Is A Crime" (Adam Ruins Everything) https://www.youtube.com/watch?v=vxopfjXkArM

We went from decrying "speed demons" (reckless drivers who hit pedestrians) to victim-blaming "jaywalkers". Now, for example, more Americans have been killed by cars than by all the wars we've fought. (And that's just counting collisions, not deaths from air pollution, etc.)

> The machines will eventually be safer than human drivers (a belief I share) but they need to practice in real world conditions to do it.

So build a model city, populate it with people willing to take the risks, and test them there. (This is setting aside the additional idea of doing extensive modeling in simulated cities.) It would be expensive but it would avoid killing people who haven't e.g. signed a release agreeing to be test dummies in the robot car experiments.

The obvious way to make large, heavy, fast robot cars is to start with small, light, slow robot cars. Imagine a self-driving golf cart covered in nerf (soft foam) that doesn't go faster than about 5mph. There are applications for that and it would be much more difficult for such a robot to injure or kill people.

Going straight for self-driving cars (and can we please call them "auto-autos"?) seems to me to be hubris.


Though say for argument's sake you just want to reduce road deaths in total. Wikipedia has the annual world total at 1,350,000. Tesladeaths.com had confirmed Tesla autopilot deaths for 2021 at 10. On that basis you might want to crack ahead with developing self driving to bring the 1.3m figure down at some point.


Those numbers aren’t remotely comparable. Airline autopilot deaths are tiny compared to human pilots, but they do different jobs and are available in different kinds of planes flown by different kinds of pilots in different situations, so you really can’t compare auto-pilot miles to non-autopilot miles by just comparing deaths per mile.


In Germany 2,719 people died in traffic in 2020. In the US it was around 40,000. Germany has 1/4 the population, so maybe there are many other factors that can be tweaked before we let 5,000 lbs death machines roam around autonomously.


my feeling is I won't trust it until the company takes the liability. Tesla won't, so they clearly do not believe it works. Waymo does, so I'd consider riding a Waymo car.

To me it's a useless feature to have to be ready to correct the car's mistakes. Actually doing that is way more stressful than just driving myself. And if I ever get comfortable with what the car is doing, then I'll stop giving it my full attention, which is exactly what you're told you're not allowed to do.


> Tesla won't, so they clearly do not believe it works.

But Tesla offer their own insurance. Isn't that taking responsibility?


Is that a joke? You must be joking. Insurance? Responsibility??


How timely. I was almost hit by a Tesla while on a run yesterday in San Francisco. The driver was not paying attention at all, slowly rolling through a stop sign. While this is one data point and could happen with any type of automobile, this type of behavior in "autonomous" cars worries me a lot - the carelessness other posters have noted in which these drivers put down their guard and think the car is going to do everything for them. They put their full faith in this nascent technology and therefore put others' safety at risk. Sure, I am very excited for this technology to mature, but in the meantime, let's not fool ourselves.


This is another chapter in the battle of ethical mindsets that I see playing out over and over again in America, consequentialism vs deontology. From everything I've heard, Tesla self driving is safer per-mile than human control (the consequentialist argument for self driving). However, there have been a few high profile accidents recently with emergency vehicles (the deontological argument against self driving). You ultimately need to pick an ethical mindset to judge the viability of Tesla self driving. I think we have seen a trend in American culture recently toward deontological ethics, especially in places like San Francisco. I'm personally more towards consequentialism and am therefore pro Tesla self driving, if per-mile it is statistically safer than a human driver.


> Tesla self driving is safer per-mile than human control

You are mixing up autopilot on highway and FSD. And even then the numbers are comparing very easy stretches of highway driving where people feel comfortable activating autopilot to all driving miles.

Watch the videos on youtube for FSD beta. On most of them it does something or some things very wrong every couple miles and is saved by the hypervigilant driver. There is no way FSD is safer than human drivers right now. I could believe what you say is true about Waymo. But they are being much more responsible and it seems like SF doesn't have a problem with them.


> From everything I've heard, Tesla self driving is safer per-mile than human control (the consequentialist argument for self driving).

I think this might not be true! I looked at the numbers a few years ago, and Uber had a much higher fatality rate per mile driven than human US drivers. Waymo was better; I forget if it was better than human or not. I couldn't find comparable Tesla numbers --- only PR-style numbers that could be based on all sorts of weird definitions --- but I don't get the impression that they're at the head of the pack.

EDIT: The number to beat is around ~15 deaths per billion miles driven, on the road in the US.

https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...
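
For reference, the arithmetic behind a figure like that, using approximate pre-pandemic US totals (the two inputs below are rough figures I'm supplying, not from the thread, and different years and definitions move the result around a bit):

  us_traffic_deaths_per_year = 37_000   # approx. annual US traffic fatalities
  us_vehicle_miles_per_year = 3.2e12    # approx. annual US vehicle-miles traveled
  rate = us_traffic_deaths_per_year / us_vehicle_miles_per_year * 1e9
  print(rate)   # ~11-12 deaths per billion miles, same ballpark as the ~15 above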


Tesla reports aggregated accident statistics publicly (for its Autopilot product, not this current round of FSD Beta testing obviously): https://www.tesla.com/VehicleSafetyReport

And indeed, they're better than human drivers. People will nitpick a little around the edges (e.g. cherry picking drivers of cars of the same age or in the same price range, pointing out the lack of specific numbers in various categories, arguing about the definition of "on autopilot"...), but the general truth is that yes: the "better than human" bar has already been cleared. We are not seeing the rate of accidents from this system that we would see if it was unsafe. Period.


> People will nitpick a little around the edges

Sorry what? That is a comparison of highway miles with Autopilot (driver felt comfortable activating autopilot in that stretch) vs all miles in any situation. That's not a nitpick, it's a completely apples and oranges comparison.

And that's autopilot. If you watch a video of FSD beta you can't honestly say it's better than any sober human who has driven for longer than a week.


FSD beta has had no known accidents at all, though. The argument above was presuming that measurement and statistics can show whether a system is "safe" or not. If you refuse to accept that as an axiom I don't know how to reply.

The question in the linked article we're discussing isn't whether Tesla FSD is "finished" as a product, or perfect, or whatever. It's simply whether or not we should allow it to be tested in a wide beta on public roads.

And where's the evidence that we shouldn't?


> FSD beta has had no known accidents at all, though.

Do you have a source on Tesla FSD? I don't know much about it. The basic info we'd need is number of fatalities, number of miles driven, and to check that these numbers mean roughly the right thing.


You want a source on the absence of evidence? I don't have that. No one has reported an accident involving an FSD beta car anywhere I am aware of. The source is me.


You are a fine source! So fatalities = 0. Any idea about miles driven, by Tesla FSD beta cars? Or is there no public info on that?

If it was human drivers (in the US), you'd expect a fatality by around the 70 million mile mark. And fatalities caused by self-driving cars are very newsworthy, so you'd hear about it.


> nitpick a little around the edges

It is not nitpicking around the edges when the foundation is shaky and opaque.


That's "nitpicking" though. Unless you want to claim the existence of a very large population of "autopilot accidents" not present in these reports, it's just not possible to construct a scenario where autopilot is significantly more dangerous than a human driver.

You're attempting to use uncertainty or controversy in the measurement mechanism to stand in for an argument for the contrary point, and that doesn't work logically.

It's not killing more people, basically. That's what "safe" means.


No, this is not correct. Tesla persists in comparing autopilot miles (primarily highway driving, only good weather, recent model cars) to all miles driven by all road users under all weather conditions.

Tesla cannot claim autopilot is safer than a human driver based on this type of comparison.

Still, they fooled you and many others in this thread, so I guess their marketing works.
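
The objection is easy to make concrete with toy numbers; everything below is made up purely for illustration. If a system is only ever engaged on inherently easier miles, its crude per-mile rate beats the all-miles average even when it adds zero safety.

  # Suppose highway miles are inherently 3x safer per mile than city miles,
  # and "Autopilot miles" are highway-only, with NO real safety benefit at all.
  highway_rate = 5 / 1e9    # crashes per mile (made up)
  city_rate = 15 / 1e9      # crashes per mile (made up)

  fleet_average = 0.5 * highway_rate + 0.5 * city_rate   # 10 per billion miles
  autopilot_headline = highway_rate                      # 5 per billion miles

  # Headline: "Autopilot crashes at half the fleet-average rate" -- true,
  # yet by construction the system added no safety whatsoever.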


Very much this is the reality. And moreover, people have been pointing this very, very obvious problem with the Tesla numbers out for years, and yet people continue to take Tesla's statements at face value. It is incredible to watch and the most solid demonstration of how weak the critical thinking skills are in the technology field.

For historical reasons, I have lots of friends who do what is now called "data science" professionally and literally every one of them has asked, unprompted, about whether the numbers Tesla reports are based on non-comparable sets, which they are, early in any conversation on Tesla FSD. It is totally obvious.


Next time people complain about "why do I need math class", here's a great example. Unfortunately, a huge part of the reason is simply being able to reason through advertising BS.


In my experience, the tesla crowd does not care, they'll just cite it in the next Tesla thread.


Do you have a source that constructs a story around existing knowns that does show AP to be unsafe, then? Because again, the deaths and accident counts simply aren't there. There's not enough of them to be "unsafe", no matter how you do the accounting.

That is, take the numbers we have and show that if you select out the right data set from public numbers about other kinds of driving, that the Tesla AP values are actually more dangerous. This surely isn't easy but it's absolutely doable (and, let's be clear, such an analysis would be worth a lot of money to Tesla's competitors -- there's a built-in incentive to do this). But no one has numbers like that anywhere I've seen. Likely because, as seems like the obvious hypothesis, there aren't enough events to make a story for a safety problem via any analysis.

So you are arguing about methodology and not results. That's pretty much the definition of "nitpicking". And it will never end. You'll insist that Tesla AP is somehow hiding phantom deaths for years, because it's your prior. But it's not supported, it's just not.


Tesla has this info but chooses not to share it.

However, here’s an analysis from a few years ago that should at least give you pause in claiming that Tesla’s are definitively safer:

https://medium.com/@MidwesternHedgi/teslas-driver-fatality-r...


Wow, that was a good read. Thanks for sharing


>I'm personally more towards consequentialism and am therefore pro Tesla self driving, if per-mile it is statistically safer than a human driver.

For me to accept Tesla FSD it should be several orders of magnitude better than a human driver. I don't want an 'average driver' driving me to work. Average drivers are collectively involved in 6 million car accidents per year.


But an average driver already drives you to work.


Is it not already several orders of magnitude better? The average human only drives a few thousand miles a year and experiences a varying number of accidents during that time. FSD drives millions if not billions of miles a year, and lifetime accidents are still in double or low triple digits since being released in 2012 (I think). A far cry from 6 million per year.


Has any neutral party compiled the data? It would increase our confidence in FSD.


> However, there have been a few high profile accidents recently with emergency vehicles

Interestingly even that's a bit spun. There was one recent accident, in August. And the NTSB going back discovered that there's a cluster of (I think) 11 others where the car behaved similarly and appeared to strike an undetected emergency vehicle without trying to avoid.

Now... that's interesting, and potentially represents a bug worth fixing. But it was also rapidly pointed out that (1) this is an extremely common kind of accident (human drivers hit these things all the time too) and (2) given the number of reported Tesla AP miles driven and assuming no other unreported collisions cases, Autopilot is actually about twice as safe as the average US driver vs. stopped emergency vehicles.


The issue of “per mile” is complicated though, it really depends on the mile you’re driving. A mile on I-280 is not a mile in SF, and I wouldn’t be surprised if it’s a lot safer than a human on the former but not the latter.


> From everything I've heard, Tesla self driving is safer per-mile than human control

There’s no evidence this is true and what evidence exists is more suggestive of the opposite.


> From everything I've heard, Tesla self driving is safer per-mile than human control

So far you've only heard this from Elon Musk, who is a serial liar and hype machine. Wait until you've heard it from a trustworthy third party with access to internal Tesla data.

Just from the total number of accidents, I don't see any way for FSD to be safer than human drivers. It's likely not to even be within two orders of magnitude. Human-caused fatalities are in the order of a billion miles driven, while Tesla seems to be hundreds of thousands or millions.


Is Tesla still comparing essentially 'highway miles driven' with their self-driving to 'all miles driven in all situations' for regular human controlled autos?


Not just that. It is "highway miles driven in a high safety rated premium car by a younger set of drivers on better roads in better weather"


Yes. And if you look at even the rosiest FSD beta videos online you know it's nowhere near as good at driving as a human, yet people always come into these threads talking about how if it's better than humans it should be used and ignore that the premise is false.


Of course, it depends which human you are talking about. I've been hit three times by human drivers in the past 18 months. Twice while stopped.


i made this argument a few weeks ago on HN, and was promptly downvoted by anti self driving shills.

I knew a girl (in the sf bay area) who hated self driving cars. She said people should just learn to drive better. While at the same time, you go to the DMV and people are on their phones in the testing area getting the test answers. Anybody with a CA drivers license should be audited by an out of state driving agency


I'm very pro self-driving cars (and Tesla), but there isn't data supporting the claim that Tesla's FSD system today is better than normal drivers. Elon has made some imprecise claims and never provided the data to back any of them up.

I do think it's reckless to not publish their data and allow public discussion of the tradeoffs. Tesla is subjecting everyone on the road to their interpretation of the data that these systems are ready and just saying "trust us".


100% agree that I’d love to see them release more raw data. It’d be fascinating to dig through.

I wonder if they have some internal cultural scarring from the TSLAQ debacle. People were weaving (now verifiably false) damaging stories about sales rates by flying private planes over parking lots. More recently, the media was claiming a Tesla ran into a tree on autopilot… until a regulatory agency tested it and concluded that couldn’t possibly have happened.

I could see the thought inside Tesla being “any data we could possibly release would be twisted and used against us; therefore, why bother?”


They're taking a calculated risk for sure and it may pay off. I just fundamentally believe that everyone else on the road should have a say in what self-driving systems are allowed to share the road with them. It's no different than requiring vehicles to have mirrors, brake lights, tires with >2/32" of tread, and everything else that we require to drive on public roads.


People aren’t consistent though. The argument for vaccine mandates is 100% consequentialist and more or less identical to the argument for pushing forward with self drive: net fewer people will die. The antivaxxers are deontological, arguing that a few vaccine reactions and deaths should halt the whole thing.

Most of the SF folks you are describing lean toward the pro-vaccine-mandate position. They should therefore also be for self driving cars.


There's also the factor that Musk is a high-profile 0.01%er who tweets a lot.


I find it hard to square those company reported stats with the extremely common experience of your tesla trying to drive through a red light or similarly insane maneuver.


How is it that we all rail against the naming of “unlimited data plans” and shit like that, and then let tesla get away calling their level 2 system “full self driving?” Or even worse, “FSD capable” without being capable of FSD today?


I’m confused by SF. They did this for Uber in 2016, Amazon’s Zoox, and a host of other companies, etc.

Tesla has a safety record that’s not unimpressive compared to the 36k people who die every year even though their failures have been more public and newsworthy. Their stats have been relatively objective.

We also have a framework for this: https://www.nhtsa.gov/technology-innovation/automated-vehicl...

As there is a framework in place already, shouldn’t we push forward? As long as we are reducing the statistical risk of driving below the risk of human drivers, shouldn’t that be the litmus for whether these programs are allowed to progress? I know I am being simplistic, and I understand that no statistical model will ever make up for the loss of a family member, but the generally accepted foundation for the morality of self-driving car technologies hinges on whether they can beat the human-driven statistical equivalent.

If there is concern about that they shouldn’t continue. But if there isn’t, these experiments should progress as they could ultimately reduce mortality.

There were 3,723 deaths in California in 2020. The lifetime odds of dying in a motor-vehicle crash for a person born in 2019 were 1 in 107. These odds suck. Is that really acceptable? Can we do better? We know we can. If we can reduce that number we should.

If we can’t, we should restrict this technology’s rollout until it can.

Am I overly objective?

I’m not an expert.

I mean ultimately I want legislation built on objective data and loving values, with human compassion at the core. I feel like sometimes we see these decisions based on how fast the system can adapt the ocean of existing automotive and roadway regulatory compliance to the technology changes, rather than on new realities. People have no idea how much the books and legal cases that many Caltrans engineers have spent their lives developing stand in the way of self-driving vehicles. I honestly think the biggest issue is probably the legal and regulatory overhead of responding to the technology rather than the technology itself, but since it’s so hard to quantify I don’t often bring this up. But the biggest passion I have seen is from those same engineering minds, who truly understand the complexity but also want to see it happen. Their timeframes are just long. I’ve heard 2045 and 2050.

Anyone more expert than I care to weigh in? I am not trying to weigh in with any merit; I am a newcomer simply sharing the output of conversations with people who will always be more qualified than I am. But I’d love to learn more about this.
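
As an aside on the "1 in 107" lifetime-odds figure above: a number like that is usually derived roughly as follows (the inputs below are approximate US figures I'm supplying, and the method mirrors the National Safety Council's approach of dividing the one-year odds by life expectancy):

  us_population = 330e6
  motor_vehicle_deaths = 39_000       # approx. US total for 2019
  life_expectancy_years = 79

  one_year_odds = us_population / motor_vehicle_deaths     # ~1 in 8,500
  lifetime_odds = one_year_odds / life_expectancy_years    # ~1 in 107
  print(round(one_year_odds), round(lifetime_odds))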


>Tesla has a safety record that’s not unimpressive compared to the 36k people who die every year even though their failures have been more public and newsworthy. Their stats have been relatively objective.

They haven't released a detailed deep dive on their statistics, other than some definitions of what counts as a crash. This is from the company headed by the "funding secured" guy. Elon is clearly willing to lie if he believes that it will serve his interests or mission.

Even if we accept their safety stats at face value, Tesla is getting ready to launch a new feature - "Full Self Driving" - which behaves substantially different from autopilot and has a much different operating design domain and intended functionality. They've had close to 2k internal testers and O(100) external testers of "Full Self Driving" for about a year - nowhere near enough time and data to get the same level of confidence they have for autopilot. Add on to this that it will operate in environments that are substantially more accident prone than highway driving or clearly marked single lane roads, and you can see why regulators might be worried by a larger rollout.


>This is from the company headed by the "funding secured" guy.

Even worse he's the guy willing to falsely accuse someone of being a child rapist because that person pointed out he had simply tried a publicity stunt that failed, badly, and his tech solution to a cave rescue with a huge media focus was actually worse than useless.

In other words, when correctly and honestly called out for spouting bullshit, Musk goes fully delusional in the nastiest way possible. It's pretty worrying behaviour. He may well go on to do amazing work, I don't know either way on that and I don't have to have an opinion on it either. Surely nobody sane would ever trust his word on anything for any purpose ever again after that display. It's the kind of behaviour I'd expect from a five year old.


> In other words, when correctly and honestly called out for spouting bullshit, Musk goes fully delusional in the nastiest way possible.

It's called narcissistic injury, a behavior associated with narcissistic personality disorder.


Or perhaps... Asperger's? Which Elon does have. Y'all are placing way too much emphasis on an outburst from an autist with a medical disorder.


There’s a pattern of deception on his part that is far more regular and extends well beyond that insult, and it can’t be attributed to Asperger’s - I didn’t even mention the pedophile insult in my parent comment.


> Surely nobody sane would ever trust his word on anything for any purpose ever again after that display.

TSLA is one of the most widely held retail stocks. Why do you think this is? Might you be assigning too much weight to this?


> In the short term the market is a popularity contest; in the long term it is a weighing machine

Warren Buffett

Elon is undoubtedly skilled at driving up hype - the popularity contest part. As for long term - Tesla themselves claim they never plan to pay dividends.


[flagged]


This reply is non-sensical and unrelated.

A widely held stock means that many people fundamentally believe in a company and its leadership, and are voting with their wallets.

#2 most widely held on Robinhood, #1 most traded on Fidelity this last Friday.

https://www.google.com/amp/s/www.nasdaq.com/articles/the-top...

https://eresearch.fidelity.com/eresearch/gotoBL/fidelityTopO...


Your argument is non-sensical - how are retail investors a good judge of character or safety when they can't even judge financial performance?

80% to 95% of retail traders/investors lose money in the stock market.

When Elon tweeted 'use Signal' (the app), they bought shares of 'Signal Advance', a totally unrelated organisation, and it soared 400%. Then there is GME and many other stories. Obviously they are free to do what they want with their money, but are these the people you would trust with, well, anything?

https://www.cnbc.com/2021/01/11/signal-advance-jumps-another...


To be fair, though, holding a position is really only an indication that you believe in the position — not in the company itself. If you think Elon Musk tweeting “use Signal” will send a speciously-related stock soaring, it makes sense to buy that stock!


> how are retail investors a good judge of character or safety when they can't even judge financial performance?

[Citation needed]

Price discovery is a primary function of a financial market. TSLA is widely held and highly valued. These are facts. The burden is on you to refute this and general platitudes and whataboutism doesn't cut it.

I'd go so far to bet that 85-95% of long term TSLA investors have made money, not lost it. Guess who's leading TSLA?


Is your premise here that retail investors are sharper than professionals? The two groups, on average, tend to have very different opinions of Tesla stock.


Is your premise institutional investors are not bullish on TSLA? Short interest is under 4%, and the company is one of the largest by market cap in the world.

Parent was suggesting no one should trust Musk, due to a few Twitter lapses.

My premise is that given the breadth, and depth of people investing in Musk's companies, there may be more to the story.


Look, if 85% of retail investors losing money does not convince you that they can't judge financial performance, then I probably can't convince you that the Earth is round.

Replace TSLA in your post with Bitcoin and you sound like a 'true believer'.

I am perfectly happy with people believing in/cheering on TSLA/crypto, just let's clearly tell that apart from an independent, unbiased assessment.


It only seems non-sensical because you seem to have shifted the bar from "sane" to "wise, prudent and good judges of character". This is the original hyperbolic claim that was being responded to:

> Surely nobody sane would ever trust his word on anything for any purpose ever again after that display.


> Even worse he's the guy willing to falsely accuse someone of being a child rapist because that person pointed out he had simply tried a publicity stunt that failed, badly, and his tech solution to a cave rescue with a huge media focus was actually worse than useless.

The impression I get is that it was just a silly attempt at tit for tat. E.g. the guy says "You're doing X for Y bad reasons!" while not having any evidence that you're doing things for Y reasons, so you shoot back with something in the same vein, Y being "getting publicity" and "being a pedo" respectively.

Yeah, they're not really the same severity of accusation/insult (particularly as Musk has a bigger megaphone), but raising the stakes is common if someone starts a fight with you.

Also note: "Richard Stanton, leader of the international rescue diving team, urged Musk to continue construction of the mini-submarine as a back-up, in case flooding worsened."


"So what if I burnt him alive for calling my hairstyle stupid? It was just tit for tat!"


> There were 3,723 deaths in California in 2020. The lifetime odds of dying in a motor-vehicle crash for a person born in 2019 were 1 in 107. These odds suck. Is that really acceptable? Can we do better? We know we can. If we can reduce that number we should.

This is the crux of the argument. However, it falls flat when you consider that many other simple "can reduce that number" measures have not been adopted – like fitting speed governors to cars limiting them to some speed limit, putting tracking beacons in cars to make them automatically stop at red lights, or putting devices in cars to prevent drunk driving.

I don't see a reason why a much more complicated technology should leapfrog these simpler ones and be put into production so quickly, potentially risking innocent lives in the process. In fact, I believe the reason why it is being rushed so quickly is arguments like yours coupled with a low understanding of the technology involved. People are viscerally more willing to accept the idea that an "artificial intelligence" is going to drive the car, rather than the idea that their car is a multi-ton hunk of metal that needs to be limited and monitored carefully.

> I honestly think the biggest issue is probably the legal and regulatory overhead of responding to the technology rather than the technology itself, but since it’s so hard to quantify I don’t often bring this up. But the biggest passion I have seen is from those same engineering minds, who truly understand the complexity, but also want to see it happen. Their timeframes are just long. I’ve heard 2045 and 2050.

Those timeframes seem normal to me. Rollout dates should not be decided by the whims of corporate shareholders. Let's put simple speed limiters on cars first, everyone can understand that technology – and see how that goes!


But people want to buy self-driving cars. And nobody wants self-limiting cars. So it’s not really a case of leapfrogging anything.

It’s not like manufacturers are waiting on the government to let them sell limited cars


Or get this, we could walk and never get into a car crash!


I take it you don't live in an American suburb then?


I think it was sarcasm


This is too simplistic a view. For example, a basic speed limiter can be dangerous when overtaking slower cars, where not being able to exceed the speed limit for a short moment can itself create a hazard.


See? You can visualize flaws with these kinds of simple systems fairly quickly; don't you think similar ones exist in the purportedly "self-driving" cars?

There are literally millions of situations where a Tesla might do the wrong thing, and some of them have been caught on camera.

If we can't put speed limiters on cars, we definitely shouldn't be putting highly complicated computers in charge with no humans at the wheel.


> If we can't put speed limiters on cars, we definitely shouldn't be putting highly complicated computers in charge with no humans at the wheel.

This argument makes no sense.

How do obvious edge cases invalidating a simple solution have any bearing on the feasibility of a complex solution being able to handle as many edge cases as a human driver?


I think the point is more of a bikeshedding problem: people are more willing to accept “AI” because they are not knowledgeable enough to come up with a counter-example, whereas given a simple automation like “don’t go above the speed limit” any Joe on the street can think of why that’s a problem.

But AI is not magic, it's just more rules.


If there’s no way to safely overtake a slower car without breaking the speed limit, then just don’t. A self driving car shouldn’t decide to do that either. Also, a speed limiter could easily be designed to allow exceeding the speed limit for a short moment.


Ideally true, but not true in practice. When you're on the open road and the speed limit is 55mph, but everyone is going 80mph+, then anyone going the speed limit is a hazard. The speed limit is just an abstract notion at that point. Sure you may be violating the law, but I'm more worried about laws of physics at that velocity.

You'll have people passing you on the left and behind you at +30mph. The cars behind you in your lane will form a queue waiting to pass you, and they'll be getting into the right lane needing to accelerate from 55mph to 80mph (hard for a lot of cars). This is going to cause the cars in the fast lane to brake, which will cascade through the lane and cause a traffic slowdown. If someone isn't paying attention, it's going to result in a rear-end collision and an hours-long traffic jam for hundreds of people.

This is one of my worries with self driving cars: They'll follow the letter of the law even when it's not practically safe.


Which is exactly why they are proposing to install speed limiters in all cars, which would address the first part of your argument.


What are the logistics for this? There were 263.6 million cars in the US in 2015 (source: https://en.wikipedia.org/wiki/Passenger_vehicles_in_the_Unit...), so I'm sure there are even more today. I could see having manufacturers install speed governors in new cars, but how would you coordinate installing them in all the "old" cars? There'd be a period (probably years) where some, but not all, cars have that limitation.


> When you're on the open road and the speed limit is 55mph, but everyone is going 80mph+, then anyone going the speed limit is a hazard

Surely it's the ones doing 80mph that are the hazard


No, it's the opposite in a situation where people are mostly going 80 and only a few are doing the speed limit. Try traveling down interstate 95, this is an every day situation.

Look at traffic as a fluid: A stream is calm when the water flows at the same speed and direction, when there is nothing to get in the way. Put a rock into the stream and now water has to move around it, causing flow patterns that collide. Traffic can be looked at as a fluid in the same way. If you want it to flow without friction, it all needs to go the same speed in one direction. Accidents and traffic slow-downs are caused when people merge and people have to slow down unexpectedly, which is going to happen when you're going 30mph under the prevailing traffic speed.

Yes, the person going 55 is following the law, but the law was written in a faraway room, a long time ago, by people not driving on the road in that instant. Sometimes, in the instant, following the law is not the safe thing to do. It's the same reason it's not safe to go 25 in a 55, and why trucks are required to put their flashers on if they're going under a certain speed: because they are a hazard if they are going too slow.


> trucks are required to put their flashers on if they're going under a certain speed.

This is not correct. Many states in the US do not allow the use of hazard lights while moving.



Any moving vehicle is a hazard, the faster it's moving (relative to the road) the greater the hazard

A stopped car won't crash. A moving car will.


My car has a natural speed limiter (the engine is only 56 hp), and it just means I don't try to do overtaking maneuvers that would be dangerous. If I had a supercar, I would try many more dangerous overtaking maneuvers. Which is safer?


Trucks have been running with governors for decades. That has reduced truck accidents, not increased them. It makes for some idiotic overtaking behavior every now and then (when one truck that goes 89 km/h overtakes a truck that goes 88.5, and it takes forever and then some), but that's a nuisance, not a danger.


> That has reduced truck accidents, not increased them.

How did you determine cause and effect here? There have been a LOT of changes to trucks, roads, signage etc.


This is the absolute state of this debate. You can not overtake if doing so requires exceeding the speed limit.

The best part is the same ignorant people develop the driverless cars, so we get Google cars that turn right across the bike lane instead of merging into it.


You can not overtake if doing so requires exceeding the speed limit.

Unless you are driving in the state of WA:

https://app.leg.wa.gov/rcw/default.aspx?cite=46.61.425


One of my first cars 'lost' the highest gear (5th) after a couple of years.

You adapt and do fewer overtakes, and only when it is absolutely safe.


I'm not sure in what scenario you would use your highest (ie slowest accelerating) gear if wanting to do a fast overtake...


> For example a basic speed limiter can be very dangerous when overtaking slower cars

Eh I have a speed limiter I use quite often just to make driving through average speed areas more chill, and it doesn’t stop me pushing the throttle and getting up to full speed if I need to.

It’s a bit smarter than “never go over the speed” - if you push the accelerator in a definite way the car will still respond, while if you hold your foot fairly still and only accelerate gently it will cap out at the max.
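
For the curious, the override described above can be sketched in a few lines. This is a minimal illustration only, assuming made-up names and thresholds (KICKDOWN_THRESHOLD, SPEED_CAP_KMH); it is not taken from any real vehicle firmware:

    # Hypothetical sketch of a "soft" speed limiter with a kick-down override.
    # All names and thresholds are illustrative assumptions, not real firmware.
    KICKDOWN_THRESHOLD = 0.9   # pedal position (0..1) treated as a deliberate push
    SPEED_CAP_KMH = 50.0       # configured limit for the average-speed zone

    def throttle_output(pedal_position: float, speed_kmh: float) -> float:
        """Return the throttle actually applied to the drivetrain."""
        # A firm, deliberate push overrides the limiter entirely.
        if pedal_position >= KICKDOWN_THRESHOLD:
            return pedal_position
        # Otherwise, stop adding power once the cap is reached.
        if speed_kmh >= SPEED_CAP_KMH:
            return 0.0
        return pedal_position

    # Gentle throttle at 52 km/h is capped; a hard push still gets full power.
    print(throttle_output(0.3, 52.0))   # 0.0
    print(throttle_output(0.95, 52.0))  # 0.95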


It’s baffling to me that robots are allowed to break the speed limit.


> I’m confused by sf. They did this for Uber in 2016, Amazon’s zoox, and a host of other companies, etc.

My understanding is that those companies had trained test drivers, but Tesla is pushing this out as a beta feature update and letting consumers be the test drivers?


> trained test drivers

Are normal people paid minimum wage who become complacent and occasionally fall asleep (e.g. the Uber accident). I would argue non-paid Tesla customers are at least as good as "trained drivers". There are 2K drivers who have been using the FSD beta for almost a year, without any accidents.


100 times this. Tesla Beta testers are people driving their own cars, and are - by selection of being able to afford a Tesla - on average more qualified than the minimum-wage test driver.

The term "trained test driver" is misleading. It sounds like "test pilot", which usually describes the best in their field. But paid test drivers are just people who applied for a low-pay job. Heck, they may even be tired all the time because they have to work two jobs.


> by selection of being able to afford a Tesla - on average more qualified than the minimum-wage test driver.

How does “income received” translate into “qualified for driving”? A good lawyer, doctor, IT consultant, … is probably on average a less safe driver than someone who has been through training, despite earning more.

If your worry is that the test drivers are tired because they need to work multiple jobs - mandate better pay. And from what I know from my doctor acquaintances, being tired from overwork is part of the job description.


One could argue that young drivers are disproportionately likely to get into accidents (inexperience, bravado...) and are also less likely to be able to afford a Tesla.

But even that is not really sufficient for supporting the statement either.


> are - by selection of being able to afford a Tesla - on average more qualified than the minimum-wage test driver.

You think having money qualifies someone for being a good test driver? This is one of the most classist comments I've ever read.

A responsible company would hire people based on some criteria of competence before putting them in the "supervisor" seat for an AI driving assist. Even an irresponsible company like Tesla or Uber will still do more due diligence for the competency of their test drivers than for their customers, whose only qualification is a wallet.


> A responsible company would hire people based on some criteria of competence before putting them in the "supervisor" seat for an AI driving assist.

Yeah, but this does not happen.

Source: How much is a test driver paid?

Exact wages vary at different companies. Cruise, the self-driving arm of General Motors, has advertised positions that pay $23 an hour. Meanwhile, Waymo, the self-driving entity of Google's parent company, has posted jobs offering $20 an hour.

Uber declined to disclose how much it pays its safety drivers.

https://money.cnn.com/2018/03/21/technology/uber-test-driver...


I just bought a car that costs more than the highest optioned Tesla.

By your logic, Tesla should be hiring me for FSD beta testing, based on that "selection".

I don't even know how to respond to your assertion.


Why does the salary indicate they are not chosen for their ability to do this job well? Do you also believe people who own a Tesla would be better at serving food than most waiters because they have higher income?


That is a critical difference for sure.


>> I know I am being simplistic, and I understand that no statistical model will ever make up for the loss of a family member, the generally accepted foundation for the morality of self-driving car technologies hinges on whether they can beat the human-driven statistical equivalent.

It's not about emotions; it's about statistics.

The problem that needs to be overcome is not the emotions of people losing loved ones, but the difficulty of accurately estimating safety by test-driving autonomous cars.

The following is from a Rand Corporation study titled: "Driving to Safety"; subtitle "How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?"

Key Findings

Autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries.

Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use.

Therefore, at least for fatalities and injuries, test-driving alone cannot provide sufficient evidence for demonstrating autonomous vehicle safety.

Developers of this technology and third-party testers will need to develop innovative methods of demonstrating safety and reliability.

Even with these methods, it may not be possible to establish with certainty the safety of autonomous vehicles. Uncertainty will remain.

In parallel to developing new testing methods, it is imperative to develop adaptive regulations that are designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.

https://www.rand.org/pubs/research_reports/RR1478.html

Bottom line, Tesla's (and everybody else's) "safety record" isn't really a safety record yet, and it will take a lot more time to know whether self-driving cars are as safe as human-driven cars. The stats you quote are not sufficiently informative.
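
For a rough sense of where figures like "hundreds of millions of miles" come from, here is a back-of-the-envelope sketch using the statistical "rule of three" (with zero events in n trials, the 95% upper bound on the event rate is about 3/n). The human fatality rate below is an approximate US figure, and this is only an illustration of the kind of calculation in the RAND report, not a reproduction of it:

    # Rough "rule of three" estimate: if a fleet drives n miles with zero
    # fatalities, the 95% upper confidence bound on its fatality rate is ~3/n.
    human_rate = 1.1 / 100_000_000   # assumed ~1.1 deaths per 100M miles (approx. US figure)

    # Miles needed (with zero fatalities) to claim, at 95% confidence,
    # that the autonomous fleet is no worse than human drivers:
    miles_needed = 3 / human_rate
    print(f"{miles_needed:,.0f} miles")   # ~273 million miles

    # Showing the fleet is, say, 20% *better* than humans with statistical
    # significance requires far more mileage - RAND puts it in the billions.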


Tesla is addressing this by releasing a great many vehicles with the FSD system installed, running, and feeding data back for review while not actually operating the controls. The result is that they can observe how FSD would have behaved over billions of miles of driving, noting where real drivers differ from the automation and why, without any real accidents attributable to the system under evaluation.


That's silly. We'd never release any product with ridiculous standards like "Test for 100 years".

Original horseless carriages didn't have any rules like that. We liked them anyway. That seems a more realistic model of adoption, than ivory tower perfect-safety pipe dreams.


>> We'd never release any product with ridiculous standards like "Test for 100 years".

The same timeframe does not necessarily apply to other kinds of systems. The study is about self-driving cars and the numbers quoted are about self-driving cars, in particular self-driving cars compared to human-driven cars.

The study starts by defining various industry-standard measures of reliability, for example scroll down to page 3 and look at equation 1 and equation 2. You can see that those are not about "perfect safety" but about measuring the probability of an accident and so on.

As a general rule, if you are confused by the text of a study, check out the maths. They should be unambiguous.


Sure, but the conclusion of the post was something like "we can't test these things enough". Which seems like a deliberate misrepresentation of the math as well.


I'm sorry, which post do you mean? Who said 'something like "we can't test these things enough"'?


Original horseless carriages had enough weird rules: https://en.m.wikipedia.org/wiki/Locomotive_Acts

Edit: it took almost four decades to improve them. We are only just now starting with driverless vehicles.


Fully agree; there are simply too many factors. All I know is that an FSD system shouldn't hit stationary vehicles.

Besides, I don't even see the value proposition for FSD other than maybe commercial trucks.

If people want to be driven around, maybe a taxi or public transport would be more economical and better for congestion and the environment in general. I'm not sure what people want to do while being driven in an FSD car. A human-to-human discussion is already possible. Watching a movie? Reading a book? Sleeping? None of these are practical in a driver's seat.


> The lifetime odds of dying in a motor-vehicle crash for a person born in 2019 were 1 in 107. These odds suck. Is that really acceptable? Can we do better? We know we can. If we can reduce that number we should.

This is like the old shitty political saying “Something must be done! And this is something, therefore this must be done!” It sucks that driving is so dangerous. That doesn’t mean letting Tesla do whatever they want will help improve things. As others have commented, the data on Tesla’s safety is apples-to-oranges. It doesn’t mean much of anything.


> the generally accepted foundation for the morality of self-driving car technologies hinges on whether they can beat the human-driven statistical equivalent.

On a side note, legally if a Full Self-Driving system kills someone ... who gets sued? The manufacturer? Is Tesla going to tank every single lawsuit from its Full Self-Driving system injuring/killing someone?


I don’t think this is a side note. In the article they appear to have issue with Tesla’s marketing of their feature and the reality.

On Monday, California's state regulator said: "Based on information Tesla has provided the DMV, the feature does not make the vehicle an autonomous vehicle per California regulations."

So the safety concern seems justified. Tesla said it’s full self driving. In reality it’s just assisted driving. Maybe they aren’t prepared to deal with the risk???


Too many are caught up in word-thinking, flustered about “full” when nobody buying it thinks it is 100% safe tech. People want to bitch about it, and do so with “but but but it says ‘full’ when it isn’t!”


Why do you think Tesla chooses to call it this way versus "assisted driving" or something similar?


How do you have autonomous robotaxies if the "full" in "full self driving" isn't "full" in the sense that every native english speaker assumes?


So ”full self driving” in the car world is just marketing for “assisted driving”?

This is why we’re never going to have self driving cars. If everything that they say in the industry has to be taken with a grain of salt, why would any city agree to take on that liability?

Doesn’t even matter if the tech saves lives. You can’t even be straight about what you can deliver.


But Full Self Driving isn’t the same as saying fully autonomous. The beta makes it clear you are still the primary driver of the vehicle.

Really we are deciding car company marketing terms are suddenly dubious when someone is legitimately on the cusp of a breakthrough.

VW Blue Motion… there is no blue in that motion.

Audi Quattro Engine… is it four cylinders? 4 litres?

Ford Mach-E — Mach has a defined meaning… seems deceptive to me.


Alright, let me just go to market with my “Cure For Cancer”. It doesn’t actually cure anything, but what’s that got to do with anything?


You won't get to market. The Susan G Komen Foundation will sue you into oblivion, first. Apropos of anything else, they have trademarks on anything resembling "for the cure" when it comes to cancer.

Source: https://en.wikipedia.org/wiki/Susan_G._Komen_for_the_Cure#Co...


If it cures most instances of cancer, nobody will complain that the moniker isn’t totally correct in all cases.

If FSD can handle >90% of my driving, I’ll not quibble.


By definition, the driver is in control of the vehicle 100% of the time when using a level 2 driver assistance system. These systems assist the driver with some of the steering wheel, accelerator, and brake functions, but ultimately, the human driver is always driving.


So explain to me why Tesla themselves put out a video that featured the text:

> The driver is only in the seat for legal reasons. The car is driving itself.

and talk to me about "it's only marketing, you should understand".

Oh, and as the owner of three Audis over my life, there is NO material from Audi that calls it a "Quattro Engine".

It's the "Quattro all wheel drive system".


So it’s Audi using a confusing name for 4WD (the accepted term).

What’s the accepted term for level 5 autonomy?

What’s the accepted term for level 2?

Do you think there are tesla consumers being tricked by FSD capabilities? Right now it highlights that it auto drives on highways but not on streets… which is pretty accurate in my opinion. Where is the dishonesty?


You’re really grasping at straws with the Audi thing. First it was something that has never been said, and now a word for “four” is confusing for naming a four wheel drive system.

Full Self Driving would be acceptable and accurate for level 5.

Now for level 2 advanced driver assistance.

“The driver is only in the seat for legal reasons. The car is driving itself” is a current Tesla statement. That’s inaccurate.

“The car is driving itself until it can’t, or it does something it shouldn’t. But you’re in charge, and we legally disclaim any responsibility for failures of the car to drive itself” doesn’t have quite the same ring to it, I admit.


Not clutching at straws. Using a direct comparison to a competing car company on marketing terms. If we are okay with companies calling features whatever they want. Then FSD is fine. If we aren’t, then let’s get the outrage train going on the entire car industry.


Your original phrasing was "misleading" and "confusing". "Quattro" may be vague, at worst, but I challenge you to define how it is either of those things.


Misleading: it originally referred to a rally car model made by Audi but is used to describe 4WD. It is usually advertised with shots of said rally car driving in a now illegal rally race.

Not misleading or confusing at all. I’m glad people focused on Audi’s Quattro out of my examples because it’s honestly the worst one.


Those other marketing terms are not trying to mislead people into behaving dangerously in an environment where people can get killed. Perhaps I'm a minority, but I absolutely don't want companies to use marketing weasel words when it comes to self-driving capabilities.

Let's not beat about the bush, it's clear that some drivers are already abusing the current "self-driving but not really self-driving" autopilot features. This steps up the bullshit to "oh, this time it is full self driving (but not the total, absolute, hands-off autonomous driving)". So which is it? If it's not fully autonomous, don't bullshit saying it's full self driving.

In other words, if there's no blue in Blue Motion, no count of 4 in a quattro and no mach in Mach-E, that's not likely to get people killed. But if there's no full self driving in something called "Full Self Driving", that can and will get people killed.


How will it kill people exactly? You mean the people that (despite the countless warnings and explanations) still think that they can completely ignore their surroundings… then what? The very cautious auto driving car that follows the speed limit and is criticised for being “too conservative” will… do what exactly?


You seem to think Tesla’s system is perfectly safe. I’ve seen nothing that indicates that—just the opposite, in fact.


Could you reference an FSD safety incident?

I know there have been some autopilot accidents but that’s not this system.


Do a web search for “Tesla fsd beta scary” for video examples.


The only examples I could find were of people uncomfortable with the automation, but not scared of an accident. Also no actual accident. Do you have a direct link?


My contention is only indirectly with the safety of the system, and mostly with the labeling and messaging. A cruise control system that is unsafe to leave unattended but isn't called "Fully Self-Driving" or even "Autopilot" is fine. There's no ambiguity. Everyone knows that cruise control is a dumb system that will happily crash you into the wall.

However, I have issues with calling and marketing something "Fully Self-Driving" and then adding paragraphs of text to explain that it's actually not fully autonomous, in fact it's not supposed to be autonomous at all and the driver has to have their full attention on the road and hands on the wheel and any other operation is dangerous. If this was something benign, I'd just consider it disingenuous and shady marketing (that seems par for the course for corporations), but in case of a 2 ton vehicle it's not just their own customers at risk, but everyone around them.

Yes, you and I know what this system really is and can approach it responsibly. But we see so much evidence that there are many people who have no scruples about ignoring this (willfully or not) and assuming that the system must be good enough, if it's called "Fully Self-Driving", to unpack a lunch on their lap.


You're either being facetious or ignorant. The marketing name of a model, especially nonsensical terms such as "blue motion", are clearly recognized by anyone as being pure marketing terms chosen for aesthetic reasons.

By contrast, car features generally have non-misleading names - "anti-lock braking system", "cruise [speed] control", etc. "Full self driving" goes beyond the pale any way you look at it. It is a very explicit name: it means very specifically that a car can drive itself, with no human intervention whatsoever. Even more, Elon Musk, the CEO of the company, has repeatedly claimed to buyers that FSD will allow the cars to work as robo-taxis that can work for their owners while the owners are otherwise occupied. This again very explicitly says that FSD doesn't require any human intervention whatsoever.

The fact that, after realizing that it obviously is nowhere near close to such a thing*, they started putting in warnings that Full Self Driving is in fact not full self driving in any way shouldn't excuse them. They have obviously repeatedly lied to and misled buying customers, and the name has to be changed.

*(and won't be in the next 50 years, most likely, until we actually start seeing AGI or something)


You think there are consumers out there who are being tricked by Tesla? Like they buy FSD and are like “what!?” This isn’t what I expected!? You lied to me!

Also FSD is offered as a subscription so you can “try it out” if you are cautious. Do people really see this as an issue?

There are MUCH worse things on the road than FSD driven Teslas - what consequence do people think will occur?


How long are you going to hold to your disingenuous argument?

"Full self driving" is an explicit term - "self driving" means the car drives itself. "Full" means it does it fully. It couldn't be any clearer what the term means.

Terms like "blue motion", "quattro", "mach-e" are just made-up branding terms that would mean nothing to anyone without knowing about them beforehand.


I’m not being disingenuous. I really don’t think what features are named on cars are important. The disingenuous position I see is that everyone agrees with me… except if Tesla do it.

FSD doesn’t mean anything to anyone either, as it’s never been done before.


> except if Tesla do it.

It's because only Tesla does it. Name an example of another manufacturer using a term as explicit and misleading as "full self driving".

You don't see other manufacturers calling ABS "fully tailgating safe".


Mercedes call their version: Intelligent Drive System

BMW: BMW Personal CoPilot systems

Audi: Traffic Jam Pilot (which they class as Level 3 autonomous driving, higher than Tesla classes both Autopilot and FSD).

This isn’t an opinion… this is fact. People hate on Tesla out of ignorance of the automotive industry.

Some more facts. Tesla make:
- the safest cars (no qualification required)
- the most efficient production cars
- the most advanced “auto pilot”, yet they class it at the lowest level.

The people posting nonsense in HN threads about safety issues in Tesla or FSD need to go look up cruise control crashes from any other manufacturer.

(I don’t own a Tesla nor am I invested. I do work with almost all the major OEMs in my day job)


there is no “quattro engine”


It's a marketing term for AWD, in which case it really is a perfectly good name, standing for the number four.


“quattro” is that term. there is still no “quattro engine”


Why don’t they call it 4WD - that is the accepted term for a car that drives on all 4 wheels?


Why do we name things? Why do we have names for anything?


Agree. So FSD for something that drives a car for you. Is as valid as Quattro for a car that drives on all four wheels.


Full self driving means something: FULL, SELF, and DRIVING - something that fully drives itself. Quattro is just a word for four... and there are four things that do things.

One is currently an absolute, total lie, which should result in the termination of the company; the other is not in any way misleading.


Quattro was a rally car model. They used the name to describe their 4WD drivetrain. They constantly advertise it using the shots of that rally car racing in that (now illegal) rally race.

But sure, FSD is the irresponsible one.


Tesla has nothing to do with the person either? Not founded by anyone named Tesla - so clearly dubious marketing too?


Sure. My point exactly was… no one seems to care about the countless meaningless terms automotive industry uses. But suddenly we are dissecting the possible Meanings of a feature that actually is pretty close to what it says.

Autopilot keeps your heading and velocity and is aware of objects in its path… so that’s extremely similar to autopilot in a plane.

Full self driving drives the car on its own. It still monitors driver awareness. But the car does most of the driving. I don’t see the issue. It does what it says.


> It does what it says.

> Full self driving drives the car on its own. It still monitors driver awareness. But the car does most of the driving.

Full directly contradicts most


Again, it’s closer to what it does than Blue Motion. Or Toyota’s latest “Safety Sense” (even though it senses hazards, aka forward collision detection…)

I can reel off examples all day that no one bats an eye at. But Tesla do something… and it’s a whole different ball game.


Uh, neither of those even suggests what it does, while Tesla's says it does something it doesn't do.


So esoteric meaningless names are good.

Descriptive names are bad, because: I don’t think full self driving is full if there are any interventions.

(Despite there being a sliding scale of autonomous driving from levels 1 to 5, with FSD easily falling into 3 if not 4.)

Tesla’s official literature classes it at level 2. But yep, super misleading compared to “Safety Sense” (aka forward hazard detection). Do you know what Tesla calls that feature: “forward hazard detection”. Those bastards with their descriptive names!


When the system is registered as a level 2 system, even if it's fully autonomous, the human driver is responsible.


I'm pretty much 100% certain it'll end up being the driver who engaged the self-driving mode in a situation where it wasn't safe.

Think of it like firing a gun into the air. You didn't aim to kill anyone, but if someone is unlucky enough to be hit when the bullet comes back down you are still responsible.

If someone is unlucky enough to be struck by your FSD vehicle it'll still be your fault, regardless of whether you could have known it'd happen.

Drivers will need to weigh trusting their car's software against the risk of being responsible for what it does.


> But if there isn’t, these experiments should progress as they could ultimately reduce mortality.

Another aspect that is important but easy to miss - deaths caused by an autonomous vehicle are learning experiences - once a couple have happened there will be an engineering response and they will stop happening.

Compare that to ordinary drivers, where each death is just a pointless waste of a life. There is an argument in favour of putting these cars on the road even if they were more dangerous than humans. The faster we push through the painful learning experience the sooner the payoffs of superhuman driving will start being felt in the death statistics.


> once a couple have happened there will be an engineering response and they will stop happening

That's the mother of all assumptions. They might get worse as a result of the engineering response. You are assuming that there is a perfect understanding of the problem space. There isn't. This is not a 'mere matter of engineering' at this stage.


It isn't that big an assumption given how the engineering responses to incidents include incorporating those incidents into their simulated testing processes.

The point is that the understanding of the problem space increases with each incident. This doesn't mean that regressions are impossible (they are probably inevitable), but each regression also increases understanding of the problem space.


> Another aspect that is important but easy to miss - deaths caused by an autonomous vehicle are learning experiences - once a couple have happened there will be an engineering response and they will stop happening.

A death isn't the same as running into a stationary emergency vehicle of course, but AFAIK Tesla hasn't taken any meaningful engineering response to that issue. Assuming they will to other issues seems overly generous.

OTOH, Waymo seems to have only crashed into one bus, assuming it would move over.


That’s fine and all, but who’s going to volunteer to die for no fault of their own, just to progress science? Heck, what if it isn’t even someone who signed up for the FSD beta but just a random bystander the car decided to run over.


You’re missing the point: Random bystanders are run over by human drivers all the time. But when a human driver makes a mistake, we can’t analyze what happened and push out an update to everyone else.


That’s fine, but you can’t sign people up for medical trials just because people die from diseases anyway. You don’t get to force other people into becoming Guinea pigs for science.


Unfortunately or not, this is exactly what Deaths per X miles driven statistics actually are. And that’s the statistic set at the base of automotive safety policy decision making.

https://www.iihs.org/topics/fatality-statistics/detail/state...


> I’m confused by sf. They did this for Uber in 2016, Amazon’s zoox, and a host of other companies, etc.

Each of those companies, and Waymo, etc., all agreed to accept liability for any damages from their efforts.

Tesla is sticking with "the driver is responsible, not us". So yeah, that's why there's a discrepancy.


Why is it every time self-driving comes up, proponents act like there's nothing between forging ahead towards L5 and improving over an unaided driver?

Currently AP's "self-driving" features cannot save a life, they have killed people.

To understand that better, examine AP in finer detail:

Teslas have safety features that can intervene when the current situation seems to be leading to an accident. Let's call those correction. Those can save lives, and have.

Tesla also has features that are supposed to be more convenient for the driver. The claim is they take a load off for the driver. Call those the convenience features.

-

The convenience features have not evolved to the point where you are ever allowed to be less vigilant than you would be otherwise! By definition if AP is driving, you too must be fully attentive and "driving" hence the increasingly intrusive detection which has now evolved to detecting wheel input every 15 seconds...

Now realize that while for the sake of examination I'm describing convenience vs correction, these are all systems in one car. You can get the correction features without FSD, but you can't get the inverse.

Because of this, the convenience features alone cannot avoid an accident that the avoidance systems would not have.

-

This is tricky to think about, but consider a driver who would have drifted off the road without AP. As soon as the driver deviates too far from what AP would have done and starts touching a lane marker, the lane detection systems realize there's an issue and correct.

However, say there's a tricky lane marking where the correction system wrongly believes the driver input is wrong. They get an alert and an attempt at correcting steering inputs.

But because the driver is already inputting a direction with full intention, this wrong input is already actively being overridden by the human who plotted their turn and can clearly sense that they should fight the suggested input before they end up running off the road.

With the convenience features enabled, the driver's input will come after the wrong action has already been taken!

If the avoidance systems knew that the convenience feature was about to send you into a jersey barrier... the convenience feature wouldn't be "allowed" to send you into a jersey barrier!

-

I want to see studies that focus on the safety of driving with full driving aids but without convenience features. Claims that you are less tired from using them show that it's an inherently unsafe practice since you should be exactly as vigilant as normal.

Because of what's explained above, the expectation would be that there is no drop in safety if your car only corrects when there is a hazard... but there is a drop in safety when you are falsely lulled into a sense of safety.

To me WIP self-driving should be monitored by trained people. It shouldn't be a box you get to tick in your infotainment screen.

If Tesla wants to carefully pick who's allowed to run these betas more power to them, but the number of idiotic videos I can pull up right now of people intentionally testing FSD updates in situations they know are hairy and risking other people's safety is insane.

That is just setting up self-driving to be dead on arrival.


> Currently AP's "self-driving" features cannot save a life, they have killed people.

That is just not true. Distracted/impaired human driving is a real thing that kills people. Just two weeks ago there was a DUI event where a Tesla driver on AP literally passed out behind the wheel and the car was trivially brought to a stop anyway.

> This is tricky to think about, but consider

This is postulating a two-mode failure! That's just bad engineering. Yes, that can happen, but by definition (involving two rare events overlapping) it's an extremely rare condition and will show only a vanishing frequency in any final statistics.

Never compromise core features (in this case "don't let distracted drivers hit the thing in front of them") chasing this kind of thing. Work on the secondary/tertiary modes once you have everything else down.


> Just two weeks ago there was a DUI event where a Tesla driver on AP literally passed out behind the wheel and the car was trivially brought to a stop anyway.

Nope, and this is a perfect example!

AP didn't save that person's life; or more specifically, AP wasn't needed to save that person's life.

Tesla made Emergency Lane Departure Avoidance standard on all Teslas a while back.

Emergency Lane Departure Avoidance only intervenes if the driver deviates from expected behavior

So situations where AP would avoid a collision while being in control, the car still has a feature using the same sensor suite and the same capabilities as AP to intervene.

A Tesla without AP would have stayed in its lane in the absence of driver input and slowed to a halt with the flashers on.

Honestly a better outcome...

It's as simple as, AP is half heartedly handling the happy path + the unhappy path.

It's ok to not have perfect handling of the unhappy path as it does currently, something is better than nothing in 99% of cases. But it's not ok to have this mode that falsely gives drivers the impression they're covered on the happy path.

Like your example could not be more perfect because a person driving under the influence relied on AP to handle the happy path for them. Now you can get into "well maybe they would have driven anyways if it didn't have AP"

And that's where emergency LKA and AEB come in. They wouldn't have given this driver a crutch to get on the road, but if they still chose to drive, they'd avoid an accident in the same manner AP did once things went off the rails.

This all goes back to the fact that the convenience features by definition cannot save a life. AP, as in the feature manually activated to assist the driver actively, cannot save a life, because it only works on the happy path. Other features like LKA and AEB are what save lives

> This is postulating a two-mode failure! That's just bad engineering. Yes, that can happen, but by definition (involving two rare events overlapping) it's an extremely rare condition and will show only a vanishing frequency in any final statistics.

> Never compromise core features (in this case "don't let distracted drivers hit the thing in front of them") chasing this kind of thing. Work on the secondary/tertiary modes once you have everything else down.

I have no idea at all where any of this is coming from... please quote where I suggested core features should be compromised for secondary features.


> A Tesla without AP would have stayed in it's lane in the absence of driver input and slowed to a halt with the flashers on.

That's not how those systems work. You're confused, I think. If you let Jesus take the wheel in a Tesla under manual control it will absolutely depart your lane and hit whatever you are aimed at. The emergency features are last minute corrections to avoid an imminent collision, and they work about as well as other car vendor's products do, they reduce the likelihood and velocity of an immiment collision. They don't drive the car for you. The feature that does is AP.

> please quote where I suggested core features should be compromised for secondary features.

You want to refuse autonomy solutions to everyone to save hypothetical bikers at sunset in Mendocino! (Edit: sorry, I got the wrong elaborate example and was paraphrasing another poster. You want to save drivers where they incorrectly countermand an incorrect autonomy decision in a circumstance where the incorrect autonomy would have been correct. Or something. But the logic is the same.)


> If you let Jesus take the wheel in a Tesla under manual control it will absolutely depart your lane and hit whatever you are aimed at.

You are wrong.

https://www.teslarati.com/tesla-lane-departure-avoidance-sav...

I understand your confusion though, Tesla is well behind other manufacturers here.

VWs have had this behavior for years now, almost as soon as they started dabbling in advanced safety assists, meanwhile Tesla got it just two years ago.

https://youtu.be/TITEf_taUto

Kind of shows where the priorities were early on, saving incapacitated drivers vs "look ma no hands!"

-

Also your second quote just leaves me with more questions.

What on earth are you talking about

> You want to save drivers where they incorrectly countermand an incorrect autonomy decision in a circumstance where the incorrect autonomy would have been correct.

What?


I don't know what to tell you. I have tested this feature on my own car, and its behavior matches the public documentation (c.f. https://www.tesla.com/blog/more-advanced-safety-tesla-owners). It will scream at you as it departs the lane, but it will depart the lane. It will correct only if it detects an object or barrier it needs to avoid. It does not do what you think it does. Tesla does offer a product to do that. It's called "Autopilot" and it absolutely works great. For example it prevented a collision (or even lane departure) entirely in that DUI case we're discussing.


Your test sounds like it failed then... did you read your own link?

> Emergency Lane Departure Avoidance is **designed to steer a Tesla vehicle back into the driving lane** if our system detects that it is departing its lane and **there could be a collision, or if the car is close to the edge of the road**.

And again, AP acted as a crutch that an intoxicated driver attempted to lean on.

Now an intoxicated driver could have decided to drive without the crutch... but emergency lane departure avoidance would have kicked in when they passed out, keeping the vehicle centered and bringing it to a stop instead of trying to maintain a certain speed.

(And let's head off the straw man here: no, I'm not blaming AP for an intoxicated driver, but I am stating the reality that AP - Autopilot - is marketed in a way that causes sober people to think of it as a co-pilot, let alone drunk people.)

AP literally disengages when off the happy path, it's other safety features that help when things go wrong.

It's amazing that people still haven't caught onto this.


You just keep digging here, and it's all wrong. I don't know how else to reply but with facts:

> AP literally disengages when off the happy path, it's other safety features that help when things go wrong.

AP never disengages autonomously, ever. You turn it off via driver action, either via steering input that countermands it or via the stalk control. (Same deal for TACC -- it's on until you hit the brakes to kill it).

Really, go rent a Tesla for a few days and experiment. I think you'd find it enlightening.


Wrong again...

https://jalopnik.com/this-video-of-a-terrifying-high-speed-t...

https://www.techtimes.com/articles/254126/20201112/watch-tes...

> Autopilot disengaged about 40 seconds prior to impact due to the Tesla issuing a Forward Collision Warning (FCW) chime

Here's a reckless driver in a Tesla assuming AP could handle unsafe speeds on their behalf.

Once again AP is not the one to blame but... a driver tries to use it as a crutch and it disengages due to the FCW.

Due to the complete lack of input from the driver, many suspect the driver didn't realize AP would be disengaged by the FCW, and while AP disengaging would have given an alert, it'd be at the same time as the multiple FCW alerts they got.

So just like the DUI, the safety features would have worked *better* without AP. If they had been driving without AP they would not have expected a now-disabled feature to respond to the FCW for them...

-

Seems like you didn't realize this was the case either with FCW and AP, so you're welcome....

Ps: My Model S test drive (remember those) was not going so hot when the headliner started peeling off the demo car, but then went cold when the not-a-salesperson panicked when I didn't disable AP fast enough (since we were approaching a section of lane markers in their test circuit where it was known to veer towards barriers)

I'm pretty familiar with Teslas though, I was a huge fan until gestures to everything from AP marketing to silent CPO->used change, etc.

Pps: Please don't go and try this on a public road to prove an internet stranger wrong.

You already admitted to trying to test ELKA (which won't intervene until things are going seriously wrong.)

You're not going to be able to test a FCW with AP without doing something silly...


It’s a setting. You can have a just a warning or a corrective maneuver applied.


Emergency LKA (which maneuvers) turns on every time the car is started regardless of the setting and has to be re-disabled unless something changed since the press release


>Tesla has a safety record that’s not unimpressive compared to the 36k...

But when Tesla cars were compared to similarly priced cars, that record dropped through the floor.


I’m not sure why we would care about controlling for price of the car…


There are many reasons why one could be interested in that.

On the personal decision making level you might want to know what car provides the best safety record for your budget. If you can afford a tesla you would be interested in the safety of similarly priced cars to compare with.

On the society-wide statistics level, Tesla drivers are not exactly like the whole driving population. If for no other reason, the price selects a specific demographic with its associated behaviour patterns. By controlling for the price we can hope to better control for these differences too.


You’d also want to control for age of car. Recent expensive cars are more likely to have advanced safety features that older or cheaper cars lack. Maintenance can also be a factor.


Price indicates class of the vehicle (premium in Tesla’s case), so they can be compared with cars in the same class that have similar safety features.


What were the numbers, out of curiosity? Tesla is worse per mile for safety than other similarly priced cars?


I don't remember, except that I found it significant. I tried to find the article - I think it was a university that did the study - and if I find it I will post it.


Product safety is more nuanced than simple statistics in aggregate. Assuming that Covid-19 has a mortality rate of 2%, imagine if there was a medication that cures Covid-19 with 100% efficacy but has a side effect of causing immediate death for 1% of the patients who take it. One could argue that we'd be better off in aggregate, but I hope you can see why this hypothetical medication wouldn't be approved.
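
To make the aggregate arithmetic explicit (a quick sketch using the hypothetical 2% and 1% figures above, nothing more):

    # Hypothetical figures from the comment above, applied to 1,000,000 infected people.
    infected = 1_000_000
    covid_mortality = 0.02        # 2% die without the medication
    medication_mortality = 0.01   # medication cures the disease but kills 1% outright

    deaths_without = infected * covid_mortality      # 20,000
    deaths_with = infected * medication_mortality    # 10,000
    print(deaths_without, deaths_with)               # aggregate deaths are halved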


Why wouldn't it be approved? Vaccine 'side effects' might not sound serious, but they are absolutely deadly to some if we give them to millions of people. For the elderly, a 1% risk of death might be far better than the risk of Covid, since mortality shoots up once you get older. Obviously you shouldn't give this vaccine to anyone whose risk of death by Covid is less than 1%.


I'm not referring to the vaccine. Assuming you've been infected, if you have a 98% chance of surviving this disease, would you really be okay if you were forced to take a medication that has a 1% chance of killing you?


It wouldn't be framed that way. It would read: "Chance of recovery improves from 98 to 99 percent."

Chemotherapy regularly kills people. I would absolutely consider it: taking a lot of pain to get a chance at extending my life.


Chemo is a good example. It's used in a situation of last resort where probability of dying is already very high and if I were in that situation, I would too.


If my options are to be forced to have the medication or prevented from having it, I’d rather be forced to have it. It reduces my risk by half so why not?


YouTuber Mahmood Hikmet analysed Tesla safety numbers in his recent video https://youtu.be/DxXiTf4sO54?t=1190 and it doesn't look good for Tesla.

"Tesla autopilot saves lives" is a bold claim.


I'm pretty frustrated that Tesla has taken fantastic innovation in bringing electric cars to market and marred it with these stupid fixations on both self-driving tech and things like ridiculous steering wheels.

Go ahead and add more tech to avoid collisions and improve safety. But quit it already with these features that seem like the manifestation of Elon Musk's personal hangups.


> ridiculous steering wheels

I really don't get how regulators are OK with the yoke steering in a car! To me it says that Tesla really doesn't prioritize safety.


Tesla FSD has many parallels to the Theranos Edison device.

They don't share the data. They don't work as promised. They charge investors up front for what is essentially vaporware. They advertise that the product can, or will be able to, do much more than it currently does. They both put people's health at risk.

At least Tesla has a proper usable product, an electric car.

Good on sf to raise concerns, too many other governments are just looking away.


Instead of using the Edison device, as far as I understand, Theranos performed blood testing using traditional providers but hid this fact from investors. FSD, on the other hand, is something 3rd parties can evaluate on their own, the flaws are there for all to see. Hype, while not always okay, isn't fraud.


>Hype, while not always okay, isn't fraud.

There is a legal concept of “theft by false promises” though. I don’t know where the line is drawn between that and “hype”.


In 2016 Tesla falsely claimed that all Tesla vehicles being produced had the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver:

https://www.tesla.com/blog/all-tesla-cars-being-produced-now...

It doesn't seem to have hurt Tesla so far. People are still spending money on the undelivered promise of full self driving.


Not fraud if they give you a free upgrade to more capable hardware.

https://www.tesla.com/support/full-self-driving-computer


I don’t know that I agree, especially if you used the previous software in a way that caused real harm or loss.

Imagine you were a financial consulting firm and I sold you software that guaranteed that you could make 20% per year every year for your clients. But, oops, my software couldn’t do that initially and you lost millions of dollars, garnered a bad reputation, etc. Would it be okay if I eventually delivered an update that fulfilled that promise at a later date?


Well, but where is the software?

What is the cut-off point for those who believe in Musk's empty promises? 2 years? 5? What if it doesn't materialize and the first cars near end of life?

Is the "ownership" transferable?

At what point can someone say "not delivered as promised"?

It seems like Tesla is investing more in promotion and marketing than in doing the right thing, while others simply work on a solution silently.

I wouldn't be surprised if Tesla isn't the first to market with FSD.


A free upgrade to hardware that's still not capable of full self driving.

Tesla promised that full autonomy will enable a Tesla to be substantially safer than a human driver, lower the financial cost of transportation for those who own a car and provide low-cost on-demand mobility for those who do not.

When will Tesla deliver what was paid for?


What if they don't deliver over the entire lifetime of the car (very likely)?


Tell me the difference between fraud and hype and I'll have the games industry arrested.


I know the question was rhetorical. I think the case is made easier when one can point to a realized loss in terms of property, money, life/limb, etc. I think there's a much straighter line to that with safety-critical software than with gaming.


FSD regularly drives me on highways for hours without intervention…pretty amazing stuff considering where we were 5 years ago…say what you will about it, it’s not vaporware and gets better every update


"FSD" might be vaporware, i.e it's misnamed because it's not "full" self driving.


Yeah, the majority of the pushback probably comes from people expecting too much of the system because of overzealous marketing, but the tech is slowly getting there regardless.

Whoever says the software flat out doesn't perform either hasn't been keeping up with development or has other motives.


What marketing though?


I think calling it vapor ware is uncalled for, you can test drive a Tesla with FSD and see for yourself the level of autonomy it’s achieved relative to other cars.

More than anything else it’s marketing hype, and I agree calling it full self driving when it has documented flaws is disingenuous. But it’s not vaporware.


"level of autonomy it’s achieved relative to other cars."

Well, Theranos machines did exist and provide results - they were just wrong - so they weren't vapourware either. It's just that in healthcare there is a clear benchmark they must be able to meet.

There isn't universal agreement on the benchmark for calling it FSD, and there is debate as to its safety. Personally I am not happy with any self-driving that accidentally drives into a stationary solid object, like a wall or a trailer - that should be impossible.


I'm not happy with human drivers doing that either, but it does seem to happen. And that is after all the real life benchmark.


> Theranos Edison device

Please. You can go online and buy a FSD beta car right now. Countless videos on YouTube of the product working or being improved upon release-to-release.


So can I sit in one and do that coast-to-coast trip and watch TV and sleep in the meantime?

I remember there was a promise of some coast-to-coast trip; what happened there?


They didn't promise you being able to watch TV and sleep, but there are countless videos of long trips. Go watch one.


Ok, what did they promise?


From those videos it's obvious the product is definitely not working if we take the name of the product at face value


Comparing Tesla to Theranos is laughable. Tesla FSD/Autopilot are already saving lives. There are countless examples of recorded drives on YouTube where you can see completely safe long drives on crowded city streets with no driver intervention. It's enough to look at FSD 1.0 (less than a year ago) vs. current FSD 10 to see the crazy rate of improvement.

There have been 2K testers for a year without any accidents. I would much rather my life be "at risk" with Autopilot than with a drunk/tired/distracted human driver.


> It's enough to look at FSD 1.0 (Less than A year ago) vs current FSD 10 to see the crazy rate of improvement

We still haven't gotten basic technologies like speech recognition and OCR to work without proof-reading, and those have been around in consumer technology form since at least the 80's and have had an order of magnitude improvement every decade. These "fuzzy" technologies have an asymptotic improvement curve that never reaches 100% and self-driving is several orders of magnitude more complex.


You're right, autonomous driving will never reach 100%; however, I'm convinced it will be 10x human in a few years, mostly due to:

* Better sensors - ~10 cameras around the car vs 2 cameras on a limited gimbal.

* Very high consistency - Speed, distance from other cars, optimal stopping distance from a switching stop light, better speed/distance estimation of other entities (eventually)

* No distractions - a computer will never drunk drive / look at their phone / argue with the passenger / have road rage.

IMO these advantages will outweigh the very rare ambiguous soft situations a computer will have a hard time handling, say a custom-written no-entry sign or a cop waving their hand.


"These advantages will out-balance the very rare ambiguous soft situations"

A Tesla has driven into a solid concrete wall; that does not seem ambiguous to me.


Humans have done that too. Meanwhile Tesla's Autopilot has many instances of saving humans who would've otherwise crashed.


There is always the argument that Tesla is safer than drunk or otherwise impaired drivers. First, I would not be too sure about that, but I would rather the governments implement a DUI check as well as some sort of alertness test before an engine can be started than trust the current FSD software. Put speed limiters tied to the given location and the DUI check in all cars, then let's compare the numbers again.

And how can a Tesla save any driver when any driver is supposed to be in full control of the vehicle at any given time? If a driver needs Tesla as a saviour, then he has no business being behind the steering wheel.


We don't need more government in our lives. However instead of all that overbearing safety equipment, why not just have automatic driving?

> "when any driver is supposed to be in full control of the vehicle at any given time?"

Because this is not reality. Nobody is in full awareness and control of the vehicle at all times; that's just how humans work. Even if we were, we still don't have the same reaction speeds. If you actually watch the Tesla examples, you'll see lots of aware drivers who just didn't notice or react in time to a collision.

Your entire premise is faulty. We can't work with what should be but rather what actually is.


You're making a claim that in a counterfactual world, FSD might be safer than human drivers. Even if this were true (which obviously there's no evidence of), why is that an argument against FSD in the real world?


Have you ever heard of an alert, sober driver just driving into a wall for no reason? Surely 99% of the time they were drunk, or texting, or falling asleep at the wheel.

I am not using FSD until it's better than an average professional driver, like a bus driver, who has to be sober, etc. If something goes wrong there is not enough time to intervene.


Yes, people have driven into walls, buildings, cars, lakes, even other people, while being completely aware.

Many accidents (including the ones that Teslas have prevented) involve completely normal drivers who made mistakes. If AI can make fewer/no mistakes, and react better to mistakes by others, then it's a better overall outcome.


Do you currently drive a car? Is there any evidence that you're an above-average driver? If you answered "yes" to the first question and "no" to the second, then why wouldn't you trust a self-driving car as soon as it's known to be better than the average human driver?


You are doing a bait-and-switch:

I am not a better-than-average driver in skill; I am safer than average.

Being 'better than average' by comparing to national crash statistics means better than a half-drunk guy who is sleep deprived to boot.

I don't drink, I don't need to drive for 8 hours straight, my vehicle is in good condition, I don't have to deal with a gravel path through the mountains, etc. Those are the people who die, and they are the ones in those statistics.


So? There's no fundamental problem with identifying static 3D objects with vision. If anything, it will be one of the things that become more consistent than humans sooner rather than later. I think the control/planning part is probably much harder than vision/perception (maybe except "soft" perception).


Nothing will be 100% perfect, but computing power and machine learning have drastically improved in the past few years and changed everything.

Also the basic tech you mentioned like OCR and speech recognition is actually so good now that it runs on your mobile device, can tailor itself to the way you talk and write, and can even enable real-time translation to other languages across mediums like something out of a sci-fi movie.


Does anyone ever use speech recognition to dictate a legal document where a million dollars hangs in the balance? No, it would be dumb.

So why should that lawyer, after meticulously checking this document, trust his life to another ML system, just like the one that just failed him?


Lawyers make mistakes too. Perhaps you’re surprised to hear that but there’s an entire section of law to deal with this. And yes lawyers use software dictation all the time because it’s faster and more accurate than a human assistant.

In fact ML does already do a better job in tasks like crawling through case research, while people then take over for strategy and deduction.

This split between low-level mechanical vs high-level creative thinking is a good distinction between ML vs human application, and most driving falls in the former.


"lawyers use software dictation all the time"

And they don't check the output? Are you sure about that, because the few I've spoken to say they won't touch it with a barge pole.

If they do check, then they don't trust the system. In driving you don't get to hit pause and check the output.


You're conflating issues. Software is already better at transcribing than humans. [1] And lawyers will verify the contents whether it's done by software or a person, because neither is perfect and their job is the higher-level intention of the document, not the copying of words. But they still like the machine saving the human from tedious work.

1. https://www.nuance.com/dragon/business-solutions/dragon-lega...


We're never going to achieve perfect self-driving cars, and that's okay. Waiting for perfection wouldn't be the prudent option, anyway. As soon as self-driving cars are better than the average human, using them becomes a moral imperative, because not using them would result in a net loss of human life.


Not how it works. If there were a way to prevent all smoking-related cancers but it randomly killed a million people a year, would you say it is also a moral imperative? After all, it is a net gain of lives.

Total casualties is not the only thing to consider. You have to think about responsibilities, control, separate cases, etc.

In addition, FSD is currently NOT safer than manual driving when compared under equal conditions.


I think we’re on the same page. My primary point was that we don’t need a 100% perfect solution for autonomous driving to make sense.

From a regulatory perspective, I do think that a system that results in a ‘net savings of human life’ should be permitted, even though removing the human from the equation would likely change the distribution of driving deaths, such that some people (like exceptionally careful drivers) may be marginally more likely to die in such a system. Those people should of course have the choice to drive themselves.

From the perspective of an individual, a ‘net savings of human life’ may not be a good enough threshold. I personally would not want to put my child in a self-driving car unless I felt confident that it was safer than driving myself, and the knowledge that the AI was “statistically better than the average driver” would not instill sufficient confidence.


> Tesla FSD/Autopilot are already saving lives.

Holy overstatement


Such an informative input to the thread


I just hope for one thing: that they ban automatic updates.


So if a bug is found such as the fact a particular pattern that happens to be the logo of an up-and-coming boy band crashes the car's visual software we just have to tell everyone to stay home and not use their self driving cars until they've installed the fix and hope people actually receive the message before tragedy happens?


What would you suggest otherwise? Allow an update to be rushed?

The cars should be grounded anyway (or self-driving should be disabled). This can be done remotely in a safe way.

Any new update to the software should be tested by the DoT for at least X miles and Y hours before it can be installed.

The installation of the update should be done by an agency of the DoT to prevent car manufacturers from cheating (and we know they will cheat when they can find a way, see VW scandal).

Let's not allow the car industry to be run like the software industry (!!!)


Most Bay Area Tesla drivers aren’t great. I look forward to a computer driving instead of them one day.


Why would anyone take seriously the opinion of SF government about "safety" when its official policy explicitly condones open IV drug use?


If you are worried about people’s safety then why don’t you stand in the back of a bar at closing and try to stop a few people in the mob of drunk people drunkenly stumbling into their cars? You would save more lives that way than protesting self driving. Do any of the Tesla detractors take any action about or even comment about the tens of thousands of deaths caused by people craning over their phones, doing their makeup or intoxication? No, because tesla alarmism is not a rational position and it is not based in objective assessment of public safety.

I’m so tired of the media and the mob taking these select issues out of context. The media plasters FSD accidents across every screen in the world, even when later it turns out the driver was drunk, but it doesn’t mention the zillion miles of uneventful riding. You see clips of FSD doing something weird, “diving toward pedestrians,” shooting straight to the top of reddit/HN (is there any distinction nowadays?) but videos of FSD doing amazing things, negotiating the Berkeley hills without intervention, never reach the front page.

If you showed a person from the 90s two hours of v10 FSD in various environments, mistakes and interventions and all, that person would say with astonishment “that car basically drives itself.” And anyone who denies that is being intellectually dishonest.


> anyone who denies that is being intellectually dishonest.

Nobody disputes that it can drive fine most of the time on easy roads.


Berkeley hills are easy roads?


I haven't driven there but I had a look at a random spot (Grizzly Peak Blvd) and yeah they look very easy.

I'll say "this car can drive itself" when it can drive over double roundabouts and down streets with on-street parking on both sides:

https://i.dailymail.co.uk/i/pix/2016/01/15/16/3032888D000005...


Because you are intellectually dishonest! If that's what you would have said in the '90s, you would have been completely alone! And Grizzly Peak is literally nothing like the Berkeley hills.


I am terrified at the concept of "self-driving" or anything marketed as "self-driving", even if it is really advanced driver assist. When I talk to my engineer friends they are also deeply skeptical. Maybe it's our age (mid-40s) or that after dealing with software development for 20+ years you tend to be jaded. Maybe it's that I know that ML-based systems have limits based on the training data and it's not real deep AI. Maybe it's that all the training is done in CA, and someone will try to use this software where I live in New England, at night, in the snow, on a drive to Vermont, and the system will fall apart.

I don't think we understand the human factors involved, or the hubris with which most people will ignore their role in oversight as the software gets more advanced. If the software is 90% effective for most driving, people will tune out. Holy crap that is scary.


This is broadly the reasoning behind the fear in the linked article. But it's... really it's just fear. All the stuff you posit is something that can be tested and analyzed. Thousands of people have been driving this stack around since March with no accidents. At some point data will win, right? Have you worked out in your head when that will be?

> I don't think we understand the human factors involved

Do we? Because human factors are responsible for literally every traffic accident in history, and those don't seem as scary to you. Basically: it's fear. It's just fear. And it's reaching a peak right now (and being deliberately pushed, frankly) just as Tesla is reaching a public rollout.

And why? Because there's another fear at work here too. If this rolls out and doesn't kill people. If it turns out that it actually is safe, then a lot of people across a lot of industries stand to lose a lot of money. Stories like this are, fundamentally, just marketing press hits. The public FSD beta rollout in some sense marks the last chance to kill Tesla's autonomy product before... it wins.


By human factors I mean people's interaction with and dependency on technology. As an example, say Tesla FSD works great on the CA highways and people start to depend on it. Maybe they start scrolling through Instagram on a regular basis on their commute. Then they decide to go to Mendocino up whatever that super twisty and scary road is called. They put on FSD and the Tesla does a great job. They start to depend on the Tesla to do even the more technical driving. So they stop paying attention even on the harder driving sections.

Six months down the line they are using FSD on a twisty section of road. The sun is in the eyes of the vision-based FSD and there are a bunch of cyclists in the road. The FSD hasn't been trained on this case (or hits a bug or whatever) and doesn't detect the cyclists. The driver, who has been "trained" by the FSD to not pay attention, is fucking around with their phone. The Tesla plows into the cyclists.

This is my fear - the fear that you detrain people from the responsibility of driving. I suppose at the base of this is that I have very low trust in how people will reason about their role with technology. People who don't know about the limitations of the machine will assume it is accurate. People will work to defeat the safeguards (hands-on-the-wheel detection, eye tracking, etc.).

If this technology were marketed as "crash avoidance" technology I'd be much less worried. To take the twisty road example, imagine if the Tesla could not only avoid the crash but also drive the car to a safe stop in case of driver incapacitation - that's pretty awesome, but it also puts the clear responsibility on the human that they are doing the driving.


> Then they decide to go to Mendocino

Again, that's a multi-mode failure. You're assuming (1) that edge cases must exist somewhere where FSD is unsafe, (2) that drivers will find them, and (3) that when they do, the existing supervised beta paradigm won't work. In isolation any of those can happen, sure. But in combination they're vanishingly unlikely.

Broadly you're doing the luddite/reactionary thing and demanding perfection: as long as you can conjure a scenario in your head where something "might" happen you want to act as if it "will" and demand that we refuse to implement new solutions.

But the goal isn't perfection and never has been, for the simple reason that the existing solution also sucks really bad. Real human drivers in Mendocino with the sun in their eyes hit bikers too!


To be a bit more obvious: I am afraid. I am not telling other people to be afraid. But I am very curious about the general public's view of this technology.


> At some point data will win, right?

Nope. If you test your software and it shows no bugs, I'd be more worried that the test was flawed. Underneath the shiny bits, we developers know how buggy software really is. It's just a fact that the smartest humans keep introducing bugs into software. The more complex the software, the more complex the bugs and the harder they are to find.

> Stories like this are, fundamentally, just marketing press hits. The public FSD beta rollout in some sense marks the last chance to kill Tesla's autonomy product before... it wins.

Nobody owes it to Tesla to give them free publicity or be forced to immediately accept their ideas. Personally, I think it makes more sense to invest in tech that reduces dependence on cars in the first place.


> thousands of people driving this stack since March with no accidents

Don't these people regularly have to intervene and take over, hence the no accidents? Unless I can responsibly cede full control to the self-driving system, it's less than useful for most drivers imo. If I'm wrong let me know, but that is the impression I have taken away from reading about the Tesla self-driving system.


That's right, this is still beta software and the users are still responsible for supervising. "Less than useful" isn't the standard, though. SF and the poster above aren't making an argument about consumer protection or lemon laws or anything. They're claiming a safety problem.

And there's no evidence of a safety problem.


How many accidents would have happened if those people would have done all of that driving manually? If you count accidents that FSD would have caused without a human driver, then you have to also count accidents that human drivers would have caused without FSD.


Yeah, besides Tesla has been collecting data for like a decade now, I'm sure they have enough of it to make a robust enough system for the US at large, especially with their Dojo training chip.

At the end of the day it just needs to be better than the average human, and that's frankly not the highest bar unfortunately.


>At the end of the day it just needs to be better than the average human

This comes up a lot on techno-centric forums like HN but I think it misses an important point. It’s not necessarily if it performs better than the average human that will lead to policy that allows it on roads. It has to have a high degree of trust in the public’s eye. That bar is likely much higher than “just better than the average human.”


Afaik full self-driving is not better than the average human, not even close. The reason for the lack of accidents is the human drivers intervening when the system suddenly behaves irrationally, which is often. But if FSD were left entirely to its own devices there would be a huge uptick in everything ranging from fender benders to fatal collisions.


That's a strawman though. The argument upthread and in the linked article is that the supervised FSD Beta is presumptively unsafe.

Arguing that full autonomy isn't possible yet (duh, I mean, it says "beta" right there in the name) isn't an argument to disallow testing.


The difference is: you can understand a drunk driver, you can arrest them, and possibly feel some sense of "closure".

What are you gonna do when a car goes rogue and murders somebody?


When a drunk driver causes an accident, you can arrest them and keep them from driving anymore, but this doesn't stop anyone else from driving drunk. With self-driving cars, you fix the bug that caused the accident and deploy it to the whole fleet, so none of them will ever make the same mistake again.


That's a new one to me. You're arguing that we can't allow autonomous systems in our society because when they have bugs that cause problems... there isn't a moral actor who we can punish?

Do you feel the same way about your gas furnace or the autopilot of the last airliner you flew on?


>> At some point data will win, right? Have you worked out in your head when that will be?

According to a RAND Corp. study I quote in another comment, it will take millions or billions of miles and decades or centuries before we can know self-driving cars are safe simply from observation:

Autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries.

Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use.

https://www.rand.org/pubs/research_reports/RR1478.html

That's statistics, btw, not fear. Data, like you say, will "win" at some point, but that will be far in the future. Until then we have a few thousand systems whose safety we can't measure but which are available commercially (and sold under the pretense of improving safety).

Note also that before Tesla started selling cars with "autopilot" there was no way to know how safe it would be, in advance. That is not the behaviour of a company that cares at all about safety, other than as a marketing term.


That Rand paper is sort of a joke. It's positing an idea of "certainty" we don't apply to any other industries anywhere. The same logic would argue that human drivers are not known with certainty to be safe, that industrial accidents are not proven not to happen, that airline autopilots aren't known with certainty to be safe, etc..

The goal isn't certainty. It's "better". It seems like it might be "better" already.


The RAND study is not about "certainty", but about determining the safety of self-driving cars and the number of miles needed to do it. Are you confusing the colloquial meaning of "certainty" (as in "I'm sure about it") with its technical meaning, particularly uncertainty in probabilities and statistics?

Is there some other published work that you prefer over the Rand study?


"Certainty" appears in the abstract! But yes, the article is talking about the work required to derive a 95%-confident result of a specific improvement. But that's backwards, and not how one does statistics. You measure an effect, and then compute its confidence.

And it's spun anyway, since most of those "trillions of miles!" numbers reflect things like proving, with 95% confidence, a 100%+ improvement in safety, when all we really want to know to release this to the public is, with 95% confidence, a 0% improvement (i.e. "is it not worse?").
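
To put rough numbers on that "is it not worse?" question, here is a back-of-the-envelope sketch in Python. The ~1.1 fatalities per 100 million vehicle miles figure is my approximation of the recent US average, and the "rule of three" is a standard zero-event heuristic; neither number is taken from the RAND paper itself.

    # Rough sketch: miles of fatality-free driving needed to claim, at 95%
    # confidence, that an autonomous system is no worse than human drivers.
    # Uses the "rule of three": if zero events occur over n miles, the 95%
    # upper bound on the event rate is roughly 3/n.
    # Assumption (not from the RAND paper): US human-driver fatality rate
    # of about 1.1 per 100 million vehicle miles.

    human_rate = 1.1 / 100_000_000          # fatalities per mile (approximate)

    # Fatality-free miles needed so the 95% upper bound drops to the human rate
    not_worse_miles = 3 / human_rate
    print(f"'not worse': ~{not_worse_miles / 1e6:.0f} million miles")    # ~273M

    # Demonstrating a 2x safety improvement doubles the requirement
    twice_as_safe_miles = 3 / (human_rate / 2)
    print(f"'2x safer': ~{twice_as_safe_miles / 1e6:.0f} million miles")  # ~545M

That lines up with the "hundreds of millions of miles" end of RAND's range; the larger figures come from demanding proof of a specific improvement rather than mere parity.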


>> But that's backwards, and not how one does statistics.

Are you saying one cannot use statistics to make predictions about future events?


I'm saying that to make good decisions, you need to make predictions about the right future events. And Rand is pontificating about the wrong ones.


Where is it that they are "pontificating"? They're calculating how long and how far self-driving cars need to drive before we can accept they're safe.


I see it like this... Crashes will happen if we block self-driving cars. Crashes cost lives... People will die. But if they die and the algorithm learns from it and prevents the crash the next time... In a couple of years of further crashes maybe we can iron out most flaws and we save lives.


You think because the computer is in California that it only knows how to drive in California? Maybe you should learn the first thing about something before spreading FUD about it?


lol no. It's that we have different road conditions and unique circumstances in different parts of the country.


It wouldn't matter if the training computer were on the moon. Training data is collected by Tesla owners all across the country, including the Eastern United States. Regardless of how well you think it can drive, it can drive equally well in New England as it can in California. And there may be an unequal number of Teslas on each coast, but that's irrelevant.


I know that there are probably versions of AI that are hidden from the public, both private and military, but something tells me if we were close to the advent of anything self-operational it would be evident. There would be transitional technologies emerging on the markets. More than likely there are pieces of very complex, intelligence-resembling mechanisms that are held together by teams of people.

The idea of even the most modern intelligence system navigating through an environment as elaborate and dynamic as New York City or downtown Chicago should give anyone pause. I can see them being more useful for long distance journeys such as hauling a trailer across rural desert areas where all the vehicle needs to do is stay in its own lane, maintain a set speed and brake if something obvious gets in front of it.


To be frank, lane-keeping ADAS has been out for years, and most car manufacturers offer such a thing on their high-end models. Usually radar-supported and pretty reliable from what I hear.

Perhaps it's not the best idea to use these more advanced Level 4 systems in the densest city yet, but most of driving does usually happen on highways and regional roads so it's not a major loss.

I'm sure someone will eventually figure out how to detect "eyyy im walkin eer" and honk back automatically, and then it should be usable in NY as well.

> versions of AI that are hidden from the public both private and military

I do think you overestimate how much the military has its shit together. These systems need gigantic amounts of driving data to train and nobody else has that aside from car manufacturers with their fleets. And out of those Tesla is the one that's been doing it the longest.


No, I think it's reasonable to wonder whether some powerful enough agents have some "super secret tech" and aren't telling. It wouldn't be the first time. For instance, the US didn't exactly advertise having the atomic bomb before they first used it (although it was a very short time after and they did specifically work on the atomic bomb to use it as soon as possible).

That said, personally, I think it's unlikely.

First, the vast majority of AI research is happening out in the open. It's often funded by militaries (e.g. DARPA) but it's not owned by them directly, rather the owners are universities and similar.

Second, private corps promote their AI advances very aggressively and this marketing is a big part of the advantage they try to establish over their competitors, so it doesn't seem obvious why they would keep a new advance "under wraps". For example, what would Google do with a secret machine vision system capable of 20% better performance than that of its rivals? I can't tell.

Of course we can assume that Google or someone similar has a much more advanced system than anyone else, like a real AGI. We can imagine such a system sitting at the heart of Google, directly advising Larry and Sergey how to run the company. But then, if we can imagine that, we can imagine anything.

And I do mean anything:

https://gatherer.wizards.com/Pages/Card/Details.aspx?multive...


Well I do think it's very possible if not certain that they have unusually advanced systems that have been kept secret, but I would expect them to be in the areas they actually have data for.

As in security cameras, finances, cell tower data, etc. Facial and gait recognition, location tracking, etc. far beyond anything commercially available. But I'd doubt they have an AGI behind closed doors :P


Well actually if you look at the rest of what I said, it’s that if there were advanced systems hidden behind the curtain, transitional stages of said systems would have made their way into the public realm. You can’t just jump to something as massive as full blown AI and not have byproduct tech. Consider the space race for instance, and all the resulting byproduct techs it brought. So many inventions that were created along the way that made their way into the public realm. I find it difficult to believe that any other breakthrough of similar or greater magnitude would not do the same.


I am appalled at the way we are attempting to introduce driverless cars.

Cities install traffic lights with sensors and an internet connection so that they can be controlled from the city's traffic management HQ. There are dozens of professionals in front of giant screens, and a supercomputer at their disposal. Could that information be sent directly to cars?

No, instead we cram a computer and a camera into every car and hope it can find the right pixels on the picture in fog and rain.

The smart approach would be to invest in infrastructure: standard location beacons that tell cars exactly where they are and what is ahead, wireless communication between cars so they know the location and bearing of every vehicle within a square mile, wireless tags that could be installed on all bikes and cycles, etc. We have an entire department for traffic management in each city that could suggest a route to every car.

We have already installed speed cameras, number plate recognition, and pedestrian counting; we have vehicle-sensing sensors inside the road that count and weigh cars, and none of this information is being used. If cloud computing makes sense for crappy apps, then it definitely makes sense to assist cars.
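
For what it's worth, a standards track for this sort of thing already exists (V2X messages such as SAE J2735, broadcast over DSRC or C-V2X). As a purely illustrative sketch of the kind of payload a roadside beacon or vehicle might broadcast - the field names below are invented for the example, not taken from any standard:

    from dataclasses import dataclass
    from typing import Optional

    # Illustrative only: a toy version of the broadcast message described above.
    # Real V2X systems define these payloads in standards like SAE J2735;
    # every field name here is made up for the sake of the example.
    @dataclass
    class RoadsideBroadcast:
        sender_id: str          # beacon or vehicle identifier
        latitude: float         # degrees
        longitude: float        # degrees
        heading_deg: float      # direction of travel, degrees from north
        speed_mps: float        # metres per second (0 for a fixed beacon)
        signal_state: str       # e.g. "red"/"green", or "n/a" for a plain beacon
        hazard: Optional[str]   # hazard ahead, if any

    msg = RoadsideBroadcast(
        sender_id="beacon-042",
        latitude=37.7749,
        longitude=-122.4194,
        heading_deg=0.0,
        speed_mps=0.0,
        signal_state="red",
        hazard="stalled vehicle 200 m ahead",
    )
    print(msg)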


What do you do when something falls on the road? Should everyone walk around with a wireless tag if they don't want to be run over by a car?


A centrally automated system could — and certainly would — be backed up by per-vehicle AI. Exactly the same way that self-driven cars are backed up by human intelligence.

(Note, I’m not advocating for either approach. I think personal automobiles should be banned, in fact. But I’m at least willing to be honest about how such systems would obviously be designed.)


Obviously not. In such a case, a much better option would be to fall back to the camera system or, worst case, force the car to go into manual mode with a human driver if it detects that nearby systems are unresponsive


Then the fallback must be so good that you don't need the primary system in the first place.


No, that would not be a smart approach. Then you can only drive where beacons are installed.

If you have no problem with humans driving by hoping they find the right pixels on their retinas, why would you worry about AI doing the same? If anything, AI has perfect reaction time and is never tired, sleepy, or inattentive. It just needs to be taught properly.


> Then you can only drive where beacons are installed

Uh no, then you can only use automated driving systems where beacons are installed.


"It just needs to be taught properly."

So just like all miracle solutions, i.e. communism: every time we tried it, we just weren't doing it right.


Technology, not miracle. It's being actively worked on, and there's undeniable progress. YouTube has plenty of videos of Tesla's self-driving system, and its progress over the years is very obvious.


No, the problem is that AI systems are unpredictable, unverifiable, and their correctness is unprovable. There are a million academic articles detailing this issue.

When we need something that works 99.99999% of the time, it is written in Ada or a similar safe language and tested exhaustively. You know how long processing will take to the microsecond, you use a special realtime OS, etc.

AI will have its own failings, some of them totally unpredictable, so yes, boiling it down to 'just needs to be trained properly' is terrifying when people's lives are at stake.

Look at this attack, and tell me how this is going to be avoided with 'proper training': https://www.theregister.com/2018/01/11/ai_adversarial_attack...
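
For anyone unfamiliar with why those attacks work at all, here is a self-contained toy sketch in Python/NumPy. A tiny linear "classifier" with random weights stands in for a real perception network (it has nothing to do with Tesla's actual stack); the point is only that a small, targeted nudge along the input gradient flips the decision.

    import numpy as np

    # Toy gradient-based adversarial perturbation (FGSM-style), illustrative only.
    rng = np.random.default_rng(0)
    w = rng.normal(size=100)        # "model" weights, stand-in for a network
    x = rng.normal(size=100)        # an input, e.g. a flattened image patch

    score = w @ x                   # say a positive score means "no obstacle"
    print("original score:", round(float(score), 3))

    # For a linear model the gradient of the score w.r.t. the input is just w.
    # Step each component against the score's sign by the smallest amount that
    # flips the decision (FGSM uses a fixed step; this one is chosen to flip).
    epsilon = abs(score) / np.sum(np.abs(w)) * 1.01
    x_adv = x - np.sign(score) * np.sign(w) * epsilon

    print("adversarial score:", round(float(w @ x_adv), 3))      # sign flipped
    print("max change per input component:", round(float(epsilon), 4))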


> the problem is that AI systems are unpredictable, unverifiable, and their correctness is unprovable.

Same can be said about humans.

None of what you listed matters in the real world. Self-driving systems just have to be better than average humans, on average, to be useful.

> boiling it down to 'just needs to be trained properly' is terrifying when peoples lives are at stake

You have no problem with humans being trained to drive. 38.6 thousand people died in car crashes in the US alone last year.


An AI which "just needs to be taught properly" is the same as a genie or a magic pixie, which would also exist but for "just needing to be taught properly".


Transportation infrastructure investment is fairly zero-sum. A city full of self-driving cars is only marginally better than a city full of regular cars. The ‘smart approach’ is to reduce car friendliness in cities and rather invest in public transit and walk/bikeability.


The issue with this is that it requires such high capital investment before the first driverless car can be introduced.

What you are describing is actually probably the end game - but I think Musk and others have correctly identified that if you wait for infrastructure to be ready then you will be waiting a long time, and if you want to see self-driving cars anytime soon you have to assume current infra.


If this is where you want to go we could just install souped up train tracks instead.


The centralized approach you describe would work over less than a few percent of the land area of the USA, namely big city cores.


Those are also, incidentally, the hardest places for autos to exist in.

It's much, much easier to navigate a lonely highway in the middle of nowhere without much traffic than it is to navigate the perilous labyrinth of the suburbs and downtown traffic.

It can also be done much more simply by laying down copper in the road and making it a part of infrastructure so that the cars can sense it and figure out that it's the road. Stop signs, traffic lights, and interchanges can be marked using changes that the car can detect. The rest can be collision avoidance.


Ultimately I think the issue is that the government would have to take the lead and deploy the infrastructure to make self-driving work better. The US government barely does infrastructure anymore. Nobody wants to take that lead, take the political risk, or pay for it. That leaves private-sector cowboys to innovate with no help.


The US Government is trying to do infrastructure right now. To the tune of $3.5Tn over 10 years to make up for the lack of investment over decades. And it includes charging infrastructure and more. The budget has been balanced and paid for, thereby not impacting the deficit.

> Biden is proposing a $174 billion investment into electrifying cars and trucks in the United States. It would establish grant and incentive programs for state and local governments and the private sector to build a national network of 500,000 electric vehicle charging stations by 2030.

https://www.nytimes.com/2021/03/31/business/biden-electric-v...

America's infrastructure is in such bad shape that there's a good chance the arteries of US commerce will start having an increase in catastrophic failures leading to death.

But it's being pushed back against, because, well, reasons. This spending, to transform America's degrading infrastructure, comes out to $350Bn/yr. The Middle East and Afghanistan wars come out to $355Bn/yr, or $6.4 trillion over ~18 years: https://www.cnbc.com/2019/11/20/us-spent-6point4-trillion-on... . The tax cuts under the last administration come out to $200Bn/yr. Neither of these (the wars and the tax cuts) has been balanced; both are coming out of America's ballooning deficit.

If you would like the Government to do more of this, please call your representatives and vote for people who invest money towards a better future for everyone.


And if anything, governments are in a privileged position to create and enforce these standards across multiple large regions.



