US probes Tesla's Full Self-Driving software after fatal crash (reuters.com)
417 points by jjulius 52 days ago | 977 comments



I'm on my second free FSD trial, which just started for me today. I gave it another shot, and it seems largely similar to the last free trial they gave. Fun party trick, surprisingly good, right up until it's not. A hallmark of AI everywhere is how great it is, and just how abruptly and catastrophically it occasionally fails.

Please, if you're going to try it, keep both hands on the wheel and your foot ready for the brake. When it goes off the rails, it usually does so in surprising ways with little warning and little time to correct. And since it's so good much of the time, you can get lulled into complacence.

I never really understand the comments from people who think it's the greatest thing ever and makes their drive less stressful. Does the opposite for me. Entertaining but exhausting to supervise.


I just gave it another try after my last failed attempt. (https://tomverbeure.github.io/2024/05/20/Tesla-FSD-First-and...)

I still find it shockingly bad, especially in the way it reacts, or doesn’t, to the way things change around the car (think a car on the left in front of you who switches on indicators to merge in front of you) or the way it makes the most random lane changing decisions and changes its mind in the middle of that maneuver.

Those don’t count as disengagements, but they’re jarring and drivers around you will rightfully question your behavior.

And that’s all over just a few miles of driving in an easy environment of interstate or highway.

I totally agree that it’s an impressive party trick, but it has no business being on the road.

My experience with Waymo in SF couldn’t have been more different.


> it makes the most random lane changing decisions and changes its mind in the middle of that maneuver.

This happened to me during my first month of trialing FSD last year and was a big contributing factor for me not subscribing. I did NOT appreciate the mess the vehicle made in this type of situation. If I saw another driver doing the same, I'd seriously question if they were intoxicated.


> I still find it shockingly bad, especially in the way it reacts, or doesn’t, to the way things change around the car (think a car on the left in front of you who switches on indicators to merge in front of you) or the way it makes the most random lane changing decisions and changes its mind in the middle of that maneuver.

I have said it before, I will say it again. It seems that this software does not possess permanence, neither object nor decision.


> (think a car on the left in front of you who switches on indicators to merge in front of you)

That car is signaling an intention to merge into your lane once it is safe for them to do so. What does the Tesla do (or not do) in this case that's bad?


What I expect it to do is to be a courteous driver, and back off a little bit to signal to the car in front that I got the message and that it's safe to merge.

FSD is already defensive to a fault, with frequent stop-and-go indecision about when to merge onto a highway, but that's a whole other story.

A major part of safe driving is about being predictable. You either commit and claim your right of way, or you don't. In this situation, both can be signaled easily to the other party by being a bit of a jerk (e.g. accelerating to close the gap and prevent somebody else from merging) or the opposite. Both are better than not doing anything at all and keeping the other dangling in a state of uncertainty.

FSD is in an almost permanent state of being indecisive and unpredictable. It behaves like a scared teenager with a learner's permit. Again, totally different from my experience with Waymo in the urban jungle of San Francisco, which is a defensive but confident driver.


Defensive driving means assuming they might not check their blind spot, etc., and generally easing off in this situation if they would end up merging in tight were they to begin merging now.


That's the issue: I would immediately slow down a little bit to let the other one merge. FSD seems to notice something and eventually slows down, but the action is too subtle (if it happens at all) to signal to the other driver that you're letting them merge.


> That car is signaling an intention to merge into your lane once it is safe for them to do so.

Only under the assumption that the driver was trained in the US, to follow US traffic law, and is following that training.

For example, in the EU, you switch on the indicators when you start the merge; the indicator shows that you ARE moving.


That seems odd to the point of uselessness, and does not match the required training I received in Germany from my work colleagues at Daimler prior to being able to sign out company cars.

https://www.gesetze-im-internet.de/stvo_2013/__9.html seems to be the relevant law in Germany, which Google translates to "(1) Anyone wishing to turn must announce this clearly and in good time; direction indicators must be used."


Merging into the lane is probably better addressed by §7, with the same content: https://dejure.org/gesetze/StVO/7.html


Maybe the guy was talking about the reality, not the theory. From my autobahn travels it seems like the Germans don't know how to turn on the blinkers.


> … the Germans don’t know how to turn on the blinkers.

[Insert nationality/regional area here] don’t know how to turn on the blinkers.


I wouldn't say so. It's a very marked difference with a sharp change the moment I drive through the border.


I’m only saying this from my experience in Canada where every region thinks its drivers are the worst.


I think the moral of the story is that cars may or may not turn their blinkers on. If they do, the self-driving should catch that just as easily and expect the car to switch lanes (with extreme caution).


> For example, in the EU, you switch on the indicators when you start the merge; the indicator shows that you ARE moving.

In my EU country it's theoretically at least 3 seconds before initiating the move.


In general, the requirement is the following:

a) Check for the possibility of the maneuver; b) signal the maneuver; c) perform the maneuver.

However the signaling needs to be done in a way that it helps other road users to read and act according to your maneuver, so 3 seconds seems to be a good amount of time for that.

There are, on the other hand, situations where signaling the maneuver is also desirable even though the maneuver might not be possible yet: merging into a full lane, so vehicles might free up some space to let you merge.


As I mentioned in my other comment, 1 second is negligible; I would even dare to say that 3 seconds is, too. For a computer it should not be, however.


For anyone confused, this person’s statement about the EU is total bs.


It's what I was taught: you switch on your indicators when you have checked that you are clear to merge and you have effectively committed. I always assume that someone who has put their indicators on is going to move according to them, whether it's clear or not.


I don't doubt that it's the way you have been taught, but it doesn't make any sense. The whole point of blinkers/indicator lights in cars is to signal your intentions before you do them: if you're going to signal at the same time that you do the action you're signalling, you might as well not bother.


You signal in advance, but you check before you signal. Mirrors, signals, maneuver.


It is what I see in practice in Eastern Europe. They signal as they are shifting lanes. Even if they turn the blinker on and then start moving 1 second later, it could be considered the same thing as 1 second is negligible.

Thus "the indicator shows that you ARE moving." is correct, at least in practice.


It's the difference between actually purposefully blinking and blinking to avoid a fine. In the latter you just tap the blinker stalk as you're turning the wheel. If someone's trying to do a dangerousish turn (waiting for a line of cars to do an illegal U turn for example) they'll be blinking to signal intention most of the time.


I got my license in 2014, in Germany, and was taught to turn on the turn signal > check mirrors > turn your head to look over your shoulder and only then, when you're clear, do you merge.


It's an interesting trope among Tesla owners to feel the need to put a disclaimer like this in (quoting your post):

    > "I’ve had a Model Y for more
    > than 3 years now, well before
    > Elon revealed himself as
    > the kind of person he really is"
It's always fun to compare the timeline of Elon Musk's well-known shenanigans with the "But I got a Tesla in <year>!".

E.g. you got one around 3 years after the "pedo guy" incident[1].

I suppose to whatever extent you factor in the personalities of the executives whose companies you make car purchases from, that didn't rate as much of a factor?

One is left wondering what it was that did.

1. https://en.wikipedia.org/wiki/Tham_Luang_cave_rescue#Elon_Mu...


There are degrees to being a shitty human being.

Using your platform and millions of followers to publicly shit on some random person who pissed you off is a degree of it.

Being a colossal hypocrite with your 'free speech' platform, or lying to your customers is something else.

Full mask-off throwing millions of dollars towards electing a convicted conman who is unabashedly corrupt, vindictive, nepotistic, already has a failed coup under his belt, and is running on a platform of punishing anyone who isn't a sycophant is... Also something else.


I'm a bit more cynical and see his turn as a business move. He has a considerable market captured, so he went full wackjob to capture that market.

Apparently, this doesn't reflect reality and he actually went crazy because one of his kids is trans. I have no idea because I don't know him.


You slowly build a relationship with it and understand where it will fail.

I drive my 20-30 minute commutes largely with FSD, as well as our 8-10 hour road trips. It works great, but 100% needs to be supervised and is basically just nicer cruise control.


"You slowly build a relationship with it and understand where it will fail."

I spent over a decade working on production computer vision products. You think you can do this, and for some percentage of failures you can. The thing is, there will ALWAYS be some percentage of failure cases where you really can't perceive anything different from a success case.

If you want to trust your life to that, fine, but I certainly wouldn't.


Or until a software update quietly resets the relationship and introduces novel failure modes. There is little more dangerous on the road than false confidence.


Exactly. You may learn its patterns, but a software update could fuck it all up in a zillion different ways.


Elon Musk is a technologist. He knows a lot about computers. The last thing Musk would do is trust a computer program:

https://www.nbcnews.com/tech/tech-news/musk-pushes-debunked-...

So I guess that's game over for full self-driving.


Oooo maybe he'll get a similar treatment as Fox did versus Dominion.


Yeah, he's just dedicating his life to something that he knows won't even work. What are you on about?


Everyone else’s life seems to be completely irrelevant


You and I might have a different view of reality. Teslas are among the safest vehicles on the road.


This feels like the most dangerous possible combination (not for you, just to have on the road in large numbers).

Good enough that the average user will stop paying attention, but not actually good enough to be left alone.

And when the machine goes to do something lethally dumb, you have 5 seconds to notice and intervene.


This is what Waymo realized a decade ago and what helped define their rollout strategy: https://youtu.be/tiwVMrTLUWg?t=247&si=Twi_fQJC7whg3Oey


This video is great.

It looks like Waymo really understood the problem.

It explains concisely why it's a bad idea to roll out incremental progress, how difficult the problem really is, and why you should really throw all the sensors you can at it.

I also appreciate the "we don't know when it's going to be ready" attitude. It shows they have a better understanding of what their task actually is than anybody who claims "next year" every year.


All their sensors didn't prevent them from crashing into a stationary object. You'd think that would be the absolute easiest thing to avoid, especially with both radar and lidar on board. Accidents like that show that the training data and software will be much more important than the number of sensors.

https://techcrunch.com/2024/06/12/waymo-second-robotaxi-reca...


The issue was fixed; they're now handling 100,000 trips per week, and all seems to have gone well over the last 4 months, which is about 1.5 million trips.


So they had "better understanding" of the problem as the other user put it, but their software was still flawed and needed fixing. That's my point. This happened two weeks ago btw: https://www.msn.com/en-in/autos/news/waymo-self-driving-car-...

I don't mean Waymo is bad or unsafe, it's pretty cool. My point is about true automation needing data and intelligence. A lot more data than we currently have, because the problem is in the "edge" cases, the kind of situation the software has never encountered. Waymo is in the lead for now but they have fewer cars on the road, which means less data.


Any idea how many accidents and how many fatalities? And how that compares to human drivers?


> It looks like Waymo really understood the problem.

All they needed was one systems safety engineering student


You don't get a $700B market cap by telling investors "We don't know."


Ironically, Robotaxis from Waymo are actually working really well. It's a true unsupervised system, very safe, used in production, where the manufacturer takes the full responsibility.

So the gradual rollout strategy is actually great.

Tesla wants to do "all or nothing", and ends up with nothing for now (see Europe, where FSD has been sold since 2016 but is still "pending regulatory approval", when actually the problem is that the tech is not finished yet, sadly).

It's genuinely a difficult problem to solve, so it's better to do it step-by-step than a "big-bang deploy".


Does Tesla take full responsibility for FSD incidents?

It seemed like most players in tech a few years ago were using legal shenanigans to dodge liability here, which, to me, indicates a lack of seriousness toward the safety implications.


What does that mean? Tesla’s system isn’t unsupervised, so why would they take responsibility?


I don't know, maybe because they call it "Full Self-Driving"? :)


Doesn't really matter what they call it. The product name being descriptive of the current product or not is a different topic.

For what it's worth, I wouldn't care if they called it "Penis Enlarger 9000" if it drove me around like it now does.


> So the gradual rollout strategy is actually great.

I think you misunderstood, or it's a terminology problem.

Waymo's point in the video is that in contrast to Tesla, they are _not_ doing a gradual rollout of seemingly-working-but-still-often-catastrophically-failing tech.

See e.g. minute 5:33 -> 6:06. They are stating that they are targeting directly the shown upper curve of safety, and that they are not aiming for the "good enough that the average user will stop paying attention, but not actually good enough to be left alone".


Terminology.

Since they targeted very low risk, they did a geographically-segmented rollout, starting with Phoenix, which is one of the easiest places to drive: a lot of photons for visibility, very little rain, wide roads.


Not sure how tongue-in-cheek that was, but I think your statement is the heart of the problem. Investment money chases confidence and moonshots rather than backing organizations that pitch a more pragmatic (read: asterisks and unknowns) approach.


Five seconds is a long time in driving; usually you'll need to react in under 2 seconds in situations where it disengages, and those never happen while going straight.


Not if you are reading your emails…


When an update comes out does that relationship get reset (does it start failing on things that used to work), or has it been a uniform upward march?

I'm thinking of how every SaaS product I ever have to use regularly breaks my workflow to make 'improvements'.


I wouldn't take OP's word for it, if they really believe they know how it's going to react in every situation in the first place. Studies have shown this is a gross overestimation of their own ability to pay attention.


I never said I knew how it would react in every situation. I stated that you get a feeling for where it will fail and that you need to monitor it.


For me it does, but only somewhat. I'm much more cautious / aware for the first few drives while I figure it out again.

I also feel like it takes a bit (5-10 minutes of driving) for it to recalibrate after an update, and it's slightly worse than usual at the very beginning. I know they have to calibrate the cameras to the car, so it might be related to that, or it could just be me getting used to its quirks.


Yes, it definitely does. The behavior of the car significantly changes.


Something along these lines is the real danger. People will understand common failure modes and assume they have understood its behavior for most scenarios. Unlike common deterministic and even some probabilistic systems, where behavior boundaries are well behaved, there could be discontinuities in 'rarer' seen parts of the boundary. And these 'rarer' parts need not be obvious to us humans, since a few pixel changes might cause wrinkles.

*Vocabulary here is used for a broad-strokes explanation.
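
To make that concrete, here is a toy sketch (purely my own illustration, nothing to do with Tesla's actual stack): a tiny linear "classifier" over a flattened four-pixel image, where nudging a single pixel near the decision boundary flips the output even though a human would call the two images essentially identical.

    # Toy illustration only (not Tesla's system): a linear "obstacle vs. clear"
    # score over a flattened 4-pixel image. A small change to one pixel near
    # the decision boundary flips the output.
    weights = [0.9, -0.4, 0.7, -0.6]

    def classify(pixels):
        score = sum(w * p for w, p in zip(weights, pixels))
        return "obstacle" if score > 0 else "clear"

    image  = [0.20, 0.30, 0.10, 0.25]   # score = -0.02  -> "clear"
    nudged = [0.20, 0.30, 0.14, 0.25]   # one pixel +0.04, score = +0.008 -> "obstacle"

    print(classify(image), classify(nudged))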


This was my experience as well. It tried to drive us (me, my wife, and my FIL) into a tree on a gentle low speed uphill turn and I’ll never trust it again.


But it's clearly statistically much safer (https://www.tesla.com/VehicleSafetyReport): 7 million miles before an accident with FSD vs. 1 million when disengaged. I agree I didn't like the feel of FSD either, but the numbers speak for themselves.


Tesla's numbers have biases in them which paint a wrong picture:

https://www.forbes.com/sites/bradtempleton/2023/04/26/tesla-...

They compare incomparable data (city miles vs. highway miles); Autopilot is also mostly used on highways, which is not where most accidents happen.


Tesla released a promotional video in 2016 saying that with FSD a human driver is not necessary and that "The person in the driver's seat is only there for legal reasons". The video was staged as we've learned in 2022.

2016, folks... Even with today's FSD, which is several orders of magnitude better than the one in the video, you would still probably have a serious accident within a week (and I'm being generous here) if you didn't sit in the driver's seat.

How Trevor Milton got sentenced for fraud and the people responsible for this were not is a mystery to me.


AFAIK the owner's manual says you have to keep your hands on the wheel and be ready to take over at all times, but Elon Musk and co. love to pretend otherwise.


This part doesn't seem to be common knowledge. I don't own a Tesla, but I have been in a few. From my understanding, the feature has always said it was in beta and that it still required that you have your hands on the wheel.

I like the idea of FSD, but I think we should have a serious talk about the safety implications of making this more broadly available, and also about building a mesh network so FSD vehicles can communicate. I'm not well versed in the tech, but I feel like it would be safer to have more cars on the road that can communicate and make decisions together than separate cars existing in a vacuum, each having to make decisions on its own.


I've wondered about the networked vehicle communication for a while. It doesn't even need to be FSD. I might be slightly wrong on this, but I would guess most cars going back at least a decade can have their software/firmware modified to do this if the manufacturers so choose. I imagine it would improve the reliability and reaction-times of FSD considerably.
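
To make the idea a bit more concrete, here is a toy sketch of the kind of "intent" broadcast cars could send each other. This is purely illustrative: real vehicle-to-vehicle work uses dedicated standards such as DSRC or C-V2X, and every field name and port number below is made up for the example.

    # Purely illustrative vehicle-to-vehicle "intent" broadcast; real V2V uses
    # dedicated standards (DSRC / C-V2X), not ad-hoc UDP + JSON like this.
    import json, socket, time

    def broadcast_intent(vehicle_id, lat, lon, heading_deg, intent):
        msg = json.dumps({
            "vehicle_id": vehicle_id,
            "lat": lat,
            "lon": lon,
            "heading_deg": heading_deg,
            "intent": intent,            # e.g. "merge_left", "hard_braking"
            "timestamp": time.time(),
        }).encode()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg, ("255.255.255.255", 47000))  # made-up port
        sock.close()

    broadcast_intent("car-42", 37.7749, -122.4194, 90.0, "merge_left")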


[flagged]


> Why is he trying to buy a president in the first place?

Because he can make more money under one than the other.


Obviously yeah, but I do think his odd fervor in this allows us to speculate that there are some threats to his businesses that are not well understood publicly, and that would be solved by becoming a sort of American oligarch.


It's really not much of a stretch.

In addition to the risks to Tesla raised upthread, SpaceX needs an ambitious space program, and Mars program specifically along the lines of Musk's ideas.

Capturing the US government is a great way to get there.


[flagged]


He's not doing anything activists haven't done for years to get out the vote. In college, famous rock and hip hop groups would come to campus to play shows that had voter registration tables at the entrance, lots of messaging about who to vote for, and endless recruiting to volunteer/phone bank/canvass for some group that was supporting the event.

Activism cuts both ways.


Direct payments do seem to be illegal in a way that having a rally or concert or canvassing are not.

https://electionlawblog.org/?p=146397


You’re not obligated to do anything like vote or vote a certain way. Money is speech and it’s an advertisement.

That's just some random blog. I'm sure Musk's lawyers understand what they're doing.

Frankly I find it inspiring he cares enough about our democracy to encourage people to participate in it at great expense of his own. You love to see innovation in turning out voters who may not otherwise have their voices heard.


It's not a random blog making some conjectures; Rick Hasen is a law professor who is an expert in this area and, moreover, he cites specific statutes and DOJ information that's not all that ambiguous.


He gets basic facts wrong in his blog though. For instance the rewards are for referring people to sign a petition that says you support 1a and 2a. You need to be registered for your voice to count. He’s not paying them to register but rather to refer registered people to sign it. So it’s up to an individual to find registered voters to sign it so they can collect their $47 bounty per referred signee.


> He gets basic facts wrong in his blog though.

No, he doesn't. He references the $47 as being of 'murky legality'. What he's stating is clearly illegal is the $1M lottery also announced by Musk.

And Musk does what he wants regardless of lawyers. Remember when he tried to back out of buying Twitter...


Yes but he didn't take into consideration that laws don't apply when you are a billionaire and that you hold both state secrets (via DoD/Starlink) and connections to foreign countries.

So Musk will be fine, especially if Trump wins.


In case anyone looks at this older thread - supposedly the DOJ sent a warning to the PAC offering $1M lottery tickets.

https://www.24sight.news/p/scoop-doj-sends-musk-pac-warning


> That's just some random blog. I'm sure Musk's lawyers understand what they're doing.

I'm sure his lawyers know what they're doing, but did that stop their client from calling a cave diver a pedophile for objecting to his submarine design?

Musk is basically a valueless chaos monkey with a perfect 18 score in Luck. Even he doesn't know what he'll do, say, or believe next. He has the luxury of not caring because it doesn't matter anyway; he'll just continue to get away with things that would shut the rest of us down for good. Not surprising that he's found a kindred spirit in Trump.


That may be, but were they required to sign a PAC's pledge to enter such a concert? I think this might be over the line, but a court has to decide this.


Paying someone to vote a certain way is in fact illegal though.


You can vote for whomever you’d like. His PAC isn’t asking you to prove you voted or voted for a particular person to get the money. They’re just generating buzz and interest in the candidate they feel is better.


Lots of people are asking how good the self driving has to be before we tolerate it. I got a one month free trial of FSD and turned it off after two weeks. Quite simply: it's dangerous.

- It failed with a cryptic system error while driving

- It started making a left turn far too early that would have scraped the left side of the car on a sign. I had to manually intervene.

- In my opinion, the default setting accelerates way too aggressively. I'd call myself a fairly aggressive driver and it is too aggressive for my taste.

- It tried to make way too many right turns on red when it wasn't safe to. It would creep into the road, almost into the path of oncoming vehicles.

- It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

- It would switch lanes to go faster on the highway, but then missed an exit on at least one occasion because it couldn't make it back into the right lane in time. Stupid.

After the system error, I lost all trust in FSD from Tesla. Until I ride in one and feel safe, I can't have any faith that this is a reasonable system. Hell, even autopilot does dumb shit on a regular basis. I'm grateful to be getting a car from another manufacturer this year.


> Lots of people are asking how good the self driving has to be before we tolerate it.

There’s a simple answer to this. As soon as it’s good enough for Tesla to accept liability for accidents. Until then if Tesla doesn’t trust it, why should I?


> As soon as it’s good enough for Tesla to accept liability for accidents.

That makes a lot of sense and not just from a selfish point of view. When a person drives a vehicle, then the person is held responsible for how the vehicle behaves on the roads, so it's logical that when a machine drives a vehicle that the machine's manufacturer/designer is held responsible.

It's a complete con that Tesla is promoting their autonomous driving, but also having their vehicles suddenly switch to non-autonomous driving which they claim moves the responsibility to the human in the driver seat. Presumably, the idea is that the human should have been watching and approving everything that the vehicle has done up to that point.


The responsibility doesn't shift, it always lies with the human. One problem is that humans are notoriously poor at maintaining attention when supervising automation.

Until the car is ready to take over as legal driver, it's foolish to set the human driver up for failure in the way that Tesla (and the humans driving Tesla cars) do.


> The responsibility doesn't shift, it always lies with the human.

Indeed, and that goes for the person or persons who say that the products they sell are safe when used in a certain way.


What?! So if there is a failure and the car goes full throttle (not an autonomous car), it is my responsibility?! You are pretty wrong!!!


You are responsible (Legally, contractually, morally) for supervising FSD today. If the car decided to stomp on the throttle you are expected to be ready to hit the brakes.

The whole point is that this is somewhat of an unreasonable expectation, but it's what Tesla expects you to do today.


> If the car decided to stomp on the throttle you are expected to be ready to hit the brakes.

Didn't Tesla have an issue a couple of years ago where pressing the brake did not disengage any throttle? i.e. if the car has a bug and puts throttle to 100% and you stand on the brake, the car should say "cut throttle to 0", but instead, you just had 100% throttle, 100% brake?


If it did, it wouldn’t matter. Brakes are required to be stronger than engines.


That makes no sense. Yes, they are. But brakes are going to be more reactive and performant with the throttle at 0 than 100.

You can't imagine that the stopping distances will be the same.


My example was clearly NOT about autonomous driving, because the previous comment seems to imply you are responsible for everything.


Autopilot, FSD, etc.. are all legally classified as ADAS, so it’s different from e.g. your car not responding to controls.

The liability lies with the driver, and all Tesla needs to prove is that input from the driver will override any decision made by the ADAS.


The point at which we decide that a defect is serious enough to transfer liability is quite case-dependent. If you knew that the throttle was glitchy but hadn't done anything to fix it, yes. If it affected every car from the manufacturer, it's obviously their fault -- but if you ignore the recall then it might be your fault again?

In this case, the behaviour of the system and the responsibility of the driver is well-established. I'd actually quite like it if Tesla were held responsible for their software, but they somehow continue to skirt the line wherein they require the driver to retain vigilance and any system failures are therefore the (legal) fault of the human not the car despite advertising it as "Full Self Driving".


> The point at which we decide that a defect is serious enough to transfer liability is quite case-dependent. If you knew that the throttle was glitchy but hadn't done anything to fix it, yes. If it affected every car from the manufacturer, it's obviously their fault -- but if you ignore the recall then it might be your fault again?

In most American jurisdictions' liability law, the more usual thing is to expand liability, rather than transferring liability. The idea that exactly one -- or at most one -- person or entity should be liable for any given portion of any given harm is a common popular one in places like HN, but the law is much more accepting of the situation where lots of people may have overlapping liability for the same harm, with none relieving the others.

The liability of a driver for maintenance and operation within the law is not categorically mutually exclusive with the liability of the manufacturer (and, indeed, every party in the chain of commerce) for manufacturing defects.

If a car is driven in a way that violates the rules of the road and causes an accident and a manufacturing defect in a driver assistance system contributed to that, it is quite possible for the driver, manufacturer of the driver assistance system, manufacturer of the vehicle (if different from that of the assistance system) and seller of the vehicle to the driver (if different from the last two), among others, to all be fully liable to those injured for the harms.


>> When a person drives a vehicle, then the person is held responsible for how the vehicle behaves on the roads, so it's logical that when a machine drives a vehicle that the machine's manufacturer/designer is held responsible.

Never really understood the supposed dilemma. What happens when the brakes fail because of bad quality?


> What happens when the brakes fail because of bad quality?

Depends on the root cause of the failure. Manufacturing faults would put the liability on the manufacturer; installation mistakes would put the liability on the mechanic; using them past their useful life would put the liability on the owner for not maintaining them in working order.


Then this would be manufacturing liability because they are not fit for purpose.


I think this is probably both the most concise and most reasonable take. It doesn't require anyone to define some level of autonomy or argue about specific edge cases of how the self driving system behaves. And it's easy to apply this principle to not only Tesla, but to all companies making self driving cars and similar features.


Note that Mercedes does take liability for accidents with their (very limited) level 3 system: https://www.theverge.com/2023/9/27/23892154/mercedes-benz-dr...


Yes. That is the only way. That being said, I want to see the first incidents and how they are resolved.


It's pathetic: <40 mph, following a vehicle directly ahead, basically only usable in stop-and-go traffic.

https://www.notebookcheck.net/Tesla-vs-Mercedes-self-driving...


The Mercedes system is definitely, as I said, very limited. But within its operating conditions the Mercedes system is much more useful: you can safely and legally read, work, or watch a movie while in the driver's seat, literally not paying any attention to the road.


What's the current total liability cost for all Tesla drivers?

The average for all USA cars seems to be around $2000/year, so even if FSD was half as dangerous Tesla would still be paying $1000/year equivalent (not sure how big insurance margins are, assuming nominal) per car.

Now, if legally the driver could avoid paying insurance for the few times they want/need to drive themselves (e.g. snow? Dunno what FSD supports atm) then it might make sense economically, but otherwise I don't think it would work out.
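
Back-of-envelope version of that math (all numbers are the rough assumptions from the comment above, nothing official):

    # Rough assumptions only, taken from the comment above; not real data.
    avg_liability_cost_per_car_year = 2000   # approx. average US premium, $
    fsd_relative_risk = 0.5                  # assume FSD is half as dangerous

    tesla_liability_per_car_year = avg_liability_cost_per_car_year * fsd_relative_risk
    print(tesla_liability_per_car_year)      # -> 1000.0 dollars per car per year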


Liability alone isn’t nearly that high.

Car insurance payments include people stealing your car, uninsured motorists, rental cars, and other issues not the driver's fault. Further, insurance payments also include profits for the insurance company, advertising, billing, and other overhead from running a business.

Also, if Tesla was taking on these risks you’d expect your insurance costs to drop.


How much would every death or severe injury caused by FSD cost Tesla? We probably won't know anytime soon, but unlike anyone else they can afford to pay out virtually unlimited amounts, and courts will presumably take that into account.


Yeah any automaker doing this would just negotiate a flat rate per car in the US and the insurer would average the danger to make a rate. This would be much cheaper than the average individual’s cost for liability on their insurance.


And it would be supplementary to the driver’s insurance, only covering incidents that happen while FSD is engaged. Arguably they would self insure and only purchase insurance for Tesla as a back stop to their liability, maybe through a reinsurance market.


Somehow I doubt those savings would be passed along to the individual car buyer. Surely buying a car insured by the manufacturer would be much more expensive than buying the car plus your own individual insurance, because the car company would want to profit from both.


What if someone gets killed because of some clear bug/error and the jury decides to award 100s of millions just for that single case? I'm not sure it's trivial for insurance companies to account for that sort of risk.


It is trivial and they've done it for ages. It's called reinsurance.

Basically (_very_ basically, there's more to it) the insurance company insures itself against large claims.


I’m not sure Boeing etc. could have insured any liability risk resulting from engineering/design flaws in their vehicles?


Not trivial, but that is exactly the kind of thing that successful insurance companies factor into their premiums, or specifically exclude those scenarios (e.g. not covering war zones for house insurance).


Good points, thanks.


Also I wouldn’t be surprised if any potential wrongful death lawsuits could cost Tesla several magnitudes more than the current average.


The liability for killing someone can include prison time.


Good. If you write software that people rely on with their lives, and it fails, you should be held liable for that criminally.


Remember that this is neural networks doing the driving, more than old expert systems: What makes a crash happen is a network that fails to read an image correctly, or a network that fails to capture what is going on when melding input from different sensors.

So the blame won't be on a guy who got an if statement backwards, but on signing off on stopping training, failing to have certain kinds of pictures in the set, or some other similar, higher-order problem. Blame will be incredibly nebulous.


This is the difference between a Professional Engineer (ie. the protected term) and everyone else who calls themselves engineers. They can put their signature on a system that would then hold them criminally liable if it fails.

Bridges, elevators, buildings, ski lifts etc. all require a professional engineer to sign off on them before they can be built. Maybe self driving cars need the same treatment.


Do we send Boeing engineers to jail when their plane crashes?

Intention matters when passing criminal judgement. If a mother causes the death of her baby due to some poor decision (say, feeding her something contaminated), no one proposes or tries to jail the mother, because they know the intention was the opposite.


This is why we have criminal negligence. Did the mother open a sealed package from the grocery store or did she find an open one on the ground?

Harder to apply to software, but maybe there should be some legal liability involved when a sysadmin uses admin/admin and health information is leaked.

Some employees from Boeing should absolutely be in jail regarding the MCAS system and the hundreds of people who died as a result. But the actions there go beyond negligence anyway.


Doesn't seem to happen in the medical and airplane industries; otherwise, Boeing would most likely not exist as a company anymore.


Perhaps one can debate whether it happens often enough or severely enough, but it certainly happens. For example, and only the first one to come to mind - the president of PIP went to jail.


Assuming there's the kind of guard rails as in other industries where this is true, absolutely. (In other words, proper licensing and credentialing, and the ability to prevent a deployment legally)

I would also say that if something gets signed off on by management, that carries an implicit transfer of accountability up the chain from the individual contributor to whoever signed off.


And such coders should carry malpractice insurance.


How is that working with Boeing?


People often forget corporations don’t go to jail. Murder when you’re not a person ends up with a slap.


Software requires hardware that can bit flip with gamma rays.


Which is why hardware used to run safety-critical software is made redundant.

Take the Boeing 777 Primary Flight Computer for example. This is a fully digital fly-by-wire aircraft. There are 3 separate racks of equipment housing identical flight computers; 2 in the avionics bay underneath the flight deck, 1 in the aft cargo section. Each flight computer has 3 separate processors, supporting 2 dissimilar instruction set architectures, running the same software built by 3 separate compilers. Each flight computer captures instances of the software not agreeing about an action to be undertaken and wins by majority vote. The processor that makes these decisions is different in each flight computer.

The power systems that provide each flight computer are also fully redundant; each computer gets power from a power supply assembly, which receives 2 power feeds from 3 separate power supplies; no 2 power supply assemblies share the same 2 sources of power. 2 of the 3 power systems (L engine generator, R engine generator, and the hot battery bus) would have to fail and the APU would have to be unavailable in order to knock out 1 of the 3 computers.

This system has never failed in 30 years of service. There's still a primary flight computer disconnect switch on the overhead panel in the cockpit, taking the software out of the loop, to logically connect all of your control inputs to the flight surface actuators. I'm not aware of it ever being used (edit: in a commercial flight).
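
For illustration, a drastically simplified sketch of the "three channels, majority wins" idea described above (my own toy code, not anything resembling the actual 777 logic):

    # Drastically simplified majority voting across redundant channels;
    # the real 777 primary flight computer logic is far more involved.
    from collections import Counter

    def majority_vote(channel_outputs):
        """Return the command most channels agree on, or None if there is no majority."""
        (command, count), = Counter(channel_outputs).most_common(1)
        return command if count > len(channel_outputs) // 2 else None

    # Three flight computers each propose an elevator command.
    outputs = ["elevator_up_2deg", "elevator_up_2deg", "elevator_up_3deg"]
    print(majority_vote(outputs))   # -> "elevator_up_2deg"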


You can’t guarantee the hardware was properly built.


Unless Intel, Motorola, and AMD all conspire to give you a faulty processor, you will get a working primary flight computer.

Besides, this is what flight testing is for. Aviation certification authorities don't let an aircraft serve passengers unless you can demonstrate that all of its safety-critical systems work properly and that it performs as described.

I find it hard to believe that automotive works much differently in this regard, which is what things like crumple zone crash tests are for.


You can control for that. Multiple machines doing rival calculations, for example.


Are you suggesting that individuals should carry that liability?


The ones that are identified as making decisions leading to death, yes.

It's completely normal in other fields where engineers build systems that can kill.


Pretty much. Fuck. I just watched higher-ups sign off on a project I know for a fact has defects all over the place going into production, despite our very explicit "don't do it" (not quite Tesla-level consequences, but still resulting in real issues for real people). The sooner we can start having people in jail for knowingly approving half-baked software, the sooner it will improve.


Should we require Professional Engineers to sign off on such projects the same way they are required to for other safety critical infrastructure (like bridges and dams)? The Professional Engineer that signed off is liable for defects in the design. (Though, of course, if the design is not followed then liability can shift back to the company that built it)


I hesitate, because I shudder at government deciding which algorithm is best for a given scenario (because that is effectively where it would go). Maybe the distinction is the moment money changes hands based on a product?

I am not an engineer, but I have watched clearly bad decisions take place from a technical perspective, where a person with a title that went to their head and a bonus not aligned with the right incentives messed things up for us. Maybe some professionalization of software engineering is in order.


This isn't a matter of the government saying what you need to do. This is a matter of being held criminally liable if people get hurt.


You are only technically correct. And even then, in terms of civics, by having people held criminally liable the government is telling you what to do (or technically not do). Note that no other body can (legally) do it. In fact, false imprisonment is in itself a punishable offense, but I digress...

Now, we could argue over whether that is/should/could/would be the law of the land, but have you considered how it would be enforced?

I mean, I can tell you first hand what it looks like, when government gives you a vague law for an industry to figure out and an enforcement agency with a broad mandate.

That said, I may have exaggerated a little bit on the algo choice. I was shooting for ghoulish overkill.


> You are only technically correct

You clearly have no idea how civil liability works. At all.


I am here to learn. You can help me by educating me. I do mean it sincerely. If you think you have a grasp on the subject, I think HN as a whole could benefit from your expertise.


Civil liability isn't determined by the "gov't"; it's determined by a jury of your peers. More interesting to me is how you came to the impression that you had any idea what you were talking about, to the point you felt justified in making your post.


My friend. Thank you. It is not often I get to be myself lately. Allow me to retort in kind.

Your original response to my response was in turn a response to the following sentence by "snovv_crash":

"This isn't a matter of the government saying what you need to do. This is a matter of being held criminally liable if people get hurt."

I do want to point out that from the beginning the narrow scope of this argument defined the type of liability as criminal and not civil as your post suggested. In other words, your whole point kinda falls apart as I was not talking about civil liability, but about the connection of civics and government's (or society's, depending on your philosophical bent) monopoly on violence.

It is possible that the word civic threw you off, but I was merely referring to the study of the rights, duties, and obligations of citizens in a society. Surely, you would agree that writing code that kills people would be under the purview of the rights, duties and obligations of individuals in a society?

In either case, I am not sure what you are arguing for here. It is not just that you are wrong, but you seem to be oddly focused on trying to... not even sure. Maybe I should ask you instead.

<<More interesting to me is how you came to the impression that you had any idea what you were talking about to the point you felt justified in making your post.

Yes, good question. Now that I have replied, I feel it would not be a bad idea (edit: for you) to present why you feel (and I use that verb consciously) you can just throw word salad around willy-nilly, not only with confidence but, clearly, with justification worthy of a justicar.

tldr: You are wrong, but can you even accept that you are wrong.. now that will be an interesting thing to see.

<< that you had any idea

I am a guy on the internet man. No one has any idea about anything. Cheer up:D


In a criminal court, guilt (not liability) is also determined by a jury of your peers, and not the gov't.


That's liability for defective design, not any time it fails as suggested above.


Drug companies and the FDA (circa 1906) play a very dangerous and delicate dance all the time releasing new drugs to the public. But for over a century now we've managed to figure it out without holding pharma companies criminally liable for every death.

> If you write software that people rely on with their lives, and it fails, you should be held liable for that criminally.

It's easier to type those words on the internet than to make them policy IRL. That sort of policy IRL would likely result in a) killing off all commercial efforts to solve traffic deaths via technology, along with vast amounts of other semi-autonomous technology like farm equipment, or b) government/car companies mandating filming the driver every time they turn it on, because it's technically supposed to be human-assisted autopilot in these testing stages (outside restricted pilot programs like Waymo taxis). Those distinctions would matter in a criminal courtroom, even if humans can't always be relied upon to follow the instructions on the bottle's label.


> criminally liable for every death.

The fact that people generally consume drugs voluntarily and make that decision after being informed about most of the known risks probably mitigates that to some extent. Being killed by someone else’s FSD car seems to be very different


Imagine that in 2031, FSD cars could exactly halve all aspects of auto crashes (minor, major, single car, multi car, vs pedestrian, fatal/non, etc.)

Would you want FSD software to be developed or not? If you do, do you think holding devs or companies criminally liable for half of all crashes is the best way to ensure that progress happens?


From a utilitarian perspective sure, you might be right but how do you exempt those companies from civil liability and make it impossible for victims/their families to sue the manufacturer? Might be legally tricky (driver/owner can explicitly/implicitly agree with the EULA or other agreements, imposing that on third parties wouldn’t be right).


> how do you exempt those companies from civil liability and make it impossible for victims/their families to sue the manufacturer?

I don't think anyone in this thread has talked about an exemption from civil liability (sue for money), just criminal liability (go to jail).

Civil liability is the far less controversial issue because it's transferred all the time: governments even mandate that drivers carry insurance for this purpose.

With civil liability transfer, imperfect FSD can still make economic sense. Just as an insurance company needs to collect enough premium to pay claims, the FSD manufacturer would need to reserve enough revenue to pay its expected claims. In this case, FSD doesn't even need to be better than humans to make economic sense, in the same way that bad drivers can still buy (expensive) insurance.
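
As a toy illustration of what "reserve enough revenue to pay its expected claims" could look like (every number below is invented for the example):

    # Invented numbers, for illustration only.
    at_fault_crashes_per_million_miles = 0.5   # assumed crash rate with FSD engaged
    avg_claim_cost = 40_000                    # assumed average payout per crash, $
    miles_per_car_per_year = 12_000

    expected_claims_per_car_year = (
        at_fault_crashes_per_million_miles / 1_000_000
        * miles_per_car_per_year
        * avg_claim_cost
    )
    print(expected_claims_per_car_year)        # -> 240.0, the per-car "premium" to reserve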


> just criminal liability (go to jail).

That just seems like a theoretical possibility (even if that). I don’t see how any engineer or even someone in management could go to jail unless intent or gross negligence can be proven.

> drivers carry insurance for this purpose.

The mandatory limit is extremely low in many US states.

> expected claims

That seems like the problem. It might take a while until we reach an equilibrium of some sort.

> that bad drivers can still buy

That's still capped by the amount of coverage + total assets held by that bad driver. In Tesla's case there is no real limit (without legislation/established precedent). Juries/courts would likely be influenced by that fact as well.


In fact, if you buy your insurance from Tesla, you effectively do put civil responsibility for FSD back in their hands.


Say cars have near-zero casualties in the northern hemisphere but occasionally fail for cars driving topsy-turvy in the south. If the company knew about it and chose to ignore it because of profits, yes, they should be charged criminally.


> make that decision after being informed about most of the known risks

Like for the COVID-19 vaccines? Experimental yet given to billions without ever showing them a consent form.


Yes, but worse. Nobody physically forced anyone to get vaccinated so you still had some choice. Of course legally banning individuals from using public roads or sidewalks unless they give up their right to sue Tesla/etc. might be an option.


We should hold pharma companies liable for every death. They make money off the success cases. Not doing so is another example of privatized profits and socialized risks/costs. Something like a program with reduced costs for those willing to sign away liability could help balance social good vs. risk.


Your take is understandable and not surprising on a site full of software developers. Somehow, the general software industry has ingrained this pessimistic and fatalistic dogma that says bugs are inevitable and there’s nothing you can do to prevent them. Since everyone believes it, it is a self-fulfilling prophecy and we just accept it as some kind of law of nature.

Holding software developers (or their companies) liable for defects would definitely kill off a part of the industry: the very large part that YOLOs code into production and races to get features released without rigorous and exhaustive testing. And why don’t they spend 90% of their time testing and verifying and proving their software has no defects? Because defects are inevitable and they’re not held accountable for them!


It is true of every field I can think of. Food gets salmonella and what not frequently. Surgeons forget sponges inside of people (and worse). Truckers run over cars. Manufacturers miss some failures in QA.

Literally everywhere else, we accept that the costs of 100% safety are just unreasonably high. People would rather have a mostly safe device for $1 than a definitely safe one for $5. No one wants to pay to have every head of lettuce tested for E Coli, or truckers to drive at 10mph so they can’t kill anyone.

Software isn’t different. For the vast majority of applications where the costs of failure are low to none, people want it to be free and rapidly iterated on even if it fails. No one wants to pay for a formally verified Facebook or DoorDash.


> Literally everywhere else, we accept that the costs of 100% safety are just unreasonably high.

Yes, but also in none of these situations would the consumer/customer/patient be held responsible. I don’t expect a system to be perfect, but I won’t accept any liability if it malfunctions as I use it the way it is intended. And even worse, I would not accept that the designers evade their responsibilities if it kills someone I know.

As the other poster said, I am happy to consider it safe enough the day the company accepts to own its issues and the associated responsibility.

> No one wants to pay for a formally verified Facebook or DoorDash.

This is untenable. Does nobody want a formally verified avionics system in their airliner, either?


You could be held liable if it impacts someone else. A restaurant serving improperly cooked chicken that gives people E Coli is liable. Private citizens may not have that duty, I’m not sure.

You would likely also be liable if you overloaded an electrical cable, causing a fire that killed someone.

“Using it in the way it was intended” is largely circular reasoning; of course it wasn’t intended to hurt anyone, so any usage that does hurt someone was clearly unintended. People frequently harm each other by misusing items in ways they didn’t realize were misuses.

> This is untenable. Does nobody want a formally verified avionics system in their airliner, either?

Not for the price it would cost. Airbus is the pioneer here, and even they apply formal verification sparingly. Here’s a paper from a few years ago about it, and how it’s untenable to formally verify the whole thing: https://www.di.ens.fr/~delmas/papers/fm09.pdf

Software development effort generally tends to scale superlinearly with complexity. I am not an expert, but the impression I get is that formal verification grows exponentially with complexity to the point that it is untenable for most things beyond research and fairly simple problems. It is a huge pain in the ass to do something like putting time bounds around reading a config file.

IO also sucks in formal verification from what I hear, and that’s like 80% of what a plane does. Read these 300 signals, do some standard math, output new signals to controls.

These things are much easier to do with tests, but tests only check for scenarios you’ve thought of already


> You could be held liable if it impacts someone else. A restaurant serving improperly cooked chicken that gives people E Coli is liable. Private citizens may not have that duty, I’m not sure.

> You would likely also be liable if you overloaded an electrical cable, causing a fire that killed someone.

Right. But neither of these examples involves following guidelines or proper use. If I turn the car into people on the pavement, I am responsible. If the steering wheel breaks and the car does it, then the manufacturer is responsible (or the mechanic, if the steering wheel was changed). The question at hand is whose responsibility it is if the car's software does it.

> “Using it in the way it was intended” is largely circular reasoning; of course it wasn’t intended to hurt anyone, so any usage that does hurt someone was clearly unintended.

This is puzzling. You seem to be conflating use and consequences and I am not quite sure how you read that in what I wrote. Using a device normally should not make it kill people, I guess at least we can agree on that. Therefore, if a device kills people, then it is either improper use (and the fault of the user), or a defective device, at which point it is the fault of the designer or manufacturer (or whoever did the maintenance, as the case might be, but that’s irrelevant in this case).

Each device has a manual and a bunch of regulations about its expected behaviour and standard operating procedures. There is nothing circular about it.

> Not for the price it would cost.

Ok, if you want to go full pedantic, note that I wrote “want”, not “expect”.


> And why don’t they spend 90% of their time testing and verifying and proving their software has no defects? Because defects are inevitable and they’re not held accountable for them!

For a huge part of the industry, the reason is entirely different. It is because software that mostly works today but has defects is much more valuable than software that always works and has no defects 10 years from now. Extremely well informed business customers will pay for delivering a buggy feature today rather than wait two more months for a comprehensively tested feature. This is the reality of the majority of the industry: consumers care little about bugs (below some defect rate) and care far more about timeliness.

This of course doesn't apply to critical systems like automatic drivers or medical devices. But the vast majority of the industry is not building these types of systems.


Punishing individual developers is of course absurd (unless intent can be proven); the company itself and the upper management, on the other hand? Would make perfect sense.


You have one person in that RACI accountable box. That’s the engineer signing it off as fit. They are held accountable, including with jail if required.


> that says bugs are inevitable and there’s nothing you can do to prevent them

I don't think people believe this as such. It may be the short way to write it, but actually what devs mean is "bugs are inevitable at the funding/time available". I often say "bugs are inevitable" when in practice it means "you're not going to pay a team for formal specification, validated implementation and enough reliable hardware".

Which business will agree to making the process 5x longer and requiring extra people? Especially if they're not forced there by regulation or potential liability?


That's a dangerous line and I don't think it's correct. Software I write shouldn't be relied on in critical situations. If someone makes that decision then it's on them not on me.

The line should be where a person tells others that they can rely on the software with their lives - as in the integrator for the end product. Even if I was working on the software for self driving, the same thing would apply - if I wrote some alpha level stuff for the internal demonstration and some manager decided "good enough, ship it", they should be liable for that decision. (Because I wouldn't be able to stop them / may have already left by then)


It’s not that complicated or outlandish. That’s how most engineering fields work. If a building collapses because of design flaws, then the builders and architects can be held responsible. Hell, if a car crashes because of a design or assembly flaw, the manufacturer is held responsible. Why should self-driving software be any different?

If the software is not reliable enough, then don’t use it in a context where it could kill people.


I think the example here is that the designer draws a bridge for a railway model, and someone decides to use the same design and sends real locomotives across it. Is the original designer (who neither intended nor could have foreseen this) liable in your understanding?


That's a ridiculous argument.

If a construction firm takes an arbitrary design and then tries to build it in a totally different environment and for a different purpose, then the construction firm is liable, not the original designer. It'd be like Boeing taking a child's paper aeroplane design and making a passenger jet out of it and then blaming the child when it inevitably fails.


Or alternatively, if Boeing uses wood screws to attach an airplane door and a screw fails, that's on Boeing, not the airline, pilot, or screw manufacturer. But if it's sold as an aerospace-grade attachment bolt, with attachments for safety wire and a spec sheet suggesting the required loads are within design parameters, then it's the bolt manufacturer's fault when it fails, and they might have to answer for any deaths resulting from that. Unless Boeing knew or should have known that the bolts weren't actually as good as claimed, in which case the buck passes back to them.

Of course that's wildly oversimplifying, and multiple entities can be at fault at once. My point is that these are normal things considered in regular engineering and manufacturing.


> That's a ridiculous argument.

Not making an argument. Asking a clarifying question about someone else’s.

> It'd be like Boeing taking a child's paper aeroplane design and making a passenger jet out of it and then blaming the child when it inevitably fails.

Yes exactly. You are using the same example I used to say the same thing. So which part of my message was ridiculous?


If it's not an argument, then you're just misrepresenting your parent poster's comment by introducing a scenario that never happens.

If you didn't intend your comment as a criticism, then you phrased it poorly. Do you actually believe that your scenario happens in reality?


> you're just misrepresenting your parent poster's comment

I did not represent or misrepresent anything. I have asked a question to better understand their thinking.

> If you didn't intend your comment as a criticism, then you phrased it poorly.

Quite probably. I will have to meditate on it.

> Do you actually believe that your scenario happens in reality?

With railway bridges? Never. It would ring alarm bells for everyone from the fabricators to the locomotive engineer.

With software? All the time. Someone publishes some open source code, someone else at a corporation bolts that open source code into some application, and now the former "toy train bridge" is a load-bearing key component of something the original developer could never have imagined or planned for.

This is not theoretical. Very often I’m the one doing the bolting.

And to be clear: my opinion is that the liability should fall with whoever integrated the code and certified it to be fit for some safety-critical purpose. As an example, if you publish leftpad and I put it into a train brake controller, it is my job to make sure it is doing the right thing. If the train crashes, you as the author of leftpad bear no responsibility, but I as the manufacturer of discount train brakes do.


It was not a misrepresentation of anything. They were just restating the worry that was stated in the GP comment. https://news.ycombinator.com/item?id=41892572

And the only reason the commenter I linked to had that response is because its parent comment was slightly careless in its phrasing. Probably just change “write” to “deploy” to capture the intended meaning.


Someone, at some point signed off on this being released. Not thinking things through seriously is not an excuse to sell defective cars.


Are you serious?! You must be trolling!


I assure you I am not trolling. You appear to have misread my message.

Take a deep breath. Read my message one more time carefully. Notice the question mark at the end of the last sentence. Think about it. If after that you still think I’m trolling you or anyone else I will be here and happy to respond to your further questions.


To be fair, maybe the software you write shouldn't be relied on in critical situations, but in this case the only places this software could be used are critical situations.


Ultimately - yes. But as I mentioned, the fact it's sold as ready for critical situations doesn't mean the developers thought/said it's ready.


But someone slapped that label on it and made a pinky promise that it's true. That person needs to accept liability if things go wrong. If person A is loud and clear that something isn't ready, but person B tells the customer otherwise, B is at fault.

Look, there are well established procedures in a lot of industries where products are relied on to keep people safe. They all require quite rigorous development and certification processes and sneaking untested alpha quality software through such a process would be actively malicious and quite possibly criminal in and of itself, at least in some industries.


This is the beginning of the thread https://news.ycombinator.com/item?id=41891164

You're in violent agreement with me ;)


No, the beginning of the thread is earlier. And with that context it seems clear to me that the “you” in the post you linked means “the company”, not “the individual software developer”. No one else in your replies seems confused by that, we all understand self-driving software wasn’t written by a single person that has ultimate decision power within a company.


If the message said "you release software", or "approve" or "produce", or something like that, sure. But it said "you write software" - and I don't think that can apply to a company, because writing is what individuals do. But yeah, maybe that's not what the author meant.


> and I don't think that can apply to a company, because writing is what individuals do.

By that token, no action could ever apply to a company—including approving, producing, or releasing—since it is a legal entity, a concept, not a physical thing. For all those actions there was a person actually doing it in the name of the company.

It’s perfectly normal to say, for example, “GenericCorp wrote a press-release about their new product”.


I think it should be fairly obvious that it's not the individual developers who are responsible/liable. In critical systems there is a whole chain of liability. That one guy in Nebraska who thanklessly maintains some open source lib that BigCorp is using in their car should obviously not be liable.


It depends. If you do bad software and skip reviews and processes, you may be liable. Even if you are told to do something, if you know it is wrong, you should say so. Right now I'm in the middle of s*t because I spoke up.


> Right now I'm in the middle of s*t because I spoke up.

And you believe that, despite experiencing what happens if you speak up?

We shouldn’t simultaneously require people to take heroic responsibility, while also leaving them high and dry if they do.


I do believe I am responsible. I recognize I'm now in a position where I can speak without fear. If I get fired I would throw a party, tbh.


>Software I write shouldn't be relied on in critical situations.

Then don't write software to be used in things that are literally always critical situations, like cars.


What a laugh, would you take that deal?

Upside: you get paid a 200k salary, if all your code works perfectly. Downside: if it doesn't, you go to prison.

The users aren't compelled to use it. They can choose not to. They get to choose their own risks.

The internet is a gold mine of creatively moronic opinions.


We need far more regulation of the software industry; far too many people working in it fail to understand the scope of what they do.

Civil engineer kills someone with a bad building, jail. Surgeon removes the wrong lung, jail. Computer programmer kills someone, “oh well it’s your own fault”.


I've never heard of a surgeon going to jail over a genuine mistake even if it did kill someone. I'm also not sure what that would accomplish - take away their license to practice medicine sure, but they're not a threat to society more broadly.


You made all that up out of nothing. They'd only go to jail if it was intentional.

The only case where a computer programmer "kills someone" is where he hacks into a system and interferes with it in a way that foreseeably leads to someone's death.

Otherwise, the user voluntarily assumed the risk.

Frankly if someone lets a computer drive their car, given their own ample experiences of computers "crashing", it's basically a form of attempted suicide.


You can go to prison or die for being a bad driver, yet people choose to drive.


You're arguing for the sake of it; you wouldn't take that risk/reward trade.

Most code has bugs from time to time even when highly skilled developers are being careful. None of them would drive if the fault rate was similar and the outcome was death.


Or to put it even more straightforwardly: people who choose to drive rarely drive more than a few tens of thousands of miles per year. People who choose to write autonomous-driving software have their lines of code potentially driving a billion miles per year, encountering a lot more edge cases that they are expected to handle in a non-dangerous manner, and they have to handle them via advance planning and interactions with a lot of other people's code.

The only practical way around this which permits autonomous vehicles (which are apparently dependent on much more complex and intractable codebases than, say, avionics) is a much higher threshold of criminal responsibility than the "serious consequences resulted from the one-off execution of a dangerous manoeuvre which couldn't be justified in context" standard that sends human drivers to jail. And of course that double standard will be problematic if "willingness to accept liability" is the only safety threshold.


I don't think anyone's seriously suggesting people be held accountable for bugs which are ultimately accidents. But if you knowingly sign off on, oversee, or are otherwise directly responsible for the construction of software that you know has a good chance of killing people, then yes, there should be consequences for that.


Exactly. Just like most car accidents don't result in prison or death. But negligence or recklessness can do it.


Systems evolve to handle such liability: drivers pass theory and practical tests to get licensed to drive (and periodically thereafter), and an insurance framework gauges your risk level and charges you accordingly.


Requiring formal licensing and possibly insurance for developers working on life-critical systems is not that outlandish. On the contrary, that is already the case in serious engineering fields.


And yet tens of thousands of people die on the roads right now every year. Working well?


Read the site rules.

And also, of course some people would take that deal, and of course some others wouldn't. Your argument is moot.


And corporations are people now, so Tesla can go to jail.


In the United States? Come on. Boeing executives are not in jail - they are getting bonuses.


But some little guy down the line will pay for it. Look up the Eschede ICE accident.


There are many examples.

The Koch brothers, famous "anti-regulatory state" warriors, have fought oversight so hard that their gas pipelines were allowed to be barely intact.

Two teens get into a truck, turn the ignition key - and the air explodes:

https://www.southcoasttoday.com/story/news/nation-world/1996...

Does anyone go to jail? F*K NO.


To be fair, the teens knew about the gas leak and started the truck in an attempt to get away. Gas leaks like that shouldn't happen easily, but people near pipelines like that should also be made aware of the risks of gas leaks, as some leaks are inevitable.


As an alternative though, the company also failed at handling the gas leak once it started. They could have had people all over the place guiding people out and away from the leak safely, and keeping the public away while the leak was fixed.

Or they could have bought sufficient buffer land around the pipeline such that a gas leak would be found and stopped before it could explode down the road.


Presumably that is exactly when their taxi service rolls out?

While this has a dramatic rhetorical flourish, I don’t think it’s a good proxy. Even if it was safer, it would be an unnecessarily high burden to clear. You’d be effectively writing a free insurance policy which is obviously not free.

Just look at total accidents / deaths per mile driven; it's the obvious and standard metric for measuring car safety. (You need to be careful not to stop the clock as soon as the system disengages, of course.)
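
As a rough sketch of what "not stopping the clock" could look like in practice (the field names, data, and grace window below are all invented for illustration), one might attribute a crash to the automation if it happened while engaged or within a few seconds of a hand-off:

    # Hypothetical sketch: count a crash against the automation if it occurred
    # while engaged OR shortly after a disengagement (grace window is assumed).
    GRACE_SECONDS = 5.0

    def crashes_per_million_miles(trips, grace=GRACE_SECONDS):
        auto_miles = sum(t["miles_engaged"] for t in trips)
        auto_crashes = sum(
            1
            for t in trips
            for c in t["crashes"]
            if c["engaged"] or c["seconds_since_disengage"] <= grace
        )
        return 1e6 * auto_crashes / auto_miles if auto_miles else 0.0

    trips = [
        {"miles_engaged": 120.0,
         "crashes": [{"engaged": False, "seconds_since_disengage": 2.0}]},
        {"miles_engaged": 300.0, "crashes": []},
    ]
    print(crashes_per_million_miles(trips))  # counts the hand-off crash too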


This is how I feel about nuclear energy. Every single plant should need to form a full insurance fund dedicated to paying out if there's trouble. And the plant should have strict liability: anything that happens from materials it releases is its responsibility.

But people get upset about this. We need corporations to take responsibility.


While we're at it, why not apply the same standard to coal and natural gas plants? For some reason, when we start talking about nuclear plants we all of a sudden become averse to the idea of unfunded externalities, but when we're talking about 'old' tech that has been steadily irradiating your community and changing the gas composition of the entire planet, it becomes less concerning.


I think it is a matter of perceived risk.

Realistically speaking, nuclear power is pretty safe. In the history of nuclear power, there were two major incidents. Considering the number of nuclear power plants around the planet, that is pretty good. However, as those two accidents demonstrated, the potential fallout of those incidents is pretty severe and widespread. I think this massively contributes to the perceived risks. The warnings towards the public were pretty clear. I remember my mom telling stories from the time the Chernobyl incident became known to the public and people became worried about the produce they usually had from their gardens. Meanwhile, everything that has been done to address the hazards of fossil based power generation is pretty much happening behind the scenes.

With coal and natural gas, it seems like people perceive the risks as more abstract. The radioactive emissions of coal power plants have been known for a while, and the (potential) dangers of fine particulate matter resulting from combustion are somewhat well known nowadays as well. However, the effects of those dangers seem much more abstract and delayed, leading people to not be as worried about them. It also shows on a smaller, more individual scale: people still buy ICE cars in large numbers and install gas stoves in their houses despite induction being readily available and at times even cheaper.


> However, the effects of those danger seem much more abstract and delayed, leading people to not be as worried about it.

Climate change is very visible in the present day to me. People are protesting about it frequently enough that it's hard to claim they are not worried.


Climate change is certainly visible, although the extent to which areas are affected varies wildly. However, there are still shockingly many people who have a hard time attributing ever-increasing natural disasters and more extreme weather patterns to climate change.


During power outages, having natural gas in your home is a huge benefit. Many in my area just experienced it with Helene.

You can still cook. You can still get hot water. If you have gas logs you still have a heat source in the winter too.

These trade offs are far more important to a lot of people.


Granted, that is a valid concern if power outages are more frequent in your area. I have never experienced a power outage personally, so it is not something I ever thought about. However, I feel like with solar power and battery storage systems becoming increasingly widespread, this won't be a major concern for much longer.


They aren’t frequent but in the last 15-16 years there have been 2 outages that lasted almost 2 weeks in some areas around here. The first one was in the winter and the only gas appliance I had was a set of gas logs in the den.

It heated my whole house and we used a pan to cook over it. When we moved the first thing I did was install gas logs, gas stove and a gas water heater.

It’s nice to have options and backup plans. That’s one of the reasons I was a huge fan of the Chevy Volt when it first came out. I could easily take it on a long trip but still averaged 130mpg over 3 years (twice). Now I’ve got a Tesla and when there are fuel shortages it’s also really nice.

A friend of ours owns a cybertruck and was without power for 9 days, but just powered the whole house with the cybertruck. Every couple of days he’d drive to a supercharger station to recharge.


Sure, we can have a carbon tax on everything. That's fine. And then the nuclear plant has to pay for a Pripyat-sized exclusion zone around it. Just like the guy said about Tesla. All fair.


That's not a workable idea as it'd just encourage corporations to obfuscate the ownership of the plant (e.g. shell companies) and drastically underestimate the actual risks of catastrophes. Ultimately, the government will be left holding the bill for nuclear catastrophes, so it's better to just recognise that and get the government to regulate the energy companies.


The problem I see there is that if “corporations are responsible” then no one is. That is, no real person has the responsibility, and acts accordingly.


Even if it does, can it resurrect the deceased?


But people driving manually kill people all the time too. The bar for self driving isn’t «does it never kill anyone», it’s «does it kill people less than manual driving». We’re not there yet, and Tesla’s «FSD» is marketing bullshit, but we certainly will be there one day, and at that point, we need to understand what we as a society will do when a self driving car kills someone. It’s not obvious what the best solution is there, and we need to continue to have societal discussions to hash that out, but the correct solution definitely isn’t «don’t use self driving».


> The bar for self driving isn’t «does it never kill anyone», it’s «does it kill people less than manual driving».

Socially, that's not quite the standard. As a society, we're at ease with auto fatalities because there's often Someone To Blame. "Alcohol was involved in the incident," a report might say, and we're more comfortable even though nobody's been brought back to life. Alternatively, "he was asking for it, walking at night in dark clothing, nobody could have seen him."

This is an emotional standard that speaks to us as human, story-telling creatures that look for order in the universe, but this is not a proper actuarial standard. We might need FSD to be manifestly safer than even the best human drivers before we're comfortable with its universal use.


That may be true, but I think I personally would find it extremely hard to argue against once the numbers clearly show that it's safer. Once the numbers unambiguously show that autopilots are safer, it will be very hard for people to argue against them. Of course there is a huge intermediate stage where the numbers aren't clear (or at least not clear to the average person), and during that stage, emotions may rule the debate. But if the underlying data is there, I'm certain car companies can change the narrative - just look at how much America hates public transit and jaywalkers.


No, because every driver thinks they are better than average.

So nobody will accept it.


I expect insurance to figure out the relative risks and put a price sticker on that decision.


Assuming I understand the argument flow correctly, I think I disagree. If there is one thing that the past few decades have confirmed quite conclusively, it is that people will trade a lot of control and sense away in the name of convenience. The moment FSD reaches that 'take me home -- I am too drunk to drive' sweet spot of reliability, I think it will be accepted; maybe even required by law. It does not seem to be there yet.


The level where someone personally uses it and the level where they accept it being on the road are different. Beating the average driver is all about the latter.

Also I will happily use self driving that matches the median driver in safety.


I think that’s implicit in the promise of the upcoming-any-year-now unattended full self driving.


Arguably the problem with Tesla self-driving is that it's stuck in an uncanny valley of performance where it's worse than better performing systems but also worse from a user experience perspective than even less capable systems.

Less capable driver-assistance systems might help the driver out (e.g. adaptive cruise control), but leave no doubt that the human is still driving. Tesla, though, goes far enough that it takes over driving from the human, but it isn't reliable enough that the human can stop paying attention: you still have to be ready to take over at a moment's notice. This seems like the worst of all possible worlds, since you are disengaged from the act of driving yet still have to maintain constant alertness.

Autopilots in airplanes are much the same way: pilots can't just turn one on and take a nap. But the difference is that nothing an autopilot is going to do will instantly crash the plane, while a Tesla screwing up will require split-second reactions from the driver to correct for.

I feel like the real answer to your question is that having reasonable confidence in self-driving cars beyond "driver assistance" type features will ultimately require a car that will literally get from A to B reliably even if you're taking a nap. Anything close to that but not quite there is in my mind almost worse than something more basic.


> It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

This is what bugs me about ordinary autopilot. Autopilot doesn't switch lanes, but I like to slow down or speed up as needed to allow merging cars to enter my lane. Autopilot never does that, and I've had some close calls with irate mergers who expected me to work with them. And I don't think they're wrong.

Just means that when I'm cruising in the right lane with autopilot I have to take over if a car tries to merge.


While I certainly wouldn't object to how you handle merging cars (it's a nice, helpful thing to do!), I was always taught that if you want to merge into a lane, you are the sole person responsible for making that possible and making that safe. You need to get your speed and position right, and if you can't do that, you don't merge.

(That's for merging onto a highway from an entrance ramp, at least. If you're talking about a zipper merge due to a lane ending or a lane closure, sure, cooperation with other drivers is always the right thing to do.)


More Americans should go drive on the Autobahn. Everyone thinks the magic is “omg no speed limits!” which is neat but the really amazing thing is that NO ONE sits in the left hand lane and EVERYONE will let you merge immediately upon signaling.

It’s like a children’s book explanation of the nice things you can have (no speed limits) if everyone could just stop being such obscenely selfish people (like sitting in the left lane or preventing merges because of some weird “I need my car to be in front of their car” fixation).


Tesla FSD on German Autobahn = most dangerous thing ever. The car has never seen this rule and it's not ready for a 300km/h car behind you.


To be fair, Tesla FSD on German Autobahn = impossible because it's not released yet, precisely because it's not trained for German roads.


At least in the northeast/east coast US there are still lots of old parkways without modern onramps, where moving over to let people merge is super helpful. Frequently these have bad visibility and limited room to accelerate if any at all, so doing it your way is not really possible.

For example:

I use this onramp fairly frequently. It's rural and rarely has much traffic, but when there is, you can get stuck for a while trying to get on because it's hard to see the oncoming cars, and there's not much room to accelerate (unless people move over, which they often do). https://maps.app.goo.gl/ALt8UmJDzvn89uvM7?g_st=ic

Preemptively getting in the left lane before going under this bridge is a defensive safety maneuver I always make—being in the right lane nearly guarantees some amount of conflict with merging traffic.

https://maps.app.goo.gl/PumaSM9Bx8iyaH9n6?g_st=ic


I was taught that in every situation you should act as though you are the sole person responsible for making the interaction safe.

If you're the one merging? It's on you. If you're the one being merged into? Also you.

If you assume that every other driver has a malfunctioning vehicle or is driving irresponsibly then your odds of a crash go way down because you assume that they're going to try to merge incorrectly.


>cooperation with other drivers is always the right thing to do

Correct, including when the other driver may not have the strictly interpreted legal right of way. You don't know if their vehicle is malfunctioning, or if the driver is malfunctioning, or if they are being overly aggressive or distracted on their phone.

But most of the time, on an onramp to a highway, people on the highway in the lane that is being merged into need to be taking into account the potential conflicts due to people merging in from the acceleration lane. Acceleration lanes can be too short, other cars may not have the capability to accelerate quickly, other drivers may not be as confident, etc.

So while technically, the onus is on people merging in, a more realistic rule is to take turns whenever congestion appears, even if you have right of way.


> You need to get your speed and position right, and if you can't do that, you don't merge.

I agree, but my observation has been that the majority of drivers are absolutely trash at doing that and I'd rather they not crash into me, even if would be their fault.

Honestly I think Tesla's self-driving technology is long on marketing and short on performance, but it really helps their case that a lot of the competition is human drivers who are completely terrible at the job.


Autopilot is just adaptive cruise control with lane keeping. Literally every car has this now. I don't see people on Toyota, Honda, or Ford forums complaining that a table-stakes feature of a car doesn't adjust speed or change lanes as a car is merging in. Do you know how insane that sounds? I'm assuming you're in software since you're on Hacker News.


It sounds zero insane. Adaptive cruise control taking into account merging would be great. And it's valid to complain about automations that make your car worse at cooperating.


This entire thread is people complaining about automation and FSD. Then you want an advanced feature, one that requires a large amount of AI, as a toss-in on top of basic adaptive cruise control. Do you realize how far ahead of everyone else Tesla is?


A large amount of AI to shift slightly forward or backward based on turn signals? No.


My Audi doesn't advertise its predictive cruise control as Full Self Driving. So expectations are more controlled...


They're not talking about FSD.


> Just means that when I'm cruising in the right lane with autopilot I have to take over if a car tries to merge.

Which brings it right back to the original criticism of Tesla's "self driving" program. What you're describing is assisted driving, not anything close to "full self driving".


Agreed. Automatic lane changes are the only feature of enhanced autopilot that I think I'd be interested in, solely for this reason.


Tesla jumped the gun on the FSD free trial earlier this year. It was nowhere near good enough at the time. Most people who tried it for the first time probably share your opinion.

That said, there is a night and day difference between FSD 12.3 that you experienced earlier this year and the latest version 12.6. It will still make mistakes from time to time but the improvement is massive and obvious. More importantly, the rate of improvement in the past two months has been much faster than before.

Yesterday I spent an hour in the car over three drives and did not have to turn the steering wheel at all except for parking. That never happened on 12.3. And I don't even have 12.6 yet, this is still 12.5; others report that 12.6 is a noticeable improvement over 12.5. And version 13 is scheduled for release in the next two weeks, and the FSD team has actually hit their last few release milestones.

People are right that it is still not ready yet, but if they think it will stay that way forever they are about to be very surprised. At the current rate of improvement it will be quite good within a year and in two or three I could see it actually reaching the point where it could operate unsupervised.


I have a 2024 Model 3, and it's a great car. That being said, I'm under no illusion that the car will ever be self driving (unsupervised).

12.5.6 still fails to read very obvious signs for 30 km/h playground zones.

The current vehicles lack sufficient sensors, and likely do not have enough compute power and memory to cover all edge cases.

I think it's a matter of time before Tesla faces a lawsuit over continual FSD claims.

My hope is that the board will grow a spine and bring in a more focused CEO.

Hats off to Elon for getting Tesla to this point, but right now they need a mature (and boring) CEO.


The board is family and friends, so them ousting him will never happen.


At some point the risk of going to prison overtakes family loyalty.


There is no risk of going to prison. It just doesn't happen, never has and never will, no matter how unfair that is. Board members and CEOs are not held accountable, ever.


https://fortune.com/2023/01/24/google-meta-spotify-layoffs-c...

As they say, they take "full responsibility"



I have yet to see a difference. I let it highway drive for an hour and it cut off a semi, coming within 9 to 12 inches of the bumper for no reason. I heard about that one believe me.

It got stuck in a side street trying to get to a target parking lot, shaking the wheel back and forth.

It's no better so far and this is the first day.


You have 12.6?

As I said, it still makes mistakes and it is not ready yet. But 12.3 was much worse. It's the rate of improvement I am impressed with.

I will also note that the predicted epidemic of crashes from people abusing FSD never happened. It's been on the road for a long time now. The idea that it is "irresponsible" to deploy it in its current state seems conclusively disproven. You can argue about exactly what the rate of crashes is but it seems clear that it has been at the very least no worse than normal driving.


Hm. I thought that was the latest release, but it looks like no. But there seems to be no improvement since the last trial, so maybe 12.6 is magically better.


A lot of people have been getting the free trial with 12.3 still on their cars today. Tesla has really screwed up on the free trial for sure. Nobody should be getting it unless they have 12.6 at least.


I have 12.5. Maybe 12.6 is better, but I've heard that before.

Don't get me wrong: without a concerted data team building maps a priori, this is pretty incredible. But from a pure performance standpoint it's a shaky product.


The latest version is 12.5.6; I think he got confused by the .6 at the end. If you think that's bad, then there isn't a better version available. However, it is a dramatic improvement over 12.3; I don't know how much you tested on it.


You're right, thanks. One of the biggest updates in 12.5.6 is transitioning the highway Autopilot to FSD. If he has 12.5.4 then it may still be using the old non-FSD Autopilot on highways which would explain why he hasn't noticed improvement there; there hasn't been any until 12.5.6.


> ... coming within 9 to 12 inches of the bumper for no reason. I heard about that one believe me.

Oh dear.

Glad you're okay!


Is it possible you have a lemon? Genuine question. I’ve had nothing but positive experiences with FSD for the last several months and many thousands of miles.


If the incidence of problems is some relatively small number, like 5% or 10%, it's very easily possible that you've never personally seen a problem, but overall we'd still consider that the total incidence of problems is unacceptable.

Please stop presenting arguments of the form "I haven't seen problems so people who have problems must be extreme outliers". At best it's ignorant, at worst it's actively in bad faith.


I've had nothing but positive experiences with ChatGPT-4o; that doesn't make people wrong to criticise either system for modelling its training data too closely and generalising too little when it's used for something where the inference domain is too far outside the training domain.


I suspect the performance might vary widely depending on whether you're on a road in California they have a lot of data on, or a road FSD has rarely seen before.


A lot of haters mistake safety-critical disengagements for "oh, the car is doing something I don't like or wouldn't do".

If you treat the car like it's a student driver or someone else driving, disengagements will go down. If you treat it like you're the one driving, there will always be something to complain about.


If I had a dime for every hackernews who commented that FSD version X was like a revelation compared to FSD version X-ε I'd have like thirty bucks. I will grant you that every release has surprisingly different behaviors.

Here's an unintentionally hilarious meta-post on the subject https://news.ycombinator.com/item?id=29531915


Sure, plenty of people have been saying it's great for a long time, when it clearly was not (looking at you, Whole Mars Catalog). I was not saying it was super great back then. I have consistently been critical of Elon for promising human level self driving "next year" for like 10 years in a row and being wrong every time. He said it this year again and I still think he's wrong.

But the rate of progress I see right now has me thinking that it may not be more than two or three years before that threshold is finally reached.


The most important lesson from my incorrect 2009 prediction that we'd have cars without steering wheels by 2018 (and from thinking that the progress I saw each year up to then was consistent with that prediction) is that it's really hard to guess how long it takes to walk the fractal path that is software R&D.

How far are we now, 6 years later than I expected?

Dunno.

I suspect it's gonna need an invention on the same level as diffusion or transformer models to be able to handle all the edge cases, and that might mean we only get it with human-level AGI.

But I don't know that, it might be we've already got all we need architecture-wise and it's just a matter of scale.

The only thing I can be really sure of is that we're making progress "quite fast" in a non-objective use of the words — it's not going to need a re-run of 6 million years of mammalian evolution or anything like that, but even 20 years of wall-clock time would be a disappointment.


Waymo went driverless in 2020, so maybe you weren't that far off; predicting that in 2009 would have been pretty good. They could and should have had vehicles without steering wheels anytime since then; it's just a matter of hardware development. Their steering-wheel-free car program was derailed when they hired traditional car company executives.


Waymo for sure, but I meant also without any geolock etc., so I can't claim credit for my prediction.

They may well best Tesla to this, though.


Waymo is using full lidar and other sensors, whereas Tesla is relying on pure vision systems (to the point of removing radar on newer models). So they're solving a much harder problem.

As for whether it's worthwhile to solve that problem when having more sensors will always be safer, that's another issue...


Indeed.

While it ought to be possible to solve for just RGB… making it needlessly hard for yourself is a fun hack-day side project, not a valuable business solution.


On one hand, it really has gotten much better over time. It's quite impressive.

On the other hand, I fear/suspect it is asymptotically, rather than linearly, approaching good enough to be unsupervised. It might get halfway there, each year, forever.


Doesn’t this just mean it’s improving rapidly which is a good thing?


No, the fact that people say FSD is on the verge of readiness constantly for a decade means there is no widely shared benchmark.


> That said, there is a night and day difference between FSD 12.3 that you experienced earlier this year and the latest version 12.6

>And I don't even have 12.6 yet, this is still 12.5;

How am I supposed to take anything you say seriously when your only claim is a personal anecdote that doesn't even apply to your own argument? Please, think about what you're writing, and please stop repeating information you heard on YouTube as if it's fact.

This is one of the reasons (among many) that I can't take Tesla boosters seriously. I have absolutely zero faith in your anecdote that you didn't touch the steering wheel. I bet it's a lie.


> I have absolutely zero faith in your anecdote that you didn't touch the steering wheel. I bet it's a lie.

I'm not GP, but I can share video showing it driving across residential, city, highway, and even gravel roads, all in a single 90-minute trip, without touching the steering wheel a single time (using 12.5.4.1).


And if someone wants to claim I'm cherry-picking the video, I'm happy to shoot a new one with this post visible on an iPad in the seat next to me. Is it autonomous? Hell no. Can it drive in Manhattan? Nope. But can it do >80% of my regular city (suburb outside NYC) and highway driving? Yep.


It's so obviously cherry-picking, I have no idea what you are even thinking. To not be cherry-picking would mean that it's actually ready and works fine in all situations, and there's no way Musk would not shout that out from rooftops and sell it yesterday.

Obviously it works some of the time on some roads, but not all the time on all roads. A video of it working on a road where it works is cherry-picking. Look up what the term means.


> To not be cherry-picking would mean that it's actually ready and works fine in all situations

A claim I never made in any of my posts. What a way to straw man and fail! ;)

Way to cherry pick yourself into the one cherry picked claim I never actually made!


I can second this experience. I rarely touch the wheel anymore. I’d say I’m 98% FSD. I take over in school zones, parking lots, and complex construction.


The version I have is already a night and day difference from 12.3 and the current version is better still. Nothing I said is contradictory in the slightest. Apply some basic reasoning, please.

I didn't say I didn't touch the steering wheel. I had my hands lightly touching it most of the time, as one should for safety. I occasionally used the controls on the wheel as well as the accelerator pedal to adjust the set speed, and I used the turn signal to suggest lane changes from time to time, though most lane choices were made automatically. But I did not turn the wheel. All turning was performed by the system. (If you turn the wheel manually the system disengages). Other than parking, as I mentioned, though FSD did handle some navigation into and inside parking lots.


> At the current rate of improvement it will be quite good within a year and in two or three I could see it actually reaching the point where it could operate unsupervised.

That’s not a reasonable assumption. You can’t just extrapolate “software rate of improvement”, that’s not how it works.


The increase in the rate of improvement coincides with their finishing the switch to end-to-end machine learning. ML does have scaling laws, actually.

Tesla collects their own data, builds their own training clusters with both Nvidia hardware and their own custom hardware, and deploys their own custom inference hardware in the cars. There is no obstacle to them scaling up massively in all dimensions, which basically guarantees significant progress. Obviously you can disagree about whether that progress will be enough, but based on the evidence I see from using it, I think it will be.


So just a few more years of death and injury until they reach a finished product?


If this is what society has to pay to improve Tesla's product, then perhaps they should have to share the software with other car manufacturers too.

Otherwise every car brand will have to kill a whole heap of people too until they manage to make a FSD system.


Elon has said many times that they are willing to license FSD but nobody else has been interested so far. Clearly that will change if they reach their goals.

Also, "years of death and injury" is a bald-faced lie. NHTSA would have shut down FSD a long time ago if it were happening. The statistics Tesla has released to the public are lacking, it's true, but they cannot hide things from the NHTSA. FSD has been on the road for years and a billion miles and if it was overall significantly worse than normal driving (when supervised, of course) the NHTSA would know by now.

The current investigation is about performance under specific conditions, and it's possible that improvement is possible and necessary. But overall crash rates have not reflected any significant extra danger by public use of FSD even in its primitive and flawed form of earlier this year and before.


If the answer was yes, presumably there’s a tradeoff where that deal would be reasonable.


So far, the data points to it having far fewer crashes than a human alone. Tesla's data shows that, but third-party data seems to imply the same.


Tesla does not release the data required to substantiate such a claim. It simply doesn’t and you’re either lying or being lied to.



No, it releases enough data to actively mislead you (because there is no way Tesla's data people are unaware of these factors):

The report measures accidents in FSD mode. Qualifiers to FSD mode: the conditions, weather, road, location, traffic all have to meet a certain quality threshold before the system will be enabled (or not disable itself). Compare Sunnyvale on a clear spring day to Pittsburgh December nights.

There's no qualifier to the "comparison": all drivers, all conditions, all weather, all roads, all location, all traffic.

It's not remotely comparable, and Tesla's data people are not that stupid, so it's willfully misleading.

This report does not include fatalities. It also doesn't consider any incident where there was not airbag deployment to be an accident. Sounds potentially reasonable until you consider:

- first gen airbag systems were primitive: collision exceeds threshold, deploy. Currently, vehicle safety systems consider duration of impact, speeds, G-forces, amount of intrusion, angle of collision, and a multitude of other factors before deciding what, if any, systems to fire (seatbelt tensioners, airbags, etc.) So hit something at 30mph with the right variables? Tesla: "this is not an accident".

- Tesla also does not consider "incident was so catastrophic that airbags COULD NOT deploy" to be an accident, because "airbags didn't deploy". This umbrella also covers the egregious case of "systems failed to deploy for any reason, up to and including poor assembly-line quality control", which is likewise not counted as an accident.


That data is not an apples to apples comparison unless autopilot is used in exactly the same mix of conditions as human driving. Tesla doesn't share that in the report, but I'd bet it's not equivalent. I personally tend to turn on driving automation features (in my non-Tesla car) in easier conditions and drive myself when anything unusual or complicated is going on, and I'd bet most drivers of Teslas and otherwise do the same.

This is important because I'd bet similar data on the use of standard, non-adaptive cruise control would similarly show it's much safer than human drivers. But of course that would be because people use cruise control most in long-distance highway driving outside of congested areas, where you're least likely to have an accident.
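
To make the confounding concrete, here is a toy calculation with entirely invented numbers: a system that is worse than the human baseline in both easy and hard conditions can still post a better headline number if it is only engaged in easy conditions.

    # Hypothetical illustration of the confounder (all numbers invented):
    # a system that is WORSE than humans in every condition can still look
    # safer in aggregate if it is only engaged in the easy conditions.
    human = {"easy": (0.50, 2.0), "hard": (0.50, 10.0)}  # (share of miles, crashes per 1M miles)
    fsd   = {"easy": (0.95, 3.0), "hard": (0.05, 12.0)}  # worse in both strata

    def aggregate_rate(mix):
        return sum(share * rate for share, rate in mix.values())

    print(f"human aggregate: {aggregate_rate(human):.2f} crashes / 1M miles")  # 6.00
    print(f"fsd aggregate:   {aggregate_rate(fsd):.2f} crashes / 1M miles")    # 3.45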


Per the other comment: no, they don't. This data is not enough to evaluate its safety. This is enough data to mislead people who spend <30 seconds thinking about the question though, so I guess that's something (something == misdirection and dishonesty).

You've been lied to.


It disconnects in dangerous situations, so every 33 to 77 miles driven (depending on the version), versus roughly 400,000 miles for a human.


We also pay this price with every new human driver we train. Again and again.


You won't be able to bring logic to people with Elon derangement syndrome.


> the rate of improvement in the past two months has been much faster than before.

I suspect the free trials let Tesla collect orders of magnitude more data on events requiring human intervention. If each one is a learning event, it could exponentially improve things.

I tried it on a loaner car and thought it was pretty good.

One bit of feedback I would give Tesla: when you get some sort of FSD message on the center screen, make the text BIG and either make it linger longer or let you recall it.

For example, it took me a couple tries to read the message that gave instructions on how to give tesla feedback on why you intervened.

EDIT: look at this graph

https://electrek.co/wp-content/uploads/sites/3/2024/10/Scree...


> it will be quite good within a year

The regressions are getting worse. For the first release announcement it was only hitting regulatory hurdles, and now the entire software stack is broken? They should fire whoever is in charge and restore the state Elon tried to release a decade ago.


> At the current rate of improvement it will be quite good within a year

I'll believe it when I see it. I'm not sure "quite good" is the next step after "feels dangerous".


"Just round the corner" (2016)


Musk in 2016 (these are quotes, not paraphrases): "Self driving is a solved problem. We are just tuning the details."

Musk in 2021: "Right now our highest priority is working on solving the problem."


I have the same experience: 12.5 is insanely good. HN is full of people that don't want self driving to succeed for some reason. Fortunately, it's clear as day to some of us that Tesla's approach will work.


> HN is full of people that dont want self driving to succeed for some reason.

I would love for self-driving to succeed. I do long-ish car trips several times a year, and it would be wonderful if instead of driving, I could be watching a movie or working on something on my laptop.

I've tried Waymo a few times, and it feels like magic, and feels safe. Their record backs up that feeling. After everything I've seen and read and heard about Tesla, if I got into a Tesla with someone who uses FSD, I'd ask them to drive manually, and probably decline the ride entirely if they wouldn't honor my request.

> fortunately, it's clear as day to some of us that tesla approach will work

And based on my experience with Tesla FSD boosters, I expect you're basing that on feelings, not on any empirical evidence or actual understanding of the hardware or software.


Time will show I'm right and you're wrong.


I would love self-driving to succeed. I should be a Tesla fan, because I'm very much a fan of geekery and tech anywhere and everywhere.

But no. I want self-driving to succeed, and when it does (which I don't think is that soon, because the last 10% takes 90% of the time), I don't think Tesla or their approach will be the "winner".


Curiosity about why they're against it, and spelling out why you think it will work, would be more helpful.


It's evident to Tesla drivers using Full Self-Driving (FSD) that the technology is rapidly improving and will likely succeed. The key reason for this anticipated success is data: any reasonably intelligent observer recognizes that training exceptional deep neural networks requires vast amounts of data, and Tesla has accumulated more relevant data than any of its competitors. Tesla recently held a robotaxi event, explicitly informing investors of their plans to launch an autonomous competitor to Uber. While Elon Musk's timeline predictions and politics may be controversial, his ability to achieve results and attract top engineering and management talent is undeniable.


> It's evident to Tesla drivers using Full Self-Driving (FSD) that the technology is rapidly improving and will likely succeed

Sounds like Tesla drivers have been at the Kool-Aid then.

But to be a bit more serious, the problem isn't necessarily that people don't think it's improving (I do believe it is) or that they will likely succeed (I'm not sure where I stand on this). The problem is that every year Musk says the next year will be the Year of FSD. And every next year, it doesn't materialize. This is like the Boy Who Cried Wolf; Musk has zero credibility with me when it comes to predictions. And that loss of credibility affects my feeling as to whether he'll be successful at all.

On top of that, I'm not convinced that autonomous driving that only makes use of cameras will ever be reliably safer than human drivers.


I have consistently been critical of Musk for this over the many years it's been happening. Even right now, I don't believe FSD will be unsupervised next year like he just claimed. And yet, I can see the real progress and I am convinced that while it won't be next year, it could absolutely happen within two or three years.

One of these years, he is going to be right. And at that point, the fact that he was wrong for a long time won't diminish their achievement. As he likes to say, he specializes in transforming technology from "impossible" to "late".

> I'm not convinced that autonomous driving that only makes use of cameras will ever be reliably safer than human drivers.

Believing this means that you believe AIs will never match or surpass the human brain. Which I think is a much less common view today than it was a few years ago. Personally I think it is obviously wrong. And also I don't believe surpassing the human brain in every respect will be necessary to beat humans in driving safety. Unsupervised FSD will come before AGI.


Then why have we been just a year or two away from actual working self-driving, for the last 10 years? If I told my boss that my project would be done in a year, and then the following year said the same thing, and continued that for years, that’s not what “achieving results” means.


> and Tesla has accumulated more relevant data than any of its competitors.

Has it really? How much data is each car sending to Tesla HQ? Anybody actually know? That's a lot of cell phone bandwidth to pay for, and a lot of data to digest.

Vast amounts of data about routine driving are not all that useful, anyway. A "highlights reel" of interesting situations is probably more valuable for training. Waymo has shown some highlights reels like that, such as the one where someone in a powered wheelchair is chasing a duck in the middle of a residential street.


Anyone who believes Tesla beats Google because they are better at collecting and handling data can be safely ignored.


The argument wouldn't be "better at" but simply "more".

Sensor platforms deployed at scale, that you have the right to take data from, are difficult to replicate.


For most organizations data is a burden rather than a benefit. Tesla has never demonstrated that they can convert data to money, while that is the sole purpose of everything Google has built for decades.


The crux of the issue is that your interpretation of performance cannot be trusted. It is absolutely irrelevant.

Even a system that is 99% reliable will honestly feel very, very good to an individual operator, but would result in huge loss of life when scaled up.

Tesla can earn more trust by releasing the data necessary to evaluate the system's performance. The fact that they do not is far more informative than a bunch of commentators saying "hey it's better than it was last month!" for the last several years — even if it is true that it's getting better and even if it's true it's hypothetically possible to get to the finish line.
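
A back-of-the-envelope sketch of that point, with every number invented purely for illustration: a failure rate that feels negligible to any one driver can still add up to thousands of unrecovered incidents across a fleet.

    # Toy numbers (all assumed) for why individual experience is a poor guide.
    critical_failures_per_mile = 1 / 100_000  # one critical error per 100k miles (assumed)
    unrecovered_fraction = 0.01               # driver misses 1 in 100 of them (assumed)
    miles_per_driver_per_year = 12_000
    fleet_size = 2_000_000

    per_driver = critical_failures_per_mile * miles_per_driver_per_year
    fleet_incidents = per_driver * unrecovered_fraction * fleet_size
    print(f"critical errors seen per driver per year: {per_driver:.2f}")      # ~0.12 -> feels rare
    print(f"unrecovered incidents across fleet/year:  {fleet_incidents:.0f}") # ~2400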


Tesla's sensor suite does not support safe FSD.

It relies on inferred depth from a single point of view. This means that the depth/positioning info for the entire world is noisy.

From a safety-critical point of view it's also bollocks, because a single birdshit/smear/raindrop/oil patch can render the entire system inoperable. Does it degrade safely? Does it fuck.

> recognizes that training exceptional deep neural networks requires vast amounts of data,

You missed good data. Recording generic driver's journeys isn't going to yield good data, especially if the people who are driving aren't very good. You need to have a bunch of decent drivers doing specific scenarios.

Moreover, that data isn't easily generalisable to other sensor suites. Add another camera? Yeah nah, new model.

> Tesla recently held a robotaxi event, explicitly informing investors of their plans

When has Musk ever delivered on time?

> his ability to achieve results

Most of those results aren't that great. Tesla isn't growing anymore; it's reliant on state subsidies to be profitable. They still only ship 400k units a quarter, which is tiny compared to VW's 2.2 million.

> attract top engineering and management talent is undeniable

Most of the decent computer vision people are not at Tesla. Hardware-wise, their factories aren't fun places to be. He's a dick to work for, capricious and vindictive.


Completely agree. It’s very strange. But honestly it’s their loss. FSD is fantastic.


Very strange not wanting poorly controlled 4,000 lb steel cages driving around at 70 mph, stewarded by people who call "only had to stop it from killing me 4 times today!" a great success.


If this is the case, the calls for heavy regulation in this thread will lead to many more deaths than otherwise.


The thing that doesn't make sense is the numbers. If it is dangerous in your anecdotes, why don't the reported numbers show more accidents when FSD is on?

When I did the trial on my Tesla, I also noted these kinds of things and felt like I had to take control.

But at the end of the day, only the numbers matter.


> If it is dangerous in your anecdotes, why don't the reported numbers show more accidents when FSD is on?

Even if it is true that the data show that with FSD (not Autopilot) enabled, drivers are in fewer crashes, I would be worried about other confounding factors.

For instance, I would assume that drivers are more likely to engage FSD in situations of lower complexity (less traffic, little construction or other impediments, overall lesser traffic flow control complexity, etc.) I also believe that at least initially, Tesla only released FSD to drivers with high safety scores relative to their total driver base, another obvious confounding factor.

Happy to be proven wrong though if you have a link to a recent study that goes through all of this.


[flagged]


> Either the system causes less loss of life than a human driver or it doesn’t. The confounding factors don’t matter.

Confounding factors are what allow one to tell apart "the system causes less loss of life" from "the system causes more loss of life yet it is only enabled in situations where fewer lives are lost".


No, that's absolutely not how this works. Confounding factors are things that make your data not tell you what you are actually trying to understand. You can't just hand-wave that away, sorry.

Consider: what I expect is actually true based on the data is that Tesla FSD is as safe or safer than the average human driver, but only if the driver is paying attention and is ready to take over in case FSD does something unsafe, even if FSD doesn't warn the driver it needs to disengage.

That's not an autonomous driving system. Which is potentially fine, but the value prop of that system is low to me: I have to pay just as much attention as if I were driving manually, with the added problem that my attention is going to start to wander because the car is doing most of the work, and the longer the car successfully does most of the work, the more I'm going to unconsciously believe I can allow my attention to slip.

I do like current common ADAS features because they hit a good sweet spot: I still need to actively hold onto the wheel and handle initiating lane changes, turns, stopping and starting at traffic lights and stop signs, etc. I look at the ADAS as a sort of "backup" to my own driving, and not as what's primarily in control of the car. In contrast, Tesla FSD wants to be primarily in control of the car, but it's not trustworthy enough to do that without constant supervision.


Like I said, the time for studies is in the future. FSD is a product in development, and they know which stats they need to collect in order to track progress.

You’re arguing for something that: 1. Isn’t under contention and 2. Isn’t rooted in the real world.

You’re right FSD isn’t an autonomous driving system. It’s not meant to be, right now.


> You’re right FSD isn’t an autonomous driving system

Oh, weird. Are you not aware it's called Full SELF Driving?


Does the brand name matter? The description should tell you all you need to know when making a purchase decision.


Yes, a company's marketing is absolutely part of the representations the company makes about a product they sell in the context of a product liability lawsuit.


We’re just waiting for that lawsuit to happen then. Are you a betting man? I’d be happy to have a little wager that in 3 years time from now, Tesla hasn’t faced legal problems for their product naming.


Legal problems for their product naming? They don't get sued for their product name; they get sued for negligently selling a defective product, for which their advertising is just evidence. I'm not going to bet against someone who is making up scenarios that don't even exist. It's a moot issue anyway; Tesla has already been sued many times for their defective products.

Here's one example: https://www.reuters.com/legal/tesla-must-face-vehicle-owners...

> Are you a betting man?

A fool and his money are easily separated.


There is an easy way to know what is really behind the numbers: look who is paying in case of accident.

You have a Mercedes, Mercedes takes responsibility.

You have a Tesla, you take the responsibility.

Says a lot.


Mercedes had the insight that if no one is able to actually use the system then it can't cause any crashes.

Technically, that is the easiest way to get a perfect safety record and journalists will seemingly just go along with the charade.


You have a Mercedes, and you have a system that works virtually nowhere.


Better that way than "Oh, it tried to run a red light, but otherwise it's great."


"Oh we tried to build it but no one bought it! So we gave up." - Mercedes before Tesla.

Perhaps FSD isn't ready for city streets yet, but it's great on the highways, and I'd 1000x prefer we make progress rather than settle for the status quo garbage that the legacy makers put out. Also, human drivers are the most dangerous by far; we need to make progress to eventually phase them out.


Two-ton blocks of metal going 80 mph next to me on the highway are not where I want people to go "fuck it, let's just do it" with their new tech. Human drivers might be dangerous, but adding more danger and unpredictability on top just because we can skip a few steps in the engineering process is crazy.

Maybe you have a deathwish, but I definitely don't. Your choices affect other humans in traffic.


It sounds like you are the one with a deathwish, because objectively by the numbers Autopilot on the highway has greatly reduced death. So you are literally advocating for more death.

You have two imperfect systems for highway driving: Autopilot with human oversight, and humans. The first has far far less death. Yet you are choosing the second.


While I don't disagree with your point in general, it should be noted that there is more to taking responsibility than just paying. Even if Mercedes Drive Pilot was enabled, anything that involves court appearances and criminal liability is still your problem if you're in the driver's seat.


Because it is bad enough that people really do supervise it. I see people who say that wouldn't happen because the drivers become complacent.

Maybe that could be a problem with future versions, but I don't see it happening with 12.3.x. I've also heard that driver attention monitoring is pretty good in the later versions, but I have no first hand experience yet.


Very good point. The product that requires supervision and tells the user to keep their hands on the wheel every 10 seconds is not good enough to be used unsupervised.

I wonder how things are inside your head. Are you ignorant or affected by some strong bias?


Yeah, it definitely isn't good enough to be used unsupervised. TBH, they've switched to eye and head tracking as the primary mechanism of attention monitoring now. It seems to work pretty well, now that I've had a chance to try it.

I'm not quite sure what you meant by your second paragraph, but I'm sure I have my blind spots and biases. I do have direct experience with various versions of 12.x though (12.3 and now 12.5).


Agree that only the numbers matter, but only if the numbers are comprehensive and useful.

How often does an autonomous driving system get the driver into a dicey situation, but the driver notices the bad behavior, takes control, and avoids a crash? I don't think we have publicly-available data on that at all.

You admit that you ran into some of these sorts of situations during your trial. Those situations are unacceptable. An autonomous driving system should be safer than a human driver, and should not make mistakes that a human driver would not make.

Despite all the YouTube videos out there of people doing unsafe things with Tesla FSD, I expect that most people that use it are pretty responsible, are paying attention, and are ready to take over if they notice FSD doing something wrong. But if people need to do that, it's not a safe, successful autonomous driving system. Safety means everyone can watch TV, mess around on their phone, or even take a nap, and we still end up with a lower crash rate than with human drivers.

The numbers that are available can't tell us if that would be the case. My belief is that we're absolutely not there.


Is Tesla required to report system failures or the vehicle damaging itself? How do we know they're not optimizing for the benchmark (what they're legally required to report)?


If the question is "was FSD activated at the time of the accident: yes/no", they can legally claim no, for example if FSD happens to disconnect half a second before a dangerous situation (e.g. glare obstructing the cameras), which may coincide exactly with the time of some accidents.


> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.

Scroll down to Methodology at https://www.tesla.com/VehicleSafetyReport


This is for Autopilot, which is the car-following system on highways. If you are in cruise control and staying in your lane, not much is supposed to happen.

The FSD numbers are much more hidden.

The general accident rate is about 1 per 400,000 miles driven.

FSD has one “critical disengagement” (i.e. a disengagement that would likely have ended in an accident if the human or automatic safety braking hadn't intervened) every 33 miles driven.

That means that to reach unsupervised operation at human-level quality, they would need to improve it roughly 10,000-fold in a few months. Not saying it is impossible, just highly optimistic. In 10 years we will be there, but in 2 months sounds a bit overpromising.
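As a rough back-of-the-envelope check using only the two rates quoted above (both of which are themselves approximate):

    accident_interval_miles = 400_000     # quoted human accident rate: ~1 per 400,000 miles
    critical_disengagement_miles = 33     # quoted FSD critical-disengagement rate: ~1 per 33 miles

    improvement_factor = accident_interval_miles / critical_disengagement_miles
    print(round(improvement_factor))      # ~12,000x, i.e. roughly four orders of magnitude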


All manufacturers have for some time been required by regulators to report any accident where an autonomous or partially autonomous system was active within 30 seconds of an accident.


My question is better rephrased as "what is legally considered an accident that needs to be reported?" If the car scrapes a barricade or curbs it hard but the airbags don't deploy and the car doesn't sense the damage, clearly they don't. There's a wide spectrum of issues up to the point where someone is injured or another car is damaged.


And not to move the goalposts, but I think we should also be tracking any time the human driver feels they need to take control because the autonomous system did something they didn't believe was safe.

That's not a crash (fortunately!), but it is a failure of the autonomous system.

This is hard to track, though, of course: people might take over control for reasons unrelated to safety, or people may misinterpret something that's safe as unsafe. So you can't just track this from a simple "human driver took control".


The numbers collected by the NHTSA and insurance companies do show that FSD is dangerous... that's why the NHTSA started investigating, and it's why most insurance companies either won't insure Tesla vehicles or charge significantly higher rates.

Also, Tesla is known to disable self-driving features right before collisions to give the appearance of driver fault.

And the coup de grace: if Tesla's own data showed that FSD was actually safer, they'd be shouting it from the moon, using that data to get self-driving permits in CA, and offering to assume liability if FSD actually caused an accident (like Mercedes does with its self driving system).


What numbers? Who’s measuring? What are they measuring?


Maybe other human drivers are reacting quickly and avoiding potential accidents from dangerous computer driving? That would be ironic, but I'm sure it's possible in some situations.


You can measure risks without having to witness disaster.


> The thing that doesn't make sense is the numbers.

Oh? Who are presenting the numbers?

Is a crash that fails to trigger the airbags even counted as a crash?

What about the car turning off FSD right before a crash?

How about adjusting for factors such as age of driver and the type of miles driven?

The numbers don't make sense because they're not good comparisons and are made to make Tesla look good.


Are there even transparently reported numbers available?

For whatever does exist, it is also easy to imagine how the numbers could be misleading. For instance, I've disengaged FSD when I noticed I was about to be in an accident. If I couldn't recover in time, the accident would not have happened while FSD was on and, depending on the metric, would not be reported as an FSD-induced accident.


> But at the end of the day, only the numbers matter.

Are these the numbers reported by Tesla, or by some third party?


AIUI the numbers are for accidents where FSD is in control. Which means if it does a turn into oncoming traffic and the driver yanks the wheel or slams the brakes 500ms before collision, it's not considered a crash during FSD.


That is not correct. Tesla counts any accident within 5 seconds of Autopilot/FSD turning off as the system being involved. Regulators extend that period to 30 seconds, and Tesla must comply with that when reporting to them.


How about when it turns into oncoming traffic, the driver yanks the wheel, manages to get back on track, and avoids a crash? Do we know how often things like that happen? Because that's also a failure of the system, and that should affect how reliable and safe we rate these things. I expect we don't have data on that.

Also how about: it turns into oncoming traffic, but there isn't much oncoming traffic, and that traffic swerves to get out of the way, before FSD realizes what it's done and pulls back into the correct lane. We certainly don't have data on that.


Several people in this thread have been saying this or similar. It's incorrect, from Tesla:

"To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact"

https://www.tesla.com/en_gb/VehicleSafetyReport

Situations which inevitably cause a crash more than 5 seconds later seem like they would be extremely rare.


This is Autopilot, not FSD, which is an entirely different product.


> Lots of people are asking how good the self driving has to be before we tolerate it.

When I feel as safe as I do sitting in the back of a Waymo.


> I'm grateful to be getting a car from another manufacturer this year.

I'm curious, what is the alternative that you are considering? I've been delaying an upgrade to electric for some time. And now, a car manufacturer that is contributing to the making of another Jan 6th, 2021 is not an option, in my opinion.


I also went into car shopping with that opinion, but the options are bleak in terms of other carmakers' software. For some reason, if you want basic software features of a Tesla, the other carmakers want an extra $20k+ (and still don't have some).

A big example: why do the other carmakers not yet offer camera recording in their cars? They all have cameras all around, but only Tesla makes the footage available to you in case you want it. Bizarre. And then they want to charge you an extra $500+ for one dash cam on the windshield.

I even had Carplay/Android Auto as a basic requirement, but I was willing to forgo that after trying out the other brands. And not having to spend hours at a dealership doing paperwork was amazing. Literally bought the car on my phone and was out the door within 15 minutes on the day of my appointment.


Rivian also allows recording drives to an SSD. They also just released a feature where you can view the cameras while it's parked. I'm kinda surprised other manufacturers aren't allowing that.


Rivians start at $30k more than Teslas, and while they may be nice, they don’t have the track record yet that Tesla does, and there is a risk the company goes bust since it is currently losing a lot of money.


I've got a deposit on the Dodge Charger Daytona EV


> I'd call myself a fairly aggressive driver

This is puzzling. It’s as if it was said without apology. How about not endangering others on the road with manual driving before trying out self driving?


It's not just about relative safety compared to all human driving.

We all know that some humans are sometimes terrible drivers!

We also know what that looks like: Driving too fast or slow relative to surroundings. Quickly turning every once in a while to stay in their lane. Aggressively weaving through traffic. Going through an intersection without spending the time to actually look for pedestrians. The list goes on..

Bad human driving can be seen. Bad automated driving is invisible. Do you think the people who were about to be hit by a Tesla even realized that was the case? I sincerely doubt it.


> Bad automated driving is invisible.

I'm literally saying that it is visible, to me, the passenger. And for reasons that aren't just bad vibes. If I'm in an Uber and I feel unsafe, I'll report the driver. Why would I pay for my car to do that to me?


GP means that the signs aren't obvious to other drivers. We generally underestimate how important psychological modelling is for communication, because it's transparent to most of us under most circumstances, but AI systems have very different psychology to humans. It is easier to interpret the body language of a fox than a self-driving car.


We are taking about the same thing: unpredictability. If you and everyone else can't predict what your car will do, then that seems objectively unsafe to me. It also sounds like we agree with each other.


Was this the last version, or the version released today?

I’ve been pretty skeptical of FSD and didn’t use the last version much. Today I used the latest test version, enabled yesterday, and rode around SF, to and from GGP, and it did really well.

Waymo well? Almost. But whereas I haven’t ridden Waymo on the highway yet, FSD got me from Hunters Point to the east bay with no disruptions.

The biggest improvement I noticed was how it optimizes progress on the highway: it'll change lanes, nicely, when the lane you're in is slower than the surrounding lanes. And when you're in the fast/passing lane it'll return to the next closest lane.

Definitely better than the last release.


I'm clearly not using the FSD today because I refused to complete my free trial of it a few months ago. The post of mine that you're responding to doesn't mention my troubles with Autopilot, which I highly doubt are addressed by today's update (see my other comment for a list of problems). They need to really, really prove to me that Autopilot is working reliably before I'd even consider accepting another free trial of FSD, which I doubt they'd do anyway.


> Until I ride in one and feel safe, I can't have any faith that this is a reasonable system

This is probably the worst way to evaluate self-driving for society though, right?


Why would I be supportive of a system that has actively scared me for objectively scary reasons? Even if it's the worst reason, it's not a bad reason.


How you feel while riding isn’t an objective thing. It’s entirely subjective. You and I can sit side by side and feel differently about the same experience.

I don’t see how this is in any way objective besides the fact that you want it to be objective.

You can support things for society that scare you and feel unsafe because you can admit your feelings are subjective and the thing is actually safer than it feels to you personally.


I also did write about times when the car would have damaged itself or likely caused an accident, and those are indeed objective problems.


> It failed with a cryptic system error while driving

I’ll give you this one.

> In my opinion, the default setting accelerates way too aggressively. I'd call myself a fairly aggressive driver and it is too aggressive for my taste

Subjective.

> It started making a left turn far too early that would have scraped the left side of the car on a sign. I had to manually intervene.

Since you intervened and don’t know what would’ve happened, subjective.

> It tried to make way too many right turns on red when it wasn't safe to. It would creep into the road, almost into the path of oncoming vehicles

Subjective.

> It would switch lanes to go faster on the highway, but then missed an exit on at least one occasion because it couldn't make it back into the right lane in time. Stupid.

Objective.

You’ve got some fair complaints but the idea that feeling safe is what’s needed remains subjective.


> I'm grateful to be getting a car from another manufacturer this year.

I have no illusions about Tesla's ability to deliver an unsupervised self-driving car any time soon. However, as far as I understand, their autosteer system, in spite of all its flaws, is still the best out there.

Do you have any reason to believe that there actually is something better?


Autopilot has not been good. I have a cabin four hours from my home and I've used autopilot for long stretches on the highway. Some of the problems:

- Certain exits are not detected as such and the car violently veers right before returning to the lane. I simply can't believe they don't have telemetry to remedy this.

- Sometimes the GPS becomes miscalibrated. This makes the car think I'm taking an exit when I'm not, causing the car to abruptly reduce its speed to the speed of the ramp. It does not readjust.

- It frequently slows for "emergency lights" that don't exist.

- If traffic comes to a complete stop, the car accelerates way too hard and brakes hard when the car in front moves any substantial amount.

At this point, I'd rather have something less good than something which is an active danger. For all intents and purposes, my Tesla doesn't have reliable cruise control, period.

Beyond that, though, I simply don't have trust in Tesla software. I've encountered so many problems at this point that I can't possibly expect them to deliver a product that works reliably at any point in the future. What reason do I have to believe things will magically improve?


I'll add that it randomly brakes hard on the interstate because it thinks the speed limit drops to 45. There aren't speed limit signs anywhere nearby on different roads that it could be mistakenly reading either.


I noticed that this happens when the triangle on the map is slightly offset from the road, which I've attributed to miscalibrated GPS. It happens consistently when I'm in the right lane and pass an exit when the triangle is ever so slightly misaligned.


I believe they're fine with losing auto steering capabilities, based on the tone of their comment.


My experience has been directionally the same as yours but not of the same magnitude. There's a lot of room for improvement, but it's still very good. I'm in a slightly suburban setting... I suspect you're in a far denser location than me, in which case your experience may be different.


Their irresponsible behavior says enough. Even if they fix all their technical issues, they are not driven by a safety culture.

The first question that comes to their minds is not "how can we prevent this accident?" but it's "how can we further inflate this bubble?"


Same here, but I tried the new 12.5.4.1 yesterday and the difference is night and day. It was near flawless except for some unexplained slowdowns, and you don't even need to hold the steering wheel anymore (it detects attention by looking at your face). They clearly are improving rapidly.


How many miles have you driven since the update yesterday? OP described a half dozen different failure modes in a variety of situations that seem to indicate quite extensive testing before they turned it off. How far did you drive the new version and in what circumstances?


I recently took a 3000 mile road trip on 12.5.4.1 on a mix of interstate, country roads, and city streets and there were only a small handful of instances where I felt like FSD completely failed. It's certainly not perfect, but I have never had the same failures that the original thread poster had.


> It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

I've been on the receiving end of this with the offender being a Tesla so many times that I figured it must be FSD.


Probably autopilot, honestly.


I'm not disagreeing with your experience. But if it's as bad as you say, why aren't we seeing tens or hundreds of FSD fatalities per day or at least per week? Even if only 1000 people globally have it on, these issues sound like we should be seeing tens per week.


Perhaps having more accidents doesn't mean more fatal accidents.


I would not even try. The reason is simple: there is no real understanding in any of the current self-proclaimed autonomous driving approaches, no matter how well they are marketed.


> right turns on red

This is an idiosyncrasy of the US (maybe other places too?) and I wonder if it's easier to do self-driving at junctions in countries without this rule.


Only some states allow turn on red, and it's also often overridden by a road sign that forbids. But for me the ultimate test of AGI is four-or-perhaps-three-or-perhaps-two way stop intersections. You have to know whether the other drivers have a stop sign or not in order to understand how to proceed, and you can't see that information. As an immigrant to the US this baffles me, but my US-native family members shrug like there's some telepathy way to know. There's also a rule that you yield to vehicles on your right at uncontrolled intersections (if you can determine that it is uncontrolled...) that almost no drivers here seem to have heard of. You have to eye-ball the other driver to determine whether or not they look like they remember road rules. Not sure how a Tesla will do that.


If it's all-way stop there will often be a small placard below the stop sign. If there's no placard there then (usually) cross traffic doesn't stop. Sometimes there's a placard that says "two-way" stop or one that says "cross traffic does not stop", but that's not as common in my experience.


This would be more helpful with a date. Was this in 2020 or 2024? I've been told FSD had a complete rearchitecting.


It was a few months ago


> After the system error, I lost all trust in FSD from Tesla.

May I ask how this initial trust was established?


The numbers that are reported aren't abysmal, and people have anecdotally said good things. I was willing to give it a try while being hyper vigilant.


That sucks that you had that negative experience. I’ve driven thousands of miles in FSD and love it. Could not imagine going back. I rarely need to intervene and when I do it’s not because the car did something dangerous. There are just times I’d rather take over due to cyclists, road construction, etc.


These "works for me!" comments are exhausting. Nobody believes you "rarely intervene", otherwise Tesla themselves would be promoting the heck out of the technology.

Bring on the videos of you in the passenger seat on FSD for any amount of time.


Thank god someone else said it.

I want some of these Tesla bulls to PROVE that they are actually "not intervening". I think the ones who claim they go hours without intervening are liars.


> tesla bulls

I’m not a Tesla financial speculator.

Consider the effort it would take a normal individual to prove how well their car's FSD works for them. Now consider how little somebody with no investment in the technology stands to benefit from that level of effort. That's ridiculous. If you're curious about the technology, go spend time with it. That's a better way to gather data. And then you don't have to troll forums calling people liars.


I did! I own a comma! I've put hundreds of hours into using Tesla's FSD. I know more about it than most in this thread, and I repeat: folks who claim it's as good as they say it is are liars. Yes, even with the 12.6 firmware. Yes, even with the 13.0 firmware that's about to come out.


It’s the counter-point to the “it doesn’t work for me” posts. Are you okay with those ones?


I think the problem with the "it works for me" type posts is that most people reading them think the person writing it is trying to refute what the person with the problem is saying. As in, "it works for me, so the problem must be with you, not the car".

I will refrain from commenting on whether or not that's a fair assumption to make, but I think that's where the frustration comes from.

I think when people make "WFM" posts, it would go a long way to acknowledge that the person who had a problem really did have a problem, even if implicitly.

"That's a bummer; I've driven thousands of miles using FSD, and I've felt safe and have never had to intervene. I wonder what's different about our travel that's given us such different experiences."

That kind of thing would be a lot more palatable, I think, even if you might think it's silly/tiring/whatever to have to do that every time.


I can see it. How FSD performs depends on the environment. In some places it's great, in others I take over relatively frequently, although it's usually because it's being annoying, not because it poses any risk.

Being in the passenger seat is still off limits for obvious reasons.


I don't believe this at all. I don't own one, but I know about a half dozen people who got suckered into paying for FSD. None of them use it, and 3 of them have said it's put them in dangerous situations.

I've ridden in an X, S and Y with it on. Talk about vomit-inducing when letting it drive during "city" driving. I don't doubt it's OK for highway driving, but Ford's Blue Cruise and GM's Super Cruise are better there.


You can believe what you want to believe. It works fantastic for me whether you believe it or not.

I do wonder if people who have wildly different experiences than I have are living in a part of the country that, for one reason or another, Tesla FSD does not yet do as well in.


I think GP is going too far in calling you a liar, but I think for the most part your FSD praise is just kinda... unimportant and irrelevant. GP's aggressive attitude notwithstanding, I think most reasonable people will agree that FSD handles a lot of situations really well, and believe that some people have travel routes where FSD always handles things well.

But ok, great, so what? If that wasn't the case, FSD would be an unmitigated disaster with a body count in the tens of thousands. So in a comment thread about someone talking about the problems and unsafe behavior they've seen, a "well it works for me" reply is just annoying noise, and doesn't really add anything to the discussion.


Open discussion and sharing different experiences with technology is “annoying noise” to you but not to me. Slamming technology that works great for others should receive no counter points and become an echo chamber or what?


I'm glad for you, I guess.

I'll say the autopark was kind of neat, but parking has never been something I have struggled with.


I hope I never have to share the road with you. Oh wait, I won't; this craziness is illegal here.


If you were a poorer driver who did these things you wouldn't find these faults so damning because it'd only be say 10% dumber than you rather than 40% or whatever (just making up those numbers).


That just implies FSD is as good as a bad driver, which isn't really an endorsement.


I agree it's not an endorsement but we allow chronically bad drivers on the road as long as they're legally bad and not illegally bad.


We do that for reasons of practicality: the US is built around cars. If we were to revoke the licenses of the 20% worst drivers, most of those people would be unable to get to work and end up homeless.

So we accept that there are some bad drivers on the road because the alternative would be cruel.

But we don't have to accept bad software drivers.


Oh, I'm well aware how things work.

But we should look down on them and speak poorly of them same as we look down on and speak poorly of everyone else who's discourteous in public spaces.


I don't think you're supposed to merge left when people are merging onto the highway into your lane; you have the right of way. I find that even with the right of way, many people merging aren't paying attention, but I deal with that by slightly speeding up (so they can see me in front of them).


You don't have a right of way over a slow moving vehicle that merged ahead of you. Most ramps are not long enough to allow merging traffic to accelerate to highway speeds before merging, so many drivers free up the right-most lane for this purpose (by merging left)


If you can safely move left to make room for merging traffic, you should. It’s considerate and reduces the chances of an accident.


Since a number of people are giving pushback, can you point to any (California-oriented) driving instructions consistent with this? I'm not seeing any. I see people saying "it's courteous", but when I'm driving I'm managing hundreds of variables, and changing lanes is often risky, given motorcycles lane-splitting at high speed (quite common).


It's not just courteous, it's self-serving; AFAIK it's a self-emergent phenomenon. If you're driving at 65 mph and anticipate a slowdown in your lane due to merging traffic, do you stay in your lane and slow down to 40 mph, or do you change lanes (if it's safe to do so) and maintain your speed?

Texas highways allow for much higher merging speeds at the cost of far larger (in land area) 5-level interchanges, rather than the 35 mph offramps and onramps common in California.

Any defensive driving course (which falls under instruction, IMO) states that you don't always have to exercise your right of way, and indeed it may be unsafe to do so in some circumstances. Anticipating the actions of other drivers around you and avoiding potentially dangerous situations are the other aspects of being a defensive driver, and those concepts are consistent with freeing up the lane that slower-moving vehicles are merging onto when it's safe to do so.


Definitely not California but literally the first part of traffic law in Germany says that caution and consideration are required from all partaking in traffic.

Germans are not known for poor driving.


Right, but the "consideration" here is the person merging onto the highway actually paying attention and adjusting, rather than pointedly not even looking (a very common merging behavior where I live). Changing lanes isn't without risk even on a clear day with good visibility. Seems like my suggestion of slowing down or speeding up makes perfect sense because it's less risky overall, and is still considerate.

Note that I personally do change lanes at times when it's safe, convenient, I am experienced with the intersection, and the merging driver is being especially unaware.


Consideration is also making space for a slower car wanting to merge, and Germans do it.


Most ramps are more than long enough to accelerate close enough to traffic speed if one wants to, especially in most modern vehicles.


Unless the driver in front of you didn't.


Just because you have the right of way doesn't mean the correct thing to do is to remain in the lane. If remaining in your lane is likely to make someone else do something reckless, you should have been proactive. Not legally, for the sake of being a good driver.


Can you point to some online documentation that recommends changing lanes in preference to speeding up when a person is merging at too slow a speed? What I'm doing is following CHP guidance in this post: https://www.facebook.com/chpmarin/posts/lets-talk-about-merg... """Finally, if you are the vehicle already traveling in the slow lane, show some common courtesy and do what you can to create a space for the person by slowing down a bit or speeding up if it is safer. """

(you probably misinterpreted what I said. I do sometimes change lanes, even well in advance of a merge I know is prone to problems, if that's the safest and most convenient. What I am saying is the guidance I have read indicates that staying in the same lane is generally safer than changing lanes, and speeding up into an empty space is better for everybody than slowing down, especially because many people who are merging will keep slowing down more and more when the highway driver slows for them)


I read all this thread and all I can say is not everything in the world is written down somewhere


> recommends changing lanes in preference to speeding up when a person is merging at too slow a speed

It doesn't matter, Tesla does neither. It always does the worst possible non-malicious behavior.


"Tesla says on its website its FSD software in on-road vehicles requires active driver supervision and does not make vehicles autonomous."

Despite it being called "Full Self-Driving."

Tesla should be sued out of existence.


It didn't always say that. It used to be more misleading, and claim that the cars have "Full Self Driving Hardware", with an exercise for the reader to deduce that it didn't come with "Full Self Driving Software" too.


And Musk doesn't want to "get nuanced" about the hardware:

https://electrek.co/2024/10/15/tesla-needs-to-come-clean-abo...


Our non-Tesla has steering assist. In my 500 miles of driving before I found the buried setting that let me completely disable it, the active safety systems never made it more than 10-20 miles without attempting to actively steer the car left-of-center or into another vehicle, even when it was "turned off" via the steering wheel controls.

When it was turned on according to the dashboard UI, things were even worse. It'd disengage less than every ten miles. However, there wasn't an alarm when it disengaged, just a tiny gray blinking icon on the dash. A second or so after the blinking, it'd beep once and then pull crap like attempt a sharp left on an exit ramp that curved to the right.

I can't imagine this model kills fewer people per mile than Tesla FSD.

I think there should be a recall, but it should hit pretty much all manufacturers shipping stuff in this space.


I'm not sure how any of this is related to the article. Does this non-Tesla manufacturer claim that their steering assist is "full self driving"?

If you believe their steering assist kills more people than Tesla FSD then you're welcome, encouraged even, to file a report with the NHTSA here [1].

[1] https://www.nhtsa.gov/report-a-safety-problem


I've had a similar experience with a Hyundai with steering assist. It would get confused by messed-up road markings all the time. Meanwhile, it had no problem climbing an unmarked road curb. And it would constantly try to nudge the steering wheel, meaning I had to put force into holding it in place all the time, which was extra fatigue.

Oh and it was on by default, meaning I had to disable it every time I turned the car on.


What model year? I'm guessing it's an older one?

My Hyundai is a 2021 and I have to turn on the steering assist every time which I find annoying. My guess is that you had an earlier model where the steering assist was more liability than asset.

It's understandable that earlier versions of this kind of thing wouldn't function as well, but it is very strange that they would have it on by default.


>What model year? I'm guessing it's an older one?

Not 100% sure which year since it wasn't mine; I think around 2018, plus or minus 2 years. It was good at following brightly painted white lines and nothing else. I didn't mind the beeping and the vibration when I crossed a line, but it wanted to actively steer the wheel, which was infuriating. I wouldn't mind it if it were just a suggestion.


My Hyundai has a similar feature and it's excellent. I don't think you should be painting with such a broad brush.


If what you say is true, name the car model and file a report with the NHTSA.


I believe it's called "Full Self Driving (Supervised)"


The correct name would be "Not Self Driving". Or, at least, Partial Self Driving.


The part in parentheses has only recently been added.


Prior to that, FSD was labeled ‘Full Self Driving (Beta)’ and enabling it triggered a modal that required two confirmations explaining that the human driver must always pay attention and is ultimately responsible for the vehicle. The feature also had/has active driver monitoring (via both vision and steering-torque sensors) that would disengage FSD if the driver ignored the loud audible alarm to “Pay attention”. Since changing the label to ‘(Supervised)’, the audible nag is significantly reduced.


The problem is not so much the lack of disclaimers, it is the advertising. Tesla is asking for something like $15,000 for access to this "beta", and you don't get two modal dialogs before you sign up for that.

This is called "false advertising", and even worse: recognizing revenue on a feature you are not delivering (a beta is not a delivered feature) is not GAAP.


> The problem is not so much the lack of disclaimers, it is the advertising.

I agree; the entire advertising industry is well known to be misleading and/or dishonest; it’s annoying and often hurts consumers.

> Tesla is asking for something like 15 000 dollars for access to this "beta",

The cost of FSD is $8000 for the life of the vehicle, $5000 for 3 years (includes free supercharging and premium connectivity), or as a no-contract, a la carte option for $99/month—which IMO is pretty cheap if you just want to try it out or if you only want/need it during special occasions.

> and you don't get two modal dialogs before you sign up for that.

Depends on how you purchase FSD; if done from the vehicle, you get the dialogs. If done at the time of vehicle purchase you get plenty of disclaimers and documentation about its capabilities—though not as obviously prominent and scary as modal dialogs. I haven’t witnessed a subscription purchase so I’m not sure if the dialogs are present during the subscription process; perhaps that’s where the scam lies but I doubt it.

> This is called "false advertising", and even worse - recognizing revenue on a feature you are not delivering (a beta is not a delivered feature) is not GAAP.

Perhaps in your opinion but, well… that’s not how the world works, nor the law. For decades orgs have been delivering revenue-generating products, marketed and labeled as “beta”; a product being incomplete doesn’t mean it doesn’t have value. Heck, most of the software we use is ever changing and often considered a beta release—but they still (usually) offer value. Remember, FSD is software, not hardware; I suspect folks are uncomfortable with what appears to be the new paradigm of cars that change their capabilities over time even while they demand regular new capabilities in other products like their phone or computer.

For what it’s worth, here’s the FSD disclaimer currently present on the Tesla website:

“Full Self-Driving (Supervised)

Your car will be able to drive itself almost anywhere with minimal driver intervention.

Currently enabled features require active driver supervision and do not make the vehicle autonomous. The activation and use of these features are dependent on development and regulatory approval, which may take longer in some jurisdictions.”

Seems pretty clear to me.


Do they have warnings as big as "full self driving" texts in advertisements? And if it is NOT actually full self driving, why call it full self driving?

That's just false advertising. You can't get around that.

I can't believe our current laws let Tesla get away like that.


> Do they have warnings as big as "full self driving" texts in advertisements?

Tesla doesn’t advertise; they rely entirely on word of mouth, storefronts (both online and physical), and publicity/news coverage. But the answer to your question is that, on their website at least, the text disclaimers for the FSD option are the same sizes as the disclaimers for other options like the Tow Package (the disclaimer for which says “Tow up to 3,500 lbs with a class II steel tow bar”) or the wheels (the disclaimer for which shows range estimates depending on the chosen wheel diameter).

> And if it is NOT actually full self driving, why call it full self driving?

To me, this is like asking why ISPs offer “Unlimited Data” plans that have very strict limits on what constitutes “unlimited”.

It’s important to remember that the phrase “Full Self Driving” has no legal or industry-standard definition. For the sake of this discussion, and as far as I’m aware, the FSD product has never been available for purchase or subscription without a parenthetical designation, e.g. “Full Self Driving (Beta)” or “Full Self Driving (Supervised)” which, to me, suggests Tesla is acting in good faith—well, at least as far as good-faith acts exist in our marketing-driven culture. It’s only been within the last year or so that Musk has talked about “Full Self Driving (Unsupervised)” which is, I believe, the designation for what will ultimately become the Level 4/5 autonomy product.

FSD is currently classified as Level 2 autonomy by SAE. While a Level 3 autonomy product is available in the US, it is:

- only available in the Mercedes Drive Pilot product,

- only available in CA or NV,

- limited to 40mph on pre-approved roads,

- only available during daylight/good weather conditions.

The difference between the real-world capabilities of Drive Pilot and FSD is quite stark; while FSD is not officially classified as Level 3 autonomy, it’s dramatically closer to what I believe most consumers would consider “autonomous driving” than is the Mercedes product. I only got to try it for a few days so it wasn’t a detailed comparison but my own experience with Mercedes product was disappointing when compared to Tesla’s product. IOW, while perhaps not semantically accurate, the product name “Full Self Driving” is far more accurate than any other available product offering.

> That's just false advertising. You can't get around that.

Product names are very rarely subject to scrutiny for being “false advertising”. Again, the phrase “Full Self Driving” has no legal or official definition. Should it have a legal definition? I don’t know, but I do know that the “Unlimited Data” plans from carriers and ISPs are widely understood not to be “unlimited”; I don’t love those kinds of product naming schemes but I’m not sure how the FSD case is any different from a legal perspective.

> I can't believe our current laws let Tesla get away like that.

Get away with what? IME, Tesla (and pretty much every org on the planet) carefully skirt the boundaries of the law. Sometimes, if they cross a legal boundary, they’ll become subject to investigation and possibly consequences but, in the case of FSD, the court has already dismissed the lawsuit claiming Tesla lied about its capabilities. They “get away like that” by not breaking the law. Until laws change, orgs will continue to be incredibly and often overly optimistic when discussing their products.


And is, well, entirely contradictory. An absolute absurdity; what happens when the irresistible force of the legal department meets the immovable object of marketing.


“Sixty percent of the time, it works every time”


[flagged]


Magic isn’t real. No one should be confused that the eraser isn’t magic.

Fully self driving cars are real. Just not made by Tesla.


What's the verdict on X-Ray Specs?


I think everyone just gave up and went to pornhub


Have you used one? They basically do what they say, at least, which is erase things.


Nobody who buys a magic eraser thinks it’s literally a magical object or in any way utilizes magic. It’s not comparable.


Just like nobody who buys FSD actually thinks it's really self driving.


Surely you can understand why “magic” and “fully self driving” have different levels of plausibility?

In 2024 if you tell me a car is “fully self driving” it’s pretty reasonable of me to think it’s a fully self driving car given the current state of vehicle technology. They didn’t say “magic steering” or something clearly ridiculous to take at face value. It sounds like what it should be able to do. Especially with “full” in the name. Just call it “assisted driving” or hell “self driving.” The inclusion of “fully” makes this impossible to debate in good faith.


Even if it didn't, at least it won't kill you or anyone around you when you use it.


[flagged]


Nuclear power adoption is the largest force to combat climate change.


Historically, hydro has prevented far more CO2 than nuclear by a wide margin. https://ourworldindata.org/grapher/electricity-prod-source-s...

Looking forward Nuclear isn’t moving the needle. Solar grew more in 2023 alone than nuclear has grown since 1995. Worse nuclear can’t ramp up significantly in the next decade simply due to construction bottlenecks. 40 years ago nuclear could have played a larger role, but we wasted that opportunity.

It’s been helpful, but suggesting it’s going to play a larger role anytime soon is seriously wishful thinking at this point.


> Historically, hydro has

done harm to the ecosystems where they are installed. This is quite often overlooked and brushed aside.

There is no single method of generating electricity without downsides.


We’ve made dams long before we knew about electricity. At which point tacking hydropower to a dam that would exist either way has basically zero environmental impact.

Pure hydropower dams definitely do have significant environmental impact.


I just don't get the premise of your argument. Are you honestly saying that stopping the normal flow of water has no negative impact on the ecosystem? What about the area behind the dam that is now flooded? What about the area in front of the dam where there is now no way to traverse back up stream?

Maybe you're just okay with and willing to accept that kind of change. That's fine, just as some people are okay with the risk of nuclear or the use of land for solar/wind. But to just flat-out deny that it has an impact is dishonest discourse at best.


It’s the same premise as rooftop solar. You’re building a home anyway so adding solar panels to the roof isn’t destroying pristine habitat.

People build dams for many reasons not just electricity.

Having a reserve of rainwater is a big deal in California, Texas, etc. Letting millions of cubic meters more water flow into the ocean would make the water problems much worse in much of the world. Flood control is similarly a serious concern. Blaming 100% of the issues from dams on Hydropower is silly if outlawing hydropower isn’t going to remove those dams.


You are asserting building a dam has downsides. That’s correct (there are upsides too - flood control, fresh water storage etc)

However you are conflating dam building with hydro generation.


History is a great reference, but it doesn't solve our problems now. Just because hydro has prevented more CO2 until now doesn't mean that hydro plus solar is the combination that delivers abundant, clean energy. There are power storage challenges, and storage mechanisms aren't carbon neutral. Even if we assume that nuclear, wind, and solar (without storage) all have the same carbon footprint - I believe nuclear's is lower than solar's and pretty much equivalent to wind's - you have to add the storage mechanisms for scenarios where there's no wind, sun, or water.

All of the above are significantly better than burning gas or coal - but nuclear is the clear winner from an CO2 and general availability perspective.


Seriously scaling nuclear would involve batteries. Nuclear has issues being cost effective at 80+% capacity factors. When you start talking sub 40% capacity factors the cost per kWh spirals.

The full cost of operating a nuclear reactor for just 5 hours per day is simply higher than that of a plant running at an 80% capacity factor and charging batteries.
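A minimal sketch of why capacity factor dominates the economics; the dollar figures below are placeholders I made up for illustration, not real plant data:

    # Hypothetical plant: fixed annual costs barely change whether it runs 5 or 20+ hours a day.
    annual_fixed_cost_usd = 500_000_000   # assumed capital recovery + staffing + maintenance
    capacity_mw = 1_000
    hours_per_year = 8_760

    def cost_per_mwh(capacity_factor: float) -> float:
        generated_mwh = capacity_mw * hours_per_year * capacity_factor
        return annual_fixed_cost_usd / generated_mwh

    print(cost_per_mwh(0.90))    # ~63 $/MWh running near flat out
    print(cost_per_mwh(5 / 24))  # ~274 $/MWh if only needed ~5 hours a day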


> Seriously scaling nuclear would involve batteries. Nuclear has issues being cost effective at 80+% capacity factors.

I assume you mean that sub 80% capacity nuclear has issues being cost effective (which I agree is true).

You could pair the baseload nuclear with renewables during peak times and reduce battery dependency for scaling and maintaining higher utilization.


I meant even if you’re operating nuclear as baseload power looking forward the market rate for electricity looks rough without significant subsidies.

Daytime you’re facing solar head to head which is already dropping wholesale rates. Off peak is mostly users seeking cheap electricity so demand at 2AM is going to fall if power ends up cheaper at noon. Which means nuclear needs to make most of its money from the duck curve price peaks. But batteries are driving down peak prices.

Actually cheap nuclear would make this far easier, but there’s no obvious silver bullet.


That just goes to show how incredibly short-sighted humanity is. We knew about the risk of massive CO2 emissions from burning fossil fuels but just ignored it while irrationally demonizing nuclear energy because it is scary. If humans were sane and able to plan, Earth would be getting 100% of its electricity from super-efficient 7th-generation nuclear reactors.


When talking to my parents, I hear a lot about Jane Fonda and the China Syndrome as far as the fears of nuclear power.

She's made the same baseless argument for a long time: "Nuclear power is slow, expensive — and wildly dangerous"

https://ourworldindata.org/nuclear-energy#:~:text=The%20key%....

CO2 issues aside, it's just outright safer than all forms of coal and gas and about as safe as solar and wind, all three of which are a bit safer than hydro (still very safe).


She’s two thirds right. It’s slow and expensive.


I agree costs could have dropped significantly, but I doubt 100% nuclear was ever going to happen.

Large-scale dams will exist to store water, and tacking hydroelectric generation on top of them is incredibly cost effective. Safety-wise, dams are seriously dangerous, but they also save a shocking number of lives by reducing flooding.


There was adequate evidence that nuclear is capable of killing millions of people and causing large scale environmental issues.

It’s still not clear today what effect CO2 or fossil fuel usage has on us.


Nuclear reactors are not nuclear bombs. Nuclear reactors are very safe on a joules-per-death basis.


> Historically, hydro has prevented for more CO2 than nuclear by a wide margin.

Hydro is not evenly distributed and is mostly tapped out, outside of a few exceptions. Hydro literally cannot solve the issue.

Even less so as AGW starts running meltwater sources dry.


I wasn’t imply it would, just covering the very short term.

Annual production from nuclear is getting passed by wind in 2025 and possibly 2024. So just this second it’s possibly #1 among wind, solar and nuclear but they are all well behind hydro.


I think solar is a lot cheaper than nuclear, even if you factor in battery storage.


Are you proposing that cars should have nuclear reactors in them?

Teslas run great on nuclear power, unlike fossil fuel ICE cars.


Of course not.


A world where nuclear power helped with climate change would also be a world where Teslas eliminate a good chunk of harmful pollution by allowing cars to be moved by nuclear, so I'm not sure what point you were trying to make.

Even at this minute, Teslas are moving around powered by nuclear power.


Why not? We just need to use Mr Fusion in everything

https://backtothefuture.fandom.com/wiki/Mr._Fusion


Every year Musk personally flies enough in his private jet to undo the emissions savings of over 100,000 EVs...

Remember that every time you get in your Tesla that you're just a carbon offset for a spoiled billionaire.


Hmmmm, the average car uses 489 gallons a year. A large private jet uses 500 gallons an hour. There are 8,760 hours in a year.

So even if Elon lived in a jet that flew 24/7, you'd only be very wrong. Since that's obviously not the case, you're colossally and completely wrong.
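Spelling out the arithmetic with those figures (and the actual 8,760 hours in a year):

    car_gallons_per_year = 489
    jet_gallons_per_hour = 500
    hours_per_year = 24 * 365                 # 8,760

    # Absurd worst case: the jet flies literally nonstop, all year.
    jet_gallons_per_year = jet_gallons_per_hour * hours_per_year      # 4,380,000 gallons
    car_equivalents = jet_gallons_per_year / car_gallons_per_year
    print(round(car_equivalents))             # ~8,957 cars' worth of fuel, nowhere near 100,000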

Remember that the next time you try to make an argument that Tesla is not an incredible force for decarbonization.


Not Tesla exactly, but Musk has gone all-in trying to get a man elected to be US President who consistently says climate change is a hoax, or words to that effect.


US oil production under the current administration is at 13.5M barrels per day. The highest ever. The US is shitting the bed on the energy transition. Meanwhile global solar cell production is slated to hit 2TW/year by the end of 2025 @ under 10cents/watt. China, the land of coal, is on track to hit net zero before the US. Both parties and all levels of government have a disgraceful record on climate change.

PS: For context 2TW of solar can generate about 10% of global electricity. Production capacity will not stop at 2TW. All other forms of electricity are basically doomed, no matter what the GOP says about climate change.
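A quick sanity check on that 10% figure; the capacity factor and global demand below are my own rough assumptions, not numbers from this thread:

    solar_capacity_tw = 2
    capacity_factor = 0.15        # assumed global-average solar capacity factor
    hours_per_year = 8_760
    global_demand_twh = 30_000    # assumed annual global electricity consumption, roughly

    generation_twh = solar_capacity_tw * hours_per_year * capacity_factor   # ~2,628 TWh
    print(generation_twh / global_demand_twh)                               # ~0.09, i.e. roughly 10%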


Both parties have a disgraceful record on climate change, but the GOP is still clearly much worse. High as US oil production is, Republicans complain that it should be higher. And Trump making the hoax claim dogma for his followers is incredibly damaging.


I think you missed the 'EV' part of the post.


As opposed to all the other execs whose companies aren’t a force to combat climate change and still fly their private jets.

But don’t get me wrong, anyone and everyone can fly their private jets if they can afford such things. They will already have generated enough taxes at that point that they’re offsetting thousands or millions of Prius drivers.


As opposed to all the other execs

Yes, actually.

Other execs fly as needed because they recognize that, in this wondrous age of the internet, teleconferencing can replace most in-person meetings. Somehow, only a supposed technology genius like Elon Musk thinks in-person meetings are required for everything.

Other execs also don't claim to be trying to save the planet while doing everything in their power to exploit its resources or destroy natural habitats.


As I understand it, electric cars are more polluting than non-electric ones: first of all, their manufacturing and resource footprint is larger, but also, because they are heavier (due to the batteries), their tires wear down much faster and need more frequent replacement, supposedly by enough that their lack of tailpipe emissions doesn't compensate for it.

Besides, electric vehicles still seem to be very impractical compared to normal cars, because they can't drive very far without needing a lengthy recharge.

So I think the eco-friendliness of electric vehicles is maybe like the full self-driving system: nice promises but no delivery.


That has been falsified by more studies than I can keep track of. And yes, if you charge your EV with electricity produced by oil, the climate effect will be non-optimal.


Pretty much everything you've said here isn't true. You are just repeating tropes that are fossil fuel industry FUD.


[flagged]


It’s unfortunate that puffery survived as a common-law defence. It’s a relic of an earlier era, when fraud was far more acceptable and people were more conditioned to assume that vendors were outright lying to them; it has no place in modern society.

Also, that’s investors, not consumers. While the rise of retail investing has made this kind of dubious, investors are generally assumed by the law to be far less in need of protection than consumers; it is assumed that they take a degree of care over their investments that a consumer couldn’t reasonably take with every single product they buy.


This was a lawsuit by shareholders, and the judge thought investors should know whatever Elon says is bullshit.

Completely different from e.g. consumers, of whom less such understanding is expected.


I think you mean fortunately?


Unfortunately for them and their ideological allies, fortunately for people with common sense.


Tesla's BS with FSD is as bad as Theranos was with their blood tests.


It's called "Full Self-Driving (Supervised) Beta" and you agree that you understand that you have to pay attention and are responsible for the safety of the car before you turn it on.


So the name of it is a contradiction, and the fine print contradicts the name. "Full self driving" (the concept, not the Tesla product) does not need to be supervised.


Come on, you know it's an oxymoron. "Full" and "supervised" don't belong in the same sentence. Any 10-year-old, or a non-native English speaker who has only learned the language from textbooks for 5 years, can tell you that. Just... stop defending Tesla.


It's a name that accurately describes the ultimate goal of the technology. It's not there yet, and Tesla makes it clear that this is the case. I don't see an issue with it and it works exceptionally well as is.


"Driver is mostly disengaged, but then must intervene in a sudden fail state" is also one of the most dangerous types of automation due to how long it takes the driver to reach full control as well.


Yeah, I don't drive but I would think it would be worse than actually paying attention all the time


It's also a problem that gets worse as the software gets better. Having to intervene once every 5 minutes is a lot easier than having to intervene once every 5 weeks. If lack of intervention causes an accident, I'd bet on the 5 minute car avoiding an accident longer than the 5 week car for any span of time longer than 10 weeks.


I feel like the full self driving cars should have a "budget". Every time you drive, say, 1000 km in FSD, you then need to drive 100 km in "normal" mode to keep sharp. Or whatever the ratio / exact numbers TBD. You can reset the counter upfront by driving smaller mileage more regularly.
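
A toy sketch of what that bookkeeping could look like, with the 1000:100 ratio above hard-coded; the class name, the 100 km grace threshold, and the blocking policy are all made up for illustration:

    # Toy illustration of the proposed "practice budget": for every 1,000 km of
    # FSD driving you owe 100 km of manual driving. Ratio and names are made up.

    class PracticeBudget:
        FSD_KM_PER_MANUAL_KM = 10  # i.e. 1,000 km FSD requires 100 km manual

        def __init__(self):
            self.fsd_km = 0.0
            self.manual_km = 0.0

        def log_fsd(self, km: float) -> None:
            self.fsd_km += km

        def log_manual(self, km: float) -> None:
            self.manual_km += km

        @property
        def manual_km_owed(self) -> float:
            """Manual kilometres still owed; manual driving banked in advance counts."""
            owed = self.fsd_km / self.FSD_KM_PER_MANUAL_KM - self.manual_km
            return max(owed, 0.0)

        @property
        def fsd_allowed(self) -> bool:
            # One possible policy: block FSD once the debt exceeds 100 km.
            return self.manual_km_owed < 100.0

    budget = PracticeBudget()
    budget.log_manual(50)     # banked some manual driving up front
    budget.log_fsd(1200)      # 1,200 km on FSD -> owes 120 km, minus the 50 banked
    print(budget.manual_km_owed, budget.fsd_allowed)   # 70.0 True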


That's not solving the right problem. It keeps you sharp at driving, but it does not keep you sharp at supervising. The car might drive you 1,000 km flawlessly and still kill you with a random erratic bug on the 1,001st km (or on the 1,234th). That is where people will zone out. Keeping people driving will keep them able to drive, but it won't make them less zoned out when they are not driving.


Just as driving practice?

It's not going to help the problem of keeping up vigilance when monitoring a level 3 system.


It's more like being a driving instructor, which has a higher effort and skill bar than just driving.


You are required to pay attention all the time. That's what the "supervised" in "FSD (supervised)" means.


FSD stands for Fully Supervised Driving, right?


Same energy as

Unlimited Data!! (up to 100GB)


yeah, that sounds like Elon’s marketing to me.


The point is the transition from "Driving Supervisor" to "Driver" is non-trivial in terms of time.


This is an opinion almost certainly based more in emotion than logic, but I don't think I could trust any sort of fully autonomous driving system that didn't involve communication with transmitters along the road itself (like a glideslope and localiser for aircraft approaches) and with other cars on the road.

Motorway driving sure, there it's closer to fancy cruise control. But around town, no thank you. I regularly drive through some really crappily designed bits of road, like unlabelled approaches to multi-lane roundabouts where the lane you need to be in for a particular exit sorta just depends on what the people in front and to the side of you happen to have chosen. If it's difficult as a human to work out what the intent is, I don't trust a largely computer vision-based system to work it out.

The roads here are also in a terrible state, and the lines on them even more so. There's one particular patch of road where the lane keep assist in my car regularly tries to steer me into the central reservation, because repair work has left what looks a bit like lane markings running diagonally across the lane.


> didn't involve communication with transmitters along the road itself (like a glideslope and localiser for aircraft approaches) and with other cars on the road

There will be a large number of non-participating vehicles on the road for at least another 50 years. (The average age of a car in the US is a little over 12 years and rising. I doubt we'll see a comms-based standard emerge and be required equipment on new cars for at least another 20 years.)


"There will be a large number of non-participating vehicles on the road for at least another 50 years."

I think so too, but I also think that, if we really wanted to, all it would take is a GPS device with an internet connection, like a smartphone, to turn a normal car into a real-time connected one.

But I also think we need to work out some social and institutional issues first.

Currently I would not like my position to be available in real time to some obscure agency.


Hell, ignore vehicles. What about pedestrians, cyclists, animals, construction equipment, potholes, etc?


I agree with you about the trust issues and feel similarly, but also feel like the younger generations who grow up with these technologies might be less skeptical about adopting them.

I've been kind of amazed how much younger people take some newer technologies for granted, the ability of humans to adapt to changes is marvelous.


Once insurance requires it or makes you pay triple to drive manually, that will likely be the tipping point for many people.


Potential problem with transmitters is that they could be faked.

You could certainly never rely on them alone.


There are lots of other areas where intentionally violating FCC regulations to transmit harmful signals is already technologically feasible and cheap, but hasn't become a widespread problem in practice. Why would it be any worse for cars communicating with each other? If anything, having lots of cars on the road logging what they receive from other cars (spoofed or otherwise) would make it too easy to identify which signals are fake, thwarting potential use cases like insurance fraud (since it's safe to assume the car broadcasting fake data is at fault in any collision).


I agree, the problem has been solved.

If a consensus mechanism similar to those used in blockchain were implemented, vehicles could cross-reference the data they receive with data from multiple other vehicles. If inconsistencies are detected (for example, a car reporting a different speed than what others are observing), that data could be flagged as potentially fraudulent.

Just as blockchain technologies can provide a means of verifying the authenticity of transactions, a network of cars could establish a decentralized validation process for the data they exchange. If one car broadcasts false data, the consensus mechanism among the surrounding vehicles would allow for the identification of this "anomaly", similar to how fraudulent transactions can be identified and rejected in a blockchain system.

What you mentioned with regard to insurance could be used as a deterrent, too, along with laws making it illegal to spoof relevant data.

In any case, privacy is going to take a toll here, I believe.
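
Stripped of the blockchain framing, the core of the idea is just cross-checking a vehicle's self-reported value against what its neighbours independently observe. A minimal sketch, with invented thresholds and data shapes:

    from statistics import median

    # Minimal sketch of the cross-checking idea: compare a car's self-reported
    # speed with the speed its neighbours independently measured for it, and
    # flag it if it disagrees with the consensus. Thresholds are arbitrary.

    def is_suspicious(self_reported_kmh: float,
                      neighbour_estimates_kmh: list[float],
                      tolerance_kmh: float = 10.0,
                      min_witnesses: int = 3) -> bool:
        """Return True if the self-report disagrees with the neighbour consensus."""
        if len(neighbour_estimates_kmh) < min_witnesses:
            return False  # not enough independent observations to judge
        consensus = median(neighbour_estimates_kmh)
        return abs(self_reported_kmh - consensus) > tolerance_kmh

    # A car claims 60 km/h, but four nearby cars' sensors all put it near 95 km/h.
    print(is_suspicious(60.0, [96.0, 93.5, 97.0, 94.0]))   # True  -> flag the report
    print(is_suspicious(60.0, [61.0, 58.5, 62.0]))         # False -> consistent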


This is a complicated, technical solution looking for a problem.

Simple, asymmetrically-authenticated signals and felonies for the edge cases solve this problem without any futuristic computer wizardry.
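
As a rough illustration of what "simple, asymmetrically-authenticated signals" could mean in practice, here is a minimal sketch using Ed25519 signatures via the widely used Python cryptography package; the message format and the key-distribution story are assumptions, not any real V2V standard:

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Sketch only: each vehicle holds a private key (e.g. provisioned by the
    # manufacturer or a regulator) and signs every broadcast. Receivers verify
    # against the sender's public key. How keys are distributed and revoked is
    # the hard part and is hand-waved here.

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    message = json.dumps({"vehicle": "ABC123", "speed_kmh": 87.5,
                          "lat": 52.01, "lon": 4.36}).encode()
    signature = private_key.sign(message)

    # Receiver side: verify() raises InvalidSignature on tampered/spoofed data.
    try:
        public_key.verify(signature, message)
        print("signature OK, message accepted")
    except InvalidSignature:
        print("spoofed or corrupted message, rejected")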


I did not intend to state that we ought to use the blockchain at all, for what it is worth. Vehicles should cross-reference the data they receive with data from multiple other vehicles and detect inconsistencies; any consensus mechanism could work, if we can even call it that.


> If it's difficult as a human to work out what the intent is, I don't trust a largely computer vision-based system to work it out.

Most likely, every self-driving car company will send drivers down every road in the country, recording everything they see. Then they'll have human labellers figure out any junctions where the road markings are ambiguous.

They've had sat nav maps covering every road for decades, and the likes of Google Street View, so to have a detailed map of every junction is totally possible.


In that case I hope they're prepared to work with local authorities to immediately update the map every time road layouts change, temporarily or permanently. Google Maps gets lane guidance wrong very often in my experience, so that doesn't exactly fill me with confidence.


I kind of assumed that already happened. Does it not? Is anyone pushing for it?

Honestly it seems like it ought to be federal law by now that municipalities need to notify a designated centralized service of all road/lane/sign/etc. changes in a standardized format, that all digital mapping providers can ingest from.

Is this not a thing? If not, is anyone lobbying for it? Is there opposition?


> I kind of assumed that already happened.

Road layout can change daily, sometimes multiple times per day. Sometimes in a second, like when a tree falls on a lane and now you have to reroute on the oncoming lane for some distance, etc.


Coordinating roadwork is challenging in most places, I think. Over here, it's apparently cheaper to open up a road multiple times in a year, rather than coordinating all the different parties that need underground access in the foreseeable future.


"Honestly it seems like it ought to be federal law by now that municipalities need to notify a designated centralized service of all road/lane/sign/etc. changes in a standardized format, that all digital mapping providers can ingest from"

Why not just open it to anyone and make that data openly available?


And the contractors employed by the local authorities to do roadworks big and small.


The interesting question is how good self-driving has to be before people tolerate it.

It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable. How about a quarter? Or a tenth? Accidents caused by human drivers are one of the largest causes of injury and death, but they're not newsworthy the way an accident involving automated driving is. It's all too easy to see a potential future where many people die needlessly because technology that could save lives is regulated into a greatly reduced role.


This is about lying to the public and stoking false expectations for years.

If it's "fully self driving" Tesla should be liable for when its vehicles kill people. If it's not fully self driving and Tesla keeps using that name in all its marketing, regardless of any fine print, then Tesla should be liable for people acting as though their cars could FULLY self drive and be sued accordingly.

You don't get to lie just because you're allegedly safer than a human.


I think this is the answer: the company takes on full liability. If a Tesla is Fully Self Driving then Tesla is driving it. The insurance market will ensure that dodgy software/hardware developers exit the industry.


This is very much what I would like to see.

The price of insurance is baked into the price of a car. If the car is as safe as I am, I pay the same price in the end. If it's safer, I pay less.

From my perspective:

1) I would *much* rather have Honda kill someone than myself. If I killed someone, the psychological impact on myself would be horrible. In the city I live in, I dread ageing; as my reflexes get slower, I'm more and more likely to kill someone.

2) As a pedestrian, most of the risk seems to come from outliers -- people who drive hyper-aggressively. Replacing all cars with a median driver would make me much safer (and traffic, much more predictable).

If we want safer cars, we can simply raise insurance payouts, and vice-versa. The market works everything else out.

But my stress levels go way down, whether in a car, on a bike, or on foot.


>> I would much rather have Honda kill someone than myself. If I killed someone, the psychological impact on myself would be horrible.

Except that we know that it doesn't work like that. Train drivers are ridden with extreme guilt every time "their" train runs over someone, even though they know that logically there was absolutely nothing they could have done to prevent it. Don't see why it would be any different here.

>>If we want safer cars, we can simply raise insurance payouts, and vice-versa

In what way? In the EU the minimum covered amount for any car insurance is 5 million euro, and it has had no impact on the safety of cars. And of course the recent increase in payouts (due to the general increase in labour and parts costs) has led to a dramatic increase in insurance premiums, which in turn has led to a drastic increase in the number of people driving without insurance. So now that needs increased policing and enforcement, which we pay for through taxes. So no, the market doesn't "work everything out".


Being in a vehicle that collides with someone and kills them is going to be traumatic regardless of whether or not you're driving.

But it's almost certainly going to be more traumatic and more guilt-inducing if you are driving.

If I only had two choices, I would much rather my car kill someone than I kill someone with my car. I'm gonna feel bad about it either way, but one is much worse than the other.


> Except that we know that it doesn't work like that. Train drivers are ridden with extreme guilt every time "their" train runs over someone, even though they know that logically there was absolutely nothing they could have done to prevent it. Don't see why it would be any different here.

It's not binary. Someone dying -- even with no involvement -- can be traumatic. I've been in a position where I could have taken actions to prevent someone from being harmed. Rationally not my fault, but in retrospect, I can describe the exact set of steps needed to prevent it. I feel guilty about it, even though I know rationally it's not my fault (there's no way I could have known ahead of time).

However, it's a manageable guilt. I don't think it would be if I knew rationally that it was my fault.

> So no, market doesn't "work everything out".

Whether or not a market works things out depends on issues like transparency and information. Parties will offload costs wherever possible. In the model you gave, there is no direct cost to a car maker making less safe cars or vice-versa. It assumes the car buyer will even look at insurance premiums, and a whole chain of events beyond that.

That's different if it's the same party making cars, paying money, and doing so at scale.

If Tesla pays for everyone damaged in any accident a Tesla car has, then Tesla has a very, very strong incentive to make safe cars to whatever optimum is set by the damages. Scales are big enough -- millions of cars and billions of dollars -- where Tesla can afford to hire actuaries and a team of analysts to make sure they're at the optimum.

As an individual car buyer, I have no chance of doing that.

Ergo, in one case, the market will work it out. In the other, it won't.


That's just reducing the value of a life to a number. It can be gamed to a situation where it's just more profitable to mow down people.

Settling on an acceptable number/financial cost is also just an indirect, approximate way of implementing more direct/scientific regulation. Not everything needs to be reduced to money.


There is no way to game it successfully; if your insurance costs are much higher than your competitors you will lose in the long run. That doesn’t mean there can’t be other penalties when there is gross negligence.


Who said management and shareholders are in it for the long run? There are plenty of examples of businesses run purely for the short term: bonuses and stock pumps.


That would be good because it would incentivize all FSD cars communicating with each other. Imagine how safe driving would be if they are all broadcasting their speed and position to each other. And each vehicle sending/receiving gets cheaper insurance.


It gets kinda dystopian if access to the network becomes a monopolistic barrier.


Not to mention the possibility of requiring pedestrians and cyclists to also be connected to the same network. Anyone with access to the automotive network could track any pedestrian who passes by the vicinity of a road.


It's hard to think of a good blend of traffic safety, privacy guarantees, and resistance to bad-actors. Having/avoiding persistent identification is certainly a factor.

Perhaps one approach would be to declare that automated systems are responsible for determining the position/speed of everything around them using regular sensors, but may elect to take hints from anonymous "notice me" marks or beacons.


no need.


I’m for this as long as the company also takes on liability for human errors they could prevent. I’d want to see cars enforcing speed limits and similar things. Humans are too dangerous to drive.


Also force other auto makers to be liable when their over-tall SUVs cause more deaths than sedan type cars.


Tesla officially renamed it to “Full Self Driving (supervised)” a few months ago, previously it was “Full Self Driving (beta)”

Both names are ridiculous, for different reasons. Nothing called a “beta” should be tested on public roads without a trained employee supervising it (i.e. being paid to pay attention). And of course it was not “full”, it always required supervision.

And “Full Self Driving (supervised)” is an absurd oxymoron. Given the deaths and crashes that we’ve already seen, I’m skeptical of the entire concept of a system that works 98% of the time, but also needs to be closely supervised for the 2% of the time when it tries to kill you or others (with no alerts).

It’s an abdication of duty that NHTSA has let this continue for so long, they’ve picked up the pace recently and I wouldn’t be surprised if they come down hard on Tesla (unless Trump wins, in which case Elon will be put in charge of NHTSA, the SEC, and FAA)


I hope they soon rename it into "Fully Supervised Driving".


It’s your car, so ultimately the liability is yours. That’s why you have insurance. If Tesla retains ownership, and just lets you drive it, then they have (more) liability.


> It’s your car, so ultimately the liability is yours

No, that's not how it works. The driver and the driver's insurer are on the hook when something bad happens. The owner is not, except when the owner is also the one driving, or if the owner has been negligent with maintenance, and the crash was caused by mechanical failure related to that negligence.

If someone else is driving my car and I'm a passenger, and they hurt someone with it, the driver is liable, not me. If that "someone else" is a piece of software, and that piece of software has been licensed/certified/whatever to drive a car, why should I be liable for its failures? That piece of software needs to be insured, certainly. It doesn't matter if I'm required to insure it, or if the manufacturer is required to insure it.

Tesla FSD doesn't fit into this scenario because it's not the driver. You are still the driver when you engage FSD, because despite its name, FSD is not capable of filling that role.


Incorrect. Or at least, it varies by state. I was visiting my mother and borrowed her car, had a minor accident with it. Her insurance paid, not mine.

This is why you are required to have insurance for the cars you own. You may from time to time be driving cars you do not own, and the owners of those cars are required to have insurance for those cars, not you.


Hesitation around self-driving technology is not just about the raw accident rate, but the nature of the accidents. Self-driving failures often involve highly visible, preventable mistakes that seem avoidable by a human (e.g., failing to stop for an obvious obstacle). Humans find such incidents harder to tolerate because they can seem fundamentally different from human error.


Exactly -- it's not just the overall accident rate, but the rate per accident type.

Imagine if self-driving is 10x safer on freeways, but on the other hand is 3x more likely to run over your dog in the driveway.

Or it's 5x safer on city streets overall, but actually 2x worse in rain and ice.

We're fundamentally wired for loss aversion. So I'd say it's less about what the total improvement rate is, and more about whether it has categorizable scenarios where it's still worse than a human.


If Tesla's FSD was actually self-driving, maybe half the casualty rate of the median human driver would be fine.

But it's not. It requires constant supervision, and drivers sometimes have to take control (without the system disengaging on its own) in order to correct it from doing something unsafe.

If we had stats for what the casualty rate would be if every driver using it never took control back unless the car signaled it was going to disengage, I suspect that casualty rate would be much worse than the median human driver. But we don't have those stats, so we shouldn't trust it until we do.

This is why Waymo is safe and tolerated and Tesla FSD is not. Waymo test drivers record every time they have to take over control of the car for safety reasons. That was a metric they had to track and improve, or it would have been impossible to offer people rides without someone in the driver's seat.


>> How about a quarter? Or a tenth?

The answer is zero. An airplane autopilot has increased the overall safety of airplanes by several orders of magnitude compared to human pilots, but literally no errors in its operation are tolerated, whether they are deadly or not. The exact same standard has to apply to cars or any automated machine for that matter. If there is any issue discovered in any car with this tech then it should be disabled worldwide until the root cause is found and eliminated.

>> It's all too easy to see a potential future where many people die needlessly because technology that could save lives is regulated into a greatly reduced role.

I really don't like this argument, because we could already prevent literally all automotive deaths tomorrow through existing technology and legislation and yet we are choosing not to do this for economic and social reasons.


You can't equate airplane safety with automotive safety. I worked at an aircraft repair facility doing government contracts for a number of years. In one instance, somebody lost the toilet paper holder for one of the aircraft. This holder was simply a piece of 10 gauge wire that was bent in a way to hold it and supported by wire clamps screwed to the wall. Making a new one was easy but since it was a new part going on the aircraft we had to send it to a lab to be certified to hold a roll of toilet paper to 9 g's. In case the airplane crashed you wouldn't want a roll of toilet paper flying around I guess. And that cost $1,200.


No, I'm pretty sure I can in this regard - any automotive "autopilot" has to be held to the same standard. It's either zero accidents or nothing.


This only works for aerospace because everything and everyone is held to that standard. It's stupid to hold automotive autopilots to the same standard as a plane's autopilot when a third of fatalities in cars are caused by the pilots being drunk.


I don't think that's a useful argument.

I think we should start allowing autonomous driving when the "driver" is at least as safe as the median driver when the software is unsupervised. (Teslas may or may not be that safe when supervised, but they absolutely are not when unsupervised.)

But once we get to that point, we should absolutely ratchet those standards so automobile safety over time becomes just as safe as airline safety. Safer, if possible.

> It's stupid to hold automotive autopilots to the same standard as a plane's autopilot when a third of fatalities in cars are caused by the pilots being drunk.

That's a weird argument, because both pilots and drivers get thrown in jail if they fly/drive drunk. The standard is the same.


> The answer is zero

If autopilot is 10x safer then preventing its use would lead to more preventable deaths and injuries than allowing it.

I agree that it should be regulated and incidents thoroughly investigated, however letting perfect be the enemy of good leads to stagnation and lack of practical improvement and greater injury to the population as a whole.


>>If autopilot is 10x safer then preventing its use would lead to more preventable deaths and injuries than allowing it.

And yet whenever there is a problem with any plane autopilot it's preemptively disabled fleet wide and pilots have to fly manually even though we absolutely beyond a shadow of a doubt know that it's less safe.

If an automated system makes a wrong decision and it contributes to harm/death then it cannot be allowed on public roads full stop, no matter how many lives it saves otherwise.


Depends on what one considers a "problem." As long as the autopilot's failures conditions and mitigation procedures are documented, the burden is largely shifted to the operator.

Autopilot didn't prevent slamming into a mountain? Not a problem as long as it wasn't designed to.

Crashed on landing? No problem, the manual says not to operate it below 500 feet.

Runaway pitch trim? The manual says you must constantly be monitoring the autopilot and disengage it when it's not operating as expected and to pull the autopilot and pitch trim circuit breakers. Clearly insufficient operator training is to blame.


> And yet whenever there is a problem with any plane autopilot it's preemptively disabled fleet wide and pilots have to fly manually even though we absolutely beyond a shadow of a doubt know that it's less safe.

just because we do something dumb in one scenario isn't a very persuasive reason to do the same in another.

> then it cannot be allowed on public roads full stop, no matter how many lives it saves otherwise.

ambulances sometimes get into accidents - we should ban all ambulances, no matter how many lives they save otherwise.


So your only concern is, when something goes wrong, need someone to blame. Who cares about lives saved. Vaccines can cause adverse effects. Let's ban all of them.

If people like you were in charge of anything, we'd still be hitting rocks for fire in caves.


Ok, consider this for a second. You're a director of a hospital that owns a Therac radiotherapy machine for treating cancer. The machine is without any shadow of a doubt saving lives. People without access to it would die or have their prognosis worsen. Yet one day you get a report saying that the machine might sometimes, extremely rarely, accidentally deliver a lethal dose of radiation instead of the therapeutic one.

Do you decide to keep using the machine, or do you order it turned off until that defect can be fixed? Why yes or why not? Why does the same argument apply/not apply in the discussion about self driving cars?

(And in case you haven't heard about it: the Therac radiotherapy machine fault was a real thing. It's used as a cautionary tale for software development, but I sometimes wonder if it should be used in philosophy classes too.)


I'd challenge the legitimacy of the claim that it's 10x safer, or even safer at all. The safety data provided isn't compelling to me; it can be gamed or misrepresented in various ways, as pointed out by others.


That claim wasn't made. It was a hypothetical, what if it was 10x safer? Then would people tolerate it.


yes people would, if we had a reliable metric for safety of these systems besides engaged/disengaged. We don't, and 10x safer with the current metrics is not satisfactory.


Airplane autopilots follow a lateral & sometimes vertical path through the sky prescribed by the pilot(s). They are good at doing that. This does increase safety, because it frees up the pilot(s) from having to carefully maintain a straight 3d line through the sky for hours at a time.

But they do not listen to ATC. They do not know where other planes are. They do not keep themselves away from other planes. Or the ground. Or a flock of birds. They do not handle emergencies. They make only the most basic control-loop decisions about the control surface and power (if even autothrottle equipped, otherwise that's still the meatbag's job) changes needed to follow the magenta line drawn by the pilot given a very small set of input data (position, airspeed, current control positions, etc).

The next nearest airplane is typically at least 3 miles laterally and/or 500' vertically away, because the errors allowed with all these components are measured in hundreds of feet.

None of this is even remotely comparable to a car using a dozen cameras (or lidar) to make real-time decisions to drive itself around imperfect public streets full of erratic drivers and other pedestrians a few feet away.

What it is a lot like is what Tesla actually sells (despite the marketing name). Yes, it's "flying" the plane, but you're still responsible for making sure it's doing the right thing, the right way, and not going to hit anything or kill anybody.


Thank you for this. The number of people conflating Tesla's Autopilot with an airliner's autopilot, and expecting that use and policies and situations surrounding the two should be directly comparable, is staggering. You'd think people would be better at critical thinking with this, but... here we are.


Ah. Few people realize how dumb aircraft autopilots really are. Even the fanciest ones just follow a series of waypoints.

There is one exception: Garmin Safe Return. That's strictly an emergency system. If it activates, the plane squawks an emergency to ATC and demands that airspace and a runway be cleared for it.[1] This has been available since 2019 and does not seem to have yet been activated in an emergency.

[1] https://youtu.be/PiGkzgfR_c0?t=87


It does do that and it's pretty neat, if you have one of the very few modern turboprops or small jets that have G3000s & auto throttle to support it.

Airliners don't have this, but they have a 2nd pilot. A real-world activation needs a single-pilot operation where they're incapacitated, in one of the maybe few hundred nice-but-not-too-nice private planes it's equipped in, and a passenger is there to push it.

But this is all still largely using the current magenta line AP system, and that's how it's verifiable and certifiable. There's still no cameras or vision or AI deciding things, there are a few new bits of relatively simple standalone steps combined to get a good result.

- Pick a new magenta line to an airport (like pressing NRST Enter Enter if you have filtering set to only suitable fields)

- Pick a vertical path that intersects with the runway (Load a straight-in visual approach from the database)

- Ensure that line doesn't hit anything in the terrain/obstacle database. (Terrain warning system has all this info, not sure how it changes the plan if there is a conflict. This is probably the hardest part, with an actual decision to make).

- Look up the tower frequency in DB and broadcast messages. As you said it's telling and not asking/listening.

- Other humans know to get out of the way because this IS what's going to happen. This is normal, an emergency aircraft gets whatever it wants.

- Standard AP and autothrottle flies the newly prescribed path.

- The radio altimeter lets it know when to flare.

- Wheel weight sensors let it know to apply the brakes.

- The airport helps people out and tows the plane away, because it doesn't know how to taxi.

There's also "auto glide" on the more accessible G3x suite for planes that aren't necessarily $3m+. That will do most of the same stuff and get you almost, but not all the way, to the ground in front of a runway automatically.


> and a passenger is there to push it.

I think it will also activate if the pilot is unconscious, for solo flights. It has something like a driver alertness detection system that will alarm if the pilot does nothing for too long. The pilot can reset the alarm, but if they do nothing, the auto return system takes over and lands the plane someplace.


> They do not know where other planes are.

Yes they do. It's called TCAS.

> Or the ground.

Yes they do. It's called Auto-GCAS.


Yes those are optional systems that exist, but they are unrelated to the autopilot (in at least the vast majority of avionics).

They are warning systems that humans respond to. For a TCAS RA the first thing you're doing is disengaging the autopilot.

If you tell the autopilot to fly straight into the path of a mountain, it will happily comply and kill you while the ground proximity warnings blare.

Humans make the decisions in planes. Autopilots are a useful but very basic tool, much more akin to cruise control in a 1998 Civic than a self-driving Tesla/Waymo/erc.


> literally no errors in its operation are tolerated

Aircraft designer here, this is not true. We typically certify to <1 catastrophic failure per 1e9 flight hours. Not zero.


> ”The answer is zero…”

> ”If there is any issue discovered in any car with this tech then it should be disabled worldwide until the root cause is found and eliminated.”

This would literally cost millions of needless deaths in a situation where AI drivers had 1/10th the accident injury rate of human drivers.


Autopilots aren't held to a zero error standard let alone a zero accident standard.


> traveled of the median human driver isn't acceptable.

It's completely acceptable. In fact the numbers are lower than they have been since we've started driving.

> Accidents caused by human drivers

Are there any other types of drivers?

> are one of the largest causes of injury and death

More than half the fatalities on the road are actually caused by the use of drugs and alcohol. The statistics are very clear on this. Impaired people cannot drive well. Non impaired people drive orders of magnitude better.

> technology that could save lives

There is absolutely zero evidence this is true. Everyone is basing this off of a total misunderstanding of the source of fatalities and a willful misapprehension of the technology.


> Non impaired people drive orders of magnitude better.

That raises the question - how many impaired driver-miles are being baked into the collision statistics for "median human" driver-miles? Shouldn't we demand non-impaired driving as the standard for automation, rather than "averaged with drunk / phone-fiddling /senile" driving? We don't give people N-mile allowances for drunk driving based on the size of the drunk driver population, after all.
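
A crude illustration of how much the impaired share can move the baseline; every number below is an invented placeholder, and the point is only the mechanics of the adjustment:

    # Invented numbers, purely to show the mechanics of the adjustment:
    # suppose impaired drivers account for a small share of miles but a large
    # share of fatalities. What does the *non-impaired* baseline look like?

    TOTAL_FATALITIES_PER_100M_MILES = 1.3   # placeholder overall rate
    IMPAIRED_SHARE_OF_FATALITIES = 0.30     # placeholder: 30% of deaths
    IMPAIRED_SHARE_OF_MILES = 0.01          # placeholder: 1% of miles driven

    sober_rate = (TOTAL_FATALITIES_PER_100M_MILES
                  * (1 - IMPAIRED_SHARE_OF_FATALITIES)
                  / (1 - IMPAIRED_SHARE_OF_MILES))

    print(f"overall: {TOTAL_FATALITIES_PER_100M_MILES}, "
          f"non-impaired baseline: {sober_rate:.2f}")
    # With these made-up inputs the sober baseline is ~0.92 per 100M miles,
    # i.e. noticeably stricter than the "median human" average an AV is
    # usually compared against.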


Motorcycles account for a further 15% of all fatalities in a typical year. Weather is often a factor. Road design is sometimes a factor; I remember several rollover crashes that ended in a body of water with no one in the vehicle surviving. Likewise, ejections due to lack of seatbelt use are a noticeable factor in fatalities.

Once you dig into the data you see that almost every crash, at this point in history, is really a mini-story detailing the confluence of several factors that turned a basic accident into something fatal.

Also, and I only saw this once, but if you literally have a heart attack behind the wheel, you are technically a roadway fatality. The driver was 99. He just died while sitting in slow moving traffic.

Which brings me to my final point, which is that the rear seats in automobiles are less safe than the front seats. This is true for almost every vehicle on the road. You see _a lot_ of accidents where two 40-to-50-year-old passengers are up front and two 70-to-80-year-old passengers are in back. The ones up front survive. One or both passengers in the back typically die.


No, that makes no sense, because we can't ensure that human drivers aren't impaired. We test and compare against the reality, not the ideal we'd prefer.


We can sample rate of impairment. We do this quite often actually. It turns out the rate depends on the time of day.


> Are there any other types of drivers [than human drivers]?

Waymo says yes, there are.


> It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

Even if we optimistically assume no "gotchas" in the statistics [0], distilling performance down to a casualty/injury/accident rate can still be dangerously reductive when the systems have a different distribution of failure modes, which may or may not mesh with our other systems and defenses.

A quick thought experiment to prove the point: Imagine a system which compared to human drivers had only half the rate of accidents... But many of those are because it unpredictably decides to jump the sidewalk curb and kill a targeted pedestrian.

The raw numbers are encouraging, but it represents a risk profile that clashes horribly with our other systems of road design, car design, and what incidents humans are expecting and capable of preventing or recovering-from.

[0] Ex: Automation is only being used on certain subsets of all travel, the "easier" miles or circumstances, rather than the whole gamut a human would handle.


Re: gotchas: an even easier one is that the Tesla FSD statistics don't include when the car does something unsafe and the driver intervenes and takes control, averting a crash.

How often does that happen? We have no idea. Tesla can certainly tell when a driver intervenes, but they can't count every occurrence as safety-related, because a driver might take control for all sorts of reasons.

This is why we can make stronger statements about the safety of Waymo. Their software was only tested by people trained and paid to test it, who were also recording every time they had to intervene because of safety, even if there was no crash. That's a metric they could track and improve.


> It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

Were the Teslas driving under all weather conditions at any location like humans do or is it just cherry picked from the easy travelling conditions?


I think we should not be satisfied with merely “better than a human”. Flying is so safe precisely because we treat any casualty as unacceptable. We should aspire to make automobiles at least that safe.


I don't think the question was what we should be satisfied with or what we should aspire to. I absolutely agree with you that we should strive to make autonomous driving as safe as airline travel.

But the question was when should we allow autonomous driving on our public roads. And I think "when it's at least as safe as the median human driver" is a reasonable threshold.

(The thing about Tesla FSD is that it -- unsupervised -- would probably fall super short of that metric. FSD needs to be supervised to be safer than the median human driver, assuming that's even currently the case, and not every driver is going to be equally good at supervising it.)


Aspire to, yes. But if we say "we're going to ban FSD until it's perfect, even though it already saves lives relative to the average human driver", you're making automobiles less safe.


> I think we should not be satisfied with merely “better than a human”.

The question is whether you want to outlaw automatic driving just because the system is, say, "only" 50% safer than us.


Before FSD is allowed on public roads?

It’s a net positive, saving lives right now.


There are two things going on here with the average person that you need to overcome: when Tesla dodges responsibility, all anyone sees is a liar, and people amalgamate all the FSD crashes and treat the system like a dangerous local driver that nobody can get off the road.

Tesla markets FSD like it’s a silver bullet, and the name is truly misleading. The fine print says you need attention and all that. But again, people read “Full Self Driving” and all the marketing copy and think the system is assuming responsibility for the outcomes. Then a crash happens, Tesla throws the driver under the bus, and everyone gets a bit more skeptical of the system. Plus, doing that to a person rubs people the wrong way, and is in some respects a barrier to sales.

Which leads to the other point: people are tallying up all the accidents and treating the system like a person, and wondering why this dangerous driver is still on the road. Most accidents with a dead pedestrian start with someone doing something stupid, which is when they assume all responsibility, legally speaking. Drunk, speeding, etc. Normal drivers in poor conditions slow down and drive carefully. People see this accident and treat FSD like a serial drunk driver. It's to the point that I know people who openly say they treat Teslas on the road like erratic drivers just for existing.

Until Elon figures out how to fix his perception problem, the calls for investigations and for keeping his robotaxis off the road will only grow.


My dream is of a future where humans are banned from driving without special licenses.


So.........like right now you mean? You need a special licence to drive on a public road right now.


The problem is it’s obviously too easy to get one and keep one, based on some of the drivers I see on the road.


That sounds like a legislative problem where you live, sure it can be fixed by overbearing technology but we already have all the tools we need to fix it, we are just choosing not to for some reason.


Geez, clearly they mean like a CDL


No, you need an entirely common, unspecial license to drive on a public road right now.


And yet Tesla's FSD never passed a driving test.


And it can’t legally drive a vehicle


The problem is that Tesla is way behind the industry standards here and it's misrepresenting how good their tech is.


The key here is insurers. Because they pick up the bill when things go wrong. As soon as self driving becomes clearly better than humans, they'll be insisting we stop risking their money by driving ourselves whenever that is feasible. And they'll do that with price incentives. They'll happily insure you if you want to drive yourself. But you'll pay a premium. And a discount if you are happy to let the car do the driving.

Eventually, manual driving should come with a lot more scrutiny. Because once it becomes a choice rather than an economic necessity, other people on the road will want to be sure that you are not needlessly endangering them. So, stricter requirements for getting a drivers license with more training and fitness/health requirements. This too will be driven by insurers. They'll want to make sure you are fit to drive.

And of course when manual driving people get into trouble, taking away their driving license is always a possibility. The main argument against doing that right now is that a lot of people depend economically on being able to drive. But if that argument goes away, there's no reason to not be a lot stricter for e.g. driving under influence, or routinely breaking laws for speeding and other traffic violations. Think higher fines and driving license suspensions.


> The interesting question is how good self-driving has to be before people tolerate it.

It's pretty simple: as good as it can be given available technologies and techniques, without sacrificing safety for cost or style.

With AVs, function and safety should obviate concerns of style, cost, and marketing. If that doesn't work with your business model, well tough luck.

Airplanes are far safer than cars yet we subject their manufacturers to rigorous standards, or seemingly did until recently, as the 737 max saga has revealed. Even still the rigor is very high compared to road vehicles.

And AVs do have to be way better than people at driving because they are machines that have no sense of human judgement, though they operate in a human physical context.

Machines run by corporations are less accountable than human drivers, not least because of the wealth and legal armies of those corporations, which may have interests other than making the safest possible AV.


Surely the number of cars that can do it, and the price, also matter, unless you're going to ban private cars


> Surely the number of cars that can do it, and the price, also matter, unless you're going to ban private cars

Indeed, like this: the more cars sold that claim fully autonomous capability, and the more affordable they get, the higher the standards should be compared to their AV predecessors, even if they have long eclipsed human driver's safety record.

If this is unpalatable, then let's assign 100% liability with steep monetary penalties to the AV manufacturer for any crash that happens under autonomous driving mode.


And the same penalties for Human Driver Beta


Many people don't (and shouldn't) take the "half the casualty rate" claim at face value. My biggest concern is that Waymo and Tesla are juking the stats to make self-driving cars seem safer than they really are. I believe this is largely an unintentional consequence of bad actuarial science built on bad qualitative statistics; the worst kind of lying with numbers is lying to yourself.

The biggest gap in these studies: I have yet to see a comparison with human drivers that filters out DUIs, reckless speeding, or mechanical failures. Without doing this it is simply not a fair comparison, because:

1) Self-driving cars won't end drunk driving unless they're made mandatory by outlawing manual driving, or ignition is tied to a breathalyzer. Many people will continue to make the dumb decision to drive themselves home because they are drunk and driving is fun. This needs regulation, not technology. And DUIs need to be filtered from the crash statistics when comparing with Waymo (see the sketch after this list).

2) A self-driving car which speeds and runs red lights might well be more dangerous than a similar human, but the data says nothing about this since Waymo is currently on their best behavior. Yet Tesla's own behavior and customers prove that there is demand for reckless self-driving cars, and manufacturers will meet the demand unless the law steps in. Imagine a Waymo competitor that promises Uber-level ETAs for people in a hurry. Technology could in theory solve this but in practice the market could make things worse for several decades until the next research breakthrough. Human accidents coming from distraction are a fair comparison to Waymo, but speeding or aggressiveness should be filtered out. The difficulty of doing so is one of the many reasons I am so skeptical of these stats.

3) Mechanical failures are a hornets' nest of ML edge cases that might work in the lab but fail miserably on the road. Currently it's not a big deal because the cars are shiny and new. Eventually we'll have self-driving clunkers owned by drivers who don't want to pay for the maintenance.
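
To make points 1-3 concrete, here is a toy sketch of the cause-filtered comparison being asked for; the records, exposure figures, and excluded causes are all invented, and the point is only that the human baseline moves a lot depending on which crashes you exclude:

    # Toy sketch of a cause-filtered comparison. Everything here is made up.

    human_crashes = [
        {"fatal": True,  "cause": "dui"},
        {"fatal": True,  "cause": "speeding"},
        {"fatal": True,  "cause": "distraction"},
        {"fatal": False, "cause": "mechanical"},
        {"fatal": True,  "cause": "distraction"},
    ]
    HUMAN_MILES = 400e6          # invented exposure
    AV_FATAL_CRASHES = 2         # invented
    AV_MILES = 300e6             # invented

    EXCLUDED_CAUSES = {"dui", "speeding", "mechanical"}

    def fatal_rate(crashes, miles):
        return sum(c["fatal"] for c in crashes) / miles * 100e6   # per 100M miles

    all_human = fatal_rate(human_crashes, HUMAN_MILES)
    filtered_human = fatal_rate(
        [c for c in human_crashes if c["cause"] not in EXCLUDED_CAUSES],
        HUMAN_MILES,   # note: ideally the excluded *miles* would be removed too
    )
    av = AV_FATAL_CRASHES / AV_MILES * 100e6

    print(f"human (all causes):      {all_human:.2f} per 100M miles")
    print(f"human (comparable only): {filtered_human:.2f} per 100M miles")
    print(f"AV:                      {av:.2f} per 100M miles")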

And that's not even mentioning that Waymos are not fully self-driving: they rely on close remote oversight to guide the AI through the many billions of common-sense problems that computers will not be able to solve for at least the next decade, probably much longer. True self-driving cars will continue to make inexplicably stupid decisions: these machines are still much dumber than lizards. Stories like "the Tesla slammed into an overturned tractor trailer because the AI wasn't trained on overturned trucks" are a huge problem, and society will not let Tesla launder it away with statistics.

Self-driving cars might end up saving lives. But would they save more lives than adding mandatory breathalyzers and GPS-based speed limits? And if market competition overtakes business ethics, would they cost more lives than they save? The stats say very little about this.