
While the specs sound very underwhelming at first (geo-fenced on some roads, very low speeds, no bad weather), they seem to be so confident in their system that they will take responsibility for the car when in autopilot. AFAIK, no other manufacturer does this for their systems (whose capabilities they often make rather wild claims about).



Just from a management perspective, even if you're super-confident in your engineers (which you shouldn't be), doing anything other than a very limited slow rollout would be negligent. This is a tectonic shift in liability. It's one thing if you sell 1,000,000 cars and 100,000 get in an accident. It's actually much worse to sell 10,000 cars and have 1,000 of them get in an accident when the company is liable.


'This is a tectonic shift in liability.'

It's the only form of liability I consider acceptable - I would never use autopilot where I would be to blame if I fail to react in 50 milliseconds once the car fucks up


I think for this reason full self driving is going to be more feasible in taxi-style vehicles first. You hail the driverless taxi, and just as with another person driving the taxi, the passengers are not liable. It's a model we are already used to, at least. If the taxi company winds up being the car producer, then the liability is the same as if they didn't produce the car but merely operated it.


That's not really "full self driving" if you are referring to the SAE scale. Level 5, the top, is fully autonomous driving, everywhere. What you are referring to is level 4; fully autonomous driving in specific geofenced areas on predetermined routes. Think Delamain in Cyberpunk 2077.


Fitting username :)


You are already going to be blamed if you get in an accident because of most hardware failures in a car. Why is this one so different?


Uh, no?

If the car's brakes don't work due to a defect, you are not liable at all.

And if you show reasonable due diligence, that is, regular maintenance, the same applies doubly so.

Beyond that, I've seen a lot of accidents, and none of them were the car's fault.

All were literally human error, or external factors, e.g. deer leaping onto the road, too many frogs on the road, etc.


"If the car's brakes don't work due to a defect, you are not liable at all."

Morally perhaps, but if you drive into the back of me because your brakes have a manufacturing fault I would still expect you to pay for the damage. The manufacturer may then be liable to you, but that is your business.


You don't expect the driver to pay, you just expect to be paid. It doesn't matter to you if it is OP, his insurance company or his car manufacturer. It does matter to the driver though, which is the reason for this discussion on liability.


It goes beyond morals.

If you are driving a car and a crash occurs due to a manufacturing defect, you'll probably not be found criminally liable.

This becomes super important if someone gets injured or killed.


Yeah, and if the self driving functionality doesn't work at all, it would be the manufacturer's fault as well. The liability would work the same.


I guess because appropriate usage and maintenance make that kind of hardware failure extremely unlikely, no?

I can be "warned" to check for issues if the car is older, creaking, seems to be "driving" different etc

A mistake in a mostly completely uninterpretable bundle of software is definitely much sneakier. You can't prepare for it.

So I guess that's one argument.

The other aspect of the same argument would be that if you're in an accident due to your car breaking in a way that is obviously the fault of the maker, they will absolutely eat the liability. I think this is how a few recall cases get kickstarted


I don't think the GP is 100% correct anyway, it depends a great deal on what happened to cause the accident. Brakes fail because you haven't changed the pads? Your fault. Steering shaft u-joint comes loose on your 2020 Ford? You were never expected to maintain that item in that time frame so is it your fault?


Because none of the other hardware features give the driver a false sense of being able to not do their job. It doesn't matter how many times the owner is told; we have already seen people are not smart about this. Drivers shooting video of themselves sitting in the passenger seat, people shooting porn while the car is driving, and several other stupid things that people will do just because they think that's what self-driving is meant to do.


Because you can't control it. Drivers aren't liable for design flaws in their cars be they faulty self-driving, unintended acceleration, etc.


The overwhelming majority of crashes are because of human error, not hardware failure.


Because we have rigorous laws and liability managing hardware quality and maintenance.

If liability for software rests with the driver and there are no laws regulating its quality and maintenance, what is the basis for believing it will be anywhere near as reliable?


Legally, they’re all liable.

If your product is unreasonably dangerous and defective, and that defect is the proximate cause of injury to another, you’re responsible.

I know nothing of German law, but from a US legal perspective, it appears that MB is just promising what they already have to do - compensate those harmed by defects in their products.


No, the competition explicitly tells the driver to be ready to take over at any second; they promise full self driving only in the ads, but not in the contract.

So when an accident happens, the "driver" can be blamed for not reacting.

Mercedes offers full self driving including legal responsibility. You are not obliged to watch the road or the car while driving (with the current tight limits).


The competition doesn't get to override laws. Tesla may put as many illegal clauses in their contract as they want, if they crash because of autopilot in Europe, they're liable. That's why Autopilot is barely enabled here at all, because they know their software is unreliable.


There are accidents which are not due to defects or unreasonable dangers in the product.

Shit happens and responsibility is not necessarily due to defects/malfeasance/unreasonable risks. There are honest-to-God accidents. And you are still liable.


I looked up the number of cars sold. Mercedes sells about half as many as Ford and 22% as many as Toyota.

Obviously the accident statistics for a new car are unknown.


Mercedes will obviously outsource the liability to an insurance company. That’s what insurance companies are for.


Insurance isn't a magic get-out-of-risk-free card; if self-driving cars have loads of accidents and payouts, the cost to insure them will correct itself.


Not obvious at all. Companies the size of Mercedes usually self insure for routine risks.


Oh man, I didn't expect to read a comment saying "it's OK if 100 times more cars have accidents as long as a company isn't liable".


I read the parent as meaning to say that because Mercedes will be liable, even 1,000 accidents will be a real problem for them. This is a good thing because it shows that Mercedes expects a very low number of accidents while the system is enabled.

Mercedes accepting the risk like this is a massive step forward for these reasons. It sets a precedent that hopefully others will follow. They wouldn't transfer the risk if they didn't think it would profit them.


I guess, maybe I misunderstood. I was just surprised to read "it's better to have a ton of deaths, rather than a few deaths we're on the hook for".


From the company's point of view this is correct reasoning. The sooner people realize companies do not have any responsibility to be moral the better


They absolutely do have responsibility. There's no reason we should allow investors limited liability if they are going to be assholes about it.

Corporations should only exist because they are net beneficial to the public!

I do agree that enough companies are unethical that it is reasonable to expect it.


This is just naive. Every company that makes any kind of product has made some kind of trade like this.

Costco sells you knives cheaply because they will not be liable if you murder people with them. If the Costco investors were liable for murder every time one of their knives was used to kill someone, you can bet they would just not sell them entirely.

Just because a company thinks about liability doesn’t mean it’s immoral. Individuals avoid liability as much as possible too (see insurance).

The world is dangerous and “fault” is everywhere.


I'm not sure what your point is. My comment is a statement that corporations do have a responsibility to act in the interests of society, not an analysis of the particular ethics of selling knives or avoiding liability.


So do people, who are the ones that run companies. What's your point?


It's actually a human thing. When bad things happen, we strongly prefer that they don't happen as a result of something that we did.


On the other hand, if incautiously switching from the former to the latter drives your company bankrupt, the end result doesn't benefit anyone.


I feel like any summary of the form "so you're saying it's ok that..." is almost without exception not something the other person would agree with.


That's kind of the whole point. If a decision or policy has predictable consequences that aren't being addressed, either the decision-maker is unaware of those consequences or is accepting of those consequences. Asking the question removes ignorance as a possibility, and lets the conversation continue.

Sometimes the answer is "No, I was unaware, and I will adjust my decision." Sometimes the answer is "Yes, here are the consequences of the alternatives, and here's why I believe this set of consequences to be best." Sometimes the answer is "Yes, I don't care about those people." By asking the question, these three types of answers respectively give the other person an opportunity to improve themselves, give you an opportunity to learn from the other person, or give the audience an opportunity to learn not to trust the other person.


You missed one option; "You're falling prey to the is-ought fallacy." That is, saying that something is true is not the same as saying that something should be true. The original claim was that from the perspective of management at a company, 1,000 accidents the company is legally liable for is worse than 100,000 it isn't. Which is true! From that limited perspective! The reply "so you're saying it's ok that..." implies that the comment agreed with that perspective, which isn't necessarily the case. It could simply be pointing out a failure state of current management practices and corporate law. But further than that, that phrase is usually a particularly uncharitable one, and I find this usage of it to be more common than any other. I think "implying the speaker believes that the unfortunate condition they pointed out is right and just" is the normal use case for that phrase, rather than trying to bring attention to the consequences of a policy.


> You missed one option; "You're falling prey to the is-ought fallacy." That is, saying that something is true is not the same as saying that something should be true.

I think I'd put that as a subcategory of the second case, that the options were considered and this one was considered the best. That may mean that it is the least worst of several bad options, or that there are restricted options to choose from.

> Which is true! From that limited perspective! ... It could simply be pointing out a failure state of current management practices and corporate law.

I definitely agree, this is a fantastic example of options having been considered and rejected. In this case, the alternative would be "A self-driving car company accepts more liability than they can handle, and goes bankrupt. This saves lives in the short-term, but costs lives in the long-term." It can then be the start of an entirely different conversation about how to avoid that failure state, and what would need to change in order to still get the benefits of that decision.

> The reply "so you're saying it's ok that..." implies that the comment agreed with that perspective, which isn't necessarily the case.

I'd make a distinction between a comment agreeing with a perspective and a commenter agreeing with a perspective. One is based solely on the text as it is written, and the other is a human's internal belief. It's not necessarily a statement that the person is wrong, but that the words they have spoken may have unintended consequences. The difference between "So you're saying $IDEA." and "So you believe $IDEA."

> I think "implying the speaker believes that the unfortunate condition they pointed out is right and just" is the normal use case for that phrase, rather than trying to bring attention to the consequences of a policy.

Good point. In situations where there are no long-term social relationships to be maintained, and where there isn't a good chance for a reply, the message given to the audience is the only one remaining. This is a major issue I have with any social group beyond a few dozen people, and one that I don't have any good solutions for.


> Sometimes the answer is "Yes, I don't care about those people."

Frequently true but rarely admitted


From a corporate perspective, of course it is. It's the Ford Pinto study all over again.

This is why corporate influence on legislation is bad, as their "best interests" are often at odds with morality-based ones.


Fight Club summarized the Pinto thing nicely:

https://www.quotes.net/mquote/31826

without providing the illusion that such cost benefit analyses are a thing of the past.

(Pintos had a problem with the gas tank, not the differential, but it's pretty clear what they were referring to.)


Most car accidents are 100% operator error; it'd be really far-fetched to try to blame those on the manufacturer.

Autopilot not so much, the point stands.


The launch control option added by car manufacturers is, I'd say, 100% the manufacturer's fault: they thought of it, installed it, and promoted it, but it's pointless and dangerous.


It clearly has a point, because they successfully market and sell the feature. And everything is dangerous to some degree.

Clearly it is not 100% their fault because the feature can certainly be used responsibly.

Is there a more nuanced and substantive form of your argument against developing and selling a launch control feature?


> launch control

How many accidents happen from a standstill? I'd love to see some stats, but I highly doubt it would be a high number.


That's not really what the OP is saying though, is it?


True, it was more "the company would rather have 100k accidents it's not liable for than 1k accidents it is". Doesn't make it much better for me.


Yes. The liability factor is a great proof point, but the real tech innovation in my mind is not the self driving code, but instead the attack on the fundamental problem of building a model of where and when their stuff is trustworthy. As you point out, everyone else in the industry has simply punted that very difficult problem to the poor users to figure out!


few things inspire confidence like companies putting their literal money where their mouth is


But haven't we all seen Fight Club? It isn't a question of confidence, it is a question of financial math.

This decision tells us nothing about the safety of the Mercedes system compared to its competitors. All it tells us is that adding these limitations to ensure the system is only used in the safest possible scenarios makes taking over liability more reasonable. That isn't surprising. Their competitors' systems are also very safe if used in this manner. The only difference is that the competitors are not satisfied with releasing a system with enough limitations that it only works in stop-and-go highway traffic in clear weather. It is that added functionality that is more dangerous and the reason other manufacturers don't take on liability.

Odds are the marketing and accounting wings of Mercedes had as much influence on this decision as the tech team, if not more.


It’s both, right? The competitors may very well be just as safe in those conditions, but we wouldn’t know based on their liability stance; the Fight Club equation simply doesn’t apply.

With Mercedes, the Fight Club equation gives something like a mathematical guarantee of their estimated confidence of the safety of the system. There are no mathematical guarantees from the competitors.


>It’s both, right? The competitors may very well be just as safe in those conditions, but we wouldn’t know based on their liability stance; the Fight Club equation simply doesn’t apply.

I was referencing Fight Club as an example of an auto manufacturer making a life and death decision based off their financial incentives and not the best interest of the customer. The decision to take on liability is about money, not confidence in safety.

>With Mercedes, the Fight Club equation gives something like a mathematical guarantee of their estimated confidence of the safety of the system. There are no mathematical guarantees from the competitors.

You also have to factor in the marketing aspects. I'll reference another movie here in Tommy Boy[1]. Mercedes knows a move like this is attractive to consumers. People will look at it and think it means the system is safer. This decision will sell cars. But a guarantee doesn't tell you anything about the quality of the product. As Chris Farley's character says, you can "take a dump in a box and mark it guaranteed".

Maybe the system is truly dangerous and taking on this liability would be a losing proposition alone, yet adding in these additional sales from the marketing of this liability coverage yields a net positive for the decision. Or maybe the system truly is incredibly safe. There is no way for us to know. I am simply pointing out that this decision about liability is largely meaningless when judging safety because safety is only one of numerous criteria used to make the decision.

[1] - https://www.youtube.com/watch?v=mEB7WbTTlu4&t=49s


> I was referencing Fight Club as an example of an auto manufacturer making a life and death decision based off their financial incentives and not the best interest of the customer. The decision to take on liability is about money, not confidence in safety.

The Fight Club scene is about how these two things are integrated: their confidence in safety defines their ability to choose to take on liability.

Yes, its intent in the story is to horrify: there's a lack of humanity, a reliance on a simple function relating those two variables.

However, that doesn't imply the two variables are unrelated, in fact, it implies they are completely correlated.


This real life example is more complicated than the Fight Club version. It includes more variables like the added sales I mentioned and all these variables are unknown. How can you draw conclusions about one variable in a formula in which you don't know the value of any of the variables?


Not sure what you mean: the movie scene has the same property. It's not about the risk of individual failures of components, it's the risk of a payout.

A strong indicator that I believe my anti-flood machine is good at preventing floods is that I'm willing to take on paying for any liabilities you incur from flooding.


You are only thinking about payouts and not the change in sales. Imagine you make $100m selling your anti-flood machines. Maybe your machine fails 10% of the time and a failure costs 2x the unit price. Taking on liability in that situation would bring you down to $80m. Bad deal for you. But what if someone in marketing comes and tells you that market research suggests taking on liability leads to an extra $30m in sales. You come out ahead because the $30m in new revenue exceeds the $26m in new liability. That isn’t confidence in your machine. It is marketing and accounting.
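
A minimal sketch of that hypothetical arithmetic, using only the illustrative figures from this comment (the function and variable names are mine, and nothing here reflects real Mercedes numbers):

    # Hypothetical anti-flood-machine example from the comment above.
    revenue_base = 100e6      # $100m of machines sold
    failure_rate = 0.10       # 10% of units fail
    payout_multiple = 2.0     # each failure pays out 2x the unit price
    marketing_lift = 30e6     # extra sales from advertising "we accept liability"

    def net_income(revenue: float, accept_liability: bool) -> float:
        """Revenue minus expected payouts, if liability is accepted."""
        payouts = failure_rate * payout_multiple * revenue if accept_liability else 0.0
        return revenue - payouts

    print(net_income(revenue_base, False))                  # 100m: liability not accepted
    print(net_income(revenue_base, True))                   # 80m: liability, no marketing lift
    print(net_income(revenue_base + marketing_lift, True))  # 104m: liability plus extra sales

    # Accepting liability loses money on its own (80m < 100m) but wins once the
    # marketing lift is counted (104m > 100m), even though the machine is no safer.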


Occam's Razor: that would only work in the short run, since crash data goes to regulators.


Many things can be marketing - the drain cleaner sold in a bottle in a plastic bag doesn’t need it for safety - it needs it because it makes the product look more “dangerous”.

The interesting part is the balance they have to strike - be too lax and everyone uses it and you get the Tesla “autopilot did a big bad” articles; make it too restricted and you get “the damn thing never lets you turn it on”.


I don't really see what it has to do with fight club.

suppose car A and car B have autonomous driving that perform identically across a wide range of conditions. the manufacturer A enables FSD whenever the customer feels like it, but accepts no liability. manufacturer B accepts full liability for FSD use, but restricts it to situations where that's a good bet. car B is safer for the average customer, because it doesn't let them use FSD when it is especially risky. unless I understood a lot more about ML, CV, etc, I would pick car B every time.


No, this reasoning is flawed.

The baseline safety of cars is an absolute shit show: 30,000 people dying every year and a trillion dollars of damages and lost productivity.

Car A can enable FSD in all cases, be safer in all cases, but still not be a good deal economically for the manufacturer to accept liability.

Car B could be making drivers less safe overall by preventing them from enabling FSD in most cases, but at least they aren’t liable.


Your comment is predicated on the assumption that FSD (the actual system installed in the car, not a future theoretical perfect system) is safer than the average driver in the situations where Mercedes currently disables it.

I'm not sure we have data to support that? We know Tesla's autopilot is safer on average, but most of those miles will have been driven in the situations where Mercedes allows it to be used.


> We know Tesla's autopilot is safer on average,

We don't even know this (even if you restrict it to highway miles), since it's not an apples to apples comparison. General safety statistics include old cars with fewer safety features independent of who's driving the car.


I am saying such a scenario may possibly exist, not necessarily that it does exist.

Mercedes could be increasing the number of overall deaths by limiting the availability of the feature and still be reducing their liability for when the system is in use.

Let’s say with FSD on all the time that instead of 30,000 people a year dying that only 20,000 people a year die. Would a company accept the liability?

What if the death rate was 10,000? 1,000? 100?

If FSD could prevent 29,900 deaths a year but still see deadly failures 100 times a year, would a company say “I accept the liability”?

So you see, perhaps it’s crucial that companies not be able to be sued out of existence even if a few hundred people a year are dying in exceptional cases under their software, in order to prevent over a quarter million deaths and untold number of maimings every decade.

Also consider in this ethical and legal liability dilemma that these populations are not necessarily subsets, but could be disjoint populations.


"We know Tesla's autopilot is safer on average"

Am I allowed to just use "LOL" on hackernews?


Well, unless you are going to rebut the statement, I don't see the point.

If you are just basing your point of view on the widely reported Tesla crashes, you might want to look up some actual safety stats. Crashes of human-driven cars happen every day, and they're often fatal.

But as I pointed out, most Tesla autopilot use is presumably in "easy" conditions, which complicates comparisons.


.


You were responding to someone saying that Tesla’s autopilot is safer (based on crash stats per million miles), not FSD. FSD and autopilot are two different features.


Fair enough. I cannot believe those two related systems are not in tight conversation, considering the AI element of FSD (or am I mistaken there?).

Either way, in summary, I cannot trust FSD until it is 100% reliable (impossible) and the temporary situation for some time to come (regulated/supervised FSD) drains all the life out of what I enjoy, actual engaged driving! ...

The bits we don't enjoy (stop-start traffic and some motorway driving) have already been taken care of more than a decade ago.

I'd love the option of FSD but ... either FSD will never fully be realised, or will be adopted widely and there'll be some hold outs like me who actually enjoy their driving.

We'll see


> The bits we don't enjoy (stop-start traffic and some motorway driving) have already been taken care of more than a decade ago.

Not really. You're referring to assistance features that require continuous driver attention. I think that highway driving, and perhaps even city driving in some parts of the world, could be completely automated to a level of safety that is far higher than humans can achieve.

I am deeply skeptical that we will ever see a system that can drive in all current road conditions though; I think it's more likely that road systems will eventually co-evolve with automated driving to a point that the automated systems simply never encounter the kind of emergent highly complex road situations that currently exist which they would be unable to handle.

I also enjoy driving, and my 40yo car doesn't even have a radio, let alone Autopilot, but I think it's likely that within our lifetimes, the kind of driving that you and I enjoy will be seen as a (probably expensive) hobby rather than something anyone does to get to work or the shops every day.


I think the majority of people enjoy driving. Driving is fun. Sure, traffic sucks, but the actual act of driving comes with lots of pleasures. Most people don’t seem eager to give up driving, nor are many people ready to hand over control to AI.

I'm a transportation planner, and for many years my specialty was bicyclist & pedestrian planning and safety. I would follow autonomous vehicle news, but always through that lens. In addition, I have sat through lectures, webinars, and sales pitches that tout our wonderful autonomous future. And lemme tell ya, there is little to no mention of all the road users who are not in vehicles. Countless renderings and animations that do not account for our most vulnerable users. It smells like mid-20th century transportation planning mentalities that are completely engineer-driven. Very narrow-focused and regressive.

My coworkers and I enjoyed sitting around and coming up with countless difficult-to-solve scenarios (that my tech friends would look at and say “eh, sounds interesting and solvable”) for AV developers to contend with. And despite pressure from our “future forward” marketing coworkers to focus on this sector, it feels nowhere close to really being ready (20-30 years maybe?).

Anyway, I do think the focus on “allowed in some places” is interesting. I have some trouble seeing “road systems will eventually co-evolve with automated driving” coming to fruition given the glacial pace of road system evolution.


I guess by "road systems will eventually co-evolve with automated driving" I would also include relatively minor interventions like increasing the proportion of controlled intersections, which are much easier for autonomous systems to deal with.

I have spent a lot of time in parts of Asia where massive evolution of transportation infrastructure has happened on a scale of a few decades (or less), so it seems less crazy to me that large-scale road evolution could happen along with autonomous vehicle development than it might seem to someone working in the West.


> This decision tells us nothing about the safety of the Mercedes system compared to its competitors

Mercedes is just taking notice that grandstanding and PR worked for Tesla, so they are doing the same thing.

Everybody who is serious about this knows that unless you get Level 5 it's all just grandstanding.

Level 5 won't come from unleashing Level 3 into the world and throwing deer, cyclists and pedestrians at it (hopefully not literally).

It's just a weird hill that brainpower and capital decided to die on. Deaths on the road are tragic, but airbags, seatbelts and ultimately bigger cars and lower alcohol intake are practical measures, whereas FSD is a pie-in-the-sky thing.


I don't know how you came to this conclusion.

Level 4 is a serious goal that provides very useful benefits.


Bigger cars? Have you seen the new pickups, what do you want to drive, a tank?

Also bigger cars kill more pedestrians


People who say "bigger cars are safer" mean "the person driving the bigger car is safer". It's an arms race to them and they want the bigger car.


Not all crashes involve multiple cars. Bigger cars reduce the number of deaths because there is more metal mass between you and the tree/barrier that you hit.

I also said that bigger cars are just one element in the equation, along with seatbelt compliance, zero alcohol tolerance, airbags etc.

All those things are much more feasible and practical solutions compared to pie-in-the-sky FSD


US and Germany might be at different stages of this process, though. For example, Germany has roughly half as many deaths per km driven, and the situation is even better in countries like Switzerland or Norway.

Also, we're talking about the perspective of the car manufacturer. New cars are already significantly safer than the average car on the road, so there's relatively little they can do to address many of those points: there seems to be not much room left for non-marginal improvements in safety, and FSD is a much more attractive proposition to most car buyers than "3.5% safer than the competing brand".


> FSD is a much more attractive proposition to most car buyers

So is donating to the church hoping to secure a place in Heaven. We all fool ourselves for the sake of peace of mind or immediate gratification, the important thing is to be aware of it.

It's important to be honest with yourself and decide who are the people who can take you for a ride.

For me it's my younger siblings, the sport franchises I support, and attractive women. Because at least in such circumstances it would be a fun and worthy ride.

Being taken for a ride by Musk, Mary Barra, Herbert Diess, (insert automotive CEO), or even Elizabeth Holmes, Bernie Madoff...that's pathetic, not fun and will leave you with regrets...but maybe techno-utopianists have a different mindset and their calculus is the opposite of mine, meaning they have a contempt for the small things in life and only get excited about moonshots and pie-in-the-sky ideas even if it means getting scammed.


I disagree. If I often had to spend time in highway traffic jams (which is what Mercedes seems to be offering here) I'd rather pay an extra 10k to get 30 minutes of my life back every day instead of buying a car in which I'd be 1.5% less likely to die during a crash.


The “Fight Club” math is incomplete for the sake of a good story. Regulators can force a recall on manufacturers and insurance companies can make a car uneconomical to buyers by appropriately pricing in risk. You may argue that the former has been undone by regulatory capture (something I would dispute), but I think we all recognize that insurance companies aren’t particularly charitable.


The Fight Club math is complete. They specifically quote the cost of out of court settlements, implying hush money.

This has been done successfully before. One model of elevators used to turn kids into ground beef (under a dozen a year -- the gap between the inner and outer door was too big).

Eventually they were almost all replaced, the deaths tapered off and everyone involved retired.

One year, much later, one of the few left in service killed a kid. No one working at the elevator company knew what was going on, and all hell broke loose. The remaining surviving culprit was in an old folks home at that point. The company ended up recalling something like two or three antique elevators.


Wouldn’t this logic suggest that the economic value of taking on this liability is minimal considering insurers haven’t made cars with competing tech unaffordable for buyers to insure?


>But haven't we all seen Fight Club?

???


http://inaneexplained.blogspot.com/2011/03/fight-club-car-re...

TL;DR; If there's a defect that kills or maims people, car manufacturers compute if it's cheaper to recall or just deal with the lawsuits.

And this happens regularly: https://en.wikipedia.org/wiki/Unsafe_at_Any_Speed


> And this happens regularly

Do you have an example to demonstrate regularity that's not nearly 60 years old?


These lawyers claim they got a multi-million settlement out of a manufacturer of a defective automobile recently:

https://www.raphaelsonlaw.com/questions/how-much-to-expect-f...

Since it's out-of-court, there is probably a non disclosure agreement. Anyway, it happens enough for lawyers to think it's worth targeting the cases, at least.


Honestly, I didn't read that link extensively. It went straight to a sleazy-looking "How Much to Expect From a Car Accident Settlement?" That didn't sound like it justified the usage of "regularly" in the statement I responded to.

Feel free to correct me on how it's related to the use of "regularly" though.


There's always a tradeoff between cost and safety, how else would you imagine this to work?


By treating any lies to the customer as criminal fraud.

For some reason, misleading investors in a way that causes them to lose money lands you in jail very quickly. You must provide an extensive 'investment risk' report with the shares you are selling.

But when selling a car or tickets to faulty and deadly airplane, you don't have to inform customers of all the flaws you've discovered.

The Theranos and Anna Sorokin cases make it clear that our society has one set of laws for owners of capital, and another one for the plebs.


The proposed alternative, I think, is that the car company values a customer's life at more than the expected cost of settling.

The cost of settling is supposed to approximate the value of a human life, but it sounds pretty bad to say "yeah, we knew this defect would cost $50MM to fix, but we estimated only 5 people would die, and each person's life is worth less than $10MM because the settlement is only like $400k per life".

The proposed alternative is that the car company has to say "We calculated this defect will probably kill 5 people, so we will spend any amount of money to fix it, up to and including us having no profit"


Maybe cars with known technical/mechanical defects that will result in death could be recalled even in cases where it eats a bit more into the profits of the auto manufacturer than it would if they just paid settlements to the families who lost loved ones. That seems like a pretty good way to imagine this to work.


I imagine that they'd admit that the car as designed is bad and fix or replace the bad cars. Leaving cars on the road that have a design flaw that randomly kills people, because it's cheaper to pay off dead people's families, is reprehensible.


There's a question about what a design flaw means though. What about a car that was built without a backup camera because it predates their common inclusion? Or built when they were commonly available, but not yet mandated? Is that a design flaw?

What about something that's more accidental that causes fewer deaths than the lack of a backup camera, but also costs more to fix than retrofitting a backup camera?


Legally, it’s up to the jury.

Each side has the right to bring in experts to testify about what were reasonable design choices, what was greed, and what was just a bone headed mistake.

If the jury concludes that the product wasn't unreasonably dangerous or defective, defendant wins. If they find that it was, plaintiff wins.


Narrator: A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one.

Business woman on plane: Are there a lot of these kinds of accidents?

Narrator: You wouldn't believe.

Business woman on plane: Which car company do you work for?

Narrator: A major one.
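
For what it's worth, the decision rule in that scene is just an expected-value comparison. A minimal sketch (the should_recall name and the example inputs are mine; the $400k settlement and $50m fix figures echo the hypothetical numbers a few comments up, not real data):

    # The "A times B times C equals X" rule from the quote.
    def should_recall(cars_in_field: int, failure_rate: float,
                      avg_settlement: float, recall_cost: float) -> bool:
        """Recall only if expected settlements X = A * B * C exceed the recall cost."""
        x = cars_in_field * failure_rate * avg_settlement
        return x > recall_cost

    # e.g. 1,000,000 cars, 1-in-10,000 failure rate, $400k per settlement, $50m recall:
    # expected payouts are $40m < $50m, so the rule says "don't recall".
    print(should_recall(1_000_000, 1e-4, 400_000, 50e6))  # False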



Have you seen the movie? If I remember the scene correctly, Norton discusses the financial incentives leading to auto product recalls. I don't think this is accurate in real life, although Boeing hasn't inspired much confidence.


>I don’t think this is accurate in real life

Ehhhh I mean they often don’t say the quiet part out loud but many companies definitely run a variation of the equation he describes.


The "Pinto Memo" [1] being a notable example. Although, as the article says, the cost / benefit analysis was against "societal costs" of the safety issues, not just the cost of litigation.

[1] https://en.wikipedia.org/wiki/Ford_Pinto#Cost%E2%80%93benefi...


Yeah, very good example and explanation


I feel strangely assured to know that Mercedes-Benz will be paying through the nose when I'm dead.


I'm sorry Dave. There's a 60% chance of rain today. You're going to have to drive yourself.


What about legal? As in, if I'm being driven around my Mercedes-Benz while watching a movie on my phone and something happens, am I not gonna get fucked by the cops?


In Germany, the law was changed to account for this.

There are limits - while you can watch a movie, you have to be ready to take over within ten seconds.


10 seconds is a ridiculously long time when driving. What was the logic for that timeframe? It seems to me you either need to be ready to take over in an instant or basically not at all.


Things like approaching ambulances or construction sites that the system can predict, but knows it can't handle.


That would depend on the exact laws in particular jurisdictions where you might operate the vehicle. In some areas, law enforcement could cite you for using a phone even if the manufacturer had taken on the civil liability. Longer term some of those laws will probably change as Level 3 autonomous vehicles start to become more common.


Germany did already change the law (and is, I think, requiring exactly the kind of liability for the manufacturer that MB is offering)


Absent legal/regulatory changes/waivers, the fact that MB is offering to cover financial liabilities would seem to be pretty much irrelevant to legal liabilities. So basically "we'll cover your costs" is pretty much irrelevant if you want to drive in a manner that would normally be considered reckless.

But there's talk of certification so this may be taken into account. One would of course want to see specifics.


> AFAIK, no other manufacturer does this for their systems (which capabilities they often make rather wild claims about).

What's interesting is the level of nit-picking scrutiny this is attracting compared to the uncritical defence of those wild claims that we're seeing here. A whole lot of copium.


I'm more in favor of underwhelming but responsible. Musk's "approach" is despicable on this.


Renault does, they announced it a while back. THIS is the real breakthrough (I worked on self driving cars in the mid 2000s).


I wouldn't be surprised if the EU made this a law.


Confident but limited is probably how the German manufacturers describe most of their cars (although I think Mercedes do at least some of their driverless work in the USA).


I think other manufacturers will also be taking responsibility for liability when they come to market. It is just insurance. You can already pay an insurance company to accept liability. The difference with lv3 self drive and beyond is that insurance companies no longer have the data to correctly price their policies, because this is all new and they rely on historical data. The car manufacturers believe they have that data, and are now in a position to take that slice of the pie. I expect that once things settle down, the bulk of your insurance will be paid as a yearly fee to the manufacturer to cover self driving (and the manufacturer will likely offload much of the risk to Big Insurance).


> You can already pay an insurance company to accept liability

If the car drives over a child in self-driving mode, will insurance company go to jail instead of you?


Nobody goes to jail, same as now, unless you can prove negligence or intent; there might be significant fines and/or settlement money depending on the jurisdiction. But I'm not sure how this is different from deaths caused by some design flaw in normal cars.


At this stage of development, I think those limitations are entirely appropriate and I would want other manufacturers to be this cautious. Self driving is a convenience feature and limitations like this are reasonable. Once they have enough miles/years of experience, they can begin to remove some limitations.


> they seem to be so confident in their system

Having spent a lifetime building networks and software, that actually makes me nervous.


There's nothing about the way that I've seen automotive related software built that inspires me with confidence. Quite the opposite.


If you've spent enough time building software you learn that the real world almost inevitably introduces edge cases that even the most careful testing couldn't foresee. Self driving to date has proven itself not to be an exception to this.

I just can't imagine how Mercedes could be confident in a highly complex (self driving) system, designed to manage an insanely complex and unpredictable (driving) environment with no real-world tests. Maybe that's coming from the marketing/PR department and not engineering.


No other manufacturer explicitly says they do it, but guess who is legally responsible for manufacture defects whether or not they explicitly say they are?

There's an extent to which what MB is doing here is turning a fact they have very little ability to evade into a marketing point.


> they will take responsibility for the car when in autopilot

This probably just means autopilot gets disengaged just before impact


This also includes a 10 second takeover window (in accordance with EU regulations)


Can a driver wake up, be calm, get spatial understanding, do Bayesian physics predictions and decide on the corrective maneuver in 10 seconds?


That's plenty of time. Find a clock and watch the second hand move for ten seconds. It's a long time.


Especially if I am actually asleep, I can barely speak coherently on the telephone within 10 seconds, much less take care of a situation with a vehicle that a computer has thrown up its virtual hands about dealing with.


Level 3 autonomous driving isn't made with the use case of sleeping but with you being able to do other things while still awake (for example use your phone or watch a movie). For those situations 10s is more than enough time to react.


If you're not paying attention, you can easily end up dozing off. You're either more or less paying attention--and yes drivers' attentions can drift a bit--or you're going to take some time to reacquire some awareness of what exactly is going on.


If you can't stay awake then you need to not use this feature. If you're not exhausted it should be an easy task. If you are exhausted, arrange something else.

Attention drifting is supposed to be fine. That's the entire point of level 3. How long do you think you need to regain focus? Maybe a different brand will cater to that amount of time, or you can lobby regulators.


My point was simply that, if you're not driving and are absorbed in something else whether a book, a movie, a game, or just zoned out, 10 seconds is not a lot of time to figure out what's going on because the car's computer has encountered something that is sufficiently non-standard that it doesn't know what to do.

I do think there is some period of time that is reasonable for a handoff. Maybe a minute? I actually believe that full autonomous driving with geo-gating and condition-restricted limits is probably how things will play out.


We've gone from "unless you're actively engaged you can't possibly be ready to take over immediately" to "unless you're actively engaged you are easily going to be sleeping".

People fall asleep behind the wheel all the time. They just usually crash and maybe die when this happens, if FSD is not enabled at the time.



