> According to the Texas Transportation Code, the owner of a driverless car is “considered the operator” and can be cited for breaking traffic laws “regardless of whether the person is physically present in the vehicle.”
This is all well and good when the owner is Waymo or Cruise and actually has direct control over the code that caused the vehicle to fail, but for consumer cars it brings up important questions of what "ownership" even means these days.
Let's say I own a self-driving car that's able to reliably park itself. I pay close attention the first few months to make sure that it's safe and reliable, and once it's proved itself I start trusting it. I've been as careful as can reasonably be expected, but one day the manufacturer pushes a mandatory, automatic over-the-air update that contains a critical bug, and my car suddenly starts misbehaving and I get ticketed.
This isn't an unlikely scenario—how many of us have "owned" a product that has suddenly changed its behavior in undesirable ways after we purchased it? It doesn't seem reasonable to me to hold an individual consumer liable for bugs in a machine that they are unable to fully control.
Either manufacturers need to be required to provide important software freedoms to vehicle owners, or the manufacturers themselves need to be liable for infractions.
There are two components here that both already have an answer in the real world.
1) making owners of cars responsible for (most) traffic infractions
2) issues caused by third parties, with the first party bearing initial liability
Number 1 is surprisingly common overseas. Some of those countries have figured out that garden-variety traffic fines clog up the courts, so they’ve lifted simple garden variety infractions out of criminal law and placed them in administrative law.
In doing so, they’ve made the owner of the car (i.e., the license plate holder) responsible for these types of traffic infractions. The owner then receives the fine by mail (technically a settlement offer to avoid prosecution) with a generous appeal window, after which the fine is finalized; it doesn’t end up on a criminal record because it’s no longer criminal.
This frees up court resources, allows for ticketing without stops, and shifts responsibility to the car’s owner, who can then, if applicable, settle it with whomever they lent their car to.
For more severe infractions, liability is still tied to the driver, with the car owner being the initial presumptive driver.
Number 2 is a common situation in the US and overseas and is resolved by going after whoever is responsible (e.g., manufacturer) after the fact to be made whole again.
It’s similar to going after the seller of stolen goods after the original owner reclaims their property from you, in jurisdictions where title doesn’t transfer when stolen goods are sold.
>"so they’ve lifted simple garden variety infractions out of criminal law and placed them in administrative law."
Many "garden variety infractions" are human errors. I do not think those were ever a subject to criminal prosecution disregarding whether the ability to clog the courts.
In the US, they are still firmly part of the criminal system and recorded on your criminal record, to the point that in some jurisdictions a warrant will be issued for your arrest if you don’t show up.
Elsewhere, they also used to be part of the criminal system and handled similarly to how it’s dealt with in the US (minus the more severe tools the criminal justice system has on hand).
As individuals we are never going to own self-driving cars.
If buying the hardware is even an option to begin with, there's going to be a hefty fee to subscribe to the required 'cloud based' self-driving service.
(And there's certainly no way individuals/small businesses will be able to buy such a car and run a self-driving taxi service profitably when a megacorporation could have all the income from all the cars)
That is only a level 2 autonomous driving system, with capabilities on par with many other stock manufacturer driver assistance systems. Having an open source alternative is cool, but it doesn't really bear on the argument I think is being made.
I think the argument is that we won't see the same availability beyond level 3, since level 4 is the first step where a human is not required behind the wheel. This means you probably need a subscription to have a remote human who can step in and take control in edge cases, at least until we develop AGI. Even then, I suspect the economics of running a full AGI in each vehicle will make a subscription mandatory.
I still believe the free market can work for self driving cars. If there's demand for offline self driving cars, someone will make them. Probably someone in Asia will make them affordable.
But I believe regulatory capture and anti-competitive tariffs are likely to make affordable, buy-and-use cars impossible to sell in the US. Regulations will likely change annually, like they do for cars right now. The govt will be worried that China has control of our morning commute.
I hate being cynical, but cars do seem like a lost cause.
I don't have strong opinions about self-driving cars in this context, but the pattern of this argument doesn't track: you can omit "self-driving" and the underlying premise is still the same (that companies have an incentive to lease rather than sell objects to capture service revenues).
But the reality is that people do own cars, and no car manufacturer that I know of is also trying to become a taxi operator (except maybe in the indirect investment sense). Or in other words: sales in a market aren't affected by the abstract possibility of vertical integration, because any entity that attempts to vertically integrate will be undercut by other parties making sales at the same time.
From the point of view of a maker and seller of cars, the incentive for self driving is that it increases the size of the market. Rather than just selling one car per adult in a family, it could sell one per adult and one per child.
Car makers operating according to this incentive will try to discourage taxi-like functions in self-driving cars, perhaps by only allowing operation when specified people are recognized by the car (anti-theft feature!).
Software sold as a one-off is rarely profitable, because it comes with the ongoing cost of updates and patches.
The development cost of those updates and patches has to come from somewhere: either the initial purchase price is significantly marked up to fund ongoing development (side note: when a one-time-purchase product's server shuts down, people are up in arms about that), or there's some form of subscription model.
If a driverless car maker can be liable for the software that they released some time ago and didn't update, then the funding of that development effort has to come from somewhere.
Software sold as a one-off sale can be, and usually is, very profitable. See e.g., most software companies pre-SaaS-era. This is in fact how most software was sold...and is still sold today.
SaaS is the plague on software development, because the subscription model demands a shift to constant feature "upgrades" and other frequently-useless "updates" intended to justify the perpetual subscription costs to customers.
Individuals can buy self-driving cars from Mercedes-Benz today. They are only level 3 autonomous, but the manufacturer does assume legal liability for incidents that occur during autonomous operation (traffic tickets may be a separate issue). There is a hefty subscription fee for the Drive Pilot service.
"Never" seems too strong here. In 2000 years, when it has been maybe 1500 years since a human last turned a steering wheel, I can see a situation where an individual owns a self-driving-car-like thing.
Even well before any such future, the type of people who buy a Bugatti Veyron will still want to buy something unique and out of reach of 99.99% of the population. Maybe that is human-driven, but what if human driving becomes illegal?
We already have laws that make the driver liable for infractions, crashes, etc. Even if you own the car but someone else drives, the driver gets the ticket. If this weren't the case, rent-a-car agencies wouldn't exist.
The same rule should extend to self-driving cars. The manufacturer is driving, ergo they should be liable for everything the car does.
> Either manufacturers need to be required to provide important software freedoms to vehicle owners
FOSS robot cars seem like a great way to bypass emissions controls, safety regulations, and plain common sense. It'd take what, half a day, for some teenager to modify the firmware to double the 0-60 and blow up the battery pack at the same time?
I'd want the opposite: for self-driving firmware to be locked down and hardware signed to heck, with an independent auditing body code-reviewing every update.
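For illustration, here's a minimal sketch of the kind of verification gate I have in mind, assuming Ed25519 signatures and Python's `cryptography` library; the key handling and function names are purely illustrative:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real system the private key would live with the auditing body and only
# the public key would be baked into the vehicle; both are generated here just
# to keep the sketch self-contained.
auditor_key = Ed25519PrivateKey.generate()
vehicle_trusted_key = auditor_key.public_key()

firmware = b"audited self-driving firmware image"
signature = auditor_key.sign(firmware)  # auditor signs only after code review

def verify_and_apply(image: bytes, sig: bytes) -> bool:
    """Refuse to flash any image whose signature doesn't check out."""
    try:
        vehicle_trusted_key.verify(sig, image)
    except InvalidSignature:
        return False  # tampered or unaudited firmware: reject
    # flash_firmware(image)  # hypothetical flashing step
    return True

assert verify_and_apply(firmware, signature)
assert not verify_and_apply(firmware + b" with a 0-60 'tweak'", signature)
```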
> or the manufacturers themselves need to be liable for infractions.
Can't it be both? Split legal responsibility between each consumer and manufacturer. Doesn't have to be 50/50, but personal liability (even capped at a few grand) could make people pay attention and take over when the self-driving doesn't seem to be doing its job. We don't need more drivers on the roads who think their wondercars are infallible and blame everyone else for their accidents.
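To make the shape of that split concrete, a toy sketch; the share and cap here are made-up numbers, not a proposal:

```python
def consumer_liability(damages: float, share: float = 0.2,
                       cap: float = 3000.0) -> float:
    """Consumer pays a fixed share of damages, but never more than a hard cap.

    Both `share` and `cap` are hypothetical values for illustration only.
    """
    return min(damages * share, cap)

print(consumer_liability(500))     # 100.0  -> minor incident, consumer pays 20%
print(consumer_liability(50_000))  # 3000.0 -> major incident, the cap kicks in
```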
>FOSS robot cars seem like a great way to bypass emissions controls, safety regulations, and plain common sense. It'd take what, half a day, for some teenager to modify the firmware to double the 0-60 and blow up the battery pack at the same time?
Generally we don't pen laws in order to save a singular idiot teenager. Moreover, the aforementioned teenager bought a product and then ruined it; the teenager already received the punishment for their actions through the destruction of their own property, without any need for jurisprudence and complexity.
Also: nearly every major car manufacturer in the world has been caught semi-recently violating emissions standards. Some have violated safety standards. Given that this hasn't been a deterrent for the world's largest companies, why presume that the FOSS equivalents would be worse? Our regulations have failed to constrain the largest companies, who are apparently beholden to the most regulation and law the world has ever seen, but somehow FOSS individuals/collectives need to do better than these billion-dollar behemoths? Nice.
I don't get why people suddenly think that when something is 'dangerous' then it's okay to throw in the towel on personal property ownership and take whatever lend-lease-borrow program is offered in exchange. Cars have always been dangerous, and aside from gross safety and emissions laws, we're allowed to modify them.
I don't understand why this changes radically when a vehicle becomes electric. Were gas tanks and high-pressure fuel injectors suddenly safer than I remember them being? "The stupid teen who burns his house down" certainly applied to things in the past, and we didn't go out of our way to lock down the fuel tank into an inaccessible black box in response; we just dealt with the inherent risk of dangerous equipment and somehow avoided full-blown chaos and calamity through educated precaution and training.
Here's what I want: I want the freedom to modify my own property, and when that modification violates an agreed-upon law, I should be punished.
Forget all this pre-crime 'what-if' save the children bullshit. Punish offenders and let civilian non-offenders do as they please until such time that they violate a law or standard -- which would be more enforcement than the world is seemingly applying to every other car manufacturer in the world.
> I don't understand why this changes radically when a vehicle becomes electric. Were gas tanks and high-pressure fuel injectors suddenly safer than I remember them being?
That’s not what’s changing. We’d be having the same balancing act discussion of collective safety versus individual rights if self-driving cars were all combustion powered.
> the teenager already received the punishment for their actions through the destruction of their own property
> Cars have always been dangerous, and aside from gross safety and emissions laws, we're allowed to modify them.
These statements are inherently ignoring the externalities that the dangers pose to others on the roads and streets. For years we've allowed people to modify their cars essentially however they want, and we're allowing car companies to build vehicles that are literally killing people on the streets, with little action to prevent it.
This status quo is really bad and we should fix it. People should not be allowed to drive lifted vehicles on urban streets. If modifying your car was only something that negatively impacted the owner, that would be fine. But the fact is that modifying cars tends to make them more dangerous to others on the street, not just the occupants of the car. We need more safety controls on car companies, not less. We need cars to have automatic speed controls that limit them to the speed limit on the street.
This isn't a debate about whether people should be allowed to modify their equipment. But if they do, they shouldn't be allowed to drive it on public roads. Take it to the track or some off-roading-specific place, not our public streets, where we're now seeing the highest pedestrian death rates in 40 years.
> These statements are inherently ignoring the externalities that the dangers pose to others on the roads and streets.
No, they don't. We don't ban alcohol or cars because of drunk drivers. We don't even require a breathalyzer reading before cars start.
Instead, we try to discourage that behavior with education and penalties. We penalize drunk drivers after they harm other road users or we try to catch them beforehand based on driving irregularities.
I do suspect we will see restrictions on what software can be run on cars. I hope those restrictions allow open source solutions to be viable. Ideally there would be a cost-effective federal system of safety approval for any release of self-driving software, and we'd allow users to run any approved system on public roads so that ownership of devices isn't lost. Unfortunately, I think regulatory capture makes a positive outcome here unlikely.
> FOSS robot cars seem like a great way to bypass emissions controls, safety regulations, and plain common sense. It'd take what, half a day, for some teenager to modify the firmware to double the 0-60 and blow up the battery pack at the same time?
Hasn't that always been possible? Just because it's in software now I don't see why it's suddenly different
That's a reasonable argument. In that case, I argue that the manufacturer who wrote and signed the firmware should be liable as the operator of the vehicle.
What happens when the manufacturer folds after the first major accident they're found liable for and now the entire remaining fleet doesn't have anyone liable for it?
If the regulators step in to remove the vehicles from the road then who makes the consumers whole now that their vehicle, and perhaps livelihood, is gone?
I suspect certification is the only reasonable path. If the regulators believe these systems are safe for the road then they should build the framework for establishing that bar. If such a framework is not possible to build then I think that also answers that question.
Since we're talking about traffic tickets, which are (currently) routine and generally try to avoid the court system as much as possible, I'd say that we need laws that provide no ambiguity, and I'd rather those laws place the blame on the party with the most control.
This isn't just a matter of ticketing for minor violations with no consequences. These are unregulated robots that have and will continue to kill people.
Not the person you replied to, but it should primarily be the manufacturer's fault for pushing a dangerous update without adequate testing. However, if, say, the new behaviour made the news and the owner didn't install the latest, safe update, it could be their fault.
>FOSS robot cars seem like a great way to bypass emissions controls, safety regulations, and plain common sense. It'd take what, half a day, for some teenager to modify the firmware to double the 0-60 and blow up the battery pack at the same time?
People already mod their cars, and there are vast communities of teens and early-twenties following exhaustive video tutorials on how to illegally and unsafely make their car faster and louder. Cars aren't quite FOSS, but they're better documented than most FOSS is.
> Let's say I own a self-driving car that's able to reliably park itself. I pay close attention the first few months to make sure that it's safe and reliable, and once it's proved itself I start trusting it. I've been as careful as can reasonably be expected, but one day the manufacturer pushes a mandatory, automatic over-the-air update that contains a critical bug, and my car suddenly starts misbehaving and I get ticketed.
But this is very easy. The self-driving system is a component, same as brakes or lights. If your brakes stop working because of a manufacturing issue and you crash into something, it's the manufacturer's fault. Nothing new here.
I agree that's how it should work, but the TX law that the article quotes changes that rule to make the owner of the vehicle at fault for self driving failures.
> one day the manufacturer pushes a mandatory, automatic over-the-air update that contains a critical bug
This is a hidden two-fault scenario, which isn't always "bad engineering" but is usually a smell in that direction. You're (1) accepting that the operator/owner/whatever of the vehicle is technically responsible, (2) acknowledging that this supervision is likely to work in practice, then (3) imagining a hardware/systems failure in the car and simultaneously (4) supposing that the supervision you took for granted in (2) fails at the same time.
Basically, you don't need point 3 to make your point about safety logically. You added it to change the villain from the driver to the manufacturer.
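To see why stacking faults matters, a toy calculation with made-up numbers: if the update bug and the supervision lapse are independent, requiring both at once makes the scenario far rarer than either fault alone.

```python
# Purely illustrative probabilities, not real failure rates.
p_update_bug = 0.01         # chance an OTA update ships a critical bug
p_supervision_lapse = 0.05  # chance the owner misses the misbehavior
p_two_fault = p_update_bug * p_supervision_lapse
print(f"{p_two_fault:.2%}")  # 0.05%: both faults at once is the rare case
```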
I set up the scenario to show that no matter how careful the consumer is, their supervision will not work in practice (since the vehicle's behavior can change at any time and without warning) and therefore they cannot be held responsible for the behavior of the vehicle.
Either the manufacturer should not be allowed to market it (because it cannot be operated safely) or the manufacturer should be considered the operator.
> I set up the scenario to show that no matter how careful the consumer is, their supervision will not work in practice (since the vehicle's behavior can change at any time and without warning)
Sorry, can you explain that better? The whole point to the metaphor of "supervision" (e.g. of party A by party B) is that party A is not trusted to behave correctly in all circumstances. You seem to be saying that supervision can only work if B is never required to intervene at all (and by extension, won't, if A's behavior changes)? In which case there's no supervision happening, right?
Again, it really seems like you're just playing tricks to make the manufacturer into an unsolvable villain. I mean, you get that's ridiculous right? We trust our cars not to fail already. And have, for a century. And sometimes they do fail, and when that happens we adjust regulatory strategies to minimize risk. Why is any of that different when there's a scary robot involved?
> When the operator doesn't have full agency over their hardware, can they be held accountable for its unexpected behavior?
Yes? Because that's what "supervision" means? Again, I'm very much not following your logic.
The whole idea behind supervised autonomy[1] is that the device may not behave correctly. Period. If you aren't capable of supervising your car after an update you clearly weren't doing any supervision before the update.
[1] Or supervised anything! Would you argue against holding parents accountable for childproofing their homes just because their kids got an "update" and learned to walk? Again I really don't think you're thinking this through.
> It doesn't seem reasonable to me to hold an individual consumer liable for bugs in a machine that they are unable to fully control.
They chose to buy it. Manufacturers should also be potentially liable for some classes of problem, but consumers are the ones pressing the "run the self-driving car" button. The court system exists to apportion liability properly past that.
No, absolutely not, never ever. This just heavily incentivizes Manufacturers to dump garbage into the market (since you've just forced consumers to bear all liability for Manufacturer failures).
You do this once, and you might as well kiss all of society away. Imagine a toy that kills children, and you tell parents, "well, you pressed the 'On' button, so it's your fault". Imagine a furnace that burns down your house, and you tell homeowners, "well, you pressed the 'heat' button, what did you expect?". No one would be willing to buy anything at all, because every manufacturer would be in a race to see the shittiest thing they could trick people into buying. Every product on every shelf would be snake oil.
---
Manufacturers should always be fully liable for any and all problems that cannot be directly attributed to extreme gross negligence on the consumer's part.
"wel, consumers are the ones pressing the "run the self-driving car" button" -- absolutely not. Manufacturers sold car with a "self-driving" button, consumers hold zero fault for pressing the button Manufacturers promised would work, unless there's some kind of extreme gross negligence involved. (i.e., pressing the button, while the car is already dangling off a cliff-side, or pressing the button after gouging out every camera and sensor on the car, or whatever).
"But then Manufacturers couldn't sell a self-driving car" -- cool, then the car isn't ready to be sold anyway. If a car company isn't ready to take on the liability for their self-driving product, then it isn't freaking ready for the road -- no exceptions.
> Imagine a toy that kills children, and you tell parents, "well, you pressed the 'On' button, so it's your fault". Imagine a furnace that burns down your house, and you tell homeowners, "well, you pressed the 'heat' button, what did you expect?". No one would be willing to buy anything at all, because every manufacturer would be in a race to see the shittiest thing they could trick people into buying. Every product on every shelf would be snake oil.
You're totally right! This is why we have in the past had a regulatory (and partially participatory) state that, prior to decades of attacks and gutting, prevented that stuff from being sold. Perhaps the self-driving car hypemen need their own equivalent of UL/ETL and need to convince insurers and the NHTSA that they have a sufficient story--and perhaps they shouldn't be allowed to sell cars that sport those features at all until they do!
Completely agree. NHTSA should (imho) have full regulatory control over the approval of any and all self-driving car tech on the road. (Rather than just 'Voluntary Guidance' suggestions and 'safety-related defect' investigations, which encourage manufacturers to ship a known-defective product and hope NHTSA doesn't catch it in a recall.)
100%. Like, maybe my original post sounded like a sop to manufacturers--to be totally clear, hell no. I want them not selling shit at all if it isn't exhaustively checked by an independent regulatory body. But if we can't have that, I want customers feeling the fear of God and not buying the stuff because the responsibility for the tool falls on them too.
Our regulatory state mandates small airplanes use leaded gasoline and has for many decades, literally harming poor children the most.
Our regulatory state drives up the cost of housing in numerous ways (zoning, arc-fault breakers, "look and feel", etc.). Children hardly see their parents because they must drive long distances to commute, so they don't get adequate supervision.
Our culture is what allows this to continue. We culturally don't demand excellence. We denigrate smart people. We refuse to have standards for raising children, leading to inter-generational idiocy. And now those idiots vote to keep making more idiots ("don't say gay", anti-abortion, no free BC, etc.) Regulation won't fix a broken culture.
That's fine. Regulation's "job" isn't to "fix a broken culture", it exists to prevent repeat mistakes -- especially risky/damaging ones. The saying goes, "safety regulations are written in blood" -- that most of the regulations we have exist to prevent some kind of tragedy that has already happened at least once.
> Our regulatory state drives up the cost of housing in numerous ways ("look and feel", etc).
Most of those are not regulations, they're just rules. (i.e., 'neighborhood character / look and feel').
> Our regulatory state drives up the cost of housing (arc-fault breakers)
That's not a 'cost drive-up', that's basic safety. AFCI breakers literally prevent home electrical fires, and Home Depot sells them at full retail markup for like ~$50 each (120V, 15A). You might use, what, 12 of these on the average home? $600 total? On a $300k new build, AFCI costs you 0.2% of your total budget. These are not why housing is expensive.
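The arithmetic, spelled out (all figures are the rough estimates from above):

```python
breaker_cost = 50        # ~$50 per AFCI breaker at retail (120V/15A)
breakers_per_home = 12   # ballpark count for an average new build
build_cost = 300_000     # hypothetical new-build budget

total = breaker_cost * breakers_per_home  # $600
share = total / build_cost                # 0.002
print(f"${total} total, {share:.1%} of the build")  # $600 total, 0.2% of the build
```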
They chose to buy a product with a set of known characteristics.
The manufacturer then changed those characteristics.
If a manufacturer makes changes post purchase to a product resulting in harm, they should bear the majority of the liability, if not all of it.
If I had a microwave that for years cooked a potato in 3 minutes safely, but then was modified by an update to catch fire under the same conditions, I shouldn't be the one on the hook for pushing that button.
So we should just plan on every self-driving vehicle ticket ending up in the court system so that we can properly apportion liability between the consumer (who pressed the button) and the manufacturer (who designed the car, wrote the code, advertised the vehicle, and then updated it many times after the consumer purchased it)?
If it isn't UL or ETL certified? You very well might be, depending on circumstances. Your insurer may treat all fire as covered peril, but it is not guaranteed.
Are they? Forget even fully self-driving cars right now: who's certifying your lane assist after every OTA update? Like, point at the guy with the clipboard at USDOT or the NHTSA. Because as far as I know, that's not happening. You're open-palm slamming updates into a system you chose to buy and use, under an operational model where you must be fully ready to take over with no warning, and undertaking that is your call.
To be totally clear: I want a regulatory regime for this stuff such that it is safe-as-possible and not predicated on a human's attention not wandering in the way that a human's attention is going to very understandably wander. Without one, consumers need to not buy it on pain of their own significant liability.
Mercedes does accept liability on their self-driving system, so there is a choice beyond "oops our system killed your wife, sorry, she should have been paying attention while our car pretended to be able to drive".
But there are games like Rocket League that have never changed their controls or physics. They changed everything else except those things so that people could predictably build and maintain skill. I don't think that's holding a car company to too high of a standard.
OTA updates would require not only 5 years of simulation testing (doable in 5 days with the right computing power) but would also not be applied to the whole fleet at once, in order to catch flaws in time before full fleet deployment.
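A minimal sketch of what wave-based deployment could look like, assuming vehicles are assigned to rollout waves deterministically by ID; all names here are illustrative:

```python
import hashlib

def rollout_wave(vehicle_id: str, num_waves: int = 10) -> int:
    """Deterministically bucket each vehicle into one of `num_waves` waves."""
    digest = hashlib.sha256(vehicle_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_waves

def update_allowed(vehicle_id: str, open_waves: int, num_waves: int = 10) -> bool:
    """Only vehicles in waves that have already opened receive the update,
    so a flaw surfaces in an early wave before full fleet deployment."""
    return rollout_wave(vehicle_id, num_waves) < open_waves

# Example: with 2 of 10 waves open, roughly 20% of the fleet has the update.
fleet = [f"VIN{i:05d}" for i in range(1000)]
updated = sum(update_allowed(vin, open_waves=2) for vin in fleet)
print(f"{updated} of {len(fleet)} vehicles updated so far")
```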
In this case the consumer should of course still be held responsible, and if they think it's wrong they can join in on the class action lawsuit against the manufacturer.
Or, owners should enter into a pre-existing legal relationship with the manufacturer to get reimbursed for any manufacturer error that gets pushed out.
Those who choose to operate driverless vehicles must have some skin in the game in order for this to be fair at all. Bringing your driverless vehicle into an area is the customer's choice, after all.
That's the most overly American thing I've read this year... Make consumers file a class action against car manufacturers (good luck with a Tesla and mandatory arbitration), all to avoid doing the sensible thing: make some rules for this market.
The likes of Musk will moan and bitch about it, but regulation is a perfectly good solution for this.
Going to repeated class action lawsuits every time there is a bad software update sounds like an inefficient use of the court system to me.
I think most people will eventually want the manufacturer to be the defendant in court cases that result from crashes due to a manufacturer updating cars. I can only imagine NHTSA (US Auto Regulator) will evolve to make this the case.
So basically the eventual outcome here is known (industry regulation) but we must take every step to do it the hardest way possible? Intentionally start with a free-for-all then have deaths, a social movement, lawsuits, court cases, politicians talking about it, campaigns, legislative debates, etc to arrive at what we all know is the end game to begin with?
I think there may be some optimization steps here.
Here is a (very) incomplete list. Not all are ones that I've personally experienced, but they're all products that either I or someone in my direct acquaintance have owned/used.
* Microsoft Windows starts showing ads in new and creative places, even though I shelled out $200 for the Pro version.
* An Android update broke my preferred home screen.
* Roomba vacuums go through regular periods of stupidity in between software updates. One day it's able to dodge obstacles, the next it's sucking up anything and everything.
* Basically every SaaS product changes its UI on a semi-regular basis in ways that throw off existing workflows.
EDIT: Oh, and a bonus one that's more on topic—Tesla recently rushed an update to their software and introduced a lot of bugs:
Unlike the potential of a vehicle making driving mistakes, none of your examples is harmful, especially not to random third parties. The only thing on your list I have direct experience with is Windows--I've had the Pro license ever since Win 10 came out, now on three different machines, and I don't see any ads on my desktop or popping up in my file system. (of course, I don't log in to any Microsoft account, maybe that explains it).
I can't be bothered to check, but I assume my corporate laptop is using Pro/Enterprise Windows, and it has definitely shown ads on the lock screen (it's very laggy when I want to unlock my computer, so very noticeable). I don't interact with Windows much beyond opening a terminal, so they may be elsewhere too. I vaguely recall it randomly popping up some Xbox thing at one point.
Honestly I don't see why corporate security groups don't throw a fit over what Microsoft has been doing the last few years. I'd expect the enterprise versions to be almost like Windows 2000 with all of the malware off by default (and possibly impossible to turn on).
A self-respecting established company would ensure their Windows installs never show any ads. I hope some of the bigger actors pressure Microsoft on this point and make it clear to MS that it's unacceptable.
> Honestly I don't see why corporate security groups don't throw a fit over what Microsoft has been doing the last few years.
I have never seen any corporate security group that does its own thinking instead of rubber-stamping whatever Microsoft tells them.
And the current Windows privacy terms grant MS the right to upload to their servers anything from the disk or memory of your computer. So they've already maximized the untrustworthiness here. We can stop expecting anybody to react.
Really? Hell my Google Mini has only recently decided to scream "by the way, the mic is off!" on startup every single time. I've had the mic off for years. Google decided to punish me for using it only as a speaker.
Really any cloud-connected device would fit this bill. Many have gone out of business and turned off their backends or changed the way they work in a way that is incompatible with the original purchasing intent of the user. Not to mention software products that evolve in ways the user doesn't like all the time. https://www.bbc.com/news/technology-64249388 is an article with some more specific hardware examples if that's what you're looking for.
> one day the manufacturer pushes a mandatory, automatic over-the-air update that contains a critical bug, and my car suddenly starts misbehaving and I get ticketed
Don’t buy it if the company who controls it does not put it in writing that they indemnify you properly.
What is the utility of deciding whether one party or another is liable first? Since self-driving accidents are rare, we could just have shared liability and defer to the courts. I'd imagine a lot of these accidents go to court now. National streamlined processes could be established as more motorists opt into self-driving technology.
"Nearly obvious" seems to be doing a lot of work in that sentence. Maybe elaborate? There are now millions of cars on the road with some level of autonomy, you don't need to hand-wave about what you believe "will" happen. Just show the data.
It's immaterial. Auto manufacturers' product designers and executives aren't being held responsible for rising pedestrian deaths because their oversized killer vehicles keep running over people when driven by humans. Self-driving is responsible for almost no deaths or injuries, most likely net-negative.